| pmcid | pmid | authors | title | publication_date | keywords | abstract | text |
|---|---|---|---|---|---|---|---|
| PMC10000005 | | D. Mason | Case of Ovariotomy | 01-01-1867 | | | Case of Ovariotomy |
| PMC10000006 | | C. R. Parke | Case of Death from Chloroform | 01-01-1867 | | | Case of Death from Chloroform |
| PMC10000007 | | H. Nance | A Paper on Epidemics | 01-01-1867 | | | A Paper on Epidemics |
| PMC10000009 | | R. Dexter | A Case of Elephantiasis—Successful Recovery | 01-01-1867 | | | A Case of Elephantiasis—Successful Recovery |
| PMC10000010 | | Addison Niles | Quincy Medical Society | 01-01-1867 | | | Quincy Medical Society |
| PMC10000013 | | Sayan Chatterjee, Lakshmi Vineela Nalla, Monika Sharma, Nishant Sharma, Aditya A. Singh, Fehmina Mushtaque Malim, Manasi Ghatage, Mohd Mukarram, Abhijeet Pawar, Nidhi Parihar, Neha Arya, Amit Khairnar | Association of COVID-19 with Comorbidities: An Update | 27-02-2023 | COVID-19, comorbidity, diabetes, cancer, Parkinson’s disease, cardiovascular disease | Coronavirus disease (COVID-19) is caused by severe acute respiratory syndrome-coronavirus-2 (SARS-CoV-2), which was identified in Wuhan, China in December 2019 and jeopardized human lives. It spread at an unprecedented rate worldwide, with serious and still-unfolding health conditions and economic ramifications. Based on clinical investigations, the severity of COVID-19 appears to be highly variable, ranging from mild to severe infection, including the death of the infected individual. In addition, comorbid conditions such as age or concomitant illnesses are significant predictors of the disease’s severity and progression. SARS-CoV-2 enters host cells through ACE2 (angiotensin-converting enzyme 2) receptor expression; therefore, comorbidities associated with higher ACE2 expression may enhance virus entry and the severity of COVID-19 infection. It has already been recognized that age-related comorbidities such as Parkinson’s disease, cancer, diabetes, and cardiovascular diseases may lead to life-threatening illness in COVID-19-infected patients. COVID-19 infection results in the excessive release of cytokines, called a “cytokine storm”, which causes the worsening of comorbid disease conditions. Different mechanisms of COVID-19 infection leading to intensive care unit (ICU) admission or death have been hypothesized. This review provides insights into the relationship between various comorbidities and COVID-19 infection. We further discuss the potential pathophysiological correlation between COVID-19 disease and comorbidities, along with medical interventions for comorbid patients. Toward the end, different therapeutic options are discussed for COVID-19-infected comorbid patients. | |

Association of COVID-19 with Comorbidities: An Update
Coronavirus disease (COVID-19) is a communicable disease associated with dysfunction of the upper respiratory tract and is caused by severe acute respiratory syndrome-coronavirus-2 (SARS-CoV-2). The city of Wuhan, China was the first to document pneumonia cases of unknown etiology at the end of December 2019. The disease spread to around 188 nations, resulting in many confirmed cases and severe health and socioeconomic consequences. According to the Coronavirus Resource Center at Johns Hopkins University, the total number of confirmed cases worldwide had reached 574,818,625, with 6,395,451 fatalities, by the fourth week of July 2022. Ejaz and colleagues reported that SARS-CoV-2 infections can cause various symptoms, from mild disease that resolves on its own to dangerous illness that affects many organs. There are four main types of coronavirus (CoV), classified as α-CoV, β-CoV, γ-CoV, and δ-CoV. Of these, only α-CoV and β-CoV are known to infect mammals; β-CoV was responsible for SARS in 2003 and the Middle East respiratory syndrome (MERS) in 2012. According to several genomic studies, SARS-CoV-2 is an enveloped virus with a positive-sense RNA genome. The genome of SARS-CoV-2 is approximately 96% similar to that of the bat CoV RaTG13, and genomic sequence and evolutionary analyses show roughly 79.5% similarity to the severe acute respiratory syndrome-coronavirus (SARS-CoV). SARS-CoV-2 enters human cells by attaching to the angiotensin-converting enzyme 2 (ACE2) receptor of the upper respiratory tract, which acts as an entry point for the virus. The virus travels by intranasal and oral pathways and affects olfactory sensory neurons; eventually, it can infect the central nervous system (CNS), causing hyposmia (loss of the sense of smell) and hypogeusia (loss of taste) as well as other sensory symptoms.
Although SARS-CoV-2 infects people of all ages and genders, research shows that individuals with comorbidities are more susceptible to COVID-19 infection. Further evidence suggests that male patients over 50 years of age, with or without comorbidities, show a significantly increased risk of death. According to the Centers for Disease Control and Prevention (CDC), USA, individuals aged 65 and above accounted for around 30% of COVID-19 infections, 45% of hospitalizations, 53% of intensive care unit (ICU) admissions, and 80% of deaths. In addition, after COVID-19 infection, those with an immune system compromised by cancer treatments or steroids and requiring hospitalization are prone to mortality. Given that the events of ICU admission or mortality following COVID-19 infection increase in these groups, it is vital to understand the mechanisms and treatment alternatives best suited for vulnerable populations. Based on clinical data on COVID-19, comorbidities such as cardiovascular disease (CVD), including hypertension, and diabetes have been the most prevalent. In this review, we focus on the link between COVID-19 and comorbidities such as Parkinson’s disease (PD), cancer, diabetes, and CVD. We also examine epidemiological data, pathological relationships, and potential treatment options for COVID-19-infected people with comorbidities.
As highlighted above, this review illustrates the relationship between several comorbidities and COVID-19. To this end, a literature search was conducted online using several databases and search engines, such as PubMed. Keywords such as COVID-19, SARS-CoV-2, and comorbidity in COVID-19 were used to retrieve the most relevant articles supporting this study. Articles on the relationship of PD, cancer, diabetes mellitus, CVD, and hypertension with COVID-19 were also used. Articles with mismatched or irrelevant keywords were not considered. All the publications were examined and referenced based on their relevance and compatibility with the current topic of discussion. We used PubMed’s Boolean operators (AND, NOT, and OR) as search criteria to acquire relevant search results. Figure 1 depicts the strategy for obtaining the data and subsequently filtering the articles, and Table 1 provides the keywords used for searching through PubMed. The significant comorbidities of COVID-19, such as PD, cancer, diabetes, and CVD, were included after a literature search and screening of published research articles, meta-analyses, and systematic review studies. In this study, 198 articles were discussed in depth to show how these significant comorbidities were responsible for hospitalization, ICU admission, and mortality in most cases. In addition, we examined the molecular and cellular mechanisms of COVID-19 and how they relate to its pathophysiology in connection with these comorbidities. For instance, cytokine storm is common to all the main comorbidities described above; in this context, we searched for several inflammatory cytokines connected to COVID-19-related comorbidities. We then consulted the clinical trials website for further information on clinical studies that used repurposed drugs for the respective comorbidities.
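A Boolean search strategy like the one described can be expressed programmatically. The sketch below is illustrative only; the exact query strings are assumptions based on the keywords listed in the text, and it builds a PubMed query plus the corresponding NCBI E-utilities `esearch` URL without fetching it:

```python
from urllib.parse import urlencode

# NCBI E-utilities search endpoint (db=pubmed).
EUTILS_ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def build_query(topic_terms, comorbidity=None):
    """Combine keywords with PubMed's Boolean operators (OR within the
    topic, AND with an optional comorbidity term)."""
    core = " OR ".join(f'"{t}"' for t in topic_terms)
    query = f"({core})"
    if comorbidity:
        query += f' AND "{comorbidity}"'
    return query

def esearch_url(query, retmax=100):
    """Assemble the full esearch URL (not fetched here)."""
    params = {"db": "pubmed", "term": query, "retmax": retmax}
    return f"{EUTILS_ESEARCH}?{urlencode(params)}"

q = build_query(["COVID-19", "SARS-CoV-2"], comorbidity="Parkinson's disease")
print(q)  # ("COVID-19" OR "SARS-CoV-2") AND "Parkinson's disease"
print(esearch_url(q))
```

The same helper covers each comorbidity listed in Table 1 by swapping the `comorbidity` argument.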
Furthermore, we have included future projections as well as several treatment strategies for each comorbidity. Patients with COVID-19 have been treated using a wide range of therapeutic modalities globally. In the absence of a vaccine or a SARS-CoV-neutralizing antibody, convalescent plasma therapy and drug repurposing led the charge.
During genomic replication, a virus’s genetic code changes (gene mutation), a phenomenon also prevalent in the SARS-CoV-2 virus, wherein constant gene mutations have led to many variants (lineages) of the same virus over time. A lineage is a set of genetically related viral variants that share a common ancestor. SARS-CoV-2 has several lineages, all of which produce COVID-19 infection. Some lineages propagate more rapidly and easily than others, perhaps making COVID-19 cases more common. A rise in the number of cases has imposed a higher burden on healthcare resources, resulting in additional hospitalizations and, potentially, fatalities. In the USA, epidemiological investigations into viral genetic sequence-based monitoring and laboratory research are routinely conducted to track SARS-CoV-2 genetic lineages. The SARS-CoV-2 Interagency Group (SIG) of the US government categorized Omicron as a Variant of Concern on November 30, 2021 (Centers for Disease Control and Prevention, 2021). According to SIG, there are four classes of SARS-CoV-2 variants: 1. Variant Being Monitored (VBM): Alpha (B.1.1.7 and Q lineages), Beta (B.1.351 and descendent lineages), Gamma (P.1 and descendent lineages), Epsilon (B.1.427 and B.1.429), Eta (B.1.525), Iota (B.1.526), Kappa (B.1.617.1), B.1.617.3, Mu (B.1.621, B.1.621.1), and Zeta (P.2); 2. Variant of Interest (VOI): none currently identified; 3. Variant of Concern (VOC): Delta (B.1.617.2 and AY lineages) and Omicron (B.1.1.529 and BA lineages); 4. Variant of High Consequence (VOHC): none yet detected internationally. Omicron has continued to accumulate mutations even after 3 years of the pandemic and is still giving rise to several new subvariants, such as BA.2.75 and BA.4.6. Importantly, several of these new variants, including BA.2.3.20, BA.2.75.2, CA.1, BR.2, BN.1, BM.1.1.1, BU.1, BQ.1.1, and XBB, exhibit notable growth advantages over BA.5.
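The four SIG categories amount to a lookup from Pango lineage to classification. A minimal sketch of such a table, restricted to the representative lineages named in the text:

```python
# SIG classification of SARS-CoV-2 variants as listed in the text.
# Only representative lineages are included; descendent lineages omitted.
SIG_CLASSES = {
    "VBM": ["B.1.1.7", "B.1.351", "P.1", "B.1.427", "B.1.429",
            "B.1.525", "B.1.526", "B.1.617.1", "B.1.617.3",
            "B.1.621", "B.1.621.1", "P.2"],
    "VOI": [],                          # none currently identified
    "VOC": ["B.1.617.2", "B.1.1.529"],  # Delta, Omicron
    "VOHC": [],                         # none detected internationally
}

def classify(lineage):
    """Return the SIG category for a Pango lineage, or None if unlisted."""
    for category, lineages in SIG_CLASSES.items():
        if lineage in lineages:
            return category
    return None

print(classify("B.1.617.2"))  # VOC (Delta)
print(classify("B.1.1.7"))    # VBM (Alpha)
```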
In the second week of October 2022, the latest Omicron lineage, XBB, and its sublineage XBB.1 (S:252V) were reported in major regions such as China, the United Kingdom (UK), Europe, and North America, resulting in these countries announcing nationwide lockdown-like restrictions once again due to the sudden surge of the new COVID variant. XBB and XBB.1 (S:252V) are mainly found in Bangladesh, Singapore, and India. One clinical study was conducted in France, from December 2021 to February 2022, to detect the prevalence of SARS-CoV-2 coinfection during the spread of the Delta, Omicron, and Delta/Omicron variants. The authors tested the effectiveness of four sets of whole-genome sequencing primers using 11 blends of Delta/Omicron isolates at multiple ratios, and they developed an unbiased bioinformatics technique for identifying coinfections involving different genetic SARS-CoV-2 lineages. Applied to 21,387 samples collected from 6 December 2021 to 27 February 2022 through random genomic surveillance in France, the technique detected 53 coinfections between different lineages. The prevalence of Delta and Omicron (BA.1) coinfections and of Omicron lineages BA.1 and BA.2 coinfections was estimated at 0.18% and 0.26%, respectively. Among 6,242 hospitalized patients, ICU admission rates were 1.64%, 4.81%, and 15.38% in Omicron, Delta, and Delta/Omicron patients, respectively. Among patients admitted to the ICU, there were no reports of BA.1/BA.2 coinfections. A total of 21 (39.6%) of the 53 coinfected patients had missed vaccinations. Even though SARS-CoV-2 coinfections were rare in this clinical study, it is still essential to identify them accurately in order to determine how they affect patients and how likely they are to produce recombinants.
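The coinfection figures above follow from simple proportions of the reported counts; a quick check of the percentages:

```python
# Counts reported for the French genomic-surveillance study.
samples = 21_387    # samples sequenced, Dec 2021 to Feb 2022
coinfections = 53   # coinfections between different lineages
unvaccinated = 21   # coinfected patients who had missed vaccinations

coinfection_rate = coinfections / samples * 100
unvaccinated_share = unvaccinated / coinfections * 100

print(f"{coinfection_rate:.2f}% of samples were coinfections")  # 0.25%
print(f"{unvaccinated_share:.1f}% of coinfected unvaccinated")  # 39.6%
```

The overall 0.25% rate is consistent with the per-pair prevalence estimates of 0.18% and 0.26% quoted above.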
The Delta variant was the most virulent and transmissible of all these variants, resulting in an increased percentage of fatalities as well as complications, such as the hospitalization of older persons. The Omicron variant may spread more quickly than other variants, such as Delta; however, Omicron was less lethal than the Delta variant. These differences could be attributed to many factors, such as the less efficient cleavage of the S protein fraction of Omicron and greater α-helix stabilization compared with the Delta variant.
Aging is a major risk factor for severe illness and mortality in COVID-19 infection, especially for those in long-term care facilities. In addition, people of any age with serious underlying medical conditions are more at risk of COVID-19 infection. Elderly SARS-CoV-2-infected persons with comorbidities, including PD, diabetes, cancer, hypertension (HTN), and CVD, are at higher risk of death. Early evidence from several epidemiological data sets shows that the COVID-19 case fatality ratio (CFR) increases with age. Table 2 reflects the CFRs in various countries; overall, the CFRs of China and the USA were 2.3% and 2.7%, respectively, while the global CFR stood at 2.8%. Italy was the first nation to be affected by the pandemic after China. The total CFR of Italy was higher (7.2%) compared to China (2.3%), which is attributed to the more significant proportion of older adults in Italy (22.8% versus 11.9%, respectively). In addition, an 82-year-old man in Brooklyn was the first COVID-19 fatality reported in New York City, and a significant case series of 5,700 COVID-19 patients admitted to hospitals in New York City revealed a similar pattern of COVID-19 fatalities with age. The mechanism of SARS-CoV-2 infection is still not fully understood; however, the primary mechanism rests on the expression and utilization of the ACE2 receptor, which enables SARS-CoV-2 entry into the cell. Lymphopenia, abnormal respiration, and high levels of pro-inflammatory cytokines in plasma are the main manifestations ascribed to individuals with COVID-19 infection, along with very high body temperature and respiratory issues. COVID-19 is accompanied by several metabolic and viral disorders, all of which play a part in the development of the more complicated symptoms. The CFR for India seems to be lower than in several European nations.
This might be attributed to the low percentage of the population (6.38%) above the age of 65, as per 2019 statistics. According to reports, COVID-19 puts elderly persons (those over 60) at a greater risk of mortality. The relationship between CFR and several other health and socioeconomic factors, variations in the virulence of SARS-CoV-2 across geographical areas, and COVID-19 response indicators unique to certain countries has to be studied further. The projected CFRs for India (July 2020) based on the random- and fixed-effect models were 1.42% (95% CI 1.19–1.70%) and 2.97% (95% CI 2.94–3.00%), respectively. Estimates made using the random-effects model were more likely to reflect the real CFR for India accurately because of the high level of heterogeneity. In earlier research, the COVID-19 CFR was estimated using random-effect models, or the CFR was provided using both random- and fixed-effect models. Using a random-effects model ensured that states with many cases and fatalities received more weight than those with fewer cases and deaths. Table 3 shows that COVID-19-infected patients with comorbidities had a higher death risk. As a consequence of aging, the body experiences progressive biological alterations in immune function, concurrently causing an increased susceptibility to age-related inflammation (inflammaging) and other associated inflammatory conditions, which makes the elderly population vulnerable to an enhanced risk of infection following exposure to the virus. Inflammaging is a chronic low-grade inflammation mediated by dysfunction in the basal responses of the pattern recognition receptors (PRRs) and pathogen-associated molecular patterns (PAMPs). A progressive impairment in autophagic signaling affects the PRR signals in the aging population and consequently causes the exorbitant release of reactive oxygen species (ROS).
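The CFR figures discussed here are simple death-to-case ratios. As a worked example using the cumulative worldwide counts from the Johns Hopkins figures cited in the introduction (which reflect a later time point, and hence a lower ratio, than the early-pandemic values in Table 2):

```python
# Case fatality ratio (CFR): deaths as a percentage of confirmed cases.
def cfr(deaths, cases):
    return deaths / cases * 100

# Cumulative worldwide counts by the fourth week of July 2022.
worldwide = cfr(6_395_451, 574_818_625)
print(f"worldwide CFR: {worldwide:.2f}%")  # 1.11%
```

Note that a crude CFR like this understates or overstates risk depending on testing coverage and reporting lag, which is one reason the random- and fixed-effect model estimates above differ.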
This condition may be worsened further by the binding of the virus to immune cells, as these also work in synchronization with PAMPs and PRRs, thereby causing oxidative damage to cells in older individuals. Another hallmark of inflammaging is the increased production of interleukins (ILs), especially IL-1β and IL-18, due to activation of the NLR family pyrin domain containing 3 (NLRP3) inflammasome in COVID-19 infections. NLRP3 activation is strongly correlated with aging. This consequently causes increased pyroptosis, which is central to cell death post-infection; the increased release of IL-1β and IL-18 as well as damage-associated molecular patterns (DAMPs) further compounds inflammatory responses in the elderly. This mechanism has also been reported to be the root cause of inflammatory conditions such as cancer, diabetes, PD, and acute myocardial injury. Furthermore, with age there is a progressive hardening of the endothelial cells, causing increased plaque formation and leading to a hypercoagulative state. Following COVID-19 infection, reports have demonstrated that a hypercoagulative state is associated with an increased risk of comorbid conditions such as ischemic stroke and myocardial infarction. Therefore, it can be concluded that aging and comorbid conditions are crucial factors in the response elicited by COVID-19 infection. In the subsequent sections, we discuss the latest reports of age-related comorbidities, such as PD, cancer, diabetes, and CVD, and how they relate to the severity and pathology of COVID-19.
The population of patients needing hospital admission is disproportionately composed of older adults, men, and people with comorbid conditions, including diabetes and CVDs. After COVID-19 expanded to multiethnic communities in Western Europe and North America, multiple studies claimed that Black, Asian, and other minority ethnic groups were more likely to be affected by the illness. According to statistical reports from the USA, in several cities and in cumulative analyses across large states, Black and Hispanic people had higher per capita mortality rates than White people, but the underlying reasons were unknown. A large cohort study conducted in the United Kingdom (UK) reported higher overall death rates for Black and South Asian people compared to White people; however, that study ignored the wide variances in the ethnic makeup of local populations across geographic locations. In this regard, a case-control and cohort study was conducted at King’s College Hospital Foundation Trust, UK to investigate whether ethnic origin influences the probability of hospital admission with severe COVID-19 and/or in-hospital mortality. Inner-city adult patients with confirmed COVID-19 admitted to the hospital (n = 872 cases) were compared with 3,488 matched controls randomly drawn from a primary healthcare database of 344,083 people dwelling in the same area. For the cohort study, the authors examined 1,827 people continuously hospitalized with COVID-19. Self-defined ethnicity served as the primary exposure factor, and analyses were adjusted for socio-demographic and clinical characteristics. Conditional logistic regression analysis demonstrated that Black and Mixed/Other ethnicity was linked with greater admission risk than White ethnicity. Further, Black and Mixed ethnicity could be linked to disease severity but not to in-hospital mortality. This was attributed mainly to ethnicity and partly to comorbidities and socioeconomic factors. In addition, the study elucidated an association of increased in-hospital mortality with ICU admission for Asian ethnicity. Therefore, it can be concluded that COVID-19 disease outcome is influenced by ethnic background.
Although aging is a prominent factor in comorbidities, it may not be the only confounding factor for disease severity. Reports have suggested that hospitalized males had the highest mortality rate compared to females, and this association was more prominent in patients with predisposing conditions such as hypertension, diabetes, and obesity, in an age-dependent manner. One study identified male sex as a predictor of ICU admission. In contrast, in the context of long-term COVID-19 manifestations, women were more likely to report uneasiness, breathlessness, and fatigue following recovery. Differences in severity outcomes also stemmed from biochemical differences between males and females; compared to male patients, females had higher lymphocyte counts, higher levels of high-density lipoprotein, and lower levels of highly sensitive C-reactive protein. Another hypothesis for the gender disparity in protection is that females have biallelic Toll-like receptor (TLR)7 expression, leading to a better interferon (IFN)-mediated response after early infection. Interestingly, studies have reported that estrogen confers some protection against the severity of COVID-19. This has been validated in a preclinical setting, where mice infected with the SARS-CoV virus had a higher mortality rate following ovariectomy or administration of an estrogen receptor antagonist. Further, pregnant women with mild infection demonstrate the same outcomes as uninfected pregnant women; however, those with severe infection demonstrate a higher risk of perinatal infection as well as mortality, usually with a tendency to feel unwell, which is further exacerbated during COVID-19 infection and may result in a worsening of the patients’ condition. Children are infected with COVID-19 disproportionately less often than the older population, and with low infection severity; this could be attributed to lower concentrations of ACE2 receptors in children as well as trained/acquired immunity resulting from vaccination, indigenous virus competition, and maternal immunity. Thus, it can be concluded that sex differences play a major role in the outcomes and severity of COVID-19 infection. Due to the pivotal role of female hormones, women may be less susceptible to the long-term manifestations that are prevalent in male patients.
SARS-CoV-2 is a neurotropic virus that can enter the CNS by either hematogenous or neuronal retrograde dissemination. In an autopsy study, viral RNA was detected in the brains of several COVID-19 patients. Patients with neurological diseases are generally more vulnerable to respiratory system infections, which could be attributed to the involvement of central respiratory centers in the case of COVID-19 infections. There is evidence suggesting the potential of SARS-CoV-2 to enter the brain through the olfactory epithelium and cause neuronal death in mice. Furthermore, there have been cases of COVID-19 individuals developing symptomatic parkinsonism 2–5 weeks after viral infection. SARS-CoV was discovered in the cerebrospinal fluid of individuals suffering from acute SARS-CoV disease, as was also reported in COVID-19 cases. Recent data from a study performed in three designated COVID-19 care hospitals of the Union Hospital of Huazhong University of Science and Technology in Wuhan, China suggest that 78 out of 214 COVID-19 patients demonstrated neurologic manifestations involving the CNS, peripheral nervous system, and skeletal muscles, indicating the neurotropic potential of this virus. Hitherto, there is not enough evidence regarding the susceptibility of PD patients to COVID-19. According to a study from the Parkinson’s and Movement Disorders Unit in Padua, Italy and the Parkinson’s Foundation Centre of Excellence at King’s College Hospital in London, UK, PD patients with an average age of more than 78.3 years and a disease duration of greater than 12.7 years are more prone to COVID-19; they also have a significantly high mortality rate of 40%. Patients in extreme stages of PD with respiratory muscle rigidity and dyspnoea, and those on deep brain stimulation or levodopa infusion therapy, showed high vulnerability and 50% mortality.
Moreover, SARS-CoV and H1N1 viruses (structurally and functionally similar to SARS-CoV-2) can aggravate the mechanisms involved in PD pathophysiology, as supported by previous studies. A few studies also suggest the involvement of the CNS in COVID-19 infection, indicating that PD patients might be more prone to COVID-19 infection.
There are various theories about how SARS-CoV-2 enters the CNS. Peripheral SARS-CoV-2 infection can lead to a cytokine storm, which might disrupt blood–brain barrier integrity and could be one mechanism by which SARS-CoV-2 infiltrates the CNS. Besides this, ACE2 receptors are highly expressed in the substantia nigra and striatum (the regions potentially affected in PD), making the dopaminergic neurons in these brain regions more susceptible to SARS-CoV-2 infection. Furthermore, the accumulation of alpha-synuclein (α-Syn) is the major hallmark of PD. According to previous findings, the entrance of SARS-CoV-2 into the CNS may upregulate this protein, causing aggregation. On the contrary, a few studies have indicated a protective effect of α-Syn in blocking viral entry and propagation into the CNS; moreover, its expression in neurons can act as a barrier to viral RNA replication. In a retrospective cohort study conducted in Japan, PD patients suffering from pneumonia showed a lower mortality rate. Furthermore, the main pathophysiological pathways involved in PD development include autophagy disruption, endoplasmic reticulum (ER) stress, and mitochondrial dysfunction; COVID-19 may trigger frequent modulations in these pathways, as seen with SARS-CoV and the influenza A virus. Proteostasis plays a vital role in protein translation, folding, and subsequent clearance with the help of heat shock proteins (HSPs). Viral infection hijacks the host cellular machinery for its replication and disrupts the proteostasis pathways by interacting with Hsp40. The viral-Hsp40 interaction results in the binding of Hsp40 with two subunits of the viral RNA polymerase, which further assists the viral genome in translocating into the nucleus via interaction with the viral nucleoprotein and inhibition of protein kinase R (PKR) activation, thereby restricting the host from producing an antiviral response. Hsp90 also modulates the activity of the viral RNA polymerase after it enters the nucleus.
Under normal conditions, Hsp90 and Hsp70 restrict apoptosis initiation pathways, thereby reducing apoptosis. However, infection with SARS-CoV-2 suppresses Hsp90 and Hsp70 function, leading to activation of the caspase cascade, followed by apoptosis and subsequent propagation of infection. The autophagy-lysosomal pathway and the ubiquitin-proteasome pathway are two important components of proteostasis and are responsible for the degradation of impaired proteins. It has been reported that H1N1 infection obstructs autophagic flux at the initial stages, resulting in a reduced number of autophagosomes, and hinders autophagosome-lysosome fusion at later stages of autophagy. Both these activities result in autophagy disruption in human dopaminergic neurons and the mouse brain and subsequently lead to α-Syn aggregation. Furthermore, when H1N1 was instilled intranasally in Rag knockout mice, α-Syn aggregates were found in cells near the olfactory bulbs, from where they may spread in a prion-like manner to other regions of the brain and initiate the downstream events of PD pathogenesis. The ubiquitin-proteasome system can destroy viral proteins by ubiquitination; however, the H1N1 virus hijacks this mechanism and inhibits the host cell’s opposition to viral reproduction, disrupting proteostasis and promoting toxic protein aggregation. SARS-CoV-2 has also been shown to act similarly to the H1N1 virus and might be involved in α-Syn accumulation. Besides this, the ER has also been reported to be a target of various viruses. SARS-CoV-2 utilizes the ER for the synthesis and processing of viral proteins. It has been shown previously that the Spike (S) protein accumulates in the ER and induces the unfolded protein response (UPR) by transcriptional activation of several UPR effectors, including glucose-regulated protein 78 (GRP78), GRP94, and CCAAT/Enhancer-binding protein (C/EBP) homologous protein, to aid viral replication.
The UPR may result in ER stress, which further activates cellular signals triggering neuronal death, a process implicated in PD. Another study discovered that SARS-CoV open reading frames (ORF) 6 and 7a produce ER stress via GRP94 activation. Mitochondrial dysfunction is another pathway connecting PD with COVID-19. ORF-9b of SARS-CoV-2 degrades dynamin-like protein 1 (Drp1), which is involved in mitochondrial fission, thus causing mitochondrial elongation. Moreover, it suppresses antiviral cellular signaling by targeting the mitochondrial-associated adaptor molecule signalosome. ORF3b is located partially in the mitochondria and is involved in apoptosis along with other accessory proteins (ORF3a, ORF6, and ORF7a of SARS-CoV). The virus utilizes the mitochondria for caspase activation and apoptosis, thereby disseminating to other cells. The aforementioned cellular malfunctions result in increased ROS, redox imbalance, and mitochondrial and lysosomal dysfunction, making the cells more susceptible to infection. According to current research, neuroinflammation is a defining factor in COVID-19 infection. Proinflammatory cytokine levels are higher in the periphery and cerebrospinal fluid of PD patients. Strikingly, studies have reported that viral infection can also induce neuroinflammation. Due to compromised anti-inflammatory mechanisms in old age, the older population is more susceptible to developing neurodegenerative diseases as well as severe COVID-19 infection. TLRs may play a role in the immunological response to coronavirus infections, as indicated by the recognition of virus-derived PAMPs (lipopolysaccharides, dsDNA/RNA, ssRNA) in host cells by specific TLRs following infection. TLR3 is known to be activated in the case of HSV-I and influenza A infection.
When TLRs are triggered, pro-inflammatory cytokines (IL-1, IL-6, and tumor necrosis factor-alpha (TNF-α)) and type I IFN-α/β are released via MyD88-dependent and MyD88-independent pathways, which further translocate nuclear factor kappa-light-chain-enhancer of activated B cells (NF-κB) and the interferon regulatory factors (IRF) IRF-3 and IRF-7 into the nucleus. Moreover, NF-κB has been reported to contribute to the pathogenesis of PD by triggering the release of pro-inflammatory mediators and subsequent neuroinflammation. Therefore, it can be concluded that NF-κB plays a common role in inflammation in both PD and COVID-19 pathogenesis. Neuroinflammation can also trigger misfolding and aggregation of α-Syn. Aggregated α-Syn leads to the activation of microglia, which further favors the production of pro-inflammatory cytokines, ultimately causing neurodegeneration (Figure 2).
None of the anti-Parkinsonian drugs render PD patients at risk for COVID-19; therefore, PD patients should not alter or stop any medicine without a clinician's consultation. However, in order to avoid possible interactions, PD patients with COVID-19 infection should not use cough suppressants containing dextromethorphan and pseudoephedrine with selegiline. Previous data suggest that therapies used for COVID-19, such as angiotensin converting enzyme inhibitors (ACEi), might be safe for PD patients as well. Neuroprotective effects of ACEi like captopril and perindopril have been observed in PD animal models, where they act by preventing dopaminergic cell loss and increasing striatal dopamine content, respectively. ACEi also work as antioxidants by reducing oxidative stress, which has been linked to PD, and were found to decrease the number of falls in a cross-sectional study involving 91 PD patients. Furthermore, hydroxychloroquine exhibits anti-Parkinsonian effects by raising Nurr1 expression, inhibiting glycogen synthase kinase-3 beta (GSK-3β), and functioning as an anti-inflammatory drug, making it a viable treatment option for PD patients infected with COVID-19. In another study, a COVID-19 patient with PD recovered after treatment with remdesivir in a clinical trial. In addition, the antiviral drug amantadine used in PD treatment has also been used for years in the treatment of influenza. Amantadine inhibits viral replication by blocking the influenza M2 ion channel, thereby preventing the delivery of viral ribonucleoprotein into the cytoplasm of the host, and might have a disruptive effect on the lysosomal pathway. As a result, amantadine might be advantageous as a potential treatment approach to lower viral load in COVID-19-positive PD patients. Oseltamivir phosphate, an anti-influenza drug, was found to be useful in PD as it inhibits H1N1-induced α-Syn aggregation.
Many studies have highlighted that older people are typically deficient in vitamin D, which might have antiviral properties. As a result, vitamin D3 supplementation (2000–5000 IU/day) has been recommended in older PD patients, which might protect them against COVID-19. Moreover, it has been suggested that vitamin D3 can slow down the progression of PD. Above all, fenoldopam, a dopamine D1 receptor agonist, was shown to be protective against inflammation as well as lung permeability and pulmonary edema in an endotoxin-induced acute lung injury mouse model. Taken together, it can be concluded that therapies used in PD and COVID-19 infection do not have any detrimental drug interaction. However, diagnosis presents a challenge in PD patients, especially in the older population, as they might neglect COVID-19 symptoms because of other chronic diseases. Furthermore, early symptoms of COVID-19, for example hyposmia, might be neglected by PD patients as they commonly suffer from olfactory dysfunction. Still, people with COVID-19 pneumonia who take medications for PD should have their doses adjusted, because motor symptoms can impair breathing, thereby worsening the condition.
There is a scarcity of data linking PD and COVID-19 outcomes. However, evidence from molecular processes of SARS coronaviruses altering proteostasis, mitochondrial and ER function, and α-Syn aggregation suggests that it is necessary to identify medications acting on these pathways as a treatment for PD patients suffering from COVID-19. Detailed investigations on these mechanisms are required to determine PD patients' sensitivity to COVID-19 as well as the safety of vaccinations, ACEi, and antiviral drugs used for COVID-19. The indirect effects of this pandemic on PD patients, including the lack of direct patient–doctor visits, depression due to social distancing, reduced physical activity, and battery failure in patients on deep brain stimulation therapy, also need attention and should be addressed with the help of video conferencing and the availability of a sufficient stock of medications.
Reports suggest patients with lung cancer, hematological cancer, or any metastatic cancer are potentially at high risk of COVID-19 infection, either due to treatment or disease susceptibility. Toward this, COVID-19-infected cancer patients were studied retrospectively; the analysis showed a higher incidence of severe events following COVID-19 infection, especially in patients who had received anticancer treatment within the preceding 14 days. In one study, a total of 15 (53.6%) patients developed severe clinical events, and the mortality rate was 28.6%. Similar studies suggested being vigilant toward cancer patients who are on anticancer treatments, as they are prone to COVID-19 infection as a result of reduced immunity. As per a nationwide study in China, roughly 39% (7 out of 18) of cancer patients infected with COVID-19 experienced severe symptoms, compared to only 8% (124 out of 1572) of patients without cancer. Another collective report identified 52 different studies worldwide involving 18,650 cancer patients infected with COVID-19, with 4,243 deaths. The data further implied that the mortality risk of COVID-19-infected cancer patients is about 25.6% as compared to 2.3% in the general population. COVID-19 puts lung cancer patients at significant risk of developing severe episodes. In this regard, a study reported that out of 102 lung cancer patients infected with COVID-19, 62% were hospitalized, and the mortality rate was approximately 25%. In another study, 55% (6 out of 11) mortality was observed in lung cancer patients infected with COVID-19, which was very high compared to other cancers.
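The proportions quoted above follow directly from the reported patient counts; the following is a minimal illustrative sketch of the arithmetic (using only the figures cited in this paragraph, not any external data):

```python
# Illustrative arithmetic only: severe-event proportions and their ratio,
# using the counts cited above (7/18 cancer vs 124/1572 non-cancer patients).

def proportion(events: int, total: int) -> float:
    """Fraction of patients experiencing the outcome of interest."""
    return events / total

risk_cancer = proportion(7, 18)         # ~0.39, i.e., ~39%
risk_no_cancer = proportion(124, 1572)  # ~0.08, i.e., ~8%

# Relative risk of severe symptoms for cancer vs non-cancer patients
relative_risk = risk_cancer / risk_no_cancer

print(f"cancer: {risk_cancer:.0%}, non-cancer: {risk_no_cancer:.0%}, "
      f"relative risk: {relative_risk:.1f}")
```

Running this reproduces the ~39% versus ~8% figures, corresponding to a roughly five-fold higher risk of severe events in the cancer group.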
SARS-CoV-2 infects the host cell via ACE2, a cell surface receptor that is highly expressed on lung epithelial cells; viral entry is facilitated by cleavage mediated by the host transmembrane serine protease 2 (TMPRSS2). The viral internalization evokes the host immune response through the activation of alveolar macrophages and the complement cascade. This activation leads to a massive release of pro-inflammatory cytokines such as IL-1, IL-6, IL-8, and IFN-γ. This phenomenon, termed the cytokine storm, further causes alveolar endothelial tissue damage. Among these cytokines, IL-6 is involved in the pathophysiology of cancer, especially lung cancer, and other chronic diseases. It has been reported that IL-6 promotes tumorigenesis and anti-apoptotic signaling and is an important biomarker for cancer diagnosis and prognosis. The involvement of IL-6 in abnormally immune-activated conditions like cancer inflammation and immunosuppression has also been reported. Apart from IL-6, the activation of serine/threonine p21-activated kinase 1 (PAK1), an essential component of malaria and some viral infections, is also a critical mediator of the cytokine storm and is overexpressed in SARS-CoV-2-infected lungs, contributing to mortality in COVID-19 patients. One study has reported that human PAK is an important component of host–pathogen interactions. PAK paralogues (Group I PAKs include PAK1, PAK2, and PAK3; Group II PAKs include PAK4, PAK5, and PAK6) are found in nearly all mammalian tissues, wherein they play important roles in a variety of processes including cell survival and proliferation, cell cycle progression, and cytoskeletal organization, and are involved in different types of cancer. In drug development, the role of PAKs in cell survival and proliferation, as well as their participation in a variety of malignancies, is of significant interest.
PAK1 activation can lead to the development of lung fibrosis by stimulating chemokine (C–C motif) ligand 2 (CCL2) production, thereby aggravating the patient's condition. PAK1 blockers can help restore the immune response, thereby combating virus-induced lung fibrosis. In this regard, the PAK1 inhibitor propolis was tested as a therapeutic approach for treating COVID-19 patients. Its extract components were shown to have inhibitory effects against other targets like ACE2 and TMPRSS2 (Figure 3). Besides, underlying conditions and altered immune responses increase the risk of developing venous thromboembolism and microvascular COVID-19 lung vessels obstructive thrombo-inflammatory syndrome in cancer patients. The progressive endothelial thrombo-inflammatory syndrome may also involve the brain's microvascular bed and other vital organs, resulting in multiple organ failure and death. Additionally, Bhotla and colleagues hypothesized that platelets become infected during COVID-19, which exacerbates the SARS-CoV-2 infection and ultimately leads to bronchopneumonia or death. In conclusion, the biochemical and immunological characteristics outlined above demonstrate that cancer patients are more susceptible to COVID-19 infections than the general population. However, non-biological variables, such as increased contact with the healthcare system for cancer treatment, may potentially contribute to the increased COVID-19 prevalence in cancer patients.
Because of the current pandemic situation, as well as the associated risk factors, cancer patients must take additional precautions. As a result, in order to minimize the adverse consequences of the COVID-19 pandemic on highly susceptible cancer patients, hospitals should have more robust management procedures in place. To this end, non-urgent chemotherapy or surgery should be postponed, intensive treatment should be provided where needed, greater personal protection should be provided, telemedicine should be used, and a separate treatment approach for COVID-19 cancer patients should be implemented. As discussed in the previous section, IL-6 has been recognized as a crucial component of the immune response to SARS-CoV-2. Many clinical trials are currently underway to investigate treatment strategies targeting IL-6 by repurposing anti-IL-6 therapeutics for COVID-19 in cancer patients. Tocilizumab, a monoclonal antibody against the IL-6 receptor, has shown promising results in a double-blind, placebo-controlled phase-III study called EMPACTA (NCT04372186). Similarly, siltuximab, a chimeric mouse–human monoclonal antibody against IL-6, has already exhibited antitumor efficacy and is under diverse randomized controlled trials further assessing its efficacy for COVID-19 patients (NCT04486521, NCT04330638, and NCT04329650). In addition, tumor reversion therapy might become a near-future therapy for the treatment of cancer, and the molecular biology behind the tumor reversal process is fascinating. Some chemical compounds used for tumor reversion include LY294002, metformin, sertraline, and ellipticine. Apart from this, T cell therapies such as immune checkpoint inhibitors (ICIs) and chimeric antigen receptor T cell (CAR-T) therapy may increase the risk of cytokine release, resulting in increased severity of COVID-19 infection. The cytokine storm is directly linked to COVID-19-associated conditions such as acute respiratory distress syndrome (ARDS) and multiorgan failure.
Moreover, immunocompromised cancer patients on immunosuppressive therapy, as well as those at risk for immune-related side effects in response to immuno-oncology treatments, should be monitored closely. Another treatment approach is mesenchymal stem cell therapy, which is approved and is now used for cancer treatment alone or in combination with other drugs. It was even applied to 7 patients with COVID-19 infection; the transplanted cells were negative for ACE2 and TMPRSS2 expression. On the sixth day, cytokine-secreting cells (CXCR3+ CD4+ T, CXCR3+ CD8+ T, and CXCR3+ NK cells) disappeared as peripheral lymphocyte counts increased. While TNF-α levels were reduced, dendritic cell populations and IL-10 levels were increased. Along with IL-6, other cytokines associated with the pathogenesis of COVID-19 infection in cancer patients include type I IFN, IL-1, IL-7, IL-17, and TNF-α. Even a clinical study named Bee-COVID was conducted with Brazilian green propolis extract for the treatment of COVID-19 (NCT04480593). In conclusion, targeting these inflammatory markers may serve as a good approach to treating COVID-19 infection in cancer patients.
The COVID-19 outbreak is a global threat to the health system. Despite substantial study, there is currently no recognized treatment for COVID-19. It is still unclear why some people respond severely to SARS-CoV-2 infection while others remain asymptomatic. Further, why people with coexisting comorbidities are more susceptible to severe clinical events of COVID-19 is unclear. Being at high risk of infection and deteriorating outcomes, cancer patients are advised to be more cautious and follow the guidelines issued by the World Health Organization and the European Society for Medical Oncology.
Diabetes mellitus (DM) is a common metabolic disorder with multiple etiologies, primarily associated with a deficiency of insulin secretion and/or its action. Besides the clinical complications of the disease, an individual with diabetes is more susceptible to a broad range of infections (such as foot infection, rhinocerebral mucormycosis, malignant external otitis, and gangrenous cholecystitis) as well as predisposed to conditions that primarily affect the lungs, like influenza, tuberculosis, and Legionella pneumonia. Moreover, diabetes and its complications, such as disrupted glycemic control and ketoacidosis, were found to be potential risk factors for mortality in the influenza A (H1N1) pandemic in 2009 and in SARS-CoV and MERS coronavirus infections. Several studies have reported high mortality in COVID-19 patients with diabetes. More specifically, diabetes was the most common underlying comorbidity, present in approximately 22% of the 32 nonsurvivors from a cohort of 52 COVID-19 patients in intensive care. Detailed clinical research on 140 hospitalized COVID-19 patients in Wuhan indicated that diabetes had the second highest prevalence (12.1%) after hypertension. Another study on a subset of 1099 patients discovered that out of 177 severe cases, 16.2% of patients with seriously infected conditions had diabetes. Eighteen of these patients had composite outcomes, including death, use of mechanical ventilation, and admission to an ICU. Recent epidemiological research of 72,314 COVID-19 patients at the Chinese Center for Disease Control and Prevention found that mortality in diabetic patients was three times greater than in non-diabetic patients (a 7.3% mortality rate as compared to the overall 2.3% mortality rate). Besides, a recent retrospective multicenter cohort study on 191 laboratory-confirmed COVID-19 cases observed a statistically significant association between diabetes and increased mortality.
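The "three times greater" figure follows from the two case-fatality rates cited; a minimal sketch of the arithmetic (using only the percentages reported from the CCDC study above):

```python
# Illustrative arithmetic only: mortality-rate ratio for diabetic vs overall
# COVID-19 patients, from the percentages cited above (7.3% vs 2.3%).

diabetic_mortality = 0.073  # case-fatality rate in diabetic patients
overall_mortality = 0.023   # overall case-fatality rate

mortality_ratio = diabetic_mortality / overall_mortality
print(f"mortality ratio: {mortality_ratio:.2f}")
```

The ratio comes out at roughly 3, consistent with the "three times greater" mortality reported in the study.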
In yet another study, Fadini and his colleagues at the University Hospital of Padova discovered that the mortality rate of diabetic SARS-CoV-2 patients was 1.75 times greater than that of the general population. This research, coupled with earlier research, suggested that diabetes may exacerbate the outcome of the new coronavirus illness. The CDC data further suggested that COVID-19-infected diabetic patients have a higher risk of developing severe symptoms. Toward this, in a retrospective study conducted in Wuhan, China, 32% of patients had comorbidities, 20% of which were diabetic. In addition, hyperglycemia or dysregulated glycemic management is associated with a high risk of complications. Diabetes was further identified as a major risk factor for the course and prognosis of COVID-19 in a study of 174 COVID-19 patients admitted to Wuhan Union Hospital.
COVID-19 infection in diabetic patients might increase stress hormone levels, such as glucocorticoids and catecholamines, leading to high glucose levels. Hyperglycemia in diabetes permits non-enzymatic glycosylation of lung collagen and elastin by advanced glycation end products, thereby reducing the elasticity of the lungs in COVID-19 infection. This causes thickening of the alveolar epithelial basal lamina and microvascular alterations in the pulmonary capillary beds, further leading to a reduction in pulmonary capillary blood volume and diffusing capacity, which influence the patient's overall survival. Even a brief period of hyperglycemia has the potential to alter immune cell function. Diabetes increases pro-inflammatory cytokines, including IL-1, IL-6, and TNF-α. IL-6, C-reactive protein, serum ferritin, and the coagulation index D-dimer are significantly higher in diabetic individuals than in those without diabetes. Current research suggests that diabetic patients are more prone to cytokine storms, which may be further exaggerated in response to a stimulus, as seen in patients with COVID-19 infection. In one study, patients infected with COVID-19 developed a fatal hyperinflammatory syndrome characterized by fulminant hypercytokinemia, thought to be associated with disease severity as demonstrated by increased IL-2, IL-7, IL-12, TNF-α, IFN-γ-inducible protein 10, and macrophage inflammatory protein-1α (MIP-1α/CCL3). A prolonged hyperglycemic state causes an elevated immune response, which further leads to inflammation. Interestingly, increased glucose levels were discovered to directly promote SARS-CoV-2 replication in human monocytes and to sustain SARS-CoV-2 replication through the formation of ROS and activation of hypoxia-inducible factor-1α.
This massive influx of inflammatory cells has the potential to disrupt the activities of the primary insulin-responsive organs, i.e., the skeletal muscles and liver, which are primarily responsible for insulin-mediated glucose absorption. High glucose concentration in the plasma leads to cytokine production, glucotoxicity, and viral-induced oxidative stress; these factors promote a greater risk of thromboembolic problems as well as damage to important organs in diabetic patients (Figure 4). Moreover, one enzyme, dipeptidyl peptidase-4 (CD26 or adenosine deaminase complexing protein 2), tends to bind the virus and promote ACE2 expression, which is involved in initiating the infection. Furthermore, studies have shown severe lung pathology with dysregulated immune responses in mice with existing type 2 DM (T2DM) infected with MERS-CoV. In addition, COVID-19 patients with diabetes progress more easily to acute respiratory distress syndrome and septic shock, resulting in multiple organ failure. Another direct metabolic link between coronavirus infection and diabetes is based on the expression of ACE2 in specific tissues. ACE2 is a transmembrane glycoprotein that converts angiotensin II (Ang II) to angiotensin I–VII (Ang I–VII). ACE2 is also expressed in blood vessels, macrophages, and monocytes. The expression of ACE2 is of prime importance in the etiology of SARS-CoV-2. It has been observed that inflammatory signals produced by macrophages, such as type I IFN, increase ACE2 receptor expression. The infection of pancreatic macrophages may have triggered these inflammatory signals. As a result, more immune cells, particularly pro-inflammatory monocytes/macrophages, are recruited, causing further damage to the islets of Langerhans and β-cells in the pancreas. This reduces insulin release, subsequently resulting in acute hyperglycemia and transitory diabetes even in healthy people.
Additionally, macrophages and monocytes are mobile, and once infected with the SARS-CoV-2 virus, they can infiltrate the cells of the pancreatic islets and spread the virus throughout the pancreas. Furthermore, SARS-CoV-2 infection of the β-cells can directly damage them, causing apoptosis and further worsening the glycemic control of diabetic patients. Moreover, anti-diabetic medicines such as glucagon-like peptide 1 agonists, antihypertensive drugs, and statins also increase ACE2 expression. COVID-19-infected patients were treated with different medications, such as systemic corticosteroids and antiviral medicines, which may potentially cause hyperglycemia. Glucocorticoid-induced DM (GIDM) is a frequent and potentially critical issue to address in clinical practice, as it may contribute to worsening hyperglycemia and ultimately aggravate the diabetic condition associated with COVID-19 infection. Mechanisms underlying glucocorticoid-induced hyperglycemia include promotion of weight gain, decreased peripheral insulin sensitivity, increased glucose production through gluconeogenesis, β-cell injury due to the destruction of pancreatic cells, increased fatty acid levels, and impaired insulin release. However, in the case of SARS-CoV-2 infection, the effect of diabetes on ACE2 expression still needs to be studied in detail. In one study, increased expression of ACE2 was observed in the kidneys of diabetic patients in the early stage, followed by decreased expression in later stages that relatively overlaps with the occurrence of diabetic nephropathy. Each of these pathways working synergistically may worsen the situation for diabetic patients, making them frailer and further increasing the severity of COVID-19 disease.
The glucose level in diabetic patients is mainly maintained with the administration of insulin, which is mainly recommended for critically ill patients infected with SARS-CoV-2. It has also been observed that insulin infusion significantly reduces inflammatory cytokines and helps lower the severity of COVID-19. Metformin has been proven to have anti-inflammatory properties in preclinical investigations, and it has also been demonstrated to lower circulating levels of inflammatory biomarkers in persons with T2DM. Besides its anti-inflammatory action, metformin is potentially active against the SARS-CoV-2 virus: it inhibits the virus–host-cell association and prevents the expression of ACE2 through the activation of adenosine monophosphate-activated protein kinase. Another promising molecule, the DPP-4 antagonist linagliptin, an anti-diabetic agent with potential anti-aging properties, has been repurposed to combat COVID-19 infection. Further, plant-based natural compounds such as resveratrol, catechin, curcumin, procyanidin, and theaflavin have been tested for the treatment of COVID-19 through in silico, in vitro, and in vivo studies. More recently, convalescent plasma therapy has also been applied for COVID-19-associated comorbidities, wherein it was used to downregulate the inflammatory cytokines and the viral load in COVID-19 patients. Sulfonylureas must be avoided in COVID-19 patients comorbid with T2DM, as they can cause hypoglycemia. Thiazolidinediones can be used with caution in mild disease as they have protective effects on the cardiovascular system; however, due to weight gain, edema, and heart failure, thiazolidinediones are not used in moderate and severe conditions.
COVID-19 pathology majorly involves inflammation that eventually leads to multiple organ failure and even death. Comorbid diabetic patients are highly susceptible to hospitalization following COVID-19 infection. Diabetes is characterized by a persistent state of hyperglycemia that can aggravate SARS-CoV-2-induced inflammation. Therefore, diabetic patients' medication should be monitored, as some drugs can increase the expression of viral entry receptors, leading to a poor prognosis of COVID-19 in these patients. Further, insulin resistance has also been reported in comorbid patients, especially those with T2DM. Hence, management of diabetes is of utmost importance in infected patients. Drugs that do not interact with ACE2 receptors could be given, though insulin treatment is unanimously given for COVID-19 patients with comorbid diabetes.
CVD is a broad classification for a range of conditions that involve dysregulation of the heart and vascular systems. Since COVID-19 infection is a multiplexed pathophysiological condition, it is obvious that the cardiovascular system is at the crux of exposure. Toward this, in a multicenter study conducted in Wuhan involving 191 patients, 24% of the patients who died had coronary heart disease. In addition, in a systematic review of 199 patients with COVID-19, 40% of them were diagnosed with myocarditis. Further, a meta-analysis reported that 25% of the patients developed acute cardiac injury, and the mortality rate was 20 times higher in those with pre-existing comorbidities in comparison to those with no pre-existing comorbidities. As discussed in the previous section, systemic inflammation due to viral infection is prevalent and is a consequence of the cytokine storm. This inflammation is of particular concern for cardiac tissues and has been shown to cause myocarditis. The mechanism proposed for this pathology involves hepatocyte growth factor release, followed by priming of immune cells such as T cells and subsequent release of IL-6. As a result, COVID-19 patients have been found to have pericardial effusion, which can lead to inflammation of the membranes around the heart and pericarditis. Conversely, the safety of mRNA vaccines is of particular concern as they have been reported to cause myocarditis. In the earlier section we discussed PD, its association with COVID-19, and the importance of the olfactory nerves and the brain. Although nerves express ACE2 receptors, which are of utmost importance for infectivity, one cannot rule out the possible involvement of the heart–brain axis. This bidirectional communication is important as it is mediated by the control of endothelial cells and the regulation of cerebral blood flow by somatosensory signaling mechanisms.
Thus, infection with COVID-19 disrupts this normal physiological function and, as a consequence, vascular–neuronal communication is disturbed, thereby causing headaches and increased incidences of stroke in infected patients. Along with this, serious implications such as neural spread of the virus and consequent central disturbances such as anxiety are predisposed by neuroinflammation and lowered brain-derived neurotrophic factor levels. Elevated levels of the inflammatory biomarkers CRP and D-dimer are the most common markers for COVID-19-associated coagulopathy. Particularly, elevated levels of D-dimer in hypertensive patients indicate the severity of the disease, since COVID-19 patients' death rates are more likely to rise with elevated D-dimer levels upon admission or over time. Alternatively, in patients with diabetic complications, depletion of the ACE2 receptor causes activation of the renin-angiotensin-aldosterone system (RAAS), leading to β-cell destruction and an increased risk of cardiomyopathy in an age-dependent manner. However, these concepts are still premature and need a discussion on hypertension, which has been studied to a great extent with COVID-19 and is a predisposing factor for other cardiovascular complications as well. HTN is the most common disorder in people aged 50 years or more. Epidemiological studies have reported a high risk of hospitalization for patients with CVD such as HTN after COVID-19 infection. One study has shown hypertension (30%) and coronary heart disease (20%) as the most prevalent comorbidities in COVID-19-infected patients. Another investigation has shown that HTN (27%) and cardiovascular complications (6%) were the most prevalent comorbidities among COVID-19 patients with acute respiratory distress syndrome. The high prevalence of hypertension in COVID-19 patients is not surprising, nor does it necessarily imply a causal relationship between hypertension and COVID-19 or its severity.
Further, hypertension is extremely common in the elderly, and older people appear to be at a higher risk of contracting SARS-CoV-2 and developing severe forms and complications of COVID-19. Hypertensive patients are frequently treated with ACEi or angiotensin receptor blockers (ARBs) to lower blood volume and ultimately blood pressure. However, the application of ACEi or ARBs in hypertensive patients is questionable, as ACEi and ARBs are reported to increase the levels of ACE2 in hypertensive patients, and importantly, SARS-CoV-2 is reported to enter lung cells through ACE2 receptors. Therefore, it is crucial to understand how the RAAS reacts to COVID-19 infection and whether it is feasible to use ACEi or ARBs.
ACE2 is a monocarboxypeptidase that is homologous to ACE and has an extracellular active site. Angiotensin I (Ang I) is cleaved by ACE to produce Ang II, which constricts blood vessels and increases salt and fluid retention by binding to and activating the angiotensin receptor 1 (AT1 receptor), causing HTN. However, membrane-bound ACE2 inactivates Ang II to angiotensin I–VII (Ang I–VII), which then binds to the Mas1 oncogene (Mas receptor), which has been shown to possess a vasodilator effect. Furthermore, ACE2 converts Ang I into angiotensin I–IX (Ang I–IX), which is then transformed into Ang I–VII by ACE. ACE2 cleaves Ang I with less binding affinity as compared to Ang II. ACE2 acts as a negative regulator of the RAAS, modulating vasoconstriction, fibrosis, and hypertrophy. Hypertensive individuals have lower gene expression and/or ACE2 activity than normotensive patients. Ang II, on the other hand, inhibits ACE2. It has been reported in preclinical studies that ACE2 deficiency induces HTN in rats when Ang II is in excess. In several studies, SARS-CoV infection lowered ACE2 expression in cells, causing severe organ damage by disturbing the physiological homeostasis between ACE/ACE2 and Ang II/Ang I–VII. The ACE2 transmembrane domain is internalized along with the virus during SARS-CoV-2 infection, which reduces ACE2 expression. Some transmembrane proteinases and proteins, such as a disintegrin and ADAM metallopeptidase domain 17 (ADAM17, also known as TNF-converting enzyme) and TMPRSS2, may be involved in the binding and membrane fusion processes. TMPRSS2 cleaves ACE2 to increase viral uptake, and ADAM17 can cleave ACE2 to produce ectodomain shedding. Importantly, lower levels of ACE2 in COVID-19-infected hypertensive patients result in a lower degradation rate of Ang II, overexpression of the AT1 receptor with reduced activity of Ang I–IX and I–VII, and promotion of hypertension, ARDS, hypertrophy, and myocardial injury.
The ACE2/Ang I–VII/Mas axis has been shown to play a beneficial role in the heart. It can enhance post-ischemic heart function by inducing coronary vessel vasorelaxation, inhibiting oxidative stress, and attenuating abnormal cardiac remodeling. ACE2 expression rises early in the course of a heart attack but declines as the disease progresses. ACE2 knockout mice develop myocardial hypertrophy and interstitial fibrosis, which accelerates heart failure. Furthermore, ACE2 deletion in mice exacerbates diabetes-related heart failure. ACE2 expression in cardiac cells has been shown to be significantly reduced in both SARS-CoV-infected humans and mice. Due to the significant downregulation of ACE2 and overexpression of Ang II in COVID-19 infection, the lack of the protective actions of Ang I–VII may exacerbate and perpetuate cardiac damage. According to current research and several clinical studies, HTN is a comorbidity in a significant proportion of individuals with severe illness; RAAS overactivation may have already occurred in these people before infection. The ACE2/Ang I–VII/Mas receptor axis counteracts the excessively activated ACE/Ang II/AT1 receptor axis seen in HTN (Figure 5). Notably, soluble ACE2 molecules have been demonstrated to limit SARS-CoV infection, and it has been suggested that a soluble recombinant form of ACE2 can act as a competitive interceptor of the SARS-CoV-2 virus and prevent it from latching onto cellular membrane-bound ACE2. Recombinant human soluble ACE2 was planned to be tested clinically for its efficacy against COVID-19 infection in ARDS so that a larger phase IIB trial could be performed, which may potentially benefit hypertensive COVID-19-infected patients (NCT04287686).
Cardiac damage induced by SARS-CoV and SARS-CoV-2 is a major cause of mortality and morbidity, affecting up to a third of those with the most severe form of the disease. SARS-CoV was found in one-third of human autopsy hearts, accompanied by a substantial drop in cellular ACE2. The involvement of ACE2 has also been studied in critically ill patients, in whom continued treatment with ACE2i (ACE2 inhibitor) was deemed to place less load on the heart, as the alveolar spaces are critically dismantled in these patients. Even though the powerful immune response seen in these people might affect cardiac function as it does the lungs, Ang II is expected to contribute to the negative effects of SARS-CoV on the heart and to SARS-associated cardiomyopathy. Inflammatory signals are thought to lower ACE2 cell-surface expression and transcription. Some contrasting studies report that ACE2 may also play vasodilatory, anti-inflammatory, and antioxidant roles through the generation of the fragment Ang I–VII from Ang II; however, this is scantily reported, and as a result the classical hypothesis of disease worsening remains the most prevalent. In conclusion, a decrease in cellular ACE2 may render cells less sensitive to SARS-CoV-2, whereas overexpression of the AT1 receptor causes more severe tissue damage. Conversely, as AT1 receptor activity decreases and ACE2 levels increase, the cell membrane becomes more sensitive to viral particles.
Both ACEi and ARBs have been demonstrated to upregulate ACE2, and some researchers have speculated that ARB and ACEi treatments may have a deleterious effect on SARS-CoV-2 infection. Given how commonly these compounds are used to treat HTN and heart failure, this might be a major source of concern. In animal studies, ACEi treatment increased plasma Ang I–VII levels, decreased plasma Ang II levels, and increased ACE2 expression in the heart. In contrast, Ang II receptor blockers (ARBs) increase plasma levels of both Ang II and Ang I–VII, as well as ACE2 expression and activity in the heart. ACEi/ARBs, renin inhibitors, and Ang I–VII analogs may minimize organ damage by blocking the RAAS pathway and/or increasing Ang I–VII levels. In population-based research, the use of ACEi and ARBs significantly lowered 30-day mortality in pneumonia patients requiring hospitalization. Treatment with ACEi/ARBs has also prompted concerns that increasing the expression of ACE2 in target organs might facilitate the development of severe COVID-19 infection. According to two large cohort studies, administration of ACEi/ARBs was associated with a lower risk of all-cause mortality among hospitalized patients rather than an increased chance of SARS-CoV-2 infection. However, further research is warranted to examine the protective effects of ACEi/ARBs in COVID-19. Despite the various probable confounders, a decrease in membrane ACE2 expression might explain many of the anomalies seen in SARS-CoV-2 infection. There is not enough clinical evidence to suggest a higher chance of acquiring a severe COVID-19 infection; further, it is unclear whether continuation or discontinuation of ARB/ACEi is a wise decision. In addition, we do not know whether switching to another treatment approach could worsen the patient’s situation, notably in individuals with heart failure and a poor ejection fraction. 
Further, it remains unclear whether RAAS inhibitor medication is useful or detrimental for virally induced lesions, and switching to another medicine could make the patient’s situation even worse. Clinical trials to assess the effect of losartan as a potential treatment approach for COVID-19 are currently starting (NCT04311177 and NCT04312009), and a trial to determine whether stopping or continuing ACEi/ARB treatment has any consequences is underway (NCT04338009). ACE inhibitors and angiotensin receptor blockers (ARBs) are not only used to treat HTN and heart failure, but they also have a minor impact on ACE2. Although β-blockers are unlikely to interact with ACE or ACE2, they do lower plasma Ang II levels by preventing the conversion of pro-renin to renin. Calcium channel blockers appear to reduce Ang II-induced ACE2 downregulation; however, the data are limited to a single research article studying nifedipine’s effect on fractionated cell extracts. Thiazides and mineralocorticoid receptor antagonists did not improve the naturally low ACE2 activity of hypertensive rats, although mineralocorticoid receptor antagonists did reduce ACE expression. In heart failure patients, on the other hand, mineralocorticoid receptor antagonists increase membrane ACE2 activity. Newer therapies such as DNA aptamers, short oligonucleotide sequences that bind to specific proteins, are being investigated for masking the ACE2 binding domain. In this regard, a group reported the synthesis of novel DNA aptamers that were shown to specifically bind to the ACE2-K353 domain and block the entry of the virus through ACE2 receptors. If the reduction in membranous ACE2 seen in HTN and obesity is important in the pathophysiology of severe COVID-19, it remains controversial whether non-ACEi/ARB drugs (β-blockers, calcium channel blockers, diuretics) are more likely to increase the risk of adverse outcomes than ACEi/ARB drugs, which increase ACE2 and provide theoretical protection.
It has been confirmed that SARS-CoV-2 enters the lung through the ACE2 receptor, followed by other tissues such as the liver, bile duct, gastrointestinal system (small intestine, duodenum), esophagus, and kidney. SARS-CoV-2 can damage these organs along with the heart and is transmitted from human to human rapidly, resulting in serious illness and life-threatening diseases such as heart attack or cancer. Although higher levels of the ACE2 enzyme may be expressed in hypertensive patients, this can serve as a marker for the severity of COVID-19. Effective treatment and prevention of coronavirus infection should begin immediately; however, developing vaccines or drugs for humans in a short period is difficult. Nevertheless, only Covaxin and Covishield are currently available in India. In the current scenario, any person infected with SARS-CoV-2 should be isolated to control the source of the infection. Scientists are trying to develop vaccines or repurpose drugs with the help of different in vivo and in vitro studies, yielding some positive evidence for the treatment of COVID-19, but this evidence is not sufficient to cure the infection. As COVID-19 treatment options are evaluated, it will be important to understand the possible cardiovascular side effects, with a focus on drug–drug interactions. Further, to learn more about how COVID-19 affects the heart, we need comprehensive molecular tests that examine underlying mechanisms, together with prospective and retrospective studies with sound clinical methodology.
The COVID-19 pandemic remains globally stressful. SARS-CoV-2 has infected the entire globe and has caused pneumonia-like symptoms in severely infected populations. Conversely, some new variants have been reported with less serious effects, with some patients being asymptomatic post-infection. However, the population afflicted with comorbid conditions is more vulnerable to death or ICU admission compared to people without comorbid conditions. Age is the highest risk factor in COVID-19 infection, as aged people show enhanced sensitivity of immune responses. Similarly, SARS-CoV-2 causes the cytokine storm, which makes aged people more sensitive to severe infection. A significant increase in mortality was observed among COVID-19-infected people with age-related disease conditions such as PD, cancer, diabetes, and CVD. COVID-19 infection in comorbid patients causes mitochondrial and ER malfunction that induces α-Syn aggregation and activates TLRs, and ultimately causes nuclear translocation of NFκB through the JAK/STAT3 pathway in cancer patients. SARS-CoV-2 infection in diabetic individuals leads to depletion of the beta cells of the islets of Langerhans, creating an imbalance in glucose levels. Hypertensive patients infected with COVID-19 demonstrate enhanced Ang II levels, which are responsible for vasoconstriction, ultimately worsening the hypertensive condition. Therefore, the population infected with SARS-CoV-2 and with comorbid conditions needs to be monitored carefully at the onset of the infection, as the symptoms may worsen with time and lead to severe life-threatening conditions. Toward this, more clinical studies are warranted for an improved mechanistic understanding of the susceptibility of comorbid patients to COVID-19 infection. Further, the comorbid population should be vaccinated on a priority basis compared to people without comorbid conditions. 
Though various studies indicate the importance of comorbid disorders as possible determinants of outcomes in COVID-19-infected individuals, a comprehensive evaluation and clarification of the vulnerability of COVID-19 patients is required. This article is limited with respect to clinical-epidemiological conclusions and lacks critical and statistical analysis of COVID-19 and comorbidities; the reader should be aware of these restrictions in our review.
|
PMC10000014 | Felipe Mendes Delpino,Lílian Munhoz Figueiredo,Ândria Krolow Costa,Ioná Carreno,Luan Nascimento da Silva,Alana Duarte Flores,Milena Afonso Pinheiro,Eloisa Porciúncula da Silva,Gabriela Ávila Marques,Mirelle de Oliveira Saes,Suele Manjourany Silva Duro,Luiz Augusto Facchini,João Ricardo Nickenig Vissoci,Thaynã Ramos Flores,Flávio Fernando Demarco,Cauane Blumenberg,Alexandre Dias Porto Chiavegatto,Inácio Crochemore da Silva,Sandro Rodrigues Batista,Ricardo Alexandre Arcêncio,Bruno Pereira Nunes | Emergency department use and Artificial Intelligence in Pelotas: design and baseline results Uso de serviços de urgência e emergência e Inteligência Artificial em Pelotas: protocolo e resultados iniciais | 10-03-2023 | Machine learning,Chronic diseases,Multimorbidity,Urgent and emergency care,Aprendizado de máquina,Doenças crônicas,Multimorbidade,Urgência e emergência | RESUMO Objetivo: To describe the initial baseline results of a population-based study, as well as a protocol in order to evaluate the performance of different machine learning algorithms with the objective of predicting the demand for urgent and emergency services in a representative sample of adults from the urban area of Pelotas, Southern Brazil. Methods: The study is entitled “Emergency department use and Artificial Intelligence in PELOTAS (RS) (EAI PELOTAS)” (https://wp.ufpel.edu.br/eaipelotas/). Between September and December 2021, a baseline was carried out with participants. A follow-up was planned to be conducted after 12 months in order to assess the use of urgent and emergency services in the last year. Afterwards, machine learning algorithms will be tested to predict the use of urgent and emergency services over one year. Results: In total, 5,722 participants answered the survey, mostly females (66.8%), with an average age of 50.3 years. The mean number of household people was 2.6. Most of the sample has white skin color and incomplete elementary school or less. 
Around 30% of the sample has obesity, 14% diabetes, and 39% hypertension. Conclusion: The present paper presented a protocol describing the steps that were and will be taken to produce a model capable of predicting the demand for urgent and emergency services in one year among residents of Pelotas, in Rio Grande do Sul state. | Emergency department use and Artificial Intelligence in Pelotas: design and baseline results Uso de serviços de urgência e emergência e Inteligência Artificial em Pelotas: protocolo e resultados iniciais
To describe the initial baseline results of a population-based study, as well as a protocol in order to evaluate the performance of different machine learning algorithms with the objective of predicting the demand for urgent and emergency services in a representative sample of adults from the urban area of Pelotas, Southern Brazil.
The study is entitled “Emergency department use and Artificial Intelligence in PELOTAS (RS) (EAI PELOTAS)” (https://wp.ufpel.edu.br/eaipelotas/). Between September and December 2021, a baseline was carried out with participants. A follow-up was planned to be conducted after 12 months in order to assess the use of urgent and emergency services in the last year. Afterwards, machine learning algorithms will be tested to predict the use of urgent and emergency services over one year.
In total, 5,722 participants answered the survey, mostly females (66.8%), with an average age of 50.3 years. The mean number of household people was 2.6. Most of the sample has white skin color and incomplete elementary school or less. Around 30% of the sample has obesity, 14% diabetes, and 39% hypertension.
The present paper presented a protocol describing the steps that were and will be taken to produce a model capable of predicting the demand for urgent and emergency services in one year among residents of Pelotas, in Rio Grande do Sul state.
Chronic diseases affect a large part of the population of adults and older adults, leading these individuals to seek urgent and emergency care. The implementation in 1988 of the Unified Health System (SUS) resulted in a model aimed at prevention and health promotion actions based on collective activities – starting at Basic Health Units (UBS). There is also the National Emergency Care Policy, which advanced the construction of the SUS and has as guidelines universality, integrality, decentralization, and social participation, alongside humanization, the right of every citizen . In a study that evaluated the characteristics of users of primary health care services in a Brazilian urban-representative sample, it was found that the vast majority were women and poorer individuals, in addition to almost 1/4 of the sample receiving the national income distribution program (family allowance) . Brazil is a country highly unequal in socioeconomic terms; approximately 75% of the Brazilian population uses the SUS, depends exclusively on it, and does not have private health insurance . Individuals with multimorbidity are part of the vast majority who seek urgent and emergency services . Multimorbidity is a condition that affects a large part of the population , especially older adults . In addition, the association of multimorbidity with higher demand for emergency services is a challenge to appropriately manage and prevent these problems . Innovative approaches may allow health professionals to provide direct care to individuals who are more likely to seek urgent and emergency services. The use of artificial intelligence can make it possible to identify and monitor a group of individuals with a higher probability of developing multimorbidity. In this context, machine learning (ML), an application of artificial intelligence, is a promising and feasible tool to be used on a large scale to identify these population subgroups. 
Some previous studies have demonstrated that ML models can predict the demand for urgent and emergency services . Besides, a systematic review showed that ML could accurately predict the triage of patients entering emergency care . However, in a search for studies in Brazil, we found no published article on the subject. In Brazil, urgent and emergency services are a fundamental part of the health care network, ensuring timely care in cases of risk to individuals’ lives . Urgent and emergency services are characterized by overcrowding and high demand. In addition, with the current pandemic of COVID-19, updated evidence on the characteristics of the users seeking these services is timely and necessary. The objective of this article was to describe the initial baseline results of a population-based study, as well as a protocol in order to evaluate the performance of different ML algorithms with the objective of predicting the demand for urgent and emergency services in a representative sample of adults from the urban area of Pelotas.
The present cohort study is entitled “Emergency department use and Artificial Intelligence in PELOTAS-RS (EAI PELOTAS)” (https://wp.ufpel.edu.br/eaipelotas/). The baseline was conducted between September and December 2021, and a follow-up was planned to be conducted 12 months later. We utilized the cross-sectional component to measure the prevalence of urgent and emergency care use and the prevalence of multimorbidity, in addition to other variables and instruments of interest. The prospective cohort design intends to estimate the risk of using and reusing urgent and emergency services after 12 months. Contact information, collected to ensure follow-up, included telephone, social networks, and full address. In addition, we also collected the latitude and longitude of households for control of the interviews.
The present study was conducted in adult households in Pelotas, Rio Grande do Sul (RS), Southern Brazil. According to estimates by the Brazilian Institute of Geography and Statistics (IBGE) in 2020, Pelotas had an estimated population of 343,132 individuals (https://cidades.ibge.gov.br/brasil/rs/pelotas/panorama). Figure 1 shows the location of the city of Pelotas in Brazil. Pelotas has a human development index (HDI) of 0.739 and a gross domestic product (GDP) per capita of BRL 27,586.96 (https://www.ibge.gov.br/cidades-e-estados/rs/pelotas.html). The municipality has a Municipal Emergency Room that operates 24 hours a day, seven days a week, and serves about 300 patients a day, according to data provided by the unit.
We included adults aged 18 years or older residing in the urban area of Pelotas. Children and individuals who were mentally unable to answer the questionnaire were not included in the sample.
The sample size was calculated considering three objectives. First, to determine the sample size required to assess the prevalence of urgent and emergency services use, we considered an estimated prevalence of 9%, with ± two percentage points as a margin of error and a 95% confidence level , concluding that 785 individuals would be necessary. Second, for multimorbidity prevalence, an estimated prevalence of 25%, with ± three percentage points as a margin of error and a confidence level of 95% was used , again reaching a total of 785 individuals needed. Finally, for the association calculations, similar studies in Brazil were assessed, and the following parameters were considered: significance level of 95%, power of 80%, exposed/unexposed ratio of 0.1, percentage of the outcome in the unexposed of 20%, and a minimum prevalence ratio of 1.3. With these parameters, 5,104 individuals would be necessary to study the proposed associations. Adding 10 to 20% for losses and/or refusals, the final sample size would be composed of 5,615–5,890 participants. The process to provide a population-based sample was carried out in multiple stages. The city of Pelotas has approximately 550 census tracts, according to the last estimates provided by IBGE in 2019. From there, we randomly selected 100 sectors. Since the sectors vary in size, we defined a proportional number of households for each. Thus, it was estimated that, in total, the 100 sectors had approximately 24,345 eligible households. To interview one resident per household, we divided the total number of households by the sample size required, which resulted in 4.3. Based on this information, we divided the number of households in each of the 100 sectors by 4.3 to reach the necessary number of households for each sector. One resident per household was interviewed, resulting in a total of 5,615 households. If there was more than one eligible resident, the choice was made by a random number generator application. 
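The prevalence-based sample sizes above follow the standard normal-approximation formula n = z²·p(1 − p)/e²; a minimal sketch is below. The paper reports 785 individuals for both objectives, so the small differences in the outputs likely reflect the software defaults or rounding used by the authors.

```python
import math

def sample_size_prevalence(p, margin, z=1.96):
    """n = z^2 * p * (1 - p) / e^2 for estimating a prevalence p
    within an absolute margin of error e at ~95% confidence."""
    return math.ceil(z**2 * p * (1 - p) / margin**2)

# Urgent/emergency service use: 9% prevalence, +/- 2 percentage points
print(sample_size_prevalence(0.09, 0.02))  # -> 787

# Multimorbidity: 25% prevalence, +/- 3 percentage points
print(sample_size_prevalence(0.25, 0.03))  # -> 801
```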
Residents were placed in order, a number was assigned to each one, and one of them was selected according to the result of the draw. The first household interviewed in each sector was selected through a draw, considering the selected jump (4.3 households). Trades and empty squares were considered ineligible, and thus the next square was chosen. Due to a large number of empty houses, it was necessary to select another 50 sectors to complete the required sample size. The additional households were drawn according to the same methodological criteria as the first draw to ensure equiprobability.
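The household draw described above (a random start, then every 4.3rd household within a sector) is systematic sampling with a fractional skip. A hedged sketch of that logic follows; the sector size and seed are purely illustrative, not the study's actual values.

```python
import random

def systematic_sample(n_households, skip=4.3, seed=42):
    """Pick households by walking through a sector with a fractional
    skip: random start within the first interval, then advance by
    `skip`, taking the household at each (floored) position."""
    rng = random.Random(seed)
    pos = rng.uniform(0, skip)  # random starting point
    selected = []
    while pos < n_households:
        selected.append(int(pos))  # floor to a household index
        pos += skip
    return selected

# An illustrative sector with 243 eligible households yields ~243/4.3 picks.
print(len(systematic_sample(243)))
```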
We collected the data with the Research Electronic Data Capture (REDCap), a data collection program using smartphones . Experienced and trained research assistants collected the data. The questionnaire from EAI PELOTAS was prepared, when possible, based on standardized instruments, including questions about chronic diseases, physical activity, food security, use of urgent and emergency services, functional disability, frailty syndrome, self-perception of health, COVID-19, in addition to sociodemographic and behavioral questions. Supplementary Table 1 shows the instruments utilized in the present study.
The use of urgent and emergency services was assessed on a baseline using the following question: “In the last 12 months, how many times have you sought urgent and emergency services, such as an emergency room?”. This was followed by the characterization of the service used, city of service, frequency of use, and referral after use. One year after the study baseline, we will contact again the respondents to inquire about the use of urgent and emergency care services (number of times and type of service used).
We assessed multimorbidity as the main exposure using a list of 22 chronic diseases and others (asthma/bronchitis, osteoporosis, arthritis/arthrosis/rheumatism, hypertension, diabetes, cardiac insufficiency, pulmonary emphysema/chronic obstructive pulmonary disease, acute kidney failure, Parkinson’s disease, prostate disease, hypo/hyperthyroidism, glaucoma, cataract, Alzheimer’s disease, urinary/fecal incontinence, angina, stroke, dyslipidemia, epileptic fit/seizures, depression, gastric ulcer, urinary infection, pneumonia, and the flu). The association with urgent and emergency services will be performed with different cutoff points, including total number, ≥2, ≥3, and combinations of morbidities. We will also perform network analyses to assess the pattern of morbidities. Other independent variables were selected from previous studies in the literature , including demographic and socioeconomic information, behavioral characteristics, health status, and access, use, and quality of health services.
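Operationalizing the cutoffs above is a simple count over per-participant disease indicators; a sketch with invented records (the real list contains the conditions named above):

```python
# Illustrative self-report records: 1 = reported diagnosis, 0 = not reported.
participants = [
    {"hypertension": 1, "diabetes": 1, "asthma": 0, "depression": 0},
    {"hypertension": 1, "diabetes": 0, "asthma": 0, "depression": 0},
    {"hypertension": 1, "diabetes": 1, "asthma": 1, "depression": 1},
]

counts = [sum(p.values()) for p in participants]  # total morbidities per person
multimorbidity2 = [c >= 2 for c in counts]        # >= 2 cutoff
multimorbidity3 = [c >= 3 for c in counts]        # >= 3 cutoff

print(counts)           # [2, 1, 4]
print(multimorbidity2)  # [True, False, True]
print(multimorbidity3)  # [False, False, True]
```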
We will test artificial intelligence (ML) algorithms to predict the use of urgent and emergency services after 12 months. The purpose of ML is to predict health outcomes through the basic characteristics of the individuals, such as sex, education, and lifestyle. The algorithms will be trained to predict the occurrence of health outcomes, which will contribute to decision-making. With a good amount of data and the right algorithms, ML may be able to predict health outcomes with satisfactory performance. The area of ML in healthcare has shown rapid growth in recent years, having been used in significant public health problems such as diagnosing diseases and predicting the risk of adverse health events and deaths . The use of predictive algorithms aims to improve health care and support decision-making by health professionals and managers. For the present study, individuals’ baseline characteristics will be used to train popular ML algorithms such as Support Vector Machine (SVM), Neural Networks (ANNs), Random Forests, Penalized Regressions, Gradient Boosted Trees, and Extreme Gradient Boosting (XGBoost). These models were chosen based on a previous review in which the authors identified the most used models in healthcare studies . We will use the Python programming language to perform the analyses. To test the predictive performance of the algorithms on new, unseen data, individuals will be divided into training (70% of participants, which will be used to define the parameters and hyperparameters of each algorithm) and testing (30%, which will be used to test the predictive ability of the models on new data). 
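A minimal sketch of the planned Python workflow (70/30 split, one of the named models) on synthetic stand-in data; in the real analysis the predictors would be the baseline variables and the outcome the 12-month use of urgent and emergency services:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for baseline predictors and the binary outcome.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# 70% to fit/tune the model, 30% held out to test it on unseen data.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Predictive performance on the held-out 30%.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"held-out AUC: {auc:.2f}")
```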
We will also perform all the preliminary steps to ensure a good performance of the algorithms, especially those related to the pre-processing of predictor variables, such as the standardization of continuous variables, separation of categorical predictors with one-hot encoding, exclusion of strongly correlated variables, dimension reduction using principal component analysis, and selection of hyperparameters with 10-fold cross-validation. Different metrics will evaluate the predictive capacity of the models, the main one being the area under the receiver operating characteristic (ROC) curve (AUC). In a simplified way, the AUC is a value that varies from 0 to 1, and the closer to 1, the better the model’s predictive capacity . The other metrics will be F1-score, sensitivity, specificity, and accuracy. As measures of model fit, we will perform hyperparameter tuning and class balancing, as well as K-fold cross-validation.
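The pre-processing steps listed above map naturally onto a scikit-learn pipeline; a sketch with invented column names, combining standardization, one-hot encoding, PCA, and 10-fold cross-validated hyperparameter selection scored by AUC:

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Illustrative baseline data: one continuous and one categorical predictor.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "age": rng.normal(50, 15, 400),
    "sex": rng.choice(["female", "male"], 400),
})
y = (rng.random(400) < 0.2).astype(int)  # invented binary outcome

pre = ColumnTransformer(
    [
        ("num", StandardScaler(), ["age"]),                        # standardize continuous
        ("cat", OneHotEncoder(handle_unknown="ignore"), ["sex"]),  # one-hot categorical
    ],
    sparse_threshold=0,  # keep output dense so PCA can consume it
)

pipe = Pipeline([
    ("pre", pre),
    ("pca", PCA(n_components=2)),   # dimension reduction
    ("clf", LogisticRegression()),  # penalized (L2) regression
])

# Hyperparameter selection with 10-fold cross-validation, scored by AUC.
search = GridSearchCV(pipe, {"clf__C": [0.1, 1.0, 10.0]}, cv=10, scoring="roc_auc")
search.fit(df, y)
print(search.best_params_)
```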
The current pandemic, caused by the SARS-CoV-2 virus, has brought uncertainty to the world population. Although vaccination coverage is already high in large parts of the population, the arrival of new variants and the lack of other essential measures to face the pandemic still create uncertainty about the effects of the pandemic on people. General questions about symptoms, tests, and possible effects caused by coronavirus contamination were included in our baseline survey. We will also use SARS-CoV-2-related questions to evaluate the performance of ML algorithms. In September 2021, restrictive measures were relaxed due to a decrease in COVID-19 cases in Pelotas, allowing the study to begin. A vaccination passport was required from the interviewers to ensure the safety of both participants and interviewers. In addition, all interviewers received protective equipment against COVID-19, including masks, face shields, and alcohol gel. Finally, the interviewers were instructed to conduct the research in an open and airy area, ensuring the protection of the participants.
The activities to allow for control and data quality were characterized by a series of measures aimed at ensuring results without the risk of bias. Initially, we developed a research protocol, followed by an instruction manual for each interviewer. Thereafter, interviewers were trained and standardized in all necessary aspects. REDCap was also important to guarantee the control and quality of responses, as the questions were designed using validation checks according to what was expected for each answer. Another measure that ensured the control of interviews was the collection of latitude and longitude of households, which were plotted weekly on maps by two members of the study coordination, to ensure that the data collection was performed according to the study sample. With latitude and longitude data, it is also intended to carry out spatial analysis articles with techniques such as scan statistics and kernel density estimation. The database of the questions was checked daily to find possible inconsistencies. Finally, two members of the study coordination made random phone calls to 10% of the sample, in which a reduced questionnaire was applied, with the objective of comparing the answers with the main questionnaire.
We carried out this study using free and informed consent, as determined by the ethical aspects of Resolution No. 466/2012 of the National Council of the Ministry of Health and the Code of Ethics for Nursing Professionals, of the duties in Chapter IV, Article 35, 36 and 37, and the prohibitions in chapter V, article 53 and 54. After identifying and selecting the study participants, they were informed about the research objectives and signed the Informed Consent Form (ICF). The project was referred to the Research Ethics Committee via the Brazilian platform and approved under the CAAE 39096720.0.0000.5317.
Initially, we conducted a stage for the preparation of an electronic questionnaire at the beginning of 2021. In February 2021, we initiated data collection after preparing the online questionnaire. The database verification and cleaning steps occurred simultaneously with the collection, and continued until March 2022. After this step, data analysis and writing of scientific articles began.
Of approximately 15,526 households approached, 8,196 were excluded — 4,761 residents were absent at the visit, 1,735 were ineligible, and 1,700 were empty (see Figure 2). We identified 7,330 eligible participants, of which 1,607 refused to participate in the study, totaling 5,722 residents. Comparing the female gender percentage of the refusals with the completed interviews, we observed a slightly lower prevalence of 63.2% (95%CI 60.7–65.5) among the refusals, against 66.8% (95%CI 65.6–68.0) among the complete interviews. The mean age was similar between participants who agreed to participate (50.3; 95%CI 49.9–50.8) and those who refused (50.4; 95%CI 49.0–51.9). To evaluate the first descriptive results of our sample, we compared our results with the 2019 Brazilian National Health Survey (PNS) database. The PNS 2019 was collected by the IBGE in partnership with the Ministry of Health. The data are in the public domain and are available on the IBGE website (https://www.ibge.gov.br/). To ensure the greatest possible comparability between studies, we used only residents of the urban area of the state of Rio Grande do Sul aged 18 years or older, using the svy command from Stata, resulting in 3,002 individuals (residents selected to interview). We developed two models to compare our data with the PNS 2019 survey: Crude model (crude results from the EAI PELOTAS study, without considering survey design estimates); Model 1 using survey design: primary sampling units (PSUs) using census tracts as variables and post-weight variables based on estimates of the Pelotas population projection for 2020 (Table 1). We evaluated another model using individual sampling weights (i.e., the inverse of the probability of being interviewed in each census tract). These models are virtually equal to the above estimates (data not shown). The mean age of our sample was 50.3 years (Table 1), 46.2 for model 1, which was similar to PNS 2019 (46.7 years). 
Our weighted estimates presented a similar proportion of females compared to the PNS 2019 sample. The proportions of skin colors were similar in all categories and models. Our crude model presented a higher proportion of participants with incomplete elementary school or less compared to model 1 and PNS 2019. Table 2 describes the prevalence of chronic diseases and lifestyle factors in our study and the PNS 2019 sample. Our prevalence of diabetes was higher in the crude model compared to weighted estimates and PNS 2019 sample. In both models, we had a higher proportion of individuals with obesity and hypertension than in PNS 2019. Asthma and/or bronchitis presented similar proportions in our results compared to PNS 2019; the same occurred for cancer. Our study presented a higher proportion of smoking participants in both models than in the PNS 2019 sample.
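The post-stratification logic behind the weighted model (reweighting sample cells to the target population's sex-age distribution) can be sketched as follows; all counts and population shares below are invented for illustration, not the study's actual data.

```python
# cell -> (n interviewed, n with the condition); invented numbers
sample = {
    ("female", "18-59"): (2500, 700),
    ("female", "60+"):   (1300, 550),
    ("male",   "18-59"): (1200, 300),
    ("male",   "60+"):   (700,  280),
}
# Assumed target-population shares (in the study these would come
# from IBGE population projections for Pelotas).
population_share = {
    ("female", "18-59"): 0.38,
    ("female", "60+"):   0.14,
    ("male",   "18-59"): 0.37,
    ("male",   "60+"):   0.11,
}

crude = sum(k for _, k in sample.values()) / sum(n for n, _ in sample.values())
weighted = sum(share * sample[cell][1] / sample[cell][0]
               for cell, share in population_share.items())

print(f"crude prevalence: {crude:.3f}")           # reflects the skewed sample
print(f"post-stratified prevalence: {weighted:.3f}")
```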
We described the initial descriptive results, methodology, protocol, and the steps required to perform the ML analysis for predicting the use of urgent and emergency services among the residents of Pelotas, Southern Brazil. We expect to provide support to health professionals and managers for decision-making, helping to identify interventions targeted at patients more likely to use urgent and emergency services, as well as those more likely to develop multimorbidity and to die. We also expect to help health systems optimize their space and resources by directing human and physical capital to those at greater risk of developing multiple chronic diseases and dying. Recent studies in developed countries have found this a feasible challenge with ML . If our study presents satisfactory results, we intend to test its practical applicability and acceptance to assist health professionals and managers in decision-making in emergency services among residents of Pelotas. The baseline and methods used to select households resemble the main population-based studies conducted in Brazil, such as the Brazilian Longitudinal Study of Aging (ELSI-Brazil) , the EPICOVID , and the PNS. The applicability of ML requires suitable predictive variables. Our study included sociodemographic and behavioral variables related to urgent and emergency services, and chronic diseases. The EAI PELOTAS study also includes essential topics that deserve particular importance during the COVID-19 pandemic, such as food insecurity, decreased income, physical activity, access to health services, and social support. We also presented one weighting option in order to obtain sample estimates considering the complex study design. Each estimation strategy has its strengths and limitations. Each research question answered through this study may consider these possibilities and choose the most suitable one. 
The estimates were similar between the unweighted analysis and the analyses considering the primary sampling unit (PSU) and sampling weight. Using the census tract as the PSU is fundamental to account for the sampling design in the estimates of variability (standard error, variance, 95%CI, among others). In addition, due to the possible selection bias in the sample, which contains more women and older people than expected, a post-stratification weighting strategy becomes necessary to obtain estimates adjusted for the sex and age distributions of the target population (due to the lack of census data, we used population projections). However, it should be noted that this strategy can produce estimates simulating the expected distribution only by sex and age. Moreover, we do not know how much this strategy may distort the estimates, since the demographic adjustment cannot correct for all sample characteristics, especially for non-measured variables that may have influenced the selection of participants. Thus, we recommend defining the use of each strategy on a case-by-case basis, depending on the objective of the scientific product. Finally, we suggest reporting the different estimates according to the sample design for specific outcomes (e.g., the prevalence of a specific condition) that aim to extrapolate the data to the target population (adults of the city of Pelotas). In conclusion, the present article presented a protocol describing the steps that were and will be taken to produce a model capable of predicting the demand for urgent and emergency services in one year among residents of Pelotas (RS), Southern Brazil.
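The post-stratification adjustment by sex and age described above can be sketched numerically. In the minimal example below, the population counts per (sex, age group) cell stand in for the population projections and the "sample" over-represents women and older people; all cell definitions and counts are made up for illustration, not the study's data.

```python
from collections import Counter

# Minimal post-stratification sketch with MADE-UP numbers: target-population
# counts by (sex, age group) stand in for the population projections, and the
# "sample" over-represents women and older people, as described in the text.
population = {
    ("F", "18-59"): 60000, ("F", "60+"): 25000,
    ("M", "18-59"): 55000, ("M", "60+"): 20000,
}
sample = [  # (sex, age group, has_condition) per respondent -- illustrative only
    ("F", "18-59", 1), ("F", "18-59", 0), ("F", "60+", 1),
    ("M", "18-59", 0), ("M", "60+", 1), ("M", "60+", 0),
]

counts = Counter((s, a) for s, a, _ in sample)   # respondents per post-stratum
n = len(sample)
total_pop = sum(population.values())

# Weight = population share of the cell / sample share of the cell
weight = {cell: (population[cell] / total_pop) / (counts[cell] / n)
          for cell in counts}

# Weighted prevalence of the condition in the target population
num = sum(weight[(s, a)] * y for s, a, y in sample)
den = sum(weight[(s, a)] for s, a, _ in sample)   # sums to n by construction
prevalence = num / den
print(round(prevalence, 3))
```

Cells over-represented in the sample relative to the projections receive weights below 1 and under-represented cells receive weights above 1, which is exactly the adjustment the text describes; variables not used in the cells remain unadjusted.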
|
PMC10000015 | Dib Sassine,Daniella Rogerson,Matei Banu,Patrick Reid,Caryn St. Clair | Clear Cell Ovarian Carcinoma With C1 Lateral Mass Metastasis and Pathologic Fracture: A Case Report | 08-02-2023 | ovarian clear cell carcinoma,vertebral fusion,pathological fracture,osseous metastasis,ovarian carcinoma | Osseous metastasis (OM) in ovarian cancer (OC) are rare, with an incidence ranging from 0.8% to 2.6%, and are associated with poor prognosis. The available literature on their management and associated complications is scarce. We report a case of International Federation of Gynecology and Obstetrics (FIGO) stage IVB clear cell epithelial OC (EOC) who presented with neck pain. Imaging revealed multiple cervical spine metastases with left vertebral artery encasement and concurrent C1 lateral mass compression fracture, without neurological deficit, requiring occiput to C2 posterior instrumentation and fusion. Early OM may be associated with shorter overall survival, and survival after OM diagnosis is on the order of months. Management of OM should include a multidisciplinary team and may require surgical stabilization in addition to systemic chemotherapy, local radiotherapy, and osteoclast inhibitors. | Clear Cell Ovarian Carcinoma With C1 Lateral Mass Metastasis and Pathologic Fracture: A Case Report
Osseous metastasis (OM) in ovarian cancer (OC) are rare, with an incidence ranging from 0.8% to 2.6%, and are associated with poor prognosis. The available literature on their management and associated complications is scarce. We report a case of International Federation of Gynecology and Obstetrics (FIGO) stage IVB clear cell epithelial OC (EOC) who presented with neck pain. Imaging revealed multiple cervical spine metastases with left vertebral artery encasement and concurrent C1 lateral mass compression fracture, without neurological deficit, requiring occiput to C2 posterior instrumentation and fusion. Early OM may be associated with shorter overall survival, and survival after OM diagnosis is on the order of months. Management of OM should include a multidisciplinary team and may require surgical stabilization in addition to systemic chemotherapy, local radiotherapy, and osteoclast inhibitors.
Ovarian cancer (OC) is the most common cause of gynecological cancer death [1]. Direct spread to adjacent pelvic organs, peritoneal spread, and metastasis via the lymphatic route are common, while hematogenous spread to further sites such as the liver, lung, bone and brain occurs less commonly and carries a poor prognosis [2-4]. Osseous metastases (OM) in OC are among the rarest, with incidences between 0.82% and 4.0%, and may occur by direct invasion, hematogenous, lymphatic, and transperitoneal spread [2, 3, 5, 6, 7]. The vertebral column is the most common site, though case reports document findings throughout the axial skeleton and pelvis [5]. OMs are associated with shorter overall survival [4]. A recent study found the 1-, 3- and 5-year survival rates after OM diagnosis were 33%, 15%, and 8%, respectively [8]. Skeletal-related events (SREs) and neurovascular compromise secondary to OM have rarely been described. Here, we report a case of International Federation of Gynecology and Obstetrics (FIGO) stage IVB clear cell epithelial ovarian cancer (EOC) complicated by C1 vertebral metastases with vertebral artery involvement and C1 vertebral fracture, necessitating occiput to C2 posterior instrumentation and fusion and palliative C2 neurectomy.
A 59-year-old African American P0 (nulliparous) with a history of uterine leiomyoma presented with abdominal pain and bloating. A transabdominal sonogram demonstrated multiple large, complex adnexal masses. Staging computed tomography (CT) of the chest, abdomen and pelvis was suspicious for metastatic ovarian carcinoma, with bilateral complex adnexal masses, peritoneal carcinomatosis, omental caking, and pericardiac, abdominal, peripancreatic, mesenteric, retroperitoneal, and pelvic lymphadenopathy. No osseous lesions were identified at the time of the initial scan. An initial consult with gynecological oncology revealed an abdominal mass above the umbilicus, pelvic masses, and a palpable left supraclavicular node. CA-125 was 6,773. Pathological examination of interventional radiology-guided peritoneal mass biopsies showed an immunoprofile compatible with a high-grade adenocarcinoma of Mullerian origin, favoring clear cell carcinoma. The tumor showed preserved mismatch repair (MMR) protein expression (positive for MLH1, PMS2, MSH2, MSH6). Immunostaining for human epidermal growth factor receptor 2 (HER-2) was equivocal and difficult to interpret on the scant cell block material, with mostly cytoplasmic staining and no convincing membranous staining (score 0-1). Immunostaining for PD-L1 was likewise limited by scant cellularity in the cell block, showing focal positivity with an estimated combined positive score (CPS) of 3-4. Further genetic testing did not show any mutations amenable to targeted therapy. The patient was diagnosed with FIGO stage IVB clear cell EOC. She received her first cycle of carboplatin, paclitaxel, and bevacizumab, with a plan for 3-4 cycles of neoadjuvant chemotherapy and interval debulking pending clinical response. After her first cycle of chemotherapy, the patient endorsed persistent neck pain of moderate severity.
Magnetic resonance imaging (MRI) of the cervical spine revealed contrast-enhancing osseous lesions within the anterior arch and left lateral mass of C1, as well as in the C4 and C7 vertebral bodies, left articulating facet of C6 and T4 vertebral body extending into the left pedicle. There was lateral tumor extension with encasement of the vertebral artery at the C1 level. There was no evidence of epidural disease or spinal cord impingement (Figure 1A-1C). CT angiogram (CTA) demonstrated multiple osseous lytic lesions and a C1 lateral mass compression fracture extending to the left transverse foramen, with asymmetry of the lateral atlantodental interval measuring 8 mm on the right and 2 mm on the left. There was circumferential tumor encasement of the left vertebral artery in the sulcus arteriosus, with severe narrowing but preserved flow and no evidence of dissection (Figure 1D). Metastatic cervical and supraclavicular lymphadenopathy were confirmed. The total body bone scan did not show further OM. The patient was admitted to neurosurgery with gynecological oncology consultation, where she continued to endorse left-sided cervical pain radiating to the occiput which worsened with movement and improved with opioids, steroids and immobilization with a hard collar. There were no focal neurological deficits, paresthesias, anesthesia, gait difficulties, or bowel or bladder dysfunction. The patient had full strength in the upper and lower extremities and an intact gait. She had no signs of myelopathy and had a negative Hoffman’s sign. Given normal cervical alignment, with imaging demonstrating articular facet involvement and lateral mass fracture, this pain was considered to be caused by cervical spine instability (Figure 2A). Urgent surgery was not indicated without neurological deficits or compressive pathology.
A multidisciplinary team discussed the case to determine if surgical decompression was required prior to local radiation, and surgical intervention with occiput to C2 instrumentation and fusion was decided upon. Surgery was performed two days after diagnosis. Intraoperatively, there was extensive tumor involvement of the C1 lateral mass and posterior arch, encasing the vertebral artery and extending towards the occipital condyle. A palliative C2 neurectomy was performed for pain control. A right C1 lateral mass screw, bilateral C2 pedicle screws, and occipital keel plate with three bicortical screws were placed. The fusion bed was prepared by decorticating the occiput and bilateral C1-2 joint spaces with the placement of allograft over the decorticated spaces. Intraoperative monitoring was stable throughout and the patient awoke at neurological baseline. Postoperatively, the patient endorsed significant improvement of her cervical pain. Incisional pain was controlled with methocarbamol, gabapentin and hydromorphone PCA (patient-controlled analgesia). Radiographs demonstrated excellent instrumentation placement and alignment (Figure 2B). The patient ambulated without assistance on postoperative day (POD) 1 and was discharged in stable condition on POD3. Radiographs of the spine confirmed stable spinal fusion rods three weeks postoperatively (Figure 2C). There was complete resolution of the cervical pain. The patient received her second cycle of carboplatin and paclitaxel four weeks postoperatively; at that time, progression of cervical lymphadenopathy was noted on exam. She has since received two additional cycles three and six weeks later, with a downtrending CA-125 of 2,301. Bevacizumab has been held to allow for surgical healing. Denosumab injection of 120 mg every 4 weeks, calcium, and vitamin D were initiated postoperatively, with a plan for palliative radiation therapy (RT) of 20 Gy in 5 fractions to C1-2 and associated hardware to prevent disease progression.
This report illustrates a unique complication of OM in EOC. Given the rare nature of these lesions, the literature is sparse. Four retrospective studies comment on OM in the context of other rare OC metastases [4, 9, 10, 11]. In these, the incidence of OM ranged between 1.2% and 4%. Deng et al. [4] analyzed 1,481 patients with stage 4 ovarian carcinoma using the Surveillance, Epidemiology, and End Results (SEER) database. OM was found in 3.74% of the entire cohort with a median survival time of 11 months for the patients with OM. Dauplat et al. [9] examined 255 patients with EOC with stage 4 disease. Only four patients (1.6%) had OM, with a median survival of 9 months. Gardner et al. [10] used the SEER database to determine the pattern of the distant metastases at the initial presentation in patients with gynecological cancer and found that OM was present in 4% of the patients with ovarian carcinoma, vs 13% and 23% of patients with uterine and cervical cancer, respectively. Additionally, four retrospective studies looked specifically at OM in OC [2, 5, 6, 7] (Table 1). Ak et al. [2] examined 736 OC patients and found OM in 2.6%; OM was mostly seen with clear cell histology, similar to our case report. The vertebrae, as in our patient, were most commonly involved, though patients often had multiple sites of involvement. Pain was the most common presenting symptom, but was absent in over 50%. Unlike our case, one of the two patients presenting with pathologic fractures had a neurologic deficit. Patients were managed with palliative RT and bisphosphonates. Only advanced, inoperable disease at presentation was associated with a shorter time to development of OM. Median overall survival in OC patients with OM was 38.1 months and median survival after OM diagnosis was 13.6 months. Patients who had OM at the time of diagnosis of their OC had a shorter median OS than those who developed OM later on, 6.1 vs 63 months, respectively.
Of note, although not statistically significant, overall survival after OM was shorter in patients with clear cell histology than in patients with a different histologic subtype (7 vs 22 months) [2]. Sehouli et al. [5] examined 1,717 patients and found OM in 1.5%. The vertebral column was the most common site, most frequently in the lumbar, followed by the thoracic and cervical regions. Multiple OM were seen in the majority of patients, as in our case. Pain was the presenting symptom in 66%; 9% had impaired mobility, unlike our patient, and 4% had neurologic symptoms. Pathologic fractures were reported in 33%. Most patients were treated with bisphosphonates; of those treated, 26% went on to develop pathological fractures and required surgical intervention. RT of OM was performed in a minority of cases for pain control and for SRE prevention. OM was noted to progress in 75% of cases despite systemic and local therapy. The median overall survival of the entire cohort was 50.5 months; patients with early OM had significantly shorter overall survival than those with later OM, 11.2 vs 78.4 months; of note, the overall survival rates were calculated regardless of the histology of the disease. In by far the largest study on this topic, Zhang, C. et al. [6] examined 32,178 OC patients from the SEER database and found the prevalence of OM was 1.09%. OM were more common in women over 65, Black women, and unmarried women. Similar to our patient, most of those patients had an advanced stage, poorly differentiated grade, non-serous type, elevated CA-125 and concurrent distant metastasis. The median overall survival for the entire cohort was 50 months, whereas the median overall survival after OM diagnosis was only 5 months. Among examined variables, only surgery at the primary site was associated with significantly longer survival, 18 vs 3 months for primary site surgery versus no surgery.
Survival was significantly shorter in patients with a non-serous histology, without specific comment on clear cell carcinoma [6]. Lastly, Zhang, M. et al. [7] found OM in 0.82% of 2,189 OC cases. The majority of OM were vertebral (12 cervical, 10 lumbar, 7 thoracic); 8 were pelvic, 5 were limb, 2 were sternal, and 1 was rib. Over half presented with pain, 35% with difficulty walking, and 15% were asymptomatic. Similar to our case report, the majority of cases with OM were in advanced disease and EOC, and most of them were managed with chemotherapy and RT. Cases that were managed with combined chemotherapy and RT had significantly longer survival than those treated with either agent alone, 14.2 versus 11 versus 8.4 months for combined therapy, RT alone, and chemotherapy alone, respectively [7]. There is a paucity of data regarding the management of OM with or without SRE in OC, and the current management of lesions is based on that for other solid tumors. Diagnosis includes history and physical examination; imaging options include radiographs, CT and MRI, with MRI being the modality of choice for vertebral lesions. A multidisciplinary team approach is often needed for the management of such rare cases, including radiology, orthopedic surgery, neurosurgery, radiation oncology, gynecological oncology, palliative and pain medicine [12]. All of these modalities were utilized as illustrated in this case.
In conclusion, we present a unique case of clear cell EOC with vertebral OM resulting in pathologic C1 fracture requiring surgical stabilization, further complicated by vertebral artery encasement and narrowing. OM secondary to OC are rare and often present with pain though rarely with neurologic deficit. The risk factors for the development of OM are poorly understood but may include clear cell EOC as in this case, and lesions are more commonly described in those with late-stage disease. Patients who present with OM at the time of diagnosis or early in their disease course may have shorter overall survival than those with later OM. However, survival after OM diagnosis is on the order of months. Surgery at the primary site and combination chemotherapy and RT may prolong survival. These findings are based on limited retrospective studies, and further examination of risk factors and prognostic implications of OM in OC is needed. |
|
PMC10000017 | Marília Cruz Guttier,Marysabel Pinto Telis Silveira,Noemia Urruth Leão Tavares,Matheus Carrett Krause,Renata Moraes Bielemann,Maria Cristina Gonzalez,Elaine Tomasi,Flavio Fernando Demarco,Andréa Dâmaso Bertoldi | Difficulties in the use of medications by elderly people followed up in a cohort study in Southern Brazil Dificuldades no uso de medicamentos por idosos acompanhados em uma coorte do Sul do Brasil | 10-03-2023 | Old age assistance,Elderly,Cohort studies,Drug utilization,Assistência a idosos,Idoso,Estudos de coortes,Uso de medicamentos | ABSTRACT Objective: This study aimed to assess the need for help by elderly people to take their medications, the difficulties related to this activity, the frequency of forgotten doses, and factors associated. Methods: Cross-sectional study conducted with a cohort of elderly people (60 years and over — “COMO VAI?” [How do you do?] study), where the need for help to properly take medication and the difficulties faced in using them were evaluated. The Poisson regression model was used to estimate the crude and adjusted prevalence ratios (PR) of the outcomes and respective 95% confidence intervals according to the characteristics of the sample. Results: In total, 1,161 elderly people were followed up. The prevalence of participants who reported requiring help with medication was 15.5% (95%CI 13.5–17.8), and the oldest subjects, with lower educational levels, in worse economic situations, on four or more medications and in bad self-rated health were the ones who needed help the most. Continuous use of medication was reported by 83.0% (95%CI 80.7–85.1) of the sample and most participants (74.9%; 95%CI 72.0–77.5) never forgot to take their medications. Conclusion: The need for help to use medications was shown to be influenced by social and economic determinants. 
Studies assessing the difficulties in medication use by the elderly are important to support policies and practices to improve adherence to treatment and the rational use of medications. | Difficulties in the use of medications by elderly people followed up in a cohort study in Southern Brazil Dificuldades no uso de medicamentos por idosos acompanhados em uma coorte do Sul do Brasil
This study aimed to assess the need for help by elderly people to take their medications, the difficulties related to this activity, the frequency of forgotten doses, and factors associated.
Cross-sectional study conducted with a cohort of elderly people (60 years and over — “COMO VAI?” [How do you do?] study), where the need for help to properly take medication and the difficulties faced in using them were evaluated. The Poisson regression model was used to estimate the crude and adjusted prevalence ratios (PR) of the outcomes and respective 95% confidence intervals according to the characteristics of the sample.
In total, 1,161 elderly people were followed up. The prevalence of participants who reported requiring help with medication was 15.5% (95%CI 13.5–17.8), and the oldest subjects, with lower educational levels, in worse economic situations, on four or more medications and in bad self-rated health were the ones who needed help the most. Continuous use of medication was reported by 83.0% (95%CI 80.7–85.1) of the sample and most participants (74.9%; 95%CI 72.0–77.5) never forgot to take their medications.
The need for help to use medications was shown to be influenced by social and economic determinants. Studies assessing the difficulties in medication use by the elderly are important to support policies and practices to improve adherence to treatment and the rational use of medications.
Aging with quality of life is an important challenge when it comes to the care for the elderly. Policies have been developed seeking to promote health and aging with autonomy in the elderly population, as well as to help caregivers, as there is an increase in the incidence of chronic diseases and, consequently, the need for medication to treat them. In Brazil, the prevalence of use of at least one continuous-use medication among the elderly ranges from 80 to 93%. In Italy, this prevalence was similar (88%). Elderly people make use of multiple medications and are exposed to complex therapeutic regimes, which can be unfavorable to treatment effectiveness. Even when the barriers to accessing health services and medication have been overcome and the elderly have their drug treatment in hand, other difficulties remain. Decline in cognitive status, need for greater attention, loss of visual acuity and loss of ability to handle medication packages, as well as difficulties related to memory and to time organization and management, can also be complicating factors for the correct use of medication. In a cross-sectional study carried out in the city of Marília (SP), 59.8% of the elderly reported difficulties related to the use of medications, with forgetfulness being cited by a quarter of them. A study carried out in Sweden pointed out that the majority (66.3%) of the elderly population had some limitation in the ability to manage their treatments. Another difficulty cited by the elderly was the lack of belief in their efficacy. The main result of these difficulties is the lack of adherence to treatment, but they also contribute to errors in medication administration, leading to unsatisfactory clinical results, adverse reactions and drug interactions.
Bearing in mind that the literature addresses these difficulties within adherence assessment scores or in studies assessing the instrumental activities of daily living (IADL) of the elderly, this study aimed to assess the need for help by the elderly to take medications, as well as the difficulties faced when using them and the frequency of doses skipped or forgotten, once barriers to accessing health services and acquiring medication have been overcome. Furthermore, the purpose was to evaluate factors associated with the need for help when taking medication at the correct dose and time.
Cross-sectional study with a cohort of elderly people, conducted in the urban area of the city of Pelotas, state of Rio Grande do Sul, Brazil (approximately 340,000 inhabitants in 2016). According to the Brazilian Institute of Geography and Statistics (IBGE), in 2010, 93% of the population of Pelotas lived in urban areas and approximately 50,000 were aged 60 years or older. The sample recruitment and the first visit of the study called “COMO VAI?” (“How do you do?”) took place from January to August 2014. In total, 1,451 non-institutionalized elderly aged 60 years or older were included. The sampling process was carried out in two stages. Initially, clusters were selected using data from the 2010 Census, with census tracts being selected by lot. In the second stage, listed and systematically drawn households were selected—31 per sector—to enable the identification of at least 12 elderly people in each of them. The second follow-up took place between November 2016 and April 2017, by telephone interviews; household visits were made in cases where telephone contact was not possible. Calls were made on different days and times, and participants not contacted by telephone had at least four visit attempts at the addresses made available to the study. The understanding of the questions was tested in a pilot study applied in face-to-face and telephone interviews. Demographic, socioeconomic, and behavioral characteristics were the independent variables, selected based on studies assessing adherence to treatment and IADL.
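The two-stage selection described above (census tracts drawn first, then a fixed quota of systematically spaced households per tract) can be illustrated on a synthetic sampling frame. Tract sizes, identifiers, and the number of tracts drawn below are invented; only the 31-household quota and the two-stage logic mirror the text.

```python
import random

# Illustrative two-stage draw on a SYNTHETIC frame: census tracts are selected
# first, then 31 households per tract by systematic sampling. Tract sizes and
# identifiers are invented; only the two-stage logic mirrors the text.
random.seed(7)
tracts = {t: list(range(200 + 5 * t)) for t in range(50)}  # tract -> household ids

selected_tracts = random.sample(sorted(tracts), 10)        # stage 1: cluster draw

households = {}
for t in selected_tracts:                                  # stage 2: systematic draw
    frame = tracts[t]
    k = len(frame) / 31                                    # sampling interval
    start = random.random() * k                            # random start in [0, k)
    households[t] = [frame[int(start + i * k)] for i in range(31)]

print(sum(len(h) for h in households.values()))            # 10 tracts x 31 households
```

Systematic selection after a random start spreads the chosen households evenly across each tract's listing, which is why it is a common stage-two design in household surveys of this kind.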
The following characteristics were collected in the first interview, to assist in the description of the sample: biological sex (male, female); age (60–69, 70–79, ≥80 years); skin color (self-reported, using the following categories: white, black, brown, yellow and indigenous, with the elderly self-declared as brown, yellow and indigenous grouped under the “mixed” category, due to the low frequency); education, defined as the highest level of education achieved in years of study (later categorized as none, <8 and ≥8 years); marital status (married/with a partner, single/divorced/widowed—considered “no partner”); economic situation (A/B — richer; C, D/E — poorer), according to the criteria of the Brazilian Association of Companies and Research. Behavioral and health variables were also considered, given their importance in the evaluation of health care for the elderly. Characteristics such as current smoking (yes, no) were evaluated, considering daily cigarette consumption for more than one month; and alcohol consumption (yes, no), considering consumption of at least one dose of an alcoholic beverage in the last 30 days. In addition, the concept of “polypharmacy” was evaluated, that is, simultaneous use of four or more medications. Health perception was measured in 2016 by the question “How do you rate your health?”, with the following response options: very good, good, fair, poor and very poor, later recategorized as very good/good, fair, poor/very poor. Outcomes were obtained at the second follow-up with the following filter question: “Do you need help taking your meds at the right dose and time?” (yes/no), which indicated the need for help with medication. Among those who needed help with their treatments, the three outcomes related to difficulties in taking medications were evaluated using the following questions: “Thinking about your medication, could you tell me if the following actions are ‘very difficult’, ‘a little difficult’ or ‘not difficult’?
removing the medicine from its package; reading the medicine package, to assess difficulties with handling and understanding the package; and taking too many medications at the same time, to assess difficulty with the amount of medications in use. Continuous medication was also evaluated using the question “Do you take any continuous-use medicine regularly, with no date to stop?” (yes/no). For those who were on continuous medication, the following questions were asked: a) “Do you sometimes forget to take your medicine?” (yes/no); b) “How often do you have trouble remembering to take all your medications?”, with five response options: never/rarely, from time to time, sometimes, usually, always. Then, the responses were grouped into three categories (never/rarely, occasionally/sometimes, usually/always). These categories were renamed to never, occasionally, and usually, respectively. Only elderly people who responded to the outcome questions and were followed up at both time points were included in the analyses. The analytical sample maintained the characteristics of the original cohort, with the exception of age, since there was a significant decrease in the proportion of elderly aged 80 years or older (p=0.044) (Supplementary Table). Analyses were performed using the Stata statistical package, version 16.0 (Stata Corporation, College Station, USA). First, the sample was described (followed up in 2016 and 2017). Afterwards, the prevalence and 95% confidence intervals (95%CI) of the main outcome were obtained according to the characteristics of the sample. Poisson regression with forward selection was used to estimate the crude and adjusted prevalence ratios (PR), and the adjusted model included the variables that presented p<0.20 in the crude analysis to control for possible confounding factors. The respective 95%CI of each predictor’s PR were estimated. Descriptive analyses of the frequencies of outcomes were performed. Proportions were compared using Pearson’s χ2 test.
Linear trend was assessed for significant associations between outcomes and exposures with more than two categories. The level of statistical significance was set at p<0.05. The study was approved by the Research Ethics Committee of the Medical School of Universidade Federal de Pelotas—CAAE: 54141716.0.0000.5317. The participants or caregivers signed an informed consent form, guaranteeing data confidentiality. In 2016 and 2017, for the elderly interviewed by telephone, consent was provided verbally with acceptance to answer the questionnaire.
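As a numerical illustration of a crude prevalence ratio (PR) with its 95%CI computed on the log scale (the Katz method for ratio estimates), the snippet below works through a hypothetical 2×2 table; the counts are invented for illustration and are not the study's data.

```python
import math

# Numerical sketch of a crude prevalence ratio (PR) with a 95%CI on the log
# scale (Katz method). The 2x2 counts below are INVENTED for illustration --
# they are not the study's data.
a, n1 = 120, 620   # exposed group (e.g., on four or more medications): events, total
b, n0 = 56, 540    # unexposed group: events, total

p1, p0 = a / n1, b / n0
pr = p1 / p0

# Standard error of log(PR) for prevalence (cumulative) data
se = math.sqrt((1 - p1) / a + (1 - p0) / b)
lo = math.exp(math.log(pr) - 1.96 * se)
hi = math.exp(math.log(pr) + 1.96 * se)
print(round(pr, 2), round(lo, 2), round(hi, 2))
```

The adjusted PRs in the study come from Poisson regression rather than this hand computation, but the log-scale interval shown here is the same idea: symmetric on the log scale, hence asymmetric around the PR itself.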
The initial sample, in 2014, consisted of 1,451 elderly people. Of these, in 2016, 1,306 participants were located (145 deaths identified). The follow-up rate was 90%, with the 1,161 elderly people who were alive being followed up. Most interviews (74.4%) took place over the phone. Table 1 shows the analysis of the outcome “Need for help to take medication at the right dose and time”, according to demographic and socioeconomic characteristics in 2016. Most participants were females (63.7%), aged between 60 and 69 years (56.0%), white (83.6%), with less than 8 years of schooling (54.2%), married or with a partner (55.9%), and in economic level C (57.6%). Altogether, 15.5% of the elderly (95%CI 13.5–17.8) reported needing help with medication use. There was no significant difference in the prevalence of help needed according to biological sex and skin color. Age, educational level and economic situation were important predictors for this outcome. The prevalence of elderly aged 80 years or older who reported needing help was 2.3 times higher (95%CI 1.6–3.5) than among subjects aged between 60 and 69 years, and 3.0 times higher (95%CI 1.6–5.4) among participants with no schooling, compared to those with 8 years or more of schooling. The prevalence of elderly people who reported needing help with their medications in economic strata D/E was 70% higher (PR=1.7; 95%CI 1.0–2.8) than among those in economic strata A/B. Marital status, after adjustment, lost statistical significance (Table 1). Table 2 addresses the same outcome according to behavioral and health characteristics of participants. Most did not smoke (88.4%) or drink (76.5%), met the definition of polypharmacy (53.7%), and perceived their health as very good or good (56.5%). Polypharmacy and self-rated health were important predictors for this outcome.
The prevalence of elderly people who needed help was 1.6 times higher (95%CI 1.1–2.3) among those on four or more medications, compared to those on less than four medications. The worse the self-perception of health, the greater the need for help to take medication; among those who perceived their health as poor or very poor, the prevalence of help needed was 100% higher (PR=2.0; 95%CI 1.2–3.2) than among those who perceived their health as very good or good. Alcohol consumption in the last 30 days lost statistical significance after adjustment (Table 2). Figure 1 shows the difficulties cited by the 176 elderly people who reported needing help to use medication, stratified by age. No significant difference was observed between age groups in the difficulty of removing medications from the package (p=0.55), reading package instructions (p=0.09), or taking many medications at the same time (p=0.55). For all ages, most participants did not find it difficult to unpack medications or to take many at the same time. However, the most prevalent answer for reading the package was “very difficult” at all ages (Figure 1). In assessing the use of continuous medication, 962 participants (83.0%; 95%CI 80.7–85.1) used them, among which 23.4% (95%CI 20.8–26.1) reported occasionally forgetting to take doses. Figure 2 shows the proportion of elderly people on continuous medication who reported needing help to take them, with regard to forgetting doses. Among these users, 17.0% (95%CI 14.7–19.5) reported needing help and 83.0% (95%CI 80.5–85.3) reported not needing help. The proportion of forgetting doses among participants who needed help (35.0%) was significantly higher than among those who did not need help (20.5%; p<0.001) (Figure 2). Figure 3 shows the frequency of forgetting doses according to age, for those on continuous medication. Among 956 users, most (74.9%; 95%CI 72.0–77.5) never forgot to take any doses.
In the 60–69 age group, 19.3% (95%CI 16.2–23.0) occasionally forget and 2.9% (95%CI 1.7–4.8) usually forget. In the 70–79 age group, 26.1% (95%CI 21.4–31.3) occasionally forget and 5.2% (95%CI 1.7–4.7) usually forget. Among those aged 80 years or older, 14.4% (95%CI 9.4–21.5) occasionally forget and 8.3% (95%CI 4.7–14.4) usually forget (p=0.002) (Figure 3).
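The prevalence ratios reported above follow the standard epidemiological formula: PR is the ratio of outcome prevalences in two groups, with a log-normal 95% confidence interval. A minimal sketch of that calculation is below; the cell counts are hypothetical, chosen only so the output roughly matches the reported PR of 2.3 (95%CI 1.6–3.5) for the 80+ group, since the study's actual counts are not given in the text.

```python
import math

def prevalence_ratio(a, n1, c, n2, z=1.96):
    """Prevalence ratio with a log-normal 95% CI.

    a/n1: outcome count and group size in the index group (e.g. aged 80+),
    c/n2: outcome count and group size in the reference group (e.g. 60-69).
    """
    p1, p2 = a / n1, c / n2
    pr = p1 / p2
    # Standard error of ln(PR) for a ratio of two independent proportions
    se = math.sqrt(1/a - 1/n1 + 1/c - 1/n2)
    lo = math.exp(math.log(pr) - z * se)
    hi = math.exp(math.log(pr) + z * se)
    return pr, lo, hi

# Hypothetical counts, for illustration only (not the study's data):
pr, lo, hi = prevalence_ratio(30, 100, 65, 500)
print(f"PR = {pr:.1f} (95%CI {lo:.1f}-{hi:.1f})")
```

With these assumed counts the sketch yields PR = 2.3 (95%CI 1.6–3.4), on the same order as the estimates in Table 1.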
This study shows that 15.5% of the elderly needed help to use their medication in the right dose and at the right time; the greater the age, the lower the level of education, and the worse the economic situation, the greater the proportion of elderly people who reported needing help. Although there are methodological differences among studies that evaluate outcomes regarding the need for help and difficulty in using medications, in a population-based study carried out with elderly people aged 60 years or older in the city of São Paulo (SP), 8.5% of them had difficulty taking their medication and 89.3% received some sort of help in this task.

The need for help with medications is a delicate issue: when medications are misused, they predispose the elderly population to the risks of polypharmacy and to the possibility of more intense adverse or therapeutic effects, in addition to a likely increase in cost, both for the individual and for the health system. In addition, the need for help with medication can require expanding the care network for the elderly. In most cases, this network starts with family members, who set aside their profession, leisure activities, and self-care to meet the needs of the elderly, often for prolonged periods and sometimes until their deaths, which can harm the quality of life of the caregiver and the family.

Another study, carried out in basic health units in the city of São Paulo (SP), used the Lawton Scale to identify the degree of dependence for IADL, and one of the evaluated items was whether the individual was able to take their medication in correct doses and at correct times. It was observed that 46.8% of the elderly could not, 28.2% needed partial help, and only a quarter could use their medication without any help. Several factors are associated with impairment of functional capacity, such as advanced age, female gender, low income, and low education.
Low educational level was also associated with the inability to take medication in a descriptive study carried out with 95 elderly people treated at a Family Health Strategy (FHS) unit in Goiânia (GO), showing that adverse social and economic conditions negatively influence health-related issues, such as the need for help to use medications at the correct dose and time. In that study, 30.0% of the elderly needed reminders to take their medications at the right time and 13.0% were unable to take them by themselves.

The elderly's need for help to deal with their treatments, due to difficulty in handling medication packages, reading the packaging, or taking too many medications, directly interferes with adherence to treatment. Adherence to treatment is a complex, multifactorial matter that is essential for therapeutic results. When the patient does not adhere to treatment, there may be changes of various types, such as reduced benefits, increased risks, or both, which contributes to increased treatment costs for the elderly and for health services. In this sense, it is important to understand the factors that prevent the patient from following the recommendations of health professionals.

The need for help to take medication in this study can be explained, in part, by difficulties in activities of daily living identified in the first follow-up, which were also associated with older age, lower education, and the presence of multimorbidities; however, this information was not collected in the 2016 follow-up, which did not allow for these analyses. Considering that this is a longitudinal follow-up and that aging is a limitation for the use of medication, it is likely that the difficulties faced in using medication will increase in the upcoming follow-ups.
Regarding the difficulties with the therapeutic regimen presented by the elderly who reported needing help, the greatest difficulties among elderly people aged 80 years or older were related to removing the medication from the package and reading it. These difficulties may be associated with the loss of fine motor skills and reduced visual acuity in this population, although this study did not find a significant difference. There is evidence that physiological aging can lead to a decline in some tasks. A systematic review aimed at analyzing factors associated with the autonomy of the elderly showed that the oldest (over 80 years old) are 40% more likely to let other people decide for them, when compared to those aged 60–69 years. That is, with aging, the probability of loss of autonomy increases, and the perception of autonomy worsens. Also, visual acuity can decrease with age, and this can affect the ability of the elderly to read information on the medicine package, leading to errors or confusion, especially with medications whose names are similar. A study carried out with 96 elderly people aged 65 and older from a community in the countryside of São Paulo showed a significant increase in the prevalence of low vision, compromising activities of daily living.

Other important points refer to continuous medications, polypharmacy, and self-perception of health. The elderly population lives with chronic health problems and is a great consumer of health services and medicines, especially those for continuous use. This study showed that most elderly people aged 60 and older use this type of medication and that polypharmacy and poorer health perception were also associated with a greater need for help with medication. The high prevalence of polypharmacy among the elderly population points to the importance of identifying the needs of this population in order to promote the rational use of treatment.
The results of this study also showed that, of those on continuous medication, about a quarter occasionally forget to take their medication, although most of them reported never forgetting (74.9%; 95%CI 72.0–77.5). These results were lower than those reported by Bezerra et al. and Rocha et al., and higher than those reported by Marin et al. Forgetting is a serious problem, as it can directly impact adherence to treatment and, consequently, the effectiveness of medications, leading to unsatisfactory control of multimorbidities. It is estimated that, in high-income countries, adherence to long-term therapies averages only 50%. In middle-income countries, the rates are even lower, which seriously compromises the efficacy of treatments and has important implications for quality of life, the economy, and public health.

One limitation is the impossibility of collecting all behavioral and health characteristics in the same follow-up in which the outcome was collected, which may have underestimated or overestimated the relation of these variables with the outcome, even though the interval between follow-ups was only two years. Not having evaluated the functional limitations of the elderly can also be a limitation, as these characteristics can directly influence the outcomes. The study also has strengths: a population-based longitudinal sample was used, with frequent follow-ups, although hospitalized or institutionalized elderly were not included. Even though the study involved elderly participants and was not initially planned with a cohort design, the follow-up rate was high.

Social and economic determinants were found to influence the elderly's need for help to use their medications, and a high prevalence of elderly people on continuous treatment was found, with a quarter of these occasionally forgetting to take doses, a proportion significantly higher among those who need help.
Studies that estimate the difficulties faced by the elderly with their medications are important to support health policies and practices aimed at minimizing these difficulties and at guiding actions to improve adherence to treatment and the rational use of medication. |
|
PMC10000022 | Nathalia Brichet,Signe Brieghel,Frida Hastrup | Feral Kinetics and Cattle Research Within Planetary Boundaries | 23-02-2023 | critical kinetics,planetary boundaries,technical solutions,feed additives,livestock,Jevons’ paradox,Denmark,anthropology | Simple Summary This commentary advocates a note of caution with regard to using the manipulation of feed to address and solve cattle’s negative impacts on the climate and environment. It identifies some of the potential consequences—intentional and unintentional—of this type of ‘solutionism’ and proposes a wider discussion about reducing livestock numbers and thinking with planetary boundaries to promote sustainable animal production systems. The argument is illustrated through findings from an interdisciplinary research project on cattle production in Denmark. Abstract The increased attention drawn to the negative environmental impact of the cattle industry has fostered a host of market- and research-driven initiatives among relevant actors. While the identification of some of the most problematic environmental impacts of cattle is seemingly more or less unanimous, solutions are complex and might even point in opposite directions. Whereas one set of solutions seeks to further optimize sustainability pr. unit produced, e.g., by exploring and altering the relations between elements kinetically moving one another inside the cow’s rumen, this opinion points to different paths. While acknowledging the importance of possible technological interventions to optimize what occurs inside the rumen, we suggest that broader visions of the potential negative outcomes of further optimization are also needed. Accordingly, we raise two concerns regarding a focus on solving emissions through feedstuff development. 
First, we are concerned about whether the development of feed additives overshadows discussions about downscaling and, second, whether a narrow focus on reducing enteric gasses brackets other relations between cattle and landscapes. Our hesitations are rooted in a Danish context, where the agricultural sector—mainly a large-scale technologically driven livestock production—contributes significantly to the total emission of CO2 equivalents. | Feral Kinetics and Cattle Research Within Planetary Boundaries
Since 2006, when the much-quoted FAO report Livestock’s Long Shadow [1] was published, increasing attention has been paid to the environmental challenges brought about by the cattle industry [2,3]. Problems pile up. Today, agriculture occupies about 38% of Earth’s terrestrial surface [4]. Industrial livestock requires enormous amounts of feed, the production of which takes up more than three quarters of all agricultural land on the planet [5]. This leads to massive deforestation, which proliferates into yet other problems such as soil erosion, nutrient leaching, and loss of biodiversity. Further, the gases emitted from the rumination processes of cattle make up a substantial portion of the greenhouse gases (GHG) that spread through the atmosphere, heating up our globe. In particular, enteric fermentation has been targeted and understood as key to controlling and potentially reducing the climate impact of cattle. In response, a host of market- and research-driven initiatives among relevant actors has emerged. As this special issue suggests, the time has now come to collect knowledge on forage and feedstuff digestion kinetics in ruminants in order to meet the mounting pressures to reduce enteric methane production. The fact that domesticated animals produced around the world (mainly cattle, pigs, and poultry) now outnumber wild mammals and birds by a factor of ten no doubt adds pressure to this predicament [5]. Yet, for all the weight this ratio puts on the world’s ecosystems, the scientific knowledge base supporting the industry’s transition towards more sustainable futures most often comes from very specific areas of the natural sciences. Nourishing this quantitatively large global production of a few domesticated animal species, then, is the continuous production of scientific knowledge and research related to issues such as feed intensification, genetics, and health. 
For cattle, the obvious aim is to further optimize and increase dairy and beef production—industries that in Euro-American production systems are premised on principles of high cost efficiency, achieved by keeping large animal herds on small but intensively managed areas, with easy access to feed products that are often both nutritionally optimized and imported from overseas. While the identification of some of the most problematic environmental impacts of cattle is seemingly more or less unanimous, solutions are complex and might even point in opposite directions. As is clear from this special issue, one set of solutions points to interventions into the causal relations of elements kinetically moving one another inside the cow’s rumen. More specifically, feed additives of various sorts are developed to decrease the generation of methane, thus targeting climate change. Surely, these interventions are valuable—after all, in these critical, increasingly hot times, why oppose GHG reductions in whatever form and by whatever means? Nonetheless, in this short contribution, we raise two concerns regarding a focus on solving emissions through feedstuff development. Whereas knowledge about the possibilities of technological optimization is important, we suggest that broader visions of the possible negative outcomes of further optimization are also needed. Notably, we are concerned for two reasons: First, the development of feed additives may overshadow discussions about downscaling. Second, a narrow focus on reducing enteric gasses by manipulating a set of kinetic causal processes inside the rumen may bracket other relations between cattle and landscapes. Both of these consequences, we suggest, can potentially limit the scope of climate impact mitigation in relation to cattle. Below, we substantiate these reservations and go on to probe how the development of feed additives sits with ideas about absolute sustainability and safe operating spaces for humanity.
Our approach to the issue of feed additives and cattle production is rooted in anthropology and cultural history. We thus work using a method of ethnographic fieldwork among stakeholders engaged in Danish cattle production, and we conduct document analysis of, e.g., policies and public discussions, just as we look to archival and historical literature to trace earlier connections between state making and livestock production in Denmark.
Within the agricultural sciences, an answer to the problematic climate impact of cattle has often been to further intensify animal production cf. [6]. This response, however, rests on a modernistic assumption that more-than-human lives are essentially controllable by humans. The ecological crisis that we are presently witnessing does indeed testify to human activities and projects on a massive scale, but not to human control over causal relations in the more-than-human world. This is what we mean by feral kinetics in this opinion’s title; we want to indicate that projects in which (causal) relations were once set in motion by humans through intentional projects often spur various unintended effects [7]. Surely, no one set out to change the climate through animals. A brief historical look at the Danish context in which our research is rooted shows how intensification has come about gradually and in response to a host of societal, political, and historical circumstances that jointly make up what livestock came to be. In Denmark, the agricultural sector is responsible for 27.1% of national GHG emissions (excl. land use, land use change, and forestry (LULUCF)), of which intensive livestock production is a prominent contributor [8]. Danish livestock production is often positively framed in public discourse as economically and environmentally cost-efficient when measured at the scale of singular products. Yet, there is reason to question the hidden costs of this mode of calculation, which often fails to include GHG emissions and global environmental impact derived from the livestock industry’s dependency on feedstuffs, and thus, on land cultivation and deforestation both in Denmark and overseas, as scientists have also pointed out [9]. 
In Denmark, as in other European countries, agricultural production has historically informed the organization of government, business, and civil society, and politicians still identify Denmark as “a farming country”—see, for example, [10]. It was through the workings and on-going development of the agricultural sector that Denmark as a nation underwent some of its most significant historical changes, not least that of industrial modernization [11,12]. More specifically, 19th- and 20th-century industrial modernization in the agricultural sector came about through the emergence of a strong, export-oriented livestock industry—see, for example, [13]. For instance, the number of pigs in Denmark has increased from 301,000 in 1860 to 12.2 million today, which is now more than twice the Danish population [14,15]. The historical emergence of a strong livestock sector not only changed the welfare and fortuity of rural communities. It also changed the physical appearance of the animal bodies that would increase the reach of Danish industrial interests by yielding more meat and milk per animal than they ever had before, which was achieved, in particular, through the industry’s adoption of scientific approaches to rational and optimized feeding [16,17]. Our point is that during the same historical processes that have made Danish agriculture synonymous with a large livestock industry, the livestock industry has itself become synonymous with high export and feedstuff dependency, as both are integral to how cost efficiency is construed in the industry. However, if the purpose of novel feed additives is to mitigate the climate impact of cattle production, it seems pivotal to ask whether such a high dependency on feeds should itself become a site of scientific intervention in the industry.
Indeed, to pinpoint our first reservation, focusing on greening the enteric digestion process by making ‘enhanced’ rumens less harmful, and politically prioritizing this effort, may engender a so-called rebound effect where decreased methane emission pr. cow is evened out by an increased scale of production—a mechanism sometimes summarized as Jevons’ paradox. In other words, addressing the issue of methane emission through conceptualizing the rumen as a discrete and singular site of intervention may occlude the concern both with a potential rise in scale and with the other well-known, environmentally harmful effects of maintaining present-scale livestock production [18]. Inadvertently, we fear, the development of feed additives may contribute to the status quo, thus foreclosing difficult discussions about how we might best use arable land (for feed or food), and about the number of cattle the world’s ecologies (including atmospheric greenhouse gasses) can actually support. Aspiring for sustainability—as genuine change—on a planet with limited resources and imminent tipping points requires that we think in absolute terms. Adjustment and improvement miss the mark.
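The rebound mechanism can be made concrete with simple arithmetic. In the sketch below, the 17% per-cow reduction figure is borrowed, purely for illustration, from the additive mentioned elsewhere in the text; the baseline herd size is arbitrary.

```python
# Illustrative rebound ("Jevons' paradox") arithmetic with assumed numbers:
# a 17% cut in methane per cow is cancelled out if the herd grows enough,
# since total emissions = herd size * emission per cow.
per_cow_cut = 0.17
herd, per_cow = 100.0, 1.0            # arbitrary baseline herd and per-cow emission

baseline = herd * per_cow
# Herd growth at which the per-cow gain is fully eaten up: 1/(1 - cut) - 1
breakeven_growth = 1 / (1 - per_cow_cut) - 1
grown = herd * (1 + breakeven_growth) * per_cow * (1 - per_cow_cut)

print(f"herd growth of {breakeven_growth:.0%} restores baseline emissions: "
      f"{grown:.1f} vs {baseline:.1f}")
```

That is, a herd expansion of roughly 20% would fully offset a 17% per-cow reduction, leaving total emissions unchanged.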
Our second hesitation is that a narrow focus on kinetics inside the rumen may eclipse wider ecological relations put into motion by the altered processes in the rumen. As the many ecological crises we are witnessing make clear, humans can no longer be seen as being in full control of various processes on earth—what we referred to above as the ‘feral’ nature of the non-human world in this day and age. Processes once perceived as controllable have proven not to be so, resulting in unintentional global effects that are both distributed and caused by human actions [7]. With regard to feed additives specifically, these may work on other relations than those within the rumen, making it urgent to widen the scope of research. We must be careful not to make yet another potent feral product by limiting our view of what the problems and solutions are.
To summarize, our hesitations with regard to feed additives concern, first, how additives may maintain status quo in terms of the scale of cattle production, and second, how they may engender unanticipated ecological effects. On both accounts, we suggest, there is a need to consider how technological and natural scientific solutions to the methane issue relate to a planet with boundaries that limit a safe operating space for humanity [19,20]. In short, we want to question how feed additives as a means of reducing methane emissions sit with ideas about absolute sustainability [21,22]. If, as we suggest, technological solutions ‘work back’ on the identification of what the problem even is, the result is too easily a circular argument; we can only see the problems that we think we can solve by way of technology—rather than by a just and green societal transition, sanctioned in progressive politics. In making this argument, and exploring how it works in a Danish context, we draw on the model of planetary boundaries originally suggested by Rockström et al. [20]. In 2017, Campbell et al. [18] further worked with the model, arguing that agricultural production is, indeed, a main driver for the eco-system changes occurring, particularly within spheres where the planetary boundaries are transgressed. Our point is simply this: If the impact of agricultural production already exceeds ‘permissible’ limits, something has to change fundamentally. Making things relatively better is just not enough. This is particularly important in agro-industrially intensified countries such as Denmark where livestock industries are (dis)proportionately large; we must question both issues of scale and of unintended side effects. Choices remain to be made that observe planetary boundaries. Now, the scientists and companies who develop feed additives would probably agree that this is just one tool among many others. 
We want to stress that it is not feed additive development per se that we take issue with. Rather, looking to the recent political work of pushing for a green transition of the Danish cattle industry, we are concerned with the way feed additives have emerged on the political stage. Here, additives are presented as the obvious solution to a universal problem. However, the problem that additives are key to solving—i.e., a massive cattle industry, nourished by feed produced in a monocultural logic that causes deforestation—can remain untouched. Further, the wider effects of the additives have yet to be documented.
To substantiate our argument, below we will look at a couple of instances where feed additives are discussed in a Danish context. We do so via ethnographic fieldwork, where we engage with stakeholders such as researchers and politicians, as well as with various written sources. Our method is ethnographic in the sense that we trace relations and generate analyses in dialogue with the field; as such, we explore what feed additives may be as they are produced and discussed by people who develop, implement, or entrust them with positive effects for mitigating climate change—see also [23]. In 2021, all political parties but one represented in the Danish parliament signed an agreement on the green transition of agricultural production in Denmark [24]. The agreement, launched as historic and ambitious, commits to reducing GHG emissions from agriculture by 55–65% by 2030 compared to 1990 levels, in addition to reducing nitrogen and phosphorous run-off into waterways in order to comply with EU regulation. To reach this binding target, curiously, the agreement starts by listing a number of caveats, all to the effect that the goal must be achieved without decreasing agricultural productivity, nor compromising public finances and Denmark’s competitive edge with regard to agriculture. Instead, the agreement highlights the continued prioritization of developing and implementing new technologies. Indeed, trust in (near) future technological solutions is so great that the historically ambitious deal, as it is now, ensures less than a third of the promised reduction. More precisely, the agreement specifies a reduction of 1.9 million tons of CO2e out of the 6.1–8 million tons that constitute the overall target for the agricultural sector, equaling a reduction in GHG emissions of 55–65% from 1990 emission levels. The rest of the required reduction, the agreement states, will be brought about by new technologies that have yet to be developed and implemented.
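The "less than a third" claim can be checked directly from the figures given in the text (1.9 specified out of an overall target of 6.1–8, both in the agreement's CO2e units):

```python
# Share of the overall 2030 reduction target covered by the measures the
# agreement actually specifies (figures as reported in the text).
specified = 1.9
shares = {target: specified / target for target in (6.1, 8.0)}
for target, share in shares.items():
    print(f"of a {target} target, the agreement specifies {share:.0%}")
```

The specified measures thus cover roughly 24–31% of the target, leaving about 70% or more to technologies yet to be developed.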
Two overall arenas for technological innovation are singled out in the political agreement: namely, the curbing of emissions from manure from all production animals and enteric fermentation in livestock. The agreement states as follows: ”It will be a continued priority that new tools, such as feed additives, are transferred as quickly as possible to the implementation track, and that the demand [for reduced GHG emissions] is adjusted according to what can be realized” [24] (p. 4). What we want to point to here is that the means for a very large proportion of the CO2 reduction that the agreement commits to have yet to be invented, and that reduction targets are adjustable. In other words, the binding and historic agreement on the green transition of Danish agriculture is highly negotiable, and, further, dependent on uncertain technologies. All the while, the agreement lists a number of other priorities that are not up for negotiation—such as productivity, employment, public finances, and rural development. This leaves it up to innovative technologies to find the remaining (majority) of the promised GHG reduction. Accordingly, as we see it, there is a substantial risk that the agreement’s limited focus on climate change mitigation will lead to so-called “burden-shifting”—see [22]—as the negative impacts of feed production and consumption are only considered with regard to a single planetary boundary, as opposed to asking how feeds can become sustainable in an absolute sense, heeding all biospheres. This is to say that optimizing feeds as a means to mitigate GHG emissions specifically risks overlooking equally important and environmentally detrimental processes such as the eutrophication or acidification of waterbodies. If global feed production and consumption on the whole are largely left unchanged, the use of feed additives to mitigate GHG emissions risks relocating instead of actually solving the problems caused by livestock production. 
By leaving it up to hopeful investments in future technologies to reduce GHG, we are not forced to consider the number of cows, nor the other effects of an unchanged scale. Another example from our fieldwork sheds light on the potential unintended effects of implementing feed additive solutions. Below, we provide more detail regarding some of the ways that feed additives work in the practices and discussions of industrial agriculture stakeholders. At the annual Cattle Congress 2022, the head of the Cattle Section in SEGES Innovation—the Danish agricultural interest organization’s independent research unit—together with a researcher from the same organization, gave a talk under the headline “Climate Requirements in the Agricultural Agreement”—the same deal mentioned above. From her point of view, climate requirements can be answered by two distinct means of action: the handling of digestion and manure—while not compromising another distinct theme, namely animal welfare. In this way, she framed the problems involved in having an agricultural sector accounting for over one fourth of Danish GHG emissions by setting a very particular triangular frame within which one should think, talk, research, and act in relation to climate requirements: welfare, digestion, and manure. She mentioned that feeding with additives could reduce 20% of the 0.17 million tons CO2 that needed to be reduced by 2025, but also expressed frustration with the limited amount of money reserved to introduce and implement a new product approved by the European Union in the spring of 2022. New routines on farms need to be developed and supported, and potentially skeptical farmers should be convinced that milk yields will not decrease on account of the new additive. The researcher assisted her and elaborated on the tools needed to reach the goals in the climate law and the agreement discussed above. 
These were tools that altogether confirmed the dictum of ‘more for less’ (higher yield and more efficiency in feed and in producing bodies) that has made Danish livestock production competitive on a global market despite high production costs. Just as important, the presentation repeated a dictum that has recently become a standard answer to green goals: optimization equals sustainability. The researcher then went on to talk about the possibilities of reducing methane by changing diets—rapeseed and a handful of feed additives were mentioned, along with the possibilities and challenges these feeding options spurred. He singled out one new product and stated that the climate impact of milk would decrease by 17%, adding that if all conventional producers implemented the additive, the reduction targets for 2030 could be met. Thus far, he continued, the additive had only been tested on Holstein cattle. He wrapped up his presentation by saying that if any of the farmers present were interested in testing the product on their animals, they should feel free to get in touch. What interests us here is that the talk can be seen as combining the launch of a solution with calling for further tests, thereby mimicking the agreement above in its expectations for future effects of something yet to be fully developed and tested. Interestingly, a person in the audience raised his hand and asked about the manure from cattle fed with the approved additive, questioning whether the emissions from it changed when it was distributed on the fields. In response, the researcher answered that the amount of product used was so small, and the product processed so quickly in the cow, that it was very unlikely to have an effect elsewhere, outside the rumen. However, he continued, researchers in Canada had recently conducted a study where they pointed to higher emissions from manure in the fields as an effect of the application of feed additives.
Interestingly, this Canadian study—or hesitation, we could call it—did not seem to ‘alter’ the Danish researcher’s hopes for feed additives once they have been further tested. When we chatted with the researcher after the talk, it became clear that potentially increased emissions on the fields were understood as a problem for another research field—namely, the one that deals with manure handling. For him, it seemed, there were so many kinetic relations to be explored within the rumen that understanding what happens later, out on the fields, would be a theme to be researched once processes in the rumen are more fully understood. Our point here is to highlight the decoupling of what goes on in the rumen upon applying feed additives from the ‘afterlife’ of such an intervention.
To conclude, what we argue—and the reason for our reservations towards the prevalent kind of solutionism offered by the development of feed additives—is that it takes a very particular perspective on the cow for its rumen to be the sole target of any intervention. From this perspective, the cow is a singular unit from within which technology can decrease methane emission. To the contrary, as anthropologists, we would see any cow as a set of relations, ranging from the microbial level to global issues of deforestation [23]. While we do not oppose feed additives as such, we do hold that they risk building on and maintaining a tunnel vision, as also described above, that allows for a curious disconnection of cattle’s rumen from other cattle-related processes and decisions, including the discussion of scale and of effects other than GHG emissions. In Denmark, and elsewhere, other biosphere problems are also apparent, as seen from the model of planetary boundaries. Not least, the leaching of N and P has severely affected waterways in Denmark as a result of the scale of animal production, regardless of its climate efficiency when measured per kilogram of, e.g., milk. Put bluntly, as we see it, if we care about the immediate threats to the safe operating space for humanity, it makes little sense to assess the climate impact of cattle per singular rumen. One direct insight from the principle of absolute sustainability is that all agricultural resource activities impact many of the nine biosphere domains. Accordingly, solutions need to follow suit. We cannot afford the luxury of solving one problem at a time.
|
PMC10000023 | Gabriel Cruz-González, Juan Manuel Pinos-Rodríguez, Miguel Ángel Alonso-Díaz, Dora Romero-Salas, Jorge Genaro Vicente-Martínez, Agustin Fernández-Salas, Jesús Jarillo-Rodríguez, Epigmenio Castillo-Gallegos | Rotational Grazing Modifies Rhipicephalus microplus Infestation in Cattle in the Humid Tropics | 02-03-2023 | cattle, ectoparasites, control, grasslands, ticks

Rotational Grazing Modifies Rhipicephalus microplus Infestation in Cattle in the Humid Tropics
Ticks are one of the main problems in production units, mainly because they have become resistant to the chemicals used to control them. Several practical and environmentally friendly alternatives to chemicals have been sought to control tick infestations in cattle. In this work, we implemented rotational grazing to combat ticks at the pasture level. We found that a 30-day rest period for pastures (without animals) is not enough to reduce the presence of ticks in animals but that a 45-day rest period does reduce the presence of ticks in cattle. These studies are critical since they would help cattle producers design better strategies that help reduce the use of chemical acaricides and the presence of chemicals in milk, meat, and the environment.
Rotational grazing has been mentioned as a potential tool to reduce losses caused by high tick loads. This study aimed: (1) to evaluate the effect of three grazing modalities (rotational grazing with 30- and 45-day pasture rest and continuous grazing) on Rhipicephalus microplus infestation in cattle, (2) to determine population dynamics of R. microplus in cattle under the three grazing modalities mentioned in the humid tropics. The experiment was carried out from April 2021 to March 2022 and consisted of 3 treatments of grazing with pastures of African Stargrass of 2 ha each. T1 was continuous grazing (CG00), and T2 and T3 were rotational grazing with 30 (RG30) and 45 d of recovery (RG45), respectively. Thirty calves of 8–12 months of age were distributed to each treatment (n = 10). Every 14 days, ticks larger than 4.5 mm were counted on the animals. Concomitantly, temperature (°C), relative humidity (RH), and rainfall (RNFL) were recorded. Animals in the RG45 group had the lowest count of R. microplus compared to the RG30 and CG00 groups; these results suggest that rotational grazing with 45 days of rest could be a potential tool to control R. microplus in cattle. Yet, we also observed the highest population of ticks on the animals under rotational grazing with a 30-day pasture rest. A low tick infestation characterized rotational grazing at 45 days of rest throughout the experiment. The association between the degree of tick infestation by R. microplus and the climatic variables was nil (p > 0.05).
Ticks are one of the main threats to cattle production, affecting about 80% of livestock worldwide. These parasites generate losses ranging from 13.9 to 18.7 billion US dollars annually [1], mainly by affecting productive and welfare parameters. Rhipicephalus microplus (Canestrini, 1887) (Acari; Ixodidae) is the main ectoparasite affecting cattle in tropical, subtropical, and temperate areas of the world, where it transmits pathogens such as Babesia bovis, B. bigemina, and Anaplasma marginale [2]. Worldwide, control of this tick has mainly been based on therapeutic interventions using chemical treatments; however, ticks have developed resistance to these acaricides, which also have an ecological impact [3]. Non-conventional methods to control tick populations have been proposed to mitigate the effects of this resistance [4]. Among these is rotational grazing, in which the pasture manager sets the time cattle remain in the grazed section and the length of the recovery period during which a pasture subdivision stays free of animals [5]. It aims to reduce parasite–host contact and has been described as an ecological and profitable approach that also helps optimize forage resources [6]. Rotational grazing as a means of tick control has received the attention of researchers for several years; nevertheless, [7] pointed out that most studies have been based on mathematical models, and there is little information on the effect of rotational grazing in the field. The preceding agrees with [8], who mentioned that the population dynamics and dispersal of ticks in rotational grazing systems are complex and relatively unstudied. Field studies suggest that the effect of rotational grazing on ticks may depend on the recovery time of the paddock: cattle under rotational grazing with short rest periods (20 days) showed higher on-host tick infestations than those under continuous grazing.
Some studies with longer paddock recovery times report that rotational grazing is a promising non-conventional strategy to control ticks. On the other hand, the number of R. microplus generations per year is increasing, a phenomenon linked with global warming [9], which justifies the need for current studies of the seasonal dynamics of this ectoparasite. A successful tick-control strategy will also depend on the interaction of biotic and abiotic factors that leads to seasonal population abundance and, in turn, determines tick behavior in each region. This knowledge plays a decisive role in an ectoparasite control program [9]. There is no information on the effect of rotational grazing with different paddock resting times on the control of R. microplus in cattle. The objectives of this study were: (1) to evaluate the effect of three grazing modalities (continuous grazing and rotational grazing with 30- and 45-day pasture rest) on R. microplus infestation in cattle, (2) to determine the population dynamics of R. microplus in cattle under the three grazing modalities mentioned in the environmental conditions of the humid tropics.
The study took place in the Center for Teaching, Research and Extension in Tropical Livestock Production (CEIEGT) of the Faculty of Veterinary Medicine and Zootechnics of the National Autonomous University of Mexico (20°02′ N, 97°06′ W) [10], from April 2021 to March 2022.
We tested three grazing management strategies: CG00, continuous grazing, where the animals roamed free in a single paddock without internal divisions; RG30, rotational grazing with 3-day and 30-day grazing and recovery times, respectively, with 11 paddocks of ≈0.18 ha each; and RG45, rotational grazing with 3-day and 45-day grazing and recovery times, respectively, with 16 paddocks of ≈0.12 ha each. The three experimental areas share the same latitude, and land irregularities are similar. Each treatment consisted of 2 ha of pastures where African star grass (Cynodon nlemfuensis) predominated, with natural tick infestations. Grazing by cattle has been the only use received by the pastures over the last 30 years. We used the flag technique to verify the presence of larvae in the three experimental areas before the start of the experiment. The number of larvae was very low and similar among pastures. Paddocks did not receive any anti-tick treatment before the investigation. The experimental animals were thirty heifers between 8 and 12 months of age with an average live weight of 182 ± 44 kg. We allocated ten heifers randomly to each treatment. Eight animals from each group were F1 (Holstein × Zebu) and two were 5/8 × 3/8 (Zebu × Holstein). Another grouping criterion was coat color. The stocking rate for each experimental area at the beginning of the study was four animal units per hectare (au = 450 kg of live weight). Fifteen days before starting the experiment, all animals were treated against gastrointestinal parasites (albendazole) and against ticks and flies (coumaphos) in order to start with similar tick loads. Report [11] indicates that the local tick populations resist amitraz, synthetic pyrethroids, chlorpyrifos, diazinon, and ivermectin. However, no study has evaluated tick susceptibility to coumaphos, a chemical whose local use has been nil for at least 15 years.
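The paddock numbers in both rotational treatments follow from simple arithmetic: with a fixed 3-day grazing period, the paddock count equals the full cycle (grazing plus rest) divided by the grazing period, and paddock size is the total area divided by that count. A minimal sketch of this calculation (our illustration, not the authors' code; the function name is ours):

```python
def rotation_design(total_area_ha, graze_days, rest_days):
    """Paddock count and size for a rotational grazing scheme.

    While one paddock is grazed, the others recover, so the full
    cycle length (grazing + rest) divided by the grazing period
    gives the number of paddocks needed.
    """
    n_paddocks = (graze_days + rest_days) // graze_days
    paddock_size_ha = total_area_ha / n_paddocks
    return n_paddocks, paddock_size_ha

# RG30: 3 d grazing + 30 d rest on 2 ha -> 11 paddocks of ~0.18 ha
print(rotation_design(2.0, 3, 30))
# RG45: 3 d grazing + 45 d rest on 2 ha -> 16 paddocks of ~0.12 ha
print(rotation_design(2.0, 3, 45))
```

The computed counts and sizes match the ones reported for RG30 and RG45 above.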
Throughout the experiment, all animals received a feed supplement at a daily rate of 1 kg per head. We supplied water ad libitum to the animals. During the winter, the available grass decreased, so every other day, we supplemented the heifers with two bales of grass hay (C. nlemfuensis and Brachiaria sp., ≈22 kg each). There are no common areas between any of the three treatments. Each treatment had mobile and exclusive drinkers and feeders for the animals. The animals were not treated against ticks during the study; however, cattle were under medical supervision to monitor tick loads and clinical signs that might be present.
The count was carried out throughout the year every 14 days from 7:00 to 9:00 h. In total, we did 26 counts, beginning in April 2021 and ending in March 2022. With the aid of a compression ramp, we counted R. microplus ticks > 4.5 mm in length on each heifer. Ticks remained in place after counting. A qualified veterinarian examined the total body surface of the right side of each animal and multiplied the number of ticks by two [12].
Two weeks before and throughout the experiment, the environmental temperature (°C) and rainfall (mm) were recorded daily through the National Meteorological Service [13] database. The Weather Channel © mobile application provided the relative humidity data (RH, %) [14]. The climate is hot and humid with three climatic seasons: rainfall (June–September), winter (October–January), and dry (February–May). Our records (CEIEGT) and INEGI [10] indicated that the rainy season has temperatures ranging from 15 °C to 27 °C, rainfall is 715 mm, and relative humidity is 90–95%. The winter season (also known as “norths”) presents temperatures from 9 °C to 23 °C, with a total rainfall of 190 mm with a relative humidity of 30–90%. The dry season temperature varies from 11 °C to 29 °C, with rains of 150 mm and relative humidity of 20 to 80%.
The data were analyzed using the D’Agostino & Pearson, Anderson–Darling, Shapiro–Wilk, and Kolmogorov–Smirnov tests to determine normality and homogeneity of variances using StatGraphics 19.1.3 (StatPoint, Inc., Herndon, VA, USA). The tests showed that our data were not normally distributed. Tick counts for each treatment were therefore compared using the Kruskal–Wallis test with Statistica 10.0 (StatSoft, Inc., Tulsa, OK, USA). Statistical significance was set at p < 0.05, with 95% confidence intervals. Environmental temperature, relative humidity, and rainfall were correlated with tick load using the Spearman test with Software R version 2021 (R Core Team, Vienna, Austria). The tick count in the bovines of each treatment was analyzed by descriptive analysis (Software R version 2021).
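The statistical workflow just described (normality screening, a non-parametric comparison of the three treatments, and a rank correlation with climate) can be sketched in Python with SciPy. The authors used StatGraphics, Statistica, and R; the version below is a hedged re-illustration with simulated counts, not their code:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated cumulative tick counts per animal (n = 10 per treatment)
cg00 = rng.poisson(60, 10)
rg30 = rng.poisson(180, 10)
rg45 = rng.poisson(20, 10)

# Normality screening (Shapiro-Wilk, one of the four tests used)
for name, grp in [("CG00", cg00), ("RG30", rg30), ("RG45", rg45)]:
    _, p_norm = stats.shapiro(grp)
    print(f"{name}: Shapiro-Wilk p = {p_norm:.3f}")

# Non-parametric comparison of the three treatment groups
h, p_kw = stats.kruskal(cg00, rg30, rg45)
print(f"Kruskal-Wallis H = {h:.2f}, p = {p_kw:.4f}")

# Spearman rank correlation of tick load with a climatic variable
temperature = rng.uniform(11.5, 37.0, 10)
rho, p_sp = stats.spearmanr(cg00, temperature)
print(f"Spearman rho = {rho:.2f}, p = {p_sp:.2f}")
```

With non-normal count data, the rank-based Kruskal–Wallis and Spearman tests avoid the distributional assumptions of ANOVA and Pearson correlation, which is why the authors' pipeline takes this route.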
Table 1 shows 26 counts of R. microplus teleogins (>4.5 mm). In the first three months (April, May, and June) after starting the experiment (six counts), the parasitic loads were very low and similar among treatments (p > 0.05) (Table 1). From count seven to count sixteen, corresponding to July to November, the animals in RG30 had the highest counts of R. microplus (p < 0.05) (Table 1). In the last ten samplings, corresponding to the November–March period, the tick count recorded in RG45 was significantly lower than RG30 and CG00 (p < 0.05). The animals in the RG30 group had a higher cumulative parasite load at the end of the experiment with 13,352 teleogins, followed by animals from CG00 and RG45 treatments with 1882 and 660 teleogins. The dispersion patterns of parasitic loads among animals were different within each treatment. In total, 30% of the animals in RG30 and RG45 treatments concentrated 55% (7344/13,352) and 57% (1073/1882) of parasite loads, while in the CG00 treatment, the pattern was 42% (277/660). None of the animals showed health problems during the experiment. The population dynamics of engorged ticks on each treatment showed variable patterns (Figure 1). Animals in the RG30 group presented the highest infestations of R. microplus ticks (>4.5 mm in length) throughout the year, and the population fluctuation showed five distribution peaks. The first peak of engorged females was in June and July, averaging 71.5 ticks per animal. The second and third peaks occurred during September and October, reaching an average of 188 ticks and 115 ticks per animal; fourth and fifth peaks occurred during January and February, with an average of 31 and 48 ticks per animal, respectively (Figure 1). The experiment’s minimum and maximum environmental temperature fluctuated between 11.5 °C and 37.0 °C. Monthly relative humidity (RH) and precipitation were between 67 and 85% and 54.0 to 427.7 mm, respectively. 
There was no association between the degree of tick infestation by R. microplus and the climatic variables (p > 0.05).
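The concentration of parasite loads reported above (roughly 30% of the animals carrying more than half of the ticks) is a simple cumulative-share calculation. A small sketch, using hypothetical per-animal counts since the individual data are not given in the text:

```python
def load_concentration(counts, top_fraction=0.3):
    """Share of the total tick load carried by the most-infested
    top_fraction of animals."""
    ranked = sorted(counts, reverse=True)
    k = max(1, round(top_fraction * len(ranked)))
    return sum(ranked[:k]) / sum(ranked)

# Hypothetical counts for ten animals in one treatment group
counts = [310, 120, 95, 60, 40, 35, 30, 25, 20, 15]
share = load_concentration(counts)
print(f"Top 30% of animals carry {share:.0%} of the tick load")
```

The skewed example illustrates the pattern the authors describe: a few heavily infested animals dominate the group total.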
Rotational grazing has been reported as a viable alternative to control R. microplus in cattle. However, there is limited information on the effects of this non-chemical alternative at the farm level. Ours is the first report about the impact of three grazing management variations on R. microplus infestation in cattle. The evaluations that exist have been carried out in regions with different climates, different stocking rates, and different types of pastures. However, these studies have reported significant findings that could help to understand the results obtained in the present report. In this study, we observed that shortening the recovery period to 30 days, relative to continuous grazing or a 45-day rest, increased the tick loads on heifers. Animals in the RG30 group had the highest count of R. microplus compared to the RG45 and CG00 groups; indeed, the animals in treatment RG30 had the highest cumulative parasite load at the end of the experiment. Rotational grazing (with 20 days of rest) in Cynodon dactylon pastures was ineffective in reducing the parasitic loads of R. microplus on animals compared with continuous grazing [7]. The duration of the non-parasitic phase (in pastures) depends directly on climate and vegetation, which determine the abundance of the populations [15]. Under controlled field conditions, the average pre-hatching time is 42 days. Larvae show the best activity for adhering to a potential host at 3 to 8 days post-hatching, indicating that the adequate time for the presence of viable and vigorous larvae in the pasture could be 45–50 days post detachment of engorged ticks [9]. If we consider that, in this study, the animals return to the paddock after 30 days, there would still be no viable larvae to infest them. Still, in the next round, the animals would return after 60 days, when the larvae would have 15–20 more days of their best viable age, which could be an adequate time for a high level of infestation under the conditions of this study.
Further research is needed to determine the duration of the biological parameters of ticks under this grazing system and to consider other factors inherent to animal behavior. On the other hand, short-term rotational grazing induces a high stocking density because the number of paddocks increases while the number of animals remains more or less constant. As the available pasture and area per animal decrease, the probability of the larva–host encounter would increase, leading to augmented tick loads on animals [7,15,16]. However, other factors can also influence infestations, such as animal behavior, cattle trampling, and the vegetation cover of pastures, among others. Animals in the RG45 group, despite the higher stocking density of this grazing system, had the lowest count of R. microplus compared to the RG30 and CG00 groups; indeed, these animals also had the lowest cumulative parasite load at the end of the experiment. These results suggest that rotational grazing with 45 days of rest could be a potential tool to control R. microplus in cattle. There is no information about the effect of rotational grazing with 45-day pasture rest on R. microplus infestation compared with grazing modalities such as 30-day pasture rest and continuous grazing. Unlike the RG30 group, the animals return to the paddock after 45 days, which could still mean a low percentage of viable ticks to infest. Still, in the next round, the animals would return after 90 days, when the larvae would be 40–45 days past their best viable age. Such a length could be enough time for environmental conditions to damage the larvae, reducing the chances of infesting cattle. The effect of rotational grazing on tick populations results from the impact that abiotic factors, the type of pasture, and the recovery time can have on the non-parasitic phases (pre-oviposition, oviposition, incubation, egg hatch, and larval maturation) of R. microplus.
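The timing argument in the paragraphs above can be checked with back-of-the-envelope arithmetic from the cited biology (≈42 days from detachment to hatching, with peak larval vigor at 3–8 days post-hatch, giving a prime window of roughly 45–50 days after detachment). A sketch, with the constants taken from the text and the rest our own framing:

```python
PREHATCH_DAYS = 42      # average field pre-hatching time (from the text)
BEST_AGE_DAYS = (3, 8)  # days post-hatch when larvae attach best

def larval_prime_window(detach_day=0):
    """Days after detachment when vigorous larvae await a host."""
    return (detach_day + PREHATCH_DAYS + BEST_AGE_DAYS[0],
            detach_day + PREHATCH_DAYS + BEST_AGE_DAYS[1])

window = larval_prime_window()  # (45, 50)
for rest in (30, 45):
    # Approximate return days used in the discussion: one and two
    # rest periods after the paddock was vacated.
    first, second = rest, 2 * rest
    print(f"{rest}-day rest: returns at ~{first} and ~{second} d; "
          f"prime larval window is {window}")
```

A 30-day rest puts the second return (day ~60) just past the prime window, when many larvae are still viable; a 45-day rest pushes the second return to day ~90, well beyond it, which matches the infestation pattern observed.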
The vegetation architecture influences tick loads by protecting the larvae, as they climb the pasture, from non-ideal environmental factors [9,15,17]; furthermore, it can also offer protection to other non-parasitic phases. In this study, after each grazing period, the pasture in the RG45 group could have been shorter, exposing larvae to harsh environmental conditions that increased the probability of dehydration. Although ticks have an essential capacity to resist prolonged starvation, the decrease in their energy reserves and overexposure to adverse climatic factors (such as high temperature and low humidity) [18] are among the leading causes of mortality in field conditions [19,20]. Several authors from different countries have reported the time needed to achieve completely tick-free pastures: refs. [7,15,21] report 98, 105, and 136 to 192 days, respectively, because the larvae of R. microplus lose water and energy without having a source of nutrition. This variation highlights the need for studies under different conditions of vegetation cover (grazing times) and at other times of the year to better understand the behavior of ticks under different pasture management systems and geographical conditions. In addition, since the stocking density is higher than in group RG30, the soil, pastures, and ticks are exposed to greater trampling, which could affect tick survival. Further studies are suggested to determine these factors’ influence on tick survival at the pasture level. Remarkable data from the present study show that three out of ten animals in the RG30 and RG45 treatments maintained 55% and 57% of ticks, respectively. Ref. [16] reported a similar pattern, where 25 out of 36 animals were responsible for 50% of the total ticks. These results further reinforce the idea that animals have different susceptibilities or responses to tick infestations [22].
The above would help us to detect and treat only susceptible animals, as this would control around 55% of infestations and exert less selection pressure for resistance on ticks when treating animals. On the other hand, it is also essential to highlight behavior, since some animals are leaders in the group and move ahead of others when the herd enters a new pasture; in this way, they can collect most of the tick larvae. Animal hierarchy can also explain why the tick load patterns are similar in the case of rotational grazing (55 and 57%) and lower (42%) for continuous grazing. The growth and establishment of tick populations are directly related to the availability of hosts and to climatic factors such as temperature, humidity, and precipitation [23]. One year of observations of the parasitic phase of R. microplus in bovines allowed us to determine that the ectoparasite showed approximately five peaks in RG30 and one each in RG45 and CG00. It is worth mentioning that these populations, being highly influenced by environmental conditions, can present various diapause times, which could manifest in the absence of marked peaks in some seasons and treatments. Previous studies in tropical and subtropical areas reported 3 to 4 peaks per year for this tick species in cattle [16,18,22]. However, a recent experiment showed the occurrence of five annual peaks of R. microplus in cattle [24], attributing temperature as a possible factor in the increase in peaks. In this regard, cattle tick population dynamics from 40 years ago until now show population growth, with different numbers of peaks per year depending on seasonality (i.e., rainfall and dry seasons) or associated with the increase in environmental temperature over the years [9]. In the present study, there was no significant association between the parasite loads and the climatic variables analyzed (p > 0.05).
An aspect to highlight in the current experiment was the rise to a peak in group CG00 in winter, which is probably an effect of the cumulative number of larvae in the grasses that became adults in previous generations. This observation during the winter season agrees with that reported by [16] in Brazil, where high peaks of R. microplus were found under continuous grazing, attributed to pasture quality and the nutritional status of cattle, which can induce a lower susceptibility of animals to ticks.
We observed the highest population of ticks on the animals under rotational grazing with a 30-day pasture rest. A low tick infestation characterized rotational grazing at 45 days of rest throughout the experiment. None of the climatic variables evaluated was related to tick loads in the experimental groups. |