Dataset preview: each row has three columns.
  dataset: string, one of 45 task identifiers
  id: string, 17 to 64 characters long
  messages: list of exactly 2 chat turns (one user message followed by one assistant message)
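Each row in the preview below pairs a task identifier and an example id with a two-turn chat (a user prompt and an assistant response). As a minimal sketch only, the snippet below shows one way a dataset with this schema could be loaded and inspected using the Hugging Face `datasets` library; the repository id `example-org/science-instruction-corpus` and the `train` split are placeholders, since the preview does not name the hosting repository.

```python
from collections import Counter

from datasets import load_dataset  # pip install datasets

# Hypothetical repository id and split name: neither appears in this preview,
# so substitute the real path of the corpus being inspected.
ds = load_dataset("example-org/science-instruction-corpus", split="train")

# The preview describes three columns: a categorical "dataset" tag (45 values),
# a string "id" (17 to 64 characters), and a "messages" list holding exactly
# one user turn followed by one assistant turn.
print(ds.column_names)                        # e.g. ['dataset', 'id', 'messages']
print(Counter(ds["dataset"]).most_common(5))  # most frequent task identifiers

row = ds[0]
user_turn, assistant_turn = row["messages"]   # always two turns per row
print(row["dataset"], row["id"])
print(user_turn["role"], "prompt length:", len(user_turn["content"]))
print(assistant_turn["role"], "response preview:", assistant_turn["content"][:200])

# Restrict to a single task family, e.g. the MedMentions NER rows shown below.
ner_rows = ds.filter(lambda r: r["dataset"] == "science.medmentions_ner")
print(len(ner_rows), "rows in science.medmentions_ner")
```

The final `filter` call is one way to pull out a single task family, such as the science.medmentions_ner rows that open the preview.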
science.medmentions_ner
science.medmentions_ner.2425
[ { "role": "user", "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"HealthCareActivity\", \"InjuryOrPoisoning\", \"BodySubstance\", \"IntellectualProduct\", \"AnatomicalStructure\", \"SpatialConcept\", \"Chemical\", \"Bacterium\", \"MedicalDevice\", \"Organization\", \"BiomedicalOccupationOrDiscipline\", \"Finding\", \"BiologicFunction\", \"Virus\", \"ResearchActivity\", \"ClinicalAttribute\", \"PopulationGroup\", \"Eukaryote\", \"BodySystem\", \"Food\", \"ProfessionalOrOccupationalGroup\"].\n\nPlease return the output as a JSON object of the format: {\"Virus\": [\"HIV\", ...], \"Bacterium\": [\"MRSA\", ...], \"AnatomicalStructure\": [\"Lung\", ...], \"BodySystem\": [\"CNS\", ...], \"BodySubstance\": [\"Serum\", ...], \"Finding\": [\"Headache\", ...], \"InjuryOrPoisoning\": [\"Fracture\", ...], \"BiologicFunction\": [\"Death\", ...], \"HealthCareActivity\": [\"Biopsy\", ...], \"ResearchActivity\": [\"Clinical trial\", ...], \"MedicalDevice\": [\"Lenses\", ...], \"SpatialConcept\": [\"Camps\", ...], \"BiomedicalOccupationOrDiscipline\": [\"Forensic medicine\", ...], \"Organization\": [\"WHO\", ...], \"ProfessionalOrOccupationalGroup\": [\"Provider\", ...], \"PopulationGroup\": [\"Swimmers\", ...], \"Chemical\": [\"Gold\", ...], \"Food\": [\"Rice\", ...], \"IntellectualProduct\": [\"RPAM\", ...], \"ClinicalAttribute\": [\"Biomarker\", ...], \"Eukaryote\": [\"Dogs\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nThe yeast ADH7 promoter enables gene expression under pronounced translation repression caused by the combined stress of vanillin, furfural, and 5-hydroxymethylfurfural Lignocellulosic biomass conversion inhibitors such as vanillin, furfural, and 5-hydroxymethylfurfural (HMF) inhibit the growth of and fermentation by Saccharomyces cerevisiae. A high concentration of each fermentation inhibitor represses translation and increases non-translated mRNAs. We previously reported that the mRNAs of ADH7 and BDH2, which encode putative NADPH- and NADH-dependent alcohol dehydrogenases, respectively, were efficiently translated even with translation repression in response to severe vanillin stress. However, the combined effects of these fermentation inhibitors on the expression of ADH7 and BDH2 remain unclear. We herein demonstrated that exposure to a combined stress of vanillin, furfural, and HMF repressed translation. The protein synthesis of Adh7, but not Bdh2 was significantly induced under combined stress conditions, even though the mRNA levels of ADH7 and BDH2 were up-regulated. Additionally, adh7Δ cells were more sensitive to the combined stress than wild-type and bdh2Δ cells. These results suggest that induction of the ADH7 expression plays a role in the tolerance to the combined stress of vanillin, furfural, and HMF. Furthermore, we succeeded in improving yeast tolerance to the combined stress by controlling the expression of ALD6 with the ADH7 promoter. 
Our results demonstrate that the ADH7 promoter can overcome the pronounced translation repression caused by the combined stress of vanillin, furfural, and HMF, and also suggest a new gene engineering strategy to breed robust and optimized yeasts for bioethanol production from a lignocellulosic biomass." }, { "role": "assistant", "content": "{\"AnatomicalStructure\": [\"BDH2\", \"ADH7\", \"wild-type\"], \"Bacterium\": [], \"BiologicFunction\": [\"bioethanol production\", \"protein synthesis\", \"expression\", \"represses translation\", \"induction\", \"translation repression\", \"gene expression\", \"fermentation\", \"repressed translation\", \"up-regulated.\", \"growth\"], \"BiomedicalOccupationOrDiscipline\": [], \"BodySubstance\": [], \"BodySystem\": [], \"Chemical\": [\"inhibitors\", \"inhibitor\", \"Bdh2\", \"ALD6\", \"vanillin\", \"HMF\", \"mRNA\", \"furfural\", \"NADPH-\", \"promoter\", \"Lignocellulosic\", \"5-hydroxymethylfurfural\", \"mRNAs\", \"Adh7\", \"non-translated mRNAs\", \"lignocellulosic\", \"NADH-dependent alcohol dehydrogenases\"], \"ClinicalAttribute\": [], \"Eukaryote\": [\"yeasts\", \"yeast\", \"Saccharomyces cerevisiae\"], \"Finding\": [\"stress\", \"results\", \"combined stress\"], \"Food\": [], \"HealthCareActivity\": [], \"InjuryOrPoisoning\": [], \"IntellectualProduct\": [], \"MedicalDevice\": [], \"Organization\": [], \"PopulationGroup\": [], \"ProfessionalOrOccupationalGroup\": [], \"ResearchActivity\": [\"gene engineering\"], \"SpatialConcept\": [], \"Virus\": []}" } ]
science.medmentions_ner
science.medmentions_ner.695
[ { "role": "user", "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"HealthCareActivity\", \"InjuryOrPoisoning\", \"BodySubstance\", \"IntellectualProduct\", \"AnatomicalStructure\", \"SpatialConcept\", \"Chemical\", \"Bacterium\", \"MedicalDevice\", \"Organization\", \"BiomedicalOccupationOrDiscipline\", \"Finding\", \"BiologicFunction\", \"Virus\", \"ResearchActivity\", \"ClinicalAttribute\", \"PopulationGroup\", \"Eukaryote\", \"BodySystem\", \"Food\", \"ProfessionalOrOccupationalGroup\"].\n\nPlease return the output as a JSON object of the format: {\"Virus\": [\"HIV\", ...], \"Bacterium\": [\"MRSA\", ...], \"AnatomicalStructure\": [\"Lung\", ...], \"BodySystem\": [\"CNS\", ...], \"BodySubstance\": [\"Serum\", ...], \"Finding\": [\"Headache\", ...], \"InjuryOrPoisoning\": [\"Fracture\", ...], \"BiologicFunction\": [\"Death\", ...], \"HealthCareActivity\": [\"Biopsy\", ...], \"ResearchActivity\": [\"Clinical trial\", ...], \"MedicalDevice\": [\"Lenses\", ...], \"SpatialConcept\": [\"Camps\", ...], \"BiomedicalOccupationOrDiscipline\": [\"Forensic medicine\", ...], \"Organization\": [\"WHO\", ...], \"ProfessionalOrOccupationalGroup\": [\"Provider\", ...], \"PopulationGroup\": [\"Swimmers\", ...], \"Chemical\": [\"Gold\", ...], \"Food\": [\"Rice\", ...], \"IntellectualProduct\": [\"RPAM\", ...], \"ClinicalAttribute\": [\"Biomarker\", ...], \"Eukaryote\": [\"Dogs\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nThe clinicopathological and prognostic features of Chinese and Japanese inpatients with lung cancer Here, we retrospectively compared the differences in clinicopathological behaviors and prognosis of lung cancer from the First Affiliated Hospital (CMU1, n=513), Shengjing Hospital (CMUS, n=1021), Tumor Hospital (CMUT, n=5378) of China Medical University, the First Affiliated Hospital of Dalian (DMU, n=2251) and Jinzhou (JMU, n=630) Medical University, Takaoka Kouseiren Hospital (Takaoka, n=163) of Japan. Japanese lung cancer patients showed smaller tumor size, lower TNM staging, lower ratio of squamous cell carcinoma and higher ratio of small and large cell carcinomas than Chinese patients (p<0.05). Survival analysis showed that tumor size was employed as a prognostic factor for the Japanese and Chinese cancer patients (p<0.05). In DMU and CMUS, the ratio s of female patients or adenocarcinoma were higher than other hospitals (p<0.05), while the patients from CMUT and CMU1 were younger than the others (p<0.05). The ratio s of squamous cell carcinoma from CMUT, CMU1 and JMU were higher than the others, while it was the same for the ratio of large and small cell carcinoma in Takaoka and CMU1 (p<0.05). TNM staging was higher in CMUT than JMU and Takaoka (p<0.05). The female patients of lung cancer showed young prone, large tumor size, a high ratio of adenocarcinoma and advanced TNM staging in comparison to the counterpart (p<0.05). The younger patients of lung cancer displayed smaller tumor size, higher ratio of adenocarcinoma, lower TNM staging than the elder in Takaoka (p<0.05). There were more aggressive behaviors and shorter survival time for Chinese than Japanese lung cancer patients. 
The prevention of lung cancer should be strengthened by establishing a systematic and effective screening strategy, especially for the young and female patients." }, { "role": "assistant", "content": "{\"AnatomicalStructure\": [], \"Bacterium\": [], \"BiologicFunction\": [\"squamous cell carcinoma\", \"cancer\", \"small cell carcinoma\", \"large\", \"lung cancer\", \"adenocarcinoma\", \"small\", \"large cell carcinomas\"], \"BiomedicalOccupationOrDiscipline\": [], \"BodySubstance\": [], \"BodySystem\": [], \"Chemical\": [], \"ClinicalAttribute\": [\"survival time\", \"prognostic features\", \"prognostic factor\", \"prognosis\"], \"Eukaryote\": [], \"Finding\": [], \"Food\": [], \"HealthCareActivity\": [\"prevention\", \"screening strategy\"], \"InjuryOrPoisoning\": [], \"IntellectualProduct\": [\"TNM staging\"], \"MedicalDevice\": [], \"Organization\": [\"Shengjing Hospital\", \"CMUT\", \"China Medical University\", \"DMU\", \"JMU\", \"Medical University\", \"hospitals\", \"CMU1\", \"First Affiliated Hospital\", \"Takaoka Kouseiren Hospital\", \"Takaoka\", \"CMUS\", \"Tumor Hospital\"], \"PopulationGroup\": [\"Chinese\", \"Japanese\", \"elder\"], \"ProfessionalOrOccupationalGroup\": [], \"ResearchActivity\": [\"retrospectively\", \"Survival analysis\"], \"SpatialConcept\": [\"Japan\", \"Jinzhou\", \"tumor size\", \"Dalian\"], \"Virus\": []}" } ]
science.bioasq_general_qa
science.bioasq_general_qa.220
[ { "role": "user", "content": "You will be shown a question, followed by excerpts from biomedical research papers. Please answer the question based on the provided context. Do not include any text in your response other than the answer.\n\nQuestion: What is the role of venous angioplasty in multiple sclerosis?\n\nContext: Balloon angioplasty corrects blood pressure deviation in multiple sclerosis patients undergoing internal jugular vein dilation.\nPURPOSE: To investigate prevalence of extracranial abnormalities in azygos and internal jugular (IJ) veins using conventional venography and intravascular ultrasound (IVUS) in patients with multiple sclerosis (MS) being evaluated for chronic cerebrospinal venous insufficiency, a condition of vascular hemodynamic dysfunction\nFeasibility and safety of endovascular treatment for chronic cerebrospinal venous insufficiency in patients with multiple sclerosis.\nCONCLUSIONS: Endovascular treatment for CCSVI appears feasible and safe.\nThis report presents the case of a 45-year-old man with known MS and suspected CCSVI who had undergone previous internal jugular angioplasty and stenting. The patient reported dramatic improvement of symptoms after intervention.\nOpen venous reconstruction of the internal jugular vein was performed with a spiral graft from the saphenous vein. The patient's symptoms improved for several weeks until the venous reconstruction occluded. This case is the first reported open venous reconstruction for suspected CCSVI.\nPercutaneous transluminal angioplasty for treatment of chronic cerebrospinal venous insufficiency in people with multiple sclerosis: a summary of a Cochrane systematic review.\nBACKGROUND: It has been recently hypothesised that chronic cerebrospinal venous insufficiency (CCSVI) may be an important factor in the pathogenesis of multiple sclerosis (MS). 
The proposed treatment for CCSVI is percutaneous transluminal angioplasty, also known as the 'liberation procedure', which is claimed to improve the blood flow in the brain, thereby alleviating some of the symptoms of MS.\nCONCLUSIONS: There is currently no high level evidence to support or refute the efficacy or safety of percutaneous transluminal angioplasty for treatment of CCSVI in people with MS.\nCONCLUSION: The results of the current study suggest that endovascular evaluation and management of chronic cerebrospinal venous insufficiency is safe, with low morbidity and no procedure-related mortality.\nThere are clinical trials ongoing or planned to treat chronic cerebrospinal venous insufficiency (CCSVI) through angioplasty.\nPercutaneous transluminal angioplasty for chronic cerebrospinal venous insufficiency in multiple sclerosis: dichotomy between subjective and objective outcome scores.\nSome multiple sclerosis (MS) patients reported an improvement after percutaneous transluminal angioplasty (PTA) for chronic cerebrospinal venous insufficiency (CCSVI), despite the lack of correspondence with objective outcome scores.\nSpontaneously performed approach to CCSVI does not improve clinical and MRI parameters, despite frequent subjective perception of quality of life improvement.\nPercutaneous angioplasty of internal jugular and azygous veins in patients with chronic cerebrospinal venous insufficiency and multiple sclerosis: early and mid-term results.\nPurpose: To assess the safety of endovascular treatment of chronic cerebrospinal venous insufficiency (CCSVI) in patients with multiple sclerosis (MS).\n.Conclusion: Endovascular treatment of the IJV and azygous veins in patients with CCSVI and MS is a safe procedure with no post-procedural complications followed by significant improvement of IJV flow haemodynamic parameters and decrease in the EDSS score. Whether CCSVI percutaneous treatment might affect clinical improvement in patients suffering from MS is yet to be seen after completion of major multicentric clinical trials, still it seems like that this procedure is not negligible.\nCONCLUSIONS: PTA in patients with MS with CCSVI increased CSF flow and decreased CSF velocity, which are indicative of improved venous parenchyma drainage.\nBACKGROUND: It has been postulated that Multiple sclerosis (MS) stems from a narrowing in the veins that drain blood from the brain, known medically as chronic cerebrospinal venous insufficiency, or CCSVI. It has been proposed that balloon angioplasty should alleviate the symptoms of MS.\nThe revival of this concept has generated major interest in media and patient groups, mainly driven by the hope that endovascular treatment of 'CCSVI' could alleviate MS.\nThe data obtained here do generally not support the 'CCSVI' concept. Moreover, there are no methodologically adequate studies to prove or disprove beneficial effects of endovascular treatment in MS.\nRATIONALE: It is estimated that some hundreds of Canadian patients with multiple sclerosis (MS) have journeyed abroad to avail themselves of 'liberation therapy' (venoplasty) following the initial report by Zamboni et al in 2009.\nCONCLUSION: No objective improvement was found at one year in thirty MS subjects who had undergone venoplasty, although many reported a degree of subjective benefit.\nZamboni et al. 
reported significant improvement in neurological outcomes in MS patients who underwent percutaneous transluminal angioplasty (PTA).\nCONCLUSION: The study revealed that PTA in relapsing remitting MS patients was not associated with any neurological improvement. However, there was an increase in disease activity irrespective of the adherence to DMTs.\nSerious complication of percutaneous angioplasty with stent implantation in so called \"chronic cerebrospinal venous insufficiency\" in multiple sclerosis patient.\nWe strongly ask for restriction of PTA procedure in so called CCSVI, which concept was not proven to be relevant to MS.\nPercutaneous transluminal angioplasty for treatment of chronic cerebrospinal venous insufficiency (CCSVI) in multiple sclerosis patients.\nThe proposed treatment for CCSVI is percutaneous transluminal angioplasty, also known as the 'liberation procedure', which is claimed to improve the blood flow in the brain thereby alleviating some of the symptoms of MS.\nAUTHORS' CONCLUSIONS: There is currently no high level evidence to support or refute the efficacy or safety of percutaneous transluminal angioplasty for treatment of CCSVI in people with MS.\nSignificant improvement of clinical disability in relapsing-remitting patients was (P < 0.001) achieved.\nData from a few small case series suggest possible improvement of the clinical course and quality of life by performing percutaneous balloon angioplasty (PTA) of the stenotic veins.\nCONCLUSION: In the case presented, PTA of the IJV confluence resulted in haemodynamic improvement despite the presence of IJV external compression.\nCONCLUSIONS: Endovascular treatment of CCSVI in patients with MS appears to be a safe procedure resulting in significant clinical improvement.\nEndovascular treatment of concurrent CCSVI seems to be safe and repeatable and may reduce annual relapse rates and cumulative disability in patients with relapsing-remitting MS.\nTherefore, any potentially harmful interventional treatment such as transluminal angioplasty and/or stenting should be strongly discouraged.\nThe work of Dr. Zamboni, et al, who reported that treating the venous stenoses causing CCSVI with angioplasty resulting in significant improvement in the symptoms and quality of life of patients with MS (2) has led to further interest in this theory and potential treatment.\nRegardless of this continued uncertainty surrounding the safety and efficacy of this therapy, MS patients from Canada, and other jurisdictions, are traveling abroad to receive central venous angioplasty and, unfortunately, some also receive venous stents.\nFinally, no proven (i.e., based on strict scientific methodology and on the rules of evidence-based medicine) therapeutic effect of the \"liberation\" procedure (unblocking the extracranial venous obstruction using angioplasty) has been shown up to date.\nThe angiographic and hemodynamic improvement was associated with improvement in symptomatology, most particularly in cognitive and constitutional symptoms that may be related to cerebrovenous flow.\nIn conclusion, in this series, endovascular intervention to correct venous stenosis associated with multiple sclerosis was associated with improvement in symptoms possibly related to disturbed venous hemodynamics.\nSome investigators have proceeded to unblinded nonrandomized angioplasty and stenting procedures in patients with CCSVI, with anecdotal reports of symptom improvement. 
Because of conflicting data on the presence of CCSVI and the absence of controlled trials of CCSVI intervention, the current standard of clinical care is neither to evaluate multiple sclerosis (MS) patients for CCSVI anomalies, nor to intervene with procedures to alter such anomalies.\nEndovascular treatment (percutaneous transluminal angioplasty (PTA) with or without stenting) of CCSVI was reported to be feasible with a minor complication rate.\nPTA seems to be an effective treatment for patients with CCVI and multiple sclerosis, However, randomized studies are warranted to establish the efficacy of this new treatment for MS.\nCONCLUSIONS: Endovascular treatment of CCSVI is a safe procedure; there is a 1.6% risk of major complications.\nCONCLUSIONS: The endovascular treatment in patients with MS and concomitant CCSVI did not have an influence on the patient's neurological condition; however, in the mid-term follow-up, an improvement in some quality-of-life parameters was observed.\nAlthough there is no methodologically rigorous evidence to support this hypothesis presently, a considerable number of MS patients have undergone endovascular CCSVI procedures. Such procedures include angioplasty or stent placement in jugular and azygous veins. The safety and efficacy of these procedures is unknown, but not without risk.\nCONCLUSIONS: As the debate about CCSVI and its relationship to MS continues, the complications and risks associated with venous stenting and angioplasty in jugular and azygous veins are becoming clearer.\nCONCLUSIONS: This study further confirms the safety of PTA treatment in patients with CCSVI associated with MS.\nThis review aims to describe the proposed association between CCSVI and MS, summarize the current data, and discuss the role of endovascular therapy and the need for rigorous randomized clinical trials to evaluate this association and treatment.\nCONCLUSION: Endovascular therapy appears to be a safe and reliable method for treating CCSVI.\nIn particular, the potential shift in treatment concepts possibly leading to an interventional treatment approach including balloon angioplasty and venous stent placement is currently being debated.\nIn conclusion, the present conclusive data confuting the theory of CCSVI in MS should lead to reluctance with respect to the interventional treatment of possible venous anomalies in MS patients.\nAlthough balloon angioplasty and stenting seem to be safe procedures, there are currently no data on the treatment of a large group of patients with this vascular pathology.\nCONCLUSIONS: The procedures appeared to be safe and well tolerated by the patients, regardless of the actual impact of the endovascular treatments for venous pathology on the clinical course of multiple sclerosis, which warrants long-term follow-up.\nTwo pilot studies demonstrated the safety and feasibility in Day Surgery of the endovascular treatment of CCSVI by means of balloon angioplasty (PTA). It determines a significant reduction of postoperative venous pressure.\nHowever, PTA seems to positively influence clinical and QoL parameters of the associated MS and warrants further randomized control trials.\nFurthermore, they have obtained apparent clinical improvement or stabilization by endovascular ballooning of points of obstruction in the great veins in some, at least temporarily.\nA potential shift in the treatment paradigm of MS involving endovascular balloon angioplasty or venous stent placement has been proposed as well as conducted in small patient series. 
In some cases, it may have resulted in serious injury.\nAt present, invasive and potentially dangerous endovascular procedures as therapy for patients with MS should be discouraged until such studies have been completed, analyzed, and debated in the scientific arena.\nShe underwent percutaneous balloon angioplasty of the associated chronic cerebrospinal venous insufficiency (CCSVI), due to membranous obstruction of the proximal azygous vein, with severe stenosis of the left internal jugular vein. Treatment of the associated CCSVI made a parallel improvement in both clinical and neurophysiological parameters, allowing us to avoid high dose steroid therapy.\nCONCLUSIONS: PTA of venous strictures in patients with CCSVI is safe, and especially in patients with RR, the clinical course positively influenced clinical and QOL parameters of the associated MS compared with the preoperative assessment.\nIn particular, the potential shift in treatment concepts possibly leading to an interventional treatment approach including balloon angioplasty and venous stent placement is currently being debated" }, { "role": "assistant", "content": "Chronic cerebrospinal venous insufficiency (CCSVI) may be an important factor in the pathogenesis of multiple sclerosis (MS). The proposed treatment for CCSVI is percutaneous transluminal angioplasty, also known as the 'liberation procedure', which is claimed to improve the blood flow in the brain, thereby alleviating some of the symptoms of MS. There have been reports to suggest that venous angioplasty in MS patients is a feasible and safe procedure and is associated with improved disease symptom severity, quality of life, and corrects blood pressure deviation, improves CSF dynamics. Open venous reconstruction of the internal jugular vein has also been tried with good results.\nHowever, some authors have failed to document beneficial value of venous angioplasty in MS patients and systematic review has suggested that there is currently no high level evidence to support or refute the efficacy or safety of percutaneous transluminal angioplasty for treatment of CCSVI in people with MS." } ]
science.covidfact_entailment
science.covidfact_entailment.44
[ { "role": "user", "content": "You will be shown a claim related to the COVID-19 pandemic, and the abstract of a biomedical research paper. Each sentence from the abstract will be on a separate line. Your task is to return a JSON object with two fields:\n\n- \"verdict\": The fact-checking verdict. If the information in the abstract supports the claim, write \"SUPPORT\". If the abstract contradicts the claim, write \"CONTRADICT\".\n- \"evidence\": An array of sentences providing evidence for the verdict. Please copy all relevant sentences verbatim from the abstract.\n\nFor instance, if the model were given the claim \"the COVID vaccine is safe for healthy adults\", the output might be\n{\n \"verdict\": \"SUPPORT\",\n \"evidence\": [\"Our findings indicate that the risks of severe COVID vaccine side effects are low for healthy adults.\"]\n}\n\nClaim: Transmission of sars-cov-2 in domestic cats imposes a major bottleneck.\n\nAbstract:\nThe evolutionary mechanisms by which SARS-CoV-2 viruses adapt to mammalian hosts and, potentially, undergo antigenic evolution depend on the ways genetic variation is generated and selected within and between individual hosts.\nUsing domestic cats as a model, we show that SARS-CoV-2 consensus sequences remain largely unchanged over time within hosts, while dynamic sub-consensus diversity reveals processes of genetic drift and weak purifying selection.\nWe further identify a notable variant at amino acid position 655 in Spike (H655Y), which was previously shown to confer escape from human monoclonal antibodies.\nThis variant arises rapidly and persists at intermediate frequencies in index cats.\nIt also becomes fixed following transmission in two of three pairs.\nThese dynamics suggest this site may be under positive selection in this system and illustrate how a variant can quickly arise and become fixed in parallel across multiple transmission pairs.\nTransmission of SARS-CoV-2 in cats involved a narrow bottleneck, with new infections founded by fewer than ten viruses.\nIn RNA virus evolution, stochastic processes like narrow transmission bottlenecks and genetic drift typically act to constrain the overall pace of adaptive evolution.\nOur data suggest that here, positive selection in index cats followed by a narrow transmission bottleneck may have instead accelerated the fixation of S H655Y, a potentially beneficial SARS-CoV-2 variant.\nOverall, our study suggests species-and context-specific adaptations are likely to continue to emerge.\nThis underscores the importance of continued genomic surveillance for new SARS-CoV-2 variants as well as heightened scrutiny for signatures of SARS-CoV-2 positive selection in humans and mammalian model systems." }, { "role": "assistant", "content": "{\"verdict\": \"CONTRADICT\", \"evidence\": [\"Transmission of SARS-CoV-2 in cats involved a narrow bottleneck, with new infections founded by fewer than ten viruses.\"]}" } ]
science.scireviewgen_multidoc_summarization
science.scireviewgen_multidoc_summarization.15841
[ { "role": "user", "content": "You are presented with multiple paper abstracts, alongside the associated literature review and chapter titles. Given the provided information, you are going to do a literature review. Please generate a coherent literature review that synthesizes the content, draws connections between different works, highlights major findings, and identifies any gaps in the existing research.\n\nLiterature Review Title: Toward the Internet of Underground Things: A Systematic Survey \nChapter Title: V. MAGNETIC INDUCTION FOR IOUT \n1. Abstract of Cited Paper (BIB001): Underground is a challenging environment for wireless communication since the propagation medium is no longer air but soil, rock and water. The well established wireless communication techniques using electromagnetic (EM) waves do not work well in this environment due to three problems: high path loss, dynamic channel condition and large antenna size. New techniques using magnetic induction (MI) can solve two of the three problems (dynamic channel condition and large antenna size), but may still cause even higher path loss. In this paper, a complete characterization of the underground MI communication channel is provided. Based on the channel model, the MI waveguide technique for communication is developed in order to reduce the MI path loss. The performance of the traditional EM wave systems, the current MI systems and our improved MI waveguide system are quantitatively compared. The results reveal that our MI waveguide system has much lower path loss than the other two cases for any channel conditions. \n2. Abstract of Cited Paper (BIB002): Ambient sensors are finding their way into our lives and the world around us enabling a great deal of real time environmental monitoring applications. Solid medium, like soil, compost, ice and ore, with relatively high conductivity and non-uniform dielectric properties, provide a unique challenge. One particular case where this is acutely true is black schist ore. While opaque to the propagation of high frequency electric waves, frequencies in the kHz range, and magnetic fields, are better able to penetrate these medium. This paper describes transmission characteristics for this kind of ore. It continues to show the modeling and development of a simple, cheap and low power 125 kHz inductively coupled wireless communication system and demonstrates its performance at a black schist ore mine site.\n\nLiterature Review:" }, { "role": "assistant", "content": "One of the major factors which limit the evolution of IoUT is the challenging underground environment. We have previously discussed that the heterogeneous soil medium and water content of the soil limit the transmission range of EM-based IoUT BIB001 . Hence, magnetic induction (MI) has been introduced to overcome the limitations of EM waves for IoUT BIB002 , BIB003 . In this section, we will cover multiple aspects of MI-based IoUT which include channel modeling, networking, and localization." } ]
science.craftchem_ner
science.craftchem_ner.8667
[ { "role": "user", "content": "You will be presented a sentence from biomedical research text. Extract all terms that correspond to chemical entities of biological interest (ChEBI).\nReturn your identified terms in a JSON array, formatted like this: [\"Entity1\", \"Entity2\", ...].\nIf no such entities are detected, respond by providing an empty JSON array [].\n\nBiomedical context:\nWhileALK5isnotrequiredforneuralcrestcellmigration,ourresultsdemonstratethatitplaysanimportantroleinthesurvivalofpost-migratorycardiacneuralcrestcells." }, { "role": "assistant", "content": "[]" } ]
science.chemsum_single_document_summarization
science.chemsum_single_document_summarization.87813
[ { "role": "user", "content": "You will be shown the title of a chemistry paper, together with sections excerpted from the body of the paper. The paper sections may include HTML tags like `<p>` and `</p>`; please ignore these. Your task is to write an abstract for the paper. Your response should include the abstract and no other text.\n\nTitle:\nAt the crossroads of biomacromolecular research: highlighting the interdisciplinary nature of the field\n\nPaper sections:\nBackground\n<p>Chemistry, often referred to as the central science, is critical to a fundamental understanding of the world around us. Chemical concepts have traditionally been central to the canonical sciences (i.e., biology, physics, and geology). As a chemist, it is gratifying to see the importance of chemistry continue in newer disciplines as well (i.e., materials science, forensics, astrobiology, biotechnology, bioinformatics, pharmacology, and atmospheric science). Chemistry Central Journal ambitiously aims to cover this rich diversity within modern chemical research. A quick perusal of the over fifty different subsections included within the journal reinforces this point. Moreover, many of these subsections are varied and interdisciplinary in their own right. This is especially true when considering biomacromolecular research.</p><p>Current biomacromolecular research is synonymous with physics, biology, and chemistry - clearly, a proper understanding of the physiochemical properties of these large molecules and how they function within the cell requires molecular-level insight. From a more practical point-of-view, figuring out ways to harness their plasticity is being vigorously pursued within electrical and nanoengineering, polymer science, and biotechnology. In order to parallel this breadth, the biomacromolecules section will be as inclusive as possible by featuring reports with both fundamental science and more applied points of view. For example, a sampling of the research topics that are appropriate includes:</p><p>• Bioenergetics</p><p>• Biomacromolecular function</p><p>• Biomaterials and biocatalysts</p><p>• Biomedical applications (biosensors, drug delivery devices, etc.)</p><p>• Biomimicry and molecular design</p><p>• Crowding and other in vivo effects</p><p>• Folding (experiment and theory)</p><p>• Molecular recognition and biomacromolecular assembly</p><p>• Physical chemistry of biological macromolecules</p><p>• Proteomics</p><p>• Single molecule studies</p><p>• Structural biology</p><p>In this inaugural commentary, I try to set the stage for this Gestalt view of biomacromolecular research by briefly highlighting examples of important issues in the overlap between biomacromolecules and several other topical research areas. In each of the exemplar discussions, key aspects of biomacromolecular research occur at the intersection of multiple scientific paradigms. While it is impossible to identify all research avenues that overlap with biomacromolecules in such a short commentary, hopefully a common interdisciplinary spirit is conveyed.</p>\n\nMaterials Science and Biotechnology\n<p>The overlap with materials and polymers is one of the most important research areas within biomacromolecules. The primary aim of which is to understand and manipulate materials at a fundamental level. Similarly, self-assembly and related nanotechnology efforts have a keen interest in biomaterials due to their exquisite specificity and efficiency. 
The Holy Grail is to develop medical implants that look, behave, and function like the biological systems they replace. It follows that one of the most important areas of biomaterials research attempts to figure out ways to escape a devastating immune response [1]. Moreover, understanding how these foreign biomaterials interact with tissues and/or the biological milieu is equally important. However, the utility of biomaterials is not limited to just biological problems. For example, it has recently been demonstrated, with much fanfare, that the electronic properties of DNA can be exploited to construct electrical circuits. This exciting observation introduces a whole new framework in which to develop smaller and more specific electronic devices [2]. A third important biomacromolecular research topic is biomimicry, which attempts to develop bio-inspired concepts. The guiding principle behind biomimicry is that evolution has resulted in highly optimized structures that can be transferred to modern technologies.</p><p>Likewise, the field of biotechnology also aims to strengthen the overlap between engineering principles and biological efforts. Biomacromolecules are commonly studied within biotechnology research due to their potential as drug-delivery devices, artificial tissues, scaffolding for artificial organs, etc. Encapsulation of therapeutic molecules within synthetic polymers provides a means to escape toxic side effects, control release rate, and target drug delivery. The importance of artificial tissues and organs is self-evident; however, the technical challenges posed by these tasks remain daunting. On a much smaller scale, de novo design and intelligent redesign of enzymes to catalyze specific industrial and environmentally related (remediation) reactions is being enthusiastically pursued due to the unparalleled specificity imparted by enzyme catalysts [3]. Unfortunately, the expectancy of all these efforts continues to surpass our current capacity.</p>\n\nAnalytical Chemistry\n<p>Shifting to a more fundamental point of view, structural studies form the foundation of nearly all biomacromolecular research. Linus Pauling, Jim Watson, Francis Crick, John Kendrew, Max Perutz, and many others revolutionized our understanding of proteins and nucleic acids [4]. Their pioneering work paved the way for today's more applied efforts. This structural context has helped set paradigms in modern biology about the origins of life, including: molecular genetics, enzymology, and energy transduction. In the decades since the genesis of structural biology, the Protein Data Bank has been primarily filled up with large numbers of globular proteins. This conspicuous enrichment of globular proteins has occurred because scientists are pragmatic; globular proteins represent the \"low hanging fruit.\" The next challenge is to characterize portions of the proteome that are less amendable to current structural techniques. The recent explosion in the number of structural studies related to membrane-bound proteins has provided key insight into this segment of the proteome. 
Roderick McKinnon's groundbreaking work on potassium channels illuminates the molecular basis of electrophysiology, provides a framework for designing new therapeutics, and (perhaps most importantly) confirms that all-atom structural biology of membrane-bound proteins is feasible [5].</p><p>The next pieces of the proteome likely to generate many breakthroughs are: (1) natively disordered proteins that have no structure as traditionally defined, and (2) massive macromolecular complexes. As their name implies, natively disordered proteins (aka, intrinsically unfolded proteins) challenge our very concept of structural biology [6]. These proteins compose 30-40% of the proteome, and appear to be commonly associated with cell signaling and/or responses to environmental stresses. Elucidating their un-structured characteristics, or the extent thereof, their expression profiles, and identifying the molecular counterparts they interact with should yield unfathomable insight. On a different front, large biological assemblies present many technical challenges to current analytical techniques. The lengthscales of these complexes prohibit characterization by standard methods (primarily x-ray crystallography and NMR). A sensible approach is to divide-and-conquer, where structural characterization of constituent subunits using standard techniques is followed by reassembly [7]. Reassembly requires a synergism of measurement and theory - computational modeling informed by basic connections from experiment should provide realistic models of the complexes. The most promising experimental techniques to elucidate the connectivity of these assemblies include: footprinting, cross-linking + mass spectrometry, and imaging methods. Once we have these tools in hand, we will be better equipped to investigate the cell's remarkable molecular machines that escape our current capabilities.</p>\n\nCell Biology\n<p>With the exception of structural studies, most of our current understanding of the physical properties of biomacromolecules comes from in vitro experiments within dilute solutions. This reductionist viewpoint has been necessary in order to make experimental interpretation tractable. However, this is clearly a drastic simplification; the cell is not a dilute solution! The consequences of this are profound. First, the cytoplasm of a cell is more gel-like than liquid. Second, the water molecule concentration of the cytoplasm is significantly reduced from bulk due to the presence of ions, small molecules, proteins, and organelles. And third, the local water molecule density is variable throughout the cell. All three of these observations can have pronounced effects on macromolecular structure, function, and stability. One of the most hotly researched topics in this area concerns the effects of molecular crowding. The congested cellular milieu significantly affects the physiochemical properties of its constituent molecules. As previously expected, and now confirmed, an important consequence of crowding is that the equilibrium between folded and unfolded states is shifted towards native-like populations [8]. The explanation for this is based on straightforward size considerations; the volume occupied by an unfolded protein is significantly greater, and thus less likely within a crowded environment, than its native-like counterpart. A better appreciation for these types of environmental effects will get us closer to understanding the underlying physics and chemistry of life. 
Moreover, it will move us ever close to designed biomacromolecular devices that behave and perform like naturally evolved functionalities.</p>\n\nChemical Theory\n<p>The first step towards any rational molecular design effort is to understand the underlying physics and chemistry of the system in question. Regrettably, application of standard (small molecule) chemical theory paradigms to macromolecules routinely fails to reproduce experimentally observable metrics. This performance gap is due to the extraordinary complexity of these large and highly interconnected polymers. For example, additivity of constituent free energy components is a common theme in small molecule chemistry; however, nonadditivity is frequently observed within protein double-mutant cycles [9]. Nevertheless, free energy nonadditivity is rarely considered in coarse-grained energy functions used to speed up theoretical investigations. In order for chemical theory to better recapitulate experimental observations, this problem must be tackled. A second problem arises from timescales [10]. The timescales associated with biological processes can span over fifteen orders of magnitude. Stated slightly differently, they overlap at least three different levels of modern chemical theory (quantum, molecular, and Brownian). The 1990's saw a large number of Federally funded research projects address this multi-timescale problem; unfortunately, a solution remains as elusive as ever. A third problem arises from the simple fact that biopolymers are molecules. This statement is not meant to be trite, but to draw attention to the fact that (like all molecules) biological macromolecules have intramolecular motions. In fact, due to size, these intramolecular motions can be amplified into drastic conformational changes. The most obvious way to theoretically examine intramolecular motions is via molecular dynamics simulations. Unfortunately, their utility is limited due to immense computational cost; this is especially true when considering large macromolecular complexes. Several elastic-network models [11] and graph-theoretic mechanical models [12] of flexibility have become popular recently due to their efficiency; however, these methods know nothing about thermodynamics and temperature. A key challenge of biomacromolecular theory is to develop methods that bridge the mechanical models with thermodynamics [13]. High quality reports that tackle fundamental problems associated with biomacromolecular theory are especially encouraged.</p>\n\nConclusion\n<p>Due to the innate complexity of proteins, nucleic acids, and polysaccharides, a proper understanding of biological macromolecules requires a diverse array of experimental and theoretical techniques. Moreover, as highlighted above, biomacromolecular research overlaps many different fields ranging from biomedicine to engineering. It should further be pointed out that it is impossible to completely segregate these interdisciplinary connections. For example, biomacromolecules as sensors of normal and pathological cellular function overlap all four areas discussed above. Theoretical methods provide the foundation upon which such sensors can be devised. The realization of such methods, which can are commonly based on FRET, fluorescence, electrochemical gradients, etc., is part and parcel to analytical chemistry. Developing the sensors into commercially viable devices is in-line with materials and engineering. 
Finally, interpretation of and reaction to the in vivo readings is emerging as a cornerstone of cellular biology. With all of this in mind, it is my goal to keep the biomacromolecules section as diverse as the Chemistry Central Journal itself. Above, I highlighted a few of the most confounding research tasks related to biomacromolecules currently receiving great attention amongst a myriad number of research focus areas. All aspects about biomacromolecules are appropriate and welcome for submission. If you have any questions about submitting a biomacromolecules research report, please feel free to contact me.</p>\n\nAcknowledgements\n<p>Don Jacobs (University of North Carolina at Charlotte) and Shankar Subramaniam (University of California at San Diego) are thanked for their comments regarding this commentary. Lauren Beastall is thanked for proofreading the commentary.</p>" }, { "role": "assistant", "content": "Due to their complexity and wide-ranging utility, biomacromolecular research is an especially interdisciplinary branch of chemistry. It is my goal that the Biomacromolecules subject area of Chemistry Central Journal will parallel this richness and diversity. In this inaugural commentary, I attempt to set the stage for achieving this by highlighting several areas where biomacromolecular research overlaps more traditional chemistry sub-disciplines. Specifically, it is discussed how Materials Science and Biotechnology, Analytical Chemistry, Cell Biology and Chemical Theory are each integral to modern biomacromolecular research. Investigators with reports in any of these areas, or any other dealing with biomacromolecules, are encouraged to submit their research papers to Chemistry Central Journal." } ]
science.chemprot_re
science.chemprot_re.429
[ { "role": "user", "content": "Given an abstract from a research paper, your task is to identify and label relationships between chemical compounds and genes mentioned in the abstract.\n\nThe relationship type must be one of ['DOWNREGULATOR', 'SUBSTRATE', 'INDIRECT-REGULATOR', 'PART-OF', 'MODULATOR-ACTIVATOR', 'REGULATOR', 'INHIBITOR', 'COFACTOR', 'UPREGULATOR', 'ACTIVATOR', 'ANTAGONIST', 'NOT', 'INDIRECT-DOWNREGULATOR', 'SUBSTRATE_PRODUCT-OF', 'INDIRECT-UPREGULATOR', 'AGONIST', 'PRODUCT-OF', 'MODULATOR', 'DIRECT-REGULATOR', 'UNDEFINED', 'AGONIST-INHIBITOR', 'AGONIST-ACTIVATOR', 'MODULATOR-INHIBITOR'].\n\nPlease format your output as a JSON array. Each entry in the array should express a single relation, formatted as [\"<Entity_A>\", \"<RELATION_A_B>\", \"<Entity_B>\"]. If no relations can be found, please output an empty JSON array [].\n\nAbstract:\n\nEffects of ozone and fine particulate matter (PM (2.5)) on rat system inflammation and cardiac function. In order to understand the toxic mechanisms of cardiovascular system injuries induced by ambient PM (2.5) and/or ozone, a subacute toxicological animal experiment was designed with exposure twice a week for 3 continuous weeks. Wistar rats were randomly categorized into 8 groups (n=6): 1 control group, 3 groups exposed to fine particulate matters (PM (2.5)) alone at 3 doses (0.2, 0.8, or 3.2 mg/rat), 1 group to ozone (0.81 ppm) alone and 3 groups to ozone plus PM (2.5) at 3 doses (0.2, 0.8, or 3.2 mg/rat). Heart rate (HR) and electrocardiogram (ECG) was monitored at approximately 24-h both after the 3rd exposure and the last (6th) exposure, and systolic blood pressure (SBP) was monitored at approximately 24-h after the 6th exposure. Biomarkers of systemic inflammation and injuries (CRP, IL-6, LDH, CK), heart oxidative stress (MDA, SOD) and endothelial function (ET-1, VEGF) were analyzed after the 6th exposure. Additionally, myocardial ultrastructural alterations were observed under transmission electron microscopy (TEM) for histopathological analyses. Results showed that PM (2.5) alone exposure could trigger the significant increase of CRP, MDA, CK, ET-1 and SBP and decrease of heart rate variability (HRV), a marker of cardiac autonomic nervous system (ANS) function. Ozone alone exposure in rats did not show significant alterations in any indicators. Ozone plus PM (2.5) exposure, however, induced CRP, IL-6, CK, LDH and MDA increase, SOD and HRV decrease significantly in a dose-response way. Meanwhile, abnormal ECG types were monitored in rats exposed to PM (2.5) with and without ozone and obvious myocardial ultrastructural changes were observed by TEM. In conclusion, PM (2.5) alone exposure could cause inflammation, endothelial function and ANS injuries, and ozone potentiated these effects induced by PM (2.5)." }, { "role": "assistant", "content": "[[\"Ozone\", \"INDIRECT-UPREGULATOR\", \"CK\"], [\"Ozone\", \"INDIRECT-DOWNREGULATOR\", \"SOD\"], [\"Ozone\", \"INDIRECT-UPREGULATOR\", \"CRP\"], [\"Ozone\", \"INDIRECT-UPREGULATOR\", \"IL-6\"], [\"Ozone\", \"INDIRECT-UPREGULATOR\", \"LDH\"]]" } ]
science.qasa_abstractive_qa
science.qasa_abstractive_qa.887
[ { "role": "user", "content": "You will be shown an excerpt from a computer science scientific research paper, followed by a question about the paper. Please answer the question. Do not include any text in your response other than the answer.\n\nContext: To reduce this issue, we follow the intuition that if the encoder is providing useful information from the beginning of decoder training, the decoder is more likely to make use of \\boldsymbol{z} Fu et al. (2019); He et al. (2019). Specifically, we use the cyclical schedule to anneal \\beta for 10 periods Fu et al. (2019).Within one period, there are three consecutive stages: Training AE (\\beta=0) for 0.5 proportion, annealing \\beta from 0 to 1 for 0.25 proportion, and fixing \\beta=1 for 0.25 proportion. When \\beta>0, we use the KL thresholding scheme Li et al. (2019); Kingma et al. (2016), and replace the KL term \\mathcal{L}_{R} in (6) with a hinge loss term that maxes each component of the original KL with a constant \\lambda:ℒR′=∑imax[λ,KL(qϕ(zi|𝒙)||p(zi))]\\displaystyle\\mathcal{L}_{R}^{\\prime}=\\sum_{i}\\max[\\lambda,\\mbox{KL}(q_{\\boldsymbol{\\phi}}(z_{i}|\\boldsymbol{x})||p(z_{i}))]caligraphic_L start_POSTSUBSCRIPT italic_R end_POSTSUBSCRIPT start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT = ∑ start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT roman_max [ italic_λ , KL ( italic_q start_POSTSUBSCRIPT bold_italic_ϕ end_POSTSUBSCRIPT ( italic_z start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT | bold_italic_x ) | | italic_p ( italic_z start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT ) ) ](9)Here, z_{i} denotes the ith dimension of \\boldsymbol{z}. Usingthe thresholding objective causes learning to give up driving down KL for dimensions of \\boldsymbol{z} that are already beneath the target compression rate. Due to the regularization term \\mathcal{L}_{R}, Optimus can organize sentences in the way specified by the prior distribution. For basic VAEs, a smooth feature space is learned, which is specifically beneficial for better generalization when the number of task-specific labeled data is low.To have a fair comparison, we follow the BERT paper, where the hidden feature of [CLS] is used as thesentence-level representation. In this way, the linear classifiers for both models have the same number of trainableparameters. Though the latent vector \\boldsymbol{z} is typically used as sentence-level representation in VAE literature, we argue that the KL regularization applied on \\boldsymbol{z} has a large impact on the preceding layer feature {\\boldsymbol{h}}_{\\texttt{[CLS]}}.Specifically, {\\boldsymbol{h}}_{\\texttt{[CLS]}} is fed into an linear classifier {{\\bf W}}_{\\text{C}}\\in\\mathbb{R}^{K\\times H}, where K is the number of classes, with objective -\\log(\\text{softmax}({\\boldsymbol{h}}_{\\texttt{[CLS]}}{{\\bf W}}_{\\text{C}}^{\\top})). Two schemes are used:(\\textup{\\it i}) Fine-tuning, where both the pre-trained model and the classifier are updated;(\\textup{\\it ii}) Feature-based, where pre-trained model weights are frozen to provide embeddings for the classifier update.\n\nQuestion: Why does annealing the value of beta and, as a consequence, decrease the KL regularization during training cause the decoder to make greater use of z?" 
}, { "role": "assistant", "content": "While the authors explained in detail the method in which they cyclically annealed the value of beta while training their VAE and that KL regularization impacts features on the previous layer, this paper does not delve into the reasons why annealing the value of beta causes the decoder to make greater use of z. This is possibly because annealing beta as described in the paper is a widely used practice while training models such as these, which is why the authors may not have chosen to explain this assuming that these are broadly known pieces of information." } ]
science.data_reco_mcq_sc
science.data_reco_mcq_sc.10428
[ { "role": "user", "content": "Given a short query with keyphrases about a research idea in 2019 in the domain of artificial intelligence, recommend the relevant datasets to validate the research idea from options provided. A short description of the datasets is also provided to help you.\n\nQuery: I want to build a few-shot segmentation network.\n\nKeyphrases: few-shot segmentation images\n\nDataset description:\nNDD20: Northumberland Dolphin Dataset 2020 (NDD20) is a challenging image dataset annotated for both coarse and fine-grained instance segmentation and categorisation. This dataset, the first release of the NDD, was created in response to the rapid expansion of computer vision into conservation research and the production of field-deployable systems suited to extreme environmental conditions -- an area with few open source datasets.\nCOCO: The MS COCO (Microsoft Common Objects in Context) dataset is a large-scale object detection, segmentation, key-point detection, and captioning dataset. The dataset consists of 328K images.\nFSOD: Few-Shot Object Detection Dataset (FSOD) is a high-diverse dataset specifically designed for few-shot object detection and intrinsically designed to evaluate thegenerality of a model on novel categories.\nFewRel 2.0: A more challenging task to investigate two aspects of few-shot relation classification models: (1) Can they adapt to a new domain with only a handful of instances? (2) Can they detect none-of-the-above (NOTA) relations?\n\nOptions:- A: NDD20 B: COCO C: FSOD D: FewRel 2.0" }, { "role": "assistant", "content": "COCO" } ]
science.drug_combo_extraction_re
science.drug_combo_extraction_re.437
[ { "role": "user", "content": "You will be shown a paragraph from a biomedical research article. Your task is to extract all drug combinations appearing in a specific sentence within in the paragraph. Each drug combination should be assigned to one of the following three categories:\n\n- POS: Indicates a positive interaction between the drugs in the combination; for instance, drugs that have an additive or synergistic effect.\n- NEG: Indicates a negative interaction between the drugs in the combination; for instance, combining these drugs might result in adverse side effects, or might reduce their efficacy.\n- COMB: Indicates that the drugs are given in combination, but the type of interaction between the drugs cannot be determined based on the provided context.\n\nPlease format your answer as a json array. Each entry in the array should itself be an array formatted as [<drug_1>, <drug_2>, ..., <drug_n>, <category>] where <category> is one of the three categories listed above and <drug_1>, ..., <drug_n> are drug names copied verbatim from the input sentence. Different combinations may involve different numbers of drugs; in other words, the `n` in <drug_n> may vary.\n\nAs an example, the following output describes two extracted drug combinations; one between two drugs, and one between three drugs:\n\n[[\"artesunate\", \"artemether\", \"COMB\"], [\"Nifedipine\", \"Labetalol\", \"Prazosin\", \"POS\"]]\n\nThe paragraph and specific sentence are below. You should only extract relationships expressed in the specific sentence; the paragraph is provided for context.\n\nIn your output, return only the json array and no other text. If no drug combinations appear, return an empty array `[]`.\n\nparagraph: Combination of gemcitabine antagonizes antitumor activity of paclitaxel through prevention of mitotic arrest and apoptosis. paclitaxel, the first member of taxanes, is one of the most active chemotherapeutic agents developed in the last decade for the treatment of advanced breast cancer and many other types of solid tumors. The promising clinical activity of paclitaxel has also promoted considerable interest in combining this drug with other anti-tumor agents. In this study, we assessed the cytotoxic interaction between paclitaxel and gemcitabine administered at various schedules to human breast and ovarian cancer cells. Through a series of in vitro assays including 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide assays, DNA fragmentation, and flow cytometric analyses, we found that gemcitabine could significantly antagonize the cytotoxic effects of paclitaxel when tumor cells were exposed to the two drugs simultaneously or exposed to gemcitabine before paclitaxel. However, there was little antagonistic interaction observed when paclitaxel was administered before gemcitabine. Further analyses demonstrated that gemcitabine could significantly interfere with the cytotoxic effects of paclitaxel on both mitotic arrest and apoptotic cell death unless paclitaxel is administered before gemcitabine . In addition, biochemical examinations revealed that pretreatment or cotreatment of gemcitabine inhibited paclitaxel-induced IkappaBalpha degradation and bcl-2 phosphorylation that are believed to play critical roles in the signal pathways leading to apoptotic cell death. These results indicate that the interaction between paclitaxel and gemcitabine is highly schedule dependent. 
Exposure of tumor cells to gemcitabine before paclitaxel or two drugs simultaneously could result in pronounced antagonism. The optimal schedule for this combination might be sequential exposure to paclitaxel followed by gemcitabine.\n\nsentence: Further analyses demonstrated that gemcitabine could significantly interfere with the cytotoxic effects of paclitaxel on both mitotic arrest and apoptotic cell death unless paclitaxel is administered before gemcitabine ." }, { "role": "assistant", "content": "[[\"paclitaxel\", \"gemcitabine\", \"POS\"]]" } ]
science.scireviewgen_multidoc_summarization
science.scireviewgen_multidoc_summarization.29061
[ { "role": "user", "content": "You are presented with multiple paper abstracts, alongside the associated literature review and chapter titles. Given the provided information, you are going to do a literature review. Please generate a coherent literature review that synthesizes the content, draws connections between different works, highlights major findings, and identifies any gaps in the existing research.\n\nLiterature Review Title: AR Enabled IoT for a Smart and Interactive Environment: A Survey and Future Directions \nChapter Title: AR Enabled IoT Platform for a Smart and Interactive Environment \n1. Abstract of Cited Paper (BIB001): This paper proposes a Hyper-connected IoT-VR (HI) Platform that designed interconnection of human, device and space. We utilized the high interoperability of IoT platform based services and the intuitive services from VR contents. HI Platform can provide customizable and intuitive remote services. Furthermore, this paper shows a Hyper-connected IoT-VR Office (HIO) Zone in order to implement the HI Platform. \n2. Abstract of Cited Paper (BIB002): Modern consumer electronics (CE) devices have evolved such that human interaction with machines is ever more natural and effective. The forthcoming deployment of the Internet of Things (IoT) over the existing Internet infrastructure promises to expand this interaction by providing applications and services that allow the human-computer interaction to integrate an automated customization of everyday life [1]. The vision is very optimistic on the amount of smart objects (SOs) deployed around us as the building blocks of the IoT. Experts forecast that 28 billion \"things\" will be connected to the Internet by 2020 [2].\n\nLiterature Review:" }, { "role": "assistant", "content": "Here, we allude to three key components: distributed AR data management; object-guided tracking; context-based augmented reality (AR) interaction, and the resulting advantages of combining Internet of Things (IoT) and AR. Figure 3 shows a possible architecture of such combination as a basis for smart and interactive AR services. The AR service client interacts directly with the IoT object of interest in the immediate area and, upon connection, immediately receives context-relevant AR datasets (for tracking or customized service content) BIB001 . Depending on the context, appropriate and available services, such as a simple product information display, appliance control, or an instruction manual, are shown in the proper form (for example, through a mobile graphical user interface (GUI), AR glasses, mobile AR, voice, spatially registered AR, or simple overlay) with which to interact. Additionally, the AR system can track the relationship between a physical object and tracking information such as 3D features from images. Next, the user has the capability, in terms of efficient operation of IoT objects, to look through it with helpful AR content associated with the IoT environment. This is based on the help of both the IoT communication capability and the graphical AR environment. Then, any valuable application and service to connect everyday IoT objects and provide the user's AR experience, such as an AR manual, training, control, and instructions, offers comprehensive platforms for ways that users can access and interact with mobile AR that cover everything and everyone to engage with the physical space BIB003 . 
Note that this typical approach can be realized using the filtering method that nearby AR clients to an IoT object can identify and augment the candidate target object to obtain contextual datasets . Figure 4 shows a system to combine AR and IoT in human-real-world interactions, with a comparison of ubiquitous computers and augmented interaction. Note that a ubiquitous computer is made to appear anytime and everywhere in contrast to desktop computing, augmented interaction refers to controlling in an AR environment, and the proposed AR mixed with IoT is to control the surrounding IoT devices using AR visualization and interaction. The mixing system of AR and IoT has the main difference that the user interacts with each IoT object embodied in the real world using computer-augmented information BIB002 . We can imagine that the system is formed by merging ubiquitous computers ( Figure 4a ) and augmented interactions ( Figure 4b ). Jo et al., for example, proposed an architecture for combining an AR interface with the IoT for shopping services. They developed a proof-of-concept prototype of the AR framework tested on IoT lamps and an in situ interaction method to support control directly with the IoT object ( Figure 4c ) . Figure 5 shows the newly created system for our paper to explain in situ operation of IoT lamps. Here, an AR user wearing a helmet-type AR device can interact intuitively (e.g., with hand gestures) to turn the lamp on or off with remote interaction without having to fiddle with the actual device. We survey the state-of-the-art elements of these three key areas, in the next sections, to draw a clearer picture of what future IoT-enabled AR will look like, how it will operate, and what it will be capable of providing." } ]
science.chemsum_single_document_summarization
science.chemsum_single_document_summarization.50001
[ { "role": "user", "content": "You will be shown the title of a chemistry paper, together with sections excerpted from the body of the paper. The paper sections may include HTML tags like `<p>` and `</p>`; please ignore these. Your task is to write an abstract for the paper. Your response should include the abstract and no other text.\n\nTitle:\nSustainable ppm level palladium-catalyzed aminations in nanoreactors under mild, aqueous conditions\n\nPaper sections:\nIntroduction\n<p>Ligated palladium has been and continues to be a privileged catalyst in the realm of organic synthesis. 1 In particular, the widespread application of Heck, Suzuki-Miyaura, and Negishi coupling reactions earned them the highest appreciation in chemistry. 2 Palladium-catalyzed aminations have also been pursued enthusiastically in elds ranging from pharmaceuticals and agrochemicals to others in the ne chemical industry. 3 The versatility and utility of such aminations has also made it among the more reliable methods for construction of C-N bonds that are so prevalent in many target molecules. 4 Since the mid 1990's, extensive ligand design as well as mechanistic investigations have emerged, leading to commonly employed phosphine ligands that tend to be monodentate, bidentate, or dialkylbiarylphosphines. 5 Development of pre-catalysts that generate active Pd(0) species in the reaction mixture have been also actively investigated. 6 Notable examples include Pd/ phosphine ligand complexes, palladacycles, and NHCs, among others. 6 Several generations of pre-catalysts have been developed, with each system allowing a seemingly broader substrate scope. 7 Nonetheless, much of the literature suggests that conditions typically used for these aminations are generally harsh, involving high boiling organic solvents (e.g., toluene, dioxane, DMF, etc.) as the reaction medium, oentimes requiring extended reaction times. 3 Moreover, catalyst loadings in the 2-10 mol% range of Pd are not uncommon, 4e representing a non-sustainable scenario given the endangered status of platinoids. 8 Hence, a procedure that is not only general and mild but can also be used under environmentally responsible conditions would be highly desirable, perhaps encouraging further use of this approach to C-N bond formation. Towards this goal, we and others have identied ligand scaffolds that complex Pd forming catalysts that mediate such couplings under mild, micellar conditions in water. 9 However, the crucial reaction parameters involving reduction in the amount of endangered metal required for catalysis to a sustainable level had yet to be achieved; indeed, in addition to high loadings being the norm, even greater amounts of privileged ligands are required. To begin to convert Pd-catalyzed aminations to truly sustainable processes, we have endeavored to develop, and this contribution report on, a general protocol that takes place in water, requires little-to-no investment in energy other than that available between room temperature and 45 C, and is particularly mindful of the endangered status of palladium. 8,10 Reducing the loading of a Pd catalyst by 1-2 orders of magnitude requires a signicant net increase in reagent activity. Realization of this goal becomes all the more challenging in that high levels of reaction conversion must be realized within reasonable timeframes. 
Among the many catalysts screened for this study, some of which are shown in Scheme 1, we took special note of reports by Colacot and co-workers describing two new classes of p-allylpalladium complexes containing biarylphosphine ligands. 6d,f Though mechanistic aspects of these studies were not investigated in aqueous media, in organic solvents it was shown that active \"L-Pd(0)\" species could be generated from either neutral or cationic ligated frameworks that are both air-and moisture-stable. 6f These preformed monoligated-Pd complexes are particularly appealing as they avoid use of excess ligands which are formally unnecessary for the catalytic cycle, ultimately generate organic waste, and can add cost to the overall process. Moreover, such catalyst precursors help ensure consistent results between batches, which has obvious advantages over in situ preparations both at an early stage of discovery as well as in future applications, especially at scale. 11</p>\n\nResults and discussion\n<p>A model reaction between 3-bromoanisole (1) and indoline (2) was initially used to screen for catalyst activity, where only 1000 ppm (i.e., 0.10 mol%) of the Pd-containing catalyst was invested. All reactions were run using nanomicelles derived from TPGS-750-M in water between rt and 45 C. While several pre-catalysts failed to deliver any detectable product, systems featuring a biarylphosphine scaffold proved more active, as noted previously by the Schmitt group. 9e Among t-BuXPhosbased complexes, the three cationic species [t-BuXPhos(Pd-pallyl)]OTf (C8), [t-BuXPhos(Pd-p-crotyl)]OTf (C9) and [t-BuX-Phos(Pd-p-cinnamyl)]OTf (C10) each led with high efficiency to the desired amination product. The most lipophilic pre-catalyst, C10, afforded the diarylamine 3 in close to quantitative yield. Use of triate salts was crucial, as the corresponding chlorides were less effective, presumably due to their tendency to form dimeric complexes via bridging chloride ions, species that are apparently catalytically inferior. 6d-f Having identied C10 as the most effective pre-catalyst under micellar conditions at the 1000 ppm level of Pd, we evaluated the scope of this C-N coupling chemistry between anilines/aniline-like amines and aryl bromides (Scheme 2). Primary and secondary anilines reacted efficiently between rt and 45 C with aryl bromides bearing electron-donating, -withdrawing, or neutral substituents (products 3-20). A highly functionalized heterocyclic bromide from the Merck Informer Library X3 (ref. 12) also participated to give the desired product 18. 13 Aryl chlorides were totally inert under these conditions (e.g., 8), and hence, complete selectivity for a bromide over chloride is assured based on formation of products 15, 23, 24, and 27. For reactions that proved to be particularly sluggish, an increase in the loading of Pd (from 0.10 to 0.25 mol%) led to the desired C-N bonds, as shown for products 21-27. Among these, secondary alkyl amino compounds such as morpholine (to give 21) and pyrrolidine (to give 25) could be used, though an excess of inorganic base was needed to achieve reasonable levels of conversion in both cases. Multiple C-N cross-couplings in 1-pot could be effected leading to both C 2 -, C 3 -symmetric, and axially chiral amination products 28-30, each in high overall yield. 
The latter two products required an increase to 5000 ppm of Pd catalyst, although multiple C-N bonds were made in each case.</p><p>Procedurally, this new protocol involves a very straightforward benchtop setup, requires no glovebox operations, and the catalysts used (e.g., C8 and C10) are easily prepared and stored. 6f Moreover, in many of the cases shown in Scheme 2, the solid products formed precipitate out of the aqueous medium necessitating only ltration, thereby avoiding any formal workup. Subsequent trituration leads to pure product without reliance on column chromatography, further minimizing both organic solvent and aqueous waste streams.</p><p>As aniline coupling partners are susceptible to oxidation, they are best puried prior to use. On the other hand, the corresponding nitro group-containing derivatives, as precursors, are oen stable solids at ambient temperatures. Combining, therefore, a nitro group reduction followed (without isolation) by a ppm level Pd-catalyzed amination would expand the appeal of this methodology, especially for industrial applications. We have previously developed a nitro group reduction protocol using the same aqueous micellar medium, where commercially available and inexpensive carbonyl iron powder (CIP) has been shown to be amenable to scale-up, delivering grams of amination products. 8g As illustrated in Scheme 3A, the crude amine product from the reduction of 31, following ltration through Celite, could be used directly in the amination protocol to afford the cross-coupling product 32 in good overall yield.</p><p>A gram-scale amination led to the preparation of diarylamine 8, starting with 4-uoroaniline and 4-bromoacetophenone (Scheme 3B). The almost equally effective pre-catalyst [t-BuXPhos(Pd-p-allyl)]OTf (C8) 14 was used due to its commercial availability. Without alteration from our optimized conditions (vide supra), the reaction run at 100 times its original scale was complete in less than one hour. More than ve grams of pure product could be collected via ltration, with an astonishingly low E factor (<1), as a measure of waste created (see ESI † for details). 15 While the synthesis of 8 was highly efficient at the ppm level of catalyst under aqueous conditions, such was not the case when performed in organic solvents commonly used for these aminations. 4e Thus, in either toluene or 1,4-dioxane under otherwise identical conditions, none of the desired product was detected (Scheme 3C). NMR analyses showed a combination of the corresponding imine had formed, along with unreacted bromoacetophenone in ca. a 1 : 1 ratio. A similar solvent comparison study was performed leading to amine 3, although low yields were obtained in this case as well. In the composite, these observations highlight the potentially signicant (e.g., chemoselectivity) differences that can be found between traditional and micellar catalysis.</p><p>Although increasing the loading (e.g., from 0.10 to 0.25 mol%; see Scheme 2) for especially difficult C-N coupling reactions leads to formation of the desired amination, the same net outcome could be obtained using 1250 ppm Pd simply by a 10 C increase in reaction temperature. Comparison studies leading to product 23 clearly showed that this was indeed the case (Scheme 4A).</p><p>By virtue of the ppm level of Pd catalyst associated with these aminations, the amount of residual metal to be found in the product amines, aer standard workup and purication, is expected to be below the FDA-approved level of 10 ppm per dose. 
16 As test cases, products (e.g., 5, 12, and 23; Scheme 4B) were randomly selected and subjected to ICP-MS analyses for Pd content. Results (see ESI † for details) indicated that each has a Pd content #4 ppm, suggesting that additional time and cost associated with further purication, contrary to that based on traditional couplings done at 10-100 times the loading of Pd, should not be needed for products formed using this technology. Further reduction in the amount of Pd invested can be achieved by recycling of the aqueous reaction medium. As shown in the coupling between ethyl p-bromobenzoate (33) and p-nitroaniline (34; Scheme 4C), the solid diarylamine product 35 is isolated by simple ltration. The ltrate is then reused as the reaction medium aer supplementing with 500 ppm of ligated Pd. As summarized in this scheme, amine 35 is formed in good yields consistently over four additional cycles, with an average input of only 600 ppm (i.e., 0.06 mol%) of catalyst C10 per coupling.</p><p>To document the applicability of this aqueous coupling chemistry, direct comparisons were made by focusing on six key intermediates associated with medicinally relevant natural products or APIs (Scheme 5A). 17 Aminations that arrive at key intermediates 35-40 reveal that this methodology operates at much lower catalyst loadings and temperatures, and is typically faster than reactions in organic media. Moreover, yields tend to be comparable to, or better than, those reported previously. Another recent example providing further testimony to the potential savings of precious metal is shown in Scheme 5B. 18 Amination at C-2 in a 3-substituted estrone derivative afforded product 42 using 40 times less palladium and ligand (10 mol% ¼ 100 000 ppm), run at a lower temperature (45 vs. 110 C), and in less time (16 vs. 24 h). The level of residual Pd in the product from both approaches led to the expected outcomes: from amination in toluene: 165 ppm Pd, while from this ppm level amination in water: 3 ppm Pd.</p>\n\nConclusions\n<p>Overall, this work provides a signicant advance in Pd-catalyzed C-N cross-coupling chemistry. Specically, it offers the synthetic community, based on a known and well dened 1 : 1 Pd : L complex, aminations that: (1) rely on ppm levels of palladium; (2) are glove-box-free and operationally simple; (3) are general, highly efficient, and offer gram-scale capability; (4) perform under very mild conditions, with temperatures at 45 C or below; (5) utilize an aqueous reaction medium that avoids organic solvents, yet involve very little water; (6) allow for recycling of the surfactant, water, and catalyst; and (7) lead to ppm levels of residual palladium in the amination products. The process further illustrates the synthetic potential of pre-complexing a lipophilic ligand with Pd that can lead to pre-catalyst loadings at the ppm level. Extending the current technology to additional C-N cross couplings as well as use of Fe/ppm Pd nanoparticles 8a for similar synthetic purposes are currently under study.</p>\n\nConflicts of interest\n<p>There are no conicts to declare.</p>" }, { "role": "assistant", "content": "A 1 : 1 Pd : ligand complex, [t-BuXPhos(Pd-p-cinnamyl)]OTf, has been identified as a highly robust precatalyst for amination reactions leading to diarylamines, where loadings of metal are typically at 1000 ppm Pd, run in water at temperatures between rt and 45 C. The protocol is exceptionally simple, is readily scaled, and compares very favorably vs. traditional amination conditions. 
It has also been shown to successfully lead to key intermediates associated with several physiologically active compounds." } ]
science.cdr_ner
science.cdr_ner.340
[ { "role": "user", "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Disease\", \"Chemical\"].\n\nPlease return the output as a JSON object of the format: {\"Chemical\" : [\"Naloxone\", ...], \"Disease\" : [\"Hypotensive\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nRisk factors of sensorineural hearing loss in preterm infants. Among 547 preterm infants of < or = 34 weeks gestation born between 1987 and 1991, 8 children (1.46%) developed severe progressive and bilateral sensorineural hearing loss. Perinatal risk factors of infants with hearing loss were compared with those of two control groups matched for gestation and birth weight and for perinatal complications. Our observations demonstrated an association of hearing loss with a higher incidence of perinatal complications. Ototoxicity appeared closely related to a prolonged administration and higher total dose of ototoxic drugs, particularly aminoglycosides and furosemide. Finally, we strongly recommend to prospectively and regularly perform audiologic assessment in sick preterm children as hearing loss is of delayed onset and in most cases bilateral and severe." }, { "role": "assistant", "content": "{\"Chemical\": [\"aminoglycosides\", \"furosemide\"], \"Disease\": [\"sensorineural hearing loss\", \"Ototoxicity\", \"ototoxic\", \"hearing loss\"]}" } ]
science.acl_arc_intent_classification
science.acl_arc_intent_classification.1626
[ { "role": "user", "content": "You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:\n- Background: provides context or foundational information related to the topic.\n- Extends: builds upon the cited work.\n- Uses: applies the methods or findings of the cited work.\n- Motivation: cites the work as inspiration or rationale for the research.\n- CompareOrContrast: compares or contrasts the cited work with others.\n- FutureWork: cites the work as a direction for future research.\n\nYour answer should be a single word from the following list of options: [\"Background\", \"Extends\", \"Uses\", \"Motivation\", \"CompareOrContrast\", \"FutureWork\"]. Do not include any other text in your response.\n\nSection Title:\nmethod\n\nContext before the citation:\nIn order to obtain semantic representations of each word, we apply our previous strategy (Schone and Jurafsky (2000)). This methodology is well-described in the literature (Landauer, et al., 1998; Manning and Schütze, 1999). The LSA approach then zeros out all but the top k singular values of the SVD, which has the effect of projecting vectors into an optimal kdimensional subspace.\n\nCitation Sentence:\nIn order to obtain semantic representations of each word , we apply our previous strategy ( Schone and Jurafsky ( 2000 ) ) .\n\nContext after the citation:\nRather than using a termdocument matrix, we had followed an approach akin to that of Schütze (1993), who performed SVD on a Nx2N term-term matrix. The N here represents the N-1 most-frequent words as well as a glob position to account for all other words not in the top N-1. The matrix is structured such that for a given word w’s row, the first N columns denote words that precede w by up to 50 words, and the second N � columns represent those words that follow by up to NCS(µ, ) =f NCS exp[ ((x-µ)/ )2]dx 50 words. Since SVDs are more designed to work then, if there were nR items in the ruleset, the with normally-distributed data (Manning and probability that a NCS is non-random is Schütze, 1999, p. 565), we fill each entry with a Pr(NCS)_ nT NCS(µT, T) normalized count (or Z-score) rather than straight frequency." }, { "role": "assistant", "content": "Extends" } ]
science.qasa_abstractive_qa
science.qasa_abstractive_qa.66
[ { "role": "user", "content": "You will be shown an excerpt from a computer science scientific research paper, followed by a question about the paper. Please answer the question. Do not include any text in your response other than the answer.\n\nContext: In Figure 4, our stochastic data augmentation (i.e., P>0) brings a significant improvement compared to the case of using the fixed neighborhood information (i.e., P=0) as encoder inputs.This result shows that the augmented views of positive interactions encourage BUIR to effectively learn users’ preference on items even in much sparse dataset.Interestingly, in case of the Ciao dataset which is less sparse than CiteULike, the benefit of our augmentation linearly increases with the maximum drop probability.This is because there is room for producing more various views (i.e., larger perturbation) based on a relatively more number of neighbors, and it eventually helps to boost the recommendation performance.To sum up, our framework that adopts the neighbor augmentation function successfully relieves the data sparsity issue of the OCCF problem, by leveraging the augmented views of few positive interactions. Our extensive evaluation on real-world implicit feedback datasets shows that BUIR consistently performs the best for top-K recommendation among a wide range of OCCF methods.In particular, the performance improvement becomes more significant in sparser datasets, with the help of utilizing augmented views of positive interactions as well as eliminating the effect of uncertain negative interactions.In addition, comparison results on a downstream task, which classifies the items into their category, support that BUIR learns more effective representations than other OCCF baselines.\n\nQuestion: How does the authors show utilizing augmented views of positive interactions can lead the performance improvement, especially in sparser datasets?" }, { "role": "assistant", "content": "They show augmented views of positive interactions can lead the performance improvement, especially in sparser datasets by showing the experimental result of stochastic data augmentation achieved a big improvement compared to the case of using the fixed neighborhood information as encoder inputs." } ]
science.gnormplus_ner
science.gnormplus_ner.334
[ { "role": "user", "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Gene\"].\n\nPlease return the output as a JSON object of the format: {\"Gene\": [\"SSX\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nDelineation of the clinical phenotype associated with OPHN1 mutations based on the clinical and neuropsychological evaluation of three families. Recent reports have demonstrated that mutations in the OPHN1 gene were responsible for a syndromic rather than non-specific mental retardation. Abnormalities of the posterior fossa with cerebellar hypoplasia have been demonstrated in all male patients reported to date. We report here a new family with X-linked mental retardation due to mutation in OPHN1 and present unpublished data about two families previously reported, concerning the facial and psychological phenotype of affected males and carrier females. Our study confirms that cerebellar hypoplasia is a hallmark of this syndrome. In addition, affected males display facial similarities that can help the diagnosis. Most carrier females have mild mental retardation and subtle facial changes." }, { "role": "assistant", "content": "{\"Gene\": [\"OPHN1\"]}" } ]
science.data_reco_mcq_sc
science.data_reco_mcq_sc.11561
[ { "role": "user", "content": "Given a short query with keyphrases about a research idea in 2018 in the domain of artificial intelligence, recommend the relevant datasets to validate the research idea from options provided. A short description of the datasets is also provided to help you.\n\nQuery: A large-scale dataset is constructed for deep feature representation learning of lesions. A triplet network\n\nKeyphrases: lesion embedding learning radiology images\n\nDataset description:\nLOGO-Net: A large-scale logo image database for logo detection and brand recognition from real-world product images.\nImageNet: The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\nISTD: The Image Shadow Triplets dataset (ISTD) is a dataset for shadow understanding that contains 1870 image triplets of shadow image, shadow mask, and shadow-free image.\nCityscapes: Cityscapes is a large-scale database which focuses on semantic understanding of urban street scenes. It provides semantic, instance-wise, and dense pixel annotations for 30 classes grouped into 8 categories (flat surfaces, humans, vehicles, constructions, objects, nature, sky, and void).\n\nOptions:- A: LOGO-Net B: ImageNet C: ISTD D: Cityscapes" }, { "role": "assistant", "content": "ImageNet" } ]
science.mslr2022_cochrane_multidoc_summarization
science.mslr2022_cochrane_multidoc_summarization.2920
[ { "role": "user", "content": "You will be provided with the titles and abstracts of a collection of medical RCT reports studying a clinical question -- for instance, the efficacy of an intervention on some medical outcome. Please write a few sentences synthesizing the findings of these reports, which could serve as a conclusion section for a systematic review article. Your response should not include any other text. The RCT reports are below.\n\n\n\nTitle 1:\nExtending recall intervals--effect on resource consumption and dental health.\nAbstract 1:\nThe dental health of children has improved in recent years. This has stimulated more flexible planning and implementation of public dental care programs. The aim of the study was to compare time required for dental care and changes in dental health over a 2-yr period for patients examined and treated every 12 months and patients examined and treated every 24 months. The material comprised 185 children aged 3, 16 and 18 yr. Children with high caries prevalence were not included in the material. Time used for examination and treatment and dental health parameters were recorded. The data were analyzed using multiple linear regression analysis. Mean time used for examinations and mean total time used by the dental health service over the 2-yr period were significantly less for patients examined every 24 months than for patients examined every 12 months, while the treatment times did not differ between the groups. The longer recall interval was associated with greater DMFS increment but this was not statistically significant. The study indicates that the dental health service in Norway could save resources, i.e. increase the productivity in the short term by extending the intervals between examinations." }, { "role": "assistant", "content": "There is insufficient evidence from randomised controlled trials (RCTs) to draw any conclusions regarding the potential beneficial and harmful effects of altering the recall interval between dental check-ups. There is insufficient evidence to support or refute the practice of encouraging patients to attend for dental check-ups at 6-monthly intervals. It is important that high quality RCTs are conducted for the outcomes listed in this review in order to address the objectives of this review." } ]
science.craftchem_ner
science.craftchem_ner.4886
[ { "role": "user", "content": "You will be presented a sentence from biomedical research text. Extract all terms that correspond to chemical entities of biological interest (ChEBI).\nReturn your identified terms in a JSON array, formatted like this: [\"Entity1\", \"Entity2\", ...].\nIf no such entities are detected, respond by providing an empty JSON array [].\n\nBiomedical context:\nMoreover,toruleoutthepossibilitythatadifferentialeffectmaybeduetothedifferentgeneticstrategiesbywhichexpressionofEWS-Pea3inproprioceptiveafferentsisachieved,weperformedthisanalysisbothinEr81EWS-Pea3/-andTauEWS-Pea3/+PVCre/+embryos." }, { "role": "assistant", "content": "[]" } ]
science.scientific_lay_summarisation_elife_single_doc_summ
science.scientific_lay_summarisation_elife_single_doc_summ.3220
[ { "role": "user", "content": "You will be presented with the abstract, introduction, and discussion section from a biomedical research article. Your task is to create a summary that a layperson can understand, capturing the essence of the research article. Keep essential scientific terms, but ensure that the language remains clear and the concepts are explained in an uncomplicated manner.\n\nTitle: Genome-wide Estrogen Receptor-α activation is sustained, not cyclical}\nArticle:\nAbstract:\nEstrogen Receptor-alpha( ER) drives 75% of breast cancers. Stimulation of the ER by estra-2-diol forms a transcriptionally-active chromatin-bound complex. Previous studies reported that ER binding follows a cyclical pattern. However, most studies have been limited to individual ER target genes and without replicates. Thus, the robustness and generality of ER cycling are not well understood. We present a comprehensive genome-wide analysis of the ER after activation, based on 6 replicates at 10 time-points, using our method for precise quantification of binding, Parallel-Factor ChIP-seq. In contrast to previous studies, we identified a sustained increase in affinity, alongside a class of estra-2-diol independent binding sites. Our results are corroborated by quantitative re-analysis of multiple independent studies. Our new model reconciles the conflicting studies into the ER at the TFF1 promoter and provides a detailed understanding in the context of the ER’s role as both the driver and therapeutic target of breast cancer.\nIntroduction:\nThe study of the Estrogen Receptor-α( ER) has played a fundamental role in both our understanding of transcription factors and cancer biology. The ER is one of a family of transcription factors called nuclear receptors. Nuclear receptors are intra-cellular and, on activation by their ligand, typically undergo dimerisation and bind to specific DNA motif( for ER: Estrogen Response Elements; EREs). On the chromatin, the nuclear receptor recruits a series of cofactors and promotes the basal transcription mechanism at either nearby promoters or through chromatin loops from distal enhancers. Because of the minimal nature of these systems relative to other signaling pathways, nuclear receptors have become a model system for transcription factor analysis. Simultaneously, the role of nuclear receptors as drivers in a range of hormone dependent cancers has led to focused studies in the context of the disease. Previously, it was reported that the ER and key cofactors followed a cyclical pattern in breast cancer cell lines with maximal binding at 45 min after stimulation with estra-2-diol( Shang et al., 2000; Métivier et al., 2003). Similar results were also reported for the AR after activation with DHT( Kang et al., 2002) and several follow-up studies exist looking at single genomic loci( Herynk et al., 2010; Luo et al., 2005; Shao et al., 2004; Burakov et al., 2002). However, subsequent genome-wide studies have provided little further detail on the specific nature of the proposed kinetics of ER binding being either limited in the number of replicates or lacking temporal resolution( Honkela et al., 2015; wa Maina et al., 2014; Dzida et al., 2017; Guertin et al., 2014). In our own network analysis( Holding et al., 2018), we focused on 0, 45 and 90 min and found no significant reduction in ER signal at 90 min. 
In the same study, quantitative proteomic analysis of ER interactions at the same time intervals by qPLEX-RIME( Papachristou et al., 2018) shows no significant difference in terms of ER interactions at 45 and 90 min. These conflicting results have so far not been resolved. Routinely used assays to measure protein binding to chromatin are based on Chromatin Immunoprecipitation( ChIP). A major challenge to monitoring ER activation through ChIP is the normalization of the ChIP signal— either genome-wide with next generation sequencing or at individual loci by qPCR— as the standard protocols do not control for a significant number of confounding factors including the efficiency of the immunoprecipitation step. In the case the of the two original studies( Shang et al., 2000; Métivier et al., 2003), the data only provided limited controls in this regard. An alternative method that has been applied to normalize ChIP-seq data is to use the maximal read count obtained at each individual site across each time point( Guertin et al., 2014); however, this method is at the expense of monitoring the magnitude of ER binding and gives equal weight to low read count peaks and more robust data from stronger binding sites. In the context of these challenges, we applied two strategies to robustly and accurately monitor the process of nuclear receptor binding to chromatin on activation. The first strategy was to increase the number of replicates. We generated sample data for six independent isogenic experiments to enable better characterization of the variance within the data. This strategy provided an unprecedented level of information regarding ER activation with twice the level of replication used in previous ChIP-qPCR studies( Métivier et al., 2003) and a significant improvement on previous single replicate genome-wide studies. The second strategy was to use our recently developed method for precise quantification of binding, Parallel-Factor ChIP( pfChIP)( Guertin et al., 2018), which uses an internal control for quantitative differential ChIP-seq. Combined, these two strategies enabled us to undertake the most comprehensive and precise analysis of ER activation to date.\nDiscussion:\nBy undertaking six biological replicates and incorporating an internal control with pfChIP, we have produced the most comprehensive analysis to date of ER binding over the first 90 min after stimulation with estra-2-diol. We found the sites at which we detected ER binding on the chromatin follows two distinct trajectories, either the rapid activation within 10 min followed by a stable response or ligand independent binding. Enrichment of the FOXA1 motif in the strongest ligand-independent/Class B sites supports our hypothesis: that these are as a result of ER interactions at these sites. Importantly, the de novo motif analysis did not find the presence of the CTCF motif, confirming that they are not an artifact of utilizing CTCF to normalize via the pfChIP-seq method. Analysis of the Class B binding sites with GREAT( Welch et al., 2014), Supplementary file 2, gave enrichment for 6 out of 30 ER regulated amplicons identified in a previous study of 191 breast cancer tumor samples( Nikolsky et al., 2008). On the basis that no ERE was found at Class B sites and that the affinity of ER at these sites was less than at estra-2-diol response sites, we propose that these sites represent open regions of chromatin where ER can be recruited by other transcription factors in the absence of its own ligand. 
However, these interactions are weak, and very likely transient, as the average binding affinity for Class B sites is similar in level( a normalized read count of 30–40) to the binding before activation at Class A binding sites, but greater than Class C binding sites( ≈ 10 normalized reads increasing to ≈ 40 on activation). Ligand dependent activation of ER was seen robustly at Class A sites, but displayed no evidence of cyclical binding. We propose instead that ER activation occurs rapidly, within 10 min and binding shows no significant change after this point. The two examples we demonstrated— of increasing or decreasing ER binding after activation at the SNX24 and ACKR3 TSS( Figure 4A)— should be interpreted with caution as, while downstream effects are likely to modulate ER binding, searching for individual outliers results within a large data set will generate false positives. Nonetheless, the two examples imply a secondary level of modulation does occur as previously seen, but at much lower magnitude than proposed in studies focused on ER cycling. It is possible that alternative conditions may be able to induce tightly regulated cycling; however, we feel this is unlikely in terms of physiology. For example, the work of Metivier et al. makes use of α-amanitin, a RNA polymerase inhibitor. Within the cancer biology setting, these conditions have no direct interpretation. Worse, the mode of action is downstream of the ER and therefore is a confounding factor, not a clear method of synchronization. In light of our results and the lack of consistency of published results, we propose that the previously described cyclical response kinetics are likely an artefact of observing a highly variable process without replicates. With replicates, the cyclical effect is lost when averaging. Even if a cyclical response existed, our results indicate that it is not regulated tightly enough to be coherently visible across multiple replicates. The variance in ER binding may better be described by heterogeneity in the cell populations before induction and by current models regarding expression noise as an indicator for greater transcription responsiveness( Morgan and Marioni, 2018). Finally, our proposal provides, for the first time, a model that reconciles ChIP-seq data with the stochastic model of nuclear receptor binding as proposed and visualized by those undertaking single model imaging. In contrast, ER cycling has always been irreconcilable with these alternative forms of data( Lenstra et al., 2016) While we cannot discount that our cells could have specifically lost the ability to regulate ER binding in the manner previously described, we have minimized this possibility through the use of cells direct from ATCC, by confirming the cell line by STR genotype and applying the latest methods( Ben-David et al., 2018) to confirm that our cell line is genetically similar to the strains used in other labs. Nonetheless, we would welcome further replication of this study. In summary, through the use of stringent internal controls, we have reproducibly shown that estra-2-diol responsive ER binding is sustained and not cyclical, with the magnitude of the binding primarily defined by the conservation of the ERE at the binding site." }, { "role": "assistant", "content": "Breast cancer is the most common type of cancer worldwide . The hormone estrogen drives the growth of 70% of breast cancer tumors . This form of breast cancer is called estrogen receptor positive ( ER+ ) breast cancer . 
In the early 2000s , several scientists found that some genes in ER+ breast cancers turn on and off in 90-minute cycles . Moreover , when the estrogen receptor binds to the DNA in the nucleus of a cell , it activates nearby genes causing the tumor cells to grow and divide . Learning more about how cancer cells respond to estrogen is very important . Many cancer drugs block estrogen to stop its tumor growth promoting effects . But the initial studies of estrogens effects were only able to look at how estrogen affected a small number of genes . Newer genome sequencing technologies allow scientists to study the effects of estrogen on more genes and provide more detailed information . Using these cutting-edge technologies , Holding et al . show that the 90-minute cycles found in the previous studies are likely artefacts of older techniques and lacking controls . The new experiments used a newer technique called parallel factor ChIP-seq to look at how all genes respond to the estrogen receptor . Then , Holding et al . reanalyzed data published in the previous studies and found that they were often contradictory and inconsistent . None of the genes – not even the ones looked at in earlier studies – were expressed in 90-minute cycles like the previous studies suggested . Instead , the expression of the genes was variable , which may make the cell even more responsive to estrogen . The previous reports of the 90-minute cycles are most likely explained by a bias of the human eye of finding patterns in a highly variable process that do not hold up to statistical analysis . Better understanding how estrogen influences genes and cell growth is essential to developing better treatments for ER+ breast cancer . This includes ruling out ideas that may be incorrect or misleading . These findings help resolve why not all studies have found estrogen receptor driven cycles of gene expression , and will provide researchers with a better foundation for future studies ." } ]
science.healthver_entailment
science.healthver_entailment.1075
[ { "role": "user", "content": "You will be shown a claim about public health, and the abstract of a biomedical research paper. Each sentence from the abstract will be on a separate line. Your task is to return a JSON object with two fields:\n\n- \"verdict\": The fact-checking verdict. If the information in the abstract supports the claim, write \"SUPPORT\". If the abstract contradicts the claim, write \"CONTRADICT\". If the abstract does not provide enough information to arrive at a verdict, write \"NEI\" (for \"not enough information\").\n- \"evidence\": An array of sentences providing evidence for the verdict. Please copy all relevant sentences verbatim from the abstract. If the verdict was \"NEI\", then return an empty array.\n\nFor instance, if the model were given the claim \"wearing masks can prevent the spread of COVID\", the output might be\n{\n \"verdict\": \"SUPPORT\",\n \"evidence\": [\"Our findings indicate that mass mask-wearing reduces the transmission rate for COVID-19.\"]\n}\n\nClaim: Covid-19 infections leave an impact on the heart, raising concerns about lasting damage\n\nAbstract:\nBACKGROUND: A novel coronavirus disease (COVID-19) in Wuhan has caused an outbreak and become a major public health issue in China and great concern from international community.\nMyocarditis and myocardial injury were suspected and may even be considered as one of the leading causes for death of COVID-19 patients.\nTherefore, we focused on the condition of the heart, and sought to provide firsthand evidence for whether myocarditis and myocardial injury were caused by COVID-19.\nMETHODS: We enrolled patients with confirmed diagnosis of COVID-19 retrospectively and collected heart-related clinical data, mainly including cardiac imaging findings, laboratory results and clinical outcomes.\nSerial tests of cardiac markers were traced for the analysis of potential myocardial injury/myocarditis.\nRESULTS: 112 COVID-19 patients were enrolled in our study.\nThere was evidence of myocardial injury in COVID-19 patients and 14 (12.5%) patients had presented abnormalities similar to myocarditis.\nMost of patients had normal levels of troponin at admission, that in 42 (37.5%) patients increased during hospitalization, especially in those that died.\nTroponin levels were significantly increased in the week preceding the death.\n15 (13.4%) patients have presented signs of pulmonary hypertension.\nTypical signs of myocarditis were absent on echocardiography and electrocardiogram.\nCONCLUSIONS: The clinical evidence in our study suggested that myocardial injury is more likely related to systemic consequences rather than direct damage by the 2019 novel coronavirus.\nThe elevation in cardiac markers was probably due to secondary and systemic consequences and can be considered as the warning sign for recent adverse clinical outcomes of the patients." }, { "role": "assistant", "content": "{\"verdict\": \"SUPPORT\", \"evidence\": [\"CONCLUSIONS: The clinical evidence in our study suggested that myocardial injury is more likely related to systemic consequences rather than direct damage by the 2019 novel coronavirus.\", \"The elevation in cardiac markers was probably due to secondary and systemic consequences and can be considered as the warning sign for recent adverse clinical outcomes of the patients.\"]}" } ]
science.medmentions_ner
science.medmentions_ner.1065
[ { "role": "user", "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"HealthCareActivity\", \"InjuryOrPoisoning\", \"BodySubstance\", \"IntellectualProduct\", \"AnatomicalStructure\", \"SpatialConcept\", \"Chemical\", \"Bacterium\", \"MedicalDevice\", \"Organization\", \"BiomedicalOccupationOrDiscipline\", \"Finding\", \"BiologicFunction\", \"Virus\", \"ResearchActivity\", \"ClinicalAttribute\", \"PopulationGroup\", \"Eukaryote\", \"BodySystem\", \"Food\", \"ProfessionalOrOccupationalGroup\"].\n\nPlease return the output as a JSON object of the format: {\"Virus\": [\"HIV\", ...], \"Bacterium\": [\"MRSA\", ...], \"AnatomicalStructure\": [\"Lung\", ...], \"BodySystem\": [\"CNS\", ...], \"BodySubstance\": [\"Serum\", ...], \"Finding\": [\"Headache\", ...], \"InjuryOrPoisoning\": [\"Fracture\", ...], \"BiologicFunction\": [\"Death\", ...], \"HealthCareActivity\": [\"Biopsy\", ...], \"ResearchActivity\": [\"Clinical trial\", ...], \"MedicalDevice\": [\"Lenses\", ...], \"SpatialConcept\": [\"Camps\", ...], \"BiomedicalOccupationOrDiscipline\": [\"Forensic medicine\", ...], \"Organization\": [\"WHO\", ...], \"ProfessionalOrOccupationalGroup\": [\"Provider\", ...], \"PopulationGroup\": [\"Swimmers\", ...], \"Chemical\": [\"Gold\", ...], \"Food\": [\"Rice\", ...], \"IntellectualProduct\": [\"RPAM\", ...], \"ClinicalAttribute\": [\"Biomarker\", ...], \"Eukaryote\": [\"Dogs\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nDefinition and application of precision medicine \"No abstract\"." }, { "role": "assistant", "content": "{\"AnatomicalStructure\": [], \"Bacterium\": [], \"BiologicFunction\": [], \"BiomedicalOccupationOrDiscipline\": [], \"BodySubstance\": [], \"BodySystem\": [], \"Chemical\": [], \"ClinicalAttribute\": [], \"Eukaryote\": [], \"Finding\": [], \"Food\": [], \"HealthCareActivity\": [\"precision medicine\"], \"InjuryOrPoisoning\": [], \"IntellectualProduct\": [\"Definition\"], \"MedicalDevice\": [], \"Organization\": [], \"PopulationGroup\": [], \"ProfessionalOrOccupationalGroup\": [], \"ResearchActivity\": [], \"SpatialConcept\": [], \"Virus\": []}" } ]
science.data_reco_mcq_mc
science.data_reco_mcq_mc.2115
[ { "role": "user", "content": "You are provided with a research question, keyphrases about the question, a description of candidate datasets and dataset options. Read the description of popular datasets provided below and select the ones that can be used to validate the following research question. Use your knowledge of machine learning datasets to make the best judgement.\nYour response should be formatted as a `json` array. For instance, for the query \"Semi supervised image classification\", a sample response might be: [\"CIFAR-10\", \"CIFAR-100\"]. Do not include any extra text in the response other than the answer array.\n\nQuery: I am trying to design an object detection network that does not use Non-Maximum Suppression.\n\nKeyphrases: object detection image\n\nDataset description:\nKITTI: KITTI (Karlsruhe Institute of Technology and Toyota Technological Institute) is one of the most popular datasets for use in mobile robotics and autonomous driving. It consists of hours of traffic scenarios recorded with a variety of sensor modalities, including high-resolution RGB, grayscale stereo cameras, and a 3D laser scanner.\nInterspeech 2021 Deep Noise Suppression Challenge: The Deep Noise Suppression (DNS) challenge is designed to foster innovation in the area of noise suppression to achieve superior perceptual speech quality. This challenge has two two tracks: Track 1: Real-Time Denoising track for wide band scenario The noise suppressor must take less than the stride time Ts (in ms) to process a frame of size T (in ms) on an Intel Core i5 quad-core machine clocked at 2.4 GHz or equivalent processor.\nGQA: The GQA dataset is a large-scale visual question answering dataset with real images from the Visual Genome dataset and balanced question-answer pairs. Each training and validation image is also associated with scene graph annotations describing the classes and attributes of those objects in the scene, and their pairwise relations.\nImageNet: The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\nCOCO: The MS COCO (Microsoft Common Objects in Context) dataset is a large-scale object detection, segmentation, key-point detection, and captioning dataset. The dataset consists of 328K images.\nAV Digits Database: AV Digits Database is an audiovisual database which contains normal, whispered and silent speech. 53 participants were recorded from 3 different views (frontal, 45 and profile) pronouncing digits and phrases in three speech modes.\nPHM2017: PHM2017 is a new dataset consisting of 7,192 English tweets across six diseases and conditions: Alzheimer’s Disease, heart attack (any severity), Parkinson’s disease, cancer (any type), Depression (any severity), and Stroke. The Twitter search API was used to retrieve the data using the colloquial disease names as search keywords, with the expectation of retrieving a high-recall, low precision dataset.\n\nOptions:- A: KITTI B: Interspeech 2021 Deep Noise Suppression Challenge C: GQA D: ImageNet E: COCO F: AV Digits Database G: PHM2017" }, { "role": "assistant", "content": "[\"KITTI\", \"COCO\"]" } ]
science.scientific_papers_summarization_single_doc_arxiv
science.scientific_papers_summarization_single_doc_arxiv.36596
[ { "role": "user", "content": "You will be presented with the full text of a science research paper. Please write an abstract for this paper. Your response should include the abstract and no additional text.\n\nPaper text:\nthe subject of photometric redshift measurement ( i.e. estimating redshifts to galaxies from multi - waveband photometric observations ) has recently been revitalised , due to a rapid increase in the number of existing multi - waveband galaxy surveys .\nthis provides a fast way for estimating redshifts to a large number of galaxies , with some trade - off in their accuracy .\nmoreover , in the case of faint galaxies ( @xmath3 mag . ) , where spectroscopic redshifts are more difficult to measure even with the largest telescopes , photometric redshift technique is the _ only _ way to secure their redshifts . for a given galaxy , the photometric redshift can be estimated by comparing its observed sed with a set of template seds , corresponding to different galaxy types and shifted to different redshifts , allowing for the galaxy evolution with look - back time .\nthe redshift and spectral type associated with the template sed which is the closest to the observed sed will then be assigned to that galaxy . therefore , the crucial problem is the choice of the template seds and their behaviour at high redshifts .\nthere are two general ways for adopting the template seds , as discussed below ; * a ) . * empirical templates : in this case one uses the observed seds for different types of galaxies .\nthe problem here is that there are not enough information about the seds for different classes of galaxies at different redshifts ( particularly at high redshifts ) .\ntherefore , incorporating the spectral evolution of galaxies to the model seds of different types is difficult and uncertain . *\nb ) . * synthetic templates : uses model seds for different spectral types of galaxies , shifted in redshift space , assuming evolutionary population synthesis ( eps ) models .\nthe main problem here is to construct realistic model seds at different redshifts by constraining the evolutionary models . in the present study\n, we adopt a combined approach to self - consistently model the template seds at different redshifts , in agreement with observations .\nin particular , the effect of dust and its evolution with redshift is included in the seds , consistent with the eps models , and optimised using observations at different look - back times .\na large set of chemo - photometric evolutionary population synthesis ( eps ) models with different input parameters are developed , each accounting for both the local properties and evolutionary behavior of different types of galaxies@xmath4 . for a given spectral type of galaxies , the model parameters are normalised to those at @xmath5 , by fitting them to the local observed sed of their respective type .\nthe evolutionary behaviour ( i.e. input parameters ) of the eps models ( which also include contribution from dust ) are then constrained by estimating the photometric redshifts ( @xmath6 ) to a _ calibrating _ sample of 73 galaxies with available spectroscopic redshifts ( @xmath7 ) and minimising the _ rms _ scatter between them .\nthe calibrating sample is adopted from the hubble deep field ( hdf ) so that the galaxies will have detection in , at least , four passbands , including uv which is crucial for any photometric redshift determination .\nthe procedure is summarised in the flow chart in figure 1 . 
the parameters in the eps models , constrained by observations , consist of the imf , formation redshift ( @xmath8 ) , local star formation rate ( @xmath9 ) and the time evolution of the star - formation rate , as parametrised by @xmath10 ( see figure 1 ) .\nonce these parameters are constrained , the evolution of the optical depth ( @xmath11 ) is determined directly from the gas metallicity , @xmath12 , and the fractional mass of gas which takes part in star formation ( @xmath13 ) , assuming that the gas - to - dust ratio is proportional to the gas metallicity@xmath4 . for galaxies with @xmath14 in the calibrating sample ,\nthe effect of lyman continuum and lyman forest absorption is estimated@xmath15 the fluxes for these objects are subsequently corrected for inter - galactic absorption before they are used to constrain the eps models .\nthe template seds cover a range in wavelength from 0.05 @xmath0 to 1 mm .\nthe sensitivity of the result to the eps model parameters are shown in table 1 .\nthe optimum _ rms _ scatter obtained on the @xmath6 vs. @xmath7 plane ( fig .\n2 ) is 0.11 ( model 4 in table 1 ) , which is found using template seds for four different types of galaxies ( elliptical , spiral , starburst and irregular ) and only four passbands ( uvri ) .\nthis relation for the calibrating sample is presented in figure 2 , with its _ rms _ scatter taken as the error in our photometric redshifts .\n[ cols=\"^,^,^,^,^,^,^,^,^ \" , ]\na source of uncertainty in estimating the photometric redshifts and spectral types of galaxies is the possibility that the models might be degenerate ( i.e. different template seds producing the same result ) .\nfurthermore , the photometric errors in the observed seds are likely to affect the final estimate of both the photometric redshift and the spectral type of their respective galaxy . to investigate the uniqueness of the photometric redshift solutions in this study , a monte carlo simulation is performed . a simulated catalogue is generated to resemble the observed , uv selected hdf survey , with ubvi magnitudes , known redshifts ( @xmath16 ) and spectral types ( a uv selected catalogue is used here to ensure the availability of uv data which are crucial for any photometric redshift measurement ) .\nthe galaxies in this catalogue are randomly selected to have seds similar to the synthetic seds for any of the four types of galaxies ( e , sp , sb , irr ) considered here , shifted in redshift space .\nrandom gaussian noise , resembling photometric errors are then added to the simulated seds .\nthe simulated catalogue contains 273 galaxies ( the same as the complete uv - selected hdf catalogue ) , has a magnitude limit of u=27 mag .\n, an apparent uv magnitude distribution similar to the observed catalogue and a redshift limit of @xmath17 ( maximum redshift allowed in a uv selected survey ) .\nthe photometric redshift code was then used to predict the redshifts ( @xmath18 ) and spectral types of individual galaxies in the simulated catalogue and to compare them with their _ input _ values . the @xmath19 distribution ( fig .\n3 ) shows a distinct peak at zero , indicating that the redshifts for the simulated galaxies are well re - produced within @xmath20 .\nas far as the predicted spectral types are concerned , we recover the type classification for ellipticals in the input catalogue by @xmath21 ( i.e. no mis - identification of ellipticals ) . 
regarding the spirals ,\nirregulars and starbursts , we can recover , respectively , @xmath22 , @xmath23 and @xmath24 of the spectral types of galaxies in the input catalogue .\nin order to explore the consistency of the above procedure , we apply it on a complete sample of uv selected galaxies from the hdf , to estimate their photometric redshifts . using this redshift catalogue\n, we then predict the physical quantities ( i.e. the luminosity density ) in redshift intervals and compare them with similar measurements from other independent studies .\na uv selected survey is particularly useful for such comparison because the uv light is heavily affected by dust in galaxies@xmath25 , allowing a test of the estimated extinction corrections .\nmoreover , such surveys do not contain objects with @xmath26 and hence , are less affected ( then deep surveys in other bands ) by uncertainties in correction for lyman continuum absorption .\nthe uv luminosity densities are estimated in redshift intervals , using the photometric redshift catalogue .\nthe results are compared with similar measurements from other , independent , studies ( fig .\n4 ) and show excellent agreement .\nthis indicates that the statistical results , based on the present photometric redshift technique , are consistent with other studies .\na new method is presented to estimate photometric redshifts of galaxies .\nthe main improvement in this study is to develope more realistic template seds over a large range in redshift and in agreement with the observational data .\nthe seds cover a range in wavelength from 0.05 @xmath0 to 1 mm , with their main characteristics as follows : a ) .\nthey are optimised to produce the observed colours of galaxies at @xmath1 ; b ) .\nincorporate the chemo - photometric evolution of galaxies of different types ( i.e. stellar luminosities for different metallicities ) , in agreement with observations ; c ) .\nallow a self - consistent treatment of dust contribution and its evolution with redshift ; d ) . include absorption and re - emission of radiation by dust and hence , realistic estimates of the far - infrared radiation ; e ) .\ninclude correction for inter - galactic absorption by lyman continuum and lyman forest . using a calibration sample of 73 galaxies with available spectroscopic redshifts ,\nthe evolutionary model parameters are optimised to get the minimum scatter between the photometric and spectroscopic redshifts in the calibrating sample . using four passbands ( uvri ) ,\nthe photometric redshifts are estimated to an accuracy of @xmath27 .\nthe method is applied to a uv selected sample of hdf galaxies .\nthe uv luminosity densities , estimated in redshift intervals , are in excellent agreement with similar measurements from other , independent , surveys .\nmazzei , p. , de zotti , g. , & xu , c. 1994 , ap.j .\nmazzei , p. , xu , c. , & de zotti , g. 1992 , astron & astrophys 256 , 45 .\nmadau , p. , ferguson , h. , c. , dickinson , m. e. , giavalisco , m. , steidel , c. , fruchter , a. , 1996 , mnras 283 , 1388 .\nmeurer , g.r .\nheckman , t.m .\n, lehnert , m. , leitherer , c. & lowenthal , j. 1997 aj .\nconnolly , a.j . ,\nszalay , a.s . ,\ndickinson , m. , subbarao , m.u . & brunner r.j .\n1997 ap.j , 486 , l11 lilly , s.j . , le fevre , o. , hammer , f. , crampton , d. 1996 , apj . 460 , l1 ." 
}, { "role": "assistant", "content": "a new method is developed for estimating photometric redshifts , using realistic template seds , extending over four decades in wavelength ( i.e from 0.05 @xmath0 to 1 mm ) . \n the seds are constructed for four different spectral types of galaxies ( elliptical , spiral , irregular and starburst ) and satisfy the following characteristics : a ) . \n they are optimised to produce the observed colours of galaxies at @xmath1 ; b ) . \n incorporate the chemo - photometric spectral evolution of galaxies of different types , in agreement with observations ; c ) . \n allow treatment of dust contribution and its evolution with redshift , consistent with the spectral evolution model ; d ) . include absorption and re - emission of radiation by dust and hence , realistic estimates of the far - infrared radiation ; e ) . include correction for inter - galactic absorption by lyman continuum and lyman forest . using these template seds , \n the photometric redshifts are estimated to an accuracy of @xmath2 . \n the degeneracy in the method and hence , the redshifts and spectral types , are explored via monte carlo simulations . the consistency of the technique is demonstrated by estimating photometric redshifts to a uv selected sample of hdf galaxies and investigating if the statistical results ( ie . luminosity densities ) are in agreement with other independent studies ." } ]
science.data_reco_mcq_mc
science.data_reco_mcq_mc.1992
[ { "role": "user", "content": "You are provided with a research question, keyphrases about the question, a description of candidate datasets and dataset options. Read the description of popular datasets provided below and select the ones that can be used to validate the following research question. Use your knowledge of machine learning datasets to make the best judgement.\nYour response should be formatted as a `json` array. For instance, for the query \"Semi supervised image classification\", a sample response might be: [\"CIFAR-10\", \"CIFAR-100\"]. Do not include any extra text in the response other than the answer array.\n\nQuery: An approach to decompose an ordinary convolutional layer into two layers with much fewer parameters. The\n\nKeyphrases: object detection images top-down maps autonomous driving\n\nDataset description:\nDecagon: Bio-decagon is a dataset for polypharmacy side effect identification problem framed as a multirelational link prediction problem in a two-layer multimodal graph/network of two node types: drugs and proteins. Protein-protein interaction network describes relationships between proteins.\nHIGGS Data Set: The data has been produced using Monte Carlo simulations. The first 21 features (columns 2-22) are kinematic properties measured by the particle detectors in the accelerator.\nCARLA: CARLA (CAR Learning to Act) is an open simulator for urban driving, developed as an open-source layer over Unreal Engine 4. Technically, it operates similarly to, as an open source layer over Unreal Engine 4 that provides sensors in the form of RGB cameras (with customizable positions), ground truth depth maps, ground truth semantic segmentation maps with 12 semantic classes designed for driving (road, lane marking, traffic sign, sidewalk and so on), bounding boxes for dynamic objects in the environment, and measurements of the agent itself (vehicle location and orientation).\nCIFAR-10: The CIFAR-10 dataset (Canadian Institute for Advanced Research, 10 classes) is a subset of the Tiny Images dataset and consists of 60000 32x32 color images. The images are labelled with one of 10 mutually exclusive classes: airplane, automobile (but not truck or pickup truck), bird, cat, deer, dog, frog, horse, ship, and truck (but not pickup truck).\nCOCO: The MS COCO (Microsoft Common Objects in Context) dataset is a large-scale object detection, segmentation, key-point detection, and captioning dataset. The dataset consists of 328K images.\n\nOptions:- A: Decagon B: HIGGS Data Set C: CARLA D: CIFAR-10 E: COCO" }, { "role": "assistant", "content": "[\"CIFAR-10\", \"COCO\"]" } ]
science.anat_em_ner
science.anat_em_ner.377
[ { "role": "user", "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"AnatomicalEntity\"].\n\nPlease return the output as a JSON object of the format: {\"AnatomicalEntity\": [\"tumor cells\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nAllogeneic injection of fetal membrane-derived mesenchymal stem cells induces therapeutic angiogenesis in a rat model of hind limb ischemia.\nBone marrow-derived mesenchymal stem cells (BM-MSC) have been demonstrated to be an attractive therapeutic cell source for tissue regeneration and repair. However, it remains unknown whether or not allogeneic transplantation of mesenchymal stem cells (MSC) derived from fetal membranes (FM), which are generally discarded as medical waste after delivery, has therapeutic potential. FM-MSC were obtained from Lewis rats and had surface antigen expression and multipotent potential partly similar to those of BM-MSC. Compared with BM-MSC, FM-MSC secreted a comparable amount of hepatocyte growth factor despite a small amount of vascular endothelial growth factor. FM-MSC and BM-MSC both expressed major histocompatibility complex (MHC) class I but not MHC class II antigens and did not elicit allogeneic lymphocyte proliferation in mixed lymphocyte culture. FM-MSC or BM-MSC obtained from Lewis rats were injected into a MHC-mismatched August-Copenhagen-Irish rat model of hind limb ischemia. Three weeks after injection, blood perfusion and capillary density were significantly higher in the FM-MSC and BM-MSC groups than in the phosphate-buffered saline group, and allogeneic FM-MSC and BM-MSC were still observed. In nonischemic hind limb tissues, allogeneic FM-MSC and BM-MSC injection were associated with a comparatively small amount of T lymphocyte infiltration, compared with the injection of allogeneic splenic lymphocytes. In conclusion, allogeneic FM-MSC injection did not elicit a lymphocyte proliferative response and provided significant improvement in a rat model of hind limb ischemia, comparable to the response to BM-MSC. Thus, allogeneic injection of FM-MSC may be a new therapeutic strategy for the treatment of severe peripheral vascular disease. Disclosure of potential conflicts of interest is found at the end of this article." }, { "role": "assistant", "content": "{\"AnatomicalEntity\": [\"FM\", \"fetal membranes\", \"Bone marrow-derived mesenchymal stem cells\", \"hind limb tissues\", \"lymphocyte\", \"capillary\", \"fetal membrane-derived mesenchymal stem cells\", \"cell\", \"FM-MSC\", \"blood\", \"surface\", \"MSC\", \"mesenchymal stem cells\", \"splenic lymphocytes\", \"lymphocyte culture\", \"BM-MSC\", \"tissue\", \"peripheral vascular\", \"T lymphocyte\", \"hind limb\"]}" } ]
science.covidfact_entailment
science.covidfact_entailment.125
[ { "role": "user", "content": "You will be shown a claim related to the COVID-19 pandemic, and the abstract of a biomedical research paper. Each sentence from the abstract will be on a separate line. Your task is to return a JSON object with two fields:\n\n- \"verdict\": The fact-checking verdict. If the information in the abstract supports the claim, write \"SUPPORT\". If the abstract contradicts the claim, write \"CONTRADICT\".\n- \"evidence\": An array of sentences providing evidence for the verdict. Please copy all relevant sentences verbatim from the abstract.\n\nFor instance, if the model were given the claim \"the COVID vaccine is safe for healthy adults\", the output might be\n{\n \"verdict\": \"SUPPORT\",\n \"evidence\": [\"Our findings indicate that the risks of severe COVID vaccine side effects are low for healthy adults.\"]\n}\n\nClaim: An alpaca nanobody neutralizes sars-cov-2 by activating receptor interaction\n\nAbstract:\nSARS-CoV-2 enters host cells through an interaction between the spike glycoprotein and the angiotensin converting enzyme 2 (ACE2) receptor.\nDirectly preventing this interaction presents an attractive possibility for suppressing SARS-CoV-2 replication.\nHere, we report the isolation and characterization of an alpaca-derived single domain antibody fragment, Ty1, that specifically targets the receptor binding domain (RBD) of the SARS-CoV-2 spike, directly preventing ACE2 engagement.\nTy1 binds the RBD with high affinity, occluding ACE2.\nA cryo-electron microscopy structure of the bound complex at 2.9 Å resolution reveals that Ty1 binds to an epitope on the RBD accessible in both the 'up' and 'down' conformations, sterically hindering RBD-ACE2 binding.\nWhile fusion to an Fc domain renders Ty1 extremely potent, Ty1 neutralizes SARS-CoV-2 spike pseudovirus as a 12.8 kDa nanobody, which can be expressed in high quantities in bacteria, presenting opportunities for manufacturing at scale.\nTy1 is therefore an excellent candidate as an intervention against COVID-19." }, { "role": "assistant", "content": "{\"verdict\": \"CONTRADICT\", \"evidence\": [\"Here, we report the isolation and characterization of an alpaca-derived single domain antibody fragment, Ty1, that specifically targets the receptor binding domain (RBD) of the SARS-CoV-2 spike, directly preventing ACE2 engagement.\"]}" } ]
science.multixscience_multidoc_summarization
science.multixscience_multidoc_summarization.23880
[ { "role": "user", "content": "You are provided with the abstract of a scientific paper and the abstracts of the papers it references. Your task is to write a Related Work section for the main paper. This section should concisely summarize and integrate the key points from the cited abstracts, reflecting how they relate to the main paper's contributions and context. In your summary, use numerical citation markers like [1], [2], etc., to refer to the corresponding papers.\n\nMain Abstract:\nIn the co-sparse analysis model a set of filters is applied to a signal out of the signal class of interest yielding sparse signal responses. As such, it may serve as a prior in inverse problems, or for structural analysis of signals that are known to belong to the signal class. The more the model is adapted to the class, the more reliable it is for these purposes. The task of learning such operators for a given class is therefore a crucial problem. In many applications, it is also required that the filter responses are obtained in a timely manner, which can be achieved by filters with a separable structure. Not only can operators of this sort be efficiently used for computing the filter responses, but they also have the advantage that less training samples are required to obtain a reliable estimate of the operator. The first contribution of this work is to give theoretical evidence for this claim by providing an upper bound for the sample complexity of the learning process. The second is a stochastic gradient descent (SGD) method designed to efficiently learn an analysis operators with separable structures, which incorporates an efficient step size selection. Numerical experiments are provided that link the sample complexity to the convergence speed of the SGD algorithm.\n\nCited Abstract(s):\n[1]: We consider the dictionary learning problem for the analysis model based sparse representation. A novel algorithm is proposed by adapting the synthesis model based simultaneous codeword optimisation (SimCO) algorithm to the analysis model. This algorithm assumes that the analysis dictionary contains unit Ł 2 -norm atoms and trains the dictionary by the optimisation on manifolds. This framework allows one to update multiple dictionary atoms in each iteration, leading to a computationally efficient optimisation process. We demonstrate the competitive performance of the proposed algorithm using experiments on both synthetic and real data, as compared with three baseline algorithms, Analysis K-SVD, analysis operator learning (AOL) and learning overcomplete sparsifying transforms (LOST), respectively." }, { "role": "assistant", "content": "Dong [1] propose a method that alternates between a hard thresholding operation of the co-sparse representation and an operator update stage where all rows of the operator are simultaneously updated using a gradient method on the sphere. Their target function has the form @math , where @math is the sparse representation of the signal @math ." } ]
science.data_reco_mcq_mc
science.data_reco_mcq_mc.855
[ { "role": "user", "content": "You are provided with a research question, keyphrases about the question, a description of candidate datasets and dataset options. Read the description of popular datasets provided below and select the ones that can be used to validate the following research question. Use your knowledge of machine learning datasets to make the best judgement.\nYour response should be formatted as a `json` array. For instance, for the query \"Semi supervised image classification\", a sample response might be: [\"CIFAR-10\", \"CIFAR-100\"]. Do not include any extra text in the response other than the answer array.\n\nQuery: I want to test the effectiveness of unsupervised sentence representation models in natural language inference.\n\nKeyphrases: natural language inference text paragraph-level\n\nDataset description:\nMultiNLI: The Multi-Genre Natural Language Inference (MultiNLI) dataset has 433K sentence pairs. Its size and mode of collection are modeled closely like SNLI.\nSICK: The Sentences Involving Compositional Knowledge (SICK) dataset is a dataset for compositional distributional semantics. It includes a large number of sentence pairs that are rich in the lexical, syntactic and semantic phenomena.\nSentEval: SentEval is a toolkit for evaluating the quality of universal sentence representations. SentEval encompasses a variety of tasks, including binary and multi-class classification, natural language inference and sentence similarity.\nSST: The Stanford Sentiment Treebank is a corpus with fully labeled parse trees that allows for a complete analysis of the compositional effects of sentiment in language. The corpus is based on the dataset introduced by Pang and Lee (2005) and consists of 11,855 single sentences extracted from movie reviews.\nGLUE: General Language Understanding Evaluation (GLUE) benchmark is a collection of nine natural language understanding tasks, including single-sentence tasks CoLA and SST-2, similarity and paraphrasing tasks MRPC, STS-B and QQP, and natural language inference tasks MNLI, QNLI, RTE and WNLI.\ne-SNLI: e-SNLI is used for various goals, such as obtaining full sentence justifications of a model's decisions, improving universal sentence representations and transferring to out-of-domain NLI datasets.\n\nOptions:- A: MultiNLI B: SICK C: SentEval D: SST E: GLUE F: e-SNLI" }, { "role": "assistant", "content": "[\"SICK\", \"SST\"]" } ]
science.covidfact_entailment
science.covidfact_entailment.278
[ { "role": "user", "content": "You will be shown a claim related to the COVID-19 pandemic, and the abstract of a biomedical research paper. Each sentence from the abstract will be on a separate line. Your task is to return a JSON object with two fields:\n\n- \"verdict\": The fact-checking verdict. If the information in the abstract supports the claim, write \"SUPPORT\". If the abstract contradicts the claim, write \"CONTRADICT\".\n- \"evidence\": An array of sentences providing evidence for the verdict. Please copy all relevant sentences verbatim from the abstract.\n\nFor instance, if the model were given the claim \"the COVID vaccine is safe for healthy adults\", the output might be\n{\n \"verdict\": \"SUPPORT\",\n \"evidence\": [\"Our findings indicate that the risks of severe COVID vaccine side effects are low for healthy adults.\"]\n}\n\nClaim: Sars-cov-2 infection control implementation based on sources of infection showing directions for three age groups in japan.\n\nAbstract:\nBackground Some aspects of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) transmission in children and adults remain unclear.\nThis report describes different SARS-CoV-2 transmission patterns by age group in Japan.\nMethods and findings This retrospective observational case series study analyzed transmission patterns of real-time polymerase chain reaction (RT-PCR)-confirmed SARS-CoV-2 infections found by local health authorities and commercial laboratories during January 14 through July 31, 2020 in Japan.\nAfter ascertaining the infection source for every symptomatic case as clusters at households, daycare facilities, schools, hospitals and workplaces etc., their associated transmission patterns were analyzed.\nIdentified cases were divided into three groups: underage, < 20; adults, 20-59; and elderly people 60 years old and older.\nThe reproductive number (R)s of respective transmission directions found for the respective age groups were compared.\nOf 26,986 total cases, 23,746 unknown cases were found, leaving 3,240 ascertained sources of infection (12.0%) comprising 125 (3.9%) underage, 2350 (72.5%) adult, and 765 (23.6%) elderly people.\nThe respective Rs of underage infection sources directed to underage, adult, and elderly people were estimated respectively as 0.0415 (95% CI, 0.0138-0.0691), 1.11 (95% CI, 0.9171-1.3226), and 0.2811 (95% CI, 0.2074-0.3687).\nThe respective Rs of adult infection source directed to underage, adult, and elderly people were estimated respectively as 0.0140 (95% CI, 0.0120-0.0162), 0.5392 (95% CI, 0.5236-0.5550), and 0.1135 (95% CI, 0.1074-0.1197).\nThe respective Rs of elderly infection source directed to underage, adult, and elderly people were estimated as 0.065 (95% CI, 0.0039-0.0091), 0.3264 (95% CI, 0.3059-0.3474), and 0.3991 (95% CI, 0.3757-0.4229).\nConclusions The main sources of SARS-CoV-2 infection were adults and elderly people.\nThe R of underage people directed to adults was greater than 1 because of close familial contact but they were unlikely to become carriers transmitting SARS-CoV-2 because they accounted for a minority for transmissions.\nApparently, SARS-CoV-2 was transmitted among adults and elderly people, suggesting that infection control of SARS-CoV-2 should be managed specifically by generation." 
}, { "role": "assistant", "content": "{\"verdict\": \"SUPPORT\", \"evidence\": [\"Methods and findings This retrospective observational case series study analyzed transmission patterns of real-time polymerase chain reaction (RT-PCR)-confirmed SARS-CoV-2 infections found by local health authorities and commercial laboratories during January 14 through July 31, 2020 in Japan.\"]}" } ]
science.scientific_papers_summarization_single_doc_arxiv
science.scientific_papers_summarization_single_doc_arxiv.116409
[ { "role": "user", "content": "You will be presented with the full text of a science research paper. Please write an abstract for this paper. Your response should include the abstract and no additional text.\n\nPaper text:\nthe geometry of the obscuring matter in active galactic nuclei , predicted by unified models @xcite , is still unclear .\nthe study of variability of the absorbing column density ( n@xmath1 ) in these objects , provides invaluable informations on the size and location of the absorber .\nrecently , risaliti et al .\n@xcite found n@xmath1 variability on time scales from @xmath2 month to @xmath0 5 years in nearly all the objects belonging to a sample of bright , compton - thin seyfert 2 galaxies with multiple x - ray observations .\nthis variability suggests that the absorber could be much closer to the nucleus than usually assumed .\nto further investigate this issue , we have analyzed two bepposax long observations of the brightest seyfert galaxy , ngc 4151 , and present here some preliminary results .\nthe two bepposax observations were performed on 2001 december , 18 and 1996 july , 6 with the narrow field instruments : lecs ( 0.1 - 10 kev ) , mecs ( 1.3 - 10 kev ) , and pds ( 13 - 200 kev ) . the total absorbing column density measured in ngc4151 by heao1 , exosat , ginga , rosat and asca @xcite ranges from a few @xmath3 @xmath4 to a few @xmath5 @xmath4 . since\nthe corresponding photoelectric cut - off lies in the 2 - 4 kev energy range , we have chosen to analyze the light curves of the count rate in the bands 2 - 4 kev ( mecs ) , 6 - 10 kev ( mecs ) and of their softness ratio ( see figures 1 and 2 ) , to investigate whether n@xmath1 variations might have occurred within each observation .\nwe also analyzed the light curve of the count rate in the 15 - 200 kev band ( pds ) to monitor the continuum . from the analysis of these light curves we have selected some time intervals where we find indications that n@xmath1 might have undergone significant changes .\nthese intervals are shown and labelled in figures 1 and 2 . to quantify the spectral variations we have then performed a detailed spectral analysis of these stretches of data .\nstandard data reduction was performed using the saxdas software package version 2.0 following fiore , guainazzi & grandi @xcite .\nspectral fits were performed using the xspec 11.2.0 software package and public response matrices as from the 1999 december release .\nthe pds data were reduced using the `` variable risetime threshold '' technique to reject particle background @xcite .\nlecs and mecs spectra were rebinned following two criteria : a ) to sample the energy resolution of the detectors with three channels at all energies whenever possible , and to obtain at least 20 counts per energy channel .\nconstant factors have been introduced in the fitting models in order to take into account the intercalibration systematics between the instruments @xcite .\nthe model adopted for the analysis has the following components : 1 . a power - law with an exponential high - energy cut - off ; 2 . a neutral compton reflection component ( modelled with pexrav @xcite ) ; 3 . a narrow iron k@xmath6 emission line with energy set to 6.38 kev in the observer frame ; 4 . 
two components for the soft excess emission : a thermal component , and a power law , reflection component\ntwo components for the intrinsic absorber , both assumed neutral : @xmath7 which covers the nucleus totally ; @xmath8 which covers the nucleus only in part , with a covering factor cv @xcite .\nthe absorber is completed with the line of sight column density through the galaxy ( n@[email protected]@xmath11 @xmath4@xcite ) .\nfigures 3 and 4 illustrate the results for the two observations . in the 2001 december , 18 observation , we find that from a to b to c , both absorbers and the photon index do not show statistically significative changes . from c to m the two absorbers n@xmath12 and n@xmath13 change at a confidence level @xmath14 by a factor of 2 and 3.3 respectively , while the photon index is constant at a confidence level @xmath15 .\nif the covering factor cv is kept constant throughout the fit the amplitude of the variations of n@xmath12 and n@xmath13 is reduced to a factor 1.4 and 2 respectively .\nthe variations occur on a timescale of 100 - 150 ksec . in the 1996 july , 6 observation\n, we find that from a to b both absorbers undergo changes at a confidence level @xmath16 , while from a to c the absorbers change at a confidence level @xmath17 . from a to b to c , the photon index is again statistically consistent with a constant .\nthe amplitude of the variation of n@xmath12 and n@xmath13 is a factor 6 and 5 respectively .\nif the covering factor cv is kept constant throughout the fit the amplitude of the variations of n@xmath12 and n@xmath13 is reduced to a factor 4 and 3 respectively .\nthe variations occur on a timescale of @xmath18 ksec from a to b and @xmath19 ksec from a to c.\nwe have performed a detailed time - resolved spectral study of two long bepposax observations of the seyfert galaxy ngc4151 .\nwe find strong evidences for spectral variability which we model in terms of variations of two absorbing screens .\nthe timescales of these variations are between 40 ksec and 200 ksec , 10 - 100 times shorter than the typical timescales found by risaliti et al.@xcite in a sample of compton thin seyfert 2 galaxies .\nif we associate these timescales to a crossing time , the resulting linear size would be of about one light day , which is the size of the innermost part of the broad line region , as inferred from reverberation mapping in ngc 4151 .\nthis interpretation is supported by the argument , used by risaliti et al.@xcite , which assume that the absorbing medium might be made up of spherical clouds moving with keplerian velocities around the central black hole .\nthis requires that the cloud gas density is of the order of 10@xmath20 @xmath21 , typical of broad line region clouds @xcite . 9 antonucci , r. 1993 ara&a , 31 , 47 risaliti , g. , elvis , m. , nicastro , f. 2002 apj , 571 , 234 magdziarz , p. & zdziarski , a. 1995 , mnras , 273 , 837 holt , s. s. , mushotzky , r. f. , boldt , e. a. et al .\n1980 , apj , 241l , 13 perola , g.c .\n, piro , l. , altamore , a. et al .\n1986 , apj , 306 , 508 fiore , f. , perola , g. c. , romano , m. 1990 , mnras , 243 , 522 weaver , k.a . , yaqoob , t. , holt , s.s .\n1994apj,436l , 27 piro , l. , nicastro , f. , feroci , m. 1998 , axrs.symp , 481 wang , j .- x .\n, zhou , y .- y . ,\nwang , t .-\n1999 , apj , 523l , 129 zdziarski , a. , leighly , k.m . , matsuoka , m. et al .\n2002 , apj , 573 , 505 fiore , f. , guainazzi , m. & grandi , p. 
1999\n, handbook for bepposax nfi spectral analysis , ftp://ftp.asdc.asi.it/pub/sax/doc/ + software_docs / saxabc_v1.2.ps.gz + or + http://heasarc.gsfc.nasa.gov/docs/sax/abc/ + saxabc/ murphy , e.m . , lockman , f.j . , laor , a. et al . 1996 apjs , 105 , 369 blandford , r. , netzer , a. and woltjer , l. 1990 : active galactic nuclei , springer - verlag edt ." }, { "role": "assistant", "content": "we have analyzed two long bepposax observations of the bright seyfert galaxy ngc 4151 , searching for short timescale ( @xmath010 - 200 ksec ) x - ray spectral variability . the light curve of a softness ratio , chosen as most sensitive to pinpoint changes of the column density of the absorbing gas along the line of sight , shows significant variations . \n we try to model these variations by performing a detailed , time resolved , spectral analysis . \n we find significant , large ( factors of 1.5 - 6 ) variations of the absorber column densities on time scales of 40 - 200 ksec . \n these values are 10 - 100 times shorter than those found by risaliti et al . \n 2002 in a sample of seyfert 2 galaxies , and provide strong constraints on the geometry of the obscuring medium ." } ]
science.scicite_classification
science.scicite_classification.3801
[ { "role": "user", "content": "You will be presented with a citation segment from the section of a research paper. Classify the intent behind this citation by choosing from one of the following categories:\n\n- Method: Cites the paper for its methodology or procedure.\n- Background: Cites the paper to provide background information.\n- Result: Cites the paper for its findings or results.\n\nYour answer should be a single word from the following list of options: [\"Method\", \"Background\", \"Result\"]. Do not include any other text in your response.\n\nCitation sentence:\n\nRecently, a novel method called minimal distance maximization (MDM) [12,13] was developed which does not require any weighting function." }, { "role": "assistant", "content": "Method" } ]
science.healthver_entailment
science.healthver_entailment.5003
[ { "role": "user", "content": "You will be shown a claim about public health, and the abstract of a biomedical research paper. Each sentence from the abstract will be on a separate line. Your task is to return a JSON object with two fields:\n\n- \"verdict\": The fact-checking verdict. If the information in the abstract supports the claim, write \"SUPPORT\". If the abstract contradicts the claim, write \"CONTRADICT\". If the abstract does not provide enough information to arrive at a verdict, write \"NEI\" (for \"not enough information\").\n- \"evidence\": An array of sentences providing evidence for the verdict. Please copy all relevant sentences verbatim from the abstract. If the verdict was \"NEI\", then return an empty array.\n\nFor instance, if the model were given the claim \"wearing masks can prevent the spread of COVID\", the output might be\n{\n \"verdict\": \"SUPPORT\",\n \"evidence\": [\"Our findings indicate that mass mask-wearing reduces the transmission rate for COVID-19.\"]\n}\n\nClaim: the imune system alone can not fight against covid-19\n\nAbstract:\nThe novel coronavirus Covid-19 follows transmission route and clinical presentation of all community-acquired coronaviruses.\nInstead, the rate of transmission is significative higher, with a faster spread of the virus responsible of the worldwide outbreak and a significative higher mortality rate due to the development of a severe lung injury.\nMost noteworthy is the distribution of death rate among age groups.\nChildren and younger people are almost protected from severe clinical presentation.\nPossible explanation of this phenomenon could be the ability of past vaccinations (especially tetanic, diphtheria toxoids and inactivated bacteria as pertussis) to stimulate immune system and to generate a scattered immunity against non-self antigens in transit, as coronaviruses and other community-circulating viruses and make immune system readier to develop specific immunity against Covid-19.\nThe first support to this hypothesis is the distribution of mortality rate during historical pandemics (\"Spanish flu\" 1918, \"Asian flu\" 1956 and \"the Hong Kong flu\" 1968) among age groups before and after the introduction of vaccines.\nThe immunological support to the hypothesis derives from recent studies about immunotherapy for malignancies, which propose the use of oncolytic vaccines combined with toxoids in order to exploit CD4 + memory T cell recall in supporting the ongoing anti-tumour response.\nAccording to this hypothesis vaccine formulations (tetanus, diphtheria, Bordetella pertussis) could be re-administrate after the first contact with Covid-19, better before the development of respiratory severe illness and of course before full-blown ARDS (Acute Respiratory Distress Syndrome).\nThe CD4 + memory exploiting could help immune system to recall immunity of already know antigens against coronaviruses, avoiding or limiting \"lung crash\" until virus specific immunity develops and making it faster and prolonged.\nFinally, this administration could be helpful not only in already infected patients, but also before infection.\nIn fact, people could have an immune system more ready when the contact with the Covid-19 will occur." }, { "role": "assistant", "content": "{\"verdict\": \"NEI\", \"evidence\": []}" } ]
science.scireviewgen_multidoc_summarization
science.scireviewgen_multidoc_summarization.7831
[ { "role": "user", "content": "You are presented with multiple paper abstracts, alongside the associated literature review and chapter titles. Given the provided information, you are going to do a literature review. Please generate a coherent literature review that synthesizes the content, draws connections between different works, highlights major findings, and identifies any gaps in the existing research.\n\nLiterature Review Title: A Survey of Methods to Leverage Monolingual Data in Low-resource Neural Machine Translation \nChapter Title: B. Separate Language Model \n1. Abstract of Cited Paper (BIB001): Neural Machine Translation (NMT) typically leverages monolingual data in training through backtranslation. We investigate an alternative simple method to use monolingual data for NMT training: We combine the scores of a pre-trained and fixed language model (LM) with the scores of a translation model (TM) while the TM is trained from scratch. To achieve that, we train the translation model to predict the residual probability of the training data added to the prediction of the LM. This enables the TM to focus its capacity on modeling the source sentence since it can rely on the LM for fluency. We show that our method outperforms previous approaches to integrate LMs into NMT while the architecture is simpler as it does not require gating networks to balance TM and LM. We observe gains of between +0.24 and +2.36 BLEU on all four test sets (English-Turkish, Turkish-English, Estonian-English, Xhosa-English) on top of ensembles without LM. We compare our method with alternative ways to utilize monolingual data such as backtranslation, shallow fusion, and cold fusion.\n\nLiterature Review:" }, { "role": "assistant", "content": "In this subcategory, we will describe methods which use an additional separate target-side language model (LM). LMs are pre-trained on target-side monolingual corpus to predict next word probability distribution based on given previous words P LM (y t |y t−1 1 ). The main idea of these methods is to merge LM and translation model (TM) in the prediction step. This means that the probability distribution of next word given previously predicted words and input sentence P (y t |y t−1 1 , x) is calculated using output logits (or probability distribution) of TM and output logits of LM. The expected effect of additional target-side LM is that it should help TM to generate grammatically correct sentences. As LM is independent of TM, there're no restrictions on its architecture: it can be e.g., n-gram based feed-forward LM BIB001 or RNN-based LM . Shallow Fusion: Shallow Fusion [1] is a technique to merge separately pre-trained LM and TM. Each time step (step of prediction of next word) TM and LM propose scores of next possible words based on previously predicted words. Then, the scores of TM are summed with the scores of LM multiplied by hyper-parameter β, which requires additional fine-tuning on validation data, to control LM influence. The word with the highest score is selected to be the next word in a sequence. To be more accurate, beam search is applied, where top N most probable sequences are carried until the end of prediction and the most probable one (which has the highest score) is chosen as a result. The evaluation is provided for Turkish→English translation direction, and all experiments used RNN-based TM and LM. 160K parallel sentence pairs and 4M English monolingual sentences were used for training. 
Unfortunately, no improvements were obtained using this method, it gives almost the same results as a baseline without LM. PostNorm: This method uses the same idea of merging technique from the previous method with some modifications. The main difference here is that LM is trained first, then merged with TM, then, fixing LM's parameters, TM is trained. This technique is inspired by Cold Fusion . This LM and TM merging technique is called PostNorm and was proposed by Stahlberg et al. BIB002 . First, the output of projection layer S T M (y t |y t−1 1 , x) of TM is normalized by softmax and the probability distribution P T M (y t |y t−1 1 , x) is obtained. Then, it is component-wise multiplied by probability distribution of LM P LM (y t |y t−1 1 ). To make it a valid probability distribution, additional normalization using softmax function is applied. In contrast to Shallow Fusion, this method requires normalization after the summation of log probabilities (multiplication of probabilities) and training of TM after merging with LM. The method was evaluated on Turkish↔English language pair, RNN-based TM and LM were used in experiments. 207K parallel sentence pairs, and 47M English and 3M Turkish monolingual sentences were used for training. The method gives +0.71 BLEU for English→Turkish and +0.43 BLEU for Turkish→English directions over baselines without LM. For higher-resourced Xhosa→English pair with 739K parallel sentences, it achieves +2.36 BLEU." } ]
science.scierc_re
science.scierc_re.19
[ { "role": "user", "content": "Below is an abstract from a computer science research paper. Given this abstract, your task is to extract all unique relationships between entities of the following types:\n\n- \"USED-FOR\": For instance, \"Our method models user proficiency\" -> [\"method\", \"user proficiency\", \"USED-FOR\"].\n- \"EVALUATE-FOR\": For instance, \"We evaluate our method using F1-measure\" -> [\"F1-measure\", \"method\", \"EVALUATE-FOR\"].\n- \"FEATURE-OF\": For instance, \"prior knowledge of the model\" -> [\"prior knowledge\", \"model\", \"FEATURE-OF\"].\n- \"HYPONYM-OF\": For instance, \"TUIT is a software library\" -> [\"TUIT\", \"software library\", \"HYPONYM-OF\"].\n- \"PART-OF\": For instance, \"We incorporate NLU module into the system.\" -> [\"NLU module\", \"system\", \"PART-OF\"].\n- \"COMPARE\": For instance, \"Unlike the quantitative prior, the qualitative prior is often ignored\" -> [\"quantitative prior\", \"qualitative prior\", \"COMPARE\"].\n- \"CONJUNCTION\": For instance: \"obtained from human expert or knowledge base\" -> [\"human expert\", \"knowledge base\", \"CONJUNCTION\"].\n\nFormat your output as a json array. Each entry in the array should express a single relation, formatted as [\"<Entity_A>\", \"<Entity_B>\", \"<RELATION_A_B>\"]. An example output might look like:\n\n[[\"neural networks\", \"classification\", \"USED-FOR\"], [\"neuron\", \"neural network\", \"PART-OF\"]].\n\nIf no relations are found, please return an empty array [].\n\nAbstract:\n\nWe propose a detection method for orthographic variants caused by transliteration in a large corpus. The method employs two similarities. One is string similarity based on edit distance. The other is contextual similarity by a vector space model. Experimental results show that the method performed a 0.889 F-measure in an open test." }, { "role": "assistant", "content": "[[\"edit distance\", \"string similarity\", \"USED-FOR\"], [\"vector space model\", \"contextual similarity\", \"USED-FOR\"], [\"F-measure\", \"detection method\", \"EVALUATE-FOR\"], [\"similarities\", \"detection method\", \"USED-FOR\"], [\"detection method\", \"orthographic variants\", \"USED-FOR\"]]" } ]
science.data_reco_mcq_sc
science.data_reco_mcq_sc.1393
[ { "role": "user", "content": "Given a short query with keyphrases about a research idea in 2020 in the domain of artificial intelligence, recommend the relevant datasets to validate the research idea from options provided. A short description of the datasets is also provided to help you.\n\nQuery: I want to improve the state-of-the-art performance for speech enhancement on a dataset of children\n\nKeyphrases: speech enhancement audio\n\nDataset description:\nDialogue State Tracking Challenge: The Dialog State Tracking Challenges 2 & 3 (DSTC2&3) were research challenge focused on improving the state of the art in tracking the state of spoken dialog systems. State tracking, sometimes called belief tracking, refers to accurately estimating the user's goal as a dialog progresses.\nMLMA Hate Speech: A new multilingual multi-aspect hate speech analysis dataset and use it to test the current state-of-the-art multilingual multitask learning approaches.\nAudioSet: Audioset is an audio event dataset, which consists of over 2M human-annotated 10-second video clips. These clips are collected from YouTube, therefore many of which are in poor-quality and contain multiple sound-sources.\nSlowFlow: SlowFlow is an optical flow dataset collected by applying Slow Flow technique on data from a high-speed camera and analyzing the performance of the state-of-the-art in optical flow under various levels of motion blur.\n\nOptions:- A: Dialogue State Tracking Challenge B: MLMA Hate Speech C: AudioSet D: SlowFlow" }, { "role": "assistant", "content": "AudioSet" } ]
science.chemprot_ner
science.chemprot_ner.323
[ { "role": "user", "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Protein\", \"Chemical\"].\n\nPlease return the output as a JSON object of the format: {\"Chemical\" : [\"Dexamethasone\", ...], \"Protein\" : [\"BRCA-1\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nThe exposure of highly toxic aconitine does not significantly impact the activity and expression of cytochrome P450 3A in rats determined by a novel ultra performance liquid chromatography-tandem mass spectrometric method of a specific probe buspirone.\nAconitum species are widely used to treat rheumatism, cardiovascular diseases, and tumors in China and other Asian countries. The herbs are always used with drugs such as paclitaxel. Aconitine (AC) is one of the main bioactive/high-toxic alkaloids of Aconitum roots. AC is metabolized by cytochrome P450 (CYP) 3A. However, whether AC inhibits/induces CYP3A, which causes drug-drug interaction (DDI) is unclear. Our study aims to explore the potent effects of AC, as a marker component of Aconitum, on CYP3A using the probe buspirone in rats. The effects of oral AC on pharmacokinetics of buspirone were evaluated. CYP3A activity and protein levels in rat liver microsomes pretreated with oral AC were also measured using in vitro buspirone metabolism and Western blot. Buspirone and its major metabolites 1-(2-pyrimidinyl)piperazine and 6'-hydroxybuspirone were determined using a newly validated UPLC-MS/MS method. Single dose and 7-day AC administration at 0.125mg/kg had no effect on CYP3A activity since no change in the formation of 1-(2-pyrimidinyl)piperazine and 6'-hydroxybuspirone. CYP3A activity and protein levels in liver microsomes were also not affected by 7-day AC pretreatment at 0.125mg/kg. Therefore, AC neither inhibits nor induces CYP3A in rats, indicating AC does not cause CYP3A-related DDI in the liver." }, { "role": "assistant", "content": "{\"Chemical\": [\"6\\u0027-hydroxybuspirone\", \"Buspirone\", \"paclitaxel\", \"aconitine\", \"buspirone\", \"Aconitine\", \"1-(2-pyrimidinyl)piperazine\"], \"Protein\": [\"cytochrome P450 (CYP) 3A\", \"CYP3A\", \"cytochrome P450 3A\"]}" } ]
science.pubmedqa_qa
science.pubmedqa_qa.303
[ { "role": "user", "content": "You will be shown an abstract from a research paper, and a corresponding research question. Determine the answer to the research question based on the context provided. Your answer should be one of: \"YES\", \"NO\", or \"MAYBE\".\n\nContext:\n(PURPOSE) To assess whether eligibility to an adjuvant chemotherapy protocol in itself represents a good prognostic factor after radical cystectomy for bladder cancer.\n(PATIENTS AND METHODS) Between April 1984 and May 1989, our institution entered 35 patients with invasive bladder cancer into the Swiss Group for Clinical and Epidemiological Cancer Research (SAKK) study 09/84. They were randomly assigned to either observation or three postoperative courses of cisplatin monotherapy after cystectomy. This study had a negative result. The outcome of these 35 patients (protocol group) was compared with an age- and tumor-stage-matched cohort (matched group; n = 35) who also underwent cystectomy during the same period, but were not entered into the SAKK study, as well as the remaining 57 patients treated during the study period for the same indication (remaining group).\n(RESULTS) Median overall survival decreased from 76.3 months in the protocol group to 52.1 months in the matched group and to 20.3 months in the remaining group. The respective times of median recurrence-free survival were 67.2, 16.0, and 9.4 months. Tumor progression occurred in 46% of the protocol group compared with 69% in the matched group and 65% in the remaining group (P<.05). Cancer-related death was noted in 40% of the protocol group, 57% in the matched group, and 56% in the remaining group.\n\nQuestion:\nIs eligibility for a chemotherapy protocol a good prognostic factor for invasive bladder cancer after radical cystectomy?\n\nAnswer:" }, { "role": "assistant", "content": "YES" } ]
science.scientific_papers_summarization_single_doc_arxiv
science.scientific_papers_summarization_single_doc_arxiv.33164
[ { "role": "user", "content": "You will be presented with the full text of a science research paper. Please write an abstract for this paper. Your response should include the abstract and no additional text.\n\nPaper text:\nquark masses are fundamental parameters of the qcd lagrangian that can not be measured directly in the experiments , since free quarks are not physical states , and that can not be predicted by qcd using only theoretical considerations . up to now\nqcd sum rules and lattice simulations are the only non - perturbative techniques able to determine the absolute values of the light quark masses .\nlattice technique does not require any additional model parameter and is the only procedure that can be systematically improved .\nthe usual on - shell mass definition can not be used for quark masses and their values depend on the theoretical definition adopted . in the following\nwe will give our final results for the quark masses defined in the @xmath1 scheme .\n@xmath8 is defined by the perturbative expansion of the quark propagator renormalized with the @xmath1 prescription @xcite and it depends on the renormalization scale @xmath9 .\nthe symbols used in the following are fully defined in @xcite .\n+ the `` bare '' lattice quark mass can be determined directly from lattice simulations by fixing the mass of a hadron containing a quark with the same flavour to its experimental value . + \n[ tab : latparams ] lllllllllll + & c60a&c60b&c60c&c60d&w60&c62a&w62a&w62b&c64&w64 + @xmath10 & 6.0 & 6.0 & 6.0 & 6.0 & 6.0&6.2 & 6.2 & 6.2 & 6.4&6.4 + action & sw & sw & sw & sw & wil&sw & wil & wil & sw & wil + # confs&490 & 600 & 200 & 200 & 320&250 & 250 & 110 & 400&400 + volume & @xmath11&@xmath12&@xmath13 & @xmath14&@xmath11&@xmath15 & @xmath15&@xmath15&@xmath15 & @xmath15 + + + & c60z&w60z&c62z&w62z&c64z&w64z & + & 6.0&6.0&6.2&6.2&6.4&6.4 & + & sw & wil&sw & wil&sw & wil & + & 100&100&180&100&60 & 60 & + & @xmath14&@xmath14 & @xmath14&@xmath14 & @xmath16&@xmath16 & + quark masses can also be extracted from the axial ward identity .\nclose to the chiral limit and neglecting terms of @xmath5 , the ward identity can be written as @xmath17 where @xmath18 , @xmath19 and @xmath20 are the renormalization constants of the axial , pseudoscalar and scalar densities and @xmath21 is defined in @xcite .\nthe standard perturbative approach uses the lattice and the continuum perturbation theory to connect the `` bare '' lattice quark mass to the @xmath8 @xcite .\nthe scale @xmath22 , where @xmath23 is the lattice spacing , of our simulations is @xmath24 gev . at these scales\nwe expect small non - perturbative effects .\nhowever the `` tadpole '' diagrams , which are present in the lattice perturbation theory , can give raise to large perturbative corrections and then to large uncertainties in the matching procedure at values of @xmath25 .\n+ the non - perturbative renormalization ( np ) techniques eliminate these uncertainties @xcite .\n@xmath18 , @xmath19 and @xmath20 can be calculated by imposing the renormalization conditions , described in the next section , on the quark states of momentum @xmath26 and in the landau gauge@xcite .\nthe quark mass in the @xmath1 scheme is then defined as @xmath27\\cdot \\nonumber\\\\ & & \\hspace{-.7 cm}\\frac{z_a}{z_p^{ri}(\\mu ' a)}\\rho(a)\\end{aligned}\\ ] ] where @xmath28\\end{aligned}\\ ] ] and , for large time separations , @xmath29 .quark masses from the spectroscopy in mev .\n@xmath1 masses are at @xmath30 gev . 
[ cols=\"<,<,<,<,<\",options=\"header \" , ] this procedure works if @xmath9 satisfies the condition @xmath31 to avoid large higher - order perturbative corrections and discretization errors . in the figure [ fig.1 ] we report the matrix element of the renormalized pseudoscalar operator in the chiral limit as a function of the renormalization scale for the runs we have analysed .\ndata show that the discretization errors are within the statistical errors in the range @xmath32 where the np - renormalization is applied to renormalize the quark masses .\nall details of np - renormalization will be given in a forthcoming paper @xcite .\nwe have calculated the quark masses from the meson spectroscopy and from the ward identity using different sets of quenched data with @xmath2 , @xmath33 and @xmath34 and using either the wilson or the `` improved '' sw - clover action .\nthe parameters used in each simulation are listed in table 1 and the main results we have obtained are reported in tables [ tab : qmas_spect ] and [ tab : qmas_wi ] .\nthe data at @xmath4 have been used only for an exploratory study .\nthe physical volume and the time extension of the lattice are too small to be considered reliable .\n+ the @xmath1 mass values in table [ tab : qmas_wi ] show that the non - perturbative approach allows a determination of the quark masses with a smaller error than the standard perturbative approach .\nin the @xmath10 range studied , there is _ no _ statistical evidence for an `` @xmath23 '' dependence of the quark masses for the clover action . for the wilson action , a mild tendency in the quark masses to decrease with increasing @xmath10 exists but\nan extrapolation to the continuum limit is not reliable .\nthe @xmath1 mass values in tables [ tab : qmas_spect ] and [ tab : qmas_wi ] show a very good agreement between the quark masses extracted from the spectroscopy and from the ward identity using the non perturbative determinations of the renormalization constants , while the same comparison using the perturbative values of @xmath18 and @xmath19 is much poorer @xcite .\nwe believe that the best estimates for the strange and charm quark masses are obtained from the non - perturbative approach using wilson and clover data at @xmath2 and @xmath3 .\nour best results are @xmath6 and @xmath7 .\nthe first error is the statistical one and the second is the systematic error estimated extracting the quark masses from different mesons @xcite .\nthe values of quark masses we have obtained are in good agreement with previous determinations @xcite . 9 c.r .\nallton et al .\nv. gimnez et al . , in preparation .\nm. bochicchio et al .\ng. martinelli et al .\nm lscher , these proceedings .\nt@xmath35l collaboration ( n. eicker et al . )\nwork in progress and hep - lat/9707004 .\nr. gupta and t. bhattacharya , these proceedings and phys .\nrev d 55 ( 1997 ) 7203 .\nallton et .\nnucl . phys.b489(1997)427 .\nv. lubicz , hep - ph/9707374 . n. eicker at al . , hep - lat/9704019 ." }, { "role": "assistant", "content": "we present results of a high statistics study ( @xmath0 configurations ) of the quark masses in the @xmath1 scheme from lattice qcd in the quenched approximation at @xmath2 , @xmath3 and @xmath4 using both the wilson and the tree - level improved sw - clover fermion action . \n we extract quark masses from the meson spectroscopy and from the axial ward identity using non - perturbative values of the renormalization constants . 
\n we compare the results obtained with the two methods and we study the @xmath5 dependence of the quark masses for both actions . \n our best results are @xmath6 and @xmath7 ." } ]
science.bc7_litcovid_topic_classification
science.bc7_litcovid_topic_classification.22928
[ { "role": "user", "content": "You are presented with a paragraph from a COVID-19 literature. Classify the topics of this piece of text by choosing from the following categories:\n\n- Epidemic Forecasting: The content relates to predictions about the spread of COVID-19.\n- Treatment: The paragraph discusses therapeutic methods or medication for COVID-19.\n- Prevention: The content covers measures to prevent the spread of COVID-19.\n- Mechanism: The paragraph explains the biological mechanisms of the virus.\n- Case Report: The content is a report of a COVID-19 case.\n- Transmission: The paragraph discusses how COVID-19 is transmitted.\n- Diagnosis: The content is about diagnosing COVID-19 in individuals.\n\nYour classification should be provided as an element within a JSON array. The array should be formatted with topics enclosed in double quotes and separated by commas, following the JSON format, for example: [\"Topic1\", \"Topic2\", ...]. Include all applicable topics within the array. Your output should be a valid JSON array, and no additional text should be included in your response.\n\nArticle:\n\nAvailable data suggest that the issue of CoViD-19 is particularly critical in patients with diabetes. In Italy, Internal Medicine (IM) wards have played a pivotal role in contrasting the spread of SARS-Cov2. During this pandemic, FADOI submitted a brief questionnaire to a group of its members acting as Head of IM units. Considering 38 units, 58% of beds dedicated to CoViD patients in CoViD Hospitals were in charge of IM, and globally cared for 6650 patients during a six-week period. Of these patients, 1264 (19%) had diabetes. Mortality rate in CoViD patients with or without diabetes were 20.5% and 14%, respectively (p < 0.001). Our survey seems to confirm that diabetes is a major comorbidity of CoViD-19, but it does not support an increased incidence of CoViD-19 infection in people with diabetes, if compared with the figures of patients with diabetes and hospitalized before the outbreak. On the other side, patients with diabetes appeared at a significantly increased risk of worse outcome. This finding underlines the importance of paying special attention to this patient population and its management." }, { "role": "assistant", "content": "[\"Treatment\", \"Prevention\"]" } ]
science.cdr_ner
science.cdr_ner.296
[ { "role": "user", "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Disease\", \"Chemical\"].\n\nPlease return the output as a JSON object of the format: {\"Chemical\" : [\"Naloxone\", ...], \"Disease\" : [\"Hypotensive\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nThe site of common side effects of sumatriptan. Atypical sensations following the use of subcutaneous sumatriptan are common, but of uncertain origin. They are almost always benign, but can be mistaken for a serious adverse event by the patient. Two patients are presented with tingling or burning sensations limited to areas of heat exposure or sunburn. In these individuals, side effects are most likely generated superficially in the skin." }, { "role": "assistant", "content": "{\"Chemical\": [\"sumatriptan\"], \"Disease\": [\"Atypical sensations\", \"sunburn\", \"tingling or burning sensations\"]}" } ]
science.chemsum_single_document_summarization
science.chemsum_single_document_summarization.23293
[ { "role": "user", "content": "You will be shown the title of a chemistry paper, together with sections excerpted from the body of the paper. The paper sections may include HTML tags like `<p>` and `</p>`; please ignore these. Your task is to write an abstract for the paper. Your response should include the abstract and no other text.\n\nTitle:\nPeptidoglycan Modifications Tune the Stability and Function of the Innate Immune Receptor Nod2\n\nPaper sections:\n\n<p>The innate immune system must recognize and destroy pathogenic bacteria while maintaining the proper balance of the trillion commensal bacteria. One method by which the innate immune system detects bacteria is via the peptidoglycan, a strong polymer of carbohydrates and peptides that provides protection to the vulnerable bacterial cell. Peptidoglycan fragments such as N-acetyl-muramyl dipeptide (MDP) (1, Figure 1) and N-glycolyl-MDP (2, Figure 1) activate an immune response.1 Compound 1 is a highly conserved component of peptidoglycan and is found in both Gram-positive and Gram-negative bacteria, while 2 is specific to the Gram-positive bacteria Mycobacterium avium, the bacterium commonly found in patients suffering from Crohn's disease, a debilitating, noncurable, inflammatory bowel disease. In order to evade an innate immune response, pathogenic bacteria such as Staphylococcus aureus, Listeria monocytogenes, and Bacillus cereus utilize acetylation and deacetylation strategies to avoid detection,3 generating fragments such as 2-amino-MDP (3, Figure 1). However, it is not known how specific immune receptors respond to these peptidoglycan modifications (Figure 1).</p><p>Recently, 1 was shown to directly interact with the human innate immune receptor protein Nod2,4 ultimately resulting in the activation of a signaling cascade known as the inflammatory response via the NF-κB and MAP kinase pathways,5 while 2 has been demonstrated to elicit a more robust Nod2-dependent inflammatory response via the same pathways.6 The differential effects of both the presence and identity of acyl substituents could have broad implications in immune recognition of bacteria. Here we investigate how acetylation/deacetylation of these peptidoglycan fragments modulates molecular recognition by Nod2.</p><p>To further study the interaction of N-substituted MDPs (Figure 1) and Nod2, we required a synthetic route for MDP that offers selective functionalization of the 2-position. Existing synthetic strategies require extensive carbohydrate manipulation,7 troublesome dipeptide couplings,8 or expensive carbohydrate starting materials,9 and to date, no existing syntheses allow for a facile, late-stage modification at the 2-position.10 Additionally, no syntheses exist for the deacetylated peptidoglycan derivatives. We thus examined a rapid and modular synthesis that would allow for alterations at the 2-position and provide a divergent route for a variety of peptidoglycan and peptidoglycan-like derivatives. 
We present here a synthetic route that (1) allows general exploration of function via acylation at the carbohydrate 2-position and (2) utilizes an azide, a sterically unobtrusive acid-and base-stable protecting group,11 to mask the amine in the synthetic approach.</p><p>The synthesis begins with the inexpensive and readily obtainable starting material glucosamine hydrochloride, which is easily converted to 5 in six high-yielding steps using notably a diazo transfer to install the azide 4 and a Koenigs-Knorr-type reaction to protect the anomeric position (Scheme 1).12 The coupling reactions of a dipeptide fragment to modularly protected muramic acid derivatives were low yielding; thus, we opted to perform the amino acid couplings sequentially. Global deprotection was performed using Pd/C and H2 to expose the 2-amino functionality and yield 3, an N-deacetylated naturally occurring MDP. Acetylation of the 2-amino position under mild conditions furnished the N-acetyl-MDP target 113 and the N-glycolyl-MDP target 210 in 10 linear steps from glucosamine hydrochloride with overall yields of 33% and 28%, respectively. The synthesis provides a simplified route to the minimal peptidoglycan fragments known to activate Nod2.6,14</p><p>Having established a synthesis that allowed for the late-stage modification of the 2-amino position and the potential to yield a variety of new, 2-amino-functionalized MDPs via the naturally occurring deacetylated MDP 3, we set out to produce other peptidoglycan-like derivatives (Table 1) to explore their effect on Nod2 signaling. Functionality was chosen for coupling that would produce MDP derivatives applicable for use as chemical tools to study biological systems from peptidoglycan biosynthesis to peptidoglycan-induced innate immune system activation. We synthesized 8 and 9, the acetylated and methyl ether versions of 2, to probe how altering the glycolyl substitution affects Nod2 recognition. Compound 10 inverses the electronics from the parent compound 1. Compounds 11 and 12 represent extended acetylations reminiscent of 1 and 2. Finally, 13–15 represent MDP derivatives that are applicable to click chemistry, fluorophore-based assays, and anchoring to avidin/streptavidin, respectively. Additionally, this strategy allows for installation of orthogonal modification, permitting the synthesis of peptidoglycan derivatives with chemical tools at locations other than the 2-position (Scheme 2). Finally, one can envision the installation of a 13C- or 14C-labeled acetate to yield the isotopes of 1, tools that are important in studying innate immunity, for NMR analysis and other biological studies.</p><p>To directly determine how modification at the 2-position affects innate immune system activation, the MDP derivatives 1–3, 8–17 were assayed for activity (Figure 2, Table 2) using a Nod2-dependent NF-κB reporter assay.16 All compounds were tested at 20 μM, as we have shown in an in vitro binding assay that Nod2 binds to 1 with a KD of 51 nM.4a In addition, select compounds (1–3, 12, 13) were tested at a range of concentrations (0.2, 2, 20, and 200 μM) to ensure NF-κB activation studies were done at an effective concentration (Figure S1). As expected, 1 and 2 activate NF-κB in a Nod2-dependent manner (Figure 2, Table 2). Compound 3, the N-deacetylated-MDP, did not activate NF-κB, thus suggesting that N-acylation is an important feature for recognition by the Nod2 signaling cascade. 
Most modifications are tolerated at the 2-position, with 9–13 activating Nod2 ~2-fold greater than the untreated control. The fluorescent derivatives 14, 16, and 17 do not activate, regardless of fluorophore position. In contrast, placement of a biotin molecule, containing four methylenes (separating the amide from the heterocycle), at the 2-position (15) still allows activation of NF-κB, albeit to a lesser extent than the other acylated derivatives. These data suggest that the installation of a fluorophore is possible with increased linker length. The data also show that there is significant NF-κB activation in HEK293T cells, which follows the same trend as the Nod2-induced effect. This could be from the endogenous Nod2 and/or other targets of the bacterial cell wall ligands. Overall these data demonstrate that modification of the 2-position can be used for optimization/modulation of the Nod2 response.</p><p>Nod2 is a transiently stable protein.18 It is well-known that ligand binding can affect the stability of a receptor.19 We considered that the interaction of peptidoglycan derivatives with Nod2 could alter the stability of the protein. To determine if ligand interaction changes the half-life of Nod2, we analyzed the naturally occurring MDPs 1–3 and a subset of the peptidoglycan derivatives with increased activity (12, 13) or structural similarity (9, 10) to 1 or 2. We made use of a tetracycline-induced Nod2-expressing cell line,18 an excellent match for endogenous Nod2 cell lines, and cycloheximide inhibition of protein synthesis to determine the half-life of Nod2 in the presence of the MDP derivatives.20 Half-life was determined by first-order decay according to literature precedent.21 Treatment with either 1 or 2 significantly increases the Nod2 half-life (Figure 3, Table 2). When Nod2 is not treated with a peptidoglycan derivative, the half-life is 7.1 ± 1.4 h. However, when cells are treated with 1, the half-life is 15.9 ± 4.1 h, and if cells are treated with 2, the half-life is 23.1 ± 2.2 h. The N-deacetylated derivative 3 provided no stabilization of Nod2 (Table 2). Notably, the N-glycolyl derivative 2 promotes greater stabilization of Nod2 than the N-acetyl derivative 1, suggesting that Nod2-dependent NF-κB activation and Nod2 stabilization are correlated for the naturally occurring compounds 1–3 (Table 2). As 3 is the only compound tested that contains an amine, which will be positively charged at cellular pH, we were concerned that cell permeability may be a factor. We analyzed the ability of 6-amino-MDP (18, Table 2), a MDP derivative that replaces the 6-primary-hydroxy group with an amine. This compound is known to activate NF-κB in a Nod2-dependent manner.11a Despite the presence of the charged ammonium ion, 18 increased the half-life of Nod2 by nearly 3-fold (Table 2), indicating that the presence of a free amine does not prevent cellular access.</p><p>Other N-substituted derivatives tested significantly stabilized Nod2 compared to the untreated control (Table 2), indicating that these modifications likely do not alter the ability of the ligand to bind and/or stabilize Nod2. The correlation between Nod2-dependent NF-κB activation and Nod2 stabilization was not as strong for these compounds as for the natural compounds 1–3, indicating that these compounds may have differential permeabilites and/or cell degradation pathways. It is important to note that 3 and 16, which did not activate, also did not stabilize (Table 2). 
The half-life data demonstrate that peptidoglycan derivatives are capable of altering the stability of the receptor, implying that ligand recognition is important for the signaling processes by initiating and maintaining the response. We note earlier studies using mouse Nod2 showed that treatment with very high concentrations (200 μM) of 1 led to the degradation of Nod222 when protein synthesis is not inhibited, suggesting there are other feedback mechanisms to control Nod2 levels in the cell. The data presented here demonstrate the ability to modulate Nod2 stability via natural or unnatural variation at the site of acetylation.</p><p>In their natural environments, virulent bacteria modify their peptidoglycan to evade detection by the innate immune system.3,23 Acetylation of the 6-position of the carbohydrate and deacetylation of the 2-postion prevents cell wall degradation by lysozyme.3c,d Interestingly, these data demonstrate that deacetylation of the 2-postion also eliminates the ability of the peptidoglycan fragment to signal through the Nod2-dependent pathway. Acetylation of the 6-position does not affect the ability to stimulate the Nod2-dependent immune response.11a These data suggests that the deacetylation strategy used by bacteria is two-pronged in that the modification (3) yields lysozyme-resistant peptidoglycan that does not elicit an immune response via the Nod2-dependent pathway.</p><p>In addition to its role in recognizing pathogenic bacteria, Nod2 is critical for maintaining the proper balance of commensal bacteria.24 Interestingly, Nod2 mutations that correlate with development of the inflammatory bowel disease, Crohn's disease, are unable to activate Nod2.5a Previous experiments have shown that these mutations are unstable compared to the wild-type protein, and signaling can be restored via stability of Nod2 by the chaperone protein, Hsp70.18 This report shows that simple peptidoglycan derivatives stabilize Nod2 and modifications at the 2-position alter the stability of Nod2. Thus, peptidoglycan derivatives produced by these methods may stabilize Crohn's-associated Nod2 mutations, providing therapeutic leads.</p><p>In conclusion, acylation of peptidoglycan derivatives at the 2-position allows for the tuning of Nod2 stability and NF-κB response. This new, rapid, tunable, high-yielding synthesis of peptidoglycan derivatives allowed the production of analogues to probe the substrate requirements for activation and stabilization effects on the protein. Nature produces ligands (1, 2) that are capable of stabilizing Nod2 and other ligands (3) that produce no stabilizing effect. Moreover, synthetic, novel peptidoglycan-like derivatives can activate and stabilize Nod2, informing on requirements for modulating the innate immune signaling cascade in response to bacterial cell wall fragments.</p>" }, { "role": "assistant", "content": "Natural modifications of peptidoglycan modulate the innate immune response. Peptidoglycan derivatives activate this response via the intracellular innate immune receptor, Nod2. To probe how these modifications alter the response, a novel and eficient carbohydrate synthesis was developed to allow for late-stage modification of the amine at the 2-position. Modification of the carbohydrate was found to be important for stabilizing Nod2 and generating the proper response. The native Nod2 ligands demonstrate a significant increase in the cellular stability of Nod2. 
Moreover, changing the identity of the natural ligands at the carbohydrate 2-position allows for the Nod2-dependent immune response to be either up-regulated or down-regulated. The ligand structure can be adjusted to tune the Nod2 response, suggesting that other innate immune receptors and their ligands could use a similar strategy." } ]
science.acl_arc_intent_classification
science.acl_arc_intent_classification.1158
[ { "role": "user", "content": "You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:\n- Background: provides context or foundational information related to the topic.\n- Extends: builds upon the cited work.\n- Uses: applies the methods or findings of the cited work.\n- Motivation: cites the work as inspiration or rationale for the research.\n- CompareOrContrast: compares or contrasts the cited work with others.\n- FutureWork: cites the work as a direction for future research.\n\nYour answer should be a single word from the following list of options: [\"Background\", \"Extends\", \"Uses\", \"Motivation\", \"CompareOrContrast\", \"FutureWork\"]. Do not include any other text in your response.\n\nSection Title:\nmethod\n\nContext before the citation:\nAs has been previously observed and exploited in the NLP literature (Pang and Lee, 2004; Agarwal and Bhattacharyya, 2005; Barzilay and Lapata, 2005), the above optimization function, unlike many others that have been proposed for graph or set partitioning, can be solved exactly in an provably efficient manner via methods for finding minimum cuts in graphs. A minimum-cost assignment thus represents an optimum way to classify the speech segments so that each one tends not to be put into the class that the individual-document classifier disprefers, but at the same time, highly associated speech segments tend not to be put in different classes. where c(s) is the “opposite” class from c(s).\n\nCitation Sentence:\nAs has been previously observed and exploited in the NLP literature ( Pang and Lee , 2004 ; Agarwal and Bhattacharyya , 2005 ; Barzilay and Lapata , 2005 ) , the above optimization function , unlike many others that have been proposed for graph or set partitioning , can be solved exactly in an provably efficient manner via methods for finding minimum cuts in graphs .\n\nContext after the citation:\nIn our view, the contribution of our work is the examination of new types of relationships, not the method by which such relationships are incorporated into the classification decision. \u0000\u0000\u0000\u0000" }, { "role": "assistant", "content": "CompareOrContrast" } ]
science.qasper_extractive_qa
science.qasper_extractive_qa.211.2
[ { "role": "user", "content": "You will be shown sections from a scientific research paper, together with a question about the paper. Your task is to answer the question based on the contents of the paper. Your response should be a `json` object with two fields:\n\n\"answer\": An array of strings extracted directly from the paper which, collectively, answer the question.\n\"evidence\": An array of strings. Each should be an excerpt from the paper, in which one or more of the extracted answers can be found.\n\nFor instance, for the question \"What baselines did the authors compare against?\", a sample response might be:\n\n{\n \"answer\": [\"BERT\", \"RoBERTa\"],\n \"evidence\": [\"In our experiments, we compare the performance of our model against BERT and RoBERTa.\"]\n}\n\nDo not include any text in your response other than the json.\n\nIf the question is unanswerable given the provided excerpts, respond with the single word \"null\".\n\nPaper: CRWIZ: A Framework for Crowdsourcing Real-Time Wizard-of-Oz Dialogues\n\nLarge corpora of task-based and open-domain conversational dialogues are hugely valuable in the field of data-driven dialogue systems. Crowdsourcing platforms, such as Amazon Mechanical Turk, have been an effective method for collecting such large amounts of data. However, difficulties arise when task-based dialogues require expert domain knowledge or rapid access to domain-relevant information, such as databases for tourism. This will become even more prevalent as dialogue systems become increasingly ambitious, expanding into tasks with high levels of complexity that require collaboration and forward planning, such as in our domain of emergency response. In this paper, we propose CRWIZ: a framework for collecting real-time Wizard of Oz dialogues through crowdsourcing for collaborative, complex tasks. This framework uses semi-guided dialogue to avoid interactions that breach procedures and processes only known to experts, while enabling the capture of a wide variety of interactions. The framework is available at https://github.com/JChiyah/crwiz\n\nSystem Overview\nThe CRWIZ Intelligent Wizard Interface resides on Slurk BIBREF23, an interaction server built for conducting dialogue experiments and data collections. Slurk handles the pairing of participants and provides a basic chat layout amongst other features. Refer to BIBREF23 for more information on the pairing of participants and the original chat layout. Our chat layout remains similar to Slurk with an important difference. In our scenario, we assign each new participant a role (Operator or Wizard) and, depending on this role, the participant sees different game instructions and chat layout schemes. These are illustrated in Figures FIGREF8 and FIGREF11, for the Operator and Wizard respectively. The main components are described in turn below: 1) The Intelligent Wizard Interface; 2) dialogue structure; and 3) system-changing actions.\nWizard interface: the interface shown to participants with the Wizard role provides possible actions on the right-hand side of the browser window. These actions could be verbal, such as sending a message, or non-verbal, such as switching on/off a button to activate a robot. 
Figure FIGREF11 shows this interface with several actions available to be used in our data collection.\nDialogue structure: we introduced structured dialogues through a Finite State Machine (FSM) that controls the current dialogue state and offers multiple suitable and relevant state transitions (actions) to the Wizard depending on the point in the interaction, the state of the world and the history. A graph of dialogue states, transitions and utterances is loaded when the system is initialised, and each chat room has its own dialogue state, which changes through actions.\nThe CRWIZ framework is domain-agnostic, but the data collected with it corresponds to the emergency response domain.\nSystem-changing actions: actions trigger transitions between the states in the FSM. We differentiate two types of actions:\nVerbal actions, such as the dialogue options available at that moment. The Wizard can select one of several predefined messages to send, or type their own message if needed. Free text messages do not change the dialogue state in the FSM, so it is important to minimise their use by providing enough dialogue options to the Wizard. Predefined messages can also trigger other associated events such as pop-ups or follow-up non-verbal actions.\nNon-verbal actions, such as commands to trigger events. These can take any form, but we used buttons to control robots in our data collection.\nSubmitting an action would change the dialogue state in the FSM, altering the set of actions available in the subsequent turn visible to the Wizard. Some dialogue options are only possible at certain states, in a similar way as to how non-verbal actions are enabled or disabled depending on the state. This is reflected in the Wizard interface.\nThe advantage of the CRWIZ framework is that it can easily be adapted to different domains and procedures by simply modifying the dialogue states loaded at initialisation. These files are in YAML format and have a simple structure that defines their NLG templates (the FSM will pick one template at random if there is more than one) and the states that it can transition to. Note, that some further modifications may be necessary if the scenario is a slot-filling dialogue requiring specific information at various stages.\nOnce the dialogue between the participants finishes, they receive a code in the chat, which can then be submitted to the crowdsourcing platform for payment. The CRWIZ framework generates a JSON file in its log folder with all the information regarding the dialogue, including messages sent, FSM transitions, world state at each action, etc. Automatic evaluation metrics and annotations are also appended such as number of turns per participant, time taken or if one of the participants disconnected. Paying the crowdworkers can be done by just checking that there is a dialogue file with the token that they entered.\n\nData Collection ::: Implementation\nThe dialogue structure for the Emergency Assistant (the Wizard) followed a dialogue flow previously used for the original lab-based Wizard-of-Oz study BIBREF4 but which was slightly modified and simplified for this crowdsourced data collection. 
In addition to the transitions that the FSM provides, there are other fixed dialogue options always available such as “Hold on, 2 seconds”, “Okay” or “Sorry, can you repeat that?” as a shortcut for commonly used dialogue acts, as well as the option to type a message freely.\nThe dialogue has several paths to reach the same states with varying levels of Operator control or engagement that enriched the heterogeneity of conversations. The Emergency Assistant dialogue options show various speaking styles, with a more assertive tone (“I am sending Husky 1 to east tower”) or others with more collaborative connotations (“Which robot do you want to send?” or “Husky 1 is available to send to east tower”). Refer to BIBREF4 for more details. Furthermore, neither participants were restricted in the number of messages that they could send and we did not require a balanced number of turns between them. However, there were several dialogue transitions that required an answer or authorisation from the Operator, so the FSM would lock the dialogue state until the condition was met. As mentioned earlier, the commands to control the robots are also transitions of the FSM, so they were not always available.\nThe Emergency Assistant interface contains a button to get a hint if they get stuck at any point of the conversation. This hint mechanism, when activated, highlights one of the possible dialogue options or robot buttons. This highlighted transition was based on the observed probability distribution of transitions from BIBREF4 to encourage more collaborative interaction than a single straight answer.\nAs in the real world, robot actions during the task were simulated to take a certain period of time, depending on the robot executing it and the action. The Emergency Assistant had the option to give status updates and progress reports during this period. Several dialogue options were available for the Emergency Assistant whilst waiting. The time that robots would take to perform actions was based on simulations run on a Digital Twin of the offshore facility implemented in Gazebo BIBREF26. Specifically, we pre-simulated typical robot actions, with the robot's progress and position reflected in the Wizard interface with up-to-date dialogue options for the Emergency Assistant. Once the robot signals the end of their action, additional updated dialogue options and actions are available for the Emergency Assistant. This simulation allowed us to collect dialogues with a realistic embedded world state.\n\nData Analysis ::: Subjective Data\nTable TABREF33 gives the results from the post-task survey. We observe, that subjective and objective task success are similar in that the dialogues that resolved the emergency were rated consistently higher than the rest.\nMann-Whitney-U one-tailed tests show that the scores of the Emergency Resolved Dialogues for Q1 and Q2 were significantly higher than the scores of the Emergency Not Resolved Dialogues at the 95% confidence level (Q1: $U = 1654.5$, $p < 0.0001$; Q2: $U = 2195$, $p = 0.009$, both $p < 0.05$). 
This indicates that effective collaboration and information ease are key to task completion in this setting.\nRegarding the qualitative data, one of the objectives of the Wizard-of-Oz technique was to make the participant believe that they are interacting with an automated agent and the qualitative feedback seemed to reflect this: “The AI in the game was not helpful at all [...]” or “I was talking to Fred a bot assistant, I had no other partner in the game“.\n\nData Analysis ::: Single vs Multiple Wizards\nIn Table TABREF28, we compare various metrics from the dialogues collected with crowdsourcing with the dialogues previously collected in a lab environment for a similar task. Most figures are comparable, except the number of emergency assistant turns (and consequently the total number of turns). To further understand these differences, we have first grouped the dialogue acts in four different broader types: Updates, Actions, Interactions and Requests, and computed the relative frequency of each of these types in both data collections. In addition, Figures FIGREF29 and FIGREF30 show the distribution of the most frequent dialogue acts in the different settings. It is visible that in the lab setting where the interaction was face-to-face with a robot, the Wizard used more Interaction dialogue acts (Table TABREF32). These were often used in context where the Wizard needed to hold the turn while looking for the appropriate prompt or waiting for the robot to arrive at the specified goal in the environment. On the other hand, in the crowdsourced data collection utterances, the situation updates were a more common choice while the assistant was waiting for the robot to travel to the specified goal in the environment.\nPerhaps not surprisingly, the data shows a medium strong positive correlation between task success and the number of Action type dialogue acts the Wizard performs, triggering events in the world leading to success ($R=0.475$). There is also a positive correlation between task success and the number of Request dialogue acts requesting confirmation before actions ($R=0.421$), e.g., “Which robot do you want to send?”. As Table 3 shows, these are relatively rare but perhaps reflect a level of collaboration needed to further the task to completion. Table TABREF40 shows one of the dialogues collected where the Emergency Assistant continuously engaged with the Operator through these types of dialogue acts.\nThe task success rate was also very different between the two set-ups. In experiments reported in BIBREF4, 96% of the dialogues led to the extinction of the fire whereas in the crowdsourcing setting only 9.66% achieved the same goal. In the crowdsourced setting, the robots were slower moving at realistic speeds unlike the lab setting. A higher bonus and more time for the task might lead to a higher task success rate.\n\nTable 1: Comparison of relevant recent works. In order, the columns refer to: the dataset and reference; if the dataset was generated using Wizard-of-Oz techniques; if there was a unique participant per role for the whole dialogue; if the dataset was crowdsourced; the type of interaction modality used; and finally, the type of task or domain that the dataset covers. † The participants were aware that the dialogue was authored by humans. ‡ The participants were volunteers without getting paid.\n\nFigure 1: Interface shown to those in the Operator role running on the Slurk interaction server. 
It has a similar layout to other chat applications with the chat window on the left and a field to send messages at the bottom. The right side is used to display additional information.\n\nFigure 2: Interface shown to those in the Emergency Assistant Wizard role running on the Slurk interaction server. The chat window is on the left, with the dialogue options and buttons to control the robots on the right. The chat here shows GIFs that appear to increase engagement and show game progress visually.\n\nFigure 5: Frequency of the top-10 Emergency Assistant dialogue acts in (Lopes et al., 2019).\n\nTable 2: Interaction features of the dialogues collected. We compare it with the results of the Wizard-of-Oz experiment in a controlled setting from (Lopes et al., 2019).\n\nTable 3: Distribution of the types of dialogue acts in the data collected with CRWIZ, compared with (Lopes et al., 2019).\n\nTable 4: Subjective ratings for the post-task survey reporting Mean, Median, Mode and Standard Deviation (SD). Scales were on a 7-point rating scale. “Dialogues Collected” refers to all the dialogues collected after filtering, whereas the other columns are for the dialogues that did not resolved the emergency (“Emergency Not Resolved Dialogues”) and those that did (“Emergency Resolved Dialogues”). Higher is better (Q3 reversed for this table). Highest numbers are bold. * indicates significant differences (p < 0.05, Mann-Whitney-U) between Emergency Resolved and Emergency Not Resolved dialogues.\n\nTable 5: Interaction between participants from one of the dialogues collected.\n\nQuestion: How does framework made sure that dialogue will not breach procedures?" }, { "role": "assistant", "content": "{\"answer\": [\"The Wizard can select one of several predefined messages to send, or type their own message if needed. Free text messages do not change the dialogue state in the FSM, so it is important to minimise their use by providing enough dialogue options to the Wizard. Predefined messages can also trigger other associated events such as pop-ups or follow-up non-verbal actions.\"], \"evidence\": [\"Wizard interface: the interface shown to participants with the Wizard role provides possible actions on the right-hand side of the browser window. These actions could be verbal, such as sending a message, or non-verbal, such as switching on/off a button to activate a robot. Figure FIGREF11 shows this interface with several actions available to be used in our data collection.\", \"Dialogue structure: we introduced structured dialogues through a Finite State Machine (FSM) that controls the current dialogue state and offers multiple suitable and relevant state transitions (actions) to the Wizard depending on the point in the interaction, the state of the world and the history. A graph of dialogue states, transitions and utterances is loaded when the system is initialised, and each chat room has its own dialogue state, which changes through actions.\", \"System-changing actions: actions trigger transitions between the states in the FSM. We differentiate two types of actions:\", \"Verbal actions, such as the dialogue options available at that moment. The Wizard can select one of several predefined messages to send, or type their own message if needed. Free text messages do not change the dialogue state in the FSM, so it is important to minimise their use by providing enough dialogue options to the Wizard. 
Predefined messages can also trigger other associated events such as pop-ups or follow-up non-verbal actions.\", \"Non-verbal actions, such as commands to trigger events. These can take any form, but we used buttons to control robots in our data collection.\", \"Submitting an action would change the dialogue state in the FSM, altering the set of actions available in the subsequent turn visible to the Wizard. Some dialogue options are only possible at certain states, in a similar way as to how non-verbal actions are enabled or disabled depending on the state. This is reflected in the Wizard interface.\"]}" } ]
science.mslr2022_ms2_multidoc_summarization
science.mslr2022_ms2_multidoc_summarization.7434
[ { "role": "user", "content": "You will be presented with a collection of medical randomized controlled trial (RCT) reports addressing a shared clinical question -- for instance, the efficacy of a medical intervention on a disease or condition. Given the titles and abstracts of these reports, your task is to generate the conclusion of the systematic review article synthesizing the key findings. Your conclusion should be a few sentences long. In your response, include only the conclusion and no other text. The RCT reports are below.\n\n\n\nTitle 1:\nCLU Genetic Variants and Cognitive Decline among Elderly and Oldest Old\nAbstract 1:\nThe CLU gene is one of the prime genetic c and i date s associated with Alzheimers disease . In the present study CLU genotypes and haplotypes were associated with baseline cognition and the rate of cognitive decline in two cohorts , the Danish 1905 birth cohort ( 93 years of age in 1998 ) and the Longitudinal Study of Aging Danish twins ( LSADT ) ( 73–83 year old twins in 1997 ) . Both Mini Mental State Examination ( MMSE ) and a cognitive composite score was attained up to six times for up to 10 years and analysed using r and om effects models and vital status . The rs11136000 T allele was associated with better baseline cognitive performance both in the LSADT ( effect on intercept : 0.41 95 % CI [ −0.04 ; 0.87 ] ) and the 1905 birth cohort ( effect on intercept : 0.28 95 % CI [ 0.01 ; 0.55 ] ) , although it did not reach significance in the LSADT cohort . However , the rs11136000 T allele was significantly associated with a steeper decline ( effect on slope : −0.06 95 % CI [ −0.11 ; −0.01 ] ) in the LSADT cohort , but not in the 1905 birth cohort . Haplotype analyses revealed that carriers of the common rs11136000 , rs1532278 and rs9331888 TTC haplotype ( 36 % ) in the CLU gene performed cognitively better than non-carriers in the 1905 birth cohort ( effect on intercept : 0.50 95 % CI [ 0.12 ; 0.91 ] ) and carriers of a rare TCC haplotype ( 1 % ) performed worse on the cognitive composite score ( effect on intercept : −1.51 95 % CI [ −2.92 ; −0.06 ] ) . The association between the TTC haplotype and better cognitive composite score was higher among those surviving past the age of 98 ( p = 0.014 ) , and among these the TTC haplotype was borderline associated with a steep decline ( effect on slope : −0.13 95 % CI [ −0.27 ; 0.00 ] ) . In summery CLU genetic variants associate with cognition in two cohorts , but the genetic effect of CLU seems to regress toward the mean when aging" }, { "role": "assistant", "content": "Our meta- analysis thus suggested that the PICALM rs3851179 polymorphism was associated with AD ; the APOE ε4 status did not influence the relationship ." } ]
science.nlmgene_ner
science.nlmgene_ner.18
[ { "role": "user", "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Gene\"].\n\nPlease return the output as a JSON object of the format: {\"Gene\": [\"IGF\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nStructural mechanisms determining inhibition of the collagen receptor DDR1 by selective and multi-targeted type II kinase inhibitors. The discoidin domain receptors (DDRs), DDR1 and DDR2, form a unique subfamily of receptor tyrosine kinases that are activated by the binding of triple-helical collagen. Excessive signaling by DDR1 and DDR2 has been linked to the progression of various human diseases, including fibrosis, atherosclerosis and cancer. We report the inhibition of these unusual receptor tyrosine kinases by the multi-targeted cancer drugs imatinib and ponatinib, as well as the selective type II inhibitor DDR1-IN-1. Ponatinib is identified as the more potent molecule, which inhibits DDR1 and DDR2 with an IC50 of 9nM. Co-crystal structures of human DDR1 reveal a DFG-out conformation (DFG, Asp-Phe-Gly) of the kinase domain that is stabilized by an unusual salt bridge between the activation loop and αD helix. Differences to Abelson kinase (ABL) are observed in the DDR1 P-loop, where a β-hairpin replaces the cage-like structure of ABL. P-loop residues in DDR1 that confer drug resistance in ABL are therefore accommodated outside the ATP pocket. Whereas imatinib and ponatinib bind potently to both the DDR and ABL kinases, the hydrophobic interactions of the ABL P-loop appear poorly satisfied by DDR1-IN-1 suggesting a structural basis for its DDR1 selectivity. Such inhibitors may have applications in clinical indications of DDR1 and DDR2 overexpression or mutation, including lung cancer." }, { "role": "assistant", "content": "{\"Gene\": [\"DDR2\", \"receptor tyrosine kinases\", \"Abelson kinase\", \"DDRs\", \"DDR1\", \"collagen\", \"DDR\", \"ABL\", \"discoidin domain receptors\", \"collagen receptor\"]}" } ]
science.bc7_litcovid_topic_classification
science.bc7_litcovid_topic_classification.23675
[ { "role": "user", "content": "You are presented with a paragraph from a COVID-19 literature. Classify the topics of this piece of text by choosing from the following categories:\n\n- Epidemic Forecasting: The content relates to predictions about the spread of COVID-19.\n- Treatment: The paragraph discusses therapeutic methods or medication for COVID-19.\n- Prevention: The content covers measures to prevent the spread of COVID-19.\n- Mechanism: The paragraph explains the biological mechanisms of the virus.\n- Case Report: The content is a report of a COVID-19 case.\n- Transmission: The paragraph discusses how COVID-19 is transmitted.\n- Diagnosis: The content is about diagnosing COVID-19 in individuals.\n\nYour classification should be provided as an element within a JSON array. The array should be formatted with topics enclosed in double quotes and separated by commas, following the JSON format, for example: [\"Topic1\", \"Topic2\", ...]. Include all applicable topics within the array. Your output should be a valid JSON array, and no additional text should be included in your response.\n\nArticle:\n\nThis article presents expert recommendations for assisting newborn children of mothers with suspected or diagnosed coronavirus disease 2019 </mac_aq>(COVID-19). The consensus was developed by five experts with an average of 20 years of experience in neonatal intensive care working at a reference university hospital in Brazil for the care of pregnant women and newborns with suspected or confirmed COVID-19. Despite the lack of scientific evidence regarding the potential for viral transmission to their fetus in pregnant mothers diagnosed with or suspected of COVID-19, it is important to elaborate the lines of care by specialists from hospitals caring for suspected and confirmed COVID-19 cases to guide multidisciplinary teams and families diagnosed with the disease or involved in the care of pregnant women and newborns in this context. Multidisciplinary teams must be attentive to the signs and symptoms of COVID-19 so that decision-making is oriented and assertive for the management of the mother and newborn in both the hospital setting and at hospital discharge." }, { "role": "assistant", "content": "[\"Prevention\"]" } ]
science.bc7_litcovid_topic_classification
science.bc7_litcovid_topic_classification.12549
[ { "role": "user", "content": "You are presented with a paragraph from a COVID-19 literature. Classify the topics of this piece of text by choosing from the following categories:\n\n- Epidemic Forecasting: The content relates to predictions about the spread of COVID-19.\n- Treatment: The paragraph discusses therapeutic methods or medication for COVID-19.\n- Prevention: The content covers measures to prevent the spread of COVID-19.\n- Mechanism: The paragraph explains the biological mechanisms of the virus.\n- Case Report: The content is a report of a COVID-19 case.\n- Transmission: The paragraph discusses how COVID-19 is transmitted.\n- Diagnosis: The content is about diagnosing COVID-19 in individuals.\n\nYour classification should be provided as an element within a JSON array. The array should be formatted with topics enclosed in double quotes and separated by commas, following the JSON format, for example: [\"Topic1\", \"Topic2\", ...]. Include all applicable topics within the array. Your output should be a valid JSON array, and no additional text should be included in your response.\n\nArticle:\n\nSevere acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection has become a serious public health threat to the whole world, and the number of infected is still rising dramatically at this moment. Several studies have confirmed that cytokine storms play a critical role in causing a case to worsen from mild to severe or critical. The current treatment for cytokine storms is limited, so the international medical community is focusing on a specific and effective remedy. Jaktinib hydrochloride is a broad spectrum JAK inhibitor. It can inhibit cytokine-induced immune activation by multiple mechanisms and also slow viral proliferation by inhibiting AAK1 without causing unacceptable toxicity. Jaktinib hydrochloride has great potential for the treatment of patients with coronavirus disease 2019 (COVID-19)." }, { "role": "assistant", "content": "[\"Treatment\", \"Mechanism\"]" } ]
science.acl_arc_intent_classification
science.acl_arc_intent_classification.359
[ { "role": "user", "content": "You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:\n- Background: provides context or foundational information related to the topic.\n- Extends: builds upon the cited work.\n- Uses: applies the methods or findings of the cited work.\n- Motivation: cites the work as inspiration or rationale for the research.\n- CompareOrContrast: compares or contrasts the cited work with others.\n- FutureWork: cites the work as a direction for future research.\n\nYour answer should be a single word from the following list of options: [\"Background\", \"Extends\", \"Uses\", \"Motivation\", \"CompareOrContrast\", \"FutureWork\"]. Do not include any other text in your response.\n\nSection Title:\nNone\n\nContext before the citation:\nA number of speech understanding systems have been developed during the past fifteen years (Barnett et al. 1980, Dixon and Martin 1979, Erman et al. 1980, Haton and Pierrel 1976, Lea 1980, Lowerre and Reddy 1980, Medress 1980, Reddy 1976, Walker 1978, and Wolf and Woods 1980).\n\nCitation Sentence:\nA number of speech understanding systems have been developed during the past fifteen years ( Barnett et al. 1980 , Dixon and Martin 1979 , Erman et al. 1980 , Haton and Pierrel 1976 , Lea 1980 , Lowerre and Reddy 1980 , Medress 1980 , Reddy 1976 , Walker 1978 , and Wolf and Woods 1980 ) .\n\nContext after the citation:\nMost of these efforts concentrated on the interaction between low level information sources from a speech recognizer and a natural language processor to discover the meaning of an input sentence. While some of these systems did exhibit expectation capabilities at the sentence level, none acquired dialogues of the kind described here for the sake of dialogue level expectation and error correction. A detailed description of the kinds of expectation mechanisms appearing in these systems appears in Fink (1983). The problem of handling ill-formed input has been studied by Carbonell and Hayes (1983), Granger (1983), Jensen et al. (1983), Kwasny and Sondheimer (1981), Riesbeck and Schank (1976), Thompson (1980), Weischedel and Black (1980), and Weischedel and Sondheimer (1983)." }, { "role": "assistant", "content": "CompareOrContrast" } ]
science.covidfact_entailment
science.covidfact_entailment.569
[ { "role": "user", "content": "You will be shown a claim related to the COVID-19 pandemic, and the abstract of a biomedical research paper. Each sentence from the abstract will be on a separate line. Your task is to return a JSON object with two fields:\n\n- \"verdict\": The fact-checking verdict. If the information in the abstract supports the claim, write \"SUPPORT\". If the abstract contradicts the claim, write \"CONTRADICT\".\n- \"evidence\": An array of sentences providing evidence for the verdict. Please copy all relevant sentences verbatim from the abstract.\n\nFor instance, if the model were given the claim \"the COVID vaccine is safe for healthy adults\", the output might be\n{\n \"verdict\": \"SUPPORT\",\n \"evidence\": [\"Our findings indicate that the risks of severe COVID vaccine side effects are low for healthy adults.\"]\n}\n\nClaim: Protein structure and sequence reanalysis of 2019-ncov genome refutes snakes as its intermediate host and the unique similarity between its spike protein insertions and hiv-1 .\n\nAbstract:\nAs the infection of 2019-nCoV coronavirus is quickly developing into a global pneumonia epidemic, the careful analysis of its transmission and cellular mechanisms is sorely needed.\nIn this Communication, we first analyzed two recent studies that concluded that snakes are the intermediate hosts of 2019-nCoV and that the 2019-nCoV spike protein insertions share a unique similarity to HIV-1.\nHowever, the reimplementation of the analyses, built on larger scale data sets using state-of-the-art bioinformatics methods and databases, presents clear evidence that rebuts these conclusions.\nNext, using metagenomic samples from Manis javanica, we assembled a draft genome of the 2019-nCoV-like coronavirus, which shows 73% coverage and 91% sequence identity to the 2019-nCoV genome.\nIn particular, the alignments of the spike surface glycoprotein receptor binding domain revealed four times more variations in the bat coronavirus RaTG13 than in the Manis coronavirus compared with 2019-nCoV, suggesting the pangolin as a missing link in the transmission of 2019-nCoV from bats to human." }, { "role": "assistant", "content": "{\"verdict\": \"SUPPORT\", \"evidence\": [\"In this Communication, we first analyzed two recent studies that concluded that snakes are the intermediate hosts of 2019-nCoV and that the 2019-nCoV spike protein insertions share a unique similarity to HIV-1.\"]}" } ]
science.healthver_entailment
science.healthver_entailment.3221
[ { "role": "user", "content": "You will be shown a claim about public health, and the abstract of a biomedical research paper. Each sentence from the abstract will be on a separate line. Your task is to return a JSON object with two fields:\n\n- \"verdict\": The fact-checking verdict. If the information in the abstract supports the claim, write \"SUPPORT\". If the abstract contradicts the claim, write \"CONTRADICT\". If the abstract does not provide enough information to arrive at a verdict, write \"NEI\" (for \"not enough information\").\n- \"evidence\": An array of sentences providing evidence for the verdict. Please copy all relevant sentences verbatim from the abstract. If the verdict was \"NEI\", then return an empty array.\n\nFor instance, if the model were given the claim \"wearing masks can prevent the spread of COVID\", the output might be\n{\n \"verdict\": \"SUPPORT\",\n \"evidence\": [\"Our findings indicate that mass mask-wearing reduces the transmission rate for COVID-19.\"]\n}\n\nClaim: The question at hand is whether such seasonal clues might apply to the brand-new coronavirus and the disease it causes (COVID-19)\n\nAbstract:\nThe undefendable outbreak of novel coronavirus (SARS-COV-2) lead to a global health emergency due to its higher transmission rate and longer symptomatic duration, created a health surge in a short time.\nSince Nov 2019 the outbreak in China, the virus is spreading exponentially everywhere.\nThe current study focuses on the relationship between environmental parameters and the growth rate of COVID-19.\nThe statistical analysis suggests that the temperature changes retarded the growth rate and found that -6.28{degrees}C and +14.51{degrees}C temperature is the favorable range for COVID-19 growth.\nGutenberg- Richter's relationship is used to estimate the mean daily rate of exceedance of confirmed cases concerning the change in temperature.\nTemperature is the most influential parameter that reduces the growth at the rate of 13-16 cases/day with a 1{degrees}C rise in temperature." }, { "role": "assistant", "content": "{\"verdict\": \"NEI\", \"evidence\": []}" } ]
science.pico_ner
science.pico_ner.3603
[ { "role": "user", "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Outcome\", \"Participant\", \"Intervention\"].\n\nPlease return the output as a JSON object of the format: {\"Participant\" : [\"patients with COPD\", ...], \"Intervention\" : [\"Atenolol\", ...], \"Outcome\" : [\"blood pressure\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nElective Induction of Labor Compared With Expectant Management of Nulliparous Women at 39 Weeks of Gestation : A Randomized Controlled Trial . OBJECTIVE To evaluate whether the elective induction of labor in nulliparous women with an unfavorable cervix affects the cesarean delivery rate . METHODS We conducted a randomized controlled trial at a tertiary care medical center . Nulliparous woman between 38 0/7 and 38 6/7 weeks of gestation who were least 18 years of age with a singleton gestation and a Bishop score of 5 or less were randomized to elective induction of labor or expectant management . The induction of labor group was induced within 1 week of enrollment but not before 39 0/7 weeks of gestation . The control group continued routine prenatal care with admission for labor or obstetric indication . The primary outcome was cesarean delivery . Assuming a 20 % rate in women in a control group , 80 % power , and a goal to detect a twofold increase to 40 % in the induction of labor group , 162 patients were needed . RESULTS From March 2010 to February 2014 , 82 patients were randomly allocated to induction of labor and 80 to expectant management . Baseline characteristics were similar between groups . The cesarean delivery rate in the induction of labor group was 30.5 % ( 25/82 ) compared with 17.7 % ( 14/79 ) in the expectant management group ( relative risk 1.72 , 95 % confidence interval 0.96-3.06 ) . CONCLUSION In nulliparous women with a Bishop score of 5 or less , elective induction after 39 0/7 weeks of gestation compared with expectant management of pregnancy did not double the rate of cesarean delivery . CLINICAL TRIAL REGISTRACTION ClinicalTrials.gov , www.clinicaltrials.gov , NCT01076062 . LEVEL OF EVIDENCE I ." }, { "role": "assistant", "content": "{\"Intervention\": [\"elective induction of labor\", \"Expectant Management of Nulliparous Women\", \"expectant management of pregnancy\", \"expectant management\", \"prenatal care with admission for labor\", \"obstetric indication\", \"induction of labor\", \"elective induction\", \"Elective Induction of Labor\"], \"Outcome\": [\"cesarean delivery rate\"], \"Participant\": [\"women\", \"Women\", \"82 patients\", \"18 years of age\", \"Gestation\", \"unfavorable cervix\"]}" } ]
science.anat_em_ner
science.anat_em_ner.368
[ { "role": "user", "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"AnatomicalEntity\"].\n\nPlease return the output as a JSON object of the format: {\"AnatomicalEntity\": [\"tumor cells\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nHematogenous metastasis in gastric cancer requires isolated tumor cells and expression of vascular endothelial growth factor receptor-1. \nPURPOSE: Recent studies of cancer metastasis have focused on the role of premetastatic gene expression and circulating tumor cells. We did a blind prospective study in gastric cancer to assess the significance of isolated tumor cells (ITC) and to test the hypothesis that vascular endothelial growth factor receptor-1 (VEGFR-1) is expressed within the bone marrow at tumor-specific, premetastatic sites. EXPERIMENTAL DESIGN: Both bone marrow and peripheral blood samples from 810 gastric cancer patients were collected at the Central Hospital, National Cancer Center (Tokyo, Japan). The samples were transferred to Kyushu University Hospital (Beppu, Japan) where they were analyzed by quantitative real-time reverse transcription-PCR for three epithelial cell markers, carcinoembryonic antigen, cytokeratin-19, and cytokeratin-7, as well as VEGFR-1. RESULTS: ITCs were observed in peripheral blood and bone marrow even in early stages of gastric cancer. The frequency of ITC in bone marrow was significantly associated with the stage of disease by ANOVA (P < 0.01). Gastric cancer metastasized when ITCs were observed in the presence of VEGFR-1. In the 380 patients who were ITC negative and showed low VEGFR-1 expression, synchronous (at the time of surgery) and heterochronous (recurrent) metastases were not observed. CONCLUSIONS: ITCs circulate even in early stages of disease. Furthermore, elevated expression of VEGFR-1 facilitates the establishment of hematogenous metastases in gastric cancer. This study indicates that the simultaneous presence of ITC and VEGFR-1 expression at premetastatic sites is clinically significant for disease progression." }, { "role": "assistant", "content": "{\"AnatomicalEntity\": [\"tumor cells\", \"ITCs\", \"cancer\", \"bone marrow\", \"peripheral blood\", \"ITC\", \"gastric cancer\", \"tumor\", \"epithelial cell\", \"metastases\", \"peripheral blood samples\", \"Gastric cancer\", \"samples\", \"hematogenous metastases\", \"premetastatic sites\", \"isolated tumor cells\"]}" } ]
science.drug_combo_extraction_re
science.drug_combo_extraction_re.1202
[ { "role": "user", "content": "You will be shown a paragraph from a biomedical research article. Your task is to extract all drug combinations appearing in a specific sentence within in the paragraph. Each drug combination should be assigned to one of the following three categories:\n\n- POS: Indicates a positive interaction between the drugs in the combination; for instance, drugs that have an additive or synergistic effect.\n- NEG: Indicates a negative interaction between the drugs in the combination; for instance, combining these drugs might result in adverse side effects, or might reduce their efficacy.\n- COMB: Indicates that the drugs are given in combination, but the type of interaction between the drugs cannot be determined based on the provided context.\n\nPlease format your answer as a json array. Each entry in the array should itself be an array formatted as [<drug_1>, <drug_2>, ..., <drug_n>, <category>] where <category> is one of the three categories listed above and <drug_1>, ..., <drug_n> are drug names copied verbatim from the input sentence. Different combinations may involve different numbers of drugs; in other words, the `n` in <drug_n> may vary.\n\nAs an example, the following output describes two extracted drug combinations; one between two drugs, and one between three drugs:\n\n[[\"artesunate\", \"artemether\", \"COMB\"], [\"Nifedipine\", \"Labetalol\", \"Prazosin\", \"POS\"]]\n\nThe paragraph and specific sentence are below. You should only extract relationships expressed in the specific sentence; the paragraph is provided for context.\n\nIn your output, return only the json array and no other text. If no drug combinations appear, return an empty array `[]`.\n\nparagraph: Genistein combined polysaccharide enhances activity of docetaxel , bicalutamide and Src kinase inhibition in androgen-dependent and independent prostate cancer cell lines . To determine the benefit of genistein combined polysaccharide (GCP) in combination with the androgen receptor antagonist bicalutamide, the antimicrotubule taxane docetaxel, and the Src kinase inhibitor pp2 as part of a treatment regimen for advanced prostate cancer (CaP). ### Materials And Methods The growth inhibitory and apoptotic effects of GCP in combination with bicalutamide, docetaxel and pp2 were evaluated in both the androgen-dependent LNCaP line, and three androgen-independent lines: CWR22Rv1, PC-3, and LNCaP-R273H. The LNCaP-R273H model is an LNCaP variant expressing a p53(GOF) allele; like CWR22Rv1 and PC-3, it is able to grow in a minimal androgen environment. The effects of GCP treatment in combination with the aforementioned drugs were measured using an MTT assay, Western blotting, flow cytometric analysis, and caspase activation assay. Altered schedules of drug administration were explored using combinations of GCP and docetaxel. ### results GCP potentiated the activity of docetaxel in all four cell lines, resulting in growth inhibition and increased apoptosis. The combination of GCP and bicalutamide had enhanced activity in both the LNCaP and LNCaP-R273H lines, which may better represent patient tumour cells after progression to androgen independence. Administration of docetaxel followed by GCP resulted in a synergistic interaction in LNCaP cells, with increased apoptosis. By contrast, GCP administered first showed subadditivity, probably resulting from GCP-mediated induction of G1 arrest interfering with docetaxel activity. 
### conclusion These data suggest that GCP, an isoflavone-enriched compound with minimal side-effects and far superior intestinal absorption rate of genistein, has significant clinical potential in combination with docetaxel, bicalutamide or targeted agents for the treatment of advanced CaP.\n\nsentence: Genistein combined polysaccharide enhances activity of docetaxel , bicalutamide and Src kinase inhibition in androgen-dependent and independent prostate cancer cell lines ." }, { "role": "assistant", "content": "[[\"Genistein\", \"docetaxel\", \"POS\"], [\"Genistein\", \"bicalutamide\", \"POS\"]]" } ]
science.chemprot_re
science.chemprot_re.619
[ { "role": "user", "content": "Given an abstract from a research paper, your task is to identify and label relationships between chemical compounds and genes mentioned in the abstract.\n\nThe relationship type must be one of ['DOWNREGULATOR', 'SUBSTRATE', 'INDIRECT-REGULATOR', 'PART-OF', 'MODULATOR-ACTIVATOR', 'REGULATOR', 'INHIBITOR', 'COFACTOR', 'UPREGULATOR', 'ACTIVATOR', 'ANTAGONIST', 'NOT', 'INDIRECT-DOWNREGULATOR', 'SUBSTRATE_PRODUCT-OF', 'INDIRECT-UPREGULATOR', 'AGONIST', 'PRODUCT-OF', 'MODULATOR', 'DIRECT-REGULATOR', 'UNDEFINED', 'AGONIST-INHIBITOR', 'AGONIST-ACTIVATOR', 'MODULATOR-INHIBITOR'].\n\nPlease format your output as a JSON array. Each entry in the array should express a single relation, formatted as [\"<Entity_A>\", \"<RELATION_A_B>\", \"<Entity_B>\"]. If no relations can be found, please output an empty JSON array [].\n\nAbstract:\n\nPeyer 's patch-mediated intestinal immune system modulating activity of pectic-type polysaccharide from peel of Citrus unshiu. An intestinal immune system modulating polysaccharide (CUI-3IIb-3-2, 18kDa) was purified from Citrus unshiu peel. CUI-3IIb-3-2 mainly comprised GalA, GlcA, Ara, Gal and Rha, and it consisted of 4-linked GalA, terminal Araf, 4- or 5-linked/3,4- or 3,5-branched Ara, terminal Gal, and 2-linked/2,4-branched Rha. After CUI-3IIb-3-2 digestion by endo-α-d- (1→4) -polygalacturonase, its hydrolysate was fractionated into PG-1 and PG-2. Methylation analyses of PG-1 and PG-2 using base-catalysed β-elimination suggested that CUI-3IIb-3-2 be assumed as pectic-type polysaccharide. Since the activities of PG-1 and PG-2 were potently decreased, the whole polysaccharide structure of CUI-3IIb-3-2 would be essential to maintain the activity. Meanwhile, when CUI-3IIb was orally administered in mice, bone marrow cell proliferation and GM-CSF/IL-6 production from Peyer 's patch cell were significantly higher (1.76- and 2.03/2.51-fold, respectively) than a saline. Therefore, a pectic-type polysaccharide from citrus peel could stimulate Peyer 's patches and produce hematopoietic growth factors resulted in bone marrow cell proliferation." }, { "role": "assistant", "content": "[]" } ]
science.chemsum_single_document_summarization
science.chemsum_single_document_summarization.92434
[ { "role": "user", "content": "You will be shown the title of a chemistry paper, together with sections excerpted from the body of the paper. The paper sections may include HTML tags like `<p>` and `</p>`; please ignore these. Your task is to write an abstract for the paper. Your response should include the abstract and no other text.\n\nTitle:\nPhotocatalyzed Transition-Metal-Free Oxidative Cross-Coupling Reactions of Tetraorganoborates\n\nPaper sections:\n\n<p>Biaryl derivatives constitute an essential class of compounds for the development of pharmaceuticals and organic materials. [1] The use of transition metal catalysis to forge C-C bonds between two aryls has become inevitable, following the pivotal groundwork laid by Corriu-Kumada-Tamao, [2] Negishi, [3] Suzuki-Miyaura [4] and Stille [5] in the 1970's. Although these traditional methods dominate the world of cross-coupling reactions, modern synthetic planning also includes direct C-H functionalization [6] and decarboxylative strategies. [7] Besides, efforts have been made towards transition-metal-free alternatives using borates as templates for intramolecular biaryl formation. Few methods depict homocoupling reactions using chemical oxidants such as VO(OEt)Cl2, [8] iridium [9] and zinc complexes [10] (Scheme 1A). Pioneered by Geske in 1959, [11] the formation of symmetrical biaryls under electrochemical oxidation was extended to fluorinated substrates by Waldvogel in 2018. [12] However, the intramolecular coupling of borate salts was restrained to homocoupling products until very recently (Scheme 1B). Building on methodologies developed in our group to access functionalized organoboron derivatives, we recently demonstrated an efficient and chemoselective electrochemical synthesis of unsymmetrical biaryls [13] and (hetero)-arylated olefins [14] from readily accessible tetraorganoborate salts (TOBs). The group of Studer similarly reported an elegant chemical oxidation of unsymmetrical tetraorganoborates towards heterobiaryls using Bobbitt's salt. [15] The renaissance of photochemistry in past decades stems mainly from the advantageous features it presents and the reactivity it promotes in comparison with classical chemical processes. [16] Photoredox catalysis is a rapidly expanding field as it offers a tunable control over oxidation and/or reduction steps for the generation of desired reactive intermediates. [17] Only few examples have been reported on the formation of C-C bonds between two aryl moieties through photoredox catalysis. The groups of Sanford, Duan and König employed either transition-metal photocatalysts ([Ru] [18] or [Zn]-PDI [19] complexes), or organophotocatalysts (Eosin Y or PDI) [20] to reduce diazocompounds and electron-poor aryl bromides under photoirradiation towards intermolecular aryl-aryl bond formation.</p><p>Scheme 1. Previous and present contributions to coupling reactions of TOBs.</p><p>The only example of photochemical borate rearrangement was described by Doty in 1971. Rose Bengal was employed under UVlight irradiation for the formation of biphenyl (Scheme 1C) from NaBPh4 through oxygen-relayed electron transfer. [21] The authors proposed that singlet oxygen is generated in situ, serving as the oxidizing species in the transformation. Herein, we present a conceptual hetero-coupling approach towards functionalized biaryls and arylated olefins by harvesting the potential of visible light photoredox catalysis to promote the oxidation step needed to trigger the key C-C bond formation.</p><p>Table 1. 
Optimization of the photo-coupling reaction.</p>\n\nPhotocatalyst\n<p>Deviation from conditions Conversion [a] (Isolated yield)</p>\n\nRhodamine 6G\n<p>Cl3CBr instead of O2,  = 530 nm 45%</p><p>Acr-1 without O2 57% (50%)</p><p>Acr-1 none 85% (77%)</p>\n\nAcr-1 EtOH instead of MeCN, 8 days 69%\n<p>Acr-2 none 82%</p>\n\nAcr-3 none 83%\n<p>Acr-4 none 85% (78%)</p><p>Acr-1 5 mol% catalyst loading 85% (78%)</p><p>Acr-1 5 mol% cat., 50 °C, 16h [b] 91% (83%) [a] Conversions determined by GC using n-undecane as internal standard. [b] Changing the reaction setup induces a temperature increase to 50 °C (selfheating), which allowed to decrease the reaction time to 16h, see SI.</p><p>Conditions were first optimized using unsymmetrical tetraarylborate salt 1a. Rhodamine 6G and acridinium-based photocatalysts (Acr) [22] were tested in the presence of Cl3CBr or oxygen as oxidant, respectively (Table 1). While rhodamine 6G only gave the desired product 2a in 45% conversion within 48h under green light irradiation ( = 530 nm), Acr-1 allowed for its formation in 57% without oxidant (50% isolated yield). In the presence of oxygen, the conversion went up to 85% and modifying the substitution pattern of the photocatalyst (Acr-2 to Acr-4) gave similar results (82 to 85% conversion). Switching the solvent to ethanol proved less efficient (69% conversion within 8 days). However, decreasing the catalyst loading to 5 mol% did not alter the rate of coupling reaction, furnishing 2a in 78% yield after 48h under blue light irradiation ( = 470 nm). Using a more powerful setup (13 W/cell) [23] allowed to decrease reaction times to 16h, which also caused an increase in temperature to 50 °C. Importantly, no traces of intermolecular coupling products were detected in all cases.</p><p>We previously demonstrated that the first oxidation of a TOB salt preferentially occurs on the most electron rich unsaturated moiety. [13,14] As a logical consequence, structures 1 were designed to possess one electron-richer aryl group (Ar 1 ), surrounded by electron-poorer groups (Ar 2 ). With optimized conditions and setup in hands, we started evaluating the scope of the aryl-aryl photocoupling (Scheme 2). Worthy of note, tetraarylborate salts were accessed ex-situ by reaction of an aryltrifluoroborate species with the appropriate Grignard (or zinc) reagent and were engaged further without purification. The yields in Scheme 2 are therefore given for the two-step sequence. In the presence of Acr-1 under blue light irradiation, we were able to push the conversion to its maximum for the formation of biphenyl 2b (91%), showing great improvement in comparison with the conditions previously described by the group of Doty (75% under UV irradiation). Varying the substitution pattern of Ar 1 by introducing methyl, trimethylsilyl and ether moieties led to structures 2c-j in good to excellent yields (up to 92%). Scaling up the reaction to 3 mmol also gave 2h in a reasonable yield (77%). Interestingly, these conditions allowed for an efficient coupling of unprotected phenol derivatives 2k-l in up to 83% isolated yield, verifying the functional group tolerance of the method. Amides and nitriles also proved tolerable (2m-n), although lower yields were obtained (47 to 53%). This can be attributed to the lower reactivity of organozinc reagents (necessary for the tolerance of amides and nitriles) towards starting trifluoroborates in the initial formation of TOB salts. 
Electron-rich heteroaryls such as benzoxadiazole and benzofurans were successfully coupled, yielding compounds 2o-r (up to 70%). The transformation of electron-poor pyridyl and quinolyl TOB derivatives proved more difficult to achieve and compounds 2s-t were only isolated in up to 56% yields. However, the presence of dibenzofurans as Ar 2 groups led to 78% of coupling product (2u). Functionalization of the pharmacologically relevant celecoxib structure gave 2v in 38% yield. The photocoupling reaction of sensitive halogenated and pseudo-halogenated scaffolds was examined next, as they open the possibility for further functionalization. While classical cross-coupling methods usually give complex mixtures of products through uncontrolled oxidative additions, [24] we have previously demonstrated that an electrochemical process can be employed to gain control over the formation of desired functionalized compounds. [13] Remarkably, iodinated and brominated substrates were tolerated under our photocatalyzed conditions, furnishing halogenated biaryls 2w-2z in up to 69% yield. Nitrile-substituted trifluoroborate substrates were also used in this two-step sequence, leading to structures 2aa-ab in moderate yields (47 to 63%).</p><p>Scheme 2. Scope of the photo-coupling of TOBs. [a] Borate salts were generated in-situ from the corresponding organotrifluoroborates and organomagesium species. [b] Yields are given over two steps, see SI. [c] Photocoupling was conducted on pure, isolated salts. [d] Prepared from organozinc reagents.</p><p>Having supported the proof of concept for photocoupling reactions with a broad scope of biaryl products, we set out to examine olefinations under similar conditions, as those would provide an interesting alternative for the formation of styryl derivatives. Model substrate 3a was employed for optimizations, assuming that the oxidation of the alkenyl substituent would be preferred over the remaining aryl groups. [14] Performing the reaction with 5 mol% of Acr-1 without oxidant afforded 4a in 35% yield and similar conditions to the ones used for biaryl formation only pushed the conversion to 65% (Table 2).</p><p>Table 2. Optimization of the photo-olefination reaction. Conversion [a] (Isolated yield) oxidant Conversion [a] (Isolated yield) O2 65% (<10%) [b] PhSSPh 57% (40%) none 45% (35%) K2HPO4 73% (57%)</p><p>However, the main product was found to be the overoxidized epoxide 5a (see Scheme 3). Diphenyldisulfide did not improve the conversion of 3a, and dipotassium phosphate furnished 4a in 57% isolated yield (73% conversion of 3a). These last conditions (cond. A, Scheme 3A) were therefore used in the exploration of the reaction scope. -styryl moieties were evaluated first, providing (E)-bisarylated olefins 4a-e with up to 57% yield. Interestingly, engaging (E)-alkenyl substrates led to the stereospecific formation of products (E)-4 under thermodynamic control. Cyclic olefins 4f-j were synthesized next, although conversions were generally lower than for acyclic ones, with the exception of 4i (69% isolated yield). When oxygen was used instead of K2HPO4, trans-epoxides 5a and 5b were formed in up to 54% yield.</p><p>Scheme 3. Photo-olefination of alkenylborates. [a] Borate salts were generated in-situ from the corresponding organotrifluoroborates. [b] Yields are given over two steps, see SI. 
[c] Photocoupling was conducted on pure, isolated salts.</p><p>More than adding synthetic interest to the method, these findings indicate that the oxidant plays an essential role in the transformation and its associated catalytic cycle. In contrast to the work from Doty, our different set of conditions leads us to believe that oxygen plays a role in the re-oxidation of the catalyst, rather than in the oxidation of the substrate itself.</p><p>We therefore propose a reaction mechanism in which the oxygen is implicated as the oxidant that enables the catalyst regeneration (Scheme 4). Blue light irradiation (470 nm) allows for the acridinium to reach its active, excited state to abstract an electron from substrate 1a. The resulting oxidized species [1a . ] undergoes a pseudo 1,2-metallate rearrangement, [13,14] giving the cyclic intermediate 6, which further opens towards 7 through [TS] # .</p><p>Oxygen further intervenes to regenerate the photocatalyst from its reduced form, producing [O2] .-. [25] As witnessed in the reaction of alkenyl derivatives, we know that oxygen interacts with the intermediates to produce the corresponding epoxides. In consequence, we advance that an active oxygen species reacts with [7] to promote an elimination reaction towards the formation of 2a.</p><p>Scheme 4. Theoretical studies and proposed mechanism.</p><p>To assess a plausible reaction path, we performed a series of density functional theory calculations on 1a using the software Q-Chem [26] with the B3LYP functional [27] and the 6-31G* basis set.</p><p>Here we determined equilibrium structures and computed the Gibbs free energies for the initial anion 1a and radical [1a . ] as well as for the radicals 6 and 7. We found a transition state between 6 and 7 at a barrier height of 50 kJ•mol -1 as depicted in Scheme 4. This completes the picture of the reaction mechanism: the oxidation of the anion is followed by a relaxation during which the reacting six-membered rings rotate to face each other, therefore coming closer together. The C-B-C-angle of 108° in the anion changes to 87° in 6. This structure has CS symmetry when rotationally averaging over the atomic positions of the methoxy group. During the reaction, the B-C bond is cleaved in the same time that the C-C bond. The CS symmetry is maintained while the bond breaks and lowered only afterwards. The C-B-C-angle in the transition state is 60° and shrinks further while the tetrahedral coordination at the carbon atom in the oxidized ring is restored.</p><p>In summary, we have demonstrated the feasibility of a chemoselective coupling of tetraorganoborates using organophotocatalysis. This conceptual approach has allowed us to access a wide range of biaryl, heterobiaryl and olefin products under blue light irradiation, in a simple and practical setup. A deeper analysis of the mechanism through experimental and theoretical studies led us to propose an unusual reaction path that goes through a cyclic intermediate, essential to the formation of the C-C bond. Heterocoupling reactions were achieved and we are currently working on alternative borate salts with increased sustainability and atom economy. 
There is no doubt that these advances within the field of photocatalysis will enable the discovery of new reactions in the near future.</p>" }, { "role": "assistant", "content": "Readily accessible tetraorganoborate salts undergo selective coupling reactions under blue light irradiation in the presence of catalytic amounts of transition-metal-free acridinium and rhodamine photocatalysts to furnish unsymmetrical biaryls, heterobiaryls and arylated olefins. This represents an interesting conceptual approach to forge C-C bonds between aryl, heteroaryl and alkenyl groups under smooth photochemical conditions. Computational studies were conducted to investigate the mechanism of the transformation." } ]
science.scientific_papers_summarization_single_doc_arxiv
science.scientific_papers_summarization_single_doc_arxiv.54095
[ { "role": "user", "content": "You will be presented with the full text of a science research paper. Please write an abstract for this paper. Your response should include the abstract and no additional text.\n\nPaper text:\ncurrently , the question of pulsation modulation of magnetic field in stars , both with convective and radiative envelopes , is still opened .\n* rr lyr . *\n* babcock ( 1958 ) ) reported a detection of a magnetic field in rr lyr .\nthe longitudinal component of the field was found to be variable from @xmath31580 to + 540 g , but showed no correlation with the pulsation cycle of the star .\n* romanov et al .\n( 1987 , 1994 ) ) also registered significant magnetic field in rr lyr , and found the field to be variable with an amplitude of up to 1.5 kg over the pulsation cycle . on the other hand , ( *\n* preston ( 1967 ) ) and ( * ? ? ?\n* chadid et al . ( 2004 ) ) detected no convincing evidence of a photospheric magnetic field in the star in the years 19631964 and 19992002 , respectively .\n* @xmath0 aql . *\nphotoelectric magnetometer observations of @xmath0 aql , performed by ( * ? ? ?\n* ; * ? ? ?\n* borra et al .\n( 1981 , 1984 ) ) , detected no magnetic field in this star .\n* plachinda ( 2000 ) ) was the first who detected magnetic field on @xmath0 aql and reported pulsation modulation of the longitudinal component from @xmath3100 to + 50 g. ( * ? ? ?\n* wade et al . ( 2002 ) ) detected no statistically significant longitudinal magnetic field in @xmath0 aql during 3 nights in 2001 and concluded that @xmath0 aql is a non - magnetic star , at least at a level of 10 g. ( * ? ? ?\n* grunhut et al . ( 2010 ) ) registered clear zeeman signatures in stokes v parameter for @xmath0 aql and eight other supergiants . *\n@xmath4 peg . *\n* butkovskaya & plachinda ( 2007 ) ) reported the modulation of the longitudinal magnetic field in @xmath5 cephei - type star @xmath4 peg ( b2 iv ) with the amplitude of about 7 g over the 0.15-day pulsation period of the star . in order to shed some light on the problem of pulsational modulation of stellar magnetic fields\n, we continued spectropolarimetric monitoring of @xmath0 aql during 60 nights between 2002 and 2012 using coud spectrograph at the 2.6-m shajn telescope of the crimean astrophysical observatory ( ukraine ) .\nthe technique of zeeman splitting , used for the measurement of the longitudinal magnetic field , is described in detail by ( * ? ? ? * butkovskaya & plachinda ( 2007 ) ) .\nwe folded all values of the measured longitudinal magnetic field with the pulsational period according to the pulsation ephemeris : jd @xmath6 ( * ? ? ?\n* ( kiss & vink 2000 ) ) , where @xmath7 is the number of pulsation cycles . as an example , the pulsation modulation of the longitudinal magnetic field of @xmath0 aql in 2002 and 2004 is illustrated in fig .\nwe found that the magnetic field sinusoidally varies in phase of the radial pulsation of @xmath0 aql .\nthis confirms the previous conclusions of ( * ? ? ?\n* plachinda ( 2000 ) ) .\nhowever , the amplitude @xmath1 , mean field @xmath2 , and phases of maximum and minimum field are changing from year to year ( see table 1 where the f - test indicates the statistical reliability of the detected variability for each year ) .\nthe possible reason for those variations is stellar axial rotation or dynamo mechanisms .\naql folded with the 7.176726-day pulsation period : a ) our data obtained in 2002 ( _ closed and open circles _ ) , data by ( * ? ? ?\n* wade et al . 
( 2002 ) ) ( _ black squares _ ) , and ( * ? ? ?\n* grunhut et al . ( 2010 ) ) ( _ open triangles _ ) ; b ) our data obtained in 2004 ( _ closed and open circles _ ) , data by ( * ? ? ?\n* ; * ? ? ?\n* borra et al .\n( 1981 , 1984 ) ) ( _ black squares _ ) .\nfitting sinusoids are shown by strong lines .\nopen circles represent our data that have not been taken into account in the fits.,width=384 ]" }, { "role": "assistant", "content": "we report the results of spectropolarimetric study of the classical cepheid @xmath0 aql . \n we found that the longitudinal magnetic field of @xmath0 aql sinusoidally varies with the radial pulsation period , while the amplitude @xmath1 , mean field @xmath2 , and phases of maximum and minimum field change from year to year . \n we hypothesize that possible reasons of those variations are stellar axial rotation or dynamo mechanisms ." } ]
science.bioasq_yesno_qa
science.bioasq_yesno_qa.772
[ { "role": "user", "content": "You have to answer a biomedical question in binary format, i.e. only yes and no are the only acceptable answer formats. A list of paragraphs is provided as context to help you answer the question.\n\nQuestion: Is inositol effective for trichotillomania?\n\nContext: Patients assigned to inositol failed to show significantly greater reductions on primary or secondary outcomes measures compared with placebo (all P>0.05).\nThis is the first study assessing the efficacy of inositol in the treatment of trichotillomania, but found no differences in symptom reductions between inositol and placebo.\nAt study endpoint, 42.1% of patients were 'much or very much improved' on inositol compared with 35.3% on placebo.\nConclusions • The review indicates that yoga, aerobic exercise, acupuncture, biofeedback, hypnosis, and inositol and N-acetylcysteine all show promise in the treatment of excoriation disorder and other body-focused repetitive behaviors, such as trichotillomania.\nFuture studies should examine whether inositol may be beneficial in controlling pulling behavior in a subgroup of individuals with trichotillomania." }, { "role": "assistant", "content": "no" } ]
science.chemprot_ner
science.chemprot_ner.521
[ { "role": "user", "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Protein\", \"Chemical\"].\n\nPlease return the output as a JSON object of the format: {\"Chemical\" : [\"Dexamethasone\", ...], \"Protein\" : [\"BRCA-1\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nVinblastine-induced apoptosis of melanoma cells is mediated by Ras homologous A protein (Rho A) via mitochondrial and non-mitochondrial-dependent mechanisms.\nDespite the availability of melanoma treatment at the primary site, the recurrence of local melanoma can metastasize to any distant organ. Currently, the available therapies for the treatment of metastatic melanoma are of limited benefit. Thus, the functional analysis of conventional therapies may help to improve their efficiency in the treatment of metastatic melanoma. In the present study, the exposure of melanoma cells to vinblastine was found to trigger apoptosis as evidenced by the loss of mitochondrial membrane potential, the release of both cytochrome c and apoptosis inducing factor, activation of caspase-9 and 3, and cleavage of Poly (ADP-ribose)-Polymerase. Also, vinblastine enhances the phosphorylation of Ras homologous protein A, the accumulation of reactive oxygen species, the release of intracellular Ca(2+), as well as the activation of apoptosis signal-regulating kinase 1, c-jun-N-terminal kinase, p38, inhibitor of kappaBα (IκBα) kinase, and inositol requiring enzyme 1α. In addition, vinblastine induces the DNA-binding activities of the transcription factor NF-κB, HSF1, AP-1, and ATF-2, together with the expression of HSP70 and Bax proteins. Moreover, inhibitory experiments addressed a central role for Rho A in the regulation of vinblastine-induced apoptosis of melanoma cells via mitochondrial and non-mitochondrial-dependent mechanisms. In conclusion, the present study addresses for the first time a central role for Rho A in the modulation of vinblastine-induced apoptosis of melanoma cells and thereby provides an insight into the molecular action of vinblastine in melanoma treatment." }, { "role": "assistant", "content": "{\"Chemical\": [\"vinblastine\", \"inositol\", \"N\", \"Vinblastine\", \"oxygen\", \"Poly (ADP-ribose)\", \"Ca(2+)\"], \"Protein\": [\"AP-1\", \"c-jun-N-terminal kinase\", \"inositol requiring enzyme 1\\u03b1\", \"ATF-2\", \"kinase\", \"Ras homologous A protein\", \"inhibitor of kappaB\\u03b1\", \"I\\u03baB\\u03b1\", \"HSP70\", \"Ras homologous protein A\", \"p38\", \"NF-\\u03baB\", \"caspase-9 and 3\", \"cytochrome c\", \"apoptosis signal-regulating kinase 1\", \"Bax\", \"HSF1\", \"Poly (ADP-ribose)-Polymerase\", \"Rho A\", \"apoptosis inducing factor\"]}" } ]
science.chemsum_single_document_summarization
science.chemsum_single_document_summarization.106457
[ { "role": "user", "content": "You will be shown the title of a chemistry paper, together with sections excerpted from the body of the paper. The paper sections may include HTML tags like `<p>` and `</p>`; please ignore these. Your task is to write an abstract for the paper. Your response should include the abstract and no other text.\n\nTitle:\nDinitrosyl Iron Complexes Relevant to Rieske Cluster Nitrosylation\n\nPaper sections:\n\n<p>Iron-sulfur clusters are among the most ubiquitous cofactors in biology. The diversity of structures encountered in iron-sulfur proteins reflects the many functions nature employs for these versatile units, from electron transfer to substrate binding and activation.1 Of the many variations of iron-sulfur clusters, only a select few have ligands other than cysteine. Among these, the best-documented examples are Rieske-type Fe2S2 clusters2 that contain a pair of histidine residues ligating one iron atom.3,4 Recently, the first synthetic analog of a Rieske cluster was reported (1, Scheme 1).5 We were interested to explore the nitric oxide reactivity of such a compound as part of ongoing investigations into the reactivity of non-heme iron with NO. Moreover, there is very little information about the interaction of Rieske proteins with NO,6 which contrasts with the large body of literature regarding nitric oxide reactivity with purely cysteine-bound iron-sulfur clusters.7 Herein we report the results of our studies of NO reactions with 1 (Scheme 1) and introduce related N-bound dinitrosyliron complexes (DNICs) containing β-diketiminate ligands.</p><p>Reaction of Rieske model complex 1 with four equiv of nitric oxide in acetonitrile solution results in the appearance of two new nitrosyliron species after workup as judged by IR spectroscopy. The first displays νNO bands at 1795, 1743 and 1705 cm-1 (THF solution) consistent with formation of Roussin's black salt (RBS), [Fe4S3(NO)7]-. 8 We demonstrated previously that RBS can arise from reaction of NO with an Fe2S2 cluster containing (S2-o-xyl)2- ligands via the intermediacy of the thiolate bound DNIC.9 This DNIC can be observed by IR spectroscopy in reactions of 1 with Ph3CSNO in benzonitrile. After addition of 1 to 2 equiv of nitrosothiol, νNO bands at 1739 and 1679 cm-1 appear, consistent with formation of [(S2-o-xyl)Fe(NO)2]-. Upon addition of further equiv of Ph3CSNO, these IR bands diminish, giving rise to those of RBS.</p><p>The second iron nitrosyl species obtained from the reaction shown in Scheme 1 displays νNO peaks at 1761 and 1692 cm-1 (IR, KBr). Recrystallization from DMF/Et2O and subsequent spectroscopic characterization identified a new DNIC, 2 (Scheme 1). The N-bound DNIC displays a rhombic EPR signal centered at 2.03 (Supporting Information, SI) consistent with its formulation as an S = ½ paramagnetic complex and a {Fe(NO)2}9 species in the Enemark-Feltham notation.10 Compound 2 also contains a quadrupole doublet with an isomer shift of 0.16(2) mm/s in the 57Fe Mössbauer spectrum (SI), which corresponds well with the value of 0.18(2) mm/s determined previously for (Et4N)[(FePhS)2(NO)2].9 Notably, several examples of dinitrosyliron compounds containing nitrogen donor ligands have been published previously as potential models for histidine-ligated DNICs.11 The synthesis of compound 2 represents the first verification that such a DNIC can be prepared directly from reaction of an iron-sulfur cluster with nitric oxide. 
Compound 2 was also independently synthesized by salt metathesis of (Et4N)[FeCl2(NO)2] with the lithium salt of the nitrogen ligand displayed in Scheme 1. Spectroscopic features of the DNIC prepared in this fashion are identical in all respects to those observed for that prepared from 1.</p><p>In order to prepare a more soluble version of Rieske cluster-derived DNICs such as 2 for more extensive studies, we turned our attention to sterically demanding β-diketiminate ligands. We reasoned that β-diketiminates would chelate the {Fe(NO)2}9 unit giving rise to stable DNICs, thereby avoiding many of the common shortcomings of thiolate ligands such as tendency toward oxidation and kinetic lability. Furthermore, β-diketiminate ligands have been employed previously to stabilize low valent iron.12 As an entry into this chemistry the iodide precursor [FeI2(NO)2]- was selected as a source of the {Fe(NO)2}9 unit.13 Reaction of the lithium salt of [(2,6-diisopropylphenyl)NC(Me)]2CH (Ar-nacnac) with (PPN)[FeI2(NO)2] (PPN+ = μ-nitridobis(triphenylphosphine cation), affords the new compound, [(Ar-nacnac)Fe(NO)2] (3), after workup. Unlike typical anionic DNICs such as 2, neutral 3 is soluble in alkane and arene solvents and may be conveniently freed from salt impurities by recrystallization from pentane. The spectroscopic features of 3 (νNO = 1761, 1709 cm-1, benzene-d6) are similar to those of 2, consistent with an {Fe(NO)2}9 formulation (SI). The solid-state structure of 3 (Figure 1) displays pseudo-tetrahedral geometry at iron, with the dinitrosyliron unit residing in a steric pocket created by the β-diketiminate ligand. It is interesting to contrast the pseudo-tetrahedral structure of 3 with the square-planar geometry of a related Fe(I) dicarbonyl complex.14 The Fe-N-O metrics in 3 are similar to those observed in previously characterized DNICs.15</p><p>The {Fe(NO)2}9 unit of most DNICs can be reduced by one electron to give the diamagnetic {Fe(NO)2}10 configuration. Typically this reduction process is quasi-reversible electrochemically. Only in rare instances can the reduced DNIC be isolated.16 With DNICs containing thiolate ligands, reduction is followed by dissociation of RS- leading to formation of the reduced Roussin red ester derivatives.17 The cyclic voltammogram of 3 (SI) displays a quasi-reversible (ipr/ipf = 0.95) one-electron reduction at -1.34 V (vs ferrocene/ferrocenium) in THF. Chemical reduction of 3 with either decamethylcobaltocene or sodium amalgam affords the reduced {Fe(NO)2}10 DNIC. Metathesis of the cobaltocenium cation for Bu4N+ produces (Bu4N)[(Ar-nacnac)Fe(NO)2] (4a) after crystallization from Et2O. Compound 4a is a stable (mp = 198 – 200 °C) diamagnetic solid as judged by sharp features in its 1 H NMR spectrum (SI). Exposure of 4a to air immediately results in regeneration of 3 as judged by IR and UV-vis spectroscopy. The νNO bands in a benzene-d6 solution of 4a appear at 1627 and 1567 cm-1, a shift of >130 cm-1 from those observed for 3. Crystals of 4a proved unsuitable for X-ray determination so the PPN+ salt, 4b, was prepared.</p><p>The solid-state structure of the anion of 4b (Figure 2) resembles closely that of 3. One considerable difference are the elongated Fe-N(nacnac) bonds in 4b, which result in a smaller displacement of the Fe atom out of the nacnac plane compared with that in 3 (0.24 vs 0.62 Å). The Fe-N(NO) bonds for 4b are also ∼0.03 to 0.04 Å shorter than those of 3 consistent with a greater degree of π-backbonding in the reduced {Fe(NO)2}10 core. 
This phenomenon is also manifest by the longer N-O bonds of 4b (Figs. 1 and 2).</p><p>In conclusion, reaction of the cluster analog 1 with NO leads to formation of a new nitrogen containing {Fe(NO)2}9 DNIC. This reaction demonstrates that naturally occurring Rieske clusters may be similarly susceptible to disassembly by NO, as found for cysteine-bound Fe2S2 clusters. The reaction of 1 with NO (g) to form two {Fe(NO)2}9 units follows directly from previous work with synthetic clusters of the type [Fe2S2(SR)4]2- and suggests an intrinsic reactivity of the {Fe2S2}2+ core towards NO.9,18 β-Diketiminates are excellent ligands for dinitrosyliron complexes, allowing for isolation of a rare set of homologous DNIC redox partners. Preliminary studies (data not shown) with Fe(TPP)Cl (TPP = meso-tetraphenylporphine dianion) demonstrate that these DNICs display different NO transfer properties, suggesting that control of NO transfer in biology might be redox triggered. This hypothesis is currently under investigation in our laboratory.</p><p>Structure of 3 displaying 50% thermal ellipsoids with hydrogen atoms omitted for clarity. Selected bond lengths (Å) and angles (deg): Fe(1)-N(3), 1.6964(18); Fe(1)-N(4), 1.6882(18); N(3)-O(2), 1.177(2); N(4)-O(1), 1.174(2); N(3)-Fe(1)-N(4), 114.27(9); Fe(1)-N(3)-O(2), 162.7(2); Fe(1)-N(4)-O(1), 170.1(2).</p><p>Structure of the anion of 4b displaying 50% thermal ellipsoids with hydrogen atoms omitted for clarity. Selected bond lengths (Å) and angles (deg): Fe(1)-N(3), 1.668(5); Fe(1)-N(4), 1.649(4); N(3)-O(1), 1.191(6); N(4)-O(2), 1.218(6); N(3)-Fe(1)-N(4), 109.2(2); Fe(1)-N(3)-O(1), 163.2(5); Fe(1)-N(4)-O(2), 165.1(5).</p><p>Reaction of the Rieske-type cluster with NO.</p>" }, { "role": "assistant", "content": "Reaction of the Rieske cluster model complex (Et4N)2[(N2CHPh)Fe2S2(S2-o-xyl)] (N2CHPh = dianion of 2,2\\xe2\\x80\\xb2-(phenylmethylene)bis(3-methylindole); S2-o-xyl = dianion of 1,2-phenylenedimethanethiol) with nitric oxide results in disassembly of the iron-sulfur core and formation of {Fe(NO)2}9 dinitrosyliron complexes. Isolation and characterization of these DNICs, including the new compound, (Et4N)[(N2CHPh)Fe(NO)2], demonstrates a homology between the synthetic Riekse cluster and purely thiolate bound Fe2S2 clusters in reactions involving NO. To model the nitrogen rich environment of Rieske cluster-derived dinitroysliron species, a new type of neutral {Fe(NO)2}9 DNIC was prepared containing a \\xce\\xb2-diketiminate ligand. One-electron reduction of this compound affords the isolable {Fe(NO)2}10 DNIC. These compounds represent a rare example of structurally analogous DNIC redox partners." } ]
science.scicite_classification
science.scicite_classification.3173
[ { "role": "user", "content": "You will be presented with a citation segment from the section of a research paper. Classify the intent behind this citation by choosing from one of the following categories:\n\n- Method: Cites the paper for its methodology or procedure.\n- Background: Cites the paper to provide background information.\n- Result: Cites the paper for its findings or results.\n\nYour answer should be a single word from the following list of options: [\"Method\", \"Background\", \"Result\"]. Do not include any other text in your response.\n\nCitation sentence:\n\nThe number of functional groups was calculated from the integration ratios of the protons of the functional groups to the protons of the internal standard according to Pan et al. (2006b)." }, { "role": "assistant", "content": "Method" } ]
science.covidfact_entailment
science.covidfact_entailment.696
[ { "role": "user", "content": "You will be shown a claim related to the COVID-19 pandemic, and the abstract of a biomedical research paper. Each sentence from the abstract will be on a separate line. Your task is to return a JSON object with two fields:\n\n- \"verdict\": The fact-checking verdict. If the information in the abstract supports the claim, write \"SUPPORT\". If the abstract contradicts the claim, write \"CONTRADICT\".\n- \"evidence\": An array of sentences providing evidence for the verdict. Please copy all relevant sentences verbatim from the abstract.\n\nFor instance, if the model were given the claim \"the COVID vaccine is safe for healthy adults\", the output might be\n{\n \"verdict\": \"SUPPORT\",\n \"evidence\": [\"Our findings indicate that the risks of severe COVID vaccine side effects are low for healthy adults.\"]\n}\n\nClaim: Covid-19 induces a hyperactive phenotype in circulating cellslets\n\nAbstract:\nBackground Coronavirus disease 2019 (COVID-19), caused by novel coronavirus SARS-CoV-2, has to date affected over 13.3 million globally.\nAlthough high rates of venous thromboembolism and evidence of COVID-19-induced endothelial dysfunction have been reported, the precise aetiology of the increased thrombotic risk associated with COVID-19 infection remains to be fully elucidated.\nObjectives Here, we assessed clinical platelet parameters and circulating platelet activity in patients with severe and non-severe COVID-19.\nMethods An assessment of clinical blood parameters in patients with severe COVID-19 disease (requiring intensive care), patients with non-severe disease (not requiring intensive care), general medical in-patients without COVID-19 and healthy donors was undertaken.\nPlatelet function and activity were also assessed by secretion and specific marker analysis.\nResults We show that routine clinical blood parameters including increased MPV and decreased platelet:neutrophil ratio are associated with disease severity in COVID-19 upon hospitalisation and intensive care unit admission.\nStrikingly, agonist-induced ADP release was dramatically higher in COVID-19 patients compared with non-COVID-19 hospitalized patients and circulating levels of PF4, sP-selectin and TPO were also significantly elevated in COVID-19.\nConclusion Distinct differences exist in routine full blood count and other clinical laboratory parameters between patients with severe and non-severe COVID-19.\nMoreover, we have determined that COVID-19 patients possess hyperactive circulating platelets.\nThese data suggest that abnormal platelet reactivity may contribute to hypercoagulability in COVID-19.\nFurther investigation of platelet function in COVID-19 may provide additional insights into the aetiology of thrombotic risk in this disease and may contribute to the optimisation of thrombosis prevention and treatment strategies." }, { "role": "assistant", "content": "{\"verdict\": \"CONTRADICT\", \"evidence\": [\"Strikingly, agonist-induced ADP release was dramatically higher in COVID-19 patients compared with non-COVID-19 hospitalized patients and circulating levels of PF4, sP-selectin and TPO were also significantly elevated in COVID-19.\", \"Moreover, we have determined that COVID-19 patients possess hyperactive circulating platelets.\"]}" } ]
science.anat_em_ner
science.anat_em_ner.242
[ { "role": "user", "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"AnatomicalEntity\"].\n\nPlease return the output as a JSON object of the format: {\"AnatomicalEntity\": [\"tumor cells\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nMicrovascular density and vascular endothelial growth factor immunoreactivity as predictors of regional lymph node metastasis from betel-associated oral squamous cell carcinoma. \nPURPOSE: Neovascularization has profound effects on tumor growth and metastasis. Vascular endothelial growth factor (VEGF) is a mitogen that acts exclusively on endothelial cells. The roles of miscrovascularity density (MVD) and VEGF expression in the progression of oral squamous cell carcinoma (OSCC) have been controversial. The purpose of the present study was to measure the MVD and VEGF expression in a cohort of patients with betel-associated OSCC and to evaluate for possible clinicopathologic correlations. PATIENTS AND METHODS: The paraffin sections from 49 subjects with OSCC were subjected to immunohistochemical studies to measure the highest MVD (h-MVD) and cytoplasmic immunoreactivity of VEGF. The findings in the tissue samples were analyzed with regard to the patients' risk factors and clinical course. RESULTS: The OSCC samples had an average h-MVD score of 27.7/mm(2). VEGF immunoreactivity was positive in 75.5% of samples. Both h-MVD and VEGF immunoreactivity were statistically associated with lymph node metastasis (P =.012 and.037, respectively). A marginally significant association was also noted between the h-MVD and patient survival (P =.056). The age and oral habits of patients, as well as the tumor site and size, did not appear to be correlated with h-MVD or VEGF immunoreactivity. CONCLUSION: The data suggest that both h-MVD and VEGF immunoreactivity may be useful predictors for the progression of a subset of OSCC associated mostly with betel use. Antiangiogenesis therapy might have a role in reducing regional metastasis." }, { "role": "assistant", "content": "{\"AnatomicalEntity\": [\"tissue samples\", \"miscrovascularity\", \"OSCC\", \"lymph node\", \"betel\", \"endothelial cells\", \"tumor\", \"oral squamous cell carcinoma\", \"Microvascular\", \"paraffin sections\", \"samples\", \"tumor site\", \"OSCC samples\", \"cytoplasmic\", \"oral\"]}" } ]
science.medmentions_ner
science.medmentions_ner.2594
[ { "role": "user", "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"HealthCareActivity\", \"InjuryOrPoisoning\", \"BodySubstance\", \"IntellectualProduct\", \"AnatomicalStructure\", \"SpatialConcept\", \"Chemical\", \"Bacterium\", \"MedicalDevice\", \"Organization\", \"BiomedicalOccupationOrDiscipline\", \"Finding\", \"BiologicFunction\", \"Virus\", \"ResearchActivity\", \"ClinicalAttribute\", \"PopulationGroup\", \"Eukaryote\", \"BodySystem\", \"Food\", \"ProfessionalOrOccupationalGroup\"].\n\nPlease return the output as a JSON object of the format: {\"Virus\": [\"HIV\", ...], \"Bacterium\": [\"MRSA\", ...], \"AnatomicalStructure\": [\"Lung\", ...], \"BodySystem\": [\"CNS\", ...], \"BodySubstance\": [\"Serum\", ...], \"Finding\": [\"Headache\", ...], \"InjuryOrPoisoning\": [\"Fracture\", ...], \"BiologicFunction\": [\"Death\", ...], \"HealthCareActivity\": [\"Biopsy\", ...], \"ResearchActivity\": [\"Clinical trial\", ...], \"MedicalDevice\": [\"Lenses\", ...], \"SpatialConcept\": [\"Camps\", ...], \"BiomedicalOccupationOrDiscipline\": [\"Forensic medicine\", ...], \"Organization\": [\"WHO\", ...], \"ProfessionalOrOccupationalGroup\": [\"Provider\", ...], \"PopulationGroup\": [\"Swimmers\", ...], \"Chemical\": [\"Gold\", ...], \"Food\": [\"Rice\", ...], \"IntellectualProduct\": [\"RPAM\", ...], \"ClinicalAttribute\": [\"Biomarker\", ...], \"Eukaryote\": [\"Dogs\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nPrognostic implications of the intrinsic molecular subtypes in male breast cancer Intrinsic molecular subtyping has been widely used in female breast cancer, and it has proven its significance. In this article, we aimed to study the intrinsic subtypes of male breast cancer (MBC) in correlation with clinicopathological features. We retrospectively identified 130 MBC cases from 2004 to 2013. Intrinsic molecular subtypes were determined by immunohistochemistry (IHC). From a total of 130 MBC cases, 45.4% of tumors were luminal A subtype, 44.6% were luminal B, 5% were HER2 positive and 5% were triple negative tumors. There were statistically significant differences between different IHC intrinsic subtypes regarding tumor size (p=0.001), estrogen receptor (ER) status (p=0.001), progesterone receptor (PR) status (p=0.001), HER2 status (p=0.001) and Ki67 proliferation index (p=0.001). The distribution of breast cancer intrinsic subtypes in males is different compared to its female counterpart; however, they don't seem to give the same prognostic value." 
}, { "role": "assistant", "content": "{\"AnatomicalStructure\": [], \"Bacterium\": [], \"BiologicFunction\": [\"MBC\", \"tumors\", \"female breast cancer\", \"male breast cancer\", \"breast cancer\", \"HER2 positive\", \"luminal A subtype\", \"luminal B\"], \"BiomedicalOccupationOrDiscipline\": [], \"BodySubstance\": [], \"BodySystem\": [], \"Chemical\": [\"Ki67\"], \"ClinicalAttribute\": [\"clinicopathological features\", \"HER2 status\", \"progesterone receptor (PR) status\", \"estrogen receptor (ER) status\"], \"Eukaryote\": [], \"Finding\": [\"triple negative\"], \"Food\": [], \"HealthCareActivity\": [\"immunohistochemistry\", \"IHC\", \"Intrinsic molecular subtyping\"], \"InjuryOrPoisoning\": [], \"IntellectualProduct\": [\"Prognostic\", \"prognostic value.\", \"article\"], \"MedicalDevice\": [], \"Organization\": [], \"PopulationGroup\": [], \"ProfessionalOrOccupationalGroup\": [], \"ResearchActivity\": [\"study\"], \"SpatialConcept\": [\"tumor size\"], \"Virus\": []}" } ]
science.covid_deepset_qa
science.covid_deepset_qa.812
[ { "role": "user", "content": "You will be shown a paragraph from a biomedical research paper, followed by a question about the paper. Please extract text from the paragraph to answer the question. Do not include any text in your repsonse other than the answer.\n\nParagraph: To investigate the antiviral mechanism of AP3, the thermal stability of AP3/N36 complex was compared with that of AP1/N36, AP2/N36, T20/N36, and C34/N36 complexes by circular-dichroism (CD) spectroscopy 27 . Because T20 lacks the pocket-binding domain (PBD) , the T20/N36 complex did not show a typical α -helical conformation, in consistence with our previous studies 8, 9 . Similar to the α -helicity of C34/N36 complex 3 , the AP1/N36, AP2/N36 and AP3/N36 complexes all formed a saddle-shaped negative peak at 208 nm and 222 nm, indicating their α -helical structures (Fig. 4a) Fig. 4b) , indicating that the α -helical complex formed by AP3 and N36 is the most stable among the four complexes. Then we compared the inhibitory activity of AP3 with that of AP1 and AP2 on 6-HB formation between C34 and N36. Since T20 cannot block 6-HB formation 8, 9 , we used a small-molecule HIV-1 fusion inhibitor, ADS-J1 28, 29 , to replace T20 as a control of 6-HB inhibition. As expected, ADS-J1 could effectively inhibit 6-HB formation with IC 50 of 2.75 μ M 8, 9, [27] [28] [29] . AP3 was highly effective against 6-HB formation in a dose-dependent manner with an IC 50 value of 0.24 μ M, about 30-and 15-fold more potent than AP1 and AP2, respectively (Fig. 4c) , confirming that AP3 can potently block gp41 6-HB fusion core formation, thus inhibiting HIV-1 fusion with the target cell membrane. Structural basis for the potent fusion inhibitory activity of the artificial peptide AP3. To elucidate the molecular determinants of these artificial peptides, we successfully solved all three complex structures of AP1/AP2/AP3 peptides binding with gp41 NHR. For AP1 and AP2, an optimized linker Each sample was tested in triplicate and the experiment was repeated twice. The data are presented as means ± SD. *P < 0.05, **P < 0.01, ***P < 0.001. \"SGGRGG\" was used to assemble the NHR and the artificial peptide into a single recombinant protein (N36-L6-AP1 or N36-L6-AP2). However, a similar strategy failed on the crystallization of AP3; therefore, we decided to cocrystallize the synthetic peptide N45 and AP3 peptide, and eventually the complex crystals were obtained. Interestingly, the crystals of three different inhibitors belong to three distinctive space groups: P2 1 for N36-L6-AP1, R32 for N36-L6-AP2, and P6 3 for N45/AP3. As expected, the NHR portions in three structures all form a trimeric core, while the AP1, AP2 or AP3 portion folds into a single-helix conformation and binds to NHR-trimer to form a typical 6-HB, similar to that of the HIV-1 gp41 core structure formed by the native CHR peptide C34 and N36 (Fig. 5a) . Also, the conserved hydrophobic residues, such as W43, W46 and I50, in the artificial peptides were deeply buried into the hydrophobic Table 2 . Pharmacokinetic parameters of AP2, AP3 and T20 following intravenous administration at 1 mg/kg in male SD rats (n = 2). Figure 3 . Sensitivity of AP3 and T20 to proteolytic degradation by proteinase K and rat liver homogenate. (a) After digestion by proteinase K at pH 7.2 and (b) rat liver homogenate, the residual amount of AP3 and T20 was detected by LC-MS analysis. The experiment was performed in triplicate and the data are presented as means ± SD. 
The inhibition of AP1, AP2, AP3, T20 and ADS-J1 against 6-HB formation between N36 and C34 was detected by ELISA using the 6-HB-specific mAb NC-1. Each sample was tested in triplicate, and the data are presented as means ± SD. grooves formed between each pair of NHR helices, similar to the corresponding residues of W628, W631 and I635 in the native gp41 CHR (Fig. 5b) . AP peptides exhibited better affinity against gp41 natural CHR. In C34, which contains the natural CHR sequence from W628 to L661, no strong interaction between I642 and Q565 in the viral gp41 NHR-CHR complex was found (Fig. 5c) . However, in the corresponding sequence (from W43 to K76) of AP1 and AP2, a hydrogen bond was established between S57 (corresponding to I642 in CHR) and Q18 (corresponding to Q565 in NHR) in N36-L6-AP1, N36-L6-AP2 and N45/AP3. Thus, S57 in AP1/AP2/AP3 plays a role in stabilizing the interactions between the artificial peptide inhibitor and its NHR target, resulting in their stronger binding affinity. Moreover, in NHR-CHR, L567 and L568 on two adjacent NHRs form a hydrophobic groove, in which T639 is buried (Supplementary Fig. S1a ). However, in N36-L6-AP1, N36-L6-AP2 and N45/AP3, I54 (corresponding to T639 in CHR) can strongly bind to L20 and L21 through fully hydrophobic side chain interactions. Similarly, the interaction of I64 (corresponding to S659 in CHR) with L10 and L11 (corresponding to L567 and L568 in NHR, respectively) in N36-L6-AP1, N36-L6-AP2 and N45/AP3 has been significantly enhanced ( Supplementary Fig. S1b ). Like the gp41 CHR helix, the helices of AP1, AP2 and AP3 also have two different sides, a hydrophobic side facing toward the NHR and a hydrophilic one facing outward. It is expected that the enhancement of the hydrophilicity of the exposed side of the inhibitors can increase their antiviral activity and solubility. To achieve this goal, the amino acid residues with hydrophobicity, or low hydrophilicity, like N637, S640, L641 and S644 in CHR, were changed to the amino acid residues with high hydrophilicity, like E52, K55, K56 and E59 in AP1, AP2 and AP3, respectively. Moreover, the hydrophobic residue M629 in CHR was replaced with a hydrophilic residue E44 in AP2 and AP3 ( Supplementary Fig. S2 ). These hydrophilic residues, such as glutamic acid and lysine, can increase the solubility of whole peptide and, hence, stabilize the complex formed by the inhibitor and its target. It has been proved that the EE-KK double salt bridge can stabilize helix conformation 30 . We have identified this kind of interaction between i and i + 3 or i + 4 positions on the three complex structures. In N36-L6-AP1, R48 interacts with E45 and E52 to form a salt bridge network. In N36-L6-AP2, E45 interacts with K48, and E52 binds to K56, while in N45/AP3, K69 binds to E66 ( Supplementary Fig. S2 ). These strong salt bridges formed by the oppositely charged residues stabilize AP peptide conformation, bringing its inhibitory effect into full play. As previously reported, addition of the \"M-T hook\" to the CHR peptides C34 and sifuvertide could dramatically improve the anti-HIV-1 activity 17, 18 . As expected, the N-terminal Met and Thr of AP3 forms a hook-like structure (Fig. 5d) . The hydrophobic methionine side chain of M41 accommodates the groove between AP3 and NHR helices, capping the hydrophobic pocket. This interaction leads to a series of conformational changes. The main chain of AP3 at W43 moves 1.91 Å closer to NHR compared to AP2 (Supplementary Fig. S3 ). 
The side chain of W43 in AP3 flips around 90 degrees and is buried deeper than that of AP2. The side chain of E44 turns back to interact to D47, but the E45 side chain turns back from K48 and interacts with T42. Therefore, this M-T hook structure could further stabilize the binding between AP3 and NHR target. Enfuvirtide, also known as T20, was approved by the U.S. FDA as the first HIV entry inhibitor-based antiviral drug for use with other anti-HIV medicines to treat HIV-1 infected adults and children at ages 6-16 years 23,31,32 (http://www.fuzeon.com). Although T20 is an indispensable anti-HIV drug for HIV/ AIDS patients who have failed to respond to the current antiretroviral therapeutics, its shortcomings have limited its clinical application. T20 has lower anti-HIV activity and shorter half-life than other CHR peptides containing PBD, such as C34 and C38 8, 9, 33 . In addition, T20-resistant HIV-1 variants emerged shortly (e.g., 14 days) after its use in patients 34 . Most of the T20-resistant viruses carried mutations in the GIV motif (residues 36-45: GIVQQQNNLL) in the gp41 NHR domain 10, [34] [35] [36] [37] [38] . The lack of PBD contributes to the major weaknesses of T20 described above. Since the conserved hydrophobic pocket in the gp41 NHR-trimer plays a critical role in stabilizing the interaction between the gp41 NHR and CHR and formation of the fusogenic 6-HB core 1, 39, 40 , the PBD-containing CHR-peptide, like C34, can bind to viral gp41 trimer more strongly and stably, thus possessing more potent anti-HIV activity than T20, a CHR peptide without PBD 8, 9 . In the absence of PBD, T20 mainly interacts with the middle region of the NHR domain containing the GIV motif. Therefore, a virus with mutations in this motif is generally resistant to T20 10, [34] [35] [36] [37] [38] . Compared with other anti-HIV drugs, another weakness of T20 is its cross-reactivity with the preexisting antibodies in HIV-1-infected patients. Besides gp41, T20 could also bind to some regions in gp120. The preexisting antibodies specific for the T20's binding sites in gp120 and gp41 may indirectly suppress the anti-HIV activity of T20 14, 15 . Addition of PBD to the N-terminus of T20, such as T-1249, could significantly improve the anti-HIV-1 potency, half-life and drug-resistance profile 33, [41] [42] [43] . Addition of M-T hook structure to the N-terminus of a PBD-containing CHR-peptides, such as MT-C34 or MT-SFT, could further increase the anti-HIV-1 activity of the corresponding CHR-peptides 17, 18 . Deletion of the GIV-motif-binding domain from a CHR-peptide, such as CP621-652 and CP32M, is another effective approach to increase the genetic barrier to drug resistance 44, 45 . However, none of the above approaches is effective in preventing the cross-reaction of T20 with the preexisting anti-gp41 antibodies in HIV/AIDS patients, since the above-modified peptides mainly contain the native sequences of the HIV-1 gp41 CHR domain. Our previous studies have shown that AP1 and AP2, artificial peptides with non-native protein sequences, could form coiled-coil structure to interact with gp41 NHR and inhibit HIV-1 Env-mediated cell-cell fusion 16 . In the present study, we designed a new artificial peptide, AP3, by adding M-T hook structure to the N-terminus of AP2 (Fig. 1a) , followed by investigating the influence of preexisting anti-gp41 antibodies in HIV-infected patients on AP3, using AP1, AP2 and T20 as controls. 
We demonstrated that sera of HIV-infected patients could bind to T20 and significantly reduce its potency against HIV-1. However, these same serum samples did not interact with the three artificial peptides and hardly impaired their antiviral activity. Surprisingly, the antibodies in the sera could even enhance AP3's anti-HIV-1 activity (Fig. 2a,b and Supplementary Table S1 ). These results confirmed, for the first time, that replacement of the native viral sequence in T20 with an artificial sequence is an effective approach to overcome a key shortcoming of T20 whereby its anti-HIV activity could be attenuated by preexisting anti-gp41 antibodies in HIV/AIDS patients. It is worthwhile to explore why the antibodies in the sera is able to enhance the anti-HIV-1 activity of AP3. Our recent study has demonstrated that T20's anti-HIV-1 activity is enhanced by a non-neutralizing antibody directed against the NHR domain of the HIV-1 gp41 46 . We thus hypothesize that some of the anti-gp41 antibodies in HIV/AIDS patients may bind to a site in NHR domain adjacent to the AP3's binding region, resulting in increased interaction between AP3 and NHR-trimer and enhanced antiviral activity of AP3. We then compared the inhibitory activity of AP3 with M-T hook and T20/AP2 without M-T hook on infection by divergent HIV-1 strains. AP3 was more effective than either AP2 or T20 in inhibiting infection by the laboratory-adapted strains and the primary isolates of HIV-1, including those resistant to T20 (Fig. 1b, Table 1 ). One may question whether AP3 can also induce drug-resistant viruses in patients if it is used in clinics to treat HIV-infected patients. We believe that AP3 is expected to have much higher genetic barrier to resistance than T20 because AP3 contains PBD, while T20 lacks PBD. Dwyer et al.\n\nQuestion: Why did the T20/N36 complex not show a typical alpha helical conformation?" }, { "role": "assistant", "content": "Because T20 lacks the pocket-binding domain (PBD)" } ]
science.qasper_extractive_qa
science.qasper_extractive_qa.102.0
[ { "role": "user", "content": "You will be shown sections from a scientific research paper, together with a question about the paper. Your task is to answer the question based on the contents of the paper. Your response should be a `json` object with two fields:\n\n\"answer\": An array of strings extracted directly from the paper which, collectively, answer the question.\n\"evidence\": An array of strings. Each should be an excerpt from the paper, in which one or more of the extracted answers can be found.\n\nFor instance, for the question \"What baselines did the authors compare against?\", a sample response might be:\n\n{\n \"answer\": [\"BERT\", \"RoBERTa\"],\n \"evidence\": [\"In our experiments, we compare the performance of our model against BERT and RoBERTa.\"]\n}\n\nDo not include any text in your response other than the json.\n\nIf the question is unanswerable given the provided excerpts, respond with the single word \"null\".\n\nPaper: Efficient Twitter Sentiment Classification using Subjective Distant Supervision\n\nAs microblogging services like Twitter are becoming more and more influential in today's globalised world, its facets like sentiment analysis are being extensively studied. We are no longer constrained by our own opinion. Others opinions and sentiments play a huge role in shaping our perspective. In this paper, we build on previous works on Twitter sentiment analysis using Distant Supervision. The existing approach requires huge computation resource for analysing large number of tweets. In this paper, we propose techniques to speed up the computation process for sentiment analysis. We use tweet subjectivity to select the right training samples. We also introduce the concept of EFWS (Effective Word Score) of a tweet that is derived from polarity scores of frequently used words, which is an additional heuristic that can be used to speed up the sentiment classification with standard machine learning algorithms. We performed our experiments using 1.6 million tweets. Experimental evaluations show that our proposed technique is more efficient and has higher accuracy compared to previously proposed methods. We achieve overall accuracies of around 80% (EFWS heuristic gives an accuracy around 85%) on a training dataset of 100K tweets, which is half the size of the dataset used for the baseline model. The accuracy of our proposed model is 2-3% higher than the baseline model, and the model effectively trains at twice the speed of the baseline model.\n\nIntroduction\nA lot of work has been done in the field of Twitter sentiment analysis till date. Sentiment analysis has been handled as a Natural Language Processing task at many levels of granularity. Most of these techniques use Machine Learning algorithms with features such as unigrams, n-grams, Part-Of-Speech (POS) tags. However, the training datasets are often very large, and hence with such a large number of features, this process requires a lot of computation power and time. The following question arises: What to do if we do not have resources that provide such a great amount of computation power? The existing solution to this problem is to use a smaller sample of the dataset. For sentiment analysis, if we train the model using a smaller randomly chosen sample, then we get low accuracy [16, 17]. 
In this paper, we propose a novel technique to sample tweets for building a sentiment classification model so that we get higher accuracy than the state-of-the-art baseline method, namely Distant Supervision, using a smaller set of tweets. Our model has lower computation time and higher accuracy compared to baseline model.\nUsers often express sentiment using subjective expression. Although objective expressions can also have sentiment, it is much rare. Determining subjectivity is quite efficient compared to determining sentiment. Subjectivity can be determined for individual tweets. But to do sentiment classification, we need to build a classification model with positive and negative sentiment tweets. The time to train a sentiment classification model increases with the increase in the number of training tweets. In this paper, we use tweet subjectivity to select the best training tweets. This not only lowers the computation time but also increases the accuracy because we have training data with less noise. Even the created features will be more relevant to the classification task. The computation cost will reduce due to small training data size and better set of features. Thus if users do not have enough computational resources, they can filter the training dataset using a high value of subjectivf ity threshold. This ensures reliable prediction on a smaller training dataset, and eventually requires less computational time. The above approach, and some of the intricacies that invariably seep in, need to be considered, and are described in the later sections of the paper. In this paper we also integrate a lot of meticulous preprocessing steps. This makes our model more robust, and hence leads to higher accuracy.\nAlong with the machine learning algorithms being used, we use a heuristic-based classification of tweets. This is based on the EFWS of a tweet, which is described in later sections. This heuristic basically takes into account the polarity scores of frequently used words in tweets, and is able to achieve around 85% accuracy on our dataset, hence boosting the overall accuracy by a considerable amount.\nOur training data consists of generic (not topic-specific) Twitter messages with emoticons, which are used as noisy labels. We show that the accuracy obtained on a training dataset comprising 100K tweets, and a test dataset of 5000 tweets gives an accuracy of around 80% on the following classifiers: Naive Bayes, RBF-kernel Support Vector Machine, and Logistic Regression. Our model takes roughly half the time to train and achieves higher accuracy (than the baseline model) on all the classifiers. Because the amount of training time is expected to increase exponentially as the training data increases, we expect our model to outperform (in terms of higher accuracy) the baseline model at a speed which is at least twofold the speed of the baseline model on larger datasets.\n\nCorpus\nOur training dataset has 1.6 million tweets, and 5000 tweets in the test dataset. Since the test dataset provided comprised only 500 tweets, we have taken part of the training data (exactly 5000 tweets, distinct from the training dataset) as the test dataset. We remove emoticons from our training and test data. The table below shows some sample tweets.\n\nBaseline model\nThe baseline model for our experiments is explained in the paper by Alec Go [1]. The model uses the Naive Bayes, SVM, and the Maximum Entropy classifiers for their experiment. 
Their feature vector is either composed of Unigrams, Bigrams, Unigrams + Bigrams, or Unigrams + POS tags.\nThis work achieved the following maximum accuracies:\na) 82.2 for the Unigram feature vector, using the SVM classifier,\nb) 83.0 for the Unigram + Bigram feature vector, using the MaxEnt classifier, and 82.7 using the Naive Bayes classifier.\nc) 81.9 for the Unigram + POS feature vector, using the SVM classifier.\nThese baseline accuracies were on a training dataset of 1.6 million tweets, and a test dataset of 500 tweets. We are using the same training dataset for our experiments. We later present the baseline accuracies on a training set of 200K tweets, and a test dataset of 5000 tweets; we compare our model's accuracy with these baseline accuracy values on the same test data of 5000 tweets.\n\nEvaluation\nIn this section, we present the collated results of our experiments. To show that our model achieves higher accuracy than the baseline model and on a smaller training dataset, we first fix the test dataset. Our test dataset, as mentioned before, consists of 5000 tweets. We conducted our experiments on an Intel Core i5 machine (4 cores), with 8 GB RAM. The following are the accuracies of the baseline model on a training set of 200K tweets:\nWe filtered the training set with a subjectivity threshold of 0.5. By doing this, we saw that the number of tweets reduced to approximately 0.6 million tweets from an earlier total of 1.6 million. We then trained our model described in earlier sections on a 100K tweets randomly picked from this filtered training dataset, and observed the following accuracies:\nNote that all the accuracies in the tables above have been recorded as the average of 3 iterations of our experiment. We achieve higher accuracy for all feature vectors, on all classifiers, and that too from a training dataset half the size of the baseline one.\nWe now see the intricacies of the subjectivity threshold parameter. It is clear that more and more tweets get filtered as the subjectivity threshold parameter increases. This can be seen in the Figure 1 shown below. We have plotted the number of tweets that remain after filtering from two sources: TextBlob, Opinion Finder Tool. TextBlob has an inbuilt function that provides us the subjectivity level of a tweet. On the other hand, Opinion Finder only provides the information of which parts of the text are subjective, and which are objective. 
From that, we define the subjectivity level of that text as:\nSubjectivity level = INLINEFORM0 \n[ xlabel=Subjectivity Threshold, ylabel=Tweets (in millions), xmin=0, xmax=1, ymin=0, ymax=2000000, xtick=0,0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1, ytick=0,200000,400000,600000,800000,1000000,1200000,1400000,1600000,1800000, legend pos=north east, ] [color=red] coordinates (0, 1600000) (0.1, 939785) (0.2, 873054) (0.3, 804820) (0.4, 712485) (0.5, 571864) (0.6, 449286) (0.7, 304874) (0.8, 211217) (0.9, 135788) ;\n[color=blue] coordinates (0, 1600000) (0.1, 602313) (0.2, 499173) (0.3, 392223) (0.4, 262109) (0.5, 169477) (0.6, 154667) (0.7, 139613) (0.8, 126148) (0.9, 116842) ; Textblob, Opinion Finder\nFigure 1: Number of tweets with subjectivity greater than the subjectivity threshold\n[ xlabel=Subjectivity Threshold, ylabel=Accuracy (from 0 to 1), xmin=0, xmax=1, ymin=0.7, ymax=1, xtick=0,0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1, ytick=0,0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1, legend pos=north east, ] [color=red] coordinates (0.1, 0.753871866) (0.2, 0.779442897) (0.3, 0.763421155) (0.4, 0.783231198) (0.5,0.805132645) (0.6,0.807373259) (0.7,0.808587744) (0.8,0.817799443) (0.9,0.823872989) ; Figure 2: Variation of accuracy (*Training data of 100K, Test data of 5K) with subjectivity threshold. *TextBlob is used to filter the tweets to form the training dataset.\nWe now focus on the issue of choosing the optimum threshold value. As the subjectivity threshold parameter increases, our model trains on tweets with a higher subjectivity level, and the overall accuracy increases. We observed the following accuracies on subjectivity level 0.8 (Unigrams as features):\nNaive Bayes: 80.32%\nNon-linear SVM: 80.15 %\nLogistic Regression: 81.77%\nWe should consider the fact that a lot of useful tweets are also lost in the process of gradually increasing the parameter, and this could cause a problem in cases when the test data is very large, because the model will not train on a generic dataset. Researchers may use a higher subjectivity threshold for their experiments if they are confident that most of the important information would be retained. This is most likely to happen in case of topic-specific or domain-specific data.\n[ ybar, enlargelimits=0.15, legend style=anchor=north, legend pos= north east, ylabel=Training time (in minutes), symbolic x coords=baseline,subjectivity=0.5,subjectivity=0.8, xtick=data, ] coordinates (baseline,17.4) (subjectivity=0.5,12.55) (subjectivity=0.8,10.68); coordinates (baseline,16.23) (subjectivity=0.5,12.31) (subjectivity=0.8,10.34); coordinates (baseline,31.9) (subjectivity=0.5,18.24) (subjectivity=0.8,16.3); Logistic Regression,Naive Bayes,SVM\nFigure 3: Comparison of training times for Unigrams\n[ ybar, enlargelimits=0.15, legend style=anchor=north, legend pos= north east, ylabel=Training time (in minutes), symbolic x coords=baseline,subjectivity=0.5,subjectivity=0.8, xtick=data, ] coordinates (baseline,28.41) (subjectivity=0.5,14.09) (subjectivity=0.8,11.3); coordinates (baseline,16.6) (subjectivity=0.5,13.51) (subjectivity=0.8,12.66); coordinates (baseline,35.2) (subjectivity=0.5,20.6) (subjectivity=0.8,19.2); Logistic Regression,Naive Bayes,SVM Figure 4: Comparison of training times for Unigrams + Bigrams\nWe use Logistic regression for classification and unigrams as the feature vector with K-fold cross validation for determining the accuracy. 
We choose an optimal threshold value of 0.5 for our experiment, considering the fact that the model should train on a more generic dataset. Figure 2 shows the variation of accuracy with the subjectivity threshold. The training size is fixed at 100K and the test dataset (5K tweets) is also same for all the experiments.\nWe also measure the time taken to train our model, and compare it to the baseline model. Our observation was that our model took roughly half the amount of time in some cases and yet obtained a higher accuracy. Figures 3 and 4 show the difference in training time of the baseline model, our model on a 0.5 subjectivity-filtered dataset, and our model on a 0.8 subjectivity-filtered dataset on unigrams and unigrams + bigrams respectively. The times recorded are on a training dataset of 100K for our model and 200K for the baseline model, and a test dataset of 5K was fixed in all the recordings. The winning point, which can be seen from the plots, is that our model is considerably faster, and even has twofold speed in some cases. And alongside saving computation time, it achieves higher accuracy. This can be attributed to the fact that as the subjectivity threshold increases, only the tweets with highly polar words are retained in the training set and this makes the whole process faster.\n\nFigure 1: Number of tweets with subjectivity greater than the subjectivity threshold\n\nFigure 4: Comparison of training times for Unigrams + Bigrams\n\nQuestion: What previously proposed methods is this method compared against?" }, { "role": "assistant", "content": "{\"answer\": [\"Naive Bayes\", \"SVM\", \"Maximum Entropy classifiers\"], \"evidence\": [\"The baseline model for our experiments is explained in the paper by Alec Go [1]. The model uses the Naive Bayes, SVM, and the Maximum Entropy classifiers for their experiment. Their feature vector is either composed of Unigrams, Bigrams, Unigrams + Bigrams, or Unigrams + POS tags.\"]}" } ]
science.chemprot_re
science.chemprot_re.588
[ { "role": "user", "content": "Given an abstract from a research paper, your task is to identify and label relationships between chemical compounds and genes mentioned in the abstract.\n\nThe relationship type must be one of ['DOWNREGULATOR', 'SUBSTRATE', 'INDIRECT-REGULATOR', 'PART-OF', 'MODULATOR-ACTIVATOR', 'REGULATOR', 'INHIBITOR', 'COFACTOR', 'UPREGULATOR', 'ACTIVATOR', 'ANTAGONIST', 'NOT', 'INDIRECT-DOWNREGULATOR', 'SUBSTRATE_PRODUCT-OF', 'INDIRECT-UPREGULATOR', 'AGONIST', 'PRODUCT-OF', 'MODULATOR', 'DIRECT-REGULATOR', 'UNDEFINED', 'AGONIST-INHIBITOR', 'AGONIST-ACTIVATOR', 'MODULATOR-INHIBITOR'].\n\nPlease format your output as a JSON array. Each entry in the array should express a single relation, formatted as [\"<Entity_A>\", \"<RELATION_A_B>\", \"<Entity_B>\"]. If no relations can be found, please output an empty JSON array [].\n\nAbstract:\n\nAntigenotoxic potencies of a lichen species, Evernia prunastri. In this article, the genotoxic and antigenotoxic effects of methanol extract of Evernia prunastri (Huds.) Willd. (MEP) were studied using WP2, Ames (TA1535 and TA1537) and sister chromatid exchange (SCE) test systems. The results obtained from bacterial test systems demonstrated that MEP has strong antimutagenic potencies on TA1537 and WP2 strains. The highest inhibition rates for MEP on TA1537 and WP2 strains were 37.70% and 69.70%, respectively. According to the SCE test system, MEP reduced the genotoxic effects of aflatoxin. In order to clarify the mechanism underlying the antigenotoxic effects of MEP, the antioxidants were determined. Cotreatments of 5, 10 and 20 µg/mL concentrations of MEP with aflatoxin B (1) decreased the frequencies of SCE and the malondialdehyde level and increased amount of superoxide dismutase, glutathione and glutathione peroxidase which were decreased by aflatoxin. The data obtained from this work have clearly shown that MEP has significant antigenotoxic effects which are thought to be partly due to the antioxidant activities and antioxidant inducing capability of MEP. This is the first report indicating the antigenotoxic activities of MEP against several mutagen agents such as N-methyl-N'-nitro-N-nitrosoguanidine, acridin and aflatoxin." }, { "role": "assistant", "content": "[[\"aflatoxin\", \"INDIRECT-DOWNREGULATOR\", \"glutathione peroxidase\"], [\"aflatoxin B (1)\", \"INDIRECT-UPREGULATOR\", \"superoxide dismutase\"]]" } ]
science.medmentions_ner
science.medmentions_ner.2460
[ { "role": "user", "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"HealthCareActivity\", \"InjuryOrPoisoning\", \"BodySubstance\", \"IntellectualProduct\", \"AnatomicalStructure\", \"SpatialConcept\", \"Chemical\", \"Bacterium\", \"MedicalDevice\", \"Organization\", \"BiomedicalOccupationOrDiscipline\", \"Finding\", \"BiologicFunction\", \"Virus\", \"ResearchActivity\", \"ClinicalAttribute\", \"PopulationGroup\", \"Eukaryote\", \"BodySystem\", \"Food\", \"ProfessionalOrOccupationalGroup\"].\n\nPlease return the output as a JSON object of the format: {\"Virus\": [\"HIV\", ...], \"Bacterium\": [\"MRSA\", ...], \"AnatomicalStructure\": [\"Lung\", ...], \"BodySystem\": [\"CNS\", ...], \"BodySubstance\": [\"Serum\", ...], \"Finding\": [\"Headache\", ...], \"InjuryOrPoisoning\": [\"Fracture\", ...], \"BiologicFunction\": [\"Death\", ...], \"HealthCareActivity\": [\"Biopsy\", ...], \"ResearchActivity\": [\"Clinical trial\", ...], \"MedicalDevice\": [\"Lenses\", ...], \"SpatialConcept\": [\"Camps\", ...], \"BiomedicalOccupationOrDiscipline\": [\"Forensic medicine\", ...], \"Organization\": [\"WHO\", ...], \"ProfessionalOrOccupationalGroup\": [\"Provider\", ...], \"PopulationGroup\": [\"Swimmers\", ...], \"Chemical\": [\"Gold\", ...], \"Food\": [\"Rice\", ...], \"IntellectualProduct\": [\"RPAM\", ...], \"ClinicalAttribute\": [\"Biomarker\", ...], \"Eukaryote\": [\"Dogs\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nEfficacy of a multi-component psychosocial intervention program for caregivers of persons living with neurocognitive disorders, Alexandria, Egypt: A randomized controlled trial Unlike other chronic diseases, dementia caregiving associated with enormous psychological burden, which stresses the need for caregivers -directed psychosocial interventions. Aim: This randomized controlled trial (RCT) was conducted to evaluate the short-term efficacy of a multi-component psychosocial intervention program for informal caregivers of persons with neurocognitive disorders in Alexandria, Egypt. Informal caregivers (120) were randomly assigned into intervention and control groups. The intervention group (60) participated in a multi-component program of 8 sessions, including psycho-education, group cognitive-behavioral therapy, and group social support. Program primary outcomes were assessed after program termination (post-1), and three months later (post-2). Measured outcomes included caregivers ' knowledge, depression and anxiety symptoms, and perceived burden. Caregivers ' depression, anxiety, and perceived burden demonstrated significant drop at post-1, and post-2 compared to the control group (P< 0.001). The intervention group showed significant negative absolute change on depression, anxiety, and perceived burden measures, while on the dementia -related knowledge measure, a significant positive absolute change was found at post-1, and post-2 (P< 0.001), in comparison to controls. All outcome measures recorded a large effect size; the highest was for knowledge (partial eta2 = 0.98), and the least was for perceived burden (partial eta2 = 0.71). 
A multi-component psychosocial intervention for caregivers of persons with neurocognitive disorders demonstrated a short-term efficacy in reducing their burden, depression, and anxiety, as well as improving caregivers ' knowledge. However, further research is needed to investigate long-term efficacy of the intervention." }, { "role": "assistant", "content": "{\"AnatomicalStructure\": [], \"Bacterium\": [], \"BiologicFunction\": [\"chronic diseases\", \"dementia\", \"neurocognitive disorders\"], \"BiomedicalOccupationOrDiscipline\": [], \"BodySubstance\": [], \"BodySystem\": [], \"Chemical\": [], \"ClinicalAttribute\": [], \"Eukaryote\": [], \"Finding\": [\"anxiety\", \"anxiety symptoms,\", \"negative\", \"positive\", \"knowledge\", \"depression\"], \"Food\": [], \"HealthCareActivity\": [\"cognitive-behavioral therapy\", \"intervention\", \"multi-component psychosocial intervention\", \"evaluate\", \"psychosocial interventions\"], \"InjuryOrPoisoning\": [], \"IntellectualProduct\": [], \"MedicalDevice\": [], \"Organization\": [], \"PopulationGroup\": [\"persons\", \"intervention\", \"intervention group\"], \"ProfessionalOrOccupationalGroup\": [\"Caregivers\", \"caregivers\"], \"ResearchActivity\": [\"randomized controlled trial\", \"RCT\", \"research\"], \"SpatialConcept\": [\"Egypt\", \"Alexandria\"], \"Virus\": []}" } ]
science.ncbi_ner
science.ncbi_ner.106
[ { "role": "user", "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Disease\"].\n\nPlease return the output as a JSON object of the format: {\"Disease\" : [\"COVID-19\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nA retrospective anonymous pilot study in screening newborns for HFE mutations in Scandinavian populations. We have retrospectively analyzed 837 random anonymized dried blood spot (DBS) samples from neonatal screening programs in Scandinavia for mutations in HFE, the candidate gene for hemochromatosis. We have found C282Y allele frequencies of 2. 3% (+ 2. 0%) (-1. 3%) in Greenland, 4. 5% +/-1. 9% in Iceland, 5. 1% +/-2. 3% in the Faeroe Islands, and 8. 2% +/-2. 7% in Denmark. The high prevalence of HFE mutations in Denmark suggests that population screening for the C282Y mutation could be highly advantageous in terms of preventive health care. Long-term follow-up evaluation of C282Y homozygotes and H63D/C282Y compound heterozygotes will give an indication of the penetrance of the mutations." }, { "role": "assistant", "content": "{\"Disease\": [\"hemochromatosis\"]}" } ]
science.scireviewgen_multidoc_summarization
science.scireviewgen_multidoc_summarization.29252
[ { "role": "user", "content": "You are presented with multiple paper abstracts, alongside the associated literature review and chapter titles. Given the provided information, you are going to do a literature review. Please generate a coherent literature review that synthesizes the content, draws connections between different works, highlights major findings, and identifies any gaps in the existing research.\n\nLiterature Review Title: High-Pressure Induced Phase Transitions in High-Entropy Alloys: A Review \nChapter Title: Hcp-Structured HEAs \n1. Abstract of Cited Paper (BIB001): High-entropy alloys (HEAs) with an atomic arrangement of a hexagonal close-packed (hcp) structure were found in YGdTbDyLu and GdTbDyTmLu alloys as a nearly single hcp phase. The equi-atomic alloy design for HEAs assisted by binary phase diagrams started with selecting constituent elements with the hcp structure at room temperature by permitting allotropic transformation at a high temperature. The binary phase diagrams comprising the elements thus selected were carefully examined for the characteristics of miscibility in both liquid and solid phases as well as in both solids due to allotropic transformation. The miscibility in interest was considerably narrow enough to prevent segregation from taking place during casting around the equi-atomic composition. The alloy design eventually gave candidates of quinary equi-atomic alloys comprising heavy lanthanides principally. The XRD analysis revealed that YGdTbDyLu and GdTbDyTmLu alloys thus designed are formed into the hcp structure in a nearly single phase. It was found that these YGdTbDyLu and GdTbDyTmLu HEAs with an hcp structure have delta parameter (δ) values of 1.4 and 1.6, respectively, and mixing enthalpy (ΔHmix) = 0 kJ/mol for both alloys. These alloys were consistently plotted in zone S for disordered HEAs in a δ-ΔHmix diagram reported by Zhang et al. (Adv Eng Mater 10:534, 2008). The value of valence electron concentration of the alloys was evaluated to be 3 as the first report for HEAs with an hcp structure. The finding of HEAs with the hcp structure is significant in that HEAs have been extended to covering all three simple metallic crystalline structures ultimately followed by the body- and face-centered cubic (bcc and fcc) phases and to all four simple solid solutions that contain the glassy phase from high-entropy bulk metallic glasses. \n2. Abstract of Cited Paper (BIB002): The microstructures and properties of high-entropy alloys (HEAs) based on the face-centered cubic and body-centered cubic structures have been studied extensively in the literature, but reports on HEAs in the hexagonal close-packed (HCP) structure are very limited. Using an efficient strategy in combining phase diagram inspection, CALPHAD modeling, and ab initio molecular dynamics simulations, a variety of new compositions are suggested that may hold great potentials in forming single-phase HCP HEAs that comprise rare earth elements and transition metals, respectively. Experimental verification was carried out on CoFeReRu and CoReRuV using X-ray diffraction, scanning electron microscopy, and energy dispersion spectroscopy. \n3. Abstract of Cited Paper (BIB003): The stability of high-entropy alloys (HEAs) is a key issue before their selection for industrial applications. 
In this study, in-situ high-pressure and high-temperature synchrotron radiation X-ray diffraction experiments have been performed on three typical HEAs Ni20Co20Fe20Mn20Cr20, Hf25Nb25Zr25Ti25, and Re25Ru25Co25Fe25 (at. %), having face-centered cubic (fcc), body-centered cubic (bcc), and hexagonal close-packed (hcp) crystal structures, respectively, up to the pressure of ∼80 GPa and temperature of ∼1262 K. Under the extreme conditions of the pressure and temperature, all three studied HEAs remain stable up to the maximum pressure and temperatures achieved. For these three types of studied HEAs, the pressure-dependence of the volume can be well described with the third order Birch-Murnaghan equation of state. The bulk modulus and its pressure derivative are found to be 88.3 GPa and 4 for bcc-Hf25Nb25Zr25Ti25, 193.9 GPa and 5.9 for fcc-Ni20Co20Fe20Mn20Cr20, and 304.6 GPa and 3.8 for hcp-Re25Ru25Co25F... \n4. Abstract of Cited Paper (BIB004): Abstract High-entropy alloys containing 5 and 6 platinum group metals have been prepared by thermal decomposition of single-source precursors non requiring high temperature or mechanical alloying. The prepared Ir 0.19 Os 0.22 Re 0.21 Rh 0.20 Ru 0.19 alloy is the first example of a single-phase hexagonal high-entropy alloy. Heat treatment up to 1500 K and compression up to 45 GPa do not result in phase changes, a record temperature and pressure stability for a single-phase high-entropy alloy. The alloys show pronounced electrocatalytic activity in methanol oxidation, which opens a route for the use of high-entropy alloys as materials for sustainable energy conversion. \n5. Abstract of Cited Paper (BIB005): Abstract The HoDyYGdTb high-entropy alloy (HEA) was synthesized and its structural phase transitions were determined under compression up to 60.1 GPa at room temperature. Three transformations following the sequence hexagonal close-packed (hcp) → samarium type (Sm-type) → double hexagonal close-packed (dhcp) → distorted face-center cubic (dfcc) are observed at 4.4, 26.7, and 40.2 GPa, respectively. The high pressure equation of state was determined for the HEA according to third-order Birch-Murnaghan equation of state. The results show that the bulk modulus and atomic volume of the alloy obey the “additivity law”. \n6. Abstract of Cited Paper (BIB006): The formation of octonary DyErGdHoLuScTbY and senary DyGdHoLaTbY and ErGdHoLaTbY high-entropy alloys (HEAs) with the hexagonal close-packed (HCP) structure was reported in this study. Experiments using scanning electron microscopy and x-ray diffraction confirmed the single HCP solid solution in the as-cast state for these three HEAs if the presence of minor rare-earth oxides due to contamination from processing is ignored. The measured compressive yield stress values for these HEAs at room temperature are 245, 205, and 360 MPa for the ErGdHoLaTbY, DyGdHoLaTbY, and DyErGdHoLuScTbY HEAs, respectively. The corresponding solid solution strengthening contributions for these HEAs were estimated using a simple elastic model, and the resulting contributions were 28 MPa, 27 MPa, and 42 MPa for the three aforementioned HEAs.The formation of octonary DyErGdHoLuScTbY and senary DyGdHoLaTbY and ErGdHoLaTbY high-entropy alloys (HEAs) with the hexagonal close-packed (HCP) structure was reported in this study. 
Experiments using scanning electron microscopy and x-ray diffraction confirmed the single HCP solid solution in the as-cast state for these three HEAs if the presence of minor rare-earth oxides due to contamination from processing is ignored. The measured compressive yield stress values for these HEAs at room temperature are 245, 205, and 360 MPa for the ErGdHoLaTbY, DyGdHoLaTbY, and DyErGdHoLuScTbY HEAs, respectively. The corresponding solid solution strengthening contributions for these HEAs were estimated using a simple elastic model, and the resulting contributions were 28 MPa, 27 MPa, and 42 MPa for the three aforementioned HEAs.\n\nLiterature Review:" }, { "role": "assistant", "content": "HEAs commonly form with fcc or bcc structures. Recently, hcp-structured HEAs were observed in high-pressure experiments via an fcc to hcp polymorphic transition and was obtained in the melt-quenched alloys mainly consisting of heavy hcp metals, e.g., the CoOsReRu, CoFeReRu and CoReRuV, Ir0.19Os0.22Re0.21Rh0.20Ru0.19, or rare earth hcp elements BIB001 BIB002 [10] BIB003 BIB007 . Ahmad et al. BIB004 investigated the structural stability of the quarternary equiatomic ReRuCoFe alloy under high pressure. They compressed the sample from 0.9 GPa to 80.4 GPa in a DAC. The unit cell parameters of a and c, its ratio a/c, and the sample volume all continuously decreased with increasing pressure, which indicates the hcp structure of the ReRuCoFe alloy is stable under compression up to approx. 80 GPa. Yusenko et al. investigated the structural stability of the hcp Ir0.19Os0.22Re0.21Rh0.20Ru0.19 HEA at room temperature during compression up to 45 GPa but also observed no phase transition BIB005 . For the hcp-structured rare earth HEAs, Yu et al. reported a series of pressure-induced phase transitions in the HoDyYGdTb HEA by in situ XRD measurements in a DAC using synchrotron radiation x-ray. Four polymorphs were observed following a transition sequence of hcp→Sm-type→dhcp→dfcc during compression up to 60.1 GPa (Figure 8) , which resembles the rich pressure-induced polymorphic transitions in its constituent elements BIB006 . The Sm-type phase firstly appeared when the pressure reached 4.4 GPa. At 13.6 GPa, the hcp (102) diffraction peak disappeared, indicating the completion of the hcp to Sm-type phase transition. With further increasing pressure to 26.7 GPa, the dhcp phase emerged and persisted to 38.3 GPa, and then, the dhcp-dfcc phase transition occurred at 40.2 GPa." } ]
science.scireviewgen_multidoc_summarization
science.scireviewgen_multidoc_summarization.6657
[ { "role": "user", "content": "You are presented with multiple paper abstracts, alongside the associated literature review and chapter titles. Given the provided information, you are going to do a literature review. Please generate a coherent literature review that synthesizes the content, draws connections between different works, highlights major findings, and identifies any gaps in the existing research.\n\nLiterature Review Title: Learning to Detect Deceptive Opinion Spam: A Survey \nChapter Title: b: RNN \n1. Abstract of Cited Paper (BIB001): We propose using profile compatibility to differentiate genuine and fake product reviews. For each product, a collective profile is derived from a separate collection of reviews. Such a profile contains a number of aspects of the product, together with their descriptions. For a given unseen review about the same product, we build a test profile using the same approach. We then perform a bidirectional alignment between the test and the collective profile, to compute a list of aspect-wise compatible features. We adopt Ott et al. (2011)’s op spam v1.3 dataset for identifying truthful vs. deceptive reviews. We extend the recently proposed N-GRAM+SYN model of Feng et al. (2012a) by incorporating profile compatibility features, showing such an addition significantly improves upon their state-ofart classification performance. \n2. Abstract of Cited Paper (BIB002): Online reviews have become a valuable resource for decision making. However, its usefulness brings forth a curse ‒ deceptive opinion spam. In recent years, fake review detection has attracted significant attention. However, most review sites still do not publicly filter fake reviews. Yelp is an exception which has been filtering reviews over the past few years. However, Yelp’s algorithm is trade secret. In this work, we attempt to find out what Yelp might be doing by analyzing its filtered reviews. The results will be useful to other review hosting sites in their filtering effort. There are two main approaches to filtering: supervised and unsupervised learning. In terms of features used, there are also roughly two types: linguistic features and behavioral features. In this work, we will take a supervised approach as we can make use of Yelp’s filtered reviews for training. Existing approaches based on supervised learning are all based on pseudo fake reviews rather than fake reviews filtered by a commercial Web site. Recently, supervised learning using linguistic n-gram features has been shown to perform extremely well (attaining around 90% accuracy) in detecting crowdsourced fake reviews generated using Amazon Mechanical Turk (AMT). We put these existing research methods to the test and evaluate performance on the real-life Yelp data. To our surprise, the behavioral features perform very well, but the linguistic features are not as effective. To investigate, a novel information theoretic analysis is proposed to uncover the precise psycholinguistic difference between AMT reviews and Yelp reviews (crowdsourced vs. commercial fake reviews). We find something quite interesting. This analysis and experimental results allow us to postulate that Yelp’s filtering is reasonable and its filtering algorithm seems to be correlated with abnormal spamming behaviors. \n3. Abstract of Cited Paper (BIB003): Some unethical companies may hire workers (fake review spammers) to write reviews to influence consumers' purchasing decisions. 
However, it is not easy for consumers to distinguish real reviews posted by ordinary users or fake reviews post by fake review spammers. In this current study, we attempt to use Long Short-Term Memory (LSTM) Recurrent Neural Network (RNN) framework to detect spammers. In the current, we used a real case of fake review in Taiwan, and compared the analytical results of the current study with results of previous literature. We found that the LSTM method was more effective than Support Vector Machine (SVM) for detecting fake reviews. We concluded that deep learning could be use to detect fake reviews. \n4. Abstract of Cited Paper (BIB004): In the past few years, consumer review sites have become the main target of deceptive opinion spam, where fictitious opinions or reviews are deliberately written to sound authentic. Most of the existing work to detect the deceptive reviews focus on building supervised classifiers based on syntactic and lexical patterns of an opinion. With the successful use of Neural Networks on various classification applications, in this paper, we propose FakeGAN a system that for the first time augments and adopts Generative Adversarial Networks (GANs) for a text classification task, in particular, detecting deceptive reviews. Unlike standard GAN models which have a single Generator and Discriminator model, FakeGAN uses two discriminator models and one generative model. The generator is modeled as a stochastic policy agent in reinforcement learning (RL), and the discriminators use Monte Carlo search algorithm to estimate and pass the intermediate action-value as the RL reward to the generator. Providing the generator model with two discriminator models avoids the mod collapse issue by learning from both distributions of truthful and deceptive reviews. Indeed, our experiments show that using two discriminators provides FakeGAN high stability, which is a known issue for GAN architectures. While FakeGAN is built upon a semi-supervised classifier, known for less accuracy, our evaluation results on a dataset of TripAdvisor hotel reviews show the same performance in terms of accuracy as of the state-of-the-art approaches that apply supervised machine learning. These results indicate that GANs can be effective for text classification tasks. Specifically, FakeGAN is effective at detecting deceptive reviews.\n\nLiterature Review:" }, { "role": "assistant", "content": "RNN uses internal ''memory'' to process a sequence of inputs, and is widely used for processing sequential information. Theoretically, RNN can make use of the information in arbitrarily long sequences, but in practice, standard RNN is limited to looking back only a few steps due to the vanishing gradient or exploding gradient problem. Researchers have developed more sophisticated types of RNN to avoid the shortcomings of standard RNN such as Bidirectional RNN, deep bidirectional RNN, Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU), etc. Ren and Zhang (2016) empirically explored a neural network model to learn document-level representation for detecting deceptive opinion spam . Given a document, the model learned first sentence representations with a CNN, which were combined using a gated RNN to model discourse information and to yield a document vector. Finally, the document representation was used directly as features to identify deceptive opinion spam. Results on in-domain and cross-domain settings showed that the proposed model outperformed the traditional discrete models. More recently, Wang et al. 
investigated LSTM to detect spammers BIB004 . They used a real case of fake review in Taiwan, and compared the analytical results of the current study with results of previous literature. They found that LSTM was more effective than SVM for detecting fake reviews. c: OTHERS proposed to learn the representations of reviews in a data-driven manner instead of heavily relying on expert's knowledge to identify deceptive opinion spam . A tensor network model was built on the relation generated from two patterns, and a tensor factorization algorithm was used to learn the vector representations of reviews and products. Afterwards, they concatenated the review text, the embedding of a review and the reviewed product as final representation of a review. Then, a classifier was applied to detect deceptive opinion spam. Experimental results showed that the proposed method learned more robust review representations. Previous techniques failed to consider comprehensive features of entities such as review, reviewer, product and group of reviewers simultaneously. proposed a novel Multi-Iteration Network Structure which considered the most effective features along with inter-and intra-relationships between entities on Amazon. Experimental results proved the proposed model could improve the detection performance by reducing the false noise . Besides, Aghakhani et al. adopted Generative Adversarial Networks (GANs) for identifying deceptive reviews BIB005 . BIB001 . They demonstrated that the features driven from Context Free Grammar (CFG) parse trees consistently improved the detection performance. Specifically, deep syntactic features achieved 90.4% accuracy. When syntactic features were combined with Unigram features, the performance of 91.2% accuracy could be obtained. BIB002 proposed using profile compatibility to identify deceptive opinion spam BIB002 . They defined two types of compatibility between product profiles, and designed a methodology to tackle them by extracting aspects and associated descriptions from reviews. By integrating the profile alignment compatibility features with the features (Deep syntactic + Unigram) of We also show experimental results of different features on Yelp dataset constructed by filtering algorithm. As shown in Table 4 , Bigram features yield 67.8% accuracy on Yelp dataset. By adding POS Bigrams features, the performance can be improved 1-2% in accuracy, and achieve the best performance (68.1% accuracy). Only using POS Unigram, the detection accuracy is very low (55.6% accuracy). Based on the above observation, we can say that LIWC makes little contribution compared with Unigram and Bigram features. The datasets from the review websites have different data characteristics compared with the reviews based on crowdsourcing platform. Deceptive opinion spam and truthful reviews from Yelp tend to be consistent in language characteristics, while deceptive opinion spam from crowdsourcing platform and truthful reviews have great differences in language. Based on Table 3 and 4, we can know SVM with Unigram features on Yelp dataset only achieves 66.9% accuracy, which is far below the accuracy of 88.4% on AMT dataset constructed by crowdsourcing platform. This tells us that detecting real-life deceptive opinion spam (67% accuracy) in the commercial setting of Yelp is significantly harder than detecting crowdsourced deceptive opinion spam (90% accuracy). 
analysed the reason from a psychological viewpoint BIB003 , and pointed out that the Yelp website's deceptive opinion spam tried to make opinion spam as convincing as a truthful review. However, deceptive opinion spam from Amazon crowdsourcing platform in which the purpose of the Turker was to make money and they might lack a real purchase experience. For AMT dataset constructed by crowdsourcing platform, n-gram features give a high accuracy, indicating that deceptive opinion spam and truthful reviews are quite different in language structures by only imagining to make reviews." } ]
science.scientific_papers_summarization_single_doc_arxiv
science.scientific_papers_summarization_single_doc_arxiv.169873
[ { "role": "user", "content": "You will be presented with the full text of a science research paper. Please write an abstract for this paper. Your response should include the abstract and no additional text.\n\nPaper text:\naccording to the currently accepted formulation , the energy flux density of radiation is @xmath0 the energy flux density of radiation at time @xmath1 and at any position @xmath2 on a sphere , eq .\n( [ eqs ] ) , is due to the radiation emanating by an accelerated charge at an earlier time @xmath3 and at the center of the sphere . the velocity @xmath4 and the acceleration @xmath5 of the charge in the right - hand side of eq .\n( [ eqs ] ) are defined at the retarded time @xmath3 .\nthe energy radiated into a solid angle @xmath6 in the direction @xmath7 , and then measured at the position @xmath8 and the time @xmath1 is @xmath9 , for an infinitesimal time interval @xmath10 .\nhence , @xmath11 is the energy per unit area per unit time measured at the position @xmath2 on the sphere .\nconsequently , the radiated power passing through the surface of a surrounding sphere per unit solid angle @xmath6 in the direction @xmath7 is @xmath12 eq . ( [ eqjak3 ] ) is the angular distribution of radiated power per unit solid angle , as measured by observers on a surrounding sphere @xcite .\nin contrast , according to the new formulation , the energy flux density of radiation is @xmath13 thus , the angular distribution of radiated power per unit solid angle , relative to the position of the charge at the retarded time @xmath3 , is @xmath14 by integrating eq .\n( [ huangeqjak3 ] ) over a surrounding sphere , the total radiated power crossing any surrounding sphere is @xmath15 the total radiated power of radiation crossing any surrounding sphere is equal to the total power emitted by the charge at the retarded time @xmath16 , the so - called linard s result , @xmath17 as it should be to fulfill the principle of conservation of energy . yet , according to the currently accepted formulation , by integrating eq .\n( [ eqjak3 ] ) over a surrounding sphere , the total power of radiation crossing any surrounding sphere is not equal to the linard s result @xcite .\nthe currently accepted formulation violates the principle of conservation of energy .\nin the comment , singal first claimed that our reasoning above is fallacious , because we equate the evaluated result of the total radiated power of radiation crossing a sphere to the total power emitted by the charge .\nthen , he presented a resolution to the problem . referring to fig .\n[ fig1 ] , at time @xmath1 the radiation in the region enclosed by spheres @xmath18 and @xmath19 is due to the radiation emitted by the charge during the time interval from @xmath20 to @xmath21 .\nthe radiation region is not spherically symmetric around @xmath22 , since the charge moves a distance @xmath23 from @xmath22 to @xmath24 .\nthus , the radiation emitted by the charge during the time interval @xmath21 does not cross the sphere @xmath19 at all points in the time interval @xmath21 , rather it takes @xmath25 . 
therefore , from eq .\n( [ eqjak3 ] ) , the total energy of radiation @xmath26 enclosed by spheres @xmath18 and @xmath19 is evaluated as @xmath27 the total energy of radiation @xmath26 is due to the total energy emitted by the charge during the time interval @xmath21 .\n@xmath28 is equal to the total power emitted by the charge at the retarded time @xmath16 , that is , @xmath29 therefore , the currently accepted formulation does not violate the principle of conservation of energy . in that case , the new formulation violates the principle of conservation of energy .\nyet , it should be noted that @xmath30 is not the total radiated power crossing the sphere @xmath19 at time @xmath1 . according to singal s reasoning , what is the total radiated power crossing the sphere @xmath19 ?\nin our reasoning @xcite , we do not derive the linard s result , and not equate the evaluated result of the total radiated power crossing a sphere to the linard s result .\ninstead , we compare the evaluated result with the linard s result to see which formulation satisfies the principle of conservation of energy .\nit turns out that currently accepted formulation , instead of the new formulation , violates the principle of conservation of energy .\nthere are problems in the currently accepted formulation .\nsince the charge is accelerated , it does not move uniformly during the time interval @xmath21 .\nthus , the angular distribution of radiated power in the time interval is not exactly in accordance with eq .\n( [ eqjak3 ] ) as evaluated at merely one retarded time @xmath16 , because the time - retarded positions and velocities @xmath4 of the charge change during the time interval .\nthe total power emitted by the charge as evaluated in accordance with singal s reasoning should be only approximately valid , as noticed by panofsky and jackson @xcite . therefore , it is very unlikely that the _ exact _ linard s result is obtained simply by an approximation approach , unless eq .\n( [ eqjak3 ] ) is only approximately correct .\naccording to singal , the factor @xmath31 in the time interval @xmath32 is interpreted as just a matter of simple geometry . yet\nthis interpretation negates the existent interpretations : a lorentz transformation of time between the charge s frame and the observer s frame @xcite , or something similar to the doppler effect @xcite .\nwhich interpretation is correct ?\nforemost , singal does not resolve the problem that the currently accepted formulation violates the principle of conservation of energy .\nthe total radiated power crossing a surrounding sphere at time @xmath1 is not equal to @xmath30 .\nsuppose that one wants to evaluate the total radiated _ power _\n@xmath33 crossing a sphere .\none first measures the total radiated _ energy _ @xmath34 crossing the sphere in a time interval @xmath35 .\nthen , the total radiated power crossing the sphere is given as @xmath36 .\nit should be emphasized that the time interval @xmath37 must be the same , as the measurement of the total radiated energy is carried out _ at all points _ on the sphere .\nhowever , according to singal s reasoning , the measurement of the total radiated energy @xmath38 crossing the sphere @xmath19 is carried out with different time intervals @xmath32 at different points on the sphere .\n@xmath39 is meaningless , and @xmath40 is not the total radiated power crossing the sphere . 
if singal thinks that the total radiated power crossing a surrounding sphere is @xmath30 , and it is equal to the total power emitted by the charge at the retarded time , then he might make a mistake in the meaning of the total radiated power crossing a sphere .\nhence , the problem that the currently accepted formulation violates the principle of conservation of energy remains unsolved .\nanother issue in the comment is `` while deriving expressions for the electric and magnetic fields , @xmath41 and @xmath42 , huang and lu @xcite in their eqs .\n( 20)-(25 ) simply replaced @xmath43 with @xmath44 which is not correct as these two quantities are actually related by @xmath45 , where @xmath46 is the doppler factor [ 3 ] .\nthus their transformed electric and magnetic fields are wrong . ''\nthe newly derived expressions for the electromagnetic fields become the currently accepted expressions , if @xmath47 , instead of @xmath48 , is employed in the transformation of electric and magnetic fields between inertial frames .\nfirst , that the new formulation , rather than the currently accepted formulation , fulfills the principle of conservation of energy reinforces the validity of the replacement @xmath49 in the transformation .\nfurthermore , maxwell s equations of electrodynamics are shown form - invariant via a novel perspective on relativistic transformation transformation of physical quantities , instead of space - time coordinates @xcite .\nan extra transformation of spacial coordinate such as @xmath47 is not necessary in the transformation of electric and magnetic fields to render maxwell s equations form - invariant among inertial frames . according to singal , the expression @xmath47 is considered as due to the doppler effect .\nyet , the doppler effect is the transformation of physical quantities of waves such as frequency @xmath50 and wave vector @xmath51 relative to inertial frames , instead of space - time coordinates @xmath52 .\nthe expression @xmath47 has nothing to do with the doppler effect , since it involves spatial coordinates only , without frequency and wave vector of waves . a systematic method to derive the doppler effect , without involving transformation of space - time coordinates , was presented in the literature @xcite .\neven more , in a certain case an anomaly the problem of negative frequency of waves , was found by applying the invariance of the phase of waves which is equivalent to relativistic transformation of both physical quantities @xmath53 and space - time coordinates @xmath52 simultaneously @xcite .\nthis indicates that the invariance of the phase of waves is invalid .\ntherefore , the doppler effect should be related to the transformation of physical quantities @xmath53 of waves only . in singal s interpretation ,\nthe factor @xmath31 in @xmath32 is a matter of simple geometry , whereas the factor @xmath54 in @xmath47 is due to the doppler effect .\nthe two interpretations seem incompatible .\nambiguities on the meaning of @xmath25 and @xmath47 still exist in the currently accepted formulation of electromagnetic radiation of an accelerated charge . owing to a misunderstanding in the meaning of\nthe total radiated power crossing a sphere , the serious problem that currently accepted formulation violates the principle of conservation of energy remains unsolved .\nsuch controversies as paradoxes in special relativity are hardly resolved just by theoretical reasoning . 
in order to convincingly determine the validity of the currently accepted formulation , an experimental test on the angular distribution of radiated power was proposed @xcite .\nnonetheless , it is necessary to clarify which the angular distribution of radiated power is : eq .\n( [ eqjak3 ] ) , or from eq .\n( [ toteqjak3 ] ) @xmath55 further theoretical and experimental examinations on the currently accepted formulation and the new formulation should be highly welcome . 99 a.k .\nsingal , `` a first principles derivation of the electromagnetic fields of a point charge in arbitrary motion '' , am .\n* 79 * , 1036 - 1041 ( 2011 ) .\nyoung - sea huang and kang - hao lu , `` exact expression for radiation of an accelerated charge in classical electrodynamics '' , found .\nphys . * 38 * , 151 - 159 ( 2008 ) .\nsingal , `` comment on ' ' exact expression for radiation of an accelerated charge in classical electrodynamics `` '' , found\n. phys . * 43 * , 267 - 270 ( 2013 ) .\nyoung - sea huang , `` is the current formulation of the radiation of an accelerated charge valid ? '' , nuovo cimento b * 124 * , 925 - 929 ( 2009 ) .\nw.k.h . panofsky and m. phillips , _ classical electricity and magnetism _ , ( addison - wesley publishing , inc . , 1962 ) , chapter 20 .\njackson , _ classical electrodynamics _ , third edition , ( john wiley & sons inc .\n, new york , 1999 ) , chapter 14 .\ngriffiths , _ introduction to electrodynamics _ , 2nd edition , ( prentice - hall , inc .\n, new jersey , 1986 ) , chapters 9 and 10 .\nlandau , and e.m .\nlifshitz , _ the classical theory of field _ , fourth edition , ( pergamon press ltd\n. , 1975 ) , chapter 9 .\nsmith , _ an introduction to classical electromagnetic radiation _ , ( cambridge university press , new york , 1997 ) , chapter 6 .\nyoung - sea huang , `` a new perspective of relativistic transformation for maxwell s equations of electrodynamics '' , phys .\n* 79 * , 055001 ( 2009 ) ; `` a new perspective on relativistic transformation versus the conventional lorentz transformation illustrated by the problem of electromagnetic waves in moving media '' , * 81 * , 015004 ( 2010 ) ; `` a new perspective on relativistic transformation : formulation of the differential lorentz transformation based on first principles '' , * 82 * , 045011 ( 2010 ) .\nyoung - sea huang , can .\n, `` formulation of the classical and the relativistic doppler effect by a systematic method '' , * 82 * , 957 - 964 ( 2004 ) . young - sea huang , `` the invariance of the phase of waves among inertial frames is questionable '' , epl * 79 * , 10006 ( 2007 ) ; `` is the phase of plane waves an invariant ? '' , z. naturforsch .\n* 65a * , 615 ( 2010 ) ." }, { "role": "assistant", "content": "flaws and ambiguities are pointed out upon examining the comment attempting to solve a problem as raised recently the currently accepted formulation of electromagnetic radiation of an accelerated charge violates the principle of conservation of energy . \n this problem is not solved by the comment , due to a misunderstanding in the meaning of the total radiated power crossing a sphere . \n an experiment is suggested to determine whether on not the currently accepted formulation is valid . in a recent literature @xcite , without any reason , singal made a hasty conclusion that the newly derived exact expression for the electromagnetic radiation of an accelerated charge by huang and lu @xcite is incorrect . 
in the comment @xcite , \n singal attempted to resolve a serious problem the currently accepted formulation of electromagnetic radiation of an accelerated charge violates the principle of conservation of energy , as raised recently @xcite . with a view to making a comprehensible reply , relevant expressions of both currently accepted formulation @xcite and the new formulation @xcite , as well as discrepancy between them \n are first presented ." } ]
science.scicite_classification
science.scicite_classification.4695
[ { "role": "user", "content": "You will be presented with a citation segment from the section of a research paper. Classify the intent behind this citation by choosing from one of the following categories:\n\n- Method: Cites the paper for its methodology or procedure.\n- Background: Cites the paper to provide background information.\n- Result: Cites the paper for its findings or results.\n\nYour answer should be a single word from the following list of options: [\"Method\", \"Background\", \"Result\"]. Do not include any other text in your response.\n\nCitation sentence:\n\nOne nodal family member Xnr3 lacks mesodermal-inducing activity via activating Smad2/3, but promotes neural induction by antagonizing BMP4 (Hansen et al., 1997; Haramoto et al., 2004)." }, { "role": "assistant", "content": "Background" } ]
science.drug_combo_extraction_re
science.drug_combo_extraction_re.174
[ { "role": "user", "content": "You will be shown a paragraph from a biomedical research article. Your task is to extract all drug combinations appearing in a specific sentence within in the paragraph. Each drug combination should be assigned to one of the following three categories:\n\n- POS: Indicates a positive interaction between the drugs in the combination; for instance, drugs that have an additive or synergistic effect.\n- NEG: Indicates a negative interaction between the drugs in the combination; for instance, combining these drugs might result in adverse side effects, or might reduce their efficacy.\n- COMB: Indicates that the drugs are given in combination, but the type of interaction between the drugs cannot be determined based on the provided context.\n\nPlease format your answer as a json array. Each entry in the array should itself be an array formatted as [<drug_1>, <drug_2>, ..., <drug_n>, <category>] where <category> is one of the three categories listed above and <drug_1>, ..., <drug_n> are drug names copied verbatim from the input sentence. Different combinations may involve different numbers of drugs; in other words, the `n` in <drug_n> may vary.\n\nAs an example, the following output describes two extracted drug combinations; one between two drugs, and one between three drugs:\n\n[[\"artesunate\", \"artemether\", \"COMB\"], [\"Nifedipine\", \"Labetalol\", \"Prazosin\", \"POS\"]]\n\nThe paragraph and specific sentence are below. You should only extract relationships expressed in the specific sentence; the paragraph is provided for context.\n\nIn your output, return only the json array and no other text. If no drug combinations appear, return an empty array `[]`.\n\nparagraph: Perspectives on anthracyclines plus ifosfamide in advanced soft tissue sarcomas. Doxorubicin and ifosfamide are currently the two main drugs for the treatment of soft tissue sarcomas in adults . Given in combination at full doses, with or without dacarbazine, these agents have induced higher response rates than were obtained with single-agent therapy. Because they involve considerable myelotoxicity, however, full-dose regimens should be reserved for patients with good performance status and without potential septic foci. Obviously, higher response rates do not automatically translate into improved survival. In soft tissue sarcomas, full-dose polychemotherapy will most probably provide a survival benefit only in selected patients in whom surgery can be performed in combination with chemotherapy. Prospective trials in such patients, although difficult to carry out, would be highly desirable. The information they would provide might help the clinician tailor treatment in a more rational way and improve chances of cure or long-term survival in at least some patient subgroups.\n\nsentence: Doxorubicin and ifosfamide are currently the two main drugs for the treatment of soft tissue sarcomas in adults ." }, { "role": "assistant", "content": "[[\"Doxorubicin\", \"ifosfamide\", \"POS\"]]" } ]
science.chemprot_ner
science.chemprot_ner.951
[ { "role": "user", "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Protein\", \"Chemical\"].\n\nPlease return the output as a JSON object of the format: {\"Chemical\" : [\"Dexamethasone\", ...], \"Protein\" : [\"BRCA-1\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nNarirutin fraction from citrus peels attenuates alcoholic liver disease in mice.\nThis study aimed to demonstrate protective activities of the narirutin fraction from peels of Citrus unshiu against ethanol-induced hepatic damage through an animal study. Citrus narirutin fraction (CNF), contained 75% of narirutin, was obtained by an ultra-sonicated extraction and further purification. ICR mice were divided into four groups; normaldiet control, ethanol control (6.5g ethanol/kg), low-CNF (ethanol+150mg CNF/kg) and high-CNF (ethanol+300mg CNF/kg) groups. Consumption of alcohol for 8weeks induced severe liver damage with increases in prognostic indicators such as aspartate transaminase, alanine transaminase in serum whereas co-administration of CNF suppressed their increases. Excessive accumulations in liver TG and TC in ethanol control group were also suppressed by co-administration of CNF. Co-administration of CNF maintained SOD activity, GSH and malondialdehyde levels close to those of the normal diet group. Chronic consumption of alcohol also stimulated abrupt increases in pro-inflammatory cytokines such as nuclear factor (NF)-κB, tumor necrosis factor (TNF)-α and interleukin (IL)-1β in liver otherwise co-administration of CNF effectively suppressed production of these cytokines dose-dependently. These results indicate that co-administration of CNF with alcohol can alleviate alcohol induced liver damage through preventing lipid formation, protecting antioxidant system and suppressing productions of pro-inflammatory cytokines." }, { "role": "assistant", "content": "{\"Chemical\": [\"Narirutin\", \"narirutin\", \"GSH\", \"malondialdehyde\", \"alcohol\", \"ethanol\", \"aspartate\"], \"Protein\": [\"aspartate transaminase\", \"tumor necrosis factor (TNF)-\\u03b1\", \"alanine transaminase\", \"SOD\", \"nuclear factor (NF)-\\u03baB\", \"cytokines\", \"interleukin (IL)-1\\u03b2\"]}" } ]
science.bioasq_general_qa
science.bioasq_general_qa.1100
[ { "role": "user", "content": "You will be shown a question, followed by excerpts from biomedical research papers. Please answer the question based on the provided context. Do not include any text in your response other than the answer.\n\nQuestion: What is bb21217?\n\nContext: Here, we review three common treatment modalities used to target BCMA in the treatment of MM: bispecific antibody constructs, antibody-drug conjugates, and chimeric antigen receptor (CAR)-modified T-cell therapy. We provide an overview of preliminary clinical data from trials using these therapies, including the BiTE® (bispecific T-cell engager) immuno-oncology therapy AMG 420, the antibody-drug conjugate GSK2857916, and several CAR T-cell therapeutic agents including bb2121, NIH CAR-BCMA, and LCAR-B38M." }, { "role": "assistant", "content": "BB21217 is a chimeric antigen receptor (CAR)-modified T-cell therapy used to target B-cell maturation antigen (BCMA) in the treatment of multiple myeloma." } ]
science.scientific_papers_summarization_single_doc_arxiv
science.scientific_papers_summarization_single_doc_arxiv.64338
[ { "role": "user", "content": "You will be presented with the full text of a science research paper. Please write an abstract for this paper. Your response should include the abstract and no additional text.\n\nPaper text:\nrecently , we have worked on some of the effects that general scalar - tensor theories ( stt ) of gravity yield on astrophysical scales @xcite . by taking the weak field limit of stt\n, three parameters appear @xcite : the newtonian constant at infinity @xmath2 , a compton length - scale ( @xmath3 ) coming from the effective mass of the lagrangian potential , and the strength of the new scalar gravitational interaction ( @xmath1 ) . for point - like\nmasses , the new newtonian potential is of a yukawa type @xcite and , in general , analytical expressions can be found for spherical @xcite and axisymmetric systems @xcite . using these solutions\nwe build a galactic model that is consistent with a flat rotation velocity profile which resembles the observations @xcite . in the present short contribution\n, we outline a way to construct such a galactic model using stt .\na typical spiral galaxy consists of a disk , a bulge , and a dark matter ( dm ) halo .\na real halo is roughly spherical in shape and contains most of the matter , up to 90% , of the system .\nthus , being the halo the main component , our galactic model will then consist of a spherical system of unknown dm .\ntherefore , in order compare with observations we assume that test particles stars and dust follow dm particles . to construct our halo model\n, we take the weak field limit of general stt and consider a spherically symmetric fluctuation of the scalar field around some fixed background field .\nthe stt is then characterized by a background value of the scalar field , @xmath4 , a compton lengthscale , @xmath5 , and a strength @xmath1 @xcite . for point - like sources ,\nthe new newtonian potential is well known to be of a yukawa type @xcite : @xmath6 if one fixes the background field to be the inverse of the newtonian constant , @xmath7 , then for @xmath8 the new newtonian potential coincides with the standard newtonian one . given this , for @xmath9 one finds deviations of the order of @xmath10 to the newtonian dynamics .\nthis setting is , however , very constrained by local ( solar system ) deviations of the newtonian force , in order for @xmath1 to be less than @xmath11 @xcite .\nalternatively , one can choose the setting @xmath12 and the new potential coincides with the newtonian for @xmath9 and deviates by @xmath13 for @xmath8 . if one thinks of a galactic system , physical scales are around the tens of kiloparsecs and so is our typical @xmath5 , then for distances bigger than this , one expects constrictions , e.g. from cluster dynamics or cosmology .\nseveral authors @xcite have considered these deviations , giving a rough estimate within the range of @xmath14 .\nfor example , the value of @xmath15 yields an asymptotic growing factor of @xmath16 in @xmath17 , whereas the value of @xmath18 reduces @xmath17 asymptotically by one fourth .\nfor a general density distribution the equations governing the weak energy ( newtonian limit ) of stt are @xcite : @xmath19 \\ ; , \\label{pares_eq_h00}\\\\ \\nabla^2 \\bar{\\phi } - m^2 \\bar{\\phi } & = & - 8\\pi \\alpha\\rho \\ ; , \\label{pares_eq_phibar}\\end{aligned}\\ ] ] where @xmath20 is the scalar field fluctuation and @xmath21 is the density distribution which contains baryons , dm particles , or other types of matter . 
for definiteness we can think of dm , but its profile is unknown yet .\nthis will be determined by setting the dynamics of test particles , i.e. , by demanding the new newtonian potential to be the one that solves the rotation curves of test particles stars and dust .\nthat is , we require @xmath22 where this constant is chosen to fit rotation velocities in spirals @xcite ; this is the simplest model .\nwe then proceed to solve for @xmath23 , and its solution is : @xmath24 substituting this result into the original system , ( [ pares_eq_h00 ] , [ pares_eq_phibar ] ) , gives @xmath25 for our convenience , we define the following densities : @xmath26 and @xmath27 , where @xmath28 is the density that the system would have to achieve flat rotation curves if system would be treated solely with newtonian physics , and @xmath29 is the contribution of the scalar field , which should be obtained by integrating the equation : @xmath30 by comparing ( [ pares_eq_phibar ] ) with ( [ nablaphi ] ) , it seems natural to identify @xmath31 to convert ( [ nablaphi ] ) to a type ( [ pares_eq_phibar ] ) equation , now for @xmath32 and @xmath28 . the solution is therefore given by @xcite : @xmath33 \\ , , \\end{aligned}\\ ] ] where @xmath34 and @xmath35 is a length scale of the spherical system , e.g. related to the distance at which stars possess flat rotation curves ; the size of the halo is denoted by @xmath36 . once\n( [ pares_eq_finalphi ] ) is solved , its solution is substituted into ( [ rho_sol ] ) to find the dm distribution , @xmath37 .\nthe procedure outlined here is straightforward .\nthus , using our given @xmath28 the above expression can analytically be integrated to have @xmath38 \\ , .\n\\end{aligned}\\ ] ] thus , the dm profile becomes @xmath39 \\right\\ } \\ , .\n\\end{aligned}\\ ] ] following , we plot in figure [ all - densities ] the density profiles @xmath37 , @xmath40 and @xmath41 for @xmath42 , @xmath43 . the main contribution to @xmath37 comes from @xmath40 , which is an inverse squared function of the radius .\nthe dm profile is therefore cuspy near the galactic centre , similar to the nfw s obtained from simulations @xcite .\nwe have computed the best fits to these curves for @xmath44 , given by @xmath45 and @xmath46 . on the other hand ,\nthe best fits for @xmath47 are @xmath48 and @xmath49 . in figure\n[ various - lambda ] we plot the dm profile for various @xmath5 values , resulting in small changes in the slope . for negative @xmath1 values , ( [ phi - bar - sol ] ) becomes negative , but the dm profile does not become shallower . 
we have computed the case @xmath50 , @xmath15 for @xmath44 and found @xmath51 , while for @xmath47 we obtained @xmath52 .\nwe have constructed a spherically galactic halo model in which we fit the new gravitational potential to match a flat rotation profile for test particles stars and dust .\nonce this is done , one computes the scalar field fluctuation and dm profiles resulting from this potential .\nwe have found analytical results for our model , eqs.([phi - bar - sol ] ) and ( [ rho - dm - sol ] ) .\nthese equations contain the information of the scalar field ( @xmath5 , @xmath1 ) , which determine the dm profile .\nwe have plotted our results for @xmath53 and various @xmath5s .\nthese profiles have an inner region as cuspy as the nfw s .\nnegative values of @xmath1 do not yield a shallower dm profile neither .\nthis result may however not surprise since we are not resolving the inner structure of the rotation velocity curves .\nwe only considered its flat asymptotic behaviour ; in a forthcoming paper we will study it .\n99 rodrguez \nmeza m a , klapp j , cervantes \ncota j l , dehnen h , 2001 , in macias a , cervantes \ncota j l , lmmerzahl c , eds , exact solutions and scalar fields in gravity : recent developments .\nkluwer academic / plenum publishers , new york , p. 213 rodrguez \nmeza m. a. , cervantes cota j. l. , 2004 , _ mnras _ * 350 * 671 rodrguez \nmeza m a et al 2005 , _ gen .\ngrav . _ * 37 * 823 pimentel l o and obregn o 1986 _ astrophys .\nspace sci . _\n* 126 * 231 - 4 fischbach e , talmadge c l , 1999 , _ the search for non newtonian gravity . _ springer \nverlag , new york sofue y and rubin v 2001 _ ann rev astron astrophys _ * 39 * 137 - 174 nagata r , chiba t , sugiyama n 2002 phys .\nd * 66 * 103510 ; umezu k , ichiki k , yahiro m 2005 phys .\nd * 72 * 044010 ; shirata a , shiromizu t , yoshida n , suto y 2005 phys .\nd * 71 * 064030 helbig t 1991 _ apj _ * 382 * 223 navarro j. f. et al 1996 _ apj _ * 462 * 563 ; 1997 _ apj _ * 490 * 493" }, { "role": "assistant", "content": "we computed flat rotation curves from scalar - tensor theories in their weak field limit . \n our model , by construction , fits a flat rotation profile for velocities of stars . as a result \n , the form of the scalar field potential and dm distribution in a galaxy are determined . by taking into account the constraints for the fundamental parameters of the theory @xmath0 , it is possible to obtain analytical results for the density profiles . for positive and negative values of @xmath1 , \n the dm matter profile is as cuspy as nfw s ." } ]
science.chemprot_ner
science.chemprot_ner.940
[ { "role": "user", "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Protein\", \"Chemical\"].\n\nPlease return the output as a JSON object of the format: {\"Chemical\" : [\"Dexamethasone\", ...], \"Protein\" : [\"BRCA-1\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nIdentification of an Allosteric Modulator of Serotonin Transporter with Novel Mechanism of Action.\nSerotonin transporters (SERT) play an essential role in the termination and regulation of serotonin signaling in the brain. SERT is also the target of antidepressants and psychostimulants. Molecules with novel activities and modes of interaction with regard to SERT function are of great scientific and clinical interest. We explored structural regions outside the putative serotonin translocation pathway to identify potential binding sites for allosteric transporter modulators (ATMs). Mutational studies revealed a pocket of amino acids outside the orthosteric substrate binding sites located in the interface between extracellular loops 1 and 3 that when mutated affect transporter function. Using the structure of the bacterial transporter homologue leucine transporter as a template, we developed a structural model of SERT. We performed molecular dynamics simulations to further characterize the allosteric pocket that was identified by site-directed mutagenesis studies and employed this pocket in a virtual screen for small-molecule modulators of SERT function. In functional transport assays, we found that one of the identified molecules, ATM7, increased the reuptake of serotonin, possibly by facilitating the interaction of serotonin with transport-ready conformations of SERT when concentrations of serotonin were low and rate limiting. In addition, ATM7 potentiates 3,4-methylenedioxy-N-methylamphetamine (MDMA, \"Ecstasy\")-induced reversed transport by SERT. Taking advantage of a conformationally sensitive residue in transmembrane domain 6, we demonstrate that ATM7 mechanistically stabilizes an outward-facing conformation of SERT. Taken together these observations demonstrate that ATM7 acts through a novel mechanism that involves allosteric modulation of SERT function." }, { "role": "assistant", "content": "{\"Chemical\": [\"3,4-methylenedioxy-N-methylamphetamine\", \"MDMA\", \"serotonin\", \"amino acids\", \"leucine\", \"Serotonin\"], \"Protein\": [\"SERT\", \"Serotonin Transporter\", \"leucine transporter\", \"Serotonin transporters\"]}" } ]
science.bioasq_yesno_qa
science.bioasq_yesno_qa.221
[ { "role": "user", "content": "You have to answer a biomedical question in binary format, i.e. only yes and no are the only acceptable answer formats. A list of paragraphs is provided as context to help you answer the question.\n\nQuestion: Is indicated the use of antioxidant supplements in patients at risk for coronary artery disease?\n\nContext: We and others have published observational epidemiologic studies in support of vitamins in the primary prevention of CVD, but the results from intervention studies are mixed.\nFor vitamin E, observational data suggest benefit at doses of 100 to 400 IU/d. Results from recent large-scale trials are mixed, with some showing modest benefit but others suggesting no benefit, especially for secondary prevention. Results for B vitamins are also mixed and further complicated by the recent folate fortification of the flour supply. If greater B vitamin intake does reduce CVD, the benefits are likely to be greatest for primary prevention and in populations with intake below dietary reference standards.\nIn the dose-response meta-analysis, each 30 mg/day increase in vitamin C, 30 IU/day increase in vitamin E, and 1 mg/day increase in beta-carotene yielded the estimated overall relative risk for CHD of 1.01 (95% CI, 0.99-1.02), 0.96 (95% CI, 0.94-0.99), and 1.00 (95% CI, 0.88-1.14), respectively. CONCLUSIONS: Our findings in this meta-analysis suggest that an increase in dietary intake of antioxidant vitamins has encouraging prospects for possible CHD prevention.\nHigh levels of α-tocopherol in serum were associated with 30% lower CAD risk in another study (HR 0.71; 95%CI 0.53-0.94). Among minerals (zinc, selenium, and chromium), an inverse association between zinc and CAD was observed; levels lower than 14.1 µmol/L were associated with an increased risk for CAD (RR 1.70; 95%CI 1.21-2.38).\nThe information available on this issue is scarce. Further prospective studies are needed to elucidate the role of these nutrients in the cardiovascular risk of patients with diabetes.\nCoenzyme Q10 supplementation at a dosage of 150 mg appears to decrease the inflammatory marker IL-6 in patients with CAD.\nCoenzyme Q10 supplements at a dose of 150 mg can decrease oxidative stress and increase antioxidant enzyme activity in patients with CAD. A higher dose of coenzyme Q10 supplements (>150 mg/d) might promote rapid and sustainable antioxidation in patients with CAD.\nAlpha-tocopherol or beta-carotene supplementation has no protective effect on macrovascular outcomes or total mortality of diabetic male smokers.\nSodium selenite supplementation increases GPx-1 activity in endothelial cells and in CAD patients. 
Future studies have to demonstrate whether long-term CAD outcome can be improved.\nAfter 7.3 years of treatment and follow-up, a combination pill of folic acid, vitamin B6, and vitamin B12 did not reduce a combined end point of total cardiovascular events among high-risk women, despite significant homocysteine lowering.\nIn this population-based study, vitamin E use was unrelated to mortality, but this apparently null finding seems to represent a combination of increased mortality in those with severe cardiovascular disease and a possible protective effect in those without.\nIn this large cohort of apparently healthy US male physicians, self-selected supplementation with vitamin E, vitamin C, or multivitamins was not associated with a significant decrease in total CVD or CHD mortality.\nThe American Heart Association has recommended consumption of a balanced diet with emphasis on antioxidant-rich fruits and vegetables but has made no recommendations regarding vitamin E supplementation for the general population. Although vitamin E supplementation seems to be safe for most people, recommendations from health care professionals should reflect the uncertainty of established benefit as demonstrated in clinical trials\nRecent studies show that supplementation with antioxidant vitamins E and C have benefits in CHD prevention; however, supplementation with beta-carotene may have deleterious effects and is not recommended. Current evidence suggests that patients with CHD would probably benefit from taking vitamin E in a dosage of 400 IU per day and vitamin C in a dosage of 500 to 1,000 mg per day. Clinicians may also want to consider vitamin supplementation for CHD prevention in high-risk patients. Folate lowers elevated homocysteine levels, but evidence for routine supplemental use does not yet exist.\nIn patients at high risk for cardiovascular events, treatment with vitamin E for a mean of 4.5 years had no apparent effect on cardiovascular outcomes." }, { "role": "assistant", "content": "no" } ]
science.data_reco_mcq_sc
science.data_reco_mcq_sc.13948
[ { "role": "user", "content": "Given a short query with keyphrases about a research idea in 2010 in the domain of artificial intelligence, recommend the relevant datasets to validate the research idea from options provided. A short description of the datasets is also provided to help you.\n\nQuery: I want to search for images of cars on the streets.\n\nKeyphrases: image retrieval images\n\nDataset description:\nGTA5: The GTA5 dataset contains 24966 synthetic images with pixel level semantic annotation. The images have been rendered using the open-world video game Grand Theft Auto 5 and are all from the car perspective in the streets of American-style virtual cities.\nStreet Scene: Street Scene is a dataset for video anomaly detection. Street Scene consists of 46 training and 35 testing high resolution 1280×720 video sequences taken from a USB camera overlooking a scene of a two-lane street with bike lanes and pedestrian sidewalks during daytime.\nDBpedia: DBpedia (from \"DB\" for \"database\") is a project aiming to extract structured content from the information created in the Wikipedia project. DBpedia allows users to semantically query relationships and properties of Wikipedia resources, including links to other related datasets.\nSVT: The Street View Text (SVT) dataset was harvested from Google Street View. Image text in this data exhibits high variability and often has low resolution.\n\nOptions:- A: GTA5 B: Street Scene C: DBpedia D: SVT" }, { "role": "assistant", "content": "DBpedia" } ]
science.bc7_litcovid_topic_classification
science.bc7_litcovid_topic_classification.22897
[ { "role": "user", "content": "You are presented with a paragraph from a COVID-19 literature. Classify the topics of this piece of text by choosing from the following categories:\n\n- Epidemic Forecasting: The content relates to predictions about the spread of COVID-19.\n- Treatment: The paragraph discusses therapeutic methods or medication for COVID-19.\n- Prevention: The content covers measures to prevent the spread of COVID-19.\n- Mechanism: The paragraph explains the biological mechanisms of the virus.\n- Case Report: The content is a report of a COVID-19 case.\n- Transmission: The paragraph discusses how COVID-19 is transmitted.\n- Diagnosis: The content is about diagnosing COVID-19 in individuals.\n\nYour classification should be provided as an element within a JSON array. The array should be formatted with topics enclosed in double quotes and separated by commas, following the JSON format, for example: [\"Topic1\", \"Topic2\", ...]. Include all applicable topics within the array. Your output should be a valid JSON array, and no additional text should be included in your response.\n\nArticle:\n\nAn emerging viral infection is a global public health challenge. The development of modern, fast, and extensive transportation makes the outbreak hard to contain. Everyone is at risk, and the outbreak can rapidly turn into a pandemic crisis, like what we are currently facing for the 2019 novel coronavirus disease (COVID-19). Prompt diagnosis of the case is required to improve patients' prognosis and control of the outbreak. The common manifestations of COVID-19 include fever, cough, dyspnea, and malaise. However, patients may present with atypical symptoms that pose a diagnostic challenge. We report the first case of an elderly male who presented with rhabdomyolysis and later was diagnosed with COVID-19. Clinicians should be aware that rhabdomyolysis can be an initial presentation of COVID-19 or can occur at any time during the disease course. Patients with rhabdomyolysis should receive aggressive fluid administration to prevent acute kidney injury (AKI). However, COVID-19 patients are at risk of worsening oxygenation and acute hypoxemic respiratory failure from fluid overload. Therefore, cautious fluid administration is needed in COVID-19 patients with rhabdomyolysis." }, { "role": "assistant", "content": "[\"Case Report\"]" } ]
science.acl_arc_intent_classification
science.acl_arc_intent_classification.957
[ { "role": "user", "content": "You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:\n- Background: provides context or foundational information related to the topic.\n- Extends: builds upon the cited work.\n- Uses: applies the methods or findings of the cited work.\n- Motivation: cites the work as inspiration or rationale for the research.\n- CompareOrContrast: compares or contrasts the cited work with others.\n- FutureWork: cites the work as a direction for future research.\n\nYour answer should be a single word from the following list of options: [\"Background\", \"Extends\", \"Uses\", \"Motivation\", \"CompareOrContrast\", \"FutureWork\"]. Do not include any other text in your response.\n\nSection Title:\nNone\n\nContext before the citation:\nAfter much exploration, Demner-Fushman et al. (2006) discovered that it was not practical to annotate PICO entities at the phrase level due to significant unresolvable disagreement and interannotator reliability issues. Of the remaining citations, 275 were used for training and rule derivation, as described in the following sections. These 100 abstracts were set aside as a held-out test set.\n\nCitation Sentence:\nAfter much exploration , Demner-Fushman et al. ( 2006 ) discovered that it was not practical to annotate PICO entities at the phrase level due to significant unresolvable disagreement and interannotator reliability issues .\n\nContext after the citation:\nConsider the following segment: This double-blind, placebo-controlled, randomized, 3-period, complete block, 6-week crossover study examined the efficacy of simvastatin in adult men and women (N = 151) with stable type 2 DM, low density lipoprotein-cholesterol 100 mg/dL, HDL-C < 40 mg/dL, and fasting triglyceride level > 150 and < 700 mg/dL. All annotators agreed that the sentence contained the problem, population, and intervention. However, they could not agree on the exact phrasal boundaries of each element, and more importantly, general guidelines for ensuring consistent annotations. For example, should the whole clause starting with adult men and women be marked as population, or should type 2 Diabetes Mellitus (type 2 DM) be marked up only as the problem?" }, { "role": "assistant", "content": "Background" } ]
science.mslr2022_cochrane_multidoc_summarization
science.mslr2022_cochrane_multidoc_summarization.1404
[ { "role": "user", "content": "You will be provided with the titles and abstracts of a collection of medical RCT reports studying a clinical question -- for instance, the efficacy of an intervention on some medical outcome. Please write a few sentences synthesizing the findings of these reports, which could serve as a conclusion section for a systematic review article. Your response should not include any other text. The RCT reports are below.\n\n\n\nTitle 1:\nSulthiame in the primary therapy of West syndrome: a randomized double-blind placebo-controlled add-on trial on baseline pyridoxine medication.\nAbstract 1:\nWest syndrome (WS) is still one of the most difficult to treat epilepsies in infancy. Sulthiame (STM), which is commonly used in some countries in the treatment of benign focal epilepsies in childhood, has been suggested to be effective in WS too. This prospective, randomized placebo-controlled study was designed to prove or refute this hypothesis.\n Thirty-seven infants aged between 3.5 and 15 months with newly diagnosed WS received baseline therapy with pyridoxine (PDX). The children were randomized in a double-blind fashion to STM (n = 20) or placebo (n = 17), starting at day 4 at a moderate dose of 5 mg/kg body weight. Without complete cessation of infantile spasms (ISs) and resolution of hypsarrhythmia as the definition criteria of a response, the dose was doubled at day 7. The final examination was undertaken at the end of day 9.\n Based on the intention to treat, six (30%) of 20 patients responded to STM (p < 0.025), as did six (35%) of 17 infants fulfilling the study criteria (p < 0.01). Patients with tuberous sclerosis did not respond to STM (n = 3) No patient responded to placebo. One patient in the verum group was withdrawn because of reversible somnolence.\n As no child in the placebo group responded to the baseline PDX therapy, nor did any child in the STM group during the first 3 days of baseline therapy, PDX does not seem to be effective in WS. STM has a positive effect in the primary therapy of WS, comparable to that of vigabatrin." }, { "role": "assistant", "content": "Sulthiame may lead to a cessation of seizures when used as an add-on therapy to pyridoxine in patients with West syndrome. The included study was small and had a significant risk of bias which limits the impact of the evidence. No conclusions can be drawn on the occurrence of adverse drug effects, change in quality of life or mean reduction in seizure frequency. No evidence exists for the use of sulthiame as an add-on therapy in patients with epilepsy outside West syndrome. Large, multi-centre randomised controlled trials are necessary to inform clinical practice if sulthiame is to be used as an add-on therapy for epilepsy." } ]
science.healthver_entailment
science.healthver_entailment.237
[ { "role": "user", "content": "You will be shown a claim about public health, and the abstract of a biomedical research paper. Each sentence from the abstract will be on a separate line. Your task is to return a JSON object with two fields:\n\n- \"verdict\": The fact-checking verdict. If the information in the abstract supports the claim, write \"SUPPORT\". If the abstract contradicts the claim, write \"CONTRADICT\". If the abstract does not provide enough information to arrive at a verdict, write \"NEI\" (for \"not enough information\").\n- \"evidence\": An array of sentences providing evidence for the verdict. Please copy all relevant sentences verbatim from the abstract. If the verdict was \"NEI\", then return an empty array.\n\nFor instance, if the model were given the claim \"wearing masks can prevent the spread of COVID\", the output might be\n{\n \"verdict\": \"SUPPORT\",\n \"evidence\": [\"Our findings indicate that mass mask-wearing reduces the transmission rate for COVID-19.\"]\n}\n\nClaim: Although face masks can reduce transmission of the Covid-19 virus, not completely. \n\nAbstract:\nFace masks are an avenue to curb the spread of coronavirus, but few people in Western societies wear face masks.\nSocial scientists have rarely studied face mask wearing, leaving little guidance for methods to encourage these behaviours.\nIn the current article, we provide an approach to address this issue by developing the 32-item and 8-dimension Face Mask Perceptions Scale (FMPS).\nWe begin by developing an over-representative item list in a qualitative study, wherein participants' responses are used to develop items to ensure content relevance.\nThis item list is then reduced via exploratory factor analysis in a second study, and the eight dimensions of the scale are supported.\nWe also support the validity of the FMPS, as the scale significantly relates to both face mask wearing and health perceptions.\nWe lastly confirm the factor structure of the FMPS in a third study via confirmatory factor analysis.\nFrom these efforts, we identify an avenue that social scientists can aid in preventing coronavirus and illness more broadly - by studying face mask perceptions and behaviours." }, { "role": "assistant", "content": "{\"verdict\": \"SUPPORT\", \"evidence\": [\"Face masks are an avenue to curb the spread of coronavirus, but few people in Western societies wear face masks.\"]}" } ]
science.scireviewgen_multidoc_summarization
science.scireviewgen_multidoc_summarization.23060
[ { "role": "user", "content": "You are presented with multiple paper abstracts, alongside the associated literature review and chapter titles. Given the provided information, you are going to do a literature review. Please generate a coherent literature review that synthesizes the content, draws connections between different works, highlights major findings, and identifies any gaps in the existing research.\n\nLiterature Review Title: Mulsemedia DIY: A Survey of Devices and a Tutorial for Building Your Own Mulsemedia Environment \nChapter Title: Assembling a 360 • VR Mulsemedia System \n1. Abstract of Cited Paper (BIB001): The next generation of wireless networks targets aspiring key performance indicators, like very low latency, higher data rates and more capacity, paving the way for new generations of video streaming technologies, such as 360° or omnidirectional videos. One possible application that could revolutionize the streaming technology is the 360° MULtiple SEnsorial MEDIA (MULSEMEDIA) which enriches the 360° video content with other media objects like olfactory, haptic or even thermoceptic ones. However, the adoption of the 360° Mulsemedia applications might be hindered by the strict Quality of Service (QoS) requirements, like very large bandwidth and low latency for fast responsiveness to the user's inputs that could impact their Quality of Experience (QoE). To this extent, this paper introduces the new concept of 360° Mulsemedia as well as it proposes the use of Reinforcement Learning to enable QoS provisioning over the next generation wireless networks that influences the QoE of the end-users.\n\nLiterature Review:" }, { "role": "assistant", "content": "A trendy mulsemedia environment involves 360 • VR media where the user is free to explore the environment. As it happens, unwieldy devices are unbidden guests to create sensory effects. Therefore, wearable devices are especially recommended in this scenario. Due to the fact that most wearables devices are still in their very early stage of maturity, it becomes a challenge to find and integrate suitable devices to VR headsets. Thus, DIY seems to be a feasible solution BIB001 . This suggested environment comprises fewer devices than the regular mulsemedia system, because the lighting device is the own VR headset. However, it requires a bit more hands-on to build your own devices. As shown in Figure 2 , there is a VR Headset whereby a smartphone is attached running a 360 • App, capable of reproducing local AV Content and SEM. This smartphone is linked to Headphone to reproduce high-quality audio, and to the internet through a Wi-Fi Router so as to communicate with the Mulsemedia Renderer. As this is a crucial system's component, it is worth recapitulating its purpose in Section 5.1.3. It runs a portable computer in the user's environment and send commands to the devices after processing the received SEM from the 360 • App. A Wind Device, a Scent Device, and a Vibration Device are linked to the Mulsemedia Renderer via wireless connections. The first two are connected via wire to a Microcontroller, responsible for handling the devices, which in turn, is linked to the Mulsemedia Renderer through Bluetooth. The Vibration Device is connected to the Mulsemedia Renderer directly via Bluetooth. The list of components required to engineer the 360 • VR mulsemedia system and their current estimated price are described in Table 5 . 
To assemble this 360 • VR mulsemedia environment, the initial steps are similar to the regular mulsemedia system's setup except that instead of using a multimedia application, one requires a Unity3D 360 • App that runs on a smartphone that can be attached to the Samsung VR Gear. First, the PlaySEM SER should be downloaded and installed following its readme instructions. Next, it is necessary to create a Unity3D 360 • App that will read 360 • content, SEM data, and transmit the latter to the PlaySEM SER. There is a simulation mode in which the integration of both can be tested without using real physical devices. They must be connected to the TP-LINK Archer C50 router. Thereafter, a Portable Fan Cooler device for producing wind shall be connected to the DFRobot Bluno Nano and subsequently integrated to the PlaySEM SER following the same strategy of the regular mulsemedia system's wind device. Although the use of multiple fans would allow the creation of positional wind effects, this is optional in this guideline so as not to increase the project's complexity at this point. As DFRobot Bluno Nano is compatible with Arduino, the same code written for the regular mulsemedia system can be used. The difference is that the communication between the PlaySEM SER and DFRobot Bluno Nano is established via Bluetooth Low Energy, which is not supported natively by Arduino microcontrollers, unless a complementary module is attached to it. Furthermore, DFRobot Bluno Nano is more compact and therefore more appropriate to go wireless. Power supply for DFRobot Bluno Nano must be provided by a portable battery, e.g., a powerbank, so that the user can wear the device. On this matter, covers for the wind fans and the battery might be desired. A suggestion of covers can be inspired by the work of Ranasinghe et al. BIB001 . As well as the regular mulsemedia wind device, schematics to develop an Arduino-based 5V fan for controlling intensity through Pulse Width Modulation (PWM) are largely encountered on the Internet. At this stage, there may be a need to acquire some electronic components to succeed in doing it. Then, the same process used to create the wind device, using DFRobot Bluno Nano though, can be performed to create a scent device. What differs is that it is suggested the use of a smaller fan such as the Mini Dupont Brush-less Cooling Fan combined with either some mesh bags filled with scent crystals provided by Exhalia or oil essence and cotton. A mesh bag with some scent needs to be placed at a short distance of the fan so that it blows the scent toward the user's nose. Moreover, it should be attached to the Samsung VR Gear so as to follow the user's head movement to provide a more accurate sense of scents. It would differ from the olfactory DIY displays available in Table 3 in the sense that not only the duration of scents could be programmed, but also the intensity. It should be noted that creativity is strongly required to design a cover that fits the purpose here. This process might require a cycle of creating and testing the device until a reasonable version is reached. A suggestion to annotate sensory effects in 360 • VR mulsemedia environments is described by Comsa et al. BIB002 . Combined with an adaptation in MPEG-V, it would allow unprecedented viewport-aware interoperability between the real and virtual world in 360 • environments. Last, the process to deliver vibration effects to the user is exactly the same as that of the regular mulsemedia system and can be followed in the past section." 
} ]
science.nlmchem_ner
science.nlmchem_ner.9
[ { "role": "user", "content": "You will be shown full-text from a biomedical research paper. Given this text, your task is to extract all unique entities of the following types: [\"Chemical\"].\n\nPlease return the output as a JSON object of the format: {\"Chemical\": [\"Carbon monoxide\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nDocument text:\n\nN-Carbamothioyl­amino-7-oxabicyclo­[2.2.1]hept-5-ene-2,3-dicarboximide The title compound, C9H9N3O3S, comprises a racemic mixture of chiral mol­ecules containing four stereogenic centres. The cyclo­hexane ring tends towards a boat conformation, while the tetra­hydro­furan ring and the dihydro­furan ring adopt envelope conformations. The dihedral angle between the thio­semicarbazide fragment and the fused-ring system is 77.20 (10)°. The crystal structure is stabilized by two inter­molecular N—H⋯O hydrogen bonds. Related literature For the use of 7-oxa-bicyclo­[2,2,1]hept-5-ene-2,3-dicarb­oxy­lic anhydride in clinical practice, see: Deng & Hu (2007 ▶). For the pharmacological activity of its derivatives, see: Hart et al. (2004 ▶). For bond lengths and angles in related structures, see: Goh et al. (2008 ▶). Experimental Crystal data C9H9N3O3S M r = 239.25 Orthorhombic, a = 8.3978 (8) Å b = 8.9032 (9) Å c = 13.5930 (14) Å V = 1016.31 (18) Å3 Z = 4 Mo Kα radiation μ = 0.31 mm−1 T = 298 K 0.45 × 0.43 × 0.40 mm Data collection Bruker SMART CCD area-detector diffractometer Absorption correction: multi-scan (SADABS; Bruker, 1997 ▶) T min = 0.872, T max = 0.885 5015 measured reflections 1791 independent reflections 1632 reflections with I > 2σ(I) R int = 0.023 Refinement R[F 2 > 2σ(F 2)] = 0.029 wR(F 2) = 0.071 S = 1.07 1791 reflections 145 parameters H-atom parameters constrained Δρmax = 0.14 e Å−3 Δρmin = −0.16 e Å−3 Absolute structure: Flack (1983 ▶), 728 Friedel pairs Flack parameter: 0.01 (9) Data collection: SMART (Bruker, 1997 ▶); cell refinement: SAINT (Bruker, 1997 ▶); data reduction: SAINT; program(s) used to solve structure: SHELXS97 (Sheldrick, 2008 ▶); program(s) used to refine structure: SHELXL97 (Sheldrick, 2008 ▶); molecular graphics: SHELXTL (Sheldrick, 2008 ▶); software used to prepare material for publication: SHELXTL. Supplementary Material Supplementary data and figures for this paper are available from the IUCr electronic archives (Reference: BX2331). supplementary crystallographic information Comment 7-Oxa-bicyclo[2,2,1]hept-5-ene-2,3-dicarboxylic anhydride has been widely employed in clinical practice, as it is less toxic and much easier to be synthesized [Deng et al., 2007]. Its derivatives are also pharmacologically active [Hart et al., 2004]. We report here the crystal structure of the title compound, (I) which comprises a racemic mixture of chiral molecules containing four stereogenic centres. The cyclohexane ring tends towards a boat conformation, the tetrahydrofuran ring and the dihydrofuran ring adopt envelope conformations (Fig. 1). The bond lengths and bond angles are normal range and comparable to those in the similar compound [Goh, et al., 2008] as representative example. The dihedral angle between the thiosemicarbazide fragment and fused-ring system is 77.20 (10)°. 
The crystal structure is stabilized by two intermolecular N—H···O and one intramolecular N—H···N hydrogen bonds (Table 1, Fig. 2). Experimental A mixture of exo-7-oxa-bicyclo[2,2,1]hept-5-ene-2,3-dicarboxylic anhydride (0.332 g, 2 mmol) and thiocarbanide (0.182 g, 2 mmol) in methanol (5 ml) was stirred for 5 h at room temperature, and then refluxed for 1 h. After cooling the precipitate was filtered and dried, the title compound was obtained. The crude product of 20 mg was dissolved in methanol of 10 ml. The solution was filtered to remove impurities, and then the filtrate was left for crystallization at room temperature. The single-crystal suitable for X-ray determination was obtained by evaporation from the methanol solution after 5 d. Refinement H atoms were initially located from difference maps and then refined in a riding model with C—H = 0.93–0.96 Å and Uiso(H) = 1.2Ueq(C) or 1.5Ueq(methyl C). Figures The molecular structure of (I), with the atom-numbering scheme. Displacement ellipsoide are drawn at 30% probability level. The crystal packing of (I), viewed along b axis. Dashed lines indicate hydrogen bonds. Crystal data Data collection Refinement Special details Fractional atomic coordinates and isotropic or equivalent isotropic displacement parameters (Å2) Atomic displacement parameters (Å2) Geometric parameters (Å, °) Hydrogen-bond geometry (Å, °) Symmetry codes: (i) x+1/2, −y+1/2, −z+1; (ii) −x+3/2, −y+1, z+1/2. References Hydrogen-bond geometry (Å, °)" }, { "role": "assistant", "content": "{\"Chemical\": [\"C9H9N3O3S\", \"7-Oxa-bicyclo[2,2,1]hept-5-ene-2,3-dicarboxylic anhydride\", \"Mo\", \"tetra\\u00adhydro\\u00adfuran\", \"7-oxa-bicyclo\\u00ad[2,2,1]hept-5-ene-2,3-dicarb\\u00adoxy\\u00adlic anhydride\", \"N-Carbamothioyl\\u00adamino-7-oxabicyclo\\u00ad[2.2.1]hept-5-ene-2,3-dicarboximide\", \"methanol\", \"dihydro\\u00adfuran\", \"thio\\u00adsemicarbazide\", \"exo-7-oxa-bicyclo[2,2,1]hept-5-ene-2,3-dicarboxylic anhydride\", \"tetrahydrofuran\", \"dihydrofuran\", \"thiosemicarbazide\", \"cyclo\\u00adhexane\", \"thiocarbanide\", \"cyclohexane\"]}" } ]
science.medmentions_ner
science.medmentions_ner.672
[ { "role": "user", "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"HealthCareActivity\", \"InjuryOrPoisoning\", \"BodySubstance\", \"IntellectualProduct\", \"AnatomicalStructure\", \"SpatialConcept\", \"Chemical\", \"Bacterium\", \"MedicalDevice\", \"Organization\", \"BiomedicalOccupationOrDiscipline\", \"Finding\", \"BiologicFunction\", \"Virus\", \"ResearchActivity\", \"ClinicalAttribute\", \"PopulationGroup\", \"Eukaryote\", \"BodySystem\", \"Food\", \"ProfessionalOrOccupationalGroup\"].\n\nPlease return the output as a JSON object of the format: {\"Virus\": [\"HIV\", ...], \"Bacterium\": [\"MRSA\", ...], \"AnatomicalStructure\": [\"Lung\", ...], \"BodySystem\": [\"CNS\", ...], \"BodySubstance\": [\"Serum\", ...], \"Finding\": [\"Headache\", ...], \"InjuryOrPoisoning\": [\"Fracture\", ...], \"BiologicFunction\": [\"Death\", ...], \"HealthCareActivity\": [\"Biopsy\", ...], \"ResearchActivity\": [\"Clinical trial\", ...], \"MedicalDevice\": [\"Lenses\", ...], \"SpatialConcept\": [\"Camps\", ...], \"BiomedicalOccupationOrDiscipline\": [\"Forensic medicine\", ...], \"Organization\": [\"WHO\", ...], \"ProfessionalOrOccupationalGroup\": [\"Provider\", ...], \"PopulationGroup\": [\"Swimmers\", ...], \"Chemical\": [\"Gold\", ...], \"Food\": [\"Rice\", ...], \"IntellectualProduct\": [\"RPAM\", ...], \"ClinicalAttribute\": [\"Biomarker\", ...], \"Eukaryote\": [\"Dogs\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nFactors associated with delayed screening of contacts of tuberculosis cases in the Somme, France To analyze the factors associated with the time to initiating tuberculosis contact investigations in the Somme department, France. All reported tuberculosis cases and all their contacts screened between 2007 and 2011 were retrospectively included. Univariate and multivariate analyses were conducted to determine the factors associated with a \"system delay \"≤1 month and a \" contact delay \"≤0 days. The mean time between the mandatory notification of a case of tuberculosis and the date set for the contact 's screening (system delay) was 35.3 days and the average time between that date and when the contact was actually screened (contact delay) was 12.5 days. In multivariate analysis, a smear-positive sputum sample (OR: 3.68; 95% CI: 1.63-8.30) and a diagnosis at the university hospital (OR: 2.61; 95% CI: 1.14-5.96) were significantly associated with a system delay ≤1 month. A smear-positive sputum sample (OR: 1.35; 95% CI: 1.08-1.69), male gender (OR: 1.21; 95% CI: 1.01-1.49), being born in a foreign country (OR: 1.31; 95% CI: 1.02-1.69), being a family member (OR: 1.37; 95% CI: 1.05-1.77), or being another type of close contact of the case (OR: 2.47; 95% CI: 1.81-3.36) were significantly associated with a contact delay ≤0 days. System and contact delays were longer than recommended, and the factors associated with the lengthening of these delays need to be taken into account." 
}, { "role": "assistant", "content": "{\"AnatomicalStructure\": [], \"Bacterium\": [], \"BiologicFunction\": [\"tuberculosis\"], \"BiomedicalOccupationOrDiscipline\": [], \"BodySubstance\": [\"smear-positive sputum sample\"], \"BodySystem\": [], \"Chemical\": [], \"ClinicalAttribute\": [], \"Eukaryote\": [], \"Finding\": [\"diagnosis\"], \"Food\": [], \"HealthCareActivity\": [\"notification\", \"reported\", \"investigations\", \"screened\", \"screening\"], \"InjuryOrPoisoning\": [], \"IntellectualProduct\": [], \"MedicalDevice\": [], \"Organization\": [\"university hospital\"], \"PopulationGroup\": [], \"ProfessionalOrOccupationalGroup\": [], \"ResearchActivity\": [\"analyze\"], \"SpatialConcept\": [\"France\", \"foreign country\"], \"Virus\": []}" } ]
science.scientific_lay_summarisation_elife_single_doc_summ
science.scientific_lay_summarisation_elife_single_doc_summ.3865
[ { "role": "user", "content": "You will be presented with the abstract, introduction, and discussion section from a biomedical research article. Your task is to create a summary that a layperson can understand, capturing the essence of the research article. Keep essential scientific terms, but ensure that the language remains clear and the concepts are explained in an uncomplicated manner.\n\nTitle: The serine protease hepsin mediates urinary secretion and polymerisation of Zona Pellucida domain protein uromodulin}\nArticle:\nAbstract:\nUromodulin is the most abundant protein in the urine. It is exclusively produced by renal epithelial cells and it plays key roles in kidney function and disease. Uromodulin mainly exerts its function as an extracellular matrix whose assembly depends on a conserved, specific proteolytic cleavage leading to conformational activation of a Zona Pellucida( ZP) polymerisation domain. Through a comprehensive approach, including extensive characterisation of uromodulin processing in cellular models and in specific knock-out mice, we demonstrate that the membrane-bound serine protease hepsin is the enzyme responsible for the physiological cleavage of uromodulin. Our findings define a key aspect of uromodulin biology and identify the first in vivo substrate of hepsin. The identification of hepsin as the first protease involved in the release of a ZP domain protein is likely relevant for other members of this protein family, including several extracellular proteins, as egg coat proteins and inner ear tectorins.\nIntroduction:\nUromodulin, also known as Tamm-Horsfall protein, is a 105 kDa glycoprotein exclusively expressed in the kidney by epithelial cells lining the thick ascending limb( TAL) of Henle’s loop. It is a glycosylphosphatidylinositol( GPI)-anchored protein mainly localised at the apical plasma membrane of TAL epithelial cells. Uromodulin is secreted into the renal tubule through a conserved proteolytic cleavage( Santambrogio et al., 2008). In the urine it constitutes the most abundant protein under physiological conditions( Serafini-Cessi et al., 2003) and it is mainly found as high molecular-weight polymers. Uromodulin polymerisation is mediated by its Zona Pellucida( ZP) domain, a module present in several proteins that assemble into extracellular polymers, such as components of the egg coat( including sperm receptors ZP3 and ZP2) and of the inner ear tectorial membrane( Jovine et al., 2005). Specific cleavage is necessary for protein assembly into polymers, as it releases an inhibitory hydrophobic motif( external hydrophobic patch, EHP) distal to the ZP domain that prevents polymerisation( Jovine et al., 2004; Schaeffer et al., 2009; Han et al., 2010). The biological function of uromodulin is complex and only partly understood. Studies in Umod-/- mice demonstrated that it protects against urinary tract infections( Bates et al., 2004) and renal stone formation( Liu et al., 2010). Uromodulin has also been shown to modulate the activity of innate immunity cells via Toll-like receptor 4 signalling( Säemann et al., 2005) or via activation of the inflammasome( Darisipudi et al., 2012) and to regulate granulopoiesis and systemic neutrophil homeostasis( Micanovic et al., 2015). Finally, it is involved in the regulation of NaCl transport in the TAL, based on its role in promoting surface expression of the renal outer medullary potassium channel( ROMK2)( Renigunta et al., 2011) and of the sodium-potassium-chloride co-transporter( NKCC2)( Mutig et al., 2011). 
Recent data showed that uromodulin plays an important role in chronic diseases of the kidney( Rampoldi et al., 2011). Mutations in the uromodulin gene( UMOD) cause an autosomal dominant tubulo-interstitial kidney disease( Hart, 2002). In addition, genome-wide association studies demonstrated that common variants in the UMOD gene are associated with increased risk for chronic kidney disease( CKD) and hypertension( Köttgen et al., 2009; Padmanabhan et al., 2010). Such an effect is due to higher UMOD expression driven by the presence of risk alleles in its gene promoter( Trudu et al., 2013). Given the importance of polymerisation for uromodulin activity and the fact that this process depends on a specific protein cleavage, in this work we aimed at identifying the protease responsible for such cleavage and urinary secretion.\nDiscussion:\nCollectively, our results identify hepsin as the enzyme responsible for the physiological release of uromodulin in the urine. Hepsin, or TMPRSS1, is a member of type II transmembrane serine proteases. It is expressed in several tissues and it has been previously implicated in the processing of different substrates; however, the physiological relevance of most of these observations remains to be established( Chen et al., 2010; Hsu et al., 2012; Khandekar and Jagadeeswaran, 2014; Ganesan et al., 2011). By identifying hepsin as the enzyme responsible for the release of uromodulin in the renal tubule, our results reveal an important biological function of this type II transmembrane serine protease in the renal epithelium. At the same time, they represent a major advancement in understanding the biology of uromodulin, opening new avenues of research on the regulation and possibly modulation of its secretion. This could have impact for diseases such as urolithiasis where the urinary levels of uromodulin may be directly correlated with its protective function. Also, it might be relevant for CKD and hypertension where we demonstrated that UMOD risk variants are associated with increased uromodulin expression and urinary levels( Trudu et al., 2013; Olden et al., 2014). Lack of hepsin does not affect baseline renal function of Hpn-/- mice( serum creatinine: wild-type 0. 78 +/- 0. 04 vs Hpn-/- 0. 74 +/- 0. 14 mg/dL, n = 5/group, p=0. 59; blood urea nitrogen: wild-type 21. 6 +/- 3. 29 vs Hpn-/- 20. 8 +/- 4. 12 mg/dL, n= 5/group, p=0. 76). In this light, further studies in Hpn-/- mice with altered uromodulin processing and secretion are warranted to investigate if the mice develop a phenotype upon additional challenges. The results may provide new insight into uromodulin cellular and extracellular functions. It is well recognised that proteolysis plays a central role in uromodulin polymerisation. This step leads to the release of an inhibitory intramolecular interaction that prevents premature protein polymerisation( Jovine et al., 2004; Schaeffer et al., 2009; Han et al., 2010). Considering the maturation steps of uromodulin along the secretory compartment, hepsin-mediated cleavage likely takes place at the plasma membrane where the two proteins co-localise, allowing the protein to traffic within the cell in a polymerisation-incompetent conformation, and to polymerise only once it is secreted( Figure 8). Interestingly, the C-terminus of human and mouse urinary uromodulin has been mapped to a phenylalanine residue( F587 and F588, respectively)( Santambrogio et al., 2008). 
Since in vitro studies showed that hepsin preferentially cleaves substrates after basic amino acids( Herter et al., 2005; Béliveau et al., 2009; Owen et al., 2010), this could result from cleavage at the following arginine residue and subsequent C-terminal trimming. Post-cleavage trimming by carboxypeptidases is not uncommon and it has already been proposed for several secreted proteins, including urokinase( Günzlerwa et al., 1982), and ZP subunits( Kubo et al., 1999; Boja et al., 2003; Darie et al., 2005). 10. 7554/eLife. 08887. 025Figure 8. Model of uromodulin shedding and polymerisation. Uromodulin is exclusively expressed by TAL tubular epithelial cells. The protein enters the secretory pathway and reaches the plasma membrane in a polymerisation-incompetent conformation. This is ensured by the interaction between two hydrophobic patches within and next to the ZP-C subdomain( Internal Hydrophobic Patch, red circle, and External Hydrophobic Patch, dark green circle)( Jovine et al., 2004; Schaeffer et al., 2009; Han et al., 2010). Shedding by hepsin at the uromodulin consensus cleavage site( red diamond), likely occurring at the plasma membrane, releases the hydrophobic interaction, generating polymerisation-competent species that are assembled into polymeric filaments within the tubular lumen. Pink circles indicate N-terminal EGF-like domains, yellow and green cylinders represent ZP-N and ZP-C subdomains. The orientation of uromodulin within polymers is hypothetical. DOI: http://dx. doi. org/10. 7554/eLife. 08887. 025 Through the identification of hepsin as the protease releasing uromodulin in the urine we discover the first protease involved in the physiological cleavage of a ZP domain protein, which is a pre-requisite for polymerisation of this type of molecules. Notably, Hpn-/- mice were reported to have profound hearing loss likely due to inner ear defects, including a deformed and enlarged tectorial membrane( Guipponi et al., 2007). This acellular structure, which overlays the organ of Corti, consists of filaments largely composed of ZP domain proteins α- and β-tectorin. The morphology of the tectorial membrane is similarly altered in mice carrying deafness-causing mutations in α-tectorin( Legan et al., 2014) that are likely affecting protein secretion( Jovine et al., 2002). These findings, along with data presented in this manuscript, suggest that inner ear tectorins are also physiological substrates of hepsin. Moreover, release of glycoprotein-2, a ZP domain protein highly homologous to uromodulin that is predominantly expressed in pancreas, is mediated by an apical protease sensitive to serine protease inhibitors( Fritz and Lowe, 1996), again suggesting the involvement of hepsin. In light of these observations and given the conserved function of the ZP domain and the high similarity of the consensus cleavage site of ZP domain proteins( Jovine et al., 2004), our results are likely to be relevant for other members of this protein family." }, { "role": "assistant", "content": "Several proteins in humans and other animals contain a region called a 'zona pellucida domain' . This domain enables these proteins to associate with each other and form long filaments . Uromodulin is one such protein that was first identified more than fifty years ago . This protein is known to play a role in human diseases such as hypertension and kidney failure , but uromodulin’s biological purpose still remains elusive . 
Uromodulin is only made in the kidney and it is the most abundant protein in the urine of healthy individuals . Uromodulin also contains a so-called 'external hydrophobic patch' that must be removed before the zona pellucida domain can start to form filaments . This hydrophobic patch is removed when uromodulin is cut by an unknown enzyme; this cutting releases the rest of the uromodulin protein from the surface of the cells that line the kidney into the urine . Brunati et al . have now tested a panel of candidate enzymes and identified that one called hepsin is able to cut uromodulin . Hepsin is embedded in the cell membrane of the cells that line the kidney . When the level of hepsin was artificially reduced in cells grown in the laboratory , uromodulin remained anchored to the cell surface , its processing was altered and it did not form filaments . Brunati et al . next analysed mice in which the gene encoding hepsin had been deleted . While these animals did not have any major defects in their internal organs , they had much lower levels of uromodulin in their urine . Furthermore , this residual urinary protein was not cut properly and it did not assemble into filaments . Thus , these findings reveal that hepsin is the enzyme that is responsible for releasing uromodulin in the urine . This discovery could be exploited to alter the levels of uromodulin release , and further studies using mice lacking hepsin may also help to understand uromodulin’s biological role . Finally , it will be important to understand if hepsin , or a similar enzyme , is also responsible for the release of other proteins containing the zona pellucida domain ." } ]
science.chemdner_ner
science.chemdner_ner.3113
[ { "role": "user", "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Chemical\"].\n\nPlease return the output as a JSON object of the format: {\"Chemical\": [\"Arsenic\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nSynthesis and antimicrobial activity of novel amphiphilic aromatic amino alcohols. We report in this work the preparation and in vitro antimicrobial evaluation of novel amphiphilic aromatic amino alcohols synthesized by reductive amination of 4-alkyloxybenzaldehyde with 2-amino-2-hydroxymethyl-propane-1,3-diol. The antibacterial activity was determined against four standard strains (Staphylococcus aureus, Staphylococcus epidermidis, Escherichia coli, Pseudomonas aeruginosa) and 21 clinical isolates of methicillin-resistant Staphylococcus aureus. The antifungal activity was evaluated against four yeast (Candida albicans, Candida tropicalis, Candida glabrata and Candida parapsilosis). The results obtained showed a strong positive correlation between the lipophilicity and the antibiotic activity of the tested compounds. The best activities were obtained against the Gram-positive bacteria (MIC=2-16μgml(-1)) for the five compounds bearing longer alkyl chains (4c-g; 8-14 carbons), which were also the most active against Candida (MIC=2-64μgml(-1)). Compound 4e exhibited the highest levels of inhibitory activity (MIC=2-16μgml(-1)) against clinical isolates of MRSA. A concentration of twice the MIC resulted in bactericidal activity of 4d against 19 of the 21 clinical isolates." }, { "role": "assistant", "content": "{\"Chemical\": [\"aromatic amino alcohols\", \"4-alkyloxybenzaldehyde\", \"2-amino-2-hydroxymethyl-propane-1,3-diol\", \"methicillin\"]}" } ]
science.multixscience_multidoc_summarization
science.multixscience_multidoc_summarization.29784
[ { "role": "user", "content": "You are provided with the abstract of a scientific paper and the abstracts of the papers it references. Your task is to write a Related Work section for the main paper. This section should concisely summarize and integrate the key points from the cited abstracts, reflecting how they relate to the main paper's contributions and context. In your summary, use numerical citation markers like [1], [2], etc., to refer to the corresponding papers.\n\nMain Abstract:\nTime series are difficult to monitor, summarize and predict. Segmentation organizes time series into few intervals having uniform characteristics (flatness, linearity, modality, monotonicity and so on). For scalability, we require fast linear time algorithms. The popular piecewise linear model can determine where the data goes up or down and at what rate. Unfortunately, when the data does not follow a linear model, the computation of the local slope creates overfitting. We propose an adaptive time series model where the polynomial degree of each interval vary (constant, linear and so on). Given a number of regressors, the cost of each interval is its polynomial degree: constant intervals cost 1 regressor, linear intervals cost 2 regressors, and so on. Our goal is to minimize the Euclidean (l_2) error for a given model complexity. Experimentally, we investigate the model where intervals can be either constant or linear. Over synthetic random walks, historical stock market prices, and electrocardiograms, the adaptive model provides a more accurate segmentation than the piecewise linear model without increasing the cross-validation error or the running time, while providing a richer vocabulary to applications. Implementation issues, such as numerical stability and real-world performance, are discussed.\n\nCited Abstract(s):\n[1]: Abstract Polynomial splines are often used in statistical regression models for smooth response functions. When the number and location of the knots are optimized, the approximating power of the spline is improved and the model is nonparametric with locally determined smoothness. However, finding the optimal knot locations is an historically difficult problem. We present a new estimation approach that improves computational properties by penalizing coalescing knots. The resulting estimator is easier to compute than the unpenalized estimates of knot positions, eliminates unnecessary “corners” in the fitted curve, and in simulation studies, shows no increase in the loss. A number of GCV and AIC type criteria for choosing the number of knots are evaluated via simulation.\n\n[2]: SUMMARY With ideal spatial adaptation, an oracle furnishes information about how best to adapt a spatially variable estimator, whether piecewise constant, piecewise polynomial, variable knot spline, or variable bandwidth kernel, to the unknown function. Estimation with the aid of an oracle offers dramatic advantages over traditional linear estimation by nonadaptive kernels; however, it is a priori unclear whether such performance can be obtained by a procedure relying on the data alone. We describe a new principle for spatially-adaptive estimation: selective wavelet reconstruction. We show that variable-knot spline fits and piecewise-polynomial fits, when equipped with an oracle to select the knots, are not dramatically more powerful than selective wavelet reconstruction with an oracle. We develop a practical spatially adaptive method, RiskShrink, which works by shrinkage of empirical wavelet coefficients. 
RiskShrink mimics the performance of an oracle for selective wavelet reconstruction as well as it is possible to do so. A new inequality in multivariate normal decision theory which we call the oracle inequality shows that attained performance differs from ideal performance by at most a factor of approximately 2 log n, where n is the sample size. Moreover no estimator can give a better guarantee than this. Within the class of spatially adaptive procedures, RiskShrink is essentially optimal. Relying only on the data, it comes within a factor log 2 n of the performance of piecewise polynomial and variableknot spline methods equipped with an oracle. In contrast, it is unknown how or if piecewise polynomial methods could be made to function this well when denied access to an oracle and forced to rely on data alone.\n\n[3]: Abstract An adaptive spline method for smoothing is proposed that combines features from both regression spline and smoothing spline approaches. One of its advantages is the ability to vary the amount of smoothing in response to the inhomogeneous “curvature” of true functions at different locations. This method can be applied to many multivariate function estimation problems, which is illustrated by an application to smoothing temperature data on the globe. The method's performance in a simulation study is found to be comparable to the wavelet shrinkage methods proposed by Donoho and Johnstone. The problem of how to count the degrees of freedom for an adaptively chosen set of basis functions is addressed. This issue arises also in the MARS procedure proposed by Friedman and other adaptive regression spline procedures.\n\n[4]: During the past decade there has been an explosion in computation and information technology. With it have come vast amounts of data in a variety of fields such as medicine, biology, finance, and marketing. The challenge of understanding these data has led to the development of new tools in the field of statistics, and spawned new areas such as data mining, machine learning, and bioinformatics. Many of these tools have common underpinnings but are often expressed with different terminology. This book describes the important ideas in these areas in a common conceptual framework. While the approach is statistical, the emphasis is on concepts rather than mathematics. Many examples are given, with a liberal use of color graphics. It is a valuable resource for statisticians and anyone interested in data mining in science or industry. The book's coverage is broad, from supervised learning (prediction) to unsupervised learning. The many topics include neural networks, support vector machines, classification trees and boosting---the first comprehensive treatment of this topic in any book. @PARASPLIT This major new edition features many topics not covered in the original, including graphical models, random forests, ensemble methods, least angle regression and path algorithms for the lasso, non-negative matrix factorization, and spectral clustering. There is also a chapter on methods for wide'' data (p bigger than n), including multiple testing and false discovery rates. @PARASPLIT Trevor Hastie, Robert Tibshirani, and Jerome Friedman are professors of statistics at Stanford University. They are prominent researchers in this area: Hastie and Tibshirani developed generalized additive models and wrote a popular book of that title. Hastie co-developed much of the statistical modeling software and environment in R S-PLUS and invented principal curves and surfaces. 
Tibshirani proposed the lasso and is co-author of the very successful An Introduction to the Bootstrap. Friedman is the co-inventor of many data-mining tools including CART, MARS, projection pursuit and gradient boosting.\n\n[5]:" }, { "role": "assistant", "content": "While we focus on segmentation, there are many methods available for fitting models to continuous variables, such as a regression, regression decision trees, Neural Networks [4] , Wavelets [2] , Adaptive Multivariate Splines [5] , Free-Knot Splines [1] , Hybrid Adaptive Splines [3] , etc." } ]
science.bioasq_general_qa
science.bioasq_general_qa.942
[ { "role": "user", "content": "You will be shown a question, followed by excerpts from biomedical research papers. Please answer the question based on the provided context. Do not include any text in your response other than the answer.\n\nQuestion: What is holoprosencephaly?\n\nContext: Holoprosencephaly (HPE) is a congenital defect of the brain, median structures, and face resulting from an incomplete cleavage of the primitive brain during early embryogenesis.\nHoloprosencephaly (HPE) is the most common developmental defect of the forebrain characterized by inadequate or absent midline division of the forebrain into cerebral hemispheres, with concomitant midline facial defects in the majority of cases.\nHoloprosencephaly (HPE) is a severe brain malformation characterized by abnormal cleavage of the prosencephalon in the 5th gestational week.\nBACKGROUND AND PURPOSE: Holoprosencephaly is a rare developmental brain abnormality with a range of severity.\nBACKGROUND: Holoprosencephaly is the most common malformation of the forebrain (1 in 250 embryos) with severe consequences for fetal and child development.\nPURPOSE: Holoprosencephaly is a rare developmental brain abnormality with a range of severity.\nHoloprosencephaly is the most common malformation of the forebrain and typically results in severe neurocognitive impairment with accompanying midline facial anomalies.\nHoloprosencephaly is a rare brain abnormality resulting from an incomplete cleavage of the primitive prosencephalon of forebrain during early embryogenesis.\nHoloprosencephaly is a brain malformation that develops as a result of a defect in development of prosencephalon during early gestation.\nPURPOSE OF REVIEW: Holoprosencephaly is a disorder of forebrain development characterized by a failure of the brain to separate into two hemispheres during early development.\nHoloprosencephaly (HPE) is a severe brain malformation characterized by abnormal cleavage of the prosencephalon in the 5th gestational week.\nHoloprosencephaly is a brain defect resulting from incomplete cleavage of the embryonic forebrain.\nHoloprosencephaly is a spectrum of congenital defects of forebrain development characterized by incomplete separation of the cerebral hemispheres.\nHoloprosencephaly is a brain anomaly of varying severity with associated extracranial, symptomatic abnormalities in only a minority of cases.\nPURPOSE OF REVIEW: Holoprosencephaly is a disorder of forebrain development characterized by a failure of the brain to separate into two hemispheres during\nBACKGROUND: Holoprosencephaly is a structural anomaly of the brain that consists in a defect of the prosencephalon development that leads to face and neurological defects of variable intensity.AIM: To estimate holoprosencephaly prevalence at birth.PATIENTS AND METHODS: All cases of holoprosencephaly, born alive or stillbirths, registered in the 15 Chilean Hospitals of the Latin American Collaborative Study of Congenital Malformations (ECLAM\nHoloprosencephaly is a clinically and genetically heterogeneous midline brain malformation associated with neurologic manifestations including developmental delay, intellectual disability and seizures." }, { "role": "assistant", "content": "Holoprosencephaly (HPE) is a congenital defect of the brain, median structures, and face resulting from an incomplete cleavage of the primitive brain during early embryogenesis . 
The most common developmental defect is characterized by inadequate or absent midline division of the forebrain into cerebral hemispheres with concomitant midline facial defects in the majority of cases . Holoprosencephaly (HPE) is a congenital defect of the brain, median structures, and face resulting from an incomplete cleavage of the primitive brain during early embryogenesis. Holoprosencephaly is a congenital defect of the brain, median structures, and face resulting from an incomplete cleavage of the primitive brain during early embryogenesis. Holoprosencephaly (HPE) is a congenital defect of the brain, median structures, and face resulting from an incomplete cleavage of the primitive brain during early embryogenesis." } ]
science.chia_ner
science.chia_ner.572
[ { "role": "user", "content": "You will be shown inclusion or exclusion criteria for a clinical trial. Given these criteria, your task is to extract all unique entities of the following types: [\"Observation\", \"Condition\", \"Procedure\", \"Drug\", \"Temporal\", \"Multiplier\", \"Qualifier\", \"Negation\", \"ReferencePoint\", \"Scope\", \"Value\", \"Mood\", \"Device\", \"Visit\", \"Measurement\", \"Person\"].\n\nPlease return the output as a JSON object of the format: {\"Condition\" : [\"breast cancer\", ...], \"Device\" : [\"ports\", ...], \"Drug\" : [\"Atenolol\", ...], \"Measurement\" : [\"blood pressure\", ...], \"Mood\" : [\"suspected\", ...], \"Multiplier\" : [\"more than two\", ...], \"Negation\" : [\"no\", ...], \"Observation\" : [\"history\", ...], \"Person\" : [\"male\", ...], \"Procedure\" : [\"surgery\", ...], \"Qualifier\" : [\"severe\", ...], \"ReferencePoint\" : [\"enrollment\", ...], \"Scope\" : [\"infection\", ...], \"Temporal\" : [\"one day\", ...], \"Value\" : [\"50%\", ...], \"Visit\" : [\"ICU\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nKnown intolerance to the doxycycline Body weight <40 kg Pregnancy or breastfeeding History of severe allergic reaction or anaphylaxis Alcohol or drug abuse" }, { "role": "assistant", "content": "{\"Condition\": [\"anaphylaxis\", \"allergic reaction\", \"drug abuse\", \"Pregnancy\", \"Alcohol abuse\", \"intolerance to the doxycycline\"], \"Device\": [], \"Drug\": [\"doxycycline\"], \"Measurement\": [\"Body weight\"], \"Mood\": [], \"Multiplier\": [], \"Negation\": [], \"Observation\": [\"History of\", \"breastfeeding\"], \"Person\": [], \"Procedure\": [], \"Qualifier\": [\"severe\"], \"ReferencePoint\": [], \"Scope\": [\"severe allergic reaction or anaphylaxis Alcohol or drug abuse\"], \"Temporal\": [], \"Value\": [\"\\u003c40 kg\"], \"Visit\": []}" } ]
science.bioasq_factoid_qa
science.bioasq_factoid_qa.1072
[ { "role": "user", "content": "You will be shown biomedical passages and your task is to the answer the provided question. Provide a concise and accurate response and do not include any additional text in the response other than the answer.\n\nQuestion: What cellular process is the protein clathrin involved in?\n\nContext: ligand-receptor complexes are internalized via clathrin coated pits by a process called receptor-mediated endocytosis.\nReceptor-mediated endocytosis proceeds by transfer of receptor-ligand complexes from clathrin-coated pits at the cell surface to uncoated endocytic vesicles termed receptosomes (or endosomes).\nwhile clathrin mediated endocytosis w\nClathrin-mediated endocytosis (CME) is a cellular trafficking process in which cargoes and lipids are internalized from the plasma membrane into vesicles coated with clathrin and adaptor proteins.\nClathrin is involved in the endocytosis and exocytosis of cellular proteins and the process of virus infection.\nEndocytosis is a crucial cellular process in eukaryotic cells which involves clathrin and/or adaptor proteins , lipid kinases , phosphatases and the actin cytoskeleton\nClathrin-independent endocytosis ( CIE ) is the process of cellular uptake of various particles , including pathogens , without the coat protein clathrin\nEndocytosis is an essential cellular process in eukaryotic cells that involves concordant functions of clathrin and adaptor proteins , various protein and lipid kinases , phosphatases and the actin cytoskeleton\nEndocytosis mediated by clathrin , a cellular process by which cells internalize membrane receptors and their extracellular ligands , is an important component of cell signaling regulation\nClathrin-mediated endocytosis ( CME ) is a cellular trafficking process in which cargoes and lipids are internalized from the plasma membrane into vesicles coated with clathrin and adaptor proteins\nClathrin plays an important role in many cellular processes, including endocytosis, secretion, and sorting of membranous organelles.\nClathrin-mediated endocytosis is a major regulator of cell-surface protein internalization." }, { "role": "assistant", "content": "receptor mediated endocytosis" } ]
science.data_reco_mcq_mc
science.data_reco_mcq_mc.583
[ { "role": "user", "content": "You are provided with a research question, keyphrases about the question, a description of candidate datasets and dataset options. Read the description of popular datasets provided below and select the ones that can be used to validate the following research question. Use your knowledge of machine learning datasets to make the best judgement.\nYour response should be formatted as a `json` array. For instance, for the query \"Semi supervised image classification\", a sample response might be: [\"CIFAR-10\", \"CIFAR-100\"]. Do not include any extra text in the response other than the answer array.\n\nQuery: I want to compare the performance of unsupervised MRC systems.\n\nKeyphrases: machine reading comprehension text\n\nDataset description:\nSNIPS: The SNIPS Natural Language Understanding benchmark is a dataset of over 16,000 crowdsourced queries distributed among 7 user intents of various complexity: SearchCreativeWork (e.g. Find me the I, Robot television show), GetWeather (e.g.\nSearchQA: SearchQA was built using an in-production, commercial search engine. It closely reflects the full pipeline of a (hypothetical) general question-answering system, which consists of information retrieval and answer synthesis.\nCOVERAGE: COVERAGE contains copymove forged (CMFD) images and their originals with similar but genuine objects (SGOs). COVERAGE is designed to highlight and address tamper detection ambiguity of popular methods, caused by self-similarity within natural images.\nNewsQA: The NewsQA dataset is a crowd-sourced machine reading comprehension dataset of 120,000 question-answer pairs. Documents are CNN news articles.\nACDC: The goal of the Automated Cardiac Diagnosis Challenge (ACDC) challenge is to: compare the performance of automatic methods on the segmentation of the left ventricular endocardium and epicardium as the right ventricular endocardium for both end diastolic and end systolic phase instances; compare the performance of automatic methods for the classification of the examinations in five classes (normal case, heart failure with infarction, dilated cardiomyopathy, hypertrophic cardiomyopathy, abnormal right ventricle). The overall ACDC dataset was created from real clinical exams acquired at the University Hospital of Dijon.\n\nOptions:- A: SNIPS B: SearchQA C: COVERAGE D: NewsQA E: ACDC" }, { "role": "assistant", "content": "[\"SearchQA\", \"NewsQA\"]" } ]
science.chemdner_ner
science.chemdner_ner.2292
[ { "role": "user", "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Chemical\"].\n\nPlease return the output as a JSON object of the format: {\"Chemical\": [\"Arsenic\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nIncreased vulnerability to β-cell destruction and diabetes in mice lacking NAD(P)H:quinone oxidoreductase 1. NAD(P)H:quinone oxidoreductase 1 (NQO1) has been known to protect cells against stressors, including the diabetogenic reagent streptozotocin (STZ). The present study demonstrated that NQO1 deficiency resulted in increased pancreatic β-cell death induced by multiple low dose of STZ (MLDS) injections. NQO1 knockout (KO) mice showed hyperglycemia, body weight loss, impaired glucose clearance rate and a lower plasma insulin level after MLDS treatment. Moreover, β-cell mass and pancreatic insulin content were significantly lower in KO mice than in wild-type (WT) mice after MLDS treatment. Five days after the first STZ treatment, the islets of KO mice had substantially more TUNEL-positive β-cells than those of WT mice, but there was no difference in the regeneration of β-cells between KO mice and WT mice. At the same time, MLDS-treated KO mice showed significantly increased apoptotic markers in β-cells, including cleaved caspase 3, Smac/DIABLO and AIF (apoptosis inducing factor) in the cytoplasm. These results suggest that mice deficient in NQO1 are vulnerable to MLDS-induced β-cell destruction and diabetes, caused by increase of β-cell apoptosis in pancreas." }, { "role": "assistant", "content": "{\"Chemical\": [\"streptozotocin\", \"glucose\", \"quinone\", \"STZ\", \"NAD(P)H\"]}" } ]
science.scitldr_aic
science.scitldr_aic.1091
[ { "role": "user", "content": "Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.\n\nPaper text:\n\nBottleneck structures with identity (e.g., residual) connection are now emerging popular paradigms for designing deep convolutional neural networks (CNN), for processing large-scale features efficiently.\nIn this paper, we focus on the information-preserving nature of identity connection and utilize this to enable a convolutional layer to have a new functionality of channel-selectivity, i.e., re-distributing its computations to important channels.\nIn particular, we propose Selective Convolutional Unit (SCU), a widely-applicable architectural unit that improves parameter efficiency of various modern CNNs with bottlenecks.\nDuring training, SCU gradually learns the channel-selectivity on-the-fly via the alternative usage of\n(a) pruning unimportant channels, and\n(b) rewiring the pruned parameters to important channels.\nThe rewired parameters emphasize the target channel in a way that selectively enlarges the convolutional kernels corresponding to it.\nOur experimental results demonstrate that the SCU-based models without any postprocessing generally achieve both model compression and accuracy improvement compared to the baselines, consistently for all tested architectures.\nNowadays, convolutional neural networks (CNNs) have become one of the most effective approaches in various fields of artificial intelligence.\nWith a growing interest of CNNs, there has been a lot of works on designing more advanced CNN architectures BID43 BID21 .\nIn particular, the simple idea of adding identity connection in ResNet BID11 has enabled breakthroughs in this direction, as it allows to train substantially deeper/wider networks than before by alleviating existed optimization difficulties in previous CNNs.\nRecent CNNs can scale over a thousand of layers BID12 or channels BID18 without much overfitting, and most of these \"giant\" models consider identity connections in various ways BID49 BID18 .\nHowever, as CNN models grow rapidly, deploying them in the real-world becomes increasingly difficult due to computing resource constraints.\nThis has motivated the recent literature such as network pruning BID9 BID28 BID35 , weight quantization BID36 BID3 , adaptive networks BID47 BID5 BID0 BID19 , and resource-efficient architectures BID17 BID40 BID32 .For\ndesigning a resource-efficient CNN architecture, it is important to process succinct representations of large-scale channels. To\nthis end, the identity connections are useful since they allow to reduce the representation dimension to a large extent while \"preserving\" information from the previous layer. Such\nbottleneck architectures are now widely used in modern CNNs such as ResNet BID11 and DenseNet BID18 for parameter efficiency, and many state-of-the-art mobile-targeted architectures such as SqueezeNet BID20 , ShuffleNet BID53 BID32 , MoblileNet BID16 BID40 , and CondenseNet BID17 commonly address the importance of designing efficient bottlenecks.Contribution. In this\npaper, we propose Selective Convolutional Unit (SCU), a widely-applicable architectural unit for efficient utilization of parameters in particular as a bottleneck upon identity connection. At a high-level\n, SCU performs a convolutional operation to transform a given input. 
The main goal\nof SCU, however, is rather to re-distribute their computations only to selected channels (a) (b)Figure\n1:\n(a) An illustration\nof channel de-allocation and re-allocation procedures. The higher the saturation\nof the channel color, the higher the ECDS value. (b) The overall structure\nof SCU.of importance, instead of processing the entire input naively. To this end, SCU has two\nspecial operations: (a) de-allocate unnecessary\ninput channels (dealloc), and (b) re-allocate the obstructed\nchannels to other channels of importance (realloc) (see Figure 1a) . They are performed without damaging\nthe network output (i.e., function-preserving operations), and therefore one can call them safely at any time during training. Consequently, training SCU is a process\nthat increases the efficiency of CNN by iteratively pruning or rewiring its parameters on-the-fly along with learning them. In some sense, it is similar to how hippocampus\nin human brain learn, where new neurons are generated daily, and rewired into the existing network while maintaining them via neuronal apoptosis or pruning BID38 BID49 .We combine several new ideas to tackle technical\nchallenges for such on-demand, efficient trainable SCU. First, we propose expected channel damage score\n(ECDS), a novel metric of channel importance that is used as the criterion to select channels for dealloc or realloc. Compared to other popular magnitude-based metrics\nBID28 BID35 , ECDS allows capturing not only low-magnitude channels but also channels of low-contribution under the input distribution. Second, we impose channel-wise spatial shifting bias\nwhen a channel is reallocated, providing much diversity in the input distribution. It also has an effect of enlarging the convolutional\nkernel of SCU. Finally, we place a channel-wise scaling layer inside\nSCU with sparsity-inducing regularization, which also promotes dealloc (and consequently realloc as well), without further overhead in inference and training.We evaluate the effectiveness of SCU by applying it to several modern CNN models including ResNet BID11 , DenseNet BID18 , and ResNeXt BID49 , on various classification datasets. Our experimental results consistently show that SCU improves\nthe efficiency of bottlenecks both in model size and classification accuracy. For example, SCU reduces the error rates of DenseNet-40 model\n(without any post-processing) by using even less parameters: 6.57% → 5.95% and 29.97% → 28.64% on CIFAR-10/100 datasets, respectively. We also apply SCU to a mobile-targeted CondenseNet BID17 model\n, and further improve its efficiency: it even outperforms NASNet-C BID54 , an architecture searched with 500 GPUs for 4 days, while our model is constructed with minimal efforts automatically via SCU.There have been significant interests in the literature on discovering which parameters to be pruned during training of neural networks, e.g., see the literature of network sparsity learning BID48 BID25 BID41 BID35 BID30 BID4 . On the other hand, the progress is, arguably, slower for how\nto rewire the pruned parameters of a given model to maximize its utility. proposed Dense-Sparse-Dense (DSD), a multi-step training flow\napplicable for a wide range of DNNs showing that re-training with re-initializing the pruned parameters can improve the performance of the original network. 
Dynamic network surgery BID7 , on the other hand, proposed a\nmethodology of splicing the pruned connections so that mis-pruned ones can be recovered, yielding a better compression performance. In this paper, we propose a new way of rewiring for parameter\nefficiency, i.e., rewiring for channel-selectivity, and a new architectural framework that enables both pruning and rewiring in a single pass of training without any postprocessing or re-training (as like human brain learning). Under our framework, one can easily set a targeted trade-off\nbetween model compression and accuracy improvement depending on her purpose, simply by adjusting the calling policy of dealloc and realloc. We believe that our work sheds a new direction on the important\nproblem of training neural networks efficiently.\nWe demonstrate that CNNs of large-scale features can be trained effectively via channel-selectivity, primarily focusing on bottleneck architectures.\nThe proposed ideas on channel-selectivity, however, would be applicable other than the bottlenecks, which we believe is an interesting future research direction.\nWe also expect that channel-selectivity has a potential to be used for other tasks as well, e.g., interpretability BID42 , robustness BID6 , and memorization BID51 ." }, { "role": "assistant", "content": "We propose a new module that improves any ResNet-like architectures by enforcing \"channel selective\" behavior to convolutional layers" } ]
science.chemprot_ner
science.chemprot_ner.209
[ { "role": "user", "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Protein\", \"Chemical\"].\n\nPlease return the output as a JSON object of the format: {\"Chemical\" : [\"Dexamethasone\", ...], \"Protein\" : [\"BRCA-1\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nRufinamide attenuates mechanical allodynia in a model of neuropathic pain in the mouse and stabilizes voltage-gated sodium channel inactivated state.\nBACKGROUND: Voltage-gated sodium channels dysregulation is important for hyperexcitability leading to pain persistence. Sodium channel blockers currently used to treat neuropathic pain are poorly tolerated. Getting new molecules to clinical use is laborious. We here propose a drug already marketed as anticonvulsant, rufinamide. METHODS: We compared the behavioral effect of rufinamide to amitriptyline using the Spared Nerve Injury neuropathic pain model in mice. We compared the effect of rufinamide on sodium currents using in vitro patch clamp in cells expressing the voltage-gated sodium channel Nav1.7 isoform and on dissociated dorsal root ganglion neurons to amitriptyline and mexiletine. RESULTS: In naive mice, amitriptyline (20 mg/kg) increased withdrawal threshold to mechanical stimulation from 1.3 (0.6-1.9) (median [95% CI]) to 2.3 g (2.2-2.5) and latency of withdrawal to heat stimulation from 13.1 (10.4-15.5) to 30.0 s (21.8-31.9), whereas rufinamide had no effect. Rufinamide and amitriptyline alleviated injury-induced mechanical allodynia for 4 h (maximal effect: 0.10 +/- 0.03 g (mean +/- SD) to 1.99 +/- 0.26 g for rufinamide and 0.25 +/- 0.22 g to 1.92 +/- 0.85 g for amitriptyline). All drugs reduced peak current and stabilized the inactivated state of voltage-gated sodium channel Nav1.7, with similar effects in dorsal root ganglion neurons. CONCLUSIONS: At doses alleviating neuropathic pain, amitriptyline showed alteration of behavioral response possibly related to either alteration of basal pain sensitivity or sedative effect or both. Side-effects and drug tolerance/compliance are major problems with drugs such as amitriptyline. Rufinamide seems to have a better tolerability profile and could be a new alternative to explore for the treatment of neuropathic pain." }, { "role": "assistant", "content": "{\"Chemical\": [\"sodium\", \"Rufinamide\", \"mexiletine\", \"amitriptyline\", \"rufinamide\", \"Sodium\"], \"Protein\": [\"Sodium channel\", \"Nav1.7\", \"voltage-gated sodium channel\", \"Voltage-gated sodium channels\"]}" } ]
science.bioasq_yesno_qa
science.bioasq_yesno_qa.29
[ { "role": "user", "content": "You have to answer a biomedical question in binary format, i.e. only yes and no are the only acceptable answer formats. A list of paragraphs is provided as context to help you answer the question.\n\nQuestion: Have Quantitative Trait Loci affecting splicing (splicing QTLs) been linked to disease?\n\nContext: The spontaneously hypertensive rat (SHR) is a widely used rodent model of hypertension and metabolic syndrome. Previously we identified thousands of cis-regulated expression quantitative trait loci (eQTLs) across multiple tissues using a panel of rat recombinant inbred (RI) strains derived from Brown Norway and SHR progenitors. These cis-eQTLs represent potential susceptibility loci underlying physiological and pathophysiological traits manifested in SHR. We have prioritized 60 cis-eQTLs and confirmed differential expression between the parental strains by quantitative PCR in 43 (72%) of the eQTL transcripts.\nThese colocalizing correlated cis-eQTLs (c3-eQTLs) are highly attractive as primary susceptibility loci for the colocalizing pQTLs. Furthermore, sequence analysis of the c3-eQTL genes identified single nucleotide polymorphisms (SNPs) that are predicted to affect transcription factor binding affinity, splicing and protein function. These SNPs, which potentially alter transcript abundance and stability, represent strong candidate factors underlying not just eQTL expression phenotypes, but also the correlated metabolic and physiological traits. In conclusion, by integration of genomic sequence, eQTL and QTT datasets we have identified several genes that are strong positional candidates for pathophysiological traits observed in the SHR strain.\nIdentifying associations between genotypes and gene expression levels using microarrays has enabled systematic interrogation of regulatory variation underlying complex phenotypes. This approach has vast potential for functional characterization of disease states, but its prohibitive cost, given hundreds to thousands of individual samples from populations have to be genotyped and expression profiled, has limited its widespread application.RESULTS: Here we demonstrate that genomic regions with allele-specific expression (ASE) detected by sequencing cDNA are highly enriched for cis-acting expression quantitative trait loci (cis-eQTL) identified by profiling of 500 animals in parallel, with up to 90% agreement on the allele that is preferentially expressed. We also observed widespread noncoding and antisense ASE and identified several allele-specific alternative splicing variants.CONCLUSION: Monitoring ASE by sequencing cDNA from as little as one sample is a practical alternative to expression genetics for mapping cis-acting variation that regulates RNA transcription and processing.\nThe six genes corresponded to rat and mouse quantitative trait loci (QTLs) that had shown associations with the common traits such as the well characterized MS and even tumor susceptibility. Our findings suggest that the six genes may play important roles in the pleiotropic effects on lipid metabolism and the MS, which increase the risk of Type 2 Diabetes and cardiovascular disease. 
The use of the multivariate phenotypes can be advantageous in identifying genetic risk factors, accounting for the pleiotropic effects when the multivariate phenotypes have a common etiological pathway.\nTo elucidate mechanisms involved in multiple sclerosis (MS), we studied genetic regulation of experimental autoimmune encephalomyelitis (EAE) in rats, assuming a conservation of pathogenic pathways. In this study, we focused on Eae23, originally identified to regulate EAE in a (LEW.1AV1xPVG.1AV1)F2 cross. Our aim was to determine whether one or more genes within the 67 Mb region regulate EAE and to define candidate risk genes.METHODOLOGY/PRINCIPAL FINDINGS: We used high resolution quantitative trait loci (QTL) analysis in the 10th generation (G10) of an advanced intercross line (AIL) to resolve Eae23 into two QTLs that independently regulate EAE, namely Eae23a and Eae23b." }, { "role": "assistant", "content": "yes" } ]
science.mslr2022_cochrane_multidoc_summarization
science.mslr2022_cochrane_multidoc_summarization.3496
[ { "role": "user", "content": "You will be provided with the titles and abstracts of a collection of medical RCT reports studying a clinical question -- for instance, the efficacy of an intervention on some medical outcome. Please write a few sentences synthesizing the findings of these reports, which could serve as a conclusion section for a systematic review article. Your response should not include any other text. The RCT reports are below.\n\n\n\nTitle 1:\nEffect of growth hormone on exercise tolerance in children with cystic fibrosis.\nAbstract 1:\nThe effect of growth hormone (GH) treatment on exercise tolerance in children with cystic fibrosis was investigated.\n 10 prepubertal children (mean +/- SD; age: 12.1 +/- 1.7 yr; height: 137.4 +/- 9.2 cm; body mass: 27.8 +/- 4.2 kg; forced expiratory volume in 1 s (FEV1): 68 +/- 22% predicted) were randomly assigned to either control period (CON, standard therapy) or recombinant human growth hormone (GH) period (additional GH treatment, 0.11-0.14 IU.kg-1, daily, s.c.) for the first 6 months, and then assigned to the other period for the next 6 months. At study entry and after each period, anthropometric data, pulmonary function, and exercise capacity (peak exercise capacity, .VO(2peak), and isokinetic muscle strength) were measured.\n Changes in height (+4.3 +/- 1.0 cm), total body mass (+2.2 +/- 0.8 kg), and lean body mass (LBM, +2.9 +/- 0.7 kg) were significantly higher (P < 0.01) after GH treatment compared with CON. Pulmonary function did not significantly change in either of the periods. In contrast to CON, GH treatment improved absolute .VO(2peak) (+19%, P < 0.01), peak ventilation (+14%, P < 0.01), and peak oxygen pulse (+18%, P < 0.01). Analysis of variance revealed that most of the changes (71%) in .VO(2peak) could be explained by those in LBM and FEV1 (P = 0.001).\n GH treatment clearly improved exercise tolerance, presumably resulting from the combined effects of GH on the muscular, cardiovascular, and pulmonary capacity.\n\n\nTitle 2:\nA multicenter, randomized, double-blind, placebo-controlled trial to evaluate the metabolic and respiratory effects of growth hormone in children with cystic fibrosis.\nAbstract 2:\nPositive effects of growth hormone therapy on growth, nutritional status, and lung function have been observed in patients with cystic fibrosis, but the current evidence is based on unblinded studies that involved a small number of patients. This trial was designed as a multicenter, randomized, placebo-controlled, double-blind study to assess the efficacy and safety of 2 dosages of growth hormone in cystic fibrosis.\n Sixty-three dystrophic patients with cystic fibrosis were randomly assigned for 24 weeks to 1 of 3 treatment arms: growth hormone dosage of 0.11 IU/kg body weight per day, growth hormone dosage of 0.21 IU/kg body weight per day, or placebo. The 24-week double-blind period was followed by an open treatment period of 24 weeks. The primary outcome measure was the change in forced expiratory volume in 1 second in percentage predicted from baseline. Secondary outcome measures were changes in height, weight, and exercise tolerance.\n Height, growth velocity, and growth factors (insulin-like growth factor 1 and insulin-like growth factor-binding protein 3) increased significantly in both treatment groups, whereas weight gain did not differ between the growth hormone groups and placebo. 
A trend toward improvement in absolute forced vital capacity was observed in patients who received the higher growth hormone dosage, whereas forced expiratory volume in 1 second did not change significantly with growth hormone treatment. Maximal oxygen uptake during peak exercise increased significantly in treated patients. There were no significant differences in the frequency or severity of adverse effects or in the incidence of abnormalities in glucose metabolism.\n These data suggest that in the group investigated, growth hormone therapy was well tolerated and had positive metabolic effects but did not result in short-term improvement of lung function in patients with cystic fibrosis.\n\n\nTitle 3:\nProspective randomised treatment with recombinant human growth hormone in cystic fibrosis.\nAbstract 3:\nTo evaluate the efficacy and safety of treatment with recombinant growth hormone (rGH) in patients with cystic fibrosis (CF).\n Twenty patients with CF (aged 10-23 years) were randomised to age and sex matched treatment and control groups. The treatment group received daily subcutaneous injections of 1 IU/kg/wk rGH for 12 months. Pulmonary function (forced expiratory volume in one second (FEV1) and airway resistance), exercise capacity measured with a bicycle ergometer, body composition (dual energy x ray absorptiometry), and weight were assessed at the beginning of the study and after 6 and 12 months.\n rGH treatment did not improve weight and pulmonary function, but lean body mass increased significantly in the treatment group. Exercise capacity increased in the treatment group from 143 (16) W (mean (SD)) to 164 (19) W after 12 months of rGH treatment.\n Treatment of CF patients with rGH for one year improved the exercise capacity significantly but not pulmonary function. The improved exercise capacity needs confirmation in a larger population before such an expensive treatment is justified.\n\n\nTitle 4:\nA multi-center controlled trial of growth hormone treatment in children with cystic fibrosis.\nAbstract 4:\nWe evaluated safety and efficacy of recombinant human growth hormone (rhGH) for improving growth, lean body mass (LBM), pulmonary function, and exercise tolerance in children with cystic fibrosis (CF) and growth restriction.\n Multicenter, open-label, controlled clinical trial comparing outcomes in prepubertal children <14 years with CF, randomized in a 1:1 ratio to receive daily rhGH (Nutropin AQ) or no treatment (control) for 12 months, followed by a 6-month observation (month 18). Safety was monitored at each visit, including assessments of glucose tolerance.\n Sixty-eight subjects were randomized (control n = 32; rhGH n = 36). Mean height standard deviation score (SDS) in the rhGH group increased by 0.5 ± 0.4 at 12 months (mean ± SD, P < 0.001); the control group height SDS remained unchanged. Weight increased by 3.8 ± 1.8 versus 2.8 ± 1.5 kg, (mean ± SD, P = 0.0356) and LBM increased by 3.8 ± 1.8 versus 2.1 ± 1.4 kg (P = 0.0002) in the rhGH group versus controls, respectively. Forced vital capacity increased by 325 ± 319 in the rhGH group compared with 178 ± 152 ml in controls (mean ± SD, P = 0.032). Forced expiratory volume in 1 sec improved in both groups with a significant difference between groups after adjustment for baseline severity (LS mean ± SE: rhGH, 224 ± 37, vs. controls, 108 ± 40 ml; P = 0.04). There was no difference between groups in exercise tolerance (6-min walk distance) at 1 year. 
Changes in glucose tolerance for the two groups were similar over the 12-month study period, with three subjects developing IGT and one CFRD in each group. One rhGH-treated patient developed increased intracranial pressure.\n Treatment with rhGH in prepubertal children with CF was effective in promoting growth, weight, LBM, lung volume, and lung flows, and had an acceptable safety profile.\n Copyright © 2011 Wiley Periodicals, Inc." }, { "role": "assistant", "content": "Recombinant growth hormone therapy is effective in improving the intermediate outcomes in height, weight and lean tissue mass when compared with no treatment. One measure of pulmonary function test showed moderate improvement. No significant changes in quality of life, clinical status or side-effects were observed in this review. Long-term, well-designed randomised controlled trials of recombinant growth hormone therapy in patients with cystic fibrosis are required prior to evaluation of human growth hormone treatment for routine use in patients." } ]