dataset: stringclasses (45 values)
id: stringlengths (17 to 64)
messages: listlengths (2 to 2)
science.medmentions_ner
science.medmentions_ner.1862
[ { "role": "user", "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"HealthCareActivity\", \"InjuryOrPoisoning\", \"BodySubstance\", \"IntellectualProduct\", \"AnatomicalStructure\", \"SpatialConcept\", \"Chemical\", \"Bacterium\", \"MedicalDevice\", \"Organization\", \"BiomedicalOccupationOrDiscipline\", \"Finding\", \"BiologicFunction\", \"Virus\", \"ResearchActivity\", \"ClinicalAttribute\", \"PopulationGroup\", \"Eukaryote\", \"BodySystem\", \"Food\", \"ProfessionalOrOccupationalGroup\"].\n\nPlease return the output as a JSON object of the format: {\"Virus\": [\"HIV\", ...], \"Bacterium\": [\"MRSA\", ...], \"AnatomicalStructure\": [\"Lung\", ...], \"BodySystem\": [\"CNS\", ...], \"BodySubstance\": [\"Serum\", ...], \"Finding\": [\"Headache\", ...], \"InjuryOrPoisoning\": [\"Fracture\", ...], \"BiologicFunction\": [\"Death\", ...], \"HealthCareActivity\": [\"Biopsy\", ...], \"ResearchActivity\": [\"Clinical trial\", ...], \"MedicalDevice\": [\"Lenses\", ...], \"SpatialConcept\": [\"Camps\", ...], \"BiomedicalOccupationOrDiscipline\": [\"Forensic medicine\", ...], \"Organization\": [\"WHO\", ...], \"ProfessionalOrOccupationalGroup\": [\"Provider\", ...], \"PopulationGroup\": [\"Swimmers\", ...], \"Chemical\": [\"Gold\", ...], \"Food\": [\"Rice\", ...], \"IntellectualProduct\": [\"RPAM\", ...], \"ClinicalAttribute\": [\"Biomarker\", ...], \"Eukaryote\": [\"Dogs\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nVariants in congenital hypogonadotrophic hypogonadism genes identified in an Indonesian cohort of 46,XY under-virilised boys Congenital hypogonadotrophic hypogonadism (CHH) and Kallmann syndrome (KS) are caused by disruption to the hypothalamic-pituitary-gonadal (H-P-G) axis. In particular, reduced production, secretion or action of gonadotrophin-releasing hormone (GnRH) is often responsible. Various genes, many of which play a role in the development and function of the GnRH neurons, have been implicated in these disorders. Clinically, CHH and KS are heterogeneous; however, in 46,XY patients, they can be characterised by under-virilisation phenotypes such as cryptorchidism and micropenis or delayed puberty. In rare cases, hypospadias may also be present. Here, we describe genetic mutational analysis of CHH genes in Indonesian 46,XY disorder of sex development patients with under-virilisation. We present 11 male patients with varying degrees of under-virilisation who have rare variants in known CHH genes. Interestingly, many of these patients had hypospadias. We postulate that variants in CHH genes, in particular PROKR2, PROK2, WDR11 and FGFR1 with CHD7, may contribute to under-virilisation phenotypes including hypospadias in Indonesia." 
}, { "role": "assistant", "content": "{\"AnatomicalStructure\": [\"FGFR1\", \"PROKR2\", \"neurons\", \"WDR11\", \"micropenis\", \"genes\", \"PROK2\", \"CHD7\", \"cryptorchidism\"], \"Bacterium\": [], \"BiologicFunction\": [\"hypogonadotrophic hypogonadism\", \"disorders\", \"production\", \"KS\", \"CHH\", \"under-virilised\", \"delayed puberty\", \"hypospadias\", \"secretion\", \"under-virilisation\", \"genetic mutational\", \"development\", \"Kallmann syndrome\", \"46,XY disorder of sex development\"], \"BiomedicalOccupationOrDiscipline\": [], \"BodySubstance\": [], \"BodySystem\": [\"hypothalamic-pituitary-gonadal (H-P-G) axis\"], \"Chemical\": [\"gonadotrophin-releasing hormone\", \"GnRH\"], \"ClinicalAttribute\": [], \"Eukaryote\": [], \"Finding\": [\"46,XY\"], \"Food\": [], \"HealthCareActivity\": [], \"InjuryOrPoisoning\": [], \"IntellectualProduct\": [], \"MedicalDevice\": [], \"Organization\": [], \"PopulationGroup\": [\"Indonesian\", \"cohort\"], \"ProfessionalOrOccupationalGroup\": [], \"ResearchActivity\": [\"analysis\"], \"SpatialConcept\": [\"Indonesia\"], \"Virus\": []}" } ]
science.scientific_papers_summarization_single_doc_pubmed
science.scientific_papers_summarization_single_doc_pubmed.26868
[ { "role": "user", "content": "You will be presented with the full text of a biomedical research paper. Please write an abstract for this paper. Your response should include the abstract and no additional text.\n\nPaper text:\na fragment corresponding to the extracellular domain ( 20160 amino acids ) of human trii was cloned into pbtm116 .\na total of 10 10 independent colonies were screened as previously described ( colland et al . , 2004 ) .\nthe prey fragments of the positive clones were pcr amplified and sequenced . the human embryonic kidney cell line\n293 t , hscs , human rd cells , mouse c2c12 cells , monkey kidney cos7 cells , and mink lung mvlu1 cells were transfected using lipofectamine - plus reagent ( invitrogen ) according to the manufacturer 's instructions . for experiments with adam12 antisense\n, cells were incubated with 2 m of antisense oligonucleotides to adam12 ( ctctcttttatgccttct and ccccattcctttctcc ) or random control oligonucleotides ( actactacactagactac and gctctatgactcccag ) as previously described ( lafuste et al . , 2005 ) . for rnai experiments\nare3-lux , gaga9-lux , fast1 , ha - smad4 , myc - smad2 , and myc - smad7 were previously described ( dumont et al .\nthe expression vector for ha - trii was provided by j. wrana ( samuel lunenfeld research institute , mount sinai hospital , toronto , ontario , canada ) .\nexpression constructs for wild type or mutants of adam12 and adam12 fused to egfp were prepared as previously described ( hougaard et al . , 2000 ) .\nthe expression vector encoding adam12 shrna or scrambled shrna was constructed using the block - it u6 rna system ( invitrogen ) according to the manufacturer 's instructions . the expression vector for flag - adam12\nwas obtained by fusing the flag epitope to the n terminus of the adam12 fragment ( amino acids 142739 ) isolated in the yeast two - hybrid screen .\nhepg2 , c2c12 , or 293 t cells were transfected by lipofectamine , and , 30 h later , they were treated for 18 h with 2 ng / ml human tgf-1 ( sigma - aldrich ) .\ncell extracts were assayed for luciferase activity using the dual luciferase reporter assay system ( promega ) , and luciferase activities were normalized on the basis of renilla luciferase expression from the prl - tk control vector . for potassium depletion experiments , transfected cells were incubated in medium and water ( 1:1 ) for 5 min at 37c followed by incubation in medium depleted or not depleted in kcl for 1 h at 37c before stimulation with tgf-. after transfection , cells were lysed in lysis buffer ( dumont et al . , 2003 ) , and cell lysates\nwere subjected to immunoprecipitation with the appropriate antibody for 2 h followed by adsorption to sepharose bead coupled protein g for 1 h. immunoprecipitates were washed five times with lysis buffer containing 0.5% np-40 . for the association of endogenous trii with endogenous adams ,\nimmunoprecipitates were washed three times with lysis buffer containing 0.5% np-40 and two times with lysis buffer containing 1% np-40 .\nthen , samples were separated by sds - page and analyzed by immunoblotting with the indicated antibodies .\nthe following antibodies were used : anti - adam12 rb 122 ( gilpin et al . 
, 1998 ) , anti - flag m2 ( sigma - aldrich ) , anti - ha and anti \nmyc-9e10 ( boehringer manheim ) , antiphospho - smad2 ( cell signaling technologies ) , anti - smad2 ( zymed laboratories ) , anti - adam10 ( prosci ) , anti - adam17 ( chemicon ) , and antiactin , anti - trii , anti bone morphogenetic protein rii , anti - smad7 , anti pai-1 , and anti - junb ( santa cruz biotechnology , inc . ) .\ncells were fixed in 3% pfa , permeabilized with 0.1% triton x-100 , and incubated for 60 min at room temperature with the appropriate primary antibody followed by the appropriate secondary antibody .\n/ ml 1,4-diazabicyclo[2.2.2]octane , and viewed on an automated microscope ( dmrxa2 ; leica ) equipped with a camera ( coolsnap es n&b ; roper scientific ) and a 63 hcx pl apo na 1.32 oil objective ( leica ) .\nz steps were submitted to deconvolution ( nearest neighbor method ) by using metamorph software ( universal imaging corp . ) .\ntotal rna were extracted by the guanidinium thiocianate / cesium chloride method , and real - time quantitative pcr was performed by the fluorescent dye sybr green methodology as previously described ( le pabic et al . , 2003 ) .\nprimer pairs for target genes were as follows : pai-1 , sense ( 5-gtctttccgaccaagagcag-3 ) and antisense ( 5-cgatcctgaccttttgcagt-3 ) ; adam12 , sense ( 5-gtttggctttggaggaagcacag-3 ) and antisense ( 5-tgcaggcagaggcttctgagg-3 ) ; col1a2 , sense ( 5-ggtggtggttatgactttg-3 ) and antisense ( 5-atacaggtttcgccggtag-3 ) ; and 18s , sense ( 5-cgccgctagaggtgaaattc-3 ) and antisense ( 5-ttggcaaatgctttcgctc-3 ) .\ns1 a shows the effect of increasing amounts of adam12 on expression of the tgf-responsive gene pai-1 .\ns1 b shows the tgf-dependent expression of endogenous adam12 or col1a2 in cells treated with adam12 antisense or control oligonucleotides .\ns3 shows the association of trii with several transmembrane proteins as indicated by labeling with a membrane - impermeable biotinylation reagent .\na fragment corresponding to the extracellular domain ( 20160 amino acids ) of human trii was cloned into pbtm116 .\na total of 10 10 independent colonies were screened as previously described ( colland et al . , 2004 ) .\nthe human embryonic kidney cell line 293 t , hscs , human rd cells , mouse c2c12 cells , monkey kidney cos7 cells , and mink lung mvlu1 cells were transfected using lipofectamine - plus reagent ( invitrogen ) according to the manufacturer 's instructions . for experiments with adam12 antisense\n, cells were incubated with 2 m of antisense oligonucleotides to adam12 ( ctctcttttatgccttct and ccccattcctttctcc ) or random control oligonucleotides ( actactacactagactac and gctctatgactcccag ) as previously described ( lafuste et al . , 2005 ) . for rnai experiments ,\nare3-lux , gaga9-lux , fast1 , ha - smad4 , myc - smad2 , and myc - smad7 were previously described ( dumont et al . , 2003 ;\nwas provided by j. wrana ( samuel lunenfeld research institute , mount sinai hospital , toronto , ontario , canada ) .\nexpression constructs for wild type or mutants of adam12 and adam12 fused to egfp were prepared as previously described ( hougaard et al . 
, 2000 ) .\nthe expression vector encoding adam12 shrna or scrambled shrna was constructed using the block - it u6 rna system ( invitrogen ) according to the manufacturer 's instructions .\nthe expression vector for flag - adam12 was obtained by fusing the flag epitope to the n terminus of the adam12 fragment ( amino acids 142739 ) isolated in the yeast two - hybrid screen .\nhepg2 , c2c12 , or 293 t cells were transfected by lipofectamine , and , 30 h later , they were treated for 18 h with 2 ng / ml human tgf-1 ( sigma - aldrich ) .\ncell extracts were assayed for luciferase activity using the dual luciferase reporter assay system ( promega ) , and luciferase activities were normalized on the basis of renilla luciferase expression from the prl - tk control vector . for potassium depletion experiments , transfected cells were incubated in medium and water ( 1:1 ) for 5 min at 37c followed by incubation in medium depleted or not depleted in kcl for 1 h at 37c before stimulation with tgf-.\nafter transfection , cells were lysed in lysis buffer ( dumont et al . , 2003 ) , and cell lysates\nwere subjected to immunoprecipitation with the appropriate antibody for 2 h followed by adsorption to sepharose bead coupled protein g for 1 h. immunoprecipitates were washed five times with lysis buffer containing 0.5% np-40 . for the association of endogenous trii with endogenous adams ,\nimmunoprecipitates were washed three times with lysis buffer containing 0.5% np-40 and two times with lysis buffer containing 1% np-40 .\nthen , samples were separated by sds - page and analyzed by immunoblotting with the indicated antibodies .\nthe following antibodies were used : anti - adam12 rb 122 ( gilpin et al . , 1998 ) , anti - flag m2 ( sigma - aldrich ) , anti - ha and anti \nmyc-9e10 ( boehringer manheim ) , antiphospho - smad2 ( cell signaling technologies ) , anti - smad2 ( zymed laboratories ) , anti - adam10 ( prosci ) , anti - adam17 ( chemicon ) , and antiactin , anti - trii , anti bone morphogenetic protein rii , anti - smad7 , anti \ncells were fixed in 3% pfa , permeabilized with 0.1% triton x-100 , and incubated for 60 min at room temperature with the appropriate primary antibody followed by the appropriate secondary antibody .\nthe coverslips were washed , mounted in pbs containing 50% glycerol and 1 mg / ml 1,4-diazabicyclo[2.2.2]octane , and viewed on an automated microscope ( dmrxa2 ; leica ) equipped with a camera ( coolsnap es n&b ; roper scientific ) and a 63 hcx pl apo na 1.32 oil objective ( leica ) .\nz steps were submitted to deconvolution ( nearest neighbor method ) by using metamorph software ( universal imaging corp . ) .\ntotal rna were extracted by the guanidinium thiocianate / cesium chloride method , and real - time quantitative pcr was performed by the fluorescent dye sybr green methodology as previously described ( le pabic et al . 
, 2003 ) .\nprimer pairs for target genes were as follows : pai-1 , sense ( 5-gtctttccgaccaagagcag-3 ) and antisense ( 5-cgatcctgaccttttgcagt-3 ) ; adam12 , sense ( 5-gtttggctttggaggaagcacag-3 ) and antisense ( 5-tgcaggcagaggcttctgagg-3 ) ; col1a2 , sense ( 5-ggtggtggttatgactttg-3 ) and antisense ( 5-atacaggtttcgccggtag-3 ) ; and 18s , sense ( 5-cgccgctagaggtgaaattc-3 ) and antisense ( 5-ttggcaaatgctttcgctc-3 ) .\ns1 a shows the effect of increasing amounts of adam12 on expression of the tgf-responsive gene pai-1 .\ns1 b shows the tgf-dependent expression of endogenous adam12 or col1a2 in cells treated with adam12 antisense or control oligonucleotides .\ns3 shows the association of trii with several transmembrane proteins as indicated by labeling with a membrane - impermeable biotinylation reagent ." }, { "role": "assistant", "content": "transforming growth factor- ( tgf- ) regulates a wide variety of biological processes through two types of ser / thr transmembrane receptors : the tgf- type i receptor and the tgf- type ii receptor ( trii ) . upon ligand binding , tgf- type i receptor activated by trii propagates signals to smad proteins , which mediate the activation of tgf- target genes . in this study \n , we identify adam12 ( a disintegrin and metalloproteinase 12 ) as a component of the tgf- signaling pathway that acts through association with trii . \n we found that adam12 functions by a mechanism independent of its protease activity to facilitate the activation of tgf- signaling , including the phosphorylation of smad2 , association of smad2 with smad4 , and transcriptional activation . furthermore , adam12 induces the accumulation of trii in early endosomal vesicles and stabilizes the trii protein presumably by suppressing the association of trii with smad7 . \n these results define adam12 as a new partner of trii that facilitates its trafficking to early endosomes in which activation of the smad pathway is initiated ." } ]
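The row above pairs a methods-heavy paper body with its reference abstract, the usual setup for single-document summarization. Rows like this are commonly scored with ROUGE; the sketch below uses the `rouge-score` package, an assumed evaluation choice rather than anything the dataset mandates.

```python
# pip install rouge-score  (assumed evaluation dependency, not part of the dataset)
from rouge_score import rouge_scorer

def score_summary(reference: str, prediction: str) -> dict:
    """Return ROUGE-1/2/L F1 for a generated abstract against the reference
    abstract stored in the assistant turn of a row."""
    scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
    return {name: s.fmeasure for name, s in scorer.score(reference, prediction).items()}
```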
science.scientific_lay_summarisation_plos_single_doc_summ
science.scientific_lay_summarisation_plos_single_doc_summ.22677
[ { "role": "user", "content": "You will be presented with the abstract, introduction, and discussion section from a biomedical research article. Your task is to create a summary that a layperson can understand, capturing the essence of the research article. Keep essential scientific terms, but ensure that the language remains clear and the concepts are explained in an uncomplicated manner.\n\nTitle: Thermodynamic Costs of Information Processing in Sensory Adaptation}\nArticle:\nAbstract:\nBiological sensory systems react to changes in their surroundings. They are characterized by fast response and slow adaptation to varying environmental cues. Insofar as sensory adaptive systems map environmental changes to changes of their internal degrees of freedom, they can be regarded as computational devices manipulating information. Landauer established that information is ultimately physical, and its manipulation subject to the entropic and energetic bounds of thermodynamics. Thus the fundamental costs of biological sensory adaptation can be elucidated by tracking how the information the system has about its environment is altered. These bounds are particularly relevant for small organisms, which unlike everyday computers, operate at very low energies. In this paper, we establish a general framework for the thermodynamics of information processing in sensing. With it, we quantify how during sensory adaptation information about the past is erased, while information about the present is gathered. This process produces entropy larger than the amount of old information erased and has an energetic cost bounded by the amount of new information written to memory. We apply these principles to the E. coli's chemotaxis pathway during binary ligand concentration changes. In this regime, we quantify the amount of information stored by each methyl group and show that receptors consume energy in the range of the information-theoretic minimum. Our work provides a basis for further inquiries into more complex phenomena, such as gradient sensing and frequency response.\nIntroduction:\nIn order to perform a variety of tasks, living organisms continually respond and adapt to their changing surroundings through diverse electrical, chemical and mechanical signaling pathways, called sensory systems[1]. In mammals, prominent examples are the neurons involved in the visual, olfactory, and somatic systems[2]–[5]. But also unicellular organisms lacking a neuronal system sense their environment: Yeast can sense osmotic pressure[6], and E. coli can monitor chemical gradients[7], temperatures[8] and pH[9]. Despite the diversity in biochemical details, sensory adaptation systems( SAS) exhibit a common behavior: long-term storage of the state of the environment and rapid response to its changes[10]. Intuitively, one expects that for these SAS to function, an energy source – such as ATP or SAM – is required; but is there a fundamental minimum energy needed? To tackle this question, we first relate a generic SAS to a binary information processing device, which is tasked to perform fast information acquisition on the environment( response) and to record subsequently the information into its longer term memory( adaptation). Since the foundational works of Maxwell, Szilard and Landauer, the intimate relationship between thermodynamic costs and information processing tasks has been intensely studied[11]–[17]. 
As a result, the natural mapping between a generic SAS and an information processing device allows us to quantify the minimal energetic costs of sensory adaptation. The idea of viewing biological processes as information processing tasks is not new[7],[12],[18]. However, rationalizing sensory adaptation is complicated by recent studies that have revealed that motifs in the underlying biochemical networks play a fundamental role in the thermodynamic costs. For instance, the steady state of feedback adaptive systems must be dissipative, with more dissipation leading to better adaptation[19], an observation echoed in the analysis of a minimal model of adaptive particle transport[20]. Other studies have suggested that some feedforward adaptive systems may require dissipation to sustain their steady state[21], while some may not[22],[23]. Furthermore, past studies[18],[24] have approached the notion of information by considering noisy inputs due to stochastic binding, a realm in which adaptation may not be relevant due to the separation of time-scales[25]. Here, we develop a different approach that avoids these caveats by considering a thermodynamically consistent notion of information that naturally incorporates the costs of sensing in sensory adaptation. Specifically, we derive a collection of universal bounds that relate the thermodynamic costs of sensing to the information processed. These bounds reveal for the first time that for a generic SAS, measuring an environmental change is energetically costly[( 6) below], while to erase the memory of the past is energetically free, but necessarily irreversible[( 5) below]. By formalizing and linking the information processing and thermodynamics of sensory systems, our work shows that there is an intrinsic cost of sensing due to the necessity to process information. To illustrate our generic approach, we study first a minimal four-state feedforward model and then a detailed ten-state feedback model of E. coli chemotaxis. Owing to the symmetry of its motif's topology the four-state feedforward model does not require energy to sustain its adapted state. Instead, all the dissipation arises from information processing: acquiring new information consumes energy, while erasing old information produces entropy. By contrast, the E. coli model sustains its nonequilibrium steady state( NESS) by constantly dissipating energy, a requirement for adaptation with a feedback topology[19]. In this nonequilibrium setting, we generalize our thermodynamic bounds in order to pinpoint the additional energy for sensing over that required to maintain the steady state. We find with this formalism that in E. coli chemotaxis the theoretical minimum demanded by our bounds accounts for a sizable portion of the energy spent by the bacterium on its SAS.\nDiscussion:\nWe have derived generic information-theoretic bounds to sensory adaptation. We have focused on response-adaptive sensory systems subject to an abrupt environmental switch. This was merely a first step, but the procedure we have outlined here only relies on the validity of the second law of thermodynamics, and therefore can be extend to any small system affected by a random external perturbation to which we can apply stochastic thermodynamics, which is reviewed in[37]. Our predictions are distinct from( although reminiscent of) Landauer's principle[11],[12], which bounds the minimum energy required to reset an isolated memory. By contrast, the information erased in our system is its correlations with the signal. 
There is another important distinction from the setup of Landauer, and more broadly the traditional setup in the thermodynamics of computation[11] as well as the more recent advancements on the thermodynamics of information processing in the context of measurement and feedback[15],[38]–[45]. There the memory is reset by changing or manipulating it by varying its energy landscape. In our situation, the erasure comes about because the signal is switched. The loss of correlations is stimulated by a change in the measured system – that is the environmental signal; erasure does not occur because the memory itself is altered. Also relevant is[46], which addresses the minimum dissipated work for a system to make predictions about the future fluctuations of the environmental signal, in contrast to the measured information about the current signal, which we have considered. Our results predict that energy is required to sense changes in the environment, but do not dictate that source of energy. Our equilibrium feedforward model is able to sense and adapt by consuming energy provided by the environment. E. coli's feedback, however, uses mostly external energy to respond, but must consume energy of its own to adapt. The generic bounds here established apply to these two distinct basic topologies, irrespective of their fundamentally different energetics. For E. coli, to quantify to what extent is affected by SAM consumption and ligand binding, a more detailed chemical model is required in conjunction with a partitioning of the excess work into distinct terms. An interesting open question in this regard is why nature would choose the dissipative steady state of E. coli, when theoretically the cost of sensing could be paid by the environment. For a ligand change of, in the region of high adaptation, the information measured/erased is bits. We observed that the corresponding average change in the methylation level for a chemoreceptor is, suggesting that a methylation level can store bits for such 1-bit step response operations. Despite the small adaptation error, information storage is limited by fluctuations arising from the finite number of discrete methylation levels. Receptors' cooperativity, which is known to reduce fluctuations of the collective methylation level, may prevent this, allowing them to store more information. On the energetic side, we have shown that the cost of sensing these ligand changes per receptor is around 10% of the cost of sustaining the corresponding adaptive machinery. We also showed that the energetic cost of binary operations is roughly twice beyond its minimum for large ligand changes, in stark contrast with everyday computers for which the difference is orders of magnitude. Taken together these numbers suggest that 5% of the energy a cell uses in sensing is determined by information-thermodynamic bounds, and is thus unavoidable. Future work should include addressing sensory adaptation in more complex scenarios. One which has recently aroused attention is fluctuating environments, which so far has been addressed using trajectory information[44],[45],[47]. However, under physiological conditions this is unlikely to play a significant role given the large separation of time-scales between binding, response, and adaptation[25]. Another scenario is a many bits step operation, in which instead of high and low signals a large discrete set of ligand concentrations is considered. 
Frequency response and gradient sensing are also appealing[27], since in them the system is in a dynamic steady state in which the memory is continuously erased and rewritten. Analysis of such scenarios is far from obvious, but the tools developed in this work constitute the first step in developing their theoretical framework." }, { "role": "assistant", "content": "The ability to process information is a ubiquitous feature of living organisms . Indeed , in order to survive , every living being , from the smallest bacterium to the biggest mammal , has to gather and process information about its surrounding environment . In the same way as our everyday computers need power to function , biological sensors need energy in order to gather and process this sensory information . How much energy do living organisms have to spend in order to get information about their environment ? In this paper , we show that the minimum energy required for a biological sensor to detect a change in some environmental signal is proportional to the amount of information processed during that event . In order to know how far a real biological sensor operates from this minimum , we apply our predictions to chemo-sensing in the bacterium Escherichia coli and find that the theoretical minimum corresponds to a sizable portion of the energy spent by the bacterium ." } ]
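The sensory-adaptation row above leans throughout on Landauer's principle. For reference, the standard form of the bound (the paper's model-specific refinements are not reproduced here) is:

```latex
\Delta E \;\ge\; k_{B} T \ln 2 \cdot \Delta I
% Erasing or recording \Delta I bits at temperature T dissipates at least
% k_B T \ln 2 per bit; at T = 300\,\mathrm{K} this floor is roughly
% 3 \times 10^{-21}\,\mathrm{J} per bit.
```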
science.qasa_abstractive_qa
science.qasa_abstractive_qa.1505
[ { "role": "user", "content": "You will be shown an excerpt from a computer science scientific research paper, followed by a question about the paper. Please answer the question. Do not include any text in your response other than the answer.\n\nContext: Diffusion models are probabilistic generative models that are trained to learn a data distribution by the gradual denoising of a variable sampled from a Gaussian distribution. Specifically, this corresponds to learning the reverse process of a fixed-length Markovian forward process. In simple terms, a conditional diffusion model \\hat{\\mathbf{x}}_{\\theta} is trained using a squared error loss to denoise a variably-noised image \\mathbf{z}_{t}\\coloneqq\\alpha_{t}\\mathbf{x}+\\sigma_{t}{\\bm{\\epsilon}} as follows:\\mathbb{E}_{\\mathbf{x},\\mathbf{c},{\\bm{\\epsilon}},t}\\!\\left[w_{t}\\|\\hat{\\mathbf{x}}_{\\theta}(\\alpha_{t}\\mathbf{x}+\\sigma_{t}{\\bm{\\epsilon}},\\mathbf{c})-\\mathbf{x}\\|^{2}_{2}\\right](1)where \\mathbf{x} is the ground-truth image, \\mathbf{c} is a conditioning vector (e.g., obtained from a text prompt), {\\bm{\\epsilon}}\\sim\\mathcal{N}(\\mathbf{0},\\mathbf{I}) is a noise term and \\alpha_{t},\\sigma_{t},w_{t} are terms that control the noise schedule and sample quality, and are functions of the diffusion process time t\\sim\\mathcal{U}([0,1]).At inference time, the diffusion model is sampled by iteratively denoising \\mathbf{z}_{t_{1}}\\sim\\mathcal{N}(\\mathbf{0},\\mathbf{I}) using either the deterministic DDIM song2020denoising or the stochastic ancestral sampler ho2020denoising . Intermediate points \\mathbf{z}_{t_{1}},\\dotsc,\\mathbf{z}_{t_{T}}, where 1=t_{1}>\\cdots>t_{T}=0, are generated, with decreasing noise levels. These points, \\hat{\\mathbf{x}}^{t}_{0}\\coloneqq\\hat{\\mathbf{x}}_{\\theta}(\\mathbf{z}_{t},\\mathbf{c}), are functions of the \\mathbf{x}-predictions. The details of text-conditioning in text-to-image diffusion models are of high importance for visual quality and semantic fidelity. Ramesh et al. ramesh2022hierarchical use CLIP text embeddings that are translated into image embeddings using a learned prior, while Saharia et al. saharia2022photorealistic use a pre-trained T5-XXL language model raffel2020exploring . In our work, we use the latter.Language models like T5-XXL generate embeddings of a tokenized text prompt, and vocabulary encoding is an important pre-processing step for prompt embedding. In order to transform a text prompt \\mathbf{P} into a conditioning embedding \\mathbf{c}, the text is first tokenized using a tokenizer f using a learned vocabulary. Following saharia2022photorealistic , we use the SentencePiece tokenizer kudo2018sentencepiece . After tokenizing a prompt \\mathbf{P} using tokenizer f we obtain a fixed-length vector f(\\mathbf{P}). The language model \\Gamma is conditioned on this token identifier vector to produce an embedding \\mathbf{c}\\coloneqq\\Gamma(f(\\mathbf{P})). Finally, the text-to-image diffusion model is directly conditioned on \\mathbf{c}.\n\nQuestion: How is the conditioning vector obtained?" }, { "role": "assistant", "content": "From text prompts, the conditioning embedding vector is obtained. The text prompt is first tokenized to provide a fixed length token identification vector, and then the language model maps this vector to produce an embedding that serves as a conditioning vector." } ]
science.data_reco_mcq_mc
science.data_reco_mcq_mc.2365
[ { "role": "user", "content": "You are provided with a research question, keyphrases about the question, a description of candidate datasets and dataset options. Read the description of popular datasets provided below and select the ones that can be used to validate the following research question. Use your knowledge of machine learning datasets to make the best judgement.\nYour response should be formatted as a `json` array. For instance, for the query \"Semi supervised image classification\", a sample response might be: [\"CIFAR-10\", \"CIFAR-100\"]. Do not include any extra text in the response other than the answer array.\n\nQuery: I propose an unsupervised method to decouple the lexical and affective information from facial configurations.\n\nKeyphrases: emotion recognition video images\n\nDataset description:\nAff-Wild: Aff-Wild is a dataset for emotion recognition from facial images in a variety of head poses, illumination conditions and occlusions.\nFlightmare Simulator: Flightmare is composed of two main components: a configurable rendering engine built on Unity and a flexible physics engine for dynamics simulation. Those two components are totally decoupled and can run independently from each other.\nSherLIiC: SherLIiC is a testbed for lexical inference in context (LIiC), consisting of 3985 manually annotated inference rule candidates (InfCands), accompanied by (i) ~960k unlabeled InfCands, and (ii) ~190k typed textual relations between Freebase entities extracted from the large entity-linked corpus ClueWeb09. Each InfCand consists of one of these relations, expressed as a lemmatized dependency path, and two argument placeholders, each linked to one or more Freebase types.\nRAF-DB: The Real-world Affective Faces Database (RAF-DB) is a dataset for facial expression. It contains 29672 facial images tagged with basic or compound expressions by 40 independent taggers.\nIEMOCAP: Multimodal Emotion Recognition IEMOCAP The IEMOCAP dataset consists of 151 videos of recorded dialogues, with 2 speakers per session for a total of 302 videos across the dataset. Each segment is annotated for the presence of 9 emotions (angry, excited, fear, sad, surprised, frustrated, happy, disappointed and neutral) as well as valence, arousal and dominance.\nSEMAINE: The SEMAINE videos dataset contains spontaneous data capturing the audiovisual interaction between a human and an operator undertaking the role of an avatar with four personalities: Poppy (happy), Obadiah (gloomy), Spike (angry) and Prudence (pragmatic). The audiovisual sequences have been recorded at a video rate of 25 fps (352 x 288 pixels).\n\nOptions:- A: Aff-Wild B: Flightmare Simulator C: SherLIiC D: RAF-DB E: IEMOCAP F: SEMAINE" }, { "role": "assistant", "content": "[\"IEMOCAP\", \"SEMAINE\"]" } ]
science.chemprot_re
science.chemprot_re.188
[ { "role": "user", "content": "Given an abstract from a research paper, your task is to identify and label relationships between chemical compounds and genes mentioned in the abstract.\n\nThe relationship type must be one of ['DOWNREGULATOR', 'SUBSTRATE', 'INDIRECT-REGULATOR', 'PART-OF', 'MODULATOR-ACTIVATOR', 'REGULATOR', 'INHIBITOR', 'COFACTOR', 'UPREGULATOR', 'ACTIVATOR', 'ANTAGONIST', 'NOT', 'INDIRECT-DOWNREGULATOR', 'SUBSTRATE_PRODUCT-OF', 'INDIRECT-UPREGULATOR', 'AGONIST', 'PRODUCT-OF', 'MODULATOR', 'DIRECT-REGULATOR', 'UNDEFINED', 'AGONIST-INHIBITOR', 'AGONIST-ACTIVATOR', 'MODULATOR-INHIBITOR'].\n\nPlease format your output as a JSON array. Each entry in the array should express a single relation, formatted as [\"<Entity_A>\", \"<RELATION_A_B>\", \"<Entity_B>\"]. If no relations can be found, please output an empty JSON array [].\n\nAbstract:\n\nPharmacokinetic Interactions between Monoamine Oxidase A Inhibitor Harmaline and 5-Methoxy-N, N-Dimethyltryptamine, and the Impact of CYP2D6 Status. 5-Methoxy-N, N-dimethyltryptamine (5-MeO-DMT or street name \" 5-MEO \") is a newer designer drug belonging to a group of naturally occurring indolealkylamines. Our recent study has demonstrated that coadministration of monoamine oxidase A (MAO-A) inhibitor harmaline (5 mg/kg) increases systemic exposure to 5-MeO-DMT (2 mg/kg) and active metabolite bufotenine. This study is aimed at delineating harmaline and 5-MeO-DMT pharmacokinetic (PK) interactions at multiple dose levels, as well as the impact of CYP2D6 that affects harmaline PK and determines 5-MeO-DMT O-demethylation to produce bufotenine. Our data revealed that inhibition of MAO-A-mediated metabolic elimination by harmaline (2, 5, and 15 mg/kg) led to a sharp increase in systemic and cerebral exposure to 5-MeO-DMT (2 and 10 mg/kg) at all dose combinations. A more pronounced effect on 5-MeO-DMT PK was associated with greater exposure to harmaline in wild-type mice than CYP2D6-humanized (Tg-CYP2D6) mice. Harmaline (5 mg/kg) also increased blood and brain bufotenine concentrations that were generally higher in Tg-CYP2D6 mice. Surprisingly, greater harmaline dose (15 mg/kg) reduced bufotenine levels. The in vivo inhibitory effect of harmaline on CYP2D6-catalyzed bufotenine formation was confirmed by in vitro study using purified CYP2D6. Given these findings, a unified PK model including the inhibition of MAO-A- and CYP2D6-catalyzed 5-MeO-DMT metabolism by harmaline was developed to describe blood harmaline, 5-MeO-DMT, and bufotenine PK profiles in both wild-type and Tg-CYP2D6 mouse models. This PK model may be further employed to predict harmaline and 5-MeO-DMT PK interactions at various doses, define the impact of CYP2D6 status, and drive harmaline-5-MeO-DMT pharmacodynamics." }, { "role": "assistant", "content": "[[\"harmaline\", \"INHIBITOR\", \"monoamine oxidase A\"], [\"harmaline\", \"DOWNREGULATOR\", \"CYP2D6\"], [\"Harmaline\", \"INHIBITOR\", \"Monoamine Oxidase A\"], [\"bufotenine\", \"PRODUCT-OF\", \"CYP2D6\"], [\"harmaline\", \"INHIBITOR\", \"MAO-A\"]]" } ]
science.covid_deepset_qa
science.covid_deepset_qa.604
[ { "role": "user", "content": "You will be shown a paragraph from a biomedical research paper, followed by a question about the paper. Please extract text from the paragraph to answer the question. Do not include any text in your repsonse other than the answer.\n\nParagraph: Etiology of Influenza-Like Illnesses from Sentinel Network Practitioners in Réunion Island, 2011-2012\n\nhttps://www.ncbi.nlm.nih.gov/pmc/articles/PMC5031398/\n\nSHA: f5ff89ebfdd0375d034c112c6c1c7e163fa69a0c\n\nAuthors: Brottet, Elise; Jaffar-Bandjee, Marie-Christine; Li-Pat-Yuen, Ghislaine; Filleul, Laurent\nDate: 2016-09-21\nDOI: 10.1371/journal.pone.0163377\nLicense: cc-by\n\nAbstract: In Réunion Island, despite an influenza surveillance established since 1996 by the sentinel general practitioner’s network, little is known about the etiology of Influenza like-illness (ILI) that differs from influenza viruses in a tropical area. We set up a retrospective study using nasal swabs collected by sentinel GPs from ILI patients in 2011 and 2012. A total of 250 swabs were randomly selected and analyzed by multiplex reverse transcriptase polymerase chain reaction (RT-PCR) including research of 18 viruses and 4 bacteria. We detected respiratory viruses in 169/222 ( 76.1 %) samples, mostly rhinovirus (23.4%), influenza A virus (21.2%), influenza B virus (12.6%), coronavirus (4.9%) and Human metapneumovirus (3.6%). Nine swabs (5.3% of positive swabs) revealed co-infections with two viruses identified, among which six concerned co-infections with influenza viruses. We observed important seasonal differences, with circulation of Human Metapneumoviruses, RSV A and B and coronavirus only during summer; whereas parainfluenza viruses were identified only during winter. In conclusion, this study highlights a substantial circulation of multiple respiratory pathogens in Réunion Island throughout the year. It shows that ILI are not only attributable to influenza and underlines the need for biological surveillance. As the use of multiplex RT-PCR showed its efficacy, it is now used routinely in the surveillance of ILI. Text: Influenza like-illness (ILI) or acute respiratory infections can be caused by several types of respiratory viruses or bacteria in humans [1] . Influenza viruses, Respiratory Syncytial viruses (RSV) and Parainfluenza viruses are identified as major viruses mostly responsible for ILI and pneumonia in several studies [2] . However practitioners cannot diagnose the infection without a biological test confirmation. Unfortunately, these infections causes are identified in less than 50% [3] . Réunion Island, a French overseas territory with 850,000 inhabitants, is located in the southern hemisphere between Madagascar and Mauritius in the Indian Ocean (Latitude: 21°05.2920 S Longitude: 55°36.4380 E.). The island benefits from a healthcare system similar to mainland France and epidemiological surveillance has been developed by the regional office of the French Institute for Public Health Surveillance (Cire OI), based on the surveillance system of mainland France [4] . Influenza activity generally increases during austral winter, corresponding to summer in Europe [5] . Since 2011, influenza vaccination campaign in Reunion Island starts in April and the vaccine used corresponds to World Health Organization recommendations for the southern hemisphere. Since 1996, clinical and biological influenza surveillance has been based on a sentinel practitioner's network [6] . 
In 2014, this network was composed of 58 general practitioners (GPs) spread over the island and represented around 7% of all Réunion Island GPs. Nasal swabs are randomly collected all along the year and are tested by RT-PCR for influenza viruses. Among these surveillance samples, 40 to 50% are tested positive for influenza A virus, A(H1N1)pdm09 or B virus by the virological laboratory of the University Hospital Center of Réunion. Thus ILI samples tested negative for influenza are of unknown etiology. Several biological tools allow identifying respiratory pathogens from nasal swab. In recent years, multiplex reverse transcriptase polymerase chain reaction (RT-PCR) has been developed to identify several viruses simultaneously [7] [8] [9] [10] . We therefore used this new method to set up a retrospective study using swabs collected by sentinel GPs from 2011 to 2012. The main objective of our study was to characterize respiratory pathogens responsible for ILI consultations in sentinel GPs in 2011 and 2012. Secondary objectives were to highlight seasonal trends on respiratory pathogens circulation and to describe occurrence of co-infections, especially during the flu season. ILI was defined as a sudden onset of fever more than 38 degrees Celsius and cough, associated or not with other symptoms such as breathing difficulty, headache, etc. Every week, all GPs of the sentinel network were encouraged to collect a nasal swab from the first two patients who presented ILI since less than three days. After being tested for influenza viruses, the 994 swabs collected in 2011 and 2012 are frozen at -80°C at the university hospital center (CHU) laboratory. Based on the budget, a season-stratified sample of 250 swabs was randomly selected in order to describe circulating viruses including outside flu season. Random sampling was performed with Excel 1 using the anonymized surveillance database of the Cire OI. The sampling frame contained identification number of swab assigned by Cire OI, laboratory identification number, sex, age, date of onset of symptoms, date of swab collection and result of influenza RT-PCR. We used Respifinder 1 Smart 22 kits a multiplex RT-PCR (PathoFinder, Maastricht, The Netherlands) which can detect 22 respiratory pathogens. This assay is based on the multiplex ligation-dependent probe amplification (MLPA) technology. The reverse transcription and preamplification steps were performed on the epgradient Mastercycler 1 (Eppendorf) and the hybridization, ligation and detection steps on the LightCycler 1 480 system (Roche Applied Science). This method was chosen because of its high specificity, compared to other same methods (78% versus 33%) [3, 11] . Multiplex analysis allows for rapid production of diagnostic results. It thus allows highlighted the possible presence of eighteen respiratory viruses and four bacteria in one reaction by melt curve analysis: Influenza A not (H1N1 \n\nStatistical analyses were performed with Stata 1 and Excel 1 . Two seasons were defined to identify possible seasonal trends in circulation of the viruses: winter season during weeks 23 to 39 between June and September and summer season during the rest of the year. Data and swabs result from a surveillance system that received regulatory approvals, including the CNIL (National Commission for Information Technology and Civil Liberties Number 1592205) approval in July 2012. All the patients have received oral information and gave their consent for swab and data collection. 
Data were collected for surveillance purpose and are totally anonymous. Among the 250 randomly-selected swabs, 26 were not available anymore as they were sent to Influenza Reference Center for confirmation and characterization of the pathogenic agent. According to the sensitivity of the assay, two samples could be discordant results between Influenza PCR initially realized and Multiplex PCR. Thus they were deleted from the analysis: one is positive for Influenza in singleplex and negative for all tested pathogens in multiplex and one is positive for Influenza in singleplex and positive for PIV2 in multiplex. In total, 222 analyses were considered. Moreover, 53 samples were negative for all analyzed respiratory pathogens (23.9%) and 169 samples had at least one detected pathogen ( 76.1%), finally a total of 178 pathogens was identified. During the study period, a minority of the weeks (21 i.e. 20%) did not include any sampled swab, mainly outside flu season. Patients' sex-ratio was 0.63 (86 men and 136 women) and mean age was 28.4 years [min 0; max 81]. Ten percent had less than 5 years, 24% 5-15 years, 63% 15-65 years and only 3% were 65 and older. The respiratory pathogens most frequently identified in ILI swabs were rhinovirus (23.4%), influenza A not H1N1 (21.2%) and influenza B (12.6%) ( Table 1) . Among the 22 respiratory pathogens tested by the multiplex, only three were not found in any analyzed sample: Parainfluenza3, Legionella pneumophila and Bordetella pertussis. Regarding co-infections, nine swabs revealed the presence of two viruses, among which 6 involved influenza viruses (Table 2) . Analyses showed that some viruses are possibly seasonal and were circulating during a specific period of the year. They are detected only in summer for Human Metapneumovirus, RSV A and B, and influenza A(H1N1)pdm09. For the latter, it is specific to the studied period since the influenza A(H1N1)pdm09 virus reappeared in Réunion Island in October 2012 and was no longer circulating since late 2010. On the opposite, Parainfluenza 1,2 and 4 viruses were identified only in winter. For other pathogens, no specific period of detection was observed. A weekly description of samples was realized to study the distribution of respiratory pathogens in 2011 and 2012 (Fig 1) . Results of biological analyses were compared with data of ILI consultations declared by sentinel GPs in 2011 and 2012. We observed in 2011, after a first wave in June mainly due to influenza A not H1N1 virus, a second wave of ILI consultations with mainly identification of Parainfluenza viruses and not influenza viruses. In 2012, the second epidemic wave at the end of austral winter coincided with Influenza viruses and Rhinovirus circulation. Regarding negative swabs (Fig 2) , we observed no seasonality during the study period with a similar proportion whatever the season. This retrospective study based on a sentinel GPs network showed that not only influenza viruses are responsible for ILI consultations. Indeed, an important circulation of multiple pathogens was observed throughout the year, with 12 different types of pathogens identified in 2011 and 2012. Respiratory viral pathogens were present in 76.1% of samples, which is largely above results from annual influenza surveillance [12] . After influenza viruses, Rhinovirus and Coronavirus were the most common respiratory viruses in Réunion Island. Although samples were not taken every week, the sample was representative of ILI activity and consistent with flu season. 
Nevertheless, according to the low number of samples, it is difficult to conclude about seasonality. However in our study, RSV was circulating in summer season which is hot and rainy, which is confirmed by other studies in tropical region [13] . This study also highlighted several co-infections, showing the concomitant multiple etiology of ILI. Co-circulation was already observed in Réunion Island during the A(H1N1) pdm09 pandemic in addition to influenza virus, with identification of other respiratory viruses such as Rhinovirus or Coronavirus [14] . In mainland France, during this pandemic, circulation of major respiratory viruses was found, such as Rhinovirus, Parainfluenza, Coronavirus, Human Metapneumovirus, like in our publication [15] [16] . In our study, only 5.3% of positive swabs were co-infections whereas in two studies in Madagascar co-infections represented 27.3% and 29.4% [17] [18] . Despite the distance of 9,300 km between Réunion and France, the island is directly connected to Europe with four daily flights to France. These exchanges can impact respiratory pathogens circulation in southern and northern hemisphere. Results of this study can therefore be of interest to both Indian Ocean and Europe countries. Among the 148 swabs initially negative for influenza because not previously tested for any other viruses, the study found an etiology for 95 swabs. In total, only 53 swabs, representing 24% of the sample, remained without etiology with negative multiplex PCR results all along the year. Multiple hypotheses can explain this result: a poor quality of swabs, preventing from identifying a pathogen, noninfectious causes or other pathogens not included in the multiplex PCR. However, we couldn't test the negative swabs for RNAse P, a marker of human cells, which could provide a modicum of assurance that the swab contained human cells. Concerning the two samples divergent for influenza identification between the multiplex and singleplex PCR, we discarded them for the analysis; one was positive in Influenza with singleplex and positive in PIV with multiplex. It could be a false positive result from singleplex. Indeed, as the multiplex PCR assay has a good sensitivity and is considered as a gold-standard, we decided to keep seven negative results for Influenza in singleplex and positive in Influenza in multiplex [7] [8] [9] [10] . No case of Bordetella pertussis which causes whooping cough and Legionella pneumophila which causes Legionnaires' disease was identified in this study. However, these diseases are rare in Réunion Island, around three cases of Legionnaires' disease are declared each year. A limit of the study is that no clinical data were available in the virological surveillance system of influenza in Réunion Island. It was impossible to compare clinical symptoms according to each pathogen and to know if there are different pathogens which cause for instance rhinitis, laryngitis or bronchitis (diseases included in ILI). A specific prospective study including clinical data might provide useful elements in the semiotics of diseases. In conclusion, this study highlighted an important circulation of multiple pathogens in Réunion Island throughout the year. It shows that ILI is not specific to influenza and so it is essential to have biological results in order to establish the differential diagnosis and thus explain the etiology of symptoms. 
For a better understanding of respiratory pathogens circulating in Réunion Island, information from this study may also be useful to practitioners who see many patients in consultation with ILI. As the use of multiplex RT-PCR showed its efficacy in the ILI surveillance and allowed to highlight the circulation of other viruses and bacterial causes of respiratory infections, it is now used routinely in the surveillance of ILI. Moreover, it would be interesting to repeat this study every 3 or 5 years adding clinical data to monitor the evolution of respiratory pathogens in Réunion Island over time.\n\nQuestion: Respiratory viral pathogens were present in what percentage of samples?" }, { "role": "assistant", "content": "76.1" } ]
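Because the covid_deepset_qa instruction above demands text extracted from the paragraph, answers can be verified as spans. A one-function sketch follows; the whitespace normalization is my own choice, made because the dumped paragraphs carry irregular spacing such as "( 76.1 %)".

```python
import re

def is_extractive(answer: str, paragraph: str) -> bool:
    """True if the answer occurs verbatim in the paragraph, comparing after
    collapsing runs of whitespace (the dump's spacing is irregular)."""
    squeeze = lambda s: re.sub(r"\s+", " ", s).strip()
    return squeeze(answer) in squeeze(paragraph)
```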
science.scientific_papers_summarization_single_doc_arxiv
science.scientific_papers_summarization_single_doc_arxiv.95294
[ { "role": "user", "content": "You will be presented with the full text of a science research paper. Please write an abstract for this paper. Your response should include the abstract and no additional text.\n\nPaper text:\nthe bhlumi monte carlo ( mc ) program@xcite was developed as a precision tool for calculating the small - angle bhabha luminosity process at slc and lep , and with continued development , it will continue to be a valuable tool meeting the requirements of a next - generation linear @xmath1 collider , such as the proposed ilc .\ncentral to this program s success was an exact treatment of the phase space for @xmath2 photon bremsstrahlung .\na yfs - exponentiation@xcite procedure allows all ir singularities to be canceled exactly between real and virtual emission processes to all orders .\nthe leading soft photon effects are exponentiated , and ir - finite yfs residuals are then calculated exactly to the order required to reach the desired precision level .\nbhlumi attained a total error budget of 0.061% for lep1 parameters and 0.122% for lep2 parameters for a typical calorimetric detector scenario.@xcite to assure this precision level , it was necessary to calculate the most important unimplemented effect in bhlumi4.04 , which was the next to leading - log ( nll ) contribution to the two - photon radiative corrections .\nthe double real photon and single real plus virtual photon corrections to the small - angle bhabha scattering process were calculated exactly in refs .\n@xcite when added to the known two - loop virtual correction@xcite , these results showed that the @xmath3 corrections enter at the 0.027% level for lep1 parameters and 0.04% level for lep2 parameters .\nhere , @xmath4 is the `` large logarithm '' entering into a leading log expansion . implementing these exact @xmath5 results in bhlumi would eliminate these contributions to the error budget .\nthe only remaining unimplemented @xmath5 radiative corrections would then be up - down interference effects in which two virtual photons are exchanged between the @xmath6 and @xmath7 line , which are suppressed at small angles , and nominally enter at the level of 0.004% or less for angles below @xmath8.@xcite these contributions , represented by diagrams of the type shown in fig . 1 , could thus be safely neglected for slc and lep physics .\n-0.5 cm for ilc physics , where the goal is to reach 0.01% in the small - angle bhabha luminosity process , it is desirable to carefully check the magnitude of the up - down interference terms , and to implement them if they turn out to be significant . a key ingredient in the comparison , the five - point box integral appearing in the second and fourth diagram in fig . 
1\n, has recently been provided by the looptools 2.2 package.@xcite a number of other @xmath5 calculations which have appeared recently@xcite should also provide valuable insight into effects which may need to be implemented in the small - angle bhabha calculation to reach ilc precision specifications .\nfermion pair production plays a critical role in extracting precision electroweak physics from @xmath1 colliders .\nthis process is calculated by another yfs - exponentiated mc program , the @xmath0mc.@xcite again , the photonic radiative corrections play an essential role in calculating the yfs residuals through order @xmath9 .\nthese have been calculated exactly , including finite - mass corrections , for initial state and final state radiation.@xcite using helicity spinor techniques,@xcite a concise and stable representation for the @xmath5 initial or final state radiation amplitude has been obtained , including finite - mass corrections . the matrix element for hard photon initial - state emission with one virtual photon may be expressed as @xmath10 where @xmath11 is the tree - level matrix element for single hard photon emission , @xmath12 are scalar form factors and @xmath13 are spinor factors defined in ref .\n@xcite .\nthe single hard photon cross section is of particular interest in radiative return applications@xcite , where initial state radiation is used to reduce the effective beam energy , allowing a fixed energy machine to probe a range of energies .\na mc program phokhara was developed to calculate radiative return at @xmath14 and @xmath15 factories.@xcite the same radiative corrections are relevant for a high - energy @xmath1 collider investigating physics around the @xmath16 peak , for example .\nit is therefore useful to compare the radiative corrections obtained for both the @xmath0mc and phokhara in detail .\nboth calculations claim the same level of exactness , including the same diagrams as well as electron mass corrections relevant for collinear bremsstrahlung . we have compared the virtual corrections to initial state hard - photon emission calculated in ref .\n@xcite ( kr ) for phokhara to those calculated in ref .\n@xcite ( jmwy ) for the @xmath0mc in the case of muon pair production .\nanalytically , it was found that in the absence of mass corrections , both expressions agree to nll order ( @xmath3 in the integrated cross section).@xcite a compact expression for nll limit of the matrix element was obtained in ref .\n@xcite , where it was shown that the terms @xmath17 and @xmath18 in eqn .\n( [ vdef1 ] ) vanish to nll order , and the helicity - averaged nll limit of @xmath19 is @xmath20 where @xmath21 is the `` large logarithm '' in the leading log expansion , @xmath22 measures the inner product of one of the incoming fermion momenta @xmath23 with the hard photon momentum @xmath24 , sp@xmath25 = li@xmath26 is the dilogarithm ( spence ) function , and @xmath27 is the ir - divergent virtual yfs form factor . 
since the two results are known to agree with the nll limit calculated using eqn .\n( [ f0 ] ) , the nll limit is subtracted in each case , permitting the nnll contributions and collinear mass corrections to be investigated in the context of the @xmath0mc .\nfig . 2 shows the results of a @xmath0mc run calculating the nnll contribution to muon pair production at a cms energy of 500 gev for @xmath28 events , both with and without the mass corrections .\nthe cross - section is integrated up to a radiated photon energy fraction of @xmath29 ( with @xmath30 ) using the yfs residual @xmath31 for one hard photon at @xmath5 , subtracting the nll contribution obtained using eqn .\n( [ f0 ] ) .\nthe result is normalized with respect to the non - radiative born cross section for muon pair production .\nit is found that the maximum difference between the complete kr and jmwy results ( from the next - to - last bin ) is @xmath32 units of the born cross section .\nmost of this apparently comes from differences in the treatment of the mass corrections .\nkr uses an expansion in @xmath33 , while jmwy uses a technique developed by berends _et al._@xcite for adding the mass corrections required in collinear limits to a calculation obtained using massless spinors . without mass corrections\n( comparing the massless points ) , the results agree to within a part per million . this agreement is better than noted previously@xcite due to improvements in the stability of the algorithms used .\ndirect comparisons of the phokhara and @xmath0mc programs have also been conducted.@xcite it is interesting to note that the size of the nnll part of the corrections implemented by jmwy never exceed @xmath34 up to @xmath35 , and reach @xmath36 in the last bin , where @xmath37 .\nthis suggests that for most purposes , the considerably simpler nll result represented by eqn .\n( [ f0 ] ) will suffice .\nthe drell - yan process plays a role at hadron colliders which is as basic as the bhabha scattering or pair production cross section at @xmath1 colliders .\nin fact , @xmath38 and @xmath16 production has been proposed as the luminosity process for the lhc.@xcite a fully exclusive calculation of the parton - level cross sections is needed at the @xmath39 level for upcoming lhc physics .\nthese cross sections are currently known at the @xmath40 level , using nlo matrix elements .\nwhile nnlo results are available for the integrated cross section@xcite and rapidity distribution@xcite , a fully - exclusive nnlo cross section needed for a mc event generator is not yet available .\nmoreover , electroweak radiative corrections will be required as well .\nreaching the desired lhc precision will require corrections of order @xmath41 and order @xmath42 , including mixed @xmath43 contributions .\nexamples of the latter diagrams are shown in fig .\n[ figure : four example diagrams , panels ( a ) to ( d ) ; layout coordinates omitted ]\nthe hard parton - level processes must be calculated and combined with pdfs in a mc program designed to generate the desired distribution of partons plus mixed qcd and qed bremsstrahlung .\nthis will require a careful implementation of the multiple gluon and photon phase space .\nexperience in the electroweak sector suggests that yfs exponentiation will provide a strong tool for implementing the multiple - emission phase space , giving very precise control over the soft and collinear limits .\nthe large numbers of diagrams creates a challenge for obtaining an expression that can be 
evaluated quickly enough for mc implementation , and evaluated in a numerically stable manner .\ncommon reduction methods based on the passarino - veltman technique@xcite can produce millions of terms , which are both slow to evaluate in a mc setting , and prone to numerical instabilities due to the large numbers of terms added and potential cancellations among them .\nthus , a significant part of this problem will involve developing and testing new methods for organizing and calculating the terms in a stable manner .\nonce the parton - level matrix element is obtained , it may be incorporated into a `` qced - exponentiated '' mc program , implementing a procedure similar to yfs exponentiation in a combined qed and qcd setting to construct the exact phase space for multiple gluon and photon radiation .\nthis requires extending the yfs calculus to non - abelian gauge theory , with due care in handling the genuine non - abelian singularities which arise in qcd .\nthe qcd and qed exponentiation will be conducted at order @xmath44 on an event - by - event basis in the presence of showers without double - counting of soft and collinear emission effects .\nfurther details of this construction will be presented elsewhere in these proceedings.@xcite\na careful calculation of higher - order bremsstrahlung corrections led to a precision tool ( bhlumi ) for bhabha luminosity calculations .\nincorporating the second order photonic corrections obtained to test bhlumi 's precision may be enough to reach the 0.01% level proposed for the ilc . calculating the @xmath5 up - down interference contribution will help to clarify this .\na number of second - order bhabha scattering results are now available which should be useful for testing bhlumi 's precision at this higher level .\nwe have also described cross - checks on the second - order photonic radiative corrections for fermion pair production developed for the @xmath0mc .\ncomparisons with similar initial - state radiative corrections developed for phokhara show agreement on the order of @xmath45 in units of the born cross section at ilc energies .\nremaining differences may be attributed primarily to differences in the handling of finite mass corrections in the collinear limits , and possible residual numerical instabilities for photons radiated near the fermion pair production threshold .\nfinally , we have outlined a program for carrying the successes of the yfs - exponentiated mc framework developed for slc and lep physics into the hadronic realm of the lhc , where a precise calculation of the parton - level diagrams can be combined with a combined qcd and qed exponentiation framework , with complete control of the multiple photon and gluon phase space , to develop a mc for the drell - yan process which can reach the lhc precision requirements .\nyost , s. majhi and b.f.l .\nward , proceedings of the 2005 international linear collider physics and detector workshop and second ilc accelerator workshop , snowmass ( 2005 ) , alcpg1911 ( hep - ph/0512022 ) ." }, { "role": "assistant", "content": "we describe radiative corrections to bremsstrahlung and their application to high energy collider physics , focusing on the applications to luminosity measurement , fermion pair production and radiative return . \n we review the status of one loop radiative corrections in bhlumi and the @xmath0mc , including cross checks with newer results developed independently for radiative return . 
\n we outline a yfs - exponentiated approach to the drell - yan process for lhc physics , including a discussion of the relevant radiative corrections ." } ]
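A note on the yfs exponentiation that the entry above leans on throughout: the @xmath placeholders hide the paper's actual formulas, so only the generic structure can be indicated here. As a sketch, not the paper's exact expression, the standard YFS-exponentiated cross section takes the form below, with Y(epsilon) the infrared-finite YFS form factor and \bar{\beta}_n the IR-finite residuals the text says are "calculated exactly to the order required":

    \sigma_{\mathrm{YFS}}
      \;=\; e^{\,Y(\epsilon)} \sum_{n=0}^{\infty} \frac{1}{n!}
        \int \prod_{j=1}^{n} \frac{d^{3}k_{j}}{k_{j}^{0}}\,
        \theta\!\left(k_{j}^{0}-\epsilon\right)\,
        \bar{\beta}_{n}(k_{1},\dots,k_{n})\, d\Phi .

The dependence on the soft cutoff epsilon cancels between the exponent and the residuals, which is the mechanism behind the all-orders cancellation of IR singularities described in the entry.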
science.mslr2022_ms2_multidoc_summarization
science.mslr2022_ms2_multidoc_summarization.6735
[ { "role": "user", "content": "You will be presented with a collection of medical randomized controlled trial (RCT) reports addressing a shared clinical question -- for instance, the efficacy of a medical intervention on a disease or condition. Given the titles and abstracts of these reports, your task is to generate the conclusion of the systematic review article synthesizing the key findings. Your conclusion should be a few sentences long. In your response, include only the conclusion and no other text. The RCT reports are below.\n\n\n\nTitle 1:\nMental Fatigue Increases Gait Variability During Dual-task Walking in Old Adults\nAbstract 1:\nBackground Mental fatigue is a psychobiological state induced by sustained periods of dem and ing cognitive activity and is characterized by feelings of tiredness which are common in everyday life . Recently , it has been hypothesized that mental fatigue might have an impact on gait performance in old adults . Therefore , the effect of mental fatigue on gait performance under single- and dual-task conditions was investigated in young and old participants . Methods Spatio-temporal gait parameters of 16 young and 16 old healthy participants were measured using a photoelectric system during single- and dual-task walking before and after a r and omly assigned mental fatigue ( performing a stop-signal task for 90 minutes ) and control intervention ( watching a video for 90 minutes ) , respectively . Changes in subjective fatigue , wakefulness , mood , arousal , and psychophysiological workload ( heart rate variability indices ) were assessed . Results Psychometric measures indicated increased subjective fatigue and arousal as well as decreased mood and wakefulness after the mental fatigue task . Heart rate variability indices revealed a higher psychophysiological workload during the mental fatigue intervention in old compared to young participants . Gait measures ( coefficient of variation of speed , stride length , and stance time ) revealed impaired dual-task walking performance following the mental fatigue intervention only in old participants . Conclusion Data indicate that mental fatigue , induced by sustained cognitive activity , can impair gait performance during dual-task walking in old adults . The susceptibility to mental fatigue could be a new intrinsic risk factor for falls in older people and should be taken into account when dual-task gait analyses are performed\n\n\nTitle 2:\nChanges in postural stability with fatigue of lower extremity frontal and sagittal plane movers.\nAbstract 2:\nThe purpose of this study was to quantify changes in postural stability with fatigue of the frontal and sagittal movers of the lower extremities . There were four test sessions , with a r and omized order assigned according to the muscles tested and the plane of motion . Subjects were 20 healthy men ( age : 22.6+/-2.4 years , height : 173.7+/-3.6 cm , weight : 63.3+/-7.9 kg ) . During each session , one set of muscle groups was fatigued using isokinetic contractions : ankle plantar/dorsi flexors , ankle evertor/invertors , hip flexor/extensors or hip abductor/adductors . The Biodex Stability System was used to assess anterior/posterior and medial/lateral stability before and after muscle fatigue . Repeated measures ANOVAs revealed that fatigue was associated with a significant increase in all stability indices . 
Fatigue of the hip movers , whether in the frontal or sagittal planes , led to greater increments in stability indices than fatigue of the ankle musculature . Fatigue of the frontal movers resulted in greater increases in the medial/lateral stability index compared to fatigue of the sagittal movers . In conclusion , fatigue of proximal lower extremity muscles affects postural stability and fatigue of the frontal movers is associated with postural instability in the frontal plane\n\n\nTitle 3:\nFatigue effects on muscle excitability.\nAbstract 3:\nRepeated isometric or shortening contractions of skeletal muscle cause muscle fatigue but several prior studies have reported an apparent absence of muscle fatigue when humans performed up to 70 lengthening contractions . We pursued the hypothesis that perhaps muscle excitability is a factor that aids force preservation with repeated eccentric actions . Soleus compound muscle action potential ( M-wave ) latency , peak-to-peak amplitude ( PPA ) , duration , and area under the curve were examined in 12 subjects ( mean age 24.3 y ) over 4 testing days that included : no exercise , isometric exercise ( neutral ankle angle ) , isokinetic ( 0.5 rad.s-1 ) concentric and eccentric exercise of the plantar flexors in the seated position on a Biodex dynamometer . Supramaximal shocks were delivered to the tibial nerve in the popliteal fossa at baseline ( 3 shocks , 1 min apart ) , during exercise ( 1 shock-after each of 5 bouts/10 contractions ) , and during 10-min recovery . From initial to final contractions , concentric , isometric , and eccentric fatigue was -32 , -41 and + 2 % ( Condition by Trial Interaction , F2,22 = 25.1 , p = 0.000 ) . No changes occurred in latency or duration ( p > 0.05 ) , but PPA ( Condition by Time interaction , F51,561 = 3.7 , p = 0.000 ) increased during isometric and eccentric exercise and remained elevated during recovery . Area increased ( F51,561 = 3.1 , p = 0.000 ) significantly during all three exercise conditions and approximated baseline by minute 8 of recovery . It was concluded that although the potentiation of the action potential of individual muscle fibers seems to be the common mechanism underlying the increase in muscle excitability during plantar flexion exercise , it is possible that different factors could cause such a non-specific response\n\n\nTitle 4:\nThe effect of lower limb muscle fatigue on obstacle negotiation during walking in older adults.\nAbstract 4:\nTripping over obstacles is a common cause of falls in older adults , and muscle fatigue , which can alter walking patterns , may add to this risk . To date , no study has examined the effect of lower limb muscle fatigue on obstacle negotiation in older adults . 30 older adults ( 13 women , aged 78.3 [ 6.2 ] years ) negotiated a 12 m obstacle course , while completing a visual secondary task , under two randomized conditions : rested or fatigued . For the fatigue condition , participants performed a repeated sit-to-stand movement , as fast as possible , until they could no longer continue . Participants then immediately began walking trials . Kinematic and kinetic data were collected on approach to , during , and after crossing a height-adjustable target obstacle ( 10 % and 20 % of leg length ) . Repeated measures ANOVA showed a statistically significant increase in lead limb vertical loading rate after stepping over the 10 % obstacle when fatigued , relative to rested ( P=0.046 ) . 
No other significant between-condition differences ( P > 0.05 ) were observed for the other kinematic variables when negotiating the 10 % obstacle . Furthermore , no significant between-condition differences ( P>0.05 ) were observed for any kinetic or kinematic variables when negotiating the 20 % obstacle . This study describes a feasible method for investigating the consequences of lower limb muscle fatigue on obstacle crossing . The current finding of increased vertical loading rate when fatigued supports the need for further investigation into the effect of muscle fatigue on gait under different environmental conditions , fatiguing a range of muscles , analyzing a more comprehensive array of kinetic and kinematic measures , and in healthy and clinical population\n\n\nTitle 5:\nMaximum voluntary activation in nonfatigued and fatigued muscle of young and elderly individuals.\nAbstract 5:\nBACKGROUND AND PURPOSE Researchers studying central activation of muscles in elderly subjects ( > or = 65 years of age ) have investigated activation in only the nonfatigued state . This study examined the ability of young and elderly people to activate their quadriceps femoris muscles voluntarily under both fatigued and nonfatigued conditions to determine the effect of central activation failure on age-related loss of force . SUBJECTS AND METHODS Twenty young subjects ( 11 men , 9 women ; mean age = 22.67 years , SD = 4.14 , range = 18 - 32 years ) and 17 elderly subjects ( 8 men , 9 women ; mean age = 71.5 years , SD = 5.85 , range = 65 - 84 years ) participated in this study . Subjects were seated on a dynamometer and stabilized . Central activation was quantified , based on the change in force produced by a 100-Hz , 12-pulse electrical train that was delivered during a 3- to 5-second isometric maximum voluntary contraction ( MVC ) of the quadriceps femoris muscle . Next , subjects performed 25 MVCs ( a 5-second contraction with 2 seconds of rest ) to fatigue the muscle . During the last MVC , central activation was measured again . RESULTS In the nonfatigued state , elderly subjects had lower central activation than younger subjects . In the fatigued state , this difference became larger . DISCUSSION AND CONCLUSION Central activation of the quadriceps femoris muscle in elderly subjects was reduced in both the fatigued and nonfatigued states when compared with young subjects . Some part of age-related weakness , therefore , may be attributed to failure of central activation in both the fatigued and nonfatigued states\n\n\nTitle 6:\nEffects of unilateral leg muscle fatigue on balance control in perturbed and unperturbed gait in healthy elderly.\nAbstract 6:\nThis study assessed effects of unilateral leg muscle fatigue ( ULMF ) on balance control in gait during the stance and swing phases of the fatigued leg in healthy elderly , to test the assumption that leg muscle strength limits balance control during the stance-phase . Ten subjects ( aged 63.4 , SD 5.5 years ) walked on a treadmill in 4 conditions : unperturbed unfatigued , unperturbed fatigued , perturbed unfatigued , and perturbed fatigued . The perturbations were lateral trunk pulls just before contralateral heel contact . ULMF was evoked by unilateral squat exercise until task failure . Isometric knee extension strength was measured to verify the presence of muscle fatigue . Between-stride standard deviations and Lyapunov exponents of trunk kinematics were used as indicators of balance control . 
Required perturbation force and the deviation of trunk kinematics from unperturbed gait were used to assess perturbation responses . Knee extension strength decreased considerably ( 17.3 % SD 8.6 % ) as a result of ULMF . ULMF did not affect steady-state gait balance . Less force was required to perturb subjects when the fatigued leg was in the stance-phase compared to the swing-phase . Subjects showed a faster return to the unperturbed gait pattern in the fatigued than in the unfatigued condition , after perturbations in swing and stance of the fatigued leg . The results of this study are not in line with the hypothesized effects of leg muscle fatigue on balance in gait . The healthy elderly subjects were able to cope with substantial ULMF during steady-state gait and demonstrated faster balance recovery after laterally directed mechanical perturbations in the fatigued than in the unfatigued condition\n\n\nTitle 7:\nMonitoring Muscle Fatigue Progression during Dynamic Exercise\nAbstract 7:\nABSTRACT Purpose To develop and validate a novel model for assessment of peripheral fatigue progression during dynamic exercise . Methods Seven males and four females ( 24 ± 3 yr ) completed one-legged knee-extensor exercise at 40 % of peak power output ( PPO ) for 12 min ( 40 % PPO ) . Additionally , an exhaustive bout lasting approximately 6 min ( 84 % ± 2 % PPO ; 59 ± 10 W ) was contrasted to two bouts completed at ±5 % of the target workload . Trials were completed in randomized order . Percutaneous electrical stimulation of m. vastus lateralis during the passive knee-flexion phase allowed quantification of maximal twitch force every 30 s in parallel with RPE . Results Elicited twitch force remained unchanged during the 40 % PPO trial . During the three exhaustive bouts , exercise differed in duration ( 561 ± 154 s , 366 ± 64 s , 245 ± 61 s ; P < 0.001 ) and amplitude of elicited twitch force showed a curvilinear decline across time . Elicited twitch force at exhaustion was approximately 60 % reduced and similar between the exhaustive trials ( intraclass correlation coefficient , 0.76 ; 95 % confidence interval , 0.48–0.92 ) . The increase in RPE during exercise was strongly correlated to the gradually reduced evoked twitch force ( repeated-measures correlation , 0.89 ; 95 % confidence interval , 0.62–0.97 ) . Conclusions The developed model permits quantification of muscle fatigue progression during continuous dynamic one-legged knee-extension and a biphasic fatigue pattern is demonstrated during intense exercise . The model is sensitive to small changes in intensity , and it provides a novel approach for studying muscular mechanisms and their temporal relation to fatigue progression in vivo" }, { "role": "assistant", "content": "Conclusion Muscle performance fatigability affects spatial and temporal features of gait and , mainly , inter-stride trunk acceleration variability .\nIn contrast , sustained mental activity tends only to affect step variability during dual tasking ." } ]
science.multixscience_multidoc_summarization
science.multixscience_multidoc_summarization.6025
[ { "role": "user", "content": "You are provided with the abstract of a scientific paper and the abstracts of the papers it references. Your task is to write a Related Work section for the main paper. This section should concisely summarize and integrate the key points from the cited abstracts, reflecting how they relate to the main paper's contributions and context. In your summary, use numerical citation markers like [1], [2], etc., to refer to the corresponding papers.\n\nMain Abstract:\nSensing surroundings is ubiquitous and effortless to humans: It takes a single glance to extract the spatial configuration of objects and the free space from the scene. To help machine vision with spatial understanding capabilities, we introduce the View Parsing Network (VPN) for cross-view semantic segmentation. In this framework, the first-view observations are parsed into a top-down-view semantic map indicating precise object locations. VPN contains a view transformer module, designed to aggregate the first-view observations taken from multiple angles and modalities, in order to draw a bird-view semantic map. We evaluate the VPN framework for cross-view segmentation on two types of environments, indoors and driving-traffic scenes. Experimental results show that our model accurately predicts the top-down-view semantic mask of the visible objects from the first-view observations, as well as infer the location of contextually-relevant objects even if they are invisible.\n\nCited Abstract(s):\n[1]: We study the problem of transferring a sample in one domain to an analog sample in another domain. Given two related domains, S and T, we would like to learn a generative function G that maps an input sample from S to the domain T, such that the output of a given function f, which accepts inputs in either domains, would remain unchanged. Other than the function f, the training data is unsupervised and consist of a set of samples from each domain. The Domain Transfer Network (DTN) we present employs a compound loss function that includes a multiclass GAN loss, an f-constancy component, and a regularizing component that encourages G to map samples from T to themselves. We apply our method to visual domains including digits and face images and demonstrate its ability to generate convincing novel images of previously unseen entities, while preserving their identity.\n\n[2]: We introduce a neural architecture for navigation in novel environments. Our proposed architecture learns to map from first-person views and plans a sequence of actions towards goals in the environment. The Cognitive Mapper and Planner (CMP) is based on two key ideas: a) a unified joint architecture for mapping and planning, such that the mapping is driven by the needs of the task, and b) a spatial memory with the ability to plan given an incomplete set of observations about the world. CMP constructs a top-down belief map of the world and applies a differentiable neural net planner to produce the next action at each time step. The accumulated belief of the world enables the agent to track visited regions of the environment. We train and test CMP on navigation problems in simulation environments derived from scans of real world buildings. Our experiments demonstrate that CMP outperforms alternate learning-based architectures, as well as, classical mapping and path planning approaches in many cases. Furthermore, it naturally extends to semantically specified goals, such as 'going to a chair'. 
We also deploy CMP on physical robots in indoor environments, where it achieves reasonable performance, even though it is trained entirely in simulation.\n\n[3]: Humans routinely retrace a path in a novel environment both forwards and backwards despite uncertainty in their motion. In this paper, we present an approach for doing so. Given a demonstration of a path, a first network generates an abstraction of the path. Equipped with this abstraction, a second network then observes the world and decides how to act in order to retrace the path under noisy actuation and a changing environment. The two networks are optimized end-to-end at training time. We evaluate the method in two realistic simulators, performing path following both forwards and backwards. Our experiments show that our approach outperforms both a classical approach to solving this task as well as a number of other baselines.\n\n[4]: With recent progress in graphics, it has become more tractable to train models on synthetic images, potentially avoiding the need for expensive annotations. However, learning from synthetic images may not achieve the desired performance due to a gap between synthetic and real image distributions. To reduce this gap, we propose Simulated+Unsupervised (S+U) learning, where the task is to learn a model to improve the realism of a simulator's output using unlabeled real data, while preserving the annotation information from the simulator. We develop a method for S+U learning that uses an adversarial network similar to Generative Adversarial Networks (GANs), but with synthetic images as inputs instead of random vectors. We make several key modifications to the standard GAN algorithm to preserve annotations, avoid artifacts, and stabilize training: (i) a self-regularization term, (ii) a local adversarial loss, and (iii) updating the discriminator using a history of refined images. We show that this enables generation of highly realistic images, which we demonstrate both qualitatively and with a user study. We quantitatively evaluate the generated images by training models for gaze estimation and hand pose estimation. We show a significant improvement over using synthetic images, and achieve state-of-the-art results on the MPIIGaze dataset without any labeled real data.\n\n[5]: Collecting well-annotated image datasets to train modern machine learning algorithms is prohibitively expensive for many tasks. One appealing alternative is rendering synthetic data where ground-truth annotations are generated automatically. Unfortunately, models trained purely on rendered images fail to generalize to real images. To address this shortcoming, prior work introduced unsupervised domain adaptation algorithms that have tried to either map representations between the two domains, or learn to extract features that are domain-invariant. In this work, we approach the problem in a new light by learning in an unsupervised manner a transformation in the pixel space from one domain to the other. Our generative adversarial network (GAN)-based method adapts source-domain images to appear as if drawn from the target domain. Our approach not only produces plausible samples, but also outperforms the state-of-the-art on a number of unsupervised domain adaptation scenarios by large margins. 
Finally, we demonstrate that the adaptation process generalizes to object classes unseen during training.\n\n[6]: Recent studies reveal that a deep neural network can learn transferable features which generalize well to novel tasks for domain adaptation. However, as deep features eventually transition from general to specific along the network, the feature transferability drops significantly in higher layers with increasing domain discrepancy. Hence, it is important to formally reduce the dataset bias and enhance the transferability in task-specific layers. In this paper, we propose a new Deep Adaptation Network (DAN) architecture, which generalizes deep convolutional neural network to the domain adaptation scenario. In DAN, hidden representations of all task-specific layers are embedded in a reproducing kernel Hilbert space where the mean embeddings of different domain distributions can be explicitly matched. The domain discrepancy is further reduced using an optimal multi-kernel selection method for mean embedding matching. DAN can learn transferable features with statistical guarantees, and can scale linearly by unbiased estimate of kernel embedding. Extensive empirical evidence shows that the proposed architecture yields state-of-the-art image classification error rates on standard domain adaptation benchmarks.\n\n[7]: \n\n[8]: The cost of large scale data collection and annotation often makes the application of machine learning algorithms to new tasks or datasets prohibitively expensive. One approach circumventing this cost is training models on synthetic data where annotations are provided automatically. Despite their appeal, such models often fail to generalize from synthetic to real images, necessitating domain adaptation algorithms to manipulate these models before they can be successfully applied. Existing approaches focus either on mapping representations from one domain to the other, or on learning to extract features that are invariant to the domain from which they were extracted. However, by focusing only on creating a mapping or shared representation between the two domains, they ignore the individual characteristics of each domain. We hypothesize that explicitly modeling what is unique to each domain can improve a model's ability to extract domain-invariant features. Inspired by work on private-shared component analysis, we explicitly learn to extract image representations that are partitioned into two subspaces: one component which is private to each domain and one which is shared across domains. Our model is trained to not only perform the task we care about in the source domain, but also to use the partitioned representation to reconstruct the images from both domains. Our novel architecture results in a model that outperforms the state-of-the-art on a range of unsupervised domain adaptation scenarios and additionally produces visualizations of the private and shared representations enabling interpretation of the domain adaptation process.\n\n[9]: Harvesting dense pixel-level annotations to train deep neural networks for semantic segmentation is extremely expensive and unwieldy at scale. While learning from synthetic data where labels are readily available sounds promising, performance degrades significantly when testing on novel realistic data due to domain discrepancies. We present Dual Channel-wise Alignment Networks (DCAN), a simple yet effective approach to reduce domain shift at both pixel-level and feature-level. 
Exploring statistics in each channel of CNN feature maps, our framework performs channel-wise feature alignment, which preserves spatial structures and semantic information, in both an image generator and a segmentation network. In particular, given an image from the source domain and unlabeled samples from the target domain, the generator synthesizes new images on-the-fly to resemble samples from the target domain in appearance and the segmentation network further refines high-level features before predicting semantic maps, both of which leverage feature statistics of sampled images from the target domain. Unlike much recent and concurrent work relying on adversarial training, our framework is lightweight and easy to train. Extensive experiments on adapting models trained on synthetic segmentation benchmarks to real urban scenes demonstrate the effectiveness of the proposed framework.\n\n[10]: Domain adaptation is critical for success in new, unseen environments. Adversarial adaptation models applied in feature spaces discover domain invariant representations, but are difficult to visualize and sometimes fail to capture pixel-level and low-level domain shifts. Recent work has shown that generative adversarial networks combined with cycle-consistency constraints are surprisingly effective at mapping images between domains, even without the use of aligned image pairs. We propose a novel discriminatively-trained Cycle-Consistent Adversarial Domain Adaptation model. CyCADA adapts representations at both the pixel-level and feature-level, enforces cycle-consistency while leveraging a task loss, and does not require aligned pairs. Our model can be applied in a variety of visual recognition and prediction settings. We show new state-of-the-art results across multiple adaptation tasks, including digit classification and semantic segmentation of road scenes demonstrating transfer from synthetic to real world domains.\n\n[11]: Developing visual perception models for active agents and sensorimotor control in the physical world are cumbersome as existing algorithms are too slow to efficiently learn in real-time and robots are fragile and costly. This has given rise to learning-in-simulation which consequently casts a question on whether the results transfer to real-world. In this paper, we investigate developing real-world perception for active agents, propose Gibson Environment for this purpose, and showcase a set of perceptual tasks learned therein. Gibson is based upon virtualizing real spaces, rather than artificially designed ones, and currently includes over 1400 floor spaces from 572 full buildings. The main characteristics of Gibson are: I. being from the real-world and reflecting its semantic complexity, II. having an internal synthesis mechanism \"Goggles\" enabling deploying the trained models in real-world without needing domain adaptation, III. embodiment of agents and making them subject to constraints of physics and space.\n\n[12]: We propose coupled generative adversarial network (CoGAN) for learning a joint distribution of multi-domain images. In contrast to the existing approaches, which require tuples of corresponding images in different domains in the training set, CoGAN can learn a joint distribution without any tuple of corresponding images. It can learn a joint distribution with just samples drawn from the marginal distributions. 
This is achieved by enforcing a weight-sharing constraint that limits the network capacity and favors a joint distribution solution over a product of marginal distributions one. We apply CoGAN to several joint distribution learning tasks, including learning a joint distribution of color and depth images, and learning a joint distribution of face images with different attributes. For each task it successfully learns the joint distribution without any tuple of corresponding images. We also demonstrate its applications to domain adaptation and image transformation.\n\n[13]: Two less addressed issues of deep reinforcement learning are (1) lack of generalization capability to new goals, and (2) data inefficiency, i.e., the model requires several (and often costly) episodes of trial and error to converge, which makes it impractical to be applied to real-world scenarios. In this paper, we address these two issues and apply our model to target-driven visual navigation. To address the first issue, we propose an actor-critic model whose policy is a function of the goal as well as the current state, which allows better generalization. To address the second issue, we propose the AI2-THOR framework, which provides an environment with high-quality 3D scenes and a physics engine. Our framework enables agents to take actions and interact with objects. Hence, we can collect a huge number of training samples efficiently. We show that our proposed method (1) converges faster than the state-of-the-art deep reinforcement learning methods, (2) generalizes across targets and scenes, (3) generalizes to a real robot scenario with a small amount of fine-tuning (although the model is trained in simulation), (4) is end-to-end trainable and does not need feature engineering, feature matching between frames or 3D reconstruction of the environment." }, { "role": "assistant", "content": "Given that current graphics simulation engines can render realistic scenes, recognition algorithms can be trained on data pulled from simulation engines (i.e. for visual navigation models [2] [4]1 [11] [7] [3] ). Several techniques have been proposed to address the domain adaptation issue [9] [5] [4] [1] [12] [6] [8] , when models trained with simulated images are transferred to real scenes [4]3 . Rather than working on the task of visual navigation directly, our project aims at parsing the top-down-view semantic map from first-view observations. The resulting top-down-view map will further facilitate the visual navigation and path planning." } ]
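The view transformer module named in the main abstract of the entry above can be pictured with a minimal sketch. It assumes, hypothetically, that the first-view-to-top-down mapping is learned as a dense layer over flattened spatial positions; the class name, layer shape, and single-view signature are illustrative, not the paper's specification:

import torch
import torch.nn as nn

class ViewTransformer(nn.Module):
    """Map first-view feature maps toward a top-down layout by learning a
    dense reindexing over the flattened H*W grid positions (assumed design)."""

    def __init__(self, h: int, w: int):
        super().__init__()
        self.h, self.w = h, w
        self.fc = nn.Linear(h * w, h * w)  # assumed learned spatial correspondence

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: first-view features of shape (batch, channels, H, W)
        b, c, h, w = x.shape
        flat = x.reshape(b, c, h * w)
        top_down = self.fc(flat)           # redistribute each cell to bird-view grid
        return top_down.reshape(b, c, h, w)

Aggregating several views or modalities, as the abstract describes, would then amount to summing the transformed maps before a segmentation decoder.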
science.multixscience_multidoc_summarization
science.multixscience_multidoc_summarization.7184
[ { "role": "user", "content": "You are provided with the abstract of a scientific paper and the abstracts of the papers it references. Your task is to write a Related Work section for the main paper. This section should concisely summarize and integrate the key points from the cited abstracts, reflecting how they relate to the main paper's contributions and context. In your summary, use numerical citation markers like [1], [2], etc., to refer to the corresponding papers.\n\nMain Abstract:\nImage retrieval based on deep convolutional features has demonstrated state-of-the-art performance in popular benchmarks. In this paper, we present a unified solution to address deep convolutional feature aggregation and image re-ranking by simulating the dynamics of heat diffusion. A distinctive problem in image retrieval is that repetitive or features tend to dominate final image representations, resulting in representations less distinguishable. We show that by considering each deep feature as a heat source, our unsupervised aggregation method is able to avoid over-representation of features. We additionally provide a practical solution for the proposed aggregation method and further show the efficiency of our method in experimental evaluation. Inspired by the aforementioned deep feature aggregation method, we also propose a method to re-rank a number of top ranked images for a given query image by considering the query as the heat source. Finally, we extensively evaluate the proposed approach with pre-trained and fine-tuned deep networks on common public benchmarks and show superior performance compared to previous work.\n\nCited Abstract(s):\n[1]: Recently, neuron activations extracted from a pre-trained convolutional neural network (CNN) show promising performance in various visual tasks. However, due to the domain and task bias, using the features generated from the model pre-trained for image classification as image representations for instance retrieval is problematic. In this paper, we propose quartet-net learning to improve the discriminative power of CNN features for instance retrieval. The general idea is to map the features into a space where the image similarity can be better evaluated. Our network differs from the traditional Siamese-net in two ways. First, we adopt a double-margin contrastive loss with a dynamic margin tuning strategy to train the network which leads to more robust performance. Second, we introduce in the mimic learning regularization to improve the generalization ability of the network by preventing it from overfitting to the training data. Catering for the network learning, we collect a large-scale dataset, namely GeoPair, which consists of 68k matching image pairs and 63k non-matching pairs. Experiments on several standard instance retrieval datasets demonstrate the effectiveness of our method.\n\n[2]: We propose a novel approach for instance-level image retrieval. It produces a global and compact fixed-length representation for each image by aggregating many region-wise descriptors. In contrast to previous works employing pre-trained deep networks as a black box to produce features, our method leverages a deep architecture trained for the specific task of image retrieval. Our contribution is twofold: (i) we leverage a ranking framework to learn convolution and projection weights that are used to build the region features; and (ii) we employ a region proposal network to learn which regions should be pooled to form the final global descriptor. 
We show that using clean training data is key to the success of our approach. To that aim, we use a large scale but noisy landmark dataset and develop an automatic cleaning approach. The proposed architecture produces a global image representation in a single forward pass. Our approach significantly outperforms previous approaches based on global descriptors on standard datasets. It even surpasses most prior works based on costly local descriptor indexing and spatial verification. Additional material is available at www.xrce.xerox.com Deep-Image-Retrieval.\n\n[3]: We tackle the problem of large scale visual place recognition, where the task is to quickly and accurately recognize the location of a given query photograph. We present the following three principal contributions. First, we develop a convolutional neural network (CNN) architecture that is trainable in an end-to-end manner directly for the place recognition task. The main component of this architecture, NetVLAD, is a new generalized VLAD layer, inspired by the \"Vector of Locally Aggregated Descriptors\" image representation commonly used in image retrieval. The layer is readily pluggable into any CNN architecture and amenable to training via backpropagation. Second, we develop a training procedure, based on a new weakly supervised ranking loss, to learn parameters of the architecture in an end-to-end manner from images depicting the same places over time downloaded from Google Street View Time Machine. Finally, we show that the proposed architecture significantly outperforms non-learnt image representations and off-the-shelf CNN descriptors on two challenging place recognition benchmarks, and improves over current state-of-the-art compact image representations on standard image retrieval benchmarks.\n\n[4]: This publicly available curated dataset of almost 100 million photos and videos is free and legal for all.\n\n[5]:" }, { "role": "assistant", "content": "Fine-tuning an off-the-shelf network is also popular for improving retrieval quality. For instance, there are a number of approaches that learn features for the specific task of landmark retrieval [5] [2] [3] [1] . While fine-tuning a pre-trained model is usually preceded by extensive manual annotation, Radenovic [5] introduced an unsupervised fine-tuning of CNN for image retrieval from a large collection of unordered images in a fully automated manner. Similar to this work, the methods presented in [3] [1] overcame laborious annotation, and collected training data in a weakly supervised manner. More specifically, Arandjelovic [3] proposed a new network architecture, NetVLAD, that was trained for place recognition in an end-to-end manner from weakly supervised Google Street View Time Machine images. Cao [1] trained a special architecture Quartet-net by harvesting data automatically from GeoPair [4] . We show that our feature weighting, and image re-ranking approach, while not requiring extra supervision, performs favorably compared to these previous methods." } ]
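To make the heat-diffusion idea in the main abstract above concrete (each feature treated as a heat source, so that features sitting in dense, bursty neighborhoods are down-weighted), here is a hypothetical NumPy illustration; the exponential-similarity "heat" and the inverse weighting rule are assumptions for illustration, not the paper's actual diffusion solution:

import numpy as np

def diffusion_weighted_aggregate(feats: np.ndarray, beta: float = 1.0) -> np.ndarray:
    # feats: (n, d) array of L2-normalized deep local features.
    sim = feats @ feats.T                        # pairwise similarities between sources
    heat = np.exp(beta * sim).sum(axis=1)        # heat accumulated around each feature
    weights = 1.0 / heat                         # dense (bursty) regions get small weight
    agg = (weights[:, None] * feats).sum(axis=0) # weighted aggregation into one vector
    return agg / (np.linalg.norm(agg) + 1e-12)

Re-ranking in the same spirit would treat the query as the single heat source and order the top-ranked results by the heat they receive.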
science.bioasq_factoid_qa
science.bioasq_factoid_qa.1138
[ { "role": "user", "content": "You will be shown biomedical passages and your task is to the answer the provided question. Provide a concise and accurate response and do not include any additional text in the response other than the answer.\n\nQuestion: What is a bacteriocin?\n\nContext: One such strategy involves using narrow-spectrum protein antibiotics (so-called bacteriocins), which diverse bacteria use to compete against closely related species.\nMany enterococcal strains produce bacteriocins, which could be useful as natural food preservatives through inhibition of pathogenic and spoilage microorganisms.\nBacteriocins, the ribosomally produced antimicrobial peptides of bacteria, represent an untapped source of promising antibiotic alternatives" }, { "role": "assistant", "content": "antimicrobial peptide of bacteria" } ]
science.scitldr_aic
science.scitldr_aic.876
[ { "role": "user", "content": "Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.\n\nPaper text:\n\nPrevious work shows that adversarially robust generalization requires larger sample complexity, and the same dataset, e.g., CIFAR-10, which enables good standard accuracy may not suffice to train robust models.\nSince collecting new training data could be costly, we focus on better utilizing the given data by inducing the regions with high sample density in the feature space, which could lead to locally sufficient samples for robust learning.\nWe first formally show that the softmax cross-entropy (SCE) loss and its variants convey inappropriate supervisory signals, which encourage the learned feature points to spread over the space sparsely in training.\nThis inspires us to propose the Max-Mahalanobis center (MMC) loss to explicitly induce dense feature regions in order to benefit robustness.\nNamely, the MMC loss encourages the model to concentrate on learning ordered and compact representations, which gather around the preset optimal centers for different classes.\nWe empirically demonstrate that applying the MMC loss can significantly improve robustness even under strong adaptive attacks, while keeping state-of-the-art accuracy on clean inputs with little extra computation compared to the SCE loss.\nThe deep neural networks (DNNs) trained by the softmax cross-entropy (SCE) loss have achieved state-of-the-art performance on various tasks (Goodfellow et al., 2016) .\nHowever, in terms of robustness, the SCE loss is not sufficient to lead to satisfactory performance of the trained models.\nIt has been widely recognized that the DNNs trained by the SCE loss are vulnerable to adversarial attacks (Carlini & Wagner, 2017a; Goodfellow et al., 2015; Kurakin et al., 2017; Papernot et al., 2016) , where human imperceptible perturbations can be crafted to fool a high-performance network.\nTo improve adversarial robustness of classifiers, various kinds of defenses have been proposed, but many of them are quickly shown to be ineffective to the adaptive attacks, which are adapted to the specific details of the proposed defenses .\nBesides, the methods on verification and training provably robust networks have been proposed (Dvijotham et al., 2018a; b; Hein & Andriushchenko, 2017; .\nWhile these methods are exciting, the verification process is often slow and not scalable.\nAmong the previously proposed defenses, the adversarial training (AT) methods can achieve state-of-the-art robustness under different adversarial settings Zhang et al., 2019b) .\nThese methods either directly impose the AT mechanism on the SCE loss or add additional regularizers.\nAlthough the AT methods are relatively strong, they could sacrifice accuracy on clean inputs and are computationally expensive (Xie et al., 2019) .\nDue to the computational obstruction, many recent efforts have been devoted to proposing faster verification methods Xiao et al., 2019) and accelerating AT procedures (Shafahi et al., 2019; Zhang et al., 2019a) .\nHowever, the problem still remains.\nshow that the sample complexity of robust learning can be significantly larger than that of standard learning.\nGiven the difficulty of training robust classifiers in practice, they further postulate that the difficulty could stem from the insufficiency of training samples in the commonly 
used datasets, e.g., CIFAR-10 (Krizhevsky & Hinton, 2009) .\nRecent work intends to solve this problem by utilizing extra unlabeled data (Carmon et al., 2019; Stanforth et al., 2019) , while we focus on the complementary strategy to exploit the labeled data in hand more efficiently.\nNote that although the samples in the input space are unchangeable, we could instead manipulate the local sample distribution, i.e., sample density in the feature space via appropriate training objectives.\nIntuitively, by inducing high-density feature regions, there would be locally sufficient samples to train robust classifiers and return reliable predictions .\n[Figure: feature-space illustration contrasting regions of low, high, and medium sample density]\nIn this paper, we formally demonstrate that applying the softmax function in training could potentially lead to unexpected supervisory signals.\nTo solve this problem, we propose the MMC loss to learn more structured representations and induce high-density regions in the feature space.\nIn our experiments, we empirically demonstrate several favorable merits of our method:\n(i) Lead to reliable robustness even under strong adaptive attacks in different threat models;\n(ii) Keep high performance on clean inputs comparable to SCE;\n(iii) Introduce little extra computation compared to the SCE loss;\n(iv) Compatible with the existing defense mechanisms, e.g., the AT methods.\nOur analyses in this paper also provide useful insights for future work on designing new objectives beyond the SCE framework." }, { "role": "assistant", "content": "Applying the softmax function in training leads to indirect and unexpected supervision on features. We propose a new training objective to explicitly induce dense feature regions for locally sufficient samples to benefit adversarial robustness." } ]
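The MMC loss summarized in the entry above pulls each learned feature toward a preset class center. Below is a minimal sketch consistent with that description; construction of the Max-Mahalanobis centers and any scaling constant are omitted, and the fixed centers tensor is assumed to be given:

import torch

def mmc_loss(features: torch.Tensor, labels: torch.Tensor,
             centers: torch.Tensor) -> torch.Tensor:
    # features: (batch, d); labels: (batch,) class indices;
    # centers: (num_classes, d) preset, non-trainable class centers.
    diff = features - centers[labels]
    return 0.5 * (diff ** 2).sum(dim=1).mean()  # squared distance to own center

Note there is no softmax here, which matches the entry's point about avoiding the supervisory signals that the softmax cross-entropy loss conveys.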
science.bc7_litcovid_topic_classification
science.bc7_litcovid_topic_classification.3470
[ { "role": "user", "content": "You are presented with a paragraph from a COVID-19 literature. Classify the topics of this piece of text by choosing from the following categories:\n\n- Epidemic Forecasting: The content relates to predictions about the spread of COVID-19.\n- Treatment: The paragraph discusses therapeutic methods or medication for COVID-19.\n- Prevention: The content covers measures to prevent the spread of COVID-19.\n- Mechanism: The paragraph explains the biological mechanisms of the virus.\n- Case Report: The content is a report of a COVID-19 case.\n- Transmission: The paragraph discusses how COVID-19 is transmitted.\n- Diagnosis: The content is about diagnosing COVID-19 in individuals.\n\nYour classification should be provided as an element within a JSON array. The array should be formatted with topics enclosed in double quotes and separated by commas, following the JSON format, for example: [\"Topic1\", \"Topic2\", ...]. Include all applicable topics within the array. Your output should be a valid JSON array, and no additional text should be included in your response.\n\nArticle:\n\nBACKGROUND: The COVID-19 pandemic continues to have a significant impact on the provision of medical care. Planning to ensure there is capability to treat those that become ill with the virus has led to an almost complete moratorium on elective work. This study evaluates the impact of COVID-19 on cancer, in particular surgical intervention, in patients with esophago-gastric cancer at a high-volume tertiary center. METHODS: All patients undergoing potential management for esophago-gastric cancer from 12 March to 22 May 2020 had their outcomes reviewed. Multi-disciplinary team (MDT) decisions, volume of cases, and outcomes following resection were evaluated. RESULTS: Overall 191 patients were discussed by the MDT, with a 12% fall from the same period in 2019, including a fall in new referrals from 120 to 83 (P = 0.0322). The majority of patients (80%) had no deviation from the pre-COVID-19 pathway. Sixteen patients had reduced staging investigations, 4 had potential changes to their treatment only, and 10 had a deviation from both investigation and potential treatment. Only one patient had palliation rather than potentially curative treatment. Overall 19 patients underwent surgical resection. Eight patients (41%) developed complications with two (11%) graded Clavien-Dindo 3 or greater. Two patients developed COVID-19 within a month of surgery, one spending 4 weeks in critical care due to respiratory complications; both recovered. Twelve patients underwent endoscopic resections with no complications. CONCLUSION: Care must be taken not to compromise cancer treatment and outcomes during the COVID-19 pandemic. Excellent results can be achieved through meticulous logistical planning, good communication, and maintaining high-level clinical care." }, { "role": "assistant", "content": "[\"Prevention\"]" } ]
science.anat_em_ner
science.anat_em_ner.86
[ { "role": "user", "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"AnatomicalEntity\"].\n\nPlease return the output as a JSON object of the format: {\"AnatomicalEntity\": [\"tumor cells\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nIntervention\nPatients' initial introduction to the Wrist Extension Dynasplint (WED) system, [Dynasplint Systems, Inc., Severna Park, MD, USA] included customized fitting (wrist length, width, and girth so that the force and counter force straps could be properly aligned) and training on donning and doffing of the device. (See figure 1.) Verbal and written instructions were provided throughout the duration of treatment for safety, general wear and care, and tension setting goals based on patient tolerance.\nFigure 1\nWrist Extension Dynasplint.\nEach patient initially wore the WED for 4-6 continuous hours at an initial tension setting of #2 (0.1 foot pounds of torque). This duration was for acclimatization to the system; then patients were instructed to wear the WED system at night while sleeping for 6-8 hours of continuous wear. After each patient was comfortable wearing the unit for one week at tension level #2, they were instructed to increase the tension level to #3 (0.3 ft lbs.) and make continual increases every two weeks. If prolonged soreness followed a session (soreness for more than 15 minutes) the patient was instructed to decrease the tension one half a setting for two days until they were comfortable wearing it for 6-8 hours at the new tension setting. The majority of all patients reached level #5 (0.8 foot pounds of torque) by the end of two months. All range of motion measurements were recorded by the prescribing clinician." }, { "role": "assistant", "content": "{\"AnatomicalEntity\": [\"Wrist\", \"wrist\"]}" } ]
science.scitldr_aic
science.scitldr_aic.317
[ { "role": "user", "content": "Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.\n\nPaper text:\n\nDeep neural networks work well at approximating complicated functions when provided with data and trained by gradient descent methods.\nAt the same time, there is a vast amount of existing functions that programmatically solve different tasks in a precise manner eliminating the need for training.\nIn many cases, it is possible to decompose a task to a series of functions, of which for some we may prefer to use a neural network to learn the functionality, while for others the preferred method would be to use existing black-box functions.\nWe propose a method for end-to-end training of a base neural network that integrates calls to existing black-box functions.\nWe do so by approximating the black-box functionality with a differentiable neural network in a way that drives the base network to comply with the black-box function interface during the end-to-end optimization process.\nAt inference time, we replace the differentiable estimator with its external black-box non-differentiable counterpart such that the base network output matches the input arguments of the black-box function.\nUsing this ``Estimate and Replace'' paradigm, we train a neural network, end to end, to compute the input to black-box functionality while eliminating the need for intermediate labels.\nWe show that by leveraging the existing precise black-box function during inference, the integrated model generalizes better than a fully differentiable model, and learns more efficiently compared to RL-based methods.\nEnd-to-end supervised learning with deep neural networks (DNNs) has taken the stage in the past few years, achieving state-of-the-art performance in multiple domains including computer vision BID25 , natural language processing BID23 BID10 , and speech recognition BID29 .\nMany of the tasks addressed by DNNs can be naturally decomposed to a series of functions.\nIn such cases, it might be advisable to learn neural network approximations for some of these functions and use precise existing functions for others.\nExamples of such tasks include Semantic Parsing and Question Answering.\nSince such a decomposition relies partly on precise functions, it may lead to a superior solution compared to an approximated one based solely on a learned neural model.\nDecomposing a solution into trainable networks and existing functions requires matching the output of the networks to the input of the existing functions, and vice-versa.\nThe input and output are defined by the existing functions' interface.\nWe shall refer to these functions as black-box functions (bbf), focusing only on their interface.\nFor example, consider the question: \"Is 7.2 greater than 4.5?\"\nGiven that number comparison is a solved problem in symbolic computation, a natural solution would be to decompose the task into a two-step process of\n(i) converting the natural language to an executable program, and\n(ii) executing the program on an arithmetic module.\nWhile a DNN may be a good fit forWe propose an alternative approach called Estimate and Replace that finds a differentiable function approximation, which we term black-box estimator, for estimating the black-box function.\nWe use the black-box estimator as a proxy to the original black-box function during training, and by that 
allow the learnable parts of the model to be trained using gradient-based optimization.\nWe compensate for not using any intermediate labels to direct the learnable parts by using the black-box function as an oracle for training the black-box estimator.\nDuring inference, we replace the black-box estimator with the original non-differentiable black-box function.\nEnd-to-end training of a solution composed of trainable components and black-box functions poses several challenges we address in this work: coping with non-differentiable black-box functions, fitting the network to call these functions with the correct arguments, and doing so without any intermediate labels.\nTwo more challenges are the lack of prior knowledge on the distribution of inputs to the black-box function, and the use of gradient-based methods when the function approximation is near perfect and gradients are extremely small.\nThis work is organized as follows: In Section 2, we formulate the problem of decomposing the task to include calls to a black-box function.\nSection 3 describes the network architecture and training procedures.\nIn Section 4, we present experiments and comparison to Policy Gradient-based RL, and to fully neural models.\nWe further discuss the potential and benefits of the modular nature of our approach in Section 6.\nInterpretability via Composability Lipton (2016) identifies composability as a strong contributor to model interpretability.\nThey define composability as the ability to divide the model into components and interpret them individually to construct an explanation from which a human can predict the model's output.\nThe Estimate and Replace approach solves the black-box interface learning problem in a way that is modular by design.\nAs such, it provides an immediate interpretability benefit.\nTraining a model to comply with a well-defined and well-known interface inherently supports model composability and, thus, directly contributes to its interpretability.\nFor example, suppose you want to let a natural language processing model interface with a WordNet service to receive additional synonym and antonym features for selected input words.\nBecause the WordNet interface is interpretable, the intermediate output of the model to the WordNet service (the words for which the model requested additional features) can serve as an explanation to the model's final prediction.\nKnowing which words the model chose to obtain additional features for gives insight to how it made its final decision.\nReusability via Composability An additional clear benefit of model composability in the context of our solution is reusability.\nTraining a model to comply with a well-defined interface induces well-defined module functionality which is a necessary condition for module reuse.\nCurrent solutions for learning using black-box functionality in neural network prediction have critical limitations which manifest themselves in at least one of the following aspects:\n(i) poor generalization,\n(ii) low learning efficiency,\n(iii) under-utilization of available optimal functions, and\n(iv) the need for intermediate labels.\nIn this work, we proposed an architecture, termed EstiNet, and a training and deployment process, termed Estimate and Replace, which aim to overcome these limitations.\nWe then showed empirical results that validate our approach.\nEstimate and Replace is a two-step training and deployment approach by which we first estimate a given black-box functionality to allow end-to-end training via back-propagation, and 
then replace the estimator with its concrete black-box function at inference time.\nBy using a differentiable estimation module, we can train an end-to-end neural network model using gradient-based optimization.\nWe use labels that we generate from the black-box function during the optimization process to compensate for the lack of intermediate labels.\nWe show that our training process is more stable and has lower sample complexity compared to policy gradient methods.\nBy leveraging the concrete black-box function at inference time, our model generalizes better than end-to-end neural network models.\nWe validate the advantages of our approach with a series of simple experiments.\nOur approach implies a modular neural network that enjoys added interpretability and reusability benefits.\nFuture Work We limit the scope of this work to tasks that can be solved with a single black-box function.\nSolving the general case of this problem requires learning of multiple black-box interfaces, along unbounded successive calls, where the final prediction is a computed function over the output of these calls.\nThis introduces several difficult challenges.\nFor example, computing the final prediction over a set of black-box functions, rather than a direct prediction of a single one, requires an additional network output module.\nThe input of this module must be compatible with the output of the previous layer, be it an estimation function at training time, or its black-box function counterpart at inference time, which belong to different distributions.\nWe reserve this area of research for future work.\nAs difficult as it is, we believe that artificial intelligence does not lie in mere knowledge, nor in learning from endless data samples.\nRather, much of it is in the ability to extract the right piece of information from the right knowledge source for the right purpose.\nThus, training a neural network to intelligibly interact with black-box functions is a leap forward toward stronger AI." }, { "role": "assistant", "content": "Training DNNs to interface w\\ black box functions w\\o intermediate labels by using an estimator sub-network that can be replaced with the black box after training" } ]
science.craftchem_ner
science.craftchem_ner.3644
[ { "role": "user", "content": "You will be presented a sentence from biomedical research text. Extract all terms that correspond to chemical entities of biological interest (ChEBI).\nReturn your identified terms in a JSON array, formatted like this: [\"Entity1\", \"Entity2\", ...].\nIf no such entities are detected, respond by providing an empty JSON array [].\n\nBiomedical context:\nTransmissionelectronmicrographsofCrx-/-photoreceptorsinthephotoreceptorlayeratP7." }, { "role": "assistant", "content": "[\"electron\"]" } ]
science.chemprot_re
science.chemprot_re.147
[ { "role": "user", "content": "Given an abstract from a research paper, your task is to identify and label relationships between chemical compounds and genes mentioned in the abstract.\n\nThe relationship type must be one of ['DOWNREGULATOR', 'SUBSTRATE', 'INDIRECT-REGULATOR', 'PART-OF', 'MODULATOR-ACTIVATOR', 'REGULATOR', 'INHIBITOR', 'COFACTOR', 'UPREGULATOR', 'ACTIVATOR', 'ANTAGONIST', 'NOT', 'INDIRECT-DOWNREGULATOR', 'SUBSTRATE_PRODUCT-OF', 'INDIRECT-UPREGULATOR', 'AGONIST', 'PRODUCT-OF', 'MODULATOR', 'DIRECT-REGULATOR', 'UNDEFINED', 'AGONIST-INHIBITOR', 'AGONIST-ACTIVATOR', 'MODULATOR-INHIBITOR'].\n\nPlease format your output as a JSON array. Each entry in the array should express a single relation, formatted as [\"<Entity_A>\", \"<RELATION_A_B>\", \"<Entity_B>\"]. If no relations can be found, please output an empty JSON array [].\n\nAbstract:\n\nTargeting protein-protein interactions within the cyclic AMP signaling system as a therapeutic strategy for cardiovascular disease. The cAMP signaling system can trigger precise physiological cellular responses that depend on the fidelity of many protein-protein interactions, which act to bring together signaling intermediates at defined locations within cells. In the heart, cAMP participates in the fine control of excitation-contraction coupling, hence, any disregulation of this signaling cascade can lead to cardiac disease. Due to the ubiquitous nature of the cAMP pathway, general inhibitors of cAMP signaling proteins such as PKA, EPAC and PDEs would act non-specifically and universally, increasing the likelihood of serious 'off target' effects. Recent advances in the discovery of peptides and small molecules that disrupt the protein-protein interactions that underpin cellular targeting of cAMP signaling proteins are described and discussed." }, { "role": "assistant", "content": "[]" } ]
science.bioasq_factoid_qa
science.bioasq_factoid_qa.401
[ { "role": "user", "content": "You will be shown biomedical passages and your task is to the answer the provided question. Provide a concise and accurate response and do not include any additional text in the response other than the answer.\n\nQuestion: What is the function of the spliceosome complex?\n\nContext: The spliceosome machinery is composed of multimeric protein complexes that generate a diverse repertoire of mRNA through coordinated splicing of heteronuclear RNAs.\nSplicing and alternate splicing are the two key biological processes that result in the generation of diverse transcript and protein isoforms in Plasmodium falciparum as well as in other eukaryotic organisms.\nSpliceosome assembly belongs to the key processes that enable splicing of mRNA and modulate alternative splicing.\nThe excision of introns from nascent eukaryotic transcripts is catalyzed by the spliceosome, a highly complex and dynamic macromolecular machine composed of RNA and protein.\nRecent advances have begun to provide exciting new insights into the dynamic interactions that govern the function of the spliceosome, the multi-megadalton complex that performs splicing." }, { "role": "assistant", "content": "The excision of introns from nascent eukaryotic transcripts is catalyzed by the spliceosome." } ]
science.chia_ner
science.chia_ner.148
[ { "role": "user", "content": "You will be shown inclusion or exclusion criteria for a clinical trial. Given these criteria, your task is to extract all unique entities of the following types: [\"Observation\", \"Condition\", \"Procedure\", \"Drug\", \"Temporal\", \"Multiplier\", \"Qualifier\", \"Negation\", \"ReferencePoint\", \"Scope\", \"Value\", \"Mood\", \"Device\", \"Visit\", \"Measurement\", \"Person\"].\n\nPlease return the output as a JSON object of the format: {\"Condition\" : [\"breast cancer\", ...], \"Device\" : [\"ports\", ...], \"Drug\" : [\"Atenolol\", ...], \"Measurement\" : [\"blood pressure\", ...], \"Mood\" : [\"suspected\", ...], \"Multiplier\" : [\"more than two\", ...], \"Negation\" : [\"no\", ...], \"Observation\" : [\"history\", ...], \"Person\" : [\"male\", ...], \"Procedure\" : [\"surgery\", ...], \"Qualifier\" : [\"severe\", ...], \"ReferencePoint\" : [\"enrollment\", ...], \"Scope\" : [\"infection\", ...], \"Temporal\" : [\"one day\", ...], \"Value\" : [\"50%\", ...], \"Visit\" : [\"ICU\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nPatients with any other primary DSM-IV psychiatric diagnosis in addition to Obsessive Compulsive Disorder.\nPatients who currently fulfil criteria for DSM-IV eating disorder, body dysmorphic disorder, current alcohol or substance abuse, or who have a lifetime history of bipolar disorder. Patients with a history of Schizophrenia and other psychotic disorders, Delirium, Dementia, and Amnestic and other cognitive disorders.\nSubjects with a concurrent Axis II Cluster A Personality Disorder\nBorderline or Antisocial Personality Disorder.\nSubjects who based on history or mental status examination have a significant risk of committing suicide, in the investigator's opinion.\nSubjects with a history of more than three adequate trials with an SSRI.\nSubjects who have had an adequate trial of pregabalin.\nSubjects who have initiated psychotherapy in the last 4 months prior to the first visit.\nSubjects who, during the course of the study, would be likely to require treatment with prohibited concomitant therapy .\nPrior use of or a known allergy or hypersensitivity to pregabalin.\nSubjects who have participated in any clinical trial within 30 days prior to entering the study, or in a clinical trial involving a psychotropic medication within the 6 months prior to entering the study.\nAny subject who has been taking benzodiazepines before entering the study who: 1) cannot tolerate being free of benzodiazepines for 4 weeks, or 2) has signs or symptoms of benzodiazepine withdrawal or rebound at the end of those 4 weeks. Should a patient entering the study, who is currently on benzodiazepines develop discontinuation symptoms with discontinuation of their benzodiazepine, we will treat these symptoms with a more gradual benzodiazepine taper. 
Study will be delayed until the patient is able to tolerate the discontinuation for 4 weeks.\nPatients with a current seizure disorder, organic brain disorder or a history of seizure disorders (except for febrile seizures in childhood).\nPatients with thyroid pathology, the treatment of which has not been stabilized for at least three months.\nPatients on neuroleptic drugs in the two months prior to study entry or cognitive behavioural therapy specific to OCD within four weeks of study entry\nPregnant or lactating females, or if sexually active and of childbearing potential, not using adequate methods of birth control.\nPatients with a history or evidence of a medical condition that would expose them to an increased risk of a significant adverse event or interfere with assessments of safety and efficacy during the trial.\nPatients receiving psychotropics of any kind, including betablockers and other anticonvulsants. Sleep medication such as oral chloral-hydrate or zopiclone are acceptable.\nPatients using any herbal psychoactive treatments, e.g. St John's Wort, Valerian, Kava Kava, L-tryptophan.\nPatients with any condition or on any therapy that, in the investigator's opinion, or as indicated in the pregabalin product label, may pose a risk to the subject.\nPatients who have had a major life event in the past three months, which in the judgement of the investigator is influencing their current condition.\nPatients having clinically significant abnormal laboratory, or ECG findings not resolved by further examinations." }, { "role": "assistant", "content": "{\"Condition\": [\"Borderline Personality Disorder\", \"organic brain disorder\", \"bipolar disorder\", \"Amnestic\", \"hypersensitivity\", \"substance abuse\", \"Obsessive Compulsive Disorder\", \"Dementia\", \"eating disorder\", \"OCD\", \"alcohol abuse\", \"body dysmorphic disorder\", \"allergy\", \"seizure disorder\", \"febrile seizures\", \"Personality Disorder\", \"psychiatric diagnosis\", \"psychotic disorders\", \"cognitive disorders\", \"Antisocial Personality Disorder\", \"thyroid pathology\", \"Delirium\", \"history of seizure disorders\", \"Schizophrenia\"], \"Device\": [], \"Drug\": [\"St John\\u0027s Wort\", \"Kava Kava\", \"zopiclone\", \"anticonvulsants\", \"Sleep medication\", \"pregabalin\", \"Valerian\", \"psychotropics\", \"neuroleptic drugs\", \"L-tryptophan\", \"betablockers\", \"chloral-hydrate\"], \"Measurement\": [\"laboratory findings\", \"ECG findings\"], \"Mood\": [], \"Multiplier\": [], \"Negation\": [\"except\", \"not\", \"acceptable\", \"in addition to\"], \"Observation\": [\"risk of committing suicide\"], \"Person\": [], \"Procedure\": [\"cognitive behavioural therapy\", \"mental status examination\", \"herbal psychoactive treatments\", \"psychotherapy\", \"treatment\"], \"Qualifier\": [\"childhood\", \"significant\", \"primary\", \"any other\", \"other\", \"oral\", \"Axis II Cluster A\", \"stabilized\", \"DSM-IV\"], \"ReferencePoint\": [\"first visit\", \"study entry\"], \"Scope\": [\"any other primary DSM-IV\", \"eating disorder, body dysmorphic disorder, current alcohol or substance abuse\", \"oral chloral-hydrate or zopiclone\", \"betablockers and other anticonvulsants\", \"allergy or hypersensitivity\", \"St John\\u0027s Wort, Valerian, Kava Kava, L-tryptophan\", \"laboratory, or ECG findings\"], \"Temporal\": [\"within four weeks of study entry\", \"in the last 4 months prior to the first visit\", \"at least three months\", \"in the two months prior to study entry\"], \"Value\": [\"significant 
abnormal\"], \"Visit\": []}" } ]
science.healthver_entailment
science.healthver_entailment.1659
[ { "role": "user", "content": "You will be shown a claim about public health, and the abstract of a biomedical research paper. Each sentence from the abstract will be on a separate line. Your task is to return a JSON object with two fields:\n\n- \"verdict\": The fact-checking verdict. If the information in the abstract supports the claim, write \"SUPPORT\". If the abstract contradicts the claim, write \"CONTRADICT\". If the abstract does not provide enough information to arrive at a verdict, write \"NEI\" (for \"not enough information\").\n- \"evidence\": An array of sentences providing evidence for the verdict. Please copy all relevant sentences verbatim from the abstract. If the verdict was \"NEI\", then return an empty array.\n\nFor instance, if the model were given the claim \"wearing masks can prevent the spread of COVID\", the output might be\n{\n \"verdict\": \"SUPPORT\",\n \"evidence\": [\"Our findings indicate that mass mask-wearing reduces the transmission rate for COVID-19.\"]\n}\n\nClaim: Hydroxychloroquine is a medication that doctors prescribe to treat a variety of conditions, including malaria, arthritis, and lupus.\n\nAbstract:\nBackground: There is no effective therapy for COVID-19.\nHydroxychloroquine (HCQ) and chloroquine (CQ) have been used for its treatment but their safety and efficacy remain uncertain.\nObjective: We performed a systematic review to synthesize the available data on the efficacy and safety of CQ and HCQ for the treatment of COVID-19.\nMethods: Two reviewers searched for published and pre-published relevant articles between December 2019 to 8th June 2020.\nThe data from the selected studies were abstracted and analyzed for efficacy and safety outcomes.\nCritical appraisal of the evidence was done by Cochrane risk of bias tool and Newcastle Ottawa scale.\nThe quality of evidence was graded as per the GRADE approach.\nResults: We reviewed 12 observational and 3 randomized trials which included 10659 patients of whom 5713 received CQ/HCQ and 4966 received only standard of care.\nThe efficacy of CQ/HCQ for COVID-19 was inconsistent across the studies.\nMeta-analysis of included studies revealed no significant reduction in mortality with HCQ use [RR 0.98 95% CI 0.66-1.46] , time to fever resolution [mean difference -0.54 days (-1.19-011)] or clinical deterioration/development of ARDS with HCQ [RR 0.90 95% CI 0.47-1.71].\nThere was a higher risk of ECG abnormalities/arrhythmia with HCQ/CQ [RR 1.46 95% CI 1.04 to 2.06].\nThe quality of evidence was graded as very low for these outcomes.\nConclusions: The available evidence suggests that CQ or HCQ does not improve clinical outcomes in COVID-19.\nWell-designed randomized trials are required for assessing the efficacy and safety of HCQ and CQ for COVID-19." }, { "role": "assistant", "content": "{\"verdict\": \"NEI\", \"evidence\": []}" } ]
science.healthver_entailment
science.healthver_entailment.4783
[ { "role": "user", "content": "You will be shown a claim about public health, and the abstract of a biomedical research paper. Each sentence from the abstract will be on a separate line. Your task is to return a JSON object with two fields:\n\n- \"verdict\": The fact-checking verdict. If the information in the abstract supports the claim, write \"SUPPORT\". If the abstract contradicts the claim, write \"CONTRADICT\". If the abstract does not provide enough information to arrive at a verdict, write \"NEI\" (for \"not enough information\").\n- \"evidence\": An array of sentences providing evidence for the verdict. Please copy all relevant sentences verbatim from the abstract. If the verdict was \"NEI\", then return an empty array.\n\nFor instance, if the model were given the claim \"wearing masks can prevent the spread of COVID\", the output might be\n{\n \"verdict\": \"SUPPORT\",\n \"evidence\": [\"Our findings indicate that mass mask-wearing reduces the transmission rate for COVID-19.\"]\n}\n\nClaim: remdesivir was superior to placebo in shortening the time to recovery in adults who were hospitalized with Covid-19 and had evidence of lower respiratory tract infection. \n\nAbstract:\nBackground Coronavirus disease 2019 (COVID-19), caused by the novel coronavirus SARS-CoV-2, has led to significant global mortality and morbidity.\nUntil now, no treatment has proven to be effective in COVID-19.\nTo explore whether the use of remdesivir, initially an experimental broad-spectrum antiviral, is effective in the treatment of hospitalized patients with COVID-19, we conducted a systematic review and meta-analysis of randomized, placebo-controlled trials investigating its use.\nMethods A rapid search of the MEDLINE and EMBASE medical databases was conducted for randomized controlled trials.\nA systematic approach was used to screen, abstract, and critically appraise the studies.\nGrading of Recommendations Assessment, Development, and Evaluation (GRADE) method was applied to rate the certainty and quality of the evidence reported per study.\nResults Two RCTs studies were identified (n=1,299).\nA fixed-effects meta-analysis revealed reductions in mortality (RR=0.69, 0.49 to 0.99), time to clinical improvement (3.95 less days, from 3.86 days less to 4.05 less days ), serious adverse events (RR=0.77, 0.63 to 0.94) and all adverse events (RR=0.87, 0.79 to 0.96).\nConclusion In this rapid systematic review, we present pooled evidence from the 2 included RCT studies that reveal that remdesivir has a modest yet significant reduction in mortality and significantly improves the time to recovery, as well as significantly reduced risk in adverse events and serious adverse events.\nIt is more than likely that as an antiviral, remdesivir is not sufficient on its own and may be suitable in combination with other antivirals or treatments such as convalescent plasma.\nResearch is ongoing to clarify and contextual these promising findings." }, { "role": "assistant", "content": "{\"verdict\": \"CONTRADICT\", \"evidence\": [\"It is more than likely that as an antiviral, remdesivir is not sufficient on its own and may be suitable in combination with other antivirals or treatments such as convalescent plasma.\"]}" } ]
science.chemprot_ner
science.chemprot_ner.505
[ { "role": "user", "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Protein\", \"Chemical\"].\n\nPlease return the output as a JSON object of the format: {\"Chemical\" : [\"Dexamethasone\", ...], \"Protein\" : [\"BRCA-1\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nEscitalopram.\nEscitalopram oxalate (S-citalopram, Lexapro), a selective serotonin re-uptake inhibitor antidepressant which is the S-enantiomer of citalopram, is in clinical development worldwide for the treatment of depression and anxiety disorders. Preclinical studies demonstrate that the therapeutic activity of citalopram resides in the S-isomer and that escitalopram binds with high affinity to the human serotonin transporter. Conversely, R-citalopram is approximately 30-fold less potent than escitalopram at this transporter. Escitalopram has linear pharmacokinetics, so that plasma levels increase proportionately and predictably with increased doses and its half-life of 27 - 32 h is consistent with once-daily dosing. In addition, escitalopram has negligible effects on cytochrome P450 drug-metabolising enzymes in vitro, suggesting a low potential for drug-drug interactions. The efficacy of escitalopram in patients with major depressive disorder has been demonstrated in multiple short-term, placebo-controlled clinical trials, three of which included citalopram as an active control, as well as in a 36-week study evaluating efficacy in the prevention of depression relapse. In these studies, escitalopram was shown to have robust efficacy in the treatment of depression and associated symptoms of anxiety relative to placebo. Efficacy has also been shown in treating generalised anxiety disorder, panic disorder and social anxiety disorder. Results also suggest that, at comparable doses, escitalopram demonstrates clinically relevant and statistically significant superiority to placebo treatment earlier than citalopram. Analysis of the safety database shows a low rate of discontinuation due to adverse events, and there was no statistically significant difference between escitalopram 10 mg/day and placebo in the proportion of patients who discontinued treatment early because of adverse events. The most common adverse events associated with escitalopram which occurred at a rate greater than placebo include nausea, insomnia, ejaculation disorder, diarrhoea, dry mouth and somnolence. Only nausea occurred in > 10% of escitalopram-treated patients." }, { "role": "assistant", "content": "{\"Chemical\": [\"citalopram\", \"Escitalopram oxalate\", \"serotonin\", \"S-enantiomer of citalopram\", \"R-citalopram\", \"Lexapro\", \"S-citalopram\", \"Escitalopram\", \"escitalopram\"], \"Protein\": [\"human serotonin transporter\", \"cytochrome P450\"]}" } ]
science.multixscience_multidoc_summarization
science.multixscience_multidoc_summarization.27096
[ { "role": "user", "content": "You are provided with the abstract of a scientific paper and the abstracts of the papers it references. Your task is to write a Related Work section for the main paper. This section should concisely summarize and integrate the key points from the cited abstracts, reflecting how they relate to the main paper's contributions and context. In your summary, use numerical citation markers like [1], [2], etc., to refer to the corresponding papers.\n\nMain Abstract:\nThis paper investigates the problem of modeling Internet images and associated text or tags for tasks such as image-to-image search, tag-to-image search, and image-to-tag search (image annotation). We start with canonical correlation analysis (CCA), a popular and successful approach for mapping visual and textual features to the same latent space, and incorporate a third view capturing high-level image semantics, represented either by a single category or multiple non-mutually-exclusive concepts. We present two ways to train the three-view embedding: supervised, with the third view coming from ground-truth labels or search keywords; and unsupervised, with semantic themes automatically obtained by clustering the tags. To ensure high accuracy for retrieval tasks while keeping the learning process scalable, we combine multiple strong visual features and use explicit nonlinear kernel mappings to efficiently approximate kernel CCA. To perform retrieval, we use a specially designed similarity function in the embedded space, which substantially outperforms the Euclidean distance. The resulting system produces compelling qualitative results and outperforms a number of two-view baselines on retrieval tasks on three large-scale Internet image datasets.\n\nCited Abstract(s):\n[1]: We are interested in large-scale image classification and especially in the setting where images corresponding to new or existing classes are continuously added to the training set. Our goal is to devise classifiers which can incorporate such images and classes on-the-fly at (near) zero cost. We cast this problem into one of learning a metric which is shared across all classes and explore k-nearest neighbor (k-NN) and nearest class mean (NCM) classifiers. We learn metrics on the ImageNet 2010 challenge data set, which contains more than 1.2M training images of 1K classes. Surprisingly, the NCM classifier compares favorably to the more flexible k-NN classifier, and has comparable performance to linear SVMs. We also study the generalization performance, among others by using the learned metric on the ImageNet-10K dataset, and we obtain competitive performance. Finally, we explore zero-shot classification, and show how the zero-shot model can be combined very effectively with small training datasets.\n\n[2]: Automatically assigning keywords to images is of great interest as it allows one to index, retrieve, and understand large collections of image data. Many techniques have been proposed for image annotation in the last decade that give reasonable performance on standard datasets. However, most of these works fail to compare their methods with simple baseline techniques to justify the need for complex models and subsequent training. In this work, we introduce a new baseline technique for image annotation that treats annotation as a retrieval problem. The proposed technique utilizes low-level image features and a simple combination of basic distances to find nearest neighbors of a given image. 
The keywords are then assigned using a greedy label transfer mechanism. The proposed baseline outperforms the current state-of-the-art methods on two standard and one large Web dataset. We believe that such a baseline measure will provide a strong platform to compare and better understand future annotation techniques.\n\n[3]: Automatic image annotation aims at predicting a set of textual labels for an image that describe its semantics. These are usually taken from an annotation vocabulary of few hundred labels. Because of the large vocabulary, there is a high variance in the number of images corresponding to different labels (\"class-imbalance\"). Additionally, due to the limitations of manual annotation, a significant number of available images are not annotated with all the relevant labels (\"weak-labelling\"). These two issues badly affect the performance of most of the existing image annotation models. In this work, we propose 2PKNN, a two-step variant of the classical K-nearest neighbour algorithm, that addresses these two issues in the image annotation task. The first step of 2PKNN uses \"image-to-label\" similarities, while the second step uses \"image-to-image\" similarities; thus combining the benefits of both. Since the performance of nearest-neighbour based methods greatly depends on how features are compared, we also propose a metric learning framework over 2PKNN that learns weights for multiple features as well as distances together. This is done in a large margin set-up by generalizing a well-known (single-label) classification metric learning algorithm for multi-label prediction. For scalability, we implement it by alternating between stochastic sub-gradient descent and projection steps. Extensive experiments demonstrate that, though conceptually simple, 2PKNN alone performs comparable to the current state-of-the-art on three challenging image annotation datasets, and shows significant improvements after metric learning.\n\n[4]: Image auto-annotation is an important open problem in computer vision. For this task we propose TagProp, a discriminatively trained nearest neighbor model. Tags of test images are predicted using a weighted nearest-neighbor model to exploit labeled training images. Neighbor weights are based on neighbor rank or distance. TagProp allows the integration of metric learning by directly maximizing the log-likelihood of the tag predictions in the training set. In this manner, we can optimally combine a collection of image similarity metrics that cover different aspects of image content, such as local shape descriptors, or global color histograms. We also introduce a word specific sigmoidal modulation of the weighted neighbor tag predictions to boost the recall of rare words. We investigate the performance of different variants of our model and compare to existing work. We present experimental results for three challenging data sets. On all three, TagProp makes a marked improvement as compared to the current state-of-the-art." }, { "role": "assistant", "content": "The data-driven image annotation approaches of [4] [2] [3] use discriminative learning to obtain a metric or a weighting of different features to improve the relevance of database images retrieved for a query. Unfortunately, the learning stage is very computationally expensive -- for example, in the TagProp method of [4] , it scales quadratically with the number of images. 
In fact, the standard datasets used for image annotation by [2] [4] [3] consist of 5K-20K images and have 260-290 tags each. By contrast, our datasets (shown in Figure ) range in size from 71K to 270K and have tag vocabularies of size 1K-20K. While it is possible to develop scalable metric learning algorithms using stochastic gradient descent (e.g., [1] ), our work shows that learning a linear embedding using CCA can serve as a simpler attractive alternative." } ]
science.mslr2022_cochrane_multidoc_summarization
science.mslr2022_cochrane_multidoc_summarization.2151
[ { "role": "user", "content": "You will be provided with the titles and abstracts of a collection of medical RCT reports studying a clinical question -- for instance, the efficacy of an intervention on some medical outcome. Please write a few sentences synthesizing the findings of these reports, which could serve as a conclusion section for a systematic review article. Your response should not include any other text. The RCT reports are below.\n\n\n\nTitle 1:\nRemoxipride versus thioridazine in elderly psychotic patients.\nAbstract 1:\nA total of 18 hospitalized elderly psychotic patients in need of antipsychotic treatment took part in a double-blind exploratory study concerning the safety, tolerability and efficacy of remoxipride and thioridazine. Their median age was 78 years (range 66-90 years). Over the study period of 6 weeks, 9 patients received remoxipride and 9 thioridazine. Both drugs were given in a dosage range of 50-200 mg/day. The median total score on the Brief Psychiatric Rating Scale at the start of active treatment was 18 in the remoxipride group and 24 in the thioridazine group. The scores were reduced to 6 and 7, respectively, at the last rating. The Clinical Global Impression at last rating showed 4 of the 9 remoxipride patients to be very much improved compared to 1 of the 9 thioridazine patients. Apart from three reports of severe drowsiness in the thioridazine group, the incidence of treatment-emergent adverse effects was low in both groups. No clinically significant aberrations were seen in laboratory tests or cardiovascular data. In conclusion, remoxipride seemed to be well tolerated in this group of patients and its antipsychotic efficacy in the doses used is promising.\n\n\nTitle 2:\nInternational multisite double-blind trial of the atypical antipsychotics risperidone and olanzapine in 175 elderly patients with chronic schizophrenia.\nAbstract 2:\nThe authors compared the effects of the two most commonly used atypical antipsychotics, risperidone and olanzapine, in elderly patients with schizophrenia.\n In an 8-week, international, double-blind study, patients (outpatients, hospital inpatients, and residents of nursing or boarding homes) were randomly assigned to receive risperidone (1 mg to 3 mg/day) or olanzapine (5 mg to 20 mg/day). The main outcome measures were changes in Positive and Negative Syndrome Scale (PANSS) total scores and rates of extrapyramidal symptoms (EPS).\n Subjects were 175 patients age 60 years or over with schizophrenia or schizoaffective disorder. The mean duration of illness was 36.5 years. Median doses were 2 mg/day of risperidone and 10 mg/day of olanzapine. PANSS total scores and four of the five PANSS factor scores (positive symptoms, negative symptoms, disorganized thoughts, and anxiety/depression) improved significantly at all time-points and at endpoint in both groups; between-treatment differences were not significant. EPS-related adverse events were reported by 9.2% of patients in the risperidone group and 15.9% in the olanzapine group; the between-treatment difference was not significant. Total scores on the Extrapyramidal Symptom Rating Scale were reduced in both groups at endpoint; between-treatment differences were not significant. 
Clinically relevant weight gain was seen in both groups, but was significantly less frequent in risperidone patients than in olanzapine patients.\n Stable elderly patients with chronic schizophrenia receiving appropriate doses of risperidone or olanzapine over an 8-week period experienced significant reductions in the severity of psychotic and extrapyramidal symptoms, with a relatively low risk of side effects." }, { "role": "assistant", "content": "Antipsychotics may be widely used in the treatment of elderly people with schizophrenia, however, based on this systematic review, there are little robust data available to guide the clinician with respect to the most appropriate drug to prescribe. Clearly reported large short, medium and long-term randomised controlled trials with participants, interventions and primary outcomes that are familiar to those wishing to help elderly people with schizophrenia are long overdue." } ]
science.scientific_papers_summarization_single_doc_pubmed
science.scientific_papers_summarization_single_doc_pubmed.48256
[ { "role": "user", "content": "You will be presented with the full text of a biomedical research paper. Please write an abstract for this paper. Your response should include the abstract and no additional text.\n\nPaper text:\nthis investigation was part of a trial evaluating a(h1n1)pdm09 transmission among household contacts , conducted during the first wave of the 2009 pandemic ( may july 2009 ) in quebec city , quebec , canada ( 5 ) .\nclinical data and samples were serially obtained from index case - patients and their contacts in 42 households .\nnasopharyngeal secretions were collected from all participants during the first visit and tested by 2 different assays : a conventional reverse transcription pcr ( rt - pcr ) targeting the hemagglutinin gene of a(h1n1)pdm09 virus ( 6 ) and a universal rt - pcr targeting the matrix gene of all influenza a viruses ( 7 ) .\nblood was collected from persons > 7 years of age at their initial visit ( acute - phase sample ) and 34 weeks later ( convalescent - phase sample ) .\nserum was tested by microneutralization assay according to world health organization standard protocols with minor modifications ( 8) . this serologic study comprised 122 persons from the 42 households .\ntwenty - four persons were rt - pcr confirmed index case - patients ( median age 15 years , range 756 years ) , and 98 were household contacts ( median age 30.5 years , range 761 years ) , of whom 34 also were positive for a(h1n1)pdm09 virus by rt - pcr . for 67 patients ( median age 20 years , range 761 years ) , a(h1n1)pdm09 was confirmed by rt - pcr and/or microneutralization assay : 10 ( 15% ) by rt - pcr alone , 9 ( 13% ) by microneutralization assay alone , and 48 ( 72% ) by rt - pcr and microneutralization assay .\nof the 67 a(h1n1)pdm09-infected persons , 8 ( 12% ) seroconverted to a / brisbane/59/2007 ( a[h1n1 ] vaccine strain for 200809 ) ( table a1 ) .\nseven a / brisbane/59/2007 seroconverters were rt - pcr positive and a(h1n1)pdm09 seroconverters , and 1 was rt - pcr positive and a a(h1n1)pdm09 nonseroconverter . in comparison , 1 ( 2% ) of the 55 a(h1n1)pdm09-negative patients seroconverted to a / brisbane/59/2007 ( fisher exact test , p<0.05 ) .\nseasonal influenza viruses were not circulating in the province of quebec at the time of this study .\nonly 1 of 9 a / brisbane/59/2007 seroconverters had previously received the inactivated 200809 seasonal influenza vaccines .\nno participants were vaccinated against a(h1n1)pdm09 virus , and none received antiviral therapy or prophylaxis .\nwe then assessed whether this cross - reactivity was limited to the a / brisbane/59/2007 strain , the most recent seasonal a ( h1n1 ) virus to have circulated before a(h1n1)pdm09 virus , or whether it was broader . to this end , we tested all paired serum samples against an older seasonal a ( h1n1 ) influenza virus , i.e. , a / new caledonia/20/1999 ( h1n1 vaccine strain used during the 200001 through 200607 seasons ) , and a past a(h3n2 ) virus , i.e. , a / panama/7/2004 ( h3n2 vaccine component used during the 200001 through 200304 seasons ) . seven ( 10% ) a(h1n1)pdm09 virus \npositive persons also seroconverted to a / new caledonia/20/1999(h1n1 ) , all of whom were rt - pcr - positive and a(h1n1)pdm09 virus seroconverters , whereas none of the a(h1n1)pdm09 virus negative persons seroconverted to this older strain ( p<0.05 ) . 
on the other hand , seroconversion rates for a / panama/7/2004(h3n2 ) did not differ significantly between a(h1n1)pdm09 virus - positive ( 9% ) and -negative ( 5% ) patients .\nin addition , we identified 4 ( 6% ) persons with laboratory - confirmed a(h1n1)pdm09 virus infections who seroconverted to both seasonal ( h1n1 ) viruses and 2 ( 3% ) who seroconverted to a / brisbane/59/2007(h1n1 ) and a / panama/2007/99(h3n2 ) ( table ) .\nparticipant 44c , a household contact of a confirmed case - patient with a negative rt - pcr for a(h1n1)pdm09 and low antibody titers in the convalescent - phase serum , showed cross - neutralizing antibodies meeting 4-fold seroconversion criteria for a / brisbane/59/2007(h1n1 ) and a / panama/7/2004(h3n2 ) . * rt - pcr , reverse transcription pcr ; ari , acute respiratory illness ( i.e. , presence of 2 of the following signs / symptoms : fever [ 37.8c ] or feverishness , cough , sore throat , or rhinorrhea ) ; ili , influenza - like illness ( i.e. , fever and cough and/or sore throat ) ; + , positive ; , negative . received seasonal vaccine in 200809 .\nduring this study , the only influenza virus detected in the province of quebec was a(h1n1)pdm09 virus . yet , 8 ( 12% ) of 67 a(h1n1)pdm09 virus \ninfected persons in our study had a concomitant significant increase in microneutralization antibody titers against the most recent a / brisbane/59/2007(h1n1 ) strain , of whom 5 persons had 48-fold , 2 had 16-fold , and 1 had 32-fold rises .\nin addition , 4 of these 8 persons also seroconverted to an older a / new caledonia/20/1999(h1n1 ) virus , of whom 3 persons had 4-fold and 1 had 16-fold rises between acute - phase and convalescent - phase serum .\nthe cross - reactivity observed in the study population does not seem to be completely subtype specific because some persons also showed rising titers against an old influenza a ( h3n2 ) strain ( a / panama/2007/99 ) , although in this case , seroconversion rates did not differ significantly between a(h1n1)pdm09 virus positive and negative persons .\na recent study in hong kong of 28 paired serum samples showed that infection with the pandemic virus could broaden cross - reactive immunity to other recent subtype h1 swine viruses .\nin contrast to our study , perhaps because of the small number of participants or older age of a(h1n1)pdm09 virus positive case - patients ( 30.5 vs. 20 years ) , no cross - reactive response was shown against the more recent seasonal influenza virus a / hk/400599/2008(h1n1 ) ( 9 ) . we could not determine the extent to which past seasonal influenza vaccinations and/or natural infections contributed to the generation of cross - neutralizing antibodies to a(h1n1)pdm09 and the seasonal influenza strains .\nour next step will be to investigate potential cross - neutralizing determinants between these seasonal and pandemic viruses .\nneutralizing antibodies that bind to the stalk region of ha2 have been shown to confer broad cross - neutralizing activity against several subtypes of viruses across clades and to provide protection in animal models ( 10,11 ) .\nsix of the 8 persons who seroconverted to a / brisbane/59/2007(h1n1 ) by microneutralization assay did not meet the 4-fold criteria by hemagglutinin inhibition assay ( data not shown ) , suggesting that the cross - reactivity might result from conserved epitopes in the stalk region of ha2 or in other proteins .\ngreenbaum et al . 
recently showed that , overall , 49% of the epitopes reported in recently circulating seasonal a ( h1n1 ) strains were conserved in the a(h1n1)pdm09 virus ( 12 ) . specifically ,\n31% , 41% , and 69% of the b - cell , cd4 and cd8 t - cell epitopes , respectively , were conserved . natural infection with a(h1n1)pdm09 virus also could have elicited cross - reactive responses against internal components of older viral strains ( 13 ) .\nwe find intriguing the elaboration of cross - reactive neutralizing antibodies to more recently circulating influenza a ( h1n1 ) strains as a result of novel a(h1n1)pdm09 virus infection , whereas the reverse has not generally been evident in serosurveys for cross - reactive a(h1n1)pdm09 antibody , except in elderly persons who had substantial cross - reactive antibodies to a(h1n1)pdm09 virus ( 14,15 ) .\nunfortunately , because of the small sample size of our study and lack of serum from children < 7 years of age , we could not assess whether cross - reactivity was an age - dependent phenomenon .\nhowever , all but 3 of the cross - reactive seroconverters ( 13/16 [ 81% ] ) were 730 years of age . to explore preferential responses to the original infecting virus ( original antigenic sin ) , we assessed cross - reactivity for the older a / new caledonia/99(h1n1 ) virus that was potentially the priming antigen for some of our younger participants or was closely related to the priming antigen in older participants .\nhowever , seroconversion rates for a / new caledonia ( 10% ) were comparable to those of the more recent a / brisbane/59/2007(h1n1 ) strain ( 12% ) , and thus we could not distinguish original antigenic sin on that basis .\nthese antigens may have been too closely related antigenically to demonstrate that in this young cohort .\nour work supports the notion that natural a(h1n1)pdm09 virus infection induces broad heterosubtypic ( h1 and even h3 ) responses .\nit also highlights the need for further investigation of the mechanisms behind cross - protection because they could be keys to creating improved influenza vaccines with broader protection ." }, { "role": "assistant", "content": "we looked for cross - reactive antibodies in 122 persons with paired serum samples collected during the 2009 pandemic of influenza virus a(h1n1)pdm09 . \n eight ( 12% ) of 67 persons with a(h1n1)pdm09 infection confirmed by reverse transcription pcr and/or serology also seroconverted to the seasonal a / brisbane/59/2007 ( h1n1 ) virus , compared with 1 ( 2% ) of 55 a(h1n1)pdm09-negative persons ( p<0.05 ) ." } ]
science.medmentions_ner
science.medmentions_ner.507
[ { "role": "user", "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"HealthCareActivity\", \"InjuryOrPoisoning\", \"BodySubstance\", \"IntellectualProduct\", \"AnatomicalStructure\", \"SpatialConcept\", \"Chemical\", \"Bacterium\", \"MedicalDevice\", \"Organization\", \"BiomedicalOccupationOrDiscipline\", \"Finding\", \"BiologicFunction\", \"Virus\", \"ResearchActivity\", \"ClinicalAttribute\", \"PopulationGroup\", \"Eukaryote\", \"BodySystem\", \"Food\", \"ProfessionalOrOccupationalGroup\"].\n\nPlease return the output as a JSON object of the format: {\"Virus\": [\"HIV\", ...], \"Bacterium\": [\"MRSA\", ...], \"AnatomicalStructure\": [\"Lung\", ...], \"BodySystem\": [\"CNS\", ...], \"BodySubstance\": [\"Serum\", ...], \"Finding\": [\"Headache\", ...], \"InjuryOrPoisoning\": [\"Fracture\", ...], \"BiologicFunction\": [\"Death\", ...], \"HealthCareActivity\": [\"Biopsy\", ...], \"ResearchActivity\": [\"Clinical trial\", ...], \"MedicalDevice\": [\"Lenses\", ...], \"SpatialConcept\": [\"Camps\", ...], \"BiomedicalOccupationOrDiscipline\": [\"Forensic medicine\", ...], \"Organization\": [\"WHO\", ...], \"ProfessionalOrOccupationalGroup\": [\"Provider\", ...], \"PopulationGroup\": [\"Swimmers\", ...], \"Chemical\": [\"Gold\", ...], \"Food\": [\"Rice\", ...], \"IntellectualProduct\": [\"RPAM\", ...], \"ClinicalAttribute\": [\"Biomarker\", ...], \"Eukaryote\": [\"Dogs\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nLong noncoding RNA XIST acts as an oncogene in non-small cell lung cancer by epigenetically repressing KLF2 expression Recently, long noncoding RNAs (lncRNAs) have been identified as critical regulators in numerous types of cancers, including non-small cell lung cancer (NSCLC). X inactivate-specific transcript (XIST) has been found to be up-regulated and acts as an oncogene in gastric cancer and hepatocellular carcinoma, but little is known about its expression pattern, biological function and underlying mechanism in NSCLC. Here, we identified XIST as an oncogenic lncRNA by driving tumorigenesis in NSCLC. We found that XIST is over-expressed in NSCLC, and its increased level is associated with shorter survival and poorer prognosis. Knockdown of XIST impaired NSCLC cells proliferation, migration and invasion in vitro, and repressed the tumorigenicity of NSCLC cells in vivo. Mechanistically, RNA immune-precipitation (RIP) and RNA pull-down experiment demonstrated that XIST could simultaneously interact with EZH2 to suppress transcription of its potential target KLF2. Additionally, rescue experiments revealed that XIST's oncogenic functions were partly depending on silencing KLF2 expression. Collectively, our findings expound how XIST over-expression endows an oncogenic function in NSCLC."
}, { "role": "assistant", "content": "{\"AnatomicalStructure\": [\"oncogene\", \"oncogenic\", \"regulators\"], \"Bacterium\": [], \"BiologicFunction\": [\"silencing\", \"biological function\", \"expressed\", \"cancers\", \"gastric cancer\", \"cells proliferation\", \"hepatocellular carcinoma\", \"migration\", \"expression\", \"NSCLC\", \"invasion\", \"tumorigenesis\", \"tumorigenicity\", \"non-small cell lung cancer\", \"transcription\", \"epigenetically repressing\", \"up-regulated\"], \"BiomedicalOccupationOrDiscipline\": [], \"BodySubstance\": [], \"BodySystem\": [], \"Chemical\": [\"lncRNAs\", \"Long noncoding RNA\", \"lncRNA\", \"RNA\", \"EZH2\", \"long noncoding RNAs\", \"XIST\", \"X inactivate-specific transcript\", \"KLF2\"], \"ClinicalAttribute\": [], \"Eukaryote\": [], \"Finding\": [\"shorter survival\", \"poorer prognosis\", \"findings\"], \"Food\": [], \"HealthCareActivity\": [\"immune-precipitation\", \"RIP\"], \"InjuryOrPoisoning\": [], \"IntellectualProduct\": [], \"MedicalDevice\": [], \"Organization\": [], \"PopulationGroup\": [], \"ProfessionalOrOccupationalGroup\": [], \"ResearchActivity\": [\"experiments\", \"Knockdown\", \"pull-down experiment\"], \"SpatialConcept\": [\"pattern\"], \"Virus\": []}" } ]
science.chia_ner
science.chia_ner.111
[ { "role": "user", "content": "You will be shown inclusion or exclusion criteria for a clinical trial. Given these criteria, your task is to extract all unique entities of the following types: [\"Observation\", \"Condition\", \"Procedure\", \"Drug\", \"Temporal\", \"Multiplier\", \"Qualifier\", \"Negation\", \"ReferencePoint\", \"Scope\", \"Value\", \"Mood\", \"Device\", \"Visit\", \"Measurement\", \"Person\"].\n\nPlease return the output as a JSON object of the format: {\"Condition\" : [\"breast cancer\", ...], \"Device\" : [\"ports\", ...], \"Drug\" : [\"Atenolol\", ...], \"Measurement\" : [\"blood pressure\", ...], \"Mood\" : [\"suspected\", ...], \"Multiplier\" : [\"more than two\", ...], \"Negation\" : [\"no\", ...], \"Observation\" : [\"history\", ...], \"Person\" : [\"male\", ...], \"Procedure\" : [\"surgery\", ...], \"Qualifier\" : [\"severe\", ...], \"ReferencePoint\" : [\"enrollment\", ...], \"Scope\" : [\"infection\", ...], \"Temporal\" : [\"one day\", ...], \"Value\" : [\"50%\", ...], \"Visit\" : [\"ICU\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nASA 1 \nASA 2 \nPts have current treatment plan at OHSU for extraction of some or all of remaining teeth and scheduled for delivery of a removable appliance post extraction \nTeeth used are able to be isolated with rubber dam \nUnderstand and sign consent form" }, { "role": "assistant", "content": "{\"Condition\": [\"ASA 2\", \"ASA 1\"], \"Device\": [], \"Drug\": [], \"Measurement\": [], \"Mood\": [\"scheduled for\"], \"Multiplier\": [], \"Negation\": [], \"Observation\": [\"treatment plan at OHSU\"], \"Person\": [], \"Procedure\": [], \"Qualifier\": [], \"ReferencePoint\": [], \"Scope\": [], \"Temporal\": [], \"Value\": [], \"Visit\": []}" } ]
science.chemsum_single_document_summarization
science.chemsum_single_document_summarization.103175
[ { "role": "user", "content": "You will be shown the title of a chemistry paper, together with sections excerpted from the body of the paper. The paper sections may include HTML tags like `<p>` and `</p>`; please ignore these. Your task is to write an abstract for the paper. Your response should include the abstract and no other text.\n\nTitle:\nPhotodissociation Mass Spectrometry Accurately Localizes Sites of Backbone Deuteration in Peptides\n\nPaper sections:\n\n<p>Detailed insight into the dynamic conformation of proteins is critical for understanding and deciphering the molecular mechanisms of most cellular processes. 1 Hydrogen/deuterium exchange mass spectrometry (HDX-MS) has become a popular method for studying the conformation, dynamics and interactions of proteins in solution under a variety of conditions. [2][3][4] The method tracks the H/D exchange of protein backbone amides and thus provides a sensitive analytical probe of the hydrogen bonding networks and the solvation of local higher-order structure. An experimental approach to increase the spatial resolution of an HDX-MS experiment is to subject peptides (bottomup) or intact proteins (top-down) to gas-phase fragmentation (MS/MS) inside the mass spectrometer prior to mass analysis. 5 This conceptionally simple HDX-MS/MS approach, however, requires that the deuterium labeling of backbone amides from solution remains unperturbed during ionization and gas-phase dissociation. [6][7][8] The common fragmentation technique of collision induced dissociation (CID) as well as infrared multiphoton dissociation (IRMPD), increase the internal energy of a given ion on a relatively slow timescale (µs-ms) through multiple collisions or absorption of low energy photons, thus mobilizing charging protons prior to covalent bond dissociation. 9,10 Both techniques have been shown to randomize the selective deuterium labeling of backbone amides in peptides and proteins, here referred to as H/D scrambling. 7,8 In contrast, fast (picosecond) fragmentation by electron capture dissociation (ECD) was found to proceed without H/D scrambling in peptides and proteins. 11,12 The related technique of electron transfer dissociation (ETD) also proceeds without H/D scrambling, 13 and was in a proof-of-concept study shown to provide a more accessible LC-compatible alternative to ECD, for high-resolution HDX-MS/MS analysis of proteins. 14 Subsequent applications to other proteins and protein-ligand/protein-protein complexes have further underscored the use of ETD to increase the spatial resolution of HDX-MS experiments. [15][16][17][18] Use of a more efficient fragmentation technique could greatly empower the HDX-MS/MS approach. Ultraviolet photodissociation (UVPD) has been shown to provide efficient and highly informative fragmentation of protonated peptides and proteins, producing primarily a/x ions, but also c/z and b/y ions as well as side-chain cleavages (v, w, and d ions). UVPD of peptides and proteins have been implemented at different wavelengths ranging from 157 nm to 266 nm. At wavelengths above 266 nm, excitation happens primarily for peptides containing aromatic residues. 19 However, when using higher energy UV photons at lower wavelengths, the primary accessible chromophore becomes the amide group (with transitions to π* either from π at 193 nm or from n at 213 nm). 
9,20,21 The absorption of a single photon at these wavelengths deposits a high amount of energy (7.9 eV at 157 nm, 6.4 eV at 193 nm, and 5.8 eV at 213 nm) which exceeds peptide bond energies of 3-4 eV 20 resulting in comprehensive fragmentation of both peptides and intact proteins, largely independent of ion charge and size. 9 Furthermore, UVPD occurs without charge-reduction, thus greatly enabling detection of both N-and C-terminal fragments following dissociation. 20 The increase in availability and ease of implementation of UV lasers on commercial mass spectrometers, combined with the ability of UV lasers to deliver very short high-energy pulses that deposits high energies into peptide and protein ions on a LC-compatible timescale, makes UVPD a promising method to empower both bottom-up and top-down HDX-MS experiments. It is however critical to first establish if fragment ions from UVPD retain the selective solution-phase deuterium labeling of backbone amides in a polypeptide.</p><p>Here we investigate the occurrence of H/D scrambling at backbone amide positions during UVPD of a regioselective deuterium labeled 12-mer peptide, peptide P1 (HHHHHHIIKIIK). 22 For this peptide, differences in the intrinsic H/D exchange rates of the constituent amino acids allow deuterium to be retained solely in the backbone amide residues in the C-terminal part (-IIKIIK) for several minutes in solution. 22 Upon ionization, gas-phase fragmentation and mass analysis, the deuterium content can be measured for both intact precursor ion and MS/MS fragment ions, which enables a sensitive quantitation of the level of H/D scrambling. UVPD was performed on peptide P1 using a 213 nm laser operated at 1 kHz and delivering 2.5 µJ/pulse, which was implemented on a commercially available ion mobility enabled high-resolution Q-TOF mass spectrometer (Figure 1a). The triply-protonated peptide P1 was mass selected in the quadrupole, trapped in the trap T-Wave ion guide and photoactivated for 600 ms. The photofragments are then extracted to the TOF for mass analysis (Figure S-1). UVPD resulted in multiple singly or doubly charged fragment ions, primarily a/x and c/z ions of high abundance, b/y ions of low abundance, a single d ion, and a His immonium ion/a1 ion (Figure 1b-d). The relative abundance of the different fragment ions is plotted in Figure S-2. The deuterium content of all fragment ions of sufficient signal-to-noise ratio (S/N>15) were used for quantitating H/D scrambling (Figure 1d in bold). Further details concerning experimental settings and data analysis are provided in the Supporting Information.</p><p>To avoid collisional activation and thus H/D scrambling in the front-end of the mass spectrometer prior to UVPD of the labeled peptide, a systematic optimization of ion source parameters was performed to facilitate gentle ionization and ion transport. ETD has been shown to proceed inherently without H/D scrambling. 13,23 By performing ETD in the trap T-Wave of the Q-TOF instrument we quantitated the influence of several instrument parameters on H/D scrambling for the triply-protonated peptide P1 prior to fragmentation (Figure S-3). Four parameters had a significant impact on H/D scrambling (Table S- 1). At elevated settings, the \"sampling cone\" induced complete randomization of the deuterium label of peptide P1, in good correlation to earlier work. 
24 The other three important parameters \"cone gas\", \"source offset\", and \"StepWave 2 offset\", the latter two unique to the StepWave ion guide of this Q-TOF instrument, induced a moderate increase in scrambling of 15-25% at elevated settings (Figure S-3). The four parameters were tuned to define \"mild\" ion source settings that allowed maximum ion transmission while retaining minimal scrambling (<10%) in ETD-generated c/z ions. \"Harsh\" ion source settings were similarly defined that induced scrambling in ETD-generated c/z ions (Table S-1). The harsh conditions were obtained from the mild conditions by increasing the electrostatic potential of the sampling cone from 20 V to 105 V. UVPD was performed for labeled peptide P1 under both mild and harsh conditions (Figure 2). The deuterium content of a, x, and c ions, as well as a single z ion resulting from UVPD at mild conditions was plotted as a function of fragment ion length and compared to theoretical models in case of either 0% or 100% scrambling (Figure 2). Results from one a ion series are shown in Figure 2d, with data from replicate measurements and other fragment ion types shown in Figure S-4. Evidently, a, c, x, and z fragment ions from UVPD at mild ion source conditions reproducibly retained the regioselective labeling pattern of the intact peptide from solution. The deuterium contents of all detected fragment ions correlated closely with the theoretical model for 0% scrambling (<10% scrambling). The fact that both a/x and c/z ions were generated by UVPD at 213 nm without H/D scrambling suggests that the a/x and c/z fragments were generated on a comparably fast timescale through mechanisms that do not involve reversible protonation or hydrogen migration of backbone amide groups. In contrast, the deuterium content of a/x and c/z fragment ions generated through UVPD of labeled peptide P1 at harsh ion source conditions (Figures 2e and S-5a), correlated closely with the model for 100% scrambling (>85% scrambling). This control experiment validated the applied analytical approach by demonstrating the ability of UVPD-generated fragment ions of the selectively labeled peptide to report the occurrence of H/D scrambling.</p><p>We next compared the occurrence of H/D scrambling in labeled peptide P1 following UVPD, ETD, or CID performed under identical mild ion source conditions (Figures 3 and S-5). ETD yielded c/z fragment ions with a similar low degree of H/D scrambling (8% ± 3%) to a/x and c/z fragment ions from UVPD. We assign the minimal residual scrambling level detected (<10%) to method uncertainty 6 and minor inherent H/D scrambling occurring during ion transmission in the mass spectrometer at mild ion source conditions. Comparative UVPD and ETD experiments performed at harsh ion source conditions both produced full scrambling at >90% (Figure S-5). Furthermore, CIDgenerated a/x and b/y fragment ions of labeled peptide P1 displayed a complete randomization of the solution-phase labeling pattern (>90% scrambling), as previously shown for b/y ions. 7 Notably, UVPD gave a/x fragment ions with negligible H/D scrambling, demonstrating differences in the mechanism by which a and x ions are generated in UVPD and CID. In conclusion, by measuring the deuterium content of a/x and c/z fragment ions of the selectively model peptide, our results show that UVPD at 213 nm can proceed without H/D scrambling. 
By demonstrating the ability of UVPD to accurately measure the deuterium levels of individual amide hydrogens in a peptide, our results show the feasibility of using UVPD to increase the spatial resolution of an HDX-MS experiment. Distinct mechanisms may give rise to the diverse fragment ions observed during UVPD at 213 nm (e.g. a/x, c/z, b/y). Here we observed that both singly and doubly protonated a/x and c/z ions were generated without randomization of the deuterium labeling of backbone amide groups, supporting that such fragment ions are generated by direct mechanisms occurring promptly after absorption of the high-energy photon (5.8 eV) on a microsecond to nanosecond timescale. 19 UVPD-generated a/x and c/z ions are thus formed faster than the rate of proton mobilization in the activated peptide and the underlying radical-based mechanism(s) do not require migration of hydrogens to facilitate bond cleavage. Absorption of photons of even higher energy during UVPD at 193 or 157 nm (6.4-7.9 eV) is therefore likely also to yield a/x and c/z ions without H/D scrambling, although this remains to be validated experimentally. Whilst the low abundance b/y ions formed during 213 nm UVPD of peptide P1 precluded reliable quantitative analysis, our results are in line with mechanistic studies of UVPD at 157 nm suggesting that all fragment ions from UVPD using short-wavelength lasers originate from fast radical based mechanisms. 25 Our findings indicate that UVPD at 213 nm offers a powerful complement to ETD in bottom-up HDX-MS/MS workflows, with the potential to provide alternative fragmentation, with less dependency on precursor size and charge state, 26 and even allow the extraction of site-specific information for singly-charged peptides. 21 Furthermore, the current implementation of UVPD on a high resolution Q-TOF capable of ion mobility separation 27,28 should enhance extraction of residue-specific deuterium levels from overlapping fragment ions from UVPD of larger peptides and intact proteins in both bottom-up and top-down HDX-MS/MS applications.</p>" }, { "role": "assistant", "content": "Hydrogen/deuterium exchange mass spectrometry (HDX-MS) is now positioned as a routinely used technique to inform on protein structure, dynamics, and interactions. Localizing the deuterium content on a single residue basis increases the spatial resolution of this technique enabling detailed structural analysis. Here we investigate the use of ultraviolet photodissociation (UVPD) at 213 nm to localize incorporated deuterium with single residue resolution in HDX-MS experiments. Using a selectively labeled peptide, we show that UVPD occurs without H/D scrambling as the peptide probe accurately retains its solution-phase deuterium labeling pattern. Our results indicate that UVPD provides an attractive alternative to electron mediated dissociation to increase the spatial resolution of the HDX-MS experiment, combining high fragmentation efficiency, high fragment ion diversity, and low chargestate de-pendency." } ]
science.chemprot_re
science.chemprot_re.244
[ { "role": "user", "content": "Given an abstract from a research paper, your task is to identify and label relationships between chemical compounds and genes mentioned in the abstract.\n\nThe relationship type must be one of ['DOWNREGULATOR', 'SUBSTRATE', 'INDIRECT-REGULATOR', 'PART-OF', 'MODULATOR-ACTIVATOR', 'REGULATOR', 'INHIBITOR', 'COFACTOR', 'UPREGULATOR', 'ACTIVATOR', 'ANTAGONIST', 'NOT', 'INDIRECT-DOWNREGULATOR', 'SUBSTRATE_PRODUCT-OF', 'INDIRECT-UPREGULATOR', 'AGONIST', 'PRODUCT-OF', 'MODULATOR', 'DIRECT-REGULATOR', 'UNDEFINED', 'AGONIST-INHIBITOR', 'AGONIST-ACTIVATOR', 'MODULATOR-INHIBITOR'].\n\nPlease format your output as a JSON array. Each entry in the array should express a single relation, formatted as [\"<Entity_A>\", \"<RELATION_A_B>\", \"<Entity_B>\"]. If no relations can be found, please output an empty JSON array [].\n\nAbstract:\n\nA comparison of the effects of nabumetone vs meloxicam on serum thromboxane B2 and platelet function in healthy volunteers. AIMS: To compare the effects of nabumetone and meloxicam, two cyclo-oxygenase-2 (COX-2) preferential nonsteroidal anti-inflammatory drugs (NSAIDs), on platelet COX-1 activity and platelet function. METHODS: Twelve healthy volunteers (3 male, 9 female, median age 22 years) participated in an open, randomized, cross-over trial of nabumetone 1000 mg twice daily vs meloxicam 7.5 mg twice daily during 1 week with 2 weeks wash-out. After a second 2 week wash-out period, one dose of indomethacin 50 mg was given as a positive control to check for NSAID induced inhibition of platelet function. COX-1 inhibition was measured as percentage inhibition of serum TXB2 generation in clotting whole blood, and as closure time with use of the platelet function analyser PFA-100. Data are reported as median with range. Paired variables were analysed using Wilcoxons signed rank test. RESULTS: TXB2 levels decreased significantly after all three medications, but percentage inhibition after nabumetone and indomethacin (88% and 97%, respectively) was significantly higher than after meloxicam (63%) (P<0.05). Closure times increased significantly after administration of all three medications (P<0.05). Increases in closure time after administration did not differ between nabumetone and meloxicam (24% and 14%, respectively), but were significantly larger after indomethacin administration (63%) (P<0.01). CONCLUSIONS: In the maximum registered dosage, nabumetone inhibits thromboxane production much more than meloxicam, signifying less COX-2 selectivity of the former. However, both nabumetone and meloxicam cause only minor impairment in platelet function in comparison with indomethacin and the difference between them is not significant." }, { "role": "assistant", "content": "[[\"TXB2\", \"PRODUCT-OF\", \"COX-1\"], [\"nabumetone\", \"DOWNREGULATOR\", \"COX-2\"]]" } ]
science.scireviewgen_multidoc_summarization
science.scireviewgen_multidoc_summarization.13546
[ { "role": "user", "content": "You are presented with multiple paper abstracts, alongside the associated literature review and chapter titles. Given the provided information, you are going to do a literature review. Please generate a coherent literature review that synthesizes the content, draws connections between different works, highlights major findings, and identifies any gaps in the existing research.\n\nLiterature Review Title: A comparative survey of Personalised Information Retrieval and Adaptive Hypermedia techniques \nChapter Title: Information Access Paradigms of PIR and AH \n1. Abstract of Cited Paper (BIB001): As information becomes more ubiquitous and the demands that searchers have on search systems grow, there is a need to support search behaviors beyond simple lookup. Information seeking is the process or activity of attempting to obtain information in both human and technological contexts. Exploratory search describes an information-seeking problem context that is open-ended, persistent, and multifaceted, and information-seeking processes that are opportunistic, iterative, and multitactical. Exploratory searchers aim to solve complex problems and develop enhanced mental capacities. Exploratory search systems support this through symbiotic human-machine relationships that provide guidance in exploring unfamiliar information landscapes. Exploratory search has gained prominence in recent years. There is an increased interest from the information retrieval, information science, and human-computer interaction communities in moving beyond the traditional turn-taking interaction model support d by major Web search engines, and toward support for human intelligence amplification and information use. In this lecture, we introduce exploratory search, relate it to relevant extant research, outline the features of exploratory search systems, discuss the evaluation of these systems, and suggest some future directions for supporting exploratory search. Exploratory search is a new frontier in the search domain and is becoming increasingly important in shaping our future world. Table of Contents: Introduction / Defining Exploratory Search / Related Work / Features of Exploratory Search Systems / Evaluation of Exploratory Search Systems / Future Directions and concluding Remarks\n\nLiterature Review:" }, { "role": "assistant", "content": "One of the major distinctions between PIR and AH lies in the overall information access paradigms underlying both areas. This distinction has driven most of the research in both fields and has lead to their respective techniques and technologies towards adaptation and personalisation. PIR is based on the standard Information Retrieval model, which is traditionally focused on the retrieval of documents that are relevant to a unitary query. While PIR extends this model by taking into account historical interactions, the paradigm remains finding the most relevant documents for a single user query. This fundamental underpinning of PIR makes such systems particularly suitable for the general information access paradigm of searching by query, where it is assumed that a user can express their information need in a relatively precise user query. However, this assumption is not always correct, especially in cases where users are uncertain about their actual information needs or the correct query terms to express these needs. 
AH systems stem from the information access paradigm of searching by browsing, where users generally have less precise information needs and therefore need to surf and explore pages. AH is facilitating this type of search by providing the most relevant browsing content and links with respect to a rich representation of user characteristics (such as preferences, history or prior knowledge). AH is assisting a user's information exploration by creating or adapting the navigation across hypertext and hypermedia using e.g. link creation, link/content hiding or link/content annotation. This distinction in information access paradigms has also driven the application of the respective techniques towards specific user information needs. In , search needs are classified into three categories, Navigational searching, Transactional searching and Informational searching. Navigational searches typically only require one precise answer, for example the web address of a particular company or the homepage of an individual. Similarly, Transactional searches are very specific, as the main goal is to purchase a particular product or use a particular service. As PIR focuses on finding the most relevant documents for precise user queries, it is particularly suited for these information needs. However, these two types of searches are shown to constitute only 20% of current searches on the web . The remaining 80% of searches can be classified as Informational searching, i.e. a user searching for deeper information on a particular topic. Current (P)IR solutions typically do not assist this Informational searching adequately, as often more than a few very precise results are needed to fill a user's knowledge gap. AH systems have inherently focussed on providing assistance towards such Informational searching needs, since they typically assist users in gaining deep knowledge in a particular domain. In particular, AH systems have proven successful in e-learning systems, where a user's information need is to learn about a particular topic. This represents a good example of a search reaching beyond a few precise documents and corresponds to a case where a user is uncertain about the range of query terms to choose. However, while AH systems have proven effective for such \"formal\" learning tasks, their closed corpus nature has prevented a more widespread applicability for web-scale Informational searching (i.e. \"informal\" learning activities). Despite PIR and AH stemming from distinct information access paradigms, both fields effectively strive towards similar aims of providing the most personally relevant information to an individual user. This has led to many conceptual commonalities between the techniques of both fields, which can potentially complement each other in order to better assist a user's information exploration on a web-scale. The aim of assisting such searches is also actively researched in the area of Exploratory Search BIB001 . This field distinguishes itself from standard search through the following 6 characteristics BIB002 ): 1) multiple query iterations, 2) open-ended information needs, 3) close coupling with task context, 4) combination of browsing and focused searching, 5) possible collaboration of multiple people and 6) advanced system evaluation with respect to learning, insight, task outcomes and system utility. One could thus argue that a hybrid of both PIR and AH systems could potentially be characterised by this search paradigm. 
By analysing and contrasting the various PIR and AH techniques, this survey is able to provide a set of affordances that such a hybrid system's components require in order to improve user assistance across Informational and Exploratory search." } ]
science.craftchem_ner
science.craftchem_ner.9734
[ { "role": "user", "content": "You will be presented a sentence from biomedical research text. Extract all terms that correspond to chemical entities of biological interest (ChEBI).\nReturn your identified terms in a JSON array, formatted like this: [\"Entity1\", \"Entity2\", ...].\nIf no such entities are detected, respond by providing an empty JSON array [].\n\nBiomedical context:\nWt1andCited1(red)stainingrevealedanincreaseofapproximately30%inthethicknessofthecappingmesenchymesurroundingthemutanturetericbuds(Figure3,Table2)." }, { "role": "assistant", "content": "[]" } ]
science.scitldr_aic
science.scitldr_aic.1495
[ { "role": "user", "content": "Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.\n\nPaper text:\n\nLanguage style transfer is the problem of migrating the content of a source sentence to a target style.\nIn many applications, parallel training data are not available and source sentences to be transferred may have arbitrary and unknown styles.\nIn this paper, we present an encoder-decoder framework under this problem setting.\nEach sentence is encoded into its content and style latent representations.\nBy recombining the content with the target style, we can decode a sentence aligned in the target domain.\nTo adequately constrain the encoding and decoding functions, we couple them with two loss functions.\nThe first is a style discrepancy loss, enforcing that the style representation accurately encodes the style information guided by the discrepancy between the sentence style and the target style.\nThe second is a cycle consistency loss, which ensures that the transferred sentence should preserve the content of the original sentence disentangled from its style.\nWe validate the effectiveness of our proposed model on two tasks: sentiment modification of restaurant reviews, and dialog response revision with a romantic style.\nStyle transfer is a long-standing research problem that aims at migrating the content of a sample from a source style to a target style.\nRecently, great progress has been achieved by applying deep neural networks to redraw an image in a particular style BID7 BID10 BID2 BID20 BID12 .\nHowever, until now very few approaches have been proposed for style transfer of natural language sentences, i.e., changing the style or genre of a sentence while preserving its semantic content.\nFor example, we would like a system that can convert a given text piece in the language of Shakespeare BID14 ; or rewrite product reviews with a favored sentiment BID17 .One\nimportant issue on language style transfer is that parallel data are unavailable. For\ninstance, considering the task of rewriting a negative review of a product to its counterpart with a positive sentiment, we can hardly find paired data that describe the same content. Yet\n, many text generation frameworks require parallel data, such as the popular sequence-to-sequence model in machine translation and document summarization BID16 , and thus are not applicable under this scenario. A\nfew recent approaches have been proposed for style transfer with non-parallel data BID4 BID17 . Their\nkey idea is to learn a latent representation of the content disentangled from the source style, and then recombine it with the target style to generate the corresponding sentence.All the above approaches assume that data have only two styles, and their task is to transfer sentences from one style to the other. However\n, in many practical settings, we may deal with sentences in more than two styles. Taking\nthe review sentiment modification as an example again, some reviews may be neither positive nor negative, but in a neutral style. Moreover\n, even reviews considered negative can be categorized into more fine-grained sentiments, such as anger, sadness, boredom and other negative styles. It may\nbe beneficial if such styles are treated differently. 
As another\nexample, consider a chatbot with a coherent persona, which has a consistent language behavior and interaction style BID9 . A simple framework\nfor this task is to first use human dialog data to train a chatbot system, such as a retrieval-based dialog model BID11 , and then transfer the output responses with a language style transfer model so that multi-round responses always have a consistent style. Note that the human\ndialog sentences are collected from different users, and users' expressions of the content and tones may be in different personalized characteristics. Thus the output responses\nretrieved from the dialog model may have the language style of any user. Simply treating the responses\nwith a single style and employing the existing style transfer models would lead to unsatisfactory results. Hence, in this paper, we study\nthe setting of language style transfer in which the source data to be transferred can have various (and possibly unknown) styles.Another challenging problem in language style transfer is that the transferred sentence should preserve the content of the original sentence disentangled from its style. To tackle this problem, BID17\nassumed the source domain and the target domain share the same latent content space, and trained their model by aligning these two latent spaces. BID4 constrained that the latent\ncontent representation of the original sentence could be inferred from the transferred sentence. However, these attempts considered\ncontent modification in the latent content space but not the sentence space.In this work, we develop an encoder-decoder framework that can transfer a sentence from a source domain to its counterpart in a target domain. The training data in the two domains\nare non-parallel, and sentences in the source domain can have arbitrary language styles but those in the target domain are with a consensus style. We encode each sentence into two latent\nrepresentations, one for the content disentangled from the style, and the other for the style. Intuitively, if a source sentence is considered\nhaving the target style with a high probability, its style representation should be close to the target style representation. To make use of this idea, we enforce that the discrepancy\nbetween an arbitrary style representation and the target style representation should be consistent with the closeness of its sentence style to the target style. A cycle consistency loss is further introduced to avoid content\nchange by directly considering the transferred sentence. Its idea is that the generated sentence, when put back into the\nencoder and recombined with its original style representation, can recover the original sentence. 
We evaluate the performance of our proposed model on two tasks.\nThe first is the sentiment modification task with its source domain\ncontaining more than one sentiments, and the second is to transfer general dialog responses to a romantic style.\nIn this paper, we present an encoder-decoder framework for language style transfer, which allows for the use of non-parallel data and source data with various unknown language styles.\nEach sentence is encoded into two latent representations, one corresponding to its content disentangled from the style and and the other representing the style only.\nBy recombining the content with the target style, we can decode a sentence aligned in the target domain.\nSpecifically, we propose two loss functions, i.e., the style discrepancy loss and the cycle consistency loss, to adequately constrain the encoding and decoding functions.\nThe style discrepancy loss is to enforce a properly encoded style representation while the cycle consistency loss is used to ensure that the style-transferred sentences can be transferred back to their original sentences.\nExperimental results on two tasks demonstrate that our proposed method outperforms the state-of-the-art style transfer method BID17 We randomly select 200 test samples from Yelp and perform human evaluations on four aspects of the results: (1) content: estimates if the content of an input sentence is preserved in a transferred sentence; content rating has 0 (changed), 1 (synonym substitution or partially changed), and 2 (unchanged); (2) sentiment: estimates if the sentiment of a transferred sentence is consistent with the target sentiment; sentiment rating has 0 (unchanged and wrong), 1 (changed but wrong), 2 (correct); (3) fluency: estimates the fluency of transferred sentences; fluency is rated from 1 (unreadable) to 4 (perfect); (4) overall: estimates the overall quality of transferred sentences; overall rating ranges from 1 (failed) to 4 (perfect).We\nhired five annotators and averaged their evaluations. TAB0\nshows results on Yelp when the source domain contains not only negative sentences but also 150k positive sentences (row 3 in TAB0 ), and TAB0 shows results on Yelp when the target domain contains only 100k positive sentences ( row 1 in TAB3 ). As can\nbe seen, our model is better in terms of sentiment accuracy and overall quality, which is consistent with the automatic evaluation results." }, { "role": "assistant", "content": "We present an encoder-decoder framework for language style transfer, which allows for the use of non-parallel data and source data with various unknown language styles." } ]
science.multixscience_multidoc_summarization
science.multixscience_multidoc_summarization.14786
[ { "role": "user", "content": "You are provided with the abstract of a scientific paper and the abstracts of the papers it references. Your task is to write a Related Work section for the main paper. This section should concisely summarize and integrate the key points from the cited abstracts, reflecting how they relate to the main paper's contributions and context. In your summary, use numerical citation markers like [1], [2], etc., to refer to the corresponding papers.\n\nMain Abstract:\nThis paper aims to improve the widely used deep speaker embedding x-vector model. We propose the following improvements: (1) a hybrid neural network structure using both time delay neural network (TDNN) and long short-term memory neural networks (LSTM) to generate complementary speaker information at different levels; (2) a multi-level pooling strategy to collect speaker information from both TDNN and LSTM layers; (3) a regularization scheme on the speaker embedding extraction layer to make the extracted embeddings suitable for the following fusion step. The synergy of these improvements are shown on the NIST SRE 2016 eval test (with a 19 EER reduction) and SRE 2018 dev test (with a 9 EER reduction), as well as more than 10 DCF scores reduction on these two test sets over the x-vector baseline.\n\nCited Abstract(s):\n[1]: Bidirectional long short-term memory (BLSTM) acoustic models provide a significant word error rate reduction compared to their unidirectional counterpart, as they model both the past and future temporal contexts. However, it is nontrivial to deploy bidirectional acoustic models for online speech recognition due to an increase in latency. In this letter, we propose the use of temporal convolution, in the form of time-delay neural network (TDNN) layers, along with unidirectional LSTM layers to limit the latency to 200 ms. This architecture has been shown to outperform the state-of-the-art low frame rate (LFR) BLSTM models. We further improve these LFR BLSTM acoustic models by operating them at higher frame rates at lower layers and show that the proposed model performs similar to these mixed frame rate BLSTMs. We present results on the Switchboard 300 h LVCSR task and the AMI LVCSR task, in the three microphone conditions.\n\n[2]: Both Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) have shown improvements over Deep Neural Networks (DNNs) across a wide variety of speech recognition tasks. CNNs, LSTMs and DNNs are complementary in their modeling capabilities, as CNNs are good at reducing frequency variations, LSTMs are good at temporal modeling, and DNNs are appropriate for mapping features to a more separable space. In this paper, we take advantage of the complementarity of CNNs, LSTMs and DNNs by combining them into one unified architecture. We explore the proposed architecture, which we call CLDNN, on a variety of large vocabulary tasks, varying from 200 to 2,000 hours. We find that the CLDNN provides a 4–6 relative improvement in WER over an LSTM, the strongest of the three individual models." }, { "role": "assistant", "content": "Combining CNN or TDNN with LSTM is proved effective in automatic speech recognition (ASR) tasks. [2] proposed to stack CNN, LSTM and DNN sequentially for speech recognition and it shows superior results than using CNN, LSTM or DNN alone. [1] conducted experiments to compare the stack order of TDNN and LSTM and found interleaving of TDNN layers with LSTM layers could be more effective than simple stacking strategy." } ]
science.healthver_entailment
science.healthver_entailment.3258
[ { "role": "user", "content": "You will be shown a claim about public health, and the abstract of a biomedical research paper. Each sentence from the abstract will be on a separate line. Your task is to return a JSON object with two fields:\n\n- \"verdict\": The fact-checking verdict. If the information in the abstract supports the claim, write \"SUPPORT\". If the abstract contradicts the claim, write \"CONTRADICT\". If the abstract does not provide enough information to arrive at a verdict, write \"NEI\" (for \"not enough information\").\n- \"evidence\": An array of sentences providing evidence for the verdict. Please copy all relevant sentences verbatim from the abstract. If the verdict was \"NEI\", then return an empty array.\n\nFor instance, if the model were given the claim \"wearing masks can prevent the spread of COVID\", the output might be\n{\n \"verdict\": \"SUPPORT\",\n \"evidence\": [\"Our findings indicate that mass mask-wearing reduces the transmission rate for COVID-19.\"]\n}\n\nClaim: The sunlight has a huge impact on decaying the coronavirus\n\nAbstract:\nSevere acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is a highly contagious virus that can transmit through respiratory droplets, aerosols, or contacts.\nFrequent touching of contaminated surfaces in public areas is therefore a potential route of SARS-CoV-2 transmission.\nThe inanimate surfaces have often been described as a source of nosocomial infections.\nHowever, summaries on the transmissibility of coronaviruses from contaminated surfaces to induce the coronavirus disease 2019 are rare at present.\nThis review aims to summarize data on the persistence of different coronaviruses on inanimate surfaces.\nThe literature was systematically searched on Medline without language restrictions.\nAll reports with experimental evidence on the duration persistence of coronaviruses on any type of surface were included.\nMost viruses from the respiratory tract, such as coronaviruses, influenza, SARS-CoV, or rhinovirus, can persist on surfaces for a few days.\nPersistence time on inanimate surfaces varied from minutes to up to one month, depending on the environmental conditions.\nSARSCoV-2 can be sustained in air in closed unventilated buses for at least 30 min without losing infectivity.\nThe most common coronaviruses may well survive or persist on surfaces for up to one month.\nViruses in respiratory or fecal specimens can maintain infectivity for quite a long time at room temperature.\nAbsorbent materials like cotton are safer than unabsorbent materials for protection from virus infection.\nThe risk of transmission via touching contaminated paper is low.\nPreventive strategies such as washing hands and wearing masks are critical to the control of coronavirus disease 2019." }, { "role": "assistant", "content": "{\"verdict\": \"NEI\", \"evidence\": []}" } ]
science.genia_ner
science.genia_ner.1372
[ { "role": "user", "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"RNA\", \"CellLine\", \"Protein\", \"DNA\", \"CellType\"].\n\nPlease return the output as a JSON object of the format: {\"DNA\" : [\"TAL1\", ...], \"RNA\": [\"mRNAs\", ...], \"CellLine\": [\"T cells\", ...], \"CellType\": [\"monocytes\", ...], \"Protein\": [\"IL-2\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nSignalling into the T-cell nucleus: NFAT regulation. The nuclear factor of activated T cells (NFAT) plays an important role in T-cell biology. Activation of T cells results in the rapid calcineurin-dependent translocation of NFAT transcription factors from the cytoplasm to the nucleus. This translocation process coupled to the subsequent active maintenance of NFAT in the nucleus compartment is critical for the induction of expression of several genes encoding cytokines and membrane proteins that modulate immune responses. The molecular cloning of the NFAT family of transcription factors has facilitated rapid progress in the understanding of the signalling mechanisms that control the activity of NFAT." }, { "role": "assistant", "content": "{\"CellLine\": [], \"CellType\": [], \"DNA\": [], \"Protein\": [\"calcineurin\", \"NFAT\", \"membrane proteins\", \"NFAT family\", \"nuclear factor of activated T cells\", \"cytokines\", \"transcription factors\"], \"RNA\": []}" } ]
science.mslr2022_cochrane_multidoc_summarization
science.mslr2022_cochrane_multidoc_summarization.3038
[ { "role": "user", "content": "You will be provided with the titles and abstracts of a collection of medical RCT reports studying a clinical question -- for instance, the efficacy of an intervention on some medical outcome. Please write a few sentences synthesizing the findings of these reports, which could serve as a conclusion section for a systematic review article. Your response should not include any other text. The RCT reports are below.\n\n\n\nTitle 1:\nComparison of a levonorgestrel-releasing intrauterine device versus expectant management after conservative surgery for symptomatic endometriosis: a pilot study.\nAbstract 1:\nTo determine whether the frequency and severity of dysmenorrhea are reduced in women with symptomatic endometriosis in whom a levonorgestrel-releasing intrauterine device (Lng-IUD) is inserted after operative laparoscopy compared with those treated with surgery only.\n Open-label, parallel-group, randomized, controlled trial.\n A tertiary care and referral center for patients with endometriosis.\n Parous women with moderate or severe dysmenorrhea undergoing first-line operative laparoscopy for symptomatic endometriosis.\n Randomization to immediate Lng-IUD insertion or expectant management after laparoscopic treatment of endometriotic lesions. Proportions of women with recurrence of moderate or severe dysmenorrhea in the two study groups 1 year after surgery and overall degree of satisfaction with treatment. Moderate or severe dysmenorrhea recurred in 2 of 20 (10%) subjects in the postoperative Lng-IUD group and 9/20 (45%) in the surgery-only group. Thus, a medicated device inserted postoperatively will prevent the recurrence of moderate or severe dysmenorrhea in one out of three patients 1 year after surgery. A total of 15/20 (75%) women in the Lng-IUD group and 10/20 (50%) in the expectant management group were satisfied or very satisfied with the treatment received.\n Insertion of an Lng-IUD after laparoscopic surgery for symptomatic endometriosis significantly reduced the medium-term risk of recurrence of moderate or severe dysmenorrhea.\n\n\nTitle 2:\nPostoperative levonorgestrel-releasing intrauterine system for pelvic endometriosis-related pain: a randomized controlled trial.\nAbstract 2:\nTo estimate the effectiveness of a postoperative levonorgestrel-releasing intrauterine system for relieving pelvic pain in patients with endometriosis.\n A double-blind randomized controlled trial was conducted in 55 patients with endometriosis and moderate-to-severe dysmenorrhea (visual analog scale, greater than 50 mm) undergoing laparoscopic conservative surgery. After surgery, patients were randomized to a levonorgestrel-releasing intrauterine system (n=28) or expectant management (n=27) group. Primary outcome was the change of dysmenorrhea visual analog scale. Secondary outcomes included changes of pelvic pain and dyspareunia visual analog scale, Short Form-36 score, and adverse effects.\n The two groups were comparable in age, body mass index, parity, and baseline pain scores. At 12 months, the levonorgestrel-releasing intrauterine system group had a significantly lower median value of dysmenorrhea and noncyclic pelvic pain score. 
Compared with the control group, the levonorgestrel-releasing intrauterine system group had greater reduction in dysmenorrhea visual analog scale (-81.0 compared with -50.0 mm, P=.006) and pelvic pain visual analog scale (-48.5 compared with -22.0 mm, P=.038) but a comparable reduction in dyspareunia visual analog scale (-15.0 compared with -19.0 mm, P=.831). Two patients in levonorgestrel-releasing intrauterine system group (7.4%) and nine in the expectant management group (39.1%) had recurrent dysmenorrhea within 1 year postoperatively (P=.014). Number-needed-to-treat to prevent one case with recurrent dysmenorrhea within the first year was three cases. The Short Form-36 scores improved in the levonorgestrel-releasing intrauterine system group but did not change in the expectant management group. There was no serious adverse event during the study period.\n The levonorgestrel-releasing intrauterine system is effective and well accepted for long-term therapy after conservative surgery for patients with moderate to severe pain related to endometriosis. It can improve the patient's quality of life, including physical and mental health.\n\n\nTitle 3:\nPostoperative medical treatment of chronic pelvic pain related to severe endometriosis: levonorgestrel-releasing intrauterine system versus gonadotropin-releasing hormone analogue.\nAbstract 3:\nTo compare efficacy of the levonorgestrel-releasing intrauterine system (LNG-IUS; Mirena) with depot GnRH analogue (GnRH-a; gosareline acetate; Zoladex) on endometriosis-related chronic pelvic pain (CPP) in patients with severe endometriosis during 12 months.\n Prospective, randomized, controlled study.\n The reproductive endocrinology unit of a tertiary, research and education hospital.\n Forty women with severe endometriosis (revised The American Fertility Society [AFS] classification >40) and endometriosis-related CPP and control groups were enrolled in the study.\n The patients were treated with either LNG-IUS (n = 20) or GnRH-a (n = 20). The GnRH-a dose was repeated every 4 weeks for 24 weeks.\n Scores of CPP were evaluated using a visual analogue scale (VAS) and total endometriosis severity profile (TESP).\n The TESP score decreased in the LNG-IUS group at first, third, and sixth month follow-up visits, whereas at the 12th month follow-up visit, the TESP scores were increased to values similar to pretreatment values. Although the VAS score had no significant alteration during the follow-up period in the LNG-IUS group, the GnRH-a group showed a significant decrease in the VAS score and TESP score at the end of 1 year. The LNG-IUS treatment showed a lower patient satisfaction.\n Both treatment modalities showed comparable effectiveness in the treatment of CPP-related endometriosis.\n Copyright © 2011 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved." }, { "role": "assistant", "content": "There is limited but consistent evidence showing that postoperative LNG-IUD use reduces the recurrence of painful periods in women with endometriosis. Further well-designed RCTs are needed to confirm these findings." } ]
science.chemprot_re
science.chemprot_re.873
[ { "role": "user", "content": "Given an abstract from a research paper, your task is to identify and label relationships between chemical compounds and genes mentioned in the abstract.\n\nThe relationship type must be one of ['DOWNREGULATOR', 'SUBSTRATE', 'INDIRECT-REGULATOR', 'PART-OF', 'MODULATOR-ACTIVATOR', 'REGULATOR', 'INHIBITOR', 'COFACTOR', 'UPREGULATOR', 'ACTIVATOR', 'ANTAGONIST', 'NOT', 'INDIRECT-DOWNREGULATOR', 'SUBSTRATE_PRODUCT-OF', 'INDIRECT-UPREGULATOR', 'AGONIST', 'PRODUCT-OF', 'MODULATOR', 'DIRECT-REGULATOR', 'UNDEFINED', 'AGONIST-INHIBITOR', 'AGONIST-ACTIVATOR', 'MODULATOR-INHIBITOR'].\n\nPlease format your output as a JSON array. Each entry in the array should express a single relation, formatted as [\"<Entity_A>\", \"<RELATION_A_B>\", \"<Entity_B>\"]. If no relations can be found, please output an empty JSON array [].\n\nAbstract:\n\nImpaired cliff avoidance reaction in dopamine transporter knockout mice. RATIONALE: Impulsivity is a key feature of disorders that include attention-deficit/hyperactivity disorder (ADHD). The cliff avoidance reaction (CAR) assesses maladaptive impulsive rodent behavior. Dopamine transporter knockout (DAT-KO) mice display features of ADHD and are candidates in which to test other impulsive phenotypes. OBJECTIVES: Impulsivity of DAT-KO mice was assessed in the CAR paradigm. For comparison, attentional deficits were also assessed in prepulse inhibition (PPI) in which DAT-KO mice have been shown to exhibit impaired sensorimotor gating. RESULTS: DAT-KO mice exhibited a profound CAR impairment compared to wild-type (WT) mice. As expected, DAT-KO mice showed PPI deficits compared to WT mice. Furthermore, the DAT-KO mice with the most impaired CAR exhibited the most severe PPI deficits. Treatment with methylphenidate or nisoxetine ameliorated CAR impairments in DAT-KO mice. CONCLUSION: These results suggest that DAT-KO mice exhibit impulsive CAR behavior that correlates with their PPI deficits. Blockade of monoamine transporters, especially the norepinephrine transporter (NET) in the prefrontal cortex (PFC), may contribute to pharmacological improvement of impulsivity in these mice." }, { "role": "assistant", "content": "[]" } ]
science.multixscience_multidoc_summarization
science.multixscience_multidoc_summarization.20330
[ { "role": "user", "content": "You are provided with the abstract of a scientific paper and the abstracts of the papers it references. Your task is to write a Related Work section for the main paper. This section should concisely summarize and integrate the key points from the cited abstracts, reflecting how they relate to the main paper's contributions and context. In your summary, use numerical citation markers like [1], [2], etc., to refer to the corresponding papers.\n\nMain Abstract:\nMobile edge computing is a new cloud computing paradigm, which makes use of small-sized edge clouds to provide real-time services to users. These mobile edge-clouds (MECs) are located in close proximity to users, thus enabling users to seamlessly access applications running on MECs. Due to the co-existence of the core (centralized) cloud, users, and one or multiple layers of MECs, an important problem is to decide where (on which computational entity) to place different components of an application. This problem, known as the application or workload placement problem, is notoriously hard, and therefore, heuristic algorithms without performance guarantees are generally employed in common practice, which may unknowingly suffer from poor performance as compared with the optimal solution. In this paper, we address the application placement problem and focus on developing algorithms with provable performance bounds. We model the user application as an application graph and the physical computing system as a physical graph, with resource demands availabilities annotated on these graphs. We first consider the placement of a linear application graph and propose an algorithm for finding its optimal solution. Using this result, we then generalize the formulation and obtain online approximation algorithms with polynomial-logarithmic (poly-log) competitive ratio for tree application graph placement. We jointly consider node and link assignment, and incorporate multiple types of computational resources at nodes.\n\nCited Abstract(s):\n[1]: The MapReduce Hadoop architecture has become very important and effective in cloud systems because many data-intensive applications are usually required to process big data. In such environments, big data is partitioned and stored over several data nodes; thus, the total completion time of a task would be delayed if the maximum access latency among all pairs of a data node and its assigned computation node is not bounded. Moreover, the computation nodes usually need to communicate with each other for aggregating the computation results; therefore, the maximum access latency among all pairs of assigned computation nodes also needs to be bounded. In the literature, it has been proved that the placement problem of computation nodes (virtual machines) to minimize the maximum access latency among all pairs of a data node and its assigned computation node and among all pairs of assigned computation nodes does not admit any approximation algorithm with a factor smaller than two, whereas no approximation algorithms have been proposed so far. In this paper, we first propose a 3-approximation algorithm for resolving the problem. Subsequently, we close the gap by proposing a 2-approximation algorithm, that is, an optimal approximation algorithm, for resolving the problem in the price of higher time complexity. 
Finally, we conduct simulations for evaluating the performance of our algorithms.\n\n[2]: Network virtualization allows multiple heterogeneous virtual networks (VNs) to coexist on a shared infrastructure. Efficient mapping of virtual nodes and virtual links of a VN request onto substrate network resources, also known as the VN embedding problem, is the first step toward enabling such multiplicity. Since this problem is known to be NP-hard, previous research focused on designing heuristic-based algorithms that had clear separation between the node mapping and the link mapping phases. In this paper, we present ViNEYard-a collection of VN embedding algorithms that leverage better coordination between the two phases. We formulate the VN embedding problem as a mixed integer program through substrate network augmentation. We then relax the integer constraints to obtain a linear program and devise two online VN embedding algorithms D-ViNE and R-ViNE using deterministic and randomized rounding techniques, respectively. We also present a generalized window-based VN embedding algorithm (WiNE) to evaluate the effect of lookahead on VN embedding. Our simulation experiments on a large mix of VN requests show that the proposed algorithms increase the acceptance ratio and the revenue while decreasing the cost incurred by the substrate network in the long run.\n\n[3]: The SDN and NFV paradigms enable novel network services which can be realized and embedded in a flexible and rapid manner. For example, SDN can be used to flexibly steer traffic from a source to a destination through a sequence of virtualized middleboxes, in order to realize so-called service chains. The service chain embedding problem consists of three tasks: admission control, finding suitable locations to allocate the virtualized middleboxes and computing corresponding routing paths. This paper considers the offline batch embedding of multiple service chains. Concretely, we consider the objectives of maximizing the profit by embedding an optimal subset of requests or minimizing the costs when all requests need to be embedded. Interestingly, while the service chain embedding problem has recently received much attention, so far, only non- polynomial time algorithms (based on integer programming) as well as heuristics (which do not provide any formal guarantees) are known. This paper presents the first polynomial time service chain approximation algorithms both for the case with admission and without admission control. Our algorithm is based on a novel extension of the classic linear programming and randomized rounding technique, which may be of independent interest. In particular, we show that our approach can also be extended to more complex service graphs, containing cycles or sub-chains, hence also providing new insights into the classic virtual network embedding problem.\n\n[4]: We study a basic resource allocation problem that arises in cloud computing environments. The physical network of the cloud is represented as a graph with vertices denoting servers and edges corresponding to communication links. A workload is a set of processes with processing requirements and mutual communication requirements. The workloads arrive and depart over time, and the resource allocator must map each workload upon arrival to the physical network. We consider the objective of minimizing the congestion. We show that solving a subproblem about mapping a single workload to the physical graph essentially suffices to solve the general problem. 
In particular, an α-approximation for this single mapping problem gives an O(α log nD)-competitive algorithm for the general problem, where n is the number of nodes in the physical network and D is the maximum to minimum workload duration ratio. We also show how to solve the single mapping problem for two natural class of workloads, namely depth-d-trees and complete-graph workloads. For depth-d tree, we give an nO(d) time O(d2 log (nd))-approximation based on a strong LP relaxation inspired by the Sherali-Adams hierarchy.\n\n[5]: We consider resource allocation algorithms for distributed cloud systems, which deploy cloud-computing resources that are geographically distributed over a large number of locations in a wide-area network. This distribution of cloud-computing resources over many locations in the network may be done for several reasons, such as to locate resources closer to users, to reduce bandwidth costs, to increase availability, etc. To get the maximum benefit from a distributed cloud system, we need efficient algorithms for resource allocation which minimize communication costs and latency. In this paper, we develop efficient resource allocation algorithms for use in distributed clouds. Our contributions are as follows: Assuming that users specify their resource needs, such as the number of virtual machines needed for a large computational task, we develop an efficient 2-approximation algorithm for the optimal selection of data centers in the distributed cloud. Our objective is to minimize the maximum distance, or latency, between the selected data centers. Next, we consider use of a similar algorithm to select, within each data center, the racks and servers where the requested virtual machines for the task will be located. Since the network inside a data center is structured and typically a tree, we make use of this structure to develop an optimal algorithm for rack and server selection. Finally, we develop a heuristic for partitioning the requested resources for the task amongst the chosen data centers and racks. We use simulations to evaluate the performance of our algorithms over example distributed cloud systems and find that our algorithms provide significant gains over other simpler allocation algorithms.\n\n[6]: Modern cloud infrastructure providers allow customers to rent computing capability in the form of a network of virtual machines (VMs) with bandwidth guarantees between pairs of VMs. Typical requests are in the form of a chain of VMs with an uplink bandwidth to the gateway node of the network (rooted path requests), and most data center architectures route network packets along a spanning tree of the physical network. VMs are instantiated inside servers which reside at the leaves of this network, leading to the following optimization problem: given a rooted tree network T and a set of rooted path requests, find an embedding of the requests that minimizes link congestion. Our main result is an algorithm that, given a rooted tree network T with n leaves and set of weighted rooted path requests, embeds a 1−e fraction of the requests with congestion at most poly(logn, logθ,e−1)·OPT (approximation is necessary since the problem is NP-hard). Here OPT is the congestion of the optimal embedding and θ is the ratio of the maximum to minimum weights of the path requests. We also obtain an O(Hlogn e2) approximation if node capacities can be augmented by a (1+e) factor (here H is the height of the tree). 
Our algorithm applies a randomized rounding scheme based on Group Steiner Tree rounding to a novel LP relaxation of the set of subtrees of T with a given number of leaves, which may be of independent interest.\n\n[7]: Recently, Network Function Virtualization (NFV) has been proposed to move from network hardware appliances to software middleboxes. Normally, a demand needs to invoke several Virtual Network Functions (VNFs) in a particular order following the service chain along a routing path. In this paper, we study the joint problem of VNF placement and path selection to better utilize the network. We discover that the relation between the link and server usage plays a crucial role in the problem. We first propose a systematic way to elastically tune the proper link and server usage of each demand based on network conditions and demand properties. In particular, we compute a proper routing path length, and decide, for each VNF in the service chain, whether to use additional server resources or to reuse resources provided by existing servers. We then propose a chain deployment algorithm to follow the guidance of this link and server usage. Via simulations, we show that our design effectively adapts resource usage to network dynamics, and, hence, serves more demands than other heuristics." }, { "role": "assistant", "content": "One important aspect to note is that most existing work, including [2] [6] [5] [1] [7] [3], does not specifically consider the online operation of the algorithms. Although some of them implicitly claim that one can apply the algorithm repeatedly for each newly arrived application, the competitive ratio of such a procedure is unclear. To the best of our knowledge, [4] is the only work that studied the competitive ratio of the online application placement problem that considers ." } ]
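The relax-and-round recipe that recurs in the abstracts above ([2], [3], [6]) is: formulate the embedding as an integer program, relax it to an LP, and round the fractional solution either deterministically or randomly. The following Python sketch is illustrative only and is not taken from any of the cited papers; the cost matrix, problem sizes, and the single assignment constraint are invented for the example, and real formulations add link and capacity constraints that make the LP genuinely fractional.

```python
# Minimal relax-and-round sketch in the spirit of D-ViNE/R-ViNE ([2]) and the
# LP-rounding service-chain algorithms ([3], [6]). Everything here (costs,
# sizes, constraint set) is invented for illustration.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n_virtual, n_substrate = 3, 5
cost = rng.uniform(1.0, 10.0, size=(n_virtual, n_substrate))  # mapping costs

# Decision variables x[i, j] (virtual node i on substrate node j), flattened
# row-major. Relaxed LP: minimize total cost subject to each virtual node
# being fully (possibly fractionally) assigned: sum_j x[i, j] = 1, 0 <= x <= 1.
c = cost.ravel()
A_eq = np.zeros((n_virtual, n_virtual * n_substrate))
for i in range(n_virtual):
    A_eq[i, i * n_substrate:(i + 1) * n_substrate] = 1.0
res = linprog(c, A_eq=A_eq, b_eq=np.ones(n_virtual), bounds=(0, 1))
x = res.x.reshape(n_virtual, n_substrate)

det_map = x.argmax(axis=1)  # deterministic rounding, D-ViNE-style
rand_map = [int(rng.choice(n_substrate, p=row / row.sum())) for row in x]  # randomized, R-ViNE-style
print("LP value:", round(res.fun, 3), "deterministic:", det_map, "randomized:", rand_map)
```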
science.scientific_papers_summarization_single_doc_arxiv
science.scientific_papers_summarization_single_doc_arxiv.144366
[ { "role": "user", "content": "You will be presented with the full text of a science research paper. Please write an abstract for this paper. Your response should include the abstract and no additional text.\n\nPaper text:\nn103b , one of the brightest x - ray and radio sources in the large magellanic cloud ( lmc ) , is a young , compact supernova remnant ( snr ) .\nthe remnant , located on the north - eastern edge of the hii region n103 , is @xmath0 40 kpc from the young star cluster ngc 1850 .\n@xcite the radio and x - ray morphologies are strikingly similar ; the western edge ( near the hii region ) shows considerable structure and is @xmath0 3 times brighter than the eastern edge .\n@xcite in both bands , the emission arises from a region 7 pc in diameter ( d=50kpc to lmc assumed ) . due to its proximity to a star forming region\n, it was originally suspected that n103b was the result of the core collapse of a massive object .\nhowever , the asca spectrum shows no evidence for k - shell emission from o , ne or mg while si , s , ar , ca , and fe features are strong , indicating that the remnant is more likely the result of a type ia sn explosion . @xcite\nn103b was observed for 40.8 ks with acis s-3 , one of the back - illuminated ccds in the acis - s array .\nwe found that @xmath035@xmath1 of the frames had a high background count rate and that during these times , the spectrum of the background changed significantly .\nthese frames were removed from the data set . the standard asca grade filter ( g 02346 ) was used .\nour observation of n103b has confirmed many earlier results .\nchandra _ x - ray and the australian telescope compact array ( atca ) radio images @xcite are quite similar ( figure 1 ) .\nsince the radio and x - ray flux is depressed in the eastern half of the remnant we conclude that this intensity difference is due to a real density variation , as opposed to an absorption effect . while the bright radio knots are in the general vicinity of the x - ray knots , they do not overlap .\nfurther investigation is required to determine whether there is a relative rotation between the two images , which may explain the offset .\nthe spectrum of n103b as a whole ( figure 2 ) confirms the earlier asca results .\nwe see no evidence for o or ne in the spectrum , although a mg line is present .\ncontinuum subtracted ( cs ) and equivalent width ( ew ) images show significant variation throughout the remnant .\nthe images for he - like si are shown in figure 3 .\nthe interval 1700 - 1865 ev was used to create the he - like si line image . 
to estimate the continuum\n, we performed a linear interpolation between images in the 1515 - 1680 ev and 2050 - 2215 ev bands , which are free of any significant lines .\nall images were smoothed with a 3 pixel ( 1.5 ) boxcar to reduce the number of pixels with no counts .\nwe were concerned that the cs and ew images were unreliable , as each pixel contained only a few counts at most , so the cs and ew images were collapsed into radial and azimuthal dimensions by binning the line and continuum images into rings and sectors .\nthe cs and ew flux were calculated for each ring and sector as shown in bottom of figure 3 .\nthe radial and azimuthal plots confirm what is seen in the images .\nthe emission from he - like si is greatest along the western edge , where the entire remnant is bright .\nhowever , as shown by the ew images and plots , it is only in the outer rim of the remnant that the he - like si emission drastically dominates over the continuum emission . throughout the remainder of the remnant ,\nthe ratio of line to continuum is relatively flat .\nwe have not yet determined whether the change in emissivity is due to a difference in abundance , temperature , ionization or a combination of these .\npreliminary analysis of this _ chandra _ observation reveal that n103b shows significant structure at the arcsecond level . the variations in x - ray flux are consistent with those in radio and are due to real density variation within the remnant .\nthe spectrum of the remnant is that of a typical type ia snr , however future spatially resolved spectroscopy will allow for a better classification .\nfinally , there are dramatic spatial variations in the emissivity of he - like si in particular .\nthe radial variation in the equivalent width of the remnant is particularly intriguing .\ndetailed spectral analysis of concentric rings will enable us to determine the source of these and other similar variations .\ndickel , j. and milne , d. 1995 , aj , 109 , 1 mathewson , d.s . ,\nford , v.l . ,\ndopita , m.a . ,\ntuohy , i.r . ,\nlong , k.s . , &\nhelfand , d.j .\n1983 , apjs , 51,345 hughes , j.p , hayashi , i. , helfand , d. , hwang , u. , itoh , m. , kirshner , r. , koyama , k. , markert , t. , tsunemi , h. , & woo , j. , 1995 apjl , 444 , l81" }, { "role": "assistant", "content": "we present the preliminary results of a 40.8 ks _ chandra _ \n acis observation of the young supernova remnant ( snr ) n103b located in the large magellanic cloud . \n the image reveals structure at the sub - arcsecond level , including several bright knots and filamentary structures . \n the remnant has the characteristic spectrum of a type ia snr , containing strong lines of fe , he - and h - like si and s , ar , and ca . \n narrow band images reveal non - uniformities in the remnant ." } ]
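The continuum-subtraction step described in the record above (interpolating the continuum under the He-like Si band from the two line-free bands) reduces to a few lines of array arithmetic. The sketch below is a hedged reconstruction: the band energies follow the text, but the image arrays are random placeholders rather than real Chandra data, and a proper equivalent width would additionally multiply the line-to-continuum ratio by the band width in energy units.

```python
# Sketch of the continuum-subtraction step: interpolate the continuum under
# the He-like Si band (1700-1865 eV) linearly between the two line-free bands
# (1515-1680 eV and 2050-2215 eV), then form continuum-subtracted (CS) and
# line-to-continuum (EW-like) maps.
import numpy as np

rng = np.random.default_rng(1)
shape = (64, 64)
lo_band = rng.poisson(5.0, shape).astype(float)    # 1515-1680 eV counts image
hi_band = rng.poisson(4.0, shape).astype(float)    # 2050-2215 eV counts image
line_band = rng.poisson(9.0, shape).astype(float)  # 1700-1865 eV (He-like Si)

# Linear interpolation of the continuum to the line-band center,
# using the band-center energies in eV.
e_lo, e_hi, e_line = 1597.5, 2132.5, 1782.5
w = (e_line - e_lo) / (e_hi - e_lo)
continuum = (1.0 - w) * lo_band + w * hi_band

cs = line_band - continuum  # continuum-subtracted line flux
with np.errstate(divide="ignore", invalid="ignore"):
    ew = np.where(continuum > 0, cs / continuum, 0.0)  # line-to-continuum ratio
print(round(cs.mean(), 3), round(ew.mean(), 3))
```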
science.anat_em_ner
science.anat_em_ner.322
[ { "role": "user", "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"AnatomicalEntity\"].\n\nPlease return the output as a JSON object of the format: {\"AnatomicalEntity\": [\"tumor cells\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nMedical adherence and childhood chronic illness: family daily management skills and emotional climate as emerging contributors.\nPURPOSE OF REVIEW:\nTo describe recent research that examines family factors that promote or derail adherence to medical regimens for children with chronic health conditions, primarily asthma, diabetes, and cystic fibrosis. From the past 2 years, eight correlational studies were identified which specifically examined the links between family management strategies, family climate and medical adherence.\nRECENT FINDINGS:\nFindings from the studies suggest that team-based management strategies and cohesive family climate promote adherence to medical treatments over time. Family interactions that are characterized by conflict and disengagement tend to disrupt adherence and inevitably cause a decline in child health. Moreover, these findings seem to be moderated by child age in that poorer adherence often occurs when a child reaches adolescence and is searching for greater autonomy.\nSUMMARY:\nFuture research should consider the challenges in measuring medical adherence in the family context as well as incorporating more naturalistic studies of family interactions. Randomized controlled trials using family-based interventions may consider focusing on medical adherence as an important mediator between family process and child health outcomes." }, { "role": "assistant", "content": "{\"AnatomicalEntity\": []}" } ]
science.chemdner_ner
science.chemdner_ner.2891
[ { "role": "user", "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Chemical\"].\n\nPlease return the output as a JSON object of the format: {\"Chemical\": [\"Arsenic\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nCharacterization of Hachi (Camelus dromedarius) fat extracted from the hump. In this work, the characteristics of fat from the hump of young camels (Hachi) were evaluated. The physicochemical properties of the fat were as follows: melting point, 45°C; saponification value, 202.3mg KOH/g oil; refractive index (60°C), 1.468; unsaponifiable matter, 1.37%; free fatty acids (as the percentage of oleic acid), 0.96%; and peroxide value, 3.37mequiv. O2/kg oil. High-resolution (1)H nuclear magnetic resonance ((1)H NMR) was used for the direct determination of the iodine value of Hachi fat (62.74g/100g oil). The Hachi fat was composed primarily of oleic acid (33.35%), followed by palmitic acid (26.16%), stearic acid (10.07%), palmitelaidic acid (9.56%) and myristic acid (8.83%). The thermal properties were assessed by thermogravimetry (TG) and derivative thermogravimetry (DTG). The results of the present analytical study showed that Hachi fat could be used in food products and as an important source of biological materials." }, { "role": "assistant", "content": "{\"Chemical\": [\"oleic acid\", \"KOH\", \"palmitic acid\", \"palmitelaidic acid\", \"fatty acids\", \"iodine\", \"(1)H\", \"myristic acid\", \"stearic acid\", \"peroxide\", \"O2\"]}" } ]
science.qasa_abstractive_qa
science.qasa_abstractive_qa.1359
[ { "role": "user", "content": "You will be shown an excerpt from a computer science scientific research paper, followed by a question about the paper. Please answer the question. Do not include any text in your response other than the answer.\n\nContext: We used the extended version of the Cornell graspingdataset for our experiments. This dataset, along with code for thispaper, is available athttp://pr.cs.cornell.edu/deepgrasping.We note that this is an updated version of the dataset used in[28], containing several more complex objects, and thusresults for their algorithms will be different from those in[28].This dataset contains 1035 images of 280 graspable objects, several ofwhich are shown in Fig. 9.Each image is annotated with several ground-truth positive and negativegrasping rectangles. While the vast majority of possible rectangles for most objectswill be non-graspable, the dataset contains roughly equal numbers ofgraspable and non-graspable rectangles. We will show that this is usefulfor an unsupervised learning algorithm, as it allows learning a goodrepresentation for graspable rectangles even from unlabeled data.\n\nQuestion: Which RGBD robotic grasping dataset was used for verification?" }, { "role": "assistant", "content": "We used the extended version of the Cornell grasping dataset for our experiments. This dataset, along with code for this paper, is available athttp://pr.cs.cornell.edu/deepgrasping.We note that this is an updated version of the dataset used in[28], containing several more complex objects, and thus results for their algorithms will be different from those in[28]. This dataset contains 1035 images of 280 graspable objects, several of which are shown in Fig. 9.Each image is annotated with several ground-truth positive and negative grasping rectangles. While the vast majority of possible rectangles for most objects will be non-graspable, the dataset contains roughly equal numbers of graspable and non-graspable rectangles. We will show that this is useful for an unsupervised learning algorithm, as it allows learning a good representation for graspable rectangles even from unlabeled data." } ]
science.bc7_litcovid_topic_classification
science.bc7_litcovid_topic_classification.4655
[ { "role": "user", "content": "You are presented with a paragraph from a COVID-19 literature. Classify the topics of this piece of text by choosing from the following categories:\n\n- Epidemic Forecasting: The content relates to predictions about the spread of COVID-19.\n- Treatment: The paragraph discusses therapeutic methods or medication for COVID-19.\n- Prevention: The content covers measures to prevent the spread of COVID-19.\n- Mechanism: The paragraph explains the biological mechanisms of the virus.\n- Case Report: The content is a report of a COVID-19 case.\n- Transmission: The paragraph discusses how COVID-19 is transmitted.\n- Diagnosis: The content is about diagnosing COVID-19 in individuals.\n\nYour classification should be provided as an element within a JSON array. The array should be formatted with topics enclosed in double quotes and separated by commas, following the JSON format, for example: [\"Topic1\", \"Topic2\", ...]. Include all applicable topics within the array. Your output should be a valid JSON array, and no additional text should be included in your response.\n\nArticle:\n\nBACKGROUND: A retrospective study was conducted to summarize the clinical information of childhood infections during the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) epidemic. METHODS: Children with SARS-CoV-2 infection in 11 hospitals from three provinces of South China were included in the study. Clinical information was collected and compared with children and adults infected by SARS-CoV-2 in Wuhan. RESULTS: In total, 52 children were enrolled, including 28 boys. The median age was 9 years (interquartile range [IQR], 4-12); 44.2% cases were of clustered occurrences, 40.4% patients had fever, 48.1% had cough, and 46.2% had a high lymphocyte count. No abnormalities were found in the liver and kidney function. Also, 82.7% of patients received antiviral therapy, but such therapy did not shorten the time to virus negativity or hospital stay (P = .082). The time to virus negativity was 12.0 days (IQR, 8.0-16.8) and hospital stay was 14.5 days (IQR, 10.3-17.9). Compared with reports in Wuhan, there were more acute upper respiratory tract infection (AURTI) and fewer pneumonia cases (P = .000). Compared with the non-ICU adult COVID-19 in Wuhan, these children's diseases were relatively mild, with fewer complications. CONCLUSIONS: Children with SARS-CoV-2 infection had a mild fever, lymphocyte elevation was more common than reduction, and antiviral treatment had no obvious effect. The overall clinical manifestations were mild, and the prognosis was good." }, { "role": "assistant", "content": "[\"Treatment\", \"Diagnosis\"]" } ]
science.scientific_lay_summarisation_elife_single_doc_summ
science.scientific_lay_summarisation_elife_single_doc_summ.2839
[ { "role": "user", "content": "You will be presented with the abstract, introduction, and discussion section from a biomedical research article. Your task is to create a summary that a layperson can understand, capturing the essence of the research article. Keep essential scientific terms, but ensure that the language remains clear and the concepts are explained in an uncomplicated manner.\n\nTitle: Sensory dynamics of visual hallucinations in the normal population}\nArticle:\nAbstract:\nHallucinations occur in both normal and clinical populations. Due to their unpredictability and complexity, the mechanisms underlying hallucinations remain largely untested. Here we show that visual hallucinations can be induced in the normal population by visual flicker, limited to an annulus that constricts content complexity to simple moving grey blobs, allowing objective mechanistic investigation. Hallucination strength peaked at ~11 Hz flicker and was dependent on cortical processing. Hallucinated motion speed increased with flicker rate, when mapped onto visual cortex it was independent of eccentricity, underwent local sensory adaptation and showed the same bistable and mnemonic dynamics as sensory perception. A neural field model with motion selectivity provides a mechanism for both hallucinations and perception. Our results demonstrate that hallucinations can be studied objectively, and they share multiple mechanisms with sensory perception. We anticipate that this assay will be critical to test theories of human consciousness and clinical models of hallucination.\nIntroduction:\nHallucinations occur across a wide range of pathologies and are also common in non-clinical populations( Barrett, 1993; 1994). Generally, hallucinations are defined as an involuntary percept-like experience in the absence of an appropriate direct stimulus( Bentall, 1990). However, very little is known about the mechanisms underlying hallucinations, due largely to the methodological constraints caused by their inherently subjective, constantly changing heterogeneous content. Visual hallucinations are thought to arise in exceptional circumstances when external stimuli are overwhelmed by internally generated spontaneous patterns of neural activity. This situation occurs when the parameters governing normal visual function are altered due to changes in brain anatomy or physiology( Ffytche, 2008; Butler et al., 2012), state changes such as dreaming or migraines( Llinás and Ribary, 1993; Aurora and Wilkinson, 2007), psychotropic drugs that temporarily perturb normal cortical function, or empty full field luminance flicker( Passie et al., 2008; Billock and Tsou, 2012). However, across these classes of hallucination, understanding has been severely limited by the multi-feature( color, form and motion) heterogeneous content that changes unpredictably over time, and typically requires subjective reports or subsequent depiction such as drawing to communicate subjective experience( Allefeld et al., 2011). To study visual hallucinations, we constrained empty-field flicker to a thin annulus that was centred on the fovea( Figure 1A). This stimulus effectively constrained the hallucinated forms to one spatial dimension. 10. 7554/eLife. 17072. 003Figure 1. Hallucination stimulus and data.( A) Physical stimulus and depiction of the percept.( B) Depiction of the stimulus used to measure the effective contrast. 
The small inner annulus is the perceptual, while the larger outer annulus shows a depiction of the hallucinated content.( C) Hallucination depiction and nonius lines used to measure the effective rotation propagation times.( D) Effective contrast data showing mean point of subjective equivalence between the perceptual and hallucinated content as a function of flicker frequency( Exp. 1; N = 42, 56 trials per frequency).( E) Data showing interocular interaction( Exp. 2; N = 20, 56 trials per combination of synchrony and frequency). Synchronous and asynchronous flickering annuli give different contrast measures.( F) Hallucination motion speed measures using stimulus in C, as a function of flicker rate( Exp. 3A; N = 6, 40 trials per frequency).( G) Dependence of propagation times on cortical distance( Exp. 3B; N = 6, 40 trials per eccentricity condition). Distance around the annulus was converted into cm across cortex using the formulae from( Horton and Hoyt, 1991). Main effect of distance F( 2, 5) = 13. 74; p=0. 001.( H) The effect of number of physical gaps in the annulus( Exp. 4A; N = 4, 10 trials per stimulus).( I) The effect of the width of physical gaps in the annulus( Exp. 4B; N = 4, 10 trials per stimulus).( J) Hallucinated structure and motion is reduced at isoluminance( Exp. 5; N = 5, 20 trials per luminance condition). All error bars show ± SEM.\nDiscussion:\nOur data implicate retinotopically organized visual cortex as the site of bistable flicker induced hallucinations. Unlike previous work on visual hallucinations that suffered methodological limitations due to the almost infinite array of unpredictable hallucinated features( e. g. multiple combinations of color, form and motion), our technique essentially limits the space to a one dimensional annulus and one set of visual features: moving light grey blobs. The intriguing phenomenon of perceptual bistability has fascinated thinkers for centuries( Blake and Logothetis, 2002), both because of the phenomenological experience of oscillating consciousness, but also as it provides the informative dissociation between low-level stimulation and awareness. Here we show a hallucinated bistable stimulus, in which both the content( form and motion) and alternations in motion are endogenously generated. Counter-phase physical stimuli are known to induce motion percepts that spontaneously switch between the two candidate directions of motion. Our model suggests that uniform flicker can induce spatiotemporal responses in primary visual cortex that are very similar to those induced by counter-phase physical stimuli, given the appropriate choice of spatial and temporal frequencies. We argue that the same neural mechanism that contributes to apparent motion of a counter-phase physical stimulus also contributes to hallucinated experience. It is up to future research to divulge if sensory decisions in endogenously generated content are resolved using the same neural machinery as exogenously sourced perceptual information. A neural field model based on the idea that standing waves of neural activity form visual hallucinations provided a quantitative mechanism for these bistable hallucinations.
It is interesting that in general, theories based on so-called symmetry-breaking standing waves have been proposed to explain the complex spatio-dynamics of the human brain( Atasoy et al., 2016), as well as in many other physical processes such as convection in fluids, animal coat markings, and cellular division( Stewart, 1999; Turing, 1952; Kondo and Miura, 2010). The exceptional circumstances in which externally sourced stimuli are overwhelmed by internally generated spontaneous patterns of neural activity and the accompanying conscious experience( hallucinations), are notoriously difficult to study scientifically. Accordingly, almost no specific treatments have been developed for clinical use. Our technique for controlling and objectively measuring the range of hallucinated features( contrast, motion, bistability) and the corresponding neural model should prove useful in probing the mechanism( s) that allows such a range of non-ordinary function to produce hallucinations." }, { "role": "assistant", "content": "Hallucinations can occur in both healthy and unwell people . Drugs , sleep deprivation , loss of vision , and migraines can all trigger visual hallucinations in people with no psychiatric illness . We have known for more than 200 years that flickering light can induce hallucinations in almost anyone . However , the unpredictability , complexity and personal nature of hallucinations make them difficult to measure scientifically , and previous studies often had to rely on drawings and verbal descriptions . Pearson et al . now show how to induce visual hallucinations in anyone , and how to measure them objectively and reliably without relying on subjective reports or drawings . The participant volunteers were university students with no history of migraines or psychiatric disorders . The students watched an image of a plain white ring flicker on and off around 10 times per second against a black background . All individuals reported seeing pale grey blobs appear in the ring and rotate around it , first in one direction and then the other . These grey hallucinations are much simpler than the complex multi-shape hallucinations people generally experience and so they are easier to study objectively . To measure the hallucinations , Pearson et al . placed a second ring marked with permanent perceptual grey blobs inside the white ring . By stating whether the hallucinated blobs were lighter or darker than the real blobs , the participants were able to communicate the strength of their hallucinations . Similarly , by indicating when the hallucinated blobs had moved past fixed lines at the top and bottom of the white ring , the subjects were able to convey the speed of the hallucinated motion . The hallucinated blobs and ‘real’ perceived blobs had many of the same properties , and seemed to arise in the same part of the brain , the visual cortex . By using the data to construct a neural computer model of visual cortex , Pearson et al . propose a mechanism that can explain both normal vision and hallucinations . The next step is to investigate whether the experimental methods can also model the hallucinations produced by psychiatric disorders ." } ]
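The figure caption in the record above notes that distance around the annulus was converted into centimetres of cortex using the formulae of Horton and Hoyt (1991). A commonly quoted form of that result gives a linear cortical magnification in human V1 of roughly M(E) = 17.3 / (E + 0.75) mm per degree at eccentricity E. The sketch below applies it under the simplifying assumptions that magnification is isotropic and that the visual-field arc can be treated with a flat small-angle approximation; the example eccentricities and arc fraction are made up, and this is not the authors' analysis code.

```python
# Hedged sketch of the eccentricity-to-cortex conversion. M(E) below follows
# the commonly quoted Horton & Hoyt (1991) fit; isotropy of magnification is
# an assumption of this sketch, not a claim from the paper.
import math

def magnification_mm_per_deg(ecc_deg: float) -> float:
    return 17.3 / (ecc_deg + 0.75)

def cortical_arc_cm(ecc_deg: float, arc_fraction: float) -> float:
    # Visual-field arc length (deg) around an iso-eccentric circle, flat approx.
    arc_deg = 2.0 * math.pi * ecc_deg * arc_fraction
    return arc_deg * magnification_mm_per_deg(ecc_deg) / 10.0  # mm -> cm

for ecc in (2.0, 4.0, 8.0):  # hypothetical eccentricities in degrees
    print(ecc, round(cortical_arc_cm(ecc, 0.25), 2))
```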
science.chemsum_single_document_summarization
science.chemsum_single_document_summarization.96868
[ { "role": "user", "content": "You will be shown the title of a chemistry paper, together with sections excerpted from the body of the paper. The paper sections may include HTML tags like `<p>` and `</p>`; please ignore these. Your task is to write an abstract for the paper. Your response should include the abstract and no other text.\n\nTitle:\nRanking Protein-Protein Docking Results using Steered Molecular Dynamics and Potential of Mean Force Calculations\n\nPaper sections:\n\n<p>Despite many advances in modeling, docking, and scoring, predicting protein-protein interactions is still riddled with challenges1. Selecting the final model(s) is typically considered one of the most difficult steps and is often the most critical. Here we describe a novel, physics-based, multi-step approach to identify near-native protein-protein complex structures from a set of top-ranked poses.</p><p>In our method, summarized in Figure 1, steered molecular dynamics (MD) simulations are used to estimate the force required to separate the partners of docked protein-protein complexes by pulling one partner away from the other. The top-10 complexes (those with the highest force required for separation) are selected for more detailed investigation using umbrella sampling. The umbrella sampling simulations combined with the weighted histogram analysis method (WHAM) provide an estimate of the potential of mean force (PMF) of protein dissociation. The difference in the PMF between the bound (starting configuration) and unbound (ending configuration) state is the calculated delta G of complex dissociation.</p><p>A set of ten diverse protein-protein complexes was used to evaluate our method (Table 1). From ~54,000 poses produced using ZDOCK2, a set of ~100 representative poses were selected. The selected poses were then evaluated using steered MD and five standard scoring functions, zrank13, zrank23, zdock2, irad4, and a custom potential based on van der Waals, electrostatics and knowledge-based terms5, herein referred to as \"stats\". The scoring functions were independently evaluated using the interface RMSD (iRMSD), a commonly used metric to evaluate protein-protein docking poses6.</p><p>A prediction was considered \"good\" if the iRMSD ≤ 2.0 Å, a pose was considered \"acceptable\" if the iRMSD ≤ 4.0 Å, and a prediction was considered \"poor\" if the iRMSD > 4.0 Å.</p><p>Plots showing the number of actives recovered versus the percentage of complexes screened are shown in Figure 2A (tabulated values are shown in Supplemental Table 1). Steered MD produced the best results of any scoring scheme tested producing at least one good pose for 7/10 systems tested and an acceptable pose for 10/10 systems within the top-10 predictions. The irad and stats scoring functions performed similarly to sMD, both produced good predictions in 6/10 cases and acceptable poses were predicted from 10/10 and 8/10 poses respectively. In terms of enrichment, steered MD and stats perform the best. This is especially apparent in 1DJF, 1EZU, and 1UDI, where steered MD and stats significantly outperform the other scoring functions. Furthermore, both perform perfectly or nearly perfectly in 4/10 systems (1PPE, 1DJF, 1EAW, and 1UDI) as shown in Figure 2A (dotted line in inset indicates perfect prediction).</p><p>In general, most scoring functions that were tested produced a good or acceptable pose within the top-10 predictions, but oftentimes the top-10 predictions also included several poor poses. 
The inclusion of poor poses is less detrimental if they are ranked below good or acceptable poses, but this was not always the case. For instance, in 2HRK, only a single good pose (iRMSD: 1.98 Å) was identified in the top-10 predictions by steered MD, and this pose was ranked 10th overall. Furthermore, the three acceptable poses (iRMSDs: 2.49 Å, 2.13 Å, and 2.86 Å) were also ranked poorly (7th, 8th and 9th, respectively). In a blind prediction scenario, this type of result could easily lead to an unproductive final model. Thus we attempted to further refine the top-10 predictions using umbrella sampling.</p><p>Ideally, in cases such as 2HRK, re-scoring using PMF will result in the low iRMSD structures being re-ranked closer to the top. Alternatively, the 1VFB dataset contains several successful poses that are ranked near the top, and 7 out of the top 10 poses are acceptable (iRMSD ≤ 4.0 Å). To ensure that re-ranking by PMF does not alter a successful screen, the top-10 predicted complexes from the 1VFB systems were also re-scored using umbrella sampling.</p><p>Umbrella sampling is a technique where overlapping MD trajectories are utilized to produce an estimate of the potential of mean force (PMF) along a pre-defined reaction coordinate, in this case the distance describing the dissociation of the two protein units along the vector created by the centers of mass of each unit. These calculations, although computationally expensive, may provide a more accurate quantification of protein-protein interactions compared to steered MD alone. As a proof of concept, we selected the top-10 structures from 2HRK and 1VFB and used umbrella sampling to re-rank these structures.</p><p>In both cases, re-ranking the top-10 poses using the PMF calculated by umbrella sampling improved the results. In 2HRK, the lowest iRMSD complex (1.98 Å) rose from 10th place when ranked by steered MD alone to 1st when using PMF (Figure 2B-left panel). Likewise, in the 1VFB dataset, the 9.84 Å structure fell from 1st in the steered MD ranking down to one of the lowest ranked structures when ranked by PMF (Figure 2B-right panel). In addition, in the 1VFB dataset all good poses (iRMSD ≤ 2.0 Å) were ranked in the top-4 positions using PMF (Figure 2B-right panel).</p><p>As a comparison, we also calculated the PMF of the crystal structures (shown in bolded black lines in Figure 2B). In the case of 1VFB, the calculated PMF of the crystal structure was in agreement with the low iRMSD (≤ 2.0 Å) structures. This finding suggests that the crystal structure and accurately predicted poses demonstrate similar behavior in the calculations. However, in the case of 2HRK, the PMF of the crystal structure was ~30 kJ/mol larger than the best ranked structure (1.98 Å). One possible explanation for this finding is that not all crystal contacts are adequately reproduced in the docked results. Interfacial waters are one likely source of this discrepancy.</p><p>The hydration site analysis program WATsite [17] was used to compare the number of hydration sites in or immediately adjacent to the interface of the protein-protein complex, based on the x-ray conformation and the pose with lowest iRMSD for 2HRK and 1VFB. A comparison between the 2HRK crystal structure and the equilibrated lowest iRMSD pose (1.98 Å) revealed that not all contact-mediating hydration sites in the x-ray structure of the protein-protein interface were conserved in the low iRMSD pose (Supplemental Figure 1).
Whereas 16 contact-mediating hydration sites were identified in the protein interface of the x-ray structure, only 12 were found in the low iRMSD pose (Supplementary Table 2). Repeating this analysis for 1VFB revealed that the same number of contact-mediating waters were identified in the low iRMSD pose and the x-ray structure, supporting the observation that the PMF of the crystal structure and good poses were approximately equal. Thus, important water-mediated interactions are lost for the low iRMSD pose, resulting in reduced complex stability compared to the x-ray structure of the complex.</p><p>To the best of our knowledge, this is the first time that steered MD and PMF calculations have been used to evaluate protein-protein docking poses. Furthermore, the use of explicit solvent MD simulations allows for the incorporation of waters into the interface, which are accounted for in our procedure, a feature that is very rarely included in traditional docking and scoring methods.</p><p>Despite the limited number of test cases, we believe the proposed stepwise method to be a promising approach, although there are some important considerations about the limitations of this method. Importantly, the time required for calculation of PMF profiles could present a significant limitation. In practice, we suggest that a more rapid scoring function might be used as a pre-filter, prior to implementing the more computationally demanding umbrella sampling (both the stats and the irad scoring functions performed exceptionally well in our hands). In addition to the time required to calculate the PMF profiles, the calculations are sensitive to the reproducibility of the interactions in the interface. In an ideal case, protein partners would not change conformation upon binding and interface interactions would be strictly between partners (i.e. not mediated by water or other co-factors). Caution should be exercised in cases where drastic conformational changes are thought to occur or in cases where protein interactions are extensively mediated by other molecules. Methods such as principal component analysis (PCA) may be employed to determine the best vector for sMD simulations in cases of intricate interfaces or where significant conformational change is anticipated.</p><p>In summary, the use of steered MD and umbrella sampling in ranking protein-protein docking conformations represents a novel approach in this field and has been found to be successful in the test cases presented here and elsewhere [18]. While there are some limitations to this approach, notably the computational cost, we believe that this approach may prove useful in a range of systems and be a complementary approach to the currently used scoring functions for protein-protein docking.</p>\n\nMethods\n<p>Only a general outline of the procedure and tools used has been included here; a detailed methods section has been included in the supplemental information.</p><p>All protein systems were docked using the ZDOCK algorithm, producing ~54,000 conformations. From these, 100 representative conformations were selected for steered MD. Gromacs 4.6.1 was used to prepare and equilibrate each system prior to sMD. From the sMD simulations, the total force was computed as the difference between the lowest and highest recorded force for each simulation. Umbrella sampling was performed on the top-10 structures from 1VFB and 2HRK, and the g_wham program from Gromacs was used to estimate the PMF using the sampled windows.
WATsite was used in the interfacial water analysis.</p>" }, { "role": "assistant", "content": "Crystallization of protein-protein complexes can often be problematic and therefore computational structural models are often relied upon. Such models are often generated using protein-protein docking algorithms, where one of the main challenges is selecting which of several thousand potential predictions represents the most near-native complex. We have developed a novel technique that involves the use of steered molecular dynamics (sMD) and umbrella sampling to identify near-native complexes among protein-protein docking predictions. Using this technique, we have found a strong correlation between our predictions and the interface RMSD (iRMSD) in ten diverse test systems. On two of the systems, we investigated if the prediction results could be further improved using potential of mean force calculations. We demonstrated that a near-native (<2.0 Å iRMSD) structure could be identified in the top-1 ranked position for both systems." } ]
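The Methods outline above describes two quantities concretely enough to sketch: a steered-MD score taken as the difference between the highest and lowest recorded pull force, and a dissociation delta G read off an umbrella-sampling PMF as the gap between the bound minimum and the unbound plateau. The sketch below is a hypothetical re-implementation of just that arithmetic, not the authors' pipeline; the force traces and PMF curves are synthetic stand-ins for Gromacs pull-force and g_wham output, and equilibration, file handling, and WHAM itself are omitted.

```python
# Illustrative two-stage ranking: (1) score poses by steered-MD rupture force,
# (2) re-rank the top-10 by delta G estimated from an umbrella-sampling PMF.
import numpy as np

rng = np.random.default_rng(2)

def smd_score(force_trace: np.ndarray) -> float:
    # "Total force" as the difference between highest and lowest recorded force.
    return float(force_trace.max() - force_trace.min())

def delta_g_from_pmf(pmf: np.ndarray, plateau_frac: float = 0.2) -> float:
    bound = pmf.min()                              # bound-state minimum
    n_tail = max(1, int(len(pmf) * plateau_frac))
    unbound = pmf[-n_tail:].mean()                 # unbound plateau
    return float(unbound - bound)

# Synthetic force traces (random walks) standing in for pull-force output.
poses = {f"pose_{i:02d}": rng.normal(0, 50, 500).cumsum() for i in range(100)}
top10 = sorted(poses, key=lambda p: smd_score(poses[p]), reverse=True)[:10]

# Synthetic monotone PMFs standing in for g_wham output on the top-10 poses.
pmfs = {p: np.sort(rng.normal(0, 5, 200)) for p in top10}
ranked = sorted(top10, key=lambda p: delta_g_from_pmf(pmfs[p]), reverse=True)
print(ranked[:3])
```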
science.qasper_extractive_qa
science.qasper_extractive_qa.728.0
[ { "role": "user", "content": "You will be shown sections from a scientific research paper, together with a question about the paper. Your task is to answer the question based on the contents of the paper. Your response should be a `json` object with two fields:\n\n\"answer\": An array of strings extracted directly from the paper which, collectively, answer the question.\n\"evidence\": An array of strings. Each should be an excerpt from the paper, in which one or more of the extracted answers can be found.\n\nFor instance, for the question \"What baselines did the authors compare against?\", a sample response might be:\n\n{\n \"answer\": [\"BERT\", \"RoBERTa\"],\n \"evidence\": [\"In our experiments, we compare the performance of our model against BERT and RoBERTa.\"]\n}\n\nDo not include any text in your response other than the json.\n\nIf the question is unanswerable given the provided excerpts, respond with the single word \"null\".\n\nPaper: Hate Speech in Pixels: Detection of Offensive Memes towards Automatic Moderation\n\nThis work addresses the challenge of hate speech detection in Internet memes, and attempts using visual information to automatically detect hate speech, unlike any previous work of our knowledge. Memes are pixel-based multimedia documents that contain photos or illustrations together with phrases which, when combined, usually adopt a funny meaning. However, hate memes are also used to spread hate through social networks, so their automatic detection would help reduce their harmful societal impact. Our results indicate that the model can learn to detect some of the memes, but that the task is far from being solved with this simple architecture. While previous work focuses on linguistic hate speech, our experiments indicate how the visual modality can be much more informative for hate speech detection than the linguistic one in memes. In our experiments, we built a dataset of 5,020 memes to train and evaluate a multi-layer perceptron over the visual and language representations, whether independently or fused. The source code and mode and models are available this https URL .\n\nMotivation\nThe spread of misinformation or hate messages through social media is a central societal challenge given the unprecedented broadcast potential of these tools. While there already exist some moderation mechanisms such as crowd-sourced abuse reports and dedicated human teams of moderators, the huge and growing scale of these networks requires some degree of automation for the task.\nSocial networks have already introduced many tools to detect offensive or misleading content, both for visual and textual content, ranging from nudity and pornography BIBREF0, BIBREF1 to hate speech BIBREF2 and misinformation BIBREF3. However, machine learning is still facing some challenges when processing borderline or figurative content such as nudity in paintings, political satire or other forms of humorous content. In particular for the case of hate speech, rapidly evolving topics and shifting trends in social media make its detection a topic of constant and active research.\nThis work takes one step forward and instead of focusing on visual or linguistic content alone, we tackle the challenging problem of detecting hate speech in memes. Memes are a form of humorist multimedia document which is normally based on an image with some sort of caption text embedded in the image pixels. 
Memes have gained a lot of popularity in the last few years and have been used in many different contexts, especially by young people. However, this format has also been used to produce and disseminate hate speech in the form of dark humour. The multimodal nature of memes makes it very challenging to analyze because, while the visual and linguistic information is typically neutral or actually funny in isolation, their combination may result in hate speech messages.\nOur work explores the potential of state of the art deep neural networks to detect hate speech in memes. We study the gain in accuracy when detecting hate speech in memes by fusing the vision and language representations, when compared with the two modalities apart. Our experiments indicate that while meme detection is a multimodal problem that benefits by analyzing both modalities, this societal task is far from being solved given the high abstraction level of the messages contained in memes.\n\nRelated Work\nHate speech is a widely studied topic in the context of social science. This phenomenon has been monitored, tracked, measured or quantified on a number of occasions BIBREF4, BIBREF5, BIBREF6. It appears in media such as newspapers or TV news, but one of the main focuses of hate speech with very diverse targets has appeared in social networks BIBREF7, BIBREF8, BIBREF9. Most work in hate speech detection has focused on language. The most common approach is to generate an embedding of some kind, using bag of words BIBREF8 or N-gram features BIBREF10, and many times using expert knowledge for keywords. After that, the embedding is fed to a binary classifier to predict hate speech. To our knowledge, there is no previous work on detecting hate speech when combining language with visual content as in memes. Our technical solution is inspired by BIBREF11, in which gang violence on social media was predicted from a multimodal approach that fused images and text. Their model extracted features from both modalities using pretrained embeddings for language and vision, and later merged both vectors to feed the multimodal features into a classifier.\n\nModel\nThe overall system expects an Internet meme input, and produces a hate score as an output. Figure FIGREF1 shows a block diagram of the proposed solution.\nThe first step of the process is extracting the text of the image with Optical Character Recognition (OCR). The text detected by the OCR is encoded in a BERT BIBREF12 representation for language. We used the Tesseract 4.0.0 OCR with a Python wrapper on top. This encoding generates contextual (sub)word embeddings, which we turn into a sentence embedding by averaging them. We used a PyTorch implementation available at the repo below. This implementation includes multiple pretrained versions, and we chose the one called bert-base-multilingual-cased. This version has 12 layers, 768 hidden dimensions, 12 attention heads with a total of 110M parameters and is trained on 104 languages.\nThe visual information was encoded with a VGG-16 convolutional neural network BIBREF13, trained on ImageNet BIBREF14. Then we used the activations from a hidden layer as feature vectors for the image. Specifically, we used the last hidden layer before the output, which has 4096 dimensions. We obtained the pretrained model from the TorchVision module in PyTorch.\nThe text and image encodings were combined by concatenation, which resulted in a feature vector of 4,864 dimensions.
This multimodal representation was afterward fed as input into a multi-layer perceptron (MLP) with two hidden layers of 100 neurons with a ReLU activation function. A single output neuron with no activation function was added at the end to predict the hate speech detection score.\n\nDataset\nWe built a dataset for the task of hate speech detection in memes with 5,020 images that were weakly labeled into hate or non-hate memes, depending on their source. Hate memes were retrieved from Google Images with a downloading tool. We used the following queries to collect a total of 1,695 hate memes: racist meme (643 memes), jew meme (551 memes), and muslim meme (501 Memes). Non-hate memes were obtained from the Reddit Memes Dataset . We assumed that all memes in the dataset do not contain any hate message, as we considered that average Reddit memes do not belong to this class. A total of 3,325 non-hate memes were collected. We split the dataset into train (4266 memes) and validation (754 memes) subsets. The splits were random and the distribution of classes in the two subsets is the same. We didn't split the dataset into three subsets because of the small amount of data we had and decided to rely on the validation set metrics.\n\nExperiments\nOur experiments aimed at estimating the potential of a multimodal hate speech detector, and studying the impact of a multimodal analysis when compared to using language or vision only.\nWe estimated the parameters of the MLP on top of the encoding of the meme with an ADAM optimizer with lr=0.1, betas=(0.9, 0.999) and $\varepsilon =10^{-8}$, weight decay=0, a batch size of 25, and a dropout of 0.2 on the first hidden layer. The network was trained with a Mean Squared Error (MSE) loss, but assessed in terms of binary accuracy.\nFigure FIGREF7 presents the results of the training with the three considered configurations: language only, vision only, and a multimodal solution. In the single modality cases, the input layer of the MLP is adjusted to the size of the encoded representation. The curves show how the blue line representing the multimodal case obtains the best results, closely followed by the orange one of the vision only case. The language only configuration performs clearly worse than the other two. Nevertheless, the three curves are consistently above the baseline accuracy of $0.66$, which would be achieved by a dummy predictor of the Non-hate class, because of the 34%-66% class imbalance of the dataset.\nTable TABREF11 provides numerical results comparing the three configurations based on two different metrics: Max. Accuracy corresponds to the best accuracy obtained in any epoch, while Smth Max. Accuracy corresponds to the smoothed accuracy to which the model was converging. This was estimated by smoothing the curve with a momentum average and picking the best value. We thought the second metric was a good estimation of the real performance of the model due to the huge validation accuracy fluctuation between epochs in evaluation. Also, since the classes are imbalanced, we computed the precision-recall curve for the best multimodal model, getting an Average Precision of $0.81$.\nWe consider that the superior performance of the vision only configuration over the language only one may be due to a variety of reasons. Firstly, the most obvious one is that the dimensionality of the image representation (4096) is much larger than the linguistic one (768), so it has the capacity to encode more information.
Also, the different models have different numbers of parameters due to their different MLP input sizes, and we did not take this variation in model capacity into consideration. Secondly, we think there might be a visual bias in the dataset, mainly because there are more modern-style memes in the non-hate class and more classic-style memes in the hate class. Classic or modern memes refer basically to the format and placement of the text. Figure FIGREF12 (a) and (b) are examples of them. Also, we found some false positives in the hate class and there might be false negatives in the non-hate Reddit set. Finally, memes are often highly compressed images with an important level of distortion. This fact may affect the quality of the OCR recognition and, therefore, the language encoding, as shown in Figure FIGREF12 (c).\nThe training code and models are publicly available to facilitate reproducibility.\n\nConclusions\nOur study on hate speech detection in memes concludes that it is possible to automate the task, in the sense that a simple configuration using state of the art image and text encoders can detect some of them. However, the problem is far from being solved, because the best accuracy obtained, $0.83$, seems modest despite being much better than the greedy solution of always predicting the most frequent class. The proposed system may be used for filtering some of the memes distributed through a social network, but it would still require a human moderator for many of them.\nUnfortunately, the system may actually also be used for the opposite of detecting hate speech memes, namely helping in their creation. Given a large number of sentences and images, a misuse of the system could assess the hate score of each possible pair of text and image to find novel combinations with an expected high hate level.\nThe experiments also show that the visual cues are much more important than the linguistic ones when detecting hate speech memes, a totally opposite scenario to previous studies focusing on language-based hate speech detection. While the best results are obtained with the multimodal approach, the gain with respect to the vision only one is small. A practical deployment of this system should evaluate whether the computation cost of running the OCR and encoding the extracted text is worthwhile given the reported gains in accuracy.\nThe present work poses a new challenge to the multimedia analysis community, a challenge that has proven difficult but not impossible. Given the rich affective and societal content in memes, an effective solution should probably also take into account much more additional information than just the one contained in the meme, such as the societal context in which the meme is posted.\n\nAcknowledgements\nThis work has been developed in the framework of project TEC2016-75976-R, funded by the Spanish Ministerio de Economía y Competitividad and the European Regional Development Fund (ERDF), and the Industrial Doctorate 2017-DI-011 funded by the Government of Catalonia. We gratefully acknowledge the support of NVIDIA Corporation with the donation of some of the GPUs used for this work.\n\nFigure 1: Block diagram of the system\n\nTable 1: Accuracy results for the three configurations\n\nFigure 2: Results refer to the language only (red), vision only (orange) and multimodal (blue) solutions. In (a) we see the train loss, in (b) we see the validation loss and in (c) the validation average binary accuracy.
The x-axis is #batch for (a) and #epoch for (b) and (c).\n\nFigure 3: Three examples of the different meme styles\n\nQuestion: What is the source of memes?" }, { "role": "assistant", "content": "{\"answer\": [\"Google Images\", \"Reddit Memes Dataset\"], \"evidence\": [\"We built a dataset for the task of hate speech detection in memes with 5,020 images that were weakly labeled into hate or non-hate memes, depending on their source. Hate memes were retrieved from Google Images with a downloading tool. We used the following queries to collect a total of 1,695 hate memes: racist meme (643 memes), jew meme (551 memes), and muslim meme (501 Memes). Non-hate memes were obtained from the Reddit Memes Dataset . We assumed that all memes in the dataset do not contain any hate message, as we considered that average Reddit memes do not belong to this class. A total of 3,325 non-hate memes were collected. We split the dataset into train (4266 memes) and validation (754 memes) subsets. The splits were random and the distribution of classes in the two subsets is the same. We didn\u0027t split the dataset into three subsets because of the small amount of data we had and decided to rely on the validation set metrics.\"]}" } ]
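The model section of the paper in the record above specifies the fusion head precisely: a 768-dimensional BERT sentence embedding concatenated with a 4096-dimensional VGG-16 feature vector (4,864 total), an MLP with two 100-neuron ReLU hidden layers, dropout of 0.2 on the first hidden layer, a single linear output neuron, and an MSE training loss. The PyTorch sketch below reconstructs just that head under those stated numbers; the OCR step, the encoders themselves, and the training loop are omitted, and the random inputs are placeholders. Placing the dropout immediately after the first ReLU is an interpretation of "dropout on the first hidden layer", not something the paper spells out.

```python
# Sketch of the fusion head described in the paper: concat(768-d text, 4096-d
# image) -> 100 -> 100 -> 1, ReLU hidden activations, dropout 0.2 after the
# first hidden layer, linear output scored with an MSE loss during training.
import torch
import torch.nn as nn

class MemeHateScorer(nn.Module):
    def __init__(self, text_dim: int = 768, image_dim: int = 4096):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(text_dim + image_dim, 100),
            nn.ReLU(),
            nn.Dropout(0.2),   # assumption: dropout applied after first ReLU
            nn.Linear(100, 100),
            nn.ReLU(),
            nn.Linear(100, 1), # raw hate score, no output activation
        )

    def forward(self, text_emb: torch.Tensor, img_feat: torch.Tensor) -> torch.Tensor:
        return self.mlp(torch.cat([text_emb, img_feat], dim=-1)).squeeze(-1)

model = MemeHateScorer()
score = model(torch.randn(2, 768), torch.randn(2, 4096))
print(score.shape)  # torch.Size([2])
```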
science.data_reco_mcq_mc
science.data_reco_mcq_mc.1108
[ { "role": "user", "content": "You are provided with a research question, keyphrases about the question, a description of candidate datasets and dataset options. Read the description of popular datasets provided below and select the ones that can be used to validate the following research question. Use your knowledge of machine learning datasets to make the best judgement.\nYour response should be formatted as a `json` array. For instance, for the query \"Semi supervised image classification\", a sample response might be: [\"CIFAR-10\", \"CIFAR-100\"]. Do not include any extra text in the response other than the answer array.\n\nQuery: I want to recognize actions in static images.\n\nKeyphrases: action recognition images\n\nDataset description:\nImage and Video Advertisements: The Image and Video Advertisements collection consists of an image dataset of 64,832 image ads, and a video dataset of 3,477 ads. The data contains rich annotations encompassing the topic and sentiment of the ads, questions and answers describing what actions the viewer is prompted to take and the reasoning that the ad presents to persuade the viewer (\"What should I do according to this ad, and why should I do it?\nSNIPS: The SNIPS Natural Language Understanding benchmark is a dataset of over 16,000 crowdsourced queries distributed among 7 user intents of various complexity: SearchCreativeWork (e.g. Find me the I, Robot television show), GetWeather (e.g.\nHMDB51: The HMDB51 dataset is a large collection of realistic videos from various sources, including movies and web videos. The dataset is composed of 6,766 video clips from 51 action categories (such as “jump”, “kiss” and “laugh”), with each category containing at least 101 clips.\nUCF101: UCF101 dataset is an extension of UCF50 and consists of 13,320 video clips, which are classified into 101 categories. These 101 categories can be classified into 5 types (Body motion, Human-human interactions, Human-object interactions, Playing musical instruments and Sports).\nHollywood 3D dataset: A dataset for benchmarking action recognition algorithms in natural environments, while making use of 3D information. The dataset contains around 650 video clips, across 14 classes.\n\nOptions:- A: Image and Video Advertisements B: SNIPS C: HMDB51 D: UCF101 E: Hollywood 3D dataset" }, { "role": "assistant", "content": "[\"HMDB51\", \"UCF101\"]" } ]
science.chemdner_ner
science.chemdner_ner.1064
[ { "role": "user", "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Chemical\"].\n\nPlease return the output as a JSON object of the format: {\"Chemical\": [\"Arsenic\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nLive-cell imaging of p53 interactions using a novel Venus-based bimolecular fluorescence complementation system. p53 plays an important role in regulating a wide variety of cellular processes, such as cell cycle arrest and/or apoptosis. Dysfunction of p53 is frequently associated with several pathologies, such as cancer and neurodegenerative diseases. In recent years substantial progress has been made in developing novel p53-activating molecules. Importantly, modulation of p53 interaction with its main inhibitor, Mdm2, has been highlighted as a promising therapeutic target. In this regard, bimolecular fluorescence complementation (BiFC) analysis, by providing direct visualization of protein interactions in living cells, offers a straightforward method to identify potential modulators of protein interactions. In this study, we developed a simple and robust Venus-based BiFC system to screen for modulators of p53-p53 and p53-Mdm2 interactions in live mammalian cells. We used nutlin-3, a well-known disruptor of p53-Mdm2 interaction, to validate the specificity of the assay. The reduction of BiFC signal mediated by nutlin-3 was correlated with an increase in Puma transactivation, PARP cleavage, and cell death. Finally, this novel BiFC approach was exploited to identify potential modulators of p53-Mdm2 complex formation among a commercially available chemical library of 33 protein phosphatase inhibitors. Our results constitute \"proof-of-concept\" that this model has strong potential as an alternative to traditional target-based drug discovery strategies. Identification of new modulators of p53-p53 and p53-Mdm2 interactions will be useful to achieve synergistic drug efficacy with currently used anti-tumor therapies." }, { "role": "assistant", "content": "{\"Chemical\": [\"nutlin-3\"]}" } ]
science.acl_arc_intent_classification
science.acl_arc_intent_classification.1650
[ { "role": "user", "content": "You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:\n- Background: provides context or foundational information related to the topic.\n- Extends: builds upon the cited work.\n- Uses: applies the methods or findings of the cited work.\n- Motivation: cites the work as inspiration or rationale for the research.\n- CompareOrContrast: compares or contrasts the cited work with others.\n- FutureWork: cites the work as a direction for future research.\n\nYour answer should be a single word from the following list of options: [\"Background\", \"Extends\", \"Uses\", \"Motivation\", \"CompareOrContrast\", \"FutureWork\"]. Do not include any other text in your response.\n\nSection Title:\nintroduction\n\nContext before the citation:\nment (Sarkar and Wintner, 1999; Doran et al., 2000; Makino et al., 1998). 1In this paper, we use the term LTAG to refer to FBLTAG, if not confusing. There have been many studies on parsing techniques (Poller and Becker, 1998; Flickinger et al., 2000), ones on disambiguation models (Chiang, 2000; Kanayama et al., 2000), and ones on programming/grammar-development environ-\n\nCitation Sentence:\nment ( Sarkar and Wintner , 1999 ; Doran et al. , 2000 ; Makino et al. , 1998 ) .\n\nContext after the citation:\nThese works are restricted to each closed community, and the relation between them is not well discussed. Investigating the relation will be apparently valuable for both communities. In this paper, we show that the strongly equivalent grammars enable the sharing of “parsing techniques”, which are dependent on each computational framework and have never been shared among HPSG and LTAG communities. We apply our system to the latest version of the XTAG English grammar (The XTAG Research Group, 2001), which is a large-scale FB-LTAG grammar." }, { "role": "assistant", "content": "Background" } ]
science.scientific_lay_summarisation_plos_single_doc_summ
science.scientific_lay_summarisation_plos_single_doc_summ.8160
[ { "role": "user", "content": "You will be presented with the abstract, introduction, and discussion section from a biomedical research article. Your task is to create a summary that a layperson can understand, capturing the essence of the research article. Keep essential scientific terms, but ensure that the language remains clear and the concepts are explained in an uncomplicated manner.\n\nTitle: A Comprehensive Survey of Small-Molecule Binding Pockets in Proteins}\nArticle:\nAbstract:\nMany biological activities originate from interactions between small-molecule ligands and their protein targets. A detailed structural and physico-chemical characterization of these interactions could significantly deepen our understanding of protein function and facilitate drug design. Here, we present a large-scale study on a non-redundant set of about 20, 000 known ligand-binding sites, or pockets, of proteins. We find that the structural space of protein pockets is crowded, likely complete, and may be represented by about 1, 000 pocket shapes. Correspondingly, the growth rate of novel pockets deposited in the Protein Data Bank has been decreasing steadily over the recent years. Moreover, many protein pockets are promiscuous and interact with ligands of diverse scaffolds. Conversely, many ligands are promiscuous and interact with structurally different pockets. Through a physico-chemical and structural analysis, we provide insights into understanding both pocket promiscuity and ligand promiscuity. Finally, we discuss the implications of our study for the prediction of protein-ligand interactions based on pocket comparison.\nIntroduction:\nAt the molecular level, many functions of proteins in a living cell can be attributed to or regulated by their interactions with small-molecule ligands such as metabolites or drugs[1],[2]. A high-resolution structural description of protein-ligand recognition is very important for understanding protein function and designing new compounds for therapeutic purposes. As revealed in many of the crystal structures of proteins in complex with their ligands, protein-ligand interactions usually take place at preferred sites on the protein surface known as “pockets”[3],[4], in contrast to relatively flat geometric shape of protein-protein interaction sites[5]. Traditionally, the study of a protein-ligand complex structures often focuses on the structural or physico-chemical characteristics that are thought to be specific to that individual pocket[6]. However, it is becoming more and more clear that proteins are generally promiscuous in that they interact with multiple distinct ligands[7],[8]. One naturally seeks detailed structural insights into both the origin and generality of this intriguing observation. In this regard, a comprehensive, large-scale comparative study on all protein pockets in all protein structures that are solved to date may uncover principles that explain the promiscuity of protein-ligand interactions. In such a study, the first question is: How many representative pockets are there in the structural space of all pockets? This echoes a similar question asked about the fold space of proteins[9],[10]. A very recent study addressed this by comparing the pockets of 5, 000 single-domain proteins[11]. 
It was found that a few hundred pocket structures are enough to represent all structure shapes in this set, and similar shaped pockets are also found in artificially generated proteins, which were built and selected based on thermodynamic stability but not biochemical function. In this sense, the structural space of protein pockets is degenerate and surprisingly small. Since the number of known bioactive ligands[12],[13] is much larger than the available number of pocket shapes, the implication is that a given pocket shape can accommodate more than one type of ligand, thus generating the promiscuity responsible for the evolution of biochemical function[14],[15]. The observation that pocket shapes are degenerate suggests that the same ligand could bind to pockets of similar shape but located in different proteins, thus leading to side-effects of drug molecules through unexpected “off-target” interactions[16],[17]. However, the specific interplay of pocket geometry and chemical environment with the types of ligands that are bound was not addressed in that study[11] as it focused on the properties of pockets in proteins without a companion analysis of the bound ligands. In the current contribution, we address this issue. A second question is: To what extent can we infer a similar protein-ligand interaction by matching protein pockets? The answer to this question has practical applications for protein function prediction[18] or small-molecule compound screening[19]. In order to match pockets, many computational approaches have been developed to compare pockets based on their structural and/or physico-chemical features( for a review see[20]). These methods may be categorized into two classes: The first is based on the structural alignment of pocket-lining residues or atoms[21]–[24], and the second is based on comparison of descriptors independent of the residue or atom alignment[25]–[27]. The former class is generally more accurate, albeit slower than an alignment-free method, due to the complexity of the alignment algorithm. In that regard, we recently proposed an efficient, robust method, APoc, for large-scale pocket comparison[28]. On the other hand, since a structural alignment is not required, alignment-free methods might have an advantage in dealing with flexible pockets. Their disadvantage is that they often lack a direct physical interpretation for why two pockets are similar as assessed by their fingerprints. Another interesting question is: How different are the ligands that bind to the same pocket? Obviously, if the ligands are very similar, they are very likely to have similar interactions with the pocket, e. g., that might contain a common anchor and variable region[29]. However, if the ligands possess different scaffolds and/or chemical properties, it might not be obvious as to what, if any, interactions are conserved. How does a pocket maintain favorable interactions with very different ligands? Conversely, a ligand may be found in pockets of different protein structures. How different are those pockets that interact with the same ligand? An early study of pockets from non-homologous proteins that bind the nine most common ligands suggests that there are shape variations in these pockets[30]. This further raises the question of how a ligand manages to interact with different pocket shapes. To address these questions, we performed a comprehensive comparative study on a large curated set of over 20, 000 ligand-bound pocket structures from crystallized protein-ligand complexes. 
We first characterize the structural space of these pockets. This is followed by an analysis of the correlations between pocket similarity and ligand chemical similarity. Then, we investigate both pocket promiscuity( one pocket accommodating different ligands separately) and ligand promiscuity( one ligand recognized by different proteins), respectively. Finally, the implications of our study are discussed.\nDiscussion:\nOur study demonstrates a complicated picture of protein-ligand interactions. First, from mainly a structural perspective, the space of the protein pockets is degenerate. The growth rate of novel pockets deposited in the PDB has been steadily decreasing over the past decade, approaching a plateau. At a PS-score of 0. 40( P<0. 01), one can find a structural match for all known pockets by using about 1, 300 representative pockets. The number is higher than that reported in an earlier study[11], which was limited to proteins with less than 250 residues and employed a less stringent pocket similarity criterion. Perhaps, this result is not that surprising given that the structural space of protein folds themselves is also finite[9],[10]. Like protein fold space, the structural space of protein pockets is continuous in the sense that a significant set of structural features in one pocket can be found in another pocket, which may not share any evolutionary relationship. Interestingly, at a high structural similarity level( PS-score 0. 50), a phase transition occurs in pocket space( Fig. 2B), yielding mostly isolated clusters of pockets that could share an evolutionary relationship. However, this is not to say pockets at a lower similarity level do not share an evolutionary origin or that those at a higher similarity level have a common ancestor. Instead, it means that it is difficult to establish their evolutionary relationship using structural information alone. This observation is analogous to a “continuous-to-discrete” view of protein fold space[47]. Like the classification of protein folds, classification of protein pockets is dependent on the similarity criteria employed. We note that there is no perfect metric or criteria that gives a universally agreed upon classification. In our study, pocket similarity comparison focused on the position of Cα and Cβ atoms of the pocket-lining residues, as well as their chemical similarity. The similarity criteria we selected are based on estimation of the statistical significance, ranging from P<0. 05 to highly significant P<1×10−6 according to APoc[28]. One major purpose of pocket comparison is to develop a structure-based method for predicting protein-ligand interactions. The rationale behind this is that similar pockets attract similar ligands. This is certainly true to some extent; as shown in Fig. 5, for 72% and 50% of ligand-bound pockets one can find another similarly shaped pocket( at P<0. 01) that interacts with a similar ligand at a Tc>0. 4 and 0. 7, respectively. One advantage based on structural comparison of pockets is that one may uncover ligand-protein interactions that are undetectable from sequence or global structural comparison. However, there is one limitation to this approach. As we show here, one type of pocket shape can accommodate multiple types of ligands, which could introduce false positives. To address this issue, it is necessary to increase the level of pocket similarity to reduce false positives, at the cost of sensitivity. This explains why current methods have relatively low coverage in benchmark tests[23],[28]. 
How to improve sensitivity and maintain a low false positive rate remains a challenge for predicting protein-ligand interactions on the basis of pocket similarity. Many protein pockets are promiscuous. More than 1/3 of pockets in our data set belong to those promiscuous pockets that interact with multiple, chemically different ligands. Considering that only a tiny fraction of protein-ligand interactions are captured in the PDB, the results shown here likely represent a lower bound, and it is very likely that promiscuous protein pockets are more common. From an analysis of protein-ligand interactions observed in promiscuous protein pockets, we showed that a large fraction( ∼60% on average) of these interactions share similar types of interactions, e. g., hydrogen bonding, hydrophobic, or aromatic. Moreover, the plasticity of protein pockets may also provide alternative, viable interaction modes[11]. Therefore, these promiscuous interactions may be understood from a physical chemical point of view. In principle, if one could design a scheme that matches similar ligands based on their physico-chemical properties regardless of their chemical scaffolds, then it could provide a means of predicting novel protein-ligand interactions. In practice, however, this is a highly challenging problem because many physical interactions such as hydrophobic interactions are not very specific, thus allowing many possible solutions that increase the chance of hitting a false positive. The complexity of protein-ligand interactions is also reflected in ligand promiscuity. That is, a ligand with different poses may interact with differently shaped pockets. One main reason is that ligands with multiple rotatable bonds are flexible, thus yielding different conformations selected by different pockets. In some cases, different poses fit different physico-chemical environments[48]. These observations further help explain polypharmacology or the unexpected “off-target” interactions found in many drug molecules[17]. From a prediction point of view, for a compound of interest, it is unlikely to predict all its protein partners based on only one template because of ligand structural diversity. In this regard, a catalog of structures of many-faceted protein-ligand interactions could significantly improve the prediction of side-effects or repurposing of drugs. In summary, we find that both protein pocket promiscuity and ligand promiscuity are common. The relationship of protein pockets and ligands is often not one to one but many to many. A given ligand may interact with a number of proteins whose structures are globally unrelated but contain similar pockets. Or it might interact with proteins having different pockets. Conversely, a given pocket can have similar physico-chemical interactions with ligands that may or may not have similar linear structures. For the case of dissimilar ligand pairs, they can adopt conformations that have similar interaction surfaces. Based on this and prior work[7], we conclude that promiscuous ligand interactions of differing specificity are inherent to proteins and living cells. This has a number of implications: It provides a mechanism for a living cell to select for useful biochemical functions as such low level function is likely inherent to a soup of quasi stable protein structures which can then be optimized[14],[15]. It also provides biological robustness[49]. 
On the other hand, it could cause difficulty in the control of biological processes and in assessing the accuracy of predicted protein-ligand interactions, since we are far from knowing all protein-ligand interactions. This work clearly argues that the notion of one ligand-one protein target that implicitly underlies many drug discovery efforts is fundamentally incorrect." }, { "role": "assistant", "content": "The life of a living cell relies on many distinct proteins to carry out their functions . Most of these functions are rooted in interactions between the proteins and metabolites , small-molecules essential for life . By targeting specific proteins relevant to a disease , drug molecules may provide a cure . A deep understanding of the nature of interactions between proteins and small-molecules ( or ligands ) through analyzing their structures may help predict protein function or improve drug design . In this contribution , we present a large-scale analysis of a non-redundant set of over 20 , 000 experimental protein-ligand complex structures available in the current Protein Data Bank . We seek answers to several fundamental questions: How many representative pockets are there that serve as ligand-binding sites in proteins ? To what extent can we infer a similar protein-ligand interaction by matching the structures of protein pockets ? How different are the ligands found in the same pocket ? For a promiscuous protein pocket , how does a pocket maintain favorable interactions with very different ligands ? Conversely , how different are those pockets that interact with the same ligand ? We find the structural space of protein pocket is small and that both protein promiscuity and ligand promiscuity are very common in Nature ." } ]
science.covidfact_entailment
science.covidfact_entailment.136
[ { "role": "user", "content": "You will be shown a claim related to the COVID-19 pandemic, and the abstract of a biomedical research paper. Each sentence from the abstract will be on a separate line. Your task is to return a JSON object with two fields:\n\n- \"verdict\": The fact-checking verdict. If the information in the abstract supports the claim, write \"SUPPORT\". If the abstract contradicts the claim, write \"CONTRADICT\".\n- \"evidence\": An array of sentences providing evidence for the verdict. Please copy all relevant sentences verbatim from the abstract.\n\nFor instance, if the model were given the claim \"the COVID vaccine is safe for healthy adults\", the output might be\n{\n \"verdict\": \"SUPPORT\",\n \"evidence\": [\"Our findings indicate that the risks of severe COVID vaccine side effects are low for healthy adults.\"]\n}\n\nClaim: Intrinsic signal amplification by type-iii crispr-cas systems provides a sequence-specific viral diagnostic\n\nAbstract:\nTo combat viral pandemics, there is an urgent need for inexpensive new technologies that enable fast, reliable, and scalable detection of viruses.\nHere we repurposed the type III CRISPR-Cas system for sensitive and sequence specific detection of SARS-CoV-2 in an assay that can be performed in one hour or less.\nRNA recognition by type III systems triggers Cas10-mediated polymerase activity, which simultaneously generates pyrophosphates, protons and cyclic oligonucleotides.\nWe show that amplified products of the Cas10-polymerase are detectable using colorimetric or fluorometric readouts." }, { "role": "assistant", "content": "{\"verdict\": \"SUPPORT\", \"evidence\": [\"Here we repurposed the type III CRISPR-Cas system for sensitive and sequence specific detection of SARS-CoV-2 in an assay that can be performed in one hour or less.\"]}" } ]
science.genia_ner
science.genia_ner.335
[ { "role": "user", "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"RNA\", \"CellLine\", \"Protein\", \"DNA\", \"CellType\"].\n\nPlease return the output as a JSON object of the format: {\"DNA\" : [\"TAL1\", ...], \"RNA\": [\"mRNAs\", ...], \"CellLine\": [\"T cells\", ...], \"CellType\": [\"monocytes\", ...], \"Protein\": [\"IL-2\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nDifferential nuclear localization of p50, p52, and RelB proteins in human accessory cells of the immune response in situ. The Rel / NF-kappa B proteins, p50, p52, p65, c-Rel, and RelB, constitute a family of transcription factors involved in the positive regulation of a variety of genes during the immune response. Recently, it has been shown that RelB knockout mice have no dendritic cells (DC). An overexpression of p50 has been described in follicular dendritic cells (FDC). A constitutive NF-kappa B activity has been reported in mature macrophages. This led to the hypothesis that some of the Rel / NF-kappa B proteins were key nuclear factors in functions of accessory cells of the immune response. Therefore, we investigated in situ the nuclear localization of Rel / NF-kappa B proteins in accessory cells of the immune system by immunohistochemistry and double labeling by immunofluorescence from five normal human tonsils and five lymph nodes with follicular hyperplasia. Nuclear p65 and c-Rel proteins were found in all cell types including lymphocytes. In germinal centers GC, p50, p52, and RelB were found in the nuclei of FDC only and were not detected in the nuclei of CD68 + cells. In T cell areas, p50, p52, and RelB were found in the nuclei of HLA-DR + cells with an antigen-presenting cell (APC) morphology. p52 and RelB were detected in the nuclei in both CD1a + and CD68 + cells from the T cell area, whereas p50 was found only in CD68-and CD1a-cells. Cells with nuclear p50 were negative for the CD38, CD20 and CD2 markers. These results show that, physiologically, high levels of nuclear of p50, p52 and RelB are restricted to accessory cells of the immune system, which include FDC in GC, and DC and macrophages in the T cell zone, that specialized scavenger macrophages from GC do not have detectable levels of p52 and RelB, whereas macrophages from the T cell area, known to present the antigen to T cells, do have both nuclear p52 and RelB, and that in the T cell zone, p52 and RelB are located in nuclei of both CD1a +, CD68 + or both, cells APC, whereas p50 is restricted to CD1a-and CD68-APC. The different patterns of p50, p52 and RelB protein nuclear localization may provide insight into their different roles during the immune response in vivo." 
}, { "role": "assistant", "content": "{\"CellLine\": [\"CD68 - and CD1a - cells\", \"CD68 +\", \"CD68 + cells\", \"HLA - DR + cells\", \"CD1a +\"], \"CellType\": [\"mature macrophages\", \"accessory cells\", \"APC\", \"DC\", \"follicular dendritic cells\", \"lymphocytes\", \"macrophages\", \"CD1a -\", \"CD68 -\", \"scavenger macrophages\", \"GC\", \"dendritic cells\", \"T cells\", \"FDC\", \"human accessory cells\"], \"DNA\": [], \"Protein\": [\"nuclear factors\", \"p50\", \"RelB\", \"CD20\", \"CD38\", \"p52\", \"CD2 markers\", \"c - Rel\", \"Rel / NF - kappa B proteins\", \"p65\", \"p50 , p52 , p65 , c - Rel , and RelB\", \"NF - kappa B\", \"transcription factors\"], \"RNA\": []}" } ]
science.chemprot_re
science.chemprot_re.256
[ { "role": "user", "content": "Given an abstract from a research paper, your task is to identify and label relationships between chemical compounds and genes mentioned in the abstract.\n\nThe relationship type must be one of ['DOWNREGULATOR', 'SUBSTRATE', 'INDIRECT-REGULATOR', 'PART-OF', 'MODULATOR-ACTIVATOR', 'REGULATOR', 'INHIBITOR', 'COFACTOR', 'UPREGULATOR', 'ACTIVATOR', 'ANTAGONIST', 'NOT', 'INDIRECT-DOWNREGULATOR', 'SUBSTRATE_PRODUCT-OF', 'INDIRECT-UPREGULATOR', 'AGONIST', 'PRODUCT-OF', 'MODULATOR', 'DIRECT-REGULATOR', 'UNDEFINED', 'AGONIST-INHIBITOR', 'AGONIST-ACTIVATOR', 'MODULATOR-INHIBITOR'].\n\nPlease format your output as a JSON array. Each entry in the array should express a single relation, formatted as [\"<Entity_A>\", \"<RELATION_A_B>\", \"<Entity_B>\"]. If no relations can be found, please output an empty JSON array [].\n\nAbstract:\n\nAntimyeloma effects of a novel synthetic retinoid Am80 (Tamibarotene) through inhibition of angiogenesis. In multiple myeloma (MM), the interaction between myeloma cells and bone marrow microenvironment has an important role in the pathogenesis of MM. We first examined the inducing effect of myeloma cells on migration of human umbilical vein vascular endothelial cells (HUVECs). Five myeloma cell lines produced varying amounts of VEGF, and migration of HUVECs was induced by coculture with myeloma cells. We next examined the inhibitory effect of a novel synthetic retinoid Am80 (Tamibarotene) on both myeloma cells and HUVECs. Am80 is specific for the retinoic-acid receptor-alpha/beta, and has therapeutic effects in all-trans retinoic acid resistant acute promyelocytic leukemia. Am80 slightly inhibited the growth of both myeloma cells and HUVECs, and remarkably inhibited the growth of HUVECs stimulated by VEGF. Am80 showed little growth inhibition of bone marrow stromal cells (BMSCs), but it markedly inhibited migration of HUVECs by cocultured myeloma cells. Am80 inhibited VEGF-induced phosphorylation of VEGF receptor. In addition, VEGF-induced formation of tube-like structures in vitro and neovascularization in mouse corneas were significantly inhibited by Am80. These findings clearly demonstrate that Am80 is a potential inhibitor of angiogenesis caused by the interaction between vascular endothelial cells and myeloma cells, and might be a useful therapeutic agent against MM." }, { "role": "assistant", "content": "[[\"Am80\", \"INHIBITOR\", \"VEGF\"], [\"Am80\", \"INHIBITOR\", \"VEGF receptor\"]]" } ]
science.bioasq_general_qa
science.bioasq_general_qa.915
[ { "role": "user", "content": "You will be shown a question, followed by excerpts from biomedical research papers. Please answer the question based on the provided context. Do not include any text in your response other than the answer.\n\nQuestion: What can be isolated from Pleurotus mutilus?\n\nContext: Pleuromutilins are antibiotics that selectively inhibit bacterial translation and are semisynthetic derivatives of the naturally occurring tricyclic diterpenoid pleuromutilin, which received its name from the pleuromutilin-producing fungus Pleurotus mutilu\nEnhanced production of pleuromutilin by Pleurotus mutilus and study on its molecular structure\nThis study aims to enhance the accumulation of pleuromutilin by Pleurotus mutilus and to analyze the molecular structure of pleuromutilin.\nEnhanced production of pleuromutilin by Pleurotus mutilus and study on its molecular structure.\nPleuromutilins are antibiotics that selectively inhibit bacterial translation and are semisynthetic derivatives of the naturally occurring tricyclic diterpenoid pleuromutilin, which received its name from the pleuromutilin-producing fungus Pleurotus mutilus Tiamulin and valnemulin are two established derivatives in veterinary medicine for oral and intramuscular administration.\nBACKGROUND Pleuromutilin is a natural tricyclic, derived from the fungus, Pleurotus mutilus.\nBACKGROUND Pleuromutilin is a natural tricyclic, derived from the fungus, Pleurotus mutilus.\nThis study aims to enhance the accumulation of pleuromutilin by Pleurotus mutilus and to analyze the molecular structure of pleuromutilin" }, { "role": "assistant", "content": "Pleuromutilins are antibiotics, isolated from the fungus, Pleurotus mutilus, that selectively inhibit bacterial translation and are semisynthetic derivatives of the naturally occurring tricyclic diterpenoid pleuromutilins." } ]
science.data_reco_mcq_mc
science.data_reco_mcq_mc.325
[ { "role": "user", "content": "You are provided with a research question, keyphrases about the question, a description of candidate datasets and dataset options. Read the description of popular datasets provided below and select the ones that can be used to validate the following research question. Use your knowledge of machine learning datasets to make the best judgement.\nYour response should be formatted as a `json` array. For instance, for the query \"Semi supervised image classification\", a sample response might be: [\"CIFAR-10\", \"CIFAR-100\"]. Do not include any extra text in the response other than the answer array.\n\nQuery: A system for image modeling and synthesis.\n\nKeyphrases: image modeling synthesis images\n\nDataset description:\nTHEODORE: Recent work about synthetic indoor datasets from perspective views has shown significant improvements of object detection results with Convolutional Neural Networks(CNNs). In this paper, we introduce THEODORE: a novel, large-scale indoor dataset containing 100,000 high- resolution diversified fisheye images with 14 classes.\nSearchQA: SearchQA was built using an in-production, commercial search engine. It closely reflects the full pipeline of a (hypothetical) general question-answering system, which consists of information retrieval and answer synthesis.\nJSUT Corpus: JSUT Corpus is a free large-scale speech corpus that can be shared between academic institutions and commercial companies has an important role. However, such a corpus for Japanese speech synthesis does not exist.\nLSUN: The Large-scale Scene Understanding (LSUN) challenge aims to provide a different benchmark for large-scale scene classification and understanding. The LSUN classification dataset contains 10 scene categories, such as dining room, bedroom, chicken, outdoor church, and so on.\nCelebA-HQ: The CelebA-HQ dataset is a high-quality version of CelebA that consists of 30,000 images at 1024×1024 resolution.\nCelebA: CelebFaces Attributes dataset contains 202,599 face images of the size 178×218 from 10,177 celebrities, each annotated with 40 binary labels indicating facial attributes like hair color, gender and age.\n\nOptions:- A: THEODORE B: SearchQA C: JSUT Corpus D: LSUN E: CelebA-HQ F: CelebA" }, { "role": "assistant", "content": "[\"LSUN\", \"CelebA\"]" } ]
science.genia_ner
science.genia_ner.1405
[ { "role": "user", "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"RNA\", \"CellLine\", \"Protein\", \"DNA\", \"CellType\"].\n\nPlease return the output as a JSON object of the format: {\"DNA\" : [\"TAL1\", ...], \"RNA\": [\"mRNAs\", ...], \"CellLine\": [\"T cells\", ...], \"CellType\": [\"monocytes\", ...], \"Protein\": [\"IL-2\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nA novel mutation in the coding sequence of the FY*B allele of the Duffy chemokine receptor gene is associated with an altered erythrocyte phenotype. The Duffy blood group system is of clinical and biological significance. Antibodies to Duffy antigens are responsible for some cases of transfusion incompatibility and newborn hemolytic disease. The Duffy protein is a receptor for the Plasmodium vivax erythrocyte-binding protein and is also a receptor for various chemokines (thus renamed Duffy Antigen Receptor for Chemokines [ DARC ]). The two Duffy polymorphic antigens, Fya and Fyb (coded by the FY*A and FY*B alleles), are present on erythrocyte membranes. The Fy (a-b-) phenotype is the predominant one in populations of black people and also occurs in other populations, including some non-Ashkenazi Jewish groups. The Fy (a-b-) phenotype has been associated with a mutation in the FY*B promoter at the GATA box that abolishes the expression of erythrocyte Duffy protein. We describe here a novel mutation, present in the FY*B coding sequence (271C-- > T), that is associated with some Fy (b-) phenotypes among non-Ashkenazi Jews and among Brazilian blacks. The mutation is present in Fy (b-) individuals, who have wild-type FY*B GATA and carry the previously described 304G-- > A substitution. The 271C-- > T and 304G-- > A can be identified by restriction enzyme-generated restriction fragment length polymorphisms. The 271C-- > T substitution represents a considerable change in chemical nature (Arg91-- > Cys), one which may affect the antigenic determinants of DARC, and thus be of clinical significance. The mutation may have implications for some physiological roles of DARC and be of interest in malaria research and in studies of population genetics." }, { "role": "assistant", "content": "{\"CellLine\": [], \"CellType\": [], \"DNA\": [\"FY*B allele\", \"Duffy chemokine receptor gene\", \"restriction enzyme - generated restriction fragment length polymorphisms\", \"GATA box\", \"FY*A and FY*B alleles\", \"FY*B coding sequence\", \"FY*B promoter\", \"wild - type FY*B GATA\"], \"Protein\": [\"Antibodies\", \"erythrocyte Duffy protein\", \"chemokines\", \"Fyb\", \"Fya\", \"Duffy antigens\", \"erythrocyte membranes\", \"Duffy polymorphic antigens\", \"DARC\", \"Plasmodium vivax erythrocyte - binding protein\", \"Duffy Antigen Receptor for Chemokines\", \"Duffy protein\"], \"RNA\": []}" } ]
science.healthver_entailment
science.healthver_entailment.2694
[ { "role": "user", "content": "You will be shown a claim about public health, and the abstract of a biomedical research paper. Each sentence from the abstract will be on a separate line. Your task is to return a JSON object with two fields:\n\n- \"verdict\": The fact-checking verdict. If the information in the abstract supports the claim, write \"SUPPORT\". If the abstract contradicts the claim, write \"CONTRADICT\". If the abstract does not provide enough information to arrive at a verdict, write \"NEI\" (for \"not enough information\").\n- \"evidence\": An array of sentences providing evidence for the verdict. Please copy all relevant sentences verbatim from the abstract. If the verdict was \"NEI\", then return an empty array.\n\nFor instance, if the model were given the claim \"wearing masks can prevent the spread of COVID\", the output might be\n{\n \"verdict\": \"SUPPORT\",\n \"evidence\": [\"Our findings indicate that mass mask-wearing reduces the transmission rate for COVID-19.\"]\n}\n\nClaim: Smoking tobacco is also a known risk factor for severe disease and death from many respiratory infections.\n\nAbstract:\nIntroduction Epidemiological and laboratory research seems to suggest that smoking and perhaps nicotine alone could reduce the severity of COVID-19.\nLikewise, there is some evidence that inhaled corticosteroids could also reduce its severity, opening the possibility that nicotine and inhaled steroids could be used as treatments.\nMethods In this prospective cohort study, we will link English general practice records from the QResearch database to Public Health England's database of SARS-CoV-2 positive tests, Hospital Episode Statistics, admission to intensive care units, and death from COVID-19 to identify our outcomes: hospitalisation, ICU admission, and death due to COVID.\nUsing Cox regression, we will perform sequential adjustment for potential confounders identified by separate directed acyclic graphs to: 1.\nAssess the association between smoking and COVID-19 disease severity, and how that changes on adjustment for smoking-related comorbidity.\n2. More closely characterise the association between smoking and severe COVID-19 disease by assessing whether the association is modified by age (as a proxy of length of smoking), gender, ethnic group, and whether people have asthma or COPD.\n3. Assess for evidence of a dose-response relation between smoking intensity and disease severity, which would help create a case for causality.\n4.\nExamine the association between former smokers who are using NRT or are vaping and disease severity.\n5. Examine whether pre-existing respiratory disease is associated with severe COVID-19 infection.\n6. Assess whether the association between chronic obstructive pulmonary disease (COPD) and asthma and COVID-19 disease severity is modified by age, gender, ethnicity, and smoking status.\n7. Assess whether the use of inhaled corticosteroids is associated with severity of COVID-19 disease.\n8. 
To assess whether the association between use of inhaled corticosteroids and severity of COVID-19 disease is modified by the number of other airways medications used (as a proxy for severity of condition) and whether people have asthma or COPD.\nConclusions This representative population sample will, to our knowledge, present the first comprehensive examination of the association between smoking, nicotine use without smoking, respiratory disease, and severity of COVID-19.\nWe will undertake several sensitivity analyses to examine the potential for bias in these associations." }, { "role": "assistant", "content": "{\"verdict\": \"NEI\", \"evidence\": []}" } ]
science.scicite_classification
science.scicite_classification.3032
[ { "role": "user", "content": "You will be presented with a citation segment from the section of a research paper. Classify the intent behind this citation by choosing from one of the following categories:\n\n- Method: Cites the paper for its methodology or procedure.\n- Background: Cites the paper to provide background information.\n- Result: Cites the paper for its findings or results.\n\nYour answer should be a single word from the following list of options: [\"Method\", \"Background\", \"Result\"]. Do not include any other text in your response.\n\nCitation sentence:\n\nSVCT2 has been detected in mice skeletal muscle (Kuo et al. 2004) as well as in two murine muscle cell lines (Savini et al. 2005, Savini et al. 2007b)." }, { "role": "assistant", "content": "Background" } ]
science.craftchem_ner
science.craftchem_ner.7305
[ { "role": "user", "content": "You will be presented a sentence from biomedical research text. Extract all terms that correspond to chemical entities of biological interest (ChEBI).\nReturn your identified terms in a JSON array, formatted like this: [\"Entity1\", \"Entity2\", ...].\nIf no such entities are detected, respond by providing an empty JSON array [].\n\nBiomedical context:\nSpeedcongenicstraindevelopment" }, { "role": "assistant", "content": "[]" } ]
science.mslr2022_ms2_multidoc_summarization
science.mslr2022_ms2_multidoc_summarization.11110
[ { "role": "user", "content": "You will be presented with a collection of medical randomized controlled trial (RCT) reports addressing a shared clinical question -- for instance, the efficacy of a medical intervention on a disease or condition. Given the titles and abstracts of these reports, your task is to generate the conclusion of the systematic review article synthesizing the key findings. Your conclusion should be a few sentences long. In your response, include only the conclusion and no other text. The RCT reports are below.\n\n\n\nTitle 1:\nIs obesity a possible modifier of periodontal disease as a chronic inflammatory process? A case-control study.\nAbstract 1:\nBACKGROUND AND OBJECTIVE This cross-sectional case-control study was conducted to provide a comparative evaluation of clinical periodontal measurements , together with serum levels of certain bioactive peptides and inflammatory cytokines , in relation to obesity . For this purpose , clinical periodontal measurements and the levels of serum leptin , adiponectin , interleukin-6 ( IL-6 ) , C-reactive protein and soluble intercellular adhesion molecule-1 of obese female individuals and their nonobese counterparts were compared . MATERIAL AND METHODS Sixty obese ( body mass index ( BMI ) > 30 ) and 31 nonobese ( BMI < 30 ) female subjects were recruited for the present study . Before any periodontal intervention , serum sample s were obtained and full-mouth clinical periodontal measurements were recorded at six sites per tooth . ELISA was used for the biochemical analysis . Data were tested statistically . RESULTS Clinical attachment level was significantly higher in the obese group compared with the nonobese control group ( p < 0.05 ) . Serum levels of leptin and IL-6 were significantly higher in the obese group ( p < 0.05 ) . BMI correlated with the serum levels of inflammatory molecules ( p < 0.05 ) , but not with clinical periodontal parameters , in the obese group . CONCLUSION In conclusion , obesity does not seem to have a prominent effect on clinical periodontal parameters but it does have many correlations with circulating inflammatory molecules . As suggested in the literature , increased levels of leptin and IL-6 in the obese group might be one explanation for a possible relationship between obesity and periodontal disease . A prospect i ve study is warranted to clarify , in greater detail , the effects of obesity on periodontal health\n\n\nTitle 2:\nRelationship between increases in BMI and changes in periodontal status: a prospective cohort study.\nAbstract 2:\nAIM The purpose of this prospect i ve cohort study was to investigate whether body mass index ( BMI ) and oral health behaviour are related to changes in periodontal status in Japanese university students . MATERIAL S AND METHODS Students ( n = 224 ) who were interested in receiving oral health examinations before entering university and before graduation were included in the analysis . Subjects were investigated regarding the correlations of oral health behaviours and increases in BMI with the percentage of bleeding on probing ( % BOP ) and Community Periodontal Index ( CPI ) scores as indicators of changes in periodontal status . 
RESULTS The risk of increased % BOP was associated with the non-use of dental floss ( adjusted odds ratio [ OR ] : 3.11 ; 95 % confidence interval [ CI ] : 1.31 - 7.37 ; p < 0.05 ) , whereas the risk of increased CPI score was associated with increases in BMI ( OR : 1.95 ; 95 % CI : 1.05 - 3.65 ; p < 0.05 ) and simplified oral hygiene index score ( OR : 2.28 ; 95 % CI : 1.23 - 4.22 ; p < 0.01 ) . CONCLUSION Increases in BMI were associated with worsening of periodontal status , defined as increased CPI score in Japanese university students , whereas lack of inter-dental cleaning was associated with exacerbated gingival bleeding\n\n\nTitle 3:\nEffect of periodontal treatment on serum C-reactive protein level in obese and normal-weight women affected with chronic periodontitis.\nAbstract 3:\nOBJECTIVE To investigate the effect of conventional periodontal therapy on serum C-reactive protein ( CRP ) level and periodontal status in obese and normal-weight chronic periodontitis patients . METHODS This is a controlled clinical trial conducted at the King Abdulaziz University Faculty of Dentistry , Jeddah , Kingdom of Saudi Arabia between December 2009 and March 2011 . A total of 40 women affected with moderate to severe chronic periodontitis were selected ( 20 obese [ test group ] and 20 normal-weight [ control ] ) . Smokers , pregnant women , and subjects with any systemic disease were excluded . Serum CRP level and periodontal parameters , including clinical attachment level , probing depth , bleeding on probing and plaque scores were assessed at baseline , and 2 months after non-surgical periodontal treatment . RESULTS Periodontal therapy was effective in reducing gingival inflammation , as well as serum CRP level in the total sample and within each group . The pre-treatment mean level of serum CRP was 0.78 ( + /-0.51 ) and post- treatment was 0.55 ( + /-0.41 ) mg/l in the total sample ( p=0.001 ) . A tendency was observed toward a better systemic response to treatment in normal-weight compared to obese women , however , it was not statistically significant ( the mean changes in CRP levels after therapy were 0.28 [ + /-0.43 ] and 0.19 [ + /-0.32 ] mg/l ) . CONCLUSION Periodontal treatment is effective in reducing systemic inflammation as measured by serum CRP level , and obesity does not have a major negative impact on response to periodontal therapy\n\n\nTitle 4:\nThe role of obesity as a modifying factor in patients undergoing non-surgical periodontal therapy.\nAbstract 4:\nBACKGROUND Earlier studies have shown an association between obesity and periodontitis , which is mediated by cytokine production . The aim of this study is to assess the role of obesity as a modifying factor on periodontal clinical parameters and on circulating proinflammatory cytokine levels in subjects undergoing non-surgical periodontal treatment . METHODS Twenty-seven obese subjects and 25 normal-weight subjects were enrolled in this study . Subjects in both groups had generalized chronic periodontitis . The periodontal parameters measured before and 3 months after non-surgical periodontal therapy were : visible plaque index , bleeding gingival index , bleeding on probing , probing depth , and clinical attachment level . In addition , subjects underwent anthropometric measurements and serum analyses of fasting glucose , glycated hemoglobin , interleukin-1β , interleukin-6 , tumor necrosis factor-α , and interferon-γ . 
RESULTS Periodontal therapy significantly decreased visible plaque index , bleeding gingival index , bleeding on probing , probing depth of 4 to 6 mm , probing depth ≥7 mm , clinical attachment level of 4 to 6 mm , and clinical attachment level ≥7 mm in both groups ( P ≤0.05 ) . Circulating proinflammatory cytokines significantly decreased in obese and normal-weight subjects after periodontal treatment ( P ≤0.05 ) . However , interleukin-6 and tumor necrosis factor-α levels remained higher in obese subjects 3 months after treatment ( P ≤0.05 ) . CONCLUSION Obesity does not seem to play a negative role by interfering in the improvement of the periodontal clinical response or decreasing circulating proinflammatory cytokine levels after periodontal treatment" }, { "role": "assistant", "content": "Obesity is characterized by a chronic sub clinical inflammation that could exacerbate other chronic inflammatory disorders like as periodontitis .\nCONCLUSIONS The association between obesity and periodontitis was consistent with a compelling pattern of increased risk of periodontitis in overweight or obese individuals .\nAlthough the underlying pathophysiological mechanism remains unclear , it has been pointed out that the development of insulin resistance as a consequence of a chronic inflammatory state and oxidative stress could be implicated in the association between obesity and periodontitis ." } ]
science.scientific_papers_summarization_single_doc_arxiv
science.scientific_papers_summarization_single_doc_arxiv.139709
[ { "role": "user", "content": "You will be presented with the full text of a science research paper. Please write an abstract for this paper. Your response should include the abstract and no additional text.\n\nPaper text:\nrecent results from the relativistic heavy ion collider ( rhic ) demonstrate qualitatively new behavior for heavy - ion collisions at high energies .\nrhic appears to probe a regime where quarks and gluons are no longer bound into hadrons but form a new state of matter , a quark - gluon plasma ( qgp ) .\nthe experimental results suggest that the transition does not occur directly from a confined phase to an ideal qgp but that it goes through a phase where the deconfined phase is non - perturbative @xcite .\nin fact , `` partial '' deconfinement is expected from the gross - witten matrix model for su(@xmath0 ) wilson lines at large @xmath0 @xcite and from quasi - particle approaches @xcite .\nhere , we investigate an effective theory of straight , thermal wilson lines @xcite which respects the global z(n ) symmetry of yang - mills and might be able to describe the region below about @xmath1 , where resummed perturbation theory fails .\nwe focus , in particular , on the distribution of the _ eigenvalues _ of the su(2 ) wilson line at , and slightly above , the deconfining phase transition .\nwe consider an effective theory in three dimensions which uses the thermal wilson line @xmath2 as the fundamental degree of freedom in the electric sector . this is a matrix valued field which exists in three dimensional space .\nthe effective lagrangian is composed of a kinetic term , the one - loop potential , and a non - perturbative term proportional to a `` fuzzy - bag '' constant @xmath3 : @xmath4 @xmath5 is the non - abelian magnetic field strength and @xmath6 the covariant derivative .\nthis theory does not involve elements of the algebra , i.e. @xmath7 , and respects the z(n ) center symmetry of the su(n ) gauge group .\nit is therefore able to generate fluctuations between different z(n ) vacua , just above the phase transition .\nthe lagrangian is non - renormalizable in three dimensions and thus applies only at distances @xmath8 .\nwe performed simulations of the effective theory for two colors , employing three - dimensional cubic lattices with periodic boundary conditions .\nwe use metropolis and overrelaxation techniques @xcite to update the lattice configurations and to generate a thermal ensemble .\nwe currently neglect all but the @xmath9 term of the perturbative potential which can then be combined with the non - perturbative term .\nthe lattice action then reads @xmath10 it contains the standard kinetic wilson action for the magnetic fields , where the sum runs over all `` plaquettes '' , gauge - covariant kinetic nearest neighbour interaction for the wilson lines and a mass term .\nthe index @xmath11 labels lattice sites and @xmath12 labels links .\ninitial results were obtained by neglecting the magnetic sector @xcite .\nthe theory is then essentially reduced to a sigma model with global @xmath13 symmetry .\nwhen @xmath14 , a second order phase transition occurs at @xmath15 . 
here , the expectation value of the `` length '' of @xmath16 , @xmath17 , takes on a non - zero value @xcite .\nthe effective screening mass , which equals the inverse spatial correlation length , drops to zero here ( up to finite lattice size effects ) and remains zero above the phase transition point , due to massless goldstone modes .\nchoosing a fixed value for @xmath18 and varying @xmath19 also gives a second order phase transition .\nhere , the proper order parameter is the expectation value of the polyakov loop , which is the trace of the wilson line .\nlarger @xmath18 shifts the phase transition point to smaller @xmath19 . to obtain non - zero vevs for the polyakov loop for negative @xmath19 , an infinitesimal background field must be added .\nthese results suggest that the phase boundary of the theory runs diagonally in the @xmath18-@xmath19 plane @xcite .\nwe measured the average and the difference of eigenvalues of the wilson lines @xmath20 and determined their distribution in the thermal ensemble over a broad range of couplings .\nwe find , that for @xmath14 and small @xmath18 , the distribution is dominated by the group integration measure which leads to a logarithmic divergence for @xmath21 . for large\n@xmath19 one approaches the perturbative vacuum , where the distribution of the average @xmath22 is peaked around 1 . in the confined phase at large @xmath18 the wilson lines fluctuate about the non - trivial vacuum @xmath23 , or su(2 ) rotations thereof\n( eigenvalues `` repel '' ) .\nfluctuations diminish with increasing @xmath18 .\nlattice for different values of @xmath19 and @xmath18 .\nmagnetic fields are included .\n[ fig : histo_fig ] adding gauge fields now leads to a few qualitative changes in the observables .\nforemost , it is no longer necessary to use the o(4)-length as an order parameter , since the presence of gauge fields explicitly breaks the @xmath24 symmetry of the kinetic term .\nthe polyakov loop is now a good order parameter , even for @xmath14 .\nall massless goldstone modes disappear as well , so that the inverse correlation length is only zero ( again up to finite size effects ) exactly at the phase transition point .\nqualitative differences in the distribution of eigenvalues arise from the fact that even at @xmath14 raising @xmath18 can now lead to eigenvalue attraction .\nthe qualitative dependence on @xmath19 for fixed @xmath18 is similar to the spin - model case , although the transition from the perturbative vacuum to the confined vacuum is smoother across the point @xmath14 ( as one would expect , since this is reminiscent of the change of the behavior of the polyakov loop ) .\noverall , changes in the electric sector due to magnetic fields are qualitatively similar to the effect of the external field which was formerly used to find the phase boundary in the spin model at @xmath25 . 
to assess the influence of the dynamics of the wilson lines on the magnetic fields we have also measured the expectation value of the wilson action itself and of the spatial string tension .\nthis was done by obtaining expectation values of rectangular wilson loops @xmath26 , where i and j are the dimensions of the loop in lattice units .\nsuch loops obey the area law @xmath27 , where @xmath28 is the area of the loop and the string tension @xmath29 plays the role of a decay constant .\none normally uses creutz ratios @xcite but our comparison of both methods suggests that in 3d , using the wilson loops directly works equally well .\n( left ) and as a function of @xmath19 for fixed @xmath30 ( right ) .\nthe string tension appears to be independent of @xmath19 .\n[ fig : string_fig ] we find that both the string tension and the wilson action drop approximately exponentially with @xmath18 but are independent of @xmath19 .\nboth are also independent of the lattice size .\nthe magnetic sector is largely unaffected by deconfinement in the electric sector .\nm. gyulassy , l. mclerran , nucl .\na * 750 * , 30 ( 2005 ) .\na. dumitru , y. hatta , j. lenaghan , k. orginos and r. d. pisarski , phys .\nd * 70 * , 034511 ( 2004 ) ; a. dumitru , j. lenaghan and r. d. pisarski , phys .\nd * 71 * , 074004 ( 2005 ) .\np. n. meisinger , m. c. ogilvie and t. r. miller , phys .\nb * 585 * , 149 ( 2004 ) ; phys .\nd * 65 * , 034009 ( 2002 ) .\nr. pisarski , phys .\nd * 74 * , 121703 ( 2006 ) ; a. vuorinen and l. g. yaffe , phys .\nd * 74 * , 025011 ( 2006 ) .\na. dumitru , d. smith , phys .\nd * 77 * , 094022 ( 2008 ) .\nn. metropolis , a. w. rosenbluth , m. n. rosenbluth , a. h. teller and e. teller , j. chem . phys .\n* 21 * , 1087 ( 1953 ) .\nf. r. brown , t. j. woch , phys .\nlett . * 58 * 2394 ( 1987 ) .\nj. b. kogut , m. snow , m. stone , nucl .\nb * 200 * 211 ( 1982 ) .\nm. creutz , phys .\n* 45 * 313 ( 1980 ) ." }, { "role": "assistant", "content": "we perform simulations of an effective theory of su(2 ) wilson lines in three dimensions . \n our action includes a kinetic term , the one - loop perturbative potential for the wilson line , a non - perturbative `` fuzzy - bag '' contribution and spatial gauge fields . \n we determine the phase diagram of the theory and confirm that , at moderately weak coupling , the non - perturbative term leads to eigenvalue repulsion in a finite region above the deconfining phase transition ." } ]
science.scientific_papers_summarization_single_doc_pubmed
science.scientific_papers_summarization_single_doc_pubmed.4160
[ { "role": "user", "content": "You will be presented with the full text of a biomedical research paper. Please write an abstract for this paper. Your response should include the abstract and no additional text.\n\nPaper text:\njuvenile idiopathic arthritis ( jia ) is an autoimmune disease1 whose association with uveitis is established for more than half a century.2,3,4 there are , according to the international league of associations for rheumatologists ( ilar ) , 7 subtypes of jia known as oligoarticular , polyarticular rf( ) , polyarticular rf(+ ) , enthesitis - related arthritis , psoriatic arthritis , systemic , and undifferantiated.5 the jia - associated uveitis may develop before , at the same time , or after the arthritis onset.2,6 it is typically anterior , chronic , and usually affects both eyes.711 if untreated , serious ocular complications can occur ( cataract , glaucoma , band keratopathy , posterior synechiae ) causing visual impairment in afflicted children.2,5,12 risk factors for the development of uveitis in children with jia , are early onset of the disease , pauciarticular subtype , antinuclear antibodies ( ana ) positive titers , female sex , and certain human leukocyte antigen ( hla ) markers.13,14 all children with jia should have at least one adequate slit lamp examination as soon as possible after diagnosis of the arthritis.7,12 if uveitis is not detected initially , all children with jia should be screened by slit lamp examination as per specific screening criteria2,3,15,16 according to the american academy of pediatrics ( table 1 ) .\nwe must mention that the frequency of ophthalmologic examination has been slightly changed , according to updated recommendations,17 where the duration of disease ( in years ) is also taken into consideration . in the present study we performed a retrospective chart review to study the prevalence , complications , and visual outcome of jia - associated uveitis patients .\nto identify those with uveitis and related complications a retrospective chart review was performed in 56 consecutive patients that fulfilled the ilar 2001 revised criteria for juvenile rheumatoid arthritis ( jra):5 persistent arthritis ( swelling of a joint or limitation of motion with heat , pain , or tenderness ) of one or more joints for at least 6 weeks after exclusion of infectious , metabolic , traumatic , and neoplastic causes as well as other autoimmune diseases ( including psoriatic arthritis , ankylosing spondylitis , inflammatory bowel disease , systemic lupus erythematosus ) .\nthose patients were referred to the pediatric department of the university hospital of ioannina , greece , between 1995 and 2007 and were examined ophthalmologically in the pediatric ophthalmologic department of our clinic . 
the pediatric rheumatological database included age , gender , arthritis subgroup , age at onset of arthritis , treatment , arthritis activity , number of swollen or tender joints , and extraarticular manifestations such as the presence of uveitis and uveitis complications .\nthe presence of antinuclear antibodies ( ana ) , hla - b27 antigen , and rheumatoid factor ( rf ) were documented .\nthe screening intervals met the criteria of the established guidelines , at the time patients were examined.18 the mean follow up time was 7.5 years .\nall patients underwent a thorough ophthalmologic examination including visual acuity or cycloplegic refraction , slit - lamp biomicroscopy , iop measurement ( when possible ) , and dilated - fundus examination .\nvisual acuity cut - offs of 20/50 or worse ( low vision ) and 20/200 or worse ( legal blindness ) were used according to recommendations from the standardization of uveitis nomenclature ( sun ) working group.19 statistical analysis was performed with statistical package for social sciences ( version 14.0 ; spss inc . ) .\nodds ratios ( or ) were calculated using chi - square tests or fisher s exact tests where appropriate and p values less than 0.05 were considered statistically significant .\nthirty - seven ( 66% ) were girls and 19 ( 34% ) were boys .\nthe average age at diagnosis of jia was 65 months ( 95% confidence interval [ ci ] : 51 - 77 months ) , range : 3 - 161 months , median : 54 months .\nit was bilateral in 21 ( 84% ) and unilateral in the other 4 ( 16% ) cases .\nthe average age at diagnosis of jia of the children who developed uveitis was 56 months ( 95% ci : 36 - 75 months ) , range : 9 - 152 months , median : 36 months .\nthe prevalence was 48% ( 14 of 29 ) in the patients seen before january 2000 and 41% ( 11 of 27 ) in those seen after january 2000 .\nthe diagnosis of uveitis was made within the first 3 years after the diagnosis of jia for the majority of the patients , specifically for 23 ( 92% ) of them . in 13 ( 52% ) of the 25 patients ,\nuveitis was diagnosed at the initial eye examination immediately after diagnosis of jia , while for the remaining 12 ( 48% ) children , the mean interval from the diagnosis of jia to the onset of uveitis was 23 months ( 95% ci 6 - 41 months ) , range : 3 - 89 months , median : 12 months .\noligoarticular jra is defined by involvement of 4 or fewer joints ; polyarticular jra is defined by involvement of > 4 joints ( usually 10 - 20 ) ; and systemic - onset jra is defined by quotidian fevers during the first 6 weeks of the illness , almost always associated with a characteristic rash . of the 56 patients with jia , 42 ( 75% ) were classified as having the oligoarticular onset subtype , 10 ( 18% ) as polyarticular and 4 ( 7% ) had the systemic onset subtype .\nof the 25 patients who developed uveitis , 21 ( 84% ) were classified as having oligoarticular onset jia , 4 ( 16% ) had polyarticular onset jia and none had systemic jia ( table 2 ) .\nthe subtype of jia did not constitute a statistically significant risk factor in our study ( p - value = 0.17 , using the fisher s exact test ) .\nof the 56 patients , ana titers were positive in 52 ( 93% ) and negative in only 4 ( 7% ) of them .\nof the 25 patients with uveitis , 24 ( 96% ) had positive ana titers and only 1 ( 4% ) had negative ana titers ( table 3 ) .
in this case also , ana positive titers did not constitute a statistically significant risk factor ( p - value = 0.62 , using the fisher s exact test ) .\nvisual complications occurred in 7 ( 28% ) of the 25 patients with uveitis ( figures 1 - 4 ) .\none patient ( suffering from phthisis bulbi , see figure 4 ) lost vision and two others had loss of vision up to counting fingers .\nnone of the rest 22 ( 88% ) patients had visual acuity less than 7/10 , while 15 ( 60% ) of them had visual acuity equal to 10/10 and 19 ( 76% ) equal or more than 9/10 .\ncataract developed in six ( treated surgically with implantation of anterior chamber intraocular lens ) and glaucoma in one ( initially treated with antiglaucomatic drugs and later with trabeculectomy ) .\nfour patients developed band keratopathy ( treated with abrasion and use of edta ) , one patient had phthisis bulbi , another had hyphema and three developed posterior synechiae ( table 4 ) . concerning the therapy ,\n11 children were treated with methotrexate , three with a combination of methotrexate and methylprednisolone ; 10 children were treated with the use of biological factors , such as the tumor necrosis factor ( tnf)-blocking agents etanercept and infliximab ; and the rest were treated with nonsteroidal anti - inflammatory agents .\njuvenile idiopathic arthritis is an autoimmune disease that can cause great inconvenience in the young patient s life .\nthe prevalence of uveitis in children with jia,8,9 has been found to be between 4% and 24.4% , even though there are sporadic studies showing a much greater prevalence .\nchronic iridocyclitis accounts for about 90% of the cases and if untreated , serious visual impairment may occur .\nit is important to consider that up to 80% of all cases of anterior uveitis in childhood are associated with jia , thus rendering it the most common cause of chronic iridocyclitis in children.19 - 21 the prevalence of uveitis in our study was 45% .\nit is one of the highest observed in most studies , especially given the fact that it concerns all types of jia and not only selectively the oligoarticular .\nthe confusingly different prevalence rates observed in the literature can be attributed to the different study designs , as some of them concern population - based data and others large rheumatology centers with different selection criteria and follow - up .\nfurthermore , these differences may derive from the social and environmental factors of the patients included in the various studies .\nwe wanted to find out whether this high rate was due to older cases and whether there is a decreasing tendency over the last decade .\na similar observation was made by kodsi et al in 2002.2 a significant characteristic of these children is that they rarely complain of pain or \nas uveitis progresses and when they begin to complain of visual disturbances , they have already developed serious complications ( cataract , glaucoma , band keratopathy , cystoid macular edema etc).23 most patients have seronegative disease for rheumatoid factor while the most common serological finding is the presence of ana antibodies which are detected in 96% of uveitis cases .\nana titers constitute a nonspecific finding since they may be present in other autoimmune diseases , malignancies , and aging and they are unassociated with disease activity or severity.24 according to studies , the presence of ana should prove useful in identifying patients with jia at risk for chronic anterior uveitis.2 another significant risk factor is the subtype of
jia .\n80% - 90% of cases of jia - associated uveitis suffer from oligoarticular arthritis , 7% - 14% from polyarticular and 2% - 6% present the systemic subtype.19,26 what might be interesting is the fact that despite the established association of uveitis with the presence of oligoarticular jia and presence of antinuclear antibodies,14,26 in our study oligoarticular onset subtype of jia or ana positive titers did not constitute a statistically significant risk factor ( p > 0.05 ) .\nas in all retrospective studies , our results must be interpreted with caution . a referral bias exists because our institution is a tertiary care medical center , and it is possible that only the most severe cases of jia associated uveitis were referred to our center as suggested by high frequencies of ocular complications and poor visual acuity as well as the long duration of uveitis prior to presentation to our clinic .\nfurthermore , the number of events was small for some outcomes , which may have limited the precision of odds ratios for certain risk factors . despite these limitations\n, our study suggests that visual impairment still occurs frequently in jia - related uveitis , seen in the tertiary care setting .\nalmost equally high rates of oligoarticular onset subtype of jia and ana positive titers were found among children who did not develop uveitis throughout any stage of the follow up . in our study visual complications occurred in 7 ( 28% ) of the 25 patients with uveitis .\nsevere visual impairment has been reported in up to 38% of patients.27,28 increased severity of ocular disease at the initial examination has been reported as a risk factor for poor visual acuity outcomes at the last follow - up visit , but the follow - up time in these series is variable.29,30 one patient developed uveitis 6 years later and another one almost 7.5 years later . both of them were males , suffering from bilateral uveitis , were ana positive , and hla - b27 negative ; one suffering oligoarticular and the other from polyarticular jia .\nthe range of cumulative incidence of jra - associated uveitis has been reported to be between 1.4% - 38%.2,31 - 36 our analysis suggested a cumulative incidence of jra - associated uveitis at the high end of the above mentioned rates .\nas a conclusion , we observed remarkably high prevalence of uveitis and related ocular complications in 7 ( 28% ) of the patients and furthermore the rate of poor visual outcome was almost 12% . the reported rate of visual loss due to jra - related uveitis varies in more recent studies from 3% - 25%.26 all these documentations may facilitate the optimization of the current guidelines and result in optimization of follow up .\nimmediate ophthalmological examination after the diagnosis of jia and excellent co - operation between the physician and the parents of the children concerning the therapy may help to improve long - term outcome ." }, { "role": "assistant", "content": "objective : to evaluate the characteristics and visual prognosis of juvenile idiopathic arthritis ( jia ) - associated uveitis . methods : a retrospective review was performed on 56 patients who met the criteria for jia to identify those with uveitis and related complications . \n patients were referred to and were examined in the pediatric department of the university hospital of ioannina , between 1995 and 2007 . results : the prevalence of jia - associated uveitis was high . despite this and the related complications , the final visual outcome was satisfactory in the majority of the cases .
\n authors did not observe any correlation between prognosis and sex , age at the onset of uveitis or arthritis , pattern of arthritis , or positivity for antinuclear antibodies ( ana ) . conclusion : we found a remarkably high prevalence of uveitis and related ocular complications in 7 ( 28% ) of the patients , and the rate of poor visual outcome was 12% ." } ]
science.bioasq_factoid_qa
science.bioasq_factoid_qa.395
[ { "role": "user", "content": "You will be shown biomedical passages and your task is to answer the provided question. Provide a concise and accurate response and do not include any additional text in the response other than the answer.\n\nQuestion: Which DNA repair system is involved in HNPCC?\n\nContext: The HNPCC syndrome (hereditary non polyposis colon cancer) or Lynch syndrome stands for an autosomal dominant condition leading to the most prevalent hereditary colo-rectal cancers (CCR). MMR (mismatch repair) genes are involved in carcinogenesis as they play a role in DNA mismatch repair.\nMutations in the genes of the mismatch DNA repair system and of the TGF-beta-II-receptor, the main defects of the HNPCC (hereditary nonpolyposis colorectal cancer), are exclusively identified in sequences of microsatellites.\nIn HNPCC families, germline mutations in any of four genes encoding proteins of a specialized DNA repair system, the mismatch repair, predispose to cancer development." }, { "role": "assistant", "content": "In HNPCC, the mismatch DNA repair system is involved." } ]
science.scitldr_aic
science.scitldr_aic.621
[ { "role": "user", "content": "Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.\n\nPaper text:\n\nRecurrent Neural Networks have long been the dominating choice for sequence modeling.\nHowever, they severely suffer from two issues: they are impotent in capturing very long-term dependencies and unable to parallelize the sequential computation procedure.\nTherefore, many non-recurrent sequence models that are built on convolution and attention operations have been proposed recently.\nNotably, models with multi-head attention such as Transformer have demonstrated extreme effectiveness in capturing long-term dependencies in a variety of sequence modeling tasks.\nDespite their success, however, these models lack necessary components to model local structures in sequences and heavily rely on position embeddings that have limited effects and require a considerable amount of design efforts.\nIn this paper, we propose the R-Transformer which enjoys the advantages of both RNNs and the multi-head attention mechanism while avoiding their respective drawbacks.\nThe proposed model can effectively capture both local structures and global long-term dependencies in sequences without any use of position embeddings.\nWe evaluate R-Transformer through extensive experiments with data from a wide range of domains and the empirical results show that R-Transformer outperforms the state-of-the-art methods by a large margin in most of the tasks.\nRecurrent Neural Networks (RNNs), especially their variants such as Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU), have achieved great success in a wide range of sequence learning tasks including language modeling, speech recognition, recommendation, etc (Mikolov et al., 2010; Sundermeyer et al., 2012; Graves & Jaitly, 2014; Hinton et al., 2012; Hidasi et al., 2015) .\nDespite their success, however, the recurrent structure is often troubled by two notorious issues.\nFirst, it easily suffers from gradient vanishing and exploding problems, which largely limits its ability to learn very long-term dependencies (Pascanu et al., 2013) .\nSecond, the sequential nature of both forward and backward passes makes it extremely difficult, if not impossible, to parallelize the computation, which dramatically increases the time complexity in both training and testing procedures.\nTherefore, many recently developed sequence learning models have completely jettisoned the recurrent structure and only rely on convolution operation or attention mechanism that are easy to parallelize and allow the information flow at an arbitrary length.\nTwo representative models that have drawn great attention are Temporal Convolution Networks (TCN) (Bai et al., 2018) and Transformer (Vaswani et al., 2017) .\nIn a variety of sequence learning tasks, they have demonstrated comparable or even better performance than that of RNNs (Gehring et al., 2017; Bai et al., 2018; Devlin et al., 2018) .\nThe remarkable performance achieved by such models largely comes from their ability to capture long-term dependencies in sequences.\nIn particular, the multi-head attention mechanism in Transformer allows every position to be directly connected to any other positions in a sequence.\nThus, the information can flow across positions without any intermediate loss.\nNevertheless, there are two issues that can harm the effectiveness of multi-head
attention mechanism for sequence learning.\nThe first comes from the loss of sequential information of positions as it treats every position identically.\nTo mitigate this problem, Transformer introduces position embeddings, whose effects, however, have been shown to be limited (Dehghani et al., 2018; Al-Rfou et al., 2018) .\n[Figure: The illustration of one layer of R-Transformer. There are three different networks that are arranged hierarchically. In particular, the lower-level is localRNNs that process positions in a local window sequentially (this figure shows an example of a local window of size 3); the middle-level is multi-head attention networks which capture the global long-term dependencies; the upper-level is position-wise feedforward networks that conduct non-linear feature transformation. These three networks are connected by a residual and layer normalization operation. The circles with dashed lines are the paddings of the input sequence.]\nIn addition, it requires a considerable amount of effort to design more effective position embeddings or different ways to incorporate them in the learning process (Dai et al., 2019) .\nSecond, while the multi-head attention mechanism is able to learn the global dependencies, we argue that it ignores the local structures that are inherently important in sequences such as natural languages.\nEven with the help of position embeddings, the signals at local positions can still be very weak as the number of other positions is significantly larger.\nTo address the aforementioned limitations of the standard Transformer, in this paper, we propose a novel sequence learning model, termed R-Transformer.\nIt is a multi-layer architecture built on RNNs and the standard Transformer, and enjoys the advantages of both worlds while naturally avoiding their respective drawbacks.\nMore specifically, before computing global dependencies of positions with the multi-head attention mechanism, we first refine the representation of each position such that the sequential and local information within its neighborhood can be compressed in the representation.\nTo do this, we introduce a local recurrent neural network, referred to as LocalRNN, to process signals within a local window ending at a given position.\nIn addition, the LocalRNN operates on local windows of all the positions identically and independently and produces a latent representation for each of them.\nIn this way, the locality in the sequence is explicitly captured.\nIn addition, as the local window is sliding along the sequence one position at a time, the global sequential information is also incorporated.\nMore importantly, because the LocalRNN is only applied to local windows, the aforementioned two drawbacks of RNNs can be naturally mitigated.\nWe evaluate the effectiveness of R-Transformer with a variety of sequence learning tasks from different domains and the empirical results demonstrate that R-Transformer achieves much stronger performance than both TCN and standard Transformer as well as other state-of-the-art sequence models.\nThe rest of the paper is organized as follows: Section 2 discusses the sequence modeling problem we aim to solve; the proposed R-Transformer model is presented in Section 3.\nIn Section 4, we describe the experimental details and discuss the results.\nThe related work is briefly reviewed in Section 5.\nSection 6 concludes this work.\nIn summary, experimental results have shown that the standard Transformer can achieve better results than RNNs when sequences exhibit very long-term
dependencies, i.e., sequential MNIST, while its performance can drop dramatically when strong locality exists in sequences, i.e., polyphonic music and language.\nMeanwhile, TCN is a very strong sequence model that can effectively learn both local structures and long-term dependencies and has very stable performance in different tasks.\nMore importantly, the proposed R-Transformer, which combines a lower-level LocalRNN and a higher-level multi-head attention, outperforms both TCN and Transformer by a large margin consistently in most of the tasks.\nThe experiments are conducted on various sequential learning tasks with datasets from different domains.\nMoreover, all experimental settings are fair to all baselines.\nThus, the observations from the experiments are reliable with the current experimental settings.\nHowever, due to computational limitations, we have currently restricted our evaluation settings to moderate model and dataset sizes.\nThus, more evaluations on big models and large datasets could make the results more convincing.\nWe would like to leave this as one future work.\nIn this paper, we propose a novel generic sequence model that enjoys the advantages of both RNN and the multi-head attention while mitigating their disadvantages.\nSpecifically, it consists of a LocalRNN that learns the local structures without suffering from any of the weaknesses of RNN and a multi-head attention pooling that effectively captures long-term dependencies without any help of position embeddings.\nIn addition, the model can be easily implemented with full parallelization over the positions in a sequence.\nThe empirical results on sequence modeling tasks from a wide range of domains have demonstrated the remarkable advantages of R-Transformer over state-of-the-art nonrecurrent sequence models such as TCN and standard Transformer as well as canonical recurrent architectures." }, { "role": "assistant", "content": "This paper proposes an effective generic sequence model which leverages the strengths of both RNNs and Multi-head attention." } ]
science.scientific_lay_summarisation_plos_single_doc_summ
science.scientific_lay_summarisation_plos_single_doc_summ.16163
[ { "role": "user", "content": "You will be presented with the abstract, introduction, and discussion section from a biomedical research article. Your task is to create a summary that a layperson can understand, capturing the essence of the research article. Keep essential scientific terms, but ensure that the language remains clear and the concepts are explained in an uncomplicated manner.\n\nTitle: Consistent Estimation of Gibbs Energy Using Component Contributions\nArticle:\nAbstract:\nStandard Gibbs energies of reactions are increasingly being used in metabolic modeling for applying thermodynamic constraints on reaction rates, metabolite concentrations and kinetic parameters. The increasing scope and diversity of metabolic models has led scientists to look for genome-scale solutions that can estimate the standard Gibbs energy of all the reactions in metabolism. Group contribution methods greatly increase coverage, albeit at the price of decreased precision. We present here a way to combine the estimations of group contribution with the more accurate reactant contributions by decomposing each reaction into two parts and applying one of the methods on each of them. This method gives priority to the reactant contributions over group contributions while guaranteeing that all estimations will be consistent, i. e. will not violate the first law of thermodynamics. We show that there is a significant increase in the accuracy of our estimations compared to standard group contribution. Specifically, our cross-validation results show an 80% reduction in the median absolute residual for reactions that can be derived by reactant contributions only. We provide the full framework and source code for deriving estimates of standard reaction Gibbs energy, as well as confidence intervals, and believe this will facilitate the wide use of thermodynamic data for a better understanding of metabolism.\nIntroduction:\nThe extensive use of formation Gibbs energies for calculating standard reaction Gibbs energies might create the impression that combining these two frameworks( RC and GC) is a trivial task. Traditionally, the formation energy of all pure elements in their standard forms is set to zero by definition. All other compounds' formation energies are calculated in relation to these reference points. This is a sound definition which creates a consistent framework for deriving the standard Gibbs energy of any reaction which is chemically balanced. However, the difficulty of calculating the formation energy for some complex but useful co-factors has been side-stepped by creating a somewhat looser definition of formation Gibbs energy, where several non-elemental compounds are defined as reference points as well( with a standard formation energy of zero). For some rare reactions, this definition can create a conflict that will result in a very large mistake in the estimated Gibbs energy. For instance, Alberty's formation energy table[15] lists 16 compounds as having a formation energy of zero. Among these, only 5 are elements(,,,, and) and 11 are co-factors( CoA−1, NAD( ox) −1, FAD( ox) −2, FMN( ox) −2 and seven other redox carriers). In most reactions which use these co-factors as substrates, the \"zeros\" will cancel out since one of the products will match it with a formation energy which is defined according to the same reference point( e. g. FAD( ox) −2 will be matched with FAD( red) −2 whose formation energy is kJ/mol in Alberty's table). Nevertheless, there are a handful of reactions where this matching doesn't occur. In the reaction FAD + H2O → AMP + FMN( catalyzed by FAD nucleotidohydrolase, EC 3. 6. 1.
18), there is a violation of conservation laws for FAD and FMN( both have a formation energy of zero in Alberty's table). Therefore, using the table naïvely for this reaction would yield a wrong value, namely kJ/mol. Combining formation energies derived using GC with ones from RC greatly increases the number of reactions where different reference-points are mixed together, and mistakes such as the one described above become much more common. One way to deal with the problem of reference-point conflicts is to use either RC or GC exclusively for every reaction being estimated. Specifically, RC will be applied to all reactions that can be completely covered by it. Only if one or more reactants are missing from the formation energy table, we would use the less precise GC method for the entire reaction. Unfortunately, combining the frameworks in such a way can easily lead to violations of the first law of thermodynamics. This stems from the fact that inconsistent use of formation energies across several reactions, coming from inaccuracies in the estimation method that do not cancel out, can create situations where futile cycles will have a non-zero change in Gibbs energy. An example for such a futile cycle is given in Figure 2. Applying this method for large-scale metabolic reconstructions will most likely lead to non-physical solutions. Reference-point conflicts and first-law violations can both be avoided, by adjusting baseline formation energies of compounds with non-elemental reference points to match group contribution estimates. This approach was taken in[8] and[10]. The formation energies of FAD( ox) −2 and all other reference points in Alberty's table were set equal to their group contribution estimates. All formation energies that were determined relative to each reference point were then adjusted according to Alberty's table to maintain the same relative formation energies. The main disadvantage of this approach is that the set of reference points is fixed and limited to a few common cofactors. The coverage of reactant contributions could be increased by also defining less common metabolites as reference points, but listing them all in a static table would be impractical and inefficient. The component contribution method, which is described in detail in the following sections of this paper, manages to combine the estimates of RC and GC while avoiding any reference-point conflicts or first-law violations. In the component contribution framework, the maximal set of reference points given a set of measured reactions is automatically determined. We maintain the notion of prioritizing RC over GC, but rather than applying only one method exclusively per reaction, we split every reaction into two independent reactions. One of these sub-reactions can be evaluated using RC, while the other cannot – and thus its Gibbs energy is calculated using GC. We use linear orthogonal projections in order to split each of the reactions, ensuring that all estimated values are self-consistent. The choice of orthogonal projections is somewhat arbitrary, and is based on the assumption that it is beneficial to minimize the Euclidean distance to the projected point that can be estimated using RC.
This framework also enables us to calculate confidence intervals for standard reaction Gibbs energies in a mathematically sound way.\nDiscussion:\nThe component contribution method presented in this paper merges two established methods for calculating standard Gibbs energies of reactions while maintaining each of their advantages; accuracy in the case of reactant contribution( RC) and the wide coverage of group contribution( GC). By representing every reaction as a sum of two complementary component reactions, one in the subspace that is completely covered by RC and the other in the complementary space, we maximize the usage of information that can be obtained with the more accurate RC method. Overall, we find that there is a 50% reduction in the median absolute residual compared to standard GC methods, while providing the same wide coverage and ensuring that there are no reference-point inconsistencies that otherwise lead to large errors. Furthermore, since our method is based on least-squares linear regression, we use standard practices for calculating confidence intervals for standard Gibbs energies( see section Calculation of confidence intervals), and for weighing the measured standard Gibbs energies used as training data( see Section S1. 2 in Text S1). Since the empirical data used in our method is measured in various conditions( temperature, pH, ionic strength, metal ion concentrations, etc.) – it is important to “standardize” the input data before applying any linear regression model[24]. In this work, we used an inverse Legendre transform to normalize the pH and ionic strength, but ignore the temperature effect and the metal ion concentrations( see Section S1. 1 in Text S1). In addition, the proton dissociation constants were obtained from a third party software estimator( by Marvin, see Methods) and have a mean absolute error of about 0. 9 pH units[30]. Notably, a commendable effort for creating a database of thermodynamic quantities[31] has been published recently, where the data was standardized using more reliable parameters and considering more effects. This database currently only covers reactions from glycolysis, the tricarboxylic acid cycle, and the pentose phosphate pathway. Therefore, we chose to use the more extensive TECRDB database and perform the inverse Legendre transform ourselves, effectively increasing the coverage while compromising on the accuracy of the data. Since the changes brought forward in the component contribution method are independent of the source of input data, we believe that it will benefit from any future improvements in these databases. The precision of the component contribution method is limited by the accuracy of the measured reaction equilibrium constants used in the regression model. In cases of isolated reactions, where the empirical data cannot be corroborated by overlapping measurements, large errors will be directly propagated to our estimate of those reactions' standard Gibbs energies. As the number of measurements underlying an estimate is reflected in its standard error, however, confidence intervals for such reactions will be large. It is therefore recommended to use confidence intervals, and not point estimates, for simulations and predictions based on standard Gibbs energy estimates. 
In the future, it might be worthwhile to integrate several promising computational prediction approaches[32] which are not based on RC and GC, such as molecular mechanics methods[33], density functional methods[34], and post Hartree-Fock approaches[35],[36]. Although the computational cost of these methods can be substantial depending on the theoretical method and the solvation models[37] used, they have the advantage of being based on computable chemical and physical principles, implying that a 100% coverage of all biochemical reactions is achievable( though not yet practical). Currently, the accuracy of these methods for reactions in solution is limited. Nevertheless, they might already be useful for estimating the Gibbs energies of reactions that are not covered by component contributions, or for validating the sparse measurements. Alternatively, a method named IGERS[38] that infers Gibbs energies from reaction similarities manages to be much more accurate than GC when predicting the standard Gibbs energy of reactions which are very similar to a reaction with a measured Gibbs energy. Adding IGERS as another layer between RC and GC using the ideas presented in this paper might contribute to the overall accuracy of our estimations. Finally, the laws of additivity suggested by[16] include single atom( zero-order) and single bond( first-order) contributions, which would be too crude to use for approximating Gibbs energies directly, but might be useful as two extra layers in a method like component contribution and help cover a wider fraction of the reaction space. The use of thermodynamic parameters in modeling living systems has been hindered by the fact that it is mostly inaccessible or requires a high level of expertise to use correctly, especially in genome-scale models. In order to alleviate this limitation, we created a framework that facilitates the integration of standard reaction Gibbs energies into existing models and also embedded our code into the openCOBRA toolbox. The entire framework( including the source code and training data) is freely available. We envisage a collaborative community effort that will result in a simple and streamlined process where these important thermodynamic data are widely used and where future improvements in estimation methods will seamlessly propagate to modelers." }, { "role": "assistant", "content": "The metabolism of living organisms is a complex system with a large number of parameters and interactions . Nevertheless , it is governed by a strict set of rules that make it somewhat predictable and amenable to modeling . The laws of thermodynamics play a pivotal role by determining reaction feasibility and by governing the kinetics of enzymes . Here we introduce estimations for the standard Gibbs energy of reactions , with the best combination of accuracy and coverage to date . The estimations are derived using a new method which we denote component contribution . This method integrates multiple sources of information into a consistent framework that obeys the laws of thermodynamics , and provides a significant improvement in accuracy compared to previous genome-wide estimations of standard Gibbs energies . We apply and test our method on reconstructions of E . coli and human metabolism and , in addition , do our best to facilitate the use of these estimations in future models by providing open-source software that performs the integration in a streamlined process ." } ]
science.bioasq_factoid_qa
science.bioasq_factoid_qa.1349
[ { "role": "user", "content": "You will be shown biomedical passages and your task is to the answer the provided question. Provide a concise and accurate response and do not include any additional text in the response other than the answer.\n\nQuestion: Which disease phenotype has the worst prognosis in Duchenne Muscular Dystrophy?\n\nContext: A strong association between the risk of cognitive disability and the involvement of groups of DMD isoforms was found. In particular, improvements in the correlation of FSIQ with mutation location were identified when a new classification system for mutations affecting the Dp140 isoform was implemented.\nThe ACTN3 577XX Null Genotype Is Associated with Low Left Ventricular Dilation-Free Survival Rate in Patients with Duchenne Muscular Dystrophy.\nThese results suggest that patients with Duchenne muscular dystrophy with defective cDMD have more severe disease than those without cDMD deficit.\nOsteopontin genotype is a genetic modifier of disease severity in Duchenne dystrophy. Inclusion of genotype data as a covariate or in inclusion criteria in DMD clinical trials would reduce intersubject variance, and increase sensitivity of the trials, particularly in older subjects.\nSPP1 genotype is a determinant of disease severity in Duchenne muscular dystrophy." }, { "role": "assistant", "content": "Dp140 isoform" } ]
science.scireviewgen_multidoc_summarization
science.scireviewgen_multidoc_summarization.28253
[ { "role": "user", "content": "You are presented with multiple paper abstracts, alongside the associated literature review and chapter titles. Given the provided information, you are going to do a literature review. Please generate a coherent literature review that synthesizes the content, draws connections between different works, highlights major findings, and identifies any gaps in the existing research.\n\nLiterature Review Title: A Survey of Codes for Optical Disk Recording \nChapter Title: V. ALTERNATIVES TO EFM SCHEMES \n1. Abstract of Cited Paper (BIB001): Reports on an alternative to eight-to-fourteen modulation (EFM), called EFMPlus, which has been adopted as coding format of the multimedia compact disc proposal. The rate of the new code is 8/16, which means that a 6-7 % higher information density can be obtained. EFMPlus is the spitting image of EFM (same minimum and maximum runlength, clock content etc). Computer simulations have shown that the low-frequency content of the new code is only slightly larger than its conventional EFM counterpart. \n2. Abstract of Cited Paper (BIB002): Describes a new technique for constructing fixed-length (d,k) runlength-limited block codes. The new codes are very close to block-decodable codes, as decoding of the retrieved sequence can be accomplished by observing (part of) the received codeword plus a very small part (usually only a single bit) of the previous codeword. The basic idea of the new construction is to uniquely represent each source word by a (d,k) sequence with specific predefined properties, and to construct a bridge of β, 1 ≤ β ≤ d, merging bits between every pair of adjacent words. An essential element of the new coding principle is look ahead. The merging bits are governed by the state of the encoder (the history), the present source word to be translated, and by the upcoming source word. The new constructions have the virtue that only one look-up table is required for encoding and decoding.\n\nLiterature Review:" }, { "role": "assistant", "content": "It is of some interest to consider the possibility of redesigning the EFM code and its variants of various code rates and to compare the spectral performance. The EFM code was designed in 1980 before efficient design algorithms, such as ACH and so on, were developed. A second handicap of the EFM design is that at the time of its conception, every gate used for decoding was one too many. BIB002 Let us now for academic interest ignore for the time being the complexity issue, and start from scratch. Essentially, EFMPlus is a redesign of EFM with a rate 8/16 instead of 8/17. Decoding of EFMPlus requires 1000 instead of the 52 gates of EFM. An obvious alternative to the rate 8/16 EFMPlus code would have been EFM with two instead of three merging bits. The dc-content of the alternative code, EFM16a (the name we shall use for the code with two instead of three merging bits), can easily be assessed by computer simulation, and the results are shown in Table IV . The dc-content can be reduced significantly by a reassignment of the various words that takes into account the following observation. Observe, for example, that in EFM16a, the 16-bit words 0001000100001000 and 1001000100001000 are alternative channel representations. It can easily be verified that the disparity of both words (after precoding, of course) is zero. This, in fact, means that the encoder has no real option to increase or decrease the RDS with the transmission of those codewords.
Obviously, it would be much better if we could redesign the code in such a way that as many source words as possible would have channel representations of zero-disparity. Nonzero-disparity codewords of opposite signs should be paired, whereas zero-disparity codewords may remain single. It is a straightforward exercise, using Gu and Fuja's method BIB001 , to design a block code according to the above design heuristics. We may construct a block-decodable code with a source size of 260 instead of 257 words. A typical result (note there are many possibilities), called EFM16b, offers 8 dB (see Table IV ) more reduction at the low-frequency end than does EFM16a. This is a significant improvement, in particular, as the only disadvantage is the extra gate count required for decoding. Note that the EFM16b code requires a full decoding array of 16 bits instead of 14 bits as in EFM16a. An advantage with respect to EFMPlus, which requires a sliding-block decoder of length two, is the absence of error propagation. On the other side of the balance, we have a 3-dB extra reduction of EFMPlus's lf-content (see Table IV) . At least in theory, it is possible to construct a rate 8/15 (2, 9) code. The EFM15 code is an example of a rate 8/15, (2, 14) DCRLL code. An alternative rate 8/15 construction BIB003 is possible that requires, in contrast with classic EFM, only one merging bit. We could, in principle, employ the same 14-bit word assignment as in classic EFM. For an example of such a construction, the reader is referred to the U.S. Patent granted to Tanaka et al. . EFM15, which is very similar to EFMPlus, has a codeword length of 15 bits. The code, a typical sliding-block code, has four encoder states, and was constructed using the ACH algorithm after a single split. The number of words that can be accommodated depends on the state, and is at most 270. This leaves at most spare words that can be used for dc-control. Pairing of the alternative representations has been accomplished in such a way that the words that form a pair differ in a single position, i.e., have unity Hamming distance. This has the advantage that the decoding operation is simplified and that the alternative representations have an odd or even number of ones, which, as was observed experimentally, has a beneficial effect upon the quality of the dc-control. Further details, such as coding tables and so on, of EFM15 can be found in the U.S. Patent description ." } ]
science.data_reco_mcq_sc
science.data_reco_mcq_sc.7792
[ { "role": "user", "content": "Given a short query with keyphrases about a research idea in 2019 in the domain of artificial intelligence, recommend the relevant datasets to validate the research idea from options provided. A short description of the datasets is also provided to help you.\n\nQuery: We study the relationship between the protein sequence, the structure, and the function. We show\n\nKeyphrases: substrate specificity prediction protein sequences\n\nDataset description:\nDecagon: Bio-decagon is a dataset for polypharmacy side effect identification problem framed as a multirelational link prediction problem in a two-layer multimodal graph/network of two node types: drugs and proteins. Protein-protein interaction network describes relationships between proteins.\nBioGRID: BioGRID is a biomedical interaction repository with data compiled through comprehensive curation efforts. The current index is version 4.2.192 and searches 75,868 publications for 1,997,840 protein and genetic interactions, 29,093 chemical interactions and 959,750 post translational modifications from major model organism species.\nBDD100K: Datasets drive vision progress, yet existing driving datasets are impoverished in terms of visual content and supported tasks to study multitask learning for autonomous driving. Researchers are usually constrained to study a small set of problems on one dataset, while real-world computer vision applications require performing tasks of various complexities.\nUASOL: The UASOL an RGB-D stereo dataset, that contains 160902 frames, filmed at 33 different scenes, each with between 2 k and 10 k frames. The frames show different paths from the perspective of a pedestrian, including sidewalks, trails, roads, etc.\n\nOptions:- A: Decagon B: BioGRID C: BDD100K D: UASOL" }, { "role": "assistant", "content": "BioGRID" } ]
science.mslr2022_cochrane_multidoc_summarization
science.mslr2022_cochrane_multidoc_summarization.1819
[ { "role": "user", "content": "You will be provided with the titles and abstracts of a collection of medical RCT reports studying a clinical question -- for instance, the efficacy of an intervention on some medical outcome. Please write a few sentences synthesizing the findings of these reports, which could serve as a conclusion section for a systematic review article. Your response should not include any other text. The RCT reports are below.\n\n\n\nTitle 1:\nAn open, randomized, controlled study of transdermal hormone replacement therapy on the rate of bone loss in primary biliary cirrhosis.\nAbstract 1:\nThe prevalence of osteoporosis amongst patients with primary biliary cirrhosis (PBC) is high and may be a serious clinical problem. Hormone replacement therapy (HRT) is effective in preventing bone loss but has not been evaluated in randomized trials in PBC. The primary aim was to study the effect of transdermal HRT in combination with daily vitamin D and calcium supplementation on bone loss compared with vitamin D and calcium supplementation only in postmenopausal women with PBC. The secondary aim was to study the safety of transdermal HRT. SUBJECTS/INTERVENTIONS: Eighteen females with PBC were randomized to receive 2 years therapy with either (i) transdermal oestradiol 50 microg 24 h(-1) two times per week + medroxyprogesterone 2.5 mg day(-1) + alfacalcidol 0.25 microg day(-1) and calcium 1 g day(-1) or (ii) alfacalcidol 0.25 microg day(-1) and calcium 1 g day(-1). Dual-energy X-ray absorptiometry for measurement of bone mineral density (BMD) and sampling of blood and serum for measurements of biochemical markers of liver function was performed before, during and at the end of treatment.\n BMD increased significantly at the lumbar spine (P < 0.05) and the femoral neck (P < 0.05) in the HRT group whereas no significant change was found in the control group. One oestrogen-treated patient was excluded after 1 year because of deteriorating, but reversible, aminotransferases. Dropout frequency because of nonliver-related causes was higher in the HRT group. Otherwise, no difference with respect to adverse liver reactions was found between the groups.\n Transdermal HRT increases BMD in PBC patients with few severe side effects related to the liver.\n\n\nTitle 2:\nOsteoporosis in primary biliary cirrhosis: a randomized trial of the efficacy and feasibility of estrogen/progestin.\nAbstract 2:\nThe optimal therapy for the prevention and treatment of osteoporosis in primary biliary cirrhosis (PBC) is unknown. Hormone replacement therapy (HRT) prevents osteoporosis, but may promote cholestasis. We performed a double-blind, randomized, placebo-controlled trial of transdermal estrogen/progestin in postmenopausal women with PBC. The 24-month study enrolled 31 patients, but trial uptake was limited and treatment arm dropout was significant. Placebo-treated patients had a higher percentage loss in femoral neck bone mineral density than actively treated patients (-3.76 +/- 1.37% versus 0.21 +/- 1.01%, respectively, P = .058). New fractures occurred in 2 patients on placebo, and in no patients on treatment. The mean monthly increase in bilirubin was not significantly different between groups, but individual data suggest HRT may worsen cholestasis. In conclusion, women with PBC have strong feelings about HRT, and recruitment for this intervention is difficult. Transdermal estrogen/progestin likely provides protection against bone loss in PBC patients, but may worsen cholestasis." 
}, { "role": "assistant", "content": "We did not find evidence to support the use of hormone replacement for women with primary biliary cirrhosis. It seems that hormone replacement is connected with a significant increase in the occurrence of adverse events." } ]
science.genia_ner
science.genia_ner.570
[ { "role": "user", "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"RNA\", \"CellLine\", \"Protein\", \"DNA\", \"CellType\"].\n\nPlease return the output as a JSON object of the format: {\"DNA\" : [\"TAL1\", ...], \"RNA\": [\"mRNAs\", ...], \"CellLine\": [\"T cells\", ...], \"CellType\": [\"monocytes\", ...], \"Protein\": [\"IL-2\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nSurfactant suppresses NF-kappa B activation in human monocytic cells. In addition to biophysical properties, pulmonary surfactant has immunomodulatory activity. We previously demonstrated that both synthetic (Exosurf) and modified natural surfactant (Survanta) downregulated endotoxin-stimulated inflammatory cytokine mRNA levels and protein products (tumor necrosis factor-alpha [ TNF ], interleukin-1-beta [ IL-1 ], interleukin-6 [ IL-6 ]) in human alveolar macrophages. In this study, we report that both Exosurf and Survanta suppress TNF mRNA and secretion (85 +/- 4% mean percent inhibition +/- SEM by Exosurf; 71 +/- 6% by Survanta) by endotoxin-stimulated THP-1, a human monocytic cell line. Because surfactant downregulated inflammatory cytokine production similarly in both normal human alveolar macrophages and the THP-1 cell line, we used this cell line to investigate whether surfactant affected transcriptional mechanisms. Specifically, we examined nuclear factor-kappa B (NF-kappa B) activation because it is crucial in transcriptional regulation of many inflammatory cytokine genes including TNF, IL-1, and IL-6. Electrophoretic mobility shift assays showed that both surfactants decreased activation of NF-kappa B. The presence of both p65 and p50 NF-kappa B components in LPS-activated THP-1 cells was confirmed by specific antibody induction of supershifts in mobility assays. These results are the first to suggest that surfactant's suppressive effects on inflammatory cytokine production may involve transcriptional regulation through inhibition of NF-kappa B activation." }, { "role": "assistant", "content": "{\"CellLine\": [\"cell line\", \"human monocytic cell line\", \"THP - 1 cell line\", \"THP - 1 cells\", \"endotoxin - stimulated THP - 1\", \"this cell line\", \"the THP - 1 cell line\"], \"CellType\": [\"macrophages\", \"monocytic cells\", \"normal human alveolar macrophages\", \"human alveolar macrophages\"], \"DNA\": [\"IL - 6\", \"IL - 1\", \"TNF\", \"inflammatory cytokine genes\"], \"Protein\": [\"p50\", \"nuclear factor - kappa B\", \"tumor necrosis factor - alpha\", \"IL - 6\", \"interleukin - 1 - beta\", \"protein products\", \"p65\", \"IL - 1\", \"inflammatory cytokine\", \"NF - kappa B\", \"interleukin - 6\", \"NF - kappa B components\"], \"RNA\": [\"inflammatory cytokine mRNA\", \"TNF mRNA\"]}" } ]
science.mslr2022_cochrane_multidoc_summarization
science.mslr2022_cochrane_multidoc_summarization.1493
[ { "role": "user", "content": "You will be provided with the titles and abstracts of a collection of medical RCT reports studying a clinical question -- for instance, the efficacy of an intervention on some medical outcome. Please write a few sentences synthesizing the findings of these reports, which could serve as a conclusion section for a systematic review article. Your response should not include any other text. The RCT reports are below.\n\n\n\nTitle 1:\nCyclofenil versus placebo in progressive systemic sclerosis. A one-year double-blind crossover study of 27 patients.\nAbstract 1:\nCyclofenil was evaluated versus placebo in the treatment of progressive systemic sclerosis (PSS, scleroderma) in a 2 x 6-month double-blind crossover study. The mean duration of disease was six years. Of 38 patients entering the study, 27 completed both periods. Reasons for drop-outs were very high liver transaminases in three cases, cardiac death in two, and drug allergy, alcoholic problems, suspected congestive heart failure, reactivation of tuberculosis, arteriosclerotic heart disease, and lethal progression of PSS in one case each. No fatality was attributed to cyclofenil. Liver enzyme abnormalities were seen in 13 of 35 active drug periods and in 5 of 30 placebo periods. Cutaneous and visceral involvement were assessed by a large battery of subjective parameters and objective tests. Overall improvement was seen during 17 drug periods and nine placebo periods (N.S.), but a paired comparison of the status at the end of each treatment period resulted in the following distribution: 15 were improved at the end of the drug period, four at the end of placebo period (p less than 0.01) and eight were unchanged. In patients with a disease duration of five years or less, joint stiffness and pain were less on drug than on placebo treatment (p less than 0.05). In the whole group, oesophageal peristalsis improved (p less than 0.05). Blood folate increased (p less than 0.01). Working capacity was lower after the drug period than after the placebo period (p less than 0.05). Several other parameters, however, did not change significantly. Cyclofenil appears to be a promising drug in the treatment of PSS and should be tested further in controlled long-term studies." }, { "role": "assistant", "content": "There is insufficient evidence either to support or to refute the use of Cyclofenil in the treatment of Raynaud's phenomenon secondary to scleroderma." } ]
science.multixscience_multidoc_summarization
science.multixscience_multidoc_summarization.288
[ { "role": "user", "content": "You are provided with the abstract of a scientific paper and the abstracts of the papers it references. Your task is to write a Related Work section for the main paper. This section should concisely summarize and integrate the key points from the cited abstracts, reflecting how they relate to the main paper's contributions and context. In your summary, use numerical citation markers like [1], [2], etc., to refer to the corresponding papers.\n\nMain Abstract:\nWe study the fair allocation of indivisible goods under the assumption that the goods form an undirected graph and each agent must receive a connected subgraph. Our focus is on well-studied fairness notions including envy-freeness and maximin share fairness. We establish graph-specific maximin share guarantees, which are tight for large classes of graphs in the case of two agents and for paths and stars in the general case. Unlike in previous work, our guarantees are with respect to the complete-graph maximin share, which allows us to compare possible guarantees for different graphs. For instance, we show that for biconnected graphs it is possible to obtain at least @math of the maximin share, while for the remaining graphs the guarantee is at most @math . In addition, we determine the optimal relaxation of envy-freeness that can be obtained with each graph for two agents, and characterize the set of trees and complete bipartite graphs that always admit an allocation satisfying envy-freeness up to one good (EF1) for three agents. Our work demonstrates several applications of graph-theoretical tools and concepts to fair division problems.\n\nCited Abstract(s):\n[1]: We study the existence of allocations of indivisible goods that are envy-free up to one good (EF1), under the additional constraint that each bundle needs to be connected in an underlying item graph G. When the items are arranged in a path, we show that EF1 allocations are guaranteed to exist for arbitrary monotonic utility functions over bundles, provided that either there are at most four agents, or there are any number of agents but they all have identical utility functions. Our existence proofs are based on classical arguments from the divisible cake-cutting setting, and involve discrete analogues of cut-and-choose, of Stromquist's moving-knife protocol, and of the Su-Simmons argument based on Sperner's lemma. Sperner's lemma can also be used to show that on a path, an EF2 allocation exists for any number of agents. Except for the results using Sperner's lemma, all of our procedures can be implemented by efficient algorithms. Our positive results for paths imply the existence of connected EF1 or EF2 allocations whenever G is traceable, i.e., contains a Hamiltonian path. For the case of two agents, we completely characterize the class of graphs @math that guarantee the existence of EF1 allocations as the class of graphs whose biconnected components are arranged in a path. This class is strictly larger than the class of traceable graphs; one can check in linear time whether a graph belongs to this class, and if so return an EF1 allocation." }, { "role": "assistant", "content": "[1] investigated the same model with respect to relaxations of envy-freeness. As we mentioned, they characterized the set of graphs for which EF1 can be guaranteed in the case of two agents with arbitrary monotonic valuations. Moreover, they showed that an EF1 allocation always exists on a path for @math .
Intriguingly, the existence question for @math remains open, although they showed that an EF2 allocation can be guaranteed for any @math ." } ]
science.pico_ner
science.pico_ner.3273
[ { "role": "user", "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Outcome\", \"Participant\", \"Intervention\"].\n\nPlease return the output as a JSON object of the format: {\"Participant\" : [\"patients with COPD\", ...], \"Intervention\" : [\"Atenolol\", ...], \"Outcome\" : [\"blood pressure\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nThe impact of the amount of social evaluation on psychobiological responses to a body image threat . The present study examined the impact of amount of social-evaluative body image threat on psychobiological responses . Women ( N=123 ) were randomized into an individual-threat , group-threat or no-threat condition . Participants completed a measure of state body shame and provided a sample of saliva ( to assess cortisol ) at baseline and following their condition . Both threat conditions had higher baseline-adjusted body shame following the threat compared to the no-threat condition ; however , no difference on baseline-adjusted body shame between the threat conditions was found . The same pattern of results was found for cortisol - both threat conditions had higher baseline-adjusted response cortisol than the no-threat condition , with no significant differences between the threat groups . Findings suggest that the magnitude of psychobiological responses to a social-evaluative body image threat does not differ with the amount of social-evaluative threat ( individual- versus group-threat ) . These findings provide insight into the context of body image threats of women ." }, { "role": "assistant", "content": "{\"Intervention\": [\"social evaluation\", \"state body shame\"], \"Outcome\": [], \"Participant\": [\"Women\", \"women\", \"body image threat .\", \"N=123\"]}" } ]
science.data_reco_mcq_mc
science.data_reco_mcq_mc.2129
[ { "role": "user", "content": "You are provided with a research question, keyphrases about the question, a description of candidate datasets and dataset options. Read the description of popular datasets provided below and select the ones that can be used to validate the following research question. Use your knowledge of machine learning datasets to make the best judgement.\nYour response should be formatted as a `json` array. For instance, for the query \"Semi supervised image classification\", a sample response might be: [\"CIFAR-10\", \"CIFAR-100\"]. Do not include any extra text in the response other than the answer array.\n\nQuery: I want to learn a compact hash function to index the large-scale image collections.\n\nKeyphrases: image retrieval images\n\nDataset description:\nCIFAR-10: The CIFAR-10 dataset (Canadian Institute for Advanced Research, 10 classes) is a subset of the Tiny Images dataset and consists of 60000 32x32 color images. The images are labelled with one of 10 mutually exclusive classes: airplane, automobile (but not truck or pickup truck), bird, cat, deer, dog, frog, horse, ship, and truck (but not pickup truck).\nBigEarthNet: BigEarthNet consists of 590,326 Sentinel-2 image patches, each of which is a section of i) 120x120 pixels for 10m bands; ii) 60x60 pixels for 20m bands; and iii) 20x20 pixels for 60m bands.\nfMoW: Functional Map of the World (fMoW) is a dataset that aims to inspire the development of machine learning models capable of predicting the functional purpose of buildings and land use from temporal sequences of satellite images and a rich set of metadata features.\nOpenEDS: OpenEDS (Open Eye Dataset) is a large scale data set of eye-images captured using a virtual-reality (VR) head mounted display mounted with two synchronized eyefacing cameras at a frame rate of 200 Hz under controlled illumination. This dataset is compiled from video capture of the eye-region collected from 152 individual participants and is divided into four subsets: (i) 12,759 images with pixel-level annotations for key eye-regions: iris, pupil and sclera (ii) 252,690 unlabelled eye-images, (iii) 91,200 frames from randomly selected video sequence of 1.5 seconds in duration and (iv) 143 pairs of left and right point cloud data compiled from corneal topography of eye regions collected from a subset, 143 out of 152, participants in the study.\nDCASE 2018 Task 4: DCASE2018 Task 4 is a dataset for large-scale weakly labeled semi-supervised sound event detection in domestic environments. The data are YouTube video excerpts focusing on domestic context which could be used for example in ambient assisted living applications.\nCUHK03: The CUHK03 consists of 14,097 images of 1,467 different identities, where 6 campus cameras were deployed for image collection and each identity is captured by 2 campus cameras. This dataset provides two types of annotations, one by manually labelled bounding boxes and the other by bounding boxes produced by an automatic detector.\n\nOptions:- A: CIFAR-10 B: BigEarthNet C: fMoW D: OpenEDS E: DCASE 2018 Task 4 F: CUHK03" }, { "role": "assistant", "content": "[\"CIFAR-10\", \"CUHK03\"]" } ]
science.bc7_litcovid_topic_classification
science.bc7_litcovid_topic_classification.23260
[ { "role": "user", "content": "You are presented with a paragraph from a COVID-19 literature. Classify the topics of this piece of text by choosing from the following categories:\n\n- Epidemic Forecasting: The content relates to predictions about the spread of COVID-19.\n- Treatment: The paragraph discusses therapeutic methods or medication for COVID-19.\n- Prevention: The content covers measures to prevent the spread of COVID-19.\n- Mechanism: The paragraph explains the biological mechanisms of the virus.\n- Case Report: The content is a report of a COVID-19 case.\n- Transmission: The paragraph discusses how COVID-19 is transmitted.\n- Diagnosis: The content is about diagnosing COVID-19 in individuals.\n\nYour classification should be provided as an element within a JSON array. The array should be formatted with topics enclosed in double quotes and separated by commas, following the JSON format, for example: [\"Topic1\", \"Topic2\", ...]. Include all applicable topics within the array. Your output should be a valid JSON array, and no additional text should be included in your response.\n\nArticle:\n\nDuring the pandemic of 2019-nCoV, large public hospitals are facing great challenges. Multi-hospital development will be the main mode of hospital administrative management in China in the future. West China Hospital of Sichuan University implemented multi-hospital integrated management, in which the branch district established the administrative multi-department collaboration mode. As an important part of the operation of branch district, how to effectively organize transportation of staffs and patients and to prevent and control the pandemic of 2019-nCoV simultaneously between different hospitals have been the key and difficult points, which should be solved urgently in the management of the branch district." }, { "role": "assistant", "content": "[\"Prevention\"]" } ]
science.data_reco_mcq_mc
science.data_reco_mcq_mc.2260
[ { "role": "user", "content": "You are provided with a research question, keyphrases about the question, a description of candidate datasets and dataset options. Read the description of popular datasets provided below and select the ones that can be used to validate the following research question. Use your knowledge of machine learning datasets to make the best judgement.\nYour response should be formatted as a `json` array. For instance, for the query \"Semi supervised image classification\", a sample response might be: [\"CIFAR-10\", \"CIFAR-100\"]. Do not include any extra text in the response other than the answer array.\n\nQuery: In this paper, we study a practical setting for the training of neural networks, where we do not allow\n\nKeyphrases: imagenet images\n\nDataset description:\nCIFAR-10: The CIFAR-10 dataset (Canadian Institute for Advanced Research, 10 classes) is a subset of the Tiny Images dataset and consists of 60000 32x32 color images. The images are labelled with one of 10 mutually exclusive classes: airplane, automobile (but not truck or pickup truck), bird, cat, deer, dog, frog, horse, ship, and truck (but not pickup truck).\nTHEODORE: Recent work about synthetic indoor datasets from perspective views has shown significant improvements of object detection results with Convolutional Neural Networks(CNNs). In this paper, we introduce THEODORE: a novel, large-scale indoor dataset containing 100,000 high- resolution diversified fisheye images with 14 classes.\nUNSW-NB15: UNSW-NB15 is a network intrusion dataset. It contains nine different attacks, includes DoS, worms, Backdoors, and Fuzzers.\nBDD100K: Datasets drive vision progress, yet existing driving datasets are impoverished in terms of visual content and supported tasks to study multitask learning for autonomous driving. Researchers are usually constrained to study a small set of problems on one dataset, while real-world computer vision applications require performing tasks of various complexities.\nUNITOPATHO: Histopathological characterization of colorectal polyps allows to tailor patients' management and follow up with the ultimate aim of avoiding or promptly detecting an invasive carcinoma. Colorectal polyps characterization relies on the histological analysis of tissue samples to determine the polyps malignancy and dysplasia grade.\nCOCO: The MS COCO (Microsoft Common Objects in Context) dataset is a large-scale object detection, segmentation, key-point detection, and captioning dataset. The dataset consists of 328K images.\nGSL: Dataset Description The Greek Sign Language (GSL) is a large-scale RGB+D dataset, suitable for Sign Language Recognition (SLR) and Sign Language Translation (SLT). The video captures are conducted using an Intel RealSense D435 RGB+D camera at a rate of 30 fps.\n\nOptions:- A: CIFAR-10 B: THEODORE C: UNSW-NB15 D: BDD100K E: UNITOPATHO F: COCO G: GSL" }, { "role": "assistant", "content": "[\"CIFAR-10\", \"COCO\"]" } ]
science.scicite_classification
science.scicite_classification.708
[ { "role": "user", "content": "You will be presented with a citation segment from the section of a research paper. Classify the intent behind this citation by choosing from one of the following categories:\n\n- Method: Cites the paper for its methodology or procedure.\n- Background: Cites the paper to provide background information.\n- Result: Cites the paper for its findings or results.\n\nYour answer should be a single word from the following list of options: [\"Method\", \"Background\", \"Result\"]. Do not include any other text in your response.\n\nCitation sentence:\n\nTaken together with studies on the function of female ornamentation (Lebas 2006; Weiss 2006; Pryke 2007; reviewed in Amundsen & Parn 2006) and song in female songbirds (Ritchison 1983; Catchpole & Slater 1995; Searcy & Yasukawa 1995; Langmore 1998), these results suggest that the processes…" }, { "role": "assistant", "content": "Result" } ]
science.craftchem_ner
science.craftchem_ner.3992
[ { "role": "user", "content": "You will be presented a sentence from biomedical research text. Extract all terms that correspond to chemical entities of biological interest (ChEBI).\nReturn your identified terms in a JSON array, formatted like this: [\"Entity1\", \"Entity2\", ...].\nIf no such entities are detected, respond by providing an empty JSON array [].\n\nBiomedical context:\nSurprisingly,thePGC-1alpha-/-miceonahigh-fatdietweresignificantlymoreglucose-tolerantandinsulin-sensitivecomparedtothePGC-1alpha+/+mice(Figure9B)." }, { "role": "assistant", "content": "[\"glucose\", \"insulin\"]" } ]
science.chemprot_re
science.chemprot_re.54
[ { "role": "user", "content": "Given an abstract from a research paper, your task is to identify and label relationships between chemical compounds and genes mentioned in the abstract.\n\nThe relationship type must be one of ['DOWNREGULATOR', 'SUBSTRATE', 'INDIRECT-REGULATOR', 'PART-OF', 'MODULATOR-ACTIVATOR', 'REGULATOR', 'INHIBITOR', 'COFACTOR', 'UPREGULATOR', 'ACTIVATOR', 'ANTAGONIST', 'NOT', 'INDIRECT-DOWNREGULATOR', 'SUBSTRATE_PRODUCT-OF', 'INDIRECT-UPREGULATOR', 'AGONIST', 'PRODUCT-OF', 'MODULATOR', 'DIRECT-REGULATOR', 'UNDEFINED', 'AGONIST-INHIBITOR', 'AGONIST-ACTIVATOR', 'MODULATOR-INHIBITOR'].\n\nPlease format your output as a JSON array. Each entry in the array should express a single relation, formatted as [\"<Entity_A>\", \"<RELATION_A_B>\", \"<Entity_B>\"]. If no relations can be found, please output an empty JSON array [].\n\nAbstract:\n\nTranscriptional control of the arginine/lysine transporter, cat-1, by physiological stress. Cells respond to physiological stress by phosphorylating the alpha subunit of the translation initiation factor eIF2. This adaptive response inhibits protein synthesis and up-regulates genes essential for cell survival. Cat-1, the transporter for the essential amino acids, arginine and lysine, is one of the up-regulated genes. We previously showed that stress increases cat-1 expression by coordinated stabilization of the mRNA and increased mRNA translation. This induction is triggered by amino acid depletion and the unfolded protein response (UPR), which is caused by unfolded proteins in the endoplasmic reticulum. We show here that cat-1 gene transcription is also increased by cellular stress. Our studies demonstrate that the cat-1 gene promoter/regulatory region is TATA-less and is located in a region that includes 94 bases of the first exon. Transcription from this promoter is stimulated 8-fold by cellular stress. An amino acid response element within the first exon is shown to be required for the response to amino acid depletion but not to the UPR. The stimulation of transcription by amino acid depletion requires activation of GCN2 kinase, which phosphorylates eIF2alpha. This phosphorylation also induces translation of the cat-1 mRNA, demonstrating that stress-induced transcriptional and translational control of cat-1 are downstream targets of a signaling pathway initiating with eIF2alpha phosphorylation. Our studies show that the increase in cat-1 gene expression by cellular stress involves at least three types of coordinate regulation: regulation of transcription, regulation of mRNA stability, and regulation of mRNA translation." }, { "role": "assistant", "content": "[[\"arginine\", \"SUBSTRATE\", \"Cat-1\"], [\"amino acids\", \"SUBSTRATE\", \"Cat-1\"], [\"lysine\", \"SUBSTRATE\", \"Cat-1\"]]" } ]
science.ncbi_ner
science.ncbi_ner.175
[ { "role": "user", "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Disease\"].\n\nPlease return the output as a JSON object of the format: {\"Disease\" : [\"COVID-19\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nGene transfer and expression of human phenylalanine hydroxylase. Phenylketonuria (PKU) is caused by a genetic deficiency of the enzyme phenylalanine hydroxylase (PAH). A full-length complementary DNA clone of human PAH was inserted into a eukaryotic expression vector and transferred into mouse NIH3T3 cells which do not normally express PAH. The transformed mouse cells expressed PAH messenger RNA, immunoreactive protein, and enzymatic activity that are characteristic of the normal human liver products, demonstrating that a single gene contains all of the necessary genetic information to code for functional PAH. These results support the use of the human PAH probe in prenatal diagnosis and detection of carriers, to provide new opportunities for the biochemical characterization of normal and mutant enzymes, and in the investigation of alternative genetic therapies for PKU.." }, { "role": "assistant", "content": "{\"Disease\": [\"Phenylketonuria\", \"deficiency of the enzyme phenylalanine hydroxylase\", \"PKU\"]}" } ]
science.nlmgene_ner
science.nlmgene_ner.74
[ { "role": "user", "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Gene\"].\n\nPlease return the output as a JSON object of the format: {\"Gene\": [\"IGF\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nMicroRNA-203 mimics age-related aortic smooth muscle dysfunction of cytoskeletal pathways. Increased aortic stiffness is a biomarker for subsequent adverse cardiovascular events. We have previously reported that vascular smooth muscle Src-dependent cytoskeletal remodelling, which contributes to aortic plasticity, is impaired with ageing. Here, we use a multi-scale approach to determine the molecular mechanisms behind defective Src-dependent signalling in an aged C57BL/6 male mouse model. Increased aortic stiffness, as measured in vivo by pulse wave velocity, was found to have a comparable time course to that in humans. Bioinformatic analyses predicted several miRs to regulate Src-dependent cytoskeletal remodelling. qRT-PCR was used to determine the relative levels of predicted miRs in aortas and, notably, the expression of miR-203 increased almost twofold in aged aorta. Increased miR-203 expression was associated with a decrease in both mRNA and protein expression of Src, caveolin-1 and paxillin in aged aorta. Probing with phospho-specific antibodies confirmed that overexpression of miR-203 significantly attenuated Src and extracellular signal regulated kinase (ERK) signalling, which we have previously found to regulate vascular smooth muscle stiffness. In addition, transfection of miR-203 into aortic tissue from young mice increased phenylephrine-induced aortic stiffness ex vivo, mimicking the aged phenotype. Upstream of miR-203, we found that DNA methyltransferases (DNMT) 1, 3a, and 3b are also significantly decreased in the aged mouse aorta and that DNMT inhibition significantly increases miR-203 expression. Thus, the age-induced increase in miR-203 may be caused by epigenetic promoter hypomethylation in the aorta. These findings indicate that miR-203 promotes a re-programming of Src/ERK signalling pathways in vascular smooth muscle, impairing the regulation of stiffness in aged aorta." }, { "role": "assistant", "content": "{\"Gene\": [\"DNMT\", \"MicroRNA-203\", \"ERK\", \"miR-203\", \"extracellular signal regulated kinase\", \"caveolin-1\", \"paxillin\", \"DNA methyltransferases (DNMT) 1, 3a, and 3b\", \"Src\"]}" } ]
science.ncbi_ner
science.ncbi_ner.521
[ { "role": "user", "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Disease\"].\n\nPlease return the output as a JSON object of the format: {\"Disease\" : [\"COVID-19\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nChronic nonspherocytic hemolytic anemia (CNSHA) and glucose 6 phosphate dehydrogenase (G6PD) deficiency in a patient with familial amyloidotic polyneuropathy (FAP). Molecular study of a new variant (G6PD Clinic) with markedly acidic pH optimum. A new glucose-6-phosphate dehydrogenase (G6PD) variant with severe erythrocytic G6PD deficiency and a unique pH optimum is described in a young patient with chronic nonspherocytic hemolytic anemia (CNSHA) and familial amyloidotic polyneuropathy (FAP). Chronic hemolysis was present in the absence of infections, oxidant drugs or ingestion of faba beans. Residual enzyme activity was about 2. 6% and 63% of normal activity in erythrocytes and leucocytes, respectively. A molecular study using standard methods showed G6PD in the patient to have normal electrophoretic mobility (at pH 7. 0, 8. 0 and 8. 8), normal apparent affinity for substrates (Km, G6P and NADP) and a slightly abnormal utilization of substrate analogues (decreased deamino-NADP and increased 2-deoxyglucose-6-phosphate utilization). Heat stability was found to be markedly decreased (8% of residual activity after 20 min of incubation at 46 degrees C) and a particular characteristic of this enzyme was a biphasic pH curve with a greatly increased activity at low pH. Although molecular characteristics of this variant closely resemble those of G6PD Bangkok and G6PD Duarte, it can be distinguished from these and all other previously reported variants by virtue of its unusual pH curve. Therefore the present variant has been designated G6PD Clinic to distinguish it from other G6PD variants previously described" }, { "role": "assistant", "content": "{\"Disease\": [\"familial amyloidotic polyneuropathy\", \"FAP\", \"erythrocytic G6PD deficiency\", \"glucose 6 phosphate dehydrogenase (G6PD) deficiency\", \"chronic nonspherocytic hemolytic anemia\", \"Chronic hemolysis\", \"Chronic nonspherocytic hemolytic anemia\", \"CNSHA\"]}" } ]
science.qasper_extractive_qa
science.qasper_extractive_qa.301.2
[ { "role": "user", "content": "You will be shown sections from a scientific research paper, together with a question about the paper. Your task is to answer the question based on the contents of the paper. Your response should be a `json` object with two fields:\n\n\"answer\": An array of strings extracted directly from the paper which, collectively, answer the question.\n\"evidence\": An array of strings. Each should be an excerpt from the paper, in which one or more of the extracted answers can be found.\n\nFor instance, for the question \"What baselines did the authors compare against?\", a sample response might be:\n\n{\n \"answer\": [\"BERT\", \"RoBERTa\"],\n \"evidence\": [\"In our experiments, we compare the performance of our model against BERT and RoBERTa.\"]\n}\n\nDo not include any text in your response other than the json.\n\nIf the question is unanswerable given the provided excerpts, respond with the single word \"null\".\n\nPaper: Logician: A Unified End-to-End Neural Approach for Open-Domain Information Extraction\n\nIn this paper, we consider the problem of open information extraction (OIE) for extracting entity and relation level intermediate structures from sentences in open-domain. We focus on four types of valuable intermediate structures (Relation, Attribute, Description, and Concept), and propose a unified knowledge expression form, SAOKE, to express them. We publicly release a data set which contains more than forty thousand sentences and the corresponding facts in the SAOKE format labeled by crowd-sourcing. To our knowledge, this is the largest publicly available human labeled data set for open information extraction tasks. Using this labeled SAOKE data set, we train an end-to-end neural model using the sequenceto-sequence paradigm, called Logician, to transform sentences into facts. For each sentence, different to existing algorithms which generally focus on extracting each single fact without concerning other possible facts, Logician performs a global optimization over all possible involved facts, in which facts not only compete with each other to attract the attention of words, but also cooperate to share words. An experimental study on various types of open domain relation extraction tasks reveals the consistent superiority of Logician to other states-of-the-art algorithms. The experiments verify the reasonableness of SAOKE format, the valuableness of SAOKE data set, the effectiveness of the proposed Logician model, and the feasibility of the methodology to apply end-to-end learning paradigm on supervised data sets for the challenging tasks of open information extraction.\n\nSAOKE Data Set\nWe randomly collect sentences from Baidu Baike (http://baike.baidu.com), and send those sentences to a crowd sourcing company to label the involved facts. The workers are trained with labeling examples and tested with exams. Then the workers with high exam scores are asked to read and understand the facts in the sentences, and express the facts in the SAOKE format. During the procedure, one sentence is only labeled by one worker. Finally, more than forty thousand sentences with about one hundred thousand facts are returned to us. The manual evaluation results on 100 randomly selected sentences show that the fact level precision and recall is 89.5% and 92.2% respectively. Table 3 shows the proportions of four types of facts (described in Section \"SAOKE Data Set\" ) contained in the data set. 
Note that the facts with missing predicates represented by “P” are classified into “Unknown”. We publicize the data set at https://ai.baidu.com/broad/subordinate?dataset=saoke.\nPrior to the SAOKE data set, an annotated data set for OIE tasks with 3,200 sentences in 2 domains was released in BIBREF20 to evaluate OIE algorithms, in which the data set was said BIBREF20 “13 times larger than the previous largest annotated Open IE corpus”. The SAOKE data set is 16 times larger than the data set in BIBREF20 . To the best of our knowledge, SAOKE data set is the largest publicly available human labeled data set for OIE tasks. Furthermore, the data set released in BIBREF20 was generated from a QA-SRL data set BIBREF21 , which indicates that the data set only contains facts that can be discovered by SRL (Semantic Role Labeling) algorithms, and thus is biased, whereas the SAOKE data set is not biased to an algorithm. Finally, the SAOKE data set contains sentences and facts from a large number of domains.\n\nRestricted Copy Mechanism\nThe word selection model employed in BIBREF22 selects words from the whole vocabulary $V$ , which evidently violates the “literal honest” requirement of SAOKE. We propose a restricted version of copy mechanism BIBREF24 as the word selection model for Logician:\nWe collect the symbols introduced in Section \"SAOKE Format: Symbol Aided Open Knowledge Expression\" into a keyword set $K=\\lbrace $ “ $ISA$ ”, “ $DESC$ ”, “ $LOC$ ”, “ $BIRTH$ ”, “ $DEATH$ ”, “ $=$ ”, “ $($ ”, “)”, “ $\\$$ ”,“ $[$ ”, “ $ISA$0 ”, “ $ISA$1 ”, “ $ISA$2 ”, “ $ISA$3 ”, “ $ISA$4 ”, “ $ISA$5 ” $ISA$6 where “ $ISA$7 ” is the separator of elements of fact tuples. “ $ISA$8 ”, “ $ISA$9 ”, “ $DESC$0 ”, “ $DESC$1 ” are placeholders . When the decoder is considering generating a word $DESC$2 , it can choose $DESC$3 from either $DESC$4 or $DESC$5 . \n$$p(w_{t}^{F}|w_{t-1}^{F},s_{t},c_{t})=p_{X}(w_{t}^{F}|w_{t-1}^{F},s_{t},c_{t})+p_{K}(w_{t}^{F}|w_{t-1}^{F},s_{t},c_{t}),$$ (Eq. 15) \nwhere $p_{X}$ is the probability of copying from $S$ and $p_{K}$ is the probability of selecting from $K$ . Since $S\\cap K=\\phi $ and there are no unknown words in this problem setting, we compute $p_{X}$ and $p_{K}$ in a simpler way than that in BIBREF24 , as follows: $\np_{X}(w_{t}^{F}=w_{j}^{S}) & = & \\frac{1}{Z}\\exp (\\sigma ((h_{j}^{S})^{T}W_{c})s_{t}),\\\\\np_{K}(w_{t}^{F}=k_{i}) & = & \\frac{1}{Z}\\exp (v_{i}^{T}W_{o}s_{t}),\n$ \nwhere the (generic) $Z$ is the normalization term, $k_{i}$ is one of keywords, $v_{i}$ is the one-hot indicator vector for $k_{i}$ , $W_{o}\\in \\mathbb {R}^{(|K|\\times N_{h})}$ , $W_{c}\\in \\mathbb {R}^{(N_{h}\\times N_{h})}$ , and $\\sigma $ is a nonlinear activation function.\n\nCoverage Mechanism\nIn practice, Logician may forget to extract some facts (under-extraction) or extract the same fact many times (over-extraction). We incorporate the coverage mechanism BIBREF25 into Logician to alleviate these problems. Formally, when the decoder considers generating a word $w_{t}^{F}$ , a coverage vector $m_{j}^{t}$ is introduced for each word $w_{j}^{S}$ , and updated as follows: $\nm_{j}^{t} & = & \\mu (m_{j}^{t-1},\\alpha _{tj},h_{j}^{S},s_{t-1})=(1-z_{i})\\circ m_{j}^{t-1}+z_{j}\\circ \\tilde{m}_{j}^{t},\\\\\n\\tilde{m}_{j}^{t} & = & \\tanh (W_{h}h_{j}^{S}+u_{\\alpha }\\alpha _{tj}+W_{s}s_{t-1}+U_{m}[r_{i}\\circ m_{j}^{t-1}]),\n$ \nwhere $\\circ $ is the element-wise multiplication operator. 
The update gate $z_{j}$ and the reset gate $r_{j}$ are defined as, respectively, $\nz_{j} & = & \\sigma (W_{h}^{z}h_{j}^{S}+u_{\\alpha }^{z}\\alpha _{tj}+W_{s}^{z}s_{t-1}+U_{m}^{z}m_{j}^{t-1}),\\\\\nr_{j} & = & \\sigma (W_{h}^{r}h_{j}^{S}+u_{\\alpha }^{r}\\alpha _{tj}+W_{s}^{r}s_{t-1}+U_{m}^{r}m_{j}^{t-1}),\n$ \nwhere $\\sigma $ is a logistic sigmoid function. The coverage vector $m_{j}^{t}$ contains the information about the historical attention focused on $w_{j}^{S}$ , and is helpful for deciding whether $w_{j}^{S}$ should be extracted or not. The alignment model is updated as follows BIBREF25 : $\ne_{tj}=a(s_{t-1},h_{j}^{S},m_{j}^{t-1})=v_{a}^{T}\\tanh (W_{a}s_{t-1}+U_{a}h_{j}^{S}+V_{a}m_{j}^{t-1}),\n$ \nwhere $V_{a}\\in \\mathbb {R}^{(N_{h}\\times N_{h})}$ .\n\nPost processing\nFor each sequence generated by Logician, we parse it into a set of facts, remove tuples with illegal format or duplicated tuples. The resultant set is taken as the output of the Logician.\n\nResults Analysis\nThe experimental results reveal that, Logician outperforms the comparison methods with large margin in first three tasks. For hyponymy detection tasks, Logician overwhelms the $\\text{HypeNet}_{\\text{Phrase}}$ using the same training data, and produces comparable results to $\\text{HypeNet}_{\\text{Phrase}}^{\\text{Extra}}$ with much less training data. Table 6 exhibits several example sentences and the facts extracted by these algorithms.\nThe poor performance of pattern-based methods is plausibly due to the noise in SAOKE data set. The sentences in SAOKE data set are randomly selected from a web encyclopedia, with free and casual writing style, are thus more noisy than the training data of NLP toolset used by these methods. In this situation, the NLP toolset may produce poor results, so do the pattern-based methods.\nModels learned from the SAOKE data set archive much better performance. Nevertheless, $\\text{SRL}_{\\text{SAOKE}}$ extracts each fact without knowing whether a candidate word has been used in other facts, which results in the misleading overlap of the word UTF8gbsn“学” (“Learn” in English) between two facts in the first case of Table 6 . Similarly, $\\text{HypeNet}_{\\text{Phrase}}$ and $\\text{HypeNet}_{\\text{Phrase}}^{\\text{Extra}}$ focus on the semantic vectors of pairs of phrases and their dependency paths in the background corpus. They extract each fact independently from other facts and hence do not know whether there have been any other relations extracted about these two phrases. In other words, for those comparison methods, an important source of information is neglected and a global optimization for all facts involved in sentences is absent.\nOn the contrary, Logician performs global optimization over the facts involved in each sentence by the sequence-to-sequence learning paradigm with the help of the coverage mechanism, in which facts compete each other to attract the attention of words, but also cooperate to share words. Valuable information is shared between these multiple tasks, which makes Logician consistently superior to other algorithms in these tasks.\nFurthermore, $\\text{SRL}_{\\text{SAOKE}}$ and $\\text{HypeNet}$ methods suffer from the OOV problem, such as unfamiliar words/phrases like the person name and school name in the last case of Table 6 . In this situation they may fail to produce a reasonable result. 
Logician is able to cope with unfamiliar words/phrases by exploiting the context information using deep RNN network with the help of copy mechanism.\n\nExtraction Error Analysis of Logician\nWe do a preliminary analysis for the results produced by the Logician model. The most notable problem is that it is unable to recall some facts for long or complex sentences. The last case in Table 6 exhibits such situation, where the fact UTF8gbsn(蔡竞,ISA,经济学博士)((Cai Jing, ISA, Ph. D. in economics) in English) is not recalled. This phenomenon indicates that the coverage mechanism may lose effectiveness in this situation. The second class of error is incomplete extraction, as exhibited in the third case in Table 6 . Due to the incomplete extraction, the left parts may interfere the generation of other facts, and result in nonsense results, which is the third class of error. We believe it is helpful to introduce extra rewards into the learning procedure of Logician to overcome these problems. For example, the reward could be the amount of remaining information left after the fact extraction, or the completeness of extracted facts. Developing such rewards and reinforcement learning algorithms using those rewards to refine Logician belongs to our future works.\n\nTable 4: Analysis of Components involved in Logician.\n\nTable 5: Human evaluation results on four types of information extraction tasks.\n\nQuestion: What's the size of the previous largest OpenIE dataset?" }, { "role": "assistant", "content": "{\"answer\": [\"3,200 sentences\"], \"evidence\": [\"Prior to the SAOKE data set, an annotated data set for OIE tasks with 3,200 sentences in 2 domains was released in BIBREF20 to evaluate OIE algorithms, in which the data set was said BIBREF20 \\u201c13 times larger than the previous largest annotated Open IE corpus\\u201d. The SAOKE data set is 16 times larger than the data set in BIBREF20 . To the best of our knowledge, SAOKE data set is the largest publicly available human labeled data set for OIE tasks. Furthermore, the data set released in BIBREF20 was generated from a QA-SRL data set BIBREF21 , which indicates that the data set only contains facts that can be discovered by SRL (Semantic Role Labeling) algorithms, and thus is biased, whereas the SAOKE data set is not biased to an algorithm. Finally, the SAOKE data set contains sentences and facts from a large number of domains.\"]}" } ]
science.scientific_lay_summarisation_plos_single_doc_summ
science.scientific_lay_summarisation_plos_single_doc_summ.7887
[ { "role": "user", "content": "You will be presented with the abstract, introduction, and discussion section from a biomedical research article. Your task is to create a summary that a layperson can understand, capturing the essence of the research article. Keep essential scientific terms, but ensure that the language remains clear and the concepts are explained in an uncomplicated manner.\n\nTitle: Decoding the chromatin proteome of a single genomic locus by DNA sequencing}\nArticle:\nAbstract:\nTranscription, replication, and repair involve interactions of specific genomic loci with many different proteins. How these interactions are orchestrated at any given location and under changing cellular conditions is largely unknown because systematically measuring protein–DNA interactions at a specific locus in the genome is challenging. To address this problem, we developed Epi-Decoder, a Tag-chromatin immunoprecipitation-Barcode-Sequencing( TAG-ChIP-Barcode-Seq) technology in budding yeast. Epi-Decoder is orthogonal to proteomics approaches because it does not rely on mass spectrometry( MS) but instead takes advantage of DNA sequencing. Analysis of the proteome of a transcribed locus proximal to an origin of replication revealed more than 400 interacting proteins. Moreover, replication stress induced changes in local chromatin proteome composition prior to local origin firing, affecting replication proteins as well as transcription proteins. Finally, we show that native genomic loci can be decoded by efficient construction of barcode libraries assisted by clustered regularly interspaced short palindromic repeats and CRISPR-associated protein 9( CRISPR/Cas9). Thus, Epi-Decoder is an effective strategy to identify and quantify in an unbiased and systematic manner the proteome of an individual genomic locus by DNA sequencing.\nIntroduction:\nThe chromatin at any given location in the genome is a dynamic entity that likely involves the interaction of many different proteins and nonprotein factors. The essential and complex processes of transcription, replication, and repair require major chromatin rearrangements to access and use the genome[1, 2]. Furthermore, the genome’s chromatin is under the influence of signals that can relay information of cellular events or states to the genome and vice versa[3, 4]. Fully understanding the chromatin regulatory mechanisms in the cell will require comprehensive knowledge of the full set of proteins that bind at individual genomic loci. Although many chromatin factors have already been identified by genetics and protein–protein interaction studies, direct, unbiased, and comprehensive analyses of chromatin interactions at specific genomic loci has remained a major challenge[5, 6]. A commonly considered strategy towards solving this problem is introducing an affinity handle at a locus of interest, purifying the locus( capture), and analysing the copurified proteins by mass spectrometry( MS)[7, 8]. However, a major challenge with capture MS is the need for very high levels of enrichment of the locus of interest versus the rest of the large genome while at the same time obtaining sufficient amounts of material for comprehensive and quantitative MS analysis[5, 6]. For example, in a model organism with a small genome such as yeast, purification of a 1 kb locus with its associated proteins from an entire genome of approximately 30 Mb requires a 30, 000-fold purification. 
Here, we present an independent approach, Epi-Decoder, with which the interactome of a single-copy locus is determined by DNA sequencing instead of MS. In this approach developed in yeast, a library of clones is generated in which each clone harbours at least one DNA barcode at a fixed locus and one protein tagged at its endogenous locus with a common epitope tag. Following chromatin immunoprecipitation( ChIP) for the common tag on a pool of cells, the barcodes of the coimmunoprecipitated DNA are counted by high-throughput sequencing, and the amount of barcode recovered serves as a read-out for the amount of the corresponding specific protein binding at the barcoded locus. In contrast with proteins, DNA barcodes can be amplified prior to counting, boosting the sensitivity of detection. Epi-Decoder of a single transcribed locus in yeast identified more than 400 chromatin-interacting proteins, enabled the identification of differences in protein binding between the 5’ and 3’ end of a gene, and demonstrated a chromatin rewiring in response to physiological changes. We also demonstrate that Epi-Decoder can be applied to a locus of interest by harnessing clustered regularly interspaced short palindromic repeats and CRISPR-associated protein 9( CRISPR/Cas9)-mediated genome engineering. Thus, Epi-Decoder provides an efficient method to provide a comprehensive map of the dynamic proteome of a single-copy locus. Moreover, it is an orthogonal approach to capture MS because acquisition of quantitative and qualitative information on protein binding does not involve MS but is obtained by DNA sequencing.\nDiscussion:\nStrategies aimed at identifying proteins bound at specific genomic loci frequently involve affinity capture combined with MS[5, 6]. Affinity handles generally involve engineering the locus by introducing binding sites for a tagged protein, capturing a native locus by using oligo-capture, or targeting inactive Cas9-fusion proteins. Important progress has been made for multicopy DNA loci such as repetitive DNA elements[7, 8]. Recently, oligo-capture[51] and CRISPR/Cas9 technology[52, 53] have been applied to identify proteins at single-copy genomic loci. However, the problem of assessing a unique locus in a comprehensive and quantitative manner by MS has not been solved, and many challenges remain, a major one being the high degree of purification of the locus of interest while obtaining sufficient material for good coverage in MS analysis[6]. Epi-Decoder provides a powerful and orthogonal strategy in which the very challenging proteomics problem of decoding the chromatin proteome of a single genomic locus is addressed by DNA sequencing. We demonstrate that Epi-Decoder enables an unbiased, comprehensive, and quantitative analysis of protein occupancy of a single locus in budding yeast. In addition to known and expected core chromatin proteins, we observed significant binding scores for many proteins that do not have canonical DNA-related functions. Among the unexpected factors are several metabolic enzymes, oxidative stress response factors, the yeast ubiquitin-activating enzyme, chaperones, proteasome subunits, and RNA-processing factors( S1 Table). Some of these unexpected factors are highly expressed; their binding score may reflect nonspecific binding. On the other hand, not all unexpected interactors are highly expressed or show equal binding scores. In addition, for a substantial number of the identified metabolic enzymes( e. 
g., glyceraldehyde 3-phosphate dehydrogenase/Tdh3, pyruvate kinase/Cdc19, homocitrate synthase/Lys20, Lys21) and protein chaperones( e. g., the HSP70 chaperones Ssa1 and Ssa2), there is evidence that they have active roles in transcription and DNA repair[54–58]. Furthermore, the binding scores of some metabolic enzymes changed upon treatment with HU( S2 Table), indicating that their interaction with chromatin is not constitutive. Indeed, there is a growing body of evidence that enzymes in the cytoplasm might have moonlighting functions in the nucleus, for example, by local supply of cofactors of histone-modifying enzymes[19, 54, 59–62]. Alternatively, the interaction with chromatin could somehow influence the activity of metabolic enzymes. The protein interactions that we describe here for 3 barcoded locations provide a rich resource for further unravelling the biological meaning of noncanonical chromatin interactions. Epi-Decoder is more than a discovery tool. It offers the possibility to compare different loci or to analyse 1 locus under different physiological or genetic conditions in a systematic and quantitative manner. Because all proteins are examined in a pooled fashion using the same affinity handle, differences in purification and cell backgrounds can be excluded and binding scores can be directly compared. This is exemplified by the basal transcription factors Tfa2 and Ssl2, which are quantitative outliers compared to their complex members( TFIIE and TFIIH, respectively), suggesting noncanonical functions. Another example is the chromatin rewiring we observed upon treatment with HU, which uncovers large-scale quantitative changes in interactions of transcription and replication proteins as well as other factors. We note that Epi-Decoder generates independent binding scores for proteins with( nearly) identical protein sequences but encoded by different genes, such as the histone proteins or other protein paralogs such as Ssa1 and Ssa2, which are 98% identical. This can provide extra information about the differential use of related proteins that cannot, or not easily, be distinguished by MS methods. Protein isoforms that arise through post-translational modification cannot be distinguished by Epi-Decoder. The Epi-Decoder strategy will be applicable to a broad range of fundamental epigenetic questions. A few requirements need to be met for successful application. First, Epi-Decoder requires a library of tagged proteins. For budding yeast, several tagged protein libraries are already available in addition to the carboxy-terminal TAP-tagged library used here[63–65]. This will allow for extending the current Epi-Decoder analyses to a nearly complete coverage of the proteome as well as to alternative( amino-terminal) tags, together enabling an unprecedented deep analysis of chromatin proteomes. Tagged protein libraries are also becoming available for many other organisms[66–71]. A second requirement for Epi-Decoder applications is the integration of DNA barcodes proximal to the locus of interest. Given their small size, DNA barcodes generally only minimally disrupt a locus, but this needs to be verified for the locus of interest, as is the case for strategies involving targeting affinity handles by Cas9 or other approaches. Fortunately, genomic barcoding has come within reach for many research questions due to the availability of emerging genome engineering strategies, in yeast and other organisms[46, 47, 72, 73]. 
Indeed, we demonstrate that CRIPSR/Cas9 can be used to generate markerless barcoded DNA libraries at high efficiency for a new locus of interest[46]. Furthermore, our findings with the barcoded native ADE2 locus demonstrate that Epi-Decoder is not restricted to haploid cells but can also be used in diploid cells in which the tagged allele of a protein and the barcoded allele are present in a heterozygous state. This greatly facilitates creating Epi-Decoder libraries in yeast as well as other cell systems. In summary, Epi-Decoder is powerful and versatile strategy for decoding the protein interactome of a single genomic locus. We expect that the Epi-Decoder strategy and derivatives thereof will enable the decoding of dynamic proteomes of different loci in different organisms and will be of high value for addressing a broad range of important chromatin biology questions in future applications." }, { "role": "assistant", "content": "DNA is packaged by proteins into a structure called chromatin . This packaging plays an important role in gene transcription , DNA replication , and DNA repair . The regulation of these processes is determined by the interplay of proteins that physically interact with specific loci . Although many chromatin proteins have been identified , it remains a challenge to comprehensively identify all the chromatin interactions at any specific locus by protein-based methods such as capture mass spectrometry ( MS ) . Here , we take advantage of DNA barcoding technologies and high-throughput sequencing to tackle this problem . We developed Epi-Decoder , a method that identifies chromatin–protein interaction at a single-copy locus in budding yeast . Epi-Decoder was successfully applied at 3 different genomic loci , resulting in comprehensive overviews of local chromatin interactomes . Most factors identified are known to be involved in chromatin-associated processes such as transcription and DNA replication . We also identified unexpected chromatin–protein interactions . Furthermore , capturing the dynamic chromatin interactome by changing physiological conditions provided insight into the local response to stress . In conclusion , Epi-Decoder is an effective tool for measuring the local proteome of a specific genomic locus by DNA sequencing ." } ]
science.bioasq_general_qa
science.bioasq_general_qa.783
[ { "role": "user", "content": "You will be shown a question, followed by excerpts from biomedical research papers. Please answer the question based on the provided context. Do not include any text in your response other than the answer.\n\nQuestion: What is the role of Gata3 in Th2 cells?\n\nContext: RHS6 coordinately regulates the Th2 cytokine genes by recruiting GATA3, SATB1, and IRF4.\nRHS6 recruited transcription factors GATA3, SATB1, and IRF4, which play important roles in expression of all three Th2 cytokine genes\nposttranslational modifications of Gata3 that control the regulation of IFNγ expression in memory Th2 cells.\nGATA3 is responsible for induction of T(h)2 differentiation and represses T(h)1 differentiation.\nGenome-wide analysis reveals unique regulation of transcription of Th2-specific genes by GATA3\nIL-4-mediated STAT6 activation induces high levels of transcription of GATA3, a master regulator of Th2 cell differentiation, and enforced expression of GATA3 induces Th2 cytokine expression.\nT-bet and GATA3 orchestrate Th1 and Th2 differentiation through lineage-specific targeting of distal regulatory elements.\nT-bet and GATA3 regulate the CD4+ T cell Th1/Th2 cell fate decision but little is known about the interplay between these factors outside of the murine Ifng and Il4/Il5/Il13 loci\nWe propose these aspects of T-bet and GATA3 function are important for Th1/Th2 differentiation and for understanding transcription factor interactions in other T cell lineage decisions.\nT helper 2 cells regulate inflammatory responses to helminth infections while also mediating pathological processes of asthma and allergy. IL-4 promotes Th2 development by inducing the expression of the GATA3 transcription factor, and the Th2 phenotype is stabilized by a GATA3-dependent autoregulatory loop.\nType I IFN reverses human Th2 commitment and stability by suppressing GATA3.\nOur results suggest that the balance of Runx3 and GATA3 is one factor that influences the manifestation of CD4(+) cells as the Th1 or Th2 phenotypes.\nMoreover, the expression of the Th2 transcription factor, GATA3, was significantly reduced in PKCtheta-deficient T cells.\nImpaired GATA3-dependent chromatin remodeling and Th2 cell differentiation leading to attenuated allergic airway inflammation in aging mice\nIn addition, reduced expression of GATA3 was detected in developing Th2 cells.\ntransduction of Runx1 into wild-type T cells caused a complete attenuation of Th2 differentiation and was accompanied by the cessation of GATA3 expression\nA critical role for NF-kappa B in GATA3 expression and TH2 differentiation in allergic airway inflammation." }, { "role": "assistant", "content": "RHS6 coordinately regulates the Th2 cytokine genes by recruiting GATA3, SATB1, and IRF4. RHS6 recruited transcription factors GATA3, SATB1, and IRF4, which play important roles in expression of all three Th2 cytokine genes IL-4-mediated STAT6 activation induces high levels of transcription of GATA3, a master regulator of Th2 cell differentiation, and enforced expression of GATA3 induces Th2 cytokine expression. Gata3 coordinates the progression of H3K9me2/H3K27me3 by mediating histone deacetylation and chromatin remodeling in Th2 cells. Gata3 plays an important role in this process by repressing the expression of histone H3 in Th1 cells and by promoting chromatin condensation during DNA damage-induced DNA damage. RHS6 coordinately regulates the Th2 cytokine genes by recruiting GATA3, SATB1, and IRF4. 
RHS6 recruited transcription factors GATA3, SATB1, and IRF4, which play important roles in expression of all three Th2 cytokine genes. posttranslational modifications of Gata3 that control the regulation of IFNg expression in memory Th2 cells. GATA3 is responsible for induction of T(h)2 differentiation and represses T(h)1 differentiation. Posttranslational modifications of Gata3 control the regulation of IFNg expression in memory Th2 cells." } ]
science.scientific_papers_summarization_single_doc_pubmed
science.scientific_papers_summarization_single_doc_pubmed.79003
[ { "role": "user", "content": "You will be presented with the full text of a biomedical research paper. Please write an abstract for this paper. Your response should include the abstract and no additional text.\n\nPaper text:\nspinal intervertebral fusion remains the gold standard for treating spinal disease . since the surgery changes the original structure of the spine\n, the mechanical environment can also be changed , and concomitant symptoms will occur , such as adjacent segment degeneration ( asd ) .\nfound the incidence of asd to be 3% per year , of 300 cases in 10 years of follow - up .\nreported the incidence of asd to be up to 92% based on imaging diagnosis . beyond doubt , the mechanical changes in the environment are the primary reason for asd , but the exact mechanism remains unclear .\nbased on this , some researchers believe that load increase on the facet joint is the initiating factor for asd .\nreported that the pressure increased significantly after fusion ; double segment fusion makes the pressure three times greater than in an intact joint .\nfound that facet joint load ( fjl ) increased ipsilaterally during lateral bending , and contralaterally during axial rotation after disc prosthesis implantation .\ndue to limitations in measurement conditions , no distribution or growth mode for fjl were reported in vitro until now .\nbauman reported that there was no significant impact on the fjl after prodisc - c implantation , during spinal bending .\nfound that during loading motions such as lateral bending and rotation , there were no significant differences after prosthesis implantation as compared with the intact joint .\nthe mechanisms of action were different during various motions , as well as facet joint contact mode and load .\nthus far , no such studies were reported for the change in adjacent segment facet joints with motion after intervertebral fusion . in this study\n, human cadaveric cervical vertebra samples were used for studying the load rate increase and the force distribution of the adjacent segment facet joint after single and double segment fusion .\nthe hypothesis is that the loading rate of adjacent segmental facet joint after fusion will be different in various motions and will be increased with number of fusion segments .\nthree groups of six cadaveric cervical vertebrae ( c2c7 ) were used for biomechanical testing .\nsamples were thawed from 20 c to 4 c for 4 h , and defrosted for 20 h at room temperature in normal saline .\nall samples were obtained under a body donation agreement , and informed consent was obtained from relatives .\neach sample was placed in three groups in the following order : the intact group , the one segment fusion group ( c45 ; fus1 ) , and the two segments fusion group ( c45 and c56 ; fus2 ; figure 1 ) .\nanterior cervical discectomy and fusion were performed on the c45 level in the fus1 group and the c46 levels for the fus2 group .\nthe operative procedure is shown in the attachment ( titanium alloy internal fixators , shandong weigao , china ) , and the adjacent segment is c34 .\nthe cranial and caudal vertebrae of each specimen were embedded in an upright tensionless position using self - curing denture acrylic ( shanghai new century dental material co. ltd . ,\na 6 of freedom robot arm ( yaskawa motoman , tokyo , japan ) which enabled execution of complex motion patterns was used to manipulate the specimen via predefined motion or load on the cranial vertebrae . 
a sensitive force / torque sensor ( mini 45 , ati , usa )\nwas mounted at the robot 's end - effector , which enabled simultaneous measurement of applied forces and torques during load - controlled robot movement .\nmotion of each vertebra was acquired using an optical motion capture system ( optotrak - certus / ndi , waterloo / ontario , canada ) with active infrared light - emitting diode markers .\nboston , ma , usa ) was used for pressure measurement embedded in the facet joint .\nthree groups of test ( a ) intact , ( b ) c4 - 5 fusion ( c ) c4 - 6 fusion .\nclamps ; 4 . led marks ; 5 . samples ; 6 . pressure sensors .\naccurate positioning of the upper vertebra relative to the clamp was essential to avoid misalignment of anatomical planes of the specimen and robot end - effector plane .\nall groups were tested in flexion , extension , bending , and rotation in the intact and fusion groups , respectively .\nthe loading protocol of panjabi was used and was specialized for studying adjacent segment problems .\nthis hybrid method was based on the pure moment loading mode , with the intact one loaded first with force - control , and the fused mode of the same sample with a position - control method . according to the method of woo et al .\n, disperse pure moment points were found first , and then the automatic interpolation function of the robot was used to fulfill continuous pure moment loading .\nthe intact group was loaded to 2 nm , and the fused group was loaded with the same range of motion ( rom ) with the intact group at 2 nm . all operative procedures were performed by the same surgeon .\npaired comparisons between treatment groups were made using student 's paired t - test . a p < 0.05 was considered as statistically significant .\naccurate positioning of the upper vertebra relative to the clamp was essential to avoid misalignment of anatomical planes of the specimen and robot end - effector plane .\nall groups were tested in flexion , extension , bending , and rotation in the intact and fusion groups , respectively .\nthe loading protocol of panjabi was used and was specialized for studying adjacent segment problems .\nthis hybrid method was based on the pure moment loading mode , with the intact one loaded first with force - control , and the fused mode of the same sample with a position - control method . according to the method of woo et al .\n, disperse pure moment points were found first , and then the automatic interpolation function of the robot was used to fulfill continuous pure moment loading .\nthe intact group was loaded to 2 nm , and the fused group was loaded with the same range of motion ( rom ) with the intact group at 2 nm . all operative procedures were performed by the same surgeon . paired comparisons between treatment groups were made using student 's paired t - test .\nthe rom of operative levels before and after fusion is shown in table 1 . 
the adjacent fjls ( c3 - 4 )\nwere measured under the three states : intact , single fused , and double fused .\nthe fjl of the fusion groups was larger than that of the intact group , and the maximum fjl in the fusion groups was significantly higher than in the intact group during extension ( p < 0.05 ) [ figures 3 and 4 ] .\nfacet joint load increased after fusion and positively correlated with the number of fusion segments .\nmoreover , as shown in figures 5 and 6 , the growth speed of the fjl also increased with the number of fusion segments , in which extension was the fastest of the motions ( p < 0.05 ) .\nthe response pressure - time curve of the joints of one sample is shown in figure 7 ; the pressure difference under continuous loading can be seen , with the following magnitude sequence : rotation , extension , lateral bending , and flexion .\nthere was no pressure signal when loading with flexion , as the facet joints were open during this motion . during extension ,\nthe center of force extended from the distal tip to the proximal roots of the facet joints . during lateral bending ,\nthe pressure center transferred from the bending side to the other side and the center appeared to be of the banding type . during rotation , fjl increased contralaterally , and the center of force spread from the contralateral part of the forced articular side to the entire area .\n( figure caption : transformation of facet joints for three motions , ( a ) lateral bending , ( b ) extension , ( c ) rotation . )\nadjacent segment degeneration after fusion is attracting increasing attention in both clinical and research fields .\nsome researchers suggest that mechanical change of the adjacent segment facet joints is the initial factor for the asd .\nthe intervertebral disc and the two facet joints make up the constrained three balance points during spinal movements .\nthe instantaneous center of rotation transfers within the area surrounded by these three points .\nany mechanical changes of the facet joints may affect the structural stability and health of the joints , especially the occurrence of spinal degeneration and osteoarthritis ( oa ) .\nfirst , facet joints are deeply located in the inner spine and wrapped by capsules , so it is impossible to obtain pressure information without destroying the capsules for in vitro testing . as discussed in the cited literature , damage to the capsule can incur an abnormal change in the facet joints .
the cutting and suture methods used in our study can measure pressure data with the sensor slice inside the joint , with minimal damage to the capsules .\nsecond , the tecscan 6900r pressure sensor used for testing is optimal for facet joints .\nthe plastic package can be used in a damp environment , and the flexional properties can ensure real - time measurement during the entire movement process of the test .\nthese advantages overcome other limitations of traditional methods , such as fuji pressure sensitive paper , and greatly reduce the influence of environmental factors .\nthe pressure during flexion is zero , which is consistent with the finite element method ( fem ) computation reported by faizan et al .\nthe fem results show that the pressure of the adjacent joint increased by three times after double - segment fusion , while our results show a 50 - 80% increase .\nthe reasons may be that the degree of freedom of the fusion segment could be completely constrained while using fem ; however , for the in vitro test , the motion can not be completely removed , so the operative segment can move to some extent after fusion .\nsecond , the same samples were used for testing in the three groups for comparison ; each sample required being encountered twice for surgery and three times for testing .\nrotation results were identified with the fem , at approximately 50 n. schmorl 's nodes are associated with heavy labor occupation factors .\nlumbar disc herniation has been shown to be significantly associated with cumulative exposures during a physical workload .\nthe mechanisms for these spinal diseases are related to different types of cumulative motions and postures .\nwe found that the load rates of the facet joints are different for different motions and that there may be a good explanation for this .\nit has been shown that the articular cartilage is very sensitive to the loads that occur during daily normal or specialized activities . fast or impact loading may diminish proteoglycan synthesis and damage tissue through necrosis .\nthe initial failure of subchondral bone is suggested to be the cause of oa and joint degeneration .\na quick change in stress rate may lead to very fast crack propagation , which may lower the toughness of the articular surface .\nfailure of the joints may lead to accumulation of damage in other joint components such as the bone , muscles , ligaments , tendons , and nerves , which participate in load transmission . in this study\nadjacent segment degeneration is always concomitant with facet joint degeneration , although it remains inconclusive whether degeneration of the facet joint or disc occurs first .\nwe found that different motions may affect the force and force velocity of the facet joint , which may incur facet joint degeneration .\nthis finding suggests that asd may be related to the exercise habit before and after surgery ; more studies are required to confirm this hypothesis ." }, { "role": "assistant", "content": "background : the cause of the adjacent segment degeneration ( asd ) after fusion remains unknown . \n it is reported that adjacent facet joint stresses increase after anterior cervical discectomy and fusion . \n this increase of stress rate may lead to tissue injury .
\n thus far , the load rate of the adjacent segment facet joint after fusion remains unclear . methods : six c2 - c7 cadaveric spine specimens were loaded under four motion modes : flexion , extension , rotation , and lateral bending , with a pure moment using a 6 degrees of freedom robot arm combined with an optical motion analysis system . \n the tecscan pressure test system was used for testing facet joint pressure . results : the contact mode of the facet joints and distributions of the force center during different motions were recorded . \n the adjacent segment facet joint forces increased faster after fusion , compared with intact conditions . while the magnitude of pressures increased , there was no difference in distribution modes before and after fusion . \n no pressures were detected during flexion . \n the average growth velocity during extension was the fastest and was significantly faster than lateral bending . conclusions : one of the reasons for cartilage injury was the increasing stress rate of loading . \n this implies that asd after fusion may be related to habitual movement before and after fusion . \n more and faster extension is disadvantageous for the facet joints and should be reduced as much as possible ." } ]
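The paper text in the record above compares groups with Student's paired t-test at a p < 0.05 threshold. As a minimal Python sketch of that comparison — with invented facet-joint loads, since the record does not reproduce the paper's raw measurements:

from scipy import stats

# Illustrative facet-joint loads (N) for six specimens measured intact
# and again after single-segment fusion; these numbers are invented for
# the example and are not data from the paper.
intact = [38.2, 41.5, 36.9, 44.0, 39.7, 42.3]
fused = [49.8, 55.1, 47.6, 58.9, 52.4, 54.0]

t_stat, p_value = stats.ttest_rel(intact, fused)  # paired t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:  # the paper's significance threshold
    print("difference significant at p < 0.05")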
science.data_reco_mcq_sc
science.data_reco_mcq_sc.2116
[ { "role": "user", "content": "Given a short query with keyphrases about a research idea in 2020 in the domain of artificial intelligence, recommend the relevant datasets to validate the research idea from options provided. A short description of the datasets is also provided to help you.\n\nQuery: MMT is an unsupervised domain adaptation method for person re-ID. We propose to improve the model'\n\nKeyphrases: person re-identification images autonomous driving\n\nDataset description:\nSyn2Real: Syn2Real, a synthetic-to-real visual domain adaptation benchmark meant to encourage further development of robust domain transfer methods. The goal is to train a model on a synthetic \"source\" domain and then update it so that its performance improves on a real \"target\" domain, without using any target annotations.\nMarket-1501: Market-1501 is a large-scale public benchmark dataset for person re-identification. It contains 1501 identities which are captured by six different cameras, and 32,668 pedestrian image bounding-boxes obtained using the Deformable Part Models pedestrian detector.\nLibri-Adapt: Libri-Adapt aims to support unsupervised domain adaptation research on speech recognition models.\nTHEODORE: Recent work about synthetic indoor datasets from perspective views has shown significant improvements of object detection results with Convolutional Neural Networks(CNNs). In this paper, we introduce THEODORE: a novel, large-scale indoor dataset containing 100,000 high- resolution diversified fisheye images with 14 classes.\n\nOptions:- A: Syn2Real B: Market-1501 C: Libri-Adapt D: THEODORE" }, { "role": "assistant", "content": "Market-1501" } ]
science.covid_deepset_qa
science.covid_deepset_qa.1591
[ { "role": "user", "content": "You will be shown a paragraph from a biomedical research paper, followed by a question about the paper. Please extract text from the paragraph to answer the question. Do not include any text in your response other than the answer.\n\nParagraph: Despite being a major cause of exacerbation, reports linking respiratory viruses to acute exacerbations only started to emerge in the late 1950s (Pattemore et al., 1992) ; with bacterial infections previously considered as the likely culprit for acute exacerbation (Stevens, 1953; Message and Johnston, 2002) . However, with the advent of PCR technology, more viruses were recovered during acute exacerbation events and reports implicating their role emerged in the late 1980s (Message and Johnston, 2002) . Rhinovirus (RV) and respiratory syncytial virus (RSV) are the predominant viruses linked to the development and exacerbation of chronic airway inflammatory diseases (Jartti and Gern, 2017) . Other viruses such as parainfluenza virus (PIV), influenza virus (IFV) and adenovirus (AdV) have also been implicated in acute exacerbations but to a much lesser extent (Johnston et al., 2005; Oliver et al., 2014; Ko et al., 2019) . More recently, other viruses including bocavirus (BoV), human metapneumovirus (HMPV), certain coronavirus (CoV) strains, a specific enterovirus (EV) strain EV-D68, human cytomegalovirus (hCMV) and herpes simplex virus (HSV) have been reported as contributing to acute exacerbations . The common feature these viruses share is that they can infect both the upper and/or lower airway, further increasing the inflammatory conditions in the diseased airway (Mallia and Johnston, 2006; Britto et al., 2017) . Respiratory viruses primarily infect and replicate within airway epithelial cells . During the replication process, the cells release antiviral factors and cytokines that alter local airway inflammation and airway niche (Busse et al., 2010) . In a healthy airway, the inflammation normally leads to type 1 inflammatory responses consisting of activation of an antiviral state and infiltration of antiviral effector cells. This eventually results in the resolution of the inflammatory response and clearance of the viral infection (Vareille et al., 2011; Braciale et al., 2012) . However, in a chronically inflamed airway, the responses against the virus may be impaired or aberrant, causing sustained inflammation and erroneous infiltration, resulting in the exacerbation of their symptoms (Mallia and Johnston, 2006; Dougherty and Fahy, 2009; Busse et al., 2010; Britto et al., 2017; Linden et al., 2019) . This is usually further compounded by the increased susceptibility of chronic airway inflammatory disease patients toward viral respiratory infections, thereby increasing the frequency of exacerbation as a whole (Dougherty and Fahy, 2009; Busse et al., 2010; Linden et al., 2019) . Furthermore, due to the different replication cycles and response against the myriad of respiratory viruses, each respiratory virus may also contribute to exacerbations via different mechanisms that may alter their severity. Hence, this review will focus on compiling and collating the current known mechanisms of viral-induced exacerbation of chronic airway inflammatory diseases; as well as linking the different viral infection pathogenesis to elucidate other potential ways the infection can exacerbate the disease.
The review will serve to provide further understanding of viral induced exacerbation to identify potential pathways and pathogenesis mechanisms that may be targeted as supplementary care for management and prevention of exacerbation. Such an approach may be clinically significant due to the current scarcity of antiviral drugs for the management of viral-induced exacerbations. This will improve the quality of life of patients with chronic airway inflammatory diseases. Once the link between viral infection and acute exacerbations of chronic airway inflammatory disease was established, there have been many reports on the mechanisms underlying the exacerbation induced by respiratory viral infection. Upon infecting the host, viruses evoke an inflammatory response as a means of counteracting the infection. Generally, infected airway epithelial cells release type I (IFNα/β) and type III (IFNλ) interferons, cytokines and chemokines such as IL-6, IL-8, IL-12, RANTES, macrophage inflammatory protein 1α (MIP-1α) and monocyte chemotactic protein 1 (MCP-1) (Wark and Gibson, 2006; Matsukura et al., 2013) . These, in turn, enable infiltration of innate immune cells and of professional antigen presenting cells (APCs) that will then in turn release specific mediators to facilitate viral targeting and clearance, including type II interferon (IFNγ), IL-2, IL-4, IL-5, IL-9, and IL-12 (Wark and Gibson, 2006; Singh et al., 2010; Braciale et al., 2012) . These factors heighten local inflammation and the infiltration of granulocytes, T-cells and B-cells (Wark and Gibson, 2006; Braciale et al., 2012) . The increased inflammation, in turn, worsens the symptoms of airway diseases. Additionally, in patients with asthma and patients with CRS with nasal polyp (CRSwNP), viral infections such as RV and RSV promote a Type 2-biased immune response (Becker, 2006; Jackson et al., 2014; Jurak et al., 2018) . This amplifies the basal type 2 inflammation resulting in a greater release of IL-4, IL-5, IL-13, RANTES and eotaxin and a further increase in eosinophilia, a key pathological driver of asthma and CRSwNP (Wark and Gibson, 2006; Singh et al., 2010; Chung et al., 2015; Dunican and Fahy, 2015) . Increased eosinophilia, in turn, worsens the classical symptoms of disease and may further lead to life-threatening conditions due to breathing difficulties. On the other hand, patients with COPD and patients with CRS without nasal polyp (CRSsNP) are more neutrophilic in nature due to the expression of neutrophil chemoattractants such as CXCL9, CXCL10, and CXCL11 (Cukic et al., 2012; Brightling and Greening, 2019) . The pathology of these airway diseases is characterized by airway remodeling due to the presence of remodeling factors such as matrix metalloproteinases (MMPs) released from infiltrating neutrophils (Linden et al., 2019) . Viral infections in such conditions will then cause increase neutrophilic activation; worsening the symptoms and airway remodeling in the airway thereby exacerbating COPD, CRSsNP and even CRSwNP in certain cases (Wang et al., 2009; Tacon et al., 2010; Linden et al., 2019) . An epithelial-centric alarmin pathway around IL-25, IL-33 and thymic stromal lymphopoietin (TSLP), and their interaction with group 2 innate lymphoid cells (ILC2) has also recently been identified (Nagarkar et al., 2012; Hong et al., 2018; Allinne et al., 2019) . 
IL-25, IL-33 and TSLP are type 2 inflammatory cytokines expressed by the epithelial cells upon injury to the epithelial barrier (Gabryelska et al., 2019; Roan et al., 2019) . ILC2s are a group of lymphoid cells lacking both B and T cell receptors but play a crucial role in secreting type 2 cytokines to perpetuate type 2 inflammation when activated (Scanlon and McKenzie, 2012; Li and Hendriks, 2013) . In the event of viral infection, cell death and injury to the epithelial barrier will also induce the expression of IL-25, IL-33 and TSLP, with heighten expression in an inflamed airway (Allakhverdi et al., 2007; Goldsmith et al., 2012; Byers et al., 2013; Shaw et al., 2013; Beale et al., 2014; Jackson et al., 2014; Uller and Persson, 2018; Ravanetti et al., 2019) . These 3 cytokines then work in concert to activate ILC2s to further secrete type 2 cytokines IL-4, IL-5, and IL-13 which further aggravate the type 2 inflammation in the airway causing acute exacerbation (Camelo et al., 2017) . In the case of COPD, increased ILC2 activation, which retain the capability of differentiating to ILC1, may also further augment the neutrophilic response and further aggravate the exacerbation (Silver et al., 2016) . Interestingly, these factors are not released to any great extent and do not activate an ILC2 response during viral infection in healthy individuals (Yan et al., 2016; Tan et al., 2018a) ; despite augmenting a type 2 exacerbation in chronically inflamed airways (Jurak et al., 2018) . These classical mechanisms of viral induced acute exacerbations are summarized in Figure 1 . As integration of the virology, microbiology and immunology of viral infection becomes more interlinked, additional factors and FIGURE 1 | Current understanding of viral induced exacerbation of chronic airway inflammatory diseases. Upon virus infection in the airway, antiviral state will be activated to clear the invading pathogen from the airway. Immune response and injury factors released from the infected epithelium normally would induce a rapid type 1 immunity that facilitates viral clearance. However, in the inflamed airway, the cytokines and chemokines released instead augmented the inflammation present in the chronically inflamed airway, strengthening the neutrophilic infiltration in COPD airway, and eosinophilic infiltration in the asthmatic airway. The effect is also further compounded by the participation of Th1 and ILC1 cells in the COPD airway; and Th2 and ILC2 cells in the asthmatic airway. Frontiers in Cell and Developmental Biology | www.frontiersin.org mechanisms have been implicated in acute exacerbations during and after viral infection (Murray et al., 2006) . Murray et al. (2006) has underlined the synergistic effect of viral infection with other sensitizing agents in causing more severe acute exacerbations in the airway. This is especially true when not all exacerbation events occurred during the viral infection but may also occur well after viral clearance (Kim et al., 2008; Stolz et al., 2019) in particular the late onset of a bacterial infection (Singanayagam et al., 2018 (Singanayagam et al., , 2019a . In addition, viruses do not need to directly infect the lower airway to cause an acute exacerbation, as the nasal epithelium remains the primary site of most infections. 
Moreover, not all viral infections of the airway will lead to acute exacerbations, suggesting a more complex interplay between the virus and upper airway epithelium which synergize with the local airway environment in line with the \"united airway\" hypothesis (Kurai et al., 2013) . On the other hand, viral infections or their components persist in patients with chronic airway inflammatory disease (Kling et al., 2005; Wood et al., 2011; Ravi et al., 2019) . Hence, their presence may further alter the local environment and contribute to current and future exacerbations. Future studies should be performed using metagenomics in addition to PCR analysis to determine the contribution of the microbiome and mycobiome to viral infections. In this review, we highlight recent data regarding viral interactions with the airway epithelium that could also contribute to, or further aggravate, acute exacerbations of chronic airway inflammatory diseases. Patients with chronic airway inflammatory diseases have impaired or reduced ability of viral clearance (Hammond et al., 2015; McKendry et al., 2016; Akbarshahi et al., 2018; Gill et al., 2018; Wang et al., 2018; Singanayagam et al., 2019b) . Their impairment stems from a type 2-skewed inflammatory response which deprives the airway of important type 1 responsive CD8 cells that are responsible for the complete clearance of virusinfected cells (Becker, 2006; McKendry et al., 2016) . This is especially evident in weak type 1 inflammation-inducing viruses such as RV and RSV (Kling et al., 2005; Wood et al., 2011; Ravi et al., 2019) . Additionally, there are also evidence of reduced type I (IFNβ) and III (IFNλ) interferon production due to type 2-skewed inflammation, which contributes to imperfect clearance of the virus resulting in persistence of viral components, or the live virus in the airway epithelium (Contoli et al., 2006; Hwang et al., 2019; Wark, 2019) . Due to the viral components remaining in the airway, antiviral genes such as type I interferons, inflammasome activating factors and cytokines remained activated resulting in prolong airway inflammation (Wood et al., 2011; Essaidi-Laziosi et al., 2018) . These factors enhance granulocyte infiltration thus prolonging the exacerbation symptoms. Such persistent inflammation may also be found within DNA viruses such as AdV, hCMV and HSV, whose infections generally persist longer (Imperiale and Jiang, 2015) , further contributing to chronic activation of inflammation when they infect the airway (Yang et al., 2008; Morimoto et al., 2009; Imperiale and Jiang, 2015; Lan et al., 2016; Tan et al., 2016; Kowalski et al., 2017) .\n\nQuestion: How does the infected airway cell respond?" }, { "role": "assistant", "content": "Generally, infected airway epithelial cells release type I (IFNα/β) and type III (IFNλ) interferons, cytokines and chemokines such as IL-6, IL-8, IL-12, RANTES, macrophage inflammatory protein 1α (MIP-1α) and monocyte chemotactic protein 1 (MCP-1) (Wark and Gibson, 2006; Matsukura et al., 2013) ." } ]
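The covid_deepset_qa record above requires the answer to be text extracted verbatim from the paragraph, so one cheap sanity check is a substring test. A minimal sketch — the function is hypothetical, introduced here only for illustration:

def answer_is_extractive(paragraph: str, answer: str) -> bool:
    # An extractive answer must appear verbatim in the paragraph;
    # strip outer whitespace but otherwise compare the text exactly.
    return answer.strip() in paragraph

paragraph = "Generally, infected airway epithelial cells release type I interferons."
answer = "infected airway epithelial cells release type I interferons"
print(answer_is_extractive(paragraph, answer))  # True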
science.gnormplus_ner
science.gnormplus_ner.393
[ { "role": "user", "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Gene\"].\n\nPlease return the output as a JSON object of the format: {\"Gene\": [\"SSX\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nGranulin mutations associated with frontotemporal lobar degeneration and related disorders: an update. Mutations in the gene encoding granulin (HUGO gene symbol GRN, also referred to as progranulin, PGRN), located at chromosome 17q21, were recently linked to tau-negative ubiquitin-positive frontotemporal lobar degeneration (FTLDU). Since then, 63 heterozygous mutations were identified in 163 families worldwide, all leading to loss of functional GRN, implicating a haploinsufficiency mechanism. Together, these mutations explained 5 to 10% of FTLD. The high mutation frequency, however, might still be an underestimation because not all patient samples were examined for all types of loss-of-function mutations and because several variants, including missense mutations, have a yet uncertain pathogenic significance. Although the complete phenotypic spectrum associated with GRN mutations is not yet fully characterized, it was shown that it is highly heterogeneous, suggesting the influence of modifying factors. A role of GRN in neuronal survival was suggested but the exact mechanism by which neurodegeneration and deposition of pathologic brain inclusions occur still has to be clarified." }, { "role": "assistant", "content": "{\"Gene\": [\"Granulin\", \"GRN\", \"granulin\", \"PGRN\", \"progranulin\"]}" } ]
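The gnormplus record above, like the other NER records in this file, expects a JSON object mapping each requested entity type to a list of extracted strings. A minimal Python sketch of a shape check follows; the helper name and the strictness of the assertions are our own choices, not dataset tooling:

import json

def validate_ner_output(raw: str, entity_types: list) -> dict:
    # Parse the model response and enforce the format stated in the
    # prompt: a JSON object whose keys are the requested entity types
    # and whose values are lists of extracted strings.
    parsed = json.loads(raw)
    assert isinstance(parsed, dict), "output must be a JSON object"
    for key, values in parsed.items():
        assert key in entity_types, "unexpected entity type: " + key
        assert isinstance(values, list), key + " must map to a list"
        assert all(isinstance(v, str) for v in values)
    return parsed

# Checked against the gnormplus record above:
raw = '{"Gene": ["Granulin", "GRN", "granulin", "PGRN", "progranulin"]}'
print(validate_ner_output(raw, ["Gene"]))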
science.drug_combo_extraction_re
science.drug_combo_extraction_re.578
[ { "role": "user", "content": "You will be shown a paragraph from a biomedical research article. Your task is to extract all drug combinations appearing in a specific sentence within the paragraph. Each drug combination should be assigned to one of the following three categories:\n\n- POS: Indicates a positive interaction between the drugs in the combination; for instance, drugs that have an additive or synergistic effect.\n- NEG: Indicates a negative interaction between the drugs in the combination; for instance, combining these drugs might result in adverse side effects, or might reduce their efficacy.\n- COMB: Indicates that the drugs are given in combination, but the type of interaction between the drugs cannot be determined based on the provided context.\n\nPlease format your answer as a json array. Each entry in the array should itself be an array formatted as [<drug_1>, <drug_2>, ..., <drug_n>, <category>] where <category> is one of the three categories listed above and <drug_1>, ..., <drug_n> are drug names copied verbatim from the input sentence. Different combinations may involve different numbers of drugs; in other words, the `n` in <drug_n> may vary.\n\nAs an example, the following output describes two extracted drug combinations; one between two drugs, and one between three drugs:\n\n[[\"artesunate\", \"artemether\", \"COMB\"], [\"Nifedipine\", \"Labetalol\", \"Prazosin\", \"POS\"]]\n\nThe paragraph and specific sentence are below. You should only extract relationships expressed in the specific sentence; the paragraph is provided for context.\n\nIn your output, return only the json array and no other text. If no drug combinations appear, return an empty array `[]`.\n\nparagraph: Real-World Palbociclib Use in HR+/HER2- Advanced Breast Cancer in Canada: The IRIS Study. Palbociclib is a selective cyclin-dependent kinase ( CDK ) 4/6 inhibitor used in combination with aromatase inhibitors or fulvestrant for patients with hormone receptor-positive ( HR+ ) human epidermal growth factor receptor 2 (HER2)-negative advanced/metastatic breast cancer ( ABC/MBC ) . palbociclib was the first CDK 4/6 inhibitor approved for HR+/HER2- ABC/MBC treatment in Canada in combination with letrozole (P+L) as an initial endocrine-based therapy (approved March 2016), or with fulvestrant (P+F) following disease progression after prior endocrine therapy (approved May 2017). The Ibrance Real World Insights (IRIS) study (NCT03159195) collected real-world outcomes data for palbociclib-treated patients in several countries, including Canada. ### methods This retrospective chart review included women with HR+/HER2- ABC/MBC receiving P+L or P+F in Canada. Physicians reviewed medical records for up to 14 patients, abstracting demographic and clinical characteristics, treatment patterns, and clinical outcomes. Progression-free rates (PFRs) and survival rates (SRs) at 6, 12, 18, and 24 months were estimated via Kaplan-Meier analysis.
### results Thirty-three physicians examined medical records for 247 patients (P+L, ### conclusions Dose reduction rates were low and PFR and SR were high in this Canadian real-world assessment of P+L and P+F treatments, suggesting that palbociclib combinations are well tolerated and effective.\n\nsentence: Palbociclib is a selective cyclin-dependent kinase ( CDK ) 4/6 inhibitor used in combination with aromatase inhibitors or fulvestrant for patients with hormone receptor-positive ( HR+ ) human epidermal growth factor receptor 2 (HER2)-negative advanced/metastatic breast cancer ( ABC/MBC ) ." }, { "role": "assistant", "content": "[[\"Palbociclib\", \"fulvestrant\", \"POS\"]]" } ]
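The drug-combination record above asks for a stricter structure: a json array whose entries list two or more verbatim drug names followed by one of POS, NEG, or COMB. A rough checker for that format — again a sketch of our own, not official tooling:

import json

VALID_CATEGORIES = {"POS", "NEG", "COMB"}

def check_combinations(raw: str) -> list:
    combos = json.loads(raw)
    assert isinstance(combos, list), "output must be a json array"
    for entry in combos:
        # Each entry holds two or more drug names plus one category.
        assert isinstance(entry, list) and len(entry) >= 3, entry
        *drugs, category = entry
        assert category in VALID_CATEGORIES, "bad category: " + category
        assert all(isinstance(d, str) for d in drugs)
    return combos

print(check_combinations('[["Palbociclib", "fulvestrant", "POS"]]'))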
science.scientific_papers_summarization_single_doc_arxiv
science.scientific_papers_summarization_single_doc_arxiv.189302
[ { "role": "user", "content": "You will be presented with the full text of a science research paper. Please write an abstract for this paper. Your response should include the abstract and no additional text.\n\nPaper text:\none of the most intriguing discoveries made in solar physics early last century was the phenomenon known as ellerman bombs .\nthe common use of the h@xmath0 line to monitor the sun allowed the discovery of features with enhanced emission peaks in the wings of this line , not affecting the central absorption core ( @xcite @xcite ) .\noff - band ( @xmath3 1 ) h@xmath0 filtergrams showed these features to be spatially point - like structures with a size of @xmath31 , lasting for little more than 10 minutes ( see @xcite @xcite and references therein ) .\nas they are virtually absent from line core filtergrams , they have always been associated to events taking place in the low chromosphere .\nthis is also confirmed by their clear correlation with brightenings seen in the 1600 continuum images from the trace satellite ( @xcite @xcite ) .\nsometimes , an x - ray transient brightening can also be observed at the same location ( @xcite @xcite ) .\nrecent reviews on ellerman bombs are given by @xcite ( @xcite ) and in the introduction of @xcite ( @xcite ) .\nthe physical process behind ellerman bombs has remained elusive . the blue asymmetry observed by the h@xmath0 profiles ( excess emission in the blue wing )\nhas sometimes been associated to outward motions of the solar material ( @xcite @xcite ) . but using two - component models separated along the line of sight , @xcite ( @xcite ) showed that the asymmetry could be originated from , either , hot emitting material moving upwards or cold absorbing material falling from the top of the chromosphere .\nthe relation to the magnetic field configuration has been controversial as well .\nellerman bombs have been detected in two distinct solar magnetic scenarios , but always inside active regions .\nthey occur in emerging regions between the opposite polarity footpoints and also surrounding isolated sunspots , beyond the penumbral boundary . for this later case , @xcite ( @xcite )\nhave shown that they can be identified with the so - called moving magnetic features ( mmfs ) that are seen in the moat region around sunspots ( for a description of mmfs see @xcite @xcite ) .\nwhereas this points towards a magnetic origin of ellerman bombs , a number of events observed by @xcite ( @xcite ) ( see also @xcite @xcite ) were not clearly linked to magnetic structures .\nthese authors point out that a number of cases showed a relationship with the boundaries of magnetic regions but others did not .\nmore recently the situation has been clarified by the analysis of the observations made by the flare genesis balloon experiment ( @xcite @xcite , @xcite @xcite ) .\nobserving an emerging flux region , these authors describe dipolar features that appear in the middle of the region , where the flux emerges at the surface ( below the so - called arch filament system ) . 
with better sensitivity magnetograms than the previous works\n, @xcite ( @xcite ) found that almost all ellerman bombs studied could be identified with the neutral line above the dipolar features ( or in neutral lines inside supergranular boundaries ) .\nthose cases not linked with the presence of neutral lines could be associated with interacting field lines above the photosphere , a result confirmed by the extrapolations made by @xcite ( @xcite ) .\nthese authors proposed , then , as the most probably mechanism for the generation of ellermam bombs some form of magnetic reconnection in the low chromosphere .\nthat reconnection happens efficiently at these heights is supported by the fact that the temperature minimum is the region with the lowest electrical conductivity , and resistive processes occur there more easily than elsewhere ( @xcite @xcite ) . while the flare genesis observations correspond to the central portion of an emerging active region ,\nthese conclusions offer an interesting parallelism with the ellerman bombs observed surrounding sunspots and associated to mmfs .\n@xcite ( @xcite ) pointed out that many of the bombs studied were spatially located at the dipped portions of undulatory field lines in the flux emerging area .\nthis undulatory or serpentine configuration is also used for the explanation of mmfs ( as first proposed by @xcite @xcite ) . in this way , a unifying scenario for the generation of these point - like energy release processes could be constructed . but a study similar to that made by @xcite ( @xcite ) for the moat of sunspots is still lacking . the comparison between the ellerman bombs seen in these two scenarios and the determination of their physical origin awaits for more solid observational tools than h@xmath0 line profiles .\nin particular , techniques providing information of the magnetic field at the chromospheric heights where the energy release takes place , will be of great value .\nlinear polarization signals observed in ellerman bombs have been reported in the past ( @xcite @xcite ) , but their exact amount an origin remains controversial . in this paper , we propose the use of full stokes spectropolarimetry in more reliable diagnostic lines such as the caii infrared triplet lines . the most important advantage these lines offer is the availability of powerful inversion techniques that can be used to cover almost the full chromospheric region .\nthis paper presents a first characterization of the atmosphere of two regions close to a sunspot that display the properties ascribed to the ellerman bombs .\nthe observations used in this work are first - light data from the new instrument spectro - polarimeter for infrared and optical regions ( spinor , see @xcite @xcite ) , at the dunn solar telescope ( dst ) located at the sacramento peak observatory .\nthe dataset contains the two chromospheric caii lines at 849.8 and 854.2 nm and two photospheric fei lines at 849.7 and 853.8 nm . we observed active region noaa 0634 on june 16 at 15:16 ut , carrying out a 350-step scan with a spatial stepping of 0.22 . by using the adaptive optics system of the dst @xcite\n, we attained a spatial resolution as good as 0.6 , although this figure varies slightly along the scanning direction ( the horizontal direction in the figures ) due to temporal seeing variations . 
in order to improve the signal - to - noise ratio in the stokes profiles , we averaged the observations in 3@xmath13 pixel boxes .\nthe noise is thus reduced to @xmath33@xmath110@xmath2 times the continuum intensity . at this level\nthere is barely no signal in stokes @xmath4 and @xmath5 , so we decided to leave them out of our analysis .\nthe chromospheric stokes @xmath6 profiles , however , reveal fascinatingly complex shapes with multiple peaks , while the photospheric lines exhibit the usual antisymmetric profiles .\nfig [ fig : maps ] shows maps of the continuum intensity and a `` magnetogram '' taken in one of the photospheric lines .\nthe magnetic configuration surrounding the sunspots appears very different on both sides of it .\nthe eastern side exhibits a much more complex topology with an intricate pattern of plage fields extending from the sunspot boundary to the edge of the field of view .\nimages of the region ( fig [ fig : ha ] ) reveal considerable emission near the eastern penumbral boundary .\nphotospheric velocity maps of noaa 0634 were obtained by measuring the minimum position of the fe i line at 849.7 nm .\na similar strategy is not viable for the ca ii lines , however , because of the complicated patterns of emission reversals and self - absortions found in the line cores .\ninstead , we measured the intensity difference between two points symmetrically located at various distances from the line core .\nthis method is only an approximation but it works better than taking the minimum position .\nfig [ fig : vels ] shows the velocity maps obtained at two heights , with the convention that positive velocities are away from the observer .\nthe cosine of the heliocentric angle for this region is @xmath7 .\nnotice the two prominent bright features marked with arrows in the lower panel of the figure .\nthese features are strong plasma flows directed away from the line of sight and located on the center side of the sunspot ( but outside the photospheric penumbral boundary ) .\ndetailed profiles of these features are shown in fig [ fig : velprofs ] .\nboth features exhibit very strong redshifts , visible in stokes @xmath8 and @xmath6 .\npoint a shows a very prominent redshift starting near 40 pm away from line center ( corresponding roughly to the high photosphere or low chromosphere ) all the way to the line core ( middle chromosphere ) .\nthis feature is well localized ( see lower panel ) and the doppler shift is @xmath314 km s@xmath9 .\npoint b exhibits somewhat weaker flows ( @xmath310 km s@xmath9 ) that are more localized in height ( upper photosphere / lower chromosphere ) , but has a more extended tail along the slit ( see lower panel ) .\nnotice how both points a and b coincide with h@xmath0 emissions in fig [ fig : ha ]\nwe performed detailed non - lte inversions of the area surrounding points a and b in fig [ fig : vels ] .\nthe inversions were carried out with the code of @xcite ( @xcite ) ( see also @xcite @xcite ) by applying it systematically on each pixel independently . in order to minimize the risk of the algorithm settling on a secondary minimum , we repeated each inversion 10 times with randomized initializations . 
the model atmosphere that yields the best fit to the observed stokes profiles\nis taken as a good representation of the solar atmosphere under study .\nthe inversion code treats the caii lines in non - lte and the fei lines in lte .\nblends are treated consistently by adding the opacities of all the relevant transitions at each wavelength and spatial point , and then solving the radiative transfer equation with the total opacity .\nthe non - lte calculation is based on the preconditioning strategy of @xcite ( @xcite ) ( see also @xcite @xcite ) assuming complete angle and frequency redistribution .\nthis last approximation is particularly good for the caii infrared triplet lines , as shown by @xcite ( @xcite ) .\nlet us consider in detail the models obtained for the points that we labeled as a and b. figures [ fig : modelsa ] and [ fig : modelsb ] show the atmospheric stratification inside the downflows , as well as an average over the surrounding , non - downflowing area for reference .\nthe most striking features in these models are the strong downwards velocities found in the upper photosphere , around @xmath10 .\nthe motions along the line - of - sight approach the speed of sound in point b and exceed it in point a. the other parameters show different behaviors in points a and b. in particular , the downflowing layer in point b is considerably hotter than the reference atmosphere .\nhowever , in point a the downflowing layer is cooler ( although it then becomes hotter at higher layers ) . in both cases\nthere is another hot layer at @xmath11 .\nour findings for the two regions are consistent with the results of @xcite ( @xcite ) who found ellermam bombs to be predominantly dominted by downflows with no indication of upflows .\nthe fits provided by the inversions can be seen in fig [ fig : profs ] .\nnote that the observed chromospheric profiles exhibit a very complex structure with multiple peaks in stokes @xmath6 ( the photospheric lines , on the other hand , produce rather normal profiles ) .\ngenerally speaking , the inversion code yields a reasonably good fit to the overall features , but there is more structure in the observed profiles ( particularly noticeable in stokes @xmath6 with multiple peaks ) , than in the synthetic ones .\nthis is an indication that the actual atmospheres are even more complex than our models , probably due to the presence of unresolved atmospheric components .\nthus far , we have not made a direct connection between the downflowing features analyzed here and ebs . to this aim\n, we used the model atmospheres obtained from the inversion to synthesize the h@xmath0 profiles that these models produce . the result is plotted in fig [ fig : halpha ] , along with a reference quiet sun profile calculated from model c of @xcite ( @xcite ) ( val - c ) .\nthis synthetic profile has the same appearance traditionally observed in ebs .\n@xcite ( @xcite ) describes the h@xmath0 spectra of his bombs as having emission features ( brighter than the continuum ) on both sides of the line center\n. 
the red asymmetry ( higher intensity peak in the red wing ) shown here is not the one most commonly observed in ellerman bombs ( see the introduction ) , but this asymmetry is also present quite often ( 40% of the cases according to @xcite @xcite ) .\nit is the result of a hot downflowing component as seen in fig [ fig : modelsa ] .\nthe resemblance of ebs with the synthetic profile in fig [ fig : halpha ] is a solid evidence that we are indeed observing the same phenomenon .\nmoreover , it also provides strong support for the reliability of the inversions .\nthe energetic phenomenon generally referred to as ebs has remained very ellusive to solar physicists for almost a century .\nthis is probably due in no small measure to the fact that it has only been observed in h@xmath0 thus far , which is a very difficult line to interpret . in this paper\nwe present the first observations of ebs in other lines , namely the caii infrared triplet .\nthese lines are much easier to model and have been successfully inverted in the past few years ( e.g. , @xcite @xcite ; @xcite @xcite ; @xcite @xcite ) and thus our results open a new window to the observational study of ebs .\nwe find that the ebs analyzed in this work are associated with supersonic ( or nearly so ) downflows in the upper photosphere .\nno upflows are observed in our data .\n@xcite ( @xcite ) determined the vector current densities in the large sunspot of fig [ fig : maps ] and found that the east penumbra is permeated by strong currents in the upper photosphere , with a magnitude of @xmath33@xmath110@xmath12 a km@xmath13 .\nthe fact that these ebs occur in a region with strong currents suggests that ebs may be associated with magnetic reconnection . in order to understand the nature of ebs and the physical processes that produce them\n, it would be extremely valuable to have time - series observations of photospheric and chromospheric lines , ideally with full stokes polarimetry .\nthe authors wish to acknowledge the enthusiastic support from the nso staff at the sacramento peak observatory , especially d. gilliam , m. bradford and j. elrod .\nthanks are also due to s. hegwer , s. gregory , r. dunbar , t. spence , s. fletcher , c. berst and w. jones . ,\nd. , s. l. hegwer , t. rimmele , l. v. didkovsky , and p. r. goode : 2003 , ` optical design of high - order adaptive optics for the nso dunn solar telescope and the big bear solar observatory ' . in : _\ninnovative telescopes and instrumentation for solar astrophysics . edited by stephen l. keil , sergey v. avakyan .\nproceedings of the spie , volume 4853 , pp .\n593 - 599 ( 2003 ) .\n_ pp . 593599 ." }, { "role": "assistant", "content": "ellerman bombs are bright emission features observed in the wings of h@xmath0 , usually in the vicinity of magnetic concentrations . here \n we show that they can also be detected in the ca ii infrared triplet lines , which are easier to interpret and therefore allow for more detailed diagnostics . \n we present full stokes observations of the 849.8 and 854.2 nm lines acquired with the new spectro - polarimeter spinor . \n the data shows no significant linear polarization at the level of 3@xmath110@xmath2 . \n the circular polarization profiles exhibit measureable signals with a very intricate pattern of peaks . a non - lte analysis of the spectral profiles emerging from these features reveal the presence of strong downflows ( @xmath310 km / s ) in a hot layer between the upper photosphere and the lower chromosphere ." } ]
science.anat_em_ner
science.anat_em_ner.387
[ { "role": "user", "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"AnatomicalEntity\"].\n\nPlease return the output as a JSON object of the format: {\"AnatomicalEntity\": [\"tumor cells\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nA novel integrin alpha5beta1 antagonistic peptide, A5-1, screened by Protein Chip system as a potent angiogenesis inhibitor.\nIntegrin alpha5beta1 immobilized on a ProteoChip was used to screen new antagonistic peptides from multiple hexapeptide sub-libraries of the positional scanning synthetic peptide combinatorial library (PS-SPCL). The integrin alpha5beta1-Fibronectin interaction was demonstrated on the chip. A novel peptide ligand, A5-1 (VILVLF), with high affinity to integrin alpha5beta1 was identified from the hexapeptide libraries with this chip-based screening method on the basis of a competitive inhibition assay. A5-1 inhibits the integrin-fibronectin interaction in a dose-dependent manner (IC(50): 1.56+/-0.28 microM). In addition, it inhibits human umbilical vein endothelial cell proliferation, migration, adhesion, tubular network formation, and bFGF-induced neovascularization in a chick chorioallantoic membrane. These results suggest that A5-1 will be a potent inhibitor of neovascularization." }, { "role": "assistant", "content": "{\"AnatomicalEntity\": [\"tubular network\", \"chorioallantoic membrane\", \"human umbilical vein endothelial cell\"]}" } ]
science.qasa_abstractive_qa
science.qasa_abstractive_qa.1300
[ { "role": "user", "content": "You will be shown an excerpt from a computer science scientific research paper, followed by a question about the paper. Please answer the question. Do not include any text in your response other than the answer.\n\nContext: Substitute DNN Training: The adversary uses the initial substitute training sets and the oracle to train substitute DNNs. Our substitute architecture A, a standard for image classification, is described in Table 13 (cf. appendix). The substitute DNN is trained on our machine for 6 substitute epochs. During each of these 6 epochs, the model is trained for 10 epochs from scratch with a learning rate of 10^{-2} and momentum of 0.9. Between substitute epochs, we perform a Jacobian-based dataset augmentation with a step size of \\lambda=0.1 to generate additional synthetic training data, which we label using the MetaMind oracle. The accuracy of the two substitute DNNs is reported in Figure 4. It is computed with the MNIST test set (minus the 150 samples used in the first initial substitute training set). The adversary does not have access to this full test set: we solely use it to analyze our results. The two substitute DNNs respectively achieve an 81.20\\% and 67.00\\% accuracy on the MNIST test set after 6 substitute training epochs. These accuracies fall short of current state-of-the-art accuracies on this task. However, the adversary has access to a limited number of samples (in this case 6,400=100\\times 2^{6} instead of 50,000 for state-of-the-art models). Furthermore, the adversarial goal is to craft adversarial samples misclassified by the oracle. Instead of learning a substitute DNN with optimal accuracy, the adversary is interested in learning a substitute capable of mimicking the oracle decision boundaries. Whereas we previously trained all of our substitutes using DNNs only, we now use both DNNs and LR as substitute models. The Jacobian-based dataset augmentation described in the context of DNNs is easily adapted to logistic regression: the latter is analog to the softmax layer frequently used by the former when outputting probability vectors. We use 100 samples from the MNIST test set as the initial substitute training set and use the two refinements introduced in Section 6: a periodic step size and reservoir sampling. Substitute Training: By augmenting an initial training set of 100 test set samples, we train a DNN and LR substitute for each of the two oracles. We measure success as the rate of adversarial samples misclassified by the corresponding oracle, among the 10,000 produced from the test set using the fast gradient sign method with parameter \\varepsilon=0.3. These rates, computed after \\rho\\in\\{3,6\\} dataset augmentation iterations, are reported in Table 3. Results reported in the last row use both a periodic step size and reservoir sampling (hence the reduced number of queries made to train the substitute).\n\nQuestion: The accuracy is achieved by which ML Model used for training the substitute" }, { "role": "assistant", "content": "The ML model used for achieving the accuracy is a DNN (Deep Neural Network) combined with LR (Logistic Regression), and the two refinements as introduced in Section 6: a periodic step size and reservoir sampling." } ]
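The excerpt in the qasa record above crafts adversarial samples with the fast gradient sign method at \varepsilon=0.3. A rough NumPy sketch of that single step is below; the loss gradient is passed in as an argument, since training the substitute model itself is outside the scope of this example:

import numpy as np

def fast_gradient_sign(x, grad_loss_x, epsilon=0.3):
    # Perturb the input in the sign direction of the loss gradient,
    # using the same epsilon = 0.3 as the excerpt, then clip back to
    # the valid pixel range.
    x_adv = x + epsilon * np.sign(grad_loss_x)
    return np.clip(x_adv, 0.0, 1.0)

rng = np.random.default_rng(0)
x = rng.random((28, 28))            # an MNIST-sized input, as in the excerpt
g = rng.standard_normal((28, 28))   # stand-in gradient, for illustration only
print(fast_gradient_sign(x, g).shape)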
science.acl_arc_intent_classification
science.acl_arc_intent_classification.1520
[ { "role": "user", "content": "You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:\n- Background: provides context or foundational information related to the topic.\n- Extends: builds upon the cited work.\n- Uses: applies the methods or findings of the cited work.\n- Motivation: cites the work as inspiration or rationale for the research.\n- CompareOrContrast: compares or contrasts the cited work with others.\n- FutureWork: cites the work as a direction for future research.\n\nYour answer should be a single word from the following list of options: [\"Background\", \"Extends\", \"Uses\", \"Motivation\", \"CompareOrContrast\", \"FutureWork\"]. Do not include any other text in your response.\n\nSection Title:\nexperiments\n\nContext before the citation:\nAs they are required to enable test subjects to distinguish between senses, we use artificial glosses composed from synonyms and hypernyms as a surrogate, e.g. for brother: “brother, male sibling” vs. “brother, comrade, friend” (Gurevych, 2005). GermaNet contains only a few conceptual glosses. It is the most complete resource of this type for German.\n\nCitation Sentence:\nAs they are required to enable test subjects to distinguish between senses , we use artificial glosses composed from synonyms and hypernyms as a surrogate , e.g. for brother : `` brother , male sibling '' vs. `` brother , comrade , friend '' ( Gurevych , 2005 ) .\n\nContext after the citation:\nWe removed words which had more than three senses. Marginal manual post-processing was necessary, since the lemmatization process introduced some errors. Foreign words were translated into German, unless they are common technical terminology. We initially selected 100 word pairs from each corpus." }, { "role": "assistant", "content": "Uses" } ]
science.scientific_papers_summarization_single_doc_arxiv
science.scientific_papers_summarization_single_doc_arxiv.140470
[ { "role": "user", "content": "You will be presented with the full text of a science research paper. Please write an abstract for this paper. Your response should include the abstract and no additional text.\n\nPaper text:\nthe discovery of the extrasolar planets should show the diversity of the planetary systems in the universe and also sketch out an innovative world outside our solar system .\nmayor & queloz ( 1995 ) began such great revolution in our minds when they found the first extrasolar giant jupiter-51 peg , and to date there are more than 100 planetary systems ( see also a daily updated website ; butler et al .\n2003 ) , which most of them were detected with the radial velocity technique in the surveys of nearby young stars . at present\n, a dozen of multiple planetary systems were discovered in recent years and this number is undoubtedly cumulative as time elapses .\nhence , it is necessary to categorize discovered multiple systems according to their statistical characteristics ( such as the distribution of the planetary masses , semi - major axes , eccentricities and metallicity ) , then to study the correlation between mass ratio and period ratio ( mazeh & zucker 2003 ) and to improve the understanding of the correlation between the planet occurrence rate and stellar metallicity ( santos et al .\n2003 ; fischer , valenti , & marcy 2004 ) .\nthe other key point is to investigate possible stable configurations for the multiple systems , in which the observations reveal that most of them are typically characterized by mean motion resonance ( mmr ) and ( or ) apsidal phase - locking between their orbiting companions ( fischer et al .\n2003 ; ji et al .\n2003a ) , so that one can better understand the full dynamics of these systems .\namongst the discovered extrasolar planetary systems , the resonant pairs are quite common for the multiple systems , e.g. , hd 82943 ( gozdziewski & maciejewski 2001 ) and gj 876 ( lee & peale 2002 ; ji , li & liu 2002 ) in a 2:1 resonance , and 55 cancri in a 3:1 mmr ( marcy et al .\n2002 ; ji et al .\nin addition , as the observation results show that most of the systems can potentially harbor more than one planet , thus we turn to the fascinating topic of studying the stable planetary geometry locking into a resonance , which such investigations are expected to be helpful in an understanding of the orbital evolution of the planets and the dynamical mechanisms of sustaining the stability of such systems . in the present study\n, we mainly focus our attention on the hd 82943 system . in table 1\nare listed the orbital parameters for this system , where two sets of the best - fit solutions are respectively given according to mayor et al .\n( 2004 ) ( fit 2 ) and their original orbital data ( fit 1 ; as of july 31 , 2002 ) , which are adopted in our simulations . in this paper ,\nwe employ n - body codes ( ji et al . 2002 ) to perform direct numerical simulations by using rkf7(8 ) and symplectic integrators ( wisdom & holman 1991 ) for this system . 
in the simulations\n, we always take the stellar mass and the minimum planetary masses from table 1 .\nthe time stepsize is usually adopted to be @xmath10 1%-2.5% of the orbital period of the innermost planet , which is sufficiently smaller for the integration .\nadditionally , the numerical errors were effectively controlled with the accuracy of @xmath11 over the integration timescale .\norbital parameters of hd 82943 planetary system .\nthe stellar mass @xmath12 , for fit 1 , @xmath13 , while for fit 2 , @xmath14 [ cols=\"<,^,<,<,^,^,^\",options=\"header \" , ] in our simulations for hd 82943 system , we found three possible stable configurations for a system in a 2:1 mmr : ( i ) only @xmath15 librates about @xmath16 , ( ii ) @xmath2 ( aligned case ) , ( iii ) @xmath3 , @xmath4 , @xmath5 ( antialigned case ) . the definition of the lowest eccentricity - type 2:1 resonant arguments are @xmath17 and @xmath18 . and the relative apsidal longitude is @xmath19 , where @xmath20 are , respectively , the mean longitudes of the inner and outer planets , and @xmath21 , respectively , their periastron longitudes .\nobviously , for above three angles , no more than two are linearly independent . besides , these stable planetary geometry is related to the existence of symmetric stable equilibrium solutions , where we find that other 2:1 resonant systems ( e.g. , gj 876 ) may possess one of the above three stable orbits in their realistic motions . as a paradigm\n, we simply show the dynamical evolution for one of the types of the stable orbits for hd 82943 and the reader may refer to ji et al .\n( 2004 ) for a detailed study .\nfigures 1 and 2 exhibit that the long - term orbital evolution for one set of stable solutions derived from fit 2 , where @xmath15 and @xmath22 both librate about @xmath16 ( type ii ) for @xmath0 yr , meanwhile the semi - major axes and eccentricities for two massive planets perform slight vibrations .\nstill , one may understand that the 2:1 resonance can help remain the semi - major axes for two companions almost unaltered over the secular evolution ( see figs . 1 and 2 ) , and the apsidal phase - locking between two orbits can further enhance the stability for this system , because the eccentricities are simultaneously preserved to restrain the planets from frequent close encounters .\nour results are well consistent with previous works , e.g. , hadjidemetriou ( 2002 ) numerically studied the families of periodic orbits for the hd 82943 system under the rotating framework , and indicated that two kinds of families orbits can survive for the 2:1 resonant planets .\nmoreover , beauge et al .\n( 2003 ) further showed the asymmetric stable solutions for the 2:1 resonant system using both analytical and numerical means . in summary , all the above outcomes imply the potential planetary configurations that the extrasolar systems involved in 2:1 resonance may hold . in order to understand the dynamics of this system , in a companying paper ( see ji et al .\n2004 ) , we analytically plot the evolution of the eccentricities and the relative apsidal longitudes .\nthe semi - analytical contour chart exhibits a good agreement with the numerical investigations of maintaining the eccentricities with larger @xmath22-libration amplitudes ( ref . 
to fig.4b of ji\n2004 ; type ii ) , and the figure also implies that the @xmath22-libration will be broken up if the libration amplitudes exceed or approach the critical value of @xmath23 near the separatrix .\nbois ( 2003 , private communication ) also confirmed that the stable strip of the alignment for the initial @xmath24 is much narrower than that of the antialignment in the calculations for hd 82943 .\nthe habitable zones ( hz ) are generally convinced to be suitable places for terrestrial planets that can provide the liquid - water , subtle temperature and atmosphere environment , and other proper conditions ( kasting et al .\n1993 ) , supporting the development and biological evolution of life on their surfaces . and herein , we performed numerical surveys to examine the potential habitable zones for the systems ( e.g. hd 82943 and gj 876 ) with a stable geometry in 2:1 resonance .\nwe generated 100 seed planets that all bear the same masses as earth in each run .\nthe distribution of the initial orbital elements for these postulated planets are as follows : they move on the much less inclined belt with the relative inclination with regard to the orbital plane of the resonant companions less than @xmath25 , with 0.96 au@xmath26 au and @xmath27 .\nfigure 3 summarizes the main results of the simulations for the hd 82943 system - no earth - mass planet can survive about 1 au in the final system , where 95% of the orbits are usually ejected at @xmath28 yr due to the gravitational scattering arising from two close massive planets .\nthe scenario of a typical ejected orbit may replay that the semi - major axis @xmath29 grows from @xmath9 au to tens of au , meanwhile the eccentricity @xmath30 undergoes a rapid increase from @xmath31 to 1 .\nhence , the assumed earth - like planet is thrown out from the two - planet system in less than @xmath32 yr . as for gj 876 , all tests about 1 au are stable at least for @xmath33 yr . figure 4 exhibits the typical orbital evolution for the earth - like planet , both the semi - major axis and the eccentricity execute small fluctuations about 1 au and 0.06 , respectively , and the inclination also remains less than 2 degrees in the same time span . and there is no sign to indicate that such regular orbits at @xmath9 au with low eccentricities will become chaotic for much longer time even for the age of the star . in addition , we also carried out extended integrations for the hd 82943 planetary system to investigate whether there may exist the analogous structure of asteroidal belts that the residual planetesimals are left in the system in later planetary formation .\nwe let several hundreds of test particles initially located at the regions of the 2:1 , 3:1 and 5:2 mmrs with the inner massive planet and then performed the simulations for 1 myr .\nthe results show that most of the small bodies can be removed at the dynamical time @xmath34 yr , which suggests that no survivals can finally stay at above three resonant regions . in this sense\n, it is quite similar to kirkwood gaps for the main belt asteroids in our solar system .\nhowever , there is much prospect of the space observations with high resolutions ( e.g. , spitzer ) revealing the evidence of the disk debris structure in other extrasolar planetary systems in future .\nacknowledgments this work is financially supported by national natural science foundation of china ( grant no . 10203005 , 10173006 , 10233020 ) and foundation of minor planets of purple mountain observatory ." 
}, { "role": "assistant", "content": "we have numerically explored the stable planetary geometry for the multiple systems involved in a 2:1 mean motion resonance , and herein we mainly concentrate on the hd 82943 system by employing two sets of the orbital parameters ( mayor et al . \n 2004 ; ji et al . 2004 ) . in the simulations , we find that all stable orbits are related to the 2:1 resonance that can help to remain the semi - major axes for two companions almost unaltered over the secular evolution for @xmath0 yr . \n in addition , we also show that there exist three possible stable configurations:(1 ) type i , only @xmath1 , ( 2 ) type ii , @xmath2 ( aligned case ) , and ( 3 ) type iii , @xmath3 , @xmath4 , @xmath5 ( antialigned case ) , here the lowest eccentricity - type mean motion resonant arguments are @xmath6 and @xmath7 , the relative apsidal longitudes @xmath8 . and we find that other 2:1 resonant systems ( e.g. , gj 876 ) may possess one of three stable orbits in their realistic motions . \n furthermore , we show that the assumed terrestrial bodies can not survive near the habitable zones for hd 82943 due to strong perturbations induced by two resonant companions , but such low - mass planets can be dynamically habitable in the gj 876 system at @xmath9 au in the numerical surveys . , , , , methods : n - body simulations celestial mechanics planetary systems stars : individual ( hd 82943 , gj 876 )" } ]
science.scitldr_aic
science.scitldr_aic.471
[ { "role": "user", "content": "Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.\n\nPaper text:\n\nThe Convolutional Neural Network (CNN) has been successfully applied in many fields during recent decades; however it lacks the ability to utilize prior domain knowledge when dealing with many realistic problems.\nWe present a framework called Geometric Operator Convolutional Neural Network (GO-CNN) that uses domain knowledge, wherein the kernel of the first convolutional layer is replaced with a kernel generated by a geometric operator function.\nThis framework integrates many conventional geometric operators, which allows it to adapt to a diverse range of problems.\nUnder certain conditions, we theoretically analyze the convergence and the bound of the generalization errors between GO-CNNs and common CNNs.\nAlthough the geometric operator convolution kernels have fewer trainable parameters than common convolution kernels, the experimental results indicate that GO-CNN performs more accurately than common CNN on CIFAR-10/100.\nFurthermore, GO-CNN reduces dependence on the amount of training examples and enhances adversarial stability.\nConvolutional Neural Networks have been successfully applied in many fields during recent decades, but the theoretical understanding of the deep neural network is still in the preliminary stages.\nAlthough CNNs have strong expressive abilities, they have two clear deficiencies.\nFirst, as complex functional mappings, CNNs, like black boxes, cannot take full advantage of domain knowledge and prior information.\nSecond, when little data is available for a certain task, CNNs' generalization ability weakens.\nThis is due to overfitting, which may occur due to the large number of parameters and the large model size.\nStemming from these two defects, a great deal of research has been done to modify CNNs BID7 Wang et al., 2018; Sarwar et al., 2017) .Before\nCNNs were applied, traditional geometric operators had developed quite well. Each geometric\noperator represents the precipitation of domain knowledge and prior information. For example, the\nSobel operator (Works) is a discrete difference operator, which can extract image edge information for edge detection. The Schmid operator\n(Schmid, 2001 ) is an isotropic circular operator, which extracts texture information from images for face recognition. The Histogram of Oriented\nGradients (HOG) BID8 ) is a statistic operator of gradient direction, which extracts edge direction distributions from images for pedestrian detection and other uses.Many computer vision tasks require domain knowledge and prior information. For example, in BID2 , the\ntexture information from the image is used for an auxiliary diagnosis of a fracture. Geometric operators can make\nuse of domain knowledge and prior information, but cannot automatically change parameter values by learning from data. Convolutional Neural Networks\nhave strong data expression abilities and learning abilities, but they struggle to make use of domain knowledge. For better data learning, we\nhave combined the two. It is natural to directly use\ngeometric operators for pre-processing, and then classify the data through a Convolutional Neural Network (Yao et al., 2016) . 
However, this method uses human\nexperience to select geometric operator parameter values, and then carries out the Convolutional Neural Network learning separately. This method is a kind of two-stage\ntechnique, and without reducing parameter redundancy in a Convolutional Neural Network, it is difficult to achieve global optimization. The method proposed in this paper\ndirectly constructs geometric operator convolution and then integrates geometric operator convolution into a Convolutional Neural Network to form a new framework - the Geometric Operator Convolutional Neural Network. This method achieves global optimization\nand utilizes the properties of geometric operators. In summary, the contributions of this work are as follows:\n• This framework can integrate many conventional geometric operators, which reveals its broad customization capabilities when handling diverse problems.\n• In theory, the same approximation accuracy\nand generalization error bounds are achieved when geometric operators meet certain conditions.\n• The Geometric Operator Convolutional Neural\nNetwork not only reduces the redundancy of the parameters, but also reduces the dependence on the amount of training samples.\n• The Geometric Operator Convolutional Neural\nNetwork enhances adversarial stability.\nIn this paper, we present a novel framework named the Geometric Operator Convolutional Neural Network, where the kernel in the first convolutional layer is replaced with kernels generated by geometric operator functions.\nThis new network boasts several contributions.\nFirstly, the GO-CNN is customizable for diverse situations.\nSecondly, there is a theoretical guarantee in the learning framework of the GO-CNN.\nThirdly, the GO-CNN reduces the dependence on training samples.\nLastly, the GO-CNN enhances adversarial stability.\nIn the future, we can explore a more appropriate geometric operator convolution block." }, { "role": "assistant", "content": "Traditional image processing algorithms are combined with Convolutional Neural Networks, forming a new neural network." } ]
science.scientific_papers_summarization_single_doc_arxiv
science.scientific_papers_summarization_single_doc_arxiv.88456
[ { "role": "user", "content": "You will be presented with the full text of a science research paper. Please write an abstract for this paper. Your response should include the abstract and no additional text.\n\nPaper text:\nthe argo - ybj experiment , located at the yangbajing cosmic ray laboratory ( tibet , p.r .\nchina , 4300 m a.s.l . ) , is an air shower array exploiting the full coverage approach at very high altitude , with the aim of studying the cosmic radiation with a low energy threshold ( a few hundreds of gev ) .\nthe detector is constituted by a central carpet , made of a single layer of resistive plate chambers ( rpcs ) with @xmath192@xmath2 of active area , enclosed by a guard ring partially instrumented ( @xmath140@xmath2 ) mainly to improve the rejection capability for external events .\nthe apparatus has a modular structure , the basic element being a cluster ( [email protected] m@xmath4 ) , divided into 12 rpcs ( [email protected] m@xmath4 each ) .\neach chamber is read by 80 strips of [email protected] cm@xmath4 ( the spatial pixel ) , logically organized in 10 independent pads of [email protected] cm@xmath4 which are individually acquired and represent the time pixel of the detector . in order to extend the dynamic range up to pev energies , a charge read - out has been implemented by instrumenting every rpc also with two large size pads of dimension 140@xmath3125 cm@xmath4 each .\nthe full detector is composed of 154 clusters for a total active surface of @xmath16700 m@xmath4 ( aielli g. et al . , 2006 ) .\na 0.5 cm thick lead converter will cover the apparatus uniformly in order to improve the angular resolution at lower energies .\nthe detector is connected to two different data acquisition systems , working independently , and corresponding to the two operation modes , shower and scaler . in shower mode , for each event the location and timing of every detected particle is recorded , allowing the lateral distribution and arrival direction reconstruction ( di sciascio g. et al . , 2007 ) . in scaler mode\nthe total counts are measured every 0.5 s , with very poor information on both the space distribution and arrival direction of the detected particles . for each cluster , the signal coming from the 120 pads is added up and put in coincidence in a narrow time window ( 150 ns ) , giving the counting rates of @xmath61 , @xmath62 , @xmath63 , @xmath64 pads , that are read by four independent scaler channels .\nthe corresponding measured rates are , respectively , @xmath140 khz , @xmath12 khz , @xmath1300 hz and @xmath1120 hz for each cluster ( aielli g. et al . , 2008 ) .\nsince 2007 november the detector has been in stable data taking with a multiplicity trigger n@xmath720 and a duty cycle @xmath8 : the trigger rate is about 3.5 khz . 
in this paper\nwe report on the first results in gamma - ray astronomy and cosmic ray physics .\nin this analysis all events are considered , without any internal shower selection .\nno lead converter has been mounted on the rpcs yet ( with the lead we expect an improvement of the angular resolution and , consequently , of the sensitivity , of @xmath940@xmath2 at tev energies ) .\nthe cosmic ray background rejection is performed by exploiting the good angular resolution of the detector and the fact that at this altitude the trigger efficiency of tev @xmath10-rays is @xmath92 times larger than that of protons at fixed energy .\nno additional @xmath10/hadron discrimination algorithms have been yet applied .\nwe expect an improvement of our sensitivity of a factor @xmath91.5 by applying `` topological '' selection criteria presently under study ( dattoli m. et al . , 2007 ) .\nwe have measured the pointing accuracy of the argo - ybj detector by studying the moon shadow .\ncosmic rays are hampered by the moon , therefore a deficit of cosmic rays in its direction is expected ( the so - called `` moon shadow '' ) .\nthe moon shadow is an important tool to calibrate the performance of an air shower array .\nin fact , the size of the deficit allows a measurement of the angular resolution and the position of the deficit allows the evaluation of the absolute pointing accuracy of the detector .\nin addition , positively charged particles are deflected towards east due to the geomagnetic field by an angle @xmath11 .\ntherefore , the observation of the displacement of the moon provides a direct check of the relation between shower size and primary energy thus calibrating the detector .\nthe argo - ybj experiment is observing the moon shadow with a sensitivity of about 10 standard deviations per month at a multiplicity n@xmath740 , with the zenith angle @xmath12 ( corresponding to a proton median energy [email protected] tev ) . in fig.[fig : moon1 ] the shadow observed in the period december 2007 - august 2008 ( 802 hours on - source ) is shown . in the left plot the 3d view of the shadow ( i.e. , the point spread function of the detector ) is shown .\nthe sensitivity of the observation is about 26 standard deviations as can be seen from the moon shadow map in the right plot of fig.[fig : moon1 ] . in fig.[fig : angresol ] the angular resolution measured with the moon shadow method in 2007 and 2008 is compared to mc simulation .\nthe upper scale refers to the median energy of the triggered protons .\nas can be seen , the values are in fair agreement : the angular resolution of the argo - ybj experiment is less than 0.6@xmath14 for n@xmath7200 ( e@xmath136 tev ) . in fig.[fig : moonshift ] the displacements of the moon shadow in the east - west ( upper panel ) and north - south ( lower panel ) directions are shown . in the upper figure the measured shift towards west due to the geomagnetic field is compared to a mc simulation of the proton propagation in earth - moon system . as can be seen , the fair agreement between the measurements and the simulations make us confident about the energy calibration of the detector .\nthe study of the displacement along the north - south axis , not affected by the geomagnetic field at the yangbajing site , enables us to estimate the magnitude of the systematic pointing error without the moon shadow simulation .
from this preliminary analysis\nit results that there is a residual systematic shift towards north of @xmath90.2@xmath14 , less than the angular resolution .\ndeeper analysis is in progress in order to eliminate this pointing error .\ni would like to emphasize that these results are stable even in periods mainly devoted to installation and debugging operations ( 2006 and 2007 ) .\nthis makes us confident about the stability of the detector .\nthe shadow of the sun measured in the period december 2007 - august 2008 ( 954 hours on - source with @xmath12 ) for events with n@xmath740 is shown in fig .\n[ fig : sun08 ] .\nthe significance of the maximum event deficit is about 25@xmath15 : the shadow is well visible , as expected near the solar minimum .\nthe displacement of the shadow from the apparent position of the sun could be explained by the effect of the geomagnetic field and of the solar and interplanetary magnetic fields ( imf ) whose configuration considerably changes with the phases of the solar activity cycle ( amenomori m. et al . , 2000 ) .\ntherefore , high energy cosmic rays may bring direct information on the structure of the imf under the influence of the solar activity .\nthis result shows the capability of the argo - ybj experiment to study the effect of the imf on the sun shadow down to the tev energy region with a monthly recurrence .\nfig.[fig : crab08 ] shows the significance map of the crab nebula region measured with the full argo - ybj detector during days 1 - 229 of 2008 ( 1143 hours on - source with @xmath16 ) with a multiplicity n@xmath740 , corresponding to a photon median energy of about 1 tev ( [email protected] tev ) .\nas can be seen from the map a clear signal is evident at a level of about 6 standard deviations .\ndeeper analysis and further data integration are in progress in order to evaluate the final sensitivity of the detector .\nnevertheless , we note that this is the first time that an air shower array is able to detect photons from a point source with such a low peak energy .\nthe detector started recording data with the full central carpet during the x - ray flare of mrk421 in july 2006 .\nthe argo - ybj experiment observed this source in the days 190 - 245 ( 109 hours on - source ) with a multiplicity n@xmath760 .
a clear evidence of a tev emission at a level of about 6 standard deviations was observed .\nthe detector was in its commissioning phase , therefore new analysis are in progress to properly evaluate the statistical significance of the observation .\nmrk421 was again flaring during the first months of 2008 and the argo - ybj experiment again reported evidence for a tev emission in correlation with the x - ray flares .\nthe significance map of the mrk421 region is shown in fig.[fig : mrk421_08 ] : a clear signal at about 7 standard deviations level is visible during the 2008 ( days 1 - 229 , 1217 hours on - source ) .\nthe observation refers to a multiplicity n@xmath760 .\na correlation of tev photons detected by the argo - ybj experiment with the x - ray events detected by the rossi rxte satellite is evident in fig .\n[ fig_mrk421_xray ] for two different multiplicity values .\nwe note that an all - sky vhe gamma - ray telescope as the argo - ybj experiment is able to monitor the mrk421 in a continuous way , not being affected by the problems of the cherenkov telescopes which can operate only on clear moonless nights .\nthe scaler mode technique offers a unique tool to study grbs in the gev energy range , where @xmath10-rays are less affected by the absorption due to pair production in the extragalactic space . in order to extract the maximum information from the data ,\nseveral grb searches have been implemented : ( 1 ) search in coincidence with the satellite detection ; ( 2 ) search for a delayed or anticipated signal of fixed duration ; ( 3 ) phase pile - up of all grbs .\nthe search has been done in coincidence with 38 grbs observed by satellites ( mainly swift ) in the period december 2004 - march 2008 in the argo - ybj field of view ( @xmath18 ) .\nno deviation from the statistical behaviour has been found either in coincidence with the low energy detection , or in an interval of 2 hours around it ( di girolamo t. et al . , 2007 ) .\nthe corresponding upper limits on the fluence are of the order of 10@xmath19 - 10@xmath20 erg / cm@xmath4 in the 1 - 100 gev range , well below the energy range explored by the present generation of cherenkov telescopes .\na preliminary measurement of the proton - air cross section has been performed by exploiting the attenuation of cosmic ray flux with increasing zenith angles @xmath21 ( i.e. , atmospheric depths ) @xmath22 exp(-x@xmath23 ( sec@xmath21 - 1)/@xmath24 ) , where x@xmath25 is the vertical atmospheric depth of the detector and @xmath24 is the observed attenuation length of air showers , related to the mean free path @xmath26 of the primary through the parameter k=@xmath27 .\ntherefore , with a measurement of the attenuation length we can estimate , in principle , the p - air and p - p cross sections .\nthe k parameter , which takes into account the fluctuations both in the shower development and in the shower sampling , is calculated via mc simulation .\nthe analysis has been performed in two multiplicity bins , 300@xmath28n@xmath291000 and n@xmath301000 , corresponding to mean energies ( [email protected] ) tev and ( [email protected] ) tev , respectively .\nthe measured p - air cross section values are ( 275@xmath3151 ) mb and ( 282@xmath3131 ) mb , respectively ( de mitri i. et al . , 2007 ) ( see fig.[fig : pair ] ) .\nfurther studies are in progress in order to extend the measurement to the pev energy region .
in order to measure the @xmath32 ratio at tev energies we exploit the earth - moon system as an ion spectrometer :\nif protons are deflected towards east , antiprotons are deflected towards west .\nif the energy is low enough and the angular resolution small we can distinguish , in principle , between two shadows , one shifted towards west due to the protons and the other shifted towards east due to the antiprotons . if no event deficit is observed on the antimatter side an upper limit on the antiproton content can be calculated .\na very preliminary measurement has been performed with the argo - ybj experiment for n@xmath740 in the period december 2007 - august 2008 ( 802 hours on - source ) . for this multiplicity\nthe moon shadow shifts westward by about 0.45@xmath14 , at a median energy @xmath92 tev ( mode energy @xmath90.5 tev ) .\nthe data is in fair agreement with mc simulation .\nthe deficit events around the moon shadow are fitted with a gaussian formula : protons are estimated to be @xmath170% of cosmic rays .\na preliminary upper limit on the antiproton / proton ratio at the 90% confidence level is calculated to be about 10@xmath2 and reported in fig .\n[ fig : antip ] which shows the status of the antiproton / proton measurements .\nwe note that in the multi - tev range this result is among the lowest available .\n* g. di sciascio : * the statistical significance of the 2008 crab observation appears smaller than that of 2007 .\nthis may have been caused by a statistical signal fluctuation : the 2007 data sample is about a factor of 5 smaller than that integrated up to august 2008 .\nnevertheless this analysis is yet preliminary and deeper reanalysis is in progress in order to evaluate the final sensitivity of the detector properly ." }, { "role": "assistant", "content": "the argo - ybj experiment has been in stable data taking since november 2007 at the yangbajing cosmic ray laboratory ( tibet , p.r . \n china , 4300 m a.s.l . ) . in this paper \n we report a few selected results in gamma - ray astronomy ( crab nebula and mrk421 observations , search for high energy tails of grbs ) and cosmic ray physics ( moon and sun shadow observations , proton - air cross section and @xmath0/p preliminary measurements ) ." } ]
science.chia_ner
science.chia_ner.1870
[ { "role": "user", "content": "You will be shown inclusion or exclusion criteria for a clinical trial. Given these criteria, your task is to extract all unique entities of the following types: [\"Observation\", \"Condition\", \"Procedure\", \"Drug\", \"Temporal\", \"Multiplier\", \"Qualifier\", \"Negation\", \"ReferencePoint\", \"Scope\", \"Value\", \"Mood\", \"Device\", \"Visit\", \"Measurement\", \"Person\"].\n\nPlease return the output as a JSON object of the format: {\"Condition\" : [\"breast cancer\", ...], \"Device\" : [\"ports\", ...], \"Drug\" : [\"Atenolol\", ...], \"Measurement\" : [\"blood pressure\", ...], \"Mood\" : [\"suspected\", ...], \"Multiplier\" : [\"more than two\", ...], \"Negation\" : [\"no\", ...], \"Observation\" : [\"history\", ...], \"Person\" : [\"male\", ...], \"Procedure\" : [\"surgery\", ...], \"Qualifier\" : [\"severe\", ...], \"ReferencePoint\" : [\"enrollment\", ...], \"Scope\" : [\"infection\", ...], \"Temporal\" : [\"one day\", ...], \"Value\" : [\"50%\", ...], \"Visit\" : [\"ICU\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nCurrent DVT\nRecurrent varicose veins\nArterial disease (ABPI<0.8)\nVein diameter < 3mm\nPreference for one of the treatment options\nPatient who are unwilling to participate\nInability or unwillingness to complete questionnaires\nInability to attend follow-up appointments\nPatient currently included in a study of varicose vein treatment" }, { "role": "assistant", "content": "{\"Condition\": [\"varicose veins\", \"Arterial disease\"], \"Device\": [], \"Drug\": [], \"Measurement\": [\"ABPI\", \"Vein diameter\"], \"Mood\": [], \"Multiplier\": [\"Recurrent\"], \"Negation\": [], \"Observation\": [\"unwillingness to complete questionnaires\", \"Inability to attend follow-up appointments\", \"unwilling to participate\", \"ity or un plete questionnaires\\nInabi\"], \"Person\": [], \"Procedure\": [\"DVT\"], \"Qualifier\": [], \"ReferencePoint\": [], \"Scope\": [\"ABPI\\u003c0.8\"], \"Temporal\": [\"Current\"], \"Value\": [\"\\u003c 3mm\", \"\\u003c0.8\"], \"Visit\": []}" } ]
science.medmentions_ner
science.medmentions_ner.270
[ { "role": "user", "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"HealthCareActivity\", \"InjuryOrPoisoning\", \"BodySubstance\", \"IntellectualProduct\", \"AnatomicalStructure\", \"SpatialConcept\", \"Chemical\", \"Bacterium\", \"MedicalDevice\", \"Organization\", \"BiomedicalOccupationOrDiscipline\", \"Finding\", \"BiologicFunction\", \"Virus\", \"ResearchActivity\", \"ClinicalAttribute\", \"PopulationGroup\", \"Eukaryote\", \"BodySystem\", \"Food\", \"ProfessionalOrOccupationalGroup\"].\n\nPlease return the output as a JSON object of the format: {\"Virus\": [\"HIV\", ...], \"Bacterium\": [\"MRSA\", ...], \"AnatomicalStructure\": [\"Lung\", ...], \"BodySystem\": [\"CNS\", ...], \"BodySubstance\": [\"Serum\", ...], \"Finding\": [\"Headache\", ...], \"InjuryOrPoisoning\": [\"Fracture\", ...], \"BiologicFunction\": [\"Death\", ...], \"HealthCareActivity\": [\"Biopsy\", ...], \"ResearchActivity\": [\"Clinical trial\", ...], \"MedicalDevice\": [\"Lenses\", ...], \"SpatialConcept\": [\"Camps\", ...], \"BiomedicalOccupationOrDiscipline\": [\"Forensic medicine\", ...], \"Organization\": [\"WHO\", ...], \"ProfessionalOrOccupationalGroup\": [\"Provider\", ...], \"PopulationGroup\": [\"Swimmers\", ...], \"Chemical\": [\"Gold\", ...], \"Food\": [\"Rice\", ...], \"IntellectualProduct\": [\"RPAM\", ...], \"ClinicalAttribute\": [\"Biomarker\", ...], \"Eukaryote\": [\"Dogs\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\npH affects bacterial community composition in soils across the Huashan Watershed, China To investigate soil bacterial richness and diversity and to determine the correlations between bacterial communities and soil properties, 8 soil samples were collected from the Huashan watershed in Anhui, China. Subsequently, 454 high-throughput pyrosequencing and bioinformatics analyses were performed to examine the soil bacterial community compositions. The operational taxonomic unit richness of the bacterial community ranged from 3664 to 5899, and the diversity indices, including Chao1, Shannon-Wiener, and Faith's phylogenetic diversity ranged from 7751 to 15 204, 7.386 to 8.327, and 415.77 to 679.11, respectively. The 2 most dominant phyla in the soil samples were Actinobacteria and Proteobacteria. The richness and diversity of the bacterial community were positively correlated with soil pH. The Mantel test revealed that the soil pH was the dominant factor influencing the bacterial community. The positive modular structure of co-occurrence patterns at the genus level was discovered by network analysis. The results obtained in this study provide useful information that enhances our understanding of the effects of soil properties on the bacterial communities." 
}, { "role": "assistant", "content": "{\"AnatomicalStructure\": [], \"Bacterium\": [\"Proteobacteria\", \"bacterial\", \"Actinobacteria\"], \"BiologicFunction\": [], \"BiomedicalOccupationOrDiscipline\": [], \"BodySubstance\": [], \"BodySystem\": [], \"Chemical\": [], \"ClinicalAttribute\": [\"composition\", \"compositions\"], \"Eukaryote\": [], \"Finding\": [], \"Food\": [], \"HealthCareActivity\": [\"high-throughput pyrosequencing\"], \"InjuryOrPoisoning\": [], \"IntellectualProduct\": [\"network analysis\", \"modular structure\", \"genus\", \"phyla\", \"Mantel test\"], \"MedicalDevice\": [], \"Organization\": [], \"PopulationGroup\": [], \"ProfessionalOrOccupationalGroup\": [], \"ResearchActivity\": [\"study\"], \"SpatialConcept\": [\"Huashan Watershed\", \"China\", \"Huashan watershed\", \"Anhui\"], \"Virus\": []}" } ]