8 ARR: Question Answering with Large Language Models via Analyzing, Retrieving, and Reasoning Large language models (LLMs) achieve remarkable performance on challenging benchmarks that are often structured as multiple-choice question-answering (QA) tasks. Zero-shot Chain-of-Thought (CoT) prompting enhances reasoning in LLMs but provides only vague and generic guidance ("think step by step"). This paper introduces ARR, an intuitive and effective zero-shot prompting method that explicitly incorporates three key steps in QA solving: analyzing the intent of the question, retrieving relevant information, and reasoning step by step. Comprehensive experiments across diverse and challenging QA tasks demonstrate that ARR consistently improves over the baseline (without ARR prompting) and outperforms CoT. Ablation and case studies further validate the positive contributions of each component: analyzing, retrieving, and reasoning. Notably, intent analysis plays a vital role in ARR. Additionally, extensive evaluations across various model sizes, LLM series, and generation settings solidify the effectiveness, robustness, and generalizability of ARR. University of British Columbia · Feb 7, 2025
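As a concrete illustration of the entry above, the sketch below assembles a zero-shot ARR-style prompt for a multiple-choice question. The trigger wording paraphrases the abstract's three steps (analyze intent, retrieve information, reason step by step) and is not necessarily the paper's exact phrase; `build_prompt` is an illustrative helper.

```python
# Minimal sketch of zero-shot ARR-style prompting (trigger wording assumed).
ARR_TRIGGER = (
    "Let's analyze the intent of the question, find relevant information, "
    "and answer the question with step-by-step reasoning."
)

def build_prompt(question: str, options: list[str], trigger: str = ARR_TRIGGER) -> str:
    """Assemble a zero-shot prompt: question, lettered options, trigger."""
    lettered = "\n".join(f"({chr(65 + i)}) {opt}" for i, opt in enumerate(options))
    return f"Question: {question}\nOptions:\n{lettered}\n{trigger}\nAnswer:"

print(build_prompt("Which planet is largest?", ["Mars", "Jupiter", "Venus"]))
```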
5 ArrayBot: Reinforcement Learning for Generalizable Distributed Manipulation through Touch We present ArrayBot, a distributed manipulation system consisting of a 16 × 16 array of vertically sliding pillars integrated with tactile sensors, which can simultaneously support, perceive, and manipulate tabletop objects. Towards generalizable distributed manipulation, we leverage reinforcement learning (RL) algorithms for the automatic discovery of control policies. In the face of the massively redundant actions, we propose to reshape the action space by considering the spatially local action patch and the low-frequency actions in the frequency domain. With this reshaped action space, we train RL agents that can relocate diverse objects through tactile observations only. Surprisingly, we find that the discovered policy can not only generalize to unseen object shapes in the simulator but also transfer to the physical robot without any domain randomization. Leveraging the deployed policy, we present abundant real-world manipulation tasks, illustrating the vast potential of RL on ArrayBot for distributed manipulation. 8 authors · Jun 29, 2023
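The frequency-domain action reshaping mentioned above can be pictured in a few lines of NumPy/SciPy: a raw 16 × 16 command map is projected onto its lowest DCT modes, shrinking the effective action space. The DCT choice and the cutoff `k` are illustrative assumptions, not details taken from the paper.

```python
# Sketch of low-frequency action reshaping for a 16x16 pillar array.
import numpy as np
from scipy.fft import dctn, idctn

def low_frequency_action(raw_action: np.ndarray, k: int = 4) -> np.ndarray:
    """Keep only the k x k lowest-frequency DCT coefficients of the action map."""
    coeffs = dctn(raw_action, norm="ortho")
    mask = np.zeros_like(coeffs)
    mask[:k, :k] = 1.0                      # retain low-frequency modes only
    return idctn(coeffs * mask, norm="ortho")

rng = np.random.default_rng(0)
smooth = low_frequency_action(rng.normal(size=(16, 16)))
print(smooth.shape)  # (16, 16), but only 4x4 = 16 effective degrees of freedom
```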
1 Arrows of Math Reasoning Data Synthesis for Large Language Models: Diversity, Complexity and Correctness Enhancing the mathematical reasoning of large language models (LLMs) demands high-quality training data, yet conventional methods face critical challenges in scalability, cost, and data reliability. To address these limitations, we propose a novel program-assisted synthesis framework that systematically generates a high-quality mathematical corpus with guaranteed diversity, complexity, and correctness. This framework integrates mathematical knowledge systems and domain-specific tools to create executable programs. These programs are then translated into natural language problem-solution pairs and vetted by a bilateral validation mechanism that verifies solution correctness against program outputs and ensures program-problem consistency. We have generated 12.3 million such problem-solving triples. Experiments demonstrate that models fine-tuned on our data significantly improve their inference capabilities, achieving state-of-the-art performance on several benchmark datasets and showcasing the effectiveness of our synthesis approach. 7 authors · Aug 26, 2025
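A minimal sketch of the bilateral validation idea from the entry above: keep a synthesized item only if executing its program reproduces the stated answer. The program-problem consistency check is reduced here to a placeholder, and all names are illustrative.

```python
# Sketch of answer-side bilateral validation (illustrative names).
def bilateral_validate(program_src: str, stated_answer: str) -> bool:
    namespace: dict = {}
    exec(program_src, namespace)            # synthesized program defines solve()
    computed = str(namespace["solve"]())    # program output
    consistent = True                       # placeholder for program-problem check
    return computed == stated_answer and consistent

prog = "def solve():\n    return 3 * 7"
print(bilateral_validate(prog, "21"))       # True: solution matches program output
```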
- Arrow-Guided VLM: Enhancing Flowchart Understanding via Arrow Direction Encoding Flowcharts are indispensable tools in software design and business-process analysis, yet current vision-language models (VLMs) frequently misinterpret the directional arrows and graph topology that set these diagrams apart from natural images. We introduce a seven-stage pipeline grouped into three broader processes: (1) arrow-aware detection of nodes and arrow endpoints; (2) optical character recognition (OCR) to extract node text; and (3) construction of a structured prompt that guides the VLMs. Tested on a 90-question benchmark distilled from 30 annotated flowcharts, the method raises overall accuracy from 80% to 89% (+9 percentage points) without any task-specific fine-tuning. The gain is most pronounced for next-step queries (25/30 → 30/30; 100%, +17 pp); branch-result questions improve more modestly, and before-step questions remain difficult. A parallel evaluation with an LLM-as-a-Judge protocol shows the same trends, reinforcing the advantage of explicit arrow encoding. Limitations include dependence on detector and OCR precision, the small evaluation set, and residual errors at nodes with multiple incoming edges. Future work will enlarge the benchmark with synthetic and handwritten flowcharts and assess the approach on Business Process Model and Notation (BPMN) and Unified Modeling Language (UML). 3 authors · May 9, 2025
- Arrows of Time for Large Language Models We study the probabilistic modeling performed by Autoregressive Large Language Models (LLMs) through the lens of time directionality, addressing a question first raised in (Shannon, 1951). For large enough models, we empirically find a time asymmetry in their ability to learn natural language: a difference in the average log-perplexity when trying to predict the next token versus when trying to predict the previous one. This difference is at the same time subtle and very consistent across various modalities (language, model size, training time, ...). Theoretically, this is surprising: from an information-theoretic point of view, there should be no such difference. We provide a theoretical framework to explain how such an asymmetry can appear from sparsity and computational complexity considerations, and outline a number of perspectives opened by our results. 3 authors · Jan 30, 2024
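The quantity at stake can be probed with a few lines of `transformers` code. The paper compares models trained in the forward and backward directions; as a lightweight stand-in, this sketch scores a sentence and its token-reversed copy with a single pretrained causal LM, so the numbers are only illustrative.

```python
# Illustrative forward/backward log-perplexity gap with one causal LM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def avg_nll(ids: torch.Tensor) -> float:
    """Average next-token negative log-likelihood of a token sequence."""
    with torch.no_grad():
        return model(ids, labels=ids).loss.item()

ids = tok("The arrow of time points forward.", return_tensors="pt").input_ids
print(avg_nll(ids), avg_nll(ids.flip(dims=[1])))  # forward vs token-reversed NLL
```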
- From two dimensions to wire networks in a dice-lattice Josephson array We investigate Josephson arrays consisting of a dice-lattice network of superconducting weak links surrounding rhombic plaquettes of proximitized semiconductor. Josephson coupling of the weak links and electron density in the plaquettes are independently controlled by separate electrostatic gates. Applied magnetic flux results in an intricate pattern of switching currents associated with frustration, f. For depleted plaquettes, the switching current is nearly periodic in f, expected for a phase-only description, while occupied plaquettes yield a decreasing envelope of switching currents with increasing f. A model of flux dependence based on ballistic small-area junctions and diffusive large-area plaquettes yields excellent agreement with experiment. 8 authors · Oct 8, 2025
- ECGformer: Leveraging transformer for ECG heartbeat arrhythmia classification An arrhythmia, also known as a dysrhythmia, refers to an irregular heartbeat. There are various types of arrhythmias that can originate from different areas of the heart, resulting in either a rapid, slow, or irregular heartbeat. An electrocardiogram (ECG) is a vital diagnostic tool used to detect heart irregularities and abnormalities, allowing experts to analyze the heart's electrical signals to identify intricate patterns and deviations from the norm. Over the past few decades, numerous studies have been conducted to develop automated methods for classifying heartbeats based on ECG data. In recent years, deep learning has demonstrated exceptional capabilities in tackling various medical challenges, particularly with transformers as a model architecture for sequence processing. By leveraging transformers, we developed the ECGformer model for the classification of various arrhythmias present in electrocardiogram data. We assessed the suggested approach using the MIT-BIH and PTB datasets. ECG heartbeat arrhythmia classification results show that the proposed method is highly effective. 3 authors · Jan 6, 2024
- Automatic Tooth Arrangement with Joint Features of Point and Mesh Representations via Diffusion Probabilistic Models Tooth arrangement is a crucial step in orthodontics treatment, in which aligning teeth could improve overall well-being, enhance facial aesthetics, and boost self-confidence. To improve the efficiency of tooth arrangement and minimize errors associated with unreasonable designs by inexperienced practitioners, some deep learning-based tooth arrangement methods have been proposed. Currently, most existing approaches employ MLPs to model the nonlinear relationship between tooth features and transformation matrices to achieve tooth arrangement automatically. However, the limited datasets (which, to our knowledge, have not been made public) collected from clinical practice constrain the applicability of existing methods, making them inadequate for addressing diverse malocclusion issues. To address this challenge, we propose a general tooth arrangement neural network based on the diffusion probabilistic model. Conditioned on the features extracted from the dental model, the diffusion probabilistic model can learn the distribution of teeth transformation matrices from malocclusion to normal occlusion by gradually denoising from a random variable, thus more adeptly managing real orthodontic data. To take full advantage of effective features, we exploit both mesh and point cloud representations by designing different encoding networks to extract the tooth (local) and jaw (global) features, respectively. In addition to traditional metrics ADD, PA-ADD, CSA, and ME_rot, we propose a new evaluation metric based on dental arch curves to judge whether the generated teeth meet the individual normal occlusion. Experimental results demonstrate that our proposed method achieves state-of-the-art tooth alignment results and satisfactory occlusal relationships between dental arches. We will publish the code and dataset. 7 authors · Dec 22, 2023
- Seismic Arrival-time Picking on Distributed Acoustic Sensing Data using Semi-supervised Learning Distributed Acoustic Sensing (DAS) is an emerging technology for earthquake monitoring and subsurface imaging. The seismic signals recorded by DAS have several distinct characteristics, such as unknown coupling effects, strong anthropogenic noise, and ultra-dense spatial sampling. These aspects differ from conventional seismic data recorded by seismic networks, making it challenging to utilize DAS at present for seismic monitoring. New data analysis algorithms are needed to extract useful information from DAS data. Previous studies on conventional seismic data demonstrated that deep learning models could achieve performance close to human analysts in picking seismic phases. However, phase picking on DAS data is still a difficult problem due to the lack of manual labels. Further, the differences in mathematical structure between these two data formats, i.e., ultra-dense DAS arrays and sparse seismic networks, make model fine-tuning or transfer learning difficult to implement on DAS data. In this work, we design a new approach using semi-supervised learning to solve the phase-picking task on DAS arrays. We use a pre-trained PhaseNet model as a teacher network to generate noisy labels of P and S arrivals on DAS data and apply the Gaussian mixture model phase association (GaMMA) method to refine these noisy labels to build training datasets. We develop a new deep learning model, PhaseNet-DAS, to process the 2D spatial-temporal data of DAS arrays and train the model on DAS data. The new deep learning model achieves high picking accuracy and good earthquake detection performance. We then apply the model to process continuous data and build earthquake catalogs directly from DAS recordings. Our approach using semi-supervised learning provides a way to build effective deep learning models for DAS, which have the potential to improve earthquake monitoring using large-scale fiber networks. 6 authors · Feb 17, 2023
1 RealMAN: A Real-Recorded and Annotated Microphone Array Dataset for Dynamic Speech Enhancement and Localization The training of deep learning-based multichannel speech enhancement and source localization systems relies heavily on the simulation of room impulse response and multichannel diffuse noise, due to the lack of large-scale real-recorded datasets. However, the acoustic mismatch between simulated and real-world data could degrade the model performance when applied in real-world scenarios. To bridge this simulation-to-real gap, this paper presents a new relatively large-scale Real-recorded and annotated Microphone Array speech & Noise (RealMAN) dataset. The proposed dataset is valuable in two aspects: 1) benchmarking speech enhancement and localization algorithms in real scenarios; 2) offering a substantial amount of real-world training data for potentially improving the performance of real-world applications. Specifically, a 32-channel array with high-fidelity microphones is used for recording. A loudspeaker is used for playing source speech signals. A total of 83 hours of speech signals (48 hours for a static speaker and 35 hours for a moving speaker) are recorded in 32 different scenes, and 144 hours of background noise are recorded in 31 different scenes. Both speech and noise recording scenes cover various common indoor, outdoor, semi-outdoor and transportation environments, which enables the training of general-purpose speech enhancement and source localization networks. To obtain the task-specific annotations, the azimuth angle of the loudspeaker is annotated with an omnidirectional fisheye camera by automatically detecting the loudspeaker. The direct-path signal is set as the target clean speech for speech enhancement, which is obtained by filtering the source speech signal with an estimated direct-path propagation filter. 10 authors · Jun 28, 2024
1 Super-Directive Antenna Arrays: How Many Elements Do We Need? Super-directive antenna arrays have faced challenges in achieving high realized gains ever since their introduction in the academic literature. The primary challenges are high impedance mismatches and resistive losses, which become increasingly more dominant as the number of elements increases. Consequently, a critical limitation arises in determining the maximum number of elements that should be utilized to achieve super-directivity, particularly within dense array configurations. This paper addresses precisely this issue through an optimization study to design a super-directive antenna array with a maximum number of elements. An iterative approach is employed to increase the number of array elements while sustaining a satisfactory realized gain using the differential evolution (DE) algorithm. Thus, it is observed that super-directivity can be obtained in an array with a maximum of five elements. Our results indicate that the obtained unit array has a 67.20% higher realized gain than a uniform linear array with conventional excitation. For these reasons, these results make the proposed architecture a strong candidate for applications that require densely packed arrays, particularly in the context of massive multiple-input multiple-output (MIMO). 3 authors · Jan 17, 2024
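A toy version of the DE-based design loop described above: optimize element amplitudes and phases of a small, closely spaced linear array to maximize a directivity proxy at end-fire, using `scipy.optimize.differential_evolution`. The 0.2-wavelength spacing and the lossless array-factor objective are simplifying assumptions; the paper optimizes realized gain with a full electromagnetic model.

```python
# DE sketch: maximize end-fire directivity of a 5-element linear array.
import numpy as np
from scipy.optimize import differential_evolution

N, d = 5, 0.2                                  # elements, spacing in wavelengths
theta = np.linspace(0.0, np.pi, 721)           # polar angle grid

def neg_directivity(x: np.ndarray) -> float:
    w = x[:N] * np.exp(1j * x[N:])             # complex element excitations
    phase = 2j * np.pi * d * np.outer(np.cos(theta), np.arange(N))
    af = np.abs(np.exp(phase) @ w) ** 2        # array-factor power pattern
    total = np.sum(af * np.sin(theta)) * (theta[1] - theta[0])
    return -2.0 * af[0] / total                # minus directivity at end-fire

bounds = [(0.05, 1.0)] * N + [(-np.pi, np.pi)] * N
res = differential_evolution(neg_directivity, bounds, seed=1, maxiter=300)
print(f"end-fire directivity proxy: {-res.fun:.1f}")
```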
- Deep Synoptic Array Science: Searching for Long Duration Radio Transients with the DSA-110 We describe the design and commissioning tests for the DSA-110 Not-So-Fast Radio Burst (NSFRB) search pipeline, a 1.4 GHz image-plane single-pulse search sensitive to 134 ms–160.8 s radio bursts. Extending the pulse width range of the Fast Radio Burst (FRB) search by 3 orders of magnitude, the NSFRB search is sensitive to the recently-discovered Galactic Long Period Radio Transients (LPRTs). The NSFRB search operates in real-time, utilizing a custom GPU-accelerated search code, cerberus, implemented in Python with JAX. We summarize successful commissioning sensitivity tests with continuum sources and pulsar B0329+54, estimating the 6σ flux (fluence) threshold to be ~290 mJy (~40 Jy ms). Future tests of recovery of longer timescale transients, e.g. CHIME J1634+44, are planned to supplement injection testing and B0329+54 observations. An offline DSA-110 NSFRB Galactic Plane Survey was conducted to search for LPRTs, covering −3.5° < b < 5.7° and 141° < l < 225° (~770 square degrees) in Galactic coordinates. We estimate an upper limit Poissonian burst rate of ~1 hr⁻¹ per square degree (~7 hr⁻¹ per 3° × 3° survey grid cell), maximized across the inner |b| < 0.25° of the surveyed region. By imposing the ~290 mJy flux limit on two representative models (the magnetar plastic flow model and the White Dwarf-M Dwarf binary model), we reject with 95% confidence the presence of White Dwarf-M Dwarf binary LPRTs with periods between ~10-70 s within ~95% of the surveyed region. Combined with the prevalence of LPRTs in the Galactic Plane, our results motivate further consideration of both White Dwarf-M Dwarf binary models and isolated magnetar models. We will continue to explore novel LPRT search strategies during real-time operations, such as triggered periodicity searches and additional targeted surveys. 13 authors · Oct 20, 2025
- Streaming Sortformer: Speaker Cache-Based Online Speaker Diarization with Arrival-Time Ordering This paper presents a streaming extension for the Sortformer speaker diarization framework, whose key property is the arrival-time ordering of output speakers. The proposed approach employs an Arrival-Order Speaker Cache (AOSC) to store frame-level acoustic embeddings of previously observed speakers. Unlike conventional speaker-tracing buffers, AOSC orders embeddings by speaker index corresponding to their arrival time order, and is dynamically updated by selecting frames with the highest scores based on the model's past predictions. Notably, the number of stored embeddings per speaker is determined dynamically by the update mechanism, ensuring efficient cache utilization and precise speaker tracking. Experiments on benchmark datasets confirm the effectiveness and flexibility of our approach, even in low-latency setups. These results establish Streaming Sortformer as a robust solution for real-time multi-speaker tracking and a foundation for streaming multi-talker speech processing. 8 authors · Jul 24, 2025
- CasaGPT: Cuboid Arrangement and Scene Assembly for Interior Design We present a novel approach for indoor scene synthesis, which learns to arrange decomposed cuboid primitives to represent 3D objects within a scene. Unlike conventional methods that use bounding boxes to determine the placement and scale of 3D objects, our approach leverages cuboids as a straightforward yet highly effective alternative for modeling objects. This allows for compact scene generation while minimizing object intersections. Our approach, coined CasaGPT for Cuboid Arrangement and Scene Assembly, employs an autoregressive model to sequentially arrange cuboids, producing physically plausible scenes. By applying rejection sampling during the fine-tuning stage to filter out scenes with object collisions, our model further reduces intersections and enhances scene quality. Additionally, we introduce a refined dataset, 3DFRONT-NC, which eliminates significant noise present in the original dataset, 3D-FRONT. Extensive experiments on the 3D-FRONT dataset as well as our dataset demonstrate that our approach consistently outperforms the state-of-the-art methods, enhancing the realism of generated scenes, and providing a promising direction for 3D scene synthesis. 5 authors · Apr 28, 2025
- The Mini-SiTian Array: Light Curves Analysis of Asteroids The SiTian project, with its vast field of view, will become an ideal platform for asteroid scientific research. In this study, we develop a pipeline to analyze the photometry of asteroids and derive their periods from the data collected by the SiTian pathfinder project Mini-SiTian (MST). The pipeline is applied to the MST f02 region, a MST test region with a sky area of 2.29° × 1.53°. Rotation periods of 22 asteroids are derived from analysis of the obtained light curves. Among them, there are 8 asteroids available in the Asteroid Lightcurve Photometry Database (ALCDEF), and 6 of them with more photometric points (>200) have period parameters similar to those in ALCDEF. Additionally, the periods for 14 of these asteroids are newly obtained and are not listed in ALCDEF. This study demonstrates the feasibility of asteroid photometric research by the SiTian project. It shows that future observations from the SiTian project will provide even more photometry of asteroids, significantly increasing the number of available light curves. The potential vast photometric data of asteroids will help us to further understand the physics of asteroids, their material composition, and the formation and evolution of the solar system. 9 authors · Apr 2, 2025
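An illustrative period search of the kind used for asteroid light curves: Lomb-Scargle is a standard choice for unevenly sampled photometry, though the MST pipeline's actual method may differ. Note that an asteroid's rotation period is commonly twice the strongest photometric period, because shape-driven light curves are double-peaked.

```python
# Lomb-Scargle sketch on a synthetic, unevenly sampled asteroid light curve.
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0.0, 3.0, 300))               # days, uneven sampling
p_rot = 0.25                                          # true rotation period (d)
mag = 0.3 * np.sin(4 * np.pi * t / p_rot) + 0.02 * rng.normal(size=t.size)

freq, power = LombScargle(t, mag).autopower()
best_period = 1.0 / freq[np.argmax(power)]            # strongest photometric period
print(f"rotation period ~ {2 * best_period:.3f} d")   # x2 for double-peaked curve
```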
- The Mini-SiTian Array: real-bogus classification using deep learning The Mini-SiTian (MST) project is a pathfinder for China's next-generation large-scale time-domain survey, SiTian, aimed at discovering variable stars, transients, and explosive events. MST generates hundreds of thousands of transient alerts every night, approximately 99% of which are false alarms, posing a significant challenge to its scientific goals. To mitigate the impact of false positives, we propose a deep learning-based solution and systematically evaluate thirteen convolutional neural networks. The results show that ResNet achieves exceptional specificity (99.70%), EfficientNet achieves the highest recall rate (98.68%), and DenseNet provides balanced performance with a recall rate of 94.55% and specificity of 98.66%. Leveraging these complementary strengths, we developed a bagging-based ensemble classifier that integrates ResNet18, DenseNet121, and EfficientNet_B0 using a soft voting strategy. This classifier achieved the best AUC value (0.9961) among all models, with a recall rate of 95.37% and specificity of 99.25%. It has now been successfully deployed in the MST real-time data processing pipeline. Validation using 5,000 practically processed samples with a classification threshold of 0.798 showed that the classifier achieved 88.31% accuracy, 91.89% recall rate, and 99.82% specificity, confirming its effectiveness and robustness under real application conditions. 5 authors · Apr 2, 2025
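The ensemble step in the entry above is simple to sketch: average the per-model "real" probabilities and apply the reported 0.798 operating threshold. Model loading is schematic; any three classifiers producing probabilities would do.

```python
# Soft-voting sketch for real-bogus classification (threshold from abstract).
import numpy as np

def soft_vote(probs_resnet, probs_densenet, probs_efficientnet, thresh=0.798):
    """Average per-model 'real' probabilities and apply the threshold."""
    p = np.mean([probs_resnet, probs_densenet, probs_efficientnet], axis=0)
    return p, (p >= thresh).astype(int)     # 1 = real transient, 0 = bogus

p, label = soft_vote([0.91, 0.40], [0.88, 0.35], [0.95, 0.20])
print(p, label)                             # mean probabilities, then labels [1 0]
```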
- Multi-Cali Anything: Dense Feature Multi-Frame Structure-from-Motion for Large-Scale Camera Array Calibration Calibrating large-scale camera arrays, such as those in dome-based setups, is time-intensive and typically requires dedicated captures of known patterns. While extrinsics in such arrays are fixed due to the physical setup, intrinsics often vary across sessions due to factors like lens adjustments or temperature changes. In this paper, we propose a dense-feature-driven multi-frame calibration method that refines intrinsics directly from scene data, eliminating the necessity for additional calibration captures. Our approach enhances traditional Structure-from-Motion (SfM) pipelines by introducing an extrinsics regularization term to progressively align estimated extrinsics with ground-truth values, a dense feature reprojection term to reduce keypoint errors by minimizing reprojection loss in the feature space, and an intrinsics variance term for joint optimization across multiple frames. Experiments on the Multiface dataset show that our method achieves nearly the same precision as dedicated calibration processes, and significantly enhances intrinsics and 3D reconstruction accuracy. Fully compatible with existing SfM pipelines, our method provides an efficient and practical plug-and-play solution for large-scale camera setups. Our code is publicly available at: https://github.com/YJJfish/Multi-Cali-Anything 10 authors · Mar 2, 2025
- Political Compass or Spinning Arrow? Towards More Meaningful Evaluations for Values and Opinions in Large Language Models Much recent work seeks to evaluate values and opinions in large language models (LLMs) using multiple-choice surveys and questionnaires. Most of this work is motivated by concerns around real-world LLM applications. For example, politically-biased LLMs may subtly influence society when they are used by millions of people. Such real-world concerns, however, stand in stark contrast to the artificiality of current evaluations: real users do not typically ask LLMs survey questions. Motivated by this discrepancy, we challenge the prevailing constrained evaluation paradigm for values and opinions in LLMs and explore more realistic unconstrained evaluations. As a case study, we focus on the popular Political Compass Test (PCT). In a systematic review, we find that most prior work using the PCT forces models to comply with the PCT's multiple-choice format. We show that models give substantively different answers when not forced; that answers change depending on how models are forced; and that answers lack paraphrase robustness. Then, we demonstrate that models give different answers yet again in a more realistic open-ended answer setting. We distill these findings into recommendations and open challenges in evaluating values and opinions in LLMs. 7 authors · Feb 26, 2024
- Deep Learning Models for Arrhythmia Classification Using Stacked Time-frequency Scalogram Images from ECG Signals Electrocardiograms (ECGs), a medical monitoring technology recording cardiac activity, are widely used for diagnosing cardiac arrhythmia. The diagnosis is based on the analysis of the deformation of the signal shapes due to irregular heart rates associated with heart diseases. Due to the infeasibility of manual examination of large volumes of ECG data, this paper aims to propose an automated AI based system for ECG-based arrhythmia classification. To this end, a deep learning based solution has been proposed for ECG-based arrhythmia classification. Twelve-lead electrocardiograms (ECGs) of length 10 s from 45,152 individuals from the Shaoxing People's Hospital (SPH) dataset from PhysioNet with four different types of arrhythmias were used. The sampling frequency utilized was 500 Hz. Median filtering was used to preprocess the ECG signals. For every 1 s of ECG signal, the time-frequency (TF) scalogram was estimated and stacked row-wise to obtain a single image from 12 channels, resulting in 10 stacked TF scalograms for each ECG signal. These stacked TF scalograms are fed to the pretrained convolutional neural network (CNN), 1D CNN, and 1D CNN-LSTM (Long short-term memory) models, for arrhythmia classification. The fine-tuned CNN models obtained the best test accuracy of about 98% followed by 95% test accuracy for the basic CNN-LSTM in arrhythmia classification. 2 authors · Nov 30, 2023
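The feature construction above is easy to reproduce in outline: for each 1 s (500-sample) window, compute a scalogram per ECG lead and stack the 12 lead scalograms row-wise into one image. The Morlet wavelet and the scale grid below are illustrative choices; the paper's exact TF transform may differ.

```python
# Sketch of stacked time-frequency scalogram construction with PyWavelets.
import numpy as np
import pywt

fs, n_leads = 500, 12
ecg_1s = np.random.randn(n_leads, fs)          # stand-in for one ECG second

scales = np.arange(1, 65)
rows = []
for lead in ecg_1s:
    coeffs, _ = pywt.cwt(lead, scales, "morl", sampling_period=1.0 / fs)
    rows.append(np.abs(coeffs))                # 64 x 500 scalogram per lead
image = np.vstack(rows)                        # stacked: (12*64) x 500
print(image.shape)                             # (768, 500), one CNN input image
```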
- Quantum Monte Carlo simulations in the restricted Hilbert space of Rydberg atom arrays Rydberg atom arrays have emerged as a powerful platform to simulate a number of exotic quantum ground states and phase transitions. To verify these capabilities numerically, we develop a versatile quantum Monte Carlo sampling technique which operates in the reduced Hilbert space generated by enforcing the constraint of a Rydberg blockade. We use the framework of stochastic series expansion and show that in the restricted space, the configuration space of operator strings can be understood as a hard rod gas in d+1 dimensions. We use this mapping to develop cluster algorithms which can be visualized as various non-local movements of rods. We study the efficiency of each of our updates individually and collectively. To elucidate the utility of the algorithm, we show that it can efficiently generate the phase diagram of a Rydberg atom array, to temperatures much smaller than all energy scales involved, on a Kagomé link lattice. This is of broad interest as the presence of a Z₂ spin liquid has been hypothesized recently. 1 author · Sep 1, 2023
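A toy illustration of the restricted Hilbert space: a Rydberg configuration is retained only if no two blockaded sites are simultaneously excited. A 1D chain with nearest-neighbor blockade, a simplification of the paper's kagome-link geometry, already shows the characteristic Fibonacci-like reduction of the state space.

```python
# Enumerate blockade-allowed configurations of an 8-site Rydberg chain.
import itertools

def blockade_allowed(config: tuple) -> bool:
    """Reject configurations with two adjacent excitations (1s)."""
    return all(not (a and b) for a, b in zip(config, config[1:]))

n = 8
states = [c for c in itertools.product((0, 1), repeat=n) if blockade_allowed(c)]
print(len(states), "of", 2 ** n)   # 55 of 256: Fibonacci-like growth
```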
- Generating Diverse Indoor Furniture Arrangements We present a method for generating arrangements of indoor furniture from human-designed furniture layout data. Our method creates arrangements that target specified diversity, such as the total price of all furniture in the room and the number of pieces placed. To generate realistic furniture arrangements, we train a generative adversarial network (GAN) on human-designed layouts. To target specific diversity in the arrangements, we optimize the latent space of the GAN via a quality diversity algorithm to generate a diverse arrangement collection. Experiments show our approach discovers a set of arrangements that are similar to human-designed layouts but vary in price and number of furniture pieces. 6 authors · Jun 20, 2022
- Direction of arrival estimation for multiple sound sources using convolutional recurrent neural network This paper proposes a deep neural network for estimating the directions of arrival (DOA) of multiple sound sources. The proposed stacked convolutional and recurrent neural network (DOAnet) generates a spatial pseudo-spectrum (SPS) along with the DOA estimates in both azimuth and elevation. We avoid any explicit feature extraction step by using the magnitudes and phases of the spectrograms of all the channels as input to the network. The proposed DOAnet is evaluated by estimating the DOAs of multiple concurrently present sources in anechoic, matched and unmatched reverberant conditions. The results show that the proposed DOAnet is capable of estimating the number of sources and their respective DOAs with good precision and of generating SPS with a high signal-to-noise ratio. 3 authors · Oct 27, 2017
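DOAnet's input featurization, as described above, uses the magnitude and phase spectrograms of every channel with no hand-crafted DOA features. A minimal sketch, with illustrative FFT size and hop:

```python
# Stack magnitude and phase spectrograms of a multichannel signal.
import numpy as np
from scipy.signal import stft

fs, n_ch = 44100, 4
audio = np.random.randn(n_ch, fs)                     # 1 s, 4-channel stand-in

feats = []
for ch in audio:
    _, _, Z = stft(ch, fs=fs, nperseg=512, noverlap=256)
    feats.extend([np.abs(Z), np.angle(Z)])            # magnitude + phase per channel
X = np.stack(feats)                                   # (2*n_ch, freq_bins, frames)
print(X.shape)
```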
- European Pulsar Timing Array Limits On An Isotropic Stochastic Gravitational-Wave Background We present new limits on an isotropic stochastic gravitational-wave background (GWB) using a six-pulsar dataset spanning 18 yr of observations from the 2015 European Pulsar Timing Array data release. Performing a Bayesian analysis, we fit simultaneously for the intrinsic noise parameters for each pulsar, along with common correlated signals including clock and Solar System ephemeris errors, obtaining a robust 95% upper limit on the dimensionless strain amplitude A of the background of A < 3.0 × 10⁻¹⁵ at a reference frequency of 1 yr⁻¹ and a spectral index of 13/3, corresponding to a background from inspiralling supermassive black hole binaries, constraining the GW energy density to Ω_gw(f)h² < 1.1 × 10⁻⁹ at 2.8 nHz. We also present limits on the correlated power spectrum at a series of discrete frequencies, and show that our sensitivity to a fiducial isotropic GWB is highest at a frequency of ~5 × 10⁻⁹ Hz. Finally we discuss the implications of our analysis for the astrophysics of supermassive black hole binaries, and present 95% upper limits on the string tension, Gμ/c², characterising a background produced by a cosmic string network for a set of possible scenarios, and for a stochastic relic GWB. For a Nambu-Goto field theory cosmic string network, we set a limit Gμ/c² < 1.3 × 10⁻⁷, identical to that set by the Planck Collaboration, when combining Planck and high-ℓ Cosmic Microwave Background data from other experiments. For a stochastic relic background we set a limit of Ω_gw^relic(f)h² < 1.2 × 10⁻⁹, a factor of 9 improvement over the most stringent limits previously set by a pulsar timing array. 36 authors · Apr 14, 2015
9 Your Context Is Not an Array: Unveiling Random Access Limitations in Transformers Despite their recent successes, Transformer-based large language models show surprising failure modes. A well-known example of such failure modes is their inability to length-generalize: solving problem instances at inference time that are longer than those seen during training. In this work, we further explore the root cause of this failure by performing a detailed analysis of model behaviors on the simple parity task. Our analysis suggests that length generalization failures are intricately related to a model's inability to perform random memory accesses within its context window. We present supporting evidence for this hypothesis by demonstrating the effectiveness of methodologies that circumvent the need for indexing or that enable random token access indirectly, through content-based addressing. We further show where and how the failure to perform random memory access manifests through attention map visualizations. 3 authors · Aug 10, 2024
- Accelerated Bayesian Inference for Pulsar Timing Arrays: Normalizing Flows for Rapid Model Comparison Across Stochastic Gravitational-Wave Background Sources The recent detection of nanohertz stochastic gravitational-wave backgrounds (SGWBs) by pulsar timing arrays (PTAs) promises unique insights into astrophysical and cosmological origins. However, traditional Markov Chain Monte Carlo (MCMC) approaches become prohibitively expensive for large datasets. We employ a normalizing flow (NF)-based machine learning framework to accelerate Bayesian inference in PTA analyses. For the first time, we perform Bayesian model comparison across SGWB source models in the framework of machine learning by training NF architectures on the PTA dataset (NANOGrav 15-year) and enabling direct evidence estimation via learned harmonic mean estimators. Our examples include 10 conventional SGWB source models such as supermassive black hole binaries, power-law spectrum, cosmic strings, domain walls, scalar-induced GWs, first-order phase transitions, and dual scenario/inflationary gravitational wave. Our approach jointly infers 20 red noise parameters and 2 SGWB parameters per model in ~20 hours (including training), compared to ~10 days with MCMC. Critically, the NF method preserves rigorous model selection accuracy, with small Hellinger distances (≲0.3) relative to MCMC posteriors, and reproduces MCMC-based Bayes factors across all tested scenarios. This scalable technique for SGWB source comparison will be essential for future PTA expansions and next-generation arrays such as the SKA, offering orders-of-magnitude efficiency gains without sacrificing physical interpretability. 2 authors · Apr 5, 2025
- The Right Time Matters: Data Arrangement Affects Zero-Shot Generalization in Instruction Tuning Understanding alignment techniques begins with comprehending zero-shot generalization brought by instruction tuning, but the underlying mechanism remains poorly understood. Existing work has largely been confined to the task level, without considering that tasks are artificially defined and, to LLMs, merely consist of tokens and representations. To bridge this gap, we investigate zero-shot generalization from the perspective of the data itself. We first demonstrate that zero-shot generalization happens very early during instruction tuning, with loss serving as a stable indicator. Next, we investigate training data arrangement through similarity and granularity perspectives, confirming that the timing of exposure to certain training examples may greatly facilitate generalization on unseen tasks. Finally, we propose a more grounded training data arrangement framework, Test-centric Multi-turn Arrangement, and show its effectiveness in promoting continual learning and further loss reduction. For the first time, we show that zero-shot generalization during instruction tuning is a form of similarity-based generalization between training and test data at the instance level. Our code is released at https://github.com/thunlp/Dynamics-of-Zero-Shot-Generalization. 13 authors · Jun 17, 2024
- Lay-A-Scene: Personalized 3D Object Arrangement Using Text-to-Image Priors Generating 3D visual scenes is at the forefront of visual generative AI, but current 3D generation techniques struggle with generating scenes with multiple high-resolution objects. Here we introduce Lay-A-Scene, which solves the task of Open-set 3D Object Arrangement, effectively arranging unseen objects. Given a set of 3D objects, the task is to find a plausible arrangement of these objects in a scene. We address this task by leveraging pre-trained text-to-image models. We personalize the model and explain how to generate images of a scene that contains multiple predefined objects without neglecting any of them. Then, we describe how to infer the 3D poses and arrangement of objects from a 2D generated image by finding a consistent projection of objects onto the 2D scene. We evaluate the quality of Lay-A-Scene using 3D objects from Objaverse and human raters and find that it often generates coherent and feasible 3D object arrangements. 6 authors · Jun 2, 2024
- Quantum simulation of generic spin exchange models in Floquet-engineered Rydberg atom arrays Although quantum simulation can give insight into elusive or intractable physical phenomena, many quantum simulators are unavoidably limited in the models they mimic. Such is also the case for atom arrays interacting via Rydberg states - a platform potentially capable of simulating any kind of spin exchange model, albeit with currently unattainable experimental capabilities. Here, we propose a new route towards simulating generic spin exchange Hamiltonians in atom arrays, using Floquet engineering with both global and local control. To demonstrate the versatility and applicability of our approach, we numerically investigate the generation of several spin exchange models which have yet to be realized in atom arrays, using only previously-demonstrated experimental capabilities. Our proposed scheme can be readily explored in many existing setups, providing a path to investigate a large class of exotic quantum spin models. 5 authors · Jun 12, 2023
- TinyML Design Contest for Life-Threatening Ventricular Arrhythmia Detection The first ACM/IEEE TinyML Design Contest (TDC) held at the 41st International Conference on Computer-Aided Design (ICCAD) in 2022 is a challenging, multi-month, research and development competition. TDC'22 focuses on real-world medical problems that require the innovation and implementation of artificial intelligence/machine learning (AI/ML) algorithms on implantable devices. The challenge problem of TDC'22 is to develop a novel AI/ML-based real-time detection algorithm for life-threatening ventricular arrhythmia over low-power microcontrollers utilized in Implantable Cardioverter-Defibrillators (ICDs). The dataset contains more than 38,000 5-second intracardiac electrograms (IEGMs) segments over 8 different types of rhythm from 90 subjects. The dedicated hardware platform is NUCLEO-L432KC manufactured by STMicroelectronics. TDC'22, which is open to multi-person teams world-wide, attracted more than 150 teams from over 50 organizations. This paper first presents the medical problem, dataset, and evaluation procedure in detail. It further demonstrates and discusses the designs developed by the leading teams as well as representative results. This paper concludes with the direction of improvement for the future TinyML design for health monitoring applications. 7 authors · May 8, 2023
- Size and Shape Constraints of (486958) Arrokoth from Stellar Occultations We present the results from four stellar occultations by (486958) Arrokoth, the flyby target of the New Horizons extended mission. Three of the four efforts led to positive detections of the body, and all constrained the presence of rings and other debris, finding none. Twenty-five mobile stations were deployed for 2017 June 3 and augmented by fixed telescopes. There were no positive detections from this effort. The event on 2017 July 10 was observed by SOFIA with one very short chord. Twenty-four deployed stations on 2017 July 17 resulted in five chords that clearly showed a complicated shape consistent with a contact binary with rough dimensions of 20 by 30 km for the overall outline. A visible albedo of 10% was derived from these data. Twenty-two systems were deployed for the fourth event on 2018 Aug 4 and resulted in two chords. The combination of the occultation data and the flyby results provides a significant refinement of the rotation period, now estimated to be 15.9380 ± 0.0005 hours. The occultation data also provided high-precision astrometric constraints on the position of the object that were crucial for supporting the navigation for the New Horizons flyby. This work demonstrates an effective method for obtaining detailed size and shape information and probing for rings and dust on distant Kuiper Belt objects as well as being an important source of positional data that can aid in spacecraft navigation that is particularly useful for small and distant bodies. 133 authors · Dec 31, 2019
- Meeting Transcription Using Virtual Microphone Arrays We describe a system that generates speaker-annotated transcripts of meetings by using a virtual microphone array, a set of spatially distributed asynchronous recording devices such as laptops and mobile phones. The system is composed of continuous audio stream alignment, blind beamforming, speech recognition, speaker diarization using prior speaker information, and system combination. When utilizing seven input audio streams, our system achieves a word error rate (WER) of 22.3% and comes within 3% of the close-talking microphone WER on the non-overlapping speech segments. The speaker-attributed WER (SAWER) is 26.7%. The relative gains in SAWER over the single-device system are 14.8%, 20.3%, and 22.4% for three, five, and seven microphones, respectively. The presented system achieves a 13.6% diarization error rate when 10% of the speech duration contains more than one speaker. The contribution of each component to the overall performance is also investigated, and we validate the system with experiments on the NIST RT-07 conference meeting test set. 7 authors · May 3, 2019
- PhaseNet: A Deep-Neural-Network-Based Seismic Arrival Time Picking Method As the number of seismic sensors grows, it is becoming increasingly difficult for analysts to pick seismic phases manually and comprehensively, yet such efforts are fundamental to earthquake monitoring. Despite years of improvements in automatic phase picking, it is difficult to match the performance of experienced analysts. A more subtle issue is that different seismic analysts may pick phases differently, which can introduce bias into earthquake locations. We present a deep-neural-network-based arrival-time picking method called "PhaseNet" that picks the arrival times of both P and S waves. Deep neural networks have recently made rapid progress in feature learning, and with sufficient training, have achieved super-human performance in many applications. PhaseNet uses three-component seismic waveforms as input and generates probability distributions of P arrivals, S arrivals, and noise as output. We engineer PhaseNet such that peaks in probability provide accurate arrival times for both P and S waves, and have the potential to increase the number of S-wave observations dramatically over what is currently available. This will enable both improved locations and improved shear wave velocity models. PhaseNet is trained on the prodigious data set of analyst-labeled P and S arrival times available from the Northern California Earthquake Data Center. The dataset we use contains more than seven million waveform samples extracted from over thirty years of earthquake recordings. We demonstrate that PhaseNet achieves much higher picking accuracy and recall rate than existing methods. 2 authors · Mar 8, 2018
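Turning PhaseNet-style per-sample class probabilities into discrete picks is a simple post-processing step: local maxima of the P (or S) probability trace above a threshold become arrival times. The threshold and minimum peak spacing below are illustrative values.

```python
# Peak picking on a PhaseNet-style probability trace.
import numpy as np
from scipy.signal import find_peaks

fs = 100.0                                        # samples per second
t = np.arange(3000) / fs
p_prob = np.exp(-0.5 * ((t - 12.3) / 0.1) ** 2)   # stand-in P probability trace

idx, props = find_peaks(p_prob, height=0.5, distance=int(1.0 * fs))
for i, h in zip(idx, props["peak_heights"]):
    print(f"P pick at {i / fs:.2f} s (prob {h:.2f})")
```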
- Searching For Anisotropic Gravitational-wave Backgrounds Using Pulsar Timing Arrays We present the results of simulated injections testing the first Bayesian search-pipeline capable of investigating the angular-structure of a gravitational-wave (GW) background influencing pulsar signals. A stochastic background of GWs from the incoherent superposition of many inspiraling supermassive black hole binaries at nHz frequencies is likely to be the dominant GW signal detectable by pulsar timing arrays (PTAs). Even though one might expect a background composed of a high-redshift cosmological population of sources to be fairly isotropic, deviations from isotropy may be indicative of local GW hotspots or some form of continuous anisotropy in the angular-distribution of GW-power. A GWB induces time-of-arrival deviations in pulsar signals which are correlated between separated pulsars. In an isotropic background this cross-correlation follows a distinctive relationship, known as the Hellings and Downs curve, that depends only on the angular separation of the pulsars. If the background is anisotropic, the cross-correlation is different, but predictable, and also depends on the absolute position of the pulsars. By simulating datasets containing GWBs with various anisotropic configurations, we have explored the prospects for constraining anisotropy using near future data. We find that at moderate to high signal to noise ratio the assumption of isotropy is no longer an appropriate description of the simulated background. Furthermore, we can recover the nature of the injected anisotropy in a Bayesian parameter-estimation search, and propose a prior on the anisotropy search-space motivated by the physicality of the implied distribution of sources. 2 authors · Jun 23, 2013
1 Classical Sorting Algorithms as a Model of Morphogenesis: self-sorting arrays reveal unexpected competencies in a minimal model of basal intelligence The emerging field of Diverse Intelligence seeks to identify, formalize, and understand commonalities in behavioral competencies across a wide range of implementations. Especially interesting are simple systems that provide unexpected examples of memory, decision-making, or problem-solving in substrates that at first glance do not appear to be complex enough to implement such capabilities. We seek to develop tools to help understand the minimal requirements for such capabilities, and to learn to recognize and predict basal forms of intelligence in unconventional substrates. Here, we apply novel analyses to the behavior of classical sorting algorithms, short pieces of code which have been studied for many decades. To study these sorting algorithms as a model of biological morphogenesis and its competencies, we break two formerly-ubiquitous assumptions: top-down control (instead, showing how each element within an array of numbers can exert minimal agency and implement sorting policies from the bottom up), and fully reliable hardware (instead, allowing some of the elements to be "damaged" and fail to execute the algorithm). We quantitatively characterize sorting activity as the traversal of a problem space, showing that arrays of autonomous elements sort themselves more reliably and robustly than traditional implementations in the presence of errors. Moreover, we find the ability to temporarily reduce progress in order to navigate around a defect, and unexpected clustering behavior among the elements in chimeric arrays whose elements follow one of two different algorithms. The discovery of emergent problem-solving capacities in simple, familiar algorithms contributes a new perspective to the field of Diverse Intelligence, showing how basal forms of intelligence can emerge in simple systems without being explicitly encoded in their underlying mechanics. 3 authors · Dec 15, 2023
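A toy version of the bottom-up setting above: each array cell runs a local "swap with my left neighbor if we are out of order" policy, and some cells are damaged and never act. The bubble-sort-like rule is an illustrative stand-in, not the paper's exact algorithms; here a single damaged cell visibly partitions the array, the kind of defect the paper studies navigation around.

```python
# Bottom-up sorting with autonomous cells and a damaged element.
import random

def self_sort(values, damaged, sweeps=200):
    a = list(values)
    for _ in range(sweeps):
        i = random.randrange(1, len(a))        # a random cell wakes up
        if i in damaged or (i - 1) in damaged:
            continue                           # damaged cells never execute
        if a[i - 1] > a[i]:
            a[i - 1], a[i] = a[i], a[i - 1]    # purely local decision
    return a

random.seed(0)
print(self_sort([5, 3, 8, 1, 9, 2, 7], damaged={3}))
```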
1 Empirical Modeling of Variance in Medium Frequency R-Mode Time-of-Arrival Measurements The R-Mode system, an advanced terrestrial integrated navigation system, is designed to address the vulnerabilities of global navigation satellite systems (GNSS) and explore the potential of a complementary navigation system. This study aims to enhance the accuracy of performance simulation for the medium frequency (MF) R-Mode system by modeling the variance of time-of-arrival (TOA) measurements based on actual data. Drawing inspiration from the method used to calculate the standard deviation of time-of-reception (TOR) measurements in Loran, we adapted and applied this approach to the MF R-Mode system. Data were collected from transmitters in Palmi and Chungju, South Korea, and the parameters for modeling the variance of TOA were estimated. 2 authors · Aug 31, 2023
1 Moving Object Classification with a Sub-6 GHz Massive MIMO Array using Real Data Classification between different activities in an indoor environment using wireless signals is an emerging technology for various applications, including intrusion detection, patient care, and smart home. Researchers have shown different methods to classify activities and their potential benefits by utilizing WiFi signals. In this paper, we analyze classification of moving objects by employing machine learning on real data from a massive multi-input-multi-output (MIMO) system in an indoor environment. We conduct measurements for different activities in both line-of-sight and non line-of-sight scenarios with a massive MIMO testbed operating at 3.7 GHz. We propose algorithms that exploit amplitude- and phase-based features for the classification task. For the considered setup, we benchmark the classification performance and show that we can achieve up to 98% accuracy using real massive MIMO data, even with a small number of experiments. Furthermore, we demonstrate the gain in performance results with a massive MIMO system as compared with that of a limited number of antennas such as in WiFi devices. 5 authors · Feb 9, 2021
1 Pulsed Schlieren Imaging of Ultrasonic Haptics and Levitation using Phased Arrays Ultrasonic acoustic fields have recently been used to generate haptic effects on the human skin as well as to levitate small sub-wavelength size particles. Schlieren imaging and background-oriented schlieren techniques can be used for acoustic wave pattern and beam shape visualization. These techniques exploit variations in the refractive index of a propagation medium by applying refractive optics or cross-correlation algorithms of photographs of illuminated background patterns. Here both background-oriented and traditional schlieren systems are used to visualize the regions of the acoustic power involved in creating dynamic haptic sensations and dynamic levitation traps. We demonstrate for the first time the application of background-oriented schlieren for imaging ultrasonic fields in air. We detail our imaging apparatus and present improved algorithms used to visualize these phenomena that we have produced using multiple phased arrays. Moreover, to improve imaging, we leverage an electronically controlled, high-output LED which is pulsed in synchrony with the ultrasonic carrier frequency. 5 authors · Sep 29, 2018
- Follow the curvature of viscoelastic stress: Insights into the steady arrowhead structure Focusing on simulated dilute polymer solutions, this letter investigates the interactions between flow structures and organized polymer stress sheets for the steady arrowhead coherent structure in a two-dimensional periodic channel flow. Formulating the problem in a frame of reference moving with the arrowhead velocity, streamlines, which are also pathlines in this frame, enable the identification of two distinct topological regions linked to two stagnation points. The streamlines help connect the spatial distribution of polymer stress within the sheets and the dynamics of polymers transported by the flow. Using stresslines, lines parallel to the eigenvectors of polymer stress, a novel formulation of the viscoelastic stress term in the momentum transport equation proposes a more intuitive interpretation of the relation between the curvature of the stresslines, and the variation of stress along these lines, with the local flow topology. An approximation of this formulation is shown to explain the pressure jump observed in the arrowhead structure as a function of the local curvature of the polymer stress sheet. 3 authors · Aug 29, 2025
- Reconstruction of inclined extensive air showers using radio signals: from arrival times and amplitudes to direction and energy Radio detection is now an established technique for the study of ultra-high-energy (UHE) cosmic rays with energies above ~10¹⁷ eV. The next-generation of radio experiments aims to extend this technique to the observation of UHE earth-skimming neutrinos, which requires the detection of very inclined extensive air showers (EAS). In this article we present a new reconstruction method for the arrival direction and the energy of EAS. It combines a point-source-like description of the radio wavefront with a phenomenological model: the Angular Distribution Function (ADF). The ADF describes the angular distribution of the radio signal amplitude in the 50-200 MHz frequency range, with a particular focus on the Cherenkov angle, a crucial feature of the radio amplitude pattern. The method is applicable to showers with zenith angles larger than 60°, and in principle up to neutrino-induced showers with up-going trajectories. It is tested here on a simulated data set of EAS induced by cosmic rays. A resolution better than 4 arc-minutes (0.07°) is achieved on arrival direction, as well as an intrinsic resolution of 5% on the electromagnetic energy, and around 15% on the primary energy. 7 authors · Apr 25, 2025
- Statistical selection of high-redshift, neutral-hydrogen-rich, lensed galaxies with the Square Kilometre Array Deep wide spectral line surveys with the Square Kilometre Array (SKA) will expand the cosmic frontiers of neutral atomic hydrogen (HI) in galaxies. However, at cosmologically significant redshifts (z ≳ 0.5), detections will typically be spatially unresolved and limited to the highest mass systems. Gravitational lensing could potentially alleviate these limitations, enabling lower mass systems to be studied at higher redshift and spatially resolved dynamical studies of some HI discs. Additionally, lensed HI systems would select foreground dark matter haloes using a different, more extended baryonic tracer compared to other lens surveys. This may result in a wider selected range of foreground dark matter halo properties, such as the concentration parameter. This paper uses the distortion of the observed HI mass function (HIMF) produced by strong gravitational lensing to find a flux density criterion for selecting lensed HI sources in future SKA-Mid spectral line surveys. This selection approach could yield lensed HI source densities in the range of ~0.1-10 galaxies per square degree out to a redshift of z ≃ 3 covered by SKA-MID Band 1. Although the sample sizes are modest, even with the proposed SKA-Mid surveys, the selection approach is straightforward and should have a 50% efficiency without any additional information, such as low-impact-factor or lower-redshift massive galaxies. The efficiency of selecting high-redshift, neutral-hydrogen-rich, lensed galaxies should then be greatly enhanced by using SKA-MID data in concert with the Vera C. Rubin Large Survey of Space and Time. 2 authors · Feb 11, 2025
- Rearrangement of single atoms in a 2000-site optical tweezers array at cryogenic temperatures We report on the trapping of single rubidium atoms in large arrays of optical tweezers comprising up to 2088 sites in a cryogenic environment at 6 K. Our approach relies on the use of microscope objectives that are in-vacuum but at room temperature, in combination with windowless thermal shields into which the objectives are protruding to ensure a cryogenic environment for the trapped atoms. To achieve enough optical power for efficient trapping, we combine two lasers at slightly different wavelengths. We discuss the performance and limitations of our design. Finally, we demonstrate atom-by-atom rearrangement of an 828-atom target array using moving optical tweezers controlled by a field-programmable gate array. 15 authors · May 29, 2024
- A crowdsourced dataset of aerial images with annotated solar photovoltaic arrays and installation metadata Photovoltaic (PV) energy generation plays a crucial role in the energy transition. Small-scale PV installations are deployed at an unprecedented pace, and their integration into the grid can be challenging since public authorities often lack quality data about them. Overhead imagery is increasingly used to improve the knowledge of residential PV installations with machine learning models capable of automatically mapping these installations. However, these models cannot be easily transferred from one region or data source to another due to differences in image acquisition. To address this issue known as domain shift and foster the development of PV array mapping pipelines, we propose a dataset containing aerial images, annotations, and segmentation masks. We provide installation metadata for more than 28,000 installations. We provide ground truth segmentation masks for 13,000 installations, including 7,000 with annotations for two different image providers. Finally, we provide installation metadata that matches the annotation for more than 8,000 installations. Dataset applications include end-to-end PV registry construction, robust PV installations mapping, and analysis of crowdsourced datasets. 7 authors · Sep 8, 2022
- Hybrid Digital and Analog Beamforming Design for Large-Scale Antenna Arrays The potential of using millimeter wave (mmWave) frequencies for future wireless cellular communication systems has motivated the study of large-scale antenna arrays for achieving highly directional beamforming. However, conventional fully digital beamforming methods, which require one radio frequency (RF) chain per antenna element, are not viable for large-scale antenna arrays due to the high cost and high power consumption of RF chain components at high frequencies. To address the challenge of this hardware limitation, this paper considers a hybrid beamforming architecture in which the overall beamformer consists of a low-dimensional digital beamformer followed by an RF beamformer implemented using analog phase shifters. Our aim is to show that such an architecture can approach the performance of a fully digital scheme with far fewer RF chains. Specifically, this paper establishes that if the number of RF chains is twice the total number of data streams, the hybrid beamforming structure can realize any fully digital beamformer exactly, regardless of the number of antenna elements. For cases with fewer RF chains, this paper further considers the hybrid beamforming design problem for both the transmission scenario of a point-to-point multiple-input multiple-output (MIMO) system and a downlink multiuser multiple-input single-output (MU-MISO) system. For each scenario, we propose a heuristic hybrid beamforming design that achieves performance close to that of the fully digital beamforming baseline. Finally, the proposed algorithms are modified for the more practical setting in which only finite resolution phase shifters are available. Numerical simulations show that the proposed schemes are effective even when phase shifters with very low resolution are used. 2 authors · Jan 25, 2016 1
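The sufficiency result (N_RF = 2 N_s RF chains realize any fully digital beamformer exactly) admits a short constructive demonstration. The NumPy sketch below uses one simple per-column construction, a common digital gain b and analog phases arg(z) ± arccos(|z|/2), as an illustration of why the bound holds; it is not necessarily the paper's exact algorithm.

```python
# Minimal sketch: N_RF = 2*N_s phase-only chains suffice to realize any fully
# digital precoder exactly. Assumes each precoder column is nonzero.
import numpy as np

def hybrid_decompose(V):
    """V: (N_t, N_s) precoder -> F_RF (N_t, 2*N_s) unit-modulus, F_BB (2*N_s, N_s)."""
    Nt, Ns = V.shape
    F_RF = np.zeros((Nt, 2 * Ns), dtype=complex)
    F_BB = np.zeros((2 * Ns, Ns), dtype=complex)
    for k in range(Ns):
        b = np.max(np.abs(V[:, k])) / 2.0            # common digital gain for this column
        z = V[:, k] / b                              # now |z| <= 2 entrywise
        delta = np.arccos(np.clip(np.abs(z) / 2.0, 0.0, 1.0))
        F_RF[:, 2 * k]     = np.exp(1j * (np.angle(z) + delta))
        F_RF[:, 2 * k + 1] = np.exp(1j * (np.angle(z) - delta))
        F_BB[2 * k, k] = F_BB[2 * k + 1, k] = b      # two RF chains per data stream
    return F_RF, F_BB

rng = np.random.default_rng(0)
V = (rng.normal(size=(64, 4)) + 1j * rng.normal(size=(64, 4))) / np.sqrt(2)
F_RF, F_BB = hybrid_decompose(V)
print(np.max(np.abs(F_RF @ F_BB - V)))   # ~0 (machine precision): exact reconstruction
print(np.allclose(np.abs(F_RF), 1.0))    # analog stage is phase-only
```

The identity e^{j(θ+δ)} + e^{j(θ−δ)} = 2 cos(δ) e^{jθ} is what lets two unit-modulus phase shifters synthesize any complex gain of magnitude at most 2.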
- Understanding the gravitational-wave Hellings and Downs curve for pulsar timing arrays in terms of sound and electromagnetic waves Searches for stochastic gravitational-wave backgrounds using pulsar timing arrays look for correlations in the timing residuals induced by the background across the pulsars in the array. The correlation signature of an isotropic, unpolarized gravitational-wave background predicted by general relativity follows the so-called Hellings and Downs curve, which is a relatively simple function of the angle between a pair of Earth-pulsar baselines. In this paper, we give a pedagogical discussion of the Hellings and Downs curve for pulsar timing arrays, considering simpler analogous scenarios involving sound and electromagnetic waves. We calculate Hellings-and-Downs-type functions for these two scenarios and develop a framework suitable for doing more general correlation calculations. 2 authors · Dec 2, 2014
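For reference, the Hellings and Downs curve discussed here has a simple closed form in terms of x = (1 − cos ζ)/2, where ζ is the angular separation between the two Earth-pulsar baselines. The snippet below evaluates the common normalization in which distinct pulsars at zero separation have correlation 0.5 (the extra pulsar-term contribution for a pulsar correlated with itself is omitted).

```python
# Closed-form Hellings and Downs curve for distinct pulsars separated by zeta.
import numpy as np

def hellings_downs(zeta):
    """Normalized timing-residual correlation vs. angular separation (radians)."""
    x = (1.0 - np.cos(zeta)) / 2.0
    x = np.where(x == 0.0, 1e-300, x)   # x*log(x) -> 0 as x -> 0
    return 0.5 - x / 4.0 + 1.5 * x * np.log(x)

for deg in (0, 30, 60, 90, 120, 150, 180):
    c = hellings_downs(np.radians(deg))
    print(f"zeta = {deg:3d} deg  ->  correlation = {c:+.3f}")
```

The curve starts at 0.5, crosses zero near 49 degrees, reaches its minimum near 82.5 degrees, and climbs back to 0.25 at 180 degrees.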
- Characterising gravitational wave stochastic background anisotropy with Pulsar Timing Arrays Detecting a stochastic gravitational wave background, particularly radiation from individually unresolvable super-massive black hole binary systems, is one of the primary targets for Pulsar Timing Arrays. Increasingly stringent upper limits are being set on these signals under the assumption that the background radiation is isotropic. However, some level of anisotropy may be present and the characterisation of the power at different angular scales carries important information. We show that the standard analysis for isotropic backgrounds can be generalised in a conceptually straightforward way to the case of generic anisotropic background radiation by decomposing the angular distribution of the gravitational wave power on the sky into multipole moments. We introduce the concept of generalised overlap reduction functions which characterise the effect of the anisotropy multipoles on the correlation of the timing residuals from the pulsars timed by a Pulsar Timing Array. In a search for a signal characterised by a generic anisotropy, the generalised overlap reduction functions play the role of the so-called Hellings and Downs curve used for isotropic radiation. We compute the generalised overlap reduction functions for a generic level of anisotropy and Pulsar Timing Array configuration. We also provide an order of magnitude estimate of the level of anisotropy that can be expected in the background generated by super-massive black hole binary systems. 4 authors · Jun 23, 2013
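The multipole decomposition at the heart of this approach writes the GW power on the sky as P(n) = Σ_lm c_lm Y_lm(n). As a hedged, self-contained illustration (in practice one would use healpy on a pixelized sky), the snippet below recovers the c_lm of a dipole-modulated power map by brute-force quadrature.

```python
# Decompose a toy anisotropic GW power map into multipole moments c_lm.
import numpy as np
from scipy.special import sph_harm   # sph_harm(m, l, azimuth, polar)

n_th, n_ph = 200, 400
theta = (np.arange(n_th) + 0.5) * np.pi / n_th       # polar midpoints
phi = np.arange(n_ph) * 2.0 * np.pi / n_ph           # azimuth (no duplicate endpoint)
PH, TH = np.meshgrid(phi, theta)
dOmega = np.sin(TH) * (np.pi / n_th) * (2.0 * np.pi / n_ph)

P = 1.0 + 0.3 * np.cos(TH)                           # monopole + 30% dipole along z

for l in range(3):
    for m in range(-l, l + 1):
        c_lm = np.sum(P * np.conj(sph_harm(m, l, PH, TH)) * dOmega)
        if abs(c_lm) > 1e-3:
            print(f"c_({l},{m:+d}) = {c_lm.real:.4f}")   # imaginary parts are ~0 here
```

Only c_(0,0) = sqrt(4π) and c_(1,0) = 0.3·sqrt(4π/3) survive, as expected for a pure dipole modulation; in the paper, each such multipole gets its own generalised overlap reduction function.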
- Unlocking Potential in Pre-Trained Music Language Models for Versatile Multi-Track Music Arrangement Large language models have shown significant capabilities across various domains, including symbolic music generation. However, leveraging these pre-trained models for controllable music arrangement tasks, each requiring different forms of musical information as control, remains a novel challenge. In this paper, we propose a unified sequence-to-sequence framework that enables the fine-tuning of a symbolic music language model for multiple multi-track arrangement tasks, including band arrangement, piano reduction, drum arrangement, and voice separation. Our experiments demonstrate that the proposed approach consistently achieves higher musical quality compared to task-specific baselines across all four tasks. Furthermore, through additional experiments on probing analysis, we show that the pre-training phase equips the model with essential knowledge to understand musical conditions, which is hard to acquire solely through task-specific fine-tuning. 5 authors · Aug 27, 2024
- Automatic channel selection and spatial feature integration for multi-channel speech recognition across various array topologies Automatic Speech Recognition (ASR) has shown remarkable progress, yet it still faces challenges in real-world distant scenarios across various array topologies, each with multiple recording devices. The focal point of the CHiME-7 Distant ASR task is to devise a unified system capable of generalizing across various array topologies with multiple recording devices and offering reliable recognition performance in real-world environments. Addressing this task, we introduce an ASR system that demonstrates exceptional performance across various array topologies. First, we propose two attention-based automatic channel selection modules to select the most advantageous subset of multi-channel signals from multiple recording devices for each utterance. Furthermore, we introduce inter-channel spatial features to augment the effectiveness of multi-frame cross-channel attention, strengthening its awareness of spatial information. Finally, we propose a multi-layer convolution fusion module drawing inspiration from the U-Net architecture to integrate the multi-channel output into a single-channel output. Experimental results on the CHiME-7 corpus with oracle segmentation demonstrate that the improvements introduced in our proposed ASR system lead to a relative reduction of 40.1% in the Macro Diarization Attributed Word Error Rate (DA-WER) when compared to the baseline ASR system on the Eval sets. 6 authors · Dec 15, 2023
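As a rough sketch of what an attention-based channel selection module can look like (shapes, pooling, and the top-k rule are assumptions, not the paper's exact design): score each channel with a learned query over utterance-level channel embeddings, then keep the most advantageous subset.

```python
# Illustrative attention-based channel selection; hyper-parameters are assumed.
import torch
import torch.nn as nn

class ChannelSelector(nn.Module):
    def __init__(self, dim=256, k=4):
        super().__init__()
        self.query = nn.Parameter(torch.randn(dim))
        self.proj = nn.Linear(dim, dim)
        self.k = k

    def forward(self, feats):                                # feats: (batch, channels, time, dim)
        chan_emb = self.proj(feats.mean(dim=2))              # utterance-level channel embeddings
        scores = chan_emb @ self.query / chan_emb.shape[-1] ** 0.5   # (batch, channels)
        weights = scores.softmax(dim=-1)
        topk = weights.topk(self.k, dim=-1).indices          # most advantageous subset
        idx = topk.unsqueeze(-1).unsqueeze(-1).expand(-1, -1, *feats.shape[2:])
        return feats.gather(1, idx), weights

x = torch.randn(2, 8, 100, 256)                              # 8 microphones, 100 frames
selected, w = ChannelSelector()(x)
print(selected.shape, w.shape)                               # (2, 4, 100, 256) and (2, 8)
```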
- Mapping gravitational-wave backgrounds in modified theories of gravity using pulsar timing arrays We extend our previous work on applying CMB techniques to the mapping of gravitational-wave backgrounds to backgrounds which have non-GR polarisations. Our analysis and results are presented in the context of pulsar-timing array observations, but the overarching methods are general, and can be easily applied to LIGO or eLISA observations using appropriately modified response functions. Analytic expressions for the pulsar-timing response to gravitational waves with non-GR polarisation are given for each mode of a spin-weighted spherical-harmonic decomposition of the background, which permit the signal to be mapped across the sky to any desired resolution. We also derive the pulsar-timing overlap reduction functions for the various non-GR polarisations, finding analytic forms for anisotropic backgrounds with scalar-transverse ("breathing") and vector-longitudinal polarisations, and a semi-analytic form for scalar-longitudinal backgrounds. Our results indicate that pulsar-timing observations will be completely insensitive to scalar-transverse mode anisotropies in the polarisation amplitude beyond dipole, and anisotropies in the power beyond quadrupole. Analogously to our previous findings that pulsar-timing observations lack sensitivity to tensor-curl modes for a transverse-traceless tensor background, we also find insensitivity to vector-curl modes for a vector-longitudinal background. 3 authors · Jun 29, 2015
- Stochastic backgrounds in alternative theories of gravity: overlap reduction functions for pulsar timing arrays In the next decade gravitational waves might be detected using a pulsar timing array. In an effort to develop optimal detection strategies for stochastic backgrounds of gravitational waves in generic metric theories of gravity, we investigate the overlap reduction functions for these theories and discuss their features. We show that the sensitivity to non-transverse gravitational waves is greater than the sensitivity to transverse gravitational waves and discuss the physical origin of this effect. We calculate the overlap reduction functions for the current NANOGrav Pulsar Timing Array (PTA) and show that the sensitivity to the vector and scalar-longitudinal modes can increase dramatically for pulsar pairs with small angular separations. For example, the J1853+1303-J1857+0943 pulsar pair, with an angular separation of about 3 degrees, is about 10^4 times more sensitive to the longitudinal component of the stochastic background, if it is present, than the transverse components. 2 authors · Nov 23, 2011
- A systematic analysis of the radio properties of 22 X-ray selected tidal disruption event candidates with the Australia Telescope Compact Array We present a systematic analysis of the radio properties of an X-ray selected sample of tidal disruption event (TDE) candidates discovered by the eROSITA telescope. We find radio sources coincident with half of the transient events (11 TDEs), with 8 radio sources showing statistically significant variability over a 6-month period. We model the radio spectra of 6 sources with sufficiently bright radio emission and find that the sources show radio spectra consistent with optically thin synchrotron emission, with radio outflow minimum radii of 10^{16}--10^{17} cm, velocities of 0.01--0.05 c, and energies of 10^{48}--10^{51} erg. On comparison with the radio properties of an optically-selected TDE sample at similar late times, we find no significant difference in the radio luminosity range or radio detection rate. We find a tentative positive correlation between peak radio and X-ray luminosity, but further observations are required to determine whether this is real or an observational bias caused by the large range in distances of the events. Interestingly, none of the X-ray selected events show late rising radio emission, compared to 45% of radio-detected sources in an optically-selected sample that showed late rising radio emission. We propose that this may indicate that many TDEs launch radio outflows at or near peak X-ray luminosity, which can be significantly delayed from peak optical luminosity. This study presents the first systematic analysis of the radio properties of an X-ray selected sample of TDEs, and gives insight into the possible link between the physical processes that power X-ray and radio emission in TDEs. 10 authors · Apr 11, 2025
- SALSA-Lite: A Fast and Effective Feature for Polyphonic Sound Event Localization and Detection with Microphone Arrays Polyphonic sound event localization and detection (SELD) has many practical applications in acoustic sensing and monitoring. However, the development of real-time SELD has been limited by the demanding computational requirement of most recent SELD systems. In this work, we introduce SALSA-Lite, a fast and effective feature for polyphonic SELD using microphone array inputs. SALSA-Lite is a lightweight variation of a previously proposed SALSA feature for polyphonic SELD. SALSA, which stands for Spatial Cue-Augmented Log-Spectrogram, consists of multichannel log-spectrograms stacked channelwise with the normalized principal eigenvectors of the spectrotemporally corresponding spatial covariance matrices. In contrast to SALSA, which uses eigenvector-based spatial features, SALSA-Lite uses normalized inter-channel phase differences as spatial features, allowing a 30-fold speedup compared to the original SALSA feature. Experimental results on the TAU-NIGENS Spatial Sound Events 2021 dataset showed that the SALSA-Lite feature achieved competitive performance compared to the full SALSA feature, and significantly outperformed the traditional feature set of multichannel log-mel spectrograms with generalized cross-correlation spectra. Specifically, using SALSA-Lite features increased localization-dependent F1 score and class-dependent localization recall by 15% and 5%, respectively, compared to using multichannel log-mel spectrograms with generalized cross-correlation spectra. 5 authors · Nov 15, 2021
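A hedged sketch of the SALSA-Lite-style feature follows: multichannel log-power spectrograms stacked with frequency-normalized inter-channel phase differences (NIPD) taken against a reference microphone. The exact scaling and sign conventions may differ from the official implementation.

```python
# Sketch of a SALSA-Lite-style feature; scaling conventions are assumptions.
import numpy as np

def salsa_lite(stft, fs=24000, n_fft=512, c=343.0):
    """stft: (channels, freq_bins, frames) complex STFT -> stacked feature."""
    eps = 1e-8
    log_spec = np.log(np.abs(stft) ** 2 + eps)                 # multichannel log-spectrograms
    f = np.fft.rfftfreq(n_fft, d=1.0 / fs)                     # bin centre frequencies (Hz)
    scale = c / (2.0 * np.pi * np.maximum(f, eps))             # frequency normalisation
    ref = stft[0]
    nipd = [scale[None, :, None] * np.angle(stft[i:i + 1] * np.conj(ref)[None])
            for i in range(1, stft.shape[0])]                  # phase differences vs. reference
    return np.concatenate([log_spec] + nipd, axis=0)

x = np.random.randn(4, 257, 100) + 1j * np.random.randn(4, 257, 100)
feat = salsa_lite(x)
print(feat.shape)   # (7, 257, 100): 4 log-spectrograms + 3 NIPD maps
```

Because the NIPD needs no eigendecomposition of spatial covariance matrices, this variant is far cheaper to compute, which is the source of the reported speedup.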
- DeepSNP: An End-to-end Deep Neural Network with Attention-based Localization for Break-point Detection in SNP Array Genomic data Diagnosis and risk stratification of cancer and many other diseases require the detection of genomic breakpoints as a prerequisite of calling copy number alterations (CNA). This, however, is still challenging and requires time-consuming manual curation. As deep-learning methods have outperformed classical state-of-the-art algorithms in various domains and have also been successfully applied to life science problems including medicine and biology, we here propose DeepSNP, a novel deep neural network to learn from genomic data. Specifically, we used a manually curated dataset from 12 genomic single nucleotide polymorphism array (SNPa) profiles as truth set and aimed at predicting the presence or absence of genomic breakpoints, an indicator of structural chromosomal variations, in windows of 40,000 probes. We compare our results with well-known neural network models as well as Rawcopy, a tool designed to predict breakpoints and, in addition, genomic segments with high sensitivity. We show that DeepSNP is capable of successfully predicting the presence or absence of a breakpoint in large genomic windows and outperforms state-of-the-art neural network models. Qualitative examples suggest that integrating a localization unit may enable breakpoint detection and prediction of genomic segments, even if the breakpoint coordinates were not provided for network training. These results warrant further evaluation of DeepSNP for breakpoint localization and subsequent calling of genomic segments. 12 authors · Jun 22, 2018
- Efficient Feature Extraction Using Light-Weight CNN Attention-Based Deep Learning Architectures for Ultrasound Fetal Plane Classification Ultrasound fetal imaging is beneficial to support prenatal development because it is affordable and non-intrusive. Nevertheless, fetal plane classification (FPC) remains challenging and time-consuming for obstetricians since it depends on nuanced clinical aspects, which increases the difficulty in identifying relevant features of the fetal anatomy. Thus, to assist with its accurate feature extraction, a lightweight artificial intelligence architecture leveraging convolutional neural networks and attention mechanisms is proposed to classify the largest benchmark ultrasound dataset. The approach fine-tunes lightweight EfficientNet feature extraction backbones pre-trained on ImageNet1k to classify key fetal planes such as the brain, femur, thorax, cervix, and abdomen. Our methodology incorporates the attention mechanism to refine features and 3-layer perceptrons for classification, achieving superior performance with the highest Top-1 accuracy of 96.25%, Top-2 accuracy of 99.80% and F1-Score of 0.9576. Importantly, the model has 40x fewer trainable parameters than existing benchmark ensemble or transformer pipelines, facilitating easy deployment on edge devices to help clinical practitioners with real-time FPC. The findings are also interpreted using GradCAM to carry out clinical correlation to aid doctors with diagnostics and improve treatment plans for expectant mothers. 4 authors · Oct 22, 2024
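A minimal PyTorch sketch of the described pipeline is given below: an EfficientNet-B0 backbone pre-trained on ImageNet1k, a self-attention layer to refine the feature map, and a 3-layer perceptron head. The layer sizes, head count, and class count are assumptions rather than the paper's exact configuration.

```python
# Hedged sketch: EfficientNet backbone + attention refinement + 3-layer MLP head.
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b0, EfficientNet_B0_Weights

class FetalPlaneClassifier(nn.Module):
    def __init__(self, n_classes=6, dim=1280):
        super().__init__()
        self.backbone = efficientnet_b0(weights=EfficientNet_B0_Weights.IMAGENET1K_V1).features
        self.attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.head = nn.Sequential(                      # 3-layer perceptron (sizes assumed)
            nn.Linear(dim, 512), nn.ReLU(),
            nn.Linear(512, 128), nn.ReLU(),
            nn.Linear(128, n_classes))

    def forward(self, x):                               # x: (B, 3, 224, 224)
        fmap = self.backbone(x)                         # (B, 1280, 7, 7)
        tokens = fmap.flatten(2).transpose(1, 2)        # (B, 49, 1280) spatial tokens
        refined, _ = self.attn(tokens, tokens, tokens)  # self-attention refinement
        return self.head(refined.mean(dim=1))           # pool then classify

logits = FetalPlaneClassifier()(torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 6])
```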
74 Block Diffusion: Interpolating Between Autoregressive and Diffusion Language Models Diffusion language models offer unique benefits over autoregressive models due to their potential for parallelized generation and controllability, yet they lag in likelihood modeling and are limited to fixed-length generation. In this work, we introduce a class of block diffusion language models that interpolate between discrete denoising diffusion and autoregressive models. Block diffusion overcomes key limitations of both approaches by supporting flexible-length generation and improving inference efficiency with KV caching and parallel token sampling. We propose a recipe for building effective block diffusion models that includes an efficient training algorithm, estimators of gradient variance, and data-driven noise schedules to minimize the variance. Block diffusion sets a new state-of-the-art performance among diffusion models on language modeling benchmarks and enables generation of arbitrary-length sequences. We provide the code, along with the model weights and blog post on the project page: https://m-arriola.com/bd3lms/ 8 authors · Mar 12, 2025 3
23 o3-mini vs DeepSeek-R1: Which One is Safer? The irruption of DeepSeek-R1 constitutes a turning point for the AI industry in general and the LLMs in particular. Its capabilities have demonstrated outstanding performance in several tasks, including creative thinking, code generation, maths and automated program repair, at apparently lower execution cost. However, LLMs must adhere to an important qualitative property, i.e., their alignment with safety and human values. A clear competitor of DeepSeek-R1 is its American counterpart, OpenAI's o3-mini model, which is expected to set high standards in terms of performance, safety and cost. In this paper we conduct a systematic assessment of the safety level of both DeepSeek-R1 (70b version) and OpenAI's o3-mini (beta version). To this end, we make use of our recently released automated safety testing tool, named ASTRAL. By leveraging this tool, we automatically and systematically generate and execute a total of 1260 unsafe test inputs on both models. After conducting a semi-automated assessment of the outcomes provided by both LLMs, the results indicate that DeepSeek-R1 is highly unsafe as compared to OpenAI's o3-mini. Based on our evaluation, DeepSeek-R1 answered unsafely to 11.98% of the executed prompts, whereas o3-mini did so to only 1.19%. 5 authors · Jan 30, 2025 3
14 Early External Safety Testing of OpenAI's o3-mini: Insights from the Pre-Deployment Evaluation Large Language Models (LLMs) have become an integral part of our daily lives. However, they impose certain risks, including those that can harm individuals' privacy, perpetuate biases and spread misinformation. These risks highlight the need for robust safety mechanisms, ethical guidelines, and thorough testing to ensure their responsible deployment. Safety of LLMs is a key property that needs to be thoroughly tested prior to the model being deployed and made accessible to general users. This paper reports the external safety testing experience conducted by researchers from Mondragon University and University of Seville on OpenAI's new o3-mini LLM as part of OpenAI's early access for safety testing program. In particular, we apply our tool, ASTRAL, to automatically and systematically generate up-to-date unsafe test inputs (i.e., prompts) that help us test and assess different safety categories of LLMs. We automatically generate and execute a total of 10,080 unsafe test inputs on an early o3-mini beta version. After manually verifying the test cases classified as unsafe by ASTRAL, we identify a total of 87 actual instances of unsafe LLM behavior. We highlight key insights and findings uncovered during the pre-deployment external testing phase of OpenAI's latest LLM. 5 authors · Jan 29, 2025 2
4 The Generative Energy Arena (GEA): Incorporating Energy Awareness in Large Language Model (LLM) Human Evaluations The evaluation of large language models is a complex task, for which several approaches have been proposed. The most common is the use of automated benchmarks in which LLMs have to answer multiple-choice questions on different topics. However, this method has certain limitations, the most concerning being the poor correlation with human judgments. An alternative approach is to have humans evaluate the LLMs. This poses scalability issues, as there is a large and growing number of models to evaluate, making it impractical (and costly) to run traditional studies based on recruiting a number of evaluators and having them rank the responses of the models. An alternative approach is the use of public arenas, such as the popular LM Arena, in which any user can freely evaluate models on any question and rank the responses of two models. The results are then aggregated into a model ranking. An increasingly important aspect of LLMs is their energy consumption and, therefore, evaluating how energy awareness influences the decisions of humans in selecting a model is of interest. In this paper, we present GEA, the Generative Energy Arena, an arena that incorporates information on the energy consumption of the model in the evaluation process. Preliminary results obtained with GEA are also presented, showing that for most questions, when users are aware of the energy consumption, they favor smaller and more energy-efficient models. This suggests that for most user interactions, the extra cost and energy incurred by the more complex and top-performing models do not provide an increase in the perceived quality of the responses that justifies their use. 5 authors · Jul 17, 2025 1
1 Encoder-Decoder Diffusion Language Models for Efficient Training and Inference Discrete diffusion models enable parallel token sampling for faster inference than autoregressive approaches. However, prior diffusion models use a decoder-only architecture, which requires sampling algorithms that invoke the full network at every denoising step and incur high computational cost. Our key insight is that discrete diffusion models perform two types of computation: 1) representing clean tokens and 2) denoising corrupted tokens, which enables us to use separate modules for each task. We propose an encoder-decoder architecture to accelerate discrete diffusion inference, which relies on an encoder to represent clean tokens and a lightweight decoder to iteratively refine a noised sequence. We also show that this architecture enables faster training of block diffusion models, which partition sequences into blocks for better quality and are commonly used in diffusion language model inference. We introduce a framework for Efficient Encoder-Decoder Diffusion (E2D2), consisting of an architecture with specialized training and sampling algorithms, and we show that E2D2 achieves superior trade-offs between generation quality and inference throughput on summarization, translation, and mathematical reasoning tasks. We provide the code, model weights, and blog post on the project page: https://m-arriola.com/e2d2 5 authors · Oct 26, 2025
1 A Close Look at Decomposition-based XAI-Methods for Transformer Language Models Various XAI attribution methods have been recently proposed for the transformer architecture, allowing for insights into the decision-making process of large language models by assigning importance scores to input tokens and intermediate representations. One class of methods that seems very promising in this direction comprises decomposition-based approaches, i.e., XAI methods that redistribute the model's prediction logit through the network, as this value is directly related to the prediction. We note, though, that two prominent methods of this category, namely ALTI-Logit and LRP, have not yet been analyzed in juxtaposition in the previous literature, and we therefore propose to close this gap by conducting a careful quantitative evaluation w.r.t. ground-truth annotations on a subject-verb agreement task, as well as various qualitative inspections, using BERT, GPT-2 and LLaMA-3 as a testbed. Along the way we compare and extend the ALTI-Logit and LRP methods, including the recently proposed AttnLRP variant, from an algorithmic and implementation perspective. We further incorporate into our benchmark two widely-used gradient-based attribution techniques. Finally, we make our carefully constructed benchmark dataset for evaluating attributions on language models, as well as our code, publicly available in order to foster evaluation of XAI methods on a well-defined common ground. 5 authors · Feb 21, 2025
- Cross-View Meets Diffusion: Aerial Image Synthesis with Geometry and Text Guidance Aerial imagery analysis is critical for many research fields. However, obtaining frequent high-quality aerial images is not always feasible due to the high effort and cost required. One solution is to use the Ground-to-Aerial (G2A) technique to synthesize aerial images from easily collectible ground images. However, G2A is rarely studied because of its challenges, including, but not limited to, drastic view changes, occlusion, and range of visibility. In this paper, we present a novel Geometric Preserving Ground-to-Aerial (G2A) image synthesis (GPG2A) model that can generate realistic aerial images from ground images. GPG2A consists of two stages. The first stage predicts the Bird's Eye View (BEV) segmentation (referred to as the BEV layout map) from the ground image. The second stage synthesizes the aerial image from the predicted BEV layout map and text descriptions of the ground image. To train our model, we present a new multi-modal cross-view dataset, namely VIGORv2, which is built upon VIGOR with newly collected aerial images, maps, and text descriptions. Our extensive experiments illustrate that GPG2A synthesizes better geometry-preserved aerial images than existing models. We also present two applications, data augmentation for cross-view geo-localization and sketch-based region search, to further verify the effectiveness of our GPG2A. The code and data will be publicly available. 5 authors · Aug 8, 2024
- Interpolation of Point Distributions for Digital Stippling We present a new way to merge any two point distribution approaches using distance fields. Our new process allows us to produce digital stippling that fills areas with stipple dots without visual artifacts while also including clear linear features without fuzziness. Our merging thus benefits from past work that can optimize for either goal individually, yet typically by sacrificing the other. The new possibility of combining any two distributions using different distance field functions and their parameters also allows us to produce a vast range of stippling styles, which we demonstrate as well. 3 authors · Jul 3, 2023 1
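A heavily simplified sketch of the merging idea: use a distance field around linear features to decide, point by point, which of the two input distributions supplies the stipples. The blending rule, the threshold, and the analytic distance field below are illustrative assumptions only, not the paper's functions.

```python
# Toy merge of two point distributions via a distance field (all choices assumed).
import numpy as np

rng = np.random.default_rng(1)
area_pts = rng.uniform(0, 1, size=(4000, 2))                 # area-filling distribution
t = rng.uniform(0, 1, size=600)
edge_pts = np.stack([t, 0.5 + 0.02 * rng.normal(size=600)], axis=1)  # follows the line y = 0.5

def dist_to_feature(p):
    """Distance field of the single linear feature y = 0.5 (analytic stand-in)."""
    return np.abs(p[:, 1] - 0.5)

r = 0.04                                                     # transition radius (assumed)
merged = np.concatenate([
    edge_pts[dist_to_feature(edge_pts) < r],                 # crisp linear feature
    area_pts[dist_to_feature(area_pts) >= r],                # dot fill, gap left for the line
])
print(merged.shape)
```

Swapping in different distance field functions or radii is what produces the range of stippling styles the abstract mentions.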
- Shape-Based Plagiarism Detection for Flowchart Figures in Texts Plagiarism detection is a well-known problem in the academic arena. Copying other people's work is considered a serious offence that needs to be checked. Many plagiarism detection systems, such as Turnitin, have been developed to provide these checks. Most, if not all, discard figures and charts before checking for plagiarism. Discarding figures and charts creates loopholes that people can exploit: figures and charts can be plagiarized easily without current plagiarism systems detecting it. Very few papers discuss flowchart plagiarism detection, so there is a need for a system that detects plagiarism in figures and charts. This paper presents a method for detecting flowchart figure plagiarism based on shape-based image processing and multimedia retrieval. The method managed to retrieve flowcharts with ranked similarity according to different matching sets. 4 authors · Mar 12, 2014
- Variational Transformer Networks for Layout Generation Generative models able to synthesize layouts of different kinds (e.g. documents, user interfaces or furniture arrangements) are a useful tool to aid design processes and as a first step in the generation of synthetic data, among other tasks. We exploit the properties of self-attention layers to capture high level relationships between elements in a layout, and use these as the building blocks of the well-known Variational Autoencoder (VAE) formulation. Our proposed Variational Transformer Network (VTN) is capable of learning margins, alignments and other global design rules without explicit supervision. Layouts sampled from our model have a high degree of resemblance to the training data, while demonstrating appealing diversity. In an extensive evaluation on publicly available benchmarks for different layout types VTNs achieve state-of-the-art diversity and perceptual quality. Additionally, we show the capabilities of this method as part of a document layout detection pipeline. 3 authors · Apr 6, 2021
12 Simple and Effective Masked Diffusion Language Models While diffusion models excel at generating high-quality images, prior work reports a significant performance gap between diffusion and autoregressive (AR) methods in language modeling. In this work, we show that simple masked discrete diffusion is more performant than previously thought. We apply an effective training recipe that improves the performance of masked diffusion models and derive a simplified, Rao-Blackwellized objective that results in additional improvements. Our objective has a simple form -- it is a mixture of classical masked language modeling losses -- and can be used to train encoder-only language models that admit efficient samplers, including ones that can generate arbitrary lengths of text semi-autoregressively like a traditional language model. On language modeling benchmarks, a range of masked diffusion models trained with modern engineering practices achieves a new state-of-the-art among diffusion models, and approaches AR perplexity. We release our code at: https://github.com/kuleshov-group/mdlm 8 authors · Jun 11, 2024 2
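The "mixture of classical masked language modeling losses" has a compact form. The sketch below shows the kind of weighted MLM training step such objectives reduce to, assuming a linear noise schedule (which yields a 1/t weight on the masked-token cross-entropy); per-paper details such as the exact schedule and the Rao-Blackwellization are omitted.

```python
# Hedged sketch of a weighted masked-diffusion training step (linear schedule).
import torch
import torch.nn.functional as F

def masked_diffusion_loss(model, tokens, mask_id):
    B, L = tokens.shape
    t = torch.rand(B, 1).clamp(min=1e-3)                 # masking level per sequence
    masked = torch.rand(B, L) < t                        # mask each token with prob. t
    x_t = torch.where(masked, torch.full_like(tokens, mask_id), tokens)
    logits = model(x_t)                                  # (B, L, vocab)
    ce = F.cross_entropy(logits.transpose(1, 2), tokens, reduction="none")
    # 1/t weight: heavily masked sequences carry proportionally less per-token signal.
    return ((1.0 / t) * ce * masked).sum() / masked.sum().clamp(min=1)

vocab, mask_id = 100, 99
model = torch.nn.Sequential(torch.nn.Embedding(vocab, 64), torch.nn.Linear(64, vocab))
print(masked_diffusion_loss(model, torch.randint(0, 99, (4, 16)), mask_id))
```

Because the loss only ever scores masked positions against clean targets, any encoder-only architecture can be trained this way, which is the property the abstract highlights.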
2 Why do LLMs attend to the first token? Large Language Models (LLMs) tend to attend heavily to the first token in the sequence -- creating a so-called attention sink. Many works have studied this phenomenon in detail, proposing various ways to either leverage or alleviate it. Attention sinks have been connected to quantisation difficulties, security issues, and streaming attention. Yet, while many works have provided conditions in which they occur or not, a critical question remains shallowly answered: Why do LLMs learn such patterns and how are they being used? In this work, we argue theoretically and empirically that this mechanism provides a method for LLMs to avoid over-mixing, connecting this to existing lines of work that study mathematically how information propagates in Transformers. We conduct experiments to validate our theoretical intuitions and show how choices such as context length, depth, and data packing influence the sink behaviour. We hope that this study provides a new practical perspective on why attention sinks are useful in LLMs, leading to a better understanding of the attention patterns that form during training. 7 authors · Apr 3, 2025
1 Credit Risk Meets Large Language Models: Building a Risk Indicator from Loan Descriptions in P2P Lending Peer-to-peer (P2P) lending connects borrowers and lenders through online platforms but suffers from significant information asymmetry, as lenders often lack sufficient data to assess borrowers' creditworthiness. This paper addresses this challenge by leveraging BERT, a Large Language Model (LLM) known for its ability to capture contextual nuances in text, to generate a risk score based on borrowers' loan descriptions using a dataset from the Lending Club platform. We fine-tune BERT to distinguish between defaulted and non-defaulted loans using the loan descriptions provided by the borrowers. The resulting BERT-generated risk score is then integrated as an additional feature into an XGBoost classifier used at the loan granting stage, where decision-makers have limited information available to guide their decisions. This integration enhances predictive performance, with improvements in balanced accuracy and AUC, highlighting the value of textual features in complementing traditional inputs. Moreover, we find that the incorporation of the BERT score alters how classification models utilize traditional input variables, with these changes varying by loan purpose. These findings suggest that BERT discerns meaningful patterns in loan descriptions, encompassing borrower-specific features, specific purposes, and linguistic characteristics. However, the inherent opacity of LLMs and their potential biases underscore the need for transparent frameworks to ensure regulatory compliance and foster trust. Overall, this study demonstrates how LLM-derived insights interact with traditional features in credit risk modeling, opening new avenues to enhance the explainability and fairness of these models. 2 authors · Jan 29, 2024
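Conceptually, the pipeline is two-stage, as in the hedged sketch below: a BERT sequence classifier (assumed here to be already fine-tuned on default labels) turns each loan description into a default probability, which is appended to the tabular features of an XGBoost classifier. Model names, feature columns, and data are placeholders, not the paper's setup.

```python
# Hedged two-stage sketch: BERT risk score as an extra XGBoost feature.
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from xgboost import XGBClassifier

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
# Assumed fine-tuned on default/non-default labels; loaded fresh here for illustration.
bert = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

def risk_score(descriptions):
    batch = tok(descriptions, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = bert(**batch).logits
    return logits.softmax(dim=-1)[:, 1].numpy()          # P(default) from loan text

texts = ["Consolidating two credit cards.", "Expanding my food truck business."]
tabular = np.array([[0.31, 12000.0], [0.55, 30000.0]])   # e.g. DTI, loan amount (toy values)
X = np.column_stack([tabular, risk_score(texts)])        # BERT score as an extra feature
y = np.array([0, 1])
clf = XGBClassifier(n_estimators=10).fit(X, y)
print(clf.predict_proba(X)[:, 1])
```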
- Enhancing LLM Problem Solving with REAP: Reflection, Explicit Problem Deconstruction, and Advanced Prompting Large Language Models (LLMs) have transformed natural language processing, yet improving their problem-solving capabilities, particularly for complex, reasoning-intensive tasks, remains a persistent challenge. This paper introduces the REAP (Reflection, Explicit Problem Deconstruction, and Advanced Prompting) method, an innovative approach within the dynamic context generation framework. REAP guides LLMs through reflection on the query, deconstructing it into manageable components, and generating relevant context to enhance the solution process. We evaluated REAP using a dataset designed to expose LLM limitations, comparing zero-shot prompting with REAP-enhanced prompts across six state-of-the-art models: OpenAI's o1-preview, o1-mini, GPT-4o, GPT-4o-mini, Google's Gemini 1.5 Pro, and Claude 3.5 Sonnet. The results demonstrate notable performance gains, with o1-mini improving by 40.97%, GPT-4o by 66.26%, and GPT-4o-mini by 112.93%. Despite the already strong baseline performance of OpenAI's o1-preview, modest gains were observed. Beyond performance improvements, REAP offers a cost-effective solution; for example, GPT-4o-mini, which is approximately 100 times cheaper than o1-preview, delivered competitive results. REAP also improves the clarity of model outputs, making it easier for humans to understand the reasoning behind the results and simplifying the process of identifying and addressing any issues. These findings demonstrate REAP's potential to greatly improve the capabilities of LLMs, providing both better performance and increased cost-efficiency across a wide range of applications. 3 authors · Sep 14, 2024
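An illustrative prompt scaffold in the spirit of REAP is shown below; the section wording is an assumption reconstructed from the abstract (reflection, explicit problem deconstruction, generated context), not the paper's verbatim template.

```python
# Illustrative REAP-style prompt scaffold; wording is an assumption.
REAP_TEMPLATE = """\
Problem: {question}

1. Reflection: restate the problem in your own words and note any ambiguities
   or hidden assumptions.
2. Explicit deconstruction: break the problem into smaller sub-problems and
   list them in the order they should be solved.
3. Context generation: for each sub-problem, state the facts, definitions, or
   constraints needed to solve it.
4. Solution: solve the sub-problems in order and combine the results into a
   final answer, explaining your reasoning at each step.
"""

def build_reap_prompt(question: str) -> str:
    return REAP_TEMPLATE.format(question=question)

print(build_reap_prompt("A train leaves at 3pm travelling at 60 km/h ..."))
```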
- Enhancing Skin Disease Classification Leveraging Transformer-based Deep Learning Architectures and Explainable AI Skin diseases affect over a third of the global population, yet their impact is often underestimated. Automating skin disease classification to assist doctors with their prognosis can be difficult. Nevertheless, due to efficient feature extraction pipelines, deep learning techniques have shown much promise for various tasks, including dermatological disease identification. This study uses a skin disease dataset with 31 classes and compares all versions of Vision Transformers, Swin Transformers and DinoV2 on it. The analysis is also extended to compare with benchmark convolution-based architectures presented in the literature. Transfer learning with ImageNet1k weights on the skin disease dataset contributes to a high test accuracy of 96.48% and an F1-score of 0.9727 using DinoV2, which is almost a 10% improvement over the current benchmark results on this data. The performance of DinoV2 was also compared on the HAM10000 and Dermnet datasets to test the model's robustness, and the trained model surpasses the benchmark results by a slight margin in test accuracy and F1-score on the 23- and 7-class datasets. The results are substantiated using explainable AI frameworks like GradCAM and SHAP, which provide precise image locations to map the disease, assisting dermatologists in early detection, prompt prognosis, and treatment. 4 authors · Jul 20, 2024
- Bilingual Dual-Head Deep Model for Parkinson's Disease Detection from Speech This work aims to tackle the Parkinson's disease (PD) detection problem from the speech signal in a bilingual setting by proposing an ad-hoc dual-head deep neural architecture for type-based binary classification. One head is specialized for diadochokinetic patterns. The other head looks for natural speech patterns present in continuous spoken utterances. Only one of the two heads is operative, according to the nature of the input. Speech representations are extracted from self-supervised learning (SSL) models and wavelet transforms. Adaptive layers, convolutional bottlenecks, and contrastive learning are exploited to reduce variations across languages. Our solution is assessed against two distinct datasets, EWA-DB and PC-GITA, which cover the Slovak and Spanish languages, respectively. Results indicate that conventional models trained on a single language dataset struggle with cross-linguistic generalization, and naive combinations of datasets are suboptimal. In contrast, our model improves generalization on both languages simultaneously. 3 authors · Mar 13, 2025
1 Contracting Skeletal Kinematics for Human-Related Video Anomaly Detection Detecting the anomaly of human behavior is paramount to timely recognizing endangering situations, such as street fights or elderly falls. However, anomaly detection is complex since anomalous events are rare and because it is an open set recognition task, i.e., what is anomalous at inference has not been observed at training. We propose COSKAD, a novel model that encodes skeletal human motion by a graph convolutional network and learns to COntract SKeletal kinematic embeddings onto a latent hypersphere of minimum volume for Video Anomaly Detection. We propose three latent spaces: the commonly-adopted Euclidean and the novel spherical and hyperbolic. All variants outperform the state-of-the-art on the most recent UBnormal dataset, for which we contribute a human-related version with annotated skeletons. COSKAD sets a new state-of-the-art on the human-related versions of ShanghaiTech Campus and CUHK Avenue, with performance comparable to video-based methods. Source code and dataset will be released upon acceptance. 6 authors · Jan 23, 2023
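The contraction objective at the core of this family of methods is compact, as in the sketch below: minimize the mean squared distance of embeddings to a center so that normal training motion collapses onto a small hypersphere, and use distance to the center as the anomaly score at test time. COSKAD's actual encoder is a graph convolutional network over skeletons, and its spherical and hyperbolic variants are omitted here; this shows only the Euclidean, Deep-SVDD-style idea.

```python
# Minimal Euclidean hypersphere-contraction sketch (encoder is a stand-in).
import torch

def hypersphere_loss(embeddings, center):
    # Mean squared distance to the center: minimising it contracts normal
    # training data onto a small hypersphere; large distances at test time
    # flag anomalous motion.
    return ((embeddings - center) ** 2).sum(dim=-1).mean()

enc = torch.nn.Linear(34, 16)                     # stand-in for the skeletal encoder
x = torch.randn(8, 34)                            # e.g. flattened 17 joints x 2D coords
with torch.no_grad():
    center = enc(x).mean(dim=0)                   # center fixed from an initial pass
loss = hypersphere_loss(enc(x), center)
anomaly_score = ((enc(x) - center) ** 2).sum(dim=-1)
print(loss.item(), anomaly_score.shape)
```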
- NeRFMeshing: Distilling Neural Radiance Fields into Geometrically-Accurate 3D Meshes With the introduction of Neural Radiance Fields (NeRFs), novel view synthesis has recently made a big leap forward. At its core, NeRF proposes that each 3D point can emit radiance, allowing view synthesis to be conducted using differentiable volumetric rendering. While neural radiance fields can accurately represent 3D scenes for image rendering, 3D meshes are still the main scene representation supported by most computer graphics and simulation pipelines, enabling tasks such as real-time rendering and physics-based simulations. Obtaining 3D meshes from neural radiance fields remains an open challenge, since NeRFs are optimized for view synthesis and do not enforce an accurate underlying geometry on the radiance field. We thus propose a novel compact and flexible architecture that enables easy 3D surface reconstruction from any NeRF-driven approach. Upon having trained the radiance field, we distill the volumetric 3D representation into a Signed Surface Approximation Network, allowing easy extraction of the 3D mesh and appearance. Our final 3D mesh is physically accurate and can be rendered in real time on an array of devices. 6 authors · Mar 16, 2023
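The final extraction step is conceptually simple once a signed-distance-like field is available, as in the sketch below, where an analytic sphere SDF stands in for the distilled Signed Surface Approximation Network and scikit-image's marching cubes produces the triangle mesh.

```python
# Sketch of mesh extraction from a signed-distance field sampled on a grid.
import numpy as np
from skimage.measure import marching_cubes

n = 64
grid = np.linspace(-1.0, 1.0, n)
X, Y, Z = np.meshgrid(grid, grid, grid, indexing="ij")
sdf = np.sqrt(X**2 + Y**2 + Z**2) - 0.5          # stand-in for the distilled network

verts, faces, normals, _ = marching_cubes(sdf, level=0.0, spacing=(2.0 / (n - 1),) * 3)
print(verts.shape, faces.shape)                  # triangle mesh ready for standard pipelines
```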
- Using Large Language Models to Simulate Multiple Humans and Replicate Human Subject Studies We introduce a new type of test, called a Turing Experiment (TE), for evaluating how well a language model, such as GPT-3, can simulate different aspects of human behavior. Unlike the Turing Test, which involves simulating a single arbitrary individual, a TE requires simulating a representative sample of participants in human subject research. We give TEs that attempt to replicate well-established findings in prior studies. We design a methodology for simulating TEs and illustrate its use to compare how well different language models are able to reproduce classic economic, psycholinguistic, and social psychology experiments: Ultimatum Game, Garden Path Sentences, Milgram Shock Experiment, and Wisdom of Crowds. In the first three TEs, the existing findings were replicated using recent models, while the last TE reveals a "hyper-accuracy distortion" present in some language models. 3 authors · Aug 18, 2022