Fault-tolerant quantum computation allows quantum computations to be carried out while resisting unwanted noise. Several error correcting codes have been developed to achieve this task, but none alone are capable of universal quantum computation. This universality is highly desired and often achieved using additional techniques such as code concatenation, code switching, or magic state distillation, which can be costly and only work for specific codes. This work implements logical Clifford and T gates through novel ancilla-mediated protocols to construct a universal fault-tolerant quantum gate set. Unlike traditional techniques, our implementation is deterministic, does not consume ancilla registers, does not modify the underlying data codes or registers, and is generic over all stabilizer codes. Thus, any single code becomes capable of universal quantum computation by leveraging helper codes in ancilla registers and mid-circuit measurements. Furthermore, since these logical gates are stabilizer code-generic, these implementations enable communication between heterogeneous stabilizer codes. These features collectively open the door to countless possibilities for existing and undiscovered codes as well as their scalable, heterogeneous coexistence.
Creating precise timing devices at ultra-short time scales is not just an important technological challenge, but confronts us with foundational questions about timekeeping's ultimate precision limits. Research on clocks has either focused on long-term stability using an oscillator stabilized by a level transition, limiting precision at short timescales, or on making individual stochastic ticks as precise as possible. Here, we prove the viability of a conceptually different avenue: the autonomous self-correction of consecutive ticks by quantum correlations. This provides a new paradigm that integrates the advantages and insights from quantum transport theory to operate clocks at ultra-short timescales. We fully solve a model of coupled quantum systems and show how the emergent Pauli exclusion principle correlates the clock at the quantum level, yielding an exponential advantage in precision. We furthermore demonstrate through simulations with realistic imperfections that this remarkable gain in precision remains stable, providing a roadmap for implementation with contemporary quantum technologies.
Biased-noise qubits, in which one type of error (e.g. $X$- and $Y$-type errors) is significantly suppressed relative to the other (e.g. $Z$-type errors), can significantly reduce the overhead of quantum error correction. Codes such as the rectangular surface code or XZZX code substantially reduce the qubit overhead under biased noise, but they still face challenges. The rectangular surface code suffers from a relatively low threshold, while the XZZX code requires twice as many physical qubits to maintain the same code distance as the surface code. In this work, we introduce a 2D local code construction that outperforms these codes for noise biases $\eta \ge 7\times10^{4}$, reducing the qubit overhead by over 50% at $p_Z=10^{-3}$ and $\eta = 2 \times 10^6$ to achieve a logical error rate of $10^{-12}$. Our construction relies on the concatenation of two classical codes. The inner codes are repetition phase-flip codes while the outer codes are high-rate bit-flip codes enabled by their implementation at the logical level, which circumvents device connectivity constraints. These results indicate that under sufficiently biased noise, it is advantageous to address phase-flip and bit-flip errors at different layers of the coding scheme. The inner code should prioritize a high threshold for phase-flip errors, while the bit-flip outer code should optimize for encoding rate efficiency. In the strong biased-noise regime, high-rate outer codes keep the overhead for correcting residual bit-flip errors comparable to that of the repetition code itself, meaningfully lower than that required by earlier approaches.
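A rough way to see why a high-rate outer code adds little on top of the inner repetition code is the overhead arithmetic sketched below (our illustration with hypothetical code parameters, not the paper's actual construction):

```python
def qubits_per_logical(d_z, n_outer, k_outer):
    """Concatenation overhead per logical qubit: an inner length-d_z repetition
    (phase-flip) code for every outer bit, with an outer [n, k] bit-flip code
    whose cost is shared across its k logical qubits."""
    return d_z * n_outer / k_outer

# Inner repetition code alone: d_z physical qubits per logical qubit
print(qubits_per_logical(d_z=25, n_outer=1, k_outer=1))    # 25.0

# Hypothetical high-rate outer code [n=60, k=50]: only ~20% extra overhead
print(qubits_per_logical(d_z=25, n_outer=60, k_outer=50))  # 30.0

# A low-rate outer code (e.g. a second distance-25 repetition code, [n=25, k=1])
# would instead multiply the cost to 625 qubits per logical qubit.
```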
We demonstrate the existence of an extended non-equilibrium critical phase, characterized by sub-exponential decay of conditional mutual information (CMI), in the surface code subject to heralded random Pauli measurement channels. By mapping the resulting mixed state to the ensemble of completely packed loops on a square lattice, we relate the extended phase to the Goldstone phase of the loop model. In particular, CMI is controlled by the characteristic length scale of loops, and we use analytic results of the latter to establish polylogarithmic decay of CMI in the critical phase. We find that the critical phase retains partial logical information that can be recovered by a global decoder, but not by any quasi-local decoder. To demonstrate this, we introduce a diagnostic called punctured coherent information which provides a necessary condition for quasi-local decoding.
We introduce a Pauli-measurement-based algorithm to certify the Schmidt number of $n$-qubit pure states. Our protocol achieves an average-case sample complexity of $\mathcal{O}(\mathrm{poly}(n)\chi^2)$, a substantial improvement over the $\mathcal{O}(2^n \chi)$ worst-case bound. By utilizing local pseudorandom unitaries, we ensure the worst case can be transformed into the average case with high probability. This work establishes a scalable approach to high-dimensional entanglement certification and introduces a proof framework for random Pauli sampling.
The quantum Fourier transform and quantum wavelet transform have been cornerstones of quantum information processing. However, for non-stationary signals and anomaly detection, the Hilbert transform can be a more powerful tool, yet no prior work has provided efficient quantum implementations for the discrete Hilbert transform. This letter presents a novel construction for a quantum Hilbert transform in polylogarithmic size and logarithmic depth for a signal of length $N$, exponentially fewer operations than classical algorithms for the same mapping. We generalize this algorithm to create any $d$-dimensional Hilbert transform in depth $O(d\log N)$. Simulations demonstrate effectiveness for tasks such as power systems control and image processing, with exact agreement with classical results.
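For reference, the classical mapping that such a circuit implements, the discrete Hilbert transform, can be computed with the standard FFT-based analytic-signal method; the sketch below is our illustration of that classical baseline, not the quantum construction.

```python
import numpy as np

def discrete_hilbert(x):
    """Classical discrete Hilbert transform of a real signal via the FFT.

    Returns the imaginary part of the analytic signal, i.e. the input with
    every frequency component phase-shifted by -90 degrees.
    """
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = 1.0                      # DC component unchanged
    if N % 2 == 0:
        h[N // 2] = 1.0             # Nyquist bin unchanged
        h[1:N // 2] = 2.0           # positive frequencies doubled
    else:
        h[1:(N + 1) // 2] = 2.0
    analytic = np.fft.ifft(X * h)   # analytic signal x + i*H[x]
    return analytic.imag            # H[x]

# Example: the Hilbert transform of cos(3t) is sin(3t)
t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
print(np.allclose(discrete_hilbert(np.cos(3 * t)), np.sin(3 * t), atol=1e-10))
```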
Quantum many-body scars (QMBS) have attracted considerable interest due to their role in weak ergodicity breaking in many-body systems. We present a general construction that embeds stabilizer states as QMBS of local Hamiltonians. The method relies on a notion of factorizability of Pauli strings on a lattice, which is used to convert stabilizer elements into local, few-body operators that annihilate the stabilizer state. This enables the systematic construction of parent Hamiltonians with zero-energy stabilizer QMBS typically near the middle of the spectrum. The method reproduces several known results in a unified framework, including recent examples of volume-law entangled QMBS, such as the ``rainbow'' QMBS and the entangled antipodal Bell pair state. We also apply the framework to construct examples of stabilizer QMBS with a more complex entanglement structure, such as the cluster state, the toric code state, and a volume-law entangled state we dub the antipodal toric code (ATC) state. Exact diagonalization confirms our results and reveals the stabilizer states as exact eigenstates of their parent Hamiltonians.
Elias Pescoller, Santiago Beltrán-Romero, Sebastian Egginger, Nicolas Jungwirth, Martino Zanetti, Dominik Hornof, Michael S. Seifner, Iva Březinová, Philipp Haslinger, Thomas Juffmann, Johannes Kofler, Philipp Schindler, Dennis Rätzel
Freely propagating electrons may serve as quantum probes that can become coherently correlated with other quantum systems, offering access to advanced metrological resources. We propose a setup that coherently couples free electrons in an electron microscope to a trapped-ion quantum processor, enabling non-destructive, quantum-coherent detection and the accumulation of information across multiple electrons. Our analysis shows that single electrons can induce resolvable qubit excitations, establishing a platform for practical applications such as quantum-enhanced, dose-efficient electron microscopy.
Ruixia Wang, Jiayu Ding, Chenlu Wang, Yujia Zhang, He Wang, Wuerkaixi Nuerbolati, Zhen Yang, Xuehui Liang, Weijie Sun, Haifeng Yu, Fei Yan
Understanding error mechanisms in two-qubit gate operations is essential for building high-fidelity quantum processors. While prior studies predominantly treat dephasing noise as either Markovian or predominantly low-frequency, realistic qubit environments exhibit structured, frequency-dependent spectra. Here we demonstrate that noise at frequencies matching the dressed-state energy splitting, set by the inter-qubit coupling strength $g$, induces a distinct relaxation channel that degrades gate performance. Through combined theoretical analysis and experimental verification on superconducting qubits with engineered noise spectra, we show that two-qubit gate errors scale predictably with the noise power spectral density at frequency $2g$, extending the concept of $T_{1\rho}$ relaxation to interacting systems. This frequency-selective relaxation mechanism, universal across platforms, enriches our understanding of decoherence pathways during gate operations. The same mechanism sets coherence limits for dual-rail or singlet-triplet encodings.
Chenlu Liu, Yulong Li, Jiahui Wang, Quan Guan, Lijing Jin, Lu Ma, Ruizi Hu, Tenghui Wang, Xing Zhu, Hai-Feng Yu, Chunqing Deng, Xizheng Ma
Qubits that experience predominantly erasure errors offer distinct advantages for fault-tolerant operation. Indeed, dual-rail encoded erasure qubits in superconducting cavities and transmons have demonstrated high-fidelity operations by converting physical-qubit relaxation into logical-qubit erasures, but this comes at the cost of increased hardware overhead and circuit complexity. Here, we address these limitations by realizing erasure conversion in a single fluxonium operated at zero flux, where the logical state is encoded in its 0-2 subspace. A single, carefully engineered resonator provides both mid-circuit erasure detection and end-of-line (EOL) logical measurement. Post-selection on non-erasure outcomes results in a more than four-fold increase of the logical lifetime, from $193~\mu$s to $869~\mu$s. Finally, we characterize measurement-induced logical dephasing as a function of measurement power and frequency, and infer that each erasure check contributes a negligible error of $7.2\times 10^{-5}$. These results establish integer-fluxonium as a promising, resource-efficient platform for erasure-based error mitigation, without requiring additional hardware.
Daniel L. Campbell, Stephen McCoy, Melinda Andrews, Alexander Madden, Viva R. Horowitz, Bakir Husremović, Samuel Marash, Christopher Nadeau, Man Nguyen, Michael Senatore, Samuel Schwab, Erin Sheridan, Matthew D. LaHaye
We showcase the recently developed double transmon coupler (DTC) circuit as a compact, drop-in, tunable and transition-selective link between an otherwise coherent transmon and the continuum of modes in a waveguide. We use these transmon-DTC devices as transmon emitter/detectors (TEDs) for microwave photons. We highlight the flexibility of these devices by sending photons from a source TED to a measurement TED using a meter of coaxial cable and a circulator, each TED with nominally identical circuit parameters. We detect $60\,\%$ of the photons using this setup, where we infer that $95\,\%$ of the photons at the input of the measurement TED are detected. Reset and photon emission/detection each require about $2\,\mu$s, for a minimum protocol duration of $4\,\mu$s, for our choice of TED parameters. Transmon-waveguide links like the DTC serve an important role in quantum information processors: they provide a mechanism for unconditional fast reset and metrology, and serve as nascent quantum communication interfaces for quantum networking.
Takuma Kuno, Takeru Utsugi, Andrew J. Ramsay, Normann Mertig, Noriyuki Lee, Itaru Yanagi, Toshiyuki Mine, Nobuhiro Kusuno, Hideo Arimoto, Sofie Beyne, Julien Jussot, Stefan Kubicek, Yann Canvel, Clement Godfrin, Bart Raes, Yosuke Shimura, Roger Loo, Sylvain Baudot, Danny Wan, Kristiaan De Greve, et al. (5 additional authors)
The rate of coherence loss is lower for a qubit under Rabi drive compared to a freely evolving qubit, $T_{2}^{\rm Rabi}>T_{2}^*$. Building on this principle, concatenated continuous driving (CCD) keeps the qubit under continuous drive to suppress noise and manipulate dressed states by either phase or amplitude modulation. In this work, we propose a new variant of CCD which simultaneously modulates both the amplitude and phase of the driving field to generate a circularly-polarized field in the rotating frame of the carrier frequency. This circular-modulated CCD (CM-CCD) cancels the counter-rotating term in the second rotating frame, eliminating a systematic pulse-area error that arises from an imperfect rotating wave approximation for fast gates. Numerical simulations demonstrate that the proposed CM-CCD achieves higher gate fidelity than conventional CCD schemes. We further implement and compare different CCD protocols using an electron spin qubit in an isotopically purified $^{28}$Si-MOS quantum dot and evaluate their robustness by applying static detuning and Rabi frequency errors. The robustness is significantly improved compared to standard Rabi drive, showing the effectiveness of this scheme for qubit arrays with variation in qubit frequency, coupling to the Rabi drive, and low-frequency noise. The proposed scheme can be applied to various physical systems, including trapped atoms, cold atoms, superconducting qubits, and NV centers.
Quantum trajectories are dynamical equations for quantum states conditioned on the results of a time-continuous measurement, such as a continuous-in-time current $\vec y_t$. Recently there has been renewed interest in dynamical maps for quantum trajectories with time-intervals of finite size $\Delta t$. Guilmin \emph{et al.} (unpublished) derived such a dynamical map for the (experimentally relevant) case where only the average current $I_t$ over each interval is available. Surprisingly, this binned data still generates a conditioned state $\rho_\text{\faFaucet}$ that is almost pure (for efficient measurements), with an impurity scaling as $(\Delta t)^{3}$. We show that, nevertheless, the typical distance of $\rho_\text{\faFaucet}$ from $\hat{\psi}_{\text{F}; \vec y_t}$ -- the projector for the pure state conditioned on the full current -- is as large as $(\Delta t)^{3/2}$. We introduce another finite-interval dynamical map (``$\Phi$-map''), which requires only one additional real statistic, $\phi_t$, of the current in the interval, that gives a conditioned state $\hat{\psi}_\Phi$ which is only $(\Delta t)^{2}$-distant from $\hat{\psi}_{\text{F}; \vec y_t}$. We numerically verify these scalings of the error (distance from the true states) for these two maps, as well as for the lowest-order (Itô) map and two other higher-order maps. Our results show that, for a generic system, if the statistic $\phi_t$ can be extracted from experiment along with $I_t$, then the $\Phi$-map gives a smaller error than any other.
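As a concrete reminder of what the lowest-order (Itô) map looks like, here is a minimal sketch for a homodyne-monitored qubit (our illustration with standard stochastic-master-equation conventions assumed, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Driven qubit monitored by homodyne detection of its decay channel
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sm = np.array([[0, 1], [0, 0]], dtype=complex)  # sigma_minus
H = 0.5 * sx                                    # Rabi drive (hbar = 1)
c = sm                                          # monitored channel
eta = 1.0                                       # detection efficiency

def ito_step(rho, dt):
    """One lowest-order (Euler-Ito) update of the conditioned state.

    Conventions assumed here: dy = sqrt(eta) <c + c^dag> dt + dW, and the
    binned current over the interval is I = dy / dt.
    """
    dW = rng.normal(0.0, np.sqrt(dt))
    ev = np.trace((c + c.conj().T) @ rho).real
    drift = (-1j * (H @ rho - rho @ H)
             + c @ rho @ c.conj().T
             - 0.5 * (c.conj().T @ c @ rho + rho @ c.conj().T @ c))
    innovation = np.sqrt(eta) * (c @ rho + rho @ c.conj().T - ev * rho)
    rho = rho + dt * drift + dW * innovation
    rho = 0.5 * (rho + rho.conj().T)            # keep Hermitian against round-off
    current = np.sqrt(eta) * ev + dW / dt       # binned average current
    return rho / np.trace(rho).real, current

rho = np.array([[1, 0], [0, 0]], dtype=complex)  # initial pure state
for _ in range(1000):
    rho, I = ito_step(rho, dt=1e-3)
print("purity after 1000 steps:", np.trace(rho @ rho).real)
```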
A phase transition is an example of a ``topological defect'' in the space of parameters of a quantum or classical many-body system. In this paper, we consider phase diagram topological defects of higher codimension. These have the property that equilibrium states undergo some kind of non-trivial winding as one moves around the defect. We show that such topological defects exist even in classical statistical mechanical systems, and describe their general structure in this context. We then introduce the term ``diabolical critical point'' (DCP), which is a higher-codimension analog of a continuous phase transition, with the proximate phases of matter replaced by the non-trivial winding of the proximate equilibrium states. We propose conditions under which a system can have a stable DCP. We also discuss some examples of stable DCPs in (1+1)-dimensional quantum systems.
Measurement of mutual gravitation on laboratory scales is an outstanding challenge and a prerequisite to probing theories of quantum gravity. A leading technology in tabletop gravity experiments is the torsion balance, with limitations due to thermal decoherence. Recent demonstrations of lithographically defined suspensions in thin-film silicon nitride with macroscale test masses suggest a path forward, as torsion pendulums dominated by gravitational stiffness may achieve higher mechanical quality factors through dilution of material losses. Here we demonstrate a 250 micron by 5 mm by 1.8 micron torsion fiber supporting 87 grams and forming a Cavendish-style torsion pendulum with tungsten test masses that -- to our knowledge -- is the largest thin-film silicon-nitride-based oscillator to date. Torsion pendulums with thin-film, nanofabricated suspensions provide a test bed for near-term tabletop experiments probing classical and quantum gravitational interaction between oscillators.
In this paper, a central server Charlie has access to a quantum system C and measures it with a POVM $\{\Lambda_x\}$. Alice and Bob are only interested in the partial results $g_A(x)$ and $g_B(x)$, respectively. Alice, Bob, and Charlie share common randomness, and Alice and Bob only need to faithfully simulate their measurements. The paper develops achievable regions for the amount of communication needed from Charlie to Alice and Bob.
The dynamics of quantum systems are generally described by a family of quantum channels (linear, completely positive and trace preserving maps). In this note, we mainly study the range of all possible values of $\|\mathcal{E}\|_2^2+\|\widetilde{\mathcal{E}}\|_2^2$ for quantum channels $\mathcal{E}$ and give equivalent characterizations of the quantum channels that achieve the maximum and minimum values, respectively, where $\|\mathcal{E}\|_2$ is the Hilbert-Schmidt norm of $\mathcal{E}$ and $\widetilde{\mathcal{E}}$ is a complementary channel of $\mathcal{E}.$ We also obtain a concrete description of completely positive maps on infinite-dimensional systems preserving pure states. Moreover, the equivalence of several matrix integrals over the unit sphere is demonstrated, and some extensions of these matrix integrals are obtained.
We study joint source-channel coding over Markov channels through the empirical coordination framework. More specifically, we aim at determining the empirical distributions of source and channel symbols that can be induced by a coding scheme. We consider strictly causal encoders that generate channel inputs without access to the past channel states, thereby driving the current Markov state evolution. Our main result consists of single-letter inner and outer bounds on the set of achievable joint distributions, coordinating all the symbols in the network. To establish the inner bound, we introduce a new notion of typicality, the input-driven Markov typicality, and develop its fundamental properties. Contrary to the classical block-Markov coding schemes that rely on blockwise independence for discrete memoryless channels, our analysis directly exploits the Markov channel structure and improves beyond the independence-based arguments.
To make DNA a suitable medium for archival data storage, it is essential to consider the decay process of the strands observed in DNA storage systems. This paper studies the decay process as a probabilistic noisy torn paper channel (TPC), which first corrupts the bits of the transmitted sequence in a probabilistic manner by substitutions, then breaks the sequence into a set of noisy unordered substrings. The present work devises coding schemes for the noisy TPC by embedding markers in the transmitted sequence. We investigate the use of static markers and markers connected to the data in the form of hash functions. These two tools have also been recently exploited to tackle the noiseless TPC. Simulations show that static markers excel at higher substitution probabilities, while data-dependent markers are superior at lower noise levels. Both approaches achieve reconstruction rates exceeding $99\%$ with no false decodings observed, primarily limited by computational resources.
An important part of information theory folklore has been about the output statistics of capacity-achieving codes and how their empirical distributions compare to the output distribution induced by the optimal input in the channel capacity problem. Results for a variety of such empirical output distributions of good codes are known in the literature, such as comparisons of the output distribution of the code to the optimal output distribution in the vanishing and non-vanishing error probability cases. Motivated by these, we aim to establish similar results for quantum codes used for classical communication, that is, the setting in which classical messages are communicated through quantum codewords that pass through a noisy quantum channel. We first show the uniqueness of the optimal output distribution, so that it can be referred to unambiguously. Then, we extend the vanishing error probability results to the quantum case, using techniques that are close in spirit to the classical ones. We also extend the non-vanishing error probability results to quantum block codes, using second-order converses for such codes based on hypercontractivity results for quantum generalized depolarizing semigroups.
An accelerating Rindler frame in Minkowski spacetime acting for a finite time interval is used to carry a box of particles or waves between two relativistic inertial frames. The finite spatial extent of the box allows treatment of the equations of motion for particles or for waves, while the Rindler acceleration provides a substitute for scattering to test for thermal equilibrium. In the case of equilibrium for relativistic particles, the Jüttner distribution is derived. For relativistic waves, a full derivation of the Planck spectrum including zero-point radiation is obtained within classical theory. For relativistic waves, relativistic behavior and conformal symmetry are crucial. It is emphasized that the classical two-point correlation function for classical zero-point radiation depends upon the geodesic separation between the spacetime points and is independent of the coordinate system choice. The classical point of view here does not support the idea that a system in uniform acceleration through classical zero-point radiation encounters a thermal system.
A central building block of a heat engine is the working fluid, which mediates the conversion of heat into work. In nanoscale heat engines, the working fluid can be a quantum system whose behavior and dynamics are non-classical. A particularly versatile realization is a quantum resonator, which allows for precise control and coupling to thermal reservoirs, making it an ideal platform for exploring quantum thermodynamic processes. Here, we investigate the thermodynamic properties of a driven quantum resonator whose temperature is controlled by modulating its natural frequency. We evaluate the work performed by the external drive and the resulting heat flow between the resonator and its environment, both within linear response and beyond. To further elucidate these processes, we determine the full distribution of photon exchanges between the resonator and its environment, characterized by its first few cumulants. Our results provide quantitative insights into the interplay between heat, work, and fluctuations, and may help in designing future heat engines.
cs.LG arXiv:2601.11433v1
Deep Differentiable Logic Gate Networks (LGNs) and Lookup Table Networks (LUTNs) are demonstrated to be suitable for the automatic classification of electrocardiograms (ECGs) using the inter-patient paradigm. The methods are benchmarked using the MIT-BIH arrhythmia data set, achieving up to 94.28% accuracy and a $j\kappa$-index of 0.683 on a four-class classification problem. Our models use between 2.89k and 6.17k FLOPs, including preprocessing and readout, which is three to six orders of magnitude less than SOTA methods. A novel preprocessing method is utilized that attains superior performance compared to existing methods for both the mixed-patient and inter-patient paradigms. In addition, a novel method for training the Lookup Tables (LUTs) in LUTNs is devised that uses the Boolean equation of a multiplexer (MUX). Additionally, rate coding was utilized for the first time in these LGNs and LUTNs, enhancing the performance of LGNs. Furthermore, it is the first time that LGNs and LUTNs have been benchmarked on the MIT-BIH arrhythmia dataset using the inter-patient paradigm. Using an Artix 7 FPGA, between 2000 and 2990 LUTs were needed, and between 5 and 7 mW (i.e. 50 pJ to 70 pJ per inference) was estimated for running these models. The performance in terms of both accuracy and $j\kappa$-index is significantly higher compared to previous LGN results. These positive results suggest that one can utilize LGNs and LUTNs for the detection of arrhythmias at extremely low power and high speeds in heart implants or wearable devices, even for patients not included in the training set.
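To illustrate the multiplexer idea referenced above (our sketch; the paper's training method may differ in detail), a differentiable 2-input lookup table can be built from the Boolean MUX equation $\mathrm{mux}(s,a,b)=s\,a+(1-s)\,b$, which reduces to an ordinary LUT when the inputs and learned entries are binary:

```python
import numpy as np

def soft_mux(s, a, b):
    """Differentiable multiplexer: returns a when s=1, b when s=0."""
    return s * a + (1.0 - s) * b

def soft_lut2(x1, x2, w):
    """Differentiable 2-input LUT with 4 learnable entries w = [w00, w01, w10, w11].

    Built as a tree of multiplexers selected by the (relaxed) inputs, so the
    output equals the LUT entry when x1, x2 and w are 0/1, and a smooth
    interpolation otherwise (suitable for gradient-based training).
    """
    row0 = soft_mux(x2, w[1], w[0])   # x1 = 0 branch
    row1 = soft_mux(x2, w[3], w[2])   # x1 = 1 branch
    return soft_mux(x1, row1, row0)

# Binary check: entries chosen to implement XOR
w_xor = np.array([0.0, 1.0, 1.0, 0.0])
for x1 in (0.0, 1.0):
    for x2 in (0.0, 1.0):
        print(int(x1), int(x2), soft_lut2(x1, x2, w_xor))
```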
In this work, we present a new algorithm for generating quantum circuits that efficiently implement continuous-time quantum walks on arbitrary simple sparse graphs. The algorithm, called matching decomposition, works by decomposing a continuous-time quantum walk Hamiltonian into a collection of exactly implementable Hamiltonians corresponding to matchings in the underlying graph, followed by a novel graph compression algorithm that merges edges in the graph. Lastly, we convert the walks to a circuit and Trotterize over these components. The dynamics of the walker on each edge in the matching can be implemented in the circuit model as sequences of CX and CRx gates. We do not use Pauli decomposition when implementing walks along each matching. Furthermore, we compare matching decomposition to a standard Pauli-based simulation pipeline and find that matching decomposition consistently yields substantial resource reductions, requiring up to 43% fewer controlled gates and up to 54% shallower circuits than Pauli decomposition across multiple graph families. Finally, we also present examples and theoretical results for when matching decomposition can exactly simulate a continuous-time quantum walk on a graph.
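A minimal numerical sketch of the core idea (our illustration at the state-vector level, not the CX/CRx circuit construction): partition the edges into matchings via greedy edge coloring, exponentiate each matching exactly (its adjacency matrix is a direct sum of 2x2 blocks), and Trotterize over the matchings.

```python
import numpy as np
from scipy.linalg import expm

def edge_coloring(edges, n):
    """Greedy proper edge coloring: each color class is a matching."""
    colors = {}
    for (u, v) in edges:
        used = {col for e, col in colors.items() if u in e or v in e}
        c = 0
        while c in used:
            c += 1
        colors[(u, v)] = c
    matchings = {}
    for e, c in colors.items():
        matchings.setdefault(c, []).append(e)
    return list(matchings.values())

def exp_matching(matching, n, t):
    """Exact exp(-i t A_m) for a matching: independent 2x2 rotations per edge."""
    U = np.eye(n, dtype=complex)
    for (u, v) in matching:
        U[u, u] = U[v, v] = np.cos(t)
        U[u, v] = U[v, u] = -1j * np.sin(t)
    return U

# Example: cycle graph C6, single-particle (vertex-basis) quantum walk
n = 6
edges = [(i, (i + 1) % n) for i in range(n)]
A = np.zeros((n, n))
for (u, v) in edges:
    A[u, v] = A[v, u] = 1.0

t, r = 1.0, 64                              # evolution time, Trotter steps
matchings = edge_coloring(edges, n)
U_step = np.eye(n, dtype=complex)
for m in matchings:
    U_step = exp_matching(m, n, t / r) @ U_step
U_trotter = np.linalg.matrix_power(U_step, r)

print("Trotter error:", np.linalg.norm(U_trotter - expm(-1j * t * A), 2))
```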
We combine the study of resources in measurement-based quantum computation (MBQC) with that of quantum solutions to linear constraint systems (LCS). Contextuality of the input state in MBQC has been identified as a key resource for quantum advantage, and in a stronger form, underlies algebraic relations between (measurement) operators which obey classically unsatisfiable (linear) constraints. Here, we compare these two perspectives on contextuality, and study to what extent they are related. More precisely, we associate a LCS to certain MBQC which exhibit strong forms of state-dependent contextuality, and ask if the measurement operators in such MBQC give rise to state-independent contextuality in the form of quantum solutions of its associated LCS. Our main result rules out such quantum solutions for a large class of MBQC. This both sharpens the distinction between state-dependent and state-independent forms of contextuality, and further generalises results on the non-existence of quantum solutions to LCS in finite odd (prime) dimension.
Colloidal semiconductor nanocrystals are promising building blocks for optoelectronics due to their solution processability, spectral tunability, and ability to self-assemble into complex architectures. However, their use in lasing applications remains limited by high working thresholds, rapid nonradiative losses from Auger recombination, and sensitivity to environmental conditions. Here, we report hybrid microscale supraparticles composed of core/shell CdSe/ZnS quantum dots (QDs) and CdSe/Cd$_x$Zn$_{1-x}$S nanoplatelets (NPLs), which overcome these limitations through efficient, cavity-mediated energy funneling and coupling. Broadband absorbing QDs rapidly transfer excitation to narrow-emitting NPLs, enabling stable whispering gallery mode lasing with a low threshold of 0.35 mJ/cm$^2$. These supraparticles retain optical performance after prolonged exposure to air, water, and continuous irradiation, offering practical advantages for optoelectronic devices and advanced pigment technologies. Ultimately, our approach provides a versatile, programmable platform for optical amplification and tunable emission control within colloidal photonic architectures.
The ability to generate quantum light at room temperature on a mature semiconductor platform opens up new possibilities for quantum technologies. Heteroepitaxial growth of gallium nitride on silicon substrates offers the opportunity to leverage existing expertise and wafer-scale manufacturing to integrate bright quantum emitters (QEs) in this material within cavities, diodes, and photonic circuits. Until now, it has only been possible to grow GaN QEs at uncontrolled depths on sapphire substrates, which is disadvantageous for potential device architectures. Here, we report a method to produce GaN QEs by metal-organic vapor phase epitaxy at a controlled depth in the crystal through the application of silane treatment and subsequent growth of 3D islands. We demonstrate this process on highly technologically relevant silicon substrates, producing room-temperature QEs with a high Debye-Waller factor and strongly anti-bunched emission.
We analyze shortcuts to adiabaticity (STA) and their completions for the quantum harmonic oscillator (QHO) with time-dependent frequency, as well as for quantum field theory (QFT) in non-stationary backgrounds. We exploit the analogy with one-dimensional quantum mechanics, and the well known correspondence between Bogoliubov coefficients in the QHO and transmission/reflection amplitudes in scattering theory. Within this framework, STA protocols for the QHO are equivalent to transmission resonances, while STA in QFT with homogeneous backgrounds correspond to reflectionless potentials. Moreover, using the connection between particle creation and squeezed states, we show how STA completions can be understood in terms of the anti-squeezing operator.
We study the nonlocal advantage of quantum imaginarity (NAQI) and distillable imaginarity of assistance (DIA), which treat imaginarity as a resource in distributed scenarios. For two qubits interacting with a lossy cavity, it is shown that both the NAQI and DIA can be well preserved for long times in the presence of large and symmetric detuning between the qubits and the cavity. Moreover, the off-resonant interaction generates a high degree of NAQI and DIA from the initial product states of two qubits having the same detunings and unequal couplings to the cavity. Based on the effective coupling of the qubits induced by the cavity mode, we explain the physical mechanism underlying the validity of this strategy. Our findings shed light on the role that off-resonant interactions have in the efficient control of imaginarity in distributed scenarios.
Quantum architecture search (QAS) has emerged to automate the design of high-performance quantum circuits under specific tasks and hardware constraints. We propose a noise-aware quantum architecture search (NA-QAS) framework based on variational quantum circuit design. By incorporating a noise model into the training of parameterized quantum circuits (PQCs), the proposed framework identifies noise-robust architectures. We introduce a hybrid Hamiltonian $\varepsilon$-greedy strategy to optimize evaluation costs and circumvent local optima. Furthermore, an enhanced variable-depth NSGA-II algorithm is employed to navigate the vast search space, enabling an automated trade-off between architectural expressibility and quantum hardware overhead. The effectiveness of the framework is validated through binary classification and iris multi-classification tasks under noisy conditions. Compared to existing approaches, our framework can search for quantum architectures with superior performance and greater resource efficiency under noisy conditions.
We introduce a two-tooth bosonic quantum comb that captures the sequential interactions between a thermal absorber and a long-lived coherent probe. The comb provides a causal, multi-time description of coherence transport, tracking how the probe records both instantaneous fluctuations and their temporal correlations. Using a process-tensor formulation, we derive closed-form expressions showing that interference between the two interaction windows generates a non-monotonic memory response that reflects a fundamental competition between the absorber's thermal population and its dynamical correlations. By sweeping the temporal separation between the interaction windows, the probe directly samples the absorber's population correlator, enabling bosonic noise spectroscopy that discriminates Markovian temperature noise from slow or spectrally structured fluctuations. The approach is readily compatible with circuit-QED platforms and offers a general method for probing fluctuating bosonic environments.
We present a systematic study of Tensor Network (TN) models, namely Matrix Product States (MPS) and Tree Tensor Networks (TTN), for real-time jet tagging in high-energy physics, with a focus on low-latency deployment on Field Programmable Gate Arrays (FPGAs). Motivated by the strict requirements of the HL-LHC Level-1 trigger system, we explore TNs as compact and interpretable alternatives to deep neural networks. Using low-level jet constituent features, our models achieve competitive performance compared to state-of-the-art deep learning classifiers. We investigate post-training quantization to enable hardware-efficient implementations without degrading classification performance or latency. The best-performing models are synthesized to estimate FPGA resource usage, latency, and memory occupancy, demonstrating sub-microsecond latency and supporting the feasibility of online deployment in real-time trigger systems. Overall, this study highlights the potential of TN-based models for fast and resource-efficient inference in low-latency environments.
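For intuition about the inference-time computation (a schematic generic MPS classifier, not the paper's exact architecture or quantized implementation), each input feature is mapped to a small local vector and contracted through the MPS tensors to produce class scores:

```python
import numpy as np

rng = np.random.default_rng(42)

def feature_map(x):
    """Map each scalar feature in [0, 1] to a 2-dimensional local vector."""
    return np.stack([np.cos(np.pi * x / 2), np.sin(np.pi * x / 2)], axis=-1)

def mps_classify(features, tensors, readout):
    """Contract an MPS classifier left to right.

    tensors[i] has shape (D, 2, D): (left bond, physical leg, right bond).
    readout has shape (D, n_classes) and maps the final bond vector to scores.
    """
    phi = feature_map(features)            # (n_sites, 2)
    v = np.zeros(tensors[0].shape[0])
    v[0] = 1.0                             # boundary vector
    for A, p in zip(tensors, phi):
        v = np.einsum('l,lpr,p->r', v, A, p)
        v /= np.linalg.norm(v)             # keep the contraction numerically stable
    return v @ readout                     # class scores

# Toy instance: 16 input features, bond dimension 8, 2 classes (random weights)
n_sites, D, n_classes = 16, 8, 2
tensors = [rng.normal(size=(D, 2, D)) / np.sqrt(D) for _ in range(n_sites)]
readout = rng.normal(size=(D, n_classes))
jet_features = rng.uniform(size=n_sites)   # stand-in for preprocessed jet-constituent features
print("class scores:", mps_classify(jet_features, tensors, readout))
```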
Ruiheng Zhang, Jingfeng Yao, Huangxuan Zhao, Hao Yan, Xiao He, Lei Chen, Zhou Wei, Yong Luo, Zengmao Wang, Lefei Zhang, Dacheng Tao, Bo Du
cs.CV arXiv:2601.11522v1
Despite recent progress, medical foundation models still struggle to unify visual understanding and generation, as these tasks have inherently conflicting goals: semantic abstraction versus pixel-level reconstruction. Existing approaches, typically based on parameter-shared autoregressive architectures, frequently lead to compromised performance in one or both tasks. To address this, we present UniX, a next-generation unified medical foundation model for chest X-ray understanding and generation. UniX decouples the two tasks into an autoregressive branch for understanding and a diffusion branch for high-fidelity generation. Crucially, a cross-modal self-attention mechanism is introduced to dynamically guide the generation process with understanding features. Coupled with a rigorous data cleaning pipeline and a multi-stage training strategy, this architecture enables synergistic collaboration between tasks while leveraging the strengths of diffusion models for superior generation. On two representative benchmarks, UniX achieves a 46.1% improvement in understanding performance (Micro-F1) and a 24.2% gain in generation quality (FD-RadDino), using only a quarter of the parameters of LLM-CXR. By achieving performance on par with task-specific models, our work establishes a scalable paradigm for synergistic medical image understanding and generation. Codes and models are available at https://github.com/ZrH42/UniX.
The intrabinary shocks (IBS) in spider pulsars emit non-thermal synchrotron X-rays from accelerated electrons and positrons in the shocked pulsar wind, likely energized by magnetic reconnection. The double-peaked X-ray light curves from these shocks have been well characterized in several spider systems. In this paper, we analyze Imaging X-ray Polarimetry Explorer (IXPE) observations of the redback pulsar J1723$-$2837 to examine the expected synchrotron polarization. Using advanced extraction methods that include spatial, temporal, and particle background weights, we constrain the polarization of the IBS. We compare different models for the magnetic field in the radiation zone and find that the best fit prefers a striped pulsar wind model over other polarized models, with maximum polarization degree of the IBS emission component $\Pi_{\rm IBS}=36^{+16}_{-15}\%$, in addition to an unpolarized non-IBS component. Since this is only 2.4$\sigma$, we cannot claim strong preference over an unpolarized model; we report a $99\%$ confidence level upper limit on the total polarization of both IBS and non-IBS components $\Pi_{99}<36\%$, which is improved over the $50\%$ limit obtained in previous work. The best-fit polarization of the IBS component is consistent with numerical simulations. Detailed tests of such models are accessible to future measurements.
In this work, we demonstrate that the intrinsic timescale of a Josephson junction can be controlled through dynamical vacuum selection. By applying a Kapitza-like high-frequency drive to the system, the effective Josephson potential is reshaped, allowing for the stabilization of in-phase or antiphase configurations. As a result, the Josephson plasma frequency, that is, the clock frequency of the junction, becomes a tunable property of the selected vacuum. Our findings establish a vacuum-controlled Josephson clock principle, in which the dynamical vacuum acts as an internal reference that fixes the operational timescale of Josephson oscillations, rather than this scale being imposed externally.
cs.CL arXiv:2601.11518v1
Frontier LLMs are increasingly utilised across academia, society and industry. A commonly used unit for comparing models, their inputs and outputs, and estimating inference pricing is the token. In general, tokens are used as a stable currency, assumed to be broadly consistent across tokenizers and contexts, enabling direct comparisons. However, tokenization varies significantly across models and domains of text, making naive interpretation of token counts problematic. We quantify this variation by providing a comprehensive empirical analysis of tokenization, exploring the compression of sequences to tokens across different distributions of textual data. Our analysis challenges commonly held heuristics about token lengths, finding them to be overly simplistic. We hope the insights of our study add clarity and intuition toward tokenization in contemporary LLMs.
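A minimal sketch of the kind of measurement involved (our illustration, not the paper's pipeline): the same texts are encoded with different tokenizers and the characters-per-token compression is compared, here using tiktoken encodings as stand-ins (encoding names assumed available in the installed version):

```python
import tiktoken

samples = {
    "english prose": "The quick brown fox jumps over the lazy dog near the riverbank.",
    "python code":   "def fib(n):\n    return n if n < 2 else fib(n - 1) + fib(n - 2)",
    "german prose":  "Der schnelle braune Fuchs springt ueber den faulen Hund am Flussufer.",
}

# Two different tokenizers applied to identical text: token counts differ,
# so "number of tokens" is not a stable unit across models or domains.
for enc_name in ("cl100k_base", "o200k_base"):
    enc = tiktoken.get_encoding(enc_name)
    for label, text in samples.items():
        tokens = enc.encode(text)
        print(f"{enc_name:12s} {label:14s} {len(tokens):3d} tokens, "
              f"{len(text) / len(tokens):.2f} chars/token")
```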
Large reasoning models (LRMs) produce a textual chain of thought (CoT) in the process of solving a problem, which serves as a potentially powerful tool to understand the problem by surfacing a human-readable, natural-language explanation. However, it is unclear whether these explanations generalize, i.e. whether they capture general patterns about the underlying problem rather than patterns which are esoteric to the LRM. This is a crucial question in understanding or discovering new concepts, e.g. in AI for science. We study this generalization question by evaluating a specific notion of generalizability: whether explanations produced by one LRM induce the same behavior when given to other LRMs. We find that CoT explanations often exhibit this form of generalization (i.e. they increase consistency between LRMs) and that this increased generalization is correlated with human preference rankings and post-training with reinforcement learning. We further analyze the conditions under which explanations yield consistent answers and propose a straightforward, sentence-level ensembling strategy that improves consistency. Taken together, these results prescribe caution when using LRM explanations to yield new insights and outline a framework for characterizing LRM explanation generalization.
Frontier language model capabilities are improving rapidly. We thus need stronger mitigations against bad actors misusing increasingly powerful systems. Prior work has shown that activation probes may be a promising misuse mitigation technique, but we identify a key remaining challenge: probes fail to generalize under important production distribution shifts. In particular, we find that the shift from short-context to long-context inputs is difficult for existing probe architectures. We propose several new probe architectures that handle this long-context distribution shift. We evaluate these probes in the cyber-offensive domain, testing their robustness against various production-relevant shifts, including multi-turn conversations, static jailbreaks, and adaptive red teaming. Our results demonstrate that while multimax addresses context length, a combination of architecture choice and training on diverse distributions is required for broad generalization. Additionally, we show that pairing probes with prompted classifiers achieves optimal accuracy at a low cost due to the computational efficiency of probes. These findings have informed the successful deployment of misuse mitigation probes in user-facing instances of Gemini, Google's frontier language model. Finally, we find early positive results using AlphaEvolve to automate improvements in both probe architecture search and adaptive red teaming, showing that automating some AI safety research is already possible.
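For concreteness, a generic max-aggregated linear activation probe is sketched below (our illustration of a plausible baseline in the same family; the paper's multimax architecture is not reproduced here): per-token scores from a linear head are pooled with a max over the sequence, so a long context cannot dilute a localized misuse signal.

```python
import torch
from torch import nn

class MaxPooledProbe(nn.Module):
    """Linear probe over residual-stream activations, max-pooled across tokens.

    Mean pooling dilutes a short harmful span inside a long context; taking the
    max of per-token scores keeps the probe sensitive regardless of context length.
    """
    def __init__(self, d_model: int):
        super().__init__()
        self.score = nn.Linear(d_model, 1)

    def forward(self, acts: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # acts: (batch, seq_len, d_model), mask: (batch, seq_len), 1 for real tokens
        per_token = self.score(acts).squeeze(-1)              # (batch, seq_len)
        per_token = per_token.masked_fill(mask == 0, float("-inf"))
        return per_token.max(dim=-1).values                   # (batch,) logits

# Toy usage with random activations standing in for a language model's hidden states
probe = MaxPooledProbe(d_model=64)
acts = torch.randn(2, 512, 64)
mask = torch.ones(2, 512)
logits = probe(acts, mask)
loss = nn.functional.binary_cross_entropy_with_logits(logits, torch.tensor([1.0, 0.0]))
loss.backward()
print(logits.detach())
```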
Callum T. Donnan, Derek J. McLeod, Ross J. McLure, James S. Dunlop, Fergus Cullen, Mark Dickinson, Pablo Arrabal Haro, Anthony J. Taylor, Cecilia Bondestam, Feng-Yuan Liu, Karla Z. Arellano-Córdova, Laia Barrufet, Ryan Begley, Adam C. Carnall, Hanna Golawska, Ho-Hin Leung, Dirk Scholte, Thomas M. Stanton
We present JWST/NIRSpec PRISM observations of a robust galaxy candidate at $z\simeq14$, selected from pure-parallel NIRCam imaging: PAN-z14-1. The NIRSpec spectrum allows confirmation of this source at $z_{\rm spec}=13.53^{+0.05}_{-0.06}$ through modeling of the Lyman-$\alpha$ break. PAN-z14-1 is the fourth most distant galaxy known to date and is extremely luminous ($M_{\rm UV}=-20.6\pm0.2$), with a blue UV-continuum slope ($\beta=-2.26\pm0.08$) and a large physical size ($r_{\rm c}=233\pm10\, \rm pc$). We fail to detect any rest-frame UV emission lines at $\geq 2\sigma$ significance, with upper limits sufficiently constraining to exclude the possibility of strong line emission. In terms of its physical properties, PAN-z14-1 is remarkably similar to the previously confirmed $z_{\rm spec}=14.18$ galaxy GS-z14-0. The lack of strong emission lines and large physical size is consistent with an emerging picture of two potentially distinct galaxy populations at $z>10$, distinguished by star-formation rate surface density. In this scenario, PAN-z14-1 is a second example of a ``normal'', extended, luminous, star-forming galaxy at $z \simeq 14$, and differs markedly from the other class of extremely compact galaxies with strong emission lines recently uncovered at extreme redshifts with JWST. These results highlight the importance of further spectroscopic confirmation of $z>10$ galaxy candidates in order to fully understand the diversity of properties displayed by the first galaxies.
Yawar Siddiqui, Duncan Frost, Samir Aroudj, Armen Avetisyan, Henry Howard-Jenkins, Daniel DeTone, Pierre Moulon, Qirui Wu, Zhengqin Li, Julian Straub, Richard Newcombe, Jakob Engel
Recent advances in 3D shape generation have achieved impressive results, but most existing methods rely on clean, unoccluded, and well-segmented inputs. Such conditions are rarely met in real-world scenarios. We present ShapeR, a novel approach for conditional 3D object shape generation from casually captured sequences. Given an image sequence, we leverage off-the-shelf visual-inertial SLAM, 3D detection algorithms, and vision-language models to extract, for each object, a set of sparse SLAM points, posed multi-view images, and machine-generated captions. A rectified flow transformer trained to effectively condition on these modalities then generates high-fidelity metric 3D shapes. To ensure robustness to the challenges of casually captured data, we employ a range of techniques including on-the-fly compositional augmentations, a curriculum training scheme spanning object- and scene-level datasets, and strategies to handle background clutter. Additionally, we introduce a new evaluation benchmark comprising 178 in-the-wild objects across 7 real-world scenes with geometry annotations. Experiments show that ShapeR significantly outperforms existing approaches in this challenging setting, achieving an improvement of 2.7x in Chamfer distance compared to state of the art.
cs.CY arXiv:2601.11513v1
Machine learning models are often used to make predictions about admissions process outcomes, such as for colleges or jobs. However, such decision processes differ substantially from the conventional machine learning paradigm. Because admissions decisions are capacity-constrained, whether a student is admitted depends on the other applicants who apply. We show how this dependence affects predictive performance even in otherwise ideal settings. Theoretically, we introduce two concepts that characterize the relationship between admission function properties, machine learning representation, and generalization to applicant pool distribution shifts: instability, which measures how many existing decisions can change when a single new applicant is introduced; and variability, which measures the number of unique students whose decisions can change. Empirically, we illustrate our theory on individual-level admissions data from the New York City high school matching system, showing that machine learning performance degrades as the applicant pool increasingly differs from the training data. Furthermore, there are larger performance drops for schools using decision rules that are more unstable and variable. Our work raises questions about the reliability of predicting individual admissions probabilities.
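A toy illustration of the capacity-constraint effect (ours, with hypothetical numbers and a deliberately simple top-k rule rather than the paper's admission functions): adding one strong applicant to a fixed-capacity pool changes at most one existing decision under top-k, and the paper's instability and variability notions quantify how much larger this effect can be for richer decision rules.

```python
import numpy as np

rng = np.random.default_rng(7)

def admit_top_k(scores, k):
    """Capacity-constrained rule: admit the k highest-scoring applicants."""
    order = np.argsort(scores)[::-1]
    admitted = np.zeros(len(scores), dtype=bool)
    admitted[order[:k]] = True
    return admitted

# Hypothetical applicant pool: 100 applicants, 30 seats
scores = rng.normal(size=100)
before = admit_top_k(scores, k=30)

# One new strong applicant arrives; capacity stays fixed
scores_new = np.append(scores, 3.0)
after = admit_top_k(scores_new, k=30)[:100]      # decisions for the original 100

changed = np.sum(before != after)
print("existing decisions changed by one new applicant:", changed)  # at most 1 for top-k
```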
For an algebraically closed field K, let G be a finite abelian group of K-linear automorphisms of a finite-dimensional algebra A, and let AG be the associated skew group algebra. The author, with S. Trepode and A. G. Chaio, introduced the notion of a Galois semi-covering functor to study the irreducible morphisms over skew group algebras. In this paper, we establish a Galois semi-covering functor between the morphism categories as well as the functor categories over the algebras A and AG and prove that their Krull-Gabriel dimensions are equal. This computation confirms Prest's conjecture on the finiteness of the Krull-Gabriel dimension and Schroer's conjecture on its connection with the stable rank (the least stabilized radical power) over skew gentle algebras. Moreover, we determine all possible stable ranks for (skew) Brauer graph algebras.
We study the abelian sub-C*-algebra of the CAR algebra generated by the star and face operators of Kitaev's toric code. We show that it is a C*-diagonal equivalent to the canonical diagonal of the CAR algebra.
While using formal methods offers advantages over unit testing, their steep learning curve can be daunting to developers and can be a major impediment to widespread adoption. To support integration into an industrial software engineering workflow, a tool must provide useful information and must be usable with relatively minimal user effort. In this paper, we discuss our experiences associated with identifying and applying formal methods tools on an electronic warfare (EW) system with stringent safety requirements and present perspectives on formal methods tools from EW software engineers who are proficient in development yet lack formal methods training. In addition to a difference in mindset between formal methods and unit testing approaches, some formal methods tools use terminology or annotations that differ from their target programming language, creating another barrier to adoption. Input/output contracts, objects in memory affected by a function, and loop invariants can be difficult to grasp and use. In addition to usability, our findings include a comparison of vulnerabilities detected by different tools. Finally, we present suggestions for improving formal methods usability including better documentation of capabilities, decreased manual effort, and improved handling of library code.
X-ray absorption spectroscopy (XAS) and electron energy-loss spectroscopy (EELS) produce detailed information about oxidation state, bonding, and coordination, making them essential for quantitative studies of redox and structure in functional materials. However, high-throughput quantitative analysis of these spectra, especially for mixed-valence materials, remains challenging, as diverse experimental conditions introduce noise, misalignment, and broadening of the spectral features. We address this challenge by training a machine learning model consisting of an autoencoder to standardize the spectra and a transformer model to predict both Cu oxidation state and Bader charge directly from L-edge spectra. The model is trained on a large dataset of FEFF-simulated spectra, and its performance is evaluated on both simulated and experimental data. The model yields highly accurate predictions across the domains of simulated and experimental XAS as well as experimental EELS. These advances enable future quantitative analysis of Cu redox processes under in situ and operando conditions.
cs.CV arXiv:2601.11508v1
Indoor environments evolve as objects move, appear, or disappear. Capturing these dynamics requires maintaining temporally consistent instance identities across intermittently captured 3D scans, even when changes are unobserved. We introduce and formalize the task of temporally sparse 4D indoor semantic instance segmentation (SIS), which jointly segments, identifies, and temporally associates object instances. This setting poses a challenge for existing 3DSIS methods, which require a discrete matching step due to their lack of temporal reasoning, and for 4D LiDAR approaches, which perform poorly due to their reliance on high-frequency temporal measurements that are uncommon in the longer-horizon evolution of indoor environments. We propose ReScene4D, a novel method that adapts 3DSIS architectures for 4DSIS without needing dense observations. It explores strategies to share information across observations, demonstrating that this shared context not only enables consistent instance tracking but also improves standard 3DSIS quality. To evaluate this task, we define a new metric, t-mAP, that extends mAP to reward temporal identity consistency. ReScene4D achieves state-of-the-art performance on the 3RScan dataset, establishing a new benchmark for understanding evolving indoor scenes.
cs.SI arXiv:2601.11507v1
To what extent is social media research independent from industry influence? Leveraging openly available data, we show that half of the research published in top journals has disclosable ties to industry in the form of prior funding, collaboration, or employment. However, the majority of these ties go undisclosed in the published research. These trends do not arise from broad scientific engagement with industry, but rather from a select group of scientists who maintain long-lasting relationships with industry. Undisclosed ties to industry are common not just among authors, but among reviewers and academic editors during manuscript evaluation. Further, industry-tied research garners more attention within the academy, among policymakers, on social media, and in the news. Finally, we find evidence that industry ties are associated with a topical focus away from impacts of platform-scale features. Together, these findings suggest industry influence in social media research is extensive, impactful, and often opaque. Going forward there is a need to strengthen disclosure norms and implement policies to ensure the visibility of independent research, and the integrity of industry supported research.
We have constructed a terahertz time-domain spectroscopy system using a Bluefors dilution refrigerator with a 7 T split-coil magnet. Using a gallium arsenide single quantum well sample, terahertz waveforms were measured at 145 mK in a magnetic field range from 0 to 6 T to measure cyclotron resonance. The effective mass is found to be $0.073 m_{e}$, which is larger than the commonly accepted bulk value of $0.068 m_{e}$.
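The quoted effective mass follows from the cyclotron-resonance condition $\omega_c = eB/m^*$; the snippet below is our own arithmetic check with an illustrative field value, not data from the experiment.

```python
from scipy.constants import e, m_e, pi

def effective_mass(B_tesla, f_c_hz):
    """Effective mass from cyclotron resonance: m* = e B / omega_c, with omega_c = 2 pi f_c."""
    return e * B_tesla / (2 * pi * f_c_hz)

# Illustrative numbers: m* = 0.073 m_e at B = 6 T corresponds to f_c of roughly 2.3 THz
f_c = e * 6.0 / (2 * pi * 0.073 * m_e)
print(f"f_c = {f_c / 1e12:.2f} THz")
print(f"m*/m_e = {effective_mass(6.0, f_c) / m_e:.3f}")
```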
Progress in Type 1 Diabetes (T1D) algorithm development is limited by the fragmentation and lack of standardization across existing T1D management datasets. Current datasets differ substantially in structure and are time-consuming to access and process, which impedes data integration and reduces the comparability and generalizability of algorithmic developments. This work aims to establish a unified and accessible data resource for T1D algorithm development. Multiple publicly available T1D datasets were consolidated into a unified resource, termed the MetaboNet dataset. Inclusion required the availability of both continuous glucose monitoring (CGM) data and corresponding insulin pump dosing records. Additionally, auxiliary information such as reported carbohydrate intake and physical activity was retained when present. The MetaboNet dataset comprises 3135 subjects and 1228 patient-years of overlapping CGM and insulin data, making it substantially larger than existing standalone benchmark datasets. The resource is distributed as a fully public subset available for immediate download at https://metabo-net.org/ , and with a Data Use Agreement (DUA)-restricted subset accessible through their respective application processes. For the datasets in the latter subset, processing pipelines are provided to automatically convert the data into the standardized MetaboNet format. A consolidated public dataset for T1D research is presented, and the access pathways for both its unrestricted and DUA-governed components are described. The resulting dataset covers a broad range of glycemic profiles and demographics and thus can yield more generalizable algorithmic performance than individual datasets.
Sandy Adhitia Ekahana, Aalok Tiwari, Souvik Sasmal, Zefeng Cai, Ravi Kumar Bandapelli, I-Hsuan Kao, Jian Tang, Chenbo Min, Tiema Qian, Kenji Watanabe, Takashi Taniguchi, Ni Ni, Qiong Ma, Chris Jozwiak, Eli Rotenberg, Aaron Bostwick, Simranjeet Singh, Noa Marom, Jyoti Katoch
Monolayer TaIrTe$_4$ has emerged as an attractive material platform to study intriguing phenomena related to topology and strong electron correlations. Recently, strong interactions have been demonstrated to induce strain and dielectric screening tunable topological phases such as quantum spin Hall insulator (QSHI), trivial insulator, higher-order topological insulator, and metallic phase, in the ground state of monolayer TaIrTe$_4$. Moreover, charge dosing has been demonstrated to convert the QSHI into a dual QSHI state. Although the band structure of monolayer TaIrTe$_4$ is central to interpreting its topological phases in transport experiments, direct experimental access to its intrinsic electronic structure has so far remained elusive. Here we report direct measurements of the monolayer TaIrTe$_4$ band structure using spatially resolved micro-angle-resolved photoemission spectroscopy (microARPES) with micrometre-scale resolution. The observed dispersions show quantitative agreement with density functional theory calculations using the Heyd-Scuseria-Ernzerhof hybrid functional, establishing the insulating ground state and revealing no evidence for strong electronic correlations. We further uncover a pronounced electron-hole asymmetry in the doping response. Whereas hole doping is readily induced by electrostatic gating, attempts to introduce electrons via gating or alkali metal deposition do not yield a rigid upward shift of the Fermi level. Fractional charge calculations demonstrate that added electrons instead drive band renormalization and shrink the band gap. Taken together, our experimental and theoretical results identify the microscopic mechanism by which induced charges reshape the band topology of monolayer TaIrTe$_4$, showing that doping can fundamentally alter the electronic structure beyond the rigid band behaviour that is typically assumed.