text | summary
---|---
Given a countable sofic group $\Gamma$, a finite alphabet $A$, a subshift $X \subseteq A^\Gamma$, and a potential $\phi: X \to \mathbb{R}$, we give sufficient conditions on $X$ and $\phi$ for expressing, in the uniqueness regime, the sofic entropy of the associated Gibbs measure $\mu$ as the limit of the Shannon entropies of some suitable finite systems approximating $\Gamma \curvearrowright (X,\mu)$. Next, we prove that if $\mu$ satisfies strong spatial mixing, then the sofic pressure admits a formula in terms of the integral of a random information function with respect to any $\Gamma$-invariant Borel probability measure with nonnegative sofic entropy. As a consequence of our results, we provide sufficient conditions on $X$ and $\phi$ for having independence of the sofic approximation for sofic pressure and sofic entropy, and for having locality of pressure in some relevant families of systems, among other applications. These results complement and unify those of Marcus and Pavlov (2015), Alpeev (2017), and Austin and Podder (2018). | Kieffer-Pinsker type formulas for Gibbs measures on sofic groups |
We present an approach to the dynamics of interacting particle systems, which allows one to derive path integral formulas from purely stochastic considerations. We show that the resulting field theory is a dual version of the standard theory of Doi and Peliti. This clarifies both the origin of the Cole-Hopf map between the two approaches and the occurrence of imaginary noises in effective Langevin equations for reaction-diffusion systems. The advantage of our approach is that it focuses directly on the density field. We show some applications, in particular to the Zero Range Process, hydrodynamic limits and the large deviation functional. | Dynamics of interacting particle systems: stochastic process and field theory
In recent years participatory budgeting (PB) in Scotland has grown from a handful of community-led processes to a movement supported by local and national government. This is epitomized by an agreement between the Scottish Government and the Convention of Scottish Local Authorities (COSLA) that at least 1% of local authority budgets will be subject to PB. This ongoing research paper explores the challenges that emerge from this 'scaling up' or 'mainstreaming' across the 32 local authorities that make up Scotland. The main objective is to evaluate local authority use of the digital platform Consul, which applies Natural Language Processing (NLP) to address these challenges. This project adopts a qualitative longitudinal design with interviews, observations of PB processes, and analysis of the digital platform data. Thematic analysis is employed to capture the major issues and themes which emerge. Longitudinal analysis then explores how these evolve over time. The potential for 32 live study sites provides a unique opportunity to explore discrete political and social contexts which materialize and allow for a deeper dive into the challenges and issues that may exist, something a wider cross-sectional study would miss. Initial results show that issues and challenges which come from scaling up may be tackled using NLP technology which, in a previous controlled use case-based evaluation, has been shown to improve the effectiveness of citizen participation. | Evaluating the application of NLP tools in mainstream participatory budgeting processes in Scotland
Recently, graph-based planning algorithms have gained much attention to solve goal-conditioned reinforcement learning (RL) tasks: they provide a sequence of subgoals to reach the target-goal, and the agents learn to execute subgoal-conditioned policies. However, the sample-efficiency of such RL schemes still remains a challenge, particularly for long-horizon tasks. To address this issue, we present a simple yet effective self-imitation scheme which distills a subgoal-conditioned policy into the target-goal-conditioned policy. Our intuition here is that to reach a target-goal, an agent should pass through a subgoal, so target-goal- and subgoal-conditioned policies should be similar to each other. We also propose a novel scheme of stochastically skipping executed subgoals in a planned path, which further improves performance. Unlike prior methods that only utilize graph-based planning in an execution phase, our method transfers knowledge from a planner along with a graph into policy learning. We empirically show that our method can significantly boost the sample-efficiency of the existing goal-conditioned RL methods under various long-horizon control tasks. | Imitating Graph-Based Planning with Goal-Conditioned Policies
Certain patterns of symmetry fractionalization in topologically ordered phases of matter are anomalous, in the sense that they can only occur at the surface of a higher dimensional symmetry-protected topological (SPT) state. An important question is to determine how to compute this anomaly, which means determining which SPT hosts a given symmetry-enriched topological order at its surface. While special cases are known, a general method to compute the anomaly has so far been lacking. In this paper we propose a general method to compute relative anomalies between different symmetry fractionalization classes of a given (2+1)D topological order. This method applies to all types of symmetry actions, including anyon-permuting symmetries and general space-time reflection symmetries. We demonstrate compatibility of the relative anomaly formula with previous results for diagnosing anomalies for $\mathbb{Z}_2^{\bf T}$ space-time reflection symmetry (e.g. where time-reversal squares to the identity) and mixed anomalies for $U(1) \times \mathbb{Z}_2^{\bf T}$ and $U(1) \rtimes \mathbb{Z}_2^{\bf T}$ symmetries. We also study a number of additional examples, including cases where space-time reflection symmetries are intertwined in non-trivial ways with unitary symmetries, such as $\mathbb{Z}_4^{\bf T}$ and mixed anomalies for $\mathbb{Z}_2 \times \mathbb{Z}_2^{\bf T}$ symmetry, and unitary $\mathbb{Z}_2 \times \mathbb{Z}_2$ symmetry with non-trivial anyon permutations. | Relative Anomalies in (2+1)D Symmetry Enriched Topological States |
In this paper we study algebraic structures of the classes of the $L_2$ analytic Fourier-Feynman transforms on Wiener space. To do this we first develop several rotation properties of the generalized Wiener integral associated with Gaussian processes. We then proceed to analyze the $L_2$ analytic Fourier-Feynman transforms associated with Gaussian processes. Our results show that these $L_2$ analytic Fourier-Feynman transforms are actually linear operator isomorphisms from a Hilbert space into itself. We finally investigate the algebraic structures of these classes of the transforms on Wiener space, and show that they are indeed group isomorphic. | Algebraic structure of the $L_2$ analytic Fourier-Feynman transform associated with Gaussian processes on Wiener space
We present $\Delta$-UQ -- a novel, general-purpose uncertainty estimator using the concept of anchoring in predictive models. Anchoring works by first transforming the input into a tuple consisting of an anchor point drawn from a prior distribution, and a combination of the input sample with the anchor using a pretext encoding scheme. This encoding is such that the original input can be perfectly recovered from the tuple -- regardless of the choice of the anchor. Therefore, any predictive model should be able to predict the target response from the tuple alone (since it implicitly represents the input). Moreover, by varying the anchors for a fixed sample, we can estimate uncertainty in the prediction even using only a single predictive model. We find this uncertainty is deeply connected to improper sampling of the input data, and inherent noise, enabling us to estimate the total uncertainty in any system. With extensive empirical studies on a variety of use-cases, we demonstrate that $\Delta$-UQ outperforms several competitive baselines. Specifically, we study model fitting, sequential model optimization, model-based inversion in the regression setting, and out-of-distribution detection and calibration under distribution shifts for classification. | $\Delta$-UQ: Accurate Uncertainty Quantification via Anchor Marginalization
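A minimal sketch of the anchoring idea described above, assuming a generic scikit-learn regressor in place of the paper's models; the toy data, network size, and number of test anchors are illustrative choices, not taken from the paper.

```python
# Illustrative sketch of anchoring-based uncertainty (not the authors' code).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(X).ravel() + 0.1 * rng.normal(size=500)

# Training: pair each input x with a random anchor a and encode it as [a, x - a];
# x is perfectly recoverable from the tuple since x = a + (x - a).
anchors = X[rng.integers(0, len(X), size=len(X))]
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(np.hstack([anchors, X - anchors]), y)

# Inference: vary the anchor for a fixed test input and read off the spread of
# predictions as an uncertainty estimate, using only this single model.
x_test = np.array([[2.5]])
test_anchors = X[rng.integers(0, len(X), size=50)]
preds = model.predict(np.hstack([test_anchors, x_test - test_anchors]))
print(f"mean prediction: {preds.mean():.3f}, uncertainty (std): {preds.std():.3f}")
```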
Small inhibitory neuronal circuits have long been identified as key neuronal motifs to generate and modulate the coexisting rhythms of various motor functions. Our paper highlights the role of a cellular switching mechanism to orchestrate such circuits. The cellular switch makes the circuits reconfigurable, robust, adaptable, and externally controllable. Without this cellular mechanism, the circuits' rhythms rely entirely on specific tunings of the synaptic connectivity, which makes them rigid, fragile, and difficult to control externally. We illustrate those properties on the much-studied architecture of a small network controlling both the pyloric and gastric rhythms of crabs. The cellular switch is provided by a slow negative conductance often neglected in mathematical modeling of central pattern generators. We propose that this conductance is simple to model and key to computational studies of rhythmic circuit neuromodulation. | Cellular switches orchestrate rhythmic circuits
The resolution function of a spectrometer based on a strongly bent single crystal (bending radius of 10 cm or less) is evaluated. It is shown that the resolution is controlled by two parameters, (i) the ratio of the lattice spacing of the chosen reflection to the crystal thickness and (ii) a single parameter comprising crystal thickness, its bending radius, and anisotropic elastic constants of the chosen crystal. Diamond, due to its unique elastic properties, can provide notably higher resolution than silicon. The results allow one to optimize the parameters of bent-crystal spectrometers for hard X-ray free electron laser sources. | Resolution of a bent-crystal spectrometer for X-ray free electron laser pulses: diamond vs. silicon
Density functional theory was used to study the nonmagnetic (NM) and ferromagnetic (FM) phases of face-centered cubic cerium. Functionals at four levels of approximation for the exchange-correlation energy were used: LDA, PBE, LDA/PBE+$U$, and YS-PBEh. The latter two contain an adjustable parameter, the onsite Coulomb repulsion parameter $U$ for LDA/PBE+$U$ and the fraction $\alpha_x$ of Hartree-Fock exchange for YS-PBEh, which were varied in order to study their influence on the results. By supposing that, as a first approximation, the NM and FM solutions can be identified with the observed $\alpha$ and $\gamma$ phases, respectively, it is concluded that while a small value of $U$ or $\alpha_x$ leads to the correct trend for the stability ordering of the two phases, larger values are necessary for a more appropriate (but still not satisfying) description of the electronic structure. | Nonmagnetic and ferromagnetic fcc cerium studied with one-electron methods
Numerical MHD codes have become extraordinarily powerful tools with which to study accretion turbulence. They have been used primarily to extract values for the classical $\alpha$ parameter, and to follow complex evolutionary development. Energy transport, which is at the heart of classical disk theory, has yet to be explored in any detail. Further topics that should be explored by simulation include nonideal MHD, radiation physics, and outburst behavior related to the temperature sensitivity of the resistivity. | Numerical Simulations of the MRI and Real Disks |
Following the recent discovery of X-ray quasi-periodic eruptions (QPEs) coming from the nucleus of the galaxy GSN 069, here we report on the detection of QPEs in the active galaxy named RX J1301.9+2747. QPEs are rapid and recurrent increases of the X-ray count-rate by more than one order of magnitude with respect to a stable quiescent level. During an XMM-Newton observation lasting 48 ks that was performed on 30 and 31 May 2019, three strong QPEs lasting about half an hour each were detected in the light curves of RX J1301.9+2747. The first two QPEs are separated by a longer recurrence time (about 20 ks) compared to the second and third (about 13 ks). This pattern is consistent with the alternating long-short recurrence times of the GSN 069 QPEs, although the difference between the consecutive recurrence times is significantly smaller in GSN 069. Longer X-ray observations will better clarify the temporal pattern of the QPEs in RX J1301.9+2747 and will allow a detailed comparison with GSN 069 to be performed. The X-ray spectral properties of QPEs in the two sources are remarkably similar, with QPEs representing fast transitions from a relatively cold and likely disk-dominated state to a state that is characterized by a warmer emission similar to the so-called soft X-ray excess, a component that is almost ubiquitously seen in the X-ray spectra of unobscured, radiatively efficient active galaxies. Previous X-ray observations of RX J1301.9+2747 in 2000 and 2009 strongly suggest that QPEs have been present for at least the past 18.5 years. The detection of QPEs from a second galactic nucleus after GSN 069 rules out contamination by a Galactic source in both cases, such that QPEs ought to be considered a novel extragalactic phenomenon associated with accreting supermassive black holes. | X-ray quasi-periodic eruptions from the galactic nucleus of RX J1301.9+2747
We here discuss the emergence of Quasi Stationary States (QSS), a universal feature of systems with long-range interactions. With reference to the Hamiltonian Mean Field (HMF) model, numerical simulations are performed based on both the original $N$-body setting and the continuum Vlasov model which is supposed to hold in the thermodynamic limit. A detailed comparison unambiguously demonstrates that the Vlasov-wave system provides the correct framework to address the study of QSS. Further, analytical calculations based on Lynden-Bell's theory of violent relaxation are shown to result in accurate predictions. Finally, in specific regions of parameter space, Vlasov numerical solutions are shown to be affected by small-scale fluctuations, a finding that points to the need for novel schemes able to account for particle correlations. | Exploring the thermodynamic limit of Hamiltonian models: convergence to the Vlasov equation
In 2020, Yamakawa and Okuno proposed a stabilized sequential quadratic semidefinite programming (SQSDP) method for solving, in particular, degenerate nonlinear semidefinite optimization problems. The algorithm is shown to converge globally without a constraint qualification, and it has some nice properties, including feasible subproblems and the possibility of computing them inexactly. In particular, the convergence was established for approximate-Karush-Kuhn-Tucker (AKKT) and trace-AKKT conditions, which are two sequential optimality conditions for nonlinear conic contexts. Recently, however, complementarity-AKKT (CAKKT) conditions were also considered as a more practical alternative to the previously mentioned ones. Since few methods are shown to converge to CAKKT points, at least in conic optimization, and in order to complete the study associated with the SQSDP method, here we propose a revised version of the method, maintaining its good properties. We modify the previous algorithm, prove global convergence in the sense of CAKKT, and show some preliminary numerical experiments. | A revised sequential quadratic semidefinite programming method for nonlinear semidefinite optimization
This paper shows how to apply memoization (caching of subgoals and associated answer substitutions) in a constraint logic programming setting. The research is motivated by the desire to apply constraint logic programming (CLP) to problems in natural language processing that involve (constraint) interleaving or coroutining, such as GB and HPSG parsing. | Memoization in Constraint Logic Programming
We report the observation of charmless hadronic decays of charged B mesons to the final state K+K-pi+. Using a data sample of 347.5 fb^-1 collected at the Y(4S) resonance with the BABAR detector, we observe 429+/-43 signal events with a significance of 9.6 sigma. We measure the inclusive branching fraction BF(B+ --> K+K-pi+) = [5.0+/-0.5(stat)+/-0.5(syst)]x10^-6. Inspection of the Dalitz plot of signal candidates shows a broad structure peaking near 1.5 GeV/c^2 in the K+K- invariant mass distribution. We find the direct CP asymmetry to be consistent with zero. | Observation of the Decay B+ --> K+K-pi+ |
The tunnel conductance in normal-metal / insulator / PrOs$_4$Sb$_{12}$ junctions is theoretically studied, where skutterudite PrOs$_4$Sb$_{12}$ is considered to be an unconventional superconductor. The conductance is calculated for several pair potentials which have been proposed in recent works. The results show that the conductance is sensitive to the relation between the direction of electric currents and the position of point nodes. We also show that the conductance spectra often deviate from the shape of the bulk density of states and that the subgap spectra have peak structures in the case of the spin-triplet pair potentials. The results indicate that the tunnel conductance is a useful tool to obtain information on the pairing symmetry. | Tunneling Spectra of Skutterudite PrOs_4Sb_{12}
The analysis of perturbative quantities is a powerful tool to distinguish between different Dark Energy models and gravity theories that are degenerate at the background level. In this work, we generalise the integral solution of the matter density contrast for General Relativity gravity to a wide class of Modified Gravity (MG) theories. Calculating this solution requires prior knowledge of the Hubble rate, the density parameter at the present epoch ($\Omega_{m0}$) and the functional form of the effective Newton's constant that characterises the gravity theory. We estimate in a model-independent way the Hubble expansion rate by applying a non-parametric reconstruction method to model-independent cosmic chronometer data and high-$z$ quasar data. In order to compare our generalised solution of the matter density contrast, using the non-parametric reconstruction of $H(z)$ from observational data, with a purely theoretical one, we choose a parameterisation of the Screened MG and the $\Omega_{m0}$ value from the WMAP-9 collaboration. Finally, we calculate the growth index for the analysed cases, finding very good agreement between the theoretical values and those obtained using the approach presented in this work. | Reconstruction of cosmological matter perturbations in Modified Gravity
We report equilibrium geometric structures of CuO2, CuO3, CuO6, and CuO clusters obtained by an all-electron linear combination of atomic orbitals scheme within the density-functional theory with generalized gradient approximation to describe the exchange-correlation effects. The vibrational stability of all clusters is examined on the basis of the vibrational frequencies. A structure with Cs symmetry is found to be the lowest-energy structure for CuO2, while a -shaped structure with C2v symmetry is the most stable structure for CuO3. For the larger CuO6 and CuO clusters, several competitive structures exist with structures containing ozonide units being higher in energy than those with O2 units. The infrared and Raman spectra are calculated for the stable optimal geometries. | Molecular structures and vibrations of neutral and anionic CuOx (x = 1-3,6) clusters
Quantum metrology holds the promise of an early practical application of quantum technologies, in which measurements of physical quantities can be made with much greater precision than what is achievable with classical technologies. In this review, we collect some of the key theoretical results in quantum parameter estimation by presenting the theory for the quantum estimation of a single parameter, multiple parameters, and optical estimation using Gaussian states. We give an overview of results in areas of current research interest, such as Bayesian quantum estimation, noisy quantum metrology, and distributed quantum sensing. We address the question of how minimum measurement errors can be achieved using entanglement as well as more general quantum states. This review is presented from a geometric perspective. This has the advantage that it unifies a wide variety of estimation procedures and strategies, thus providing a more intuitive big picture of quantum parameter estimation. | A Geometric Perspective on Quantum Parameter Estimation
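For orientation, a central textbook relation behind the single-parameter results that such a review covers is the quantum Cramér-Rao bound (standard background, not a result specific to this review): for $\nu$ independent repetitions, any unbiased estimator of $\theta$ obeys
$$(\Delta\theta)^2 \;\ge\; \frac{1}{\nu\, F_Q[\varrho_\theta]},$$
where $F_Q[\varrho_\theta]$ is the quantum Fisher information of the parametrized state $\varrho_\theta$.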
Pretrained language models have been suggested as a possible alternative or complement to structured knowledge bases. However, this emerging LM-as-KB paradigm has so far only been considered in a very limited setting, which only allows handling 21k entities whose single-token name is found in common LM vocabularies. Furthermore, the main benefit of this paradigm, namely querying the KB using a variety of natural language paraphrases, is underexplored so far. Here, we formulate two basic requirements for treating LMs as KBs: (i) the ability to store a large number of facts involving a large number of entities and (ii) the ability to query stored facts. We explore three entity representations that allow LMs to represent millions of entities and present a detailed case study on paraphrased querying of world knowledge in LMs, thereby providing a proof-of-concept that language models can indeed serve as knowledge bases. | Language Models as Knowledge Bases: On Entity Representations, Storage Capacity, and Paraphrased Queries
We discuss two ways in which one can study two-charge supertubes as components of generic three-charge, three-dipole charge supergravity solutions. The first is using the Born-Infeld action of the supertubes, and the second is via the complete supergravity solution. Even though the Born-Infeld description is only a probe approximation, we find that it gives exactly the same essential physics as the complete supergravity solution. Since supertubes can depend on arbitrary functions, our analysis strengthens the evidence for the existence of three-charge black-hole microstate geometries that depend on an infinite set of parameters, and sets the stage for the computation of the entropy of these backgrounds. We examine numerous other aspects of supertubes in three-charge, three-dipole charge supergravity backgrounds, including chronology protection during mergers, the contribution of supertubes to the charges and angular momenta, and the enhancement of their entropy. In particular, we find that entropy enhancement affects supertube fluctuations both along the internal and the spacetime directions, and we prove that the charges that give the enhanced entropy can be much larger than the asymptotic charges of the solution. We also re-examine the embedding of five-dimensional black rings in Taub-NUT, and show that in different coordinate patches a ring can correspond to different four-dimensional black holes. Last, but not least, we show that all the three-charge black hole microstate geometries constructed so far can be embedded in AdS_3 x S^3, and hence can be related to states of the D1-D5 CFT. | Supertubes in Bubbling Backgrounds: Born-Infeld Meets Supergravity |
In this contribution a method is introduced that allows for a linkage between the process-induced structural damage and the fracture behaviour. Based on an anisotropic elastic material model, different modelling approaches for initial damage effects are introduced and compared. The approaches are applied to remote laser cut carbon fibre reinforced polymers in order to model various thermally induced damage effects like chemical decomposition, micro-cracks and delamination. The dimensions of this heat affected zone are calculated with 1D heat conduction. In experiment and simulation, milled and laser cut specimens with different process parameters are compared in order to quantify the impact of the cutting technology on the fracture behaviour. For this purpose open hole specimens were used. | Analysis of process-induced damage in remote laser cut carbon fibre reinforced polymers
We predict the dwarf galaxy detection limits for the upcoming Chinese Space Station Telescope (CSST) survey that will cover 17,500 deg$^{2}$ of the sky with a wide field of view of 1.1 deg$^2$. The point-source depth reaches 26.3 mag in the $g$ band and 25.9 mag in the $i$ band. Constructing mock survey data based on the designed photometric bands, we estimate the recovery rate of artificial dwarf galaxies from mock point-source photometric catalogues. The detection of these artificial dwarf galaxies is strongly dependent on their distance, magnitude and size, in agreement with searches in current surveys. We expect CSST to enable the detection of dwarf galaxies with $M_V = -3.0$ and $\mu_{250} = 32.0$ mag/arcsec$^2$ (the surface-brightness limit for a system of half-light radius $r_{\rm h}$ = 250 pc) at 400 kpc, and $M_V = -4.9$ and $\mu_{250} = 30.5$ mag/arcsec$^2$ around the Andromeda galaxy. Beyond the Local Group, the CSST survey will achieve $M_V = -5.8$ and $\mu_{250}$ = 29.7 mag/arcsec$^2$ in the distance range of 1--2 Mpc, opening up an exciting discovery space for faint field dwarf galaxies. With its optical bands, wide survey footprint, and space resolution, CSST will undoubtedly expand our knowledge of low-mass dwarf galaxies to an unprecedented volume. | Local Group Dwarf Galaxy Detection Limit in the CSST survey
We propose a Markovian quantum master equation that can describe the Fano effect directly, by assuming a standard cavity quantum electrodynamics system. The framework allows us to generalize the Fano formula, applicable over the weak and strong coupling regimes with pure dephasing. A formulation of its emission spectrum is also given in a consistent manner. We then find that the interference responsible for the Fano effect is robust against pure dephasing. This is counterintuitive because the impact of interference is, in general, severely reduced by decoherence processes. Our approach thus provides a basis for theoretical treatments of the Fano effect and new insights into the quantum interference in open quantum systems. | Theory of Fano effect in cavity quantum electrodynamics |
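For context, the textbook Fano lineshape to which the effect owes its name (standard background, not a formula taken from this paper) reads
$$\sigma(\epsilon) \;\propto\; \frac{(q+\epsilon)^2}{1+\epsilon^2}, \qquad \epsilon = \frac{E - E_{\mathrm{res}}}{\Gamma/2},$$
with $q$ the asymmetry parameter and $\Gamma$ the resonance width; the interference between resonant and background pathways discussed above is what sets the value of $q$.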
Integral field spectroscopy of 11 type-Ib/c supernova explosion sites in nearby galaxies has been obtained using UH88/SNIFS and Gemini-N/GMOS. The use of integral field spectroscopy enables us to obtain both spatial and spectral information of the explosion site, allowing the identification of the parent stellar population of the supernova progenitor star. The spectrum of the parent population provides metallicity determination via strong-line method and age estimation obtained via comparison with simple stellar population (SSP) models. We adopt this information as the metallicity and age of the supernova progenitor, under the assumption that it was coeval with the parent stellar population. The age of the star corresponds to its lifetime, which in turn gives the estimate of its initial mass. With this method we were able to determine both the metallicity and initial (ZAMS) mass of the progenitor stars of the type Ib and Ic supernovae. We found that on average SN Ic explosion sites are more metal-rich and younger than SN Ib sites. The initial mass of the progenitors derived from parent stellar population age suggests that SN Ic have more massive progenitors than SN Ib. In addition, we also found indication that some of our SN progenitors are less massive than ~25 Msun, indicating that they may have been stars in a close binary system that have lost their outer envelope via binary interactions to produce Ib/c supernovae, instead of single Wolf-Rayet stars. These findings support the current suggestions that both binary and single progenitor channels are in effect in producing type Ib/c supernovae. This work also demonstrates the power of integral field spectroscopy in investigating supernova environments and active star forming regions. | Integral field spectroscopy of supernova explosion sites: constraining mass and metallicity of the progenitors - I. Type Ib and Ic supernovae |
In this work, we study strong and radiative decays of the S-wave $D\Xi$ molecular state, which is related to the $\Omega^*_c$ states newly observed at LHCb. The coupling between the $D\Xi$ molecular state and its constituents $D$ and $\Xi$ is calculated by using the compositeness condition. With the obtained coupling, the partial decay widths of the $D\Xi$ molecular state into the $\Xi_c^{+}K^{-}$, $\Xi^{'+}_cK^{-}$ and $\Omega^{*}_c(2695)\gamma$ final states through hadronic loops are calculated with the help of effective Lagrangians. By comparison with the LHCb observation, the current results for the total decay width support the $\Omega^{*}_c(3119)$ or $\Omega^{*}_c(3050)$ as a $D\Xi$ molecule, while the decay widths of the $\Omega^{*}_c(3000)$, $\Omega^{*}_c(3066)$ and $\Omega^{*}_c(3090)$ cannot be well reproduced in the molecular-state picture. The partial decay widths are also presented and are helpful for further understanding the internal structures of $\Omega^{*}_c(3119)$ and $\Omega^{*}_c(3050)$. | Strong and radiative decays of $D\Xi$ molecular state and newly observed $\Omega_c$ states
We analyze stability and generation of discrete gap solitons in weakly coupled optical waveguides. We demonstrate how both stable and unstable solitons can be observed experimentally in the engineered binary waveguide arrays, and also reveal a connection between the gap-soliton instabilities and limitations on the mutual beam focusing in periodic photonic structures. | Generation and stability of discrete gap solitons |
A main puzzle of deep neural networks (DNNs) revolves around the apparent absence of "overfitting", defined in this paper as follows: the expected error does not get worse when increasing the number of neurons or of iterations of gradient descent. This is surprising because of the large capacity demonstrated by DNNs to fit randomly labeled data and the absence of explicit regularization. Recent results by Srebro et al. provide a satisfying solution of the puzzle for linear networks used in binary classification. They prove that minimization of loss functions such as the logistic, the cross-entropy and the exp-loss yields asymptotic, "slow" convergence to the maximum margin solution for linearly separable datasets, independently of the initial conditions. Here we prove a similar result for nonlinear multilayer DNNs near zero minima of the empirical loss. The result holds for exponential-type losses but not for the square loss. In particular, we prove that the weight matrix at each layer of a deep network converges to a minimum norm solution up to a scale factor (in the separable case). Our analysis of the dynamical system corresponding to gradient descent of a multilayer network suggests a simple criterion for ranking the generalization performance of different zero minimizers of the empirical loss. | Theory IIIb: Generalization in Deep Networks |
Safe interaction with the environment is one of the most challenging aspects of Reinforcement Learning (RL) when applied to real-world problems. This is particularly important when unsafe actions have a high or irreversible negative impact on the environment. In the context of network management operations, Remote Electrical Tilt (RET) optimisation is a safety-critical application in which exploratory modifications of antenna tilt angles of base stations can cause significant performance degradation in the network. In this paper, we propose a modular Safe Reinforcement Learning (SRL) architecture which is then used to address the RET optimisation in cellular networks. In this approach, a safety shield continuously benchmarks the performance of RL agents against safe baselines, and determines safe antenna tilt updates to be performed on the network. Our results demonstrate improved performance of the SRL agent over the baseline while ensuring the safety of the performed actions. | A Safe Reinforcement Learning Architecture for Antenna Tilt Optimisation |
We review recent finite opacity approaches (GLV, WW, WOGZ) to the computation of the induced gluon radiative energy loss and their application to the tomographic studies of the density evolution in ultra-relativistic nuclear collisions. | Jet Quenching and Radiative Energy Loss in Dense Nuclear Matter |
We establish that in Quantum Chromodynamics (QCD) at zero temperature, SU_{L+R}(N_F) exhibits the vector mode conjectured by Georgi and SU_{L-R}(N_F) is realized in either the Nambu-Goldstone mode or else Q_5^a is also screened from view at infinity. The Wigner-Weyl mode is ruled out unless the beta function in QCD develops an infrared stable zero. | The Georgi "Avatar" of Broken Chiral Symmetry in Quantum Chromodynamics |
In this talk we discuss an improvement of the Diakonov-Petrov QCD Effective Action. We propose the Improved Effective Action, which is derived on the basis of the Lee-Bardeen results for the quark determinant in the instanton field. The Improved Effective Action provides proper account of the current quark masses, which is particularly important for strange quarks. This Action is successfully tested by the calculations of the quark condensate, the masses of the pseudoscalar meson octet and by axial-anomaly low-energy theorems. | Light Quarks Beyond Chiral Limit |
We show that a Reissner-Nordstr\"{o}m (RN) black hole can be formed by dropping a charged thin dust shell onto a RN naked singularity. This is in contrast to the fact that a RN naked singularity is prohibited from forming by dropping a charged thin dust shell onto a RN black hole. This implies the strong tendency of the RN singularity to be covered by a horizon in favour of cosmic censorship. We show that an extreme RN black hole can also be formed from a RN naked singularity by the same process in a finite advanced time. We also discuss the evolution of the charged thin dust shells and the causal structure of the resultant spacetimes. | Dynamical Transition from a Naked Singularity to a Black Hole |
We make use of the S=1 pseudospin formalism to describe the charge degree of freedom in a model high-$T_c$ cuprate with the on-site Hilbert space reduced to the three effective valence centers, nominally Cu$^{1+,\,2+,\,3+}$. Starting with a parent cuprate as an analogue of the quantum paramagnet ground state and using the Schwinger boson technique we found the pseudospin spectrum and conditions for the pseudomagnon condensation with phase transition to a superconducting state. | Superconductivity in model cuprate as an S=1 pseudomagnon condensation |
Following Cui et al. 2018 (hereafter Paper I) on the classification of large-scale environments (LSE) at z = 0, we push our analysis to higher redshifts and study the evolution of LSE and the baryon distributions in them. Our aim is to investigate how baryons affect the LSE as a function of redshift. In agreement with Paper I, the baryon models have negligible effect on the LSE over all investigated redshifts. We further validate the conclusion obtained in Paper I that the gas web is an unbiased tracer of total matter -- even better at high redshifts. By separating the gas mainly by temperature, we find that about 40 per cent of gas is in the so-called warm-hot intergalactic medium (WHIM). This fraction of gas mass in the WHIM decreases with redshift, especially from z = 1 (29 per cent) to z = 2.1 (10 per cent). By separating the whole WHIM gas mass into the four large-scale environments (i.e. voids, sheets, filaments, and knots), we find that about half of the WHIM gas is located in filaments. Although the total gas mass in WHIM decreases with redshift, the WHIM mass fractions in the different LSE seem unchanged. | The large-scale environment from cosmological simulations II: The redshift evolution and distributions of baryons |
The spectrum of higher harmonics in atoms calculated with a uniformized semiclassical propagator is presented and it is shown that higher harmonic generation is an interference phenomenon which can be described semiclassically. This can be concluded from the good agreement with the quantum spectrum. Moreover, the formation of a plateau in the spectrum is specifically due to the interference of irregular, time delayed, trajectories with regular orbits without a time-delay. This is proven by the absence of the plateau in an artificial semiclassical spectrum generated from a sample of trajectories from which the irregular trajectories (only a few percent) have been discarded. | Irregular orbits generate higher harmonics |
In this paper, we study in-depth the problem of online self-calibration for robust and accurate visual-inertial state estimation. In particular, we first perform a complete observability analysis for visual-inertial navigation systems (VINS) with full calibration of sensing parameters, including IMU and camera intrinsics and IMU-camera spatial-temporal extrinsic calibration, along with readout time of rolling shutter (RS) cameras (if used). We investigate different inertial model variants containing IMU intrinsic parameters that encompass most commonly used models for low-cost inertial sensors. The observability analysis results prove that VINS with full sensor calibration has four unobservable directions, corresponding to the system's global yaw and translation, while all sensor calibration parameters are observable given fully-excited 6-axis motion. Moreover, we, for the first time, identify primitive degenerate motions for IMU and camera intrinsic calibration. Each degenerate motion profile will cause a set of calibration parameters to be unobservable and any combination of these degenerate motions are still degenerate. Extensive Monte-Carlo simulations and real-world experiments are performed to validate both the observability analysis and identified degenerate motions, showing that online self-calibration improves system accuracy and robustness to calibration inaccuracies. We compare the proposed online self-calibration on commonly-used IMUs against the state-of-art offline calibration toolbox Kalibr, and show that the proposed system achieves better consistency and repeatability. Based on our analysis and experimental evaluations, we also provide practical guidelines for how to perform online IMU-camera sensor self-calibration. | Online Self-Calibration for Visual-Inertial Navigation Systems: Models, Analysis and Degeneracy |
Electronegativity is shown to control charge transfer, energy level alignments, and electron currents in single molecule tunnel junctions, all of which are governed by correlations contained within the density matrix. This is demonstrated by the fact that currents calculated from the one-electron reduced density matrix to second order in electron correlation are identical to the currents obtained from the Green's function corrected to second order in electron self-energy. | Electronegativity in quantum electronic transport |
We present a numerically efficient technique to evaluate the Green's function for extended two dimensional systems without relying on periodic boundary conditions. Different regions of interest, or `patches', are connected using self energy terms which encode the information of the extended parts of the system. The calculation scheme uses a combination of analytic expressions for the Green's function of infinite pristine systems and an adaptive recursive Green's function technique for the patches. The method allows for an efficient calculation of both local electronic and transport properties, as well as the inclusion of multiple probes in arbitrary geometries embedded in extended samples. We apply the Patched Green's function method to evaluate the local densities of states and transmission properties of graphene systems with two kinds of deviations from the pristine structure: bubbles and perforations with characteristic dimensions of the order of 10-25 nm, i.e. including hundreds of thousands of atoms. The strain field induced by a bubble is treated beyond an effective Dirac model, and we demonstrate the existence of both Friedel-type oscillations arising from the edges of the bubble, as well as pseudo-Landau levels related to the pseudomagnetic field induced by the nonuniform strain. Secondly, we compute the transport properties of a large perforation with atomic positions extracted from a TEM image, and show that current vortices may form near the zigzag segments of the perforation. | Patched Green's function techniques for two dimensional systems: Electronic behaviour of bubbles and perforations in graphene |
We study the production of isolated photons in $e^+e^-$ annihilation and give the proof of the all-order factorization of the collinear singularities. These singularities are absorbed in the standard fragmentation functions of partons into a photon, while the effects of the isolation are consistently included in the short-distance cross section. We compute this cross section at order $\alpha_s$ and show that it contains large double logarithms of the isolation parameters. We explain the physical origin of these logarithms and discuss the possibility to resum them to all orders in $\alpha_s$. | Factorization and soft-gluon divergences in isolated-photon cross sections
We consider a superlattice of parallel metal tunnel junctions with a spatially non-homogeneous probability for electrons to tunnel. In such structures tunneling can be accompanied by electron scattering that conserves energy but not momentum. In the special case of a tunneling probability that varies periodically with period $a$ in the longitudinal direction, i.e., perpendicular to the junctions, electron tunneling is accompanied by "umklapp" scattering, where the longitudinal momentum changes by a multiple of $h/a$. We predict that as a result a sequence of metal-insulator transitions can be induced by an external electric- or magnetic field as the field strength is increased. | Umklapp-Assisted Electron Transport Oscillations in Metal Superlattices |
In the first part we summarize the status of the nucleon-nucleon (NN) problem in the context of Hamiltonian based constituent quark models and present results for the $l=0$ phase shifts obtained from the Goldstone-boson exchange model by applying the resonating group method. The second part deals with the construction of local shallow and deep equivalent potentials based on a Supersymmetric Quantum Mechanics approach. | The Nucleon-Nucleon Problem in Quark Models
We study the effect of the exchange interaction on the Coulomb blockade peak height statistics in chaotic quantum dots. Because exchange reduces the level repulsion in the many body spectrum, it strongly affects the fluctuations of the peak conductance at finite temperature. We find that including exchange substantially improves the description of the experimental data. Moreover, it provides further evidence of the presence of high spin states (S>1) in such systems. | Exchange and the Coulomb blockade: Peak height statistics in quantum dots |
We demonstrate the reconstruction of the exciton-polariton condensate loaded in a single active miniband in one-dimensional microcavity wires with complex-valued periodic potentials. The effect appears due to strong polariton-polariton repulsion, and it depends on the type of the single-particle dispersion of the miniband, which can be fine tuned by the real and imaginary components of the potential. As a result, the condensate can be formed in a $0$-state, $\pi$-state, or mixed state of spatiotemporal intermittency, depending on the shape of the miniband, strength of interparticle interaction, and distribution of losses in the system. The reconstruction of the condensate wave function takes place by proliferation of nuclei of the new condensate phase in the form of dark solitons. We show that, in general, the interacting polaritons are not condensed in the state with minimal losses, nor do they accumulate in the state with a well-defined wave vector. | Reconstruction of Exciton-Polariton Condensates in 1D Periodic Structures
Given a forcing notion $P$ that forces certain values to several classical cardinal characteristics of the reals, we show how we can compose $P$ with a collapse (of a cardinal $\lambda>\kappa$ to $\kappa$) such that the composition still forces the previous values to these characteristics. We also show how to force distinct values to $\mathfrak m$, $\mathfrak p$ and $\mathfrak h$ while also keeping all the values in Cicho\'n's diagram distinct, using the Boolean Ultrapower method of arXiv:1708.03691. (In arXiv:2006.09826, the same was done for the newer Cicho\'n's Maximum construction, which avoids large cardinals.) | Controlling classical cardinal characteristics while collapsing cardinals
Order estimates for the Kolmogorov widths of an intersection of two finite-dimensional balls in a mixed norm under some conditions on the parameters are obtained. | Estimates for the Kolmogorov widths of an intersection of two balls in a mixed norm |
Counterfactual Explanations (CEs) are an important tool in Algorithmic Recourse for addressing two questions: 1. What are the crucial factors that led to an automated prediction/decision? 2. How can these factors be changed to achieve a more favorable outcome from a user's perspective? Thus, guiding the user's interaction with AI systems by proposing easy-to-understand explanations and easy-to-attain feasible changes is essential for the trustworthy adoption and long-term acceptance of AI systems. In the literature, various methods have been proposed to generate CEs, and different quality measures have been suggested to evaluate these methods. However, the generation of CEs is usually computationally expensive, and the resulting suggestions are unrealistic and thus non-actionable. In this paper, we introduce a new method to generate CEs for a pre-trained binary classifier by first shaping the latent space of an autoencoder to be a mixture of Gaussian distributions. CEs are then generated in latent space by linear interpolation between the query sample and the centroid of the target class. We show that our method maintains the characteristics of the input sample during the counterfactual search. In various experiments, we show that the proposed method is competitive on different quality measures on image and tabular datasets, and that it efficiently returns results that are closer to the original data manifold compared to three state-of-the-art methods, which is essential for realistic high-dimensional machine learning applications. | Counterfactual Explanation via Search in Gaussian Mixture Distributed Latent Space
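A minimal runnable sketch of the interpolation step described above, with PCA standing in for the Gaussian-mixture-shaped autoencoder and logistic regression for the pre-trained classifier; the dataset, dimensions, and step size are illustrative assumptions rather than the paper's setup.

```python
# Illustrative sketch: counterfactual search by linear interpolation in a latent
# space toward the target-class centroid (simplified stand-ins, not the paper's models).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X, y)   # "pre-trained" binary classifier
enc = PCA(n_components=5).fit(X)                    # stand-in encoder/decoder
Z = enc.transform(X)

x_query = X[:1]                                     # sample to explain
target = 1 - clf.predict(x_query)[0]                # desired (flipped) class
z_query = enc.transform(x_query)[0]
z_target = Z[y == target].mean(axis=0)              # target-class latent centroid

for alpha in np.linspace(0.0, 1.0, 101):
    z = (1 - alpha) * z_query + alpha * z_target    # linear interpolation in latent space
    x_cf = enc.inverse_transform(z[None, :])        # decode candidate counterfactual
    if clf.predict(x_cf)[0] == target:
        print(f"counterfactual found at alpha = {alpha:.2f}")
        break
```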
The Bbar0 and B- meson lifetimes are measured using data recorded on the Z peak with the ALEPH detector at LEP. An improved analysis based on partially reconstructed Bbar0 -> D*+l-nubar and B- -> D0l-nubar decays is presented. | Measurement of the B0 and B- meson lifetimes in ALEPH
A heterogeneous brittle material characterized by a random field of local toughness Kc(x) can be represented by an equivalent homogeneous medium of toughness Keff. Homogenization refers to a process of estimating Keff from the local field Kc(x). An approach based on a perturbative expansion of the stress intensity factor along a rough crack front shows the occurrence of different regimes depending on the correlation length of the local toughness field in the direction of crack propagation. A "weak pinning" regime takes place for long correlation lengths, where the effective toughness is the average of the local toughness. For shorter correlation lengths, a transition to "strong pinning" occurs, leading to a much higher effective toughness and characterized by a propagation regime consisting of jumps between pinning configurations. | Effective toughness of heterogeneous brittle materials
We present a simple model to study L\'{e}vy-flight foraging in a finite landscape with countable targets. In our approach, foraging is a step-based exploratory random search process with a power-law step-size distribution $P(l) \propto l^{-\mu}$. We find that, when the termination is regulated by a finite number of steps $N$, the optimum value of $\mu$ that maximises the foraging efficiency can vary substantially in the interval $\mu \in (1,3)$, depending on the landscape features (landscape size and number of targets). We further demonstrate that subjective returning can be another significant factor that affects the foraging efficiency in such context. Our results suggest that L\'{e}vy-flight foraging may arise through an interaction between the environmental context and the termination of exploitation, and particularly that the number of steps can play an important role in this scenario which is overlooked by most previous work. Our study not only provides a new perspective on L\'{e}vy-flight foraging, but also opens new avenues for investigating the interaction between foraging dynamics and environment as well as offers a realistic framework for analysing animal movement patterns from empirical data. | Optimal L\'{e}vy-flight foraging in a finite landscape |
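A minimal Monte Carlo sketch of the step-based search model described above: step lengths are drawn from $P(l) \propto l^{-\mu}$ by inverse-CDF sampling, the walk terminates after $N$ steps, and efficiency is measured as targets found per distance travelled. The landscape size, target number, and detection radius are illustrative assumptions, and detection is checked only at step endpoints for simplicity.

```python
# Illustrative Monte Carlo sketch of Levy-flight foraging (not the paper's code).
import numpy as np

rng = np.random.default_rng(1)

def forage(mu, n_steps=1000, L=100.0, n_targets=50, r_detect=1.0, l_min=1.0):
    """Foraging efficiency (targets found per unit distance) of one random walk."""
    targets = rng.uniform(0, L, size=(n_targets, 2))
    pos = np.array([L / 2.0, L / 2.0])
    found, travelled = 0, 0.0
    for _ in range(n_steps):                                  # terminate after N steps
        u = 1.0 - rng.random()                                # uniform in (0, 1]
        step = l_min * u ** (-1.0 / (mu - 1.0))               # P(l) ~ l^(-mu), l >= l_min
        theta = rng.uniform(0.0, 2.0 * np.pi)
        pos = (pos + step * np.array([np.cos(theta), np.sin(theta)])) % L  # periodic box
        travelled += step
        hit = np.linalg.norm(targets - pos, axis=1) < r_detect
        found += int(hit.sum())
        targets = targets[~hit]                               # destructive foraging
        if len(targets) == 0:
            break
    return found / travelled

for mu in (1.5, 2.0, 2.5):
    eff = np.mean([forage(mu) for _ in range(20)])
    print(f"mu = {mu}: mean efficiency ~ {eff:.4f}")
```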
An s-tuple of positive integers is k-wise relatively prime if any k of them are relatively prime. An exact formula is obtained for the probability that s positive integers are k-wise relatively prime. | The probability that random positive integers are k-wise relatively prime
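Two classical special cases, recalled here only for orientation (the general k-wise formula is the subject of the paper and is not reproduced): for k = s the integers need only be relatively prime as a whole, while for k = 2 they must be pairwise relatively prime:
$$\Pr\big[\gcd(n_1,\dots,n_s)=1\big] = \frac{1}{\zeta(s)}, \qquad \Pr\big[n_1,\dots,n_s \text{ pairwise coprime}\big] = \prod_{p \text{ prime}} \left(1-\frac{1}{p}\right)^{\!s-1}\left(1+\frac{s-1}{p}\right).$$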
We investigate the asymptotic rates of length-$n$ binary codes with VC-dimension at most $dn$ and minimum distance at least $\delta n$. Two upper bounds are obtained, one as a simple corollary of a result by Haussler and the other via a shortening approach combining Sauer-Shelah lemma and the linear programming bound. Two lower bounds are given using Gilbert-Varshamov type arguments over constant-weight and Markov-type sets. | On the VC-Dimension of Binary Codes |
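For reference, the Sauer-Shelah lemma invoked in the shortening argument above states that a binary code $C \subseteq \{0,1\}^n$ of VC-dimension at most $dn$ satisfies
$$|C| \;\le\; \sum_{i=0}^{\lfloor dn \rfloor} \binom{n}{i} \;\le\; 2^{h(d)\,n} \quad \text{for } d \le 1/2,$$
where $h$ is the binary entropy function, so the rate of such a code is at most $h(d)$; the bounds discussed above additionally account for the minimum-distance constraint $\delta n$.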
Most depression assessment tools are based on self-report questionnaires, such as the Patient Health Questionnaire (PHQ-9). These psychometric instruments can be easily adapted to an online setting by means of electronic forms. However, this approach lacks the interacting and engaging features of modern digital environments. With the aim of making depression screening more available, attractive and effective, we developed Perla, a conversational agent able to perform an interview based on the PHQ-9. We also conducted a validation study in which we compared the results obtained by the traditional self-report questionnaire with Perla's automated interview. Analyzing the results from this study we draw two significant conclusions: firstly, Perla is much preferred by Internet users, achieving more than 2.5 times more reach than a traditional form-based questionnaire; secondly, her psychometric properties (Cronbach's alpha of 0.81, sensitivity of 96% and specificity of 90%) are excellent and comparable to the traditional well-established depression screening questionnaires. | Perla: A Conversational Agent for Depression Screening in Digital Ecosystems. Design, Implementation and Validation |
We demonstrate that the new single crystal of YCu$_3$[OH(D)]$_{6.5}$Br$_{2.5}$ (YCOB) is a kagome Heisenberg antiferromagnet (KHA) without evident orphan spins ($\ll$ 0.8\%). The site mixing between polar OH$^-$ and non-polar Br$^-$ causes local distortions of Cu-O-Cu exchange paths, and gives rise to 70(2)\% of randomly distributed hexagons of alternate bonds ($\sim$ $J_1-\Delta J$ and $J_1+\Delta J$) and the rest of almost uniform hexagons ($\sim$ $J_1$) on the kagome lattice. Simulations of the random exchange model with $\Delta J$/$J_1$ = 0.7(1) show good agreement with the experimental observations, including the weak upturn seen in susceptibility and the slight polarization in magnetization. Despite the average antiferromagnetic coupling of $J_1$ $\sim$ 60 K, no conventional freezing is observed down to $T$ $\sim$ 0.001$J_1$, and the raw specific heat exhibits a nearly quadratic temperature dependence below 1 K $\sim$ 0.02$J_1$, phenomenologically consistent with a gapless (spin gap $\leq$ 0.025$J_1$) Dirac quantum spin liquid (QSL). Our result sheds new light on the theoretical understanding of the randomness-relevant gapless QSL behavior in YCOB, as well as in other relevant materials. | Gapless Spin Liquid Behavior in A Kagome Heisenberg Antiferromagnet with Randomly Distributed Hexagons of Alternate Bonds |
The surface tension of living cells and tissues originates from the generation of nonequilibrium active stresses within the cell cytoskeleton. Here, using laser ablation, we generate gradients in the surface tension of cellular aggregates as models of simple tissues. These gradients of active surface stress drive large-scale and rapid toroidal motion. Subsequently, the motions spontaneously reverse as stresses reaccumulate and cells return to their original positions. Both forward and reverse motions resemble Marangoni flows in viscous fluids. However, the motions are faster than the timescales of viscoelastic relaxation, and the surface tension gradient is proportional to mechanical strain at the surface. Further, due to active stress, both the surface tension gradient and surface strain are dependent upon the volume of the aggregate. These results indicate that surface tension can induce rapid and highly correlated elastic deformations in the maintenance of tissue shape and configuration. | Gradients in solid surface tension drive Marangoni-like motions in cell aggregates |
The one-loop Higgs coupling to two gluons has been invoked in the past to estimate that the fraction of the nucleon mass which is due to the Higgs is rather small but calculable (approximately 8 percent). To test the veracity of this hypothesis, we employ the same mechanism to compute the Higgs coupling to an arbitrary stable nucleus $A$ and its anti-nucleus $\bar{A}$. We find that the physical decay rate of a Higgs into a spin zero $A\bar{A}$ pair near the threshold corresponding to the Higgs mass is quite substantial, once we include the final state Coulomb corrections as well as possible form factor effects. If true, observation of even a few such decay events would be truly spectacular (with no competing background) since we are unaware of any other interaction which might lead to the production of a very heavy nucleus accompanied by its anti-nucleus in nucleon-(anti-)nucleon scattering. | Production and detection of heavy matter anti-matter from Higgs decays
We introduce a double quantum (DQ) 4-Ramsey measurement protocol that enables wide-field magnetic imaging using nitrogen vacancy (NV) centers in diamond, with enhanced homogeneity of the magnetic sensitivity relative to conventional single quantum (SQ) techniques. The DQ 4-Ramsey protocol employs microwave-phase alternation across four consecutive Ramsey (4-Ramsey) measurements to isolate the desired DQ magnetic signal from any residual SQ signal induced by microwave pulse errors. In a demonstration experiment employing a 1-$\mu$m-thick NV layer in a macroscopic diamond chip, the DQ 4-Ramsey protocol provides volume-normalized DC magnetic sensitivity of $\eta^\text{V}=34\,$nTHz$^{-1/2} \mu$m$^{3/2}$ across a $125\,\mu$m$ \,\times\,125\,\mu $m field of view, with about 5$\times$ less spatial variation in sensitivity across the field of view compared to a SQ measurement. The improved robustness and magnetic sensitivity homogeneity of the DQ 4-Ramsey protocol enable imaging of dynamic, broadband magnetic sources such as integrated circuits and electrically-active cells. | NV-Diamond Magnetic Microscopy using a Double Quantum 4-Ramsey Protocol |
Traditionally, networks operate at a small fraction of their capacities; however, recent technologies, such as Software-Defined Networking, may let operators run their networks harder (i.e., at higher utilization levels). Higher utilization can increase the network operator's revenue, but this gain comes at a cost: daily traffic fluctuations and failures might occasionally overload the network. We call such situations Resource Crunch. Dealing with Resource Crunch requires certain types of flexibility in the system. We focus on scenarios with flexible bandwidth requirements, e.g., some connections can tolerate lower bandwidth allocation. This may free capacity to provision new requests that would otherwise be blocked. For that, the network operator needs to make an informed decision, since reducing the bandwidth of a high-paying connection to allocate a low-value connection is not sensible. We propose a strategy to decide whether or not to provision a request (and which other connections to degrade) focusing on maximizing profits during Resource Crunch. To address this problem, we use an abstraction of the network state, called a Connection Adjacency Graph (CAG). We propose PROVISIONER, which integrates our CAG solution with an efficient Linear Program (LP). We compare our method to existing greedy approaches and to LP-only solutions, and show that our method outperforms them during Resource Crunch. | Running the Network Harder: Connection Provisioning under Resource Crunch |
Based on the isospin-dependent quantum molecular dynamics model, finite-size scaling effects on nuclear liquid--gas phase transition probes are investigated by studying the de-excitation processes of six thermal sources of different sizes with the same initial density and similar $N/Z$. Using several probes including the total multiplicity derivative ($dM_{tot}/dT$), second moment parameter ($M_2$), intermediate mass fragment (IMF) multiplicity ($N_{IMF}$), Fisher's power-law exponent ($\tau$), and Ma's nuclear Zipf's law exponent ($\xi$), the relationship between the phase transition temperature and the source size has been established. It is observed that the phase transition temperatures obtained from the IMF multiplicity, Fisher's exponent, and Ma's nuclear Zipf's law exponent have a strong correlation with the source size. Moreover, by employing the finite-size scaling law, the critical temperature $T_c$ and the critical exponent $\nu$ have been obtained for infinite nuclear matter. | Finite-size scaling phenomenon of nuclear liquid--gas phase transition probes |
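To make the extrapolation step concrete, the short sketch below fits the generic finite-size scaling ansatz $T_c(L) = T_c^{\infty} - a\,L^{-1/\nu}$ to a set of pseudo-critical temperatures. The data values, the identification of the linear source size with $L \sim A^{1/3}$, and the starting values of the fit are illustrative assumptions, not the paper's results.

```python
# Hedged sketch: fit the finite-size scaling ansatz T_c(L) = Tc_inf - a*L**(-1/nu)
# to pseudo-critical temperatures of finite sources and extrapolate to infinite
# nuclear matter.  All numbers below are synthetic placeholders.
import numpy as np
from scipy.optimize import curve_fit

def scaling_law(L, Tc_inf, a, nu):
    """Pseudo-critical temperature of a source of linear size L (assumed L ~ A**(1/3))."""
    return Tc_inf - a * L ** (-1.0 / nu)

L = np.array([3.0, 4.0, 5.0, 6.0, 7.0, 8.0])      # linear source sizes (illustrative)
T_pc = np.array([5.1, 5.9, 6.4, 6.7, 6.9, 7.1])   # pseudo-critical temperatures in MeV (mock)

popt, pcov = curve_fit(scaling_law, L, T_pc, p0=[8.0, 10.0, 1.0])
Tc_inf, a, nu = popt
print(f"T_c(infinite matter) ~ {Tc_inf:.2f} MeV, critical exponent nu ~ {nu:.2f}")
```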
Dynamic parallelism on GPUs allows GPU threads to dynamically launch other GPU threads. It is useful in applications with nested parallelism, particularly where the amount of nested parallelism is irregular and cannot be predicted beforehand. However, prior works have shown that dynamic parallelism may impose a high performance penalty when a large number of small grids are launched. The large number of launches results in high launch latency due to congestion, and the small grid sizes result in hardware underutilization. To address this issue, we propose a compiler framework for optimizing the use of dynamic parallelism in applications with nested parallelism. The framework features three key optimizations: thresholding, coarsening, and aggregation. Thresholding involves launching a grid dynamically only if the number of child threads exceeds some threshold, and serializing the child threads in the parent thread otherwise. Coarsening involves executing the work of multiple thread blocks by a single coarsened block to amortize the common work across them. Aggregation involves combining multiple child grids into a single aggregated grid. Our evaluation shows that our compiler framework improves the performance of applications with nested parallelism by a geometric mean of 43.0x over applications that use dynamic parallelism, 8.7x over applications that do not use dynamic parallelism, and 3.6x over applications that use dynamic parallelism with aggregation alone as proposed in prior work. | A Compiler Framework for Optimizing Dynamic Parallelism on GPUs |
In this talk we address the question of the upper bound on the lightest supersymmetric Higgs mass, $m_h$. This question is relevant since experimental lower bounds on $m_h$ might, in the near future, lead to the exclusion of supersymmetry. By imposing (perturbative) unification of the gauge couplings at some high scale $\gtrsim 10^{17}$ GeV, we have found that for a top-quark mass $M_t=175$ GeV, and depending on the supersymmetric parameters, this bound can be as high as 205 GeV. | What is the upper limit on the lightest supersymmetric Higgs mass?
We study some kinematical aspects of quantum fields on causal sets. In particular, we are interested in free scalar fields on a fixed background causal set. We present various results building up to the study of the entanglement entropy of de Sitter horizons using causal sets. We begin by obtaining causal set analogs of Green functions for this field. First we construct the retarded Green function in a Riemann normal neighborhood (RNN) of an arbitrary curved spacetime. Then, we show that in de Sitter and patches of anti-de Sitter spacetimes the construction can be done beyond the RNN. This allows us to construct the QFT vacuum on the causal set using the Sorkin-Johnston construction. We calculate the SJ vacuum on a causal set approximated by de Sitter spacetime, using numerical techniques. We find that the causal set SJ vacuum does not correspond to any of the known Mottola-Allen vacua of de Sitter spacetime. This has potential phenomenological consequences for early universe physics. Finally, we study the spacetime entanglement entropy for causal set de Sitter horizons. The entanglement entropy of de Sitter horizons is of particular interest. As in the case of nested causal diamonds in 2d Minkowski spacetime, we find that the causal set naturally gives a volume law of entropy, both for nested causal diamonds in 4d Minkowski spacetime as well as 2d and 4d de Sitter spacetimes. However, an area law emerges when the high frequency modes in the SJ spectrum are truncated. The choice of truncation turns out to be non-trivial and we end with several interesting questions. | Aspects of Quantum Fields on Causal Sets |
Magnesium aluminate scandium oxide (ScAlMgO4) is a promising lattice-matched substrate material for GaN- and ZnO-based optoelectronic devices. Yet, despite its clear advantages over substrates commonly used in heteroepitaxial growth, several fundamental properties of ScAlMgO4 remain unsettled. Here, we provide a comprehensive picture of its optical, electronic and structural properties by studying ScAlMgO4 single crystals grown by the Czochralski method. We use variable angle spectroscopic ellipsometry to determine complex in-plane and out-of-plane refractive indices in the range from 193 to 1690 nm. An oscillator-based model provides a phenomenological description of the ellipsometric spectra with excellent agreement over the entire range of wavelengths. For convenience, we supply the reader also with Cauchy formulas describing the real part of the anisotropic refractive index for wavelengths above 400 nm. Ab initio many-body perturbation theory modeling provides information about the electronic structure of ScAlMgO4 and successfully validates the experimentally obtained refractive index values. Simulations also show an exciton binding energy as large as a few hundred meV, indicating ScAlMgO4 as a promising material for implementation in low-threshold, deep-UV lasing devices operating at room temperature. X-ray diffraction measurements confirm the lattice constants of ScAlMgO4 previously reported, but in addition reveal that the dominant crystallographic planes (001) are mutually inclined by about $0.009^{\circ}$. In view of our work, ScAlMgO4 is a highly transparent, low refractive index, birefringent material similar to sapphire, but with a much more favorable lattice constant and simpler processing. | Optical, electronic and structural properties of ScAlMgO4
We show that the discrete time quantum walk on the Boolean hypercube of dimension $n$ has a strong dispersion property: if the walk is started in one vertex, then the probability of the walker being at any particular vertex after $O(n)$ steps is of order $O(1.4818^{-n})$. This improves over the known mixing results for this quantum walk, which show that the probability distribution after $O(n)$ steps is close to uniform but do not show that the probability is small for every vertex. A rigorous proof of this result involves an intricate argument about analytic properties of Bessel functions. | Strong dispersion property for the quantum walk on the hypercube
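For numerical intuition about this dispersion property, the following sketch simulates the standard Grover-coin discrete-time quantum walk on the hypercube, started at a single vertex with a symmetric coin state, and reports the largest single-vertex probability after roughly $n$ steps. This is only an illustrative simulation under those assumptions, not the paper's analytic Bessel-function argument.

```python
# Minimal numerical sketch: coined quantum walk on the n-dimensional hypercube
# with the Grover coin; reports the maximal single-vertex probability after n steps.
import numpy as np

def max_vertex_probability(n, steps):
    dim = 1 << n                                     # 2**n vertices
    psi = np.zeros((dim, n), dtype=complex)
    psi[0, :] = 1.0 / np.sqrt(n)                     # start at vertex 0, symmetric coin state
    grover = 2.0 / n * np.ones((n, n)) - np.eye(n)   # Grover coin operator
    for _ in range(steps):
        psi = psi @ grover.T                         # coin step (applied vertex by vertex)
        shifted = np.empty_like(psi)
        for d in range(n):                           # shift step: flip bit d of the vertex label
            flipped = np.arange(dim) ^ (1 << d)
            shifted[flipped, d] = psi[:, d]
        psi = shifted
    prob = np.sum(np.abs(psi) ** 2, axis=1)          # probability of each vertex
    return prob.max()

n = 10
print(max_vertex_probability(n, steps=n))            # scale to compare with 1.4818**(-n)
```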
The Reynolds-averaged Navier-Stokes (RANS) equations for steady-state assessment of incompressible turbulent flows remain the workhorse for practical computational fluid dynamics (CFD) applications. Consequently, improvements in speed or accuracy have the potential to affect a diverse range of applications. We introduce a machine learning framework for the surrogate modeling of steady-state turbulent eddy viscosities for RANS simulations, given the initial conditions. This modeling strategy is assessed for parametric interpolation, while numerically solving for the pressure and velocity equations to steady state, thus representing a framework that is hybridized with machine learning. We achieve competitive steady-state results with a significant reduction in solution time when compared to those obtained by the Spalart-Allmaras one-equation model. This is because the proposed methodology allows for considerably larger relaxation factors for the steady-state velocity and pressure solvers. Our assessments are made for a backward-facing step with considerable mesh anisotropy and separation to represent a practical CFD application. For test experiments with either varying inlet velocity conditions or step heights we see time-to-solution reductions around a factor of 5. The results represent an opportunity for the rapid exploration of parameter spaces that prove prohibitive when utilizing turbulence closure models with multiple coupled partial differential equations. Code is available publicly at https://github.com/argonne-lcf/TensorFlowFoam. | A turbulent eddy-viscosity surrogate modeling framework for Reynolds-Averaged Navier-Stokes simulations
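The surrogate idea itself can be sketched independently of the paper's TensorFlowFoam implementation. The snippet below is a minimal sketch, not the authors' code: it trains a small fully connected network to map flow parameters and cell coordinates to a steady-state eddy viscosity, which could then be queried instead of converging a one-equation turbulence model. The feature choice, the mock target field, and the network size are all assumptions made for illustration.

```python
# Hedged sketch of an eddy-viscosity surrogate: learn (inlet velocity, step height, x, y) -> nu_t
# from previously converged runs, then query the model for unseen parameters.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)

# Synthetic "training data"; in practice the targets would come from converged RANS solutions.
X = rng.uniform([1.0, 0.01, 0.0, 0.0], [10.0, 0.05, 1.0, 0.2], size=(5000, 4))
nu_t = 1e-3 * X[:, 0] * X[:, 1] * np.exp(-5.0 * X[:, 3])      # mock eddy-viscosity field

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(X, nu_t)

# Query the surrogate for an unseen (inlet velocity, step height) pair along a line of cells.
query = np.column_stack([np.full(50, 4.0), np.full(50, 0.03),
                         np.linspace(0.0, 1.0, 50), np.full(50, 0.05)])
print(model.predict(query)[:5])
```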
Aims: We probe the radiatively-efficient, hot wind feedback mode in two nearby luminous unobscured (type 1) AGN from the Close AGN Reference Survey (CARS), which show intriguing kpc-scale arc-like features of extended [OIII] ionized gas as mapped with VLT-MUSE. We aimed to detect hot gas bubbles that would indicate the existence of powerful, galaxy-scale outflows in our targets, HE 0227-0931 and HE 0351+0240, from deep (200 ks) Chandra observations. Methods: By measuring the spatial and spectral properties of the extended X-ray emission and comparing with the sub-kpc-scale IFU data, we are able to constrain feedback scenarios and directly test if the ionized gas is due to a shocked wind. Results: No extended hot gas emission on kpc-scales was detected. Unless the ambient medium density is low ($n_{H}\sim 1$ cm$^{-3}$ at 100 pc), the inferred upper limits on the extended X-ray luminosities are well below what is expected from theoretical models at matching AGN luminosities. Conclusions: We conclude that the highly-ionized gas structures on kpc scales are not inflated by a hot outflow in either target, and instead are likely caused by photo-ionization of pre-existing gas streams of different origins. Our non-detections suggest that extended X-ray emission from an AGN-driven wind is not universal, and may lead to conflicts with current theoretical predictions. | The Close AGN Reference Survey (CARS): No evidence of galaxy-scale hot outflows in two nearby AGN
For a simple model of price-responsive demand, we consider a deregulated electricity marketplace wherein the grid (ISO, retailer-distributor) accepts per-unit supply bids from generators (simplified herein to consider neither start-up/ramp-up expenses nor day-ahead or shorter-term load following), which are then averaged (by supply allocations via an economic dispatch) into a common "clearing" price borne by customers (irrespective of variations in transmission/distribution or generation prices), i.e., the ISO does not compensate generators based on their marginal costs. Rather, the ISO provides sufficient information for generators to sensibly adjust their bids. Notwithstanding our idealizations, the dispatch dynamics are complex. For a simple benchmark power system, we find a price-symmetric Nash equilibrium through numerical experiments. | Generation bidding game with flexible demand
In this paper, we study complete vacuum static spaces. A complete classification of 3-dimensional complete vacuum static spaces with non-negative scalar curvature and constant squared norm of the Ricci curvature tensor is given by making use of the generalized maximum principle. | $3$-dimensional complete vacuum static spaces
The $e^+ e^- \to K^0_{S}K^0_{L}$ cross section has been measured in the center-of-mass energy range 1004--1060 MeV at 25 energy points using $6.1 \times 10^5$ events with $K^0_{S}\to \pi^+\pi^-$ decay. The analysis is based on 5.9 pb$^{-1}$ of an integrated luminosity collected with the CMD-3 detector at the VEPP-2000 $e^+ e^-$ collider. To obtain $\phi(1020)$ meson parameters the measured cross section is approximated according to the Vector Meson Dominance model as a sum of the $\rho, \omega, \phi$-like amplitudes and their excitations. This is the most precise measurement of the $e^+ e^- \to K^0_{S}K^0_{L}$ cross section with a 1.8\% systematic uncertainty. | Study of the process $e^+ e^- \to K^0_{S}K^0_{L}$ in the center-of-mass energy range 1004--1060 MeV with the CMD-3 detector at the VEPP-2000 $e^+ e^-$ collider |
Tuning topological and magnetic properties of materials by applying an electric field is widely used in spintronics. In this work, we find a topological phase transition from topologically trivial to nontrivial states at an external electric field of about 0.1 V/A in the MnBi$_2$Te$_4$ monolayer, which is a topologically trivial ferromagnetic semiconductor. It is shown that when the electric field increases from 0 to 0.15 V/A, the magnetic anisotropy energy (MAE) increases from about 0.1 to 6.3 meV, and the Curie temperature Tc increases from 13 to about 61 K. The increased MAE mainly comes from the enhanced spin-orbit coupling due to the applied electric field. The enhanced Tc can be understood from the enhanced $p$-$d$ hybridization and decreased energy difference between $p$ orbitals of Te atoms and $d$ orbitals of Mn atoms. Moreover, we propose two novel Janus materials, MnBi$_2$Se$_2$Te$_2$ and MnBi$_2$S$_2$Te$_2$ monolayers, with different internal electric polarizations, which can realize the quantum anomalous Hall effect (QAHE) with Chern numbers $C$=1 and $C$=2, respectively. Our study not only exposes the electric-field-induced exotic properties of the MnBi$_2$Te$_4$ monolayer, but also proposes novel materials to realize QAHE in ferromagnetic Janus semiconductors with electric polarization. | Electric field induced topological phase transition and large enhancements of spin-orbit coupling and Curie temperature in two-dimensional ferromagnetic semiconductors
Several applications in astrophysics require adequately resolving many physical and temporal scales which vary over several orders of magnitude. Adaptive mesh refinement techniques address this problem effectively but often result in constrained strong scaling performance. The ParalleX execution model is an experimental execution model that aims to expose new forms of program parallelism and eliminate any global barriers present in a scaling-impaired application such as adaptive mesh refinement. We present two astrophysics applications using the ParalleX execution model: a tabulated equation of state component for neutron star evolutions and a cosmology model evolution. Performance and strong scaling results from both simulations are presented. The tabulated equation of state data are distributed with transparent access over the nodes of the cluster. This allows seamless overlapping of computation with the latencies introduced by the remote access to the table. Because of the expected size increases of the equation of state table, this type of table partitioning is essential for neutron star simulations, and the implementation is greatly simplified by ParalleX semantics. | Adaptive Mesh Refinement for Astrophysics Applications with ParalleX
The experimental results relevant for the understanding of the microscopic dynamics in liquid metals are reviewed, with special regard to those achieved in the last two decades. Inelastic Neutron Scattering has played a major role since the development of neutron facilities in the sixties. The last ten years, however, saw the development of third-generation radiation sources, which opened the possibility of performing Inelastic Scattering with X-rays, thus disclosing previously inaccessible energy-momentum regions. The purely coherent response of X-rays, moreover, combined with the mixed coherent/incoherent response typical of neutron scattering, provides enormous potentialities to disentangle aspects related to the collectivity of motion from the single particle dynamics. While the last twenty years saw major experimental developments, on the theoretical side fresh ideas emerged alongside the most traditional and established theories. Besides the raw experimental results, therefore, we review models and theoretical approaches for the description of microscopic dynamics over different length-scales, from the hydrodynamic region down to the single particle regime, walking the perilous and sometimes uncharted path of the generalized hydrodynamics extension. Approaches peculiar to conductive systems, based on the ionic plasma theory, are also considered, as well as kinetic and mode coupling theory applied to hard sphere systems, which turn out to mimic with remarkable detail the atomic dynamics of liquid metals. Finally, cutting-edge issues and open problems, such as the ultimate origin of the anomalous acoustic dispersion or the relevance of the transport properties of a conductive system in ruling the ionic dynamic structure factor, are discussed. | Microscopic dynamics in liquid metals: the experimental point of view
In this report we discuss the organization of different levels of nature and the corresponding space-time structures by considering a particular problem of time irreversibility. The fundamental time irreversibility problem consists in the following: how to reconcile the time-reversible microscopic dynamics and the irreversible macroscopic one. The recently proposed functional formulation of mechanics aims to solve this problem. The basic concept of this formulation is not a material point and a trajectory, as in the traditional formulation of mechanics, but a probability density function. Even if we deal with a single particle (not with an ensemble of particles), we describe its state as a probability density function. We justify this approach using measurement theory. A particular problem in the framework of the irreversibility problem is the derivation of the Boltzmann kinetic equation from the equations of microscopic dynamics. We propose a procedure for obtaining the Boltzmann equation from the Liouville equation based on the BBGKY hierarchy, the recently proposed functional formulation of classical mechanics, and the distinction between two scales of space-time, i.e., macro- and microscale. The notion of a space-time structure is introduced. It takes into account not only the space-time itself (i.e., a pseudo-Riemannian manifold), but also a characteristic length and time. The space-time structures form a hierarchy in the sense that the initial values for the processes on the microscopic space-time structure (interactions of the particles) are assigned from the processes on the macroscopic one (kinetic phenomena). | Hierarchy of space-time structures, Boltzmann equation, and functional mechanics
We present the results of determining the parameters of the spiral arms of the Galaxy using Gaia DR3 stars with absolute magnitude $M_G < 4$, which allow tracing spiral arms at large distances from the Sun. As tracers of spiral arms, we use the centroids of stellar spherical regions with a radius of 0.5 kpc, in which the deformation velocities along the coordinate axis R are insignificant. These kinematic tracers cover the Galactic plane within the Galactocentric coordinate ranges $140^{\circ} < \theta < 220^{\circ}$ and 4 kpc < R < 14 kpc. The numerical values of the pitch angles of the spirals and their Galactocentric distances to the point of intersection of the spiral with the Galactic center--Sun direction are in good agreement with the results of other authors. By extrapolating beyond the data we have, we present a schematic four-arm global pattern, consisting of the Scutum-Centaurus, Sagittarius-Carina, Perseus, and Norma-Outer arms, as well as the local Orion arm. The uncertainties of the determined spiral parameters confirm that the structures identified are not false, but are reliable from the statistical point of view. | Determining the parameters of the spiral arms of the Galaxy from kinematic tracers based on Gaia DR3 data
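As a concrete illustration of how a pitch angle can be extracted from such tracers, the sketch below fits a logarithmic spiral, $\ln R = \ln R_0 + \tan\psi\,(\theta-\theta_0)$, to synthetic arm points; the tracer positions and the assumed pitch angle are placeholders, not the Gaia DR3 measurements.

```python
# Illustrative pitch-angle fit for a log-periodic spiral arm (synthetic data).
import numpy as np

rng = np.random.default_rng(3)

theta = np.deg2rad(np.linspace(140.0, 220.0, 40))          # Galactocentric azimuths of mock tracers
psi_true = np.deg2rad(12.0)                                 # assumed pitch angle
R = 8.0 * np.exp(np.tan(psi_true) * (theta - theta[0]))     # kpc, log-spiral arm
R *= 1.0 + 0.02 * rng.normal(size=theta.size)               # add tracer scatter

slope, intercept = np.polyfit(theta, np.log(R), deg=1)      # ln R is linear in theta
print("fitted pitch angle [deg]:", np.degrees(np.arctan(slope)))
```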
Using 2-d U(1) lattice gauge theory we study two definitions of the topological charge constructed from a generalized Villain action and analyze the implementation of the index theorem based on the overlap Dirac operator. One of the two definitions expresses the topological charge as a sum of the Villain variables and treats charge conjugation symmetry exactly, making it particularly useful for studying related physics. Our numerical analysis establishes that for both topological charge definitions the index theorem becomes exact quickly towards the continuum limit. | Topology and index theorem with a generalized Villain lattice action -- a test in 2d |
The two-dimensional q-state Potts model is subjected to a Z_q symmetric disorder that allows for the existence of a Nishimori line. At q=2, this model coincides with the +/- J random-bond Ising model. For q>2, apart from the usual pure and zero-temperature fixed points, the ferro/paramagnetic phase boundary is controlled by two critical fixed points: a weak disorder point, whose universality class is that of the ferromagnetic bond-disordered Potts model, and a strong disorder point which generalizes the usual Nishimori point. We numerically study the case q=3, tracing out the phase diagram and precisely determining the critical exponents. The universality class of the Nishimori point is inconsistent with percolation on Potts clusters. | Phase diagram and critical exponents of a Potts gauge glass |
The transition quadrupole moments, $Q_{\rm t}$, of four weakly populated collective bands up to spin $\sim$ $65\hbar$ in $^{157,158}$Er have been measured to be ${\sim}11\,{\rm eb}$, demonstrating that these sequences are associated with large deformations. However, the data are inconsistent with calculated values from cranked Nilsson-Strutinsky calculations that predict the lowest energy triaxial shape to be associated with rotation about the short principal axis. The data appear to favor either a stable triaxial shape rotating about the intermediate axis or, alternatively, a triaxial shape with larger deformation rotating about the short axis. These new results challenge the present understanding of triaxiality in nuclei. | Quadrupole Moments of Collective Structures up to Spin $\sim$ $65\hbar$ in $^{157}$Er and $^{158}$Er: A Challenge for Understanding Triaxiality in Nuclei
We present griz light curves of 146 spectroscopically confirmed Type Ia Supernovae ($0.03 < z <0.65$) discovered during the first 1.5 years of the Pan-STARRS1 Medium Deep Survey. The Pan-STARRS1 natural photometric system is determined by a combination of on-site measurements of the instrument response function and observations of spectrophotometric standard stars. We find that the systematic uncertainties in the photometric system are currently 1.2\% without accounting for the uncertainty in the HST Calspec definition of the AB system. A Hubble diagram is constructed with a subset of 113 out of 146 SNe Ia that pass our light curve quality cuts. The cosmological fit to 310 SNe Ia (113 PS1 SNe Ia + 222 light curves from 197 low-z SNe Ia), using only SNe and assuming a constant dark energy equation of state and flatness, yields $w=-1.120^{+0.360}_{-0.206}\textrm{(Stat)} ^{+0.269}_{-0.291}\textrm{(Sys)}$. When combined with BAO+CMB(Planck)+$H_0$, the analysis yields $\Omega_{\rm M}=0.280^{+0.013}_{-0.012}$ and $w=-1.166^{+0.072}_{-0.069}$ including all identified systematics (see also Scolnic et al. 2014). The value of $w$ is inconsistent with the cosmological constant value of $-1$ at the 2.3$\sigma$ level. Tension endures after removing either the BAO or the $H_0$ constraint, though it is strongest when including the $H_0$ constraint. If we include WMAP9 CMB constraints instead of those from Planck, we find $w=-1.124^{+0.083}_{-0.065}$, which diminishes the discord to $<2\sigma$. We cannot conclude whether the tension with flat $\Lambda$CDM is a feature of dark energy, new physics, or a combination of chance and systematic errors. The full Pan-STARRS1 supernova sample with $\sim\!\!$3 times as many SNe should provide more conclusive results. | Cosmological Constraints from Measurements of Type Ia Supernovae discovered during the first 1.5 years of the Pan-STARRS1 Survey |
This paper presents 1.4-GHz radio continuum observations of 15 very extended radio galaxies. These sources are so large that most interferometers miss part of their structure and total flux density. Therefore, single-dish observations are required to fill in the central (u,v) gap of interferometric data and obtain reliable spectral index patterns across the structures, and thus also an integrated radio continuum spectrum. We have obtained such 1.4-GHz maps with the 100-m Effelsberg telescope and combined them with the corresponding maps available from the NVSS. The aggregated data allow us to produce high-quality images, which can be used to obtain physical parameters of the mapped sources. The combined images reveal, in many cases, extended low-surface-brightness cocoons. | 1.4-GHz observations of extended giant radio galaxies
Perturbative probability conservation provides a strong constraint on the presence of new interactions of the Higgs boson. In this work we consider CP violating Higgs interactions in conjunction with unitarity constraints in the gauge-Higgs and fermion-Higgs sectors. Injecting signal strength measurements of the recently discovered Higgs boson allows us to make concrete and correlated predictions of how CP-violation in the Higgs sector can be directly constrained through collider searches for either characteristic new states or tell-tale enhancements in multi-Higgs processes. | Perturbative Higgs CP violation, unitarity and phenomenology |
The structural and magnetic phase transitions of the ternary iron arsenides SrFe2As2 and EuFe2As2 were studied by temperature-dependent X-ray powder diffraction and 57-Fe Moessbauer spectroscopy. Both compounds crystallize in the tetragonal ThCr2Si2-type structure at room temperature and exhibit displacive structural transitions at 203 K (SrFe2As2) or 190 K (EuFe2As2) to orthorhombic lattice symmetry in agreement with the group-subgroup relationship between I4/mmm and Fmmm. 57-Fe Moessbauer spectroscopy experiments with SrFe2As2 show full hyperfine field splitting below the phase transition temperature (8.91(1) T at 4.2 K). Order parameters were extracted from detailed measurements of the lattice parameters and fitted to a simple power law. We find a relation between the critical exponents and the transition temperatures for AFe2As2 compounds, which shows that the transition of BaFe2As2 is indeed more continuous than the transition of SrFe2As2, but that it remains second order even in the latter case. | Structural and magnetic phase transitions in the ternary iron arsenides SrFe2As2 and EuFe2As2
Abundances and energy spectra of cosmic ray nuclei are being measured with high accuracy by the AMS experiment. These observations can provide tight constraints on the propagation models of galactic cosmic rays. In view of the release of these data, I present an evaluation of the model uncertainties associated with the cross-sections for secondary production of Li-Be-B nuclei in cosmic rays. I discuss the role of cross-section uncertainties in the calculation of the boron-to-carbon and beryllium-to-boron ratios, as well as their impact on the determination of the cosmic-ray transport parameters. | Fragmentation cross-sections and model uncertainties in Cosmic Ray propagation physics
We study the edge and surface theories of topological insulators from the perspective of anomalies and identify a novel Z2-anomaly associated with charge conservation. The anomaly is manifested through a 2-point correlation function involving creation and annihilation operators on two decoupled boundaries. Although charge conservation on each boundary requires this quantity to vanish, we find that it diverges. A corollary result is that under an insertion of a flux quantum the ground state evolves to an exactly orthogonal state independent of the rate at which the flux is inserted. The anomaly persists in the presence of disorder and imposes sharp restrictions on possible low energy theories. Being formulated in a many-body, field-theoretical language, the anomaly allows one to test the robustness of topological insulators to interactions in a concise way. | The Z2-anomaly and boundaries of topological insulators
The plasma equilibrium in a linear trap at $\beta\approx 1$ (or above the mirror-instability threshold) under the topology-conservation constraint evolves into a kind of diamagnetic "bubble". This can take two forms: either the plasma body greatly expands in radius while containing the same magnetic flux, or, if the plasma radius is limited, the plasma distribution across flux-tubes changes, so that the same cross-section contains a greatly reduced flux. If the magnetic field of the trap is quasi-uniform around its minimum, the bubble can be made roughly cylindrical, with radius much larger than the radius of the corresponding vacuum flux-tube, and with non-paraxial ends. Then the effective mirror ratio of the diamagnetic trap becomes very large, but the cross-field transport increases. The confinement time can be found from the solution of the system of equilibrium and transport equations and is shown to be $\tau_E\approx\sqrt{\tau_\parallel\tau_\perp}$. If the cross-field confinement is not too degraded by turbulence, this estimate in principle allows the construction of relatively compact fusion reactors with lengths in the range of a few tens of meters. In many ways the diamagnetic confinement described here and the corresponding reactor parameters are similar to those claimed for FRCs. | Diamagnetic "bubble" equilibria in linear traps
This paper is concerned with the strong convergence of the truncated Euler-Maruyama scheme for neutral stochastic differential delay equations driven by Brownian motion and by pure jumps, respectively. Under a local Lipschitz condition, convergence rates of the truncated EM scheme are given. | Convergence rates of truncated EM scheme for NSDDEs
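To convey the truncation idea without the full neutral-delay-with-jumps setting, here is a heavily simplified sketch for a scalar SDE with superlinear drift; the choice of drift, diffusion, and the step-size-dependent truncation level are assumptions made purely for illustration, not the paper's scheme for NSDDEs.

```python
# Hedged, simplified sketch of the truncated Euler-Maruyama idea for
# dX = (X - X**3) dt + X dW: coefficients are evaluated at a truncated state
# whose bound R grows as the step size shrinks.
import numpy as np

rng = np.random.default_rng(0)

def truncate(x, R):
    """Truncation map: project the state back into [-R, R]."""
    return np.clip(x, -R, R)

def truncated_em(x0, T, dt):
    R = dt ** (-0.25)                  # assumed truncation level for this sketch
    x = x0
    for _ in range(int(T / dt)):
        xt = truncate(x, R)
        dw = rng.normal(0.0, np.sqrt(dt))
        x = x + (xt - xt ** 3) * dt + xt * dw
    return x

print(truncated_em(x0=1.0, T=1.0, dt=1e-3))
```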
The microturbulent approximation of turbulent motions is widely used in radiative transfer calculations. Motivated mainly by its computational simplicity, it is probably in many cases an oversimplified treatment of the dynamical processes involved. This aspect is particularly important in the analysis of maser lines, since the strong amplification of radiation leads to a sensitive dependence of the radiation field on the overall velocity structure. To demonstrate the influence of large scale motions on the formation of maser lines we present a simple stochastic model which takes velocity correlations into account. For a quantitative analysis of correlation effects, we generate in a Monte Carlo simulation individual realizations of a turbulent velocity field along a line of sight. Depending on the size of the velocity correlation length we find huge deviations between the resulting random profiles with respect to line shape, intensity and position of single spectral components. Finally, we simulate the emission of extended maser sources. A qualitative comparison with observed masers associated with star forming regions shows that our model can reproduce the observed general spectral characteristics. We also briefly investigate how the spectra are affected when a systematic velocity field (simulating expansion) is superposed on the fluctuations. Our results convincingly demonstrate that hydrodynamical motions are of great importance for the understanding of cosmic masers. | Effects of correlated turbulent velocity fields on the formation of maser lines
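A single Monte Carlo realization of such a correlated line-of-sight velocity field can be drawn with a few lines of code. The sketch below assumes an exponential two-point correlation with correlation length L_corr (an AR(1)/Ornstein-Uhlenbeck recursion); the cell size, correlation length and turbulent velocity dispersion are illustrative values only, not the paper's model parameters.

```python
# One realization of a turbulent line-of-sight velocity field with
# <v(z) v(z')> = sigma**2 * exp(-|z - z'| / L_corr), built cell by cell.
import numpy as np

rng = np.random.default_rng(1)

def correlated_velocity_field(n_cells, dz, L_corr, sigma_turb):
    rho = np.exp(-dz / L_corr)                    # correlation between adjacent cells
    v = np.empty(n_cells)
    v[0] = rng.normal(0.0, sigma_turb)
    for i in range(1, n_cells):
        v[i] = rho * v[i - 1] + np.sqrt(1.0 - rho ** 2) * rng.normal(0.0, sigma_turb)
    return v

v = correlated_velocity_field(n_cells=500, dz=1.0, L_corr=50.0, sigma_turb=1.0)
print(v[:5])
```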
We introduce a class of models for multidimensional control problems which we call skip-free Markov decision processes on trees. We describe and analyse an algorithm applicable to Markov decision processes of this type that are skip-free in the negative direction. Starting with the finite average cost case, we show that the algorithm combines the advantages of both value iteration and policy iteration -- it is guaranteed to converge to an optimal policy and optimal value function after a finite number of iterations but the computational effort required for each iteration step is comparable with that for value iteration. We show that the algorithm can also be used to solve discounted cost models and continuous time models, and that a suitably modified algorithm can be used to solve communicating models. | Models and algorithms for skip-free Markov decision processes on trees |
We analyze the dynamics of finite width effects in wide but finite feature learning neural networks. Starting from a dynamical mean field theory description of infinite width deep neural network kernel and prediction dynamics, we provide a characterization of the $\mathcal{O}(1/\sqrt{\text{width}})$ fluctuations of the DMFT order parameters over random initializations of the network weights. Our results, while perturbative in width, unlike prior analyses, are non-perturbative in the strength of feature learning. In the lazy limit of network training, all kernels are random but static in time and the prediction variance has a universal form. However, in the rich, feature learning regime, the fluctuations of the kernels and predictions are dynamically coupled with a variance that can be computed self-consistently. In two layer networks, we show how feature learning can dynamically reduce the variance of the final tangent kernel and final network predictions. We also show how initialization variance can slow down online learning in wide but finite networks. In deeper networks, kernel variance can dramatically accumulate through subsequent layers at large feature learning strengths, but feature learning continues to improve the signal-to-noise ratio of the feature kernels. In discrete time, we demonstrate that large learning rate phenomena such as edge of stability effects can be well captured by infinite width dynamics and that initialization variance can decrease dynamically. For CNNs trained on CIFAR-10, we empirically find significant corrections to both the bias and variance of network dynamics due to finite width. | Dynamics of Finite Width Kernel and Prediction Fluctuations in Mean Field Neural Networks |
Multisensor track-to-track fusion for target tracking involves two primary operations: track association and estimation fusion. For estimation fusion, lossless measurement transformation of sensor measurements has been proposed for single target tracking. In this paper, we investigate track association, which is a fundamental and important problem for multitarget tracking. First, since the optimal track association problem is a multi-dimensional assignment (MDA) problem, we demonstrate that MDA-based data association (with and without prior track information) using linear transformations of track measurements is lossless, and is equivalent to that using raw track measurements. Second, recent superior scalability and performance of belief propagation (BP) algorithms enable new real-time applications of multitarget tracking with resource-limited devices. Thus, we present a BP-based multisensor track association method with transformed measurements and show that it is equivalent to that with raw measurements. Third, considering communication constraints, it is more beneficial for local sensors to send in compressed data. Two analytical lossless transformations for track association are provided, and it is shown that their communication requirements from each sensor to the fusion center are less than those of fusion with raw track measurements. Numerical examples for tracking an unknown number of targets verify that track association with transformed track measurements has the same performance as that with raw measurements and requires less communication bandwidth. | On Communication-Efficient Multisensor Track Association via Measurement Transformation (Extended Version)
In many regular cases, there exists a (properly defined) limit of iterations of a function in several real variables, and this limit satisfies the functional equation $(1-z)f(x)=f\big(f(xz)(1-z)/z\big)$; here $z$ is a scalar and $x$ is a vector. This is a special case of a well-known translation equation. In this paper we present a complete solution to this functional equation in case $f$ is a continuous function on a single point compactification of a 2-dimensional real vector space. It appears that, up to conjugation by a homogeneous continuous function, there are exactly four solutions. Further, in a 1-dimensional case we present a solution with no regularity assumptions on $f$. | Multi-variable translation equation which arises from homothety
We study the relation between neutron removal cross section ($\sigma_{-N}$) and neutron skin thickness for finite neutron rich nuclei using the statistical abrasion ablation (SAA) model. Different sizes of neutron skin are obtained by adjusting the diffuseness parameter of neutrons in the Fermi distribution. It is demonstrated that there is a good linear correlation between $\sigma_{-N}$ and the neutron skin thickness for neutron rich nuclei. Further analysis suggests that the relative increase of neutron removal cross section could be used as a quantitative measure for the neutron skin thickness in neutron rich nuclei. | Neutron removal cross section as a measure of neutron skin |
The spin of a single electron confined in a semiconductor quantum dot is a natural qubit candidate. Fundamental building blocks of spin-based quantum computing have been demonstrated in double quantum dots with significant spin-orbit coupling. Here, we show that spin-orbit-coupled double quantum dots can be categorised in six classes, according to a partitioning of the multi-dimensional space of their $g$-tensors. The class determines physical characteristics of the double dot, i.e., features in transport, spectroscopy and coherence measurements, as well as qubit control, shuttling, and readout experiments. In particular, we predict that the spin physics is highly simplified due to pseudospin conservation, whenever the external magnetic field is pointing to special directions (`magic directions'), where the number of special directions is determined by the class. We also analyze the existence and relevance of magic loops in the space of magnetic-field directions, corresponding to equal local Zeeman splittings. These results present an important step toward precise interpretation and efficient design of spin-based quantum computing experiments in materials with strong spin-orbit coupling. | Classification and magic magnetic-field directions for spin-orbit-coupled double quantum dots |
We study several properties of blazars detected in the gamma-ray energy range by comparing the EGRET sources with a sample of radio blazars which can be considered possible gamma-ray candidates. We define three classes: non-gamma-ray blazars, blazars with quasi-steady gamma-ray emission, and gamma-ray blazars with substantial activity level. By combining the information of detected and candidate AGNs, we characterise the blazar activity, including the discovery of a region of consistency between the gamma-ray flaring duty-cycle and the recurrence time between flares. We also find a possible relation between the activity index of FSRQs and their black hole mass. | The Duty-cycle of Gamma-ray Blazars: a New Approach, New Results |
In this paper we prove transference inequalities for regular and uniform Diophantine exponents in the weighted setting. Our results generalize the corresponding inequalities that exist in the `non-weighted' case. | Transference theorems for Diophantine approximation with weights |
The CP-violating asymmetry in $B_s$ mixing ($\beta_s$) is one of the most promising measurements where physics beyond the Standard Model could be revealed. As such, analyses need to be subjected to great scrutiny. The mode $B_s \to J/\psi\phi$ has been used, and the mode $B_s \to \phi\phi$ proposed for future measurements. These modes both have two vector particles in the final state, and thus angular analyses must be used to disentangle the contributions from CP+ and CP- configurations. The angular distributions, however, could be distorted by the presence of S-waves masquerading as low-mass $K^+K^-$ pairs, which could result in erroneous values of $\beta_s$. The S-waves could well be the result of a final state formed from an $s\bar{s}$ pair in a $0^+$ spin-parity state, such as the $f_0(980)$ meson. Data-driven and theoretical estimates of the $B_s$ decay rate into the CP+ final state $J/\psi f_0(980)$, with $f_0 \to \pi^+\pi^-$, are given. The S-wave contribution in $J/\psi\phi$ should be taken into account when determining $\beta_s$ by including a $K^+K^-$ S-wave amplitude in the fit. This may change the central value of current results and will also increase the statistical uncertainty. Importantly, the $J/\psi f_0(980)$ mode has been suggested as an alternative channel for measuring $\beta_s$. | S-waves and the extraction of beta_s
In this work we explore the problem of answering a set of sum queries under Differential Privacy. This is a little-understood, non-trivial problem, especially in the case of numerical domains. We show that traditional techniques from the literature are not always the best choice and that a more rigorous approach is necessary to develop low-error algorithms. | Answering Summation Queries for Numerical Attributes under Differential Privacy
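As a point of reference for what the "traditional technique" looks like, the sketch below answers a single sum query over a bounded numerical attribute with the classical Laplace mechanism; it is only the textbook baseline under the stated bounds, not the paper's proposed algorithm.

```python
# Classical Laplace mechanism for one sum query over values assumed to lie in [lower, upper].
import numpy as np

rng = np.random.default_rng(7)

def dp_sum(values, lower, upper, epsilon):
    clipped = np.clip(values, lower, upper)
    sensitivity = max(abs(lower), abs(upper))   # adding/removing one record changes the sum by at most this
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return clipped.sum() + noise

ages = np.array([23, 35, 41, 29, 57, 62])
print(dp_sum(ages, lower=0, upper=100, epsilon=0.5))
```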
Right-handed (RH) Majorana neutrinos play a crucial role in understanding the origin of neutrino mass, the nature of dark matter and the mechanism of matter-antimatter asymmetry. In this work, we investigate the observability of a heavy Majorana neutrino through the top quark neutrinoless double beta decay process $t \to b \ell^+ \ell^+ j j$ at hadron colliders. By performing a detector-level simulation, we demonstrate that our method can give stronger limits on the light-heavy neutrino mixing parameters $|V_{eN, \mu N}|$ in the mass range of 15 GeV $< m_N <$ 80 GeV than other existing collider bounds. | Top quark as a probe of heavy Majorana neutrino at the LHC and future collider
An ansatz is proposed for heptagon relation, that is, algebraic imitation of five-dimensional Pachner move 4--3. Our relation is realized in terms of matrices acting in a direct sum of one-dimensional linear spaces corresponding to 4-faces. | Heptagon relation in a direct sum |
We study the origin of the stellar $\alpha$-element-to-iron abundance ratio, $[\alpha/\mathrm{Fe}]_{\ast}$, of present-day central galaxies, using cosmological, hydrodynamical simulations from the Evolution and Assembly of GaLaxies and their Environments (EAGLE) project. For galaxies with stellar masses of $M_{\ast} > 10^{10.5}$ M$_{\odot}$, $[\alpha/\mathrm{Fe}]_{\ast}$ increases with increasing galaxy stellar mass and age. These trends are in good agreement with observations of early-type galaxies, and are consistent with a `downsizing' galaxy formation scenario: more massive galaxies have formed the bulk of their stars earlier and more rapidly, hence from an interstellar medium that was mostly $\alpha$-enriched by massive stars. In the absence of feedback from active galactic nuclei (AGN), however, $[\alpha/\mathrm{Fe}]_{\ast}$ in $M_{\ast} > 10^{10.5}$ M$_{\odot}$ galaxies is roughly constant with stellar mass and decreases with mean stellar age, extending the trends found for lower-mass galaxies in both simulations with and without AGN. We conclude that AGN feedback can account for the $\alpha$-enhancement of massive galaxies, as it suppresses their star formation, quenching more massive galaxies at earlier times, thereby preventing the iron from longer-lived intermediate-mass stars (supernova Type Ia) from being incorporated into younger stars. | The origin of the $\alpha$-enhancement of massive galaxies |