The understanding of the behaviour of systems of identical composite bosons has progressed significantly in connection with the analysis of the entanglement between constituents and the development of coboson theory. The basis of these treatments is a coboson ansatz for the ground state of a system of N pairs, stating that in appropriate limits this state is well approximated by incorporating Pauli exclusion into what would otherwise be the product state of N independent pairs, each described by the single-pair ground state. In this work we study the validity of this ansatz for particularly simple problems, and show that short-ranged attractive interactions in very dilute limits and a single-pair ground state with very large entanglement are not enough to render the ansatz valid. On the contrary, we find that the dimensionality of the problem plays a crucial role in the behaviour of the many-body ground state.
quantum physics
The vast majority of supermassive black holes (SMBHs) in the local universe exhibit levels of activity much lower than those expected from gas supplying rates onto the galactic nuclei, and only a small fraction of silent SMBHs can turn into active galactic nuclei. Revisiting observational data of very nearby SMBHs whose gravitational spheres of influence are spatially reached by the Chandra X-ray satellite, we find that the level of BH activity drastically increases from the quiescent phase when the inflow rate outside of the BH influence radius is higher than 0.1% of the Eddington accretion rate. We also show that the relation between the nuclear luminosity and gas accretion rate from the BH influence radius measured from X-ray observations is well described by the universal state transition of accreting SMBHs, as predicted by recent hydrodynamical simulations with radiative cooling and BH feedback. After the state transition, young massive stars should form naturally in the nucleus, as observed in the case of the nearest SMBH, Sagittarius A$^\ast$, which is currently quiescent but was recently active.
astrophysics
We extend the GENEVA Monte Carlo framework using the transverse momentum of a colour-singlet system as the resolution variable. This allows us to use next-to-next-to-next-to leading logarithm (N$^3$LL) resummation via the RadISH formalism to obtain precise predictions for any colour-singlet production process at the fully exclusive level. Thanks to the implementation of two different resolution variables within the GENEVA framework, we are able to assess the impact of such a choice on differential observables for the first time. As a first application we present predictions for Drell-Yan lepton pair production at next-to-next-to-leading order (NNLO) in QCD interfaced to a parton shower simulation that includes additional all-order radiative corrections. We provide fully showered and hadronised events using PYTHIA 8, while retaining the NNLO QCD accuracy for observables which are inclusive over the additional radiation. We compare our final predictions to LHC data at 13 TeV, finding good agreement across several distributions.
high energy physics phenomenology
The so-called aromatic infrared bands (AIBs) are attributed to emission of polycyclic aromatic hydrocarbons (PAHs). The observed variations toward different regions in space are believed to be caused by contributions of different classes of PAH molecules, i.e. with respect to their size, structure, and charge state. Laboratory spectra of members of these classes are needed to compare them to observations and to benchmark quantum-chemically computed spectra of these species. In this paper we present the experimental infrared spectra of three different PAH dications, naphthalene$^{2+}$, anthracene$^{2+}$, and phenanthrene$^{2+}$, in the vibrational fingerprint region 500-1700~cm$^{-1}$. The dications were produced by electron impact ionization of the vapors with 70 eV electrons, and they remained stable against dissociation and Coulomb explosion. The vibrational spectra were obtained by IR predissociation of the PAH$^{2+}$ complexed with neon in a 22-pole cryogenic ion trap setup coupled to a free-electron infrared laser at the Free-Electron Lasers for Infrared eXperiments (FELIX) Laboratory. We performed anharmonic density-functional theory calculations for both singly and doubly charged states of the three molecules. The experimental band positions showed excellent agreement with the calculated band positions of the singlet electronic ground state for all three doubly charged species, indicating its higher stability over the triplet state. The presence of several strong combination bands and additional weaker features in the recorded spectra, especially in the 10-15~$\mu$m region of the mid-IR spectrum, required anharmonic calculations to understand their effects on the total integrated intensity for the different charge states. These measurements, in tandem with theoretical calculations, will help in the identification of this specific class of doubly-charged PAHs as carriers of AIBs.
astrophysics
The Digital Subscriber Line (DSL) remains an important component of heterogeneous networking, especially in historic city-centers, where using optical fibre is less realistic. Recently, power consumption has become an important performance metric in telecommunication due to the associated environmental issues. In the recent bonding model, customer sites have been equipped with two/four copper pairs, which may be exploited for designing grouped spatial modulation (SM) aiming to reduce the power consumption and mitigate the stubborn crosstalk in DSL communications. Explicitly, we view the two copper pairs equipped for each user as a group and propose an energy-efficient transmission scheme based on a grouped SM strategy for upstream DSL systems, which is capable of reducing the power consumption of the upstream transmitters by activating a single copper line of each user. More specifically, in order to compensate for the potential bit-rate reduction imposed by reducing the number of activated lines, the proposed scheme implicitly delivers ``virtual bits" via activating/deactivating the lines in addition to the classic modulation scheme. This is particularly beneficial in the DSL context, because the cross-talk imposed by activating several lines may swamp the desired signal. Furthermore, a pair of near-optimal soft turbo detection schemes are proposed for exploiting the unique properties of the DSL channel in order to eliminate the error propagation problem of SM detection routinely encountered in wireless channels. Both the attainable energy efficiency and the achievable Bit Error Ratio (BER) are investigated. Our simulation results demonstrate that the proposed group-based SM is capable of outperforming the vectoring scheme in terms of its energy efficiency for all the examined loop lengths and transmit powers.
electrical engineering and systems science
A procedure for solving the Maxwell equations in vacuum, under the additional requirement that both scalar invariants are equal to zero, is presented. Such a field is usually called a null electromagnetic field. Based on the complex Euler potentials that appear as arbitrary functions in the general solution, a vector potential for the null electromagnetic field is defined. This potential is called the natural vector potential of the null electromagnetic field. An attempt is made to make the most of knowing the general solution. The properties of the field and the potential are studied without fixing a specific family of solutions. An equality, which is similar to the Dirac gauge condition, is found to hold for both the null field and the Li\'enard-Wiechert field. It turns out that the natural potential is a substantially complex vector, which is equivalent to two real potentials. A modification of the coupling term in the Dirac equation is proposed that makes the equation work with both real potentials. A solution that corresponds to Volkov's solution for a Dirac particle in a linearly polarized plane electromagnetic wave is found. The solution found is directly compared to Volkov's solution under the same conditions.
physics
We derive and analyze a generic, recursive algorithm for estimating all splits in a finite cluster tree as well as the corresponding clusters. We further investigate statistical properties of this generic clustering algorithm when it receives level set estimates from a kernel density estimator. In particular, we derive finite sample guarantees, consistency, rates of convergence, and an adaptive data-driven strategy for choosing the kernel bandwidth. For these results we do not need continuity assumptions on the density such as H\"{o}lder continuity, but only require intuitive geometric assumptions of non-parametric nature.
statistics
In this article we describe the basic principles of Rydberg atom-based RF sensing and present the development of atomic pulsed RF detection and RF phase sensing, establishing capabilities pertinent to applications in communications and sensing. To date, advances in Rydberg atom-based RF field sensors have been rooted in a method in which the fundamental physical quantity being detected and measured is the electric field amplitude, $E$, of the incident RF electromagnetic wave. The first part of this paper is focused on using atom-based $E$-field measurement for RF field-sensing and communications applications. With established phase-sensitive technologies, such as synthetic aperture radar (SAR), as well as emerging trends in phased-array antennas in 5G, a method is desired that allows robust, optical retrieval of the RF phase using an enhanced atom-based field sensor. In the second part of this paper we describe our fundamentally new atomic RF sensor and measurement method for the phase of the RF electromagnetic wave, which affords all the performance advantages exhibited by the atomic sensor. The presented phase-sensitive RF field detection capability opens atomic RF sensor technology to a wide array of application areas, including phase-modulated signal communication systems, radar, and field amplitude and phase mapping for near-field/far-field antenna characterizations.
quantum physics
We study the entanglement and the Bell nonlocality of a coupled two-qubit system, in which each qubit is coupled with one individual environment. We study how nonequilibrium environments (with different temperatures or chemical potentials) influence the entanglement and the Bell nonlocality. The nonequilibrium environments can have constructive effects on the entanglement and the Bell nonlocality. The nonequilibrium thermodynamic cost can sustain the thermal energy or particle current and enhance the entanglement and the Bell nonlocality. However, the nonequilibrium conditions (characterized by the temperature differences or by the thermodynamic cost quantified by the entropy production rates) which give the maximal violation of the Bell inequalities are different from those which give the maximal entanglement. When a Bell inequality has asymmetric observables (between Alice and Bob), for example the $I_{3322}$ inequality, such asymmetry is also reflected in the effects of the nonequilibrium environments. The spatially asymmetric two-qubit system coupled with nonequilibrium bosonic environments shows a thermal rectification effect, which can be witnessed by the Bell nonlocality. Different spatial asymmetry factors can linearly cancel each other in the thermal rectification effect, which is also reflected in the changes of the entanglement and the Bell nonlocality. Our study demonstrates that nonequilibrium environments are valuable resources for both the entanglement and the Bell nonlocality, though the optimal nonequilibrium conditions for each differ.
quantum physics
Reversible plastic events allow amorphous solids to retain memory of past deformations while irreversible plastic events are responsible for structural evolution and are associated with yielding. Here we use numerical simulations to extract the topology of networks of plastic transitions of amorphous solids and show that the strongly connected components (SCCs) of these networks can be used to distinguish reversible from irreversible events. This approach allows us to predict the memory capacity of amorphous solids and provides a framework for understanding the irreversibility transition under oscillatory shear. Our findings should be useful for understanding better recent experiments on memory retention and yielding in granular media and amorphous solids.
condensed matter
We show that the following group constructions preserve the semilinearity of the solution sets for knapsack equations (equations of the form $g_1^{x_1} \cdots g_k^{x_k} = g$ in a group $G$, where the variables $x_1, \ldots, x_k$ take values in the natural numbers): graph products, amalgamated free products with finite amalgamated subgroups, HNN-extensions with finite associated subgroups, and finite extensions. Moreover, we study the dependence of the so-called magnitude of the solution set of a knapsack equation (the magnitude is a complexity measure for semilinear sets) on the length of the knapsack equation (measured in number of generators). We investigate how this dependence changes under the above group operations.
mathematics
We study the effect of reaction times on the kinetics of relaxation to stationary states and on congestion transitions in heterogeneous traffic. Heterogeneity is modeled as quenched disorder in the parameters of the car-following model and in the reaction times of the drivers. We observe that at low densities, the relaxation to the stationary state from a homogeneous initial state is governed by the same power laws as derived by E. Ben-Naim et al., Kinetics of clustering in traffic flow, Phys. Rev. E 50, 822 (1994). The stationary state, at low densities, is a single giant platoon of vehicles with the slowest vehicle as the leader. We observe the formation of spontaneous jams inside the giant platoon, which move upstream as stop-go waves and dissipate at its tail. The transition happens when the head of the giant platoon interacts with its tail: stable stop-go waves form, which circulate in the ring without dissipating. We observe that the system behaves differently when the transition density is approached from above than it does when approached from below. When the transition density is approached from below, the gap distribution behind the leader has a double peak and is fat-tailed but has bounded support, and thus the maximum gap in the system and the variance of the gap distribution tend to size-independent values. When the transition density is approached from above, the gap distribution becomes a power law and, consequently, the maximum gap in the system and the variance in the gaps diverge as a power law, thereby creating a discontinuity at the transition. Thus, we observe a phase transition of an unusual kind in which both a discontinuity and a power law are observed at the transition density. These unusual features vanish in the absence of reaction time (e.g., automated driving).
physics
We study the negative modes of gravitational instantons representing vacuum decay in asymptotically flat space-time. We consider two different vacuum decay scenarios: the Coleman-de Luccia $\mathrm{O}(4)$-symmetric bubble, and $\mathrm{O}(3) \times \mathbb{R}$ instantons with a static black hole. In spite of the similarities between the models, we find qualitatively different behaviours. In the $\mathrm{O}(4)$-symmetric case, the number of negative modes is known to be either one or infinite, depending on the sign of the kinetic term in the quadratic action. In contrast, solving the mode equation numerically for the static black hole instanton, we find only one negative mode with the kinetic term always positive outside the event horizon. The absence of additional negative modes supports the interpretation of these solutions as giving the tunnelling rate for false vacuum decay seeded by microscopic black holes.
high energy physics theory
Recently, a vector charmonium-like state $Y(4626)$ was observed in the $D^+_sD_{s1}(2536)^-$ channel. This has triggered an active discussion of the structure of the resonance, since determining its inner constituents is of obvious significance for gaining a better understanding of its hadronic structure. It also concerns the general theoretical framework for possible structures of exotic states. Since the mass of $Y(4626)$ is slightly above the production threshold of $D^+_s\bar D_{s1}(2536)^-$ whereas below that of $D^*_s\bar D_{s1}(2536)$ with the same quark contents as $D^+_s\bar D_{s1}(2536)^-$, it is natural to conjecture $Y(4626)$ to be a molecular state of $D^{*}_s\bar D_{s1}(2536)$, as suggested in the literature. Confirming or negating this allegation would shed light on the question we are concerned with. We calculate the mass spectrum of a system composed of a vector meson and an axial vector meson, i.e. $D^*_s\bar D_{s1}(2536)$, within the framework of the Bethe-Salpeter equations. Our numerical results show that the dimensionless parameter $\lambda$ in the form factor, which is phenomenologically introduced at every vertex, is far beyond the reasonable range required to induce even a very small binding energy $\Delta E$. This implies that the $D^*_s\bar D_{s1}(2536)$ system cannot exist in nature as a hadronic molecule in this model, so we may not interpret the resonance $Y(4626)$ as a bound state of $D^*_s\bar D_{s1}(2536)$, but rather as something else, for example a tetraquark.
high energy physics phenomenology
It is well known that in a small P\'olya urn, i.e., an urn where the second-largest real part of an eigenvalue is at most half the largest eigenvalue, the distribution of the numbers of balls of different colours in the urn is asymptotically normal under weak additional conditions. We consider the balanced case, and then give asymptotics of the mean and the covariance matrix, showing that after appropriate normalization, they converge to the mean and variance of the limiting normal distribution.
mathematics
In this paper, we consider mixtures of multinomial logistic models (MNL), which are known to $\epsilon$-approximate any random utility model. Despite their long history and broad use, rigorous results are only available for learning a uniform mixture of two MNLs. Continuing this line of research, we study the problem of learning an arbitrary mixture of two MNLs. We show that the identifiability of the mixture models may only fail on an algebraic variety of negligible measure. This is done by reducing the problem of learning a mixture of two MNLs to the problem of solving a system of univariate quartic equations. We also devise an algorithm to learn any mixture of two MNLs using a polynomial number of samples and a linear number of queries, provided that a mixture of two MNLs over some finite universe is identifiable. Several numerical experiments and conjectures are also presented.
statistics
We study little string theory (LST) compactified on $\mathbf{T}^2$, partially breaking supersymmetry by a discrete T-duality twist acting on both the K\"ahler and the complex structure of the torus. This setup gives rise to 4d $\mathcal{N}=3$ models and can be performed in both the type IIA and type IIB LSTs. We comment on the relation to other constructions proposed in the literature.
high energy physics theory
We perform a comprehensive analysis of the secluded UMSSM model, consistent with present experimental constraints. We find that in this model the additional $Z^\prime$ gauge boson can be leptophobic without resorting to gauge kinetic mixing and, consequently, also $d$-quark-phobic, thus lowering the LHC bounds on its mass. The model can accommodate very light singlinos as DM candidates, consistent with present-day cosmological and collider constraints. Light charginos and neutralinos are responsible for muon anomalous magnetic moment predictions within 1$\sigma$ of the measured experimental value. Finally, we look at the possibility that a lighter $Z^\prime$, expected to decay mainly into chargino pairs followed by decay into lepton pairs, could be observed at 27 TeV.
high energy physics phenomenology
Filament eruptions occurring at different places within a relatively short time interval, but with a certain physical causal connection, are usually known as sympathetic eruptions. Studies of sympathetic eruptions are not uncommon. However, in existing reports, the causal links between sympathetic eruptions remain rather speculative. In this work, we present detailed observations of a sympathetic filament eruption event in which an identifiable causal link between two eruptive filaments is observed. On 2015 November 15, two filaments (F1 in the north and F2 in the south) were located in the southwestern quadrant of the solar disk. Their main axes were almost parallel to each other. Around 22:20 UT, F1 began to erupt, forming two flare ribbons. The southwestern ribbon apparently moved toward the southwest and intruded into the southeastern part of F2. This continuous intrusion caused F2's eventual eruption. Accompanying the eruption of F2, flare ribbons and post-flare loops appeared in the northwestern region of F2. Meanwhile, neither flare ribbons nor post-flare loops could be observed in the southeastern area of F2. In addition, nonlinear force-free field (NLFFF) extrapolations show that the magnetic fields above F2 in the southeastern region are much weaker than those in the northwestern region. These results imply that the overlying magnetic fields of F2 were not uniform. We therefore propose that the southwestern ribbon formed by eruptive F1 invaded F2 from its southeastern region, where the overlying magnetic fields are relatively weaker than in its northwestern region, disturbing F2 and eventually leading it to erupt.
astrophysics
The first in situ quantitative synchrotron X-ray diffraction (XRD) study of plastic strain-induced phase transformation (PT) has been performed on the $\alpha-\omega$ PT in ultra-pure, strongly plastically predeformed Zr as an example, under different compression-shear pathways in a rotational diamond anvil cell (RDAC). Radial distributions of pressure in each phase and in the mixture, and the concentration of $\omega$-Zr, all averaged over the sample thickness, as well as the thickness profile, were measured. The minimum pressure for the strain-induced $\alpha-\omega$ PT, $p^d_{\varepsilon}$=1.2 GPa, is smaller than under hydrostatic loading by a factor of 4.5 and smaller than the phase equilibrium pressure by a factor of 3; it is independent of the compression-shear straining path. The theoretically predicted plastic strain-controlled kinetic equation was verified and quantified; it is independent of the pressure-plastic strain loading path and of plastic deformation at pressures below $p^d_{\varepsilon}$. Thus, strain-induced PTs under compression in a DAC and torsion in an RDAC do not fundamentally differ. The yield strength of both phases is estimated using hardness and X-ray peak broadening; the yield strength in shear is not reached by the contact friction stress and cannot be evaluated using the pressure gradient. The obtained results open a new opportunity for quantitative study of strain-induced PTs and reactions, with applications to material synthesis and processing, mechanochemistry, and geophysics.
condensed matter
The difference between the quark orbital angular momentum (OAM) defined in light-cone gauge (Jaffe-Manohar) and that defined using a local, manifestly gauge-invariant operator (Ji) is interpreted in terms of the change in quark OAM as the quark leaves the target in a DIS experiment.
high energy physics phenomenology
We use nearly 20 years of photometry obtained by the OGLE survey to measure the occurrence rate of wide-orbit (or ice giant) microlensing planets, i.e., with separations from ~5 AU to ~15 AU and mass-ratios from $10^{-4}$ to 0.033. In a sample of 3112 events we find six previously known wide-orbit planets and a new microlensing planet or brown dwarf, OGLE-2017-BLG-0114Lb, for which both close and wide orbits are possible and the close orbit is preferred. We run extensive simulations of the planet detection efficiency, robustly taking into account finite-source effects. We find that the extrapolation of the previously measured rate of microlensing planets significantly underpredicts the number of wide-orbit planets. On average, every microlensing star hosts $1.4^{+0.9}_{-0.6}$ ice giant planets.
astrophysics
Recently, the characterization-based approach to the construction of goodness-of-fit tests has become popular. Most of the proposed tests have been designed for complete i.i.d. samples. Here we present an adaptation of recently proposed exponentiality tests based on equidistribution-type characterizations to the case of randomly censored data. Their asymptotic properties are provided. In addition, we present the results of a wide empirical power study, including the powers of several recent competitors. This study can be used as a benchmark for future tests proposed for this kind of data.
statistics
This paper addresses the extraction of multiple F0 values from polyphonic and a cappella vocal performances using convolutional neural networks (CNNs). We address the major challenges of ensemble singing, i.e., all melodic sources are vocals and singers sing in harmony. We build upon an existing architecture to produce a pitch salience function of the input signal, where the harmonic constant-Q transform (HCQT) and its associated phase differentials are used as an input representation. The pitch salience function is subsequently thresholded to obtain a multiple F0 estimation output. For training, we build a dataset that comprises several multi-track datasets of vocal quartets with F0 annotations. This work proposes and evaluates a set of CNNs for this task in diverse scenarios and data configurations, including recordings with additional reverb. Our models outperform a state-of-the-art method intended for the same music genre when evaluated with an increased F0 resolution, as well as a general-purpose method for multi-F0 estimation. We conclude with a discussion on future research directions.
electrical engineering and systems science
We analyze the Gaussian and chiral supereigenvalue models in the Neveu-Schwarz sector. We show that their partition functions can be expressed as infinite sums of homogeneous operators acting on elementary functions. Although the usual W-representations of these matrix models cannot be provided here, we can still derive compact expressions for the correlators in these two supereigenvalue models. Furthermore, the non-Gaussian (chiral) cases are also discussed.
high energy physics theory
To gain insight into the peculiar temperature dependence of the thermoelectric material SnSe, we employ many-body perturbation theory and explore the influence of the electron-phonon interaction on its electronic and transport properties. We show that a lattice dynamics characterized by soft highly-polar phonons induces a large thermal enhancement of the Fr\"ohlich interaction. We account for these phenomena in ab-initio calculations of the photoemission spectrum and electrical conductivity at finite temperature, unraveling the mechanisms behind recent experimental data. Our results reveal a complex interplay between lattice thermal expansion and Fr\"ohlich coupling, providing a new rationale for the in-silico prediction of transport coefficients of high-performance thermoelectrics.
condensed matter
In this pedagogical article, we explore a powerful language for describing the notion of spacetime, and of particle dynamics in it, intrinsic to a given fundamental physical theory, focusing on special relativity and its Newtonian limit. The starting point of the formulation is the representations of the relativity symmetries. Moreover, via the notion of symmetry contractions, this formulation furnishes a natural way to understand how the Newtonian theory arises as an approximation to Einstein's theory. We begin with the Poincar\'{e} symmetry underlying special relativity and the nature of Minkowski spacetime as a coset representation space of the algebra and the group. Then, we proceed to the parallel for the phase space of a particle, for which we present the full scheme of its dynamics under the Hamiltonian formulation, illustrating that dynamics as essentially a symmetry feature of the phase space geometry. Lastly, the reduction of all of this to the Newtonian theory, as an approximation with its own space-time, phase space, and dynamics under the appropriate relativity symmetry contraction, is presented.
physics
We present a simple, accessible, yet rigorous outreach/educational program focused on quantum information science and technology for high-school and early undergraduate students. This program allows students to perform meaningful hands-on calculations with quantum circuits and algorithms, without requiring knowledge of advanced mathematics. A combination of pen-and-paper exercises and IBM Q simulations helps students understand the structure of quantum gates and circuits, as well as the principles of superposition, entanglement, and measurement in quantum mechanics.
physics
We derive a new variational formula for the R{\'e}nyi family of divergences, $R_\alpha(Q\|P)$, between probability measures $Q$ and $P$. Our result generalizes the classical Donsker-Varadhan variational formula for the Kullback-Leibler divergence. We further show that this R{\'e}nyi variational formula holds over a range of function spaces; this leads to a formula for the optimizer under very weak assumptions and is also key in our development of a consistency theory for R{\'e}nyi divergence estimators. By applying this theory to neural-network estimators, we show that if a neural network family satisfies one of several strengthened versions of the universal approximation property then the corresponding R{\'e}nyi divergence estimator is consistent. In contrast to density-estimator based methods, our estimators involve only expectations under $Q$ and $P$ and hence are more effective in high dimensional systems. We illustrate this via several numerical examples of neural network estimation in systems of up to 5000 dimensions.
statistics
Social Robotics poses tough challenges to software designers, who are required to take care of difficult architectural drivers such as the acceptability of and trust in robots, as well as to guarantee that robots establish a personalised interaction with their users. Moreover, recurrent software design issues such as ensuring interoperability and improving the reusability and customizability of software components also arise in this context. Designing and implementing social robotic software architectures is a time-intensive activity requiring multi-disciplinary expertise: this makes it difficult to rapidly develop, customise, and personalise robotic solutions. These challenges may be mitigated at design time by choosing certain architectural styles, implementing specific architectural patterns, and using particular technologies. Leveraging our experience in the MARIO project, in this paper we propose a series of principles that social robots may benefit from. These principles also lay the foundations for the design of a reference software architecture for Social Robots. The ultimate goal of this work is to establish a common ground, based on a reference software architecture, that allows robotic software components to be easily reused in order to rapidly develop, implement, and personalise Social Robots.
computer science
In this paper, we use a corrected $f(R)$ gravitational model, a polynomial function with a logarithmic term. We employ the slow-roll condition and obtain a number of cosmological parameters. This helps us to verify the swampland conjectures, which guarantee the validity of low-energy quantum field theory. The results obtained show that the corresponding model is consistent with the swampland conjectures. The upper and lower limits of the parameter $n$ are 0.15 and 0.0033, respectively. Finally, by using the scalar spectral index $n_{s}$ and tensor-to-scalar ratio $r$ relations and comparing with Planck 2018 empirical data, we obtain the coefficients $\alpha$, $\beta$ and $\gamma$. The corresponding results are checked against several figures, the literature, and the Planck 2018 data.
high energy physics theory
The latent space of normalizing flows must have the same dimensionality as their output space. This constraint presents a problem if we want to learn low-dimensional, semantically meaningful representations. Recent work has provided compact representations by fitting flows constrained to manifolds, but has not defined a density off those manifolds. In this work we consider flows with full support in data space, but with ordered latent variables. As in PCA, the leading latent dimensions define a sequence of manifolds that lie close to the data. We note a trade-off between the flow likelihood and the quality of the ordering, depending on the parameterization of the flow.
statistics
A family of permutations $\mathcal{F} \subset S_{n}$ is said to be $t$-intersecting if any two permutations in $\mathcal{F}$ agree on at least $t$ points. It is said to be $(t-1)$-intersection-free if no two permutations in $\mathcal{F}$ agree on exactly $t-1$ points. If $S,T \subset \{1,2,\ldots,n\}$ with $|S|=|T|$, and $\pi: S \to T$ is a bijection, the $\pi$-star in $S_n$ is the family of all permutations in $S_n$ that agree with $\pi$ on all of $S$. An $s$-star is a $\pi$-star such that $\pi$ is a bijection between sets of size $s$. Friedgut and Pilpel, and independently the first author, showed that if $\mathcal{F} \subset S_n$ is $t$-intersecting, and $n$ is sufficiently large depending on $t$, then $|\mathcal{F}| \leq (n-t)!$; this proved a conjecture of Deza and Frankl from 1977. Equality holds only if $\mathcal{F}$ is a $t$-star. In this paper, we give a more `robust' proof of a strengthening of the Deza-Frankl conjecture, namely that if $n$ is sufficiently large depending on $t$, and $\mathcal{F} \subset S_n$ is $(t-1)$-intersection-free, then $|\mathcal{F}| \leq (n-t)!$, with equality only if $\mathcal{F}$ is a $t$-star. The main ingredient of our proof is a `junta approximation' result, namely, that any $(t-1)$-intersection-free family of permutations is essentially contained in a $t$-intersecting {\em junta} (a `junta' being a union of a bounded number of $O(1)$-stars). The proof of our junta approximation result relies, in turn, on a weak regularity lemma for families of permutations, a combinatorial argument that `bootstraps' a weak notion of pseudorandomness into a stronger one, and finally a spectral argument for pairs of highly-pseudorandom fractional families. Our proof employs four different notions of pseudorandomness, three being combinatorial in nature, and one being algebraic.
mathematics
This article summarizes the BCN20000 dataset, composed of 19424 dermoscopic images of skin lesions captured from 2010 to 2016 in the facilities of the Hospital Cl\'inic in Barcelona. With this dataset, we aim to study the problem of unconstrained classification of dermoscopic images of skin cancer, including lesions found in hard-to-diagnose locations (nails and mucosa), large lesions which do not fit in the aperture of the dermoscopy device, and hypo-pigmented lesions. The BCN20000 will be provided to the participants of the ISIC Challenge 2019, where they will be asked to train algorithms to classify dermoscopic images of skin cancer automatically.
electrical engineering and systems science
We focus on the robustness of neural networks for classification. To permit a fair comparison between methods for achieving robustness, we first introduce a standard based on measuring a classifier's degradation. Then, we propose natural perturbed training to robustify the network. Natural perturbations will be encountered in practice: the difference between two images of the same object may be approximated by an elastic deformation (when they have slightly different viewing angles), by occlusions (when they hide differently behind objects), or by saturation, Gaussian noise, etc. Training some fraction of the epochs on random versions of such variations will help the classifier to learn better. We conduct extensive experiments on six datasets of varying sizes and granularity. Natural perturbed learning shows better and much faster performance than adversarial training on clean, adversarial, as well as natural perturbed images. It even improves general robustness on perturbations not seen during training. For Cifar-10 and STL-10, natural perturbed training even improves the accuracy on clean data and reaches state-of-the-art performance. Ablation studies verify the effectiveness of natural perturbed training.
computer science
Bipartite experiments are a recent object of study in causal inference, whereby treatment is applied to one set of units and outcomes of interest are measured on a different set of units. These experiments are particularly useful in settings where strong interference effects occur between units of a bipartite graph. In market experiments for example, assigning treatment at the seller-level and measuring outcomes at the buyer-level (or vice-versa) may lead to causal models that better account for the interference that naturally occurs between buyers and sellers. While bipartite experiments have been shown to improve the estimation of causal effects in certain settings, the analysis must be done carefully so as to not introduce unnecessary bias. We leverage the generalized propensity score literature to show that we can obtain unbiased estimates of causal effects for bipartite experiments under a standard set of assumptions. We also discuss the construction of confidence sets with proper coverage probabilities. We evaluate these methods using a bipartite graph from a publicly available dataset studied in previous work on bipartite experiments, showing through simulations a significant bias reduction and improved coverage.
statistics
Motivated by lunar exploration, we consider deploying a network of mobile robots to explore an unknown environment while acting as a cooperative positioning system. Robots measure and communicate position-related data in order to perform localization in the absence of infrastructure-based solutions (e.g. stationary beacons or GPS). We present Trilateration for Exploration and Mapping (TEAM), a novel algorithm for low-complexity localization and mapping with robotic networks. TEAM is designed to leverage the capability of commercially-available ultra-wideband (UWB) radios on board the robots to provide range estimates with centimeter accuracy and perform anchorless localization in a shared, stationary frame. It is well-suited for feature-deprived environments, where feature-based localization approaches suffer. We provide experimental results in varied Gazebo simulation environments as well as on a testbed of Turtlebot3 Burgers with Pozyx UWB radios. We compare TEAM to the popular Rao-Blackwellized Particle Filter for Simultaneous Localization and Mapping (SLAM). We demonstrate that TEAM requires an order of magnitude less computational complexity and reduces the necessary sample rate of LiDAR measurements by an order of magnitude. These advantages do not require sacrificing performance, as TEAM reduces the maximum localization error by 50% and achieves up to a 28% increase in map accuracy in feature-deprived environments and comparable map accuracy in other settings.
computer science
The unique surface edge states make topological insulators a primary focus among different applications. In this article, we synthesized a large single crystal of niobium (Nb)-doped Bi2Se3 topological insulator (TI) with the formula Nb0.25Bi2Se3. The single crystal was characterized using various techniques such as powder X-ray diffraction (PXRD), DC magnetization measurements, Raman spectroscopy, and ultrafast transient absorption spectroscopy (TRUS). The PXRD shows (00l) reflections, and superconductivity in the grown crystal is evident from the clearly visible diamagnetic transition at 2.5 K in both FC and ZFC measurements. Raman spectroscopy is used to find the different vibrational modes in the sample. Further, the sample is excited by a pump of 1.90 eV, and a kinetic decay profile at 1.38 eV is considered for terahertz analysis. The differential decay profile contains different oscillations, and these oscillations are analyzed in terms of terahertz frequencies. This article not only provides evidence of terahertz generation in the Nb-doped sample along with the undoped sample, but also shows that the dopant atom changes the dynamics of the charge carriers and thereby shifts the terahertz frequency response. In conclusion, a suitable dopant can be used as a means of tuning the terahertz frequency in TIs.
condensed matter
It is well known that in classical optics the visibility of two-beam interference is related to the optical coherence of the two beams. A wave-particle duality relation can be derived using this mutual coherence. The issue of wave-particle duality in classical optics is analyzed here in the more general context of multipath interference. New definitions of interference visibility and path distinguishability are introduced, which lead to a duality relation for multipath interference. The visibility is shown to be related to a new multi-point optical coherence function.
quantum physics
Millions of online discussions are generated every day on social media platforms. Topic modelling is an efficient way of better understanding large text datasets at scale. Conventional topic models have had limited success in online discussions. To overcome their limitations, we use the discussion thread tree structure and propose a "popularity" metric, which quantifies the number of replies to a comment to extend word-occurrence frequencies, and a "transitivity" concept, which characterizes topic dependency among nodes in a nested discussion thread. We build a Conversational Structure Aware Topic Model (CSATM) based on popularity and transitivity to infer topics and their assignments to comments. Experiments on real forum datasets demonstrate improved performance for topic extraction with six different measurements of coherence and impressive accuracy for topic assignments.
computer science
For the Tikhonov regularization of ill-posed nonlinear operator equations, convergence is studied in a Hilbert scale setting. We include the case of oversmoothing penalty terms, which means that the exact solution does not belong to the domain of definition of the considered penalty functional. In this case, we try to close a gap in the present theory, where H\"older-type convergence rate results have been proven under corresponding source conditions, but assertions on norm convergence of regularized solutions without source conditions are completely missing. A result of the present work is to provide sufficient conditions for convergence under a priori and a posteriori regularization parameter choice strategies, without any additional smoothness assumptions on the solution. The obtained error estimates moreover allow us to prove low-order convergence rates under associated (for example logarithmic) source conditions. Some numerical illustrations are also given.
mathematics
We develop new routing algorithms for a quantum network with noisy quantum devices, each of which can store a small number of qubits. We consider two models for the operation of such a network. The first is a continuous model, in which entanglement between a subset of the nodes is produced continuously in the background. This in principle allows the rapid creation of entanglement between more distant nodes using the pre-generated entangled pairs in the network. The second is an on-demand model, where entanglement production does not commence before a request is made. Our objective is to find protocols that minimise the latency of the network in serving a request to create entanglement between two distant nodes. We propose three routing algorithms and analytically show that, as expected, when there is only a single request in the network, employing them on the continuous model yields a lower latency than on the on-demand one. We study the performance of the routing algorithms in ring, grid, and recursively generated network topologies. We also give an analytical upper bound on the number of entanglement swap operations the nodes need to perform for routing entangled links between a source and a destination, yielding a lower bound on the end-to-end fidelity of the shared entangled state. We proceed to study the case of multiple concurrent requests and show that in some scenarios the on-demand model can outperform the continuous one. Using numerical simulations on ring and grid networks, we also study the behaviour of the latency of all the routing algorithms. We observe that the proposed routing algorithms perform far better than the existing classical greedy routing algorithm. The simulations also help to understand the advantages and disadvantages of different types of continuous models for different types of demands.
quantum physics
We propose an improved estimator for the multi-task averaging problem, whose goal is the joint estimation of the means of multiple distributions using separate, independent data sets. The naive approach is to take the empirical mean of each data set individually, whereas the proposed method exploits similarities between tasks, without any related information being known in advance. First, for each data set, similar or neighboring means are determined from the data by multiple testing. Then each naive estimator is shrunk towards the local average of its neighbors. We prove theoretically that this approach provides a reduction in mean squared error. This improvement can be significant when the dimension of the input space is large, demonstrating a "blessing of dimensionality" phenomenon. An application of this approach is the estimation of multiple kernel mean embeddings, which plays an important role in many modern applications. The theoretical results are verified on artificial and real world data.
statistics
A proposed paradigm for out-of-equilibrium quantum systems is that an analogue of quantum phase transitions exists between parameter regimes of qualitatively distinct time-dependent behavior. Here, we present evidence of such a transition between dynamical phases in a cold-atom quantum simulator of the collective Heisenberg model. Our simulator encodes spin in the hyperfine states of ultracold fermionic potassium. Atoms are pinned in a network of single-particle modes, whose spatial extent emulates the long-range interactions of traditional quantum magnets. We find that below a critical interaction strength, magnetization of an initially polarized fermionic gas decays quickly, while above the transition point, the magnetization becomes long-lived, due to an energy gap that protects against dephasing by the inhomogeneous axial field. Our quantum simulation reveals a non-equilibrium transition predicted to exist but not yet directly observed in quenched s-wave superconductors.
quantum physics
A critical task in graph signal processing is to estimate the true signal from noisy observations over a subset of nodes, also known as the reconstruction problem. In this paper, we propose a node-adaptive regularization for graph signal reconstruction, which surmounts the conventional Tikhonov regularization, giving rise to more degrees of freedom; hence, an improved performance. We formulate the node-adaptive graph signal denoising problem, study its bias-variance trade-off, and identify conditions under which a lower mean squared error and variance can be obtained with respect to Tikhonov regularization. Compared with existing approaches, the node-adaptive regularization enjoys more general priors on the local signal variation, which can be obtained by optimally designing the regularization weights based on Prony's method or semidefinite programming. As these approaches require additional prior knowledge, we also propose a minimax (worst-case) strategy to address instances where this extra information is unavailable. Numerical experiments with synthetic and real data corroborate the proposed regularization strategy for graph signal denoising and interpolation, and show its improved performance compared with competing alternatives.
electrical engineering and systems science
We emphasize the importance of applying power counting to the small-$x$ observables, which introduces novel soft contributions usually missing and allows for a unified treatment of the Balitsky-Kovchegov (BK) evolution and various Sudakov logarithms. We use $pA \to h(p_{h\perp})X$ at forward rapidity to highlight how the power counting yields a partonic cross section with collinear and soft sectors. We show how the kinematic constraints can be obtained in the soft sector without violating the power counting. We further show how one can resum the threshold Sudakov logarithms systematically to all orders in a re-factorized framework with additional collinear-soft contributions. Direct applications to other small-$x$ processes involving heavy particles, jet (sub-)observables and EIC physics are straightforward.
high energy physics phenomenology
We examine the stability of marginally Anderson localized phase transitions between localized phases to the addition of many-body interactions, focusing in particular on the spin-glass to paramagnet transition in a disordered transverse field Ising model in one dimension. We find evidence for a perturbative instability of localization at finite energy densities once interactions are added, i.e. evidence for the relevance of interactions - in a renormalization group sense - to the non-interacting critical point governed by infinite randomness scaling. We introduce a novel diagnostic, the "susceptibility of entanglement", which allows us to perturbatively probe the effect of adding interactions on the entanglement properties of eigenstates, and helps us elucidate the resonant processes that can cause thermalization. The susceptibility serves as a much more sensitive probe, and its divergence can detect the perturbative beginnings of an incipient instability even in regimes and system sizes for which conventional diagnostics point towards localization. We expect this new measure to be of independent interest for analyzing the stability of localization in a variety of different settings.
condensed matter
We investigate nonequilibrium steady states in a class of one-dimensional diffusive systems that can attain negative absolute temperatures. The cases of a paramagnetic spin system, a Hamiltonian rotator chain and a one-dimensional discrete linear Schr\"odinger equation are considered. Suitable models of reservoirs are implemented to impose given, possibly negative, temperatures at the chain ends. We show that a phenomenological description in terms of a Fourier law can consistently describe unusual transport regimes where the temperature profiles are entirely or partially in the negative-temperature region. Negative-temperature Fourier transport is observed both for deterministic and stochastic dynamics and it can be generalized to coupled transport when two or more thermodynamic currents flow through the system.
condensed matter
Ridesharing platforms match drivers and riders to trips, using dynamic prices to balance supply and demand. A challenge is to set prices that are appropriately smooth in space and time, so that drivers with the flexibility to decide how to work will nevertheless choose to accept their dispatched trips, rather than drive to another area or wait for higher prices or a better trip. In this work, we propose a complete information model that is simple yet rich enough to incorporate spatial imbalance and temporal variations in supply and demand -- conditions that lead to market failures in today's platforms. We introduce the Spatio-Temporal Pricing (STP) mechanism. The mechanism is incentive-aligned, in that it is a subgame-perfect equilibrium for drivers to always accept their trip dispatches. From any history onward, the equilibrium outcome of the STP mechanism is welfare-optimal, envy-free, individually rational, budget balanced, and core-selecting. We also prove the impossibility of achieving the same economic properties in a dominant-strategy equilibrium. Simulation results show that the STP mechanism can achieve substantially higher social welfare and earning equity than a myopic mechanism.
computer science
Skyrmions are important in topological quantum field theory for being soliton solutions of a nonlinear sigma model, and in information technology for their attractive applications. Skyrmions are believed to be circular, and the stripy spin textures that appear in the vicinity of skyrmion crystals are termed spiral, helical, and cycloidal spin orders, but not skyrmions. Here we present convincing evidence showing that those stripy spin textures are skyrmions, "siblings" of the circular skyrmions in skyrmion crystals and "cousins" of isolated circular skyrmions. Specifically, isolated skyrmions are excitations when the skyrmion formation energy is positive. The skyrmion morphologies are various stripy structures when the ground states of chiral magnetic films are skyrmions. The density of the skyrmion number determines the morphology of condensed skyrmion states. At the extreme of one skyrmion in the whole sample, the skyrmion is a ramified stripe. As the skyrmion number density increases, individual skyrmion shapes gradually change from ramified stripes to rectangular stripes, and eventually to disk-like objects. At a low skyrmion number density, the natural width of the stripes is proportional to the ratio between the exchange stiffness constant and the Dzyaloshinskii-Moriya interaction coefficient. At a high skyrmion number density, skyrmion crystals are the preferred states. Our findings reveal the nature and properties of stripy spin textures, and open a new avenue for manipulating skyrmions, especially condensed skyrmions such as skyrmion crystals.
condensed matter
We consider motility of keratocyte cells driven by myosin contraction and introduce a 2D free boundary model for such motion. This model generalizes a 1D model from [12] by combining a 2D Keller-Segel model and a Hele-Shaw type boundary condition with the Young-Laplace law resulting in a boundary curvature term which provides a regularizing effect. We show that this model has a family of traveling solutions with constant shape and velocity which bifurcates from a family of radially symmetric stationary states. Our goal is to establish observable steady motion of the cell with constant velocity. Mathematically, this amounts to establishing stability of the traveling solutions. Our key result is an explicit asymptotic formula for the stability-determining eigenvalue of the linearized problem. This formula greatly simplifies the task of numerically computing the sign of this eigenvalue and reveals the physical mechanisms of stability. The derivation of this formula is based on a special ansatz for the corresponding eigenvector which exhibits an interesting singular behavior such that it asymptotically (in the small-velocity limit) becomes parallel to another eigenvector. This reflects the non-self-adjoint nature of the linearized problem, a signature of living systems. Finally, our results describe the onset of motion via a transition from unstable radial stationary solutions to stable asymmetric traveling solutions.
physics
To encourage intra-class compactness and inter-class separability among trainable feature vectors, large-margin softmax methods have been developed and widely applied in the face recognition community. The introduction of the large-margin concept into the softmax is reported to have good properties such as enhanced discriminative power, less overfitting, and well-defined geometric intuitions. Nowadays, language modeling is commonly approached with neural networks using softmax and cross entropy. In this work, we are curious to see if introducing large margins to neural language models would improve the perplexity and, consequently, the word error rate in automatic speech recognition. Specifically, we first implement and test various types of conventional margins following previous works in face recognition. To address the distribution of natural language data, we then compare different strategies for word vector norm-scaling. After that, we apply the best norm-scaling setup in combination with various margins and conduct neural language model rescoring experiments in automatic speech recognition. We find that although perplexity is slightly degraded, neural language models with large-margin softmax can yield a word error rate similar to that of the standard softmax baseline. Finally, expected margins are analyzed through visualization of word vectors, showing that the syntactic and semantic relationships are also preserved.
electrical engineering and systems science
Image-based modeling and laser scanning are two commonly used approaches in large-scale architectural scene reconstruction nowadays. To generate a complete scene reconstruction, an effective way is to completely cover the scene using ground and aerial images, supplemented by laser scanning of certain regions with low texture and complicated structure. Thus, the key issue is to accurately calibrate cameras and register laser scans in a unified framework. To this end, we propose a three-step pipeline for complete scene reconstruction by merging images and laser scans. First, images are captured around the architecture in a multi-view and multi-scale way and are fed into a structure-from-motion (SfM) pipeline to generate SfM points. Then, based on the SfM result, the laser scanning locations are automatically planned by considering the textural richness, the structural complexity of the scene, and the spatial layout of the laser scans. Finally, the images and laser scans are accurately merged in a coarse-to-fine manner. Experimental evaluations on two ancient Chinese architecture datasets demonstrate the effectiveness of our proposed complete scene reconstruction pipeline.
computer science
We prove the existence of a mild solution to the three dimensional incompressible stochastic magnetohydrodynamic equations in the whole space with the initial data which belong to the Sobolev spaces.
mathematics
Mapping of the large-scale structure through cosmic time has numerous applications in the studies of cosmology and galaxy evolution. At $z > 2$, the structure can be traced by the neutral intergalactic medium (IGM) by way of observing the Ly$\alpha$ forest towards densely-sampled lines-of-sight of bright background sources, such as quasars and star-forming galaxies. We investigate the scientific potential of MOSAIC, a planned multi-object spectrograph on the European Extremely Large Telescope (ELT), for the 3D mapping of the IGM at $z \gtrsim 3$. We simulate a survey of $3 \lesssim z \lesssim 4$ galaxies down to a limiting magnitude of $m_{r}\sim 25.5$ mag in an area of 1 degree$^2$ in the sky. Galaxies and their spectra (including the line-of-sight Ly$\alpha$ absorption) are taken from the lightcone extracted from the Horizon-AGN cosmological hydrodynamical simulation. The quality of the reconstruction of the original density field is studied for different spectral resolutions and signal-to-noise ratios of the spectra. We demonstrate that the minimum $S/N$ (per resolution element) of the faintest galaxies that such a survey has to reach is $S/N = 4$. We show that a survey with such sensitivity enables a robust extraction of cosmic filaments and the detection of the theoretically-predicted galaxy stellar mass and star-formation rate gradients towards filaments. By simulating the realistic performance of MOSAIC we obtain $S/N(T_{\rm obs}, R, m_{r})$ scaling relations. We estimate that $\lesssim 35~(65)$ nights of observation time are required to carry out the survey with the instrument's high multiplex mode and with the spectral resolution of $R=1000~(2000)$. A survey with a MOSAIC-concept instrument on the ELT is found to enable the mapping of the IGM at $z > 3$ on Mpc scales, and as such will be complementary to and competitive with other planned IGM tomography surveys. [abridged]
astrophysics
We have developed a numerical framework for the full solution of the relativistic Boltzmann equations for quark-gluon matter using multiple Graphics Processing Units (GPUs) on distributed clusters. Including all the $2 \to 2$ scattering processes of 3-flavor quarks and gluons, we compute the time evolution of the distribution functions in both coordinate and momentum space for the cases of pure gluons, quarks, and a mixture of quarks and gluons. By introducing a symmetrical sampling method on GPUs, which ensures particle number conservation, our framework is able to perform the space-time evolution of the quark-gluon system towards thermal equilibrium with high performance. We also observe that the gluons naturally accumulate in the soft region at early times, which may indicate gluon condensation.
high energy physics phenomenology
We present a novel method to constrain the past collisional evolution of observed globular cluster (GC) systems, in particular their mass functions. We apply our method to a pair of galaxies hypothesized to have recently undergone an episode of violent relaxation due to a strong galaxy-galaxy interaction, namely NGC 1052-DF2 and NGC 1052-DF4. We begin by exploring the observational evidence for a collisional origin for these two recently discovered ultra-diffuse galaxies observed in the NGC 1052 group, posited in the literature to be dark matter (DM)-free. We compute the timescales for infall to the central nucleus due to dynamical friction (DF) for the GCs in these galaxies, using the shortest of these times to constrain how long ago a galaxy-galaxy interaction could have occurred. We go on to quantify the initial GC numbers and densities needed for significant collisional evolution to occur within the allotted times, and show that, if the hypothesis of a previous galaxy-galaxy interaction is correct, a paucity of low-mass GCs should be revealed by deeper observational surveys. If any are found, they should be more spatially extended than the currently observed GC population. Finally, we apply our method to these galaxies, in order to illustrate its efficacy in constraining their dynamical evolution. Our results motivate more complete observations of the GC luminosity functions in these galaxies, in addition to future studies aimed at combining the method presented here with a suite of numerical simulations in order to further constrain the origins of the curious GC populations in these (and other) galaxies.
astrophysics
Recent theoretical studies proved that deep neural network (DNN) estimators obtained by minimizing empirical risk with a certain sparsity constraint can attain optimal convergence rates for regression and classification problems. However, the sparsity constraint requires to know certain properties of the true model, which are not available in practice. Moreover, computation is difficult due to the discrete nature of the sparsity constraint. In this paper, we propose a novel penalized estimation method for sparse DNNs, which resolves the aforementioned problems existing in the sparsity constraint. We establish an oracle inequality for the excess risk of the proposed sparse-penalized DNN estimator and derive convergence rates for several learning tasks. In particular, we prove that the sparse-penalized estimator can adaptively attain minimax convergence rates for various nonparametric regression problems. For computation, we develop an efficient gradient-based optimization algorithm that guarantees the monotonic reduction of the objective function.
mathematics
Rohatgi and the author recently proved a shuffling theorem for lozenge tilings of `doubly-dented hexagons' (arXiv:1905.08311). The theorem can be considered as a hybrid between two classical theorems in the enumeration of tilings: MacMahon's theorem about centrally symmetric hexagons and Cohn-Larsen-Propp's theorem about semihexagons with dents. In this paper, we consider a similar shuffling theorem for the centrally symmetric tilings of the doubly-dented hexagons. Our theorem also implies a conjecture posed by the author in arXiv:1803.02792 about the enumeration of centrally symmetric tilings of hexagons with three arrays of triangular holes. This enumeration, in turn, can be considered as a common generalization of (a tiling-equivalent version of) Stanley's enumeration of self-complementary plane partitions and Ciucu's work on symmetries of the shamrock structure. Moreover, our enumeration also confirms a recent conjecture posed by Ciucu in arXiv:1906.02951.
mathematics
We present a method of extracting information about the topological order from the ground state of a strongly correlated two-dimensional system computed with the infinite projected entangled pair state (iPEPS). For topologically ordered systems, the iPEPS wrapped on a torus becomes a superposition of degenerate, locally indistinguishable ground states. Projectors in the form of infinite matrix product operators (iMPO) onto states with well-defined anyon flux are used to compute topological $S$ and $T$ matrices (encoding mutual- and self-statistics of emergent anyons). The algorithm is shown to be robust against a perturbation driving the string-net toric code across a phase transition to a ferromagnetic phase. Our approach provides accurate results near a quantum phase transition, where the correlation length is prohibitively large for other numerical methods. Moreover, we used a numerically optimized iPEPS describing the ground state of the Kitaev honeycomb model in the toric code phase and obtained topological data in excellent agreement with theoretical predictions.
condensed matter
Due to a drastic improvement in the quality of internet services worldwide, there has been an explosion of multilingual content generation and consumption. This is especially prevalent in countries with large multilingual audiences, who are increasingly consuming media outside their linguistic familiarity/preference. Hence, there is an increasing need for real-time and fine-grained content analysis services, including language identification, content transcription, and analysis. Accurate and fine-grained spoken language detection is an essential first step for all subsequent content analysis algorithms. Current techniques in spoken language detection may fall short on one of these fronts: accuracy, fine-grained detection, data requirements, and manual effort in data collection \& pre-processing. Hence, in this work, a real-time language detection approach that detects the spoken language from 5-second audio clips with an accuracy of 91.8\% is presented, with exiguous data requirements and minimal pre-processing. A novel Capsule Network architecture is proposed which operates on spectrogram images of the provided audio snippets. We compare against previous approaches based on Recurrent Neural Networks and iVectors when presenting the results. Finally, we show a ``Non-Class'' analysis to further explain why the CapsNet architecture works for the LID task.
electrical engineering and systems science
In-field DC and AC magnetization measurements were carried out on a sigma-phase Fe55Re45 intermetallic compound, aimed at determining the magnetic phase diagram in the H-T plane. Field-cooled, M_FC, and zero-field-cooled, M_ZFC, DC magnetization curves were measured in magnetic fields, H, up to 1200 Oe. AC magnetic susceptibility measurements were carried out at a constant frequency of 1465 Hz under DC fields up to H=500 Oe. The obtained results provide evidence for re-entrant magnetism in the investigated sample. The magnetic phase diagrams in the H-T plane have been outlined based on characteristic temperatures determined from the DC and AC measurements. The phase diagrams are similar yet not identical. The main difference is that in the DC-based diagram there are two cross-over transitions within the strong-irreversibility spin-glass state, whereas in the AC-susceptibility-based diagram only one transition is observed. The border lines (irreversibility, cross-over) can be described in terms of power laws.
condensed matter
In this paper, we propose incorporating frailty into a statistical methodology for modeling time-to-event data based on a non-proportional hazards regression model. Specifically, we use the generalized time-dependent logistic (GTDL) model with a frailty term introduced in the hazard function to control for unobservable heterogeneity among the sampling units. We also add a regression on the parameter that measures the effect of time, since it can directly reflect the influence of covariates on the effect of time to failure. The practical relevance of the proposed model is illustrated in a real problem based on a data set for downhole safety valves (DHSVs) used in offshore oil and gas production wells. The reliability estimation of DHSVs can be used, among other purposes, to predict blowout occurrence, assess workover demand and aid decision-making.
statistics
We explore the interplay of matter with quantum gravity with a preferred frame to highlight that the matter sector cannot be protected from the symmetry-breaking effects in the gravitational sector. Focusing on Abelian gauge fields, we show that quantum gravitational radiative corrections induce Lorentz-invariance-violating couplings for the Abelian gauge field. In particular, we discuss how such a mechanism could result in the possibility to translate observational constraints on Lorentz violation in the matter sector into strong constraints on the Lorentz-violating gravitational couplings.
high energy physics theory
By studying the set of correlations that are theoretically possible between physical systems without allowing for signalling of information backwards in time, we here identify correlations that can only be achieved if the time ordering between the systems is fundamentally indefinite. These correlations, if they exist in nature, must result from non-classical, non-deterministic time, and so may have relevance for quantum (or post-quantum) gravity, where a definite global time might not exist.
quantum physics
We propose Cotatron, a transcription-guided speech encoder for speaker-independent linguistic representation. Cotatron is based on the multispeaker TTS architecture and can be trained with conventional TTS datasets. We train a voice conversion system to reconstruct speech with Cotatron features, which is similar to previous methods based on the Phonetic Posteriorgram (PPG). By training and evaluating our system with 108 speakers from the VCTK dataset, we outperform the previous method in terms of both naturalness and speaker similarity. Our system can also convert speech from speakers unseen during training, and can utilize ASR to automate the transcription with minimal reduction in performance. Audio samples are available at https://mindslab-ai.github.io/cotatron, and the code with a pre-trained model will be made available soon.
electrical engineering and systems science
We present a way to derive a deformation of special relativistic kinematics (possible low energy signal of a quantum theory of gravity) from the geometry of a maximally symmetric curved momentum space. The deformed kinematics is fixed (up to change of coordinates in the momentum variables) by the algebra of isometries of the metric in momentum space. In particular, the well-known example of $\kappa$-Poincar\'e kinematics is obtained when one considers an isotropic metric in de Sitter momentum space such that translations are a subgroup of the isometry group, and for a Lorentz covariant algebra one gets the also well-known case of Snyder kinematics. We prove that our construction gives generically a relativistic kinematics and explain how it relates to previous attempts of connecting a deformed kinematics with a geometry in momentum space.
high energy physics theory
We prove that for any parameter r an r-locally 2-connected graph G embeds r-locally planarly in a surface if and only if a certain matroid associated to the graph G is co-graphic. This extends Whitney's abstract planar duality theorem from 1932.
mathematics
Inducing magnetic orders in a topological insulator (TI) to break its time reversal symmetry has been predicted to reveal many exotic topological quantum phenomena. The manipulation of magnetic orders in a TI layer can play a key role in harnessing these quantum phenomena towards technological applications. Here we fabricated a thin magnetic TI film on an antiferromagnetic (AFM) insulator Cr2O3 layer and found that the magnetic moments of the magnetic TI layer and the surface spins of the Cr2O3 layers favor interfacial AFM coupling. Field cooling studies show a crossover from negative to positive exchange bias clarifying the competition between the interfacial AFM coupling energy and the Zeeman energy in the AFM insulator layer. The interfacial exchange coupling also enhances the Curie temperature of the magnetic TI layer. The unique interfacial AFM alignment in magnetic TI on AFM insulator heterostructures opens a new route toward manipulating the interplay between topological states and magnetic orders in spin-engineered heterostructures, facilitating the exploration of proof-of-concept TI-based spintronic and electronic devices with multi-functionality and low power consumption.
condensed matter
Device-independent quantum key distribution protocols allow two honest users to establish a secret key with minimal levels of trust on the provider, as security is proven without any assumption on the inner workings of the devices used for the distribution. Unfortunately, the implementation of these protocols is challenging, as it requires the observation of a large Bell-inequality violation between the two distant users. Here, we introduce novel photonic protocols for device-independent quantum key distribution exploiting single-photon sources and heralding-type architectures. The heralding process is designed so that transmission losses become irrelevant for security. We then show how the use of single-photon sources for entanglement distribution in these architectures, instead of standard entangled-pair generation schemes, provides significant improvements on the attainable key rates and distances over previous proposals. Given the current progress in single-photon sources, our work opens up a promising avenue for device-independent quantum key distribution implementations.
quantum physics
In developing organisms, internal cellular processes generate mechanical stresses at the tissue scale. The resulting deformations depend on the material properties of the tissue, which can exhibit long-ranged orientational order and topological defects. It remains a challenge to determine these properties on the time scales relevant for developmental processes. Here, we build on the physics of liquid crystals to determine material parameters of cell monolayers. Specifically, we use a hydrodynamic description to characterize the stationary states of compressible active polar fluids around defects. We illustrate our approach by analyzing monolayers of C2C12 cells in small circular confinements, where they form a single topological defect with integer charge. We find that such monolayers exert compressive stresses at the defect centers, where localized cell differentiation and formation of three-dimensional shapes is observed.
condensed matter
We examine an equivalence relation between free homotopy classes of closed curves on the pair of pants known as k-equivalence, a generalization of a concept previously defined by Leininger. We prove that two classes of closed curves on the pair of pants that are k-equivalent must also be 1-equivalent and 2-equivalent. We also examine properties of 1-equivalence on the pair of pants in greater depth.
mathematics
Photochemical acid generation is refined here from the first principles of elementary particle physics. We first briefly review the formulation of the quantum theory of light based on the quantum electrodynamics framework to establish the probability for acid generation at a given spacetime point. The quantum mechanical acid generation is then combined with the deprotection mechanism to obtain a probabilistic description of the deprotection density, directly related to the feature formation in a photoresist. A statistical analysis of the random deprotection density is presented to reveal the leading characteristics of stochastic feature formation.
physics
A clear consensus on how long it takes a particle to tunnel through a potential barrier has never been more urgently required, since the electron dynamics in strong-field ionization can now be resolved on the attosecond time-scale in experiment and the exact nature of the tunneling process is the key to triggering subsequent attosecond techniques. Here a general picture of tunneling time is suggested by introducing a quantum travel time, which is defined as the ratio of the travel distance to the expected value of the velocity operator under the barrier. Specifically, if applied to rectangular-barrier tunneling, it can retrieve the B\"{u}ttiker-Landauer time $\tau_{BL}$ in the case of an opaque barrier, and has a clear physical meaning in the case of a very thin barrier, wherein $\tau_{BL}$ cannot be well defined. In the case of the strong-field tunneling process, with the help of the newly defined time, the tunneling delay time measured by attoclock experiments can be interpreted as the travel time spent by the electron to tunnel from a point under the barrier to the tunnel exit. In addition, a peculiar oscillation structure in the wavelength dependence of the tunneling delay time in the deep tunneling regime is observed, which is beyond the scope of the adiabatic tunneling picture. This oscillation structure can be attributed to the interference between the ground-state tunneling channel and the excited-state tunneling channels.
physics
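The travel-time definition quoted in the abstract above can be written schematically in LaTeX (our own notation, not taken from the paper: $d$ is the distance traversed under the barrier, $\psi$ the state in the barrier region, and $\hat{v}$ the velocity operator):

```latex
\tau_{\mathrm{travel}} \;=\; \frac{d}{\langle \psi \,|\, \hat{v} \,|\, \psi \rangle}
```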
KOI-3278 is a self-lensing stellar binary consisting of a white-dwarf secondary orbiting a Sun-like primary star. Kruse and Agol (2014) noticed small periodic brightenings every 88.18 days in the Kepler photometry and interpreted these as the result of microlensing by a white dwarf with about 63$\%$ of the mass of the Sun. We obtained two sets of spectra for the primary that allowed us to derive three sets of spectroscopic estimates for its effective temperature, surface gravity, and metallicity for the first time. We used these values to update the Kruse and Agol (2014) Einsteinian microlensing model, resulting in a revised mass for the white dwarf of $0.539^{+0.022}_{-0.020} \, M_{\odot}$. The spectra also allowed us to determine radial velocities and derive orbital solutions, with good agreement between the two independent data sets. An independent Newtonian dynamical MCMC model of the combined velocities yielded a mass for the white dwarf of $0.5122^{+0.0057}_{-0.0058} \, M_{\odot}$. The nominal uncertainty for the Newtonian mass is about four times better than for the Einsteinian, $\pm 1.1\%$ vs. $\pm 4.1\%$ and the difference between the two mass determinations is $5.2 \%$. We then present a joint Einsteinian microlensing and Newtonian radial velocity model for KOI-3278, which yielded a mass for the white dwarf of $0.5250^{+0.0082}_{-0.0089} \, M_{\odot}$. This joint model does not rely on any white dwarf evolutionary models or assumptions on the white dwarf mass-radius relation. We discuss the benefits of a joint model of self-lensing binaries, and how future studies of these systems can provide insight into the mass-radius relation of white dwarfs.
astrophysics
The three-spin-$1/2$ decoherence-free subsystem defines a logical qubit protected from collective noise and supports exchange-only universal gates. Such logical qubits are well-suited for implementation with electrically-defined quantum dots. Exact exchange-only entangling logical gates exist but are challenging to construct and understand. We use a decoupling strategy to obtain straightforward approximate entangling gates. A benefit of the strategy is that if the physical spins are aligned, then it can implement evolution under entangling Hamiltonians. Hamiltonians expressible as linear combinations of logical Pauli products not involving $\sigma_y$ can be implemented directly. Self-inverse gates that are constructible from these Hamiltonians, such as the CNOT, can be implemented without the assumption on the physical spins. We compare the control complexity of implementing CNOT to previous methods and find that the complexity for fault-tolerant fidelities is competitive.
quantum physics
Outliers due to technical errors in water-quality data from in situ sensors can reduce data quality and have a direct impact on inference drawn from subsequent data analysis. However, outlier detection through manual monitoring is infeasible given the volume and velocity of data the sensors produce. Here, we proposed an automated framework that provides early detection of outliers in water-quality data from in situ sensors caused by technical issues. The framework was used first to identify the data features that differentiate outlying instances from typical behaviours. Then statistical transformations were applied to make the outlying instances stand out in the transformed data space. Unsupervised outlier scoring techniques were then applied to the transformed data space, and an approach based on extreme value theory was used to calculate a threshold for each potential outlier. Using two data sets obtained from in situ sensors in rivers flowing into the Great Barrier Reef lagoon, Australia, we showed that the proposed framework successfully identified outliers involving abrupt changes in turbidity, conductivity and river level, including sudden spikes, sudden isolated drops and level shifts, while maintaining very low false detection rates. We implemented this framework in the open-source R package oddwater.
statistics
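The transform-score-threshold pipeline described in the abstract above can be caricatured in a few lines of Python (an illustrative sketch only, not the oddwater implementation: the one-step-difference transform, the MAD scaling, and the high empirical quantile standing in for the extreme-value-theory threshold are all our own simplifying choices):

```python
import numpy as np

def detect_spikes(series, p=0.995):
    """Flag abrupt changes: transform to absolute one-step differences so
    spikes and sudden drops stand out, scale robustly, then threshold."""
    diffs = np.abs(np.diff(series, prepend=series[0]))
    mad = np.median(np.abs(diffs - np.median(diffs))) + 1e-12  # robust scale
    scores = diffs / mad
    threshold = np.quantile(scores, p)   # crude stand-in for an EVT threshold
    return np.where(scores > threshold)[0]

rng = np.random.default_rng(0)
turbidity = np.cumsum(rng.normal(0, 0.1, 2000))  # smooth sensor baseline
for i in (400, 1200):
    turbidity[i] += 8.0                          # injected technical spikes

flagged = detect_spikes(turbidity)
print(400 in flagged.tolist(), 1200 in flagged.tolist())
```

Because the transform is one-step-ahead, a spike is flagged at (or immediately after) the time it occurs, which matches the early-detection goal stated in the abstract.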
This tutorial provides a gentle introduction to the particle Metropolis-Hastings (PMH) algorithm for parameter inference in nonlinear state-space models together with a software implementation in the statistical programming language R. We employ a step-by-step approach to develop an implementation of the PMH algorithm (and the particle filter within) together with the reader. This final implementation is also available as the package pmhtutorial in the CRAN repository. Throughout the tutorial, we provide some intuition as to how the algorithm operates and discuss some solutions to problems that might occur in practice. To illustrate the use of PMH, we consider parameter inference in a linear Gaussian state-space model with synthetic data and a nonlinear stochastic volatility model with real-world data.
statistics
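The PMH recipe sketched in the abstract above — a bootstrap particle filter supplying a likelihood estimate inside a Metropolis-Hastings loop — can be condensed into Python as follows (a toy sketch under our own assumptions: a flat prior, a scalar AR(1)-plus-noise model with known noise scales, and hypothetical function names; the tutorial itself works in R with its pmhtutorial package):

```python
import numpy as np

def pf_loglik(y, phi, sigma_v=1.0, sigma_e=0.5, N=200, rng=None):
    """Bootstrap particle filter estimate of log p(y | phi) for
    x_t = phi*x_{t-1} + v_t,  y_t = x_t + e_t."""
    rng = rng or np.random.default_rng()
    x = rng.normal(0.0, 1.0, N)
    ll = 0.0
    for yt in y:
        x = phi * x + rng.normal(0.0, sigma_v, N)   # propagate particles
        logw = -0.5 * ((yt - x) / sigma_e) ** 2     # Gaussian obs. density (unnormalized;
        m = logw.max()                              #  the constant cancels in the MH ratio)
        w = np.exp(logw - m)
        ll += m + np.log(w.mean())                  # log of the incremental likelihood
        x = rng.choice(x, N, p=w / w.sum())         # multinomial resampling
    return ll

def pmh(y, n_iter=500, step=0.1, rng=None):
    """Random-walk Metropolis on phi, accepting with the *estimated* likelihood."""
    rng = rng or np.random.default_rng(1)
    phi, ll = 0.0, pf_loglik(y, 0.0, rng=rng)
    chain = []
    for _ in range(n_iter):
        prop = phi + step * rng.normal()
        ll_prop = pf_loglik(y, prop, rng=rng)
        if np.log(rng.uniform()) < ll_prop - ll:    # flat prior: ratio is likelihoods only
            phi, ll = prop, ll_prop
        chain.append(phi)
    return np.array(chain)

# synthetic data with true phi = 0.7
rng = np.random.default_rng(0)
x, ys = 0.0, []
for _ in range(100):
    x = 0.7 * x + rng.normal()
    ys.append(x + 0.5 * rng.normal())
chain = pmh(np.array(ys))
print(round(float(chain[200:].mean()), 2))          # posterior mean estimate of phi
```

Discarding the first part of the chain as burn-in, the post-burn-in mean should land near the true value 0.7; in practice one would also tune the proposal step and particle count, two of the practical issues the tutorial discusses.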
We derive a general framework for a quantum metrology scheme where the quantum probes are exchanged via an unsecured quantum channel. We construct two protocols for this task which offer a trade-off between difficulty of implementation and efficiency. We show that, for both protocols, a malicious eavesdropper cannot access any information regarding the unknown parameter. We further derive general inequalities regarding how the uncertainty in a resource state for quantum metrology can bias the estimate and the precision. From this, we link the effectiveness of the cryptographic part of the protocol to the effectiveness of the metrology scheme with a (potentially) malicious probe resource state.
quantum physics
Diffusion-based classifiers such as those relying on the Personalized PageRank and the Heat kernel, enjoy remarkable classification accuracy at modest computational requirements. Their performance however is affected by the extent to which the chosen diffusion captures a typically unknown label propagation mechanism, that can be specific to the underlying graph, and potentially different for each class. The present work introduces a disciplined, data-efficient approach to learning class-specific diffusion functions adapted to the underlying network topology. The novel learning approach leverages the notion of "landing probabilities" of class-specific random walks, which can be computed efficiently, thereby ensuring scalability to large graphs. This is supported by rigorous analysis of the properties of the model as well as the proposed algorithms. Furthermore, a robust version of the classifier facilitates learning even in noisy environments. Classification tests on real networks demonstrate that adapting the diffusion function to the given graph and observed labels, significantly improves the performance over fixed diffusions; reaching -- and many times surpassing -- the classification accuracy of computationally heavier state-of-the-art competing methods, that rely on node embeddings and deep neural networks.
statistics
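The "landing probabilities" at the heart of the approach above are cheap to compute: the k-th landing vector is simply the k-step random-walk distribution started from a class's seed nodes, and a diffusion is a weighted combination of these vectors. The sketch below is our own minimal Python illustration — the fixed geometric weights reproduce a truncated Personalized PageRank rather than the learned, class-specific coefficients of the paper — classifying the nodes of a toy two-cluster graph:

```python
import numpy as np

def landing_probabilities(A, seeds, K=10):
    """Rows are the k-step random-walk distributions (k = 0..K) started from
    a normalized indicator vector over the seed nodes."""
    P = A / A.sum(axis=1)[:, None]          # row-stochastic transition matrix
    r = np.zeros(len(A))
    r[seeds] = 1.0 / len(seeds)
    out = [r]
    for _ in range(K):
        r = r @ P
        out.append(r)
    return np.array(out)                    # shape (K+1, n)

def ppr_score(A, seeds, alpha=0.85, K=10):
    """Truncated Personalized PageRank as a fixed geometric combination
    of the landing-probability vectors."""
    L = landing_probabilities(A, seeds, K)
    coeffs = (1 - alpha) * alpha ** np.arange(K + 1)
    return coeffs @ L

# two triangles joined by one edge; label each node by the larger seed score
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], float)
s0 = ppr_score(A, [0])                      # diffusion from class-0 seed
s1 = ppr_score(A, [5])                      # diffusion from class-1 seed
labels = (s1 > s0).astype(int)
print(labels.tolist())
```

Learning the combination coefficients per class, instead of fixing them geometrically as here, is exactly the adaptation step that the abstract credits with the performance gains.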
We investigate the interplay between the enumerative geometry of Calabi-Yau fourfolds with fluxes and the modularity of elliptic genera in four-dimensional string theories. We argue that certain contributions to the elliptic genus are given by derivatives of modular or quasi-modular forms, which encode BPS invariants of Calabi-Yau or non-Calabi-Yau threefolds that are embedded in the given fourfold. As a result, the elliptic genus is only a quasi-Jacobi form, rather than a modular or quasi-modular one in the usual sense. This manifests itself as a holomorphic anomaly of the spectral flow symmetry, and in an elliptic holomorphic anomaly equation that maps between different flux sectors. We support our general considerations by a detailed study of examples, including non-critical strings in four dimensions. For the critical heterotic string, we explain how anomaly cancellation is restored due to the properties of the derivative sector. Essentially, while the modular sector of the elliptic genus takes care of anomaly cancellation involving the universal B-field, the quasi-Jacobi one accounts for additional B-fields that can be present. Thus once again, diverse mathematical ingredients, namely here the algebraic geometry of fourfolds, relative Gromov-Witten theory pertaining to flux backgrounds, and the modular properties of (quasi-)Jacobi forms, conspire in an intriguing manner precisely as required by stringy consistency.
high energy physics theory
In this article we present a unified framework based on receding horizon techniques that can be used to design the three tasks (guidance, navigation and path-planning) involved in the autonomy of unmanned vehicles. These tasks are solved using model predictive control and moving horizon estimation techniques, which allows us to include physical and dynamical constraints at the design stage, thus leading to optimal and feasible results. In order to demonstrate the capabilities of the proposed framework, we have used the Gazebo simulator to drive a Jackal unmanned ground vehicle (UGV) along a desired path computed by the path-planning task. The results we have obtained are successful, as the estimation and guidance errors are small and the Jackal UGV is able to follow the desired path satisfactorily and is also capable of avoiding the obstacles in its way.
electrical engineering and systems science
In this paper we prove a general approximation result for reflected stochastic differential equations in bounded domains satisfying conditions reorganized by Ren and Wu. Then we show that it includes Wong-Zakai approximation, mollifier approximation, etc.
mathematics
Following~\cite{Arkani-Hamed:2017thz}, we derive a recursion relation by applying a one-parameter deformation of kinematic variables for tree-level scattering amplitudes in bi-adjoint $\phi^3$ theory. The recursion relies on properties of the amplitude that can be made manifest in the underlying kinematic associahedron, and it provides triangulations for the latter. Furthermore, we solve the recursion relation and present all-multiplicity results for the amplitude: by reformulating the associahedron in terms of its vertices, the amplitude is given explicitly as a sum of "volumes" of simplices for any triangulation, in analogy with the BCFW representation/triangulation of the amplituhedron for ${\cal N}=4$ SYM.
high energy physics theory
In this work, we combine a stochastic model reduction with a particle filter augmented with tempering and jittering, and apply the combined algorithm to a damped and forced incompressible 2D Euler dynamics defined on a simply connected bounded domain. We show that using the combined algorithm, we are able to assimilate data from a reference system state (the ``truth") modelled by a highly resolved numerical solution of the flow that has roughly $3.1\times10^6$ degrees of freedom, into a stochastic system having two orders of magnitude fewer degrees of freedom, which is able to approximate the true state reasonably accurately for $5$ large scale eddy turnover times, using modest computational hardware. The model reduction is performed through the introduction of a stochastic advection by Lie transport (SALT) model as the signal on a coarser resolution. The SALT approach was introduced as a general theory using a geometric mechanics framework from Holm, Proc. Roy. Soc. A (2015). This work follows on the numerical implementation for SALT presented by Cotter et al, SIAM Multiscale Model. Sim. (2019) for the flow in consideration. The model reduction is substantial: The reduced SALT model has $4.9\times 10^4$ degrees of freedom. Results from reliability tests on the assimilated system are also presented.
statistics
Aquatic biospheres reliant on oxygenic photosynthesis are expected to play an important role on Earth-like planets endowed with large-scale oceans insofar as carbon fixation (i.e., biosynthesis of organic compounds) is concerned. We investigate the properties of aquatic biospheres comprising Earth-like biota for habitable rocky planets orbiting Sun-like stars and late-type M-dwarfs such as TRAPPIST-1. In particular, we estimate how these characteristics evolve with the available flux of photosynthetically active radiation (PAR) and the ambient ocean temperature ($T_W$), the latter of which constitutes a key environmental variable. We show that many salient properties, such as the depth of the photosynthesis zone and the net primary productivity (i.e., the effective rate of carbon fixation), are sensitive to PAR flux and $T_W$ and decline substantially when the former is decreased or the latter is increased. We conclude by exploring the implications of our analysis for exoplanets around Sun-like stars and M-dwarfs.
astrophysics
Models of axion inflation based on a single cosine potential require the axion decay constant $f$ to be super-Planckian in size. However, $f > M_{Pl}$ is disfavored by the Weak Gravity Conjecture (WGC). It is then pertinent to ask if one can construct axion inflation models in conformity with WGC. In this work we assume that WGC holds for the microscopic Lagrangian so that $f < M_{Pl}$. However, inflation is controlled by an effective Lagrangian much below the Planck scale where the inflaton is an effective axionic field associated with an effective decay constant $f_e$ which could be very different from $f$. In this work we propose a Coherent Enhancement Mechanism (CEM) for slow roll inflation controlled by flat potentials which can produce $f_e \gg M_{Pl}$ while $f < M_{Pl}$. In the analysis we consider a landscape of chiral fields charged under a $U\left(1\right)$ global shift symmetry and consider breaking of the $U\left(1\right)$ symmetry by instanton type symmetry breaking terms. In the broken phase there is one light pseudo-Nambu-Goldstone-Boson (pNGB) which acts as the inflaton. We show that with an appropriate choice of symmetry breaking terms the inflaton potential is a superposition of many cosines and the condition that they produce a flat potential allows one to enhance $f_e$ so that $f_e / M_{Pl} \gg 1$. We discuss the utility of this mechanism for a variety of inflaton models originating in supersymmetry and supergravity. The Coherent Enhancement Mechanism allows one to reduce an inflation model with an arbitrary potential to an effective model of natural inflation, i.e. with a single cosine, by expanding the potential near a field point where horizon exit occurs, and matching the expansion coefficients to those of natural inflation.
high energy physics phenomenology
Vector boson scattering is a well-known probe of electroweak symmetry breaking. Here we study a related process in which two electroweak vector bosons scatter into a vector boson and a Higgs boson ($VV \rightarrow Vh, V=W,Z$). This process exhibits tree-level interference and grows with energy if the Higgs couplings to electroweak bosons deviate from their Standard Model values. Therefore, this process is particularly sensitive to the relative sign of the Higgs couplings to the $W$ and $Z$, encoded in their ratio $\lambda_{WZ}$. In this work we show that a high-energy lepton collider is well suited to study this process through vector boson fusion, estimate the potential sensitivity to this ratio, and show that a relatively modest amount of data can exclude $\lambda_{WZ} \simeq -1$.
high energy physics phenomenology
The main purpose of a control allocator is to distribute a total control effort among redundant actuators. This paper proposes a discrete adaptive control allocator for over-actuated sampled-data systems in the presence of actuator uncertainty. The proposed method does not require uncertainty estimation or persistency of excitation. Furthermore, the presented algorithm employs a closed loop reference model, which provides fast convergence without introducing excessive oscillations. To generate the total control signal, an LQR controller with reference tracking is used to guarantee the outer loop asymptotic stability. The discretized version of the Aerodata Model in Research Environment (ADMIRE) is used as an over-actuated system, to demonstrate the efficacy of the proposed method.
electrical engineering and systems science
Recently a new group of two dimensional (2D) materials, originating from the group V elements (pnictogens), has gained global attention owing to their outstanding properties.
condensed matter
To track online emotional expressions of the Austrian population close to real-time during the COVID-19 pandemic, we build a self-updating monitor of emotion dynamics using digital traces from three different data sources. This enables decision makers and the interested public to assess issues such as the attitude towards counter-measures taken during the pandemic and the possible emergence of a (mental) health crisis early on. We use web scraping and API access to retrieve data from the news platform derstandard.at, Twitter and a chat platform for students. We document the technical details of our workflow in order to provide materials for other researchers interested in building a similar tool for different contexts. Automated text analysis allows us to highlight changes of language use during COVID-19 in comparison to a neutral baseline. We use special word clouds to visualize that overall difference. Longitudinally, our time series show spikes in anxiety that can be linked to several events and media reporting. Additionally, we find a marked decrease in anger. The changes last for remarkably long periods of time (up to 12 weeks). We discuss these and more patterns and connect them to the emergence of collective emotions. The interactive dashboard showcasing our data is available online at http://www.mpellert.at/covid19_monitor_austria/. Our work has attracted media attention and is part of a web archive of resources on COVID-19 collected by the Austrian National Library.
computer science
Performing large calculations with a quantum computer will likely require a fault-tolerant architecture based on quantum error-correcting codes. The challenge is to design practical quantum error-correcting codes that perform well against realistic noise using modest resources. Here we show that a variant of the surface code -- the XZZX code -- offers remarkable performance for fault-tolerant quantum computation. The error threshold of this code matches what can be achieved with random codes (hashing) for every single-qubit Pauli noise channel; it is the first explicit code shown to have this universal property. We present numerical evidence that the threshold even exceeds this hashing bound for an experimentally relevant range of noise parameters. Focusing on the common situation where qubit dephasing is the dominant noise, we show that this code has a practical, high-performance decoder and surpasses all previously known thresholds in the realistic setting where syndrome measurements are unreliable. We go on to demonstrate the favourable sub-threshold resource scaling that can be obtained by specialising a code to exploit structure in the noise. We show that it is possible to maintain all of these advantages when we perform fault-tolerant quantum computation.
quantum physics
Driven quantum systems may realize novel phenomena absent in static systems, but driving-induced heating can limit the time-scale on which these persist. We study heating in interacting quantum many-body systems driven by random sequences with $n$-multipolar correlations, corresponding to a polynomially suppressed low frequency spectrum. For $n\geq1$, we find a prethermal regime, the lifetime of which grows algebraically with the driving rate, with exponent $2n+1$. A simple theory based on Fermi's golden rule accounts for this behaviour. The quasiperiodic Thue-Morse sequence corresponds to the $n\to \infty$ limit, and accordingly exhibits an exponentially long-lived prethermal regime. Despite the absence of periodicity in the drive, and in spite of its eventual heat death, the prethermal regime can host versatile non-equilibrium phases, which we illustrate with a random multipolar discrete time crystal.
quantum physics
This paper shows that a deep neural network (DNN) can be used for efficient and distributed channel estimation, quantization, feedback, and downlink multiuser precoding for a frequency-division duplex massive multiple-input multiple-output system in which a base station (BS) serves multiple mobile users, but with rate-limited feedback from the users to the BS. A key observation is that the multiuser channel estimation and feedback problem can be thought of as a distributed source coding problem. In contrast to the traditional approach, where the channel state information (CSI) is estimated and quantized at each user independently, this paper shows that a joint design of pilots and a new DNN architecture, which maps the received pilots directly into feedback bits at the user side and then maps the feedback bits from all the users directly into the precoding matrix at the BS, can significantly improve the overall performance. This paper further proposes robust design strategies with respect to channel parameters and also a generalizable DNN architecture for a varying number of users and number of feedback bits. Numerical results show that the DNN-based approach with short pilot sequences and very limited feedback overhead can already approach the performance of conventional linear precoding schemes with full CSI.
computer science
The topical workshop {\it Strong QCD from Hadron Structure Experiments} took place at Jefferson Lab from Nov. 6-9, 2019. Impressive progress has been achieved in relating hadron structure observables to strong QCD mechanisms, from the {\it ab initio} QCD description of hadron structure through a diverse array of methods that expose emergent phenomena via quasi-particle formation. The wealth of experimental data and the advances in hadron structure theory make it possible to gain insight into strong interaction dynamics in the regime of large quark-gluon coupling (the strong QCD regime), which will address the most challenging problems of the Standard Model on the nature of the dominant part of hadron mass, quark-gluon confinement, and the emergence of the ground and excited state hadrons, as well as atomic nuclei, from QCD. This workshop aimed to develop plans and to facilitate future synergistic efforts between experimentalists, phenomenologists, and theorists working on studies of hadron spectroscopy and structure, with the goal of connecting the properties of hadrons and atomic nuclei available from data to the strong QCD dynamics underlying their emergence from QCD. These results pave the way for a future breakthrough extension in the studies of QCD with an Electron-Ion Collider in the U.S.
high energy physics phenomenology
The flavor changing rare decay $B\to K^{*}(\to K\pi)\ell^+\ell^-$ is one of the most studied modes due to its sensitivity to physics beyond the standard model, and several discrepancies have come to light among the plethora of observables that are measured. In this paper we revisit the analogous baryonic decay mode $\Lambda_{b}\rightarrow \Lambda (\to p\pi) \ell^{+}\ell^{-}$ and present a complete set of ten angular observables that can be measured using this decay mode. Our calculations are done retaining the finite lepton mass, so that the signal of lepton non-universality observed in $B\to K^{*} \ell^+\ell^-$ can be corroborated by the corresponding baryonic decay mode. We show that, due to the parity violating nature of the subsequent $\Lambda\to p\pi$ decay, there exists at least one angular asymmetry that is non-vanishing in the large recoil limit, unlike in the $B\to K^{*}\ell^+\ell^-$ decay mode, making it particularly sensitive to new physics that violates lepton flavor universality.
high energy physics phenomenology
Tracking control for soft robots is challenging due to uncertainties in the system model and environment. Using high feedback gains to overcome this issue results in increased stiffness that destroys the inherent safety property of soft robots. However, accurate models for feed-forward control are often difficult to obtain. In this article, we employ Gaussian Process regression to obtain a data-driven model that is used for the feed-forward compensation of unknown dynamics. The model fidelity is used to adapt the feed-forward and feedback parts, allowing low feedback gains in regions of high model confidence.
electrical engineering and systems science
We study the maximum weight perfect $f$-factor problem on any general simple graph $G=(V,E,w)$ with positive integral edge weights $w$, and $n=|V|$, $m=|E|$. Given a function $f:V\rightarrow \mathbb{N}_+$ on vertices, a perfect $f$-factor is a generalized matching in which every vertex $u$ is matched to $f(u)$ different edges. The previous best algorithms for this problem have running time $O(m f(V))$ [Gabow 2018] or $\tilde{O}(W(f(V))^{2.373})$ [Gabow and Sankowski 2013], where $W$ is the maximum edge weight and $f(V)=\sum_{u\in V}f(u)$. In this paper, we present a scaling algorithm for this problem with running time $\tilde{O}(mn^{2/3}\log W)$. Previously, this bound was known only for bipartite graphs [Gabow and Tarjan 1989]. The running time of our algorithm is independent of $f(V)$, and consequently it is the first to break the $\Omega(mn)$ barrier for large $f(V)$, even for the unweighted $f$-factor problem in general graphs.
computer science
Natural convection is usually complicated by additional factors such as rotation, shear, radiative transfer, compressibility and electromagnetic fields (in the case of electro-conductive fluids). It is shown, using results of numerical simulations and measurements in the atmospheric boundary layer and the solar photosphere, that strong stratification can transform turbulence into deterministic chaos with exponential spectral decay of kinetic energy. When the stratification becomes weaker, the deterministic chaos is replaced by distributed chaos with stretched exponential spectral decay controlled by the second or third moments of the helicity distribution.
physics
This work addresses the joint object discovery problem in videos while utilizing multiple object-related cues. In contrast to the usual spatial fusion approach, a novel appearance fusion approach is presented here. Specifically, this paper proposes an effective process for fusing the different GMMs derived from multiple cues into one GMM. Like any fusion strategy, this approach needs some guidance; the proposed method relies on the reliability and consensus phenomena for guidance. As a case study, we pursue the "video co-localization" object discovery problem to develop our methodology. Our experiments on the YouTube Objects and YouTube Co-localization datasets demonstrate that the proposed method of appearance fusion has a clear advantage over both the spatial fusion strategy and the current state-of-the-art video co-localization methods.
computer science