Columns: text (string, length 11 to 9.77k) and label (string, length 2 to 104).
We will consider the indefinite truncated multidimensional moment problem. Necessary and sufficient conditions for a given truncated multisequence to have a signed representing measure $\mu$ with ${\rm card}\,{\rm supp}\, \mu$ as small as possible are given by the existence of a rank preserving extension of a multivariate Hankel matrix (built from the given truncated multisequence) such that the corresponding associated polynomial ideal is real radical. This result is a special case of a more general characterisation of truncated multisequences with a minimal complex representing measure whose support is symmetric with respect to complex conjugation (which we will call {\it quasi-complex}). One motivation for our results is the fact that a positive semidefinite truncated multisequence need not have a positive representing measure. Thus, our main result gives the potential for computing a signed representing measure $\mu = \mu_+ - \mu_-$, where ${\rm card}\,{\rm supp}\,\mu_-$ is small. We illustrate this point on concrete examples.
mathematics
We discuss the Higgs mass and cosmological constant in the context of an emergent Standard Model, where the gauge symmetries "dissolve" in the extreme ultraviolet. In this scenario the cosmological constant scale is suppressed by a power of the large scale of emergence and is expected to be of similar size to neutrino masses. Cosmological constraints then give an anthropic upper bound on the Higgs mass.
high energy physics phenomenology
While a lot of work in theoretical computer science has gone into optimizing the runtime and space usage of data structures, such work very often neglects a very important component of modern computers: the cache. In doing so, data structures are often developed that achieve theoretically good runtimes but are slow in practice due to a large number of cache misses. In 1999, Frigo et al. introduced the notion of a cache-oblivious algorithm: an algorithm that uses the cache to its advantage, regardless of the size or structure of said cache. Since then, various authors have designed cache-oblivious algorithms and data structures for problems ranging from matrix multiplication to array sorting. We focus in this work on cache-oblivious search trees, i.e., implementing an ordered dictionary in a cache-friendly manner. We will start by presenting an overview of cache-oblivious data structures, especially cache-oblivious search trees. We then give practical results using these cache-oblivious structures on modern-day machinery, comparing them to the standard std::set and other cache-friendly dictionaries such as B-trees.
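For context, here is a minimal sketch (our illustration, not code from the surveyed papers) of the static van Emde Boas layout on which many cache-oblivious search trees are built: a complete binary search tree is stored with the top half of its levels first, then each bottom subtree contiguously, so any root-to-leaf search touches $O(\log_B n)$ blocks for every unknown block size $B$.

```python
# Minimal sketch of a static cache-oblivious search tree (van Emde Boas
# layout).  Assumes n = 2^h - 1 sorted keys; all names are illustrative.

def veb_order(root, height):
    """Memory order (as BFS indices, root = 1) of a complete subtree:
    top half of the levels first, then each bottom subtree contiguously."""
    if height == 1:
        return [root]
    top_h = height // 2
    bot_h = height - top_h
    order = veb_order(root, top_h)
    first_bottom = root << top_h            # BFS index of leftmost bottom root
    for r in range(first_bottom, first_bottom + (1 << top_h)):
        order += veb_order(r, bot_h)
    return order

def build(sorted_keys):
    """Place sorted keys into the vEB layout; returns (memory, pos, height)."""
    height = len(sorted_keys).bit_length()  # n = 2^h - 1  =>  h = n.bit_length()
    values = {}
    def fill(bfs, lo, hi):                  # in-order assignment of keys
        if lo >= hi:
            return
        mid = (lo + hi) // 2
        values[bfs] = sorted_keys[mid]
        fill(2 * bfs, lo, mid)
        fill(2 * bfs + 1, mid + 1, hi)
    fill(1, 0, len(sorted_keys))
    layout = veb_order(1, height)
    memory = [values[b] for b in layout]
    pos = {b: i for i, b in enumerate(layout)}
    return memory, pos, height

def contains(memory, pos, height, key):
    """Ordinary BST search; only the memory addresses follow the vEB layout."""
    bfs = 1
    for _ in range(height):
        v = memory[pos[bfs]]
        if key == v:
            return True
        bfs = 2 * bfs + (key > v)
    return False

memory, pos, h = build(list(range(1, 16)))   # 15 keys, height 4
assert contains(memory, pos, h, 7) and not contains(memory, pos, h, 99)
```

In a production structure the position map would be replaced by implicit index arithmetic, but the sketch shows the essential idea: the search logic is a plain binary search, and only the memory layout changes.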
computer science
We explore the physics of the gyro-resonant cosmic ray streaming instability (CRSI) including the effects of ion-neutral (IN) damping. This is the main damping mechanism in (partially-ionized) atomic and molecular gas, which are the primary components of the interstellar medium (ISM) by mass. Limitation of CRSI by IN damping is important in setting the amplitude of Alfv\'en waves that scatter cosmic rays and control galactic-scale transport. Our study employs the MHD-PIC hybrid fluid-kinetic numerical technique to follow linear growth as well as post-linear and saturation phases. During the linear phase of the instability -- where simulations and analytical theory are in good agreement -- IN damping prevents wave growth at small and large wavelengths, with the unstable bandwidth lower for higher ion-neutral collision rate $\nu_{\rm in}$. Purely MHD effects during the post-linear phase extend the wave spectrum towards larger $k$. In the saturated state, the cosmic ray distribution evolves toward greater isotropy (lower streaming velocity) by scattering off of Alfv\'en waves excited by the instability. In the absence of low-$k$ waves, CRs with sufficiently high momentum are not isotropized. The maximum wave amplitude and the rate of isotropization of the distribution function decrease at higher $\nu_{\rm in}$. When the IN damping rate approaches the maximum growth rate of CRSI, wave growth and isotropization are suppressed. Implications of our results for CR transport in partially ionized ISM phases are discussed.
astrophysics
Estimating the size of hard-to-reach populations is an important problem for many fields. The Network Scale-up Method (NSUM) is a relatively new approach to estimate the size of these hard-to-reach populations by asking respondents the question, "How many X's do you know," where X is the population of interest (e.g. "How many female sex workers do you know?"). The answers to these questions form Aggregated Relational Data (ARD). The NSUM has been used to estimate the size of a variety of subpopulations, including female sex workers, drug users, and even children who have been hospitalized for choking. Within the Network Scale-up methodology, there are a multitude of estimators for the size of the hidden population, including direct estimators, maximum likelihood estimators, and Bayesian estimators. In this article, we first provide an in-depth analysis of ARD properties and the techniques to collect the data. Then, we comprehensively review different estimation methods in terms of the assumptions behind each model, the relationships between the estimators, and the practical considerations of implementing the methods. Finally, we provide a summary of the dominant methods and an extensive list of the applications, and discuss the open problems and potential research directions in this area.
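To make the scale-up idea concrete, the following is a minimal sketch of the basic direct estimator built from ARD (our illustration; the variable names and the Killworth-style degree estimate are assumptions, not a prescription from the reviewed literature):

```python
import numpy as np

def nsum_direct(y_hidden, y_known, known_sizes, N):
    """Basic NSUM scale-up estimate of a hidden population's size.

    y_hidden    : (n,) reported acquaintance counts in the hidden population
    y_known     : (n, L) reported counts in L subpopulations of known size
    known_sizes : (L,) true sizes of the known subpopulations
    N           : total population size
    """
    # Killworth-style network size estimate: d_i = N * sum_k y_ik / sum_k N_k
    d = N * y_known.sum(axis=1) / known_sizes.sum()
    # scale-up estimator: N_hidden = N * sum_i y_i / sum_i d_i
    return N * y_hidden.sum() / d.sum()

# toy usage with made-up numbers
rng = np.random.default_rng(0)
true_d = rng.integers(100, 1000, size=500)            # personal network sizes
N, hidden = 1_000_000, 5_000
y_hidden = rng.binomial(true_d, hidden / N)
known_sizes = np.array([20_000, 50_000, 10_000])
y_known = rng.binomial(true_d[:, None], known_sizes / N)
print(nsum_direct(y_hidden, y_known, known_sizes, N))  # close to 5000
```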
statistics
We present a data-driven framework for strategy synthesis for partially-known switched stochastic systems. The properties of the system are specified using linear temporal logic (LTL) over finite traces (LTLf), which is as expressive as LTL and enables interpretations over finite behaviors. The framework first learns the unknown dynamics via Gaussian process regression. Then, it builds a formal abstraction of the switched system in terms of an uncertain Markov model, namely an Interval Markov Decision Process (IMDP), by accounting for both the stochastic behavior of the system and the uncertainty in the learning step. Next, we synthesize a strategy on the resulting IMDP that maximizes the satisfaction probability of the LTLf specification and is robust against all the uncertainties in the abstraction. This strategy is then refined into a switching strategy for the original stochastic system. We show that this strategy is near-optimal and provide a bound on its distance (error) to the optimal strategy. We experimentally validate our framework on various case studies, including both linear and non-linear switched stochastic systems.
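As a sketch of the robust synthesis step on an IMDP (our illustration under simplifying assumptions: a finite IMDP given as elementwise lower/upper transition bounds and a plain reachability objective rather than a full LTLf specification), robust value iteration lets an adversary pick the worst feasible distribution for each state-action pair while the strategy maximizes over actions:

```python
import numpy as np

def worst_case_expectation(v, p_lo, p_hi):
    """Minimal E[v] over distributions p with p_lo <= p <= p_hi, sum p = 1.
    Assumes sum(p_lo) <= 1 <= sum(p_hi)."""
    order = np.argsort(v)            # give extra mass to low-value states first
    p = p_lo.copy()
    budget = 1.0 - p.sum()
    for s in order:
        add = min(p_hi[s] - p_lo[s], budget)
        p[s] += add
        budget -= add
    return float(p @ v)

def robust_value_iteration(P_lo, P_hi, goal, n_iter=100):
    """P_lo, P_hi: (S, A, S) interval bounds; goal: boolean (S,) target set.
    Returns max-min reachability probabilities and a greedy strategy."""
    S, A, _ = P_lo.shape
    v = goal.astype(float)
    for _ in range(n_iter):
        q = np.array([[worst_case_expectation(v, P_lo[s, a], P_hi[s, a])
                       for a in range(A)] for s in range(S)])
        v = np.where(goal, 1.0, q.max(axis=1))
    return v, q.argmax(axis=1)

# 3 states (0: start, 1: goal, 2: sink), one action, uncertain transitions
P_lo = np.array([[[0.0, 0.3, 0.2]], [[0.0, 1.0, 0.0]], [[0.0, 0.0, 1.0]]])
P_hi = np.array([[[0.0, 0.8, 0.7]], [[0.0, 1.0, 0.0]], [[0.0, 0.0, 1.0]]])
goal = np.array([False, True, False])
v, _ = robust_value_iteration(P_lo, P_hi, goal)
print(v[0])   # 0.3: the adversary pushes as much mass as possible to the sink
```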
electrical engineering and systems science
The depth of a secondary eclipse contains information on both the thermally emitted light component of a hot Jupiter and the reflected light component. If the dayside atmosphere of the planet is assumed to be isothermal, it is possible to disentangle the two. In this work, we analyze 11 eclipse light curves of the hot Jupiter HAT-P-32b obtained at 0.89 $\mu$m in the z' band. We obtain a null detection for the eclipse depth with state-of-the-art precision, $-0.01 \pm 0.10$ ppt. We confirm previous studies showing that a non-inverted atmosphere model is in disagreement with the measured emission spectrum of HAT-P-32b. We derive an upper limit on the reflected light component, and thus on the planetary geometric albedo $A_g$. The 97.5%-confidence upper limit is $A_g < 0.2$. This is the first albedo constraint for HAT-P-32b, and the first z' band albedo value for any exoplanet. It disfavors the influence of large-sized silicate condensates on the planetary dayside. We inferred z' band geometric albedo limits from published eclipse measurements also for the ultra-hot Jupiters WASP-12b, WASP-19b, WASP-103b, and WASP-121b, applying the same method. These values consistently point to a low reflectivity in the optical to near-infrared transition regime for hot to ultra-hot Jupiters.
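For context, the standard relation connecting the reflected-light part of the eclipse depth to the geometric albedo (a textbook formula, not specific to this work) is

$$\delta_{\rm refl} = A_g \left( \frac{R_p}{a} \right)^2,$$

so a null detection of the eclipse depth translates directly into an upper limit on $A_g$ once the planet radius $R_p$ and orbital distance $a$ are known.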
astrophysics
Using first-principles molecular dynamics, we calculated the equation of state and shock Hugoniot of various boron phases. We find a large mismatch between Hugoniots based on existing knowledge of the equilibrium phase diagram and those measured by shock experiments, which could be reconciled if the $\alpha$-B$_{12}$/$\beta \rightarrow\gamma$-B$_{28}$ transition is significantly over-pressurized in boron under shock compression. Our results also indicate that there exists an anomaly and negative Clapeyron slope along the melting curve of boron at 100 GPa and 1500--3000 Kelvin. These results enable in-depth understanding of matter under shock compression, in particular the significance of compression-rate dependence of phase transitions and kinetic effects in experimental measurements.
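For reference, the shock Hugoniot referred to above is the locus of end states satisfying the Rankine-Hugoniot energy condition (a standard relation, added here for context):

$$E - E_0 = \tfrac{1}{2}\,(P + P_0)\,(V_0 - V),$$

where $(E_0, P_0, V_0)$ are the energy, pressure, and volume of the initial state; a first-principles equation of state $E(P, V)$ then determines the Hugoniot curve implicitly.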
condensed matter
Phosphorene is a single elemental two-dimensional semiconductor that has quickly emerged as a high mobility material for transistors and optoelectronic devices. In addition, being a 2D material, it can sustain high levels of strain, enabling sensitive modification of its electronic properties. In this paper, we investigate the strain dependent electrical properties of phosphorene nanocrystals. Performing extensive calculations, we determine the electrical conductance as a function of uniaxial as well as biaxial strain stimulus, and uncover a unique zone phase diagram. This enables us to uncover for the first time conductance oscillations in pristine phosphorene, by simple application of strain. We show how such unconventional current-voltage behaviour is tuneable by the nature of strain, and how an additional gate voltage can modulate the amplitude (peak to valley ratio) of the observed phenomena and its switching efficiency. Furthermore, we show that the switching is highly robust against doping and defects. Our detailed results present new leads for innovations in strain based gauging and high-frequency nanoelectronic switches based on phosphorene.
condensed matter
Following the Unlimited Sampling strategy to alleviate the omnipresent dynamic range barrier, we study the problem of recovering a bandlimited signal from point-wise modulo samples, aiming to connect theoretical guarantees with hardware implementation considerations. Our starting point is a class of non-idealities that we observe in prototyping an unlimited sampling based analog-to-digital converter. To address these non-idealities, we provide a new Fourier domain recovery algorithm. Our approach is validated both in theory and via extensive experiments on our prototype analog-to-digital converter, providing the first demonstration of unlimited sampling for data arising from real hardware, for both the current and previous approaches. Advantages of our algorithm include that it is agnostic to the modulo threshold and that it can handle arbitrary folding times. We expect that the end-to-end realization studied in this paper will pave the way for exploring the unlimited sampling methodology in a number of real world applications.
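To illustrate the underlying principle (a time-domain toy version under an oversampling assumption, not the Fourier-domain algorithm of the paper): centered modulo folds samples into $[-\lambda, \lambda)$, and if the signal varies by less than $\lambda$ between samples, folding the first difference back into that range recovers the true differences, which a cumulative sum then integrates.

```python
import numpy as np

def mod_fold(x, lam):
    """Centered modulo: fold values into [-lam, lam)."""
    return (x + lam) % (2 * lam) - lam

def unwrap_modulo(y, lam):
    """Recover x (up to the fold offset of x[0]) from y = mod_fold(x, lam),
    assuming |x[t+1] - x[t]| < lam for all t."""
    d = mod_fold(np.diff(y), lam)     # folded differences equal true ones
    return y[0] + np.concatenate(([0.0], np.cumsum(d)))

# quick check on a toy bandlimited signal
t = np.linspace(0, 1, 400)
x = 3.0 * np.sin(2 * np.pi * 3 * t)
lam = 1.0
assert np.allclose(unwrap_modulo(mod_fold(x, lam), lam), x)
```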
computer science
Objective: A novel structure based on a channel-wise attention mechanism is presented in this paper. Embedding the proposed structure, an efficient classification model that accepts multi-lead electrocardiogram (ECG) signals as input is constructed. Methods: One-dimensional convolutional neural networks (CNNs) have proven to be effective in many classification tasks, enabling the automatic extraction of features while classifying targets. We implement the residual connection and design a structure that can learn the weights from the information contained in different channels of the input feature map during the training process. An indicator named mean square deviation is introduced to monitor the performance of a particular model segment in the classification task on two of the five ECG classes. The data in the MIT-BIH arrhythmia database are used and a series of control experiments is conducted. Results: Utilizing both leads of the ECG signals as input to the neural network classifier achieves better classification results than using single channel inputs in different application scenarios. Models embedded with the channel-wise attention structure always achieve better scores on sensitivity and precision than the plain ResNet models. The proposed model exceeds the performance of most of the state-of-the-art models in ventricular ectopic beat (VEB) classification, and achieves competitive scores for supraventricular ectopic beats (SVEB). Conclusion: Adopting more lead ECG signals as input can increase the dimensions of the input feature maps, helping to improve both the performance and generalization of the network model. Significance: Due to its end-to-end characteristics and its extensible design for multi-lead heart disease diagnosis, the proposed model can be used for real-time tracking of ECG waveforms on Holter or wearable devices.
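A minimal PyTorch sketch of a channel-wise attention block of the squeeze-and-excitation flavor described above (our own illustration with hypothetical layer sizes, not the paper's exact structure): global pooling summarizes each channel, a small bottleneck MLP learns per-channel weights, and the feature map is rescaled channel by channel.

```python
import torch
import torch.nn as nn

class ChannelAttention1d(nn.Module):
    def __init__(self, channels, reduction=4):   # reduction ratio is assumed
        super().__init__()
        self.pool = nn.AdaptiveAvgPool1d(1)       # squeeze: (B, C, L) -> (B, C, 1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                          # per-channel weights in (0, 1)
        )

    def forward(self, x):                          # x: (B, C, L) ECG feature map
        w = self.fc(self.pool(x).squeeze(-1))      # (B, C) channel weights
        return x * w.unsqueeze(-1)                 # excite: rescale each channel

x = torch.randn(8, 64, 360)                        # toy batch of feature maps
print(ChannelAttention1d(64)(x).shape)             # torch.Size([8, 64, 360])
```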
electrical engineering and systems science
In this paper, we study the impact of stealthy attacks on a Cyber-Physical System (CPS) modeled as a stochastic linear system. An attack is characterised by a malicious injection into the system through input, output or both, and it is called stealthy (resp.~strictly stealthy) if it produces bounded changes (resp.~no changes) in the detection residue. Correspondingly, a CPS is called vulnerable (resp.~strictly vulnerable) if it can be destabilized by a stealthy attack (resp.~strictly stealthy attack). We provide necessary and sufficient conditions for vulnerability and strict vulnerability. For the invulnerable case, we also provide a performance bound for the difference between the healthy and attacked systems. Numerical examples are provided to illustrate the theoretical results.
electrical engineering and systems science
The local supertwistor formalism, which involves a superconformal connection acting on the bundle of such objects over superspace, is used to investigate superconformal geometry in six dimensions. The geometries corresponding to (1, 0) and (2, 0) off-shell conformal supergravity multiplets, as well as the associated finite super-Weyl transformations, are derived.
high energy physics theory
Among all two-dimensional commutative algebras of the second rank, the totality of all their biharmonic bases $\{e_1,e_2\}$, satisfying the conditions $\left(e_1^2+ e_2^2\right)^{2} = 0$, $e_1^2 + e_2^2 \ne 0$, is found in an explicit form. A set of "analytic" (monogenic) functions satisfying the biharmonic equation and defined in the real planes generated by the biharmonic bases is built. A characterization of biharmonic functions in bounded simply connected domains by means of the real components of certain monogenic functions is found.
mathematics
The Daniel K. Inouye Solar Telescope (DKIST) will revolutionize our ability to measure, understand and model the basic physical processes that control the structure and dynamics of the Sun and its atmosphere. The first-light DKIST images, released publicly on 29 January 2020, only hint at the extraordinary capabilities which will accompany full commissioning of the five facility instruments. With this Critical Science Plan (CSP) we attempt to anticipate some of what those capabilities will enable, providing a snapshot of some of the scientific pursuits that the Daniel K. Inouye Solar Telescope hopes to engage as start-of-operations nears. The work builds on the combined contributions of the DKIST Science Working Group (SWG) and CSP Community members, who generously shared their experiences, plans, knowledge and dreams. Discussion is primarily focused on those issues to which DKIST will uniquely contribute.
astrophysics
We propose a novel approach to image segmentation based on combining implicit spline representations with deep convolutional neural networks. This is done by predicting the control points of a bivariate spline function whose zero-set represents the segmentation boundary. We adapt several existing neural network architectures and design novel loss functions that are tailored towards providing implicit spline curve approximations. The method is evaluated on a congenital heart disease computed tomography medical imaging dataset. Experiments are carried out by measuring performance in various standard metrics for different networks and loss functions. We find that splines of bidegree $(1,1)$ with $128\times128$ coefficient resolution perform optimally for $512\times 512$ resolution CT images. For our best network, we achieve an average volumetric test Dice score of almost 92%, which reaches the state of the art for this congenital heart disease dataset.
electrical engineering and systems science
In the framework of spin-0 $s$-channel dark matter (DM) simplified models, we reassess the sensitivity of future LHC runs to the production of DM in association with top quarks. We consider two different missing transverse energy ($E_T^{\mathrm{miss}}$) signatures, namely production of DM in association with either a $t \bar t$ pair or a top quark and a $W$ boson, where the latter channel has not been the focus of a dedicated analysis prior to this work. Final states with two leptons are studied and a realistic analysis strategy is developed that simultaneously takes into account both channels. Compared to other existing search strategies the proposed combination of $t \bar t + E_T^{\mathrm{miss}}$ and $t W + E_T^{\mathrm{miss}}$ production provides a significantly improved coverage of the parameter space of spin-0 $s$-channel DM simplified models.
high energy physics phenomenology
Fire incidence is a big problem for every local government unit in the Philippines. The two most detrimental effects of fire incidence are economic loss and loss of life. To mitigate these losses, proper planning and implementation of control measures must be done. An essential aspect of planning and control measures is the prediction of possible fire incidences. This study is conducted to analyze the historical data to create a forecasting model for the fire incidence in Davao City. Results of the analyses show that fire incidence has no trend or seasonality, and occurrences of fire are neither consistently increasing nor decreasing over time. Furthermore, the absence of seasonality in the data indicates that a surge of fire incidence may occur at any time of the year. Therefore, fire prevention activities should be done all year round and not just during fire prevention month.
statistics
Epoch of Reionization data analysis requires unprecedented levels of accuracy in radio interferometer pipelines. We have developed an imaging power spectrum analysis to meet these requirements and generate robust 21 cm EoR measurements. In this work, we build a signal path framework to mathematically describe each step in the analysis, from data reduction in the FHD package to power spectrum generation in the $\varepsilon$ppsilon package. In particular, we focus on the distinguishing characteristics of FHD/$\varepsilon$ppsilon: highly accurate spectral calibration, extensive data verification products, and end-to-end error propagation. We present our key data analysis products in detail to facilitate understanding of the prominent systematics in image-based power spectrum analyses. As a verification of our analysis, we also highlight a full-pipeline analysis simulation to demonstrate signal preservation and the absence of signal loss. This careful treatment ensures that the FHD/$\varepsilon$ppsilon power spectrum pipeline can reduce radio interferometric data to produce credible 21 cm EoR measurements.
astrophysics
Based on the Dyson-Schwinger equation, we compute the resummed gluon propagator in a holonomous plasma that is described by introducing a constant background field for the vector potential $A_{0}$. Due to the transversality of the holonomous Hard-Thermal-Loop in the gluon self-energy, the resummed propagator has a Lorentz structure similar to that in the perturbative Quark-Gluon Plasma where the holonomy vanishes. As for the color structures, since diagonal gluons are mixed in the over-complete double line basis, only the propagators for off-diagonal gluons can be obtained unambiguously. On the other hand, multiplied by a projection operator, the propagators for diagonal gluons, which exhibit a highly non-trivial dependence on the background field, are uniquely determined after summing over the color indices. As an application of these results, we consider the Debye screening effect on the in-medium binding of quarkonium states by analyzing the static limit of the resummed gluon propagator. In general, introducing non-zero holonomy merely amounts to modifications of the perturbative screening mass $m_D$, and the resulting heavy-quark potential, which retains the standard Debye screened form, is always deeper than the screened potential in the perturbative Quark-Gluon Plasma. Therefore, a weaker screening, and thus a more tightly bound quarkonium state, can be expected in a holonomous plasma. In addition, both the diagonal and off-diagonal gluons become distinguishable by their modified screening masses ${\cal M}_D$, and the temperature dependence of the ratio ${\cal M}_D/T$ shows a behavior very similar to that found in lattice simulations.
high energy physics phenomenology
We study some combinatorial properties of higher-dimensional partitions which generalize plane partitions. We present a natural bijection between $d$-dimensional partitions and $d$-dimensional arrays of nonnegative integers. This bijection has a number of important applications. We introduce a statistic on $d$-dimensional partitions, called the corner-hook volume, whose generating function has the formula of MacMahon's conjecture. We obtain multivariable formulas whose specializations give analogues of various formulas known for plane partitions. We also introduce higher-dimensional analogues of dual Grothendieck polynomials which are quasisymmetric functions and whose specializations enumerate higher-dimensional partitions of a given shape. Finally, we show probabilistic connections with a directed last passage percolation model in $\mathbb{Z}^d$.
mathematics
In this paper, we study the Kakeya type inequality in $\mathbb{R}^n$ for $n\ge2$ using the theory of multipliers, and we obtain several useful inequalities.
mathematics
In this paper, we address the problem of reconstructing coverage maps from path-loss measurements in cellular networks. We propose and evaluate two kernel-based adaptive online algorithms as an alternative to typical offline methods. The proposed algorithms are application-tailored extensions of powerful iterative methods such as the adaptive projected subgradient method and a state-of-the-art adaptive multikernel method. Assuming that the moving trajectories of users are available, it is shown how side information can be incorporated in the algorithms to improve their convergence performance and the quality of the estimation. The complexity is significantly reduced by imposing sparsity-awareness in the sense that the algorithms exploit the compressibility of the measurement data to reduce the amount of data which is saved and processed. Finally, we present extensive simulations based on realistic data to show that our algorithms provide fast, robust estimates of coverage maps in real-world scenarios. Envisioned applications include path-loss prediction along trajectories of mobile users as a building block for anticipatory buffering or traffic offloading.
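As a simplified illustration of the online kernel-based estimation idea (our sketch; the actual algorithms are the APSM and multikernel methods cited above, and the sparsification they employ is omitted here), each new path-loss measurement triggers a relaxed projection of the current estimate onto the set of functions consistent with that measurement:

```python
import numpy as np

def gauss(x, c, h):
    """Gaussian kernel between input x and expansion center c."""
    return np.exp(-np.sum((x - c) ** 2) / (2 * h ** 2))

class OnlineKernelEstimator:
    def __init__(self, step=0.5, h=25.0):   # step size and bandwidth assumed
        self.centers, self.coefs = [], []
        self.step, self.h = step, h

    def predict(self, x):
        return sum(a * gauss(x, c, self.h)
                   for a, c in zip(self.coefs, self.centers))

    def update(self, x, y):
        err = y - self.predict(x)            # residual at the new location
        self.centers.append(x)
        self.coefs.append(self.step * err)   # relaxed projection step (k(x,x)=1)

# toy usage: learn a synthetic planar path-loss field from noisy samples
est = OnlineKernelEstimator()
for _ in range(200):
    loc = np.random.uniform(0, 100, size=2)
    pl = 40 + 0.3 * loc.sum() + np.random.randn()
    est.update(loc, pl)
print(est.predict(np.array([50.0, 50.0])))   # roughly 70 for this toy field
```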
computer science
The previously derived vortex atomic form factor, which is directly related to a differential reaction cross section, is used to analyze the elastic scattering of twisted vortex photons with a hydrogenic atomic target. The vortex atomic form factor is expressed in a unified spherical basis and implemented in a MATLAB code that numerically evaluates it using globally adaptive quadrature. The results of this code show the influence of variation in the photon wavelength, Rayleigh range, and scattering angle on differential reaction cross sections and the twist factor, which measures the impact of introducing orbital angular momentum. The recently suggested double mirror effect, which accounts for a non-zero effect in the forward direction for twisted photon interactions, is numerically confirmed. Finally, it is shown that differential reaction cross sections are greatly amplified when the Rayleigh range and photon wavelength are brought close to the scale of an atom. Experimental considerations and applications are briefly discussed, including quantum information, in which the scattering of twisted photons on atomic targets can be used to transfer information between light and matter.
quantum physics
Large star-to-star abundance variations are direct evidence of multiple stellar populations in Galactic globular clusters (GCs). The main and most widespread chemical signature is the anti-correlation of the stellar Na and O abundances. The interquartile range (IQR) of the [O/Na] ratio is well suited to quantifying the extent of the anti-correlation and to probe its links to global cluster parameters. However, since it is quite time consuming to obtain precise abundances from spectroscopy for large samples of stars in GCs, here we show empirical calibrations of IQR[O/Na] based on the O, Na abundances homogeneously derived from more than 2000 red giants in 22 GCs in our FLAMES survey. We find a statistically robust bivariate correlation of IQR as a function of the total luminosity (a proxy for mass) and cluster concentration c. Calibrated and observed values lie along the identity line when a term accounting for the horizontal branch (HB) morphology is added to the calibration, from which we obtained empirical values for 95 GCs. Spreads in proton-capture elements O and Na are found for all GCs in the luminosity range from Mv=-3.76 to Mv=-9.98. This calibration reproduces in a self-consistent picture the link of abundance variations in light elements with the He enhancements and its effect on the stellar distribution on the HB. We show that the spreads in light elements seem already to be dependent on the initial GC masses. The dependence of IQR on structural parameters stems from the well known correlation between c and Mv, which is likely to be of primordial origin. Empirical estimates can be used to extend our investigation of multiple stellar populations to GCs in external galaxies, up to M31, where even integrated light spectroscopy may currently provide only a hint of such a phenomenon.
astrophysics
The passive transient response of tetanized muscles is usually simulated using the classical Huxley-Simmons (HS) model. It predicts negative effective elastic stiffness in the state of isometric contractions (stall conditions), which can potentially trigger spatial inhomogeneity at the scale of the whole muscle fiber. Such instability has not been observed. Here we argue that the passive stabilization of the homogeneous state in real muscles may be due to the steric short-range interaction between individual myosin heads, which competes with the long-range elastic interaction induced by semi-rigid myosin backbones. We construct a phase diagram for the HS-type model accounting for such competing interactions and show that the resulting mechanical response in stall conditions is strongly influenced by a tricritical point. In addition to the coherent pre- and post-power stroke configurations anticipated by the original HS theory, the augmented model predicts the stability of configurations with pre- and post-power stroke cross-bridges finely mixed. In this new "phase," the overall stiffness of a half-sarcomere is positive, which suggests that it may adequately represent the physiological state of isometric contractions.
physics
Tremendous progress has been witnessed in artificial intelligence, driven by neural-network-backed deep learning systems and applications. As a representative deep learning framework, the Generative Adversarial Network (GAN) is widely used for generating artificial images, text-to-image synthesis, and image augmentation across areas of science, the arts, and video games. However, GANs are very computationally expensive, sometimes computationally prohibitive, and training a GAN may suffer from convergence failure and modal collapse. Aiming at acceleration on practical quantum computers, we propose QuGAN, a quantum GAN architecture that provides stable differentiation, quantum-state-based gradients and significantly reduced parameter sets. The QuGAN architecture runs the discriminator and the generator purely on quantum hardware and utilizes the swap test on qubits to calculate the values of loss functions. Built on quantum layers, QuGAN is able to achieve similar performance with a 98.5% reduction in the parameter set when compared to classical GANs. With the same number of parameters, additionally, QuGAN outperforms other quantum-based GANs in the literature by up to 125.0% in terms of similarity between generated distributions and original datasets.
quantum physics
Given a prime power $q$ and positive integers $m,t,e$ with $e > mt/2$, we determine the number of all monic irreducible polynomials $f(x)$ of degree $m$ with coefficients in $\mathbb{F}_q$ such that $f(x^t)$ contains an irreducible factor of degree $e$. Polynomials with these properties are important for justifying randomised algorithms for computing with matrix groups.
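For very small parameters the count can be checked by brute force; here is a minimal sketch (our illustration, restricted to prime $q$ and far less efficient than the counting formula the abstract refers to):

```python
# Brute-force check for small prime q: count monic irreducible f of degree m
# over F_q such that f(x^t) has an irreducible factor of degree e.
from itertools import product
from sympy import symbols, factor_list, degree

x = symbols('x')

def monic_irreducibles(p, m):
    """Yield all monic irreducible polynomials of degree m over F_p."""
    for tail in product(range(p), repeat=m):
        f = x**m + sum(int(c) * x**(m - 1 - i) for i, c in enumerate(tail))
        _, facs = factor_list(f, modulus=p)
        if len(facs) == 1 and facs[0][1] == 1:   # a single factor, multiplicity 1
            yield f

def count(p, m, t, e):
    """Number of monic irreducible f (degree m) such that f(x^t) contains an
    irreducible factor of degree e over F_p."""
    total = 0
    for f in monic_irreducibles(p, m):
        _, facs = factor_list(f.subs(x, x**t), modulus=p)
        if any(degree(g, x) == e for g, _ in facs):
            total += 1
    return total

print(count(3, 2, 2, 4))   # example with e = 4 > m*t/2 = 2
```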
mathematics
Umbral flashes are sudden brightenings commonly visible in the core of chromospheric lines. Theoretical and numerical modeling suggest that they are produced by the propagation of shock waves. According to these models and early observations, umbral flashes are associated with upflows. However, recent studies have reported umbral flashes in downflowing atmospheres. We aim to understand the origin of downflowing umbral flashes. We explore how the existence of standing waves in the umbral chromosphere impacts the generation of flashed profiles. We performed numerical simulations of wave propagation in a sunspot umbra with the code MANCHA. The Stokes profiles of the Ca II 8542 \AA\ line were synthesized with NICOLE. For freely-propagating waves, the chromospheric temperature enhancements of the oscillations are in phase with velocity upflows. In this case, the intensity core of the Ca II 8542 \AA\ line is heated during the upflowing stage of the oscillation. If we consider a different scenario with a resonant cavity, the wave reflections at the sharp temperature gradient of the transition region lead to standing oscillations. In this situation, temperature fluctuations are shifted backward and temperature enhancements partially coincide with the downflowing stage of the oscillation. In umbral flashes produced by standing oscillations, the reversal of the emission feature is produced when the oscillation is downflowing. The chromospheric temperature keeps increasing while the atmosphere is changing from a downflow to an upflow. During the appearance of flashed Ca II 8542 \AA\ cores, the atmosphere is upflowing most of the time, and only 38\% of the flashed profiles are associated with downflows. We find a scenario that remarkably explains the recent empirical findings of downflowing umbral flashes as a natural consequence of the presence of standing oscillations above sunspot umbrae.
astrophysics
Investigation of dynamic processes in cell biology very often relies on the observation in two dimensions of 3D biological processes. Consequently, the data are partial, and statistical methods and models are required to recover the parameters describing the dynamical processes. In the case of molecules moving over a 3D surface, such as proteins on the wall of a bacterial cell, a large portion of the 3D surface is not observed in 2D time-lapse microscopy. It follows that biomolecules may disappear for a period of time in a region of interest, and then reappear later. Assuming Brownian motion with drift, we address the mathematical problem of the reconstruction of biomolecule trajectories on a cylindrical surface. A subregion of the cylinder is typically recorded during the observation period, and biomolecules may appear or disappear in any place of the 3D surface. The performance of the method is demonstrated on simulated particle trajectories that mimic MreB protein dynamics observed in 2D time-lapse fluorescence microscopy in rod-shaped bacteria.
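A minimal simulation of this observation model (our toy version; all parameter values are arbitrary): drifted Brownian motion on a cylinder, of which only the front half is visible to the camera.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_cylinder(n_steps=2000, dt=0.01, drift=(0.3, 0.1), sigma=0.5, R=1.0):
    """Drifted Brownian motion on a cylinder of radius R; returns the 2D
    projection seen by the camera (NaN while the molecule is on the back)."""
    dW = sigma * np.sqrt(dt) * rng.standard_normal((2, n_steps))
    s = np.cumsum(drift[0] * dt + dW[0])           # arc-length coordinate
    z = np.cumsum(drift[1] * dt + dW[1])           # height coordinate
    theta = (s / R + np.pi) % (2 * np.pi) - np.pi  # wrap angle to (-pi, pi]
    visible = np.abs(theta) < np.pi / 2            # front half faces the camera
    x = np.where(visible, R * np.sin(theta), np.nan)
    y = np.where(visible, z, np.nan)
    return x, y, visible

x, y, visible = simulate_cylinder()
print(f"observed fraction of the trajectory: {visible.mean():.2f}")
```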
statistics
This paper presents our preliminary results with ABEONA, an edge-to-cloud architecture that allows migrating tasks from low-energy, resource-constrained devices on the edge up to the cloud. Our preliminary results on artificial and real-world datasets show that it is possible to execute workloads in a more energy-efficient manner by scaling horizontally at the edge, without negatively affecting the execution runtime.
computer science
We investigate the local time $(T_{loc})$ statistics for a run and tumble particle (RTP) in a one-dimensional inhomogeneous medium. The inhomogeneity is introduced by considering the position dependent rate of the form $R(x) = \gamma \frac{|x|^{\alpha}}{l^{\alpha}}$ with $\alpha \geq 0$. For $\alpha =0$, we derive the probability distribution of $T_{loc}$ exactly, which is expressed as a series of $\delta$-functions in which the coefficients can be interpreted as the probability of multiple revisits of the particle to the origin starting from the origin. For general $\alpha$, we show that the typical fluctuations of $T_{loc}$ scale with time as $T_{loc} \sim t^{\frac{1+\alpha}{2+\alpha}}$ for large $t$ and their probability distribution possesses a scaling behaviour described by a scaling function which we have computed analytically. In the second part, we study the statistics of $T_{loc}$ till the RTP makes a first passage to $x=M~(>0)$. In this case also, we show that the probability distribution can be expressed as a series sum of $\delta$-functions for all values of $\alpha~(\geq 0)$ with coefficients appearing from appropriate exit problems. All our analytical findings are supported by numerical simulations.
condensed matter
M\"obius invariance is used to construct gluon tree amplitudes in the Cachazo, He, and Yuan (CHY) formalism. If it is equally effective in steering the construction of off-shell tree amplitudes, then the S-matrix CHY theory can be used to replace the Lagrangian Yang-Mills theory. In the process of investigating this possibility, we find that the CHY formula can indeed be modified to obtain a M\"obius invariant off-shell amplitude, but unfortunately this modified amplitude $M_P$ is not the Yang-Mills amplitude because it lacks gauge invariance. A complementary amplitude $M_Q$ must be added to restore gauge invariance, but its construction relies on the Lagrangian and not M\"obius invariance. Although neither $M_P$ nor $M_Q$ is fully gauge invariant, both are partially gauge invariant in a sense to be explained. This partial gauge invariance turns out to be very useful for checking calculations. A Feynman amplitude so split into the sum of $M_P$ and $M_Q$ also contains fewer terms.
high energy physics theory
We present a theoretical model and experimental demonstration for deformations of a thin liquid layer due to an electric field established by surface electrodes. We model the spatial electric field produced by a pair of parallel electrodes and use it to evaluate the stress on the interface through Maxwell stresses. By coupling this force with the Young-Laplace equation, we obtain the deformation of the interface. To validate our theory, we design an experimental setup which uses microfabricated electrodes to achieve spatial dielectrophoretic actuation of a thin liquid film, while providing measurements of microscale deformations through digital holographic microscopy. We characterize the deformation as a function of the electrode-pair geometry and film thickness, showing very good agreement with the model. Based on the insights from the characterization of the system, we pattern conductive lines of electrode pairs on the surface of a microfluidic chamber and demonstrate the ability to produce complex two-dimensional deformations. We demonstrate that the films can remain in liquid form and be dynamically modulated between different configurations or polymerized to create solid structures with high surface quality.
condensed matter
High-dimensional black-box optimisation remains an important yet notoriously challenging problem. Despite the success of Bayesian optimisation methods on continuous domains, domains that are categorical, or that mix continuous and categorical variables, remain challenging. We propose a novel solution -- we combine local optimisation with a tailored kernel design, effectively handling high-dimensional categorical and mixed search spaces, whilst retaining sample efficiency. We further derive a convergence guarantee for the proposed approach. Finally, we demonstrate empirically that our method outperforms the current baselines on a variety of synthetic and real-world tasks in terms of performance, computational costs, or both.
statistics
So-called quantum limits and their achievement are important themes in physics. Heisenberg's uncertainty relations are the most famous of them, but they are not universally valid and are violated in general. In recent years, the reformulation of uncertainty relations has been actively studied, and several universally valid uncertainty relations have been derived. On the other hand, several measuring models, in particular for spin-1/2 measurements, have been constructed and quantitatively examined. However, there are not so many studies on simultaneous measurements of position and momentum despite their importance. Here we show that an error-trade-off relation (ETR), called the Branciard-Ozawa ETR, for simultaneous measurements of position and momentum gives the achievable bound in minimum uncertainty states. We construct linear simultaneous measurements of position and momentum that achieve the bound of the Branciard-Ozawa ETR in each minimum uncertainty state. To check their performance, we then calculate the probability distributions and the families of posterior states, the sets of states after the measurements, when using them. The results of the paper show the possibility of developing a theory of simultaneous measurements of incompatible observables. In the future, it may be widely applied to quantum information processing.
quantum physics
Post-processing is a significant step in quantum key distribution (QKD), where it is used for correcting the quantum-channel noise errors and distilling identical corrected keys between two distant legitimate parties. An efficient error reconciliation protocol, which can lead to an increase in the secure key generation rate, is one of the main performance indicators of QKD setups. In this paper, we propose a reconciliation scheme based on multiple low-density parity-check (LDPC) codes, which can provide remarkable perspectives for highly efficient information reconciliation. Testing our approach through data simulation, we show that the proposed scheme, combined with multi-syndrome-based error rate estimation, allows a more accurate estimate of the error rate before error correction than random sampling and single-syndrome estimation techniques, as well as a significant increase in the efficiency of the procedure without compromising security or sacrificing reconciliation efficiency.
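To illustrate the syndrome-based estimation idea (our sketch, not the paper's multi-syndrome scheme): for a random parity check of weight $w$, Alice's and Bob's parities disagree with probability $\left(1 - (1-2q)^w\right)/2$ at quantum bit error rate $q$, so the observed fraction of mismatched syndrome bits can be inverted to estimate $q$ without sacrificing key bits.

```python
import numpy as np

rng = np.random.default_rng(1)

def estimate_qber(key_a, key_b, H):
    """Estimate the QBER from syndrome mismatches.
    H: binary parity-check matrix (m, n) with constant row weight w."""
    w = int(H[0].sum())
    mismatch = np.mean((H @ key_a) % 2 != (H @ key_b) % 2)
    # invert P(mismatch) = (1 - (1 - 2q)^w) / 2
    return (1 - (1 - 2 * mismatch) ** (1 / w)) / 2

# toy usage: a random sparse parity-check matrix and a noisy copy of the key
n, m, w, q = 10_000, 2_000, 8, 0.03
key_a = rng.integers(0, 2, n)
key_b = (key_a + (rng.random(n) < q)) % 2
H = np.zeros((m, n), dtype=int)
for row in H:
    row[rng.choice(n, size=w, replace=False)] = 1
print(estimate_qber(key_a, key_b, H))   # close to 0.03
```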
quantum physics
We analyze quantum fluctuations around black hole solutions to the Jackiw-Teitelboim model. We use harmonic analysis on Euclidean AdS$_2$ to show that the logarithmic corrections to the partition function are determined entirely by quadratic holomorphic differentials, even when conformal symmetry is broken and harmonic modes are no longer true zero modes. Our quantum-corrected partition function agrees precisely with the SYK result. We argue that our effective quantum field theory methods and results generalize to other theories of two-dimensional dilaton gravity.
high energy physics theory
Accurate day-ahead individual residential load forecasting is of great importance to various applications of the smart grid in the day-ahead market. Deep learning, as a powerful machine learning technology, has shown great advantages and promising applications in load forecasting tasks. However, deep learning is a computationally hungry method, and requires high costs (e.g., time, energy and CO2 emissions) to train a deep learning model, which aggravates the energy crisis and imposes a substantial burden on the environment. As a consequence, deep learning methods are difficult to popularize and apply in the real smart grid environment. In this paper, we propose a low training cost model based on a convolutional neural network, namely LoadCNN, for next-day load forecasting of individual residents. The experiments show that the training time of LoadCNN is only approximately 1/54 of that of other state-of-the-art models, and its energy consumption and CO2 emissions are only approximately 1/45 of those of other state-of-the-art models on the same indicators. Meanwhile, the prediction accuracy of our model equals that of current state-of-the-art models, making LoadCNN the first load forecasting model to simultaneously achieve high prediction accuracy and low training costs. LoadCNN is an efficient green model that can be deployed in a realistic smart grid environment quickly, cost-effectively, and in an environmentally friendly manner.
electrical engineering and systems science
Many modern data sets require inference methods that can estimate the shared and individual-specific components of variability in collections of matrices that change over time. Promising methods have been developed to analyze these types of data in static cases, but very few approaches are available for dynamic settings. To address this gap, we consider novel models and inference methods for pairs of matrices in which the columns correspond to multivariate observations at different time points. In order to characterize common and individual features, we propose a Bayesian dynamic factor modeling framework called Time Aligned Common and Individual Factor Analysis (TACIFA) that includes uncertainty in time alignment through an unknown warping function. We provide theoretical support for the proposed model, showing identifiability and posterior concentration. The structure enables efficient computation through a Hamiltonian Monte Carlo (HMC) algorithm. We show excellent performance in simulations, and illustrate the method through application to a social synchrony experiment.
statistics
In this paper we study the scattering of non-radial solutions in the energy space to a coupled system of nonlinear Schr\"{o}dinger equations with quadratic-type growth interactions in dimension five, without the mass-resonance condition. Our approach is based on the recent technique introduced by Dodson and Murphy, which relies on an interaction Morawetz estimate. It is proved that any solution below the ground states scatters in time.
mathematics
Motivated by models of signaling pathways in B lymphocytes, which have extremely large nuclei, we study the question of how reaction-diffusion equations in thin $2D$ domains may be approximated by diffusion equations in regions of smaller dimensions. In particular, we study how transmission conditions featuring in the approximating equations become integral parts of the limit master equation. We devise a scheme which, by appropriate rescaling of coefficients and finding a common reference space for all Feller semigroups involved, allows deriving the form of the limit equation formally. The results obtained, expressed as convergence theorems for the Feller semigroups, may also be interpreted as a weak convergence of the underlying stochastic processes.
mathematics
In this article, we derive a novel non-reversible, continuous-time Markov chain Monte Carlo (MCMC) sampler, called the Coordinate Sampler, based on a piecewise deterministic Markov process (PDMP), which can be seen as a variant of the Zigzag sampler. In addition to providing a theoretical validation for this new sampling algorithm, we show that the Markov chain it induces exhibits geometric ergodicity for distributions whose tails decay at least as fast as an exponential distribution and at most as fast as a Gaussian distribution. Several numerical examples highlight that our coordinate sampler is more efficient than the Zigzag sampler in terms of effective sample size.
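To make the PDMP mechanics concrete, here is a minimal sketch of a Zigzag-type sampler for a standard Gaussian target (our illustration; the Coordinate Sampler itself differs in how velocities are refreshed at event times). For this target the event times can be sampled exactly by inverting the integrated rate, so no thinning is needed.

```python
import numpy as np

rng = np.random.default_rng(0)

def first_event_time(a, E):
    """First time tau with int_0^tau max(a + s, 0) ds = E (exact inversion)."""
    if a >= 0:
        return -a + np.sqrt(a * a + 2 * E)
    return -a + np.sqrt(2 * E)        # no rate until s = -a, then quadratic

def zigzag_gaussian(d=2, T=1000.0):
    """Zigzag PDMP targeting N(0, I_d): ballistic moves with unit-speed
    velocities; at each event a single coordinate velocity flips."""
    x, v = np.zeros(d), rng.choice([-1.0, 1.0], d)
    t, skeleton = 0.0, [(0.0, x.copy())]
    while t < T:
        # for N(0, I), grad U = x, so rate_i(s) = max(v_i x_i + s, 0)
        taus = [first_event_time(v[i] * x[i], rng.exponential())
                for i in range(d)]
        i = int(np.argmin(taus))
        tau = taus[i]
        x, t = x + tau * v, t + tau
        v[i] = -v[i]                  # flip only the triggering coordinate
        skeleton.append((t, x.copy()))
    return skeleton

sk = zigzag_gaussian()
print(len(sk), sk[-1][0])             # number of events and final time
```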
statistics
Current flicker mitigation (or DC-balance) solutions based on run-length limited (RLL) decoding algorithms are high in complexity, suffer from reduced code rates, or are limited in application to hard-decision forward error correction (FEC) decoders. Fortunately, non-RLL DC-balance solutions can overcome the drawbacks of RLL-based algorithms, but they suffer from increased system latency, low code rates or inferior error-correction performance. Recently, a non-RLL flicker mitigation solution based on polar codes has proved to be a most promising approach due to its naturally equal probabilities of short runs of 1's and 0's and its high error-correction performance. However, we found that this solution can maintain DC balance only when the data frame length is sufficiently long. Therefore, such solutions are not suitable for beacon-based visible light communication (VLC) systems, which usually transmit ID information in small-size data frames. In this paper, we introduce a flicker mitigation solution designed for beacon-based VLC systems that combines a simple pre-scrambler with a (256,158) non-systematic polar encoder.
computer science
In this paper we classify M\"{o}bius invariant differential operators of second order in two dimensional Euclidean space, and establish a Liouville type theorem for general M\"{o}bius invariant elliptic equations.
mathematics
In some inflation scenarios such as $R^{2}$ inflation, a gravitational scalar degree of freedom called the scalaron is identified as the inflaton. The scalaron couples linearly to matter via the trace of the energy-momentum tensor. We study scenarios with a sequestered matter sector, where the trace of the energy-momentum tensor predominantly determines the scalaron coupling to matter. In a sequestered setup, heavy degrees of freedom are expected to decouple from low-energy dynamics. On the other hand, it is non-trivial to see the decoupling since the scalaron couples to a mass term of heavy degrees of freedom. Actually, when heavy degrees of freedom carry some gauge charge, the amplitude of scalaron decay to two gauge bosons does not vanish in the heavy mass limit. Here the quantum contribution to the trace of the energy-momentum tensor plays an essential role. This quantum contribution is known as the trace anomaly or Weyl anomaly. The trace anomaly contribution from heavy degrees of freedom cancels against the contribution from the ${\it classical}$ scalaron coupling to a mass term of heavy degrees of freedom. We see how the trace anomaly appears both in the Fujikawa method and in dimensional renormalization. In dimensional renormalization, one can evaluate the scalaron decay amplitude in principle at all orders, while it is unclear how to proceed beyond the one-loop level in the Fujikawa method. We consider scalaron decay to two gauge bosons via the trace of the energy-momentum tensor in quantum electrodynamics with scalars and fermions. We evaluate the decay amplitude at the leading order to demonstrate the decoupling of heavy degrees of freedom.
high energy physics phenomenology
We introduce the problem of private information delivery (PID), comprised of $K$ messages, a user, and $N$ servers (each holds $M\leq K$ messages) that wish to deliver one out of $K$ messages to the user privately, i.e., without revealing the delivered message index to the user. The information theoretic capacity of PID, $C$, is defined as the maximum number of bits of the desired message that can be privately delivered per bit of total communication to the user. For the PID problem with $K$ messages, $N$ servers, $M$ messages stored per server, and $N \geq \lceil \frac{K}{M} \rceil$, we provide an achievable scheme of rate $1/\lceil \frac{K}{M} \rceil$ and an information theoretic converse of rate $M/K$, i.e., the PID capacity satisfies $1/\lceil \frac{K}{M} \rceil \leq C \leq M/K$. This settles the capacity of PID when $\frac{K}{M}$ is an integer. When $\frac{K}{M}$ is not an integer, we show that the converse rate of $M/K$ is achievable if $N \geq \frac{K}{\gcd(K,M)} - (\frac{M}{\gcd(K,M)}-1)(\lfloor \frac{K}{M} \rfloor -1)$, and the achievable rate of $1/\lceil \frac{K}{M} \rceil$ is optimal if $N = \lceil \frac{K}{M} \rceil$. Otherwise if $\lceil \frac{K}{M} \rceil < N < \frac{K}{\gcd(K,M)} - (\frac{M}{\gcd(K,M)}-1)(\lfloor \frac{K}{M} \rfloor -1)$, we give an improved achievable scheme and prove its optimality for several small settings.
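As a concrete instance of these bounds (our worked example, not taken from the text): for $K=4$ messages and $M=2$ messages per server, $K/M=2$ is an integer, so $C = M/K = 1/2$ whenever $N \geq 2$. For $K=3$ and $M=2$ the bounds leave a gap: with $N = \lceil 3/2 \rceil = 2$ servers the achievable rate $1/2$ is optimal, while with $N \geq 3 = \frac{K}{\gcd(K,M)} - (\frac{M}{\gcd(K,M)}-1)(\lfloor \frac{K}{M} \rfloor - 1)$ servers the converse rate $M/K = 2/3$ becomes achievable.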
computer science
Analysis of extended X-ray absorption fine structure (EXAFS) data by the use of sparse modeling is presented. We consider the two-body term in the n-body expansion of the EXAFS signal to implement the method, together with calculations of amplitudes and phase shifts to distinguish between different back-scattering elements. Within this approach no a priori assumption about the structure is used, other than the elements present inside the material. We apply the method to the experimental EXAFS signal of metals and oxides, for which we were able to extract the radial distribution function peak positions, and the Debye-Waller factor for first neighbors.
physics
Graze-and-merge collisions (GMCs) are common multi-step mergers occurring in low-velocity off-axis impacts between similar-sized planetary bodies. The first impact happens somewhat faster than the mutual escape velocity; for typical impact angles this does not result in immediate accretion, but the smaller body is slowed down so that it loops back around and collides again, ultimately accreting. The scenario changes in the presence of a third major body, i.e. planets accreting around a star, or satellites around a planet. We find that when the loop-back orbit remains inside roughly 1/3 of the Hill radius from the target, the overall process is not strongly affected. As the loop-back orbit increases in radius, the return velocity and angle of the second collision become increasingly random, with no record of the first collision's orientation. When the loop-back orbit gets to about 3/4 of the Hill radius, the path of the smaller body is disturbed to the point that it will usually escape the target.
astrophysics
The current world averages of the ratios $R_{D^{(*)}}$ are about $4\sigma$ away from their Standard Model predictions. These measurements point towards a violation of lepton flavor universality in $b\rightarrow c\,l\,\bar{\nu}$ decays. The different new physics operators which can explain the $R_{D^{(*)}}$ measurements have been identified previously. We show that a simultaneous measurement of the polarization fractions of $\tau$ and $D^*$ and the angular asymmetries $A_{FB}$ and $A_{LT}$ in $B\rightarrow D^*\tau\bar{\nu}$ decay can distinguish all the new physics amplitudes and hence uniquely identify the Lorentz structure of new physics.
high energy physics phenomenology
A novel method for the control of dynamical systems, proposed in this paper, ensures that the output signal belongs to a given set at all times. The method is based on a special change of coordinates such that the initial problem with given restrictions on an output variable can be recast as a problem of input-to-state stability analysis of a new extended system without restrictions. New control laws for linear plants, systems with sector nonlinearity and systems with an arbitrary relative degree are proposed. Examples of the change of coordinates are given, and they are utilized to design the control algorithms. The simulations confirm the theoretical results and illustrate the effectiveness of the proposed method in the presence of parametric uncertainty and external disturbances.
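One common change of coordinates of this type (our illustration of the general idea, not necessarily the paper's exact transformation) maps an output $y(t)$ constrained to a time-varying set $(g_l(t), g_u(t))$ onto an unconstrained variable:

$$\epsilon = \ln \frac{y - g_l}{g_u - y}, \qquad y \in (g_l, g_u) \iff \epsilon \in (-\infty, \infty),$$

so that keeping the transformed variable $\epsilon$ bounded, e.g. via an input-to-state stability argument for the extended system, automatically keeps $y$ inside the prescribed set.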
electrical engineering and systems science
We establish the existence of a broad class of asymptotically Euclidean solutions to Einstein's constraint equations whose asymptotic behavior is a priori prescribed. The seed-to-solution method (as we call it) proposed in this paper encompasses vacuum spaces as well as spaces with (possibly slowly decaying) matter, and generates a Riemannian manifold from any seed data set consisting of (1): a Riemannian metric and a symmetric two-tensor on a manifold with finitely many asymptotically Euclidean ends, and (2): a (density) field and a (momentum) vector field representing the matter content. We distinguish between several classes of seed data referred to as tame or strongly tame, depending whether the data provides a rough or an accurate asymptotic Ansatz at infinity. We encompass metrics with the weakest possible decay at infinity, as well as with the strongest possible decay. Our analysis is based on a linearization of the Einstein operator around a seed data and is motivated by Carlotto and Schoen's pioneering work on the localization problem for Einstein's vacuum equations. Dealing with possibly very low decay and establishing estimates beyond the critical decay require significantly new arguments. In a weighted Lebesgue-Holder framework adapted to the seed data, we analyze the nonlinear coupling between the Hamiltonian and momentum constraints and study critical terms, and uncover the novel notion of mass-momentum correctors. We estimate the difference between the seed data and the actual Einstein solution, a result that should be of interest for numerical computations. Next, we introduce and study the asymptotic localization problem (as we call it) in which the Carlotto-Schoen localization property is required in an asymptotic sense only. By applying our method to a suitably parametrized family of seed data, we solve this problem at the critical decay level.
mathematics
The Aria project consists of a plant, hosting a 350 m cryogenic isotopic distillation column, the tallest ever built, which is currently in the installation phase in a mine shaft at Carbosulcis S.p.A., Nuraxi-Figus (SU), Italy. Aria is one of the pillars of the argon dark-matter search experimental program, led by the Global Argon Dark Matter Collaboration. Aria was designed to reduce the isotopic abundance of $^{39}$Ar, a $\beta$-emitter of cosmogenic origin whose activity poses background and pile-up concerns in the detectors, in the argon used for the dark-matter searches, the so-called Underground Argon (UAr). In this paper, we discuss the requirements, design, construction, tests, and projected performance of the plant for the isotopic cryogenic distillation of argon. We also present the successful results of the isotopic cryogenic distillation of nitrogen with a prototype plant, operating the column at total reflux.
physics
We introduce the spike-and-slab group lasso (SSGL) for Bayesian estimation and variable selection in linear regression with grouped variables. We further extend the SSGL to sparse generalized additive models (GAMs), thereby introducing the first nonparametric variant of the spike-and-slab lasso methodology. Our model simultaneously performs group selection and estimation, while our fully Bayes treatment of the mixture proportion allows for model complexity control and automatic self-adaptivity to different levels of sparsity. We develop theory to uniquely characterize the global posterior mode under the SSGL and introduce a highly efficient block coordinate ascent algorithm for maximum a posteriori (MAP) estimation. We further employ de-biasing methods to provide uncertainty quantification of our estimates. Thus, implementation of our model avoids the computational intensiveness of Markov chain Monte Carlo (MCMC) in high dimensions. We derive posterior concentration rates for both grouped linear regression and sparse GAMs when the number of covariates grows at nearly exponential rate with sample size. Finally, we illustrate our methodology through extensive simulations and data analysis.
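As background for the optimization step, block coordinate ascent methods of this kind build on the classical group-lasso proximal (block soft-thresholding) operator, sketched below (our illustration; the SSGL update itself uses an adaptive, spike-and-slab-weighted version of this step):

```python
import numpy as np

def group_soft_threshold(z, lam):
    """Blockwise proximal operator of lam * ||beta_g||_2: shrinks the whole
    coefficient group toward zero and sets it exactly to zero when weak."""
    norm = np.linalg.norm(z)
    if norm <= lam:
        return np.zeros_like(z)      # the group drops out of the model
    return (1.0 - lam / norm) * z    # otherwise shrink the group radially

print(group_soft_threshold(np.array([3.0, 4.0]), lam=2.0))  # [1.8, 2.4]
```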
statistics
The increasing complexity of IT systems requires solutions that support operations in case of failure. Therefore, Artificial Intelligence for System Operations (AIOps) is a field of research that is receiving increasing attention, both in academia and industry. One of the major issues in this area is the lack of access to adequately labeled data, which is largely due to legal protection regulations or industrial confidentiality. Methods to mitigate this stem from the area of federated learning, whereby no direct access to training data is required. Original approaches utilize a central instance to perform the model synchronization by periodic aggregation of all model parameters. However, there are many scenarios where trained models cannot be published since they either constitute confidential knowledge or training data could be reconstructed from them. Furthermore, the central instance needs to be trusted and is a single point of failure. As a solution, we propose a fully decentralized approach that allows knowledge to be shared between trained models. Neither original training data nor model parameters need to be transmitted. The concept relies on teacher and student roles that are assigned to the models, whereby students are trained on the output of their teachers via synthetically generated input data. We conduct a case study on log anomaly detection. The results show that an untrained student model, trained on the teacher's output, reaches F1-scores comparable to those of the teacher. In addition, we demonstrate that our method allows the synchronization of several models trained on different distinct training data subsets.
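A minimal PyTorch sketch of the teacher/student exchange described above (our own illustration with hypothetical model shapes): no training data or weights leave a node; the student only sees the teacher's outputs on synthetically generated inputs.

```python
import torch
import torch.nn as nn

def distill(teacher, student, input_dim, steps=1000, batch=64, lr=1e-3):
    """Train a student to mimic a teacher using only synthetic inputs."""
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    teacher.eval()
    for _ in range(steps):
        x = torch.randn(batch, input_dim)   # synthetic inputs, no real data
        with torch.no_grad():
            target = teacher(x)             # the only thing the teacher shares
        loss = loss_fn(student(x), target)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return student

# toy usage with arbitrary architectures
teacher = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
student = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 4))
distill(teacher, student, input_dim=16, steps=200)
```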
computer science
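A minimal, illustrative sketch of the teacher-student transfer described above: a student is trained only on a teacher's outputs over synthetically generated inputs, so neither training data nor model parameters are exchanged. The tiny softmax-regression models, sizes, and random teacher weights are stand-in assumptions, not the paper's setup.

```python
# Decentralized teacher->student knowledge transfer on synthetic inputs.
# The "teacher" weights are random stand-ins for a converged model.
import numpy as np

rng = np.random.default_rng(0)
D, C, N = 16, 2, 512          # feature dim, classes, synthetic batch size

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

W_teacher = rng.normal(size=(D, C))

# The student never sees the teacher's weights or its training data,
# only the teacher's outputs on synthetically generated inputs.
W_student = np.zeros((D, C))
lr = 0.5
for step in range(200):
    X = rng.normal(size=(N, D))              # synthetic input data
    p_teacher = softmax(X @ W_teacher)       # soft labels from the teacher
    p_student = softmax(X @ W_student)
    # Gradient of the cross-entropy between teacher and student predictions.
    W_student -= lr * X.T @ (p_student - p_teacher) / N

agree = (softmax(X @ W_student).argmax(1) == p_teacher.argmax(1)).mean()
print(f"student/teacher agreement on last batch: {agree:.2%}")
```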
The magnetic and superconducting properties of a series of underdoped $Ba_{1-x}Na_{x}Fe_{2}As_{2}$ (BNFA) single crystals with $0.19 \leq x\leq 0.34$ have been investigated with the complementary muon-spin-rotation ($\mu$SR) and infrared spectroscopy techniques. The focus has been on the different antiferromagnetic states in the underdoped regime and their competition with superconductivity, especially for the ones with a tetragonal crystal structure and a so-called double-$Q$ magnetic order. Besides the collinear state with a spatially inhomogeneous spin-charge-density wave (i-SCDW) order at $x=0.24$ and $0.26$, that was previously identified in BNFA, we obtained evidence for an orthomagnetic state with a "hedgehog"-type spin vortex crystal (SVC) structure at $x=0.32$ and $0.34$. Whereas in the former i-SCDW state the infrared spectra show no sign of a superconducting response down to the lowest measured temperature of about 10 K, in the SVC state there is a strong superconducting response similar to the one at optimum doping. The magnetic order is strongly suppressed here in the superconducting state, and at $x=0.34$ there is even a partial re-entrance into a paramagnetic state at $T \ll T_c$.
condensed matter
For the 2017 September 6 flare (SOL2017-Sep-06T11:53) we present not only unusual radio bursts, but also their interesting time association with the other flare phenomena observed in EUV, white-light, X-ray, and $\gamma$-ray emissions. Using our new wavelet-based method we found quasi-periodic pulsations (QPPs) in several locations of the whole time-frequency domain of the analyzed radio spectrum (11:55-12:07 UT and 22-5000 MHz). Among them the drifting QPPs are new and the most interesting, especially a bi-directional QPP at the time of the hard X-ray and $\gamma$-ray peaks and the start of a sunquake. In the pre-impulsive phase we show an unusual drifting pulsation structure (DPS) in association with the EUV brightenings caused by the interaction of magnetic ropes. In the flare impulsive phase we found an exceptional radio burst drifting from 5000 MHz to 800 MHz. In connection with this drifting burst, we show a U-burst at about the onset time of an EUV writhed structure and a drifting radio burst as a signature of a shock wave at high frequencies (1050-1350 MHz). In the peak flare phase we found an indication of an additional energy-release process located at higher altitudes in the solar atmosphere. These phenomena are interpreted considering a rising magnetic rope, magnetosonic waves and particle beams. Using a density model we estimated the density, wave velocities and source heights for the bi-directionally drifting QPPs, the density for the pre-impulsive DPS and U-burst, and the density and magnetic field strength for the drifting radio burst.
astrophysics
The correlation between electrons in different quantum wires is expected to affect the electronic properties of quantum electron-electron biwire systems. Here, we use the variational Monte Carlo method to study the ground-state properties of parallel, infinitely thin electron-electron biwires for several electron densities ($r_\text{s}$) and interwire separations ($d$). Specifically, the ground-state energy, the correlation energy, the interaction energy, the pair-correlation function (PCF), the static structure factor (SSF), and the momentum distribution (MD) function are calculated. We find that the interaction energy increases as $\ln(d)$ for $d\to 0$ and it decreases as $d^{-2}$ when $d\to \infty$. The PCF shows oscillatory behavior at all densities considered here. As two parallel wires approach each other, interwire correlations increase while intrawire correlations decrease as evidenced by the behavior of the PCF, SSF, and MD. The system evolves from two monowires of density parameter $r_\text{s}$ to a single monowire of density parameter $r_\text{s}/2$ as $d$ is reduced from infinity to zero. The MD reveals Tomonaga-Luttinger (TL) liquid behavior with a power-law nature near $k_\text{F}$ even in the presence of an extra interwire interaction between the electrons in biwire systems. It is observed that when $d$ is reduced the MD decreases for $k<k_\text{F}$ and increases for $k>k_\text{F}$, similar to its behavior with increasing $r_\text{s}$. The TL liquid exponent is extracted by fitting the MD data near $k_\text{F}$, from which the TL liquid interaction parameter $K_{\rho}$ is calculated. The value of the TL parameter is found to be in agreement with that of a single wire for large separation between the two wires.
condensed matter
As the COVID-19 outbreak continues to spread throughout the world, more and more information about the pandemic has been shared publicly on social media. For example, there are a huge number of COVID-19 English Tweets daily on Twitter. However, the majority of those Tweets are uninformative, and hence it is important to be able to automatically select only the informative ones for downstream applications. In this short paper, we present our participation in the W-NUT 2020 Shared Task 2: Identification of Informative COVID-19 English Tweets. Inspired by recent advances in pretrained Transformer language models, we propose a simple yet effective baseline for the task. Despite its simplicity, our proposed approach shows very competitive results on the leaderboard, ranking 8th among the 56 participating teams.
computer science
The external structure of the spray-flamelet can be described using the Schvab-Zel'dovich-Li\~nan formulation. The gaseous mixture-fraction variable as a function of the physical space, Z(x_i), typically employed for the description of gaseous diffusion flames, exhibits non-monotonic behaviour for spray flames due to the extra fuel supplied by vaporisation of droplets distributed in the flow. As a result, the overall properties of spray flames depend not only on Z and the scalar dissipation rate, but also on the spray source term, S_v. We propose a new general coordinate variable which takes into account the spatial information about the entire mixture fraction due to the gaseous phase and droplet vaporisation. This coordinate variable, Z_C(x_i), is based on the cumulative value of the gaseous mixture fraction Z(x_i), and is shown to be monotonic. For pure gaseous flow, the new cumulative function, Z_C, yields the well-established flamelet structure in Z-space. In the present manuscript, the spray-flamelet structure and the new equations for temperature and mass fractions in terms of Z_C are derived and then applied to the canonical counterflow configuration with potential flow. Numerical results are obtained for ethanol and methanol sprays, and the effects of the Lewis and Stokes numbers on the spray-flamelet structure are analyzed. The proposed formulation agrees well when mapping the structure back to physical space, thereby confirming our integration methodology.
physics
Binary regression models are commonly used in disciplines such as epidemiology and ecology to determine how spatial covariates influence individuals. In many studies, binary data are shared in a spatially aggregated form to protect privacy. For example, rather than reporting the location and result for each individual that was tested for a disease, researchers may report that a disease was detected or not detected within geopolitical units. Often, the spatial aggregation process obscures the values of response variables, spatial covariates, and locations of each individual, which makes recovering individual-level inference difficult. We show that applying a series of transformations, including a change of support, to a bivariate point process model allows researchers to recover individual-level inference for spatial covariates from spatially aggregated binary data. The series of transformations preserves the convenient interpretation of desirable binary regression models that are commonly applied to individual-level data. Using a simulation experiment, we compare the performance of our proposed method under varying types of spatial aggregation against the performance of standard approaches using the original individual-level data. We illustrate our method by modeling individual-level probability of infection using a data set that has been aggregated to protect an at-risk and endangered species of bats. Our simulation experiment and data illustration demonstrate the utility of the proposed method when access to original non-aggregated data is impractical or prohibited.
statistics
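A sketch of why such a change of support can preserve individual-level interpretation, under the illustrative assumption (ours, not necessarily the paper's exact construction) that infections arise from an inhomogeneous Poisson point process with a log-linear intensity:

```latex
% Individual-level complementary log-log regression ...
\[
  \Pr\{\text{infection at location } s\} \;=\; 1 - \exp\{-\lambda(s)\},
  \qquad
  \lambda(s) \;=\; \exp\{x(s)^{\top}\beta\},
\]
% ... survives aggregation of the binary responses over a region A:
\[
  \Pr\{Y_A = 1\} \;=\; 1 - \exp\Bigl\{-\int_A \lambda(s)\,\mathrm{d}s\Bigr\}.
\]
```

Under these assumptions the aggregated indicator $Y_A$ keeps a complementary log-log structure, so the coefficients $\beta$ retain their individual-level interpretation after the change of support.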
In this article we study a family of four-dimensional, $\mathcal{N}=2$ supergravity theories that interpolates between all the single dilaton truncations of the $\mathrm{SO}(8)$ gauged $\mathcal{N}=8$ supergravity. In these infinitely many theories, characterized by two real numbers -- the interpolation parameter and the dyonic "angle" of the gauging -- we construct non-extremal electrically or magnetically charged black hole solutions and their supersymmetric limits. All the supersymmetric black holes have non-singular horizons with spherical, hyperbolic or planar topology. Some of these supersymmetric and non-extremal black holes are new examples in the $\mathcal{N}=8$ theory that do not belong to the STU model. We compute the asymptotic charges, thermodynamics and boundary conditions of these black holes and show that all of them, except one, introduce a triple trace deformation in the dual theory.
high energy physics theory
Stepped wedge cluster randomized trials (SW-CRTs) have become increasingly popular and are used for a variety of interventions and outcomes, often chosen for their feasibility advantages. SW-CRTs must account for time trends in the outcome because of the staggered rollout of the intervention inherent in the design. Robust inference procedures and non-parametric analysis methods have recently been proposed to handle such trends without requiring strong parametric modeling assumptions, but these are less powerful than model-based approaches. We propose several novel analysis methods that reduce reliance on modeling assumptions while preserving some of the increased power provided by the use of mixed effects models. In one method, we use the synthetic control approach to find the best matching clusters for a given intervention cluster. This approach can improve the power of the analysis but is fully non-parametric. Another method makes use of within-cluster crossover information to construct an overall estimator. We also consider methods that combine these approaches to further improve power. We test these methods on simulated SW-CRTs and identify settings for which these methods gain robustness to model misspecification while retaining some of the power advantages of mixed effects models. Finally, we propose avenues for future research on the use of these methods; motivation for such research arises from their flexibility, which allows the identification of specific causal contrasts of interest, their robustness, and the potential for incorporating covariates to further increase power. Investigators conducting SW-CRTs might well consider such methods when common modeling assumptions may not hold.
statistics
We prove that a finite dimensional algebra $\Lambda$ is $\tau$-tilting finite if and only if all the bricks over $\Lambda$ are finitely generated. This is obtained as a consequence of the existence of proper locally maximal torsion classes for $\tau$-tilting infinite algebras.
mathematics
Standard Convolutional Neural Networks (CNNs) designed for computer vision tasks tend to have large intermediate activation maps. These require large working memory and are thus unsuitable for deployment on resource-constrained devices typically used for inference on the edge. Aggressively downsampling the images via pooling or strided convolutions can address the problem but leads to a significant decrease in accuracy due to gross aggregation of the feature map by standard pooling operators. In this paper, we introduce RNNPool, a novel pooling operator based on Recurrent Neural Networks (RNNs), that efficiently aggregates features over large patches of an image and rapidly downsamples activation maps. Empirical evaluation indicates that an RNNPool layer can effectively replace multiple blocks in a variety of architectures such as MobileNets and DenseNet when applied to standard vision tasks like image classification and face detection. That is, RNNPool can significantly decrease computational complexity and peak memory usage for inference while retaining comparable accuracy. We use RNNPool with the standard S3FD architecture to construct a face detection method that achieves state-of-the-art MAP for tiny ARM Cortex-M4 class microcontrollers with under 256 KB of RAM. Code is released at https://github.com/Microsoft/EdgeML.
computer science
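A simplified, one-directional sketch of the idea of RNN-based pooling follows: one small RNN summarises each row of a patch and a second RNN summarises the row summaries, so a k x k x C patch collapses to a single vector. The actual RNNPool operator uses learned, bidirectional row and column passes; the sizes and random weights here are illustrative assumptions.

```python
# RNN-based pooling of one activation patch (toy, one-directional variant).
import numpy as np

rng = np.random.default_rng(1)

def rnn_scan(X, Wx, Wh):
    # X: (T, d_in) sequence -> final hidden state of a vanilla RNN, (d_h,)
    h = np.zeros(Wh.shape[0])
    for x_t in X:
        h = np.tanh(Wx @ x_t + Wh @ h)
    return h

k, C, H = 8, 4, 16                    # patch size, channels, hidden size
patch = rng.normal(size=(k, k, C))    # one k x k activation patch

Wx1, Wh1 = 0.3 * rng.normal(size=(H, C)), 0.3 * rng.normal(size=(H, H))
Wx2, Wh2 = 0.3 * rng.normal(size=(H, H)), 0.3 * rng.normal(size=(H, H))

row_summaries = np.stack([rnn_scan(patch[i], Wx1, Wh1) for i in range(k)])
pooled = rnn_scan(row_summaries, Wx2, Wh2)   # one vector summarising the patch
print(pooled.shape)                          # (16,) instead of an 8 x 8 x 4 map
```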
A simple analysis of time-dependent $B_s\to K^+K^-$ transitions, based on recent results from the LHCb experiment, is presented. The benefits of adopting a fully consistent theoretical description of the $B^0_s$--$\bar B^0_s$ mixing are stressed. It is shown that bounds on CPT violation in the $B^0_s$--$\bar B^0_s$ system can be consistently obtained and that direct CP violation in $B_s\to K^+K^-$ can be robustly established, even in the presence of CPT violation in the mixing.
high energy physics phenomenology
We performed a systematic search for broad-velocity-width molecular features (BVFs) in the disk part of our Galaxy by using the CO J = 1-0 survey data obtained with the Nobeyama Radio Observatory 45 m telescope. From this search, 58 BVFs were identified. In comparisons with the infrared and radio continuum images, 36 BVFs appeared to have both infrared and radio continuum counterparts, and 15 of them are described as molecular outflows from young stellar objects in the literature. In addition, 21 BVFs have infrared counterparts only, and eight of them are described as molecular outflows in the literature. One BVF (CO 16.134-0.553) does not have any luminous counterpart in the other wavelengths, which suggests that it may be an analog of high-velocity compact clouds in the Galactic center.
astrophysics
Understanding how supermassive black holes (SMBHs) pair and merge helps to inform predictions of off-center, dual, and binary AGN, and provides key insights into how SMBHs grow and co-evolve with their galaxy hosts. As the loudest known gravitational wave source, binary SMBH mergers also hold center stage for the Laser Interferometer Space Antenna (LISA), a joint ESA/NASA gravitational wave observatory set to launch in 2034. Here, we continue our work to characterize SMBH binary formation and evolution through increasingly realistic high resolution direct $N$-body simulations, focusing on the effect of SMBH mass ratio, orientation, and eccentricity within a rotating and flattened stellar host. During the dynamical friction phase, we found a prolonged orbital decay for retrograde SMBHs and swift pairing timescales for prograde SMBHs compared to their counterparts in non-rotating models, an effect that becomes more pronounced for smaller mass ratios $q = M_{\rm sec}/M_{\rm prim}$. During this pairing phase, the eccentricity dramatically increases for retrograde configurations, but as the binary forms, the orbital plane flips so that it is almost perfectly prograde, which stifles the rapid eccentricity growth. In prograde configurations, SMBH binaries form and remain at comparatively low eccentricities. As in our prior work, we note that the center of mass of a prograde SMBH binary itself settles into an orbit about the center of the galaxy. Since even the initially retrograde binaries flip their orbital plane, we expect few binaries in rotating systems to reside at rest in the dynamic center of the host galaxy, though this effect is smaller as $q$ decreases.
astrophysics
The local higher-derivative interactions that enter into the low-energy expansion of the effective action of type IIB superstring theory with constant complex modulus generally violate the $U(1)$ R-symmetry of IIB supergravity by $q_U$ units. These interactions have coefficients that transform as non-holomorphic modular forms under $SL(2, {\mathbb Z})$ transformations with holomorphic and anti-holomorphic weights $(w,-w)$, where $q_U=-2w$. In this paper $SL(2, {\mathbb Z})$-covariance and supersymmetry are used to determine first-order differential equations on moduli space that relate the modular form coefficients of classes of BPS-protected maximal $U(1)$-violating interactions that arise at low orders in the low-energy expansion. These are the moduli-dependent coefficients of BPS interactions of the form $d^{2p} \mathcal{P}_n$ in the linearised approximation, where $\mathcal{P}_n$ is a product of $n$ fields of dimension $8$ with $q_U=8-2n$, and $p=0$, $2$ or $3$. These first-order equations imply that the coefficients satisfy $SL(2, {\mathbb Z})$-covariant Laplace eigenvalue equations on moduli space, with solutions that contain information concerning perturbative and non-perturbative contributions to superstring amplitudes. For $p=3$ and $n\ge 6$ there are two independent modular forms, one of which has a vanishing tree-level contribution. The analysis of super-amplitudes for $U(1)$-violating processes involving arbitrary numbers of external fluctuations of the complex modulus leads to a diagrammatic derivation of the first-order differential relations and Laplace equations satisfied by the coefficient modular forms. Combining this with a $SL(2, {\mathbb Z})$-covariant soft axio-dilaton limit that relates amplitudes with different values of $n$ determines most of the modular invariant coefficients, leaving a single undetermined constant.
high energy physics theory
We develop further the codim-2 future-past extremal surfaces stretching between the future and past boundaries in de Sitter space, discussed in previous work. We first give a more elaborate construction of such surfaces anchored at more general subregions of the future boundary, stretching to equivalent subregions at the past boundary. These top-bottom symmetric future-past extremal surfaces cannot penetrate beyond a certain limiting surface in the Northern/Southern diamond regions: the boundary subregions become the whole boundary for this limiting surface. For multiple disjoint subregions, this construction leads to vanishing mutual information and saturated strong subadditivity. We then discuss an effective codim-1 envelope surface arising from these codim-2 surfaces. This leads to analogs of the entanglement wedge and subregion duality for these future-past extremal surfaces in de Sitter space.
high energy physics theory
Technosignatures are signs of technology that may imply the existence of intelligent life elsewhere in the universe. This has usually meant searches for extraterrestrial intelligence using narrow-band radio signals or pulsed lasers. Back in 1960, Freeman Dyson put forward the idea that advanced civilizations may construct large structures in order to capture, for use, the energy of their local star, leading to an object with an unusual infrared signature. Later it was noted that other objects may represent the signature of very advanced instrumentalities, such as interstellar vehicles, beaming stations for propulsion, or unusual beacons using not radio or laser radiation but the emission of gamma rays, neutrinos or gravitational radiation. Signs may be unintentional or may be directed. Among directed and undirected signs, we present some models for signaling and by-product radiation that might be produced by extremely advanced societies and that are not usually considered in the search for extraterrestrial intelligence.
physics
While accelerators such as GPUs have limited memory, deep neural networks are becoming larger and will not fit within the memory limits of accelerators for training. We propose an approach to tackle this problem by rewriting the computational graph of a neural network, in which swap-out and swap-in operations are inserted to temporarily store intermediate results in CPU memory. In particular, we first revise the concept of a computational graph by defining concrete semantics for variables in a graph. We then formally show how to derive swap-out and swap-in operations from an existing graph and present rules to optimize the graph. To realize our approach, we developed a module in TensorFlow, named TFLMS. TFLMS has been published as a pull request in the TensorFlow repository as a contribution to the TensorFlow community. With TFLMS, we were able to train ResNet-50 and 3DUnet with 4.7x and 2x larger batch sizes, respectively. In particular, we were able to train 3DUNet using images of size $192^3$ for image segmentation, which, without TFLMS, had previously been possible only by dividing the images into smaller ones, which affects the accuracy.
computer science
Two new single fast radio bursts, FRB 180924 and FRB 190523, well localized to massive galaxies, have opened a new window to probe and characterize how cosmic baryons are allocated between galaxies, their surroundings and the intergalactic medium. We are motivated by testing Einstein's weak equivalence principle with these two cosmic transients, which have accurate redshifts. Using photons with different energies emitted by FRB 180924, we obtain, so far, the most stringent bound $\Delta\gamma<2.16\times10^{-10}$ for non-repeating FRBs with accurate redshifts when only considering the gravitational potential of the Milky Way. If using the gravitational potential of the Laniakea supercluster instead of the Milky Way one, we also obtain the strictest bound to date, $\Delta\gamma<1.06\times10^{-14}$. In light of the rapid progress of FRB cosmology, towards the next two decades, we give a universal limitation $\Delta\gamma<8.24\times10^{-22}$ from photons with different energies emitted by single FRBs with accurate redshifts. Moreover, we analyze in detail the effects of various astrophysical parameters on the precision of the weak equivalence principle test. We also estimate the abilities of single FRBs with known redshifts to test the validity of the swampland criterion, and to distinguish which value of $H_0$ is preferred.
astrophysics
Physical and biological range uncertainties limit the clinical potential of Proton Beam Therapy (PBT). In these proceedings, we report on two research projects, which we are conducting in parallel and which both tackle the problem of range uncertainties. One aims at developing software tools and the other at developing detector instrumentation. Regarding the former, we report on our development and pre-clinical application of the GPU-accelerated Monte Carlo (MC) simulation toolkit Fred. Concerning the latter, we report on our investigations of plastic scintillator based PET detectors for particle therapy delivery monitoring. We study the feasibility of the Jagiellonian-PET detector technology for proton beam therapy range monitoring by means of MC simulations of the $\beta^+$ activity induced in a phantom by proton beams, and present preliminary results of PET image reconstruction. Using the GPU-accelerated Monte Carlo simulation toolkit Fred and plastic scintillator based PET detectors, we aim to improve patient treatment quality with protons.
physics
We study the longest increasing subsequence problem for random permutations avoiding the pattern $312$ and another pattern $\tau$ under the uniform probability distribution. We determine exact and asymptotic formulas for the average length of the longest increasing subsequence in such permutation classes, specifically when the pattern $\tau$ is monotone increasing or decreasing, or is any pattern of length four.
mathematics
A recent preprint arXiv:1807.08572 reported the observation of a transition in Ag/Au nanoparticle composites near room temperature and at ambient pressure, to a vanishingly small four-probe resistance, which was tentatively identified as a percolating superconducting transition. In this brief comment, I point out that a vanishing four-probe resistance may also emerge in non-superconducting systems near conductance percolation threshold.
condensed matter
Narrowband and broadband indoor radar images significantly deteriorate in the presence of target-dependent and target-independent static and dynamic clutter arising from walls. A stacked and sparse denoising autoencoder (StackedSDAE) is proposed for mitigating wall clutter in indoor radar images. The algorithm relies on the availability of clean images and corresponding noisy images during training and requires no additional information regarding the wall characteristics. The algorithm is evaluated on simulated Doppler-time spectrograms and high range resolution profiles generated for diverse radar frequencies and wall characteristics in around-the-corner radar (ACR) scenarios. Additional experiments are performed on range-enhanced frontal images generated from measurements gathered with a wideband RF imaging sensor. The results from the experiments show that the StackedSDAE successfully reconstructs images that closely resemble those that would be obtained in free space conditions. Further, the incorporation of sparsity and depth in the hidden layer representations within the autoencoder makes the algorithm more robust to low signal-to-noise ratio (SNR) and to label mismatch between clean and corrupt data during training than the conventional single layer DAE. For example, the denoised ACR signatures show a structural similarity above 0.75 to clean free space images at an SNR of -10 dB and a label mismatch error of 50%.
electrical engineering and systems science
The possibility of low but nontrivial atmospheric oxygen (O2) levels during the mid-Proterozoic (between 1.8 and 0.8 billion years ago, Ga) has important ramifications for understanding Earth's O2 cycle, the evolution of complex life and evolving climate stability. However, the regulatory mechanisms and redox fluxes required to stabilize these O2 levels in the face of continued biological oxygen production remain uncertain. Here, we develop a biogeochemical model of the C-N-P-O2-S cycles and use it to constrain global redox balance in the mid-Proterozoic ocean-atmosphere system. By employing a Monte Carlo approach bounded by observations from the geologic record, we infer that the rate of net biospheric O2 production was $3.5^{+1.4}_{-1.1}$ Tmol yr$^{-1}$ ($1\sigma$), or ~25% of today's value, owing largely to phosphorus scarcity in the ocean interior. Pyrite burial in marine sediments would have represented a comparable or more significant O2 source than organic carbon burial, implying a potentially important role for Earth's sulphur cycle in balancing the oxygen cycle and regulating atmospheric O2 levels. Our statistical approach provides a uniquely comprehensive view of Earth system biogeochemistry and global O2 cycling during mid-Proterozoic time and implicates severe P biolimitation as the backdrop for Precambrian geochemical and biological evolution.
astrophysics
The Hobby-Eberly Telescope Dark Energy Experiment (HETDEX) is an unbiased, massively multiplexed spectroscopic survey, designed to measure the expansion history of the universe through low-resolution ($R\sim750$) spectra of Lyman-Alpha Emitters. In its search for these galaxies, HETDEX will also observe a few 10$^{5}$ stars. In this paper, we present the first stellar value-added catalog within the internal second data release of the HETDEX Survey (HDR2). The new catalog contains 120,571 low-resolution spectra for 98,736 unique stars with $10 < G < 22$ spread across the HETDEX footprint at relatively high ($b\sim60^\circ$) Galactic latitudes. With these spectra, we measure radial velocities (RVs) for $\sim$42,000 unique FGK-type stars in the catalog and show that the HETDEX spectra are sufficient to constrain these RVs with a 1$\sigma$ precision of 28.0 km/s and bias of 3.5 km/s with respect to the LAMOST surveys and 1$\sigma$ precision of 27.5 km/s and bias of 14.0 km/s compared to the SEGUE survey. Since these RVs are for faint ($G\geq16$) stars, they will be complementary to Gaia. Using t-Distributed Stochastic Neighbor Embedding (t-SNE), we also demonstrate that the HETDEX spectra can be used to determine a star's $T_{\rm eff}$, $\log g$, and [Fe/H]. With the t-SNE projection of the FGK-type stars with HETDEX spectra we also identify 416 new candidate metal-poor ([Fe/H] $< -1$~dex) stars for future study. These encouraging results illustrate the utility of future low-resolution stellar spectroscopic surveys.
astrophysics
This paper considers the final approach phase of visual-closed-loop grasping where the RGB-D camera is no longer able to provide valid depth information. Many current robotic grasping controllers are not closed-loop and therefore fail for moving objects. Closed-loop grasp controllers based on RGB-D imagery can track a moving object, but fail when the sensor's minimum object distance is violated just before grasping. To overcome this we propose the use of image-based visual servoing (IBVS) to guide the robot to the object-relative grasp pose using camera RGB information. IBVS robustly moves the camera to a goal pose defined implicitly in terms of an image-plane feature configuration. In this work, the goal image feature coordinates are predicted from RGB-D data to enable RGB-only tracking once depth data becomes unavailable -- this enables more reliable grasping of previously unseen moving objects. Experimental results are provided.
computer science
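For reference, the classical IBVS control law drives the camera with v = -lambda L^+ (s - s*). A minimal sketch with point features follows; the feature values, depths, and gain are made-up illustrative numbers, and in the work above the goal features s* are predicted from RGB-D data rather than fixed by hand.

```python
# One image-based visual servoing (IBVS) step for three point features.
import numpy as np

def interaction_matrix(x, y, Z):
    # Standard 2x6 interaction (image Jacobian) matrix of a normalised
    # image point (x, y) at depth Z (Chaumette & Hutchinson).
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

s      = np.array([0.10, 0.05, -0.12, 0.08, 0.02, -0.15])  # current features
s_star = np.array([0.00, 0.00, -0.10, 0.10, 0.00, -0.10])  # goal features
depths = [0.5, 0.6, 0.55]                                  # estimated Z per point

L = np.vstack([interaction_matrix(s[2*i], s[2*i+1], depths[i]) for i in range(3)])
lam = 0.5
v_camera = -lam * np.linalg.pinv(L) @ (s - s_star)  # 6-DoF camera velocity
print(v_camera)   # [vx, vy, vz, wx, wy, wz]
```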
Trapped ions are among the most promising candidates for performing quantum information processing tasks. Recently, it was demonstrated how the properties of geometric phases can be used to implement an entangling two qubit phase gate with significantly reduced operation time while having a built-in resistance against certain types of errors (Palmero et al., Phys. Rev. A 95, 022328 (2017)). In this article, we investigate the influence of both quantum and thermal fluctuations on the geometric phase in the Markov regime. We show that additional environmentally induced phases as well as a loss of coherence result from the non-unitary evolution, even at zero temperature. We connect these effects to the associated dynamical and geometrical phases. This suggests a strategy to compensate the detrimental environmental influences and restore some of the properties of the ideal implementation. Our main result is a strategy for zero temperature to construct forces for the geometric phase gate which compensate the dissipative effects and leave the produced phase as well as the final motional state identical to the isolated case. We show that the same strategy helps also at finite temperatures. Furthermore, we examine the effects of dissipation on the fidelity and the robustness of a two qubit phase gate against certain error types.
quantum physics
Efficient residential sector coupling plays a key role in supporting the energy transition. In this study, we analyze the structural properties associated with the optimal control of a home energy management system and the effects of common technological configurations and objectives. We conduct this study by modeling a representative building with a modulating air-sourced heat pump, a photovoltaic (PV) system, a battery, and thermal storage systems for floor heating and hot-water supply. In addition, we allow grid feed-in by assuming fixed feed-in tariffs and consider user comfort. In our numerical analysis, we find that the battery, naturally, is the essential building block for improving self-sufficiency. However, in order to use the PV surplus efficiently, grid feed-in is necessary. The commonly considered objective of maximizing self-consumption is not economically viable under the given tariff structure; however, close-to-optimal performance and a significant reduction in solution times can be achieved by maximizing self-sufficiency. Based on optimal control and considering seasonal effects, the dominant order of PV distribution and the target states of charge of the storage systems can be derived. Using a rolling horizon approach, the solution time for a full year at a time resolution of 1 h can be reduced to less than 1 min. By evaluating the value of information, we find that the common choice of 24 h for the prediction and control horizons results in unintended but avoidable end-of-horizon effects. Our input data and the mixed-integer linear model, developed using the JuMP modeling language in Julia, are available in an open-source manner.
electrical engineering and systems science
Road accidents or maintenance often lead to the blockage of roads, causing severe traffic congestion. Diverted routes after road blockage are often decided individually and have no coordination. Here, we employ the cavity approach of statistical physics to obtain both analytical results and optimization algorithms to optimally divert and coordinate individual vehicle routes after road blockage. Depending on the number and the location of the blocked roads, we found that there can be a significant change in the traveling paths of individual vehicles, and a large increase in the average traveling distance and cost. Interestingly, the traveling distance decreases but the traveling cost increases for some instances of diverted traffic. By comparing networks with different topology and connectivity, we observe that the number of alternative routes plays a crucial role in suppressing the increase in traveling cost after road blockage. We tested our algorithm on the England highway network and found that coordinated diversion can suppress the increase in traveling cost by as much as 66$\%$ in the scenarios studied. These results reveal the advantages brought by optimally coordinated traffic diversion after road blockage.
physics
We implement type-II seesaw dominance for neutrino mass and baryogenesis through heavy scalar triplet leptogenesis in a class of minimal non-supersymmetric SO(10) models where matter parity as a stabilising discrete symmetry as well as WIMP dark matter (DM) candidates are intrinsic predictions of the GUT symmetry. We also find modifications of the relevant CP-asymmetry formulas in such minimal models. The baryon asymmetry of the universe, obtained as a solution of the Boltzmann equations, is further shown to be realized for both normal and inverted mass orderings in concordance with the cosmological bound and the best fit values of the neutrino oscillation data, including $\theta_{23}$ in the second octant and large values of the leptonic Dirac CP-phases. Type-II seesaw dominance is at first successfully implemented in two cases of spontaneous SO(10) breaking through the SU(5) route, where the presence of only one non-standard Higgs scalar of intermediate mass $\sim 10^9-10^{10}$ GeV achieves unification. Lower values of the SU(5) unification scales $\sim 10^{15}$ GeV are predicted to bring proton lifetimes into the accessible ranges of the Super-Kamiokande and Hyper-Kamiokande experiments. Our prediction of the WIMP DM relic density in each model is due to a $\sim$ TeV mass matter-parity odd real scalar singlet ($\subset {16}_H \subset$ SO(10)), verifiable by the LUX and XENON1T experiments. This DM is also noted to resolve the vacuum stability issue of the standard scalar potential. When applied to the unification framework of M. Frigerio and T. Hambye, in addition to the minimal fermionic triplet DM solution of $2.7$ TeV mass, this procedure of type-II seesaw dominance and triplet leptogenesis is also found to make an alternative prediction of triplet fermion plus real scalar singlet DM at the TeV scale.
high energy physics phenomenology
Variance-based sensitivity indices have established themselves as a reference among practitioners of sensitivity analysis of model output. It is not unusual to consider a variance-based sensitivity analysis as informative if it produces at least the first order sensitivity indices S_j and the so-called total-effect sensitivity indices T_j for all the uncertain factors of the mathematical model under analysis. Computational economy is critical in sensitivity analysis. It depends mostly upon the number of model evaluations needed to obtain stable values of the estimates. While efficient estimation procedures independent of the number of factors under analysis are available for the first order indices, this is less the case for the total sensitivity indices. When estimating T_j, one can either use a sample-based approach, whose computational cost depends on the number of factors, or approaches based on meta-modelling/emulators, e.g. based on Gaussian processes. The present work focuses on sample-based estimation procedures for T_j and tries different avenues to achieve an algorithmic improvement over the designs proposed in the existing best practices. We conclude that some proposed sample-based improvements found in the literature do not work as claimed, and that improving on the existing best practice is indeed fraught with difficulties. We motivate our conclusions by introducing the concepts of explorativity and efficiency of the design.
statistics
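For context, a standard sample-based scheme for T_j is the Jansen estimator on the usual A / A_B^(j) design, whose cost of N(k+2) model runs grows with the number of factors k, as noted above. The sketch below applies it to the Ishigami test function; this is a textbook baseline of the kind discussed, not the paper's proposed designs.

```python
# Jansen-style estimator of the total-effect index T_j on the Ishigami
# function, a standard sensitivity-analysis test case.
import numpy as np

rng = np.random.default_rng(2)

def ishigami(X, a=7.0, b=0.1):
    return np.sin(X[:, 0]) + a * np.sin(X[:, 1])**2 \
           + b * X[:, 2]**4 * np.sin(X[:, 0])

k, N = 3, 100_000
A = rng.uniform(-np.pi, np.pi, size=(N, k))
B = rng.uniform(-np.pi, np.pi, size=(N, k))
fA = ishigami(A)
V = np.var(np.concatenate([fA, ishigami(B)]))   # total output variance

for j in range(k):
    AB_j = A.copy()
    AB_j[:, j] = B[:, j]                         # resample only factor j
    T_j = np.mean((fA - ishigami(AB_j))**2) / (2.0 * V)   # Jansen (1999)
    print(f"T_{j+1} ~= {T_j:.3f}")
```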
Nonreciprocity, the defining characteristic of isolators, circulators and a wealth of other applications in radio/microwave communications technologies, is in general difficult to achieve as most physical systems incorporate symmetries that prevent the effect. In particular, acoustic waves are an important medium for information transport, but they are inherently symmetric in time. In this work, we report giant nonreciprocity in the transmission of surface acoustic waves (SAWs) on lithium niobate substrate coated with ferromagnet/insulator/ferromagnet (FeGaB/Al2O3/FeGaB) multilayer structure. We exploit this novel structure with a unique asymmetric band diagram, and expand on magnetoelastic coupling theory to show how the magnetic bands couple with acoustic waves only in a single direction. We measure 48.4 dB (ratio of 1:100,000) isolation which outperforms current state of the art microwave isolator devices in a novel acoustic wave system that facilitates unprecedented size, weight, and power reduction. Additionally, these results offer a promising platform to study nonreciprocal SAW devices.
physics
Various tensor models have been recently shown to have the same properties as the celebrated Sachdev-Ye-Kitaev (SYK) model. In this paper we study in detail the diagrammatics of two such SYK-like tensor models: the multi-orientable (MO) model which has an $U(N) \times O(N) \times U(N)$ symmetry and a quartic $O(N)^3$-invariant model whose interaction has the tetrahedral pattern. We show that the Feynman graphs of the MO model can be seen as the Feynman graphs of the $O(N)^3$-invariant model which have an orientable jacket. We then present a diagrammatic toolbox to analyze the $O(N)^3$-invariant graphs. This toolbox allows for a simple strategy to identify all the graphs of a given order in the $1/N$ expansion. We apply it to the next-to-next-to-leading and next-to-next-to-next-to-leading orders which are the graphs of degree $1$ and $3/2$ respectively.
high energy physics theory
A large family of score-based methods has been developed recently to solve unsupervised learning problems, including density estimation, statistical testing and variational inference. These methods are attractive because they exploit the derivative of the log density, which is independent of the normaliser, and are thus suitable for tasks involving unnormalised densities. Despite the theoretical guarantees on their performance, here we illustrate a common practical issue suffered by these methods when the unnormalised distribution of interest has isolated components. In particular, we study the behaviour of some popular score-based methods on tasks involving 1-D mixtures of Gaussians. These methods fail to identify appropriate mixing proportions when the unnormalised distribution is multimodal. Finally, some directions for finding a remedy are discussed in light of recent successes in specific tasks. We hope to bring the attention of theoreticians and practitioners to this issue when developing new algorithms and applications.
statistics
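A one-line worked example of the failure mode described above (our illustration, with $\varphi$ the standard normal density, not an excerpt from the paper): for a two-component mixture with well-separated modes,

```latex
\[
  p_\pi(x) \;=\; \pi\,\varphi(x+\mu) \;+\; (1-\pi)\,\varphi(x-\mu),
\]
% for large separation mu, near the modes x ~ +/- mu one component dominates:
\[
  \nabla_x \log p_\pi(x) \;\approx\; -(x \mp \mu)
  \quad \text{near } x \approx \pm\mu ,
\]
```

which is independent of the mixing proportion $\pi$; any objective built solely from the score evaluated where the samples lie therefore cannot identify $\pi$.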
Ternary compounds of Transition Metal Dichalcogenides are emerging as an interesting class of crystals with tunable electronic properties, which make them attractive for nano-electronic and optoelectronic applications. Among them, Mo$_x$W$_{1-x}$S$_2$ is one of the most studied alloys, due to the well-known, remarkable features of its binary constituents, MoS$_2$ and WS$_2$. The band-gap of this compound can be tuned by varying the Mo and W percentages in the sample, and its vibrational modes result from a combination of MoS$_2$ and WS$_2$ phonons. In this work, we report transmission measurements on a Mo$_{0.5}$W$_{0.5}$S$_2$ single crystal in the far-infrared range. Absorbance spectra collected at ambient conditions enabled, for the first time, a classification of the infrared-active phonons, complementary to Raman studies. High-pressure measurements allowed us to study the evolution of both the lattice dynamics and the free carrier density up to 31 GPa, indicating the occurrence of an isostructural semiconductor-to-metal transition above 18 GPa.
condensed matter
We give a pedagogical review of the properties of the various meson condensation phases triggered by a large isospin or strangeness imbalance. We argue that these phases are an extremely interesting and powerful playground for exploring the properties of hadronic matter. The reason is that they are realized in a regime in which various theoretical methods overlap with increasingly precise numerical lattice QCD simulations, providing insight into the properties of color confinement and of chiral symmetry breaking.
high energy physics phenomenology
It is well known that Jackiw-Teitelboim (JT) gravity provides the simplest theory of 2-dimensional gravity. The model has been fruitfully studied in recent years. In the present work, we investigate exact solutions for both JT gravity and the deformed JT gravity recently proposed in the literature. We revisit exact Euclidean solutions for Jackiw-Teitelboim gravity, using all the non-zero components of the dilatonic equations of motion and a proper integral transformation over the Euclidean time coordinate. More precisely, we study exact solutions for hyperbolic coverage, cusp geometry, and another compact sector of the AdS$_2$ spacetime manifold. We also introduce a nonminimal derivative coupling term to the original JT theory for the novel deformation and quantify its solutions.
high energy physics theory
In inertial confinement fusion (ICF), X-ray radiography is a critical diagnostic for measuring implosion dynamics, which contains rich 3D information. Traditional methods for reconstructing 3D volumes from 2D radiographs, such as filtered backprojection, require radiographs from at least two different angles or lines of sight (LOS). In ICF experiments, space for diagnostics is limited and cameras that can operate on the fast timescales are expensive to implement, limiting the number of projections that can be acquired. To improve the imaging quality in spite of this limitation, convolutional neural networks (CNNs) have recently been shown to be capable of producing 3D models from visible light images or from medical X-ray images rendered by volumetric computed tomography along a single line of sight (SLOS). We propose a CNN to reconstruct 3D ICF spherical shells from single radiographs. We also examine the sensitivity of the 3D reconstruction to different illumination models, using preprocessing techniques such as pseudo-flat fielding. To resolve the issue of the lack of 3D supervision, we show that training the CNN on synthetic radiographs produced by known simulation methods allows for reconstruction of experimental data, as long as the experimental data are similar to the synthetic data. We also show that the CNN allows for 3D reconstruction of shells that possess low-mode asymmetries. Further comparisons of the 3D reconstructions with direct multiple-LOS measurements are warranted.
physics
We study propagation of closed bosonic strings in torsional Newton-Cartan geometry based on a recently proposed Polyakov type action derived by dimensional reduction of the ordinary bosonic string along a null direction. We generalize the Polyakov action proposal to include matter, i.e. the 2-form and the 1-form that originates from the Kalb-Ramond field and the dilaton. We determine the conditions for Weyl invariance, which we express as the beta-function equations on the worldsheet, in analogy with the usual case of strings propagating on a pseudo-Riemannian manifold. The critical dimension of the TNC space-time turns out to be 25. We find that Newton's law of gravitation follows from the requirement of quantum Weyl invariance in the absence of torsion. Presence of the 1-form requires torsion to be non-vanishing. Torsion has interesting consequences, in particular it yields a mass term and an advection term in the generalized Newton's law. U(1) mass invariance of the theory is an important ingredient in deriving the beta functions.
high energy physics theory
The notion of an Evolutional Deep Neural Network (EDNN) is introduced for the solution of partial differential equations (PDE). The parameters of the network are trained to represent the initial state of the system only, and are subsequently updated dynamically, without any further training, to provide an accurate prediction of the evolution of the PDE system. In this framework, the network parameters are treated as functions of the appropriate coordinate and are numerically updated using the governing equations. By marching the neural network weights in the parameter space, EDNN can predict state-space trajectories that are indefinitely long, which is difficult for other neural network approaches. Boundary conditions of the PDEs are treated as hard constraints, are embedded into the neural network, and are therefore exactly satisfied throughout the entire solution trajectory. Several applications, including the heat equation, the advection equation, the Burgers equation, the Kuramoto-Sivashinsky equation and the Navier-Stokes equations, are solved to demonstrate the versatility and accuracy of EDNN. The application of EDNN to the incompressible Navier-Stokes equations embeds the divergence-free constraint into the network design, so that the projection of the momentum equation onto the solenoidal space is implicitly achieved. The numerical results verify the accuracy of EDNN solutions relative to analytical and benchmark numerical solutions, both for the transient dynamics and statistics of the system.
physics
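A toy, finite-difference rendition of the weight-marching idea described above, for the 1-D heat equation u_t = u_xx. This is our sketch under simplifying assumptions: random initial weights instead of a fitted initial condition, no boundary treatment, and finite differences in place of automatic differentiation; all sizes and step sizes are illustrative.

```python
# March network weights in time by least squares, with no retraining.
import numpy as np

rng = np.random.default_rng(3)
H, M = 20, 64                          # hidden width, collocation points
x = np.linspace(0.0, np.pi, M)

# u(x; theta) = w2 . tanh(w1 * x + b1); theta packs (w1, b1, w2).
def unpack(theta):
    return theta[:H], theta[H:2*H], theta[2*H:]

def u(theta, x):
    w1, b1, w2 = unpack(theta)
    return np.tanh(np.outer(x, w1) + b1) @ w2

theta = 0.3 * rng.normal(size=3 * H)
eps, dx, dt = 1e-5, x[1] - x[0], 1e-4

for step in range(100):
    ux = u(theta, x)
    u_xx = np.gradient(np.gradient(ux, dx), dx)        # RHS of the PDE
    # Jacobian d u(x_i) / d theta_j by central finite differences.
    J = np.empty((M, theta.size))
    for j in range(theta.size):
        e = np.zeros_like(theta); e[j] = eps
        J[:, j] = (u(theta + e, x) - u(theta - e, x)) / (2 * eps)
    theta_dot, *_ = np.linalg.lstsq(J, u_xx, rcond=None)
    theta += dt * theta_dot                            # march the weights
```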
Latent autoregressive processes are a popular choice to model time-varying parameters. These models can be formulated as nonlinear state space models, for which inference is not straightforward due to the high number of parameters. Therefore maximum likelihood methods are often infeasible and researchers rely on alternative techniques, such as Gibbs sampling. But conventional Gibbs samplers are often tailored to specific situations and suffer from high autocorrelation among repeated draws. We present a Gibbs sampler for general nonlinear state space models with a univariate autoregressive state equation. For this we employ an interweaving strategy and elliptical slice sampling to exploit the dependence implied by the autoregressive process. Within a simulation study we demonstrate the efficiency of the proposed sampler for bivariate dynamic copula models. Further, we are interested in modeling the volatility-return relationship. Therefore we use the proposed sampler to estimate the parameters of stochastic volatility models with skew Student t errors and the parameters of a novel bivariate dynamic mixture copula model. This model allows for dynamic asymmetric tail dependence. Comparison to relevant benchmark models, such as the DCC-GARCH or a Student t copula model, with respect to predictive accuracy shows the superior performance of the proposed approach.
statistics
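For reference, elliptical slice sampling, one of the two ingredients named above, admits a compact implementation (Murray, Adams & MacKay, 2010). A sketch of a single transition follows; the AR(1)-style prior and the Gaussian pseudo-likelihood in the usage lines are placeholder assumptions, not the paper's model.

```python
# One elliptical slice sampling transition for a Gaussian prior N(0, LL^T).
import numpy as np

def ess_step(f, prior_chol, log_lik, rng):
    """Return a new latent vector; f is the current state."""
    nu = prior_chol @ rng.normal(size=f.shape)        # ellipse direction
    log_y = log_lik(f) + np.log(rng.uniform())        # slice height
    theta = rng.uniform(0.0, 2.0 * np.pi)
    lo, hi = theta - 2.0 * np.pi, theta
    while True:
        f_new = f * np.cos(theta) + nu * np.sin(theta)
        if log_lik(f_new) > log_y:
            return f_new                              # accepted: on the slice
        if theta < 0.0:                               # shrink the bracket
            lo = theta
        else:
            hi = theta
        theta = rng.uniform(lo, hi)

# Toy use: AR(1)-style Gaussian prior with a Gaussian pseudo-likelihood.
rng = np.random.default_rng(4)
T = 50
Sigma = 0.9 ** np.abs(np.subtract.outer(np.arange(T), np.arange(T)))
L = np.linalg.cholesky(Sigma)
f = L @ rng.normal(size=T)
f = ess_step(f, L, lambda z: -0.5 * np.sum((z - 1.0) ** 2), rng)
```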
TileCal, the central hadronic calorimeter of the ATLAS detector, is composed of plastic scintillators interleaved with steel plates, read out via wavelength-shifting optical fibres. The optical properties of these components are known to suffer from natural ageing and to degrade due to exposure to radiation. The calorimeter was designed for 10 years of LHC operation at the design luminosity of $10^{34}$cm$^{-2}$s$^{-1}$. Irradiation tests of scintillators and fibres have shown that their light yield decreases by about 10% for the maximum dose expected after 10 years of LHC operation. The robustness of the TileCal optics components is evaluated using the calibration systems of the calorimeter: a Cs-137 gamma source, laser light, and integrated photomultiplier signals of particles from proton-proton collisions. It is observed that the loss of light yield increases with exposure to radiation, as expected. The decrease in the light yield during the years 2015-2017, corresponding to the LHC Run 2, will be reported. The current LHC operation plan foresees a second, high luminosity LHC (HL-LHC) phase, extending the experiment lifetime by 10 more years. The results obtained in Run 2 indicate that following the light yield response of TileCal is an essential step for predicting the calorimeter performance in future runs. Preliminary studies attempt to extrapolate these measurements to the HL-LHC running conditions.
physics
Life detection methods based on a single type of information source cannot meet the requirements of post-earthquake rescue, owing to their limitations in different scenes and their poor robustness. This paper proposes a multi-sensor decision-level fusion method based on a deep neural network that combines a Convolutional Neural Network and a Long Short-Term Memory neural network (CNN+LSTM). Firstly, we calculate the life detection probability value of each sensor with various methods in the same scene simultaneously; these values are gathered into samples that form the inputs of the deep neural network. Then we use a Convolutional Neural Network (CNN) to extract the spatial-domain distribution characteristics from the inputs, which are the two-channel combinations of the probability values and the smoothed probability values of each life detection sensor. Furthermore, the temporal relationships among the outputs of the last layers are analyzed with Long Short-Term Memory (LSTM) layers, and the results from the three branches of LSTM layers are concatenated. Finally, two further sets of LSTM layers, different from the previous ones, are used to integrate the features of the three branches, and the two-class classification results are output by a fully connected network with a Binary Cross-Entropy (BCE) loss function. The proposed algorithm thus yields accurate classification results for life detection.
electrical engineering and systems science
Using Langevin dynamics simulations, we study the hysteresis in the unzipping of longer double-stranded DNA chains, one end of which is subjected to a time-dependent periodic force with frequency $\omega$ and amplitude $G$ while the other end is kept fixed. We find that the area of the hysteresis loop, $A_{loop}$, scales as $1/\omega$ at higher frequencies, whereas it scales as $(G-G_c)^{\alpha}\omega^{\beta}$ with exponents $\alpha=1$ and $\beta=1.25$ in the low frequency regime. These values are the same as the exponents obtained in Monte Carlo simulation studies of a directed self-avoiding walk model of a homopolymer DNA [R. Kapri, Phys. Rev. E 90, 062719 (2014)] and of a block copolymer DNA [R. K. Yadav and R. Kapri, Phys. Rev. E 103, 012413 (2021)] on a square lattice, and differ from the values reported earlier in Langevin dynamics simulation studies of much shorter DNA hairpins.
condensed matter
The Sachdev--Ye--Kitaev (SYK) model is a quantum mechanical model of $N$ Majorana fermions which displays a number of appealing features -- solvability in the strong coupling regime, near-conformal invariance and maximal chaos -- which make it a suitable model for black holes in the context of the AdS/CFT holography. In this paper, we show for the colored SYK model and several of its tensor model cousins that the next-to-leading order in the large $N$ expansion preserves the conformal invariance of the $2$-point function in the strong coupling regime, up to the contribution of the pseudo-Goldstone bosons due to the explicit breaking of the symmetry, which are already seen in the leading order $4$-point function. We also comment on the composite field approach for computing correlation functions in colored tensor models.
high energy physics theory
We study anomalous hydrodynamics with a dyonic charge. We show that the local second law of thermodynamics constrains the structure of the anomaly in addition to the structure of the hydrodynamic constitutive equations. In particular, we show that not only the usual $E\cdot B$ term but also an $E^2 -B^2$ term should be present in the anomaly, with a specific coefficient, for the local entropy production to be positive definite.
high energy physics theory