text | label |
---|---|
We study quantum communication protocols, in which the players' storage starts out in a state where one qubit is in a pure state, and all other qubits are totally mixed (i.e. in a random state), and no other storage is available (for messages or internal computations). This restriction on the available quantum memory has been studied extensively in the model of quantum circuits, and it is known that classically simulating quantum circuits operating on such memory is hard when the additive error of the simulation is exponentially small (in the input length), under the assumption that the polynomial hierarchy does not collapse. We study this setting in communication complexity. The goal is to consider larger additive error for simulation-hardness results, and to not use unproven assumptions. We define a complexity measure for this model that takes into account that standard error reduction techniques do not work here. We define a clocked and a semi-unclocked model, and describe efficient simulations between those. We characterize a one-way communication version of the model in terms of weakly unbounded error communication complexity. Our main result is that there is a quantum protocol using one clean qubit only and using $O(\log n)$ qubits of communication, such that any classical protocol simulating the acceptance behaviour of the quantum protocol within additive error $1/poly(n)$ needs communication $\Omega(n)$. We also describe a candidate problem, for which an exponential gap between the one-clean-qubit communication complexity and the randomized complexity is likely to hold, and hence a classical simulation of the one-clean-qubit model within {\em constant} additive error might be hard in communication complexity. We describe a geometrical conjecture that implies the lower bound. | quantum physics |
To the best of our knowledge, the existing deep-learning-based Video Super-Resolution (VSR) methods exclusively make use of videos produced by the Image Signal Processor (ISP) of the camera system as inputs. Such methods are 1) inherently suboptimal due to information loss incurred by non-invertible operations in ISP, and 2) inconsistent with the real imaging pipeline where VSR in fact serves as a pre-processing unit of ISP. To address this issue, we propose a new VSR method that can directly exploit camera sensor data, accompanied by a carefully built Raw Video Dataset (RawVD) for training, validation, and testing. This method consists of a Successive Deep Inference (SDI) module and a reconstruction module, among others. The SDI module is designed according to the architectural principle suggested by a canonical decomposition result for Hidden Markov Model (HMM) inference; it estimates the target high-resolution frame by repeatedly performing pairwise feature fusion using deformable convolutions. The reconstruction module, built with elaborately designed Attention-based Residual Dense Blocks (ARDBs), serves the purpose of 1) refining the fused feature and 2) learning the color information needed to generate a spatial-specific transformation for accurate color correction. Extensive experiments demonstrate that owing to the informativeness of the camera raw data, the effectiveness of the network architecture, and the separation of super-resolution and color correction processes, the proposed method achieves superior VSR results compared to the state-of-the-art and can be adapted to any specific camera-ISP. Code and dataset are available at https://github.com/proteus1991/RawVSR. | electrical engineering and systems science |
We consider the Kirchhoff equation $$ \partial_{tt} u - \Delta u \Big( 1 + \int_{\mathbb T^d} |\nabla u|^2 \Big) = 0 $$ on the $d$-dimensional torus $\mathbb T^d$, and its Cauchy problem with initial data $u(0,x)$, $\partial_t u(0,x)$ of size $\varepsilon$ in Sobolev class. The effective equation for the dynamics at the quintic order, obtained in previous papers by quasilinear normal form, contains resonances corresponding to nontrivial terms in the energy estimates. Such resonances cannot be avoided by tuning external parameters (simply because the Kirchhoff equation does not contain parameters). In this paper we introduce nonresonance conditions on the initial data of the Cauchy problem and prove a lower bound $\varepsilon^{-6}$ for the lifespan of the corresponding solutions (the standard local theory gives $\varepsilon^{-2}$, and the normal form for the cubic terms gives $\varepsilon^{-4}$). The proof relies on the fact that, under these nonresonance conditions, the growth rate of the "superactions" of the effective equations on large time intervals is smaller (by a factor $\varepsilon^2$) than its a priori estimate based on the normal form for the cubic terms. The set of initial data satisfying such nonresonance conditions contains several nontrivial examples that are discussed in the paper. | mathematics |
This paper provides details of the calculation of massless three-loop three-point integrals at the symmetric point. Our work aims to extend the known two-loop results for such integrals to the three-loop level. The obtained results can find application in regularization-invariant symmetric point momentum-subtraction (RI/SMOM) scheme QCD calculations of renormalization group functions and various composite operator matrix elements. To calculate the integrals, we solve differential equations for auxiliary integrals by transforming the system to the $\varepsilon$-form. The calculated integrals are expressed in terms of a basis of functions of uniform transcendental weight. We provide the expansion of the basis functions up to transcendental weight six in terms of harmonic polylogarithms with a sixth root of unity as argument. | high energy physics phenomenology |
High-energy neutrino emission has been predicted for several short-lived astrophysical transients including gamma-ray bursts (GRBs), core-collapse supernovae with choked jets and neutron star mergers. IceCube's optical and X-ray follow-up program searches for such transient sources by looking for two or more muon neutrino candidates in directional coincidence and arriving within 100s. The measured rate of neutrino alerts is consistent with the expected rate of chance coincidences of atmospheric background events and no likely electromagnetic counterparts have been identified in Swift follow-up observations. Here, we calculate generic bounds on the neutrino flux of short-lived transient sources. Assuming an $E^{-2.5}$ neutrino spectrum, we find that the neutrino flux of rare sources, like long gamma-ray bursts, is constrained to <5% of the detected astrophysical flux and the energy released in neutrinos (100GeV to 10PeV) by a median bright GRB-like source is $<10^{52.5}$erg. For a harder $E^{-2.13}$ neutrino spectrum up to 30% of the flux could be produced by GRBs and the allowed median source energy is $< 10^{52}$erg. A hypothetical population of transient sources has to be more common than $10^{-5}\text{Mpc}^{-3}\text{yr}^{-1}$ ($5\times10^{-8}\text{Mpc}^{-3}\text{yr}^{-1}$ for the $E^{-2.13}$ spectrum) to account for the complete astrophysical neutrino flux. | astrophysics |
We present two approaches for describing chemical reactions taking place in fluid phase. The first method mirrors the usual derivation of the hydrodynamic equations of motion by relating conserved---or to account for chemical reactions, non-conserved---currents to local-equilibrium parameters. The second method involves a higher-brow approach in which we attack the same problem from the perspective of non-equilibrium effective field theory (EFT). Non-equilibrium effective actions are defined using the in-in formalism on the Schwinger-Keldysh contour and are therefore capable of describing thermal fluctuations and dissipation as well as quantum effects. The non-equilibrium EFT approach is especially powerful as all terms in the action are fully specified by the symmetries of the system; in particular the second law of thermodynamics does not need to be included by hand, but is instead derived from the action itself. We find that the equations of motion generated by both methods agree, but the EFT approach yields certain advantages. To demonstrate some of these advantages we construct a quadratic action that is valid to very small distance scales---much smaller than the scales at which ordinary hydrodynamic theories break down. Such an action captures the full thermodynamic and quantum behavior of reactions and diffusion at quadratic order. Finally, taking the low-frequency and low-wavenumber limit, we reproduce the linearized version of the well-known reaction-diffusion equations as a final coherence check. | high energy physics theory |
Given two probability measures $\mu, \nu$ on $\mathbb{R}^d$, in subharmonic order, we describe optimal stopping times $\tau$ that maximize/minimize the cost functional $\mathbb{E} |B_0 - B_\tau|^{\alpha}$, $\alpha > 0$, where $(B_t)_t$ is Brownian motion with initial law $\mu$ and with final distribution --once stopped at $\tau$-- equal to $\nu$. Under the assumption of radial symmetry on $\mu$ and $\nu$, we show that in dimension $d \geq 3$ and $\alpha \neq 2$, there exists a unique optimal solution given by a non-randomized stopping time characterized as the hitting time to a suitably symmetric barrier. We also relate this problem to the optimal transportation problem for subharmonic martingales, and establish a duality result. This paper is an expanded version of a previously posted but not published work by the authors. | mathematics |
In filtering, each output is produced by a certain number of different inputs. We explore the statistics of this degeneracy in an explicitly treatable filtering problem in which filtering performs the maximal compression of relevant information contained in inputs (arrays of zeroes and ones). This problem serves as a reference model for the statistics of filtering and related sampling problems. The filter patterns in this problem conveniently allow a microscopic, combinatorial consideration. This allows us to find the statistics of outputs, namely the exact distribution of output degeneracies, for arbitrary input sizes. We observe that the resulting degeneracy distribution of outputs decays as $e^{-c\log^\alpha \!d}$ with degeneracy $d$, where $c$ is a constant and exponent $\alpha>1$, i.e. faster than a power law. Importantly, its form essentially depends on the size of the input data set, appearing to be closer to a power-law dependence for small data set sizes than for large ones. We demonstrate that for sufficiently small input data set sizes typical for empirical studies, this distribution could be easily perceived as a power law. We extend our results to filter patterns of various sizes and demonstrate that the shortest filter pattern provides the maximum informative representations of the inputs. | condensed matter |
A nanoscale object evidenced in a non-classical state of its centre of mass will hugely extend the boundaries of quantum mechanics. To obtain a practical scheme for the same, we exploit a hitherto unexplored coupled system: an atom and a nanoparticle coupled by an optical field. We show how to control the centre of mass of a large $\sim500$nm nanoparticle using the internal state of the atom so as to create, as well as detect, nonclassical motional states of the nanoparticle. Specifically, we consider a setup based on a silica nanoparticle coupled to a Cesium atom and discuss a protocol for preparing and verifying a Schr\"{o}dinger-cat state of the nanoparticle that does not require cooling to the motional ground state. We show that the existence of the superposition can be revealed using the Earth's gravitational field, via a method that is insensitive to the most common sources of decoherence and works for any initial state of the nanoparticle. | quantum physics |
The circular motion of charged test particles in the gravitational field of a Reissner-Nordstr\"{o}m black hole in Anti de Sitter space-time is investigated, using a set of independent parameters, such as the charge Q, mass M and cosmological constant $\Lambda= -3/l^2$ of the space-time, and the charge to mass ratio $\epsilon=q/m$ of the test particles. A classification of the different spatial regions where circular motion is allowed is presented, showing in particular the presence of orbits at the special limiting values $M=4/\sqrt{6} Q$ and $l=6 Q$. Thermodynamically, these values are known to occur when the black hole is on the verge of a second order phase transition, thereby giving an interesting connection between the thermodynamics and geodesics of black holes. We also comment on the possibility of such a connection for black holes in flat spacetime in a box. | high energy physics theory |
The basal sliding of glaciers and ice sheets can constitute a large part of the total observed ice velocity, in particular in dynamically active areas. It is therefore important to accurately represent this process in numerical models. The condition that the sliding velocity should be tangential to the bed is realized by imposing an impenetrability condition at the base. We study the numerical implementations of the impenetrability condition used in the glaciological literature for non-linear Stokes flow with Navier's slip on the boundary. Using the finite element method, we enforce impenetrability by: a local rotation of the coordinate system (strong method), a Lagrange multiplier method enforcing zero average flow across each facet (weak method) and an approximative method that uses the pressure variable as a Lagrange multiplier for both incompressibility and impenetrability. An analysis of the latter shows that it relaxes the incompressibility constraint, but enforces impenetrability approximately if the pressure is close to the normal component of the stress at the bed. Comparing the methods numerically using a method of manufactured solutions unexpectedly leads to similar convergence results. However, we find that, for more realistic cases, in areas of high sliding or varying topography the velocity field simulated by the approximative method differs from that of the other methods by $\sim 1\%$ (two-dimensional flow) and $> 5\%$ when compared to the strong method (three-dimensional flow). In this study the strong method, which is the most commonly used in numerical ice sheet models, emerges as the preferred method due to its stable properties (compared to the weak method in three dimensions) and its ability to enforce the impenetrability condition well. | physics |
The SuperNEMO experiment will search for neutrinoless double-beta decay ($0\nu\beta\beta$), and study the Standard-Model double-beta decay process ($2\nu\beta\beta$). The SuperNEMO technology can measure the energy of each of the electrons produced in a double-beta ($\beta\beta$) decay, and can reconstruct the topology of their individual tracks. The study of the double-beta decay spectrum requires very accurate energy calibration to be carried out periodically. The SuperNEMO Demonstrator Module will be calibrated using 42 calibration sources, each consisting of a droplet of $^{207}$Bi within a frame assembly. The quality of these sources, which depends upon the entire $^{207}$Bi droplet being contained within the frame, is key for correctly calibrating SuperNEMO's energy response. In this paper, we present a novel method for precisely measuring the exact geometry of the deposition of $^{207}$Bi droplets within the frames, using Timepix pixel detectors. We studied 49 different sources and selected 42 high-quality sources with the most central source positioning. | physics |
The equivalence of two formulations of Fokker's quantum theory is proved: one based on the Feynman functional integral representation of the propagator for a system of charges with direct electromagnetic interaction, and the other on the quantum principle of least action as an analogue of the Schr\"{o}dinger wave equation. The common basis for the two approaches is the generalized canonical form of Fokker's action. | quantum physics |
This survey describes the recent advances in the construction of Markov partitions for nonuniformly hyperbolic systems. One important feature of this development comes from a finer theory of nonuniformly hyperbolic systems, which we also describe. The Markov partition defines a symbolic extension that is finite-to-one and onto a non-uniformly hyperbolic locus, and this provides dynamical and statistical consequences such as estimates on the number of closed orbits and properties of equilibrium measures. The class of systems includes diffeomorphisms, flows, and maps with singularities. | mathematics |
In this paper, we prove some blow-up results for the semilinear wave equation in generalized Einstein-de Sitter spacetime by using an iteration argument and we derive upper bound estimates for the lifespan. In particular, we will focus on the critical cases which require the employment of a slicing procedure in the iterative mechanism. Furthermore, in order to deal with the main critical case, we will introduce a non-autonomous and parameter-dependent Cauchy problem for a linear ODE of second-order, whose explicit solution will be determined by applying the theory of special functions. | mathematics |
In this study we extend the concepts of $m$-pluripotential theory to the Riemannian superspace formalism. Since in this setting positive supercurrents and tropical varieties are closely related, we try to understand the relative capacity notion with respect to the intersection of tropical hypersurfaces. Moreover, we generalize the classical quasicontinuity result of Cartan to $m$-subharmonic functions of Riemannian spaces and lastly we introduce the indicators of $m$-subharmonic functions and give a geometric characterization of their Newton numbers. | mathematics |
Bars and Sezgin have proposed a super Yang-Mills theory in $D=s+t=11+3$ space-time dimensions with an electric 3-brane that generalizes the 2-brane of M-theory. More recently, the authors found an infinite family of exceptional super Yang-Mills theories in $D=(8n+3)+3$ via the so-called Magic Star algebras. A particularly interesting case occurs in signature $D=27+3$, where the superalgebra is centrally extended by an electric 11-brane and its 15-brane magnetic dual. The worldvolume symmetry of the 11-brane has signature $D=11+3$ and can reproduce super Yang-Mills theory in $D=11+3$. Upon reduction to $D=26+2$, the 11-brane reduces to a 10-brane with $10+2$ worldvolume signature. A single time projection gives a $10+1$ worldvolume signature and can serve as a model for $D=10+1$ M-theory as a reduction from the $D=26+1$ signature of the bosonic M-theory of Horowitz and Susskind; this is further confirmed by the reduction of chiral $(1,0)$, $D=11+3$ superalgebra to the $\mathcal{N}=1$ superalgebra in $D=10+1$, as found by Rudychev, Sezgin and Sundell some time ago. Extending previous results of Dijkgraaf, Verlinde and Verlinde, we also put forward the realization of spinors as total cohomologies of (the largest spatially extended) branes which centrally extend the $(1,0)$ superalgebra underlying the corresponding exceptional super Yang-Mills theory. Moreover, by making use of an "anomalous" Dynkin embedding, we strengthen Ramond and Sati's argument that M-theory has hidden Cayley plane fibers. | high energy physics theory |
In this paper we propose a new method of estimation for discrete choice demand models when individual level data are available. The method employs a two-step procedure. Step 1 predicts the choice probabilities as functions of the observed individual level characteristics. Step 2 estimates the structural parameters of the model using the estimated choice probabilities at a particular point of interest and the moment restrictions. In essence, the method uses nonparametric approximation (followed by) moment estimation. Hence the name---NAME. We use simulations to compare the performance of NAME with the standard methodology. We find that our method improves precision as well as convergence time. We supplement the analysis by providing the large sample properties of the proposed estimator. | statistics |
Mobile edge computing is beneficial to reduce service response time and core network traffic by pushing cloud functionalities to the network edge. Equipped with storage and computation capacities, edge nodes can cache services of resource-intensive and delay-sensitive mobile applications and process the corresponding computation tasks without outsourcing to central clouds. However, the heterogeneity of edge resource capacities and the inconsistency of edge storage and computation capacities make it difficult to fully utilize the storage and computation capacities jointly when there is no cooperation among edge nodes. To address this issue, we consider cooperation among edge nodes and investigate cooperative service caching and workload scheduling in mobile edge computing. This problem can be formulated as a mixed integer nonlinear programming problem, which has non-polynomial computation complexity. To overcome the challenges of subproblem coupling, the computation-communication tradeoff, and edge node heterogeneity, we develop an iterative algorithm called ICE. This algorithm is designed based on Gibbs sampling, which has provably near-optimal results, and the idea of water filling, which has polynomial computation complexity. Simulations are conducted and the results demonstrate that our algorithm can jointly reduce the service response time and the outsourcing traffic compared with the benchmark algorithms. | computer science |
How to measure the degree of uncertainty of a given frame of discernment has been a hot topic for years. Many meaningful works have provided effective methods to measure this degree properly. However, a crucial factor, the sequence of propositions, is missing in the definition of the traditional frame of discernment. In this paper, a detailed definition of the ordinal frame of discernment is provided. Besides, an innovative method utilizing a concept from computer vision to combine the order of propositions and their masses is proposed to better manifest the relationships between these two important elements of the frame of discernment. Moreover, a specially designed method covering some powerful tools for indicating the degree of uncertainty of a traditional frame of discernment is also offered to give an indicator of the level of uncertainty of an ordinal frame of discernment at the vector level. | computer science |
Robust traffic sign detection and recognition (TSDR) is of paramount importance for the successful realization of autonomous vehicle technology. The importance of this task has led to a vast amount of research efforts and many promising methods have been proposed in the existing literature. However, the state-of-the-art (SOTA) methods have been evaluated on clean and challenge-free datasets and overlooked the performance deterioration associated with different challenging conditions (CCs) that obscure the traffic images captured in the wild. In this paper, we look at the TSDR problem under CCs and focus on the performance degradation associated with them. To overcome this, we propose a Convolutional Neural Network (CNN) based TSDR framework with prior enhancement. Our modular approach consists of a CNN-based challenge classifier, Enhance-Net, an encoder-decoder CNN architecture for image enhancement, and two separate CNN architectures for sign-detection and classification. We propose a novel training pipeline for Enhance-Net that focuses on the enhancement of the traffic sign regions (instead of the whole image) in the challenging images subject to their accurate detection. We used the CURE-TSD dataset consisting of traffic videos captured under different CCs to evaluate the efficacy of our approach. We experimentally show that our method obtains an overall precision and recall of 91.1% and 70.71%, which is a 7.58% and 35.90% improvement in precision and recall, respectively, compared to the current benchmark. Furthermore, we compare our approach with SOTA object detection networks, Faster-RCNN and R-FCN, and show that our approach outperforms them by a large margin. | electrical engineering and systems science |
The state of a quantum system may be steered towards a predesignated target state, employing a sequence of weak $\textit{blind}$ measurements (where the detector's readouts are traced out). Here we analyze the steering of a two-level system using the interplay of a system Hamiltonian and weak measurements, and show that $\textit{any}$ pure or mixed state can be targeted. We show that the optimization of such a steering protocol is underlain by the presence of Liouvillian exceptional points. More specifically, for high purity target states, optimal steering implies purely relaxational dynamics marked by a second-order exceptional point, while for low purity target states, it implies an oscillatory approach to the target state. The phase transition between these two regimes is characterized by a third-order exceptional point. | quantum physics |
Tensor models and tensor field theories admit a $1/N$ expansion and a melonic large $N$ limit which is simpler than the planar limit of random matrices and richer than the large $N$ limit of vector models. They provide examples of analytically tractable but nontrivial strongly coupled quantum field theories and lead to a new class of conformal field theories. We present a compact introduction to the topic, covering both some of the classical results in the field, like the details of the $1/N$ expansion, as well as recent developments. These notes are loosely based on four lectures given at the Journ\'ees de physique math\'ematique Lyon 2019: Random tensors and SYK models. | high energy physics theory |
The vacuum must contain virtual fluctuations of black hole microstates for each mass $M$. We observe that the expected suppression for $M\gg m_p$ is counteracted by the large number $Exp[S_{bek}]$ of such states. From string theory we learn that these microstates are extended objects that are resistant to compression. We argue that recognizing this `virtual extended compression-resistant' component of the gravitational vacuum is crucial for understanding gravitational physics. Remarkably, such virtual excitations have no significant effect for observable systems like stars, but they resolve two important problems: (a) gravitational collapse is halted outside the horizon radius, removing the information paradox; (b) spacetime acquires a `stiffness' against the curving effects of vacuum energy; this ameliorates the cosmological constant problem posed by the existence of a Planck scale $\Lambda$. | high energy physics theory |
To make progress in science, we often build abstract representations of physical systems that meaningfully encode information about the systems. The representations learnt by most current machine learning techniques reflect statistical structure present in the training data; however, these methods do not allow us to specify explicit and operationally meaningful requirements on the representation. Here, we present a neural network architecture based on the notion that agents dealing with different aspects of a physical system should be able to communicate relevant information as efficiently as possible to one another. This produces representations that separate different parameters which are useful for making statements about the physical system in different experimental settings. We present examples involving both classical and quantum physics. For instance, our architecture finds a compact representation of an arbitrary two-qubit system that separates local parameters from parameters describing quantum correlations. We further show that this method can be combined with reinforcement learning to enable representation learning within interactive scenarios where agents need to explore experimental settings to identify relevant variables. | quantum physics |
We analyze the Helmholtz equation in a complex domain. A sound absorbing structure at a part of the boundary is modelled by a periodic geometry with periodicity $\varepsilon>0$. A resonator volume of thickness $\varepsilon$ is connected by thin channels (of opening $\varepsilon^3$) to the main part of the macroscopic domain. For this problem with three different scales we analyze solutions in the limit $\varepsilon\to 0$ and find that the effective system can describe sound absorption. | mathematics |
We investigate the thermodynamic behaviour of asymptotically anti de Sitter black holes in generalized quasi-topological gravity containing terms both cubic and quartic in the curvature. We investigate the general conditions required for physical phase transitions and critical behaviour in any dimension and then consider in detail specific properties in spacetime dimensions 4, 5, and 6. We find for spherical black holes that there are respectively at most two and three physical critical points in five and six dimensions. For hyperbolic black holes we find the occurrence of Van der Waals phase transitions in four dimensions and reverse Van der Waals phase transitions in dimensions greater than 4 if both cubic and quartic curvature terms are present. We also observe the occurrence of phase transitions at fixed chemical potential. We consider some applications of our work in the dual CFT, investigating how the ratio of viscosity to entropy is modified by the inclusion of these higher curvature terms. We conclude that the presence of the quartic curvature term results in a violation of the KSS bound in five dimensions, but not in other dimensions. | high energy physics theory |
A two-dimensional (2D) material system with both piezoelectricity and ferromagnetic (FM) order, referred to as a 2D piezoelectric ferromagnetism (PFM), may open up unprecedented opportunities for intriguing physics. Inspired by the experimentally synthesized Janus monolayer MoSSe from $\mathrm{MoS_2}$, in this work, the Janus monolayer $\mathrm{CrBr_{1.5}I_{1.5}}$ with dynamic, mechanical and thermal stabilities is predicted, which is constructed from the synthesized ferromagnetic $\mathrm{CrI_3}$ monolayer by replacing the top I atomic layer with Br atoms. Calculated results show that monolayer $\mathrm{CrBr_{1.5}I_{1.5}}$ is an intrinsic FM half semiconductor with valence and conduction bands being fully spin-polarized in the same spin direction. Furthermore, monolayer $\mathrm{CrBr_{1.5}I_{1.5}}$ possesses a sizable magnetic anisotropy energy (MAE). By symmetry analysis, it is found that both in-plane and out-of-plane piezoelectric polarizations can be induced by a uniaxial strain in the basal plane. The calculated in-plane $d_{22}$ value of 0.557 pm/V is small. However, more excitingly, the out-of-plane $d_{31}$ is as high as 1.138 pm/V, which is clearly higher than those of other known 2D materials. The strong out-of-plane piezoelectricity is highly desirable for ultrathin piezoelectric devices. Moreover, strain engineering is used to tune the piezoelectricity of monolayer $\mathrm{CrBr_{1.5}I_{1.5}}$. It is found that compressive strain can improve the $d_{22}$, and tensile strain can enhance the $d_{31}$. A FM order to antiferromagnetic (AFM) order phase transition can be induced by compressive strain, and the critical point is at about 0.95 strain. That is to say, a 2D piezoelectric antiferromagnetism (PAFM) can be achieved by compressive strain, and the corresponding $d_{22}$ and $d_{31}$ are 0.677 pm/V and 0.999 pm/V at 0.94 strain, respectively. | condensed matter |
The standard linear and logistic regression models assume that the response variables are independent, but share the same linear relationship to their corresponding vectors of covariates. The assumption that the response variables are independent is, however, too strong. In many applications, these responses are collected on nodes of a network, or some spatial or temporal domain, and are dependent. Examples abound in financial and meteorological applications, and dependencies naturally arise in social networks through peer effects. Regression with dependent responses has thus received a lot of attention in the Statistics and Economics literature, but there are no strong consistency results unless multiple independent samples of the vectors of dependent responses can be collected from these models. We present computationally and statistically efficient methods for linear and logistic regression models when the response variables are dependent on a network. Given one sample from a networked linear or logistic regression model and under mild assumptions, we prove strong consistency results for recovering the vector of coefficients and the strength of the dependencies, recovering the rates of standard regression under independent observations. We use projected gradient descent on the negative log-likelihood, or negative log-pseudolikelihood, and establish their strong convexity and consistency using concentration of measure for dependent random variables. | computer science |
The physics of high-energy colliders relies on the knowledge of different non-perturbative parton correlators, such as parton distribution functions, that encode the information on universal hadron structure and are thus the main building blocks of any factorization theorem of the underlying process in such collision. These functions are given in terms of gauge-invariant light-front operators, they are non-local in both space and real time, and are thus intractable by standard lattice techniques due to the well-known sign problem. In this paper, we propose a quantum algorithm to perform a quantum simulation of these type of correlators, and illustrate it by considering a space-time Wilson loop. We discuss the implementation of the quantum algorithm in terms of quantum gates that are accessible within actual quantum technologies such as cold atoms setups, trapped ions or superconducting circuits. | quantum physics |
Experiments on graphene bilayers, where the top layer is rotated with respect to the one below, have displayed insulating behavior when the moir\'e bands are partially filled. We calculate the charge distributions in these phases, and estimate the excitation gaps. | condensed matter |
We use the technique of coherent population trapping (CPT) to access the ground hyperfine interval (clock transition) in $^{133}$Cs. The probe and control beams required for CPT are obtained from a single compact diode laser system. The phase coherence between the beams, whose frequency difference is the clock frequency, is obtained by frequency modulating the laser with an electro-optic modulator (EOM). The EOM is fiber coupled and hence does not require alignment, and the atoms are contained in a vapor cell. Both of these should prove advantageous for potential use as atomic clocks in satellites. | physics |
With the large-scale integration of renewable power generation, frequency regulation resources (FRRs) are required to have larger capacities and faster ramp rates, which increases the cost of the frequency regulation ancillary service. Therefore, it is necessary to consider the frequency regulation cost and constraint along with real-time economic dispatch (RTED). In this paper, a data-driven distributionally robust optimization (DRO) method for RTED considering automatic generation control (AGC) is proposed. First, a Copula-based AGC signal model is developed to reflect the correlations among the AGC signal, load power and renewable generation variations. Secondly, samples of the AGC signal are taken from its conditional probability distribution under the forecasted load power and renewable generation variations. Thirdly, a distributionally robust RTED model considering the frequency regulation cost and constraint is built and transformed into a linear programming problem by leveraging the Wasserstein metric-based DRO technique. Simulation results show that the proposed method can reduce the total cost of power generation and frequency regulation. | electrical engineering and systems science |
We develop a theoretical framework that predicts and fully characterizes the diverse experimental observations of the nonlinear combustion wave propagation in a rotating detonation engine (RDE), including the nucleation and formation of combustion pulses, the soliton-like interactions between these combustion fronts, and the fundamental, underlying Hopf bifurcation to time-periodic modulation of the waves. In this framework, the mode-locked structures are classified as autosolitons, or stably-propagating nonlinear waves where the local physics of nonlinearity, dispersion, gain, and dissipation exactly balance. We find that the global dominant balance physics in the RDE combustion chamber are dissipative and multi-scale in nature, with local fast scale (nano- to microseconds) combustion balances generating the fundamental mode-locked autosoliton state, while slow scale (milliseconds) gain-loss balances determine the instabilities and structure of the total number of autosolitons. In this manner, the global multi-scale balance physics give rise to the stable structures - not exclusively the frontal dynamics prescribed by classical detonation theory. Experimental observations and numerical models of the RDE combustion chamber are in strong qualitative agreement with no parameter tuning. Moreover, numerical continuation (computational bifurcation tracking) of the RDE analog system establishes that a Hopf bifurcation of the steadily propagating pulse train leads to the fundamental instability of the RDE, or time-periodic modulation of the waves. Along branches of Hopf orbits in parameter space exists a continuum of wave-pair interactions that exhibit solitonic interactions of varying strength. | physics |
We propose a framework to model ferroelectric negative capacitance-electrostatic Micro Electro Mechanical Systems (MEMS) hybrid actuators and analyze their dynamic (step input) response. Using this framework, we report the first proposal for a reduction in the dynamic pull-in and pull-out voltages of the hybrid actuators due to the negative capacitance of the ferroelectric. The proposed model also reveals the effect of the ferroelectric thickness on the dynamic pull-in and pull-out voltages and the effect of ferroelectric damping on the energy dissipated during actuation. We infer from our analysis that the hybrid actuators are better than the standalone MEMS actuators in terms of operating voltage and energy dissipation. Further, we show that one can trade off a small part of the reduction in actuation voltage to achieve identical pull-in times in the hybrid and standalone MEMS actuators, while still consuming substantially lower energy in the former as compared to the latter. The circuit compatibility of the proposed hybrid actuator model makes it suitable for analysis and evaluation of various heterogeneous systems consisting of hybrid MEMS actuators and other electronic devices. | physics |
Resolvent analysis identifies the most responsive forcings and most receptive states of a dynamical system, in an input--output sense, based on its governing equations. Interest in the method has continued to grow during the past decade due to its potential to reveal structures in turbulent flows, to guide sensor/actuator placement, and for flow control applications. However, resolvent analysis requires access to high-fidelity numerical solvers to produce the linearized dynamics operator. In this work, we develop a purely data-driven algorithm to perform resolvent analysis to obtain the leading forcing and response modes, without recourse to the governing equations, but instead based on snapshots of the transient evolution of linearly stable flows. The formulation of our method follows from two established facts: $1)$ dynamic mode decomposition can approximate eigenvalues and eigenvectors of the underlying operator governing the evolution of a system from measurement data, and $2)$ a projection of the resolvent operator onto an invariant subspace can be built from this learned eigendecomposition. We demonstrate the method on numerical data of the linearized complex Ginzburg--Landau equation and of three-dimensional transitional channel flow, and discuss data requirements. The ability to perform resolvent analysis in a completely equation-free and adjoint-free manner will play a significant role in lowering the barrier of entry to resolvent research and applications. | physics |
In this paper, we study some conditions related to the question of the possible blow-up of regular solutions to the 3D Navier-Stokes equations. In particular, up to a modification in a proof of a very recent result from \cite{Isab}, we prove that if one component of the velocity remains small enough in a sub-space of $\dot{H}^{\frac{1}{2}}$ that is "almost" scaling invariant, then the 3D Navier-Stokes equations are globally well-posed. Secondly, we investigate the same question under some conditions on one component of the vorticity and the unidirectional derivative of one component of the velocity in some critical Besov spaces of the form $L^p_T(\dot{B}_{2,\infty}^{\alpha, \frac{2}{p}-\frac{1}{2}-\alpha})$ or $L^p_T(\dot{B}_{q,\infty}^{ \frac{2}{p}+\frac{3}{q}-2})$. | mathematics |
In this work we propose a quantum version of a generalized Monty Hall game, that is, one in which the parameters of the game are left free and not fixed to their regular values. The developed quantum scheme is then used to study the expected payoff of the player, using both a separable and an entangled initial state. In the two cases, the classical mixed-strategy payoff is recovered under certain conditions. Lastly, we extend our quantum scheme to include multiple independent players, and use this extension to sketch two possible applications of the game mechanics to quantum networks, specifically, two validated multi-party quantum key-distribution protocols. | quantum physics |
Torsion pendulums have been widely used in physical experiments, because their small restoring forces are suitable for tiny force measurement. Recently, some applications such as low-frequency gravity gradiometers have been proposed by focusing on their low resonant frequencies. Torsion pendulums with low resonant frequencies enable the suspended masses to be isolated from the ground, allowing for good response to the fluctuation of local gravity field at low frequencies. However, translational ground vibration can be transferred to the horizontal rotation of the torsion pendulum nonlinearly. This effect can have a non-negligible contribution to the sensitivity of torsion pendulums as gravity gradiometers. This paper evaluates the amount of nonlinear vibration noise, and discusses how to reduce it. | physics |
The large number of videos popping up every day makes it more and more critical that key information within videos can be extracted and understood in a very short time. Video summarization, the task of finding the smallest subset of frames which still conveys the whole story of a given video, is thus of great significance to improve the efficiency of video understanding. We propose a novel Dilated Temporal Relational Generative Adversarial Network (DTR-GAN) to achieve frame-level video summarization. Given a video, it selects the set of key frames, which contain the most meaningful and compact information. Specifically, DTR-GAN learns a dilated temporal relational generator and a discriminator with a three-player loss in an adversarial manner. A new dilated temporal relation (DTR) unit is introduced to enhance temporal representation capturing. The generator uses this unit to effectively exploit global multi-scale temporal context to select key frames and to complement the commonly used Bi-LSTM. To ensure that summaries capture enough key video representation from a global perspective rather than a trivial randomly shortened sequence, we present a discriminator that learns to enforce both the information completeness and compactness of summaries via a three-player loss. The loss includes the generated summary loss, the random summary loss, and the real summary (ground-truth) loss, which play important roles in better regularizing the learned model to obtain useful summaries. Comprehensive experiments on three public datasets show the effectiveness of the proposed approach. | computer science |
Some effects of vacuum polarization in QED due to the presence of field sources are investigated. We focus on effects with no counterpart in Maxwell electrodynamics. The Uehling interaction energy between two stationary point-like charges is calculated exactly in terms of Meijer-G functions. Effects induced on a hydrogen atom by the vacuum polarization in the vicinity of a Dirac string are considered. We also calculate the interaction between two parallel Dirac strings and corrections to the energy levels of a quantum particle constrained to move on a ring circumventing a solenoid. | high energy physics theory |
Bayesian interpretations of neural networks have a long history, dating back to early work in the 1990's, and have recently regained attention because of their desirable properties like uncertainty estimation, model robustness and regularisation. We want to discuss here the application of Bayesian models to knowledge sharing between neural networks. Knowledge sharing comes in different facets, such as transfer learning, model distillation and shared embeddings. All of these tasks have in common that learned "features" ought to be shared across different networks. Theoretically rooted in the concepts of Bayesian neural networks, this work has widespread application to general deep learning. | statistics |
Classical chaotic systems exhibit exponentially diverging trajectories due to small differences in their initial state. The analogous diagnostic in quantum many-body systems is an exponential growth of out-of-time-ordered correlation functions (OTOCs). These quantities can be computed for various models, but their experimental study requires the ability to evolve quantum states backward in time, similar to the canonical Loschmidt echo measurement. In some simple systems, backward time evolution can be achieved by reversing the sign of the Hamiltonian; however in most interacting many-body systems, this is not a viable option. Here we propose a new family of protocols for OTOC measurement that do not require backward time evolution. Instead, they rely on ordinary time-ordered measurements performed in the thermofield double (TFD) state, an entangled state formed between two identical copies of the system. We show that, remarkably, in this situation the Lyapunov chaos exponent $\lambda_L$ can be extracted from the measurement of an ordinary two-point correlation function. As an unexpected bonus, we find that our proposed method yields the so-called "regularized" OTOC -- a quantity that is believed to most directly indicate quantum chaos. According to recent theoretical work, the TFD state can be prepared as the ground state of two weakly coupled identical systems and is therefore amenable to experimental study. We illustrate the utility of these protocols on the example of the maximally chaotic Sachdev-Ye-Kitaev model and support our findings by extensive numerical simulations. | condensed matter |
Cities are complex systems comprised of socioeconomic systems relying on critical services delivered by multiple physical infrastructure networks. Due to interdependencies between social and physical systems, disruptions caused by natural hazards may cascade across systems, amplifying the impact of disasters. Despite the increasing threat posed by climate change and rapid urban growth, how to design interdependencies between social and physical systems to achieve resilient cities has been largely unexplored. Here, we study the socio-physical interdependencies in urban systems and their effects on disaster recovery and resilience, using large-scale mobility data collected from Puerto Rico during Hurricane Maria. We find that as cities grow in scale and expand their centralized infrastructure systems, the recovery efficiency of critical services improves; however, this curtails the self-reliance of socio-economic systems during crises. Results show that maintaining self-reliance among social systems could be key to developing resilient urban socio-physical systems for cities facing rapid urban growth. | physics |
Hybrid entangled states prove to be necessary for quantum information processing within heterogeneous quantum networks. A method with an irreducible number of consumed resources that firmly provides hybrid CV-DV entanglement for any input conditions of the experimental setup is proposed. Namely, a family of CV states is introduced. Each of these CV states is first superimposed on a beam-splitter with a delocalized photon and then detected by a photo-detector behind the beam-splitter. Detection of any photon number heralds the generation of a hybrid CV-DV entangled state in the outputs, independent of the transmission/reflection coefficients of the beam-splitter and the size of the input CV state. Nonclassical properties of the generated state are studied and its entanglement degree in terms of negativity is calculated. There are wide domains of values of the input parameters of the experimental setup that can be chosen to make the generated state maximally entangled. The proposed method is also applicable to truncated versions of the input CV states. We also propose a simple method to produce even/odd CV states. | quantum physics |
The heavy singlet Majorana neutrinos are introduced to generate the neutrino mass in the so-called phenomenological type-I seesaw mechanism. The phenomena induced by the heavy Majorana neutrinos are important to search for new physics. In this paper, we explore the heavy Majorana neutrino production and decay at future $e^{-}p$ colliders. The corresponding cross sections via $W$ and photon fusion are predicted for different collider energies. Combined with the results of the heavy Majorana neutrino production via single $W$ exchange, this work can provide helpful information to search for heavy Majorana neutrinos at future $e^{-}p$ colliders. | high energy physics phenomenology |
Maximum run-length limited codes are constraint codes used in communication and data storage systems. Insertion/deletion correcting codes correct insertion or deletion errors caused in transmitted sequences and are used for combating synchronization errors. This paper investigates the maximum run-length limited single insertion/deletion correcting (RLL-SIDC) codes. More precisely, we construct efficiently encodable and decodable RLL-SIDC codes. Moreover, we present its encoding algorithm and show the redundancy of the code. | computer science |
Incoherent solar radio radiation comes from the free-free, gyroresonance, and gyrosynchrotron emission mechanisms. Free-free is primarily produced from Coulomb collisions between thermal electrons and ions. Gyroresonance and gyrosynchrotron result from the acceleration of low-energy electrons and mildly relativistic electrons, respectively, in the presence of a magnetic field. In the non-flaring Sun, free-free is the dominant emission mechanism with the exception of regions of strong magnetic fields which emit gyroresonance at microwaves. Due to its ubiquitous presence, free-free emission can be used to probe the non-flaring solar atmosphere above temperature minimum. Gyroresonance opacity depends strongly on the magnetic field strength and orientation; hence it provides a unique tool for the estimation of coronal magnetic fields. Gyrosynchrotron is the primary emission mechanism in flares at frequencies higher than 1-2 GHz and depends on the properties of both the magnetic field and the accelerated electrons, as well as the properties of the ambient plasma. In this paper we discuss in detail the above mechanisms and their diagnostic potential. | astrophysics |
We analyze the free energy and the overlaps in the 2-spin spherical Sherrington Kirkpatrick spin glass model with an external field for the purpose of understanding the transition between this model and the one without an external field. We compute the limiting values and fluctuations of the free energy as well as three types of overlaps in the setting where the strength of the external field goes to zero as the dimension of the spin variable grows. In particular, we consider overlaps with the external field, the ground state, and a replica. Our methods involve a contour integral representation of the partition function along with random matrix techniques. We also provide computations for the matching between different scaling regimes. Finally, we discuss the implications of our results for susceptibility and for the geometry of the Gibbs measure. Some of the findings of this paper are confirmed rigorously by Landon and Sosoe in their recent paper which came out independently and simultaneously. | condensed matter |
Spin waves (SW) are the excitation of the spin system in a ferromagnetic condensed matter body. They are collective excitations of the electron system and, from a quasi-classical point of view, can be understood as a coherent precession of the electrons' spins. Analogous to photons, they are also referred to as magnons indicating their quasi-particle character. The collective nature of SWs is established by the short-range exchange interaction as well as the non-local magnetic dipolar interaction, resulting in coherence of SWs from mesoscopic to even macroscopic length scales. As one consequence of this collective interaction, SWs are "charge current free" and, therefore, less subject to dissipation caused by scattering with impurities on the atomic level. This is a clear advantage over diffusive transport in spintronics that not only uses the charge of an electron but also its spin degree of freedom. Any (spin) current naturally involves motion and, thus, scattering of electrons leading to excessive heating as well as losses. This renders SWs a promising alternative to electric (spin) currents for the transport of spin information - one of the grand challenges of condensed matter physics. | condensed matter |
It is well-known that r-mode oscillations of rotating neutron stars may be unstable with respect to the gravitational wave emission. It is highly unlikely to observe a neutron star with the parameters within the instability window, a domain where this instability is not suppressed. But if one adopts the `minimal' (nucleonic) composition of the stellar interior, a lot of observed stars appear to be within the r-mode instability window. One of the possible solutions to this problem is to account for hyperons in the neutron star core. The presence of hyperons allows for a set of powerful (lepton-free) non-equilibrium weak processes, which increase the bulk viscosity, and thus suppress the r-mode instability. Existing calculations of the instability windows for hyperon NSs generally use reaction rates calculated for the $\Sigma^-\Lambda$ hyperonic composition via the contact $W$ boson exchange interaction. In contrast, here we employ hyperonic equations of state where the $\Lambda$ and $\Xi^-$ are the first hyperons to appear (the $\Sigma^-$'s, if they are present, appear at much larger densities), and consider the meson exchange channel, which is more effective for the lepton-free weak processes. We calculate the bulk viscosity for the non-paired $npe\mu\Lambda\Xi^-$ matter using the meson exchange weak interaction. A number of viscosity-generating non-equilibrium processes is considered (some of them for the first time in the neutron-star context). The calculated reaction rates and bulk viscosity are approximated by simple analytic formulas, easy-to-use in applications. Applying our results to calculation of the instability window, we argue that accounting for hyperons may be a viable solution to the r-mode problem. | astrophysics |
Temporal action localization is an important step towards video understanding. Most current action localization methods depend on untrimmed videos with full temporal annotations of action instances. However, it is expensive and time-consuming to annotate both action labels and temporal boundaries of videos. To this end, we propose a weakly supervised temporal action localization method that only requires video-level action instances as supervision during training. We propose a classification module to generate action labels for each segment in the video, and a deep metric learning module to learn the similarity between different action instances. We jointly optimize a balanced binary cross-entropy loss and a metric loss using a standard backpropagation algorithm. Extensive experiments demonstrate the effectiveness of both of these components in temporal localization. We evaluate our algorithm on two challenging untrimmed video datasets: THUMOS14 and ActivityNet1.2. Our approach improves the current state-of-the-art result for THUMOS14 by 6.5% mAP at IoU threshold 0.5, and achieves competitive performance for ActivityNet1.2. | computer science |
Asteroids of size larger than 0.15 km generally do not have periods P smaller than about 2.2 hours, a limit known as the cohesionless spin-barrier. This barrier can be explained by means of the cohesionless rubble-pile structure model. In this paper we explore the possibility that the observed spin-barrier value differs between C- and S-type Main Belt Asteroids (MBAs). On the basis of the actual bulk density values, the expected ratio between the maximum rotation periods is $P_C/P_S \approx 1.4 \pm 0.3$. Using the data available in the asteroid LightCurve Data Base (LCDB) we have found that, as regards the mean spin-barrier values and for asteroids in the 4-20 km range, there is little difference between the two asteroid populations, with a ratio $P_C/P_S \approx 1.20 \pm 0.04$. Uncertainties are still high, mainly because of the small number of MBAs with known taxonomic class in the considered range. In the 4-10 km range, instead, the ratio between the spin-barriers seems closer to 1, with $P_C/P_S \approx 1.11 \pm 0.05$. This behavior could be a direct consequence of a different cohesion strength for C- and S-type asteroids, whose ratio can be estimated. | astrophysics
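The expected ratio quoted above follows from the spin-barrier scaling $P_{\rm crit}\propto\rho^{-1/2}$ for a strengthless body; below is a small illustrative check (the bulk densities are typical assumed values for S- and C-type asteroids, not figures from the paper).

```python
# Illustrative check of the rubble-pile spin-barrier scaling P_crit ~ rho^(-1/2).
# The bulk densities below are typical assumed values for S- and C-type
# asteroids, not numbers quoted in the abstract.
import math

G = 6.674e-11                      # m^3 kg^-1 s^-2
rho_S = 2700.0                     # assumed S-type bulk density, kg/m^3
rho_C = 1400.0                     # assumed C-type bulk density, kg/m^3

def spin_barrier_hours(rho):
    """Critical rotation period of a strengthless sphere, P = sqrt(3*pi/(G*rho))."""
    return math.sqrt(3 * math.pi / (G * rho)) / 3600.0

print(f"P_S ~ {spin_barrier_hours(rho_S):.2f} h, P_C ~ {spin_barrier_hours(rho_C):.2f} h")
print(f"P_C / P_S ~ {math.sqrt(rho_S / rho_C):.2f}")   # ~1.4 for these assumed densities
```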
In this work we derive for the first time the complete gravitational cubic-in-spin effective action at the next-to-leading order in the post-Newtonian (PN) expansion for the interaction of generic compact binaries via the effective field theory for gravitating spinning objects, which we extend in this work. This sector, which enters at the fourth and a half PN (4.5PN) order for rapidly-rotating compact objects, completes finite-size effects up to this PN order, and is the first sector completed beyond the current state of the art for generic compact binary dynamics at the 4PN order. At this order in spins with gravitational nonlinearities we have to take into account additional terms, which arise from a new type of worldline couplings, due to the fact that at this order the Tulczyjew gauge for the rotational degrees of freedom, which involves the linear momentum, can no longer be approximated only in terms of the four-velocity. One of the main motivations for us to tackle this sector is also to see what happens when we go to a sector which corresponds to the gravitational Compton scattering with quantum spins larger than one, and possibly also to gain insight into the inability to uniquely fix its amplitude from factorization when spins larger than two are involved. A general observation that we can clearly make already is that even-parity sectors in the order of the spin are easier to handle than odd ones. In the quantum context this corresponds to the greater ease of dealing with bosons compared to fermions. | high energy physics theory
We construct de Sitter branes in a flat bulk of massive gravity in $5D$. We find two branches of solutions, reminiscent of the normal and self-accelerating branches in DGP, but with rather different properties. Neither branch has a self-accelerating limit: the background geometry requires having a nonvanishing tension. On the other hand, on both branches there are sub-branches where the leading order contributions of the tension to the curvature cancel. In these cases it turns out that larger tensions curve the background less. Further, both branches support a localized $4D$ massless graviton for a special choice of bulk mass terms. This choice may be protected by enhanced gauge symmetry at least at the linearized level. Finally, we generalize the solutions to the case of bigravity in a flat $5D$ bulk. | high energy physics theory |
We investigate the interesting impact of mobility on the problem of efficient wireless power transfer in ad hoc networks. We consider a set of mobile agents (consuming energy to perform certain sensing and communication tasks), and a single static charger (with finite energy) which can recharge the agents when they get in its range. In particular, we focus on the problem of efficiently computing the appropriate range of the charger with the goal of prolonging the network lifetime. We first demonstrate (under the realistic assumption of fixed energy supplies) the limitations of any fixed charging range and, therefore, the need for (and power of) a dynamic selection of the charging range, by adapting to the behavior of the mobile agents which is revealed in an online manner. We investigate the complexity of optimizing the selection of such an adaptive charging range, by showing that two simplified offline optimization problems (closely related to the online one) are NP-hard. To effectively address the involved performance trade-offs, we finally present a variety of adaptive heuristics, assuming different levels of agent information regarding their mobility and energy. | computer science |
Implementation of high-fidelity swapping operations is of vital importance to execute quantum algorithms on a quantum processor with limited connectivity. We present an efficient pulse control technique, the cross-cross resonance (CCR) gate, to implement iSWAP and SWAP operations with dispersively-coupled fixed-frequency transmon qubits. The key ingredient of the CCR gate is simultaneously driving both of the coupled qubits at the frequency of the other qubit, wherein a fast two-qubit interaction roughly equivalent to the XY entangling gates is realized without strongly driving the qubits. We develop the calibration technique for the CCR gate and evaluate the performance of the iSWAP and SWAP gates. The CCR gate shows roughly two-fold improvement in the average gate error and more than 10% reduction in gate times compared to the conventional decomposition based on the cross resonance gate. | quantum physics
We study the transfer of energy through a network of coupled oscillators, which represents a minimal microscopic power grid connecting multiple active quantum machines. We evaluate the resulting energy currents in the macroscopic, thermal, and quantum regime and describe how transport is affected by the competition between coherent and incoherent processes and nonlinear saturation effects. Specifically, we show that the transfer of energy through such networks is strongly influenced by a nonequilibrium phase transition between a noise-dominated and a coherent transport regime. This transition is associated with the formation and breaking of spatial symmetries and is identified as a generic feature of active networks. Therefore, these findings have important practical consequences for the distribution of energy over coherent microwave, optical, or phononic channels, in particular close to or at the quantum limit. | quantum physics
Here we review some of the recent developments in Quantum Optics. After a brief introduction to the historical development of the subject, we discuss some of the modern aspects of quantum optics including atom-field interactions, quantum state engineering, metamaterials and plasmonics, optomechanical systems, PT (Parity-Time) symmetry in quantum optics, as well as quasi-probability distributions and quantum state tomography. Further, the recent developments in topological photonics are briefly discussed. The potent role of the subject in the development of our understanding of quantum physics and modern technologies is brought out. | quantum physics
We study kernel functions of L-functions and products of L-functions of Hilbert cusp forms over real quadratic fields. This extends the results on elliptic modular forms by Diamantis and C. O'Sullivan. | mathematics
We propose an automated verification technique for hypersafety properties, which express sets of valid interrelations between multiple finite runs of a program. The key observation is that constructing a proof for a small representative set of the runs of the product program (i.e. the product of the several copies of the program by itself), called a reduction, is sufficient to formally prove the hypersafety property about the program. We propose an algorithm based on a counterexample-guided refinement loop that simultaneously searches for a reduction and a proof of the correctness for the reduction. We demonstrate that our tool Weaver is very effective in verifying a diverse array of hypersafety properties for a diverse class of input programs. | computer science |
We propose the cone epsilon-dominance approach to improve convergence and diversity in multiobjective evolutionary algorithms (MOEAs). A cone-eps-MOEA is presented and compared with MOEAs based on the standard Pareto relation (NSGA-II, NSGA-II*, SPEA2, and a clustered NSGA-II) and on the epsilon-dominance (eps-MOEA). The comparison is performed both in terms of computational complexity and on four performance indicators selected to quantify the quality of the final results obtained by each algorithm: the convergence, diversity, hypervolume, and coverage of many sets metrics. Sixteen well-known benchmark problems are considered in the experimental section, including the ZDT and the DTLZ families. To evaluate the possible differences amongst the algorithms, a carefully designed experiment is performed for the four performance metrics. The results obtained suggest that the cone-eps-MOEA is capable of presenting an efficient and balanced performance over all the performance metrics considered. These results strongly support the conclusion that the cone-eps-MOEA is a competitive approach for obtaining an efficient balance between convergence and diversity to the Pareto front, and as such represents a useful tool for the solution of multiobjective optimization problems. | computer science |
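For reference, a minimal sketch of the standard additive ε-dominance test (for minimization) that the cone-ε relation generalizes; this is the baseline relation used by the eps-MOEA, not the cone construction proposed in the paper.

```python
# Minimal sketch of standard additive epsilon-dominance for minimization.
# The cone-epsilon relation proposed in the paper generalizes this test;
# the exact cone construction is not reproduced here.
def eps_dominates(f_x, f_y, eps):
    """True if objective vector f_x epsilon-dominates f_y (both to be minimized)."""
    at_least_as_good = all(fx - e <= fy for fx, fy, e in zip(f_x, f_y, eps))
    strictly_better = any(fx - e < fy for fx, fy, e in zip(f_x, f_y, eps))
    return at_least_as_good and strictly_better

# Example: with eps = (0.1, 0.1), the first point epsilon-dominates the second.
print(eps_dominates((1.0, 2.0), (1.05, 2.5), eps=(0.1, 0.1)))   # True
```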
Molecular matter-wave interferometry enables novel strategies for manipulating the internal mechanical motion of complex molecules. Here, we show how chiral molecules can be prepared in a quantum superposition of two enantiomers by far-field matter-wave diffraction and how the resulting tunnelling dynamics can be observed. We determine the impact of ro-vibrational phase averaging and propose a setup for sensing enantiomer-dependent forces, parity-violating weak interactions, and environment-induced superselection of handedness, as suggested to resolve Hund's paradox. Using ab-initio tunnelling calculations, we identify [4]-helicene derivatives as promising candidates to implement the proposal with state-of-the-art techniques. This work opens the door for quantum sensing and metrology with chiral molecules. | quantum physics |
We calculate the simultaneous double-soft limit of two massless closed strings scattering with any number of closed string tachyons to the subleading order at the tree level. The limit factorizes the scattering amplitude into a double-soft factor multiplying the pure tachyon subamplitude, suggesting a universal double-soft theorem for the massless closed string. We confirm an existing result for the double-soft graviton in an on-shell equivalent, but different form, while also establishing the double-soft factorization behavior of the string dilaton and of the Kalb-Ramond state, as well as the mixed graviton-dilaton case. We also show that the simultaneous and consecutive double-soft theorems are consistent with each other. We furthermore provide a complete field theory diagrammatic view on our result, which enables us in particular to establish a four-point interaction vertex for two tachyons and two massless closed string states, as well as the three-point interaction of two massless closed string states and one tachyon that is missing in field theory. | high energy physics theory
Quantum information and quantum foundations are becoming popular topics for advanced undergraduate courses. Many of the fundamental concepts and applications in these two fields, such as delayed choice experiments and quantum encryption, are comprehensible to undergraduates with basic knowledge of quantum mechanics. In this paper, we show that the quantum eraser, usually used to study the duality between wave and particle properties, can also serve as a generic platform for quantum key distribution. We present a pedagogical example of an algorithm to securely share random keys using the quantum eraser platform and propose its implementation with quantum circuits. | quantum physics |
We apply the method of QCD sum rules to study the $s s \bar s \bar s$ tetraquark states of $J^{PC} = 0^{-+}$. We construct all the relevant $s s \bar s \bar s$ tetraquark currents, and find that there are only two independent ones. We use them to further construct two weakly-correlated mixed currents. One of them leads to reliable QCD sum rule results and the mass is extracted to be $2.51^{+0.15}_{-0.12}$ GeV, suggesting that the $X(2370)$ or the $X(2500)$ can be explained as the $ss\bar s\bar s$ tetraquark state of $J^{PC} = 0^{-+}$. To verify this interpretation, we propose to further study the $\pi\pi/K \bar K$ invariant mass spectra of the $J/\psi \to \gamma \pi \pi \eta^\prime/\gamma K \bar K \eta^\prime$ decays in BESIII to examine whether there exists the $f_0(980)$ resonance. | high energy physics phenomenology |
Since the discovery by Stephen Hawking that black holes emit radiation in the context of the semiclassical approach to gravity, black hole thermodynamics has become an active field of research in theoretical physics. In this thesis, the influence of scalar fields on black hole thermodynamics in $D=4$ dimensions is studied. On one hand, the role played by scalar fields in the first law of black hole thermodynamics is elucidated, by using the quasilocal formalism of Brown and York, which is based on a correct variational principle, and some concrete examples are provided. On the other hand, the thermodynamic stability of asymptotically flat charged hairy black hole exact solutions is analysed. The solutions considered have a non-trivial scalar field potential and they can be embedded in supergravity theories. It is explicitly shown that these solutions contain thermodynamically stable black holes. | high energy physics theory
In this Letter, we study the kinematic properties of ascending hot blobs associated with confined flares. Taking advantage of high-cadence extreme-ultraviolet images provided by the Atmospheric Imaging Assembly on board the Solar Dynamics Observatory, we find that for the 26 events selected here, the hot blobs are first impulsively accelerated outward, but then quickly slow down to motionlessness. Their velocity evolution is basically synchronous with the temporal variation of the Geostationary Operational Environmental Satellite soft X-ray flux of the associated flares, except that the velocity peak precedes the soft X-ray peak by minutes. Moreover, the duration of the acceleration phase of the erupting blobs is moderately correlated with that of the flare rise phase. For nine of the 26 cases, the erupting blobs even appear minutes prior to the onset of the associated flares. Our results show that a fraction of confined flares also involve the eruption of a magnetic flux rope, which sometimes is formed and heated prior to the flare onset. We suggest that the initiation and development of these confined flares are similar to that of eruptive ones, and the main difference may lie in the background field constraint, which is stronger for the former than for the latter. | astrophysics |
We investigate the discounting mismatch in actor-critic algorithm implementations from a representation learning perspective. Theoretically, actor-critic algorithms usually have discounting for both actor and critic, i.e., there is a $\gamma^t$ term in the actor update for the transition observed at time $t$ in a trajectory and the critic is a discounted value function. Practitioners, however, usually ignore the discounting ($\gamma^t$) for the actor while using a discounted critic. We investigate this mismatch in two scenarios. In the first scenario, we consider optimizing an undiscounted objective $(\gamma = 1)$ where $\gamma^t$ disappears naturally $(1^t = 1)$. We then propose to interpret the discounting in critic in terms of a bias-variance-representation trade-off and provide supporting empirical results. In the second scenario, we consider optimizing a discounted objective ($\gamma < 1$) and propose to interpret the omission of the discounting in the actor update from an auxiliary task perspective and provide supporting empirical results. | computer science |
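A schematic illustration of the mismatch discussed above: the "theoretical" actor update carries a $\gamma^t$ weight per timestep, whereas common implementations drop it. The function below is an assumed, simplified form of a policy-gradient loss, not code from the paper.

```python
# Schematic policy-gradient (actor) loss with and without the gamma^t factor.
# Purely illustrative; log_probs and advantages are assumed to come from a
# discounted critic, as in common actor-critic implementations.
import torch

def actor_loss(log_probs, advantages, gamma, use_gamma_t):
    """log_probs, advantages: 1-D tensors over the timesteps of one trajectory."""
    T = log_probs.shape[0]
    if use_gamma_t:
        weights = gamma ** torch.arange(T, dtype=log_probs.dtype)   # theory: gamma^t term
    else:
        weights = torch.ones(T, dtype=log_probs.dtype)              # common practice
    return -(weights * log_probs * advantages.detach()).mean()
```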
Magnetization hysteresis loops of tin samples with an inverted opal structure are presented. The sample formed by tin particles with the size of 70 and 128 nm is found to be a type-I superconductor. The tin sample formed by 80 and 42 nm particles demonstrates an analog of intertype superconductivity: features of both type-I and II superconductors are observed on the magnetization isothermal curves. Behaviors of the irreversible and reversible magnetization support coexistence of type-I and II superconducting nanoparticles in this sample. | condensed matter |
Natural disasters such as storms usually bring significant damage to distribution grids. However, many distribution grids are not equipped with sensors that can pinpoint the location of damaged equipment (lines, transformers, etc.), which greatly increases the difficulty of outage restoration. This paper investigates the optimal routing of repair vehicles to restore outages in a distribution grid as fast as possible after a storm. First, the vehicle routing problem is formulated as a sequential stochastic optimization problem. In the formulated optimization model, the belief state of the power grid is updated according to the phone calls from customers and the information collected by the repair vehicles. Second, an AlphaZero based utility vehicle routing (AlphaZero-UVR) approach is developed to achieve the real-time dispatching of the repair vehicle. The proposed AlphaZero-UVR approach combines deep neural networks with a Monte-Carlo tree search (MCTS) to make lookahead search decisions, which can learn to navigate the repair vehicle without human guidance. Simulation results show that the proposed approach can effectively navigate the vehicle to repair all outages. | electrical engineering and systems science
Typically, a less fundamental theory, or structure, emerging from a more fundamental one is an example of synchronic emergence. A model (and the physical state it describes) emerging from a prior model (state) upon which it nevertheless depends is an example of diachronic emergence. The case of spacetime emergent from quantum gravity and quantum cosmology challenges these two conceptions of emergence. Here, I propose two more-general conceptions of emergence, analogous to the synchronic and diachronic ones, but which are potentially applicable to the case of emergent spacetime: an inter-level, hierarchical conception, and an intra-level, `flat' conception. I then explore whether, and how, these ideas may be applicable in the case of several putative examples of relativistic spacetime emergent from the non-spatiotemporal structures described by different approaches to quantum gravity, and of spacetime emergent from a non-spatiotemporal `big bang' state according to different examples of quantum cosmology. | physics |
We consider a class of area-preserving, piecewise affine maps on the 2-sphere. These maps encode degenerating families of K3 surface automorphisms and are profitably studied using techniques from tropical and Berkovich geometries. | mathematics |
We study the relations between entropy computed using the island prescription, von Neumann entropy, and coarse grained entropy in a doubly-holographic model. In this model an eternal AdS black hole with a Planck brane has two dual boundary formulations: a BCFT coupled to additional degrees of freedom at its boundary, and an effective CFT in thermal equilibrium with a semi-classical eternal black hole. Assuming entanglement wedge reconstruction, we show that the homology condition for bulk RT surfaces depends on which of the dual boundary formulations is taken. The island formula in the effective description computes the von Neumann entropy in the BCFT description. The von Neumann entropy in the effective description is a coarse grained entropy in the BCFT description. Simple operators which act on the radiation are identical in both formulations, while complex operators map non-locally. We use a toy model to demonstrate how such a map between two descriptions of the system gives rise to the island formula and wormholes. We conjecture that black hole complementarity is realized analogously to the doubly-holographic situation. A black hole can either be described as quantum gravitational object with structure near the horizon, or as a semi-classical black hole which respects the equivalence principle but never releases its information. The latter description is only valid if we allow observers to only act with sufficiently simple operators. Information is never cloned in either description. | high energy physics theory |
We present results of a numerical experiment in which a neutral spin-1/2 particle subjected to a static magnetic vortex field passes through a double-slit barrier. We demonstrate that the resulting interference pattern on a detection screen exhibits fringes reminiscent of Aharonov-Bohm scattering by a magnetic flux tube. To gain better understanding of the observed behavior, we provide analytic solutions for a neutral spin-1/2 rigid planar rotor in the aforementioned magnetic field. We demonstrate how that system exhibits a non-Abelian Aharonov-Bohm effect due to the emergence of an effective Wu-Yang (WY) flux tube. We study the behavior of the gauge invariant partition function and demonstrate a topological phase transition for the spin-1/2 planar rotor. We provide an expression for the partition function in which its dependence on the Wilson loop integral of the WY gauge potential is explicit. We generalize to a spin-1 system in order to explore the Wilczek-Zee (WZ) mechanism in a full quantum setting. We show how degeneracy can be lifted by higher order gauge corrections that alter the semi-classical, non-Abelian WZ phase. Models that allow analytic description offer a foil to objections that question the fidelity of predictions based on the generalized Born-Oppenheimer approximation in atomic and molecular systems. Though the primary focus of this study concerns the emergence of gauge structure in neutral systems, the theory is also applicable to systems that possess electric charge. In that case, we explore interference between fundamental gauge fields (i.e. electromagnetism) and effective gauge potentials. We propose a possible laboratory demonstration of the latter in an ion trap setting. We illustrate how effective gauge potentials influence wave-packet revivals in the said ion trap. | quantum physics
Joining the shortest or least loaded queue among $d$ randomly selected queues are two fundamental load balancing policies. Under both policies the dispatcher does not maintain any information on the queue length or load of the servers. In this paper we analyze the performance of these policies when the dispatcher has some memory available to store the ids of some of the idle servers. We consider methods where the dispatcher discovers idle servers as well as methods where idle servers inform the dispatcher about their state. We focus on large-scale systems and our analysis uses the cavity method. The main insight provided is that the performance measures obtained via the cavity method for a load balancing policy {\it with} memory reduce to the performance measures for the same policy {\it without} memory provided that the arrival rate is properly scaled. Thus, we can study the performance of load balancers with memory in the same manner as load balancers without memory. In particular this entails closed form solutions for joining the shortest or least loaded queue among $d$ randomly selected queues with memory in case of exponential job sizes. Moreover, we obtain a simple closed form expression for the (scaled) expected waiting time as the system tends towards instability. We present simulation results that support our belief that the approximation obtained by the cavity method becomes exact as the number of servers tends to infinity. | computer science |
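A toy discrete-event simulation sketching a "power of d choices with memory" dispatcher of the kind analyzed above; exponential job sizes, Poisson arrivals, and the rule by which idle servers are reported to the dispatcher are simplifying assumptions for illustration only, not the paper's model or cavity-method analysis.

```python
# Toy simulation of join-least-loaded-of-d dispatching with a memory of idle servers.
# Exponential job sizes, Poisson arrivals; all parameters are illustrative.
import random

def simulate(n=100, d=2, lam=0.9, mu=1.0, n_jobs=50_000, seed=0):
    rng = random.Random(seed)
    free_at = [0.0] * n              # time at which each server's backlog drains
    memory = set(range(n))           # ids of servers the dispatcher believes are idle
    t, waited = 0.0, 0
    for _ in range(n_jobs):
        t += rng.expovariate(lam * n)                        # next Poisson arrival
        # simplification: every server that has drained reports itself as idle
        memory.update(s for s in range(n) if free_at[s] <= t)
        if memory:
            s = memory.pop()                                 # use a remembered idle server
        else:
            # otherwise join the least-loaded of d randomly sampled servers
            s = min(rng.sample(range(n), d), key=lambda i: free_at[i])
        start = max(t, free_at[s])
        waited += start > t                                  # job had to wait
        free_at[s] = start + rng.expovariate(mu)             # exponential job size
        memory.discard(s)
    return waited / n_jobs

print(f"fraction of jobs that waited: {simulate():.3f}")
```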
Reliable segmentation of retinal vessels can be employed as a way of monitoring and diagnosing certain diseases, such as diabetes and hypertension, as they affect the retinal vascular structure. In this work, we propose the Residual Spatial Attention Network (RSAN) for retinal vessel segmentation. RSAN employs a modified residual block structure that integrates DropBlock, which can not only be utilized to construct deep networks to extract more complex vascular features, but can also effectively alleviate the overfitting. Moreover, in order to further improve the representation capability of the network, based on this modified residual block, we introduce the spatial attention (SA) and propose the Residual Spatial Attention Block (RSAB) to build RSAN. We adopt the public DRIVE and CHASE DB1 color fundus image datasets to evaluate the proposed RSAN. Experiments show that the modified residual structure and the spatial attention are effective in this work, and our proposed RSAN achieves the state-of-the-art performance. | electrical engineering and systems science |
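One common way to realize a spatial-attention gate of the kind mentioned above is CBAM-style channel pooling followed by a convolution and a sigmoid mask; the sketch below is that generic form, not the authors' exact RSAB design.

```python
# Generic spatial-attention block sketch (CBAM-style): pool over channels,
# convolve, and gate the feature map with a sigmoid mask. Not the paper's
# exact RSAB design.
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):                              # x: (B, C, H, W)
        avg_pool = x.mean(dim=1, keepdim=True)         # (B, 1, H, W)
        max_pool = x.max(dim=1, keepdim=True).values   # (B, 1, H, W)
        mask = torch.sigmoid(self.conv(torch.cat([avg_pool, max_pool], dim=1)))
        return x * mask                                # re-weight spatial positions

# Example: attach after a residual block's output feature map.
feat = torch.randn(2, 64, 48, 48)
print(SpatialAttention()(feat).shape)                  # torch.Size([2, 64, 48, 48])
```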
Vibrationally resolved near-edge x-ray absorption spectra at the K-edge for a number of small molecules have been computed from anharmonic vibrational configuration interaction calculations of the Franck-Condon factors. The potential energy surfaces for ground and core-excited states were obtained at the core-valence separated CC2, CCSD, CCSDR(3), and CC3 levels of theory, employing the Adaptive Density-Guided Approach (ADGA) scheme to select the single points at which to perform the energy calculations. We put forward an initial attempt to include pair-mode coupling terms to describe the potential of polyatomic molecules. | physics
The presence of dark matter has been ascertained through a wealth of astrophysical and cosmological phenomena and its nature is a central puzzle in modern science. Elementary particles stand as the most compelling explanation. They have been intensively searched for at underground laboratories looking for an energy recoil signal and at telescopes sifting for excess events in gamma-ray or cosmic-ray observations. In this work, we investigate a detection method based on spectroscopy measurements of neutron stars. We outline the luminosity and age of neutron stars for which dark matter scattering off neutrons can heat the stars up to a measurable level. We show that in this case neutron star spectroscopy could constitute the best probe for dark matter particles over a wide range of masses and interaction strengths. | high energy physics phenomenology
In this paper, we propose a new beam allocation strategy aiming to maximize the average successful tracking probability (ASTP) of time-varying millimeter-wave MIMO systems. In contrast to most existing works that employ one transmitting-receiving (Tx-Rx) beam pair once only in each training period, we investigate a more general framework, where the Tx-Rx beam pairs are allowed to be used repeatedly to improve the received signal powers in specific directions. In the case of orthogonal Tx-Rx beam pairs, a power-based estimator is employed to track the time-varying AoA and AoD of the channel, and the resulting training beam pair sequence design problem is formulated as an integer nonlinear programming (I-NLP) problem. By dividing the feasible region into a set of subregions, the formulated I-NLP is decomposed into a series of concave sub I-NLPs, which can be solved by recursively invoking a nonlinear branch-and-bound algorithm. To reduce the computational cost, we relax the integer constraints of each sub I-NLP and obtain a low-complexity solution via solving the Karush-Kuhn-Tucker conditions of their relaxed problems. For the case when the Tx-Rx beam pairs are overlapped in the angular space, we estimate the updated AoA and AoD via an orthogonal matching pursuit (OMP) algorithm. Moreover, since no explicit expression for the ASTP exists for the OMP-based estimator, we derive a closed-form lower bound of the ASTP, based on which a favorable beam pair allocation strategy can be obtained. Numerical results demonstrate the superiority of the proposed beam allocation strategy over existing benchmarks. | electrical engineering and systems science |
We investigate the statistics of work performed on generic disordered, non-interacting nanograins during quantum quenches. The time evolution of work statistics as well as the probability of adiabaticity are found to exhibit universal features, the latter decaying as a stretched exponential. In slowly driven systems, the most important features of work statistics are understood in terms of a diffusion of fermions in energy space, generated by Landau-Zener transitions, and are captured by a Markovian symmetrical exclusion process, with the diffusion constant identified as the absorption rate. The energy absorption is found to exhibit an anomalous frequency dependence at small energies, reflecting the symmetry class of the underlying Hamiltonian. Our predictions can be experimentally verified by calorimetric measurements performed on nanoscale circuits. | condensed matter |
We give a hypothesis on the mass spectrum of compact $N$-quark hadron states in a classical field picture, which indicates that the mass would depend on $N$ roughly as $N^4$. We call our model the "bag-tube oscillation model", which can be seen as a kind of combination of the quark-bag model and the flux-tube model. The large decay widths due to the large masses might be the reason why compact $N$-quark hadrons have not been observed so far. | physics
In the present paper, we investigate the power-law behaviour of the magnetic field spectra in the Earth's magnetosheath region using Cluster spacecraft data under solar minimum conditions. The power spectral density of the magnetic field data and the spectral slopes at various frequencies are analysed. The propagation angle and compressibility are used to test the nature of the turbulent fluctuations. The magnetic field spectra have spectral slopes between -1.5 and 0 down to spatial scales of 20 ion gyroradii and show clear evidence of a transition to steeper spectra at small scales with a second power-law, having slopes between -2.6 and -1.8. At low frequencies, f_sc < 0.3 f_ci (where f_ci is the ion gyro-frequency), the propagation angle is approximately 90 degrees to the mean magnetic field, B_0, and the compressibility shows a broad distribution, 0.1 < R < 0.9. On the other hand, at f_sc > 10 f_ci, the propagation angle exhibits a broad range between 30-90 degrees while R shows only a small variation: 0.2 < R < 0.5. We conjecture that at high frequencies, perpendicularly propagating Alfven waves could partly explain the statistical behaviour of the spectra. To support our prediction of kinetic Alfven wave-dominated spectral slope behaviour at high frequency, we also present a theoretical model and simulate the magnetic field turbulence spectra due to the nonlinear evolution of kinetic Alfven waves. The present study also shows the analogy between the observational and simulated spectra. | physics
The concept of reconfigurable intelligent surface (RIS) has been proposed to change the propagation of electromagnetic waves, e.g., reflection, diffraction, and refraction. To accomplish this goal, the phase values of the discrete RIS units need to be optimized. In this paper, we consider RIS-aided millimeter-wave (mmWave) multiple-input multiple-output (MIMO) systems for both accurate positioning and high data-rate transmission. We propose an adaptive phase shifter design based on hierarchical codebooks and feedback from the mobile station (MS). The benefit of the scheme lies in that the RIS does not require deployment of any active sensors and baseband processing units. During the update process of phase shifters, the combining vector at the MS is also sequentially refined. Simulation results show the performance improvement of the proposed algorithm over the random design scheme, in terms of both positioning accuracy and data rate. Moreover, the performance converges to exhaustive search scheme even in the low signal-to-noise ratio regime. | electrical engineering and systems science |
We present results of an all-sky search for continuous gravitational waves (CWs), which can be produced by fast-spinning neutron stars with an asymmetry around their rotation axis, using data from the second observing run of the Advanced LIGO detectors. We employ three different semi-coherent methods ($\textit{FrequencyHough}$, $\textit{SkyHough}$, and $\textit{Time-Domain $\mathcal{F}$-statistic}$) to search in a gravitational-wave frequency band from 20 to 1922 Hz and a first frequency derivative from $-1\times10^{-8}$ to $2\times10^{-9}$ Hz/s. None of these searches has found clear evidence for a CW signal, so we present upper limits on the gravitational-wave strain amplitude $h_0$ (the lowest upper limit on $h_0$ is $1.7\times10^{-25}$ in the 123-124 Hz region) and discuss the astrophysical implications of this result. This is the most sensitive search ever performed over the broad range of parameters explored in this study. | astrophysics |
Quantifying tensions -- inconsistencies amongst measurements of cosmological parameters by different experiments -- has emerged as a crucial part of modern cosmological data analysis. Statistically-significant tensions between two experiments or cosmological probes may indicate new physics extending beyond the standard cosmological model and need to be promptly identified. We apply several tension estimators proposed in the literature to the Dark Energy Survey (DES) large-scale structure measurement and Planck cosmic microwave background data. We first evaluate the responsiveness of these metrics to an input tension artificially introduced between the two, using synthetic DES data. We then apply the metrics to the comparison of Planck and actual DES Year 1 data. We find that the parameter differences, Eigentension, and Suspiciousness metrics all yield similar results on both simulated and real data, while the Bayes ratio is inconsistent with the rest due to its dependence on the prior volume. Using these metrics, we calculate the tension between DES Year 1 $3\times 2$pt and Planck, finding the surveys to be in $\sim 2.3\sigma$ tension under the $\Lambda$CDM paradigm. This suite of metrics provides a toolset for robustly testing tensions in the DES Year 3 data and beyond. | astrophysics |
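As a minimal illustration of the simplest member of this family of metrics, the one-dimensional Gaussian parameter-difference ("rule of thumb") tension between two independent posteriors; the input numbers below are hypothetical and are not DES or Planck constraints.

```python
# Minimal 1-D "parameter difference" tension between two independent Gaussian
# posteriors: |mu1 - mu2| / sqrt(sigma1^2 + sigma2^2). The inputs are
# hypothetical illustrative numbers, not actual DES or Planck values.
import math

def gaussian_tension_sigma(mu1, sig1, mu2, sig2):
    return abs(mu1 - mu2) / math.sqrt(sig1**2 + sig2**2)

# e.g. two hypothetical measurements of the same parameter
print(f"{gaussian_tension_sigma(0.773, 0.026, 0.834, 0.016):.1f} sigma")
```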
In this paper, a Multiple Models Adaptive Fuzzy Logic Controller (MM-AFLC) with Neural Network Identification is designed to control the unmanned vehicle in Intelligent Autonomous Parking System. The objective is to achieve robust control while maintaining a low implementation cost. The proposed controller design incorporates the following control theorems -- non-linear system identification using neural network, fuzzy logic control, adaptive control as well as multiple models adaptation. Such integration ensures superior performance compared to previous work. The generalized controller can be applied to different systems without prior knowledge of the actual plant model. In the intelligent autonomous parking system, the proposed controller can be used for both vehicle speed control and steering wheel turning. With a multiple model adaptive fuzzy logic controller, robustness can be also assured under various operating environments regardless of unpredictable disturbances. Last but not least, comparative experiments have also demonstrated that systems equipped with the new controller are able to achieve faster and smoother convergence. | electrical engineering and systems science |
A monolayer of the high-$T_c$ superconductor FeTe$_{1-x}$Se$_x$ has been predicted to realize a topologically non-trivial state with helical edge modes at its boundary, providing a novel intrinsic system to search for topological superconductivity and Majorana zero modes. Evidence in favor of a topological phase transition and helical edge modes has been identified in recent experiments \cite{Peng2019}. We propose to create Majorana zero modes by applying an in-plane magnetic field to the FeTe$_{1-x}$Se$_x$ monolayer and by tuning the local chemical potential via electric gating. Owing to the anisotropic magnetic couplings on edges from a topological band inversion, an in-plane magnetic field drives the system into an intrinsic high-order topological superconductor phase with Majorana corner modes, without fabricating heterostructures. Furthermore, we demonstrate that Majorana zero modes can occur at other different locations, including the domain wall of chemical potentials at one edge and certain type of tri-junction in the 2D bulk. Our study not only demonstrates FeTe$_{1-x}$Se$_x$ monolayer as a promising Majorana platform with scalability and electrical tunability and within reach of contemporary experimental capability, but also provides a general principle to search for realistic realization of high-order topological superconductivity. | condensed matter |
This paper entails application of the energy shaping methodology to control a flexible, elastic Cosserat rod model. Recent interest in such continuum models stems from applications in soft robotics, and from the growing recognition of the role of mechanics and embodiment in biological control strategies: octopuses are often regarded as iconic examples of this interplay. Here, the dynamics of the Cosserat rod, modeling a single octopus arm, are treated as a Hamiltonian system and the internal muscle actuators are modeled as distributed forces and couples. The proposed energy shaping control design procedure involves two steps: (1) a potential energy is designed such that its minimizer is the desired equilibrium configuration; (2) an energy shaping control law is implemented to reach the desired equilibrium. By interpreting the controlled Hamiltonian as a Lyapunov function, asymptotic stability of the equilibrium configuration is deduced. The energy shaping control law is shown to require only the deformations of the equilibrium configuration. A forward-backward algorithm is proposed to compute these deformations in an online iterative manner. The overall control design methodology is implemented and demonstrated in a dynamic simulation environment. Results of several bio-inspired numerical experiments involving the control of octopus arms are reported. | electrical engineering and systems science |
The spatial collection efficiency portrays the driving forces and loss mechanisms in photovoltaic and photoelectrochemical devices. It is defined as the fraction of photogenerated charge carriers created at a specific point within the device that contribute to the photocurrent. In stratified planar structures, the spatial collection efficiency can be extracted out of photocurrent action spectra measurements empirically, with few a priori assumptions. Although this method was applied to photovoltaic cells made of well-understood materials, it has never been used to study unconventional materials such as metal-oxide semiconductors that are often employed in photoelectrochemical cells. This perspective shows the opportunities that this method has to offer for investigating new materials and devices with unknown properties. The relative simplicity of the method, and its applicability to operando performance characterization, makes it an important tool for analysis and design of new photovoltaic and photoelectrochemical materials and devices. | physics |
The notion of a simple ordered state implies homogeneity. If the order is established by a broken symmetry, elementary Landau theory of phase transitions shows that only one symmetry mode describes this state. Precisely at points of phase coexistence domain states formed of large regions of different phases can be stabilized by long range interactions. In uniaxial antiferromagnets the so-called metamagnetism is an example of such a behavior, when an antiferromagnetic and field-induced spin-polarized paramagnetic/ferromagnetic state co-exist at a jump-like transition in the magnetic phase diagram. Here, combining experiment with theoretical analysis, we show that a different type of mixed state between antiferromagnetism and ferromagnetism can be created in certain acentric materials. In the small-angle neutron scattering experiments we observe a field-driven spin-state in the layered antiferromagnet Ca3Ru2O7, which is modulated on a scale between 8 and 20 nm and has both antiferromagnetic and ferromagnetic parts. We call this state a metamagnetic texture and explain its appearance by the chiral twisting effects of the asymmetric Dzyaloshinskii-Moriya (DM) exchange. The observation can be understood as an extraordinary coexistence, in one thermodynamic state, of spin orders belonging to different symmetries. Experimentally, the complex nature of this metamagnetic state is demonstrated by measurements of anomalies in electronic transport which reflect the spin-polarization in the metamagnetic texture, determination of the magnetic orbital moments, which supports the existence of strong spin-orbit effects, a pre-requisite for the mechanism of twisted magnetic states in this material. | condensed matter |
Oscillons are bound states sustained by self-interactions that appear in rather generic scalar models. They can be extremely long-lived and in the context of cosmology they have a built-in formation mechanism - parametric resonance instability. These features suggest that oscillons can affect the standard picture of scalar ultra-light dark matter (ULDM) models. We explore this idea along two directions. First, we investigate numerically oscillon lifetimes and their dependence on the shape of the potential. We find that scalar potentials that occur in well motivated axion-like models can lead to oscillons that live up to $10^8$ cycles or more. Second, we discuss the observational constraints on the ULDM models once the presence of oscillons is taken into account. For a wide range of axion masses, oscillons decay around or after matter-radiation equality and can thus act as early seeds for structure formation. We also discuss the possibility that oscillons survive up to today. In this case they can most easily play the role of dark matter. | high energy physics phenomenology |
Exploring new constituents of two-dimensional materials and combining their best properties in van der Waals heterostructures are in great demand, as such systems provide a unique platform to discover new physical phenomena and to design novel functionalities in interface-based devices. Herein, PbI2 crystals as thin as few layers are synthesized for the first time, through a facile low-temperature solution approach that yields crystals of large size, regular shape, and different thicknesses in high yields. As a prototypical demonstration of flexible band engineering of PbI2-based interfacial semiconductors, these PbI2 crystals are subsequently assembled with several transition metal dichalcogenide monolayers. The photoluminescence of MoS2 is strongly enhanced in MoS2/PbI2 stacks, while a dramatic photoluminescence quenching of WS2 and WSe2 is revealed in WS2/PbI2 and WSe2/PbI2 stacks. This is attributed to the effective heterojunction formation between PbI2 and these monolayers: type I band alignment in MoS2/PbI2 stacks, where fast-transferred charge carriers accumulate in MoS2 with high emission efficiency, and type II in WS2/PbI2 and WSe2/PbI2 stacks, with separated electrons and holes suitable for light harvesting. Our results demonstrate that MoS2, WS2, and WSe2 monolayers, despite having very similar electronic structures themselves, show completely distinct light-matter interactions when interfacing with PbI2, providing unprecedented capabilities to engineer the device performance of two-dimensional heterostructures. | condensed matter
Recently, the chiral superconductivity of the cosmic string in the axion model has gathered attention. The superconductive nature can alter the standard understanding of the cosmology of the axion model. For example, a string loop with a sizable superconducting current can become a stable configuration, which is called a Vorton. The superconductive nature can also affect the cosmological evolution of the string network. In this paper, we study the stability of the superconducting current in the string. We find the superconductivity is indeed stable for a straight string or infinitely small string core size, even if the carrier particles are unstable in the vacuum. However we also find that the carrier particle decays in a curved string in typical axion models, if the carrier particles are unstable in the vacuum. Accordingly, the lifetime of the Vorton is not far from that of the carrier particle in the vacuum. | high energy physics phenomenology |
Through the use of wavelet based Besov norms, we compute nontrivial multiscale nonlinear features of a given data set so as to enhance the standard Dynamic-Mode Decomposition algorithm. Thus we are able to build sophisticated observables which enhance algorithm performance without placing undue computational burdens on the user. | mathematics |
Dark matter (DM) could couple to particles in the Standard Model (SM) through a light vector mediator. In the limit of small coupling, this portal could be responsible for producing the observed DM abundance through a mechanism known as freeze-in. Furthermore, the requisite DM-SM couplings provide a concrete benchmark for direct and indirect searches for DM. In this paper, we present updated calculations of the relic abundance for DM produced by freeze-in through a light vector mediator. We identify an additional production channel: the decay of photons that acquire an in-medium plasma mass. These plasmon decays are a dominant channel for DM production for sub-MeV DM masses, and including this channel leads to a significant reduction in the predicted signal strength for DM searches. Accounting for production from both plasmon decays and annihilations of SM fermions, the DM acquires a highly non-thermal phase space distribution which impacts the cosmology at later times; these cosmological effects will be explored in a companion paper. | high energy physics phenomenology |
We examine the influence of input data representations on learning complexity. For learning, we posit that each model implicitly uses a candidate model distribution for unexplained variations in the data, its noise model. If the model distribution is not well aligned to the true distribution, then even relevant variations will be treated as noise. Crucially however, the alignment of model and true distribution can be changed, albeit implicitly, by changing data representations. "Better" representations can better align the model to the true distribution, making it easier to approximate the input-output relationship in the data without discarding useful data variations. To quantify this alignment effect of data representations on the difficulty of a learning task, we make use of an existing task complexity score and show its connection to the representation-dependent information coding length of the input. Empirically we extract the necessary statistics from a linear regression approximation and show that these are sufficient to predict relative learning performance outcomes of different data representations and neural network types obtained when utilizing an extensive neural network architecture search. We conclude that to ensure better learning outcomes, representations may need to be tailored to both task and model to align with the implicit distribution of model and task. | computer science |
In the projective space $\mathrm{PG}(3,q)$, we consider the orbits of lines under the stabilizer group of the twisted cubic. It is well known that the lines can be partitioned into classes every of which is a union of line orbits. All types of lines forming a unique orbit are found. For the rest of the line types (apart from one of them) it is proved that they form exactly two or three orbits; sizes and structures of these orbits are determined. Problems remaining open for one type of lines are formulated. For $5\le q\le37$ and $q=64$, they are solved. | mathematics |
A prime $p$ is called a balancing non-Wieferich prime if $B_{p-(\frac{8}{p})}\not\equiv 0\, (mod \, p^2)$, where $B_n$ is the $n$-th balancing number and $\displaystyle\bigg(\frac{8}{p}\bigg)$ denotes the Jacobi symbol. Under the assumption of the $abc$ conjecture for the number field $\mathbb{Q}[\sqrt{2}]$, S. S. Rout proved that there are at least $O(\log x/\log\log x)$ such primes $p\equiv 1 \, (mod\, r)$, where $r>2$ is any fixed integer. In this paper, we improve the lower bound by showing that for any given integer $r>2$ there are $\gg\log x$ primes $p\leq x$ satisfying $B_{p-(\frac{8}{p})}\not\equiv 0\, (mod \, p^2)$ and $p\equiv 1 \, (mod\, r)$, under the $abc$ conjecture for the number field $\mathbb{Q}[\sqrt{2}]$. This improves the recent result of Y. Wang and Y. Ding by adding the additional condition that the primes $p$ lie in an arithmetic progression. | mathematics
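A small numerical sketch of the defining condition, using the standard balancing-number recurrence $B_{n+1}=6B_n-B_{n-1}$ (with $B_0=0$, $B_1=1$) and the fact that $(\frac{8}{p})=(\frac{2}{p})$, which equals $+1$ for $p\equiv\pm1\pmod 8$ and $-1$ otherwise; purely illustrative, not part of the paper's argument.

```python
# Numerical sketch of the balancing non-Wieferich condition
#   B_{p - (8/p)} mod p^2 != 0,
# using B_0 = 0, B_1 = 1, B_{n+1} = 6*B_n - B_{n-1} and (8/p) = (2/p).
# Purely illustrative; not the paper's proof machinery.
def balancing_mod(n, m):
    a, b = 0, 1                       # B_0, B_1 modulo m
    for _ in range(n):
        a, b = b, (6 * b - a) % m
    return a                          # B_n mod m

def jacobi_8(p):                      # (8/p) = (2/p) for an odd prime p
    return 1 if p % 8 in (1, 7) else -1

def is_non_wieferich(p):
    return balancing_mod(p - jacobi_8(p), p * p) != 0

primes = [5, 7, 11, 13, 17, 19, 23, 29]
print([p for p in primes if is_non_wieferich(p)])
```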
Deep neural networks show high accuracy in the problem of semantic and instance segmentation of biomedical data. However, this approach is computationally expensive. The computational cost may be reduced by network simplification after training or by choosing a proper architecture, which provides segmentation with less accuracy but does it much faster. In the present study, we analyzed the accuracy and performance of the UNet and ENet architectures for the problem of semantic image segmentation. In addition, we investigated the ENet architecture by replacing some of its convolution layers with box-convolution layers. The analysis was performed on an original dataset consisting of histology slices with mast cells. These cells provide a region for segmentation with different types of borders, which vary from clearly visible to ragged. ENet was less accurate than UNet by only about 1-2%, but ENet performance was 8-15 times faster than that of UNet. | electrical engineering and systems science