text | label |
---|---|
Electric dipole moments of atoms can arise from P-odd and T-odd electron--nucleon couplings. This work studies a general class of dimension-six electron--nucleon interactions mediated by Lorentz-violating tensors of ranks ranging from $1$ to $4$. The possible couplings are listed as well as their behavior under C, P, and T, allowing us to select the couplings compatible with electric-dipole-moment physics. The unsuppressed contributions of these couplings to the atom's Hamiltonian can be read as equivalent to an electric dipole moment. The Lorentz-violating coefficients' magnitudes are limited using electric-dipole-moment measurements at the levels of $3.2\times10^{-31}\text{(eV)}^{-2}$ or $1.6\times10^{-33}\text{(eV)}^{-2}$. | high energy physics phenomenology |
In this paper, we present a new Marshall-Olkin exponential shock model. The new construction method gives the proposed model further ability to allocate the common joint shock on each of the components, making it suitable for application in fields like reliability and credit risk. The given model has a singular part and supports both positive and negative dependence structures. The main dependence properties of the model are given, and a stress-strength analysis is presented. After a performance analysis of the parameter estimators, a real data set is studied. Finally, we give the multivariate version of the proposed model and its main properties. | statistics |
We propose a neural network model to estimate the current frame from two reference frames, using affine transformation and adaptive spatially-varying filters. The estimated affine transformation allows for using shorter filters compared to existing approaches for deep frame prediction. The predicted frame is used as a reference for coding the current frame. Since the proposed model is available at both encoder and decoder, there is no need to code or transmit motion information for the predicted frame. By making use of dilated convolutions and reduced filter length, our model is significantly smaller, yet more accurate, than any of the neural networks in prior works on this topic. Two versions of the proposed model - one for uni-directional, and one for bi-directional prediction - are trained using a combination of Discrete Cosine Transform (DCT)-based l1-loss with various transform sizes, multi-scale Mean Squared Error (MSE) loss, and an object context reconstruction loss. The trained models are integrated with the HEVC video coding pipeline. The experiments show that the proposed models achieve about 7.3%, 5.4%, and 4.2% bit savings for the luminance component on average in the Low delay P, Low delay, and Random access configurations, respectively. | electrical engineering and systems science |
Knot theory is actively studied both by physicists and mathematicians as it provides a connecting centerpiece for many physical and mathematical theories. One of the challenging problems in knot theory is distinguishing mutant knots. Mutant knots are not distinguished by colored HOMFLY-PT polynomials for knots colored by either symmetric or antisymmetric representations of $SU(N)$. Some mutant knots can be distinguished by the simplest non-symmetric representation $[2,1]$. However, there is a class of mutant knots which require more complex representations like $[4,2]$. In this paper we calculate the polynomials and their differences for mutant knots in the representations $[3,1]$ and $[4,2]$ and study their properties. | high energy physics theory |
The Network-on-Chip (NoC) paradigm has been proposed as a favorable solution to handle the strict communication requirements between the increasingly large number of cores on a single chip. However, NoC systems are exposed to the aggressive scaling down of transistors, low operating voltages, and high integration and power densities, making them vulnerable to permanent (hard) faults and transient (soft) errors. A hard fault in a NoC can lead to external blocking, causing congestion across the whole network. A soft error is more challenging because of its silent data corruption, which leads to a large area of erroneous data due to error propagation, packet re-transmission, and deadlock. In this paper, we present the architecture and design of a comprehensive soft error and hard fault-tolerant 3D-NoC system, named 3D-Hard-Fault-Soft-Error-Tolerant-OASIS-NoC (3D-FETO). With the aid of efficient mechanisms and algorithms, 3D-FETO is capable of detecting and recovering from soft errors which occur in the routing pipeline stages and leverages reconfigurable components to handle permanent faults in links, input buffers, and crossbars. In-depth evaluation results show that the 3D-FETO system is able to work around different kinds of hard faults and soft errors, ensuring graceful performance degradation, while minimizing additional hardware complexity and remaining power efficient. | computer science |
A reliable and affordable access to electricity has become one of the basic needs for humans and is, as such, at the top of the development agenda. It contributes to socio-economic development by transforming the whole spectrum of people's lives - food, education, health care; it spurs new economic opportunities and thus improves livelihoods. Using a comprehensive dataset of pseudonymised mobile phone records, provided by the market share leader, we analyse the impact of electrification on the attractiveness of rural areas in Senegal. We extract communication and mobility flows from the call detail records (CDRs) and show that electrification has a small, yet positive and specific, impact on centrality measures within the communication network and on the volume of incoming visitors. This increased influence is, however, circumscribed to a limited spatial extent, creating a complex competition with nearby areas. Nevertheless, we find that the volume of visitors between any two sites can be well predicted from the level of electrification at the destination combined with the living standard at the origin. In view of these results, we discuss how to obtain the best outcomes from a rural electrification planning strategy. We determine that electrifying clusters of rural sites is a better solution than attempting to centralise electricity supplies to maximise the development of specifically targeted sites. | physics |
Asteroseismology is a powerful tool for probing the internal structures of stars by using their natural pulsation frequencies. It relies on identifying sequences of pulsation modes that can be compared with theoretical models, which has been done successfully for many classes of pulsators, including low-mass solar-type stars, red giants, high-mass stars and white dwarfs. However, a large group of pulsating stars of intermediate mass--the so-called delta Scuti stars--have rich pulsation spectra for which systematic mode identification has not hitherto been possible. This arises because only a seemingly random subset of possible modes are excited, and because rapid rotation tends to spoil the regular patterns. Here we report the detection of remarkably regular sequences of high-frequency pulsation modes in 60 intermediate-mass main-sequence stars, allowing definitive mode identification. Some of these stars have space motions that indicate they are members of known associations of young stars, and modelling of their pulsation spectra confirms that these stars are indeed young. | astrophysics |
This work investigates twinlike scalar field models that support kinks with the same energy density and stability. We find the first-order equations compatible with the equations of motion and use them to calculate the conditions under which the models attain the twinlike character. The linear stability is also investigated, and we show that the addition of extra requirements may lead to the same stability under small fluctuations. | high energy physics theory |
In this paper, we study cross-modal image retrieval, where the inputs contain a source image plus some text that describes certain modifications to this image and the desired image. Prior work usually uses a three-stage strategy to tackle this task: 1) extract the features of the inputs; 2) fuse the features of the source image and its modified text to obtain a fusion feature; 3) learn a similarity metric between the desired image and the source image + modified text by using deep metric learning. Since classical image/text encoders can learn useful representations, and common pair-based loss functions from distance metric learning suffice for cross-modal retrieval, prior methods usually improve retrieval accuracy by designing new fusion networks. However, these methods do not successfully handle the modality gap caused by the inconsistent distribution and representation of the features of different modalities, which greatly influences the feature fusion and similarity learning. To alleviate this problem, we adopt the contrastive self-supervised learning method Deep InfoMax (DIM) in our approach to bridge this gap by enhancing the dependence between the text, the image, and their fusion. Specifically, our method narrows the modality gap between the text modality and the image modality by maximizing the mutual information between their not-exactly-semantically-identical representations. Moreover, we seek an effective common subspace for the semantically same fusion feature and the desired image's feature by utilizing Deep InfoMax between the low-level layer of the image encoder and the high-level layer of the fusion network. Extensive experiments on three large-scale benchmark datasets show that we have bridged the modality gap between different modalities and achieve state-of-the-art retrieval performance. | computer science |
The count-min sketch (CMS) is a randomized data structure that provides estimates of tokens' frequencies in a large data stream using a compressed representation of the data by random hashing. In this paper, we rely on a recent Bayesian nonparametric (BNP) view of the CMS to develop a novel learning-augmented CMS under power-law data streams. We assume that tokens in the stream are drawn from an unknown discrete distribution, which is endowed with a normalized inverse Gaussian process (NIGP) prior. Then, using distributional properties of the NIGP, we compute the posterior distribution of a token's frequency in the stream, given the hashed data, and in turn the corresponding BNP estimates. Applications to synthetic and real data show that our approach achieves remarkable performance in the estimation of low-frequency tokens. This is known to be a desirable feature in the context of natural language processing, where low-frequency tokens are indeed common owing to the power-law behaviour of the data. | statistics |
This paper extends the study of the quantum dissipative effects of a cosmological scalar field by taking into account the cosmic expansion and contraction. Cheung, Drewes, Kang and Kim calculated the effective action and quantum dissipative effects of a cosmological scalar field. The analytic expressions for the effective potential and damping coefficient were presented using a simple scalar model with quartic interaction. Their work was done using Minkowski-space propagators in loop diagrams. In this work we incorporate the Hubble expansion and contraction of the cosmic background, and focus on the thermal dynamics of a scalar field in a regime where the effective potential changes slowly. We let the Hubble parameter, $H$, attain a small but non-zero value and carry out calculations to first order in $H$. If we set $H=0$, all results match those obtained previously in flat spacetime [1]. Interestingly, we have to integrate over the resonances, which in turn leads to an amplification of the effects of a non-zero $H$. This is an intriguing phenomenon which cannot be uncovered in flat spacetime. The implications for particle creation in the early universe will be studied in forthcoming work. | high energy physics theory |
This work presents a hybrid and hierarchical deep learning model for mid-term load forecasting. The model combines exponential smoothing (ETS), advanced Long Short-Term Memory (LSTM) and ensembling. ETS dynamically extracts the main components of each individual time series and enables the model to learn their representation. Multi-layer LSTM is equipped with dilated recurrent skip connections and a spatial shortcut path from lower layers to allow the model to better capture long-term seasonal relationships and ensure more efficient training. A common learning procedure for LSTM and ETS, with a penalized pinball loss, leads to simultaneous optimization of data representation and forecasting performance. In addition, ensembling at three levels ensures a powerful regularization. A simulation study performed on the monthly electricity demand time series for 35 European countries confirmed the high performance of the proposed model and its competitiveness with classical models such as ARIMA and ETS as well as state-of-the-art models based on machine learning. | electrical engineering and systems science |
In vehicular communications, intracell interference and the stringent latency requirement are challenging issues. In this paper, a joint spectrum reuse and power allocation problem is formulated for hybrid vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications. Recognizing the high-capacity and low-latency requirements for V2I and V2V links, respectively, we aim to maximize the weighted sum of the capacities subject to the latency requirement. By decomposing the original problem into a classification subproblem and a regression subproblem, a convolutional neural network (CNN) based approach is developed to obtain real-time decisions on spectrum reuse and power allocation. Numerical results further demonstrate that the proposed CNN can achieve similar performance to the exhaustive method, while needing only 3.62% of its CPU runtime. | electrical engineering and systems science |
In this paper, we are interested in studying the existence or non-existence of solutions for a class of elliptic problems involving the $N$-Laplacian operator in the whole space. The nonlinearity considered involves critical Trudinger-Moser growth. Our approach is non-variational, and in this way, we can address a wide range of problems not yet contained in the literature. Even though the embedding $W^{1,N}(\mathbb{R}^N)\hookrightarrow L^\infty(\mathbb{R}^N)$ fails, we establish $\|u_{\lambda}\|_{L^\infty(\mathbb{R}^N)} \leq C \|u_{\lambda}\|_{W^{1,N}(\mathbb{R}^N)}^{\Theta}$ (for some $\Theta>0$) when $u_{\lambda}$ is a solution. To conclude, we explore some asymptotic properties. | mathematics |
In this paper we first demonstrate continuous noisy speech recognition using electroencephalography (EEG) signals on an English vocabulary using different types of state-of-the-art end-to-end automatic speech recognition (ASR) models. We further provide results obtained using EEG data recorded under different experimental conditions. We finally demonstrate decoding of the speech spectrum from EEG signals using a long short-term memory (LSTM) based regression model and a Generative Adversarial Network (GAN) based model. Our results demonstrate the feasibility of using EEG signals for continuous noisy speech recognition under different experimental conditions, and we provide preliminary results for the synthesis of speech from EEG features. | electrical engineering and systems science |
It has been recently suggested \cite{Bezrukov:2017ike,Bezrukov:2018wvd} that a cosmic scalar field can completely change keV-scale sterile neutrino production in the early Universe. Its effect may, for various parameter choices, either suppress sterile neutrino production and make moderate active-sterile mixing cosmologically acceptable, or increase the production and generate a considerable dark matter component out of sterile neutrinos with otherwise negligible mixing with the SM. In this paper we provide analytic estimates complementing and providing details of the numerical calculations performed in \cite{Bezrukov:2018wvd} in the case of resonant amplification of sterile neutrino production. We also discuss phenomenological and theoretical issues related to the successful implementation of this idea in fully realistic extensions of the Standard Model of particle physics. | high energy physics phenomenology |
We explore various tree-level double copy constructions for amplitudes including massive particles with spin. By working in general dimensions, we use that particles with spins $s\leq 2$ are fundamental to argue that the corresponding double copy relations partially follow from compactification of their massless counterparts. This massless origin fixes the coupling of gluons, dilatons and axions to matter in a characteristic way (for instance fixing the gyromagnetic ratio), whereas the graviton couples universally reflecting the equivalence principle. For spin-1 matter we conjecture all-order Lagrangians reproducing the interactions with up to two massive lines and we test them in a classical setup, where the massive lines represent spinning compact objects such as black holes. We also test the amplitudes via CHY formulae for both bosonic and fermionic integrands. At five points, we show that by applying generalized gauge transformations one can obtain a smooth transition from quantum to classical BCJ double copy relations for radiation, thereby providing a QFT derivation for the latter. As an application, we show how the theory arising in the classical double copy of Goldberger and Ridgway can be naturally identified with a certain compactification of $\mathcal{N}=4$ Supergravity. | high energy physics theory |
The scan rate of an axion haloscope is proportional to the square of the cavity volume. In this paper, a new class of thin-shell cavities is proposed to search for axionic dark matter. These cavities feature an active volume much larger (>20x) than that of a conventional cylindrical haloscope, a comparable quality factor Q, and a similar frequency tuning range. Full 3D numerical finite-element analyses have been used to show that the TM_010 eigenmodes are singly polarized throughout the volume of the cavity and can facilitate axion-photon conversion in the uniform magnetic field produced by a superconducting solenoid. To mitigate spurious mode crowding and volume loss due to localization, a pre-amplification binary summing network will be used for coupling. Because of the favorable frequency scaling, the new cavities are most suitable for centimeter wavelengths (~10-100 GHz), corresponding to the promising post-inflation axion production window. In this frequency range, the tight machining tolerances required for high-Q thin-shell cavities are achievable with standard machining techniques for near-infrared mirrors. | physics |
The pure quadratic term of New Massive Gravity in three dimensions admits asymptotically locally flat, rotating black holes. These black holes are characterized by their mass and angular momentum, as well as by a hair of gravitational origin. As in the Myers-Perry solution in dimensions greater than five, there is no upper bound on the angular momentum. We show that, remarkably, the equation for a massless scalar field on this background can be solved in an analytic manner and that the quasinormal frequencies can be found in a closed form. The spectrum is obtained requiring ingoing boundary conditions at the horizon and an asymptotic behavior at spatial infinity that provides a well-defined action principle for the scalar probe. As the angular momentum of the black hole approaches zero, the imaginary part of the quasinormal frequencies tends to minus infinity, migrating to the north pole of the Riemann Sphere and providing infinitely damped modes of high frequency. We show that this is consistent with the fact that the static black hole within this family does not admit quasinormal modes for a massless scalar probe. | high energy physics theory |
We assess the prospects for detecting the moving lens effect using cosmological surveys. The bulk motion of cosmological structure induces a small-scale dipolar temperature anisotropy of the cosmic microwave background (CMB), centered around halos and oriented along the transverse velocity field. We introduce a set of optimal filters for this signal, and forecast that a high-significance detection can be made with upcoming experiments. We discuss the prospects for reconstructing the bulk transverse velocity field on large scales using matched filters, finding good agreement with previous work using quadratic estimators. | astrophysics |
In the quiet regions on the solar surface, turbulent convective motions of granulation play an important role in creating small-scale magnetic structures, as well as in energy injection into the upper atmosphere. The turbulent nature of granulation can be studied using spectral line profiles, especially line broadening, which contains information on the flow field smaller than the spatial resolution of an instrument. Moreover, the Doppler velocity gradient along a line-of-sight (LOS) causes line broadening as well. However, the quantitative relationship between velocity gradient and line broadening has not been understood well. In this study, we perform bisector analyses using the spectral profiles obtained using the Spectro-Polarimeter of the Hinode/Solar Optical Telescope to investigate the relationship of line broadening and bisector velocities with the granulation flows. The results indicate that line broadening has a positive correlation with the Doppler velocity gradients along the LOS. We found excessive line broadening in fading granules, that cannot be explained only by the LOS velocity gradient, although the velocity gradient is enhanced in the process of fading. If this excessive line broadening is attributed to small-scale turbulent motions, the averaged turbulent velocity is obtained as 0.9 km/s. | astrophysics |
PURPOSE: Magnetic Resonance Fingerprinting (MRF) with spiral readout enables rapid quantification of tissue relaxation times. However, it is prone to blurring due to off-resonance effects. Hence, fat blurring into adjacent regions might prevent identification of small tumors by their quantitative T1 and T2 values. This study aims to correct for the blurring artifacts, thereby enabling fast quantitative mapping in the female breast. METHODS: The impact of fat blurring on spiral MRF results was first assessed by simulations. Then, MRF was combined with 3-point Dixon water-fat separation and spiral blurring correction based on conjugate phase reconstruction. The approach was assessed in phantom experiments and compared to Cartesian reference measurements, namely inversion recovery (IR), multi-echo spin echo (MESE) and Cartesian MRF, by normalized root mean square error (NRMSE) and standard deviation (STD) calculations. Feasibility is further demonstrated in-vivo for quantitative breast measurements of 6 healthy female volunteers, age range 24-31 years. RESULTS: In the phantom experiment, the blurring correction reduced the NRMSE per phantom vial on average from 16% to 8% for T1 and from 18% to 11% for T2 when comparing spiral MRF to IR/MESE sequences. When comparing to Cartesian MRF, the NRMSE reduced from 15% to 8% for T1 and from 12% to 7% for T2. Furthermore, STDs decreased. In-vivo, the blurring correction removed fat bias on T1/T2 from a rim of about 7-8 mm width adjacent to fatty structures. CONCLUSION: The blurring correction for spiral MRF yields improved quantitative maps in the presence of water and fat. | physics |
Predicting time-to-event outcomes in large databases can be a challenging but important task. One example of this is in predicting the time to a clinical outcome for patients in intensive care units (ICUs), which helps to support critical medical treatment decisions. In this context, the time to an event of interest could be, for example, survival time or time to recovery from a disease/ailment observed within the ICU. The massive health datasets generated from the uptake of Electronic Health Records (EHRs) are quite heterogeneous as patients can be quite dissimilar in their relationship between the feature vector and the outcome, adding more noise than information to prediction. In this paper, we propose a modified random forest method for survival data that identifies similar cases in an attempt to improve accuracy for predicting time-to-event outcomes; this methodology can be applied in various settings, including with ICU databases. We also introduce an adaptation of our methodology in the case of dependent censoring. Our proposed method is demonstrated in the Medical Information Mart for Intensive Care (MIMIC-III) database, and, in addition, we present properties of our methodology through a comprehensive simulation study. Introducing similarity to the random survival forest method indeed provides improved predictive accuracy compared to random survival forest alone across the various analyses we undertook. | statistics |
Designing microscopic and nanoscopic self-propelled particles and characterising their motion has become a major scientific challenge over the past decades. To this purpose, phoretic effects, namely propulsion mechanisms relying on local field gradients, have been the focus of many theoretical and experimental studies. In this review, we adopt a tutorial approach to present the basic physical mechanisms at stake in phoretic motion, and describe the different experimental works that lead to the fabrication of active particles based on this principle. We also present the collective effects observed in assemblies of interacting active colloids, and the theoretical tools that have been used to describe phoretic and hydrodynamic interactions. | condensed matter |
We use effective field theory to compute the influence of nuclear structure on precision calculations of atomic energy levels. As usual, the EFT's effective couplings correspond to the various nuclear properties (such as the charge radius, nuclear polarizabilities, Friar and Zemach moments {\it etc.}) that dominate its low-energy electromagnetic influence on its surroundings. By extending to spinning nuclei the arguments developed for spinless ones in {\tt arXiv:1708.09768}, we use the EFT to show -- to any fixed order in $Z\alpha$ (where $Z$ is the atomic number and $\alpha$ the fine-structure constant) and the ratio of nuclear to atomic size -- that nuclear properties actually contribute to electronic energies through fewer parameters than the number of these effective nuclear couplings naively suggests. Our result is derived using a position-space method for matching effective parameters to nuclear properties in the EFT, that more efficiently exploits the simplicity of the small-nucleus limit in atomic systems. By showing that precision calculations of atomic spectra depend on fewer nuclear uncertainties than naively expected, this observation allows the construction of many nucleus-independent combinations of atomic energy differences whose measurement can be used to test fundamental physics (such as the predictions of QED) because their theoretical uncertainties are not limited by the accuracy of nuclear calculations. We provide several simple examples of such nucleus-free predictions for Hydrogen-like atoms. | high energy physics phenomenology |
Titan has an abundance of lakes and seas, as confirmed by Cassini. Major components of these liquid bodies include methane ($CH_4$) and ethane ($C_2H_6$); however, evidence indicates that minor components such as ethylene ($C_2H_4$) may also exist in the lakes. As the lake levels drop, 5 $\mu m$-bright deposits, resembling evaporite deposits on Earth, are left behind. Here, we provide saturation values, evaporation rates, and constraints on ethylene evaporite formation by using a Titan simulation chamber capable of reproducing Titan surface conditions (89-94 K, 1.5 bar $N_2$). Experimental samples were analyzed using Fourier transform infrared spectroscopy, mass, and temperature readings. Ethylene evaporites form more quickly in a methane solvent than in an ethane solvent or in a mixture of methane/ethane. We measured an average evaporation rate of $(2.8 \pm 0.3) \times 10^{-4} kg \; m^{-2} \; s^{-1}$ for methane and an average upper-limit evaporation rate of less than $5.5 \times 10^{-6} kg \; m^{-2} \; s^{-1}$ for ethane. Additionally, we observed red shifts in ethylene absorption bands at 1.630 and 2.121 $\mu m$ and the persistence of a methane band at 1.666 $\mu m$. | astrophysics |
Full truckload transportation (FTL) in the form of freight containers represents one of the most important transportation modes in international trade. Due to the large volume and scale, delivery time in FTL is often less critical, but cost and service quality are crucial. Therefore, efficiently solving large-scale multi-shift FTL problems is becoming more and more important and requires further research. In one of our earlier studies, a set covering model and a three-stage solution method were developed for a multi-shift FTL problem. This paper extends the previous work and presents a significantly more efficient approach by hybridising pricing and cutting strategies with metaheuristics (a variable neighbourhood search and a genetic algorithm). The metaheuristics are adopted to find promising columns (vehicle routes) guided by pricing, and cuts are dynamically generated to eliminate infeasible flow assignments caused by incompatible commodities. Computational experiments on real-life and artificial benchmark FTL problems showed superior performance both in terms of computational time and solution quality when compared with previous MIP-based three-stage methods and two existing metaheuristics. The proposed cutting and heuristic pricing approach can efficiently solve large-scale real-life FTL problems. | computer science |
The increase in luminosity and center-of-mass energy at the FCC-hh will open up new clean channels where BSM contributions are enhanced at high energy. In this paper we study one such channel, $Wh \to \ell\nu\gamma\gamma$. We estimate the sensitivity to the $\mathcal{O}_{\varphi q}^{(3)}$, $\mathcal{O}_{\varphi {W}}$, and $\mathcal{O}_{\varphi \widetilde {W}}$ SMEFT operators. We find that this channel will be competitive with fully leptonic $WZ$ production in setting bounds on $\mathcal{O}_{\varphi q}^{(3)}$. We also find that the double differential distribution in $p_T^h$ and the leptonic azimuthal angle can be exploited to enhance the sensitivity to $\mathcal{O}_{\varphi \widetilde {W}}$. However, the bounds on $\mathcal{O}_{\varphi {W}}$ and $\mathcal{O}_{\varphi \widetilde {W}}$ that we obtain in our analysis, though complementary and more direct, are not competitive with those coming from other measurements such as EDMs and inclusive Higgs measurements. | high energy physics phenomenology |
We provide prophet inequality algorithms for online weighted matching in general (non-bipartite) graphs, under two well-studied arrival models, namely edge arrival and vertex arrival. The weight of each edge is drawn independently from an a-priori known probability distribution. Under edge arrival, the weight of each edge is revealed upon arrival, and the algorithm decides whether to include it in the matching or not. Under vertex arrival, the weights of all edges from the newly arriving vertex to all previously arrived vertices are revealed, and the algorithm decides which of these edges, if any, to include in the matching. To study these settings, we introduce a novel unified framework of batched prophet inequalities that captures online settings where elements arrive in batches; in particular, it captures matching under the two aforementioned arrival models. Our algorithms rely on the construction of suitable online contention resolution schemes (OCRSs). We first extend the framework of OCRSs to batched OCRSs, then establish a reduction from batched prophet inequalities to batched OCRSs, and finally construct batched OCRSs with selectable ratios of 0.337 and 0.5 for the edge and vertex arrival models, respectively. Both results improve the state of the art for the corresponding settings. For vertex arrival, our result is tight. Interestingly, a pricing-based prophet inequality with comparable competitive ratios is unknown. | computer science |
Infrared quasi-stellar objects (IR QSOs) are a rare subpopulation selected from ultraluminous infrared galaxies (ULIRGs) and have been regarded as promising candidates for ULIRG-to-optical-QSO transition objects. Here we present NOEMA observations of the CO(1-0) line and 3 mm continuum emission in the IR QSO IRAS F07599+6508 at $z=0.1486$, which has many properties in common with Mrk 231. The CO emission is found to be resolved with a major axis of $\sim$6.1 kpc, larger than the size of $\sim$4.0 kpc derived for the 3 mm continuum. We identify two faint CO features located at projected distances of $\sim$11.4 and 19.1 kpc from the galaxy nucleus, respectively, both of which are found to have counterparts in the optical and radio bands and may have a merger origin. A systematic velocity gradient is found in the CO main component, suggesting that the bulk of the molecular gas is likely rotationally supported. Based on the radio-to-millimeter spectral energy distribution and IR data, we estimate that about 30$\%$ of the flux at 3 mm arises from free-free emission and infer a free-free-derived star formation rate of 77 $M_\odot\ {\rm yr^{-1}}$, close to the IR estimate corrected for the AGN contribution. We find a high-velocity CO emission feature in the velocity range of about -1300 to -2000 km s$^{-1}$. Additional deep CO observations are needed to confirm the presence of a possible very high-velocity CO extension of the OH outflow in this IR QSO. | astrophysics |
We study how solidification of model freely rotating polymers under athermal quasistatic compression varies with their bond angle $\theta_0$. All systems undergo two discrete, first-order-like transitions: entanglement at $\phi = \phi_E(\theta_0)$ followed by jamming at $\phi = \phi_J(\theta_0) \simeq (4/3 \pm 1/10)\phi_E(\theta_0)$. For $\phi < \phi_E(\theta_0)$, systems are in a "gas" phase wherein all chains remain free to translate and reorient. For $\phi_E(\theta_0) \leq \phi \leq \phi_J(\theta_0)$, systems are in a liquid-like phase wherein chains are entangled. In this phase, chains' rigid-body-like motion is blocked, yet they can still locally relax via dihedral rotations, and hence energy and pressure remain extremely small. The ability of dihedral relaxation mechanisms to accommodate further compression becomes exhausted, and systems rigidify, at $\phi_J(\theta_0)$. At and slightly above $\phi_J$, the bulk moduli increase linearly with the pressure $P$ rather than jumping discontinuously, indicating these systems solidify via rigidity percolation. The character of the energy and pressure increases above $\phi_J(\theta_0)$ can be characterized via chains' effective aspect ratio $\alpha_{\rm eff}$. Large-$\alpha_{\rm eff}$ (small-$\theta_0$) systems' jamming is bending-dominated and is similar to that observed in systems composed of straight fibers. Small-$\alpha_{\rm eff}$ (large-$\theta_0$) systems' jamming is dominated by the degree to which individual chains' dihedrals can collapse into compact, tetrahedron-like structures. For intermediate $\theta_0$, chains remain in highly disordered globule-like configurations throughout the compression process; jamming occurs when entangled globules can no longer even locally relax away from one another. | condensed matter |
Variational quantum algorithms (VQAs) that estimate values of widely used physical quantities such as the rank, quantum entropies, the Bures fidelity and the quantum Fisher information of mixed quantum states are developed. In addition, variations of these VQAs are also adapted to perform other useful functions such as quantum state learning and approximate fractional inverses. The common theme shared by the proposed algorithms is that their cost functions are all based on minimizing the quantum purity of a quantum state. Strategies to mitigate or avoid the problem of exponentially vanishing cost function gradients are also discussed. | quantum physics |
We present a comparative study of four physical dust models and two single-temperature modified blackbody models by fitting them to the resolved WISE, Spitzer, and Herschel photometry of M101 (NGC 5457). Using identical data and a grid-based fitting technique, we compare the resulting dust and radiation field properties derived from the models. We find that the dust mass yielded by the different models can vary by up to a factor of 3 (a factor of 1.4 between physical models only), although the fits have similar quality. Despite differences in their definition of the carriers of the mid-IR aromatic features, all physical models show the same spatial variations for the abundance of that grain population. Using the well determined metallicity gradient in M101 and resolved gas maps, we calculate an approximate upper limit on the dust mass as a function of radius. All physical dust models are found to exceed this maximum estimate over some range of galactocentric radii. We show that renormalizing the models to match the same Milky Way high latitude cirrus spectrum and abundance constraints can reduce the dust mass differences between models and bring the total dust mass below the maximum estimate at all radii. | astrophysics |
I combine duplicate spectroscopic stellar parameter estimates in the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) Data Release 6 Low Resolution Spectral Survey A, F, G, and K Type stellar parameter catalog. Combining repeat measurements results in a factor of two improvement in the precision of the spectroscopic stellar parameter estimates. Moreover, this trivializes the process of performing coordinate-based cross-matching with other catalogs. Similarly, I combine duplicate stellar abundance estimates for the Xiang et al. catalog which was produced using LAMOST Data Release 5 Low Resolution Spectral Survey data. These data have numerous applications in stellar, galactic, and exoplanet astronomy. The catalogs I produce are available as machine-readable tables at https://doi.org/10.7281/T1/QISGRU . | astrophysics |
Blind quantum computation is a scheme that adds unconditional security to cloud quantum computation. In the protocol proposed by Broadbent, Fitzsimons, and Kashefi, the ability to prepare and transmit a single qubit is required for a user (client) who uses a quantum computer remotely. If a weak coherent pulse is used as a pseudo-single-photon source, however, we must introduce decoy states, owing to the inherent risk of transmitting multiple photons. In this study, we demonstrate that by using a heralded single photon source and a probabilistic photon-number-resolving detector, we can gain a higher blind state generation efficiency and a longer access distance, owing to the noise reduction afforded by the heralding signal. | quantum physics |
A partially slotted curved waveguide leaky-wave antenna which can generate orbital angular momentum (OAM) mode-groups (MGs) with high equivalent OAM orders of -40 and +40 at 60 GHz is proposed in this paper. The proposed antenna with partial slotting is designed according to the circular traveling-wave antenna, which can generate a single conventional OAM wave, so it can be regarded as a partial arc transmitting (PAT) scheme compared with the full $2\pi$ aperture slotting of the circular traveling-wave antenna. The full-wave simulation results show that the generated OAM MGs present a high-gain beam with a helical phase distribution. This method may lead to novel applications for next-generation communication and radar systems. | electrical engineering and systems science |
Wi-Fi signal-based person identification attracts increasing attention in the booming Internet-of-Things era, mainly due to its pervasiveness and passiveness. Most previous work applies gaits extracted from Wi-Fi distortions caused by a walking person to achieve identification. However, to extract a useful gait, a person must walk along a pre-defined path for several meters, which requires high user collaboration and increases identification time overhead, thus limiting use scenarios. Moreover, gait-based work has severe shortcomings in identification performance, especially when the user volume is large. To eliminate the above limitations, in this paper we present an operation-free person identification system, namely WiPIN, that requires minimal user collaboration and achieves good performance. WiPIN is based on an entirely new insight that Wi-Fi signals carry body information when propagating through a person, which is potentially discriminative for person identification. We then demonstrate the feasibility on commodity off-the-shelf Wi-Fi devices via well-designed signal pre-processing, feature extraction, and identity matching algorithms. Results show that WiPIN achieves 92% identification accuracy over 30 users, high robustness to various experimental settings, and low identification time overhead, i.e., less than 300 ms. | electrical engineering and systems science |
Using the CHY formalism and its extension to a double cover, we provide covariant expressions for tree-level amplitudes with two massive scalar legs and an arbitrary number of gravitons in D dimensions. Such amplitudes are needed as inputs, via unitarity methods, for the computation of post-Newtonian and post-Minkowskian expansions in classical general relativity. | high energy physics theory |
This paper considers the estimation and prediction of a high-dimensional linear regression in the setting of transfer learning, using samples from the target model as well as auxiliary samples from different but possibly related regression models. When the set of "informative" auxiliary samples is known, an estimator and a predictor are proposed and their optimality is established. The optimal rates of convergence for prediction and estimation are faster than the corresponding rates without using the auxiliary samples. This implies that knowledge from the informative auxiliary samples can be transferred to improve the learning performance of the target problem. In the case that the set of informative auxiliary samples is unknown, we propose a data-driven procedure for transfer learning, called Trans-Lasso, and reveal its robustness to non-informative auxiliary samples and its efficiency in knowledge transfer. The proposed procedures are demonstrated in numerical studies and are applied to a dataset concerning the associations among gene expressions. It is shown that Trans-Lasso leads to improved performance in gene expression prediction in a target tissue by incorporating the data from multiple different tissues as auxiliary samples. | statistics |
During the Leidenfrost effect, a thin insulating vapor layer separates an evaporating liquid from a hot solid. Here we demonstrate that Leidenfrost vapor layers can be sustained at much lower temperatures than those required for formation. Using a high-speed electrical technique to measure the thickness of water vapor layers over smooth, metallic surfaces, we find that the explosive failure point is nearly independent of material and fluid properties, suggesting a purely hydrodynamic mechanism determines this threshold. For water vapor layers of several millimeters in size, the minimum temperature for stability is $\approx 140^{\circ}$C, corresponding to an average vapor layer thickness of 10-20$\mu$m. | physics |
Intelligent Personal Assistants (IPAs) are software agents that can perform tasks on behalf of individuals and assist them in many of their daily activities. IPAs' capabilities are expanding rapidly due to the recent advances in areas such as natural language processing, machine learning, artificial cognition, and ubiquitous computing, which equip the agents with competences to understand what users say, collect information from everyday ubiquitous devices (e.g., smartphones, wearables, tablets, laptops, cars, household appliances, etc.), learn user preferences, deliver data-driven search results, and make decisions based on the user's context. Apart from the inherent complexity of building such IPAs, developers and researchers have to address many critical architectural challenges (e.g., low latency, scalability, concurrency, ubiquity, code mobility, interoperability, support for cognitive services and reasoning, to name a few), thereby diverting them from their main goal: building IPAs. Thus, our contribution in this paper is twofold: 1) we propose an architecture for a platform-agnostic, high-performance, ubiquitous, and distributed middleware that alleviates the burdensome task of dealing with low-level implementation details when building IPAs by adding multiple abstraction layers that hide the underlying complexity; and 2) we present an implementation of the middleware that concretizes the aforementioned architecture and allows the development of high-level capabilities while scaling the system up to hundreds of thousands of IPAs with no extra effort. We demonstrate the power of our middleware by analyzing software metrics for complexity, effort, performance, cohesion and coupling when developing a conversational IPA. | computer science |
Training deep neural networks with stochastic gradient descent (SGD) can often achieve zero training loss on real-world tasks, although the optimization landscape is known to be highly non-convex. To understand the success of SGD for training deep neural networks, this work presents a mean-field analysis of deep residual networks, based on a line of works that interpret the continuum limit of a deep residual network as an ordinary differential equation as the network capacity tends to infinity. Specifically, we propose a new continuum limit of deep residual networks, which enjoys a good landscape in the sense that every local minimizer is global. This characterization enables us to derive the first global convergence result for multilayer neural networks in the mean-field regime. Furthermore, without assuming convexity of the loss landscape, our proof relies on a zero-loss assumption at the global minimizer, which can be achieved when the model enjoys a universal approximation property. Key to our result is the observation that a deep residual network resembles a shallow network ensemble, i.e., a two-layer network. We bound the difference between the shallow network and our ResNet model via the adjoint sensitivity method, which enables us to apply existing mean-field analyses of two-layer networks to deep networks. Furthermore, we propose several novel training schemes based on the new continuous model, including one training procedure that switches the order of the residual blocks and results in strong empirical performance on benchmark datasets. | statistics |
The Bell Spaceship Paradox has promoted confusion and numerous resolutions since its first statement in 1959, including resolutions based on relativistic stress due to Lorentz contractions. The paradox is that two ships, starting from the same reference frame and subject to the same acceleration, would snap a string that connected them, even as their separation distance would not change as measured from the original reference frame. This paper uses a Simple Relativity approach to resolve the paradox and explain both why the string snaps, and how to adjust accelerations to avoid snapping the string. In doing so, an interesting parallel understanding of the Lorentz contraction is generated. The solution is applied to rotation to address the Ehrenfest paradox and orbital precession as well. | physics |
In recent years efficient algorithms have been developed for the numerical computation of relativistic single-particle path integrals in quantum field theory. Here, we adapt this "worldline Monte Carlo" approach to the standard problem of the numerical approximation of the non-relativistic path integral, resulting in a formalism whose characteristic feature is the fast, non-recursive generation of an ensemble of trajectories that is independent of the potential, and thus universally applicable. The numerical implementation discretises the trajectories with respect to their time parametrisation but maintains a continuous spatial domain. In the case of singular potentials, the discretised action gets adapted to the singularity through a "smoothing" procedure. We show for a variety of examples (the harmonic oscillator in various dimensions, the modified P\"oschl-Teller potential, delta-function potentials, the Coulomb and Yukawa potentials) that the method allows one to obtain fast and reliable estimates for the Euclidean propagator and use them in a certain time window suitable for extracting the ground state energy. As an aside, we apply it for studying the classical limit where nearly classical trajectories are expected to dominate in the path integral. We expect the advances made here to be useful also in the relativistic case. | quantum physics |
Feature selection often leads to increased model interpretability, faster computation, and improved model performance by discarding irrelevant or redundant features. While feature selection is a well-studied problem with many widely-used techniques, there are typically two key challenges: i) many existing approaches become computationally intractable in huge-data settings with millions of observations and features; and ii) the statistical accuracy of selected features degrades in high-noise, high-correlation settings, thus hindering reliable model interpretation. We tackle these problems by proposing Stable Minipatch Selection (STAMPS) and Adaptive STAMPS (AdaSTAMPS). These are meta-algorithms that build ensembles of selection events of base feature selectors trained on many tiny, (adaptively-chosen) random subsets of both the observations and features of the data, which we call minipatches. Our approaches are general and can be employed with a variety of existing feature selection strategies and machine learning techniques. In addition, we provide theoretical insights on STAMPS and empirically demonstrate that our approaches, especially AdaSTAMPS, dominate competing methods in terms of feature selection accuracy and computational time. | statistics |
I develop the solution to the problem of an electron confined in a composite quadratic well subject to a simple, external periodic force. The method of solution illustrates several of the basic techniques useful in formally solving the one-dimensional, time-dependent Schrödinger equation. One of the aims of this exercise is to see how far it is possible to push the analytic treatment before turning to numerical methods. I hope this presentation of the problem may prove useful to others seeking to gain additional experience in the details of solving the time-dependent Schrödinger equation in one space dimension. | quantum physics |
We present final Spitzer trigonometric parallaxes for 361 L, T, and Y dwarfs. We combine these with prior studies to build a list of 525 known L, T, and Y dwarfs within 20 pc of the Sun, 38 of which are presented here for the first time. Using published photometry and spectroscopy as well as our own follow-up, we present an array of color-magnitude and color-color diagrams to further characterize census members, and we provide polynomial fits to the bulk trends. Using these characterizations, we assign each object a $T_{\rm eff}$ value and judge sample completeness over bins of $T_{\rm eff}$ and spectral type. Except for types $\ge$ T8 and $T_{\rm eff} <$ 600K, our census is statistically complete to the 20-pc limit. We compare our measured space densities to simulated density distributions and find that the best fit is a power law ($dN/dM \propto M^{-\alpha}$) with $\alpha = 0.6{\pm}0.1$. We find that the evolutionary models of Saumon & Marley correctly predict the observed magnitude of the space density spike seen at 1200K $< T_{\rm eff} <$ 1350K, believed to be caused by an increase in the cooling timescale across the L/T transition. Defining the low-mass terminus using this sample requires a more statistically robust and complete sample of dwarfs $\ge$Y0.5 and with $T_{\rm eff} <$ 400K. We conclude that such frigid objects must exist in substantial numbers, despite the fact that few have so far been identified, and we discuss possible reasons why they have largely eluded detection. | astrophysics |
Complementary to studies of symmetry-protected band-touching points for electron bands in metallic systems, we explore analogous physics for propagating bosonic quasiparticles, magnons and spin-orbit excitons, in the insulating easy-plane honeycomb quantum magnet CoTiO3. We probe directly the winding of the isospin texture of the quasiparticle wavefunction in momentum space near a nodal point through its characteristic fingerprint in the dynamical structure factor probed by inelastic neutron scattering. In addition, our high-resolution measurements reveal a finite spectral gap at low energies, which cannot be explained by a semiclassical treatment for the ground state pseudospins-1/2. As possible mechanisms for the spectral gap generation we propose quantum-order-by-disorder induced by bond-dependent anisotropic couplings such as Kitaev exchange, and higher-order spin-orbital exchanges. We provide a spin-orbital flavor-wave model that captures both the gapped magnons and dispersive excitons within the same Hamiltonian. | condensed matter |
This paper presents our latest investigation on end-to-end automatic speech recognition (ASR) for overlapped speech. We propose to train an end-to-end system conditioned on speaker embeddings and further improved by transfer learning from clean speech. This proposed framework does not require any parallel non-overlapped speech materials and is independent of the number of speakers. Our experimental results on overlapped speech datasets show that joint conditioning on speaker embeddings and transfer learning significantly improves the ASR performance. | electrical engineering and systems science |
We give two graph theoretical characterizations of tope graphs of (complexes of) oriented matroids. The first is in terms of excluded partial cube minors, the second is that all antipodal subgraphs are gated. A direct consequence is a third characterization in terms of zone graphs of tope graphs. Further corollaries include a characterization of topes of oriented matroids due to da Silva, another one of Handa, a characterization of lopsided systems due to Lawrence, and an intrinsic characterization of tope graphs of affine oriented matroids. Furthermore, we obtain polynomial time recognition algorithms for tope graphs of the above and a finite list of excluded partial cube minors for the bounded rank case. In particular, this answers a relatively long-standing open question in oriented matroids. Another consequence is that all finite Pasch graphs are tope graphs of complexes of oriented matroids, which confirms a conjecture of Chepoi and the two authors. | mathematics |
Belief space planning is a viable alternative to formalise partially observable control problems and, in recent years, its application to robot manipulation problems has grown. However, this planning approach has previously been applied successfully only to simplified control problems. In this paper, we apply belief space planning to the problem of planning dexterous reach-to-grasp trajectories under object pose uncertainty. In our framework, the robot perceives the object to be grasped on-the-fly as a point cloud and computes a full 6D, non-Gaussian distribution over the object's pose (our belief space). The system has no limitations on the geometry of the object, i.e., non-convex objects can be represented, nor does it assume that the point cloud is a complete representation of the object. A plan in the belief space is then created to reach and grasp the object, such that the information value of expected contacts along the trajectory is maximised to compensate for the pose uncertainty. If an unexpected contact occurs when performing the action, this information is used to refine the pose distribution and trigger re-planning. Experimental results show that our planner (IR3ne) improves grasp reliability and compensates for pose uncertainty such that it doubles the proportion of grasps that succeed on a first attempt. | computer science |
Radial velocity (RV) searches for Earth-mass exoplanets in the habitable zone around Sun-like stars are limited by the effects of stellar variability on the host star. In particular, suppression of convective blueshift and brightness inhomogeneities due to photospheric faculae/plage and starspots are the dominant contributions to the variability of such stellar RVs. Gaussian process (GP) regression is a powerful tool for modeling these quasi-periodic variations. We investigate the limits of this technique using 800 days of RVs from the solar telescope on the HARPS-N spectrograph. These data provide a well-sampled time series of stellar RV variations. Into this data set, we inject Keplerian signals with periods between 100 and 500 days and amplitudes between 0.6 and 2.4 m s$^{-1}$. We use GP regression to fit the resulting RVs and determine the statistical significance of recovered periods and amplitudes. We then generate synthetic RVs with the same covariance properties as the solar data to determine a lower bound on the observational baseline necessary to detect low-mass planets in Venus-like orbits around a Sun-like star. Our simulations show that discovering such planets with current-generation spectrographs and GP regression will require more than 12 years of densely sampled RV observations. Furthermore, even with a perfect model of stellar variability, discovering a true exo-Venus with current instruments would take over 15 years. Therefore, next-generation spectrographs and better models of stellar variability are required for the detection of such planets. | astrophysics
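The injection-recovery procedure described above can be illustrated with a minimal sketch: inject a circular Keplerian signal into an RV series and jointly fit it with a quasi-periodic GP by maximizing the marginal likelihood. The kernel form, the synthetic data, and the optimizer settings are assumptions for illustration, not the paper's exact setup.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve
from scipy.optimize import minimize

def qp_kernel(t1, t2, amp, P, l_e, l_p):
    """Quasi-periodic kernel commonly used for activity-driven RV variations."""
    dt = t1[:, None] - t2[None, :]
    return amp**2 * np.exp(-0.5 * dt**2 / l_e**2
                           - 2.0 * np.sin(np.pi * dt / P)**2 / l_p**2)

def neg_log_like(theta, t, rv, rv_err):
    """GP marginal likelihood (up to a constant) with a circular Keplerian mean."""
    amp, P, l_e, l_p, K_p, P_orb, phi = theta
    model = K_p * np.sin(2 * np.pi * t / P_orb + phi)
    K = qp_kernel(t, t, amp, P, l_e, l_p) + np.diag(rv_err**2)
    c, low = cho_factor(K)
    r = rv - model
    return 0.5 * (r @ cho_solve((c, low), r) + 2 * np.sum(np.log(np.diag(c))))

# Injection-recovery on placeholder data standing in for the HARPS-N solar RVs.
t = np.sort(800 * np.random.rand(200))
rv_err = 0.4 * np.ones_like(t)
rv = np.random.randn(t.size)                    # placeholder noise
rv += 1.2 * np.sin(2 * np.pi * t / 300.0)       # inject: K = 1.2 m/s, P = 300 d
x0 = [2.0, 27.0, 100.0, 0.5, 1.0, 300.0, 0.0]   # activity + planet initial guess
res = minimize(neg_log_like, x0, args=(t, rv, rv_err), method="Nelder-Mead")
print("recovered K, P:", res.x[4], res.x[5])
```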
Building on recent progress in the study of compactifications of $6d$ $(1,0)$ superconformal field theories (SCFTs) on Riemann surfaces to $4d$ $\mathcal{N}=1$ theories, we initiate a systematic study of compactifications of $5d$ $\mathcal{N}=1$ SCFTs on Riemann surfaces to $3d$ $\mathcal{N}=2$ theories. Specifically, we consider the compactification of the so-called rank 1 Seiberg $E_{N_f+1}$ SCFTs on tori and tubes with flux in their global symmetry, and put the resulting $3d$ theories to various consistency checks. These include matching the (usually enhanced) IR symmetry of the $3d$ theories with the one expected from the compactification, given by the commutant of the flux in the global symmetry of the corresponding $5d$ SCFT, and identifying the spectrum of operators and conformal manifolds predicted by the $5d$ picture. As the models we examine are in three dimensions, we encounter novel elements that are not present in compactifications to four dimensions, notably Chern-Simons terms and monopole superpotentials, that play an important role in our construction. The methods used in this paper can also be used for the compactification of any other $5d$ SCFT that has a deformation leading to a $5d$ gauge theory. | high energy physics theory |
It has been observed by Maldacena that one can extract asymptotically anti-de Sitter Einstein $4$-metrics from Bach-flat spacetimes by imposing simple principles and data choices. We cast this problem in a conformally compact Riemannian setting. Following an approach pioneered by Fefferman and Graham for the Einstein equation, we find formal power series for conformally compactifiable, asymptotically hyperbolic Bach-flat 4-metrics expanded about conformal infinity. We also consider Bach-flat metrics in the special case of constant scalar curvature and in the special case of constant $Q$-curvature. This allows us to determine the free data at conformal infinity, and to select those choices that lead to Einstein metrics. Interestingly, the mass is part of that free data, in contrast to the pure Einstein case. We then choose a convenient generalization of the Bach tensor to (bulk) dimensions $n>4$ and consider the higher dimensional problem. We find that the free data for the expansions split into low-order and high-order pairs. The former pair consists of the metric on the conformal boundary and its first radial derivative, while the latter pair consists of the radial derivatives of order $n-2$ and $n-1$. Higher dimensional generalizations of the Bach tensor lack some of the geometrical meaning of the 4-dimensional case. This is reflected in the relative complexity of the higher dimensional problem, but we are able to obtain a relatively complete result if conformal infinity is not scalar flat. | mathematics |
Coherent superposition and entanglement are two fundamental aspects of non-classicality. Here we provide a quantitative connection between the two on the level of operations by showing that the dynamical coherence of an operation upper bounds the dynamical entanglement that can be generated from it with the help of additional incoherent operations. In case a particular choice of monotones based on the relative entropy is used for the quantification of these dynamical resources, this bound can be achieved. In addition, we show that an analog to the entanglement potential exists on the level of operations and serves as a valid quantifier for dynamical coherence. | quantum physics |
We develop a deep generative model built on a fully differentiable simulator for multi-agent trajectory prediction. Agents are modeled with conditional recurrent variational neural networks (CVRNNs), which take as input an ego-centric birdview image representing the current state of the world and output an action, consisting of steering and acceleration, which is used to derive the subsequent agent state using a kinematic bicycle model. The full simulation state is then differentiably rendered for each agent, initiating the next time step. We achieve state-of-the-art results on the INTERACTION dataset, using standard neural architectures and a standard variational training objective, producing realistic multi-modal predictions without any ad-hoc diversity-inducing losses. We conduct ablation studies to examine individual components of the simulator, finding that both the kinematic bicycle model and the continuous feedback from the birdview image are crucial for achieving this level of performance. We name our model ITRA, for "Imagining the Road Ahead". | statistics |
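Since the paper derives the next agent state from the policy's (steering, acceleration) action through a kinematic bicycle model, a minimal sketch of that standard update is given below; the wheelbase split and forward-Euler integration are assumptions, as the exact parameterization is not specified in the abstract.

```python
import numpy as np

def bicycle_step(state, steering, accel, dt, l_f=1.0, l_r=1.0):
    """One step of the kinematic bicycle model turning a policy's
    (steering, acceleration) action into the next agent state.

    state = (x, y, psi, v): position, heading, speed."""
    x, y, psi, v = state
    beta = np.arctan(l_r / (l_f + l_r) * np.tan(steering))  # slip angle at the CoG
    x   += v * np.cos(psi + beta) * dt
    y   += v * np.sin(psi + beta) * dt
    psi += v / l_r * np.sin(beta) * dt
    v   += accel * dt
    return np.array([x, y, psi, v])
```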
In this paper, we present a study of the recent advancements which have helped bring Transfer Learning to NLP through the use of semi-supervised training. We discuss cutting-edge methods and architectures such as BERT, GPT, ELMo, and ULMFit, among others. Classically, tasks in natural language processing have been performed through rule-based and statistical methodologies. However, owing to the vast nature of natural languages, these methods do not generalise well and fail to learn the nuances of language. Thus machine learning algorithms such as Naive Bayes and decision trees, coupled with traditional representations such as Bag-of-Words and N-grams, were used to overcome this problem. Eventually, with the advent of advanced recurrent neural network architectures such as the LSTM, we were able to achieve state-of-the-art performance in several natural language processing tasks such as text classification and machine translation. We talk about how Transfer Learning has brought about the well-known ImageNet moment for NLP. Several advanced architectures such as the Transformer and its variants have allowed practitioners to leverage knowledge gained from unrelated tasks to drastically speed up convergence and provide better performance on the target task. This survey represents an effort at providing a succinct yet complete understanding of the recent advances in natural language processing using deep learning, with a special focus on detailing transfer learning and its potential advantages. | computer science
Novel Coronavirus (COVID-19) has drastically overwhelmed more than 200 countries, affecting millions and claiming almost 1 million lives, since its emergence in late 2019. This highly contagious disease can easily spread, and if not controlled in a timely fashion, can rapidly incapacitate healthcare systems. The current standard diagnosis method, the Reverse Transcription Polymerase Chain Reaction (RT-PCR), is time-consuming and subject to low sensitivity. Chest Radiograph (CXR), the first imaging modality to be used, is readily available and gives immediate results. However, it has notably lower sensitivity than Computed Tomography (CT), which can be used efficiently to complement other diagnostic methods. This paper introduces a new COVID-19 CT scan dataset, referred to as COVID-CT-MD, consisting of not only COVID-19 cases, but also healthy subjects and subjects infected by Community Acquired Pneumonia (CAP). The COVID-CT-MD dataset, which is accompanied by lobe-level, slice-level and patient-level labels, has the potential to facilitate COVID-19 research; in particular, COVID-CT-MD can assist in the development of advanced Machine Learning (ML) and Deep Neural Network (DNN) based solutions. | electrical engineering and systems science
We present explicit mathematical structures that allow for the reconstruction of the field content of a full local conformal field theory from its boundary fields. Our framework is the one of modular tensor categories, without requiring semisimplicity, and thus covers in particular finite rigid logarithmic conformal field theories. We assume that the boundary data are described by a pivotal module category over the modular tensor category, which ensures that the algebras of boundary fields are Frobenius algebras. Bulk fields and, more generally, defect fields inserted on defect lines, are given by internal natural transformations between the functors that label the types of defect lines. We use the theory of internal natural transformations to identify candidates for operator products of defect fields (of which there are two types, either along a single defect line, or accompanied by the fusion of two defect lines), and for bulk-boundary OPEs. We show that the so obtained OPEs pass various consistency conditions, including in particular all genus-zero constraints in Lewellen's list. | high energy physics theory |
We present stellar metallicity measurements of more than 600 late-type stars in the central 10 pc of the Galactic centre. Together with our previously published KMOS data, this data set allows us to investigate, for the first time, spatial variations of the nuclear star cluster's metallicity distribution. Using the integral-field spectrograph KMOS (VLT) we observed almost half of the area enclosed by the nuclear star cluster's effective radius. We extract spectra at medium spectral resolution, and apply full spectral fitting utilising the PHOENIX library of synthetic stellar spectra. The stellar metallicities range from [M/H]=-1.25 dex to [M/H]> +0.3 dex, with most of the stars having super-solar metallicity. We are able to measure an anisotropy of the stellar metallicity distribution. In the Galactic North, the portion of sub-solar metallicity stars with [M/H]<0.0 dex is more than twice as high as in the Galactic South. One possible explanation for different fractions of sub-solar metallicity stars in different parts of the cluster is a recent merger event. We propose to test this hypothesis with high-resolution spectroscopy, and by combining the metallicity information with kinematic data. | astrophysics |
Simultaneous vibration control and energy harvesting of vehicle suspensions has attracted great research interest over the past decades. However, existing frameworks trade off suspension performance for energy recovery and are only responsive to narrow-bandwidth vibrations. In this paper, a new energy-regenerative vibration absorber (ERVA) using a ball-screw mechanism is investigated. The ERVA system is based on a rotary electromagnetic generator with adjustable nonlinear rotational inertia, which passively increases the moment of inertia as the vibration amplitude increases. This structure is effective for energy harvesting and vibration control without increasing the suspension size. Furthermore, a nonlinear model predictive controller (NMPC) is applied to the system for further performance enhancement, where we exploit road profile information as a preview. The performance of the NMPC-based ERVA is evaluated in a number of simulations, and superior performance is demonstrated. | electrical engineering and systems science
Connected Vehicles (CVs) have the potential to significantly increase the safety, mobility, and environmental benefits of transportation applications. In this research, we have developed a real-time adaptive traffic signal control algorithm that utilizes only CV data to compute the signal timing parameters for an urban arterial in the near-congested condition. We have used a machine-learning-based short-term traffic forecasting model to predict the overall traffic counts in CV-based platoons. Using a multi-objective optimization technique, we compute the green interval time for each intersection using CV-based platoons. Later, we dynamically adjust intersection offsets in real time, so the vehicles in the major street can experience improved operational conditions compared to loop-detector-based actuated coordinated signal control. Using a 3-mile-long simulated corridor of US 29 in Greenville, SC, we have evaluated the performance of our CV-based adaptive signal control. For the next time interval, using only 5% CV data, the Root Mean Square Error of the machine-learning-based prediction is 10 vehicles. Our analysis reveals that the CV-based adaptive signal control improves operational conditions in the major street compared to the actuated coordinated scenario. Also, using only CV data, the operational performance improves even for a low CV penetration (5% CV), and the benefit increases with increasing CV penetration. We can provide operational benefits to both CVs and non-CVs with the limited data from 5% CVs, with a 5.6% average speed increase, and 66.7% and 32.4% reductions in average maximum queue length and stopped delay, respectively, in the major-street coordinated direction compared to the actuated coordinated scenario in the same direction. | electrical engineering and systems science
We study the coherent transport of one or two photons in a 1D waveguide chirally coupled to a nonlinear resonator. Analytic solutions of the one-photon and two-photon scattering are derived. Although the resonator acts as a non-reciprocal phase shifter, light transmission is reciprocal at the one-photon level. However, the forward and reverse transmission probabilities for two photons incident from either the left side or the right side of the nonlinear resonator are nonreciprocal due to the energy redistribution of the two-photon bound state. Hence, the nonlinear resonator acts as an optical diode at the two-photon level. | quantum physics
The long-standing challenges for offline handwritten Chinese character recognition (HCCR) are twofold: Chinese characters can be very diverse and complicated while looking similar to one another, and cursive handwriting (due to increased writing speed and infrequent pen lifting) makes strokes and even characters run together in a flowing manner. In this paper, we propose template and instance loss functions for the relevant machine learning tasks in offline handwritten Chinese character recognition. First, the character template is designed to deal with the intrinsic similarities among Chinese characters. Second, the instance loss can reduce category variance according to classification difficulty, giving a large penalty to outlier instances of handwritten Chinese characters. Trained with the new loss functions using our deep network architecture, the HCCR14Layer model consisting of simple layers, our extensive experiments show that it yields state-of-the-art performance and beyond for offline HCCR. | computer science
The meson decays $B\to D\tau\nu$ and $B\to D^* \tau \nu$ are sensitive probes of the $b\to c\tau\nu$ transition. In this work we present a complete framework to obtain the maximum information on the physics of $B\to D^{(*)}\tau\nu$ with polarized $\tau$ leptons and unpolarized $D^{(*)}$ mesons. Focusing on the hadronic decays $\tau\to \pi\nu$ and $\tau\to\rho\nu$, we show how to extract seven $\tau$ asymmetries from a fully differential analysis of the final-state kinematics. At Belle II with $50~\text{ab}^{-1}$ of data, these asymmetries could potentially be measured with percent level statistical uncertainty. This would open a new window into possible new physics contributions in $b\to c\tau\nu$ and would allow us to decipher its Lorentz and gauge structure. | high energy physics phenomenology |
We present a novel variation of online kernel machines in which we exploit a consensus based optimization mechanism to guide the evolution of decision functions drawn from a reproducing kernel Hilbert space, which efficiently models the observed stationary process. | statistics |
Chiral integer quantum Hall (QH) edge modes are immune to backscattering and therefore are non-localized and show vanishing longitudinal as well as non-local resistance, along with quantized 2-terminal and Hall resistance, even in the presence of sample disorder. However, this is not the case for contact disorder, which refers to the possibility that a contact can reflect edge modes either partially or fully. This paper shows that when all contacts are disordered in an N-terminal quantum Hall bar, transport via chiral QH edge modes can acquire a significant localization correction. The Hall and 2-terminal resistance in an N-terminal quantum Hall sample deviate from the values derived while neglecting the phase acquired at disordered contacts, and this deviation is called the quantum localization correction. This correction term increases with the degree of contact disorder but decreases with an increase in the number of contacts in an N-terminal Hall bar. The presence of inelastic scattering, however, can completely destroy the quantum localization correction. | condensed matter
Forecast reconciliation is a post-forecasting process aimed at improving the quality of the base forecasts for a system of hierarchical/grouped time series (Hyndman et al., 2011). Contemporaneous (cross-sectional) and temporal hierarchies have been considered in the literature, but - except for Kourentzes and Athanasopoulos (2019) - these two features have generally not been fully considered together. Adopting a notation able to simultaneously deal with both forecast reconciliation dimensions, the paper presents two new results: (i) an iterative cross-temporal forecast reconciliation procedure which extends, and overcomes some weaknesses of, the two-step procedure by Kourentzes and Athanasopoulos (2019), and (ii) the closed-form expression of the optimal (in the least squares sense) point forecasts which fulfill both contemporaneous and temporal constraints. The feasibility of the proposed procedures, along with first evaluations of their performance as compared to the best-performing `single dimension' (either cross-sectional or temporal) forecast reconciliation procedures, is studied through a forecasting experiment on the 95 quarterly time series of the Australian GDP from the Income and Expenditure sides considered by Athanasopoulos et al. (2019). | statistics
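Result (ii), the closed-form least-squares point forecasts, follows the familiar projection structure of least-squares reconciliation. Below is a hedged numpy sketch of that generic form, with the error covariance W an assumed input (identity recovers OLS reconciliation); the paper's cross-temporal constraint matrix would take the place of the toy S used here.

```python
import numpy as np

def reconcile(y_hat, S, W):
    """Least-squares reconciliation of base forecasts y_hat subject to
    aggregation constraints encoded by the structural matrix S:

        y_tilde = S (S' W^{-1} S)^{-1} S' W^{-1} y_hat

    W is the covariance of base-forecast errors (identity gives OLS)."""
    Winv = np.linalg.inv(W)
    G = np.linalg.solve(S.T @ Winv @ S, S.T @ Winv)
    return S @ (G @ y_hat)

# Toy cross-sectional hierarchy: total = A + B.
S = np.array([[1, 1], [1, 0], [0, 1]], float)   # rows: total, A, B
y_hat = np.array([10.0, 6.5, 4.5])              # incoherent base forecasts
print(reconcile(y_hat, S, np.eye(3)))           # coherent: total == A + B
```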
Many recent developments in the high-dimensional statistical time series literature have centered around time-dependent applications that can be adapted to regularized least squares. Of particular interest is the lasso, which both regularizes and provides feature selection. The lasso requires the specification of a penalty parameter that determines the degree of sparsity to impose. The most popular penalty parameter selection approaches that respect time dependence are very computationally intensive and are not appropriate for modeling certain classes of time series. We propose enhancing a canonical time series model, the autoregressive model with exogenous variables, with a novel online penalty parameter selection procedure that takes advantage of the sequential nature of time series data. Relative to existing methods, it improves both computational performance and forecast accuracy in a simulation and in an empirical application involving macroeconomic indicators. | statistics
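A minimal sketch of the generic idea behind sequential penalty selection for an ARX model follows: each candidate penalty issues one-step-ahead forecasts, running errors are tracked, and the current best candidate supplies the reported forecast. This illustrates the spirit of such a procedure, not the authors' algorithm; refitting from scratch at each step is deliberately naive (warm starts would be used in practice).

```python
import numpy as np
from sklearn.linear_model import Lasso

def online_arx_lasso(y, X, p=4, lambdas=(0.001, 0.01, 0.1, 1.0), burn=50):
    """Sequential penalty selection for an ARX(p) model: at each step, every
    candidate lambda issues a one-step-ahead forecast, its running squared
    error is updated, and the reported forecast is that of the candidate
    with the best score so far (selected before seeing the new outcome)."""
    lags = np.column_stack([y[p - k:-k] for k in range(1, p + 1)])
    Z = np.column_stack([lags, X[p:]])   # lagged y plus exogenous regressors
    yy = y[p:]
    scores = np.zeros(len(lambdas))
    preds = []
    for t in range(burn, len(yy)):
        best = int(np.argmin(scores))
        fcasts = []
        for lam in lambdas:
            m = Lasso(alpha=lam, max_iter=5000).fit(Z[:t], yy[:t])
            fcasts.append(m.predict(Z[t:t + 1])[0])
        preds.append(fcasts[best])                    # honest out-of-sample forecast
        scores += (yy[t] - np.array(fcasts)) ** 2     # update running errors
    return np.array(preds)
```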
Since the introduction of the logarithmic law of the wall more than 80 years ago, the equation for the mean velocity profile in turbulent boundary layers has been widely applied to model near-surface processes and parameterise surface drag. Yet the hypothetical turbulent eddies proposed in the original logarithmic law derivation and mixing length theory of Prandtl have never been conclusively linked to physical features in the flow. Here, we present evidence that suggests these eddies correspond to regions of coherent streamwise momentum known as uniform momentum zones (UMZs). The arrangement of UMZs results in a step-like shape for the instantaneous velocity profile, and the smooth mean profile results from the average UMZ properties, which are shown to scale with the friction velocity and wall-normal distance in the logarithmic region. These findings are confirmed across a wide range of Reynolds number and surface roughness conditions from the laboratory scale to the atmospheric surface layer. | physics |
The discrete phase space continuous time representation of relativistic quantum mechanics involving a characteristic length $l$ is investigated. Fundamental physical constants such as $\hbar$, $c$, and $l$ are retained for most sections of the paper. The energy eigenvalue problem for the Planck oscillator is solved exactly in this framework. Discrete concircular orbits of constant energy are shown to be circles $S^{1}_{n}$ of radii $2E_n =\sqrt{2n+1}$ within the discrete (1 + 1)-dimensional phase plane. Moreover, the time evolution of these orbits sweeps out world-sheet-like geometrical entities $S^{1}_{n} \times \mathbb{R} \subset \mathbb{R}^2$, which therefore appear as closed string-like geometrical configurations. The physical interpretation of these discrete orbits in phase space as degenerate, string-like phase cells is shown in a mathematically rigorous way. These closed concircular orbits in the arena of discrete phase space quantum mechanics, known for the non-singular nature of lower-order expansion $S^{\#}$-matrix terms, were known to exist but have not been fully explored until now. Finally, the discrete partial difference-differential Klein-Gordon equation is shown to be invariant under the continuous inhomogeneous orthogonal group $\mathcal{I}[O(3,1)]$. | quantum physics
The efficiency of ultrafast excitation of spins in antiferromagnetic $\mathrm{\alpha-Fe_{2}O_{3}}$ using a nearly single-cycle THz pulse is studied as a function of the polarization of the THz pulse and the sample temperature. Above the Morin point, the most efficient excitation is achieved when the magnetic field of the THz pulse is perpendicular to the antiferromagnetically coupled spins. Using the experimental results and the equations of motion for spins, we show that the mechanism of the spin excitation above and below the Morin point relies on the magnetic-dipole interaction of the THz magnetic field with spins, and the efficiency of the coupling is proportional to the time derivative of the magnetic field. | condensed matter
We develop the effective field theory of diffusive Nambu-Goldstone (NG) modes associated with spontaneous internal symmetry breaking taking place in nonequilibrium open systems. The effective Lagrangian describing semi-classical dynamics of the NG modes is derived and matching conditions for low-energy coefficients are also investigated. Due to new terms peculiar to open systems, the associated NG modes show diffusive gapless behaviors in contrast to the propagating NG mode in closed systems. We demonstrate two typical situations relevant to the condensed matter physics and high-energy physics, where diffusive type-A or type-B NG modes appear. | high energy physics theory |
A holographic model for QCD is employed to investigate the effects of the gluon condensate on the spectrum and melting of scalar mesons. We find the evolution of the free energy density with the temperature, and the result shows that the temperature of the confinement/deconfinement transition is sensitive to the gluon-condensate parameter. The spectral functions (SPFs) are also obtained and show a series of peaks in the low-temperature regime, indicating the presence of quasiparticle states associated to the mesons, while the number of peaks decreases with the increment of the temperature, characterizing the quasiparticle melting. In the dual gravitational description, the scalar mesons are identified with the black-hole quasinormal modes (QNMs). We obtain the spectrum of QNMs and the dispersion relations corresponding to the scalar-field perturbations of the gravitational background, and find their dependence with the gluon-condensate parameter. | high energy physics theory |
Shannon channel capacity of an additive white Gaussian noise channel is the highest reliable transmission bit rate (RTBR) with arbitrarily small error probability. However, the authors find that the concept holds only when the channel input and output are treated as a single signal-stream. Hence, this work reveals a possibility for increasing the RTBR further by transmitting two independent signal-streams in a parallel manner. The gain is obtained by separating the two signals at the receiver without any inter-stream interference. To do so, we borrow the QPSK constellation to layer the two independent signals and develop a partial decoding method that carries the signal separation from Hamming to Euclidean space. The theoretical derivations prove that the proposed method can exceed conventional QPSK in terms of RTBR. | computer science
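A hedged numpy sketch of the layering idea follows: two independent binary streams are placed on the in-phase and quadrature branches of a QPSK constellation and separated at the receiver without inter-stream interference on an AWGN channel. The partial decoding method itself is not reproduced here; the snippet only illustrates the interference-free separation that underpins it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10000
bits_a = rng.integers(0, 2, n)   # stream 1
bits_b = rng.integers(0, 2, n)   # stream 2, independent of stream 1

# Layer the two streams on the in-phase and quadrature branches (unit symbol energy).
s = ((2 * bits_a - 1) + 1j * (2 * bits_b - 1)) / np.sqrt(2)

snr_db = 8.0
sigma = np.sqrt(0.5 / 10 ** (snr_db / 10))   # per-branch noise std for the given SNR
r = s + sigma * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

# Orthogonality of I and Q separates the streams with no cross-interference.
hat_a, hat_b = (r.real > 0).astype(int), (r.imag > 0).astype(int)
print("BER stream 1:", np.mean(hat_a != bits_a), "BER stream 2:", np.mean(hat_b != bits_b))
```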
We evaluate a large-scale set of interventions to increase demand for immunization in Haryana, India. The policies under consideration include the two most frequently discussed tools--reminders and incentives--as well as an intervention inspired by the networks literature. We cross-randomize whether (a) individuals receive SMS reminders about upcoming vaccination drives; (b) individuals receive incentives for vaccinating their children; (c) influential individuals (information hubs, trusted individuals, or both) are asked to act as "ambassadors" receiving regular reminders to spread the word about immunization in their community. By taking into account different versions (or "dosages") of each intervention, we obtain 75 unique policy combinations. We develop a new statistical technique--a smart pooling and pruning procedure--for finding a best policy from a large set, which also determines which policies are effective and the effect of the best policy. We proceed in two steps. First, we use a LASSO technique to collapse the data: we pool dosages of the same treatment if the data cannot reject that they had the same impact, and prune policies deemed ineffective. Second, using the remaining (pooled) policies, we estimate the effect of the best policy, accounting for the winner's curse. The key outcomes are (i) the number of measles immunizations and (ii) the number of immunizations per dollar spent. The policy that has the largest impact (information hubs, SMS reminders, incentives that increase with each immunization) increases the number of immunizations by 44% relative to the status quo. The most cost-effective policy (information hubs, SMS reminders, no incentives) increases the number of immunizations per dollar by 9.1%. | statistics |
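The smart pooling and pruning procedure can be caricatured in a few lines: a LASSO on policy-cell indicators prunes ineffective policies, and surviving dosages with statistically indistinguishable effects are pooled. The sketch below is a simplification of the paper's two-step procedure; the crude standard-error rule stands in for the formal equality tests, and the winner's-curse correction is omitted.

```python
import numpy as np
from sklearn.linear_model import LassoCV

def pool_and_prune(y, D, tol=1e-8):
    """y: (n,) outcomes; D: (n, K) one-hot indicators of policy/dosage cells.
    Step 1 (prune): a cross-validated LASSO shrinks ineffective policies to zero.
    Step 2 (pool): surviving policies with effects closer than one crude
    standard error are merged (a stand-in for formal equality tests)."""
    beta = LassoCV(cv=5).fit(D, y).coef_
    kept = np.where(np.abs(beta) > tol)[0]
    if kept.size == 0:
        return kept, []
    se = y.std() / np.sqrt(max(D.sum(axis=0).min(), 1.0))
    order = kept[np.argsort(beta[kept])]
    groups, cur = [], [order[0]]
    for j in order[1:]:
        if beta[j] - beta[cur[0]] < se:   # indistinguishable -> same pool
            cur.append(j)
        else:
            groups.append(cur)
            cur = [j]
    groups.append(cur)
    return kept, groups
```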
For applications like the numerical solution of physical equations, a discretization scheme for operators is necessary. Recently, frames have been used for such operator representations. In this paper, we apply fusion frames to this task. We interpret the operator representation using fusion frames as a generalization of fusion Gram matrices. We present the basic definition of $U$-fusion cross Gram matrices of operators for a bounded operator $U$. We give sufficient conditions for their (pseudo-)invertibility and present explicit formulas for the inverse. In particular, we characterize fusion Riesz bases and fusion orthonormal bases by such matrices. Finally, we look at which perturbations of fusion Bessel sequences preserve the invertibility of the fusion Gram matrix of operators. | mathematics
We perform magnetohydrodynamic simulations of accreting, equal-mass binary black holes in full general relativity, focusing on the impact of black hole spin on the dynamical formation and evolution of minidisks. We find that during the late inspiral the sizes of minidisks are primarily determined by the interplay between the tidal field and the effective innermost stable orbit around each black hole. Our calculations support that a minidisk forms when the Hill sphere around each black hole is significantly larger than the black hole's effective innermost stable orbit. As the binary inspirals, the radius of the Hill sphere decreases, and minidisks consequently shrink in size. As a result, electromagnetic signatures associated with minidisks may be expected to gradually disappear prior to merger, when there are no more stable orbits within the Hill sphere. In particular, a gradual disappearance of a hard electromagnetic component in the spectrum of such systems could provide a characteristic signature of merging black hole binaries. For a binary of given total mass, the timescale to minidisk "evaporation" should therefore depend on the black hole spins and the mass ratio. We also demonstrate that accreting binary black holes with spin have a higher efficiency for converting accretion power to jet luminosity. These results could provide new ways to estimate black hole spins in the future. | astrophysics
We find that the present anti-PT-symmetry operator is a problematic non-Hermitian operator. However, through the use of a similarity transformation, we pass to a new non-Hermitian operator under the same commuting operator. Interestingly, using the new form of PT-symmetry, we reproduce previous experimental results and propose a new nature of asymmetry in symmetry breakdown. | quantum physics
Magnetic features on the surfaces of cool stars cause variations of their brightness. Such variations have been extensively studied for the Sun. Recent planet-hunting space telescopes allowed measuring brightness variations in hundred thousands of other stars. The new data posed the question of how typical is the Sun as a variable star. Putting solar variability into the stellar context suffers, however, from the bias of solar observations being made from its near-equatorial plane, whereas stars are observed at all possible inclinations. We model solar brightness variations at timescales from days to years as they would be observed at different inclinations. In particular, we consider the effect of the inclination on the power spectrum of solar brightness variations. The variations are calculated in several passbands routinely used for stellar measurements. We employ the Surface Flux Transport Model (SFTM) to simulate the time-dependent spatial distribution of magnetic features on both near- and far-sides of the Sun. This distribution is then used to calculate solar brightness variations following the SATIRE (Spectral And Total Irradiance REconstruction) approach. We have quantified the effect of the inclination on solar brightness variability at timescales down to a day. Thus, our results allow making solar brightness records directly comparable to those obtained by the planet-hunting space telescopes. Furthermore, we decompose solar brightness variations into the components originating from the solar rotation and from the evolution of magnetic features. | astrophysics |
The study of the dynamics of magnetically ordered states under strong excitation through micromagnetic modeling has become relevant due to the observation of magnon Bose condensation. In particular, the question has arisen whether the coherent quantum state can be described by the quasi-classical Landau-Lifshitz-Gilbert equations. We performed micromagnetic simulations of magnetization precession with a high deviation angle in an out-of-plane nonuniform dc field. Our results confirm the formation of a coherent magnon state under conditions of high excitation. This coherent state extends over long distances and is described by a spatially inhomogeneous amplitude and a homogeneous precession phase. | condensed matter
A novel quantum-classical hybrid scheme is proposed to efficiently solve large-scale combinatorial optimization problems. The key concept is to introduce a Hamiltonian dynamics of the classical flux variables associated with the quantum spins of the transverse-field Ising model. Molecular dynamics of the classical fluxes can be used as a powerful preconditioner to sort out the frozen and ambivalent spins for quantum annealers. The performance and accuracy of our smooth hybridization in comparison to the standard classical algorithms (the tabu search and the simulated annealing) are demonstrated by employing the MAX-CUT and Ising spin-glass problems. | quantum physics |
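As an illustrative guess at the flavor of the scheme (not the authors' algorithm), the sketch below evolves continuous flux-like variables under Hamiltonian dynamics for an Ising cost function and flags spins whose time-averaged sign is decisive as frozen, leaving only ambivalent spins for the annealer. The double-well confinement and the freezing threshold are assumptions.

```python
import numpy as np

def flux_md_preconditioner(J, steps=2000, dt=0.02, thresh=0.9, seed=0):
    """Toy classical-flux dynamics for an Ising instance with couplings J.

    Spins are relaxed to continuous variables x_i evolving under Hamiltonian
    dynamics for V(x) = sum_ij J_ij x_i x_j + sum_i (x_i^2 - 1)^2; spins whose
    time-averaged sign is decisive are flagged as 'frozen' and fixed, while
    the remaining 'ambivalent' spins are handed to the quantum annealer."""
    rng = np.random.default_rng(seed)
    n = J.shape[0]
    x = rng.uniform(-0.1, 0.1, n)
    p = np.zeros(n)
    avg = np.zeros(n)

    def force(x):
        # -dV/dx; the double-well term keeps |x_i| near 1.
        return -2.0 * J @ x - 4.0 * x * (x**2 - 1.0)

    for _ in range(steps):          # leapfrog integration
        p += 0.5 * dt * force(x)
        x += dt * p
        p += 0.5 * dt * force(x)
        avg += np.sign(x)
    avg /= steps
    frozen = np.abs(avg) > thresh
    return np.where(frozen)[0], np.sign(avg[frozen]), np.where(~frozen)[0]
```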
A first-order gauge invariant formulation for the two-dimensional quantum rigid rotor is long known in the theoretical physics community as an isolated peculiar model. Parallel to that fact, the longstanding constraints abelianization problem, aiming at the conversion from second to first class systems for quantization purposes, has been approached a number of times in the literature with a handful of different forms and techniques and still continues to be a source of lively and interesting discussions. Connecting these two points, we develop a new systematic method for converting second class systems to first class ones, valid for a class of systems encompassing the quantum rigid rotor as a special case. In particular the gauge invariance of the quantum rigid rotor is fully clarified and generalized in the context of arbitrary translations along the radial momentum direction. Our method differs substantially from previous ones as it does not rely neither on the introduction of new auxiliary variables nor on the a priori interpretation of the second class constraints as coming from a gauge-fixing process. | high energy physics theory |
In this paper, we present a new variable selection method for regression and classification purposes. Our method, called Subsampling Ranking Forward selection (SuRF), is based on LASSO-penalised regression, subsampling and forward-selection methods. SuRF offers major advantages over existing variable selection methods in terms of both the sparsity of selected models and model inference. We provide an R package that implements our method for generalized linear models. We apply our method to classification problems from microbiome data, using a novel agglomeration approach to deal with the special tree-like correlation structure of the variables. Existing methods arbitrarily choose a taxonomic level a priori before performing the analysis, whereas by combining SuRF with these aggregated variables, we are able to identify the key biomarkers at the appropriate taxonomic level, as suggested by the data. We present simulations in multiple sparse settings to demonstrate that our approach performs better than several other popular existing approaches in recovering the true variables. We apply SuRF to two microbiome data sets: one on the prediction of pouchitis and another on identifying samples from two healthy individuals. We find that SuRF can provide better or comparable prediction to other methods while controlling the false positive rate of variable selection. | statistics
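A hedged Python sketch of the pipeline's shape follows (the authors' reference implementation is their R package): LASSO fits on many subsamples rank variables by selection frequency, and a forward-selection pass along that ranking picks the final model. The tuning constants and the stopping rule are illustrative choices.

```python
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression
from sklearn.model_selection import cross_val_score

def surf_sketch(X, y, n_sub=50, frac=0.5, seed=0):
    """Subsampling + LASSO ranking + forward selection, in miniature."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    freq = np.zeros(p)
    for _ in range(n_sub):                          # subsampling + LASSO ranking
        idx = rng.choice(n, int(frac * n), replace=False)
        m = LassoCV(cv=5).fit(X[idx], y[idx])
        freq += (np.abs(m.coef_) > 1e-8)
    order = np.argsort(-freq)                       # rank by selection frequency
    chosen, best = [], -np.inf
    for j in order:                                 # greedy forward selection
        trial = chosen + [j]
        score = cross_val_score(LinearRegression(), X[:, trial], y, cv=5).mean()
        if score > best + 1e-4:
            chosen, best = trial, score
        else:
            break                                   # stop at first non-improvement
    return chosen, freq
```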
In wireless sensor networks (WSNs), simulation practices, system models, algorithms, and protocols have been published worldwide based on assumptions of randomness. The statistics applied for randomness in WSNs are broad in nature, e.g., random deployment, activity tracking, packet generation, etc. Even with adequate formal and informal information provided and pledged by authors, validation of a proposal remains a challenging issue, since even a minuscule alteration of information in implementation and validation can have an enormous effect on the eventual results. In this proposal, we show how results are affected by generalized assumptions made about randomness. In sensor node deployment, ambiguity arises due to the node error-value ($\epsilon$), and its upper bound on the relative position is estimated to understand the delicacy of diminutive changes. Moreover, the effect of uniformity in the traffic and the contribution of the scheduling position of nodes are also generalized. We propose an algorithm to generate a unified dataset for general and some specific application system models in WSNs. The results produced by our algorithm reflect the pseudo-randomness and can be efficiently regenerated through a seed value for validation. | computer science
We review O$(d,d)$ Covariant String Cosmology to all orders in $\alpha'$ in the presence of matter and study its solutions. We show that the perturbative analysis for a constant dilaton in the absence of a dilatonic charge does not lead to a time-independent equation of state. Meanwhile, the non-perturbative equations of motion allow de Sitter solutions in the String frame parametrized by the equation of state and the dilatonic charge. Among this set of solutions, we show that a cosmological-constant equation of state implies a de Sitter solution in both the String and Einstein frames, while a winding equation of state implies a de Sitter solution in the former and a static phase in the latter. We also consider the stability of these solutions under homogeneous linear perturbations and show that they are not unstable, therefore defining viable cosmological scenarios. | high energy physics theory
Multilayer network analysis is a useful approach for studying the structural properties of entities with diverse, multitudinous relations. Classifying the importance of nodes and node-layer tuples is an important aspect of the study of multilayer networks. To do this, it is common to calculate various centrality measures, which allow one to rank nodes and node-layers according to a variety of structural features. In this paper, we formulate occupation, PageRank, betweenness, and closeness centralities in terms of node-occupation properties of different types of continuous-time classical and quantum random walks on multilayer networks. We apply our framework to a variety of synthetic and real-world multilayer networks, and we identify marked differences between classical and quantum centrality measures. Our computations also give insights into the correlations between certain random-walk-based and geodesic-path-based centralities. | physics |
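For the classical side of the construction, occupation centrality can be sketched directly from the supra-adjacency matrix: evolve a continuous-time random walk generated by the supra-Laplacian and read off node-layer occupation probabilities. The choice of generator and observation time below are assumptions for illustration; the quantum walk would replace the dissipative semigroup with unitary dynamics.

```python
import numpy as np
from scipy.linalg import expm

def occupation_centrality(A_supra, t=5.0):
    """Occupation centrality of node-layer tuples from a classical
    continuous-time random walk on a multilayer network.

    A_supra: symmetric supra-adjacency matrix (intra-layer adjacency blocks
    on the diagonal, inter-layer couplings off the diagonal). Returns the
    occupation probability of each node-layer tuple at time t, starting
    from the uniform distribution."""
    N = A_supra.shape[0]
    L = np.diag(A_supra.sum(axis=1)) - A_supra   # combinatorial supra-Laplacian
    p0 = np.full(N, 1.0 / N)
    # Probability mass is conserved since L @ ones = 0 and L is symmetric.
    return p0 @ expm(-t * L)
```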
The Witten-Sakai-Sugimoto model is used to study two-flavour Yang-Mills theory with a large number of colours at finite temperature and in the presence of chemical potentials for baryon number and isospin. The sources for the $U(1)_B$ and $U(1)_3$ gauge fields on the flavour 8-branes are D4-branes wrapped on the $S^4$ part of the background. Here, the gauge symmetry on the flavour branes has been decomposed as $U(2) \equiv U(1)_B \times SU(2)$, and $U(1)_3$ lies within $SU(2)$, generated by the diagonal generator. We present various brane configurations, along with the phases in the boundary theory they correspond to, and explore the possibility of phase transitions between various pairs of phases. | high energy physics theory
In the era of Big Code, when researchers seek to study an increasingly large number of repositories to support their findings, the data processing stage may require manipulating millions of records or more. In this work we focus on studies involving fine-grained, AST-level source code changes. We present how we extended the CodeDistillery source code mining framework with data manipulation capabilities aimed at alleviating the processing of large datasets of fine-grained source code changes. The capabilities we have introduced allow researchers to highly automate their repository mining process and streamline the data acquisition and processing phases. These capabilities have been successfully used to conduct a number of studies, in the course of which dozens of millions of fine-grained source code changes have been processed. | computer science
Determinantal consensus clustering is a promising and attractive alternative to partitioning about medoids and k-means for ensemble clustering. Based on a determinantal point process or DPP sampling, it ensures that subsets of similar points are less likely to be selected as centroids. It favors more diverse subsets of points. The sampling algorithm of the determinantal point process requires the eigendecomposition of a Gram matrix. This becomes computationally intensive when the data size is very large. This is particularly an issue in consensus clustering, where a given clustering algorithm is run several times in order to produce a final consolidated clustering. We propose two efficient alternatives to carry out determinantal consensus clustering on large datasets. They consist in DPP sampling based on sparse and small kernel matrices whose eigenvalue distributions are close to that of the original Gram matrix. | statistics |
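The eigendecomposition bottleneck mentioned above sits inside the standard spectral DPP sampler (Hough et al.), sketched below; it is this cubic-cost step that the proposed sparse and small kernel matrices approximate. The implementation is a generic textbook version, not the paper's code.

```python
import numpy as np

def sample_dpp(L, seed=None):
    """Exact sampling from a determinantal point process with kernel L.

    The eigendecomposition below is the expensive step that motivates
    sparse/low-rank approximations when the Gram matrix is large."""
    rng = np.random.default_rng(seed)
    lam, V = np.linalg.eigh(L)
    lam = np.clip(lam, 0.0, None)                      # guard against round-off
    keep = rng.random(len(lam)) < lam / (1.0 + lam)    # pick an elementary DPP
    V = V[:, keep]
    items = []
    while V.shape[1] > 0:
        # Pick an item proportional to the squared row norms of V.
        probs = (V**2).sum(axis=1)
        probs /= probs.sum()
        i = rng.choice(len(probs), p=probs)
        items.append(i)
        # Project out the direction associated with item i, drop one column.
        j = np.argmax(np.abs(V[i]))
        v = V[:, j] / V[i, j]
        V = V - np.outer(v, V[i])
        V = np.delete(V, j, axis=1)
        if V.shape[1] > 0:
            V, _ = np.linalg.qr(V)                     # re-orthonormalize
    return sorted(items)
```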
The interface of two solids in contact introduces a thermal boundary resistance (TBR), which is challenging to measure in experiments. Besides, if the interface is reactive, it can form an intermediate recrystallized or amorphous region, and extra influencing phenomena are introduced. Reactive force field Molecular Dynamics (ReaxFF MD) is used to study these interfacial phenomena at (non-)reactive interfaces. The non-reactive interfaces are compared using a phenomenological theory (PT), which predicts the temperature discontinuity at the interface. By connecting ReaxFF MD and PT, we confirm a continuous temperature profile for the homogeneous non-reactive interface and a temperature jump in the case of the heterogeneous non-reactive interface. ReaxFF MD is further used to understand the effect of the chemical activity of two solids in contact. The selected Si/SiO$_2$ materials showed that the TBR of the reacted interface is two times larger than that of the non-reactive one, going from $1.65\times 10^{-9}$ to $3.38\times 10^{-9}$ m$^2$K/W. This is linked to the formation of an intermediate amorphous layer induced by heating, which remains stable when the system is cooled again. This provides the possibility to design multi-layered structures with a desired TBR. | condensed matter
The novel corona-virus disease (COVID-19) pandemic has caused a major outbreak in more than 200 countries around the world, leading to a severe impact on the health and life of many people globally. As of mid-July 2020, more than 12 million people were infected, and more than 570,000 deaths were reported. Computed Tomography (CT) images can be used as an alternative to the time-consuming RT-PCR test, to detect COVID-19. In this work we propose a segmentation framework to detect chest regions in CT images which are infected by COVID-19. We use an architecture similar to the U-Net model, and train it to detect ground glass regions on the pixel level. As the infected regions tend to form connected components (rather than randomly distributed pixels), we add a suitable regularization term to the loss function to promote connectivity of the segmentation map for COVID-19 pixels. 2D anisotropic total variation is used for this purpose, and therefore the proposed model is called "TV-UNet". Through experimental results on a relatively large-scale CT segmentation dataset of around 900 images, we show that adding this new regularization term leads to a 2\% gain in overall segmentation performance compared to the U-Net model. Our experimental analysis, ranging from visual evaluation of the predicted segmentation results to quantitative assessment of segmentation performance (precision, recall, Dice score, and mIoU), demonstrates a great ability to identify COVID-19 associated regions of the lungs, achieving a mIoU rate of over 99\% and a Dice score of around 86\%. | electrical engineering and systems science
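A minimal PyTorch sketch of a 2D anisotropic total-variation term of the kind described is given below; the reduction, the weighting, and the way it is combined with the base segmentation loss are assumptions for illustration.

```python
import torch

def anisotropic_tv(p, lam_h=1.0, lam_v=1.0):
    """2D anisotropic total variation of a predicted probability map p with
    shape (batch, 1, H, W); added to the segmentation loss to favor
    connected infected regions over scattered pixels."""
    dv = torch.abs(p[:, :, 1:, :] - p[:, :, :-1, :]).mean()   # vertical differences
    dh = torch.abs(p[:, :, :, 1:] - p[:, :, :, :-1]).mean()   # horizontal differences
    return lam_v * dv + lam_h * dh

# Usage sketch: total_loss = bce(logits, mask) + mu * anisotropic_tv(torch.sigmoid(logits))
```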
We study the well-posedness of a semilinear fractional diffusion equation and formulate an associated inverse problem. We determine fractional power type nonlinearities from the exterior partial measurements of the Dirichlet-to-Neumann map. Our arguments are based on a first order linearization as well as the parabolic Runge approximation property. | mathematics |
We study generalized discrete symmetries of quantum field theories in 1+1D generated by topological defect lines with no inverse. In particular, we describe 't Hooft anomalies and classify gapped phases stabilized by these symmetries, including new 1+1D topological phases. The algebra of these operators is not a group but rather is described by their fusion ring and crossing relations, captured algebraically as a fusion category. Such data defines a Turaev-Viro/Levin-Wen model in 2+1D, while a 1+1D system with this fusion category acting as a global symmetry defines a boundary condition. This is akin to gauging a discrete global symmetry at the boundary of Dijkgraaf-Witten theory. We describe how to "ungauge" the fusion category symmetry in these boundary conditions and separate the symmetry-preserving phases from the symmetry-breaking ones. For Tambara-Yamagami categories and their generalizations, which are associated with Kramers-Wannier-like self-dualities under orbifolding, we develop gauge theoretic techniques which simplify the analysis. We include some examples of CFTs with fusion category symmetry derived from Kramers-Wannier-like dualities as an appetizer for the Part II companion paper. | high energy physics theory |
Training for telerobotic systems often makes heavy use of simulated platforms, which ensure safe operation during the learning process. Outer space is one domain in which such a simulated training platform would be useful, as On-Orbit Operations (O3) can be costly, inefficient, or even dangerous if not performed properly. In this paper, we present a new telerobotic training simulator for the Canadarm2 on the International Space Station (ISS), which is able to modulate workload through the addition of confounding factors such as latency, obstacles, and time pressure. In addition, multimodal physiological data is collected from subjects as they perform a task from the simulator under these different conditions. As most current workload measures are subjective, we analyse objective measures from the simulator and EEG data that can provide a reliable measure. ANOVA of task data revealed which simulator-based performance measures could predict the presence of latency and time pressure. Furthermore, EEG classification using a Riemannian classifier and Leave-One-Subject-Out cross-validation showed promising classification performance and allowed for comparison of different channel configurations and preprocessing methods. Additionally, Riemannian distance and beta power of EEG data were investigated as potential cross-trial and continuous workload measures. | computer science |
Indications of a possible composition-dependent fifth force, based on a reanalysis of the E\"{o}tv\"{o}s experiment, have not been supported by a number of modern experiments. Here, we argue that searching for a composition-dependent fifth force necessarily requires data from experiments in which the acceleration differences of three or more independent pairs of test samples of varying composition are determined. We suggest that a new round of fifth-force experiments is called for, in each of which three or more different pairs of samples are compared. | high energy physics phenomenology |
We explore the interplay between random and deterministic phenomena using a representation of uncertainty based on the measure-theoretic concept of outer measure. The meaning of the analogues of different probabilistic concepts is investigated and examples of application are given. The novelty of this article lies mainly in the suitability of the tools introduced for jointly representing random and deterministic uncertainty. These tools are shown to yield intuitive results in simple situations and to generalise easily to more complex cases. Connections with Dempster-Shafer theory, the empirical Bayes methods and generalised Bayesian inference are also highlighted. | statistics |
We explore a simple extension of the Standard Model containing two gauge singlets: a Dirac fermion and a real pseudoscalar. In large regions of the parameter space, both singlets are stable without the need for additional symmetries, thus becoming a possible two-component dark matter model. We study the relic abundance production via freeze-out, with the latter determined by annihilations, conversions and semi-annihilations. Experimental constraints from the invisible Higgs decay, the dark matter relic abundance and direct/indirect detection are studied. We find three viable regions of the parameter space. | high energy physics phenomenology
Since the study by Jacobi and Hecke, Hecke-type series have received a lot of attention. Unlike such series associated with indefinite quadratic forms, identities on Hecke-type series associated with definite quadratic forms are quite rare in the literature. Motivated by the works of Liu, we first establish many parameterized identities with two parameters by employing different $q$-transformation formulas and then deduce various Hecke-type identities associated with definite quadratic forms by specializing the choice of these two parameters. As applications, we utilize some of these Hecke-type identities to establish families of inequalities for several partition functions. Our proofs heavily rely on some formulas from the work of Zhi-Guo Liu. | mathematics |
In this paper, we propose an approach to effectively accelerate the computation of continuous normalizing flows (CNF), which have been proven to be a powerful tool for tasks such as variational inference and density estimation. The training time cost of CNF can be extremely high because the required number of function evaluations (NFE) for solving the corresponding ordinary differential equations (ODE) is very large. We think that the high NFE results from large truncation errors in solving the ODEs. To address the problem, we propose to add a regularization. The regularization penalizes the difference between the trajectory of the ODE and its fitted polynomial regression. The trajectory of the ODE will then approximate a polynomial function, and thus the truncation error will be smaller. Furthermore, we provide two proofs and claim that the additional regularization does not harm training quality. Experimental results show that our proposed method can achieve a 42.3% to 71.3% reduction of NFE on the task of density estimation, and a 19.3% to 32.1% reduction of NFE on variational auto-encoders, while the testing losses are not affected. | computer science
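A hedged sketch of such a regularizer is given below: states collected at solver checkpoints are fit by a low-degree polynomial in time, and the squared residual is added to the training loss. The degree, the time normalization, and the least-squares formulation are illustrative assumptions, not the paper's exact construction.

```python
import torch

def poly_fit_penalty(ts, zs, degree=3):
    """Penalize the distance between an ODE trajectory and its least-squares
    polynomial fit in time. ts: (T,) checkpoint times from the solver;
    zs: (T, ...) states at those times. Trajectories close to a low-degree
    polynomial incur smaller truncation error, so the adaptive solver
    needs fewer function evaluations."""
    T = ts.shape[0]
    Z = zs.reshape(T, -1)                            # flatten state dimensions
    tau = (ts - ts[0]) / (ts[-1] - ts[0])            # normalize time to [0, 1]
    V = torch.stack([tau**k for k in range(degree + 1)], dim=1)   # Vandermonde matrix
    # Differentiable least-squares fit via the normal equations.
    coef = torch.linalg.solve(V.T @ V, V.T @ Z)
    resid = Z - V @ coef
    return resid.pow(2).mean()

# Usage sketch: loss = nll + beta * poly_fit_penalty(ts, trajectory)
```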