A prototype version of the Q & U bolometric interferometer for cosmology
(QUBIC) underwent a testing campaign at the Astroparticle Physics and
Cosmology (APC) laboratory in Paris. The detection chain is currently made of
256 NbSi transition edge sensors (TES) cooled to 320 mK. The readout system is
a 128:1 time-domain multiplexing scheme based on 128 SQUIDs cooled to 1 K that
are controlled and amplified by a SiGe application-specific integrated circuit
at 40 K. We report the performance of this readout chain and the
characterization of the TES. The readout system has been functionally tested
and characterized in the lab and in QUBIC. The low-noise amplifier
demonstrated a white noise level of 0.3 nV.Hz^-0.5. Characterization of the
QUBIC detectors and readout electronics includes the measurement of I-V
curves, time constants, and the noise equivalent power. The QUBIC TES
bolometer array has approximately 80% of its detectors within operational
parameters. It demonstrated a thermal decoupling compatible with a phonon
noise of about 5x10^-17 W.Hz^-0.5 at a critical temperature of 410 mK. While
still limited by microphonics from the pulse tubes and noise aliasing from the
readout system, the instrument noise equivalent power is about 2x10^-16
W.Hz^-0.5, sufficient for the demonstration of bolometric interferometry.
|
We investigate the viability of producing galaxy mock catalogues with
COmoving Lagrangian Acceleration (COLA) simulations in Modified Gravity (MG)
models employing the Halo Occupation Distribution (HOD) formalism. In this
work, we focus on two theories of MG: $f(R)$ gravity with the chameleon
mechanism, and a braneworld model (nDGP) that incorporates the Vainshtein
mechanism. We use a suite of full $N$-body simulations in MG as a benchmark to
test the accuracy of COLA simulations. At the level of Dark Matter (DM), we
show that COLA accurately reproduces the matter power spectrum up to $k \sim 1
h {\rm Mpc}^{-1}$, while it is less accurate in reproducing the velocity field.
To produce halo catalogues, we find that the ROCKSTAR halo-finder does not
perform well with COLA simulations. On the other hand, using a simple
Friends-of-Friends (FoF) finder and an empirical mass conversion from FoF to
spherical over-density masses, we are able to produce halo catalogues in COLA
that are in good agreement with those in $N$-body simulations. To consider the
effects of the MG fifth force on the halo profile, we derive simple fitting
formulae for the concentration-mass and the velocity dispersion-mass relations
that we calibrate using ROCKSTAR halo catalogues in $N$-body simulations. We
then use these results to extend the HOD formalism to modified gravity
simulations in COLA. We use an HOD model with five parameters that we tune to
obtain galaxy catalogues in redshift space. We find that despite the great
freedom of the HOD model, MG leaves characteristic imprints in the redshift
space power spectrum multipoles and these features are well captured by the
COLA galaxy catalogues.
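The abstract does not specify the five-parameter form; a common choice,
assumed here purely for illustration, is the Zheng et al.-style
parametrization with parameters $(\log M_{\rm min}, \sigma_{\log M}, \log M_0,
\log M_1, \alpha)$, which a minimal sketch can implement as:

    import numpy as np
    from scipy.special import erf

    def hod_occupation(M, logMmin, sigma_logM, logM0, logM1, alpha):
        """Mean central and satellite occupation for halo mass M [Msun/h]."""
        N_cen = 0.5 * (1.0 + erf((np.log10(M) - logMmin) / sigma_logM))
        M0, M1 = 10.0 ** logM0, 10.0 ** logM1
        # Satellites follow a power law above the cutoff mass M0.
        N_sat = N_cen * np.clip(M - M0, 0.0, None) ** alpha / M1 ** alpha
        return N_cen, N_sat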
|
Combinatorial design theory studies set systems with certain balance and
symmetry properties and has applications to computer science and elsewhere.
This paper presents a modular approach to formalising designs for the first
time using Isabelle and assesses the usability of a locale-centric approach to
formalisations of mathematical structures. We demonstrate how locales can be
used to specify numerous types of designs and their hierarchy. The resulting
library, which is concise and adaptable, includes formal definitions and proofs
for many key properties, operations, and theorems on the construction and
existence of designs.
|
Motivated by the study of high energy Steklov eigenfunctions, we examine the
semi-classical Robin Laplacian. In the two dimensional situation, we determine
an effective operator describing the asymptotic distribution of the negative
eigenvalues, and we prove that the corresponding eigenfunctions decay away from
the boundary, for all dimensions.
|
Given a finite abelian group $G$ and a natural number $t$, there are two
natural substructures of the Cartesian power $G^t$; namely, $S^t$ where $S$ is
a subset of $G$, and $x+H$ a coset of a subgroup $H$ of $G^t$. A natural
question is whether two such different structures have non-empty intersection.
This turns out to be an NP-complete problem. If we fix $G$ and $S$, then the
problem is in $P$ if $S$ is a coset in $G$ or if $S$ is empty, and NP-complete
otherwise; if we restrict to intersecting powers of $S$ with subgroups, the
problem is in $P$ if $\bigcap_{n\in\mathbb{Z} \mid nS \subset S} nS$ is a coset
or empty, and NP-complete otherwise. These theorems have applications in the
article [Spe21], where they are used as a stepping stone between a purely
combinatorial and a purely algebraic problem.
|
Cycles, which can be found in many different kinds of networks, make problems
more intractable, especially when dealing with dynamical processes on
networks. In contrast, tree networks, in which no cycle exists, are
simplifications that usually allow for analyticity. However, there is no
quantity that measures the prevalence of cycles and thus how close a network
is to a tree. We therefore introduce the Cycle Nodes Ratio (CNR), defined as
the ratio of the number of nodes belonging to cycles to the total number of
nodes, and provide an algorithm to calculate it. We study the CNR in both
network models and real networks. The CNR remains unchanged in Erd\"os R\'enyi
(ER) networks of different sizes with the same average degree, and increases
with the average degree, exhibiting a critical turning point. We give
approximate analytical solutions for the CNR in ER networks, which fit the
simulations well. Furthermore, we analyse the difference between the CNR and
the two-core ratio (TCR), and explore the critical phenomenon by analysing the
giant component of the network. Comparing the CNR in network models and real
networks, we find that the latter is generally smaller. Combined with a
coarse-graining method, the CNR can distinguish the structure of networks with
high average degree. We also apply the CNR to four different kinds of
transportation networks and to fungal networks, which give rise to different
zones of effect. Interestingly, the CNR proves very useful for network
recognition in machine learning.
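The abstract does not give the algorithm itself; one simple way to compute the
CNR, sketched below under the assumption of a simple undirected graph, uses
the fact that a node lies on a cycle exactly when it belongs to a biconnected
component with at least three vertices:

    import networkx as nx

    def cycle_nodes_ratio(G: nx.Graph) -> float:
        # In a simple graph, a node lies on a cycle iff it belongs to
        # a biconnected component containing >= 3 vertices.
        cycle_nodes = set()
        for component in nx.biconnected_components(G):
            if len(component) >= 3:
                cycle_nodes.update(component)
        return len(cycle_nodes) / G.number_of_nodes()

    # Example: CNR of an ER network with average degree 4
    print(cycle_nodes_ratio(nx.gnp_random_graph(10_000, 4 / 10_000)))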
|
We develop a theory of T-duality for transitive Courant algebroids. We show
that T-duality between transitive Courant algebroids $E\rightarrow M$ and
$\tilde{E}\rightarrow \tilde{M}$ induces a map between the spaces of sections
of the corresponding canonical weighted spinor bundles $\mathbb{S}_{E}$ and
$\mathbb{S}_{\tilde{E}}$ intertwining the canonical Dirac generating
operators. The map is shown to induce an isomorphism between the spaces of
invariant spinors, compatible with an isomorphism between the spaces of
invariant sections of the Courant algebroids. The notion of invariance is
defined after lifting the vertical parallelisms of the underlying torus
bundles $M\rightarrow B$ and $\tilde{M}\rightarrow B$ to the Courant
algebroids and their spinor bundles. We prove a general existence result for
T-duals under assumptions generalizing the cohomological integrality
conditions for T-duality in the exact case. Specializing our construction, we
find that the T-dual of an exact or a heterotic Courant algebroid is again
exact or heterotic, respectively.
|
Reggeon field theory (RFT), originally developed in the context of high
energy diffraction scattering, has a much wider applicability, describing, for
example, the universal critical behavior of stochastic population models as
well as probabilistic geometric problems such as directed percolation. In 1975
Suranyi and others developed cut RFT, which can incorporate the cutting rules
of Abramovskii, Gribov and Kancheli for how each diagram contributes to
inclusive cross-sections. In this note we describe the corresponding
probabilistic interpretations of cut RFT: as a population model of two
genotypes, which can reproduce both asexually and sexually; and as a kind of
bicolor directed percolation problem. In both cases the AGK rules correspond to
simple limiting cases of these problems.
|
Utilizing the Atacama Large Millimeter/submillimeter Array (ALMA), we present
CS line maps in five rotational lines ($J_{\rm u}=7, 5, 4, 3, 2$) toward the
circumnuclear disk (CND) and streamers of the Galactic Center. Our primary goal
is to resolve the compact structures within the CND and the streamers, in order
to understand the stability conditions of molecular cores in the vicinity of
the supermassive black hole (SMBH) Sgr A*. Our data provide the first
homogeneous high-resolution ($1.3'' = 0.05$ pc) observations aiming at
resolving density and temperature structures. The CS clouds have sizes of
$0.05-0.2$ pc with a broad range of velocity dispersion ($\sigma_{\rm
FWHM}=5-40$ km s$^{-1}$). The CS clouds are a mixture of warm ($T_{\rm k}\ge
50-500$ K, $n_{\rm H_2}=10^{3-5}$ cm$^{-3}$) and cold gas ($T_{\rm k}\le 50$
K, $n_{\rm H_2}=10^{6-8}$ cm$^{-3}$). A stability analysis based on the
unmagnetized virial theorem including the tidal force shows that
$84^{+16}_{-37}$% of the total gas mass is tidally stable. Turbulence
dominates the internal energy and thereby sets the threshold densities
$10-100$ times higher than the tidal limit at distances $\ge 1.5$ pc from Sgr
A*, thus inhibiting the clouds from collapsing to form stars near the SMBH.
Within the central $1.5$ pc, however, the tidal force overrides turbulence and
the threshold densities for gravitational collapse quickly grow to $\ge
10^{8}$ cm$^{-3}$.
|
Cross-modal recipe retrieval has recently gained substantial attention due to
the importance of food in people's lives, as well as the availability of vast
amounts of digital cooking recipes and food images to train machine learning
models. In this work, we revisit existing approaches for cross-modal recipe
retrieval and propose a simplified end-to-end model based on well established
and high performing encoders for text and images. We introduce a hierarchical
recipe Transformer which attentively encodes individual recipe components
(titles, ingredients and instructions). Further, we propose a self-supervised
loss function computed on top of pairs of individual recipe components, which
is able to leverage semantic relationships within recipes, and enables training
using both image-recipe and recipe-only samples. We conduct a thorough analysis
and ablation studies to validate our design choices. As a result, our proposed
method achieves state-of-the-art performance in the cross-modal recipe
retrieval task on the Recipe1M dataset. We make code and models publicly
available.
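As a rough illustration of a self-supervised loss over pairs of recipe
components (the exact formulation in the paper may differ; the component
encoders and temperature value below are assumptions), an InfoNCE-style
objective that pulls together embeddings of components from the same recipe
could look like:

    import torch
    import torch.nn.functional as F

    def component_pair_loss(title_emb, ingr_emb, temperature=0.07):
        # title_emb, ingr_emb: (batch, dim) embeddings of two components
        # of the same recipes; matching rows form positive pairs.
        title_emb = F.normalize(title_emb, dim=-1)
        ingr_emb = F.normalize(ingr_emb, dim=-1)
        logits = title_emb @ ingr_emb.t() / temperature
        targets = torch.arange(logits.size(0), device=logits.device)
        # symmetric cross-entropy over both matching directions
        return 0.5 * (F.cross_entropy(logits, targets)
                      + F.cross_entropy(logits.t(), targets))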
|
Quantization is one of the core components in lossy image compression. For
neural image compression, end-to-end optimization requires differentiable
approximations of quantization, which can generally be grouped into three
categories: additive uniform noise, straight-through estimator and soft-to-hard
annealing. Training with additive uniform noise approximates the quantization
error variationally but suffers from the train-test mismatch. The other two
methods do not encounter this mismatch but, as shown in this paper, hurt the
rate-distortion performance since the latent representation ability is
weakened. We thus propose a novel soft-then-hard quantization strategy for
neural image compression that first learns an expressive latent space softly,
then closes the train-test mismatch with hard quantization. In addition, beyond
the fixed integer quantization, we apply scaled additive uniform noise to
adaptively control the quantization granularity by deriving a new variational
upper bound on actual rate. Experiments demonstrate that our proposed methods
are easy to adopt, stable to train, and highly effective especially on complex
compression models.
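A minimal sketch of the two quantization regimes discussed above (the
stage-switching logic is an assumption; the paper's actual schedule and
straight-through details may differ):

    import torch

    def quantize(y, hard: bool):
        if not hard:
            # soft stage: additive uniform noise keeps training fully
            # differentiable and approximates the quantization error
            return y + torch.empty_like(y).uniform_(-0.5, 0.5)
        # hard stage: true rounding closes the train-test mismatch;
        # the detach trick passes gradients straight through
        return y + (torch.round(y) - y).detach()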
|
In this paper, we introduce PASSAT, a practical system to boost the security
assurance delivered by the current cloud architecture without requiring any
changes or cooperation from the cloud service providers. PASSAT is an
application transparent to the cloud servers that allows users to securely and
efficiently store and access their files stored on public cloud storage based
on a single master password. Using a fast and light-weight XOR secret sharing
scheme, PASSAT secret-shares users' files and distributes them among n publicly
available cloud platforms. To access the files, PASSAT communicates with any k
out of n cloud platforms to receive the shares and runs a secret-sharing
reconstruction algorithm to recover the files. An attacker (insider or
outsider) who compromises or colludes with less than k platforms cannot learn
the user's files or modify the files stealthily. To authenticate the user to
multiple cloud platforms, PASSAT stores the platform-specific authentication
credentials in a password manager protected under the user's master password.
Upon requesting access to files, the user enters the master password to unlock
the vault and fetch the authentication tokens with which PASSAT interacts with
the cloud storage. Our instantiation of PASSAT based
on (2, 3)-XOR secret sharing of Kurihara et al., implemented with three popular
storage providers, namely, Google Drive, Box, and Dropbox, confirms that our
approach can efficiently enhance the confidentiality, integrity, and
availability of the stored files with no changes on the servers.
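For intuition, here is a toy (2, 2) XOR sharing sketch; note this is not
Kurihara et al.'s (2, 3) scheme, which additionally tolerates any one missing
share:

    import os

    def share(data: bytes):
        # (2, 2) XOR sharing: each share alone is uniformly random,
        # so it reveals nothing; both shares together recover the data.
        pad = os.urandom(len(data))
        return pad, bytes(a ^ b for a, b in zip(data, pad))

    def reconstruct(s1: bytes, s2: bytes) -> bytes:
        return bytes(a ^ b for a, b in zip(s1, s2))

    s1, s2 = share(b"secret file contents")
    assert reconstruct(s1, s2) == b"secret file contents"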
|
We describe a time lens to expand the dynamic range of photon Doppler
velocimetry (PDV) systems. The principle and preliminary design of a time-lens
PDV (TL-PDV) are explained and shown to be feasible through simulations. In a
PDV system, an interferometer is used for measuring frequency shifts due to the
Doppler effect from the target motion. However, the sampling rate of the
electronics could limit the velocity range of a PDV system. A four-wave-mixing
(FWM) time lens applies a quadratic temporal phase to an optical signal within
a nonlinear FWM medium (such as an integrated photonic waveguide or highly
nonlinear optical fiber). By spectrally isolating the mixing product, termed
the idler, and applying appropriate amounts of dispersion before and after
this FWM time lens, a temporally magnified version of the input signal is
generated.
Therefore, the frequency shifts of PDV can be "slowed down" with the
magnification factor $M$ of the time lens. $M=1$ corresponds to a regular PDV
without a TL. $M=10$ has been shown to be feasible for a TL-PDV. Use of this
effect for PDV can expand the velocity measurement range and allow the use of
lower bandwidth electronics. TL-PDV will open up new avenues for various
dynamic materials experiments.
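For reference, in standard temporal imaging (an assumption here, since the
abstract does not spell out the design equations) the input and output
dispersions $\phi_1''$, $\phi_2''$ and the focal dispersion $\phi_f''$ of the
lens satisfy a lens-like imaging condition, and the magnification is their
ratio:

    \frac{1}{\phi_1''} + \frac{1}{\phi_2''} = \frac{1}{\phi_f''},
    \qquad M = -\frac{\phi_2''}{\phi_1''}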
|
3D perception using sensors that meet automotive industry standards is a rigid
demand in autonomous driving. MEMS LiDAR is emerging as an irresistible trend
due to its lower cost, greater robustness, and compliance with mass-production
standards. However, it suffers from a small field of view (FoV), which slows
its adoption. In this paper, we propose LEAD, i.e., LiDAR Extender for
Autonomous Driving, which extends MEMS LiDAR with a coupled camera image in
terms of both FoV and range. We propose a multi-stage propagation strategy
based on depth distributions and an uncertainty map, which shows effective
propagation ability. Moreover, our depth outpainting/propagation network
follows a teacher-student training fashion, which transfers depth estimation
ability to the depth completion network without passing on any scale error. To
validate the LiDAR extension quality, we utilize a high-precision laser
scanner to generate a ground-truth dataset. Quantitative and qualitative
evaluations show that our scheme outperforms SOTAs by a large margin. We
believe the proposed LEAD, along with the dataset, will benefit the community
in depth-related research.
|
This paper proposes VARA-TTS, a non-autoregressive (non-AR) text-to-speech
(TTS) model using a very deep Variational Autoencoder (VDVAE) with Residual
Attention mechanism, which refines the textual-to-acoustic alignment layer by
layer. Hierarchical latent variables with different temporal resolutions from
the VDVAE are used as queries for the residual attention module. By leveraging
the coarse global alignment from the previous attention layer as an extra
input, the following attention layer can produce a refined version of the
alignment. This amortizes the burden of learning the textual-to-acoustic
alignment among multiple attention layers and is more robust than using only a
single attention layer. An utterance-level speaking-speed factor is computed
by a jointly trained speaking-speed predictor, which takes the mean-pooled
latent variables of the coarsest layer as input, to determine the number of
acoustic frames at inference. Experimental results show that VARA-TTS achieves
slightly
inferior speech quality to an AR counterpart Tacotron 2 but an
order-of-magnitude speed-up at inference; and outperforms an analogous non-AR
model, BVAE-TTS, in terms of speech quality.
|
This work introduces a methodology for studying synchronization in adaptive
networks with heterogeneous plasticity (adaptation) rules. As a paradigmatic
model, we consider a network of adaptively coupled phase oscillators with
distance-dependent adaptations. For this system, we extend the master stability
function approach to adaptive networks with heterogeneous adaptation. Our
method allows for separating the contributions of network structure, local node
dynamics, and heterogeneous adaptation in determining synchronization.
Utilizing our proposed methodology, we explain mechanisms leading to
synchronization or desynchronization by enhanced long-range connections in
nonlocally coupled ring networks and networks with Gaussian distance-dependent
coupling weights equipped with a biologically motivated plasticity rule.
|
We consider the problem of budget allocation for competitive influence
maximization over social networks. In this problem, multiple competing parties
(players) want to distribute their limited advertising resources over a set of
social individuals to maximize their long-run cumulative payoffs. It is assumed
that the individuals are connected via a social network and update their
opinions based on the classical DeGroot model. The players must decide the
budget distribution among the individuals at a finite number of campaign times
to maximize their overall payoff given as a function of individuals' opinions.
We show that i) the optimal investment strategy for the case of a single-player
can be found in polynomial time by solving a concave program, and ii) the
open-loop equilibrium strategies for the multiplayer dynamic game can be
computed efficiently by following natural regret minimization dynamics. Our
results extend the earlier work on the static version of the problem to a
dynamic multistage game.
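For context, the classical DeGroot update referenced above drives each
individual's opinion toward a weighted average of its neighbors' opinions; a
minimal sketch (the influence weights below are illustrative):

    import numpy as np

    # Row-stochastic influence matrix W: W[i, j] is the weight
    # individual i places on j's opinion (illustrative values).
    W = np.array([[0.6, 0.3, 0.1],
                  [0.2, 0.5, 0.3],
                  [0.1, 0.4, 0.5]])
    x = np.array([0.9, 0.1, 0.4])   # initial opinions

    for _ in range(50):             # DeGroot update: x(t+1) = W x(t)
        x = W @ x
    print(x)                        # opinions converge toward consensus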
|
There is mounting evidence that ultra-energetic neutrinos of astrophysical
origin may be associated with blazars. Here we investigate a unique sample of
47 blazars, $\sim 20$ of which could be new neutrino sources. In particular, we
focus on 17 objects of yet unknown redshift, for which we present optical
spectroscopy secured at the Gran Telescopio Canarias and the ESO Very Large
Telescope. We find all sources but one (a quasar) to be BL Lac objects. For
nine targets we are able to determine the redshift ($0.09 < z < 1.6$), while
for the others we set a lower limit on it, based on either the robust detection
of intervening absorption systems or on an estimation derived from the absence
of spectral signatures of the host galaxy. In some spectra we detect forbidden
and semi-forbidden emission lines with luminosities in the range $10^{40} -
10^{41}$ erg s$^{-1}$. We also report on the spectroscopy of seven blazars
possibly associated with energetic neutrinos that partially meet the criteria
of our sample and are discussed in the Appendix. These results represent the
starting point of our investigation into the real nature of these objects and
their likelihood of being neutrino emitters.
|
Given C$^*$-algebras $A$ and $B$ and a $^*$-homomorphism $\phi:A\rightarrow
B$, we adopt the portrait of the relative $K$-theory $K_*(\phi)$ due to Karoubi
using Banach categories and Banach functors. We show that the elements of the
relative groups may be represented in a simple form. We prove the existence of
two six-term exact sequences, and we use these sequences to deduce the fact
that the relative theory is isomorphic, in a natural way, to the $K$-theory of
the mapping cone of $\phi$, as an excision-type result.
|
We experimentally demonstrate, for the first time, noise diagnostics by
repeated quantum measurements. Specifically, we establish the ability of a
single photon, subjected to random polarisation noise, to diagnose
non-Markovian temporal correlations of such a noise process. In the frequency
domain, these noise correlations correspond to colored noise spectra, as
opposed to the ones related to Markovian, white noise. Both the noise spectrum
and its corresponding temporal correlations are diagnosed by probing the photon
by means of frequent, (partially-)selective polarisation measurements. Our main
result is the experimental demonstration that noise with positive temporal
correlations corresponds to our single photon undergoing a dynamical regime
enabled by the quantum Zeno effect (QZE), while noise characterized by negative
(anti-) correlations corresponds to regimes associated with the anti-Zeno
effect (AZE). This demonstration opens the way to a new kind of noise
spectroscopy based on QZE and AZE in photon (or other single-particle) state
probing.
|
Known force terms arising in the Ehrenfest dynamics of quantum electrons and
classical nuclei, due to a moving basis set for the former, can be understood
in terms of the curvature of the manifold hosting the quantum states of the
electronic subsystem. Namely, the velocity-dependent terms appearing in the
Ehrenfest forces on the nuclei acquire a geometrical meaning in terms of the
intrinsic curvature of the manifold, while Pulay terms relate to its extrinsic
curvature.
|
The production of dileptons with an invariant mass in the range 1 GeV < M < 5
GeV provides unique insight into the approach to thermal equilibrium in
ultrarelativistic nucleus-nucleus collisions. In this mass range, they are
produced through the annihilation of quark-antiquark pairs in the early stages
of the collision. They are sensitive to the anisotropy of the quark momentum
distribution, and also to the quark abundance, which is expected to be
underpopulated relative to thermal equilibrium. We take into account both
effects based on recent theoretical developments in QCD kinetic theory, and
study how the dilepton mass spectrum depends on the shear viscosity to entropy
ratio that controls the equilibration time. We evaluate the background from the
Drell-Yan process and argue that future detector developments can suppress the
additional background from semileptonic decays of heavy flavors.
|
In extreme learning machines (ELM) the hidden-layer coefficients are randomly
set and fixed, while the output-layer coefficients of the neural network are
computed by a least squares method. The randomly-assigned coefficients in ELM
are known to influence its performance and accuracy significantly. In this
paper we present a modified batch intrinsic plasticity (modBIP) method for
pre-training the random coefficients in the ELM neural networks. The current
method is devised based on the same principle as the batch intrinsic plasticity
(BIP) method, namely, by enhancing the information transmission in every node
of the neural network. It differs from BIP in two prominent aspects. First,
modBIP does not involve the activation function in its algorithm, and it can be
applied with any activation function in the neural network. In contrast, BIP
employs the inverse of the activation function in its construction, and
requires the activation function to be invertible (or monotonic). The modBIP
method can work with the often-used non-monotonic activation functions (e.g.
Gaussian, swish, Gaussian error linear unit, and radial-basis type functions),
with which BIP breaks down. Second, modBIP generates target samples on random
intervals with a minimum size, which leads to highly accurate computation
results when combined with ELM. The combined ELM/modBIP method is markedly more
accurate than ELM/BIP in numerical simulations. Ample numerical experiments are
presented with shallow and deep neural networks for function approximation and
boundary/initial value problems with partial differential equations. They
demonstrate that the combined ELM/modBIP method produces highly accurate
simulation results, and that its accuracy is insensitive to the
random-coefficient initializations in the neural network. This is in sharp
contrast with the ELM results without pre-training of the random coefficients.
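For readers unfamiliar with the baseline, a minimal ELM in the sense described
above (toy one-dimensional target; no BIP/modBIP pre-training) can be written
as:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(200, 1))          # training inputs
    y = np.sin(np.pi * X[:, 0])                    # toy target function

    # Hidden-layer coefficients are randomly set and then fixed.
    n_hidden = 100
    W = rng.normal(size=(1, n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                         # hidden activations

    # Only the output-layer coefficients are trained, by least squares.
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    y_pred = H @ beta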
|
A key task in design work is grasping the client's implicit tastes. Designers
often do this based on a set of examples from the client. However, recognizing
a common pattern among many intertwining variables such as color, texture, and
layout and synthesizing them into a composite preference can be challenging. In
this paper, we leverage the pattern recognition capability of computational
models to aid in this task. We offer a set of principles for computationally
learning personal style. The principles are manifested in PseudoClient, a deep
learning framework that learns a computational model for personal graphic
design style from only a handful of examples. In several experiments, we found
that PseudoClient achieves a 79.40% accuracy with only five positive and
negative examples, outperforming several alternative methods. Finally, we
discuss how PseudoClient can be utilized as a building block to support the
development of future design applications.
|
Let $\Omega $ be an open subset of $\mathbb{R}^{N}$, and let $p,\, q:\Omega
\rightarrow \left[ 1,\infty \right] $ be measurable functions. We give a
necessary and sufficient condition for the embedding of the variable exponent
space $L^{p(\cdot )}\left( \Omega \right) $ in $L^{q(\cdot )}\left( \Omega
\right) $ to be almost compact. This leads to a condition on $\Omega, \, p$ and
$q$ sufficient to ensure that the Sobolev space $W^{1,p(\cdot )}\left( \Omega
\right) $ based on $L^{p(\cdot )}\left( \Omega \right) $ is compactly embedded
in $L^{q(\cdot )}\left( \Omega \right) ;$ compact embedding results of this
type already in the literature are included as special cases.
|
As widely recognized, a vortex represents flow rotation. A vortex should have
a local rotation axis as its direction and an angular speed as its strength.
In classical fluid kinematics, the vorticity vector has long been regarded as
the rotation axis and the vorticity magnitude as the rotational strength.
However, this concept does not hold in viscous flow. This study demonstrates
by rigorous mathematical proof that the vorticity vector is not the fluid
rotation axis, and vorticity is not the rotation strength. In contrast, the
Liutex vector is mathematically proved to be the fluid rotation axis, and the
Liutex magnitude is twice the fluid angular speed.
|
Most countries have started vaccinating people against COVID-19. However, due
to limited production capacities and logistical challenges it will take
months/years until herd immunity is achieved. Therefore, vaccination and social
distancing have to be coordinated. In this paper, we provide some insight on
this topic using optimization-based control on an age-differentiated
compartmental model. For real-life decision making, we investigate the impact
of the planning horizon on the optimal vaccination/social distancing strategy.
We find that in order to reduce social distancing in the long run, without
overburdening the healthcare system, it is essential to vaccinate the people
with the highest contact rates first. That is also the case if the objective is
to minimize fatalities provided that the social distancing measures are
sufficiently strict. However, for short-term planning it is optimal to focus on
the high-risk group.
|
We present a model of exclusive $\phi$-meson lepto-production $ep \to
e'p'\phi$ near threshold which features the strangeness gravitational form
factors of the proton. We argue that the shape of the differential cross
section $d\sigma/dt$ is a sensitive probe of the strangeness D-term of the
proton.
|
With the promise of reliability in the cloud, more enterprises are migrating
to the cloud. The process of continuous integration/deployment (CICD) in the
cloud connects developers, who need to deliver value faster and more
transparently, with site reliability engineers (SREs), who need to manage
applications reliably. SREs feed development issues back to developers, and
developers commit fixes and trigger CICD to redeploy. The release cycle is
more continuous than ever; thus the path from code to production is faster and
more automated. To provide this higher level of agility, cloud platforms
become more complex in exchange for flexibility, with deeper layers of
virtualization. However, reliability does not come for free with all these
complexities. Software engineers and SREs need to deal with a wider
information spectrum from the virtualized layers. Therefore, providing
correlated information with true-positive evidence is critical to identifying
the root cause of issues quickly, in order to reduce the mean time to recover
(MTTR), a key performance metric for SREs. Similarity-, knowledge-, or
statistics-driven approaches have been effective, but with increasing data
volume and diversity of data types, no individual approach can fully correlate
the semantic relations of different data sources. In this paper, we introduce
FIXME to enhance software reliability with hybrid diagnosis approaches for
enterprises. Our evaluation results show that the hybrid diagnosis approach is
about 17% better in precision. The results are helpful for both practitioners
and researchers in developing hybrid diagnosis for the highly dynamic cloud
environment.
|
In multistage manufacturing systems (MMS), modeling multiple quality indices
based on the process sensing variables is important. However, classic modeling
techniques predict each quality variable one at a time, which fails to
consider the correlation within or between stages. We propose a deep
multistage multi-task learning framework to jointly predict all output sensing
variables in a unified end-to-end fashion according to the sequential system
architecture of the MMS. Our numerical studies and real case study have shown
that the new model has a superior performance compared to many benchmark
methods as well as great interpretability through developed variable selection
techniques.
|
Phase noise of the frequency synthesizer is one of the main limitations to
the short-term stability of microwave atomic clocks. In this work, we
demonstrated a low-noise, simple-architecture microwave frequency synthesizer
for a coherent population trapping (CPT) clock. The synthesizer is mainly
composed of a 100 MHz oven controlled crystal oscillator (OCXO), a microwave
comb generator and a direct digital synthesizer (DDS). The absolute phase
noises of 3.417 GHz signal are measured to be -55 dBc/Hz, -81 dBc/Hz, -111
dBc/Hz and -134 dBc/Hz, respectively, for 1 Hz, 10 Hz, 100 Hz and 1 kHz offset
frequencies, which shows only 1 dB deterioration at the second harmonic of the
modulation frequency of the atomic clock. The estimated frequency stability of
intermodulation effect is 4.7*10^{-14} at 1s averaging time, which is about
half order of magnitude lower than that of the state-of-the-art CPT Rb clock.
Our work offers an alternative microwave synthesizer for high-performance CPT
Rb atomic clock.
|
The Schr\"odinger equation is solved numerically for charmonium using the
discrete variable representation (DVR) method. The Hamiltonian matrix is
constructed and diagonalized to obtain the eigenvalues and eigenfunctions.
Using these eigenvalues and eigenfunctions, spectra and various decay widths
are calculated. The obtained results are in good agreement with other numerical
methods and with experiments.
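As a sketch of how a DVR calculation of this kind is set up (a Colbert-Miller
sinc-DVR kinetic matrix plus a Cornell-type potential; the grid size, reduced
mass, and couplings below are illustrative assumptions, not the paper's
values):

    import numpy as np

    n, dr, mu = 600, 0.01, 0.63        # grid points, spacing, reduced mass
    r = dr * np.arange(1, n + 1)       # radial grid (natural units)
    idx = np.arange(n)
    d = idx[:, None] - idx[None, :]

    # Colbert-Miller sinc-DVR kinetic-energy matrix
    off = 2.0 * (-1.0) ** d / np.where(d == 0, 1, d) ** 2
    T = np.where(d == 0, np.pi ** 2 / 3, off) / (2.0 * mu * dr ** 2)

    # Cornell-type potential V(r) = -(4/3) alpha_s / r + b r
    V = np.diag(-4.0 / 3.0 * 0.3 / r + 0.2 * r)

    E = np.linalg.eigvalsh(T + V)      # diagonalize the Hamiltonian
    print(E[:3])                       # lowest few eigenvalues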
|
We are motivated by the problem of providing strong generalization guarantees
in the context of meta-learning. Existing generalization bounds are either
challenging to evaluate or provide vacuous guarantees in even relatively simple
settings. We derive a probably approximately correct (PAC) bound for
gradient-based meta-learning using two different generalization frameworks in
order to deal with the qualitatively different challenges of generalization at
the "base" and "meta" levels. We employ bounds for uniformly stable algorithms
at the base level and bounds from the PAC-Bayes framework at the meta level.
The result of this approach is a novel PAC bound that is tighter when the base
learner adapts quickly, which is precisely the goal of meta-learning. We show
that our bound provides a tighter guarantee than other bounds on a toy
non-convex problem on the unit sphere and a text-based classification example.
We also present a practical regularization scheme motivated by the bound in
settings where the bound is loose and demonstrate improved performance over
baseline techniques.
|
Designing intelligent microrobots that can autonomously navigate and perform
instructed routines in blood vessels, a complex and crowded environment with
obstacles including dense cells, different flow patterns and diverse vascular
geometries, can offer enormous possibilities in biomedical applications. Here
we report a hierarchical control scheme that enables a microrobot to
efficiently navigate and execute customizable routines in blood vessels. The
control scheme consists of two highly decoupled components: a high-level
controller setting short-ranged dynamic targets to guide the microrobot to
follow a preset path and a low-level deep reinforcement learning (DRL)
controller responsible for maneuvering microrobots towards these dynamic
guiding targets. The proposed DRL controller utilizes three-dimensional (3D)
convolutional neural networks and is capable of learning control policy
directly from a coarse raw 3D sensory input. In blood vessels with rich
configurations of red blood cells and vessel geometry, the control scheme
enables efficient navigation and faithful execution of instructed routines. The
control scheme is also robust to adversarial perturbations including blood
flows. This study provides a proof-of-principle for designing data-driven
control systems for autonomous navigation in vascular networks; it illustrates
the great potential of artificial intelligence for broad biomedical
applications such as targeted drug delivery, blood clot clearing, precision
surgery, disease diagnosis, and more.
|
Communication overhead is the key challenge for distributed training.
Gradient compression is a widely used approach to reduce communication
traffic. When combined with parallel communication mechanisms such as
pipelining, gradient compression can greatly alleviate the impact of
communication overhead. However, two problems of gradient compression remain
to be solved. First, gradient compression brings in extra computation cost,
which delays the next training iteration. Second, gradient compression usually
leads to a decrease in convergence accuracy.
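As an illustration of the kind of compression at issue (a simple top-k
sparsifier; the abstract does not name a specific compressor):

    import torch

    def topk_compress(grad: torch.Tensor, ratio: float = 0.01):
        # Keep only the largest-magnitude entries; transmit values
        # plus their indices instead of the dense gradient.
        k = max(1, int(grad.numel() * ratio))
        flat = grad.flatten()
        _, indices = torch.topk(flat.abs(), k)
        return flat[indices], indices, grad.shape

    def topk_decompress(values, indices, shape):
        flat = torch.zeros(shape).flatten()
        flat[indices] = values
        return flat.reshape(shape)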
|
Recently, much attention has been paid to the societal impact of AI,
especially concerns regarding its fairness. A growing body of research has
identified unfair AI systems and proposed methods to debias them, yet many
challenges remain. Representation learning for Heterogeneous Information
Networks (HINs), a fundamental building block used in complex network mining,
has socially consequential applications such as automated career counseling,
but there have been few attempts to ensure that it will not encode or amplify
harmful biases, e.g. sexism in the job market. To address this gap, in this
paper we propose a comprehensive set of de-biasing methods for fair HINs
representation learning, including sampling-based, projection-based, and graph
neural networks (GNNs)-based techniques. We systematically study the behavior
of these algorithms, especially their capability in balancing the trade-off
between fairness and prediction accuracy. We evaluate the performance of the
proposed methods in an automated career counseling application where we
mitigate gender bias in career recommendation. Based on the evaluation results
on two datasets, we identify the most effective fair HINs representation
learning techniques under different conditions.
|
This research recasts the network attack dataset from UNSW-NB15 as an
intrusion detection problem in image space. Using one-hot-encodings, the
resulting grayscale thumbnails provide a quarter-million examples for deep
learning algorithms. Applying MobileNetV2's convolutional neural network
architecture, the work demonstrates 97% accuracy in distinguishing normal and
attack traffic. Further class refinement into 9 individual attack families
(exploits, worms, shellcodes) shows an overall 56% accuracy. Using feature
importance ranking, a random forest solution on subsets identifies the most
important source-destination factors and shows that the least important ones
are mainly obscure protocols. The dataset is available on Kaggle.
|
Tamil is a Dravidian language that is commonly used and spoken in the
southern part of Asia. In the era of social media, memes have been a fun moment
in the day-to-day life of people. Here, we try to analyze the true meaning of
Tamil memes by categorizing them as troll and non-troll. We propose a model
comprising a transformer-transformer architecture that aims to attain
state-of-the-art performance by using attention as its main component. The
dataset consists of troll and non-troll images with their captions as text. The
task is a binary classification task. The objective of the model is to pay more
attention to the extracted features and to ignore the noise in both images and
text.
|
Self-organized spatial patterns of vegetation are frequent in water-limited
regions and have been suggested as important indicators of ecosystem health.
However, the mechanisms underlying their emergence remain unclear. Some
theories hypothesize that patterns could result from a scale-dependent feedback
(SDF), whereby interactions favoring plant growth dominate at short distances
and growth-inhibitory interactions dominate in the long range. However, we know
little about how net plant-to-plant interactions may change sign with
inter-individual distance, and in the absence of strong empirical support, the
relevance of this SDF for vegetation pattern formation remains disputed. These
theories predict a sequential change in pattern shape from gapped to
labyrinthine to spotted spatial patterns as precipitation declines.
Nonetheless, alternative theories show that the same sequence of patterns could
emerge even if net interactions between plants were always inhibitory (purely
competitive feedbacks, PCF). Although these alternative hypotheses lead to
visually indistinguishable patterns they predict very different desertification
dynamics following the spotted pattern. Moreover, vegetation interaction with
other ecosystem components can introduce additional spatio-temporal scales that
reshape both the patterns and the desertification dynamics. Therefore, to make
reliable ecological predictions for a focal ecosystem, it is crucial that
models accurately capture the mechanisms at play in the system of interest.
Here, we review existing theories for vegetation self-organization and their
conflicting predictions about desertification dynamics. We further discuss
possible ways for reconciling these predictions and potential empirical tests
via manipulative experiments to improve our understanding of how vegetation
self-organizes and to better predict the fate of the ecosystems where these
patterns form.
|
Deep learning techniques have achieved great success in remote sensing image
change detection. Most of them are supervised techniques, which usually require
large amounts of training data and are limited to a particular application.
Self-supervised methods as an unsupervised approach are popularly used to solve
this problem and are widely used in unsupervised binary change detection tasks.
However, the existing self-supervised methods in change detection are based on
pre-tasks or at patch-level, which may be sub-optimal for pixel-wise change
detection tasks. Therefore, in this work, a pixel-wise contrastive approach is
proposed to overcome this limitation. This is achieved by using contrastive
loss on pixel-level features in an unlabeled multi-view setting. In this
approach, a Siamese ResUnet is trained to obtain pixel-wise representations and
to align features from shifted positive pairs. Meanwhile, vector quantization
is used to augment the learned features in two branches. The final binary
change map is obtained by subtracting features of one branch from features of
the other branch and using the Rosin thresholding method. To overcome the
effects of regular seasonal changes in binary change maps, we also used an
uncertainty method to enhance the temporal robustness of the proposed approach.
Two homogeneous (OSCD and MUDS) datasets and one heterogeneous (California
Flood) dataset are used to evaluate the performance of the proposed approach.
Results demonstrate improvements in both efficiency and accuracy over the
patch-wise multi-view contrastive method.
|
Convolutional Neural Networks (CNN) have been rigorously studied for
Hyperspectral Image Classification (HSIC) and are known to be effective in
exploiting joint spatial-spectral information, at the expense of lower
generalization performance and learning speed due to the hard labels and
non-uniform distribution over labels. Several regularization techniques have
been used to overcome these issues. However, models sometimes learn to predict
samples extremely confidently, which is not good from a generalization point
of view. Therefore, this paper proposes an idea to enhance the generalization
performance of a hybrid CNN for HSIC using soft labels that are a weighted
average of the hard labels and a uniform distribution over labels. The
proposed method helps prevent the CNN from becoming over-confident. We
empirically show that, in addition to improving generalization performance,
label smoothing also improves model calibration, which significantly improves
beam-search. Several publicly available hyperspectral datasets are used for
experimental validation, which reveals improved generalization performance and
statistical significance, and compares computational complexity with
state-of-the-art models. The code will be made available at
https://github.com/mahmad00.
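The soft labels described above follow the standard label-smoothing
construction; a minimal sketch (the smoothing factor eps is an illustrative
choice):

    import numpy as np

    def smooth_labels(hard: np.ndarray, eps: float = 0.1) -> np.ndarray:
        # hard: one-hot labels of shape (batch, num_classes).
        # Weighted average of hard labels and a uniform distribution.
        k = hard.shape[-1]
        return (1.0 - eps) * hard + eps / k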
|
In recent years, the growing interest in integrating Blockchains (BC) and the
Internet-of-Things (IoT) -- termed BIoT -- for more trust via decentralization
has led to great potential in various use cases such as health care, supply
chain tracking, and smart cities. A key element of BIoT ecosystems is the data
transactions (TX) that include the data collected by IoT devices. BIoT
applications face many challenges in complying with the European General Data
Protection Regulation (GDPR), i.e., enabling users to exercise their rights to
delete or modify their data stored on publicly accessible and immutable BCs.
In this regard, this paper identifies the requirements of
BCs for being GDPR compliant in BIoT use cases. Accordingly, an on-chain
solution is proposed that allows fine-grained modification (update and erasure)
operations on TXs' data fields within a BC. The proposed solution is based on a
cryptographic primitive called Chameleon Hashing. The novelty of this approach
is manifold. BC users have the authority to update their data, which are
addressed at the TX level with no side-effects on the block or chain. By
performing and storing the data updates, all on-chain, traceability and
verifiability of the BC are preserved. Moreover, the compatibility with TX
aggregation mechanisms that allow the compression of the BC size is maintained.
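To illustrate the primitive, here is a toy Krawczyk-Rabin-style discrete-log
chameleon hash (tiny, insecure parameters chosen for readability; the paper's
concrete construction is not specified in the abstract):

    # Toy chameleon hash over a small prime-order subgroup.
    p, q, g = 467, 233, 4           # q | p-1; g generates the order-q subgroup
    x = 57                          # trapdoor (kept secret)
    y = pow(g, x, p)                # public key

    def chash(m, r):
        # H(m, r) = g^m * y^r mod p; binding without the trapdoor
        return (pow(g, m, p) * pow(y, r, p)) % p

    def collision(m, r, m_new):
        # The trapdoor holder finds r_new giving the same hash,
        # which is what enables controlled on-chain rewrites.
        return (r + (m - m_new) * pow(x, -1, q)) % q

    h = chash(42, 7)
    r_new = collision(42, 7, 99)
    assert chash(99, r_new) == h    # same hash, different message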
|
Mobile health applications (mHealth apps for short) are being increasingly
adopted in the healthcare sector, enabling stakeholders such as governments,
health units, medics, and patients, to utilize health services in a pervasive
manner. Despite having several known benefits, mHealth apps entail significant
security and privacy challenges that can lead to data breaches with serious
social, legal, and financial consequences. This research presents an empirical
investigation about security awareness of end-users of mHealth apps that are
available on major mobile platforms, including Android and iOS. We collaborated
with two mHealth providers in Saudi Arabia to survey 101 end-users,
investigating their security awareness about (i) existing and desired security
features, (ii) security related issues, and (iii) methods to improve security
knowledge. Findings indicate that the majority of end-users are aware of the
existing security features provided by the apps (e.g., restricted app
permissions); however, they desire usable security (e.g., biometric
authentication) and are concerned about privacy of their health information
(e.g., data anonymization). End-users suggested that protocols such as session
timeout or Two-factor authentication (2FA) positively impact security but
compromise the usability of the app. Security awareness via social media, peer
guidance, or training from app providers can increase end-users' trust in
mHealth apps. This research investigates human-centric knowledge based on
empirical evidence and provides a set of guidelines to develop secure and
usable mHealth apps.
|
We study theoretically subradiant states in the array of atoms coupled to
photons propagating in a one-dimensional waveguide focusing on the strongly
interacting many-body regime with large excitation fill factor $f$. We
introduce a generalized many-body entropy of entanglement based on exact
numerical diagonalization followed by a high-order singular value
decomposition. This approach has allowed us to visualize and understand the
structure of a many-body quantum state. We reveal the breakdown of fermionized
subradiant states as $f$ increases, with the emergence of short-ranged
dimerized antiferromagnetic correlations at the critical point $f=1/2$ and the
complete disappearance of subradiant states at $f>1/2$.
|
A graph $G$ is a prime distance graph (respectively, a 2-odd graph) if its
vertices can be labeled with distinct integers such that for any two adjacent
vertices, the difference of their labels is prime (either 2 or odd). We prove
that trees, cycles, and bipartite graphs are prime distance graphs, and that
Dutch windmill graphs and paper mill graphs are prime distance graphs if and
only if the Twin Prime Conjecture and de Polignac's Conjecture are true,
respectively. We give a characterization of 2-odd graphs in terms of edge
colorings, and we use this characterization to determine which circulant graphs
of the form $Circ(n, \{1,k\})$ are 2-odd and to prove results on circulant
prime distance graphs.
|
We consider the stochastic shortest path planning problem in MDPs, i.e., the
problem of designing policies that ensure reaching a goal state from a given
initial state with minimum accrued cost. In order to account for rare but
important realizations of the system, we consider a nested dynamic coherent
risk total cost functional rather than the conventional risk-neutral total
expected cost. Under some assumptions, we show that optimal, stationary,
Markovian policies exist and can be found via a special Bellman's equation. We
propose a computational technique based on difference convex programs (DCPs) to
find the associated value functions and therefore the risk-averse policies. A
rover navigation MDP is used to illustrate the proposed methodology with
conditional-value-at-risk (CVaR) and entropic-value-at-risk (EVaR) coherent
risk measures.
|
Using a martingale concentration inequality, concentration bounds `from time
$n_0$ on' are derived for stochastic approximation algorithms with contractive
maps and both martingale difference and Markov noises. These are applied to
reinforcement learning algorithms, in particular to asynchronous Q-learning and
TD(0).
|
The properties of quasar-host galaxies might be determined by the growth and
feedback of their supermassive (SMBH, $10^{8-10}$ M$_{\odot}$) black holes. We
investigate such connection with a suite of cosmological simulations of massive
(halo mass $\approx 10^{12}$ M$_{\odot}$) galaxies at $z\simeq 6$ which include
a detailed sub-grid multiphase gas and accretion model. BH seeds of initial
mass $10^5$ M$_{\odot}$ grow mostly by gas accretion, and become SMBH by $z=6$
setting on the observed $M_{\rm BH} - M_{\star}$ relation without the need for
a boost factor. Although quasar feedback crucially controls the SMBH growth,
its impact on the properties of the host galaxy at $z=6$ is negligible. In our
model, quasar activity can both quench (via gas heating) or enhance (by ISM
over-pressurization) star formation. However, we find that the star formation
history is insensitive to such modulation as it is largely dominated, at least
at $z>6$, by cold gas accretion from the environment that cannot be hindered by
the quasar energy deposition. Although quasar-driven outflows can achieve
velocities $> 1000~\rm km~s^{-1}$, only $\approx 4$% of the outflowing gas mass
can actually escape from the host galaxy. These findings are only loosely
constrained by available data, but can guide observational campaigns searching
for signatures of quasar feedback in early galaxies.
|
The existence of $1$-factorizations of an infinite complete equipartite graph
$K_m[n]$ (with $m$ parts of size $n$) admitting a vertex-regular automorphism
group $G$ is known only when $n=1$ and $m$ is countable (that is, for
countable complete graphs) and, in addition, $G$ is a finitely generated
abelian group of order $m$.
In this paper, we show that a vertex-regular $1$-factorization of $K_m[n]$
under the group $G$ exists if and only if $G$ has a subgroup $H$ of order $n$
whose index in $G$ is $m$. Furthermore, we provide a sufficient condition for
an infinite Cayley graph to have a regular $1$-factorization. Finally, we
construct 1-factorizations that contain a given subfactorization, both having a
vertex-regular automorphism group.
|
This paper aims at solving an optimal control problem governed by an
anisotropic Allen-Cahn equation numerically. Therefore we first prove the
Fr\'echet differentiability of an in time discretized parabolic control problem
under certain assumptions on the involved quasilinearity and formulate the
first order necessary conditions. As a next step, since the anisotropies are in
general not smooth enough, the convergence behavior of the optimal controls is
studied for a sequence of (smooth) approximations of the former quasilinear
term. In addition the simultaneous limit in the approximation and the time step
size is considered. For a class covering a large variety of anisotropies we
introduce a certain regularization and show the previously formulated
requirements. Finally, a trust region Newton solver is applied to various
anisotropies and configurations, and numerical evidence for mesh independent
behavior and convergence with respect to regularization is presented.
|
We answer in the negative the following question of Boris Mitjagin: Is it true
that a product of two nuclear operators in Banach spaces can be factored
through a trace class operator in a Hilbert space?
|
We propose a new architecture for diacritics restoration based on
contextualized embeddings, namely BERT, and we evaluate it on 12 languages with
diacritics. Furthermore, we conduct a detailed error analysis on Czech, a
morphologically rich language with a high level of diacritization. Notably, we
manually annotate all mispredictions, showing that roughly 44% of them are
actually not errors, but either plausible variants (19%), or the system
corrections of erroneous data (25%). Finally, we categorize the real errors in
detail. We release the code at
https://github.com/ufal/bert-diacritics-restoration.
|
Knowledge distillation transfers knowledge from the teacher network to the
student one, with the goal of greatly improving the performance of the student
network. Previous methods mostly focus on proposing feature transformation and
loss functions between the same level's features to improve the effectiveness.
We differently study the factor of connection path cross levels between teacher
and student networks, and reveal its great importance. For the first time in
knowledge distillation, cross-stage connection paths are proposed. Our new
review mechanism is effective and structurally simple. Our finally designed
nested and compact framework requires negligible computation overhead, and
outperforms other methods on a variety of tasks. We apply our method to
classification, object detection, and instance segmentation tasks. All of them
witness significant student network performance improvement. Code is available
at https://github.com/Jia-Research-Lab/ReviewKD
|
Cell-free massive MIMO systems consist of many distributed access points with
simple components that jointly serve the users. In millimeter wave bands, only
a limited set of predetermined beams can be supported. In a network that
consolidates these technologies, downlink analog beam selection stands as a
challenging task for the network sum-rate maximization. Low-cost digital
filters can improve the network sum-rate further. In this work, we propose
low-cost joint designs of analog beam selection and digital filters. The
proposed joint designs achieve significantly higher sum-rates than the disjoint
design benchmark. Supervised machine learning (ML) algorithms can efficiently
approximate the input-output mapping functions of the beam selection decisions
of the joint designs with low computational complexities. Since the training of
ML algorithms is performed off-line, we propose a well-constructed joint design
that combines multiple initializations, iterations, and selection features, as
well as beam conflict control, i.e., the same beam cannot be used for multiple
users. The numerical results indicate that ML algorithms can retain 99-100% of
the original sum-rate results achieved by the proposed well-constructed
designs.
|
Classifying the sub-categories of an object from the same super-category
(e.g., bird) in a fine-grained visual classification (FGVC) task highly relies
on mining multiple discriminative features. Existing approaches mainly tackle
this problem by introducing attention mechanisms to locate the discriminative
parts or feature encoding approaches to extract the highly parameterized
features in a weakly-supervised fashion. In this work, we propose a lightweight
yet effective regularization method named Channel DropBlock (CDB), in
combination with two alternative correlation metrics, to address this problem.
The key idea is to randomly mask out a group of correlated channels during
training to destruct features from co-adaptations and thus enhance feature
representations. Extensive experiments on three benchmark FGVC datasets show
that CDB effectively improves the performance.
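A minimal sketch of the channel-masking idea (a contiguous channel group is
masked here for simplicity; the paper's correlation-metric-based grouping is
not reproduced):

    import torch

    def channel_dropblock(x: torch.Tensor, group_size: int = 8,
                          drop_prob: float = 0.1) -> torch.Tensor:
        # x: (batch, channels, H, W). With probability drop_prob,
        # zero out one randomly chosen group of channels.
        if not torch.rand(1).item() < drop_prob:
            return x
        c = x.size(1)
        start = torch.randint(0, max(1, c - group_size + 1), (1,)).item()
        mask = torch.ones_like(x)
        mask[:, start:start + group_size] = 0.0
        # rescale to keep the expected activation magnitude
        return x * mask * c / max(1, c - group_size)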
|
We show that for a model complete strongly minimal theory whose pregeometry
is flat, the recursive spectrum (SRM($T$)) is either of the form $[0,\alpha)$
for $\alpha\in \omega+2$ or $[0,n]\cup\{\omega\}$ for $n\in \omega$, or
$\{\omega\}$, or contained in $\{0,1,2\}$.
Combined with previous results, this leaves precisely 4 sets for which it is
not yet determined whether each is the spectrum of a model complete strongly
minimal theory with a flat pregeometry.
|
Data-driven methods for battery lifetime prediction are attracting increasing
attention for applications in which the degradation mechanisms are poorly
understood and suitable training sets are available. However, while advanced
machine learning and deep learning methods promise high performance with
minimal data preprocessing, simpler linear models with engineered features
often achieve comparable performance, especially for small training sets, while
also providing physical and statistical interpretability. In this work, we use
a previously published dataset to develop simple, accurate, and interpretable
data-driven models for battery lifetime prediction. We first present the
"capacity matrix" concept as a compact representation of battery
electrochemical cycling data, along with a series of feature representations.
We then create a number of univariate and multivariate models, many of which
achieve comparable performance to the highest-performing models previously
published for this dataset. These models also provide insights into the
degradation of these cells. Our approaches can be used both to quickly train
models for a new dataset and to benchmark the performance of more advanced
machine learning methods.
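The sketch below illustrates the general recipe on synthetic data: arrange
per-cycle capacities into a matrix, engineer a couple of scalar features from
it, and fit a regularized linear model. The specific features and synthetic
fade curves are assumptions for illustration, not the paper's capacity-matrix
features.
```python
# Hedged sketch: an engineered-feature linear model for lifetime prediction.
# The "capacity matrix" here is a (cells x cycles) array of discharge
# capacities; the features below are illustrative choices.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_cells, n_cycles = 120, 100

# Synthetic capacity matrix: each row is one cell's early-life fade curve.
fade_rate = rng.uniform(1e-4, 1e-3, size=n_cells)
Q = 1.1 - np.outer(fade_rate, np.arange(n_cycles)) \
    + rng.normal(scale=1e-3, size=(n_cells, n_cycles))
log_lifetime = np.log10(0.2 / fade_rate)   # cycles to reach a fade threshold

# Engineered features computed from the capacity matrix.
features = np.column_stack([
    np.log10(np.var(Q[:, 90:100] - Q[:, 0:10], axis=1)),  # fade-curve spread
    Q[:, 1] - Q[:, 99],                                    # net early fade
])

X_tr, X_te, y_tr, y_te = train_test_split(features, log_lifetime, random_state=0)
model = Ridge(alpha=1.0).fit(X_tr, y_tr)
print("R^2 on held-out cells:", round(model.score(X_te, y_te), 3))
```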
|
Willems' fundamental lemma asserts that all trajectories of a linear
time-invariant system can be obtained from a finite number of measured ones,
assuming that controllability and a persistency of excitation condition hold.
We show that these two conditions can be relaxed. First, we prove that the
controllability condition can be replaced by a condition on the controllable
subspace, unobservable subspace, and a certain subspace associated with the
measured trajectories. Second, we prove that the persistency of excitation
requirement can be relaxed if the degree of a certain minimal polynomial is
tightly bounded. Our results show that data-driven predictive control using
online data is equivalent to model predictive control, even for uncontrollable
systems. Moreover, our results significantly reduce the amount of data needed
in identifying homogeneous multi-agent systems.
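For readers unfamiliar with the classical statement, the following numerical
sketch (of the standard lemma, not this paper's relaxed conditions) checks that
with a controllable system and a sufficiently exciting input, any new length-L
trajectory lies in the column span of a Hankel matrix built from one measured
trajectory.
```python
# Numerical sketch of the classical fundamental lemma for a toy LTI system.
import numpy as np

def hankel(w, L):
    """Stack length-L windows of the signal w (T x q) as columns."""
    T, _ = w.shape
    return np.column_stack([w[t:t + L].reshape(-1) for t in range(T - L + 1)])

rng = np.random.default_rng(2)
A = np.array([[0.9, 1.0], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.eye(2)

def simulate(u, x0=np.zeros(2)):
    x, ys = x0, []
    for ut in u:
        ys.append(C @ x)
        x = A @ x + B.flatten() * ut
    return np.array(ys)

T, L = 60, 10
u_d = rng.normal(size=T)                     # persistently exciting input
w_d = np.column_stack([u_d, simulate(u_d)])  # measured trajectory (u, y)

u_new = rng.normal(size=L)                   # an arbitrary new input
w_new = np.column_stack([u_new, simulate(u_new)])

H = hankel(w_d, L)
g, *_ = np.linalg.lstsq(H, w_new.reshape(-1), rcond=None)
print("reconstruction error:", np.linalg.norm(H @ g - w_new.reshape(-1)))
```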
|
This article describes globular weak $(n,\infty)$-transformations
($n\in\mathbb{N}$) in the sense of Grothendieck, i.e. for each $n\in\mathbb{N}$
we build a coherator $\Theta^{\infty}_{\mathbb{M}^n}$ whose models are
globular weak $(n,\infty)$-transformations. A natural globular filtration
emerges from these coherators.
|
In this article, logical concepts are defined using the internal syntactic
and semantic structure of language. For a first-order language, it has been
shown that its logical constants are connectives and a certain type of
quantifiers for which the universal and existential quantifiers form a
functionally complete set of quantifiers. Neither equality nor cardinal
quantifiers belong to the logical constants of a first-order language.
|
To safely deploy autonomous vehicles, onboard perception systems must work
reliably at high accuracy across a diverse set of environments and geographies.
One of the most common techniques to improve the efficacy of such systems in
new domains involves collecting large labeled datasets, but such datasets can
be extremely costly to obtain, especially if each new deployment geography
requires additional data with expensive 3D bounding box annotations. We
demonstrate that pseudo-labeling for 3D object detection is an effective way to
exploit less expensive and more widely available unlabeled data, and can lead
to performance gains across various architectures, data augmentation
strategies, and sizes of the labeled dataset. Overall, we show that better
teacher models lead to better student models, and that we can distill expensive
teachers into efficient, simple students.
Specifically, we demonstrate that pseudo-label-trained student models can
outperform supervised models trained on 3-10 times the amount of labeled
examples. Using PointPillars [24], a two-year-old architecture, as our student
model, we are able to achieve state-of-the-art accuracy simply by leveraging
large quantities of pseudo-labeled data. Lastly, we show that these student
models generalize better than supervised models to a new domain in which we
only have unlabeled data, making pseudo-label training an effective form of
unsupervised domain adaptation.
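A schematic of the generic pseudo-label recipe is sketched below with toy
stand-ins; the detector classes and confidence threshold are illustrative
assumptions, not the paper's models or settings.
```python
# Hedged sketch: a teacher labels unlabeled scenes, low-confidence detections
# are filtered out, and the student trains on human labels plus pseudo-labels.
from dataclasses import dataclass
import random

@dataclass
class Detection:
    box: tuple      # (x, y, z, l, w, h, yaw)
    score: float

def teacher_predict(scene):
    # Stand-in for an expensive, high-accuracy teacher detector.
    random.seed(hash(scene) % 2**32)
    return [Detection((0, 0, 0, 4, 2, 1.5, 0.0), random.random())
            for _ in range(5)]

def pseudo_label(unlabeled_scenes, score_threshold=0.7):
    dataset = []
    for scene in unlabeled_scenes:
        kept = [d for d in teacher_predict(scene) if d.score >= score_threshold]
        if kept:
            dataset.append((scene, kept))
    return dataset

labeled = [("scene_%d" % i, [Detection((0,) * 7, 1.0)]) for i in range(10)]
pseudo = pseudo_label(["unlabeled_%d" % i for i in range(100)])
student_training_set = labeled + pseudo   # train the student as usual on this
print(len(labeled), "labeled +", len(pseudo), "pseudo-labeled scenes")
```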
|
In this work, we propose a Bayesian statistical model to simultaneously
characterize two or more social networks defined over a common set of actors.
The key feature of the model is a hierarchical prior distribution that allows
us to represent the entire system jointly, achieving a compromise between
dependent and independent networks. Among other things, such a specification
easily allows us to visualize multilayer network data in a low-dimensional
Euclidean space, generate a weighted network that reflects the consensus
affinity between actors, establish a measure of correlation between networks,
assess cognitive judgements that subjects form about the relationships among
actors, and perform clustering tasks at different social instances. Our model's
capabilities are illustrated using several real-world data sets, taking into
account different types of actors, sizes, and relations.
|
We define partial quasi-morphisms on the group of Hamiltonian diffeomorphisms
of the cotangent bundle using the spectral invariants in Lagrangian Floer
homology with conormal boundary conditions, a setting in which a product
compatible with the PSS isomorphism and the homological intersection product is
lacking.
|
In this paper, we propose a data-driven method to discover multiscale
chemical reactions governed by the law of mass action. First, we use a single
matrix to represent the stoichiometric coefficients for both the reactants and
products in a system without catalysis reactions. The negative entries in the
matrix denote the stoichiometric coefficients for the reactants and the
positive ones for the products. Second, we find that conventional optimization
methods usually get stuck in local minima and fail to find the true solution
when learning multiscale chemical reactions. To overcome this difficulty, we
propose a partial-parameters-freezing (PPF) technique to progressively
determine the network parameters by using the fact that the stoichiometric
coefficients are integers. With such a technique, the dimension of the search
space is gradually reduced during training and the global minima can eventually
be obtained. Several numerical experiments
including the classical Michaelis-Menten kinetics, the hydrogen oxidation
reactions, and the simplified GRI-3.0 mechanism verify the good performance of
our algorithm in learning the multiscale chemical reactions. The code is
available at \url{https://github.com/JuntaoHuang/multiscale-chemical-reaction}.
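A toy sketch of the PPF idea follows, assuming PyTorch: whenever an entry of
the learned stoichiometric matrix comes within a tolerance of an integer, it is
snapped to that integer and frozen, shrinking the search space. The loss,
tolerance, and schedule are illustrative; the real method fits an ODE system
rather than this toy quadratic.
```python
import torch

# Toy "true" integer stoichiometric matrix the optimizer should recover.
target = torch.tensor([[-1., 1., 0., 0.],
                       [ 0., -1., 1., 0.],
                       [ 0., 0., -2., 1.]])
V = torch.nn.Parameter(torch.randn(3, 4))
frozen = torch.zeros_like(V, dtype=torch.bool)
opt = torch.optim.Adam([V], lr=1e-2)

for step in range(3000):
    opt.zero_grad()
    loss = ((V - target) ** 2).sum()     # stand-in for the ODE-fitting loss
    loss.backward()
    V.grad[frozen] = 0.0                 # frozen coefficients stop moving
    opt.step()
    if step > 1000:                      # freeze only once training has settled
        with torch.no_grad():
            near = (V - V.round()).abs() < 0.05
            frozen |= near               # freeze entries close to an integer...
            V[frozen] = V[frozen].round()  # ...and snap them exactly

print(V.detach())                        # should recover the integer matrix
```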
|
Approximate nearest-neighbor search is a fundamental algorithmic problem that
continues to inspire study due to its essential role in numerous contexts. In
contrast to most prior work, which has focused on point sets, we consider
nearest-neighbor queries against a set of line segments in $\mathbb{R}^d$, for
constant dimension $d$. Given a set $S$ of $n$ disjoint line segments in
$\mathbb{R}^d$ and an error parameter $\varepsilon > 0$, the objective is to
build a data structure such that for any query point $q$, it is possible to
return a line segment whose Euclidean distance from $q$ is at most
$(1+\varepsilon)$ times the distance from $q$ to its nearest line segment. We
present a data structure for this problem with storage $O((n^2/\varepsilon^{d})
\log (\Delta/\varepsilon))$ and query time $O(\log
(\max(n,\Delta)/\varepsilon))$, where $\Delta$ is the spread of the set of
segments $S$. Our approach is based on a covering of space by anisotropic
elements, which align themselves according to the orientations of nearby
segments.
|
We construct a large class of projective threefolds with one node (aka
non-degenerate quadratic singularity) such that their small resolutions are not
projective.
|
Data heterogeneity has been identified as one of the key features in
federated learning but is often overlooked through the lens of robustness to
adversarial attacks. This paper focuses on characterizing and understanding its
impact on backdooring attacks in federated learning through comprehensive
experiments using synthetic datasets and the LEAF benchmarks. The initial
impression given by our experimental results is that data heterogeneity is the
dominant factor in the effectiveness of attacks, and that it may be a
redemption for defending against backdooring, since it makes attacks less
efficient, effective attack strategies harder to design, and attack results
less predictable. However, with further investigation, we found that data
heterogeneity is more of a curse than a redemption, as attack effectiveness can
be significantly boosted by simply adjusting the client-side backdooring
timing. More importantly, data heterogeneity may result in overfitting during
the local training of benign clients, which can be exploited by attackers to
disguise themselves and fool skewed-feature-based defenses. In addition,
effective attack strategies can be crafted by adjusting the attack data
distribution. Finally, we discuss potential directions for defending against
the curses brought by data heterogeneity. The results and lessons learned from
our extensive experiments and analysis offer new insights for designing robust
federated learning methods and systems.
|
Unsupervised domain adaptive classification intends to improve the
classification performance on an unlabeled target domain. To alleviate the
adverse effect of domain shift, many approaches align the source and target
domains in the feature space. However, a feature is usually taken as a whole
for alignment without explicitly making domain alignment proactively serve the
classification task, leading to sub-optimal solutions. In this paper, we
propose an effective Task-oriented Alignment (ToAlign) for unsupervised domain
adaptation (UDA). We study what features should be aligned across domains and
propose to make the domain alignment proactively serve classification by
performing feature decomposition and alignment under the guidance of the prior
knowledge induced from the classification task itself. Particularly, we
explicitly decompose a feature in the source domain into a
task-related/discriminative feature that should be aligned, and a
task-irrelevant feature that should be avoided/ignored, based on the
classification meta-knowledge. Extensive experimental results on various
benchmarks (e.g., Office-Home, Visda-2017, and DomainNet) under different
domain adaptation settings demonstrate the effectiveness of ToAlign which helps
achieve the state-of-the-art performance. The code is publicly available at
https://github.com/microsoft/UDA
|
The design space for inertial confinement fusion (ICF) experiments is vast
and experiments are extremely expensive. Researchers rely heavily on computer
simulations to explore the design space in search of high-performing
implosions. However, ICF multiphysics codes must make simplifying assumptions,
and thus deviate from experimental measurements for complex implosions. For
more effective design and investigation, simulations require input from past
experimental data to better predict future performance. In this work, we
describe a cognitive simulation method for combining simulation and
experimental data into a common, predictive model. This method leverages a
machine learning technique called transfer learning, the process of taking a
model trained to solve one task, and partially retraining it on a sparse
dataset to solve a different, but related task. In the context of ICF design,
neural network models are trained on large simulation databases and partially
retrained on experimental data, producing models that are far more accurate
than simulations alone. We demonstrate improved model performance for a range
of ICF experiments at the National Ignition Facility, and predict the outcome
of recent experiments with less than ten percent error for several key
observables. We discuss how the methods might be used to carry out a
data-driven experimental campaign to optimize performance, illustrating the key
product -- models that become increasingly accurate as data is acquired.
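The transfer-learning step itself is simple to sketch. Below, a small network
is pre-trained on plentiful synthetic "simulation" data and only its head is
retrained on a sparse, systematically offset "experimental" set; the
architecture, data, and freezing choice are illustrative assumptions, not the
authors' surrogate models.
```python
import torch
import torch.nn as nn

torch.manual_seed(0)
truth = lambda x: (x[:, :3] ** 2).sum(1, keepdim=True)

sim_x = torch.randn(5000, 9)                       # plentiful simulations
sim_y = truth(sim_x) + 0.05 * torch.randn(5000, 1)
exp_x = torch.randn(30, 9)                         # sparse experiments,
exp_y = 1.3 * truth(exp_x) + 0.4                   # systematically offset

body = nn.Sequential(nn.Linear(9, 64), nn.ReLU(), nn.Linear(64, 64), nn.ReLU())
head = nn.Linear(64, 1)
model = nn.Sequential(body, head)

def fit(params, x, y, steps, lr=1e-3):
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()
        opt.step()
    return loss.item()

fit(model.parameters(), sim_x, sim_y, steps=2000)  # pre-train on simulations
for p in body.parameters():
    p.requires_grad_(False)                        # freeze shared layers
print("experimental fit:",
      fit(head.parameters(), exp_x, exp_y, steps=500, lr=1e-2))
```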
|
This paper develops simple feed-forward neural networks that achieve the
universal approximation property for all continuous functions with a fixed
finite number of neurons. These neural networks are simple because they are
designed with a simple and computable continuous activation function $\sigma$
leveraging a triangular-wave function and a softsign function. We prove that
$\sigma$-activated networks with width $36d(2d+1)$ and depth $11$ can
approximate any continuous function on a $d$-dimensional hypercube within an
arbitrarily small error. Hence, for supervised learning and its related
regression problems, the hypothesis space generated by these networks with a
size not smaller than $36d(2d+1)\times 11$ is dense in the space of continuous
functions. Furthermore, classification functions arising from image and signal
classification are in the hypothesis space generated by $\sigma$-activated
networks with width $36d(2d+1)$ and depth $12$, when there exist pairwise
disjoint closed bounded subsets of $\mathbb{R}^d$ such that the samples of the
same class are located in the same subset.
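The two named ingredients are easy to state concretely; the sketch below shows
one common parametrization of each. How the paper composes them into its exact
activation $\sigma$ is not reproduced here.
```python
# Illustration of the two named building blocks, not the paper's exact sigma.
import numpy as np

def triangular_wave(x, period=2.0):
    """Piecewise-linear wave oscillating between 0 and 1."""
    t = np.mod(x, period) / period
    return 2 * np.minimum(t, 1 - t)

def softsign(x):
    return x / (1 + np.abs(x))

x = np.linspace(-4, 4, 9)
print(np.round(triangular_wave(x), 3))
print(np.round(softsign(x), 3))
```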
|
Adversarial algorithms have been shown to be effective against neural networks
for a variety of tasks. For image classification, some adversarial algorithms
perturb all the pixels of an image minimally, whereas others perturb a few
pixels strongly. However, very little is known about why such diverse
adversarial samples exist. Recently, Vargas et al. showed that the existence of
these adversarial samples might be due to conflicting saliency within the
neural network. We test this hypothesis of conflicting saliency by analysing
the Saliency Maps (SM) and Gradient-weighted Class Activation Maps (Grad-CAM)
of original samples and of a few different types of adversarial samples. We
also analyse how different adversarial samples distort the attention of the
neural network compared with original samples. We show that in the case of the
Pixel Attack, perturbed pixels either call the network's attention to
themselves or divert attention away from them. Simultaneously, the Projected
Gradient Descent Attack perturbs pixels so that intermediate layers inside the
neural network lose attention for the correct class. We also show that the two
attacks affect the saliency map and activation maps differently, shedding light
on why some defences that succeed against one attack remain vulnerable to
others. We hope that this analysis will improve understanding of the existence
and the effect of adversarial samples and enable the community to develop more
robust neural networks.
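As a rough illustration of the analysis, the sketch below computes
input-gradient saliency maps for an original and a perturbed image and compares
them at the perturbed pixels. The untrained network and toy perturbation are
stand-ins for the paper's trained models and attacks, and Grad-CAM (which
weights convolutional feature maps by pooled gradients) is omitted.
```python
import torch
import torchvision.models as models

model = models.resnet18(weights=None).eval()   # untrained stand-in network

def saliency_map(img):
    img = img.clone().requires_grad_(True)
    score = model(img.unsqueeze(0)).max()      # top logit
    score.backward()
    return img.grad.abs().max(dim=0).values    # (H, W) saliency

original = torch.rand(3, 224, 224)
perturbed = original.clone()
perturbed[:, 100:105, 100:105] += 0.5          # toy "few-pixel" perturbation

s0, s1 = saliency_map(original), saliency_map(perturbed)
print("saliency shift at perturbed pixels:",
      (s1 - s0)[100:105, 100:105].mean().item())
```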
|
Understanding and mitigating loss channels due to two-level systems (TLS) is
one of the main cornerstones in the quest of realizing long photon lifetimes in
superconducting quantum circuits. Typically, the TLS to which a circuit couples
are modeled as a large bath without any coherence. Here we demonstrate that the
coherence of TLS has to be considered to accurately describe the ring-down
dynamics of a coaxial quarter-wave resonator with an internal quality factor of
$0.5\times10^9$ at the single-photon level. The transient analysis reveals
effective non-Markovian dynamics of the combined TLS and cavity system, which
we can accurately fit by introducing a comprehensive TLS model. The fit returns
an average coherence time of around $T_2^*\approx0.3\,\mathrm{\mu s}$ for a
total of $N\approx10^{9}$ TLS with power-law distributed coupling strengths.
Despite the shortly coherent TLS excitations, we observe long-term effects on
the cavity decay due to coherent elastic scattering between the resonator field
and the TLS. Moreover, this model provides an accurate prediction of the
internal quality factor's temperature dependence.
|
Weakly nonlinear internal wave-wave interaction is a key mechanism that
cascades energy from large to small scales, leading to ocean turbulence and
mixing. Oceans typically have a non-uniform density stratification profile;
moreover, submarine topography leads to a spatially varying ocean depth ($h$).
Under these conditions and assuming mild-slope bathymetry, we employ
multiple-scale analysis to derive the wave amplitude equations for triadic- and
self-interactions. The waves are assumed to have a slowly (rapidly) varying
amplitude (phase) in space and time. For uniform stratifications, the
horizontal wavenumber ($k$) condition for waves ($1$,$2$,$3$), given by
${k}_{(1,a)}+{k}_{(2,b)}+{k}_{(3,c)}=0$, is unaffected as $h$ is varied, where
$(a,b,c)$ denote the modenumber. Moreover, the nonlinear coupling coefficients
(NLC) are proportional to $1/h^2$, implying that triadic waves grow faster
while travelling up a seamount. For non-uniform stratifications, triads that do
not satisfy the condition $a=b=c$ may not satisfy the horizontal wavenumber
condition as $h$ is varied, and unlike uniform stratification, the NLC may not
decrease (increase) monotonically with increasing (decreasing) $h$. NLC, and
hence wave growth rates for both triads and self-interactions, can also vary
rapidly with $h$. The most unstable daughter wave combination of a triad with a
mode-1 parent wave can also change for relatively small changes in $h$. We also
investigate higher-order self-interactions in the presence of a monochromatic,
small amplitude bathymetry; here the bathymetry behaves as a zero frequency
wave. We derive the amplitude evolution equations and show that higher-order
self-interactions might be a viable mechanism of energy cascade.
|
We give a $0.5368$-competitive algorithm for edge-weighted online bipartite
matching. Prior to our work, the best competitive ratio was $0.5086$ due to
Fahrbach, Huang, Tao, and Zadimoghaddam (FOCS 2020). They achieved their
breakthrough result by developing a subroutine called \emph{online correlated
selection} (OCS) which takes as input a sequence of pairs and selects one item
from each pair. Importantly, the selections the OCS makes are negatively
correlated.
We achieve our result by defining \emph{multiway} OCSes which receive
arbitrarily many elements at each step, rather than just two. In addition to
better competitive ratios, our formulation allows for a simpler reduction from
edge-weighted online bipartite matching to OCSes. While Fahrbach et al. used a
factor-revealing linear program to optimize the competitive ratio, our analysis
directly connects the competitive ratio to the parameters of the multiway OCS.
Finally, we show that the formulation of Fahrbach et al. can achieve a
competitive ratio of at most $0.5239$, confirming that multiway OCSes are
strictly more powerful.
|
Let $\Omega\subset \mathbb{C}^n$ be a smooth bounded pseudoconvex domain and
$A^2 (\Omega)$ denote its Bergman space. Let $P:L^2(\Omega)\longrightarrow
A^2(\Omega)$ be the Bergman projection. For a measurable
$\varphi:\Omega\longrightarrow \Omega$, the projected composition operator is
defined by $(K_\varphi f)(z) = P(f \circ \varphi)(z), z \in\Omega, f\in A^2
(\Omega).$ In 1994, Rochberg studied boundedness of $K_\varphi$ on the Hardy
space of the unit disk and obtained different necessary or sufficient
conditions for boundedness of $K_\varphi$. In this paper we are interested in
projected composition operators on Bergman spaces on pseudoconvex domains. We
study boundedness of this operator under the smoothness assumptions on the
symbol $\varphi$ on $\overline\Omega$.
|
A new mechanism for the internal heating of ultra-short-period planets is
proposed based on the gravitational perturbation by a non-axisymmetric
quadrupole moment of their host stars. Such a quadrupole is due to the magnetic
flux tubes in the stellar convection zone, unevenly distributed in longitude
and persisting for many stellar rotations as observed in young late-type stars.
The rotation period of the host star evolves from its shortest value on the
zero-age main sequence to longer periods due to the loss of angular momentum
through a magnetized wind. If the stellar rotation period comes close to twice
the orbital period of the planet, the quadrupole leads to a spin-orbit
resonance that excites oscillations of the star-planet separation. As a
consequence, a strong tidal dissipation is produced inside the planet. We
illustrate the operation of the mechanism by modeling the evolution of the
stellar rotation and of the innermost planetary orbit in the cases of CoRoT-7,
Kepler-78, and K2-141 whose present orbital periods range between 0.28 and 0.85
days. If the spin-orbit resonance occurs, the maximum power dissipated inside
the planets ranges between $10^{18}$ and $10^{19}$ W, while the total
dissipated energy is of the order of $10^{30}-10^{32}$ J over a time interval
as short as $(1-4.5) \times 10^{4}$ yr. Such intense heating over such a short
time interval produces a complete melting of the planetary interiors and may
shut off their hydromagnetic dynamos. This may initiate a subsequent phase of
intense internal heating owing to unipolar magnetic star-planet interactions
and affect the composition and the escape of their atmospheres, producing
effects that could be observable during the entire lifetime of the planets
[abridged abstract].
|
This paper presents a fully convolutional scene graph generation (FCSGG)
model that detects objects and relations simultaneously. Most of the scene
graph generation frameworks use a pre-trained two-stage object detector, like
Faster R-CNN, and build scene graphs using bounding box features. Such a
pipeline usually has a large number of parameters and low inference speed.
Unlike these
approaches, FCSGG is a conceptually elegant and efficient bottom-up approach
that encodes objects as bounding box center points, and relationships as 2D
vector fields named Relation Affinity Fields (RAFs). RAFs encode
both semantic and spatial features, and explicitly represent the relationship
between a pair of objects by the integral on a sub-region that points from
subject to object. FCSGG only utilizes visual features and still generates
strong results for scene graph generation. Comprehensive experiments on the
Visual Genome dataset demonstrate the efficacy, efficiency, and
generalizability of the proposed method. FCSGG achieves highly competitive
results on recall and zero-shot recall with significantly reduced inference
time.
|
With extreme weather events becoming more common, the risk posed by surface
water flooding is ever increasing. In this work we propose a model, and
associated Bayesian inference scheme, for generating probabilistic
(high-resolution short-term) forecasts of localised precipitation. The
parametrisation of our underlying hierarchical dynamic spatio-temporal model is
motivated by a forward-time, centred-space finite difference solution to a
collection of stochastic partial differential equations, where the main driving
forces are advection and diffusion. Observations from both weather radar and
ground based rain gauges provide information from which we can learn about the
likely values of the (latent) precipitation field in addition to other unknown
model parameters. Working in the Bayesian paradigm provides a coherent
framework for capturing uncertainty both in the underlying model parameters and
also in our forecasts. Further, appealing to simulation-based (MCMC) sampling
yields a straightforward solution to handling zeros, treated as censored
observations, via data augmentation. Both the underlying state and the
observations are of moderately large dimension ($\mathcal{O}(10^4)$ and
$\mathcal{O}(10^3)$ respectively) and this renders standard inference
approaches computationally infeasible. Our solution is to embed the ensemble
Kalman smoother within a Gibbs sampling scheme to facilitate approximate
Bayesian inference in reasonable time. Both the methodology and the
effectiveness of our posterior sampling scheme are demonstrated via simulation
studies and also by a case study of real data from the Urban Observatory
project based in Newcastle upon Tyne, UK.
|
Continuous variable measurement-based quantum computation on cluster states
has in recent years shown great potential for scalable, universal, and
fault-tolerant quantum computation when combined with the
Gottesman-Kitaev-Preskill (GKP) code and quantum error correction. However, no
complete fault-tolerant architecture exists that includes everything from
cluster state generation with finite squeezing to gate implementations with
realistic noise and error correction. In this work, we propose a simple
architecture for the preparation of a cluster state in three dimensions in
which gates by gate teleportation can be efficiently implemented. To
accommodate scalability, we propose architectures that allow for both spatial
and temporal multiplexing, with the temporal encoded version requiring as
little as two squeezed light sources. Due to its three-dimensional structure,
the architecture supports topological qubit error correction, while GKP error
correction is efficiently realized within the architecture by teleportation. To
validate fault-tolerance, the architecture is simulated using surface-GKP
codes, including noise from GKP-states as well as gate noise caused by finite
squeezing in the cluster state. We find a fault-tolerant squeezing threshold of
12.7 dB with room for further improvement.
|
In high energy exclusive processes involving leptons, QED corrections can be
sensitive to infrared scales like the lepton mass and the soft photon energy
cut, resulting in large logarithms that need to be resummed to all orders in
$\alpha$. When considering the ratio of the exclusive processes between two
different lepton flavors, the ratio $R$ can be expressed in terms of factorized
functions in the decoupled leptonic sectors. While some of the functional terms
cancel, there remain the large logarithms due to the lepton mass difference and
the energy cut. This factorization process can be universally applied to the
exclusive processes such as $Z\to l^+l^-$ and $B^-\to l^-\bar{\nu}_l$, where
the resummed result in the ratio gives significant deviations from the naive
expectation from lepton universality.
|
A hyperbolic group acts by homeomorphisms on its Gromov boundary. We show
that if this boundary is a topological n-sphere the action is topologically
stable in the dynamical sense: any nearby action is semi-conjugate to the
standard boundary action.
|
In this paper we consider the problem of pointwise determining the fibres of
the flat unitary subbundle of a PVHS of weight one. Starting from the
associated Higgs field, and assuming the base has dimension $1$, we construct a
family of (smooth but possibly non-holomorphic) morphisms of vector bundles
with the property that the intersection of their kernels at a general point is
the fibre of the flat subbundle. We explore the first one of these morphisms in
the case of a geometric PVHS arising from a family of smooth projective curves,
showing that it acts as the cup-product with some sort of "second-order
Kodaira-Spencer class" which we introduce, and check in the case of a family of
smooth plane curves that this additional condition is non-trivial.
|
Multi-segment reconstruction (MSR) is the problem of estimating a signal
given noisy partial observations. Here each observation corresponds to a
randomly located segment of the signal. While previous works address this
problem using template matching or moment matching, in this paper we address MSR from an
unsupervised adversarial learning standpoint, named MSR-GAN. We formulate MSR
as a distribution matching problem where the goal is to recover the signal and
the probability distribution of the segments such that the distribution of the
generated measurements following a known forward model is close to the real
observations. This is achieved once a min-max optimization involving a
generator-discriminator pair is solved. MSR-GAN is mainly inspired by CryoGAN
[1]. However, in MSR-GAN we no longer assume the probability distribution of
the latent variables, i.e. segment locations, is given and seek to recover it
alongside the unknown signal. For this purpose, we show that the loss on the
generator side is originally non-differentiable with respect to the segment
distribution. Thus, we propose to approximate it using the Gumbel-Softmax
reparametrization trick. Our proposed solution is generalizable to a wide range
of inverse problems. Our simulation results and comparison with various
baselines verify the potential of our approach in different settings.
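The reparametrization step can be sketched in isolation, assuming PyTorch:
`F.gumbel_softmax` draws approximately one-hot location samples through which
gradients flow to the distribution's logits, and a matrix product with the
candidate segments keeps the selection differentiable. The surrounding
generator, discriminator, and forward model are omitted.
```python
import torch
import torch.nn.functional as F

n_locations, seg_len = 16, 32
logits = torch.zeros(n_locations, requires_grad=True)  # learnable distribution
signal = torch.randn(128)                   # stand-in for the unknown signal

# All candidate segments, one per admissible start location.
candidates = torch.stack([signal[s:s + seg_len] for s in range(n_locations)])

# Differentiable (straight-through) one-hot samples over locations.
samples = F.gumbel_softmax(logits.expand(64, -1), tau=0.5, hard=True)
segments = samples @ candidates             # (64, seg_len), differentiable

loss = segments.var()                       # stand-in for the adversarial loss
loss.backward()
print(logits.grad.shape)                    # gradients reach the distribution
```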
|
We compare the radial profiles of the specific star formation rate (sSFR) in
a sample of 169 star-forming galaxies in close pairs with those of mass-matched
control galaxies in the SDSS-IV MaNGA survey. We find that the sSFR is
centrally enhanced (within one effective radius) in interacting galaxies by
~0.3 dex and that there is a weak sSFR suppression in the outskirts of the
galaxies of ~0.1 dex. We stack the difference profiles for galaxies in five
stellar mass bins between log(M/Mstar) = 9.0-11.5 and find that the sSFR
enhancement has no dependence on the stellar mass. The same result is obtained
when the comparison galaxies are matched to each paired galaxy in both stellar
mass and redshift. In addition, we find that the sSFR enhancement is
elevated in pairs with nearly equal masses and closer projected separations, in
agreement with previous work based on single-fiber spectroscopy. We also find
that the sSFR offsets in the outskirts of the paired galaxies are dependent on
whether the galaxy is the more massive or less massive companion in the pair.
The more massive companion experiences zero to a positive sSFR enhancement
while the less massive companion experiences sSFR suppression in their
outskirts. Our results illustrate the complex tidal effects on star formation
in closely paired galaxies.
|
Solving Partially Observable Markov Decision Processes (POMDPs) is hard.
Learning optimal controllers for POMDPs when the model is unknown is harder.
Online learning of optimal controllers for unknown POMDPs, which requires
efficient learning using regret-minimizing algorithms that effectively tradeoff
exploration and exploitation, is even harder, and no solution exists currently.
In this paper, we consider infinite-horizon average-cost POMDPs with an
unknown transition model but a known observation model. We propose a natural
posterior sampling-based reinforcement learning algorithm (PSRL-POMDP) and show
that it achieves a regret bound of $O(\log T)$, where $T$ is the time horizon,
when the parameter set is finite. In the general case (continuous parameter
set), we show that the algorithm achieves $O (T^{2/3})$ regret under two
technical assumptions. To the best of our knowledge, this is the first online
RL algorithm for POMDPs with sub-linear regret.
|
Let $D$ be a $k$-regular bipartite tournament on $n$ vertices. We show that,
for every $p$ with $2 \le p \le n/2-2$, $D$ has a cycle $C$ of length $2p$ such
that $D \setminus C$ is hamiltonian unless $D$ is isomorphic to the special
digraph $F_{4k}$. This statement was conjectured by Manoussakis, Song and Zhang
[K. Zhang, Y. Manoussakis, and Z. Song. Complementary cycles containing a fixed
arc in diregular bipartite tournaments. Discrete Mathematics,
133(1-3):325--328,1994]. In the same paper, the conjecture was proved for $p=2$
and more recently Bai, Li and He gave a proof for $p=3$ [Y. Bai, H. Li, and W.
He. Complementary cycles in regular bipartite tournaments. Discrete
Mathematics, 333:14--27, 2014].
|
Missing time-series data is a prevalent practical problem. Imputation methods
in time-series data often are applied to the full panel data with the purpose
of training a model for a downstream out-of-sample task. For example, in
finance, imputation of missing returns may be applied prior to training a
portfolio optimization model. Unfortunately, this practice may result in a
look-ahead-bias in the future performance on the downstream task. There is an
inherent trade-off between the look-ahead-bias of using the full data set for
imputation and the larger variance in the imputation from using only the
training data. By connecting layers of information revealed in time, we propose
a Bayesian posterior consensus distribution which optimally controls the
variance and look-ahead-bias trade-off in the imputation. We demonstrate the
benefit of our methodology both in synthetic and real financial data.
|
Recent studies have shown that self-supervised methods based on view
synthesis achieve clear progress on multi-view stereo (MVS). However, existing
methods rely on the assumption that corresponding points among different
views share the same color, which may not always hold in practice. This may
lead to unreliable self-supervised signals and harm the final reconstruction
performance. To address this issue, we propose a framework integrated with more
reliable supervision guided by semantic co-segmentation and data augmentation.
Specifically, we extract mutual semantics from multi-view images to enforce
semantic consistency, and we devise an effective data-augmentation mechanism
that ensures transformation robustness by treating the predictions on regular
samples as pseudo ground truth to regularize the predictions on augmented
samples. Experimental results on the DTU dataset show that our proposed methods
achieve the state-of-the-art performance among unsupervised methods, and even
compete on par with supervised methods. Furthermore, extensive experiments on
Tanks&Temples dataset demonstrate the effective generalization ability of the
proposed method.
|
Nearly a decade ago, Azrieli and Shmaya introduced the class of
$\lambda$-Lipschitz games in which every player's payoff function is
$\lambda$-Lipschitz with respect to the actions of the other players. They
showed that such games admit $\epsilon$-approximate pure Nash equilibria for
certain settings of $\epsilon$ and $\lambda$. They left open, however, the
question of how hard it is to find such an equilibrium. In this work, we
develop a query-efficient reduction from more general games to Lipschitz games.
We use this reduction to show a query lower bound for any randomized algorithm
finding $\epsilon$-approximate pure Nash equilibria of $n$-player,
binary-action, $\lambda$-Lipschitz games that is exponential in
$\frac{n\lambda}{\epsilon}$. In addition, we introduce ``Multi-Lipschitz
games,'' a generalization involving player-specific Lipschitz values, and
provide a reduction from finding equilibria of these games to finding
equilibria of Lipschitz games, showing that the value of interest is the sum of
the individual Lipschitz parameters. Finally, we provide an exponential lower
bound on the deterministic query complexity of finding $\epsilon$-approximate
correlated equilibria of $n$-player, $m$-action, $\lambda$-Lipschitz games for
strong values of $\epsilon$, motivating the consideration of explicitly
randomized algorithms in the above results. Our proof is arguably simpler than
those previously used to show similar results.
|
The WIMP proposed here yields the observed abundance of dark matter, and is
consistent with the current limits from direct detection, indirect detection,
and collider experiments, if its mass is $\sim 72$ GeV/$c^2$. It is also
consistent with analyses of the gamma rays observed by Fermi-LAT from the
Galactic center (and other sources), and of the antiprotons observed by AMS-02,
in which the excesses are attributed to dark matter annihilation. These
successes are shared by the inert doublet model (IDM), but the phenomenology is
very different: The dark matter candidate of the IDM has first-order gauge
couplings to other new particles, whereas the present candidate does not. In
addition to indirect detection through annihilation products, it appears that
the present particle can be observed in the most sensitive direct-detection and
collider experiments currently being planned.
|
It has been shown that the performance of neural machine translation (NMT)
drops starkly in low-resource conditions, often requiring large amounts of
auxiliary data to achieve competitive results. An effective method of
generating auxiliary data is back-translation of target language sentences. In
this work, we present a case study of Tigrinya where we investigate several
back-translation methods to generate synthetic source sentences. We find that
in low-resource conditions, back-translation by pivoting through a
higher-resource language related to the target language proves most effective,
resulting in substantial improvements over baselines.
|
This paper proposes architectures that facilitate the extrapolation of
emotional expressions in deep neural network (DNN)-based text-to-speech (TTS).
In this study, the meaning of "extrapolate emotional expressions" is to borrow
emotional expressions from others, and the collection of emotional speech
uttered by target speakers is unnecessary. Although a DNN has potential power
to construct DNN-based TTS with emotional expressions and some DNN-based TTS
systems have demonstrated satisfactory performances in the expression of the
diversity of human speech, it is necessary and troublesome to collect emotional
speech uttered by target speakers. To solve this issue, we propose
architectures to separately train the speaker feature and the emotional feature
and to synthesize speech with any combined quality of speakers and emotions.
The architectures are the parallel model (PM), serial model (SM), auxiliary
input model (AIM), and hybrid models (PM&AIM and SM&AIM). These models are
trained on emotional speech uttered by a few speakers and neutral speech
uttered by many speakers. Objective evaluations demonstrate that the
performances in the open-emotion test provide insufficient information when
compared with those in the closed-emotion test, as each speaker has their own
manner of expressing emotion. However, subjective evaluation results indicate
that the proposed models can convey emotional information to some extent.
Notably, the PM can correctly convey sad and joyful emotions at a rate of
>60%.
|
The baroclinic annular mode (BAM) is a leading-order mode of the eddy kinetic
energy in the Southern Hemisphere exhibiting oscillatory behavior at
intra-seasonal time scales. The oscillation mechanism has been linked to
transient eddy-mean flow interactions that remain poorly understood. Here we
demonstrate that the finite memory effect in eddy-heat flux dependence on the
large-scale flow can explain the origin of the BAM's oscillatory behavior. We
represent the eddy memory effect by a delayed integral kernel that leads to a
generalized Langevin equation for the planetary-scale heat equation. Using a
mathematical framework for the interactions between planetary and
synoptic-scale motions, we derive a reduced dynamical model of the BAM - a
stochastically-forced oscillator with a period proportional to the geometric
mean between the eddy-memory time scale and the diffusive eddy equilibration
timescale. Our model provides a formal justification for the previously
proposed phenomenological model of the BAM and could be used to explicitly
diagnose the memory kernel and improve our understanding of transient eddy-mean
flow interactions in the atmosphere.
|
Spelling irregularities, known now as spelling mistakes, have existed for
several centuries. As humans, we are able to understand most misspelled words
based on their location in the sentence, perceived pronunciation, and context.
Unlike humans, computer systems do not possess the convenient auto-complete
functionality of which human brains are capable. While many programs provide
spelling correction functionality, many systems do not take context into
account. Moreover, artificial intelligence systems behave according to the data
they are trained on. With many current Natural Language Processing (NLP)
systems trained on grammatically correct text data, many are vulnerable to
adversarial examples, yet processing correctly spelled text is crucial for
learning. In this paper, we investigate how spelling errors can be corrected in
context, with a pre-trained language model BERT. We present two experiments,
based on BERT and the edit distance algorithm, for ranking and selecting
candidate corrections. The results of our experiments demonstrated that when
combined properly, contextual word embeddings of BERT and edit distance are
capable of effectively correcting spelling errors.
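A minimal sketch of the rank-and-select idea follows, assuming the Hugging Face
`transformers` package: BERT proposes in-context candidates for a masked token
and a Levenshtein distance to the typed word re-ranks them. The scoring
combination below is an illustrative assumption, not the paper's exact
experiments.
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

def edit_distance(a, b):
    """Classic Levenshtein dynamic program over a single row."""
    d = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, d[0] = d[0], i
        for j, cb in enumerate(b, 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (ca != cb))
    return d[-1]

typo = "restaraunt"
context = f"We had dinner at a nice {fill.tokenizer.mask_token} downtown."
candidates = fill(context, top_k=50)

# Combine BERT's contextual score with closeness to the typed word;
# the 2.0 weight is an arbitrary illustrative choice.
best = min(candidates,
           key=lambda c: edit_distance(typo, c["token_str"]) - 2.0 * c["score"])
print(best["token_str"])
```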
|
In game theory, mechanism design is concerned with the design of incentives
so that a desired outcome of the game can be achieved. In this paper, we study
the design of incentives so that a desirable equilibrium is obtained, for
instance, an equilibrium satisfying a given temporal logic property -- a
problem that we call equilibrium design. We base our study on a framework where
system specifications are represented as temporal logic formulae, games as
quantitative concurrent game structures, and players' goals as mean-payoff
objectives. In particular, we consider system specifications given by LTL and
GR(1) formulae, and show that implementing a mechanism to ensure that a given
temporal logic property is satisfied on some/every Nash equilibrium of the
game, whenever such a mechanism exists, can be done in PSPACE for LTL
properties and in NP/$\Sigma^{P}_{2}$ for GR(1) specifications. We also study
the complexity of various related decision and optimisation problems, such as
optimality and uniqueness of solutions, and show that the complexities of all
such problems lie within the polynomial hierarchy. As an application,
equilibrium design can be used as an alternative solution to the rational
synthesis and verification problems for concurrent games with mean-payoff
objectives whenever no solution exists, or as a technique to repair, whenever
possible, concurrent games with undesirable rational outcomes (Nash equilibria)
in an optimal way.
|
Fonts have had trends throughout their history, not only in when they were
invented but also in their usage and popularity. In this paper, we attempt to
specifically find the trends in font usage using robust regression on a large
collection of text images. We utilize movie posters as the source of fonts for
this task because movie posters can represent time periods by using their
release date. In addition, movie posters are documents that are carefully
designed and represent a wide range of fonts. To understand the relationship
between the fonts of movie posters and time, we use a regression Convolutional
Neural Network (CNN) to estimate the release year of a movie using an isolated
title text image. Due to the difficulty of the task, we propose the use of a
hybrid training regimen that combines Mean Squared Error (MSE) and
Tukey's biweight loss. Furthermore, we perform a thorough analysis on the
trends of fonts through time.
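The hybrid objective is straightforward to sketch, assuming PyTorch; the Tukey
biweight formula below is the standard one, while the tuning constant c and the
way the two terms are mixed are illustrative assumptions rather than the
paper's exact regimen.
```python
import torch

def tukey_biweight(residual, c=4.685):
    """Standard Tukey biweight loss: quadratic-like near zero, constant
    (bounded influence) beyond |r| = c."""
    r = residual.abs()
    inside = 1 - (1 - (r / c) ** 2) ** 3
    return torch.where(r <= c, (c ** 2 / 6) * inside,
                       torch.full_like(r, c ** 2 / 6))

def hybrid_loss(pred, target, alpha=0.5):
    res = pred - target
    return alpha * (res ** 2).mean() + (1 - alpha) * tukey_biweight(res).mean()

pred = torch.tensor([1985., 1999., 2030.])
target = torch.tensor([1986., 2000., 1995.])   # last sample is an outlier
print(hybrid_loss(pred, target))               # outlier influence is bounded
```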
|
Using characteristics to treat advection terms in time-dependent PDEs leads
to a class of schemes, e.g., semi-Lagrangian and Lagrange-Galerkin schemes,
which preserve stability under large Courant numbers, and may therefore be
appealing in many practical situations. Unfortunately, the need of locating the
feet of characteristics may cause a serious drop of efficiency in the case of
unstructured space grids, and thus prevent the use of large time-step schemes
on complex geometries. In this paper, we perform an in-depth analysis of the
main recipes available for characteristic location, and propose a technique to
improve the efficiency of this phase, using additional information related to
the advecting vector field. This results in a clear improvement of execution
times in the unstructured case, thus extending the range of applicability of
large time-step schemes.
|
In this paper, we construct a sequence $(c_k)_{k\in\mathbb{Z}_{\geq 1}}$ of
symplectic capacities based on the Chiu-Tamarkin complex $C_{T,\ell}$, a
$\mathbb{Z}/\ell\mathbb{Z}$-equivariant invariant coming from the microlocal
theory of sheaves. We compute $(c_k)_{k\in\mathbb{Z}_{\geq 1}}$ for convex
toric domains, which are the same as the Gutt-Hutchings capacities. On the
other hand, our method also applies to the contact embedding problem. We define
a sequence of "contact capacities" $([c]_k)_{k\in\mathbb{Z}_{\geq 1}}$ on the
prequantized contact manifold $\mathbb{R}^{2d}\times S^1$, from which some
embedding obstructions for prequantized convex toric domains can be derived.
|
We prove that all homology 3-spheres are $J_4$-equivalent, i.e. that any
homology 3-sphere can be obtained from one another by twisting one of its
Heegaard splittings by an element of the mapping class group acting trivially
on the fourth nilpotent quotient of the fundamental group of the gluing
surface. We do so by exhibiting an element of $J_4$, the fourth term of the
Johnson filtration of the mapping class group, on which (the core of) the
Casson invariant takes the value $1$. In particular, this provides an explicit
example of an element of $J_4$ that is not a commutator of length $2$ in the
Torelli group.
|
This paper presents an approach to deal with safety of dynamical systems in
presence of multiple non-convex unsafe sets. While optimal control and model
predictive control strategies can be employed in these scenarios, they suffer
from high computational complexity in case of general nonlinear systems.
Leveraging control barrier functions, on the other hand, results in
computationally efficient control algorithms. Nevertheless, when safety
guarantees have to be enforced alongside stability objectives, undesired
asymptotically stable equilibrium points have been shown to arise. We propose a
computationally efficient optimization-based approach which allows us to ensure
safety of dynamical systems without introducing undesired equilibria even in
presence of multiple non-convex unsafe sets. The developed control algorithm is
showcased in simulation and in a real robot navigation application.
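For context, a minimal CBF quadratic program for a single-integrator robot and
one circular unsafe set is sketched below, assuming `cvxpy`; the paper's
contribution, avoiding undesired equilibria with multiple non-convex unsafe
sets, goes beyond this baseline formulation.
```python
import numpy as np
import cvxpy as cp

def safe_input(x, u_nominal, obstacle=np.array([2.0, 0.0]),
               radius=1.0, alpha=1.0):
    """Minimally modify u_nominal so the barrier condition holds.
    Dynamics: x_dot = u; barrier h(x) >= 0 outside the obstacle."""
    h = np.sum((x - obstacle) ** 2) - radius ** 2
    grad_h = 2 * (x - obstacle)
    u = cp.Variable(2)
    prob = cp.Problem(cp.Minimize(cp.sum_squares(u - u_nominal)),
                      [grad_h @ u >= -alpha * h])   # h_dot >= -alpha * h
    prob.solve()
    return u.value

x = np.array([0.0, 0.1])
u_nom = np.array([1.0, 0.0])   # nominal controller drives toward the obstacle
print(safe_input(x, u_nom))    # minimally modified, safety-preserving input
```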
|