We define the random magnetic Laplacian with spatial white noise as magnetic
field on the two-dimensional torus using paracontrolled calculus. It yields a
random self-adjoint operator with pure point spectrum whose domain is a random
subspace of nonsmooth functions in $L^2$. We give sharp bounds on the eigenvalues
which imply an almost sure Weyl-type law.
|
Searches for gravitational-wave counterparts have been conducted in earnest
since GW170817 and the discovery of AT2017gfo. Since then, the lack of detection of
other optical counterparts connected to binary neutron star or black hole -
neutron star candidates has highlighted the need for a better discrimination
criterion to support this effort. At the moment, the low-latency
gravitational-wave alerts contain preliminary information about the binary
properties and, hence, on whether a detected binary might have an
electromagnetic counterpart. The current alert method is a classifier that
estimates the probability that there is a debris disc outside the black hole
created during the merger as well as the probability of a signal being a binary
neutron star, a black hole - neutron star, a binary black hole or of
terrestrial origin. In this work, we expand upon this approach to predict both
the ejecta properties and provide contours of potential lightcurves for these
events in order to improve follow-up observation strategy. The various sources
of uncertainty are discussed, and we conclude that our ignorance of the ejecta
composition and the insufficient constraint of the binary parameters by the
low-latency pipelines represent the main limitations. To validate the
method, we test our approach on real events from the second and third Advanced
LIGO-Virgo observing runs.
|
Fully implicit Runge-Kutta (IRK) methods have many desirable properties as
time integration schemes in terms of accuracy and stability, but high-order IRK
methods are not commonly used in practice with numerical PDEs due to the
difficulty of solving the stage equations. This paper introduces a theoretical
and algorithmic preconditioning framework for solving the systems of equations
that arise from IRK methods applied to linear numerical PDEs (without algebraic
constraints). This framework also naturally applies to discontinuous Galerkin
discretizations in time. Under quite general assumptions on the spatial
discretization that yield stable time integration, the preconditioned operator
is proven to have condition number bounded by a small, order-one constant,
independent of the spatial mesh and time-step size, and with only weak
dependence on the number of stages/polynomial order; for example, the
preconditioned operator for 10th-order Gauss IRK has condition number less than
two, independent of the spatial discretization and time step. The new method
can be used with arbitrary existing preconditioners for backward Euler-type
time stepping schemes, and is amenable to the use of three-term recursion
Krylov methods when the underlying spatial discretization is symmetric. The new
method is demonstrated to be effective on various high-order finite-difference
and finite-element discretizations of linear parabolic and hyperbolic problems,
achieving fast, scalable solutions with up to 10th-order accuracy. The new
method consistently outperforms existing block preconditioning approaches, and
in several cases, the new method can achieve 4th-order accuracy using Gauss
integration with roughly half the number of preconditioner applications and
wallclock time as required using standard diagonally implicit RK methods.
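To make the stage-equation structure concrete, here is a minimal sketch (Python/NumPy) of the Kronecker-product system that an s-stage IRK method produces for a linear ODE u' = Lu. The operator, sizes, and time step are illustrative assumptions, and this naive coupled dense solve is exactly the kind of system the preconditioning framework above is designed to avoid.

```python
# Sketch: the coupled stage system of an s-stage IRK method for u' = L u.
import numpy as np

# 2-stage (4th-order) Gauss-Legendre Butcher tableau
A = np.array([[1/4, 1/4 - np.sqrt(3)/6],
              [1/4 + np.sqrt(3)/6, 1/4]])
b = np.array([1/2, 1/2])

n, dt = 50, 1e-2
L = -np.diag(np.arange(1, n + 1, dtype=float))  # stand-in stable operator

def irk_step(u):
    # Stage system: (I - dt * kron(A, L)) k = kron(1_s, L @ u)
    S = np.eye(2 * n) - dt * np.kron(A, L)
    k = np.linalg.solve(S, np.kron(np.ones(2), L @ u))
    # Update: u_{n+1} = u_n + dt * sum_i b_i k_i
    return u + dt * np.kron(b, np.eye(n)) @ k

u = np.ones(n)
for _ in range(10):
    u = irk_step(u)
print(np.linalg.cond(np.eye(2 * n) - dt * np.kron(A, L)))
```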
|
To successfully negotiate a deal, it is not enough to communicate fluently:
pragmatic planning of persuasive negotiation strategies is essential. While
modern dialogue agents excel at generating fluent sentences, they still lack
pragmatic grounding and cannot reason strategically. We present DialoGraph, a
negotiation system that incorporates pragmatic strategies in a negotiation
dialogue using graph neural networks. DialoGraph explicitly incorporates
dependencies between sequences of strategies to enable improved and
interpretable prediction of next optimal strategies, given the dialogue
context. Our graph-based method outperforms prior state-of-the-art negotiation
models both in the accuracy of strategy/dialogue act prediction and in the
quality of downstream dialogue response generation. We qualitatively show
further benefits of learned strategy-graphs in providing explicit associations
between effective negotiation strategies over the course of the dialogue,
leading to interpretable and strategic dialogues.
|
We propose a systematic analysis method for identifying essential parameters
in various linear and nonlinear response tensors without which they vanish. By
using the Keldysh formalism and the Chebyshev polynomial expansion method, the
response tensors are decomposed into the model-independent and dependent parts,
in which the latter is utilized to extract the essential parameters. An
application of the method is demonstrated by analyzing the nonlinear Hall
effect in the ferroelectric SnTe monolayer as an example. It is shown that in
this example the second-neighbor hopping is essential for the nonlinear Hall
effect, whereas the spin-orbit coupling is unnecessary. Moreover, by analyzing
the terms contributing to the essential parameters in the lowest order, the
appearance of the nonlinear Hall effect can be interpreted as two sequential
processes: the orbital magneto-current effect and the linear anomalous Hall
effect driven by the induced orbital magnetization. In this way, the present
method provides a microscopic picture of responses. Combined with computational
analysis, it should stimulate further discoveries of anomalous responses by
filling in missing links among hidden degrees of freedom in a wide variety of
materials.
|
It is a long-standing objective to ease the computational burden incurred by
the decision-making process. Identification of this mechanism's sensitivity to
simplification has tremendous ramifications. Yet, algorithms for decision
making under uncertainty usually lean on approximations or heuristics without
quantifying their effect. Therefore, challenging scenarios could severely
impair the performance of such methods. In this paper, we consider the
decision-making mechanism in full, removing standard approximations and
accounting for all previously suppressed stochastic sources of variability. On
top of this extension, our key contribution is a novel framework to simplify
decision making while assessing and controlling online the simplification's
impact. Furthermore, we present novel stochastic bounds on the return and,
using this framework, characterize online the effect of a particular
simplification technique: reducing the number of samples in the belief
representation for planning. Finally, we verify the advantages of our approach
through extensive simulations.
|
For the development of successful share trading strategies, forecasting the
course of the stock market index is important. Effective prediction of closing
stock prices can offer investors attractive benefits. Machine learning
algorithms can process historical stock patterns and forecast closing prices
fairly reliably. In this article, we studied the NASDAQ stock market
intensively and selected a portfolio of ten companies belonging to different
sectors. The objective is to predict the next day's opening stock price from
historical data. To this end, nine different machine learning regressors were
applied to the data and evaluated using MSE and R^2 as performance metrics.
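A minimal sketch of this kind of evaluation protocol with scikit-learn, scored with MSE and R^2 on synthetic stand-in data; the feature construction and the exact list of nine regressors are not given above, so the models below are assumptions.

```python
# Sketch: fit several regressors and score them with MSE and R^2.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression, Ridge, Lasso
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.svm import SVR
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))  # stand-in for lagged price features
y = X @ [0.5, -0.2, 0.1, 0.3] + rng.normal(scale=0.1, size=500)  # next-day open

X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=False)  # keep time order
models = [LinearRegression(), Ridge(), Lasso(alpha=0.01), SVR(),
          KNeighborsRegressor(), DecisionTreeRegressor(),
          RandomForestRegressor(), GradientBoostingRegressor()]
for m in models:
    m.fit(X_tr, y_tr)
    pred = m.predict(X_te)
    print(type(m).__name__, mean_squared_error(y_te, pred), r2_score(y_te, pred))
```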
|
Ensuring the privacy of research participants is vital, even more so in
healthcare environments. Deep learning approaches to neuroimaging require large
datasets, and this often necessitates sharing data between multiple sites,
which is antithetical to the privacy objectives. Federated learning is a
commonly proposed solution to this problem. It circumvents the need for data
sharing by sharing parameters during the training process. However, we
demonstrate that allowing access to parameters may leak private information
even if data is never directly shared. In particular, we show that it is
possible to infer if a sample was used to train the model given only access to
the model prediction (black-box) or access to the model itself (white-box) and
some leaked samples from the training data distribution. Such attacks are
commonly referred to as Membership Inference attacks. We show realistic
Membership Inference attacks on deep learning models trained for 3D
neuroimaging tasks in a centralized as well as decentralized setup. We
demonstrate feasible attacks on brain age prediction models (deep learning
models that predict a person's age from their brain MRI scan). We correctly
identified whether an MRI scan was used in model training with a 60% to over
80% success rate depending on model complexity and security assumptions.
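As a rough illustration of the black-box setting, the sketch below implements a simple loss-threshold membership inference baseline on synthetic per-sample losses; the attacks in the work above may be more sophisticated, and the loss distributions here are assumptions.

```python
# Sketch: loss-threshold membership inference on synthetic losses.
import numpy as np

def mi_attack(loss_members, loss_nonmembers, threshold):
    # Predict "member" when the per-sample loss is below the threshold:
    # models typically fit training samples better than unseen ones.
    tpr = (loss_members < threshold).mean()      # members correctly flagged
    fpr = (loss_nonmembers < threshold).mean()   # non-members wrongly flagged
    return 0.5 * (tpr + (1 - fpr))               # balanced attack success rate

rng = np.random.default_rng(0)
members = rng.gamma(2.0, 0.5, 1000)      # stand-in losses on training data
nonmembers = rng.gamma(2.0, 1.0, 1000)   # unseen data: higher losses on average
print(mi_attack(members, nonmembers, threshold=np.median(nonmembers)))
```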
|
An intrinsic antiferromagnetic topological insulator, $\mathrm{MnBi_2Te_4}$,
can be realized by intercalating a Mn-Te bilayer into the topological
insulator $\mathrm{Bi_2Te_3}$. $\mathrm{MnBi_2Te_4}$ provides not only a
stable platform to demonstrate exotic physical phenomena, but also easy
tunability of the physical properties. For example, inserting more
$\mathrm{Bi_2Te_3}$ layers in between two adjacent $\mathrm{MnBi_2Te_4}$
weakens the interlayer magnetic interactions between the $\mathrm{MnBi_2Te_4}$
layers. Here we present the first observations of the inter- and intra-layer
phonon modes of $\mathrm{MnBi_{2n}Te_{3n+1}}$ (n=1,2,3,4) using cryogenic
low-frequency Raman spectroscopy. We experimentally and theoretically
distinguish the Raman vibrational modes using various polarization
configurations. The two peaks at 66 cm$^{-1}$ and 112 cm$^{-1}$ show an
abnormal perturbation in the Raman linewidths below the magnetic transition
temperature due to spin-phonon coupling. In $\mathrm{MnBi_4Te_7}$, the
$\mathrm{Bi_2Te_3}$ layers induce Davydov splitting of the A$_{1g}$ mode around
137 cm$^{-1}$ at 5 K. Using the linear chain model, we estimate the
out-of-plane interlayer force constant to be $(3.98 \pm 0.14) \times 10^{19}$
N/m$^3$ at 5 K, three times weaker than that of $\mathrm{Bi_2Te_3}$. Our work
reveals the phonon dynamics of $\mathrm{MnBi_2Te_4}$ and the effect of the
additional $\mathrm{Bi_2Te_3}$ layers, providing first-principles guidance for
tailoring the physical properties of layered heterostructures.
|
We propose a strategy for optimizing a sensor trajectory in order to estimate
the time dependence of a localized scalar source in turbulent channel flow. The
approach leverages the view of the adjoint scalar field as the sensitivity of
measurement to a possible source. A cost functional is constructed so that the
optimal sensor trajectory maintains a high sensitivity and low temporal
variation in the measured signal, for a given source location. This naturally
leads to the adjoint-of-adjoint equation based on which the sensor trajectory
is iteratively optimized. It is shown that the estimation performance based on
the measurement obtained by a sensor moving along the optimal trajectory is
drastically improved over that achieved with a stationary sensor. It is also
shown that the ratio of the fluctuation and the mean of the sensitivity for a
given sensor trajectory can be used as a diagnostic tool to evaluate the
resultant performance. Based on this finding, we propose a new cost functional
which only includes the ratio without any adjustable parameters, and
demonstrate its effectiveness in predicting the time dependence of scalar
release from the source.
|
Gamma distributed delay differential equations (DDEs) arise naturally in many
modelling applications. However, appropriate numerical methods for generic
Gamma distributed DDEs are not currently available. Accordingly, modellers
often resort to approximating the gamma distribution with an Erlang
distribution and using the linear chain technique to derive an equivalent
system of ordinary differential equations. In this work, we develop a
functionally continuous Runge-Kutta method to numerically integrate the gamma
distributed DDE and perform numerical tests to confirm the accuracy of the
numerical method. As the functionally continuous Runge-Kutta method is not
available in most scientific software packages, we then derive hypoexponential
approximations of the gamma distributed DDE. Using our numerical method, we
show that while using the common Erlang approximation can produce solutions
that are qualitatively different from the underlying gamma distributed DDE, our
hypoexponential approximations do not have this limitation. Finally, we
implement our hypoexponential approximations to perform statistical inference
on synthetic epidemiological data.
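For context, the linear chain technique mentioned above replaces an Erlang-distributed delay with a cascade of transit compartments, yielding a plain ODE system. The sketch below shows the idea with SciPy; the toy dynamics and parameter values are assumptions.

```python
# Sketch: linear chain technique for an Erlang(k, rate) distributed delay.
import numpy as np
from scipy.integrate import solve_ivp

k, rate = 4, 2.0  # Erlang shape and rate approximating a gamma delay kernel

def rhs(t, y):
    x, chain = y[0], y[1:]
    delayed = chain[-1]                # output of the transit chain = delayed x
    dx = -0.5 * x + delayed            # toy dynamics fed by the delayed signal
    dchain = np.empty(k)
    dchain[0] = rate * (x - chain[0])
    dchain[1:] = rate * (chain[:-1] - chain[1:])
    return np.concatenate(([dx], dchain))

sol = solve_ivp(rhs, (0, 20), np.concatenate(([1.0], np.zeros(k))))
print(sol.y[0, -1])
```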
|
Markov state models (MSMs) have been broadly adopted for analyzing molecular
dynamics trajectories, but the approximate nature of the models that results
from coarse-graining into discrete states is a long-known limitation. We show
theoretically that, despite the coarse graining, in principle MSM-like analysis
can yield unbiased estimation of key observables. We describe unbiased
estimators for equilibrium state populations, for the mean first-passage time
(MFPT) of an arbitrary process, and for state committors - i.e., splitting
probabilities. Generically, the estimators are only asymptotically unbiased but
we describe how extension of a recently proposed reweighting scheme can
accelerate relaxation to unbiased values. Exactly accounting for 'sliding
window' averaging over finite-length trajectories is a key, novel element of
our analysis. In general, our analysis indicates that coarse-grained MSMs are
asymptotically unbiased for steady-state properties only when appropriate
boundary conditions (e.g., source-sink for MFPT estimation) are applied
directly to trajectories, prior to calculation of the appropriate transition
matrix.
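A minimal sketch of the 'sliding window' counting referred to above: every frame pair separated by the lag contributes a transition count, including overlapping pairs. The estimator below is the standard row-normalized count matrix, not the unbiased reweighted estimators developed in the work itself.

```python
# Sketch: sliding-window transition counting for a discretized trajectory.
import numpy as np

def sliding_window_counts(dtraj, n_states, lag):
    C = np.zeros((n_states, n_states))
    for i, j in zip(dtraj[:-lag], dtraj[lag:]):  # all overlapping windows
        C[i, j] += 1
    return C

def transition_matrix(C):
    return C / C.sum(axis=1, keepdims=True)      # row-normalize the counts

dtraj = np.array([0, 0, 1, 2, 1, 0, 1, 2, 2, 0])  # toy state sequence
print(transition_matrix(sliding_window_counts(dtraj, 3, lag=2)))
```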
|
The phase field paradigm, in combination with a suitable variational
structure, has opened a path for using Griffith's energy balance to predict the
fracture of solids. These so-called phase field fracture methods have gained
significant popularity over the past decade, and are now part of commercial
finite element packages and engineering fitness-for-service assessments. Crack
paths can be predicted, in arbitrary geometries and dimensions, based on a
global energy minimisation - without the need for \textit{ad hoc} criteria. In
this work, we review the fundamentals of phase field fracture methods and
examine their capabilities in delivering predictions in agreement with the
classical fracture mechanics theory pioneered by Griffith. The two most widely
used phase field fracture models are implemented in the context of the finite
element method, and several paradigmatic boundary value problems are addressed
to gain insight into their predictive abilities across all cracking stages;
both the initiation of growth and stable crack propagation are investigated. In
addition, we examine the effectiveness of phase field models with an internal
material length scale in capturing size effects and the transition flaw size
concept. Our results show that phase field fracture methods satisfactorily
approximate classical fracture mechanics predictions and can also reconcile
stress and toughness criteria for fracture. The accuracy of the approximation
is however dependent on modelling and constitutive choices; we provide a
rationale for these differences and identify suitable approaches for delivering
phase field fracture predictions that are in good agreement with
well-established fracture mechanics paradigms.
|
The convergence property of a stochastic algorithm for self-consistent field
(SCF) calculations of electronic structure is studied. The algorithm is
formulated by rewriting the electron charges as a trace/diagonal of a matrix
function, which is subsequently expressed as a statistical average. The
function is further approximated by using a Krylov subspace approximation. As a
result, each SCF iteration only samples one random vector without having to
compute all the orbitals. We consider the common practice of SCF iterations
with damping and mixing. We prove with appropriate assumptions that the
iterations converge in the mean-square sense, when the stochastic error has an
almost sure bound. We also consider the scenario when such an assumption is
weakened to a second moment condition, and prove the convergence in
probability.
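The trace-as-statistical-average idea can be sketched with a Hutchinson-type estimator, where one random vector per iteration replaces the full sum over orbitals. In the sketch below the matrix function is applied by dense diagonalization rather than the Krylov subspace approximation described above, and the Hamiltonian and Fermi function parameters are stand-ins.

```python
# Sketch: Tr f(H) = E[z^T f(H) z] for random Rademacher vectors z.
import numpy as np

rng = np.random.default_rng(0)
n = 100
H = rng.normal(size=(n, n)); H = (H + H.T) / 2   # stand-in Hamiltonian

def f(H, beta=5.0, mu=0.0):
    w, V = np.linalg.eigh(H)
    occ = 1.0 / (1.0 + np.exp(beta * (w - mu)))  # Fermi-Dirac occupations
    return V @ np.diag(occ) @ V.T

exact = np.trace(f(H))
z = rng.choice([-1.0, 1.0], size=n)              # one random vector per SCF step
estimate = z @ f(H) @ z
print(exact, estimate)
```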
|
Multiple organ failure (MOF) is a severe syndrome with a high mortality rate
among Intensive Care Unit (ICU) patients. Early and precise detection is
critical for clinicians to make timely decisions. An essential challenge in
applying machine learning models to electronic health records (EHRs) is the
pervasiveness of missing values. Most existing imputation methods are involved
in the data preprocessing phase, failing to capture the relationship between
data and outcome for downstream predictions. In this paper, we propose
classifier-guided generative adversarial imputation networks (Classifier-GAIN)
for MOF prediction to bridge this gap, by incorporating both observed data and
label information. Specifically, the classifier takes imputed values from the
generator (imputer) to predict task outcomes and provides additional
supervision signals to the generator by joint training. The classifier-guided
generator imputes missing values with label-awareness during training,
improving the classifier's performance during inference. We conduct extensive
experiments showing that our approach consistently outperforms classical and state-of-the-art
neural baselines across a range of missing data scenarios and evaluation
metrics.
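A compact sketch of how the three components could fit together, in PyTorch; the network sizes, loss terms, and weighting are assumptions rather than the exact configuration of the method above.

```python
# Sketch: GAIN-style imputer G, mask discriminator D, label classifier C.
import torch
import torch.nn as nn

d = 16
G = nn.Sequential(nn.Linear(2 * d, 64), nn.ReLU(), nn.Linear(64, d))  # imputer
D = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, d))      # mask guess
C = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, 1))      # MOF label

bce = nn.BCEWithLogitsLoss()

def losses(x, m, y):
    # x: data with zeros at missing entries; m: 1 = observed; y: MOF label.
    x_hat = G(torch.cat([m * x, m], dim=1))
    x_imp = m * x + (1 - m) * x_hat               # keep observed values
    d_logits = D(x_imp)
    loss_D = bce(d_logits, m)                     # D tries to recover the mask
    loss_G = (bce(d_logits, torch.ones_like(m))   # G fools D on missing entries
              + ((m * (x - x_hat)) ** 2).mean())  # reconstruct observed entries
    loss_C = bce(C(x_imp).squeeze(1), y)          # label supervision guides G
    return loss_D, loss_G + loss_C, loss_C

x = torch.randn(32, d); m = (torch.rand(32, d) > 0.3).float()
y = torch.randint(0, 2, (32,)).float()
print([l.item() for l in losses(x * m, m, y)])
```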
|
Instrumental systematics need to be controlled to high precision for upcoming
Cosmic Microwave Background (CMB) experiments. The level of contamination
caused by these systematics is often linked to the scan strategy, and scan
strategies for satellite experiments can significantly mitigate these
systematics. However, no detailed study has been performed for ground-based
experiments. Here we show that under the assumption of constant elevation scans
(CESs), the ability of the scan strategy to mitigate these systematics is
strongly limited, irrespective of the detailed structure of the scan strategy.
We calculate typical values and maps of the quantities coupling the scan to the
systematics, and show how these quantities vary with the choice of observing
elevations. These values and maps can be used to calculate and forecast the
magnitude of different instrumental systematics without requiring detailed scan
strategy simulations. As a reference point, we show that inclusion of even a
single boresight rotation angle significantly improves over sky rotation alone
for mitigating these systematics. A standard metric for evaluating
cross-linking is related to one of the parameters studied in this work, so a
corollary of our work is that the cross-linking will suffer from the same CES
limitations and therefore upcoming CMB surveys will unavoidably have poorly
cross-linked regions if they use CESs, regardless of detailed scheduling
choices. Our results are also relevant for non-CMB surveys that perform
constant elevation scans and may have scan-coupled systematics, such as
intensity mapping surveys.
|
Integer quantization of neural networks can be defined as the approximation
of the high precision computation of the canonical neural network formulation,
using reduced integer precision. It plays a significant role in the efficient
deployment and execution of machine learning (ML) systems, reducing memory
consumption and leveraging typically faster computations. In this work, we
present an integer-only quantization strategy for Long Short-Term Memory (LSTM)
neural network topologies, which themselves are the foundation of many
production ML systems. Our quantization strategy is accurate (e.g., it works
well with post-training quantization), efficient and fast to execute (utilizing 8
bit integer weights and mostly 8 bit activations), and is able to target a
variety of hardware (by leveraging instructions sets available in common CPU
architectures, as well as available neural accelerators).
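As a minimal illustration of the integer-only building block, the sketch below performs symmetric per-tensor 8-bit quantization of a weight matrix with an int32-accumulated matmul; the full scheme described above (per-gate scales, activation handling, accelerator targeting) is more involved.

```python
# Sketch: symmetric per-tensor int8 quantization and integer matmul.
import numpy as np

def quantize_int8(w):
    scale = np.max(np.abs(w)) / 127.0            # symmetric per-tensor scale
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def int8_matmul(q_w, s_w, q_x, s_x):
    # Accumulate in int32, then rescale back to float once at the end.
    acc = q_w.astype(np.int32) @ q_x.astype(np.int32)
    return acc * (s_w * s_x)

rng = np.random.default_rng(0)
W, x = rng.normal(size=(8, 8)), rng.normal(size=8)
qW, sW = quantize_int8(W); qx, sx = quantize_int8(x)
print(np.max(np.abs(W @ x - int8_matmul(qW, sW, qx, sx))))  # quantization error
```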
|
Let ${\mathfrak M}=({\mathcal M},\rho)$ be a metric space and let $X$ be a
Banach space. Let $F$ be a set-valued mapping from ${\mathcal M}$ into the
family ${\mathcal K}_m(X)$ of all compact convex subsets of $X$ of dimension at
most $m$. The main result in our recent joint paper with Charles Fefferman
(which is referred to as a ``Finiteness Principle for Lipschitz selections'')
provides efficient conditions for the existence of a Lipschitz selection of
$F$, i.e., a Lipschitz mapping $f:{\mathcal M}\to X$ such that $f(x)\in F(x)$
for every $x\in{\mathcal M}$. We give new alternative proofs of this result in
two special cases. When $m=2$ we prove it for $X={\bf R}^{2}$, and when $m=1$
we prove it for all choices of $X$. Both of these proofs make use of a simple
reiteration formula for the ``core'' of a set-valued mapping $F$, i.e., for a
mapping $G:{\mathcal M}\to{\mathcal K}_m(X)$ which is Lipschitz with respect to
the Hausdorff distance, and such that $G(x)\subset F(x)$ for all $x\in{\mathcal
M}$.
|
Purpose: Quantitative magnetization transfer (qMT) imaging can be used to
quantify the proportion of protons in a voxel attached to macromolecules. Here,
we show that the original qMT balanced steady-state free precession (bSSFP)
model is biased due to over-simplistic assumptions made in its derivation.
Theory and Methods: We present an improved model for qMT bSSFP, which
incorporates finite radio-frequency (RF) pulse effects as well as simultaneous
exchange and relaxation. Further, a correction to finite RF pulse effects for
sinc-shaped excitations is derived. The new model is compared to the original
one in numerical simulations of the Bloch-McConnell equations and in previously
acquired in-vivo data. Results: Our numerical simulations show that the
original signal equation is significantly biased in typical brain tissue
structures (by 7-20%), whereas the new signal equation outperforms the original
one with minimal bias (<1%). It is further shown that the bias of the original
model strongly affects the acquired qMT parameters in human brain structures,
with differences in the clinically relevant pool-size-ratio parameter of up
to 31%. Particularly high biases of the original signal equation are expected
in an MS lesion within diseased brain tissue (due to a low T2/T1-ratio),
demanding a more accurate model for clinical applications. Conclusion: The
improved model for qMT bSSFP is recommended for accurate qMT parameter mapping
in healthy and diseased brain tissue structures.
|
We construct a categorical framework for nonlinear postquantum inference,
with embeddings of convex closed sets of suitable reflexive Banach spaces as
objects and pullbacks of Br\`egman quasi-nonexpansive mappings (in particular,
constrained maximisations of Br\`egman relative entropies) as morphisms. It
provides a nonlinear convex analytic analogue of Chencov's programme of
geometric study of categories of linear positive maps between spaces of states,
a working model of Mielnik's nonlinear transmitters, and a setting for
nonlinear resource theories (with monoids of Br\`egman quasi-nonexpansive maps
as free operations, their asymptotic fixed point sets as free sets, and
Br\`egman relative entropies as resource monotones). We construct a range of
concrete examples for semi-finite JBW-algebras and any W*-algebras. Due to
relative entropy's asymmetry, all constructions have left and right versions,
with Legendre duality inducing categorical equivalence between their
well-defined restrictions. Inner groupoids of these categories implement the
notion of statistical equivalence. The hom-sets of a subcategory of morphisms
given by entropic projections have the structure of partially ordered
commutative monoids (so, they are resource theories in Fritz's sense). Further
restriction of objects to affine sets turns Br\`egman relative entropy into a
functor. Finally, following Lawvere's adjointness paradigm for deductive logic,
but with a semantic twist representing Jaynes' and Chencov's views on
statistical inference, we introduce a category-theoretic multi-(co)agent
setting for inductive inference theories, implemented by families of monads and
comonads. We show that the Br\`egmanian approach provides some special cases of
this setting.
|
Garc\'ia-Aguilar et al. [Phys. Rev. Lett. 126, 038001 (2021)] have shown that
the deformations of "shape-shifting droplets" are consistent with an elastic
model that, unlike previous models, includes the intrinsic curvature of the
frozen surfactant layer. In this Comment, we show that the interplay between
surface tension and intrinsic curvature in their model is in fact
mathematically equivalent to a physically very different phase-transition
mechanism of the same process that we developed previously [Phys. Rev. Lett.
118, 088001 (2017); Phys. Rev. Res. 1, 023017 (2019)]. The mathematical models
cannot therefore distinguish between the two mechanisms, and hence it is not
possible to claim that one mechanism underlies all observed shape-shifting
phenomena without a much more detailed comparison of experiment and theory.
|
We propose a novel scheme for the exact renormalisation group, motivated by
the desire to reduce the complexity of practical computations. The key idea
is to specify renormalisation conditions for all inessential couplings, leaving
us with the task of computing only the flow of the essential ones. To achieve
this aim, we utilise a renormalisation group equation for the effective average
action which incorporates general non-linear field reparameterisations. A
prominent feature of the scheme is that, apart from the renormalisation of the
mass, the propagator evaluated at any constant value of the field maintains its
unrenormalised form. Conceptually, the scheme provides a clearer picture of
renormalisation itself since the redundant, non-physical content is
automatically disregarded in favour of a description based only on quantities
that enter expressions for physical observables. To exemplify the scheme's
utility, we investigate the Wilson-Fisher fixed point in three dimensions at
order two in the derivative expansion. In this case, the scheme removes all
order $\partial^2$ operators apart from the canonical term. Further
simplifications occur at higher orders in the derivative expansion. Although we
concentrate on a minimal scheme that reduces the complexity of computations, we
propose more general schemes where inessential couplings can be tuned to
optimise a given approximation. We further discuss the applicability of the
scheme to a broad range of physical theories.
|
Amorphous dielectric materials have been known to host two-level systems
(TLSs) for more than four decades. Recent developments on superconducting
resonators and qubits enable detailed studies on the physics of TLSs. In
particular, measuring the loss of a device over long time periods (a few days)
allows us to investigate stochastic fluctuations due to the interaction between
TLSs. We measure the energy relaxation time of a frequency-tunable planar
superconducting qubit over time and frequency. The experiments show a variety
of stochastic patterns that we are able to explain by means of extensive
simulations. The model used in our simulations assumes a qubit interacting with
high-frequency TLSs, which, in turn, interact with thermally activated
low-frequency TLSs. Our simulations match the experiments and suggest the
density of low-frequency TLSs is about three orders of magnitude larger than
that of high-frequency ones.
|
Learning from implicit user feedback is challenging as we can only observe
positive samples but never access negative ones. Most conventional methods cope
with this issue by adopting a pairwise ranking approach with negative sampling.
However, the pairwise ranking approach has a severe disadvantage in the
convergence time owing to the quadratically increasing computational cost with
respect to the sample size; it is problematic, particularly for large-scale
datasets and complex models such as neural networks. By contrast, a pointwise
approach does not directly solve a ranking problem, and is therefore inferior
to a pairwise counterpart in top-K ranking tasks; however, it is generally
advantageous with regard to convergence time. This study aims to establish
an approach to learn personalised ranking from implicit feedback, which
reconciles the training efficiency of the pointwise approach and ranking
effectiveness of the pairwise counterpart. The key idea is to estimate the
ranking of items in a pointwise manner; we first reformulate the conventional
pointwise approach based on density ratio estimation and then incorporate the
essence of ranking-oriented approaches (e.g. the pairwise approach) into our
formulation. Through experiments on three real-world datasets, we demonstrate
that our approach not only dramatically reduces the convergence time (one to
two orders of magnitude faster) but also significantly improves the ranking
performance.
|
Full Duplex (FD) radio has emerged as a promising solution to increase the
data rates by up to a factor of two via simultaneous transmission and reception
in the same frequency band. This paper studies a novel hybrid beamforming
(HYBF) design to maximize the weighted sum-rate (WSR) in a single-cell
millimeter wave (mmWave) massive multiple-input-multiple-output (mMIMO) FD
system. Motivated by practical considerations, we assume that the multi-antenna
users and hybrid FD base station (BS) suffer from the limited dynamic range
(LDR) noise due to non-ideal hardware and an impairment aware HYBF approach is
adopted by integrating the traditional LDR noise model in the mmWave band. In
contrast to the conventional HYBF schemes, our design also considers the joint
sum-power and the practical per-antenna power constraints. A novel
interference, self-interference (SI) and LDR noise aware optimal power
allocation scheme for the uplink (UL) users and FD BS is also presented to
satisfy the joint constraints. The maximum achievable gain of a multi-user
mmWave FD system over a fully digital half duplex (HD) system with different
LDR noise levels and numbers of radio-frequency (RF) chains is
investigated. Simulation results show that our design outperforms the HD system
with only a few RF chains at any LDR noise level. The advantage of having
amplitude control at the analog stage is also examined, and additional gain for
the mmWave FD system becomes evident when the number of RF chains at the hybrid
FD BS is small.
|
We study the influence of running vacuum on the baryon-to-photon ratio in
running vacuum models (RVMs). When there exists a non-minimal coupling between
photons and other matter in the expanding universe, the energy-momentum tensor
of photons is no longer conserved, but the energy of photons could remain
conserved. We discuss the conditions for the energy conservation of photons in
RVMs. The photon number density and baryon number density, from the epoch of
photon decoupling to the present day, are obtained in the context of RVMs by
assuming that photons and baryons can be coupled to running vacuum,
respectively. Both cases lead to a time-evolving baryon-to-photon ratio.
However, the evolution of the baryon-to-photon ratio is strictly constrained by
observations. It is found that if the dynamic term of running vacuum is indeed
coupled to photons or baryons, the coefficient of the dynamic term must be
extremely small, which is unnatural. Therefore, our study basically rules out
the possibility that running vacuum is coupled to photons or baryons in RVMs.
|
To extract the liver from medical images is a challenging task due to the
similar intensity values of the liver and adjacent organs, varying contrast
levels, the various kinds of noise associated with medical images, and the
irregular shape of the liver. To address these issues, it is important to
preprocess the medical images, i.e., computerized tomography (CT) and magnetic
resonance imaging (MRI) data, prior to liver analysis and quantification. This
paper investigates the impact of permutations of various preprocessing
techniques for CT images on automated liver segmentation using deep learning,
i.e., the U-Net architecture. The study focuses on Hounsfield Unit (HU)
windowing, contrast limited adaptive histogram equalization (CLAHE), z-score
normalization, median filtering and Block-Matching and 3D (BM3D) filtering. The
segmentation results show that the combination of three techniques - HU
windowing, median filtering and z-score normalization - achieves optimal
performance with Dice coefficients of 96.93%, 90.77% and 90.84% for training,
validation and testing, respectively.
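A short sketch of that winning preprocessing chain (HU windowing, median filtering, z-score normalization) in NumPy/SciPy; the window bounds and kernel size below are typical choices and assumptions, not values taken from the study.

```python
# Sketch: HU windowing -> median filtering -> z-score normalization.
import numpy as np
from scipy.ndimage import median_filter

def preprocess_ct(volume_hu, hu_min=-100, hu_max=400, kernel=3):
    windowed = np.clip(volume_hu, hu_min, hu_max)        # HU windowing
    denoised = median_filter(windowed, size=kernel)      # median filtering
    return (denoised - denoised.mean()) / denoised.std() # z-score normalization

ct = np.random.default_rng(0).integers(-1000, 1000, size=(64, 64)).astype(float)
out = preprocess_ct(ct)
print(out.mean(), out.std())  # ~0 and ~1 after normalization
```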
|
The Robot Operating System 2 (ROS2) targets distributed real-time systems and
is widely used in the robotics community. Especially in these systems, latency
in data processing and communication can lead to instabilities. Though highly
configurable with respect to latency, ROS2 is often used with its default
settings.
In this paper, we investigate the end-to-end latency of ROS2 for distributed
systems with default settings and different Data Distribution Service (DDS)
middlewares. In addition, we profile the ROS2 stack and point out latency
bottlenecks. Our findings indicate that end-to-end latency strongly depends on
the used DDS middleware. Moreover, we show that ROS2 can lead to 50% latency
overhead compared to using low-level DDS communications. Our results imply
guidelines for designing distributed ROS2 architectures and indicate
possibilities for reducing the ROS2 overhead.
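For illustration, a minimal rclpy probe of the kind used for such end-to-end latency measurements: the publisher stamps each message and the subscriber compares the stamp with its own clock. The topic name, rate, and QoS depth are arbitrary choices, not the instrumentation used in the paper.

```python
# Sketch: loopback end-to-end latency probe for a ROS2 topic.
import rclpy
from rclpy.node import Node
from rclpy.time import Time
from std_msgs.msg import Header

class LatencyProbe(Node):
    def __init__(self):
        super().__init__('latency_probe')
        self.pub = self.create_publisher(Header, 'latency', 10)
        self.sub = self.create_subscription(Header, 'latency', self.on_msg, 10)
        self.timer = self.create_timer(0.1, self.on_timer)  # publish at 10 Hz

    def on_timer(self):
        msg = Header()
        msg.stamp = self.get_clock().now().to_msg()  # record the send time
        self.pub.publish(msg)

    def on_msg(self, msg):
        dt = self.get_clock().now() - Time.from_msg(msg.stamp)
        self.get_logger().info(f'latency: {dt.nanoseconds / 1e6:.3f} ms')

rclpy.init()
rclpy.spin(LatencyProbe())
```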
|
We study a long-recognised but under-appreciated symmetry called "dynamical
similarity" and illustrate its relevance to many important conceptual problems
in fundamental physics. Dynamical similarities are general transformations of a
system where the unit of Hamilton's principal function is rescaled, and
therefore represent a kind of dynamical scaling symmetry with formal properties
that differ from many standard symmetries. To study this symmetry, we develop a
general framework for symmetries that distinguishes the observable and surplus
structures of a theory by using the minimal freely specifiable initial data for
the theory that is necessary to achieve empirical adequacy. This framework is
then applied to well-studied examples including Galilean invariance and the
symmetries of the Kepler problem. We find that our framework gives a precise
dynamical criterion for identifying the observables of those systems, and that
those observables agree with epistemic expectations. We then apply our
framework to dynamical similarity. First we give a general definition of
dynamical similarity. Then we show, with the help of some previous results, how
the dynamics of our observables leads to singularity resolution and the
emergence of an arrow of time in cosmology.
|
State-space models (SSM) with Markov switching offer a powerful framework for
detecting multiple regimes in time series, analyzing mutual dependence and
dynamics within regimes, and assessing transitions between regimes. These
models however present considerable computational challenges due to the
exponential number of possible regime sequences to account for. In addition,
high dimensionality of time series can hinder likelihood-based inference. This
paper proposes novel statistical methods for Markov-switching SSMs using
maximum likelihood estimation, Expectation-Maximization (EM), and parametric
bootstrap. We develop solutions for initializing the EM algorithm, accelerating
convergence, and conducting inference that are ideally suited to massive
spatio-temporal data such as brain signals. We evaluate these methods in
simulations and present applications to EEG studies of epilepsy and of motor
imagery. All proposed methods are implemented in a MATLAB toolbox available at
https://github.com/ddegras/switch-ssm.
|
An increasing amount of location-based service (LBS) data is being
accumulated, which helps in studying urban dynamics and human mobility. GPS
coordinates and other location indicators are normally low dimensional and
represent only spatial proximity, making them difficult for machine learning
models to utilize effectively in Geo-aware applications. Existing location
embedding methods are mostly tailored for specific problems taking place
within areas of interest. At the scale of a city or even a country, existing
approaches suffer from extensive computational cost and significant data
sparsity. Different from existing studies, we propose to learn
representations through a GCN-aided skip-gram model named GCN-L2V by
considering both spatial connection and human mobility. With a flow graph and a
spatial graph, it embeds context information into vector representations.
GCN-L2V is able to capture relationships among locations and provide a better
notion of similarity in a spatial environment. Across quantitative experiments
and case studies, we empirically demonstrate that representations learned by
GCN-L2V are effective. As far as we know, this is the first study that provides
a fine-grained location embedding at the city level using only LBS records.
GCN-L2V is a general-purpose embedding model with high flexibility and can be
applied in downstream Geo-aware applications.
|
The Earth's magnetotail is characterized by stretched magnetic field lines.
Energetic particles are effectively scattered due to the field-line curvature,
which then leads to isotropization of energetic particle distributions and
particle precipitation into the Earth's atmosphere. Measurements of this
precipitation by low-altitude spacecraft are thus often used to remotely probe
the magnetotail current sheet configuration. This configuration may include
spatially localized maxima of the curvature radius at the equator (due to
localized humps of the equatorial magnetic field magnitude) that reduce the energetic
particle scattering and precipitation. Therefore, the measured precipitation
patterns are related to the spatial distribution of the equatorial curvature
radius that is determined by the magnetotail current sheet configuration. In
this study, we show that, contrary to previous thoughts, the magnetic field
line configuration with the localized curvature radius maximum can actually
enhance the scattering and subsequent precipitation. The spatially localized
magnetic field dipolarization (magnetic field humps) can significantly curve
magnetic field lines far from the equator and create off-equatorial minima in
the curvature radius. Scattering of energetic particles in these off-equatorial
regions alters the scattering (and precipitation) patterns, which has not been
studied yet. We discuss our results in the context of remote-sensing the
magnetotail current sheet configuration with low-altitude spacecraft
measurements.
|
Since the heavy neutrinos of the inverse seesaw mechanism mix largely with
the standard ones, the charged currents formed with them and the muons have the
potential of generating robust and positive contribution to the anomalous
magnetic moment of the muon. However, bounds from the non-unitarity of the
leptonic mixing matrix may restrict the parameters of the mechanism so
severely that, depending on the framework in which the mechanism is
implemented, it may be unable to explain the recent muon g-2 result. In this
paper we show that this happens when we implement the mechanism in the
standard model and in two versions of the 3-3-1 models.
|
Using an argument of Baldwin--Hu--Sivek, we prove that if $K$ is a hyperbolic
fibered knot with fiber $F$ in a closed, oriented $3$--manifold $Y$, and
$\widehat{HFK}(Y,K,[F], g(F)-1)$ has rank $1$, then the monodromy of $K$ is
freely isotopic to a pseudo-Anosov map with no fixed points. In particular,
this shows that the monodromy of a hyperbolic L-space knot is freely isotopic
to a map with no fixed points.
|
The objective of Weakly-supervised Temporal Action Localization (WS-TAL) is to
localize all action instances in an untrimmed video with only video-level
supervision. Due to the lack of frame-level annotations during training,
current WS-TAL methods rely on attention mechanisms to localize the foreground
snippets or frames that contribute to the video-level classification task. This
strategy frequently confuse context with the actual action, in the localization
result. Separating action and context is a core problem for precise WS-TAL, but
it is very challenging and has been largely ignored in the literature. In this
paper, we introduce an Action-Context Separation Network (ACSNet) that
explicitly takes into account context for accurate action localization. It
consists of two branches (i.e., the Foreground-Background branch and the
Action-Context branch). The Foreground-Background branch first distinguishes
foreground from background within the entire video while the Action-Context
branch further separates the foreground as action and context. We associate
video snippets with two latent components (i.e., a positive component and a
negative component), and their different combinations can effectively
characterize foreground, action and context. Furthermore, we introduce extended
labels with auxiliary context categories to facilitate the learning of
action-context separation. Experiments on the THUMOS14 and ActivityNet
v1.2/v1.3 datasets demonstrate that ACSNet outperforms existing
state-of-the-art WS-TAL methods by a large margin.
|
Intrinsic Image Decomposition is the open problem of generating the
constituents of an image. Generating reflectance and shading from a single
image is a challenging task, especially when no ground truth is available.
Unsupervised learning approaches for decomposing a single image into
reflectance and shading are lacking. We propose a neural network
architecture capable of this decomposition using physics-based parameters
derived from the image. Through experimental results, we show that (a) the
proposed methodology outperforms the existing deep learning-based IID
techniques and (b) the derived parameters improve the efficacy significantly.
We conclude with a closer analysis of the results (numerical and example
images) showing several avenues for improvement.
|
We consider the barotropic Navier-Stokes system describing the motion of a
compressible viscous fluid confined to a bounded domain driven by time periodic
inflow/outflow boundary conditions. We show that the problem admits a time
periodic solution in the class of weak solutions satisfying the energy
inequality.
|
Understanding the features learned by deep models is important from a model
trust perspective, especially as deep systems are deployed in the real world.
Most recent approaches for deep feature understanding or model explanation
focus on highlighting input data features that are relevant for classification
decisions. In this work, we instead take the perspective of relating deep
features to well-studied, hand-crafted features that are meaningful for the
application of interest. We propose a methodology and set of systematic
experiments for exploring deep features in this setting, where input feature
importance approaches for deep feature understanding do not apply. Our
experiments focus on understanding which hand-crafted and deep features are
useful for the classification task of interest, how robust these features are
for related tasks and how similar the deep features are to the meaningful
hand-crafted features. Our proposed method is general to many application areas
and we demonstrate its utility on orchestral music audio data.
|
An automorphism of a rooted spherically homogeneous tree is settled if it
satisfies certain conditions on the growth of cycles at finite levels of the
tree. In this paper, we consider a conjecture by Boston and Jones that the
image of an arboreal representation of the absolute Galois group of a number
field in the automorphism group of a tree has a dense subset of settled
elements. Inspired by analogous notions in the theory of compact Lie groups, we
introduce the concepts of a maximal torus and a Weyl group for actions of
profinite groups on rooted trees, and we show that the Weyl group contains
important information about settled elements. We study maximal tori and their
Weyl groups in the images of arboreal representations associated to quadratic
polynomials over algebraic number fields, and in branch groups.
|
Open quantum systems exhibit a rich phenomenology, in comparison to closed
quantum systems that evolve unitarily according to the Schr\"odinger equation.
The dynamics of an open quantum system are typically classified into Markovian
and non-Markovian, depending on whether the dynamics can be decomposed into
valid quantum operations at any time scale. Since Markovian evolutions are
easier to simulate, compared to non-Markovian dynamics, it is reasonable to
assume that non-Markovianity can be employed for useful quantum-technological
applications. Here, we demonstrate the usefulness of non-Markovianity for
preserving correlations and coherence in quantum systems. For this, we consider
a broad class of qubit evolutions, having a decoherence matrix separated from
zero for large times. While any such Markovian evolution leads to an
exponential loss of correlations, non-Markovianity can help to preserve
correlations even in the limit $t \rightarrow \infty$. For covariant qubit
evolutions, we also show that non-Markovianity can be used to preserve quantum
coherence at all times, which is an important resource for quantum metrology.
We explicitly demonstrate this effect experimentally with linear optics, by
implementing the required evolution that is non-Markovian at all times.
|
People hope that automated driving technology is always in a stable and
controllable state; specifically, this controllability can be divided into
controllable planning, controllable responsibility, and controllable
information. When controllability is undermined, problems arise, e.g., the
trolley dilemma, responsibility attribution, information leakage, and
security. This article discusses these three types of issues separately and
clarifies common misunderstandings.
|
In a recent paper with J.-P. Nicolas [J.-P. Nicolas and P.T. Xuan, Annales
Henri Poincare 2019], we studied the peeling for scalar fields on Kerr metrics.
The present work extends these results to Dirac fields on the same geometrical
background. We follow the approach initiated by L.J. Mason and J.-P. Nicolas
[L. Mason and J.-P. Nicolas, J.Inst.Math.Jussieu 2009; L. Mason and J.-P.
Nicolas, J.Geom.Phys 2012] on the Schwarzschild spacetime and extended to Kerr
metrics for scalar fields. The method combines the Penrose conformal
compactification and geometric energy estimates in order to work out a
definition of the peeling at all orders in terms of Sobolev regularity near
$\mathscr{I}$, instead of ${\mathcal C}^k$ regularity at $\mathscr{I}$, then
provides the optimal spaces of initial data such that the associated solution
satisfies the peeling at a given order. The results confirm that the analogous
decay and regularity assumptions on initial data in Minkowski and in Kerr
produce the same regularity across null infinity. Our results are local near
spacelike infinity and are valid for all values of the angular momentum of the
spacetime, including for fast Kerr metrics.
|
Two-dimensional (2D) hybrid organic-inorganic perovskites (HOIPs) are
introducing new directions in the 2D materials landscape. The coexistence of
ferroelectricity and spin-orbit interactions play a key role in their
optoelectronic properties. We perform a detailed study on a recently
synthesized ferroelectric 2D-HOIP, (AMP)PbI$_4$ (AMP =
4-aminomethyl-piperidinium). The calculated polarization and Rashba parameter
are in excellent agreement with experimental values. We report a striking new
effect, i.e., an extraordinarily large Rashba anisotropy that is tunable by
ferroelectric polarization: as polarization is reversed, not only the spin
texture chirality is inverted, but also the major and minor axes of the Rashba
anisotropy ellipse in k-space are interchanged - a pseudo-rotation. A $k \cdot
p$ model Hamiltonian and symmetry-mode analysis reveal a quadrilinear coupling
between the cation-rotation modes responsible for the Rashba ellipse
pseudo-rotation, the framework rotation, and the polarization. These findings
may provide new avenues for spin-optoelectronic devices such as spin valves or
spin FETs.
|
This paper presents a supervised learning method to generate continuous
cost-to-go functions of non-holonomic systems directly from the workspace
description. Supervision from informative examples reduces training time and
improves network performance. The manifold representing the optimal
trajectories of a non-holonomic system has high-curvature regions which cannot
be efficiently captured with uniform sampling. To address this challenge, we
present an adaptive sampling method which makes use of sampling-based planners
along with local, closed-form solutions to generate training samples. The
cost-to-go function over a specific workspace is represented as a neural
network whose weights are generated by a second, higher order network. The
networks are trained in an end-to-end fashion. In our previous work, this
architecture was shown to successfully learn to generate the cost-to-go
functions of holonomic systems using uniform sampling. In this work, we show
that uniform sampling fails for non-holonomic systems. However, with the
proposed adaptive sampling methodology, our network can generate near-optimal
trajectories for non-holonomic systems while avoiding obstacles. Experiments
show that our method is two orders of magnitude faster compared to traditional
approaches in cluttered environments.
|
We investigate the structure of the meson Regge trajectories based on the
quadratic form of the spinless Salpeter-type equation. It is found that the
forms of the Regge trajectories depend on the energy region. As the employed
Regge trajectory formula does not match the energy region, the fitted
parameters neither have explicit physical meanings nor obey the constraints
although the fitted Regge trajectory can give the satisfactory predictions if
the employed formula is appropriate mathematically. Moreover, the consistency
of the Regge trajectories obtained from different approaches is discussed. And
the Regge trajectories for different mesons are presented. Finally, we show
that the masses of the constituents will come into the slope and explain why
the slopes of the fitted linear Regge trajectories vary with different kinds of
mesons.
|
Higgs-portal effective field theories are widely used as benchmarks in order
to interpret collider and astroparticle searches for dark matter (DM)
particles. To assess the validity of these effective models, it is important
to confront them with concrete realizations that are complete in the
ultraviolet regime. In this paper, we compare effective Higgs-portal models with scalar,
fermionic and vector DM with a series of increasingly complex realistic models,
taking into account all existing constraints from collider and astroparticle
physics. These complete realizations include the inert doublet with scalar DM,
the singlet-doublet model for fermionic DM and models based on spontaneously
broken dark SU(2) and SU(3) gauge symmetries for vector boson DM. We also
discuss the simpler scenarios in which a new scalar singlet field that mixes
with the standard Higgs field is introduced with minimal couplings to
isosinglet spin--$0, \frac12$ and 1 DM states. We show that in large regions of
the parameter space of these models, the effective Higgs-portal approach
provides a consistent limit and thus, can be safely adopted, in particular for
the interpretation of searches for invisible Higgs boson decays at the LHC. The
phenomenological implications of assuming or not that the DM states generate
the correct cosmological relic density are also discussed.
|
We present geometric Bayesian active learning by disagreements (GBALD), a
framework that performs BALD on its core-set construction interacting with
model uncertainty estimation. Technically, GBALD constructs its core-set on an
ellipsoid, rather than the typical sphere, preventing low-representative
elements near spherical boundaries from being selected. The improvements are
twofold: 1) relieving an uninformative prior and 2) reducing redundant
estimations. Theoretically, geodesic search on an ellipsoid can derive a
tighter lower bound on error and achieve zero error more easily than search on
a sphere. Experiments show that GBALD is only slightly perturbed by noisy and
repeated samples, and outperforms BALD, BatchBALD and other existing deep
active learning approaches.
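For reference, the sketch below computes the standard BALD acquisition score (the mutual information between predictions and model parameters) from Monte Carlo dropout samples; GBALD's ellipsoidal core-set construction is not reproduced here.

```python
# Sketch: BALD acquisition scores from MC-dropout predictive samples.
import numpy as np

def bald_scores(probs):
    # probs: (n_mc, n_points, n_classes) softmax outputs of stochastic passes
    mean_p = probs.mean(axis=0)
    H_mean = -(mean_p * np.log(mean_p + 1e-12)).sum(axis=-1)  # predictive entropy
    mean_H = -(probs * np.log(probs + 1e-12)).sum(axis=-1).mean(axis=0)
    return H_mean - mean_H                                    # mutual information

rng = np.random.default_rng(0)
logits = rng.normal(size=(20, 100, 10))
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
query = np.argsort(bald_scores(probs))[::-1][:10]  # most informative points
print(query)
```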
|
Facial Expression Recognition (FER) is one of the most important topics in
Human-Computer Interaction (HCI). In this work we report details and
experimental results of a facial expression recognition method based on
state-of-the-art approaches. We fine-tuned a SeNet deep learning architecture
pre-trained on the well-known VGGFace2 dataset, on the AffWild2 facial
expression recognition dataset. The main goal of this work is to define a
baseline for a novel method we are going to propose in the near future. This
paper is also required by the Affective Behavior Analysis in-the-wild (ABAW)
competition in order to evaluate this approach on the test set. The results
reported here are on the validation set and relate to the Expression
Challenge part (seven basic emotion recognition) of the competition. We will
update them as soon as the test set results are published on the leaderboard.
|
In this paper we compute the Newton polytope $\mathcal M_A$ of the Morse
discriminant in the space of univariate polynomials with the given support set
$A.$ Namely, we establish a surjection between the set of all combinatorial
types of Morse univariate tropical polynomials and the vertices of $\mathcal
M_A.$
|
Breast cancer is the most common invasive cancer in women, and the second
main cause of death. Breast cancer screening is an efficient method to detect
indeterminate breast lesions early. The common screening approaches for women
are tomosynthesis and mammography imaging. However, traditional manual
diagnosis requires an intense workload by pathologists, who are prone to
diagnostic errors. Thus, the aim of this study is to build a deep convolutional
neural network method for automatic detection, segmentation, and classification
of breast lesions in mammography images. Based on deep learning, the Mask-CNN
(RoIAlign) method was developed for feature selection and extraction, and the
classification was carried out by a DenseNet architecture. Finally, the
precision and accuracy of the model are evaluated by a cross-validation matrix
and an AUC curve. To summarize, the findings of this study may help improve
diagnosis and efficiency in automatic tumor localization through medical image
classification.
|
We have studied experimentally the generation of vortex flow by gravity waves
with a frequency of 2.34 Hz excited on the water surface at an angle
$2\theta = \arctan(3/4) \approx 36^\circ$ to each other. The resulting
horizontal surface flow has a stripe-like spatial structure. The width of the
stripes, $L = \pi/(2k\sin\theta)$, is determined by the wave vector k of the surface waves
and the angle between them, and the length of the stripes is limited by the
system size. It was found that the vertical vorticity $\Omega$ of the current
on the fluid surface is proportional to the product of wave amplitudes, but its
value is much higher than the value corresponding to the Stokes drift and it
continues to grow with time even after the wave motion reaches a stationary
regime. We demonstrate that the measured dependence $\Omega$(t) can be
described within the recently developed model that takes into account the
Eulerian contribution to the generated vortex flow and the effect of surface
contamination. This model contains a free parameter that describes the elastic
properties of the contaminated surface, and we also show that the found value
of this parameter is in reasonable agreement with the measured decay rate of
surface waves.
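As a quick plausibility check of the quoted geometry, the snippet below evaluates the stripe width $L = \pi/(2k\sin\theta)$, obtaining k from the deep-water dispersion relation $\omega^2 = gk$ at the 2.34 Hz frequency; the dispersion relation is an assumption, since the experimental wave vector is not quoted above.

```python
# Sketch: stripe width of the wave-driven vortex flow from quoted parameters.
import numpy as np

f = 2.34                       # wave frequency, Hz (from the text)
theta = np.arctan(3 / 4) / 2   # half the angle between the two wave vectors
g = 9.81
k = (2 * np.pi * f) ** 2 / g   # deep-water gravity-wave dispersion (assumed)
L = np.pi / (2 * k * np.sin(theta))
print(f'k = {k:.1f} 1/m, stripe width L = {L:.3f} m')
```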
|
We investigate the impact of photochemical hazes and disequilibrium gases on
the thermal structure of hot-Jupiters, using a detailed 1-D
radiative-convective model. We find that the inclusion of photochemical hazes
results in major heating of the upper and cooling of the lower atmosphere.
Sulphur-containing species, such as SH, S$_2$ and S$_3$, provide significant
opacity in the middle atmosphere and lead to local heating near 1 mbar, while
OH, CH, NH, and CN radicals produced by the photochemistry affect the thermal
structure near 1 $\mu$bar. Furthermore, we show that the modifications of the
thermal structure by photochemical gases and hazes can have important
ramifications for the interpretation of transit observations. Specifically, our
study for the hazy HD 189733 b shows that the hotter upper atmosphere resulting
from the inclusion of photochemical haze opacity drives an expansion of the
atmosphere, and thus a steeper transit signature in the UV-visible part of the
spectrum. In addition, the temperature changes in the photosphere also affect
the secondary eclipse spectrum. For HD 209458 b we find that a small haze
opacity could be present in this atmosphere, at pressures below 1 mbar, which
could be a result of both photochemical hazes and condensates. Our results
motivate the inclusion of radiative feedback from photochemical hazes in
general circulation models for a proper evaluation of atmospheric dynamics.
|
We compare the macroscopic and the local plastic behavior of a model
amorphous solid based on two radically different numerical descriptions. On the
one hand, we simulate glass samples by atomistic simulations. On the other, we
implement a mesoscale elasto-plastic model based on a solid-mechanics
description. The latter is extended to consider the anisotropy of the yield
surface via statistically distributed local and discrete weak planes on which
shear transformations can be activated. To make the comparison as quantitative
as possible, we consider the simple case of a quasistatically driven
two-dimensional system in the stationary flow state and compare mechanical
observables measured on both models over the same length scales. We show that
the macroscale response, including its fluctuations, can be quantitatively
recovered for a range of elasto-plastic mesoscale parameters. Using a newly
developed method that makes it possible to probe the local yield stresses in
atomistic simulations, we calibrate the local mechanical response of the
elasto-plastic model at different coarse-graining scales. In this case, the
calibration shows a qualitative agreement only for an optimized subset of
mesoscale parameters and for sufficiently coarse probing length scales. This
calibration allows us to establish a length scale for the mesoscopic elements
that corresponds to an upper bound of the shear transformation size, a key
physical parameter in elasto-plastic models. We find that certain properties
naturally emerge from the elasto-plastic model. In particular, we show that the
elasto-plastic model reproduces the Bauschinger effect, namely the
plasticity-induced anisotropy in the stress-strain response. We discuss the
successes and failures of our approach, the impact of different model
ingredients and propose future research directions for quantitative multi-scale
models of amorphous plasticity.
|
This paper expounds very innovative results achieved between the mid-14th
century and the beginning of the 16th century by Indian astronomers belonging
to the so-called "M\=adhava school". These results were in keeping with
research in trigonometry: they concern the calculation of the eighth of the
circumference of a circle. They not only present an analog of the series
expansion of arctan(1) usually known as the "Leibniz series", but also other
analogs of series expansions whose convergence is much faster. These
series expansions are derived from evaluations of the remainders of the partial sums
of the primordial series, by means of some convergents of generalized continued
fractions. A justification of these results in modern terms is provided, which
aims at restoring their full mathematical interest.
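For a modern, computational flavour of these remainder evaluations, the sketch
below compares raw partial sums of the arctan(1) ("Leibniz") series for $\pi/4$
with sums corrected by the successive remainder approximants $1/(4n)$,
$n/(4n^2+1)$ and $(n^2+1)/(4n^3+5n)$ commonly attributed to the M\=adhava
school in the secondary literature; these exact forms are assumptions of this
illustration, not the paper's reconstruction.

    from math import pi

    def leibniz_partial(n):
        # S_n = 1 - 1/3 + 1/5 - ... with n terms
        return sum((-1)**k / (2*k + 1) for k in range(n))

    # Successive approximants of the remainder after n terms (assumed forms).
    corrections = [
        lambda n: 1 / (4*n),
        lambda n: n / (4*n**2 + 1),
        lambda n: (n**2 + 1) / (4*n**3 + 5*n),
    ]

    n = 20
    s = leibniz_partial(n)
    print(f"raw partial sum error: {abs(s - pi/4):.2e}")
    for i, F in enumerate(corrections, 1):
        corrected = s + (-1)**n * F(n)   # sign matches the first omitted term
        print(f"correction {i} error:   {abs(corrected - pi/4):.2e}")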
|
Radio relics are the manifestation of electrons presumably being shock
(re-)accelerated to high energies in the outskirts of galaxy clusters. However,
estimates of the shocks' strength yield different results when measured with
radio or X-ray observations. In general, Mach numbers obtained from radio
observations are larger than the corresponding X-ray measurements. In this
work, we investigate this Mach number discrepancy. For this purpose, we used
the cosmological code ENZO to simulate a sample of galaxy clusters that host
bright radio relics. For each relic, we computed the radio Mach number from the
integrated radio spectrum and the X-ray Mach number from the X-ray surface
brightness and temperature jumps. Our analysis suggests that the differences in
the Mach number estimates follow from the way in which different observables
are related to different parts of the underlying Mach number distribution:
radio observations are more sensitive to the high Mach numbers present only in
a small fraction of a shock's surface, while X-ray measurements reflect the
average of the Mach number distribution. Moreover, X-ray measurements are very
sensitive to the relic's orientation. If the same relic is observed from
different sides, the measured X-ray Mach number varies significantly. On the
other hand, the radio measurements are more robust, as they are unaffected by
the relic's orientation.
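For illustration, the snippet below inverts the textbook
diffusive-shock-acceleration relation $M = \sqrt{(2\alpha_{\rm
inj}+3)/(2\alpha_{\rm inj}-1)}$, with $\alpha_{\rm inj} = \alpha_{\rm int} -
0.5$ and the convention $S_\nu \propto \nu^{-\alpha}$, to obtain a radio Mach
number from an integrated spectral index; this is the standard relation, not
necessarily the exact estimator used in the simulation analysis.

    import numpy as np

    def radio_mach(alpha_int):
        """Mach number from the integrated radio spectral index (S ~ nu^-alpha),
        using the standard DSA relation with alpha_inj = alpha_int - 0.5."""
        alpha_inj = alpha_int - 0.5
        return np.sqrt((2*alpha_inj + 3) / (2*alpha_inj - 1))

    for a in (1.1, 1.3, 1.5):
        print(f"alpha_int = {a:.1f} -> M_radio = {radio_mach(a):.2f}")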
|
In this paper, we present a sharp analysis of an alternating gradient
descent algorithm which is used to solve the covariate adjusted precision
matrix estimation problem in the high dimensional setting. Without the
resampling assumption, we demonstrate that this algorithm not only enjoys a
linear rate of convergence, but also attains the optimal statistical rate
(i.e., minimax rate). Moreover, our analysis also characterizes the time-data
tradeoffs in the covariate adjusted precision matrix estimation problem.
Numerical experiments are provided to verify our theoretical results.
|
FISTA is a popular convex optimisation algorithm which is known to converge
at an optimal rate whenever the optimisation domain is contained in a suitable
Hilbert space. We propose a modified algorithm where each iteration is
performed in a subspace, and that subspace is allowed to change at every
iteration. Analytically, this allows us to guarantee convergence in a Banach
space setting, although at a reduced rate depending on the conditioning of the
specific problem. Numerically we show that a greedy adaptive choice of
discretisation can greatly increase the time and memory efficiency in infinite
dimensional Lasso optimisation problems.
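For reference, a minimal NumPy sketch of the classical Hilbert-space FISTA
iteration for the Lasso problem $\min_x \frac{1}{2}\|Ax-b\|^2 + \lambda\|x\|_1$
is given below; the proposed subspace-changing variant is not reproduced here,
and the step size and problem data are placeholder assumptions.

    import numpy as np

    def fista_lasso(A, b, lam, n_iter=200):
        """Classical FISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
        L = np.linalg.norm(A, 2)**2           # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1]); y = x.copy(); t = 1.0
        for _ in range(n_iter):
            grad = A.T @ (A @ y - b)
            z = y - grad / L
            x_new = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
            t_new = (1 + np.sqrt(1 + 4 * t**2)) / 2
            y = x_new + ((t - 1) / t_new) * (x_new - x)                # momentum step
            x, t = x_new, t_new
        return x

    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 200)); x_true = np.zeros(200); x_true[:5] = 1.0
    b = A @ x_true
    print(np.round(fista_lasso(A, b, lam=0.1)[:8], 2))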
|
When two identical two-dimensional (2D) periodic lattices are stacked in
parallel after rotating one layer by a certain angle relative to the other
layer, the resulting bilayer system can lose lattice periodicity completely and
become a 2D quasicrystal. Twisted bilayer graphene with 30-degree rotation is a
representative example. We show that such quasicrystalline bilayer systems
generally develop macroscopically degenerate localized zero-energy states
(ZESs) in the strong-coupling limit, where the interlayer couplings are
overwhelmingly larger than the intralayer couplings. The emergent chiral
symmetry in the strong-coupling limit and the aperiodicity of bilayer quasicrystals
guarantee the existence of the ZESs. The macroscopically degenerate ZESs are
analogous to the flat bands of periodic systems, in that both are composed of
localized eigenstates, which give divergent density of states. For monolayers,
we consider the triangular, square, and honeycomb lattices, composed of a
homogeneous tiling of three possible planar regular polygons: the equilateral
triangle, square, and regular hexagon. We construct a compact theoretical
framework, which we call the quasiband model, that describes the low energy
properties of bilayer quasicrystals and counts the number of ZESs using a
subset of Bloch states of monolayers. We also propose a simple geometric scheme
in real space which can show the spatial localization of ZESs and count their
number. Our work clearly demonstrates that bilayer quasicrystals in the
strong-coupling limit are an ideal playground to study the intriguing interplay of
flat band physics and the aperiodicity of quasicrystals.
|
In this paper, we delve into semi-supervised object detection where unlabeled
images are leveraged to break through the upper bound of fully-supervised
object detection models. Previous semi-supervised methods based on pseudo
labels are severely degraded by noise and prone to overfitting to noisy labels,
and are thus deficient in exploiting the diverse knowledge in unlabeled data. To address
this issue, we propose a data-uncertainty guided multi-phase learning method
for semi-supervised object detection. We comprehensively consider divergent
types of unlabeled images according to their difficulty levels, utilize them in
different phases and ensemble models from different phases together to generate
ultimate results. Image uncertainty guided easy data selection and region
uncertainty guided RoI Re-weighting are involved in multi-phase learning and
enable the detector to concentrate on more certain knowledge. Through extensive
experiments on PASCAL VOC and MS COCO, we demonstrate that our method performs
markedly better than baseline approaches, outperforming them by a large
margin: more than 3% on VOC and 2% on COCO.
|
This paper theoretically investigates the following empirical phenomenon:
given a high-complexity network with poor generalization bounds, one can
distill it into a network with nearly identical predictions but low complexity
and vastly smaller generalization bounds. The main contribution is an analysis
showing that the original network inherits this good generalization bound from
its distillation, assuming the use of well-behaved data augmentation. This
bound is presented both in an abstract and in a concrete form, the latter
complemented by a reduction technique to handle modern computation graphs
featuring convolutional layers, fully-connected layers, and skip connections,
to name a few. To round out the story, a (looser) classical uniform convergence
analysis of compression is also presented, as well as a variety of experiments
on CIFAR and MNIST demonstrating similar generalization performance between the
original network and its distillation.
|
Distributed networks and real-time systems are becoming the most important
components for the new computer age, the Internet of Things (IoT), with huge
data streams or data sets generated from sensors and data generated from
existing legacy systems. The data generated offers the ability to measure,
infer and understand environmental indicators, from delicate ecologies and
natural resources to urban environments. This can be achieved through the
analysis of the heterogeneous data sources (structured and unstructured). In
this paper, we propose a distributed framework Event STream Processing Engine
for Environmental Monitoring Domain (ESTemd) for the application of stream
processing on heterogeneous environmental data. Our work in this area
demonstrates the useful role big data techniques can play in environmental
decision support, early warning, and forecasting systems. The proposed
framework addresses the challenges of data heterogeneity from heterogeneous
systems and real time processing of huge environmental datasets through a
publish/subscribe method via a unified data pipeline with the application of
Apache Kafka for real time analytics.
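To make the publish/subscribe pipeline concrete, here is a minimal sketch
using the kafka-python client; the broker address, topic name, and JSON
payload schema are illustrative assumptions and are not part of the ESTemd
specification.

    import json
    from kafka import KafkaProducer, KafkaConsumer

    # Producer side: a sensor gateway publishing heterogeneous readings onto a
    # unified topic (topic name and schema are assumptions).
    producer = KafkaProducer(
        bootstrap_servers="localhost:9092",
        value_serializer=lambda m: json.dumps(m).encode("utf-8"),
    )
    producer.send("env-monitoring", {"sensor": "pm25", "value": 12.4, "unit": "ug/m3"})
    producer.flush()

    # Consumer side: a stream-processing worker subscribing to the same topic.
    consumer = KafkaConsumer(
        "env-monitoring",
        bootstrap_servers="localhost:9092",
        value_deserializer=lambda m: json.loads(m.decode("utf-8")),
        auto_offset_reset="earliest",
    )
    for record in consumer:        # blocks until a record arrives
        print(record.value)        # route to analytics / early-warning logic
        break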
|
Morphological Segmentation involves decomposing words into morphemes, the
smallest meaning-bearing units of language. This is an important NLP task for
morphologically-rich agglutinative languages such as the Southern African Nguni
language group. In this paper, we investigate supervised and unsupervised
models for two variants of morphological segmentation: canonical and surface
segmentation. We train sequence-to-sequence models for canonical segmentation,
where the underlying morphemes may not be equal to the surface form of the
word, and Conditional Random Fields (CRF) for surface segmentation.
Transformers outperform LSTMs with attention on canonical segmentation,
obtaining an average F1 score of 72.5% across 4 languages. Feature-based CRFs
outperform bidirectional LSTM-CRFs to obtain an average of 97.1% F1 on surface
segmentation. In the unsupervised setting, an entropy-based approach using a
character-level LSTM language model fails to outperform a Morfessor baseline,
while on some of the languages neither approach performs much better than a
random baseline. We hope that the high performance of the supervised
segmentation models will help to facilitate the development of better NLP tools
for Nguni languages.
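As a hedged sketch of a feature-based CRF for surface segmentation, the
snippet below tags each character as beginning (B) or continuing (M) a surface
morph with sklearn-crfsuite; the toy word, feature template, and label scheme
are illustrative assumptions rather than the paper's exact setup.

    import sklearn_crfsuite

    def char_features(word, i):
        # A tiny character-level feature template (assumed, for illustration).
        return {
            "char": word[i],
            "prev": word[i-1] if i > 0 else "<s>",
            "next": word[i+1] if i < len(word) - 1 else "</s>",
            "pos": i,
        }

    # Toy training pair: "abelungu" segmented as "abe-lungu" (illustrative only).
    words = ["abelungu"]
    labels = [["B", "M", "M", "B", "M", "M", "M", "M"]]  # B marks a morph start
    X = [[char_features(w, i) for i in range(len(w))] for w in words]

    crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=50)
    crf.fit(X, labels)
    print(crf.predict(X)[0])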
|
We propose and study a new mathematical model of the human immunodeficiency
virus (HIV). The main novelty is to consider that the antibody growth depends
not only on the virus and on the antibodies concentration but also on the
uninfected cells concentration. The model consists of five nonlinear
differential equations describing the evolution of the uninfected cells, the
infected ones, the free viruses, and the adaptive immunity. The adaptive immune
response is represented by the cytotoxic T-lymphocytes (CTL) cells and the
antibodies with the growth function supposed to be trilinear. The model
includes two kinds of treatments. The objective of the first one is to reduce
the number of infected cells, while the aim of the second is to block free
viruses. Firstly, the positivity and the boundedness of solutions are
established. After that, the local stability of the disease-free steady state
and of the infection steady states is characterized. Next, an optimal control
problem is posed and investigated. Finally, numerical simulations are performed
in order to show the behavior of solutions and the effectiveness of the two
incorporated treatments via an efficient optimal control strategy.
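A minimal numerical sketch of integrating such a five-equation system with
scipy is given below; the right-hand sides, the trilinear antibody growth term
$g\,x\,v\,w$ (depending on uninfected cells, virus and antibodies), the
constant stand-ins for the two treatment controls, and all parameter values
are illustrative assumptions, not the paper's calibrated model.

    import numpy as np
    from scipy.integrate import solve_ivp

    # Illustrative parameters (assumed, not the paper's values).
    lam, d, beta, a, p, k, mu, q, c, b_, g, h = \
        10.0, 0.01, 2e-4, 0.5, 1.0, 50.0, 3.0, 0.1, 0.2, 0.15, 1e-5, 0.05
    u1, u2 = 0.3, 0.3   # constant stand-ins for the two treatment controls

    def rhs(t, s):
        x, y, v, z, w = s        # uninfected, infected, virus, CTL, antibodies
        dx = lam - d*x - (1 - u1)*beta*x*v
        dy = (1 - u1)*beta*x*v - a*y - p*y*z
        dv = (1 - u2)*k*y - mu*v - q*v*w
        dz = c*y*z - b_*z
        dw = g*x*v*w - h*w       # trilinear antibody growth (assumed form)
        return [dx, dy, dv, dz, dw]

    sol = solve_ivp(rhs, (0, 200), [600, 10, 50, 1, 1], dense_output=True)
    print(sol.y[:, -1])          # state at final time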
|
We use high quality VLT/MUSE data to study the kinematics and the ionized gas
properties of Haro 11, a well known starburst merger system and the closest
confirmed Lyman continuum leaking galaxy. We present results from integrated
line maps, and from maps in three velocity bins comprising the blueshifted,
systemic and redshifted emission. The kinematic analysis reveals complex
velocities resulting from the interplay of virial motions and momentum
feedback. Star formation happens intensively in three compact knots (knots A, B
and C), but one, knot C, dominates the energy released in supernovae. The halo
is characterised by low gas density and extinction, but with large temperature
variations, coincident with fast shock regions. Moreover, we find large
temperature discrepancies in knot C, when using different temperature-sensitive
lines. The relative impact of the knots in the metal enrichment differs. While
knot B is strongly enriching its closest surroundings, knot C is likely the main
distributor of metals in the halo. In knot A, part of the metal enriched gas
seems to escape through low density channels towards the south. We compare the
metallicities from two methods and find large discrepancies in knot C, a
shocked area, and the highly ionized zones, that we partially attribute to the
effect of shocks. This work shows that traditional relations developed from
averaged measurements or simplified methods fail to probe the diverse
conditions of the gas in extreme environments. We need robust relations that
include realistic models where several physical processes are simultaneously at
work.
|
We demonstrate a method that merges the quantum filter diagonalization (QFD)
approach for hybrid quantum/classical solution of the time-independent
electronic Schr\"odinger equation with a low-rank double factorization (DF)
approach for the representation of the electronic Hamiltonian. In particular,
we explore the use of sparse "compressed" double factorization (C-DF)
truncation of the Hamiltonian within the time-propagation elements of QFD,
while retaining a similarly compressed but numerically converged
double-factorized representation of the Hamiltonian for the operator
expectation values needed in the QFD quantum matrix elements. Together with
significant circuit reduction optimizations and number-preserving
post-selection/echo-sequencing error mitigation strategies, the method is found
to provide accurate predictions for low-lying eigenspectra in a number of
representative molecular systems, while requiring reasonably short circuit
depths and modest measurement costs. The method is demonstrated by experiments
on noise-free simulators, decoherence- and shot-noise including simulators, and
real quantum hardware.
|
The Android operating system is the most widespread mobile platform in the
world. Attackers are therefore producing an incredible number of malware
applications for Android. Our aim is to detect Android malware in order to
protect the user. Very good results can be obtained by dynamic analysis of
software, but it requires complex environments. In order to achieve the same
level of precision, we analyze the machine code and investigate the
frequencies of n-grams of opcodes in order to detect singular code blocks.
This allows us to construct a database of infected code blocks. Then, because
attackers may modify and reorganize the injected code in their new malware, we
perform not only a semantic comparison of the tested software with the
database of infected code blocks but also a structural comparison. To do such
a comparison we compute subgraph isomorphism. It allows us to characterize
precisely whether the tested software is malware and, if so, to which family
it belongs. Our method is tested both on a laboratory database and on a set of
real data. It achieves an almost perfect detection rate.
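To illustrate the first stage of the approach, here is a minimal sketch of
opcode n-gram frequency extraction over one disassembled code block; the
opcode sequence is a made-up placeholder and the choice n = 3 is an
assumption.

    from collections import Counter

    def opcode_ngrams(opcodes, n=3):
        """Frequency of consecutive opcode n-grams in one code block."""
        return Counter(tuple(opcodes[i:i+n]) for i in range(len(opcodes) - n + 1))

    # Placeholder Dalvik-style opcode sequence for a single block.
    block = ["const/4", "invoke-virtual", "move-result", "if-eqz",
             "const/4", "invoke-virtual", "move-result", "return"]
    for gram, count in opcode_ngrams(block).most_common(3):
        print(count, gram)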
|
This thesis is concerned with continuous, static, and single-objective
optimization problems subject to inequality constraints. Nevertheless, some
methods to handle other kinds of problems are briefly reviewed. The particle
swarm optimization paradigm was inspired by previous simulations of the
cooperative behaviour observed in social beings. It is a bottom-up, randomly
weighted, population-based method whose ability to optimize emerges from local,
individual-to-individual interactions. As opposed to traditional methods, it
can deal with different problems with little or no adaptation because it does
not exploit problem-specific features of the problem at issue but instead
performs a parallel, cooperative exploration of the search-space by means of a
population of individuals. The main goal of this thesis consists of developing
an optimizer that can perform reasonably well on most problems. Hence, the
influence of the settings of the algorithm's parameters on the behaviour of the
system is studied, some general-purpose settings are sought, and some
variations to the canonical version are proposed aiming to turn it into a more
general-purpose optimizer. Since no termination condition is included in the
canonical version, this thesis is also concerned with the design of some
stopping criteria which allow the iterative search to be terminated if further
significant improvement is unlikely, or if a certain number of time-steps are
reached. In addition, some constraint-handling techniques are incorporated into
the canonical algorithm to handle inequality constraints. Finally, the
capabilities of the proposed general-purpose optimizers are illustrated by
optimizing a few benchmark problems.
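For reference, a minimal sketch of the canonical particle swarm update
(inertia plus randomly weighted attraction to personal and global bests) on
the sphere benchmark is given below; the coefficient values are common
textbook settings and are assumptions here, not the general-purpose settings
sought in the thesis.

    import numpy as np

    def pso(f, dim=5, n_particles=20, iters=200, w=0.7298, c1=1.4962, c2=1.4962):
        rng = np.random.default_rng(0)
        x = rng.uniform(-5, 5, (n_particles, dim))   # positions
        v = np.zeros_like(x)                         # velocities
        pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
        gbest = pbest[pbest_f.argmin()].copy()
        for _ in range(iters):
            r1, r2 = rng.random((2, n_particles, dim))
            v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)   # canonical update
            x = x + v
            fx = np.apply_along_axis(f, 1, x)
            improved = fx < pbest_f
            pbest[improved], pbest_f[improved] = x[improved], fx[improved]
            gbest = pbest[pbest_f.argmin()].copy()
        return gbest, pbest_f.min()

    best, val = pso(lambda z: float(np.sum(z**2)))   # sphere function
    print(val)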
|
In recent years, human activity recognition has garnered considerable
attention both in industrial and academic research because of the wide
deployment of sensors, such as accelerometers and gyroscopes, in products such
as smartphones and smartwatches. Activity recognition is currently applied in
various fields where valuable information about an individual's functional
ability and lifestyle is needed. In this study, we used the popular WISDM
dataset for activity recognition. Using multivariate analysis of covariance
(MANCOVA), we established a statistically significant difference (p<0.05)
between the data generated from the sensors embedded in smartphones and
smartwatches. By doing this, we show that smartphones and smartwatches do not
capture data in the same way due to the location where they are worn. We
deployed several neural network architectures to classify 15 different hand and
non-hand-oriented activities. These models include Long short-term memory
(LSTM), Bi-directional Long short-term memory (BiLSTM), Convolutional Neural
Network (CNN), and Convolutional LSTM (ConvLSTM). The developed models
performed best with watch accelerometer data. Moreover, we observed that the
classification precision obtained with the convolutional input classifiers (CNN
and ConvLSTM) was higher than that of the end-to-end LSTM classifier in 12 of the 15
activities. Additionally, the CNN model for the watch accelerometer was better
able to classify non-hand-oriented activities when compared to hand-oriented
activities.
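As an illustration of the convolutional-input classifiers, a minimal Keras
sketch for windowed accelerometer data is shown below; the window length of
200 samples, three input channels, and layer sizes are assumptions, with only
the 15 activity classes taken from the study.

    import tensorflow as tf

    # Assumed input: sliding windows of 200 samples x 3 accelerometer axes,
    # classified into the 15 hand- and non-hand-oriented activities.
    model = tf.keras.Sequential([
        tf.keras.layers.Conv1D(64, 5, activation="relu", input_shape=(200, 3)),
        tf.keras.layers.MaxPooling1D(2),
        tf.keras.layers.Conv1D(64, 5, activation="relu"),
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dense(15, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()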
|
A space-time Trefftz discontinuous Galerkin method for the Schr\"odinger
equation with piecewise-constant potential is proposed and analyzed. Following
the spirit of Trefftz methods, trial and test spaces are spanned by
non-polynomial complex wave functions that satisfy the Schr\"odinger equation
locally on each element of the space-time mesh. This allows for a significant
reduction in the number of degrees of freedom in comparison with full
polynomial spaces. We prove well-posedness and stability of the method, and,
for the one- and two-dimensional cases, optimal, high-order, h-convergence
error estimates in a skeleton norm. Some numerical experiments validate the
theoretical results presented.
|
We demonstrate the use of multiple atomic-level Rydberg-atom schemes for
continuous frequency detection of radio frequency (RF) fields. Resonant
detection of RF fields by electromagnetically-induced transparency and
Autler-Townes (AT) splitting in Rydberg atoms is typically limited to frequencies within
the narrow bandwidth of a Rydberg transition. By applying a second field
resonant with an adjacent Rydberg transition, far-detuned fields can be
detected through a two-photon resonance AT splitting. This two-photon AT
splitting method is several orders of magnitude more sensitive than
off-resonant detection using the Stark shift. We present the results of various
experimental configurations and a theoretical analysis to illustrate the
effectiveness of this multiple level scheme. These results show that this
approach allows for the detection of frequencies in a continuous band between
resonances with adjacent Rydberg states.
|
We study the space of $C^1$ isogeometric spline functions defined on
trilinearly parameterized multi-patch volumes. Amongst others, we present a
general framework for the design of the $C^1$ isogeometric spline space and of
an associated basis, which is based on the two-patch construction [7], and
which works uniformly for any possible multi-patch configuration. The presented
method is demonstrated in more detail on the basis of a particular subclass of
trilinear multi-patch volumes, namely for the class of trilinearly
parameterized multi-patch volumes with exactly one inner edge. For this
specific subclass of trivariate multi-patch parameterizations, we further
numerically compute the dimension of the resulting $C^1$ isogeometric spline
space and use the constructed $C^1$ isogeometric basis functions to numerically
explore the approximation properties of the $C^1$ spline space by performing
$L^2$ approximation.
|
Recently, higher-order topological matter and 3D quantum Hall effects have
attracted great attention. The Fermi-arc mechanism of the 3D quantum Hall
effect proposed in Weyl semimetals is characterized by the one-sided hinge
states, which do not exist in all the previous quantum Hall systems and more
importantly pose a realistic example of the higher-order topological matter.
The experimental effort so far is in the Dirac semimetal Cd$_3$As$_2$, where
however, time-reversal symmetry leads to hinge states on both sides of the
top/bottom surfaces, instead of the aspired one-sided hinge states. We propose
that under a tilted magnetic field, the hinge states in Cd$_3$As$_2$-like Dirac
semimetals can be one-sided, highly tunable by field direction and Fermi
energy, and robust against weak disorder. Furthermore, we propose a scanning
tunneling Hall measurement to detect the one-sided hinge states. Our results
will be insightful for exploring not only the quantum Hall effects beyond two
dimensions, but also other higher-order topological insulators in the future.
|
When two spherical particles submerged in a viscous fluid are subjected to an
oscillatory flow, they align themselves perpendicular to the direction of the
flow leaving a small gap between them. The formation of this compact structure
is attributed to a non-zero residual flow known as steady streaming. We have
performed direct numerical simulations of a fully-resolved, oscillating flow in
which the pair of particles is modeled using an immersed boundary method. Our
simulations show that the particles oscillate both parallel and perpendicular
to the oscillating flow in elongated figure-eight trajectories. In the absence of
bottom friction, the mean gap between the particles depends only on the
normalized Stokes boundary layer thickness $\delta^*$, and on the normalized,
streamwise excursion length of the particles relative to the fluid $A_r^*$
(equivalent to the Keulegan-Carpenter number). For $A_r^*\lesssim 1$, viscous
effects dominate and the mean particle separation only depends on $\delta^*$.
For larger $A_r^*$-values, advection becomes important and the gap widens.
Overall, the normalized mean gap between the particles scales as
$L^*\approx3.0{\delta^*}^{1.5}+0.03{A_r^*}^3$, which also agrees well with
previous experimental results. The two regimes are also observed in the
magnitude of the oscillations of the gap perpendicular to the flow, which
increases in the viscous regime and decreases in the advective regime. When
bottom friction is considered, particle rotation increases and the gap widens.
Our results stress the importance of simulating the particle motion with all
its degrees of freedom to accurately model the system and reproduce
experimental results. The new insights of the particle pairs provide an
important step towards understanding denser and more complex systems.
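For convenience, the empirical gap scaling reported above can be evaluated
directly; the snippet below tabulates $L^*\approx3.0{\delta^*}^{1.5}+0.03{A_r^*}^3$
over a few sample values of $\delta^*$ and $A_r^*$ (the sample values
themselves are assumptions).

    import numpy as np

    def mean_gap(delta_star, ar_star):
        """Empirical scaling of the normalized mean particle gap."""
        return 3.0 * delta_star**1.5 + 0.03 * ar_star**3

    for d in (0.1, 0.2):
        for ar in (0.5, 1.0, 2.0):
            print(f"delta*={d}, Ar*={ar}: L* = {mean_gap(d, ar):.3f}")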
|
We present a new model to describe the star formation process in galaxies,
which includes the description of the different gas phases -- molecular,
atomic, and ionized -- together with its metal content. The model, which will
be coupled to cosmological simulations of galaxy formation, will be used to
investigate the relation between the star formation rate (SFR) and the
formation of molecular hydrogen. The model follows the time evolution of the
molecular, atomic and ionized phases in a gas cloud and estimates the amount of
stellar mass formed, by solving a set of five coupled differential equations.
As expected, we find a positive, strong correlation between the molecular
fraction and the initial gas density, which manifests in a positive correlation
between the initial gas density and the SFR of the cloud.
|
The development of lightweight object detectors is essential due to the
limited computation resources. To reduce the computation cost, how redundant
features are generated plays a significant role. This paper proposes a new
lightweight convolution method, the Cross-Stage Lightweight (CSL) module, to
generate redundant features from cheap operations. In the intermediate
expansion stage, we replaced Pointwise Convolution with Depthwise Convolution
to produce candidate features. The proposed CSL-Module can reduce the
computation cost significantly. Experiments conducted on MS-COCO show that the
proposed CSL-Module can approximate the fitting ability of Convolution-3x3.
Finally, we use the module to construct a lightweight detector, CSL-YOLO,
achieving better detection performance than Tiny-YOLOv4 with only 43% of the
FLOPs and 52% of the parameters.
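The saving from replacing a pointwise convolution with a depthwise one in the
expansion stage can be sketched in PyTorch as follows; the channel and
feature-map sizes are illustrative assumptions and this is not the released
CSL-Module.

    import torch
    import torch.nn as nn

    c = 64                                   # channels in the expansion stage (assumed)
    x = torch.rand(1, c, 56, 56)

    pointwise = nn.Conv2d(c, c, kernel_size=1)                       # dense 1x1 conv
    depthwise = nn.Conv2d(c, c, kernel_size=3, padding=1, groups=c)  # per-channel conv

    # Parameter counts illustrate the saving: c*c vs. c*3*3 weights.
    print(sum(p.numel() for p in pointwise.parameters()))  # 64*64 + 64 = 4160
    print(sum(p.numel() for p in depthwise.parameters()))  # 64*9  + 64 = 640
    print(pointwise(x).shape, depthwise(x).shape)          # same output shape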
|
The dispersion of a tracer in a fluid flow is influenced by the Lagrangian
motion of fluid elements. Even in laminar regimes, the irregular chaotic
behavior of a fluid flow can lead to effective stirring that rapidly
redistributes a tracer throughout the domain. When the advected particles
possess a finite size and nontrivial shape, however, their dynamics can differ
markedly from passive tracers, thus affecting the dispersion phenomena. Here we
investigate the behavior of neutrally buoyant particles in 2-dimensional
chaotic flows, combining numerical simulations and laboratory experiments. We
show that depending on the particles' shape and size, the underlying Lagrangian
coherent structures can be altered, resulting in distinct dispersion phenomena
within the same flow field. Experiments performed in a two-dimensional cellular
flow exhibited a focusing effect of anisotropic particles in vortex cores.
In agreement with our numerical model, neutrally buoyant ellipsoidal
particles display markedly different trajectories and overall organization than
spherical particles, with a clustering in vortices that changes according
to the aspect ratio of the particles.
|
We explore the ability of overparameterized shallow neural networks to learn
Lipschitz regression functions with and without label noise when trained by
Gradient Descent (GD). To avoid the problem that in the presence of noisy
labels, neural networks trained to nearly zero training error are inconsistent
on this class, we propose an early stopping rule that allows us to show optimal
rates. This provides an alternative to the result of Hu et al. (2021) who
studied the performance of $\ell_2$-regularized GD for training shallow
networks in nonparametric regression which fully relied on the infinite-width
network (Neural Tangent Kernel (NTK)) approximation. Here we present a simpler
analysis which is based on a partitioning argument of the input space (as in
the case of 1-nearest-neighbor rule) coupled with the fact that trained neural
networks are smooth with respect to their inputs when trained by GD. In the
noise-free case the proof does not rely on any kernelization and can be
regarded as a finite-width result. In the case of label noise, by slightly
modifying the proof, the noise is controlled using a technique of Yao, Rosasco,
and Caponnetto (2007).
|
Pretrained language models have significantly improved the performance of
down-stream language understanding tasks, including extractive question
answering, by providing high-quality contextualized word embeddings. However,
learning question answering models still needs large-scale data annotation in
specific domains. In this work, we propose a cooperative, self-play learning
framework, REGEX, for question generation and answering. REGEX is built upon a
masked answer extraction task with an interactive learning environment
containing an answer entity REcognizer, a question Generator, and an answer
EXtractor. Given a passage with a masked entity, the generator generates a
question around the entity, and the extractor is trained to extract the masked
entity with the generated question and raw texts. The framework allows the
training of question generation and answering models on any text corpora
without annotation. We further leverage a reinforcement learning technique to
reward generating high-quality questions and to improve the answer extraction
model's performance. Experiment results show that REGEX outperforms the
state-of-the-art (SOTA) pretrained language models and zero-shot approaches on
standard question-answering benchmarks, and yields the new SOTA performance
under the zero-shot setting.
|
Enterprise knowledge is a key asset in the competing and fast-changing
corporate landscape. The ability to learn, store and distribute implicit and
explicit knowledge can be the difference between success and failure. While
enterprise knowledge management is a well-defined research domain, current
implementations lack orientation towards small and medium enterprises. We
propose a semantic search engine for relevant documents in an enterprise, based
on automatically generated domain ontologies. In this paper we focus on the
component for ontology learning and population.
|
Recently, Doroudiani and Karimipour [Phys. Rev. A \textbf{102} 012427(2020)]
proposed the notion of planar maximally entangled (PME) states, which are a
wider class of multipartite entangled states than absolutely maximally
entangled (AME) states. There, they presented constructions in multipartite
systems whose number of particles is restricted to be even. Here we first
solve the remaining cases, i.e., constructions of planar maximally entangled
states on systems with an odd number of particles. In addition, we generalize
PME states to planar $k$-uniform states, whose reductions to any adjacent $k$
parties along a circle of $N$ parties are maximally mixed. We present a method
to construct sets of planar $k$-uniform states which have minimal support.
|
We construct exact solutions to the Einstein-Maxwell theory by uplifting the
four-dimensional Fubini-Study K\"ahler manifold. We find that the solutions
can be expressed exactly as integrals of two special functions. The solutions
are regular almost everywhere, except for a bolt structure at a single point
in any dimensionality. We also show that the solutions are unique and cannot
be non-trivially extended to include the cosmological constant in any
dimension.
|
The introduction of an optical resonator can enable efficient and precise
interaction between a photon and a solid-state emitter. It facilitates the
study of strong light-matter interaction, polaritonic physics and presents a
powerful interface for quantum communication and computing. A pivotal aspect in
the progress of light-matter interaction with solid-state systems is the
challenge of combining the requirements of cryogenic temperature and high
mechanical stability against vibrations while maintaining sufficient degrees of
freedom for in-situ tunability. Here, we present a fiber-based open
Fabry-P\'{e}rot cavity in a closed-cycle cryostat exhibiting ultra-high
mechanical stability while providing wide-range tunability in all three spatial
directions. We characterize the setup and demonstrate operation with a
root-mean-square cavity length fluctuation of less than $90$ pm at a
temperature of $6.5$ K and an integration bandwidth of $100$ kHz. Finally, we
benchmark the
cavity performance by demonstrating the strong-coupling formation of
exciton-polaritons in monolayer WSe$_2$ with a cooperativity of $1.6$. This set
of results establishes the open cavity in a closed-cycle cryostat as a versatile
and powerful platform for low-temperature cavity QED experiments.
|
We determine the dark matter pair-wise relative velocity distribution in a
set of Milky Way-like halos in the Auriga and APOSTLE simulations. Focusing on
the smooth halo component, the relative velocity distribution is well-described
by a Maxwell-Boltzmann distribution over nearly all radii in the halo. We
explore the implications for velocity-dependent dark matter annihilation,
focusing on four models which scale as different powers of the relative
velocity: Sommerfeld, s-wave, p-wave, and d-wave models. We show that the
J-factors scale as the moments of the relative velocity distribution, and that
the halo-to-halo scatter is largest for d-wave, and smallest for Sommerfeld
models. The J-factor is strongly correlated with the dark matter density in the
halo, and is very weakly correlated with the velocity dispersion. This implies
that if the dark matter density in the Milky Way can be robustly determined,
one can accurately predict the dark matter annihilation signal, without the
need to identify the dark matter velocity distribution in the Galaxy.
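As a hedged numerical illustration, the snippet below integrates the speed
moments $\langle v^n \rangle$ of a Maxwell-Boltzmann distribution for the
powers corresponding to Sommerfeld ($n=-1$), s-wave ($n=0$), p-wave ($n=2$)
and d-wave ($n=4$) annihilation; the dispersion value is a placeholder
assumption, not a measured Milky Way value.

    import numpy as np
    from scipy.integrate import quad

    sigma = 160.0   # assumed 1-D relative-velocity dispersion, km/s

    def mb_speed_pdf(v):
        # Maxwell-Boltzmann speed distribution with per-component dispersion sigma.
        return 4*np.pi*v**2 * (2*np.pi*sigma**2)**-1.5 * np.exp(-v**2/(2*sigma**2))

    for n, label in [(-1, "Sommerfeld"), (0, "s-wave"), (2, "p-wave"), (4, "d-wave")]:
        moment, _ = quad(lambda v: v**n * mb_speed_pdf(v), 0, np.inf)
        print(f"{label:10s} <v^{n:+d}> = {moment:.4g} (km/s)^{n}")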
|
In this essay, we qualitatively demonstrate how small non-perturbative
corrections are a necessary addition to semiclassical gravity's path integral.
We use this to discuss implications for Hawking's information paradox and the
bags of gold paradox.
|
We report Keck-NIRSPEC observations of the Brackett $\alpha$ 4.05 $\mu$m
recombination line across the two candidate embedded super star clusters (SSCs)
in NGC 1569. These SSCs power a bright HII region and have been previously
detected as radio and mid-infrared sources. Supplemented with high resolution
VLA mapping of the radio continuum along with IRTF-TEXES spectroscopy of the
[SIV] 10.5 $\mu$m line, the Brackett $\alpha$ spectral data provide new insight
into the dynamical state of gas ionized by these forming massive clusters. NIR
sources detected in 2 $\mu$m images from the Slit-viewing Camera are matched
with GAIA sources to obtain accurate celestial coordinates and slit positions
to within $\sim 0.1''$. Br$\alpha$ is detected as a strong emission peak
powered by the less luminous infrared source, MIR1 ($L_{\rm IR}\sim
2\times10^7~L_\odot$). The second candidate SSC MIR2 is more luminous ($L_{\rm
IR}\gtrsim 4\times10^8~L_\odot$) but exhibits weak radio continuum and
Br$\alpha$ emission, suggesting the ionized gas is extremely dense ($n_e\gtrsim
10^5$ cm$^{-3}$), corresponding to hypercompact HII regions around newborn
massive stars. The Br$\alpha$ and [SIV] lines across the region are both
remarkably symmetric and extremely narrow, with observed line widths $\Delta v
\simeq 40$ km s$^{-1}$, FWHM. This result is the first clear evidence that
feedback from NGC 1569's youngest giant clusters is currently incapable of
rapid gas dispersal, consistent with the emerging theoretical paradigm in the
formation of giant star clusters.
|
Machine learning has brought striking advances in multilingual natural
language processing capabilities over the past year. For example, the latest
techniques have improved the state-of-the-art performance on the XTREME
multilingual benchmark by more than 13 points. While a sizeable gap to
human-level performance remains, improvements have been easier to achieve in
some tasks than in others. This paper analyzes the current state of
cross-lingual transfer learning and summarizes some lessons learned. In order
to catalyze meaningful progress, we extend XTREME to XTREME-R, which consists
of an improved set of ten natural language understanding tasks, including
challenging language-agnostic retrieval tasks, and covers 50 typologically
diverse languages. In addition, we provide a massively multilingual diagnostic
suite (MultiCheckList) and fine-grained multi-dataset evaluation capabilities
through an interactive public leaderboard to gain a better understanding of
such models. The leaderboard and code for XTREME-R will be made available at
https://sites.research.google/xtreme and
https://github.com/google-research/xtreme respectively.
|
This paper presents a state-of-the-art LiDAR based autonomous navigation
system for under-canopy agricultural robots. Under-canopy agricultural
navigation has been a challenging problem because GNSS and other positioning
sensors are prone to significant errors due to attenuation and multi-path
caused by crop leaves and stems. Reactive navigation by detecting crop rows
using LiDAR measurements is a better alternative to GPS but suffers from
challenges due to occlusion from leaves under the canopy. Our system addresses
this challenge by fusing IMU and LiDAR measurements using an Extended Kalman
Filter framework on low-cost hardware. In addition, a local goal generator is
introduced to provide locally optimal reference trajectories to the onboard
controller. Our system is validated extensively in real-world field
environments over a distance of 50.88~km on multiple robots in different field
conditions across different locations. We report state-of-the-art
distance-between-interventions results, showing that our system is able to
navigate safely without interventions for 386.9~m on average in fields without
significant gaps in the crop rows, 56.1~m in production fields, and 47.5~m in
fields with gaps (spaces of 1~m without plants on both sides of the row).
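To sketch the sensor-fusion step, a generic two-state EKF predict/update cycle
is shown below, with an IMU yaw rate driving the prediction and a
LiDAR-derived row offset/heading as the measurement; the state definition,
matrices, and noise values are simplifying assumptions, not the paper's
filter.

    import numpy as np

    # State: [lateral offset to row centerline (m), heading error (rad)].
    x = np.zeros(2)
    P = np.eye(2) * 0.1
    Q = np.diag([1e-3, 1e-4])     # process noise (assumed)
    R = np.diag([2e-2, 1e-3])     # LiDAR measurement noise (assumed)

    def predict(x, P, v, yaw_rate, dt):
        """Propagate with forward speed v and IMU yaw rate."""
        F = np.array([[1.0, v*dt],    # offset grows with heading error
                      [0.0, 1.0]])
        x = F @ x + np.array([0.0, yaw_rate*dt])
        return x, F @ P @ F.T + Q

    def update(x, P, z):
        """Fuse a LiDAR row detection z = [offset, heading error]."""
        H = np.eye(2)                      # LiDAR observes the full state (assumed)
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
        x = x + K @ (z - H @ x)
        return x, (np.eye(2) - K @ H) @ P

    x, P = predict(x, P, v=1.0, yaw_rate=0.02, dt=0.1)
    x, P = update(x, P, z=np.array([0.05, 0.01]))
    print(x)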
|
Drone imagery is increasingly used in automated inspection for infrastructure
surface defects, especially in hazardous or unreachable environments. In
machine vision, the key to crack detection rests with robust and accurate
algorithms for image processing. To this end, this paper proposes a deep
learning approach using hierarchical convolutional neural networks with feature
preservation (HCNNFP) and an intercontrast iterative thresholding algorithm for
image binarization. First, a set of branch networks is proposed, wherein the
output of previous convolutional blocks is concatenated at half size to the
current ones to reduce the obscuration in the down-sampling stage, taking into
account the overall information loss. Next, to extract the feature map
generated from the enhanced HCNN, a binary contrast-based autotuned
thresholding (CBAT) approach is developed at the post-processing step, where
patterns of interest are clustered within the probability map of the identified
features. The proposed technique is then applied to identify surface cracks on
the surface of roads, bridges or pavements. An extensive comparison with
existing techniques is conducted on various datasets and subject to a number of
evaluation criteria including the average F-measure (AF\b{eta}) introduced here
for dynamic quantification of the performance. Experiments are conducted on
crack images, including those captured by unmanned aerial vehicles inspecting
a monorail bridge. The proposed technique outperforms the existing methods on
various tested datasets, especially for the GAPs dataset, with an increase of
about 1.4% in
terms of AF\b{eta} while the mean percentage error drops by 2.2%. Such
performance demonstrates the merits of the proposed HCNNFP architecture for
surface defect inspection.
|
We present a framework for simulating realistic inverse synthetic aperture
radar images of automotive targets at millimeter wave frequencies. The model
incorporates radar scattering phenomenology of commonly found vehicles along
with range-Doppler based clutter and receiver noise. These images provide
insights into the physical dimensions of the target, the number of wheels and
the trajectory undertaken by the target. The model is experimentally validated
with measurement data gathered from an automotive radar. The images from the
simulation database are subsequently classified using both traditional machine
learning techniques and deep neural networks based on transfer learning.
We show that the ISAR images offer a classification accuracy above 90% and are
robust to both noise and clutter.
|
There are two cases when the nonlinear Schr\"odinger equation (NLSE) with an
external complex potential is well-known to support continuous families of
localized stationary modes: the ${\cal PT}$-symmetric potentials and the Wadati
potentials. Recently, Y. Kominis and coauthors [Chaos, Solitons and Fractals,
118, 222-233 (2019)] have suggested that the continuous families can be also
found in complex potentials of the form $W(x)=W_{1}(x)+iCW_{1,x}(x)$, where $C$
is an arbitrary real and $W_1(x)$ is a real-valued and bounded differentiable
function. Here we study in detail nonlinear stationary modes that emerge in
complex potentials of this type (for brevity, we call them W-dW potentials).
First, we assume that the potential is small and employ asymptotic methods to
construct a family of nonlinear modes. Our asymptotic procedure stops at the
terms of the $\varepsilon^2$ order, where the small parameter $\varepsilon$
characterizes the amplitude of the potential. We therefore conjecture that no
continuous families
of authentic nonlinear modes exist in this case, but "pseudo-modes" that
satisfy the equation up to $\varepsilon^2$-error can indeed be found in W-dW
potentials. Second, we consider the particular case of a W-dW potential well of
finite depth and support our hypothesis with qualitative and numerical
arguments. Third, we simulate the nonlinear dynamics of found pseudo-modes and
observe that, if the amplitude of the W-dW potential is small, then the
pseudo-modes are robust and display persistent oscillations around a certain
position predicted by the asymptotic expansion. Finally, we study the authentic
stationary modes which do not form a continuous family, but exist as isolated
points. Numerical simulations reveal dynamical instability of these solutions.
|
Many recent experimental ultrafast spectroscopy studies have hinted at
non-adiabatic dynamics indicating the existence of conical intersections, but
their direct observation remains a challenge. The rapid change of the energy
gap between the electronic states complicated their observation by requiring
bandwidths of several electron volts. In this manuscript, we propose to use the
combined information of different X-ray pump-probe techniques to identify the
conical intersection. We theoretically study the conical intersection in
pyrrole using transient X-ray absorption, time-resolved X-ray spontaneous
emission, and linear off-resonant Raman spectroscopy to gather evidence of the
curve crossing.
|
Robots performing tasks in warehouses provide the first example of
wide-spread adoption of autonomous vehicles in transportation and logistics.
The efficiency of these operations, which can vary widely in practice, is a
key factor in the success of supply chains. In this work we consider the
problem of coordinating a fleet of robots performing picking operations in a
warehouse so as to maximize the net profit achieved within a time period while
respecting problem- and robot-specific constraints. We formulate the problem as
a weighted set packing problem where the elements in consideration are items on
the warehouse floor that can be picked up and delivered within specified time
windows. We enforce the constraint that robots must not collide, that each item
is picked up and delivered by at most one robot, and that the number of robots
active at any time does not exceed the total number available. Since the set of
routes is exponential in the size of the input, we attack optimization of the
resulting integer linear program using column generation, where pricing amounts
to solving an elementary resource-constrained shortest-path problem. We propose
an efficient optimization scheme that avoids consideration of every increment
within the time windows. We also propose a heuristic pricing algorithm that can
efficiently solve the pricing subproblem. While this itself is an important
problem, the insights gained from solving these problems effectively can lead
to new advances in other time-window constrained vehicle routing problems.
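For intuition, the restricted master problem of such a scheme can be sketched
as a small weighted set packing program; the snippet below uses PuLP with a
handful of hand-made candidate routes, whereas a full implementation would
generate new route columns via the resource-constrained shortest-path pricing
step (all data here are assumptions).

    import pulp

    # Candidate routes: (profit, set of items picked). In column generation,
    # these columns would be produced by the pricing subproblem.
    routes = [(10, {1, 2}), (8, {2, 3}), (6, {3}), (7, {1, 4})]
    items = {1, 2, 3, 4}
    max_robots = 2

    prob = pulp.LpProblem("set_packing", pulp.LpMaximize)
    x = [pulp.LpVariable(f"route_{r}", cat="Binary") for r in range(len(routes))]

    prob += pulp.lpSum(routes[r][0] * x[r] for r in range(len(routes)))
    for item in items:   # each item delivered by at most one robot
        prob += pulp.lpSum(x[r] for r in range(len(routes)) if item in routes[r][1]) <= 1
    prob += pulp.lpSum(x) <= max_robots   # fleet size limit

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    print([r for r in range(len(routes)) if x[r].value() == 1],
          pulp.value(prob.objective))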
|
Following the tremendous success of transformer in natural language
processing and image understanding tasks, in this paper, we present a novel
point cloud representation learning architecture, named Dual Transformer
Network (DTNet), which mainly consists of Dual Point Cloud Transformer (DPCT)
module. Specifically, by aggregating the well-designed point-wise and
channel-wise multi-head self-attention models simultaneously, DPCT module can
capture much richer contextual dependencies semantically from the perspective
of position and channel. With the DPCT module as a fundamental component, we
construct the DTNet for performing point cloud analysis in an end-to-end
manner. Extensive quantitative and qualitative experiments on publicly
available benchmarks demonstrate the effectiveness of our proposed transformer
framework for the tasks of 3D point cloud classification and segmentation,
achieving highly competitive performance in comparison with the
state-of-the-art approaches.
|
We give two proofs to an old result of E. Salehi, showing that the Weyl
subalgebra $\mathcal{W}$ of $\ell^\infty(\mathbb{Z})$ is a proper subalgebra of
$\mathcal{D}$, the algebra of distal functions. We also show that the family
$\mathcal{S}^d$ of strictly ergodic functions in $\mathcal{D}$ does not form an
algebra and hence in particular does not coincide with $\mathcal{W}$. We then
use similar constructions to show that a function which is a multiplier for
strict ergodicity, either within $\mathcal{D}$ or in general, is necessarily a
constant. An example of a metric, strictly ergodic, distal flow is constructed
which admits a non-strictly ergodic $2$-fold minimal self-joining. It then
follows that the enveloping group of this flow is not strictly ergodic (as a
$T$-flow). Finally we show that the distal, strictly ergodic Heisenberg
nil-flow is relatively disjoint over its largest equicontinuous factor from
$|\mathcal{W}|$.
|
We study extensions of the Election Isomorphism problem, focused on the
existence of isomorphic subelections. Specifically, we propose the Subelection
Isomorphism and the Maximum Common Subelection problems and study their
computational complexity and approximability. Using our problems in
experiments, we provide some insights into the nature of several statistical
models of elections.
|
The High Altitude Water Cherenkov (HAWC) observatory and the High Energy
Stereoscopic System (H.E.S.S.) are two leading instruments in the ground-based
very-high-energy gamma-ray domain. HAWC employs the water Cherenkov detection
(WCD) technique, while H.E.S.S. is an array of Imaging Atmospheric Cherenkov
Telescopes (IACTs). The two facilities therefore differ in multiple aspects,
including their observation strategy, the size of their field of view and their
angular resolution, leading to different analysis approaches. Until now, it has
been unclear if the results of observations by both types of instruments are
consistent: several of the recently discovered HAWC sources have been followed
up by IACTs, resulting in a confirmed detection only in a minority of cases.
With this paper, we go further and try to resolve the tensions between previous
results by performing a new analysis of the H.E.S.S. Galactic plane survey
data, applying an analysis technique comparable between H.E.S.S. and HAWC.
Events above 1 TeV are selected for both datasets, the point spread function of
H.E.S.S. is broadened to approach that of HAWC, and a similar background
estimation method is used. This is the first detailed comparison of the
Galactic plane observed by both instruments. H.E.S.S. can confirm the gamma-ray
emission of four HAWC sources among seven previously undetected by IACTs, while
the three others have measured fluxes below the sensitivity of the H.E.S.S.
dataset. Remaining differences in the overall gamma-ray flux can be explained
by the systematic uncertainties. Therefore, we confirm a consistent view of the
gamma-ray sky between WCD and IACT techniques.
|
Cactus networks were introduced by Lam as a generalization of planar
electrical networks. He defined a map from these networks to the Grassmannian
Gr($n+1,2n$) and showed that the image of this map, $\mathcal X_n$, lies inside
the totally nonnegative part of this Grassmannian. In this paper, we show that
$\mathcal X_n$ is exactly the elements of Gr($n+1,2n$) that are both totally
nonnegative and isotropic for a particular skew-symmetric bilinear form. For
certain classes of cactus networks, we also explicitly describe how to turn
response matrices and effective resistance matrices into points of Gr($n+1,2n$)
given by Lam's map. Finally, we discuss how our work relates to earlier studies
of total positivity for Lagrangian Grassmannians.
|
We propose a new method for accelerating the computation of a concurrency
relation, that is, all pairs of places in a Petri net that can be marked
together. Our approach relies on a state space abstraction that involves a mix
of structural reductions and linear algebra, and a new data structure that
is specifically designed for our task. Our algorithms are implemented in a
tool, called Kong, that we test on a large collection of models used during the
2020 edition of the Model Checking Contest. Our experiments show that the
approach works well, even when a moderate amount of reductions applies.
|
Ethics is sometimes considered to be too abstract to be meaningfully
implemented in artificial intelligence (AI). In this paper, we reflect on other
aspects of computing that were previously considered to be very abstract. Yet,
these are now accepted as being done very well by computers. These tasks have
ranged from multiple aspects of software engineering to mathematics to
conversation in natural language with humans. This was done by automating the
simplest possible step and then building on it to perform more complex tasks.
We wonder if ethical AI might be similarly achieved and advocate the process of
automation as a key step in making AI take ethical decisions. The key
contribution of this paper is to reflect on how automation was introduced into
domains previously considered too abstract for computers.
|
In this paper, we investigate the decentralized statistical inference
problem, where a network of agents cooperatively recover a (structured) vector
from private noisy samples without centralized coordination. Existing
optimization-based algorithms suffer from issues of model mismatch and poor
convergence speed, and thus their performance would be degraded, provided that
the number of communication rounds is limited. This motivates us to propose a
learning-based framework, which unrolls well-known decentralized optimization
algorithms (e.g., Prox-DGD and PG-EXTRA) into graph neural networks (GNNs). By
minimizing the recovery error via end-to-end training, this learning-based
framework resolves the model mismatch issue. Our convergence analysis (with
PG-EXTRA as the base algorithm) reveals that the learned model parameters may
accelerate the convergence and reduce the recovery error to a large extent. The
simulation results demonstrate that the proposed GNN-based learning methods
prominently outperform several state-of-the-art optimization-based algorithms
in convergence speed and recovery error.
|