We consider a certain lattice branching random walk with on-site competition,
in an environment which is heterogeneous at a macroscopic scale
$1/\varepsilon$ in space and time. This can be seen as a model for the spatial
dynamics of a biological population in a habitat which is heterogeneous at a
large scale (mountains, temperature or precipitation gradients, etc.). The model
incorporates another parameter, $K$, which is a measure of the local population
density. We study the model in the limit when first $\varepsilon\to 0$ and then
$K\to\infty$. In this asymptotic regime, we show that the rescaled position of
the front as a function of time converges to the solution of an explicit ODE.
We further discuss the relation with another popular model of population
dynamics, the Fisher-KPP equation, which arises in the limit $K\to\infty$.
Combined with known results on the Fisher-KPP equation, our results show in
particular that the limits $\varepsilon\to0$ and $K\to\infty$ do not commute in
general. We conjecture that an interpolating regime appears when $\log K$ and
$1/\varepsilon$ are of the same order.
|
A long-held belief is that shock energy induces initiation of an energetic
material through an indirect energy up-pumping mechanism involving phonon
scattering through doorway modes. In this paper, a 3-phonon theoretical
analysis of energy up-pumping in RDX is presented that involves both direct and
indirect pathways where the direct energy transfer dominates. The calculation
considers individual phonon modes which are then analyzed in bands. Scattering
is handled up to the third order term in the Hamiltonian based on Fermi's
Golden Rule. On average, modes with frequencies up to 90 cm-1 scatter quickly
and redistribute the energy to all the modes. This direct stimulation occurs
rapidly, within 0.16 ps, and involves distortions to NN bonds. Modes from 90 to
1839 cm-1 further up-pump the energy to NN bond distortion modes through an
indirect route within 5.6 ps. The highest frequency modes have the lowest
contribution to energy transfer due to their lower participation in
phonon-phonon scattering. The modes stimulated directly by the shock with
frequencies up to 90 cm-1 are estimated to account for 52 to 89\% of the total
energy transfer to various NN bond distorting modes.
|
A new paradigm called physical reservoir computing has recently emerged,
where the nonlinear dynamics of high-dimensional and fixed physical systems are
harnessed as a computational resource to achieve complex tasks. Via extensive
simulations based on a dynamic truss-frame model, this study shows that an
origami structure can perform as a dynamic reservoir with sufficient computing
power to emulate high-order nonlinear systems, generate stable limit cycles,
and modulate outputs according to dynamic inputs. This study also uncovers the
linkages between the origami reservoir's physical designs and its computing
power, offering a guideline to optimize the computing performance.
Comprehensive parametric studies show that selecting an optimal feedback crease
distribution and fine-tuning the underlying origami folding design are the
most effective approaches to improving computing performance. Furthermore, this
study shows how origami's physical reservoir computing power can apply to soft
robotic control problems by a case study of earthworm-like peristaltic crawling
without traditional controllers. These results can pave the way for
origami-based robots with embodied mechanical intelligence.
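To make the reservoir-computing principle concrete, the sketch below trains only a linear readout on top of a fixed, high-dimensional dynamical system, the standard echo-state-network analogue of a physical reservoir. The origami truss-frame dynamics are replaced here by a generic random recurrent network; all sizes, the tanh nonlinearity, and the ridge regularizer are illustrative assumptions, not the paper's model.

```python
# Minimal echo-state-network sketch of physical reservoir computing:
# a fixed random "reservoir" is driven by an input, and only a linear
# readout is trained (here to emulate a nonlinear target, u -> u^3).
import numpy as np

rng = np.random.default_rng(0)
N, T = 200, 2000                                 # reservoir size, time steps
W = rng.normal(0, 1, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius below 1
w_in = rng.normal(0, 1, N)

u = np.sin(0.1 * np.arange(T))                   # input signal
y_target = u ** 3                                # nonlinear system to emulate

x = np.zeros(N)
states = np.empty((T, N))
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])             # fixed reservoir dynamics
    states[t] = x

# Ridge-regression readout, discarding a washout period.
S, y = states[200:], y_target[200:]
w_out = np.linalg.solve(S.T @ S + 1e-6 * np.eye(N), S.T @ y)
print("readout MSE:", np.mean((S @ w_out - y) ** 2))
```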
|
We study the algebraic conditions leading to the chain property of complexes
for vertex operator algebra $n$-point functions with differential being defined
through reduction formulas. The notion of the reduction cohomology of Riemann
surfaces is introduced. The algebraic, geometric, and cohomological meanings of
the reduction formulas are clarified. A counterpart of the Bott-Segal theorem
for Riemann surfaces is proven in terms of the reduction cohomology. It is shown
that the reduction cohomology is given by the cohomology of $n$-point
connections over the vertex operator algebra bundle defined on a genus $g$
Riemann surface $\Sigma^{(g)}$. The reduction cohomology for a vertex operator
algebra with formal parameters identified with local coordinates around marked
points on $\Sigma^{(g)}$ is found in terms of the space of analytical
continuations of solutions to Knizhnik-Zamolodchikov equations. For the
reduction cohomology, the Euler-Poincar\'e formula is derived. Examples for
various genera and vertex operator cluster algebras are provided.
|
For a commutative ring $R$, we define the notions of deformed Picard
algebroids and deformed twisted differential operators on a smooth, separated,
locally of finite type $R$-scheme and prove these are in a natural bijection.
We then define the pullback of a sheaf of twisted differential operators that
reduces to the classical definition when $R=\mathbb{C}$. Finally, for modules
over twisted differential operators, we prove a theorem on descent under a
locally trivial torsor.
|
Motivated by questions in number theory, Myerson asked how small the sum of 5
complex nth roots of unity can be. We obtain a uniform bound of O(n^{-4/3}) by
perturbing the vertices of a regular pentagon, improving to O(n^{-7/3})
infinitely often.
The corresponding configurations were suggested by examining exact minimum
values computed for n <= 221000. These minima can be explained at least in part
by selection of the best example from multiple families of competing
configurations related to close rational approximations.
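For illustration, the exact minima underlying these configurations can be recomputed by brute force for small n (far below the n <= 221000 of the reported computations). The only trick assumed below is the standard rotational-symmetry reduction of fixing one of the five roots at 1.

```python
# Brute-force search for the smallest nonzero modulus of a sum of 5 n-th
# roots of unity (repetitions allowed); feasible only for small n.
import cmath
from itertools import combinations_with_replacement

def min_nonzero_sum(n):
    roots = [cmath.exp(2j * cmath.pi * k / n) for k in range(n)]
    best = 5.0
    # By rotational symmetry one root can be fixed at 1.
    for combo in combinations_with_replacement(range(n), 4):
        m = abs(1 + sum(roots[k] for k in combo))
        if 1e-12 < m < best:      # skip exact cancellations (e.g. 5 | n)
            best = m
    return best

for n in [7, 11, 13, 17, 23]:
    print(n, min_nonzero_sum(n))
```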
|
The orientation completion problem for a class of oriented graphs asks
whether a given partially oriented graph can be completed to an oriented graph
in the class by orienting the unoriented edges of the partially oriented graph.
Orientation completion problems have been studied recently for several classes
of oriented graphs, yielding both polynomial-time solutions and
NP-completeness results. Local tournaments are a well-structured class of
oriented graphs that generalize tournaments and their underlying graphs are
intimately related to proper circular-arc graphs. According to Skrien, a
connected graph can be oriented as a local tournament if and only if it is a
proper circular-arc graph. Proper interval graphs are precisely the graphs
which can be oriented as acyclic local tournaments. It has been proved that the
orientation completion problems for the classes of local tournaments and
acyclic local tournaments are both polynomial-time solvable. In this paper, we
characterize the partially oriented graphs that can be completed to local
tournaments by determining the complete list of obstructions. These are in a
sense minimal partially oriented graphs that cannot be completed to local
tournaments. The result may be viewed as an extension of the well-known
forbidden subgraph characterization of proper circular-arc graphs obtained by
Tucker. The complete list of obstructions for acyclic local tournament
orientation completions has been given in a companion paper.
|
We derive a thermodynamic uncertainty relation (TUR) for first-passage times
(FPTs) on continuous time Markov chains. The TUR utilizes the entropy
production coming from bidirectional transitions, and the net flux coming from
unidirectional transitions, to provide a lower bound on FPT fluctuations. As
every bidirectional transition can also be seen as a pair of separate
unidirectional ones, our approach typically yields an ensemble of TURs. The
tightest bound on FPT fluctuations can then be obtained from this ensemble by a
simple and physically motivated optimization procedure. The results presented
herein are valid for arbitrary initial conditions, out-of-equilibrium dynamics,
and are therefore well suited to describe the inherently irreversible
first-passage event. They can thus be readily applied to a myriad of
first-passage problems that arise across a wide range of disciplines.
|
A hyperlink is a finite set of non-intersecting simple closed curves in
$\mathbb{R}^4 \equiv \mathbb{R} \times \mathbb{R}^3$, each curve being either a
matter loop or a geometric loop. We consider equivalence classes of such hyperlinks,
up to time-like isotopy, preserving time-ordering. Using an equivalence class
and after coloring each matter component loop with an irreducible
representation of $\mathfrak{su}(2) \times \mathfrak{su}(2)$, we can define its
Wilson Loop observable using an Einstein-Hilbert action, which is now thought
of as a functional acting on the set of equivalence classes of
hyperlinks. We construct a vector space from these functionals, whose elements
we term quantum states. To make it into a Hilbert space, we need to define a
counting probability measure on the space containing equivalence classes of
hyperlinks. In our previous work, we defined area, volume and curvature
operators corresponding to given geometric objects such as a surface or a
compact solid spatial region. These operators act on the quantum states and, by
deliberate construction of the Hilbert space, are self-adjoint and possibly
unbounded operators. Using these operators and Einstein's field equations, we
can proceed to construct a quantized stress operator and also a Hamiltonian
constraint operator for the quantum system. We will also use the area operator
to derive the Bekenstein entropy of a black hole. In the concluding section, we
will explain how Loop Quantum Gravity predicts the existence of gravitons and
implies causality and locality in quantum gravity, and we will formulate the
principle of equivalence mathematically in its framework.
|
We use a replica trick construction to propose a definition of branch-point
twist operators in two dimensional momentum space and compute their two-point
function. The result is then tentatively interpreted as a pseudo R\'enyi
entropy for momentum modes.
|
Deep learning semantic segmentation algorithms can localise abnormalities or
opacities from chest radiographs. However, the task of collecting and
annotating training data is expensive and requires expertise which remains a
bottleneck for algorithm performance. We investigate the effect of image
augmentations on reducing the requirement of labelled data in the semantic
segmentation of chest X-rays for pneumonia detection. We train fully
convolutional network models on subsets of different sizes from the total
training data. We apply a different image augmentation while training each
model and compare it to the baseline trained on the entire dataset without
augmentations. We find that rotate and mixup are the best augmentations amongst
rotate, mixup, translate, gamma and horizontal flip, wherein they reduce the
labelled data requirement by 70% while performing comparably to the baseline in
terms of AUC and mean IoU in our experiments.
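As a concrete reference for one of the two winning augmentations, here is a minimal numpy sketch of mixup applied jointly to images and segmentation masks; the beta parameter alpha=0.4 and the per-batch pairing are common defaults, not necessarily the exact settings used in these experiments.

```python
# Mixup for semantic segmentation: convexly combine each image/mask pair
# with a randomly permuted partner from the same batch.
import numpy as np

def mixup(images, masks, alpha=0.4, rng=np.random.default_rng(0)):
    lam = rng.beta(alpha, alpha)              # mixing coefficient
    perm = rng.permutation(len(images))
    mixed_images = lam * images + (1 - lam) * images[perm]
    mixed_masks = lam * masks + (1 - lam) * masks[perm]   # soft targets
    return mixed_images, mixed_masks

# Example: a batch of 8 grayscale chest X-rays (128x128) with binary masks.
imgs = np.random.rand(8, 128, 128).astype(np.float32)
msks = np.random.randint(0, 2, (8, 128, 128)).astype(np.float32)
mixed_imgs, mixed_msks = mixup(imgs, msks)
```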
|
We investigate the behavior of vortex bound states in the quantum limit by
self-consistently solving the Bogoliubov-de Gennes equation. We find that the
energies of the vortex bound states deviate from the analytical result
$E_\mu=\mu\Delta^2/E_F$, with half-integer angular momentum $\mu$, in the
extreme quantum limit. Specifically, the energy ratio of the first three
orders is closer to $1:2:3$ than to $1:3:5$ at extremely low
temperature. The local density of states reveals a Friedel-like behavior
associated with that of the pair potential in the extreme quantum limit, which
is smoothed out by thermal effects above a certain temperature even when the
quantum-limit condition, namely $T/T_c<\Delta/E_F$, is still satisfied. Our
studies show that the vortex bound states can exhibit very distinct features in
different temperature regimes, which provides a comprehensive understanding and
should stimulate further experimental efforts at verification.
|
Predicting molecular conformations (or 3D structures) from molecular graphs
is a fundamental problem in many applications. Most existing approaches proceed
in two steps, first predicting the distances between atoms
and then generating a 3D structure by optimizing a distance geometry
problem. However, the distances predicted with such two-stage approaches may
fail to consistently preserve the geometry of local atomic
neighborhoods, making the generated structures unsatisfactory. In this paper, we
propose an end-to-end solution for molecular conformation prediction called
ConfVAE based on the conditional variational autoencoder framework.
Specifically, the molecular graph is first encoded in a latent space, and then
the 3D structures are generated by solving a principled bilevel optimization
program. Extensive experiments on several benchmark data sets demonstrate the
effectiveness of our proposed approach over existing state-of-the-art
approaches. Code is available at
\url{https://github.com/MinkaiXu/ConfVAE-ICML21}.
|
Search strategies for third-generation leptoquarks (LQs) are distinct
from other LQ searches, especially when the LQs decay to a top quark and a $\tau$
lepton. We investigate the cases of all TeV-scale scalar and vector LQs that
decay to either a top-tau pair (charge-$1/3$ and $5/3$ LQs) or a top-neutrino
pair (charge-$2/3$ LQs). One can then use the boosted top (which can be tagged
efficiently using jet-substructure techniques) and high-$p_{\rm T}$ $\tau$
leptons to search for these LQs. We consider two search channels with either
one or two taus along with at least one hadronically decaying boosted top
quark. We estimate the high luminosity LHC (HL-LHC) search prospects of these
LQs by considering both symmetric and asymmetric pair and single production
processes. Our selection criteria are optimised to retain events from both pair
and single production processes. The combined signal has better prospects than
the traditional searches. We include new three-body single production processes
to enhance the single production contributions to the combined signal. We
identify the interference effect that appears in the dominant single production
channel of the charge-$1/3$ scalar LQ ($S^{1/3}$). This interference is
constructive if $S^{1/3}$ is a weak triplet and destructive if it is a singlet.
As a result, their LHC prospects differ appreciably.
|
We present a detailed analysis to clarify what determines the growth of the
low-$T/|W|$ instability in the context of rapidly rotating core-collapse of
massive stars. To this end, we perform three-dimensional core-collapse
supernova (CCSN) simulations of a $27 M_{\odot}$ star including several updates
in the general relativistic correction to gravity, the multi-energy treatment
of heavy-lepton neutrinos, and the nuclear equation of state. Non-axisymmetric
deformations are analyzed from the point of view of the time evolution of the
pattern frequency and the corotation radius. The corotation radius is found to
coincide with the convective layer in the proto-neutron star (PNS). We propose
a new mechanism to account for the growth of the low-$T/|W|$ instability in the
CCSN environment. Near the convective boundary where a small
Brunt-V\"ais\"al\"a frequency is expected, Rossby waves propagating in the
azimuthal direction at mid latitude induce non-axisymmetric unstable modes, in
both hemispheres. They merge with each other and finally become the spiral arm
in the equatorial plane. We also investigate how the growth of the low-$T/|W|$
instability impacts the neutrino and gravitational-wave signatures.
|
An optical neural network is proposed and demonstrated with programmable
matrix transformation and nonlinear activation function of photodetection
(square-law detection). Based on discrete phase-coherent spatial modes, the
dimensionality of the programmable optical matrix operations is 30 to 37, which is
implemented by spatial light modulators. With this architecture, all-optical
classification tasks of handwritten digits, objects and depth images are
performed on the same platform with high accuracy. Due to the parallel nature
of matrix multiplication, the processing speed of our proposed architecture is
potentially as high as 7.4T to 74T FLOPs per second (with a 10 to 100 GHz detector).
|
In this note, we give a characterisation in terms of identities of the join
of $\mathbf{V}$ with the variety of finite locally trivial semigroups
$\mathbf{LI}$ for several well-known varieties of finite monoids $\mathbf{V}$
by using classical algebraic-automata-theoretic techniques. To achieve this, we
use the new notion of essentially-$\mathbf{V}$ stamps defined by Grosshans,
McKenzie and Segoufin and show that it actually coincides with the join of
$\mathbf{V}$ and $\mathbf{LI}$ precisely when some natural condition on the
variety of languages corresponding to $\mathbf{V}$ is verified. This work is a
kind of rediscovery of the work of J. C. Costa from around 20 years ago,
approached from a rather different angle, since Costa's work relies on the use of advanced
developments in profinite topology, whereas what is presented here essentially
uses an algebraic, language-based approach.
|
The Transiting Exoplanet Survey Satellite (\textit{TESS}) mission was
designed to perform an all-sky search of planets around bright and nearby
stars. Here we report the discovery of two sub-Neptunes orbiting TOI 1062
(TIC 299799658), a V=10.25 G9V star observed in TESS Sectors 1, 13, 27,
and 28. We use precise radial velocity observations from HARPS to confirm and
characterize these two planets. TOI 1062b has a radius of
2.265^{+0.095}_{-0.091} Re, a mass of 11.8 +/- 1.4 Me, and an orbital period of
4.115050 +/- 0.000007 days. The second planet is not transiting, has a minimum
mass of 7.4 +/- 1.6 Me and is near the 2:1 mean motion resonance with the
innermost planet with an orbital period of 8.13^{+0.02}_{-0.01} days. We
performed a dynamical analysis to explore the proximity of the system to this
resonance, and to attempt to further constrain the orbital parameters. The
transiting planet has a mean density of 5.58^{+1.00}_{-0.89} g cm^-3 and an
analysis of its internal structure reveals that it is expected to have a small
volatile envelope accounting for 0.35% of the mass at maximum. The star's
brightness and the proximity of the inner planet to the "radius gap" make it an
interesting candidate for transmission spectroscopy, which could further
constrain the composition and internal structure of TOI 1062b.
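As a quick consistency check, the quoted mean density follows directly from the reported mass and radius; the snippet below redoes the arithmetic with standard Earth constants.

```python
# Mean density of TOI 1062b from the quoted mass (11.8 Me) and radius
# (2.265 Re), converted to cgs units.
import math

M_earth_g = 5.972e27       # g
R_earth_cm = 6.371e8       # cm

M = 11.8 * M_earth_g
R = 2.265 * R_earth_cm
rho = M / (4.0 / 3.0 * math.pi * R**3)
print(f"mean density = {rho:.2f} g/cm^3")   # ~5.6, matching 5.58^{+1.00}_{-0.89}
```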
|
This paper describes the design, implementation, and verification of a
test-bed for determining the noise temperature of radio antennas operating
between 400 and 800 MHz. The requirements for this test-bed were driven by the HIRAX
experiment, which uses antennas with embedded amplification, making system
noise characterization difficult in the laboratory. The test-bed consists of
two large cylindrical cavities, each containing radio-frequency (RF) absorber
held at different temperatures (300 K and 77 K), allowing a measurement of system
noise temperature through the well-known 'Y-factor' method. The apparatus has
been constructed at Yale, and over the course of the past year has undergone
detailed verification measurements. To date, three preliminary noise
temperature measurement sets have been conducted using the system, putting us
on track to make the first noise temperature measurements of the HIRAX feed and
perform the first analysis of feed repeatability.
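The Y-factor method reduces to a one-line formula; the sketch below shows the computation with the two stated load temperatures and hypothetical measured powers (the power values are placeholders, not HIRAX data).

```python
# Y-factor noise-temperature measurement: two absorber loads at known
# temperatures yield the receiver noise temperature from a power ratio.
T_hot, T_cold = 300.0, 77.0       # K, the two absorber temperatures

P_hot, P_cold = 2.6e-9, 1.4e-9    # measured powers (W), hypothetical numbers
Y = P_hot / P_cold
T_rx = (T_hot - Y * T_cold) / (Y - 1.0)
print(f"Y = {Y:.3f}, receiver noise temperature = {T_rx:.1f} K")
```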
|
We establish concentration inequalities in the class of ultra log-concave
distributions. In particular, we show that ultra log-concave distributions
satisfy Poisson concentration bounds. As an application, we derive
concentration bounds for the intrinsic volumes of a convex body, which
generalizes and improves a result of Lotz, McCoy, Nourdin, Peccati, and Tropp
(2019).
|
What does bumping into things in a scene tell you about scene geometry? In
this paper, we investigate the idea of learning from collisions. At the heart
of our approach is the idea of collision replay, where we use examples of a
collision to provide supervision for observations at a past frame. We use
collision replay to train convolutional neural networks to predict a
distribution over collision time from new images. This distribution conveys
information about the navigational affordances (e.g., corridors vs open spaces)
and, as we show, can be converted into the distance function for the scene
geometry. We analyze this approach with an agent that has noisy actuation in a
photorealistic simulator.
|
We propose a leptoquark model with two scalar leptoquarks $S^{}_1 \left(
\bar{3},1,\frac{1}{3} \right)$ and $\widetilde{R}^{}_2 \left(3,2,\frac{1}{6}
\right)$ to give a combined explanation of neutrino masses, lepton flavor
mixing and the anomaly of muon $g-2$, satisfying the constraints from the
radiative decays of charged leptons. The neutrino masses are generated via
one-loop corrections resulting from a mixing between $S^{}_1$ and
$\widetilde{R}^{}_2$. With a set of specific textures for the leptoquark Yukawa
coupling matrices, the neutrino mass matrix possesses an approximate
$\mu$-$\tau$ reflection symmetry with $\left( M^{}_\nu \right)^{}_{ee} = 0$
only in favor of the normal neutrino mass ordering. We show that this model can
successfully explain the anomaly of muon $g-2$ and current experimental
neutrino oscillation data under the constraints from the radiative decays of
charged leptons.
|
Face detection is a crucial first step in many facial recognition and face
analysis systems. Early approaches for face detection were mainly based on
classifiers built on top of hand-crafted features extracted from local image
regions, such as Haar Cascades and Histogram of Oriented Gradients. However,
these approaches were not powerful enough to achieve high accuracy on images
from uncontrolled environments. With the breakthrough work in image
classification using deep neural networks in 2012, there has been a huge
paradigm shift in face detection. Inspired by the rapid progress of deep
learning in computer vision, many deep learning based frameworks have been
proposed for face detection over the past few years, achieving significant
improvements in accuracy. In this work, we provide a detailed overview of some
of the most representative deep learning based face detection methods by
grouping them into a few major categories, and present their core architectural
designs and accuracies on popular benchmarks. We also describe some of the most
popular face detection datasets. Finally, we discuss some current challenges in
the field, and suggest potential future research directions.
|
A Cayley (di)hypergraph is a hypergraph whose automorphism group contains
a subgroup acting regularly on the (hyper)vertices. In this paper, we study
Cayley (di)hypergraphs and their automorphism groups.
|
Purpose: Develop a processing scheme for Gradient Echo (GRE) phase to enable
restoration of susceptibility-related (SuR) features in regions affected by
imperfect phase unwrapping, background suppression and low signal-to-noise
ratio (SNR) due to phase dispersion. Theory and Methods: The predictable
components sampled across the echo dimension in a multi-echo GRE sequence are
recovered by rank minimizing a Hankel matrix formed using the complex
exponential of the background suppressed phase. To estimate the single
frequency component that relates to the susceptibility induced field, it is
required to maintain consistency with the measured phase after background
suppression, penalized by a unity rank approximation (URA) prior. This is
formulated as an optimization problem, implemented using the alternating
direction method of multipliers (ADMM). Results: With in vivo multi-echo GRE
data, the magnitude susceptibility weighted image (SWI) reconstructed using URA
prior shows additional venous structures that are obscured due to phase
dispersion and noise in regions subject to remnant non-local field variations.
The performance is compared with the susceptibility map weighted imaging (SMWI)
and the standard SWI. It is also shown using numerical simulation that
quantitative susceptibility map (QSM) computed from the reconstructed phase
exhibits reduced artifacts and quantification error. In vivo experiments reveal
iron depositions in the insula, motor cortex, and superior frontal gyrus that are
not identified in standard QSM. Conclusion: URA-processed GRE phase is less
sensitive to imperfections in the phase pre-processing techniques, and thereby
enables robust estimation of SWI and QSM.
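A minimal per-voxel illustration of the unity-rank idea follows: the complex exponential of the background-suppressed multi-echo phase is arranged in a Hankel matrix whose rank-1 approximation isolates a single frequency component. This is a bare SVD truncation under assumed echo timings, not the full ADMM-regularized reconstruction described above.

```python
# Unity-rank (rank-1) Hankel approximation of a noisy multi-echo phase
# signal, recovering the single susceptibility-induced frequency.
import numpy as np

rng = np.random.default_rng(1)
n_echoes, delta_te = 8, 5e-3                  # echo count and spacing (s), assumed
f_true = 12.0                                 # susceptibility-induced shift (Hz)
te = delta_te * np.arange(1, n_echoes + 1)
phase = 2 * np.pi * f_true * te + 0.3 * rng.normal(size=n_echoes)
s = np.exp(1j * phase)                        # complex exponential of the phase

# Hankel matrix of the signal, then its best rank-1 approximation.
L = n_echoes // 2 + 1
H = np.array([s[i:i + n_echoes - L + 1] for i in range(L)])
U, sv, Vh = np.linalg.svd(H, full_matrices=False)

# The leading left singular vector is a sampled complex exponential;
# its mean phase increment estimates the frequency.
incr = np.angle(U[1:, 0] * np.conj(U[:-1, 0])).mean()
print("estimated shift:", incr / (2 * np.pi * delta_te), "Hz")
```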
|
We provide a comprehensive analysis of the two-parameter Beta distributions
seen from the perspective of second-order stochastic dominance. By changing its
parameters through a bijective mapping, we work with a bounded subset D instead
of an unbounded plane. We show that a mean-preserving spread is equivalent to
an increase of the variance, which means that higher moments are irrelevant to
compare the riskiness of Beta distributions. We then derive the lattice
structure induced by second-order stochastic dominance, which is feasible
thanks to the topological closure of D. Finally, we consider a standard
(expected-utility based) portfolio optimization problem whose inputs are
the parameters of the Beta distribution. We explicitly characterize the subset
of D for which the optimal solution consists of investing 100% of the wealth in
the risky asset and we provide an exhaustive numerical analysis of this optimal
solution through (color-coded) graphs.
|
We report the growth, structural and magnetic properties of the less studied
Eu-oxide phase, Eu$_3$O$_4$, thin films grown on a Si/SiO$_2$ substrate and
Si/SiO$_2$/graphene using molecular beam epitaxy. The X-ray diffraction scans
show that highly-textured crystalline Eu$_3$O$_4$(001) films are grown on both
substrates, whereas the film deposited on graphene has a better crystallinity
than that grown on the Si/SiO$_2$ substrate. The SQUID measurements show that
both films have a Curie temperature of about 5.5 K, with a magnetic moment of
0.0032 emu/g at 2 K. The mixed-valency of the Eu cations has been confirmed by
the qualitative analysis of the depth-profile X-ray photoelectron spectroscopy
measurements, yielding an Eu$^{2+}$:Eu$^{3+}$ ratio of 28:72. However,
surprisingly, our films show no metamagnetic behaviour as reported for the bulk
and powder form. Furthermore, the Raman spectroscopy scans show that the growth
of the Eu$_3$O$_4$ thin films has no damaging effect on the underlayer graphene
sheet. Therefore, the graphene layer is expected to retain its properties.
|
We study the Choquard equation with a local perturbation \begin{equation*}
-\Delta u=\lambda u+(I_\alpha\ast|u|^p)|u|^{p-2}u+\mu|u|^{q-2}u,\ x\in
\mathbb{R}^{N} \end{equation*} having prescribed mass \begin{equation*}
\int_{\mathbb{R}^N}|u|^2dx=a^2. \end{equation*} For a $L^2$-critical or
$L^2$-supercritical perturbation $\mu|u|^{q-2}u$, we prove nonexistence,
existence and symmetry of normalized ground states, by using the mountain pass
lemma, the Poho\v{z}aev constraint method, the Schwartz symmetrization
rearrangements and some theories of polarizations. In particular, our results
cover the Hardy-Littlewood-Sobolev upper critical exponent case
$p=(N+\alpha)/(N-2)$. Our results are a nonlocal counterpart of the results in
\cite{{Li 2021-4},{Soave JFA},{Wei-Wu 2021}}.
|
We study an invariant of compact metric spaces which combines the notion of
curvature sets introduced by Gromov in the 1980s together with the notion of
Vietoris-Rips persistent homology. For given integers $k\geq 0$ and $n\geq 1$
these invariants arise by considering the degree $k$ Vietoris-Rips persistence
diagrams of all subsets of a given metric space with cardinality at most $n$.
We call these invariants \emph{persistence sets} and denote them as
$D_{n,k}^\mathrm{VR}$. We argue that computing these invariants could be
significantly easier than computing the usual Vietoris-Rips persistence
diagrams. We establish stability results for these invariants, and we also
precisely characterize some of them in the case of spheres with geodesic and
Euclidean distances. We identify a rich family of metric graphs for which
$D_{4,1}^{\mathrm{VR}}$ fully recovers their homotopy type. Along the way we
prove some useful properties of Vietoris-Rips persistence diagrams.
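A direct Monte Carlo approximation of a persistence set is straightforward to sketch: sample subsets of size at most $n$ and pool their degree-$k$ Vietoris-Rips diagrams. The snippet below does this for $D_{4,1}^{\mathrm{VR}}$ of points on a circle; it assumes the third-party `ripser` package, and a faithful computation would enumerate all subsets rather than sample.

```python
# Monte Carlo sketch of the persistence set D_{n,k}^{VR}: pool the degree-k
# Vietoris-Rips diagrams of random size-n subsets of a metric space.
import numpy as np
from ripser import ripser   # assumes the `ripser` package is installed

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 200)
circle = np.c_[np.cos(theta), np.sin(theta)]    # points on S^1

n, k, trials = 4, 1, 300                        # approximate D_{4,1}^{VR}
persistence_set = []
for _ in range(trials):
    idx = rng.choice(len(circle), size=n, replace=False)
    dgm = ripser(circle[idx], maxdim=k)['dgms'][k]
    persistence_set.extend(dgm.tolist())

print(f"{len(persistence_set)} degree-{k} diagram points collected")
```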
|
Let $K$ be the connected sum of knots $K_1,\ldots,K_n$. It is known that the
$\mathrm{SL}_2(\mathbb{C})$-character variety of the knot exterior of $K$ has a
component of dimension $\geq 2$ as the connected sum admits a so-called
bending. We show that there is a natural way to define the adjoint Reidemeister
torsion for such a high-dimensional component and prove that it is locally
constant on a subset of the character variety where the trace of a meridian is
constant. We also prove that the adjoint Reidemeister torsion of $K$ satisfies
the vanishing identity if each $K_i$ does so.
|
Grammatical error correction (GEC) suffers from a lack of sufficient parallel
data. Therefore, GEC studies have developed various methods to generate pseudo
data, which comprise pairs of grammatical and artificially produced
ungrammatical sentences. Currently, a mainstream approach to generate pseudo
data is back-translation (BT). Most previous GEC studies using BT have employed
the same architecture for both GEC and BT models. However, GEC models have
different correction tendencies depending on their architectures. Thus, in this
study, we compare the correction tendencies of the GEC models trained on pseudo
data generated by different BT models, namely, Transformer, CNN, and LSTM. The
results confirm that the correction tendencies for each error type are
different for every BT model. Additionally, we examine the correction
tendencies when using a combination of pseudo data generated by different BT
models. As a result, we find that the combination of different BT models
improves or interpolates the F_0.5 scores of each error type compared with those
of single BT models with different seeds.
|
Deep learning recommendation models (DLRMs) are used across many
business-critical services at Facebook and are the single largest AI
application in terms of infrastructure demand in its data-centers. In this
paper we discuss the SW/HW co-designed solution for high-performance
distributed training of large-scale DLRMs. We introduce a high-performance
scalable software stack based on PyTorch and pair it with the new evolution of
Zion platform, namely ZionEX. We demonstrate the capability to train very large
DLRMs with up to 12 trillion parameters and show that we can attain a 40X speedup
in time to solution over previous systems. We achieve this by (i)
designing the ZionEX platform with a dedicated scale-out network, provisioned
with high bandwidth, optimal topology, and efficient transport; (ii) implementing
an optimized PyTorch-based training stack supporting both model and data
parallelism; (iii) developing sharding algorithms capable of hierarchical
partitioning of the embedding tables along row and column dimensions and
load-balancing them across multiple workers; (iv) adding high-performance core
operators while retaining flexibility to support optimizers with fully
deterministic updates; and (v) leveraging reduced-precision communications, a
multi-level memory hierarchy (HBM+DDR+SSD), and pipelining. Furthermore, we
develop and briefly comment on distributed data ingestion and other supporting
services that are required for the robust and efficient end-to-end training in
production environments.
|
Bitcoin and Ethereum transactions present one of the largest real-world
complex networks that are publicly available for study, including a detailed
picture of their time evolution. As such, they have received a considerable
amount of attention from the network science community, besides analyses from an
economic or cryptographic perspective. Among these studies, in an analysis of
an early instance of the Bitcoin network, we showed the clear presence of
the preferential attachment, or "rich-get-richer", phenomenon. Now, we revisit
this question, using a recent version of the Bitcoin network that has grown
almost 100-fold since our original analysis. Furthermore, we additionally carry
out a comparison with Ethereum, the second most important cryptocurrency. Our
results show that preferential attachment continues to be a key factor in the
evolution of both the Bitcoin and Ethereum transaction networks. To facilitate
further analysis, we publish a recent version of both transaction networks, and
an efficient software implementation that is able to evaluate the linking
statistics necessary for learning about preferential attachment on networks with
several hundred million edges.
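The linking statistic in question can be illustrated on a toy graph: track, for each degree k, how often a degree-k node is available versus how often it actually receives the new edge. An attachment rate growing roughly linearly in k is the "rich-get-richer" signature. Everything below is a schematic stand-in for the published large-scale implementation.

```python
# Toy measurement of the preferential-attachment kernel on a growing graph:
# attachment rate per exposure should grow roughly linearly with degree.
from collections import defaultdict
import random

random.seed(0)
deg = {0: 1, 1: 1}                      # seed graph: a single edge
attach = defaultdict(int)               # new edges received, by degree
expose = defaultdict(int)               # node-steps spent at that degree

for t in range(2, 3000):
    nodes = list(deg)
    for v in nodes:
        expose[deg[v]] += 1             # each existing node is "at risk"
    target = random.choices(nodes, weights=[deg[v] for v in nodes])[0]
    attach[deg[target]] += 1
    deg[target] += 1
    deg[t] = 1                          # the newly arriving node

for k in sorted(set(attach) & set(expose))[:8]:
    print(k, attach[k] / expose[k])     # rate grows roughly linearly in k
```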
|
The distributed hardware of acoustic sensor networks suffers from inconsistency
of the local sampling frequencies, which is detrimental to signal processing. Fundamentally,
sampling rate offset (SRO) nonlinearly relates the discrete-time signals
acquired by different sensor nodes. As such, retrieval of SRO from the
available signals requires nonlinear estimation, like double-cross-correlation
processing (DXCP), and frequently results in biased estimation. SRO
compensation by asynchronous sampling rate conversion (ASRC) on the signals
then leaves an unacceptable residual. As a remedy to this problem, multi-stage
procedures have been devised to diminish the SRO residual with multiple
iterations of SRO estimation and ASRC over the entire signal. This paper
converts the mechanism of offline multi-stage processing into a continuous
feedback-control loop comprising a controlled ASRC unit followed by an online
implementation of DXCP-based SRO estimation. To support the design of an
optimum internal model control unit for this closed-loop system, the paper
deploys an analytical dynamical model of the proposed online DXCP. The
resulting control architecture then merely applies a single treatment of each
signal frame, while efficiently diminishing SRO bias with time. Evaluations
with both speech and Gaussian input demonstrate that the high accuracy of
multi-stage processing is maintained at the low complexity of single-stage
(open-loop) processing.
|
We examine a class of random walks in random environments on $\mathbb{Z}$
with bounded jumps, a generalization of the classic one-dimensional model. The
environments we study have i.i.d. transition probability vectors drawn from
Dirichlet distributions. For this model, we characterize recurrence and
transience, and in the transient case we characterize ballisticity. For
ballisticity, we give two parameters, $\kappa_0$ and $\kappa_1$. The parameter
$\kappa_0$ governs finite trapping effects, and $\kappa_1$ governs repeated
traversals of arbitrarily large regions of the graph. We show that the walk is
right-transient if and only if $\kappa_1>0$, and in that case it is ballistic
if and only if $\min(\kappa_0,\kappa_1)>1$.
|
G0.253+0.016, aka 'the Brick', is one of the most massive (> 10^5 Msun) and
dense (> 10^4 cm-3) molecular clouds in the Milky Way's Central Molecular Zone.
Previous observations have detected tentative signs of active star formation,
most notably a water maser that is associated with a dust continuum source. We
present ALMA Band 6 observations with an angular resolution of 0.13" (1000 AU)
towards this 'maser core', and report unambiguous evidence of active star
formation within G0.253+0.016. We detect a population of eighteen continuum
sources (median mass ~ 2 Msun), nine of which are driving bipolar molecular
outflows as seen via SiO (5-4) emission. At the location of the water maser, we
find evidence for a protostellar binary/multiple with multi-directional outflow
emission. Despite the high density of G0.253+0.016, we find no evidence for
high-mass protostars in our ALMA field. The observed sources are instead
consistent with a cluster of low-to-intermediate-mass protostars. However, the
measured outflow properties are consistent with those expected for
intermediate-to-high-mass star formation. We conclude that the sources are
young and rapidly accreting, and may potentially form intermediate and
high-mass stars in the future. The masses and projected spatial distribution of
the cores are generally consistent with thermal fragmentation, suggesting that
the large-scale turbulence and strong magnetic field in the cloud do not
dominate on these scales, and that star formation on the scale of individual
protostars is similar to that in Galactic disc environments.
|
Authenticated Append-Only Skiplists (AAOSLs) enable maintenance and querying
of an authenticated log (such as a blockchain) without requiring any single
party to store or verify the entire log, or to trust another party regarding
its contents. AAOSLs can help to enable efficient dynamic participation (e.g.,
in consensus) and reduce storage overhead.
In this paper, we formalize an AAOSL originally described by Maniatis and
Baker, and prove its key correctness properties. Our model and proofs are
machine checked in Agda. Our proofs apply to a generalization of the original
construction and provide confidence that instances of this generalization can
be used in practice. Our formalization effort has also yielded some
simplifications and optimizations.
|
We define a spectral flow for paths of selfadjoint Fredholm operators that
are equivariant under the orthogonal action of a compact Lie group as an
element of the representation ring of the latter. This $G$-equivariant spectral
flow shares all common properties of the integer valued classical spectral
flow, and it can be non-trivial even if the classical spectral flow vanishes.
Our main theorem uses the $G$-equivariant spectral flow to study bifurcation of
periodic solutions for autonomous Hamiltonian systems with symmetries.
|
Deterioration of the operation parameters of Al/SiO2/p-type Si surface
barrier detector upon irradiation with alpha-particles at room temperature was
investigated. As a result of 40 days of irradiation with a total fluence of 8*10^9
{\alpha}-particles, an increase of the {\alpha}-peak FWHM from 70 keV to 100 keV
was observed and explained by an increase of the detector reverse current due to
the formation of a high concentration of near-midgap defect levels. CV
measurements revealed the appearance of at least 6*10^12 cm-3 radiation-induced
acceptors at the depths where, according to the TRIM simulations, the highest
concentration of vacancy-interstitial pairs was created by the incoming
{\alpha}-particles. Studies carried out with the current-DLTS technique allowed
us to associate the observed increase of the acceptor concentration with the near
midgap acceptor level at $E_V+0.56$ eV. This level can apparently be associated
with V2O defects, previously recognized as responsible for the space-charge
sign inversion in irradiated n-type Si detectors.
|
For a function $f\colon [0,1]\to\mathbb R$, we consider the set $E(f)$ of
points at which $f$ cuts the real axis. Given $f\colon [0,1]\to\mathbb R$ and a
Cantor set $D\subset [0,1]$ with $\{0,1\}\subset D$, we obtain conditions
equivalent to the conjunction $f\in C[0,1]$ (or $f\in C^\infty [0,1]$) and
$D\subset E(f)$. This generalizes some ideas of Zabeti. We observe that, if $f$
is continuous, then $E(f)$ is a closed nowhere dense subset of $f^{-1}[\{ 0\}]$
where each $x\in \{0,1\}\cap E(f)$ is an accumulation point of $E(f)$. Our main
result states that, for a closed nowhere dense set $F\subset [0,1]$ with each
$x\in \{0,1\}\cap F$ being an accumulation point of $F$, there exists $f\in
C^\infty [0,1]$ such that $F=E(f)$.
|
Graph-based causal discovery methods aim to capture conditional
independencies consistent with the observed data and differentiate causal
relationships from indirect or induced ones. Successful construction of
graphical models of data depends on the assumption of causal sufficiency: that
is, that all confounding variables are measured. When this assumption is not
met, learned graphical structures may become arbitrarily incorrect and effects
implied by such models may be wrongly attributed, carry the wrong magnitude, or
misrepresent the direction of correlation. The wide application of graphical models to
increasingly less curated "big data" draws renewed attention to the unobserved
confounder problem.
We present a novel method that aims to control for the latent space when
estimating a DAG by iteratively deriving proxies for the latent space from the
residuals of the inferred model. Under mild assumptions, our method improves
structural inference of Gaussian graphical models and enhances identifiability
of the causal effect. In addition, when the model is being used to predict
outcomes, it un-confounds the coefficients on the parents of the outcomes and
leads to improved predictive performance when the out-of-sample regime is very
different from the training data. We show that any improvement of prediction of
an outcome is intrinsically capped and cannot rise beyond a certain limit as
compared to the confounded model. We extend our methodology beyond GGMs to
ordinal variables and nonlinear cases. Our R package provides both PCA and
autoencoder implementations of the methodology: the former is suited to GGMs
and comes with some guarantees, while the latter offers better performance in
general cases but without such guarantees.
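A bare-bones, purely linear/PCA flavor of one iteration might look as follows: residualize each observed variable on the others, take the leading principal component of the residual matrix as a proxy for the latent confounder, and feed it back as an observed covariate. The simulated data-generating model and the single round shown are illustrative assumptions, not the package's actual algorithm, and the proxy only partially recovers the confounder here.

```python
# One round of a residual-proxy iteration (PCA flavor) on synthetic data
# with a single latent confounder z loading on every observed variable.
import numpy as np

rng = np.random.default_rng(0)
n, p = 2000, 4
z = rng.normal(size=n)                           # unobserved confounder
X = z[:, None] + 2.0 * rng.normal(size=(n, p))   # every variable loads on z

resid = np.empty_like(X)
for j in range(p):
    others = np.delete(X, j, axis=1)             # residualize node j on the rest
    beta, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
    resid[:, j] = X[:, j] - others @ beta
resid -= resid.mean(axis=0)
_, _, Vt = np.linalg.svd(resid, full_matrices=False)
proxy = resid @ Vt[0]                            # shared residual direction
print("corr(proxy, z):", abs(np.corrcoef(proxy, z)[0, 1]))
# The next iteration would append `proxy` as a covariate and re-estimate.
```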
|
The Multichannel Subtractive Double Pass (MSDP) is an imaging spectroscopy
technique, which allows observations of spectral line profiles over a 2D field
of view with high spatial and temporal resolution. It has been intensively used
since 1977 on various spectrographs (Meudon, Pic du Midi, the German Vacuum
Tower Telescope, THEMIS, Wroc{\l}aw). We summarize previous developments and
describe the capabilities of a new design that has been developed at Meudon and
that has higher spectral resolution and increased channel number: Spectral
Sampling with Slicer for Solar Instrumentation (S4I), which can be combined
with a new and fast polarimetry analysis. This new generation MSDP technique is
well adapted to large telescopes. Also presented are the goals of a derived
compact version of the instrument, the Solar Line Emission Dopplerometer
(SLED), dedicated to dynamic studies of coronal loops observed in the forbidden
iron lines, and prominences. It is designed for observing total solar eclipses,
and for deployment on the Wroc{\l}aw and Lomnicky peak coronagraphs
respectively for prominence and coronal observations.
|
We exhibit a non-hyperelliptic curve $C$ of genus 3 such that the class of the
Ceresa cycle $[C]-[(-1)^*C]$ in $JC$ modulo algebraic equivalence is torsion.
|
This work presents the results of project CONECT4, which addresses the
research and development of new non-intrusive communication methods for the
generation of a human-machine learning ecosystem oriented to predictive
maintenance in the automotive industry. Through the use of innovative
technologies such as Augmented Reality, Virtual Reality, Digital Twin and
expert knowledge, CONECT4 implements methodologies that allow improving the
efficiency of training techniques and knowledge management in industrial
companies. The research has been supported by the development of content and
systems with a low level of technological maturity that address solutions for
the industrial sector applied in training and assistance to the operator. The
results have been analyzed in companies in the automotive sector, however, they
are exportable to any other type of industrial sector.
|
This paper reports 209 O-type stars found with LAMOST. All 135 new O-type
stars discovered with LAMOST so far are given; among them, 94 stars are
presented for the first time in this sample. There are 1 Iafpe star, 5 Onfp
stars, 12 Oe stars, 1 Ofc star, 3 ON stars, 16 double-lined spectroscopic
binaries, and 33 single-lined spectroscopic binaries. All O-type stars are
determined based on LAMOST low-resolution spectra (R ~ 1800), with their
LAMOST medium-resolution spectra (R ~ 7500) as supplements.
|
The worldwide COVID-19 pandemic has strongly intensified the study of molecular
mechanisms related to coronaviruses. The origin of coronaviruses and the
risks of human-to-human, animal-to-human, and human-to-animal transmission of
coronaviral infections can be understood only on a broader evolutionary level
by detailed comparative studies. In this paper, we studied ribonucleocapsid
assembly-packaging signals (RNAPS) in the genomes of all seven known pathogenic
human coronaviruses, SARS-CoV, SARS-CoV-2, MERS-CoV, HCoV-OC43, HCoV-HKU1,
HCoV-229E, and HCoV-NL63 and compared them with RNAPS in the genomes of the
related animal coronaviruses including SARS-Bat-CoV, MERS-Camel-CoV, MHV,
Bat-CoV MOP1, TGEV, and one of the camel alphacoronaviruses. RNAPS in the genomes
of coronaviruses evolved through weakly specific interactions between
genomic RNA and N proteins in helical nucleocapsids. Combining transitional
genome mapping and Jaccard correlation coefficients allows us to perform the
analysis directly in terms of underlying motifs distributed over the genome. In
all coronaviruses, RNAPS were distributed quasi-periodically over the genome
with a period of about 54 nt, biased toward 57 nt and 51 nt for genomes longer
and shorter than that of SARS-CoV, respectively. The comparison with the
experimentally verified packaging signals for MERS-CoV, MHV, and TGEV proved
that the distribution of particular motifs is strongly correlated with the
packaging signals. We also found that many motifs are highly conserved in both
character and positioning on the genomes throughout the lineages, which makes
them promising therapeutic targets. The mechanisms of encapsidation can affect
the recombination and co-infection as well.
|
This paper establishes new connections between many-body quantum systems,
One-body Reduced Density Matrices Functional Theory (1RDMFT) and Optimal
Transport (OT), by interpreting the problem of computing the ground-state
energy of a finite dimensional composite quantum system at positive temperature
as a non-commutative entropy regularized Optimal Transport problem. We develop
a new approach to fully characterize the dual-primal solutions in such
non-commutative setting. The mathematical formalism is particularly relevant in
quantum chemistry: numerical realizations of the many-electron ground state
energy can be computed via a non-commutative version of the Sinkhorn algorithm.
Our approach allows us to prove convergence and robustness of this algorithm,
which, to the best of our knowledge, were unknown even in the two-marginal case. Our methods
are based on careful a priori estimates in the dual problem, which we believe
to be of independent interest. Finally, the above results are extended in
1RDMFT setting, where bosonic or fermionic symmetry conditions are enforced on
the problem.
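For orientation, the classical commutative Sinkhorn iteration that the non-commutative algorithm generalizes alternately rescales a Gibbs kernel to match two prescribed marginals. The sketch below is this scalar analogue only, not the operator-valued version analyzed in the paper.

```python
# Classical Sinkhorn iteration for entropy-regularized optimal transport
# between two discrete probability vectors mu and nu with cost matrix C.
import numpy as np

def sinkhorn(mu, nu, C, eps=0.1, iters=500):
    K = np.exp(-C / eps)                 # Gibbs kernel
    u = np.ones_like(mu)
    for _ in range(iters):
        v = nu / (K.T @ u)               # match second marginal
        u = mu / (K @ v)                 # match first marginal
    return u[:, None] * K * v[None, :]   # transport plan

rng = np.random.default_rng(0)
mu = np.full(5, 0.2)
nu = rng.dirichlet(np.ones(5))
C = (np.arange(5)[:, None] - np.arange(5)[None, :]) ** 2.0
P = sinkhorn(mu, nu, C)
print(P.sum(axis=1), P.sum(axis=0))      # recovers mu and nu
```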
|
Stochastic gradient Markov chain Monte Carlo (SGMCMC) is a popular class of
algorithms for scalable Bayesian inference. However, these algorithms include
hyperparameters such as step size or batch size that influence the accuracy of
estimators based on the obtained posterior samples. As a result, these
hyperparameters must be tuned by the practitioner and currently no principled
and automated way to tune them exists. Standard MCMC tuning methods based on
acceptance rates cannot be used for SGMCMC, thus requiring alternative tools
and diagnostics. We propose a novel bandit-based algorithm that tunes the
SGMCMC hyperparameters by minimizing the Stein discrepancy between the true
posterior and its Monte Carlo approximation. We provide theoretical results
supporting this approach and assess various Stein-based discrepancies. We
support our results with experiments on both simulated and real datasets, and
find that this method is practical for a wide range of applications.
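A stripped-down version of the tuning loop can be sketched for SGLD on a one-dimensional Gaussian target: run the sampler at several step sizes and keep the one minimizing an RBF-kernel Stein discrepancy. The grid search below stands in for the paper's bandit algorithm, and all settings are illustrative.

```python
# Step-size selection for SGLD by minimizing a kernel Stein discrepancy
# (KSD, RBF kernel) against a 1-D Gaussian target with known score.
import numpy as np

rng = np.random.default_rng(0)
mu_t, sig2 = 1.0, 2.0
score = lambda x: -(x - mu_t) / sig2             # grad log p of the target

def sgld(step, n=2000):
    x, out = 0.0, np.empty(n)
    for i in range(n):
        x += 0.5 * step * score(x) + np.sqrt(step) * rng.normal()
        out[i] = x
    return out[n // 2:]                          # discard burn-in

def ksd2(x, h=1.0):
    d = x[:, None] - x[None, :]
    k = np.exp(-d**2 / (2 * h**2))
    s = score(x)
    up = (k * np.outer(s, s)                     # s(x) s(y) k
          + s[:, None] * (d / h**2) * k          # s(x) * dk/dy
          - (d / h**2) * k * s[None, :]          # dk/dx * s(y)
          + k * (1 / h**2 - d**2 / h**4))        # d2k/dxdy
    return up.mean()

for step in [1e-3, 1e-2, 1e-1, 1.0]:
    print(step, ksd2(sgld(step)))                # pick the smallest KSD
```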
|
Observations of the redshifted 21-cm line of neutral hydrogen (HI) are a new
and powerful window of observation that offers us the possibility to map the
spatial distribution of cosmic HI and learn about cosmology. BINGO (Baryon
Acoustic Oscillations [BAO] from Integrated Neutral Gas Observations) is a new
unique radio telescope designed to be one of the first to probe BAO at radio
frequencies. BINGO has two science goals: cosmology and astrophysics. Cosmology
is the main science goal and the driver for BINGO's design and strategy. The
key goal of BINGO is to detect the low-redshift BAO to put strong constraints on
dark sector models. Given the versatility of the BINGO telescope, a secondary
goal is astrophysics, where BINGO can help discover and study Fast Radio Bursts
(FRB) and other transients, as well as Galactic and extragalactic science. In
this paper, we introduce the BINGO project and its science goals, describing
the scientific potential in each area and giving a general summary of recent
developments in construction, science potential, and pipeline development
obtained by the collaboration in the past few years. We show that BINGO will be
able to obtain competitive constraints on the dark sector, and also that it
will allow for the discovery of several FRBs in the southern hemisphere. The
capacity of BINGO to obtain information from the 21-cm signal is also tested in
the pipeline introduced here. There is still no measurement of the BAO in
radio, and studying cosmology
in this new window of observations is one of the most promising advances in the
field. The BINGO project is a radio telescope that has the goal to be one of
the first to perform this measurement and it is currently being built in the
northeast of Brazil. (Abridged)
|
We explore cross-lingual transfer of register classification for web
documents. Registers, that is, text varieties such as blogs or news, are among
the primary predictors of linguistic variation and thus affect the automatic
processing of language. We introduce two new register-annotated corpora,
FreCORE and SweCORE, for French and Swedish. We demonstrate that deep
pre-trained language models perform strongly in these languages and outperform
previous state-of-the-art in English and Finnish. Specifically, we show 1) that
zero-shot cross-lingual transfer from the large English CORE corpus can match
or surpass previously published monolingual models, and 2) that lightweight
monolingual classification requiring very little training data can reach or
surpass our zero-shot performance. We further analyse the classification results,
finding that certain registers continue to pose challenges, in particular for
cross-lingual transfer.
|
Compact binary systems emit gravitational radiation which is potentially
detectable by current Earth-bound detectors. Extracting these signals from the
instruments' background noise is a complex problem and the computational cost
of most current searches depends on the complexity of the source model. Deep
learning may be capable of finding signals where current algorithms hit
computational limits. Here we restrict our analysis to signals from
non-spinning binary black holes and systematically test different strategies by
which training data is presented to the networks. To assess the impact of the
training strategies, we re-analyze the first published networks and directly
compare them to an equivalent matched-filter search. We find that the deep
learning algorithms can generalize low signal-to-noise ratio (SNR) signals to
high SNR ones but not vice versa. As such, it is not beneficial to provide high
SNR signals during training, and fastest convergence is achieved when low SNR
samples are provided early on. During testing we found that the networks are
sometimes unable to recover any signals when a false alarm probability
$<10^{-3}$ is required. We resolve this restriction by applying a modification
we call unbounded Softmax replacement (USR) after training. With this
alteration we find that the machine learning search retains $\geq 97.5\%$ of
the sensitivity of the matched-filter search down to a false-alarm rate of 1
per month.
|
The position of the Sun inside the Milky Way's disc hampers the study of the
spiral arm structure. We aim to analyse the spiral arms along the line-of-sight
towards the Galactic centre (GC) to determine their distance, extinction, and
stellar population. We use the GALACTICNUCLEUS survey, a JHKs high angular
resolution photometric catalogue (0.2") for the innermost regions of the
Galaxy. We fitted simple synthetic colour-magnitude models to our data via
$\chi^2$ minimisation. We computed the distance and extinction to the detected
spiral arms. We also analysed the extinction curve and the relative extinction
between the detected features. Finally, we built extinction-corrected Ks
luminosity functions (KLFs) to study the stellar populations present in the
second and third spiral arm features. We determined the mean distances to the
spiral arms: $d1=1.6\pm0.2$, $d2=2.6\pm0.2$, $d3=3.9\pm0.3$, and $d4=4.5\pm0.2$
kpc, and the mean extinctions: $A_{H1}=0.35\pm0.08$, $A_{H2}=0.77\pm0.08$,
$A_{H3}=1.68\pm0.08$, and $A_{H4}=2.30\pm0.08$ mag. We analysed the extinction
curve in the near infrared for the stars in the spiral arms and found mean
values of $A_J/A_{H}=1.89\pm0.11$ and $A_H/A_{K_s}=1.86\pm0.11$, in agreement
with the results obtained for the GC. This implies that the shape of the
extinction curve does not depend on distance or absolute extinction. We also
built extinction maps for each spiral arm and obtained that they are
homogeneous and might correspond to independent extinction layers. Finally,
analysing the KLFs from the second and the third spiral arms, we found that
they have similar stellar populations. We obtained two main episodes of star
formation: $>6$ Gyr ($\sim60-70\%$ of the stellar mass), and $1.5-4$ Gyr
($\sim20-30\%$ of the stellar mass), compatible with previous work. We also
detected recent star formation at a lower level ($\sim10\%$) for the third
spiral arm.
|
We present a comprehensive comparison of spin and energy dynamics in quantum
and classical spin models on different geometries, ranging from one-dimensional
chains, over quasi-one-dimensional ladders, to two-dimensional square lattices.
Focusing on dynamics at formally infinite temperature, we particularly consider
the autocorrelation functions of local densities, where the time evolution is
governed either by the linear Schr\"odinger equation in the quantum case, or
the nonlinear Hamiltonian equations of motion in the case of classical
mechanics. While, in full generality, a quantitative agreement between quantum
and classical dynamics can therefore not be expected, our large-scale numerical
results for spin-$1/2$ systems with up to $N = 36$ lattice sites in fact defy
this expectation. Specifically, we observe a remarkably good agreement for all
geometries, which is best for the nonintegrable quantum models in quasi-one or
two dimensions, but still satisfactory in the case of integrable chains, at
least if transport properties are not dominated by the extensive number of
conservation laws. Our findings indicate that classical or semi-classical
simulations provide a meaningful strategy to analyze the dynamics of quantum
many-body models, even in cases where the spin quantum number $S = 1/2$ is
small and far away from the classical limit $S \to \infty$.
|
Event coreference continues to be a challenging problem in information
extraction. With the absence of any external knowledge bases for events,
coreference becomes a clustering task that relies on effective representations
of the context in which event mentions appear. Recent advances in
contextualized language representations have proven successful in many tasks;
however, their use in event linking has been limited. Here we present a
three-part approach that (1) uses representations derived from a pretrained
BERT model to (2) train a neural classifier to (3) drive a simple clustering
algorithm to create coreference chains. We achieve state-of-the-art results
with this model on two standard datasets for the within-document event
coreference task and establish a new standard on a third, newer dataset.
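A minimal sketch of such a pipeline, assuming hypothetical mention texts and gold pair labels; it uses a pretrained BERT encoder, a small pairwise classifier, and agglomerative clustering over the predicted coreference scores (the keyword for a precomputed metric is `metric` in recent scikit-learn versions, `affinity` in older ones):

```python
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import AgglomerativeClustering

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

def embed(mentions):
    """(1) Contextual representations: mean-pooled BERT vectors per mention."""
    enc = tok(mentions, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = bert(**enc).last_hidden_state          # (batch, seq, 768)
    mask = enc["attention_mask"].unsqueeze(-1)
    return ((out * mask).sum(1) / mask.sum(1)).numpy()

def pair_features(E, pairs):
    return np.array([np.abs(E[i] - E[j]) for i, j in pairs])

# (2) Train a pairwise coreference classifier (hypothetical toy labels).
mentions = ["bombing in the capital", "the attack", "a trade deal", "the agreement"]
E = embed(mentions)
pairs = [(0, 1), (0, 2), (1, 3), (2, 3)]
labels = [1, 0, 0, 1]                                # 1 = coreferent
clf = LogisticRegression().fit(pair_features(E, pairs), labels)

# (3) Cluster mentions using 1 - P(coref) as a distance matrix.
n = len(mentions)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        p = clf.predict_proba(pair_features(E, [(i, j)]))[0, 1]
        D[i, j] = D[j, i] = 1.0 - p
chains = AgglomerativeClustering(n_clusters=None, distance_threshold=0.5,
                                 metric="precomputed", linkage="average").fit(D)
print(chains.labels_)   # cluster ids = coreference chains
```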
|
Safety is a fundamental requirement in any human-robot collaboration
scenario. To ensure the safety of users for such scenarios, we propose a novel
Virtual Barrier system facilitated by an augmented reality interface. Our
system provides two kinds of Virtual Barriers to ensure safety: 1) a Virtual
Person Barrier which encapsulates and follows the user to protect them from
colliding with the robot, and 2) Virtual Obstacle Barriers which users can
spawn to protect objects or regions that the robot should not enter. To enable
effective human-robot collaboration, our system includes an intuitive robot
programming interface utilizing speech commands and hand gestures, and features
the capability of automatic path re-planning when potential collisions are
detected as a result of a barrier intersecting the robot's planned path. We
compared our novel system with a standard 2D display interface through a user
study, where participants performed a task mimicking an industrial
manufacturing procedure. Results show that our system increases the user's
sense of safety and task efficiency, and makes the interaction more intuitive.
|
We show that the change of basis matrices of a set of $m$ bases of a finite
vector space form a connected groupoid of order $m^2$. We define a general method
to express the elements of change of basis matrices as algebraic expressions
using optimizations of evaluations of vector dot products. Examples are given
with orthogonal polynomials.
|
We experimentally observe the dipole scattering of a nanoparticle using a
high numerical aperture (NA) imaging system. The optically levitated
nanoparticle provides an environment free of particle-substrate interaction. We
illuminate the silica nanoparticle in vacuum with a 532 nm laser beam
orthogonally to the propagation direction of the 1064 nm trapping laser beam
strongly focused by the same high NA objective used to collect the scattering,
which results in a dark background and a high signal-to-noise ratio. The dipole
orientations of the nanoparticle induced by the linear polarization of the
incident laser are studied by measuring the scattering light distribution in
the image and the Fourier space (k-space) as we rotate the illuminating light
polarization. The polarization vortex (vector beam) is observed for the special
case when the dipole orientation of the nanoparticle is aligned along the
optical axis of the microscope objective. Our work offers an important platform
for studying the scattering anisotropy with Kerker conditions.
|
We prove that the Feynman Path Integral is equivalent to a novel stringy
description of elementary particles characterized by a single compact (cyclic)
world-line parameter playing the role of the particle internal clock. Such a
possible description of elementary particles as characterized by intrinsic
periodicity in time has been indirectly confirmed, even experimentally, by
recent developments on Time Crystals. We obtain an exact unified formulation
of quantum and relativistic physics, which is potentially deterministic and
fully falsifiable, as it has no fine-tunable parameters, and which was proven
in previous papers to be completely consistent with all known physics, from theoretical
physics to condensed matter. New physics will be discovered by probing quantum
phenomena with experimental time accuracy of the order of $10^{-21}$ sec.
|
Recent papers on the theory of representation learning have shown the
importance of a quantity called diversity when generalizing from a set of
source tasks to a target task. Most of these papers assume that the function
mapping shared representations to predictions is linear, for both source and
target tasks. In practice, researchers in deep learning use different numbers
of extra layers following the pretrained model based on the difficulty of the
new task. This motivates us to ask whether diversity can be achieved when
source tasks and the target task use different prediction function spaces
beyond linear functions. We show that diversity holds even if the target task
uses a neural network with multiple layers, as long as source tasks use linear
functions. If source tasks use nonlinear prediction functions, we provide a
negative result by showing that depth-1 neural networks with the ReLU
activation function need exponentially many source tasks to achieve diversity. For a
general function class, we find that eluder dimension gives a lower bound on
the number of tasks required for diversity. Our theoretical results imply that
simpler tasks generalize better. Though our theoretical results are shown for
the global minimizer of empirical risks, their qualitative predictions still
hold true for gradient-based optimization algorithms as verified by our
simulations on deep neural networks.
|
In [Kim05], Kim gave a new proof of Siegel's Theorem that there are only
finitely many $S$-integral points on $\mathbb P^1_{\mathbb
Z}\setminus\{0,1,\infty\}$. One advantage of Kim's method is that it in
principle allows one to actually find these points, but the calculations grow
vastly more complicated as the size of $S$ increases. In this paper, we
implement a refinement of Kim's method, introduced in [BD19], to explicitly
compute various examples where $S$ has size $2$. In so doing, we
exhibit new examples of a natural generalisation of a conjecture of Kim.
|
This tool paper presents the High-Assurance ROS (HAROS) framework. HAROS is a
framework for the analysis and quality improvement of robotics software
developed using the popular Robot Operating System (ROS). It builds on a static
analysis foundation to automatically extract models from the source code. Such
models are later used to enable other sorts of analyses, such as Model
Checking, Runtime Verification, and Property-based Testing. It has been applied
to multiple real-world examples, helping developers find and correct various
issues.
|
The latest conjunction of Jupiter and Saturn occurred at an apparent angular
separation of 6 arc minutes on 21 December 2020. We re-analysed all encounters of these
two planets between -1000 and +3000 CE, as the extraordinary ones
(<10$^{\prime}$) take place near the line of nodes every 400 years. An
occultation of their discs did not and will not happen within the historical
time span of $\pm$5,000 years around now. When viewed from Neptune though,
there will be an occultation in 2046.
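A minimal sketch of one such encounter computation with the skyfield library and the de421 ephemeris (which only covers roughly 1900-2050; the full -1000 to +3000 CE analysis would need a longer ephemeris such as de431):

```python
from skyfield.api import load

ts = load.timescale()
eph = load('de421.bsp')          # JPL ephemeris, valid ~1900-2050
earth = eph['earth']
jupiter = eph['jupiter barycenter']
saturn = eph['saturn barycenter']

t = ts.utc(2020, 12, 21)
sep = earth.at(t).observe(jupiter).separation_from(earth.at(t).observe(saturn))
print(f"Jupiter-Saturn separation on 2020-12-21: {sep.degrees * 60:.1f} arcmin")
# prints a value close to the 6 arc minutes quoted above
```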
|
Filters (such as Bloom Filters) are data structures that speed up network
routing and measurement operations by storing a compressed representation of a
set. Filters are space efficient, but can make bounded one-sided errors: with
tunable probability epsilon, they may report that a query element is stored in
the filter when it is not. This is called a false positive. Recent research has
focused on designing methods for dynamically adapting filters to false
positives, reducing the number of false positives when some elements are
queried repeatedly.
Ideally, an adaptive filter would incur a false positive with bounded
probability epsilon for each new query element, and would incur o(epsilon)
total false positives over all repeated queries to that element. We call such a
filter support optimal.
In this paper we design a new Adaptive Cuckoo Filter and show that it is
support optimal (up to additive logarithmic terms) over any n queries when
storing a set of size n. Our filter is simple: fixing previous false positives
requires a simple cuckoo operation, and the filter does not need to store any
additional metadata. It is the first practical data structure
that is support optimal, and the first filter that does not require additional
space to fix false positives.
We complement these bounds with experiments showing that our data structure
is effective at fixing false positives on network traces, outperforming
previous Adaptive Cuckoo Filters.
Finally, we investigate adversarial adaptivity, a stronger notion of
adaptivity in which an adaptive adversary repeatedly queries the filter, using
the result of previous queries to drive the false positive rate as high as
possible. We prove a lower bound showing that a broad family of filters,
including all known Adaptive Cuckoo Filters, can be forced by such an adversary
to incur a large number of false positives.
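A minimal sketch of the adaptation mechanism, assuming the usual two-choice cuckoo layout with bucket-dependent fingerprints; this is an illustration of the idea, not the paper's exact construction:

```python
import hashlib

def h(data, seed):
    """Deterministic 64-bit hash."""
    raw = repr((seed, data)).encode()
    return int.from_bytes(hashlib.blake2b(raw, digest_size=8).digest(), "big")

class TinyAdaptiveCuckooFilter:
    """Illustrative toy: one fingerprint slot per bucket, two candidate
    buckets per key, and a fingerprint function that depends on which bucket
    holds the key. Fixing a verified false positive is a single cuckoo move;
    the filter itself stores no extra metadata (self.key stands in for the
    backing storage of the original set, which adaptive filters consult)."""

    def __init__(self, m=4096, fp_bits=8):
        self.m, self.mask = m, (1 << fp_bits) - 1
        self.fp = [None] * m    # the filter proper: fingerprints only
        self.key = [None] * m   # backing store, not part of the filter

    def buckets(self, x):
        b1 = h(x, 0) % self.m
        return b1, (b1 ^ h(x, 1)) % self.m

    def fingerprint(self, x, side):
        return h(x, 2 + side) & self.mask

    def insert(self, x):
        for side, b in enumerate(self.buckets(x)):
            if self.fp[b] is None:
                self.fp[b], self.key[b] = self.fingerprint(x, side), x
                return True
        return False            # toy: no eviction chains

    def query(self, x):
        b1, b2 = self.buckets(x)
        return (self.fp[b1] == self.fingerprint(x, 0) or
                self.fp[b2] == self.fingerprint(x, 1))

    def adapt(self, q):
        """Call when query(q) was a verified false positive: move the
        colliding stored key to its alternate bucket, which changes its
        fingerprint and breaks the collision with q."""
        for side, b in enumerate(self.buckets(q)):
            if self.fp[b] == self.fingerprint(q, side) and self.key[b] != q:
                x = self.key[b]
                other = [(s, a) for s, a in enumerate(self.buckets(x)) if a != b]
                if other and self.fp[other[0][1]] is None:
                    s, a = other[0]
                    self.fp[a], self.key[a] = self.fingerprint(x, s), x
                    self.fp[b] = self.key[b] = None

f = TinyAdaptiveCuckooFilter()
for i in range(500):
    f.insert(("flow", i))
```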
|
Let $(G,K)$ be a Gelfand pair, with $G$ a Lie group of polynomial growth, and
let $\Sigma\subset{\mathbb R}^\ell$ be a homeomorphic image of the Gelfand
spectrum, obtained by choosing a generating system $D_1,\dots,D_\ell$ of
$G$-invariant differential operators on $G/K$ and associating to a bounded
spherical function $\varphi$ the $\ell$-tuple of its eigenvalues under the
action of the $D_j$'s.
We say that property (S) holds for $(G,K)$ if the spherical transform maps
the bi-$K$-invariant Schwartz space ${\mathcal S}(K\backslash G/K)$
isomorphically onto ${\mathcal S}(\Sigma)$, the space of restrictions to
$\Sigma$ of the Schwartz functions on ${\mathbb R}^\ell$. This property is
known to hold for many nilpotent pairs, i.e., Gelfand pairs where $G=K\ltimes
N$, with $N$ nilpotent.
In this paper we enlarge the scope of this analysis outside the range of
nilpotent pairs, stating the basic setting for general pairs of polynomial
growth and then focussing on strong Gelfand pairs.
|
Recent photometric surveys of Trans-Neptunian Objects (TNOs) have revealed
that the cold classical TNOs have distinct z-band color characteristics, and
occupy their own distinct surface class. This suggested the presence of an
absorption band in the reflectance spectra of cold classicals at wavelengths
above 0.8 micron. Here we present reflectance spectra spanning 0.55-1.0 micron
for six TNOs occupying dynamically cold orbits at semimajor axes close to 44
au. Five of our spectra show a clear and broadly consistent reduction in
spectral gradient above 0.8 micron that diverges from their linear red optical
continuum and agrees with their reported photometric color data. Despite
predictions, we find no evidence that the spectral flattening is caused by an
absorption band centered near 1.0 micron. We propose that the overall
consistent shape of these five spectra is related to the presence of similar
refractory organics on each of their surfaces, and/or their similar physical
surface properties such as porosity or grain size distribution. The observed
consistency of the reflectance spectra of these five targets aligns with
predictions that the cold classicals share a common history in terms of
formation and surface evolution. Our sixth target, which has been ambiguously
classified as either a hot or cold classical at various points in the past, has
a spectrum which remains nearly linear across the full range observed. This
suggests that this TNO is a hot classical interloper in the cold classical
dynamical range, and supports the idea that other such interlopers may be
identifiable by their linear reflectance spectra in the range 0.8-1.0 micron.
|
We study the relations between the positive-frequency mode functions of the
Dirac field in 4-dimensional Minkowski spacetime covered with Rindler and
Kasner coordinates, describe the explicit form of the Minkowski vacuum state in
terms of the quantum states in the Kasner and Rindler regions, and analytically
continue the solutions. As a result, we obtain, in a unified manner, the
correspondence between the positive-frequency mode functions in the Kasner and
Rindler regions, from which vacuum entanglement is derived.
|
Based on a progressively type-II censored sample from the exponential
distribution with unknown location and scale parameter, confidence bands are
proposed for the underlying distribution function by using confidence regions
for the parameters and Kolmogorov-Smirnov type statistics. Simple explicit
representations for the boundaries and for the coverage probabilities of the
confidence bands are analytically derived, and the performance of the bands is
compared in terms of band width and area by means of a data example. As a
by-product, a novel confidence region for the location-scale parameter is
obtained. Extensions of the results to related models for ordered data, such as
sequential order statistics, as well as to other underlying location-scale
families of distributions are discussed.
|
The famous Yang-Yau inequality provides an upper bound for the first
eigenvalue of the Laplacian on an orientable Riemannian surface solely in terms
of its genus $\gamma$ and the area. Its proof relies on the existence of
holomorphic maps to $\mathbb{CP}^1$ of low degree. Very recently, A.~Ros was
able to use certain holomorphic maps to $\mathbb{CP}^2$ in order to give a
quantitative improvement of the Yang-Yau inequality for $\gamma=3$. In the
present paper, we generalize Ros' argument to make use of holomorphic maps to
$\mathbb{CP}^n$ for any $n>0$. As an application, we obtain a quantitative
improvement of the Yang-Yau inequality for all genera $\gamma>3$ except for
$\gamma = 4,6,8,10,14$.
|
All yield criteria that determine the onset of plastic deformation in
crystalline materials must be invariant under the inversion symmetry associated
with a simultaneous change of sign of the slip direction and the slip plane
normal. We demonstrate the consequences of this symmetry on the functional form
of the effective stress, where only the lowest order terms that obey this
symmetry are retained. A particular form of yield criterion is obtained for
materials that do not obey the Schmid law, hereafter called non-Schmid
materials. Application of this model to body-centered cubic and hexagonal
close-packed metals shows under which conditions the non-Schmid stress terms
become significant in predicting the onset of yielding. In the special case
where the contributions of all non-Schmid stresses vanish, this model reduces
to the maximum shear stress theory of Tresca.
|
We explore recent progress and open questions concerning local minima and
saddle points of the Cahn--Hilliard energy in $d\geq 2$ and the critical
parameter regime of large system size and mean value close to $-1$. We employ
the String Method of E, Ren, and Vanden-Eijnden -- a numerical algorithm for
computing transition pathways in complex systems -- in $d=2$ to gain additional
insight into the properties of the minima and saddle points. Motivated by the
numerical observations, we adapt a method of Caffarelli and Spruck to study
convexity of level sets in $d\geq 2$.
|
Federated Learning is an emerging privacy-preserving distributed machine
learning approach to building a shared model by performing distributed training
locally on participating devices (clients) and aggregating the local models
into a global one. As this approach prevents data collection and aggregation,
it helps in reducing associated privacy risks to a great extent. However, the
data samples across all participating clients are usually not independent and
identically distributed (non-iid), and Out-of-Distribution (OOD) generalization
for the learned models can be poor. Besides this challenge, federated learning
also remains vulnerable to various attacks on security wherein a few malicious
participating entities work towards inserting backdoors, degrading the
generated aggregated model as well as inferring the data owned by participating
entities. In this paper, we propose an approach for learning invariant (causal)
features common to all participating clients in a federated learning setup and
analyze empirically how it enhances the OOD accuracy as
well as the privacy of the final learned model.
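A minimal sketch of the federated skeleton this builds on, assuming logistic-regression clients on crudely non-iid data; the invariant-feature (causal) penalty proposed above is omitted:

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Logistic-regression client update on local (non-iid) data."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def fed_avg(client_weights, client_sizes):
    """Server step: average client models, weighted by sample counts."""
    sizes = np.asarray(client_sizes, dtype=float)
    return np.average(np.stack(client_weights), axis=0, weights=sizes)

rng = np.random.default_rng(0)
d, w_global = 5, np.zeros(5)
clients = []
for shift in (-1.0, 0.0, 1.0):                    # crudely non-iid clients
    X = rng.normal(shift, 1.0, (200, d))
    y = (X[:, 0] + 0.1 * rng.normal(size=200) > shift).astype(float)
    clients.append((X, y))

for rnd in range(20):                             # communication rounds
    locals_ = [local_update(w_global, X, y) for X, y in clients]
    w_global = fed_avg(locals_, [len(y) for _, y in clients])
print(w_global)                                   # raw data never leaves a client
```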
|
Ostrovsky's equation with time- and space-dependent forcing is studied. This
equation is a model for long waves in a rotating fluid with a non-constant depth
(topography). A classification of Lie point symmetries and low-order
conservation laws is presented. Generalized travelling wave solutions are
obtained through symmetry reduction. These solutions exhibit a wave profile
that is stationary in a moving reference frame whose speed can be constant,
accelerating, or decelerating.
|
A subalgebra $\mathcal{A}$ of a $C^*$-algebra $\mathcal{M}$ is logmodular
(resp. has factorization) if the set $\{a^*a; a\text{ is invertible with
}a,a^{-1}\in\mathcal{A}\}$ is dense in (resp. equal to) the set of all positive
and invertible elements of $\mathcal{M}$. There are large classes of well
studied algebras, both in commutative and non-commutative settings, which are
known to be logmodular. In this paper, we show that the lattice of projections
in a von Neumann algebra $\mathcal{M}$ whose ranges are invariant under a
logmodular algebra in $\mathcal{M}$, is a commutative subspace lattice.
Further, if $\mathcal{M}$ is a factor then this lattice is a nest. As a special
case, it follows that all reflexive (in particular, completely distributive
CSL) logmodular subalgebras of type I factors are nest algebras, thus answering
a question of Paulsen and Raghupathi [Trans. Amer. Math. Soc., 363 (2011)
2627-2640]. We also discuss some sufficient criteria under which an algebra
having factorization is automatically reflexive and is a nest algebra.
|
Quantum computing, an innovative computing paradigm offering remarkable
processing power, is expected to provide solutions to problems in many fields.
Among these realms, the most intuitive application is to help chemical
researchers correctly describe strong correlation and complex systems, which
are a great challenge in current chemistry simulation. In this paper, we
present a standalone quantum simulation tool for chemistry, ChemiQ, which is
designed to assist people in carrying out chemical research or molecular
calculations on real or virtual quantum computers. Following the idea of
modular programming in the C++ language, the software is designed as a
full-stack tool without third-party physics or chemistry application packages.
It provides the following services: visually constructing molecular
structures, quickly simulating ground-state energies, scanning molecular
potential energy curves by distance or angle, studying chemical reactions, and
graphically returning calculation results after analysis.
|
Microwave circulators play an important role in quantum technology based on
superconducting circuits. The conventional circulator design, which employs
ferrite materials, is bulky and involves strong magnetic fields, rendering it
unsuitable for integration on superconducting chips. One promising design for
an on-chip superconducting circulator is based on a passive Josephson-junction
ring. In this paper, we consider two operational issues for such a device:
circuit tuning and the effects of quasiparticle tunneling. We compute the
scattering matrix using adiabatic elimination and derive the parameter
constraints to achieve optimal circulation. We then numerically optimize the
circulator performance over the full set of external control parameters,
including gate voltages and flux bias, to demonstrate that this
multi-dimensional optimization converges quickly to find optimal working
points. We also consider the possibility of quasiparticle tunneling in the
circulator ring and how it affects signal circulation. Our results form the
basis for practical operation of a passive on-chip superconducting circulator
made from a ring of Josephson junctions.
|
A Robinson similarity matrix is a symmetric matrix where the entry values on
all rows and columns increase toward the diagonal. Decomposing a Robinson
matrix into the sum of k {0, 1}-matrices yields k adjacency matrices of a set
of nested unit interval graphs. Previous studies show that unit interval
graphs coincide with indifference graphs. An indifference graph has an
embedding that maps each vertex to a real number, where two vertices are
adjacent if their embeddings are within a fixed threshold distance. In this
thesis, considering k different threshold distances, we study the problem of
finding an embedding that, simultaneously and with respect to each threshold
distance, embeds the k indifference graphs corresponding to the k adjacency
matrices. This is called a uniform embedding of a Robinson matrix with respect
to the k threshold distances. We give a necessary and sufficient condition on
Robinson matrices that have a uniform embedding, which is derived from paths
in an associated graph. We also give an efficient combinatorial algorithm to
find a uniform embedding or prove that one does not exist, for the case where
k = 2.
|
Stationary memoryless sources produce two correlated random sequences $X^n$
and $Y^n$. A guesser seeks to recover $X^n$ in two stages, by first guessing
$Y^n$ and then $X^n$. The contributions of this work are twofold: (1) We
characterize the least achievable exponential growth rate (in $n$) of any
positive $\rho$-th moment of the total number of guesses when $Y^n$ is obtained
by applying a deterministic function $f$ component-wise to $X^n$. We prove
that, depending on $f$, the least exponential growth rate in the two-stage
setup is lower than when guessing $X^n$ directly. We further propose a simple
Huffman code-based construction of a function $f$ that is a viable candidate
for the minimization of the least exponential growth rate in the two-stage
guessing setup. (2) We characterize the least achievable exponential growth
rate of the $\rho$-th moment of the total number of guesses required to recover
$X^n$ when Stage 1 need not end with a correct guess of $Y^n$ and without
assumptions on the stationary memoryless sources producing $X^n$ and $Y^n$.
|
Network Traffic Classification (NTC) has become an important feature in
various network management operations, e.g., Quality of Service (QoS)
provisioning and security services. Machine Learning (ML) algorithms as a
popular approach for NTC can promise reasonable accuracy in classification and
deal with encrypted traffic. However, ML-based NTC techniques suffer from the
shortage of labeled traffic data which is the case in many real-world
applications. This study investigates the applicability of an active form of
ML, called Active Learning (AL), in NTC. AL reduces the need for a large number
of labeled examples by actively choosing the instances that should be labeled.
The study first provides an overview of NTC and its fundamental challenges
along with surveying the literature on ML-based NTC methods. Then, it
introduces the concepts of AL, discusses it in the context of NTC, and reviews
the literature in this field. Further, challenges and open issues in AL-based
classification of network traffic are discussed. Moreover, as a technical
survey, some experiments are conducted to show the broad applicability of AL in
NTC. The simulation results show that AL can achieve high accuracy with a small
amount of data.
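A minimal sketch of such an AL experiment, assuming hypothetical flow-feature vectors and labels, with uncertainty (margin) sampling as the query strategy and a random forest as the classifier:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 20))                  # hypothetical flow features
y = (X[:, :5].sum(axis=1) > 0).astype(int)       # hypothetical traffic classes

labeled = list(rng.choice(len(X), 20, replace=False))   # small seed set
pool = [i for i in range(len(X)) if i not in labeled]

clf = RandomForestClassifier(n_estimators=100, random_state=0)
for _ in range(10):                              # AL rounds
    clf.fit(X[labeled], y[labeled])
    proba = clf.predict_proba(X[pool])
    margin = np.abs(proba[:, 1] - proba[:, 0])   # low margin = most uncertain
    picks = np.argsort(margin)[:20]              # query 20 labels per round
    labeled += [pool[i] for i in picks]
    pool = [i for k, i in enumerate(pool) if k not in set(picks)]

clf.fit(X[labeled], y[labeled])
print("labeled:", len(labeled), "pool accuracy:", clf.score(X[pool], y[pool]))
```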
|
On Titan, methane (CH4) and ethane (C2H6) are the dominant species found in
the lakes and seas. In this study, we have combined laboratory work and
modeling to refine the methane-ethane binary phase diagram at low temperatures
and probe how the molecules interact at these conditions. We used visual
inspection for the liquidus and Raman spectroscopy for the solidus. Through
these methods we determined a eutectic point of 71.15$\pm$0.5 K at a
composition of 0.644$\pm$0.018 methane - 0.356$\pm$0.018 ethane mole fraction
from the liquidus data. Using the solidus data, we found a eutectic isotherm
temperature of 72.2 K with a standard deviation of 0.4 K. In addition to
mapping the binary system, we looked at the solid-solid transitions of pure
ethane and found that, when cooling, the transition of solid I-III occurred at
89.45$\pm$0.2 K. The warming sequence showed transitions of solid III-II
occurring at 89.85$\pm$0.2 K and solid II-I at 89.65$\pm$0.2 K. Ideal
predictions were compared to molecular dynamics simulations to reveal that the
methane-ethane system behaves almost ideally, and the largest deviations occur
as the mixing ratio approaches the eutectic composition.
|
Heavy-ion collisions at the LHC provide the conditions to investigate regions
of quark-gluon plasma that reach higher temperatures and that persist for
longer periods of time compared to collisions at the Relativistic Heavy Ion
Collider. This extended duration allows correlations from charge conservation
to better separate during the quark-gluon plasma phase, and thus be better
distinguished from correlations that develop during the hadron phase or during
hadronization. In this study, charge balance functions binned by relative
rapidity and azimuthal angle and indexed by species are considered. A detailed
theoretical model that evolves charge correlations throughout the entirety of
an event is compared to preliminary results from the ALICE Collaboration. The
comparison with experiment provides insight into the evolution of the chemistry
and diffusivity during the collision. A ratio of balance functions is proposed
to better isolate the effects of diffusion and thus better constrain the
diffusivity.
|
A new scaling is derived that yields a Reynolds number independent profile
for all components of the Reynolds stress in the near-wall region of
wall-bounded flows, including channel, pipe, and boundary layer flows. The scaling
demonstrates the important role played by the wall shear stress fluctuations
and how the large eddies determine the Reynolds number dependence of the
near-wall turbulence behavior.
|
We train convolutional neural networks to predict whether or not a set of
measurements is informationally complete to uniquely reconstruct any given
quantum state with no prior information. In addition, we perform fidelity
benchmarking based on this measurement set without explicitly carrying out
state tomography. The networks are trained to recognize the fidelity and a
reliable measure for informational completeness. By gradually accumulating
measurements and data, these trained convolutional networks can efficiently
establish a compressive quantum-state characterization scheme by accelerating
runtime computation and greatly reducing systematic drifts in experiments. We
confirm the potential of this machine-learning approach by presenting
experimental results for both spatial-mode and multiphoton systems of large
dimensions. These predictions are further shown to improve when the networks
are trained with additional bootstrapped training sets from real experimental
data. Using a realistic beam-profile displacement error model for
Hermite-Gaussian sources, we further demonstrate numerically that the
orders-of-magnitude reduction in certification time with trained networks
greatly increases the computation yield of a large-scale quantum processor
using these sources, before state fidelity deteriorates significantly.
|
When an approximant is accurate on an interval, it is only natural to try to
extend it to domains in several dimensions. In the present article, we make use
of the fact that linear rational barycentric interpolants converge rapidly
toward analytic and several-times-differentiable functions to interpolate on
two-dimensional starlike domains parametrized in polar coordinates. In the radial
direction, we engage interpolants at conformally shifted Chebyshev nodes, which
converge exponentially toward analytic functions. In the circular direction, we
deploy linear rational trigonometric barycentric interpolants, which converge
similarly rapidly for periodic functions, but now for conformally shifted
equispaced nodes. We introduce a variant of a tensor-product interpolant of the
above two schemes and prove that it converges exponentially for two-dimensional
analytic functions up to a logarithmic factor and with an order limited only by
the order of differentiability for real functions, if the boundary is as
smooth. Numerical examples confirm that the shifts make it possible to reach a
much higher accuracy with significantly fewer nodes, a property which is
especially important in several dimensions.
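A minimal sketch of the one-dimensional ingredient, the barycentric formula at (unshifted) Chebyshev points of the second kind, whose weights are $w_j=(-1)^j\delta_j$ with $\delta_0=\delta_n=1/2$; the conformal shift of the nodes, the key refinement above, is omitted:

```python
import numpy as np

def cheb_nodes_weights(n):
    """Chebyshev points of the second kind on [-1, 1] and their
    barycentric weights w_j = (-1)^j delta_j, delta_0 = delta_n = 1/2."""
    x = np.cos(np.pi * np.arange(n + 1) / n)
    w = (-1.0) ** np.arange(n + 1)
    w[0] *= 0.5
    w[-1] *= 0.5
    return x, w

def bary_eval(x, w, f, t):
    """Evaluate the barycentric interpolant of data f at the points t."""
    t = np.atleast_1d(t).astype(float)
    num, den = np.zeros_like(t), np.zeros_like(t)
    exact = np.full(t.shape, -1)
    for j in range(len(x)):
        diff = t - x[j]
        hit = diff == 0.0
        exact[hit] = j                    # t coincides with a node
        diff[hit] = 1.0                   # avoid division by zero
        c = w[j] / diff
        num += c * f[j]
        den += c
    out = num / den
    out[exact >= 0] = f[exact[exact >= 0]]
    return out

g = lambda s: 1.0 / (1.0 + 25.0 * s**2)   # Runge's function
x, w = cheb_nodes_weights(40)
t = np.linspace(-1, 1, 1001)
print(np.max(np.abs(bary_eval(x, w, g(x), t) - g(t))))  # small uniform error
```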
|
A novel approach to reduced-order modeling of high-dimensional time varying
systems is proposed. It leverages the formalism of the Dynamic Mode
Decomposition technique together with the concept of balanced realization. It
is assumed that the only information available on the system comes from input,
state, and output trajectories generated by numerical simulations or recorded
and estimated during experiments, thus the approach is fully data-driven. The
goal is to obtain an input-output low dimensional linear model which
approximates the system across its operating range. Since the dynamics of
aeroservoelastic systems markedly changes in operation (e.g., due to changes in
flight speed or altitude), time-varying features are retained in the
constructed models. This is achieved by generating a Linear Parameter-Varying
representation made of a collection of state-consistent linear time-invariant
reduced-order models. The algorithm formulation hinges on the idea of replacing
the orthogonal projection onto the Proper Orthogonal Decomposition modes, used
in Dynamic Mode Decomposition-based approaches, with a balancing oblique
projection constructed entirely from data. As a consequence, the input-output
information captured in the lower-dimensional representation is increased
compared to other projections onto subspaces of the same or lower size. Moreover, a
parameter-varying projection is possible while also achieving
state-consistency. The validity of the proposed approach is demonstrated on a
morphing wing for airborne wind energy applications by comparing the
performance against two algorithms recently proposed in the literature.
Comparisons cover both prediction accuracy and performance in model predictive
control applications.
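A minimal sketch of the underlying ingredient, exact Dynamic Mode Decomposition with the usual POD (orthogonal) projection; the balancing oblique projection and the parameter-varying extension proposed above are beyond this illustration, and the data are hypothetical:

```python
import numpy as np

def dmd(X, r):
    """Exact DMD: fit x_{k+1} ~ A x_k from a snapshot matrix X and return
    the rank-r reduced operator, eigenvalues, and DMD modes."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    Ur, Sr, Vr = U[:, :r], np.diag(s[:r]), Vh[:r].conj().T
    A_r = Ur.conj().T @ X2 @ Vr @ np.linalg.inv(Sr)   # POD-projected operator
    lam, W = np.linalg.eig(A_r)
    modes = X2 @ Vr @ np.linalg.inv(Sr) @ W           # exact DMD modes
    return A_r, lam, modes

# Hypothetical data: two decaying oscillations observed in 64 coordinates.
t = np.linspace(0, 10, 201)
z = np.vstack([np.exp((-0.05 + 1j) * t), np.exp((-0.2 + 2.5j) * t)])
C = np.random.default_rng(0).normal(size=(64, 2))
X = (C @ z).real

A_r, lam, modes = dmd(X, r=4)
dt = t[1] - t[0]
print(np.log(lam) / dt)   # recovers eigenvalues ~ -0.05 +/- 1j, -0.2 +/- 2.5j
```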
|
We consider a machine learning algorithm to detect and identify strong
gravitational lenses in sky images. First, we simulate artificial but highly
realistic images of galaxies, stars, and strong lenses, using six different
methods, i.e., two for each class. Then we deploy a convolutional neural
network architecture to classify these simulated images. We show that, after
training, the neural network achieves about 93 percent accuracy. As a simple
test of the efficiency of the convolutional neural network, we apply it to a
real image of an Einstein cross. The deployed neural network classifies it as
a gravitational lens, thus opening the way for a variety of lens-search
applications of the deployed machine learning scheme.
|
This article describes the regularization of the generally relativistic gauge
field representation of gravity on a piecewise linear lattice. It is a part of
the program concerning the classical relativistic theory of fundamental
interactions, represented by minimally coupled gauge vector field densities and
half-densities. The correspondence between the local Darboux coordinates on
phase space and the local structure of the links of the lattice, embedded in
the spatial manifold, is demonstrated. Thus, the canonical coordinates are
replaceable by links-related quantities. This idea and the significant part of
formalism are directly based on the model of canonical loop quantum gravity
(CLQG).
The first stage of this program is formulated regarding the gauge field whose
dynamics is independent of the other fundamental fields but contributes to
their dynamics. This gauge field, which determines the equivalence of systems
in the actions defining all fundamental interactions, represents Einsteinian
gravity. The related links-defined quantities depend on holonomies of
gravitational connections and fluxes of densitized dreibeins. This article
demonstrates how to determine these quantities, which lead to a
nonperturbative formalism that
preserves the general postulate of relativity. From this perspective, the
formalism presented in this article is analogous to the Ashtekar-Barbero-Holst
formulation on which CLQG is based. However, in this project, it is
additionally required that the fields' coordinates are quantizable in the
standard canonical procedure for a gauge theory and that any approximation in
the construction of the model is at least as precisely demonstrated as the
gauge invariance. These requirements lead to new relations between holonomies
and connections, and to a representation of the densitized dreibein
determinant that is more precise than the volume representation in CLQG.
|
We present the Stromlo Stellar Tracks, a set of stellar evolutionary tracks,
computed by modifying the Modules for Experiments in Stellar Astrophysics
(MESA) 1D stellar evolution package, to fit the Galactic Concordance abundances
for hot ($T > 8000$ K) massive ($\geq 10M_\odot$) Main-Sequence (MS) stars.
Until now, all stellar evolution tracks have been computed at solar,
scaled-solar, or alpha-element-enhanced abundances, and none of these models
correctly represents the Galactic Concordance abundances at different
metallicities. This paper is
the first implementation of Galactic Concordance abundances to the stellar
evolution models. The Stromlo tracks cover massive stars ($10\leq M/M_\odot
\leq 300$) with varying rotations ($v/v_{\rm crit} = 0.0, 0.2, 0.4$) and a
finely sampled grid of metallicities ($-2.0 \leq {\rm [Z/H]} \leq +0.5$;
$\Delta {\rm [Z/H]} = 0.1$) evolved from the pre-main sequence to the end of
$^{12}$Carbon burning. We find that the implementation of Galactic Concordance
abundances is critical for the evolution of main-sequence, massive hot stars in
order to estimate accurate stellar outputs (L, T$_{\rm eff}$, $g$), which, in
turn, have a significant impact on determining the ionizing photon luminosity
budgets. We additionally support prior findings of the importance that rotation
plays in the evolution of massive stars and their ionizing budget. The
evolutionary tracks for our Galactic Concordance abundance scaling provide a
more empirically motivated approach than simple uniform abundance scaling with
metallicity for the analysis of HII regions and have considerable implications
in determining nebular emission lines and metallicity. Therefore, it is
important to refine the existing stellar evolutionary models for comprehensive
high-redshift extragalactic studies. The Stromlo tracks are publicly available
to the astronomical community online.
|
Let $G=\operatorname{O}(1,n+1)$ with maximal compact subgroup $K$ and let
$\Pi$ be a unitary irreducible representation of $G$ with non-trivial
$(\mathfrak{g},K)$-cohomology. Then $\Pi$ occurs inside a principal series
representation of $G$, induced from the $\operatorname{O}(n)$-representation
$\bigwedge\nolimits^p(\mathbb{C}^n)$ and characters of a minimal parabolic
subgroup of $G$ at the limit of the complementary series. Considering the
subgroup $G'=\operatorname{O}(1,n)$ of $G$ with maximal compact subgroup $K'$,
we prove branching laws and explicit Plancherel formulas for the restrictions
to $G'$ of all unitary representations occurring in such principal series,
including the complementary series, all unitary $G$-representations with
non-trivial $(\mathfrak{g},K)$-cohomology and further relative discrete series
representations in the cases $p=0,n$. Discrete spectra are constructed
explicitly as residues of $G'$-intertwining operators which resemble the
Fourier transforms on vector bundles over the Riemannian symmetric space
$G'/K'$.
|
Given coprime positive integers $d',d''$, B\'ezout's Lemma tells us that
there are integers $u,v$ so that $d'u-d''v=1$. We show that, interchanging $d'$
and $d''$ if necessary, we may choose $u$ and $v$ to be Loeschian numbers,
i.e., of the form $|\alpha|^2$, where $\alpha\in\mathbb{Z}[j]$, the ring of
integers of the number field $\mathbb{Q}(j)$, where $j^2+j+1=0$. We do this by
using Atkin-Lehner elements in some quaternion algebras $\mathcal{H}$. We use
this fact to count the number of conjugacy classes of elements of order 3 in an
order $\mathcal{O}\subset\mathcal{H}$.
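A minimal sketch that checks the statement numerically for small coprime pairs, using the characterization of Loeschian numbers as $a^2+ab+b^2$ with integers $a,b\geq 0$ (brute force, for illustration only):

```python
from math import gcd, isqrt

def is_loeschian(m):
    """Is m = a^2 + a*b + b^2 for some integers a, b >= 0?"""
    for a in range(isqrt(m) + 1):
        # solve b^2 + a*b - (m - a^2) = 0 for a nonnegative integer b
        disc = 4 * m - 3 * a * a
        s = isqrt(disc)
        if s * s == disc and s >= a and (s - a) % 2 == 0:
            return True
    return False

def loeschian_bezout(d1, d2, bound=2000):
    """Search for Loeschian u, v with d'u - d''v = 1, allowing the swap
    (d', d'') -> (d'', d') as in the statement."""
    for dp, dpp in ((d1, d2), (d2, d1)):
        for u in range(1, bound):
            if (dp * u - 1) % dpp == 0:
                v = (dp * u - 1) // dpp
                if is_loeschian(u) and is_loeschian(v):
                    return dp, dpp, u, v
    return None

for d1 in range(2, 8):
    for d2 in range(d1 + 1, 8):
        if gcd(d1, d2) == 1:
            print(loeschian_bezout(d1, d2))
```

For instance, for the pair $(2,3)$ no Loeschian $u$ with $2u-1\equiv 0 \pmod 3$ exists (such $u\equiv 2 \pmod 3$, while Loeschian numbers are $\equiv 0,1 \pmod 3$), so the search succeeds only after swapping to $3\cdot 1 - 2\cdot 1 = 1$, illustrating why the interchange of $d'$ and $d''$ is needed.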
|
Ear recognition can be described as a revived scientific field. Ear
biometrics were long believed to not be accurate enough and held a secondary
place in scientific research, being seen as only complementary to other types
of biometrics, due to difficulties in measuring correctly the ear
characteristics and the potential occlusion of the ear by hair, clothes and ear
jewellery. However, recent research has reinstated them as a vivid research
field, after having addressed these problems and proven that ear biometrics can
provide highly accurate identification and verification results. Several 2D and
3D imaging techniques, as well as acoustical techniques using sound emission
and reflection, have been developed and studied for ear recognition, while
there have also been significant advances towards a fully automated recognition
of the ear. Furthermore, ear biometrics have been proven to be mostly
non-invasive, adequately permanent and accurate, and hard to spoof and
counterfeit. Moreover, different ear recognition techniques have proven to be
as effective as face recognition ones, thus providing the opportunity for ear
recognition to be used in identification and verification applications.
Finally, even though some issues still remain open and require further
research, the scientific field of ear biometrics has proven to be not only
viable, but thriving.
|
A Polarimetric Synthetic Aperture Radar (PolSAR) sensor is able to collect
images in different polarization states, making it a rich source of information
for target characterization. PolSAR images are inherently affected by speckle.
Therefore, before deriving ad hoc products from the data, the polarimetric
covariance matrix needs to be estimated by reducing speckle. In recent years,
deep learning based despeckling methods have started to evolve from single
channel SAR images to PolSAR images. To this aim, deep learning based
approaches separate the real and imaginary components of the complex-valued
covariance matrix and use them as independent channels in standard
convolutional neural networks. However, this approach neglects the mathematical
relationship that exists between the real and imaginary components, resulting
in sub-optimal output. Here, we propose CV-deSpeckNet, a multi-stream
complex-valued fully convolutional network, to reduce speckle and effectively
estimate the PolSAR covariance matrix. To evaluate the performance of
CV-deSpeckNet, we used Sentinel-1 dual polarimetric SAR images to compare
against its real-valued counterpart, which separates the real and imaginary
parts of the complex covariance matrix. CV-deSpeckNet was also compared
against state-of-the-art PolSAR despeckling methods. The results show that
CV-deSpeckNet can be trained with fewer samples, has a higher generalization
capability, and results in higher accuracy than its real-valued counterpart and
state-of-the-art PolSAR despeckling methods. These results showcase the
potential of complex-valued deep learning for PolSAR despeckling.
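A minimal sketch of the kind of building block involved, a complex-valued 2D convolution that respects $(a+ib)(c+id)=(ac-bd)+i(ad+bc)$ instead of treating real and imaginary parts as independent channels; the actual CV-deSpeckNet architecture is not reproduced here:

```python
import torch
import torch.nn as nn

class ComplexConv2d(nn.Module):
    """Complex convolution built from two real convolutions:
    (W_r + i W_i) * (x_r + i x_i) = (W_r*x_r - W_i*x_i) + i(W_r*x_i + W_i*x_r)."""
    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1):
        super().__init__()
        self.conv_r = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding)
        self.conv_i = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding)

    def forward(self, x_r, x_i):
        y_r = self.conv_r(x_r) - self.conv_i(x_i)
        y_i = self.conv_r(x_i) + self.conv_i(x_r)
        return y_r, y_i

# Hypothetical dual-pol input: 2 complex channels of a 128x128 patch.
x_r, x_i = torch.randn(1, 2, 128, 128), torch.randn(1, 2, 128, 128)
layer = ComplexConv2d(2, 16)
y_r, y_i = layer(x_r, x_i)
print(y_r.shape, y_i.shape)   # torch.Size([1, 16, 128, 128]) each
```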
|
Urban areas are not only one of the biggest contributors to climate change,
but also among the most vulnerable areas, with large populations who would
together experience the negative impacts. In this paper, I address some
of the opportunities brought by satellite remote sensing imaging and artificial
intelligence (AI) in order to measure climate adaptation of cities
automatically. I propose an AI-based framework which might be useful for
extracting indicators from remote sensing images and might help with predictive
estimation of future states of these climate adaptation related indicators.
When such models become more robust and used in real-life applications, they
might help decision makers and early responders to choose the best actions to
sustain the wellbeing of society, natural resources and biodiversity. I
underline that this is an open field and an ongoing research area for many
scientists; therefore, I offer an in-depth discussion on the challenges and
limitations of AI-based methods and the predictive estimation models in
general.
|
Nowadays, Graph Neural Networks (GNNs) following the Message Passing paradigm
have become the dominant way to learn on graph data. Models in this paradigm
have to spend extra space to look up adjacent nodes with adjacency matrices and
extra time to aggregate multiple messages from adjacent nodes. To address this
issue, we develop a method called LinkDist that distils self-knowledge from
connected node pairs into a Multi-Layer Perceptron (MLP) without the need to
aggregate messages. Experiments with 8 real-world datasets show that the MLP
derived from LinkDist can predict the label of a node without knowing its
adjacencies, yet achieves accuracy comparable to GNNs in the contexts of semi-
and full-supervised node classification. Moreover, LinkDist benefits from its
Non-Message-Passing paradigm: we can also distil self-knowledge from
arbitrarily sampled node pairs in a contrastive way to further boost the
performance of LinkDist.
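A minimal sketch of the distillation objective, assuming hypothetical node features, labels, and an edge list: for every connected pair, the MLP is trained on each endpoint's features against both endpoints' labels, so neighbourhood knowledge is absorbed without message aggregation (the contrastive variant is omitted):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
n, d, c = 200, 16, 4                         # hypothetical graph sizes
X = torch.randn(n, d)                        # node features
y = X[:, :c].argmax(dim=1)                   # hypothetical feature-driven labels
edges = torch.randint(0, n, (2, 800))        # hypothetical edge list (u, v)

mlp = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, c))
opt = torch.optim.Adam(mlp.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(100):
    u, v = edges
    logits_u, logits_v = mlp(X[u]), mlp(X[v])
    # self supervision plus cross supervision across each connected pair
    loss = (loss_fn(logits_u, y[u]) + loss_fn(logits_v, y[v])
            + loss_fn(logits_u, y[v]) + loss_fn(logits_v, y[u]))
    opt.zero_grad(); loss.backward(); opt.step()

# Inference needs no adjacency information at all:
pred = mlp(X).argmax(dim=1)
print((pred == y).float().mean())
```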
|
This work addresses whether a human-in-the-loop cyber-physical system (HCPS)
can be effective in improving the longitudinal control of an individual vehicle
in a traffic flow. We introduce the CAN Coach, which is a system that gives
feedback to the human-in-the-loop using radar data (relative speed and position
information to objects ahead) that is available on the controller area network
(CAN). Using a cohort of six human subjects driving an instrumented vehicle, we
compare the ability of the human-in-the-loop driver to achieve a constant
time-gap control policy when using only human-based visual perception of the
car ahead with the ability when human perception is augmented with audible
feedback from CAN sensor data. The addition of CAN-based feedback reduces the mean time-gap error by an
average of 73%, and also improves the consistency of the human by reducing the
standard deviation of the time-gap error by 53%. We remove human perception
from the loop using a ghost mode in which the human-in-the-loop is coached to
track a virtual vehicle on the road, rather than a physical one. The loss of
visual perception of the vehicle ahead degrades the performance for most
drivers, but by varying amounts. We show that human subjects can match the
velocity of the lead vehicle ahead with and without CAN-based feedback, but
velocity matching does not offer regulation of vehicle spacing. The viability
of dynamic time-gap control is also demonstrated. We conclude that (1) it is
possible to coach drivers to improve performance on driving tasks using CAN
data, and (2) it is a true HCPS, since removing human perception from the
control loop reduces performance at the given control objective.
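A minimal sketch of a constant time-gap coaching rule of this kind, assuming CAN-reported space gap and ego speed and a hypothetical deadband; the cue strings stand in for the audible feedback:

```python
def time_gap_feedback(space_gap_m, ego_speed_mps, target_gap_s=2.0,
                      deadband_s=0.25):
    """Return an audible coaching cue from CAN radar data.
    time gap = distance to lead vehicle / ego speed."""
    if ego_speed_mps < 1.0:
        return "silent"                       # ill-defined at near-zero speed
    error_s = space_gap_m / ego_speed_mps - target_gap_s
    if abs(error_s) <= deadband_s:
        return "silent"                       # within tolerance
    return "close the gap" if error_s > 0 else "drop back"

print(time_gap_feedback(60.0, 25.0))  # 2.4 s gap -> "close the gap"
```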
|
Quantitative phase imaging (QPI) is a valuable label-free modality that has
gained significant interest due to its wide potentials, from basic biology to
clinical applications. Most existing QPI systems measure microscopic objects
via interferometry or nonlinear iterative phase reconstructions from intensity
measurements. However, all imaging systems compromise spatial resolution for
field of view and vice versa, i.e., suffer from a limited space-bandwidth
product. Current solutions to this problem involve computational phase
retrieval algorithms, which are time-consuming and often suffer from
convergence problems. In this article, we present synthetic aperture
interference light (SAIL) microscopy as a novel modality for high-resolution,
wide field of view QPI. The proposed approach employs low-coherence
interferometry to directly measure the optical phase delay under different
illumination angles and produces large space-bandwidth product (SBP) label-free
imaging. We validate the performance of SAIL on standard samples and illustrate
the biomedical applications on various specimens: pathology slides, entire
insects, and dynamic live cells in large cultures. The reconstructed images
have a synthetic numerical aperture of 0.45, and a field of view of 2.6 x 2.6
mm2. Due to its direct measurement of the phase information, SAIL microscopy
does not require long computational time, eliminates data redundancy, and
always converges.
|
We train neural models for morphological analysis, generation and
lemmatization for morphologically rich languages. We present a method for
automatically extracting a substantially large amount of training data from FSTs
for 22 languages, out of which 17 are endangered. The neural models follow the
same tagset as the FSTs in order to make it possible to use them as fallback
systems together with the FSTs. The source code, models and datasets have been
released on Zenodo.
|
Quantized nano-objects offer a myriad of exciting possibilities for
manipulating electrons and light that impact photonics, nanoelectronics, and
quantum information. In this context, ultrashort laser pulses combined with
nanotips and field emission have permitted renewing nano-characterization and
control electron dynamics with unprecedented space and time resolution reaching
femtosecond and even attosecond regimes. A crucial missing step in these
experiments is that no signature of quantized energy levels has yet been
observed. We combine in situ nanostructuration of nanotips and ultrashort laser
pulse excitation to induce multiphoton excitation and electron emission from a
single quantized nano-object attached at the apex of a metal nanotip.
Femtosecond induced tunneling through well-defined localized confinement states
that are tunable in energy is demonstrated. This paves the way for the
development of ultrafast manipulation of electron emission from isolated
nano-objects including stereographically fixed individual molecules and high
brightness, ultrafast, coherent single electron sources for quantum optics
experiments.
|
We present a fast and feature-complete differentiable physics engine, Nimble
(nimblephysics.org), that supports Lagrangian dynamics and hard contact
constraints for articulated rigid body simulation. Our differentiable physics
engine offers a complete set of features that are typically only available in
non-differentiable physics simulators commonly used by robotics applications.
We solve contact constraints precisely using linear complementarity problems
(LCPs). We present efficient and novel analytical gradients through the LCP
formulation of inelastic contact that exploit the sparsity of the LCP solution.
We support complex contact geometry, and gradients approximating
continuous-time elastic collision. We also introduce a novel method to compute
complementarity-aware gradients that help downstream optimization tasks avoid
stalling in saddle points. We show that an implementation of this combination
in an existing physics engine (DART) is capable of an 87x single-core speedup
over finite-differencing in computing analytical Jacobians for a single
timestep, while preserving all the expressiveness of original DART.
|
We study the Gram determinant and construct bases of hom spaces for the
one-dimensional topological theory of decorated unoriented one-dimensional
cobordisms, as recently defined by Khovanov, when the pair of generating
functions is linear.
|
Existing near-eye display designs struggle to balance between multiple
trade-offs such as form factor, weight, computational requirements, and battery
life. These design trade-offs are major obstacles on the path towards an
all-day usable near-eye display. In this work, we address these trade-offs by,
paradoxically, \textit{removing the display} from near-eye displays. We present
the beaming displays, a new type of near-eye display system that uses a
projector and an all passive wearable headset. We modify an off-the-shelf
projector with additional lenses. We install such a projector in the
environment to beam images from a distance to a passive wearable headset. The
beaming projection system tracks the current position of a wearable headset to
project distortion-free images with correct perspectives. In our system, a
wearable headset guides the beamed images to a user's retina, which are then
perceived as an augmented scene within a user's field of view. In addition to
providing the system design of the beaming display, we provide a physical
prototype and show that the beaming display can provide resolutions as high as
consumer-level near-eye displays. We also discuss the different aspects of the
design space for our proposal.
|