Attention-based pre-trained language models such as GPT-2 brought
considerable progress to end-to-end dialogue modelling. However, they also
present considerable risks for task-oriented dialogue, such as lack of
knowledge grounding or diversity. To address these issues, we introduce
modified training objectives for language model finetuning, and we employ
massive data augmentation via back-translation to increase the diversity of the
training data. We further examine the possibilities of combining data from
multiple sources to improve performance on the target dataset. We carefully
evaluate our contributions with both human and automatic methods. Our model
substantially outperforms the baseline on the MultiWOZ data and shows
competitive performance with the state of the art in both automatic and human
evaluation.
|
The incremental poses computed through odometry can be integrated over time
to calculate the pose of a device with respect to an initial location. The
resulting global pose may be used to formulate a second, consistency-based,
loss term in a deep odometry setting. In such cases where multiple losses are
imposed on a network, the uncertainty over each output can be derived to weigh
the different loss terms in a maximum likelihood setting. However, when
imposing a constraint on the integrated transformation, because only the
incremental odometry is estimated at each iteration of the algorithm, there is
no information about the uncertainty associated with the global pose with which
to weigh the
global loss term. In this paper, we associate uncertainties with the output
poses of a deep odometry network and propagate the uncertainties through each
iteration. Our goal is to use the estimated covariance matrix at each
incremental step to weigh the loss at the corresponding step while weighting
the global loss term using the compounded uncertainty. This formulation
provides an adaptive method to weigh the incremental and integrated loss terms
against each other, noting the increase in uncertainty as new estimates arrive.
We provide quantitative and qualitative analysis of pose estimates and show
that our method surpasses the accuracy of the state-of-the-art Visual Odometry
approaches. Then, uncertainty estimates are evaluated and comparisons against
fixed baselines are provided. Finally, the uncertainty values are used in a
realistic example to show the effectiveness of uncertainty quantification for
localization.
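To make the weighting concrete, a generic first-order sketch (an assumption in the spirit of EKF-style propagation, not necessarily the paper's exact formulation) compounds the covariances of composed pose increments through the Jacobians $J_1, J_2$ of the composition map, and weighs each residual $r_k$ by its estimated covariance in a maximum-likelihood loss: \[ \Sigma_{1\oplus 2} \approx J_1 \Sigma_1 J_1^{\top} + J_2 \Sigma_2 J_2^{\top}, \qquad \mathcal{L} = \sum_k \left( r_k^{\top} \Sigma_k^{-1} r_k + \ln\det\Sigma_k \right), \] where the log-determinant term prevents the estimated covariances from growing without bound.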
|
Learning to model and reconstruct humans in clothing is challenging due to
articulation, non-rigid deformation, and varying clothing types and topologies.
To enable learning, the choice of representation is key. Recent work uses
neural networks to parameterize local surface elements. This approach captures
locally coherent geometry and non-planar details, can deal with varying
topology, and does not require registered training data. However, naively using
such methods to model 3D clothed humans fails to capture fine-grained local
deformations and generalizes poorly. To address this, we present three key
innovations: First, we deform surface elements based on a human body model such
that large-scale deformations caused by articulation are explicitly separated
from topological changes and local clothing deformations. Second, we address
the limitations of existing neural surface elements by regressing local
geometry from local features, significantly improving the expressiveness.
Third, we learn a pose embedding on a 2D parameterization space that encodes
posed body geometry, improving generalization to unseen poses by reducing
non-local spurious correlations. We demonstrate the efficacy of our surface
representation by learning models of complex clothing from point clouds. The
clothing can change topology and deviate from the topology of the body. Once
learned, we can animate previously unseen motions, producing high-quality point
clouds, from which we generate realistic images with neural rendering. We
assess the importance of each technical contribution and show that our approach
outperforms the state-of-the-art methods in terms of reconstruction accuracy
and inference time. The code is available for research purposes at
https://qianlim.github.io/SCALE .
|
The X and Gamma Imaging Spectrometer (XGIS) instrument on-board the THESEUS mission
(selected by ESA in the framework of the Cosmic Vision M5 launch opportunity,
currently in phase A) is based on a detection plane composed of several
thousands of single active elements. Each element comprises a 4.5x4.5x30 mm^3
CsI(Tl) scintillator bar, optically coupled at both ends to Silicon Drift
Detectors (SDDs). The SDDs act both as photodetectors for the scintillation
light and as direct X-ray sensors. In this paper the design of the XGIS
detection plane is reviewed, outlining the strategic choices in terms of
modularity and redundancy of the system. Results on detector-electronics
prototypes are also described. Moreover, the design and development of the
low-noise front-end electronics is presented, emphasizing the innovative
architectural design based on custom-designed Application-Specific Integrated
Circuits (ASICs).
|
In the last few years, significant advances have been made in understanding
the distributions of exoplanet populations and the architecture of planetary
systems. We review the recent progress of planet statistics, with a focus on
the inner (≲1 AU) region of the planetary system that has been fairly
thoroughly surveyed by the Kepler mission. We also discuss the theoretical
implications of these statistical results for planet formation and dynamical
evolution.
|
Let $G=(V(G),E(G))$ be a simple graph with vertex set $V(G)$ and edge set
$E(G)$. Let $S$ be a subset of $V(G)$, and let $B(S)$ be the set of neighbours
of $S$ in $V(G) \setminus S$. The differential $\partial(S)$ of $S$ is defined
as $|B(S)|-|S|$. The maximum value of $\partial(S)$ taken over all subsets
$S\subseteq V$ is the differential $\partial(G)$ of $G$. A graph operator is a
mapping $F: G\rightarrow G'$, where $G$ and $G'$ are families of graphs. The
graph $\S{G}$ is defined as the graph obtained from $G$ with vertex bipartition
$V(G)\cup E(G)$, where there are as many edges between $v \in V(G)$ and $e \in
E(G)$ as the number of times $e$ is incident with $v$ in $G$. In this paper we study
the relationship between $\partial(G)$ and $\partial(\S{G})$. We also relate
the differential of a graph to other known graph parameters, namely its
domination and independence numbers.
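As a concrete illustration of the definition, a brute-force computation of $\partial(G)$ for a small graph might look as follows (a minimal sketch intended only for tiny graphs, since it enumerates all vertex subsets):

```python
from itertools import combinations

def differential(vertices, edges):
    """Brute-force differential of a simple graph: max over S of |B(S)| - |S|,
    where B(S) is the set of neighbours of S outside S."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    best = float('-inf')
    for r in range(1, len(vertices) + 1):
        for S in combinations(vertices, r):
            S = set(S)
            boundary = set().union(*(adj[v] for v in S)) - S
            best = max(best, len(boundary) - len(S))
    return best

# Star K_{1,4}: taking S = {centre} gives |B(S)| = 4, so the differential is 3.
print(differential(range(5), [(0, i) for i in range(1, 5)]))  # 3
```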
|
Recently, heatmap regression models have become the mainstream in locating
facial landmarks. To keep computation affordable and reduce memory usage, the
whole procedure involves downsampling from the raw image to the output heatmap.
However, how much impact does the quantization error introduced by downsampling
have? This problem has hardly been systematically investigated in previous
works, and we are the first to quantitatively analyze it. Our statistical
results show that the normalized mean error (NME) caused by quantization error
alone exceeds one third of that of the state-of-the-art (SOTA) methods, a
serious obstacle to further breakthroughs in face alignment. To compensate for
the quantization effect, we propose a novel method, called Heatmap In Heatmap
(HIH), which leverages two categories of heatmaps as the label representation
to encode coordinates: the range of one heatmap represents a single pixel of
the other. We also compare face alignment with solutions adapted from other
fields. Extensive experiments on various benchmarks show the feasibility of HIH
and its superior performance over other solutions; in particular, the mean
error reaches 4.18 on WFLW, surpassing the SOTA by a large margin. Our source
code is made publicly available in the supplementary material.
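To isolate the quantization error, consider a hypothetical 256x256 input regressed through a 64x64 heatmap (stride 4): even a perfect heatmap peak can only recover coordinates on the stride grid.

```python
# Toy illustration (hypothetical sizes): a landmark at x = 37.6 px in a 256 px
# image, localized through a 64 px heatmap, lands on a 4 px grid when decoded.
stride = 256 // 64
x_true = 37.6                       # ground-truth landmark x, in image pixels
x_heatmap = round(x_true / stride)  # index of the heatmap peak
x_decoded = x_heatmap * stride      # mapped back to image coordinates
print(x_decoded, abs(x_true - x_decoded))  # 36, ~1.6 px of pure quantization error
```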
|
Increased levels of digitalization in society expose companies to new
security threats, requiring them to establish adequate security and privacy
measures. Additionally, the presence of exogenous forces, such as new
regulations (e.g., the GDPR) and the global COVID-19 pandemic, poses new
challenges for companies that must preserve an adequate level of security while
having to adapt to
change. In this paper, we investigate such challenges through a two-phase study
in companies located in Denmark -- a country characterized by a high level of
digitalization and trust -- focusing on software development and tech-related
companies. Our results show a number of issues, most notably i) a misalignment
between software developers and management when it comes to the implementation
of security and privacy measures, ii) difficulties in adapting company
practices in light of implementing GDPR compliance, and iii) different views on
the need to adapt security measures to cope with the COVID-19 pandemic.
|
In our paper, we present Deep Learning models with a layer differentiated
training method which were used for the SHARED TASK@ CONSTRAINT 2021 sub-tasks
COVID19 Fake News Detection in English and Hostile Post Detection in Hindi. We
propose a Layer Differentiated training procedure for training a pre-trained
ULMFiT (arXiv:1801.06146) model. We used special tokens to annotate specific
parts of the tweets to improve language understanding and gain insights on the
model making the tweets more interpretable. The other two submissions included
a modified RoBERTa model and a simple Random Forest Classifier. The proposed
approach scored a precision of 0.96728972 and an F1 score of 0.967324832 for
the sub-task "COVID19 Fake News Detection in English", and a Coarse-Grained
Hostility F1 score of 0.908648 and a Weighted Fine-Grained F1 score of 0.533907
for the sub-task "Hostile Post Detection in Hindi". The proposed approach
ranked 61st out of 164 in the sub-task "COVID19 Fake News Detection in English"
and 18th out of 45 in the sub-task "Hostile Post Detection in Hindi".
|
The angular-averaged differential cross section (dcs) of elastic
electron-proton (ep) scattering, covering Q^2 < 1.0 GeV^2, was fitted via
combined modified eq scatterings, where q is a point particle. The
modifications represent the cloud-covering effects on q. An energy-decaying
ratio (edr) was derived by comparing the dcs ep generated from the form factor
data gathered at the Mainz Microtron (A1 Collaboration) and the Continuous
Electron Beam Accelerator Facility (Jefferson Laboratory) with the dcs eq with
a modified relativistic recoil factor. The diminishing cloud layer, edr, has a decay rate
of -2.8 for the data sets under investigation. The formulated SBM and SEM
fitting models use the bare and effective u and d-quark masses, respectively,
while SCBM and SCEM integrate other considerations. Three comparison methods
were used and all of them favor the models with other additional
considerations. SCEM was the most favored model in general.
|
The periodic microphases that self-assemble in systems with competing
short-range attractive and long-range repulsive interactions are structurally
both rich and elegant. Significant theoretical and computational efforts have
thus been dedicated to untangling their properties. By contrast, disordered
microphases, which are structurally just as rich but nowhere near as elegant,
have not been as carefully considered. Part of the difficulty is that simple
mean-field descriptions make a homogeneity assumption that washes away all of
their structural features. Here, we study disordered microphases by exactly
solving a SALR model on the Bethe lattice. By sidestepping the homogenization
assumption, this treatment recapitulates many of the key structural regimes of
disordered microphases, including particle and void cluster fluids as well as
gelation. This analysis also provides physical insight into the relationship
between various structural and thermal observables, between criticality and
physical percolation, as well as between glassiness and microphase ordering.
|
In order to formalize Distributed Ledger Technologies and their
interconnections, a recent line of research work has formulated the notion of
Distributed Ledger Object (DLO), which is a concurrent object that maintains a
totally ordered sequence of records, abstracting blockchains and distributed
ledgers. Through DLO, the Atomic Appends problem, understood as the need for a
primitive able to append multiple records to distinct ledgers atomically, is
studied as a basic interconnection problem among ledgers.
In this work, we propose the Distributed Grow-only Set object (DSO), which
instead of maintaining a sequence of records, as in a DLO, maintains a set of
records in an immutable way: only Add and Get operations are provided. This
object is inspired by the Grow-only Set (G-Set) data type which is part of the
Conflict-free Replicated Data Types. We formally specify the object and we
provide a consensus-free Byzantine-tolerant implementation that guarantees
eventual consistency. We then use our Byzantine-tolerant DSO (BDSO)
implementation to provide consensus-free algorithmic solutions to the Atomic
Appends and Atomic Adds (the analogous problem of atomic appends applied on
G-Sets) problems, as well as to construct consensus-free Single-Writer BDLOs.
We believe that the BDSO has applications beyond the above-mentioned problems.
|
We study the network pricing problem where the leader maximizes their revenue
by determining the optimal amounts of tolls to charge on a set of arcs, under
the assumption that the followers will react rationally and choose the shortest
paths to travel. Many distinct single-level reformulations of this bilevel
optimization program have been proposed; however, their relationship has not
been established. In this paper, we aim to build a connection between those
reformulations and explore the combination of the path representation with
various modeling options, allowing us to generate 12 different reformulations
of the problem. Moreover, we propose a new path enumeration scheme, a
path-based preprocessing procedure, and a hybrid framework to further improve performance and
robustness when solving the final model. We provide numerical results,
comparing all the derived reformulations and confirming the efficiency of the
novel dimensionality reduction procedures.
|
The rapidly evolving field of Artificial Intelligence necessitates automated
approaches to co-design neural network architecture and neural accelerators to
maximize system efficiency and address productivity challenges. To enable joint
optimization of this vast space, there has been growing interest in
differentiable NN-HW co-design. Fully differentiable co-design has reduced the
resource requirements for discovering optimized NN-HW configurations, but fails
to adapt to general hardware accelerator search spaces. This is due to the
existence of non-synthesizable (invalid) designs in the search space of many
hardware accelerators. To enable efficient and realizable co-design of
configurable hardware accelerators with arbitrary neural network search spaces,
we introduce RHNAS. RHNAS is a method that combines reinforcement learning for
hardware optimization with differentiable neural architecture search. RHNAS
discovers realizable NN-HW designs with 1.84x lower latency and 1.86x lower
energy-delay product (EDP) on ImageNet and 2.81x lower latency and 3.30x lower
EDP on CIFAR-10 over the default hardware accelerator design.
|
We review Hodge structures, relating filtrations, Galois Theory and
Jordan-Hölder structures. The prototypical case of periods of Riemann surfaces
is compared with the Galois-Artin framework of algebraic numbers.
|
We consider the $L^\infty$-optimal mass transportation problem \[
\min_{\Pi(\mu, \nu)} \gamma-\mathrm{ess\,sup\,} c(x,y), \] for a new class of
costs $c(x,y)$ for which we introduce a tentative notion of twist condition. In
particular we study the conditions under which the infinitely cyclically
monotone minimizers are induced by a transportation map. We also state a uniqueness
result for infinitely cyclically monotone Monge minimizers that corresponds to
this class of cost functions. We compare the results to previous works.
|
Aims: Our Gulf War Illness (GWI) study conducts combinatorial screening of
many interactive neural and humoral biomarkers in order to establish
predictive, diagnostic, and therapeutic targets. We encounter obstacles at
every stage of the biomarker discovery process, from sample acquisition and
biomarker extraction to multi-aspect, multi-way interaction analysis, due to
the study's complexity and the lack of support for solving complex data problems. We
introduce a novel data platform, named ROSALIND, to overcome the challenges,
foster healthy and vital collaborations and advance scientific inquiries.
Main methods: ROSALIND is a researcher-centered, study-specific data
platform. It provides vital support of individual creativity and effort in
collaborative research. We follow the principles etched in the platform name -
ROSALIND stands for resource organisms with self-governed accessibility,
linkability, integrability, neutrality, and dependability. We translate, encode
and implement the principles in the platform with novel use of advanced
concepts and techniques to ensure and protect data integrity and research
integrity. From a researcher's vantage point, ROSALIND embodies nuanced
utilities and advanced functionalities in one system, beyond conventional
storage, archiving and data management.
Key findings: The deployment of ROSALIND in our GWI study over the past 12
months has accelerated the pace of data experimentation and analysis, removed
numerous error sources, and increased research quality and productivity.
Significance: ROSALIND appears to be the first to address data integrity and research
integrity in tandem with digital measures and means. It also promises a new
type of distributed research networks with individualized data platforms
connected in various self-organized collaboration configurations.
|
We analyse the behaviour of the exponential sampling series $S_{w}^{\chi}f$
at a jump discontinuity of a bounded signal $f$. We obtain a representation
lemma that is used for analysing the series $S_{w}^{\chi}f$, and we establish
the approximation of functions with jump discontinuities by the series $S_{w}^{\chi}f$.
The rate of approximation of the exponential sampling series $S_{w}^{\chi}f$ is
obtained in terms of logarithmic modulus of continuity of functions and the
round-off and time-jitter errors are also studied. Finally, we give some
graphical representations of the approximation of discontinuous functions by
$S_{w}^{\chi}f$ using suitable kernels.
|
The Minho Quotation Resource was originally released in 2012. It provided
approximately 500,000 quotes from business leaders, analysts and politicians
that spanned the period from 2008 to 2012. The original resource had several
failings, including a large number of missing job titles and affiliations, as
well as unnormalised job titles that produced a large variation in spellings
and formats of the same employment position. There were also numerous duplicate
quotes. This update standardises the job title text and imputes missing job
titles and affiliations; duplicate quotes have been deleted. The update also
provides some metaphor and simile extraction as well as an emotion distribution
of the quotes, and replaces an antiquated Lucene index with a JSONL format
along with a rudimentary interface that can query the data supplied with the
resource. It is hoped that
this update will encourage the study of business communication in a time of
financial crisis.
|
We present the development of a machine learning based pipeline to fully
automate the calibration of the frequency comb used to read out optical/IR
Microwave Kinetic Inductance Detector (MKID) arrays. This process involves
determining the resonant frequency and optimal drive power of every pixel (i.e.
resonator) in the array, which is typically done manually. Modern optical/IR
MKID arrays, such as DARKNESS (DARK-speckle Near-infrared Energy-resolving
Superconducting Spectrophotometer) and MEC (MKID Exoplanet Camera), contain
10,000-20,000 pixels, making the calibration process extremely time consuming;
each 2000-pixel feedline requires 4-6 hours of manual tuning. Here we present a
pipeline which uses a single convolutional neural network (CNN) to perform both
resonator identification and tuning simultaneously. We find that our pipeline
has performance equal to that of the manual tuning process, and requires just
twelve minutes of computational time per feedline.
|
Artificial spin ice systems have seen burgeoning interest due to their
intriguing physics and potential applications in reprogrammable memory, logic
and magnonics. Integration of artificial spin ice with functional magnonics is
a relatively recent research direction, with a host of promising results. As
the field progresses, direct in-depth comparisons of distinct artificial spin
systems are crucial to advancing the field. While studies have investigated the
effects of different lattice geometries, little comparison exists between
systems comprising continuously connected nanostructures, where spin-waves
propagate via dipole-exchange interaction, and systems with nanobars
disconnected at vertices where spin-wave propagation occurs via stray
dipolar-field. Gaining understanding of how these very different coupling
methods affect both spin-wave dynamics and magnetic reversal is key for the
field to progress and provides crucial system-design information including for
future systems containing combinations of connected and disconnected elements.
Here, we study the magnonic response of two kagome spin ices via Brillouin
light scattering, a continuously connected system and a disconnected system
with vertex gaps. We observe distinct high-frequency dynamics and magnetization
reversal regimes between the systems, with key distinctions in spin-wave
localization and mode quantization, microstate-trajectory during reversal and
internal field-profiles. These observations are pertinent for the fundamental
understanding of artificial spin systems and broader design and engineering of
reconfigurable functional magnonic crystals.
|
Studies using asteroseismic ages and rotation rates from star-spot rotation
have indicated that standard age-rotation relations may break down roughly
half-way through the main sequence lifetime, a phenomenon referred to as
weakened magnetic braking. While rotation rates from spots can be difficult to
determine for older, less active stars, rotational splitting of asteroseismic
oscillation frequencies can provide rotation rates for both active and
quiescent stars, and so can confirm whether this effect really takes place on
the main sequence.
We obtained asteroseismic rotation rates of 91 main sequence stars showing
high signal-to-noise modes of oscillation. Using these new rotation rates,
along with effective temperatures, metallicities and seismic masses and ages,
we built a hierarchical Bayesian mixture model to determine whether the
ensemble more closely agreed with a standard rotational evolution scenario, or
one where weakened magnetic braking takes place. The weakened magnetic braking
scenario was found to be 98.4% more likely for our stellar ensemble, adding to
the growing body of evidence for this stage of stellar rotational evolution.
This work represents the largest catalogue of seismic rotation on the main
sequence to date, opening up possibilities for more detailed ensemble analysis
of rotational evolution with Kepler.
|
We establish the existence and uniqueness of solutions to stochastic 2D
Navier-Stokes equations in a time-dependent domain driven by Brownian motion. A
martingale solution is constructed through domain transformation and
appropriate Galerkin approximations on time-dependent spaces. The probabilistic
strong solution follows from the pathwise uniqueness and the Yamada-Watanabe
theorem.
|
Visible-infrared cross-modality person re-identification (VI-ReID), whose aim
is to match person images between visible and infrared modality, is a
challenging cross-modality image retrieval task. Most existing works integrate
batch normalization layers into their neural networks, but we find that batch
normalization layers lead to two types of distribution gap: 1) the
inter-mini-batch distribution gap -- the distribution gap of the same modality
between mini-batches; and 2) the intra-mini-batch modality distribution gap --
the distribution gap between different modalities within the same mini-batch.
To address these problems, we propose a new batch normalization layer called
Modality Batch Normalization (MBN), which normalizes each modality
sub-mini-batch separately instead of the whole mini-batch, and can reduce these
distribution gaps significantly. Extensive experiments show that our MBN is able to boost the
performance of VI-ReID models, even with different datasets, backbones and
losses.
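A minimal PyTorch sketch of the idea, assuming each mini-batch stacks the visible sub-mini-batch before the infrared one; whether the two branches share affine parameters, and where the layer is placed, are design choices the abstract does not fix:

```python
import torch
import torch.nn as nn

class ModalityBatchNorm(nn.Module):
    """Sketch of MBN: normalize each modality sub-mini-batch separately
    instead of normalizing the whole (mixed-modality) mini-batch."""
    def __init__(self, num_features):
        super().__init__()
        self.bn_visible = nn.BatchNorm2d(num_features)
        self.bn_infrared = nn.BatchNorm2d(num_features)

    def forward(self, x, num_visible):
        # x: (B, C, H, W) with visible samples first, infrared samples after
        vis, ir = x[:num_visible], x[num_visible:]
        return torch.cat([self.bn_visible(vis), self.bn_infrared(ir)], dim=0)
```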
|
Defect detection in the manufacturing industry is of utmost importance for
product quality inspection. Recently, optical defect detection has been
investigated as an anomaly detection using different deep learning methods.
However, recent works do not explore the use of point pattern features, such as
SIFT, for anomaly detection with the recently developed set-based methods. In
this paper, we present an evaluation of different point pattern
feature detectors and descriptors for defect detection application. The
evaluation is performed within the random finite set framework. Handcrafted
point pattern features, such as SIFT as well as deep features are used in this
evaluation. Random finite set-based defect detection is compared with
state-of-the-art anomaly detection methods. The results show that using point
pattern features, such as SIFT as data points for random finite set-based
anomaly detection achieves the most consistent defect detection accuracy on the
MVTec-AD dataset.
|
We compute the sphere and disk partition functions in semiclassical Liouville
and analogous quantities in double-scaled matrix integrals. The quantity
sphere/disk^2 is unambiguous and we find a precise numerical match between the
Liouville answer and the matrix integral answer. An application is to show that
the sphere partition function in JT gravity is infinite.
|
2019 is the bicentenary of George Gabriel Stokes, who in 1851 described the
drag - Stokes drag - on a body moving immersed in a fluid, and 2020 is the
centenary of Christopher Robin Milne, for whom the game of poohsticks was
invented; his father A. A. Milne's "The House at Pooh Corner", in which it was
first described in print, appeared in 1928. So this is an apt moment to review
the state of the art of the fluid mechanics of a solid body in a complex fluid
flow, and one floating at the interface between two fluids in motion.
Poohsticks pertains to the latter category, when the two fluids are water and
air.
|
We prove that all Gibbs measures of the $q$-state Potts model on
$\mathbb{Z}^2$ are linear combinations of the extremal measures obtained as
thermodynamic limits under free or monochromatic boundary conditions. In
particular all Gibbs measures are invariant under translations. This statement
is new at points of first-order phase transition, that is at $T=T_{c}(q)$ when
$q>4$. In this case the structure of Gibbs measures is the most complex in the
sense that there exist $q+1$ distinct extremal measures.
Most of the work is devoted to the FK-percolation model on $\mathbb{Z}^{2}$
with $q\geq 1$, where we prove that every Gibbs measure is a linear combination
of the free and wired ones. The arguments are non-quantitative and follow the
spirit of the seminal works of Aizenman and Higuchi, which established the
Gibbs structure for the two-dimensional Ising model. Infinite-range
dependencies in FK-percolation (i.e., a weaker spatial Markov property) pose
serious additional difficulties compared to the case of the Ising model. For
example, it is not automatic, albeit true, that thermodynamic limits are Gibbs.
The result for the Potts model is then derived using the Edwards-Sokal coupling
and auto-duality. The latter ingredient is necessary since applying the
Edwards-Sokal procedure to a Gibbs measure for the Potts model does not
automatically produce a Gibbs measure for FK-percolation.
Finally, the proof is generic enough to adapt to the FK-percolation and Potts
models on the triangular and hexagonal lattices and to the loop $O(n)$ model in
the range of parameters for which its spin representation is positively
associated.
|
Label Smoothing (LS) improves model generalization through penalizing models
from generating overconfident output distributions. For each training sample,
the LS strategy smooths the one-hot encoded training signal by distributing a
portion of its probability mass over the non-ground-truth classes. We extend this technique
by considering example pairs, coined PLS. PLS first creates midpoint samples by
averaging random sample pairs and then learns a smoothing distribution during
training for each of these midpoint samples, resulting in midpoints with high
uncertainty labels for training. We empirically show that PLS significantly
outperforms LS, achieving up to 30% of relative classification error reduction.
We also visualize that PLS produces very low winning softmax scores for both in
and out of distribution samples.
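For reference, standard LS and the midpoint construction can be sketched as below; note that PLS learns the smoothing distribution for each midpoint during training, whereas this sketch simply averages the pair's labels for illustration:

```python
import numpy as np

def label_smoothing(one_hot, eps=0.1):
    """Standard LS: move eps of the probability mass from the ground-truth
    class onto the non-ground-truth classes, uniformly."""
    k = one_hot.shape[-1]
    return one_hot * (1.0 - eps) + (1.0 - one_hot) * eps / (k - 1)

def midpoint_sample(x1, y1, x2, y2):
    """PLS-style midpoint of a random sample pair. Averaging the labels here
    is an illustrative stand-in: PLS instead *learns* a smoothing
    distribution for each midpoint, yielding high-uncertainty labels."""
    return 0.5 * (x1 + x2), 0.5 * (y1 + y2)
```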
|
Mobile phones enable the collection of a wealth of private information, from
unique identifiers (e.g., email addresses), to a user's location, to their text
messages. This information can be harvested by apps and sent to third parties,
which can use it for a variety of purposes. In this paper we perform the
largest study of private information collection (PIC) on Android to date.
Leveraging an anonymized dataset collected from the customers of a popular
mobile security product, we analyze the flows of sensitive information
generated by 2.1M unique apps installed by 17.3M users over a period of 21
months between 2018 and 2019. We find that 87.2% of all devices send private
information to at least five different domains, and that actors active in
different regions (e.g., Asia compared to Europe) are interested in collecting
different types of information. The United States (62% of total flows) and
China (7% of total flows) are the countries that collect the most private information.
Our findings raise issues regarding data regulation and should encourage
policymakers to further regulate how private information is used by and shared
among companies, and how accountability can be truly guaranteed.
|
We consider the problem of absence of backscattering in the transport of
Manakov solitons on a line. The concept of transparent boundary conditions is
used for modeling the reflectionless propagation of Manakov vector solitons in
a one-dimensional domain. Artificial boundary conditions that ensure the
absence of backscattering are derived and their numerical implementation is
demonstrated.
|
The concept of social trust has attracted the attention of information
processors/data scientists and information consumers/business firms. One of
the main reasons for acquiring the value of social big data (SBD) is to provide
frameworks and methodologies with which the credibility of online social
services users can be evaluated. These approaches should be scalable to
accommodate large-scale social data. Hence, there is a need for a thorough
comprehension of social trust to improve and expand the analysis process and
the inference of the credibility of SBD. Given the exposed settings of the
environment and the few restrictions of online social services, the medium
allows legitimate and genuine users, as well as spammers and other
low-trustworthy users, to publish and spread their content. This chapter
presents an overview of the notion of credibility in the
context of SBD. It also lists an array of approaches to measure and evaluate the
trustworthiness of users and their contents. Finally, a case study is presented
that incorporates semantic analysis and machine learning modules to measure and
predict users' trustworthiness in numerous domains in different time periods.
The evaluation of the conducted experiment validates the applicability of the
incorporated machine learning techniques to predict highly trustworthy
domain-based users.
|
This paper studies a single-machine scheduling problem with a non-renewable
resource (NR-SSP) and total weighted completion time criterion. The
non-renewable resource is consumed when the machine starts processing a job. We
consider the case where each job's weight in the objective function is
proportional to its resource consumption amount. The problem is known to be
NP-hard in this case. We propose a 3-approximation list scheduling algorithm
for this problem. Moreover, we show that the approximation ratio of 3 is tight for
the algorithm.
|
We show that the infimum of the dual volume of the convex core of a convex
co-compact hyperbolic $3$-manifold with incompressible boundary coincides with
the infimum of the Riemannian volume of its convex core, as we vary the
geometry by quasi-isometric deformations. We deduce a linear lower bound of the
volume of the convex core of a quasi-Fuchsian manifold in terms of the length
of its bending measured lamination, with optimal multiplicative constant.
|
Before the recent publication of the correspondence between Gauss and Encke,
nothing was known about the role that John Taylor, a cotton merchant from
Liverpool, had played in the life of Gotthold Eisenstein. In this article, we
will bring together what we have discovered about John Taylor's life.
|
We introduce a theoretical framework for understanding and predicting the
complexity of sequence classification tasks, using a novel extension of the
theory of Boolean function sensitivity. The sensitivity of a function, given a
distribution over input sequences, quantifies the number of disjoint subsets of
the input sequence that can each be individually changed to change the output.
We argue that standard sequence classification methods are biased towards
learning low-sensitivity functions, so that tasks requiring high sensitivity
are more difficult. To that end, we show analytically that simple lexical
classifiers can only express functions of bounded sensitivity, and we show
empirically that low-sensitivity functions are easier to learn for LSTMs. We
then estimate sensitivity on 15 NLP tasks, finding that sensitivity is higher
on challenging tasks collected in GLUE than on simple text classification
tasks, and that sensitivity predicts the performance both of simple lexical
classifiers and of vanilla BiLSTMs without pretrained contextualized
embeddings. Within a task, sensitivity predicts which inputs are hard for such
simple models. Our results suggest that the success of massively pretrained
contextual representations stems in part from the fact that they provide
representations from which information can be extracted by low-sensitivity decoders.
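The classical pointwise special case of this notion can be computed by brute force; the paper's definition generalizes it to disjoint subsets of positions and to a distribution over input sequences:

```python
from itertools import product

def sensitivity(f, n):
    """Max over inputs x in {0,1}^n of the number of single-bit flips
    that change f(x) -- the classical sensitivity of a Boolean function."""
    best = 0
    for x in product((0, 1), repeat=n):
        flips = sum(
            f(x[:i] + (1 - x[i],) + x[i + 1:]) != f(x)
            for i in range(n)
        )
        best = max(best, flips)
    return best

# Parity (XOR) is maximally sensitive: every bit flip changes the output.
print(sensitivity(lambda x: sum(x) % 2, n=4))  # 4
```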
|
Heart sound (also known as phonocardiogram, PCG) analysis is a popular way to
detect cardiovascular diseases (CVDs). Most PCG analysis uses a supervised
approach, which demands both normal and abnormal samples. This paper proposes a
method of unsupervised PCG analysis that uses a beta variational auto-encoder
($\beta-\text{VAE}$) to model normal PCG signals. The best-performing model
reaches an AUC (Area Under Curve) value of 0.91 in the ROC (Receiver Operating
Characteristic) test for PCG signals collected from the same source. Unlike the
majority of $\beta-\text{VAE}$s, which are used as generative models, the
best-performing $\beta-\text{VAE}$ has a $\beta$ value smaller than 1. Further
experiments find that introducing a lightly weighted KL divergence between the
distribution of the latent space and the normal distribution improves the
performance of anomalous PCG detection based on anomaly scores derived from the
reconstruction loss. This suggests that anomaly scores based on the
reconstruction loss may be better than anomaly scores based on the latent
vectors of samples.
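A minimal sketch of the training objective and the reconstruction-based anomaly score, assuming a mean-squared-error reconstruction term and a diagonal Gaussian posterior (the abstract does not specify the encoder/decoder or the exact reconstruction loss):

```python
import torch
import torch.nn.functional as F

def beta_vae_loss(x, x_hat, mu, logvar, beta=0.5):
    """beta-VAE objective: reconstruction term plus a beta-weighted KL
    divergence between N(mu, diag(exp(logvar))) and N(0, I). The abstract
    reports that beta < 1 (a lightly weighted KL term) performs best here."""
    recon = F.mse_loss(x_hat, x, reduction='sum')
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl

def anomaly_score(x, x_hat):
    """Reconstruction-loss anomaly score: a model trained only on normal
    PCG should reconstruct abnormal signals poorly."""
    return F.mse_loss(x_hat, x, reduction='sum')
```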
|
We introduce a new family of particle evolution samplers suitable for
constrained domains and non-Euclidean geometries. Stein Variational Mirror
Descent and Mirrored Stein Variational Gradient Descent minimize the
Kullback-Leibler (KL) divergence to constrained target distributions by
evolving particles in a dual space defined by a mirror map. Stein Variational
Natural Gradient exploits non-Euclidean geometry to more efficiently minimize
the KL divergence to unconstrained targets. We derive these samplers from a new
class of mirrored Stein operators and adaptive kernels developed in this work.
We demonstrate that these new samplers yield accurate approximations to
distributions on the simplex, deliver valid confidence intervals in
post-selection inference, and converge more rapidly than prior methods in
large-scale unconstrained posterior inference. Finally, we establish the
convergence of our new procedures under verifiable conditions on the target
distribution.
|
In this paper, we consider the anisotropic Lorentz-Karamata space of
$2\pi$-periodic functions of $m$ variables and the Nikol'skii--Besov class. We
establish order-sharp estimates of the best approximation, by trigonometric
polynomials with harmonic numbers from the step hyperbolic cross, of functions
from the Nikol'skii--Besov class in the norm of the anisotropic
Lorentz-Karamata space.
|
Multicore CPU architectures have been established as a structure for
general-purpose systems for high-performance processing of applications. Recent
multicore CPUs have evolved into system architectures based on non-uniform
memory access (NUMA). Kernel-space techniques that shift tasks to the ideal
memory node cannot take into account the characteristics of user-space
applications; therefore, kernel-level approaches cannot perform memory
scheduling that recognizes the importance of user applications. Moreover, to
ensure high performance, users need to understand the NUMA-based multicore CPU
sufficiently before running their applications. This paper presents a
user-space memory scheduler that allocates the ideal memory node for tasks by
monitoring the characteristics of the non-uniform memory architecture. In our
experiments, the proposed system improved application performance by up to 25%
compared to the existing system.
|
Using the Floquet Hamiltonian derived from time-dependent perturbation theory,
we investigated the quasienergy bands of a one-dimensional
time-Floquet photonic crystal. The time-Floquet photonic crystal contains two
alternating layers labeled as A and B, and the permittivity of A layer is
modulated periodically in time. We showed that the quasienergy bands are
reciprocal when the modulation function is a function of time only, while the
quasienergy bands could be nonreciprocal when the permittivity is modulated in
both time and space through a unique combination. In the former case, the
coupling between the positive (negative) and positive (negative) bands results
in quasienergy gaps, while the coupling between the positive and negative bands
leads to pairs of exceptional points when the modulation is on the real part of
the permittivity. In the latter case, the coupling between the positive
(negative) and positive (negative) bands still results in quasienergy gaps.
However, the coupling between the positive and negative bands leads to
quasienergy gaps at a small modulation speed and pairs of exceptional points at
a high modulation speed.
|
The many unusual properties of the enigmatic AT2018cow suggested that at
least some subset of the empirical class of fast blue optical transients
(FBOTs) represents a genuinely new astrophysical phenomenon. Unfortunately, the
intrinsic rarity and fleeting nature of these events have made it difficult to
identify additional examples early enough to acquire the observations necessary
to constrain theoretical models. We present here the Zwicky Transient Facility
discovery of AT2020xnd (ZTF20acigmel, the "Camel") at z=0.243, the first
unambiguous AT2018cow analog to be found and confirmed in real time. AT2018cow
and AT2020xnd share all key observational properties: a fast optical rise,
sustained high photospheric temperature, absence of a second peak attributable
to ejection of a radioactively-heated stellar envelope, extremely luminous
radio, millimetre, and X-ray emission, and a dwarf-galaxy host. This supports
the argument that AT2018cow-like events represent a distinct phenomenon from
slower-evolving radio-quiet supernovae, likely requiring a different progenitor
or a different central engine. The sample properties of the four known members
of this class to date disfavour tidal disruption models but are consistent with
the alternative model of an accretion powered jet following the direct collapse
of a massive star to a black hole. Contextual filtering of alert streams
combined with rapid photometric verification using multi-band imaging provides
an efficient way to identify future members of this class, even at high
redshift.
|
This paper is concerned with the parabolic-elliptic Keller-Segel system with
nonlinear diffusion and signal-dependent sensitivity
\begin{align}\tag{KS}\label{system} \begin{cases}
u_t=\Delta(u+1)^m-\nabla\cdot(u\chi(v)\nabla v),\quad &x\in\Omega, t>0,\\
0=\Delta v-v+u, &x\in\Omega, t>0 \end{cases} \end{align} under homogeneous
Neumann boundary conditions and initial conditions, where
$\Omega=B_R(0)\subset\mathbb{R}^N$ ($N\geq3,\ R>0$) is a ball, $m\geq 1$,
$\chi$ is a function satisfying that $\chi(s)\geq\chi_0(a+s)^{-k}$ ($k>0$,
$\chi_0>0$, $a\geq 0$) for all $s>0$, together with some further conditions. In
the case that $m=1$ and $\chi(s)=\chi_0s^{-k}$, Nagai-Senba established
finite-time blow-up of solutions under smallness conditions on a moment of the
initial data $u(x,0)$ and some condition on $k\in(0,1)$. Moreover, in the case
that $\chi(s)\equiv(\mbox{const.})$, Sugiyama showed finite-time blow-up of
solutions under the condition $m\in[1,2-\frac{2}{N})$. According to these two
previous works, it seems that smallness conditions on $m$ and $k$ lead to
finite-time blow-up of solutions. The purpose of this paper is to give a
relationship, depending only on $m$, $k$ and $N$, under which there exists
initial data leading to finite-time blow-up of solutions.
|
The LIGO-Virgo gravitational-wave (GW) observation unveiled the new
population of black holes (BHs) that appears to have an extended mass spectrum
up to around $70M_\odot$, much heavier than the previously-believed mass range
($\sim 8M_\odot$). In this paper, we study the capability of a microlensing
observation of stars in the Milky Way (MW) bulge region to identify BHs of GW
mass scales, taking into account the microlensing parallax characterized by the
parameter $\pi_{\rm E}\propto M^{-1/2}$ ($M$ being the mass of the lens), which
is a dimensionless quantity defined as the ratio of the astronomical unit to
the projected Einstein radius. First, assuming that BHs follow the same spatial and
velocity distributions of stars as predicted by the standard MW model, we show
that microlensing events with long light curve timescales, $t_{\rm E}\gtrsim
100~{\rm days}$, and small parallax effects, $\pi_{\rm E}\sim 10^{-2}$, are
dominated by BH lenses compared to stellar-mass lenses. Second, using a Markov
chain Monte Carlo analysis of the simulated light curve, we show that BH lens
candidates are securely identified on individual basis, if the parallax effect
is detected or well constrained to the precision of a percent level in
$\pi_{\rm E}$. We also discuss that a microlensing event of an
intermediate-mass BH of $\sim 1000M_\odot$, if it occurs, can be identified in
a distinguishable way from stellar-mass BHs.
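In standard microlensing notation (an assumption consistent with, though not spelled out in, the abstract), the parallax parameter and its mass scaling read \[ \pi_{\rm E} = \frac{1\,{\rm AU}}{\tilde r_{\rm E}} = \sqrt{\frac{\pi_{\rm rel}}{\kappa M}}, \qquad \kappa \equiv \frac{4G}{c^{2}\,{\rm AU}}, \qquad \pi_{\rm rel} = 1\,{\rm AU}\left(\frac{1}{D_{\rm l}}-\frac{1}{D_{\rm s}}\right), \] so heavier lenses such as BHs produce smaller $\pi_{\rm E}\propto M^{-1/2}$, which is the selection exploited above.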
|
In this work we propose a novel and fully automated method for extracting the
yarn geometrical features in woven composites so that a direct parametrization
of the textile reinforcement is achieved (e.g., FE mesh). Thus, our aim is not
only to perform yarn segmentation from tomographic images but rather to provide
a complete descriptive modeling of the fabric. As such, this direct approach
improves on previous methods that use voxel-wise masks as intermediate
representations followed by re-meshing operations (yarn envelope estimation).
The proposed approach employs two deep neural network architectures (U-Net and
Mask R-CNN). First, we train the U-Net to generate synthetic CT images from the
corresponding FE simulations. This allows us to generate large quantities of
annotated data without requiring costly manual annotations. These data are then
used to train the Mask R-CNN, which is focused on predicting contour points
around each of the yarns in the image. Experimental results show that our
method is accurate and robust for performing yarn instance segmentation on CT
images, as further validated by quantitative and qualitative analyses.
|
In this paper, we explore the use of pre-trained language models to learn
sentiment information of written texts for speech sentiment analysis. First, we
investigate how useful a pre-trained language model would be in a 2-step
pipeline approach employing Automatic Speech Recognition (ASR) and
transcripts-based sentiment analysis separately. Second, we propose a pseudo
label-based semi-supervised training strategy using a language model on an
end-to-end speech sentiment approach to take advantage of a large, but
unlabeled speech dataset for training. Although spoken and written texts have
different linguistic characteristics, they can complement each other in
understanding sentiment. Therefore, the proposed system can not only model the
acoustic characteristics that carry sentiment-specific information in speech
signals, but also learn the latent information that carries sentiment in the
text representation. In these experiments, we demonstrate that the proposed approaches
improve F1 scores consistently compared to systems without a language model.
Moreover, we show that the proposed framework can reduce human supervision by
65% by leveraging a large amount of data without human sentiment annotation,
and that it boosts performance in low-resource conditions where human sentiment
annotation is scarce.
|
We analyze the properties of the scattering solutions obtained as poles of
the S- and K-matrix with the help of the Jost function framework and
Sturm-Liouville theory within the Hartree-Fock-Bogoliubov (HFB) framework, and
clarify which scattering solutions can be defined as physical states. We find
that there are three types of resonances: the "{\it shape resonance}" and the
"{\it particle-type}" and "{\it hole-type quasiparticle resonances}"; two
further types of solutions are given as independent S-matrix and K-matrix
poles. The shape resonance is formed by the Hartree-Fock (HF) mean-field
potential and is not strongly affected by the pairing correlation. The
particle-type and hole-type quasiparticle resonances originate from the
particle and hole states through the configuration-mixing effect of pairing.
All resonances are represented by an S-matrix pole which has a corresponding
K-matrix pole. The two other types of solutions are given by independent
S-matrix and K-matrix poles, which are formed by the HF mean-field potential.
The effect of pairing on the independent S-matrix pole is small, but its effect
on the independent K-matrix pole is remarkable: the independent K-matrix pole
destroys the quasiparticle resonance as it approaches the resonance under the
pairing effect. The wave functions of all resonances have the characteristic
structure of a metastable state; however, the metastable structure of the wave
function of the quasiparticle resonance can be broken by the independent
standing-wave solution or the Fano effect.
|
The multifractal formalism for measures in its original formulation is
checked for special classes of measures such as doubling, self-similar, and
Gibbs-like ones. Out of these classes, suitable conditions should be taken into
account to prove the validity of the multifractal formalism. In the present
work, a large class of measures satisfying a weak condition known as quasi
Ahlfors is considered in the framework of mixed multifractal analysis. A joint
multifractal analysis of finitely many quasi Ahlfors probability measures is
developed. Mixed variants of multifractal generalizations of Hausdorff and
packing measures, and corresponding dimensions are introduced. By applying
convexity arguments, some properties of these measures and dimensions are
established. Finally, an associated multifractal formalism is introduced and
proved to hold for the class of quasi Ahlfors measures.
|
In recent work, G. E. Andrews and G. Simay prove a surprising relation
involving parity palindromic compositions, and ask whether a combinatorial
proof can be found. We extend their results to a more general class of
compositions that are palindromic modulo $m$, that includes the parity
palindromic case when $m=2$. We then provide combinatorial proofs for the cases
$m=2$ and $m=3$.
|
With rapidly evolving internet technologies and emerging tools, sports
related videos generated online are increasing at an unprecedentedly fast pace.
To automate sports video editing/highlight generation process, a key task is to
precisely recognize and locate the events in the long untrimmed videos. In this
tech report, we present a two-stage paradigm to detect what and when events
happen in soccer broadcast videos. Specifically, we fine-tune multiple action
recognition models on soccer data to extract high-level semantic features, and
design a transformer based temporal detection module to locate the target
events. This approach achieved state-of-the-art performance in both tasks,
i.e., action spotting and replay grounding, in the SoccerNet-v2 Challenge under
the CVPR 2021 ActivityNet workshop. Our soccer embedding features
are released at https://github.com/baidu-research/vidpress-sports. By sharing
these features with the broader community, we hope to accelerate the research
into soccer video understanding.
|
The concept of $\check{H}^n$-bubbles was defined and investigated previously.
In this paper we generalize this concept to some other functors $F$. Open
questions are formulated.
|
Accelerator magnets must have minimal magnetic field imperfections to reduce
particle-beam instabilities. In the case of coils made of
high-temperature superconducting (HTS) tapes, the field imperfections from
persistent currents need to be carefully evaluated. In this paper we study the
use of superconducting screens based on HTS tapes for reducing the magnetic
field imperfections in accelerator magnets. The screens exploit the
magnetization by persistent currents to cancel out the magnetic field error.
The screens are aligned with the main field components, such that only the
undesired field components are compensated. The screens are passive,
self-regulating, and do not require any external source of energy. Measurements
in liquid nitrogen at 77 Kelvin show for dipole-field configurations a
significant reduction of the magnetic-field error up to a factor of four. The
residual error is explained via numerical simulations, accounting for the
geometrical imperfections in the HTS screens, thus achieving satisfactory
agreement with experimental results. Simulations show that if screens are
increased in width and thickness, and operated at 4.5 Kelvin, field errors may
be eliminated almost entirely for the typical excitation cycles of accelerator
magnets.
|
Object Detection (OD) is an important computer vision problem for industry,
which can be used for quality control on production lines, among other
applications. Recently, Deep Learning (DL) methods have enabled practitioners
to train OD models performing well on complex real world images. However, the
adoption of these models in industry is still limited by the difficulty and the
significant cost of collecting high quality training datasets. On the other
hand, when applying OD to the context of production lines, CAD models of the
objects to be detected are often available. In this paper, we introduce a fully
automated method that uses a CAD model of an object and returns a fully trained
OD model for detecting this object. To do this, we created a Blender script
that generates realistic labeled datasets of images containing the object,
which are then used for training the OD model. The method is validated
experimentally on two practical examples, showing that this approach can
generate OD models performing well on real images, while being trained only on
synthetic images. The proposed method has the potential to facilitate the
adoption of object detection models in industry, as it is easy to adapt to new
objects and highly flexible. Hence, it can result in significant cost
reductions, productivity gains, and improved product quality.
|
The extension of the Standard Model with two gauge-singlet Majorana fermions
can simultaneously explain two beyond-the-Standard-model phenomena: neutrino
masses and oscillations, as well as the origin of the matter-antimatter
asymmetry in the Universe. The parameters of such a model are constrained by
the neutrino oscillation data, direct accelerator searches, big bang
nucleosynthesis, and requirement of successful baryogenesis. We show that the
combination of all these constraints still leaves an allowed region in the
parameter space below the kaon mass. This region can be probed by further
searches at the NA62, DUNE, or SHiP experiments.
|
Dynamical models of Solar System evolution have suggested that P-/D-type
volatile-rich asteroids formed in the outer Solar System and may be genetically
related to the Jupiter Trojans, the comets and small KBOs. Indeed, their
spectral properties resemble those of anhydrous cometary dust.
High-angular-resolution images of P-type asteroid (87) Sylvia with VLT/SPHERE
were used to reconstruct its 3D shape, and to study the dynamics of its two
satellites. We also model Sylvia's thermal evolution. The shape of Sylvia
appears flattened and elongated. We derive a volume-equivalent diameter of 271
+/- 5 km, and a low density of 1378 +/- 45 kg m-3. The two satellites orbit
Sylvia on circular, equatorial orbits. The oblateness of Sylvia should imply a
detectable nodal precession, which contrasts with the fully Keplerian dynamics
of the satellites. This reveals an inhomogeneous internal structure, suggesting
that Sylvia is differentiated. Sylvia's low density and differentiated interior
can be explained by partial melting and mass redistribution through water
percolation. The outer shell would be composed of material similar to
interplanetary dust particles (IDPs) and the core similar to aqueously altered
IDPs or carbonaceous chondrite meteorites such as the Tagish Lake meteorite.
Numerical simulations of the thermal evolution of Sylvia show that for a body
of such size, partial melting was unavoidable due to the decay of long-lived
radionuclides. In addition, we show that bodies as small as 130-150 km in
diameter should have followed a similar thermal evolution, while smaller
objects, such as comets and the KBO Arrokoth, must have remained pristine, in
agreement with in situ observations of these bodies. NASA Lucy mission target
(617) Patroclus (diameter~140 km) may, however, be differentiated.
|
The ACM A.M. Turing Award is commonly acknowledged as the highest distinction
in the realm of computer science. Since the 1960s, it has been awarded to
computer scientists who have made outstanding contributions. The significance of this award
is far-reaching to the laureates as well as their research teams. However,
unlike the Nobel Prize that has been extensively investigated, little research
has been done to explore this most important award. To this end, we propose the
Turing Number (TN) index to measure how far a specific scholar is to this
award. Inspired by previous works on Erdos Number and Bacon Number, this index
is defined as the shortest path between a given scholar to any Turing Award
Laureate. Experimental results suggest that TN can reflect the closeness of
collaboration between scholars and Turing Award Laureates. With the correlation
analysis between TN and metrics from the bibliometric-level and network-level,
we demonstrate that TN has the potential of reflecting a scholar's academic
influence and reputation.
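Since TN is defined like the Erdos Number with the Turing laureates as the target set, it can be computed by a breadth-first search over the co-authorship graph; the data structures below are illustrative assumptions:

```python
from collections import deque

def turing_number(coauthors, laureates, scholar):
    """TN index sketch: length of the shortest co-authorship path from
    `scholar` to any Turing Award laureate (0 for laureates themselves)."""
    if scholar in laureates:
        return 0
    seen, queue = {scholar}, deque([(scholar, 0)])
    while queue:
        person, dist = queue.popleft()
        for peer in coauthors.get(person, ()):
            if peer in laureates:
                return dist + 1
            if peer not in seen:
                seen.add(peer)
                queue.append((peer, dist + 1))
    return float('inf')  # no co-authorship path to any laureate
```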
|
We define the concept of energy-variational solutions for the Navier--Stokes
and Euler equations. The underlying relative energy inequality holds as an
equality for classical solutions, and if the additional variable vanishes,
these solutions are equivalent to the weak formulation with the strong energy
inequality. By introducing an additional defect variable in time, all
restrictions and all concatenations of energy-variational solutions are again
energy-variational solutions. Via the criterion of maximal dissipation, a
unique solution is selected that not only depends continuously on the data
but also turns out to be a unique weak solution.
|
The first measurement of the production of pions, kaons, (anti-)protons and
$\phi$ mesons at midrapidity in Xe-Xe collisions at $\sqrt{s_{\rm NN}} = 5.44$
TeV is presented. Transverse momentum ($p_{\rm T}$) spectra and $p_{\rm
T}$-integrated yields are extracted in several centrality intervals bridging
from p-Pb to mid-central Pb-Pb collisions in terms of final-state multiplicity.
The study of Xe-Xe and Pb-Pb collisions allows systems at similar
charged-particle multiplicities but with different initial geometrical
eccentricities to be investigated. A detailed comparison of the spectral shapes
in the two systems reveals an opposite behaviour for radial and elliptic flow.
In particular, this study shows that the radial flow does not depend on the
colliding system when compared at similar charged-particle multiplicity. In
terms of hadron chemistry, the previously observed smooth evolution of particle
ratios with multiplicity from small to large collision systems is also found to
hold in Xe-Xe. In addition, our results confirm that two remarkable features of
particle production at LHC energies are also valid in the collision of
medium-sized nuclei: the lower proton-to-pion ratio with respect to the thermal
model expectations and the increase of the $\phi$-to-pion ratio with increasing
final-state multiplicity.
|
Highlights are presented of the science to be done with the SKA, as well as
of state-of-the-art science already being done today with its precursors
(MeerKAT, ASKAP) and pathfinders (LOFAR, NenuFAR), with emphasis on the
expected breakthroughs.
|
Federated Learning (FL) has received a significant amount of attention in the
industry and research community due to its capability of keeping data on local
devices. To aggregate the gradients of local models to train the global model,
existing works require the global model and the local models to be identical.
However, Internet of Things (IoT) devices are inherently diverse regarding
computation speed and onboard memory. In this paper, we propose an FL framework
targeting the heterogeneity of IoT devices. Specifically, local models are
compressed from the global model, and the gradients of the compressed local
models are used to update the global model. We conduct preliminary experiments
to illustrate that our framework can facilitate the design of IoT-aware FL.
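As a concrete illustration of the framework's core loop, here is a minimal
sketch in which, purely as an assumption for the example, "compression" is
magnitude-based pruning of a toy linear model's weight vector; heterogeneous
devices receive differently sized masks and the server aggregates the masked
gradients.

```python
import numpy as np

def compress(global_w, keep_ratio):
    """Binary mask keeping the largest-magnitude weights (toy 'compression')."""
    k = max(1, int(len(global_w) * keep_ratio))
    idx = np.argsort(-np.abs(global_w))[:k]
    mask = np.zeros_like(global_w)
    mask[idx] = 1.0
    return mask

def local_gradient(local_w, x, y):
    """Gradient of a squared loss for a toy linear model on one device."""
    return 2.0 * (local_w @ x - y) * x

rng = np.random.default_rng(0)
global_w = rng.normal(size=10)
device_capacity = [1.0, 0.5, 0.3]   # fraction of weights each device can hold
lr = 0.01
for _ in range(100):                # federated rounds
    agg = np.zeros_like(global_w)
    for keep in device_capacity:
        mask = compress(global_w, keep)
        local_w = global_w * mask                    # compressed local model
        x, y = rng.normal(size=10), rng.normal()
        agg += local_gradient(local_w, x, y) * mask  # gradient lives on the mask
    global_w -= lr * agg / len(device_capacity)      # update the global model
```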
|
Microgrid (MG) energy management is an important part of MG operation.
Various entities are generally involved in the energy management of an MG,
e.g., energy storage system (ESS), renewable energy resources (RER) and the
load of users, and it is crucial to coordinate these entities. Considering the
significant potential of machine learning techniques, this paper proposes a
correlated deep Q-learning (CDQN) based technique for the MG energy management.
Each electrical entity is modeled as an agent which has a neural network to
predict its own Q-values, after which the correlated Q-equilibrium is used to
coordinate the operation among agents. In this paper, a Long Short-Term
Memory (LSTM) based deep Q-learning algorithm is introduced and the
correlated equilibrium is proposed to coordinate agents. The simulation
results show 40.9% and 9.62% higher profit for the ESS agent and the
photovoltaic (PV) agent, respectively.
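A minimal sketch of one agent's LSTM-based Q-network is shown below, assuming
the agent observes a short history of local measurements (e.g. load, price,
state of charge) and selects among a few discrete actions; layer sizes and the
action set are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn

class LSTMQNetwork(nn.Module):
    """Q-network that reads a sequence of local states and outputs one
    Q-value per discrete action for a single microgrid agent."""
    def __init__(self, state_dim=4, hidden_dim=32, n_actions=3):
        super().__init__()
        self.lstm = nn.LSTM(state_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, n_actions)

    def forward(self, state_seq):        # (batch, time, state_dim)
        out, _ = self.lstm(state_seq)
        return self.head(out[:, -1])     # Q-values from the last time step

qnet = LSTMQNetwork()
history = torch.randn(1, 24, 4)          # e.g. the last 24 hourly readings
q_values = qnet(history)
action = q_values.argmax(dim=-1)         # greedy action selection
```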
|
The phenomenon of quantum entanglement marks one of the furthest departures
from classical physics and is indispensable for quantum information processing.
Despite its fundamental importance, the distribution of entanglement over long
distances through photons is unfortunately hindered by unavoidable decoherence
effects. Entanglement distillation is a means of restoring the quality of such
diluted entanglement by concentrating it into a pair of qubits. Conventionally,
this would be done by distributing multiple photon pairs and distilling the
entanglement into a single pair. Here, we invert this paradigm by utilising
pairs of single photons entangled in multiple degrees of freedom.
Specifically, we make use of the polarisation and the energy-time domain of
photons, both of which are extensively field-tested. We experimentally chart
the domain of distillable states and achieve relative fidelity gains of up to
13.8%. Compared to the two-copy scheme, the distillation rate of our single-copy
scheme is several orders of magnitude higher, paving the way towards
high-capacity and noise-resilient quantum networks.
|
We theoretically study the superconducting properties of multi-band
two-dimensional transition metal oxide superconductors by analyzing not only
the role played by conventional singlet pairings, but also by the triplet order
parameters, favored by the spin-orbit couplings present in these materials.
In particular, we focus on the two-dimensional electron gas at the (001)
interface between LaAlO3 and SrTiO3 band insulators, where the low electron
densities and the sizeable spin-orbit couplings affect the superconducting
features. Our theoretical study is based on an extended superconducting
mean-field analysis of the typical multi-band tight-binding Hamiltonian, as
well as on a parallel analysis of the effective electronic bands in the
low-momentum limit, including static on-site and inter-site intra-band
attractive potentials under applied magnetic fields. The presence of triplet
pairings is able to strongly reduce the singlet order parameters which, as a
result, are no longer a monotonic function of the charge density. The interplay
between the singlet and the triplet pairings affects the dispersion of
quasi-particle excitations in the Brillouin zone and also induces anisotropy in
the superconducting behavior under the action of in-plane and out-of-plane
magnetic fields. Finally, non-trivial topological superconducting states
become stable as a function of the charge density, as well as of the
magnitude and of the orientation of the magnetic field. In addition to the
chiral, time-reversal breaking, topological superconducting phase, favored by
the linear Rashba couplings and by the on-site attractive potentials in the
presence of an out-of-plane magnetic field, we find that a time-reversal
invariant topological helical superconducting phase is promoted by non-linear
spin-orbit couplings and by the inter-site attractive interactions in the
absence of a magnetic field.
|
In classical Iwasawa theory, we mainly study codimension one behavior of
arithmetic modules. Relatively recently, F. M. Bleher, T. Chinburg, R.
Greenberg, M. Kakde, G. Pappas, R. Sharifi, and M. J. Taylor started studying
higher codimension behavior of unramified Iwasawa modules which are conjectured
to be pseudo-null. In this paper, by developing a general algebraic theory on
perfect complexes, we obtain a new perspective on their work. That allows us
to extend the results to equivariant settings and, even in non-equivariant
settings, to obtain more refined results concerning the higher codimension
behavior.
|
We compare and contrast the stellar structures of isolated Local Group dwarf
galaxies, as traced by their oldest stellar populations, with the satellite
dwarf galaxies of the Milky Way and M31. All Local Group dwarfs with Mv < -6
and surface brightness < 26.5 mag per square arcsec are considered, taking
advantage of measurements from surveys that use similar observations and
analysis techniques. For the isolated dwarfs, we use the results from the
Solitary Local (Solo) Dwarf Galaxy Survey. We begin by confirming that the
structural and dynamical properties of the two satellite populations are not
obviously statistically different from each other, but we note that there are
many more satellites around M31 than around the Milky Way down to equivalent
magnitude
and surface brightness limits. We find that dwarfs in close proximity to a
massive galaxy generally show more scatter in their Kormendy relations than
those in isolation. Specifically, isolated Local Group dwarf galaxies show a
tighter trend of half-light radius versus magnitude than the satellite
populations, and similar effects are also seen for related parameters. There
appears to be a transition in the structural and dynamical properties of the
dwarf galaxy population around ~400 kpc from the Milky Way and M31, such that
the smallest, faintest, most circular dwarf galaxies are found closer than this
separation. We discuss the impact of selection effects on our analysis, and we
argue that our results point to the significance of tidal interactions on the
population of systems within approximately 400 kpc from the MW and M31.
|
Quantifying the amount of polarization is crucial for understanding and
studying political polarization in political and social systems. Several
methods are commonly used to measure polarization in social networks by purely
inspecting their structure. We analyse eight such methods and show that all
of them yield high polarization scores even for random networks whose density
and degree distributions match those of typical real-world networks. Further,
some of the methods are sensitive to degree distributions and the relative
sizes of the polarized groups. We propose a normalization of the existing
scores and a minimal
set of tests that a score should pass in order for it to be suitable for
separating polarized networks from random noise. The performance of the scores
increased by 38%-220% after normalization in a classification task of 203
networks. Further, we find that the choice of method is not as important as
normalization, after which most of the methods have better performance than the
best-performing method before normalization. This work opens up the possibility
to critically assess and compare the features and performance of different
methods for measuring structural polarization.
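The normalization idea can be sketched as follows: compare the observed score
with its average over degree-preserving randomizations of the same network, so
that random networks with matching density and degree distribution score near
zero. The `score` function below is a generic placeholder standing in for any
of the eight analysed methods; the toy score in the usage example is only an
illustration.

```python
import random
import networkx as nx

def normalized_score(G, score, n_null=20, seed=0):
    """Observed score minus its mean over degree-preserving null models."""
    rng = random.Random(seed)
    observed = score(G)
    null_scores = []
    for _ in range(n_null):
        H = G.copy()
        # degree-preserving rewiring keeps density and degree distribution
        nx.double_edge_swap(H, nswap=4 * H.number_of_edges(),
                            max_tries=100 * H.number_of_edges(),
                            seed=rng.randint(0, 10**9))
        null_scores.append(score(H))
    baseline = sum(null_scores) / len(null_scores)
    return observed - baseline  # ~0 for noise, positive for real polarization

if __name__ == "__main__":
    G = nx.karate_club_graph()
    def toy_score(H):
        # placeholder structural score: modularity of a 2-way bisection
        parts = nx.algorithms.community.kernighan_lin_bisection(H)
        return nx.algorithms.community.modularity(H, parts)
    print(normalized_score(G, toy_score))
```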
|
Equilibrium statistical mechanics rests on the assumption of ergodic dynamics
of a system modulo the conservation laws of local observables: extremization of
entropy immediately gives Gibbs' ensemble (GE) for energy conserving systems
and a generalized version of it (GGE) when the number of local conserved
quantities (LCQ) is more than one. Through the last decade, statistical
mechanics has been extended to describe the late-time behaviour of periodically
driven (Floquet) quantum matter starting from a generic state. This structure
is built on the fundamental assumptions of ergodicity and the identification of
the relevant "conservation laws" in this inherently non-equilibrium setting. More
recently, it has been shown that this statistical mechanics has a much richer
structure due to the existence of {\it emergent} conservation laws: these are
approximate but stable conservation laws arising {\it due to the drive}, and
are not present in the undriven system. Extensive numerical and analytical
results support perpetual stability of these emergent (though approximate)
conservation laws, probably even in the thermodynamic limit. This relies on the
recent finding of a sharp ergodicity threshold for Floquet thermalization in
clean, interacting non-integrable Floquet systems. This opens up a new
possibility of stable Floquet engineering in such systems. This review intends
to give a theoretical overview of these developments. We conclude by briefly
surveying the experimental scenario.
|
In this paper we will solve an open problem raised by Man\'asevich and Mawhin
twenty years ago on the structure of the periodic eigenvalues of the vectorial
$p$-Laplacian. This is an Euler-Lagrange equation on the plane or in higher
dimensional Euclidean spaces. The main result obtained is that for any exponent
$p$ other than $2$, the vectorial $p$-Laplacian on the plane will admit
infinitely many different sequences of periodic eigenvalues with a given
period. These sequences of eigenvalues are constructed using the notion of
scaling momenta that we will introduce. The whole proof is based on the complete
integrability of the equivalent Hamiltonian system, the tricky reduction to
$2$-dimensional dynamical systems, and a number-theoretical distinguishing
between different sequences of eigenvalues. Some numerical simulations of the
new sequences of eigenvalues and eigenfunctions will be given. Several further
conjectures towards a panorama of the spectral sets will be proposed.
|
We give asymptotic upper and lower bounds for the real and imaginary parts of
cycle integrals of the classical modular j-function along geodesics that
correspond to Markov irrationalities.
|
We report sub-arcsecond ALMA observations between 272 and 375 GHz towards Sgr
A*'s circumnuclear disk (CND). Our data comprise 8 individual pointings, with
significant SiO (8(7) - 7(6)) and SO (7 - 6) emission detected towards 98
positions within these pointings. Additionally, we identify H2CS (9(1,9) -
8(1,8)), OCS (25 - 24) and CH3OH (2(1,1) - 2(0,2)) towards a smaller subset of
positions. By using the observed peak line flux density together with a
Bayesian Inference technique informed by radiative transfer models, we
systematically recover the physical gas conditions towards each of these
positions. We estimate that the bulk of the surveyed gas has temperature T <
500 K and density n $\lessapprox 10^{6}$ cm$^{-3}$, consistent with previous
studies of similar positions as traced by HCN clumps. However, we identify an
uncharacteristically hot (T $\approx 600$ K) and dense (n $\approx 10^{6}$
cm$^{-3}$) source in the Northeastern Arm. This position is found to be
approximately consistent with a gravitationally bound region dominated by
turbulence. We also identify a nearby cold (T $\approx 60$ K) and extremely
dense (n $\approx 10^{7}$ cm$^{-3}$) position that is again potentially bound
and dominated by turbulence. We also determine that the total gas mass
contained within the CND is M $\approx 4 \times 10^{4}$ $M_{\odot}$.
Furthermore, we qualitatively note that the observed chemical enrichment across
large scales within the CND is consistent with bulk grain processing, though
multiple desorption mechanisms are plausibly responsible. Further chemical
modelling is required to identify the physical origin of the grain processing,
as well as of the localised H2CS and OCS emission.
|
The rampant phenomenon of overpopulation and the remarkable increase of human
movements over the last decade have caused an aggressive re-emergence of dengue
fever, which has made it the subject of several research fields. In this
regard, mathematical modeling, notably through compartmental systems, is
considered an eminent tool for obtaining a clear overview of the disease's
prevalence behavior. In reality, as with all epidemics, the spread of dengue
is subject to randomness arising from fluctuations in the natural environment.
For this reason, a mathematical formulation that suitably accounts for this
external stochasticity is required. Accordingly, in this work we present and
analyze a generalized stochastic dengue model that incorporates both slight
and large environmental perturbations. More precisely, our proposed model
takes the form of a system of It\^o-L\'evy stochastic differential equations,
for which we demonstrate mathematical well-posedness and biological
significance. Based on some novel analytical techniques, we prove, under
appropriate hypotheses, two important asymptotic properties, namely extinction
and persistence in the mean. The theoretical findings show that the dynamics
of our perturbed dengue model are mainly determined by the parameters that
are closely related to the intensities of the small perturbations and the
magnitudes of the jumps. Finally, we give some illustrative numerical examples to
support our theoretical findings and to highlight the effect of the adopted
mathematical techniques on the results.
|
The advent of Stage IV weak lensing surveys will open up a new era in
precision cosmology. These experiments will offer more than an
order-of-magnitude leap in precision over existing surveys, and we must ensure
that the accuracy of our theory matches this. Accordingly, it is necessary to
explicitly evaluate the impact of the theoretical assumptions made in current
analyses on upcoming surveys. One effect typically neglected in present
analyses is the Doppler-shift of the measured source comoving distances. Using
Fisher matrices, we calculate the biases on the cosmological parameter values
inferred from a Euclid-like survey, if the correction for this Doppler-shift is
omitted. We find that this Doppler-shift can be safely neglected for Stage IV
surveys. The code used in this investigation is made publicly available.
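For readers unfamiliar with the formalism, the bias induced by a neglected
systematic follows from the standard Fisher-matrix result $b = F^{-1}B$; the
sketch below uses synthetic stand-ins for the power-spectrum derivatives and
the neglected Doppler signal, purely to illustrate the computation.

```python
import numpy as np

# Toy Fisher-matrix bias estimate for a neglected systematic.
# dC_dtheta would hold derivatives of the lensing observables w.r.t. the
# cosmological parameters, and delta_C the neglected Doppler contribution;
# here both are random stand-ins.
rng = np.random.default_rng(1)
n_data, n_params = 50, 3

dC_dtheta = rng.normal(size=(n_params, n_data))     # dC/dtheta_i per datum
cov = np.diag(rng.uniform(0.5, 2.0, size=n_data))   # data covariance
delta_C = 0.01 * rng.normal(size=n_data)            # neglected systematic

inv_cov = np.linalg.inv(cov)
fisher = dC_dtheta @ inv_cov @ dC_dtheta.T          # F_ij
B = dC_dtheta @ inv_cov @ delta_C                   # systematic response
bias = np.linalg.solve(fisher, B)                   # b = F^{-1} B

# |b_i| << sqrt((F^{-1})_ii) means the systematic can be safely neglected,
# which is the criterion behind the conclusion quoted above.
errors = np.sqrt(np.diag(np.linalg.inv(fisher)))
print(bias / errors)
```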
|
Recent simulations have shown that asymmetries in the ejecta distribution of
supernova remnants (SNR) can still reflect asymmetries from the initial
supernova explosion. Thus, their study provides a great means to test and
constrain model predictions in relation to the distributions of heavy elements
or the neutron star kicks, both of which are key to better understanding the
explosion mechanisms in core-collapse supernovae. The use of a novel blind
source separation method applied to the megasecond X-ray observations of the
well-known Cassiopeia A SNR has revealed maps of the distribution of the ejecta
endowed with an unprecedented level of detail and clearly separated from
continuum emission. Our method also provides a three-dimensional view of the
ejecta by disentangling the red- and blue-shifted spectral components and
associated images of Si, S, Ar, Ca and Fe, providing insights into the
morphology of the ejecta distribution in Cassiopeia A. These mappings allow us
to thoroughly investigate the asymmetries in the heavy elements distribution
and probe simulation predictions about the neutron star kicks and the relative
asymmetries between the different elements. We find in our study that most of
the ejecta X-ray flux stems from the red-shifted component, suggesting an
asymmetry in the explosion. In addition, the red-shifted ejecta can physically
be described as a broad, relatively symmetric plume, whereas the blue-shifted
ejecta is more similar to a dense knot. The neutron star also moves directly
opposite to the red-shifted parts of the ejecta, similar to what is seen with
44Ti. Regarding the morphological asymmetries, it appears that heavier elements
have more asymmetrical distributions, which confirms predictions made by
simulations. This study is a showcase of the capacities of new analysis methods
to revisit archival observations to fully exploit their scientific content.
|
Pixel-wise regression is probably the most common problem in fine-grained
computer vision tasks, such as estimating keypoint heatmaps and segmentation
masks. These regression problems are very challenging particularly because they
require, at low computation overheads, modeling long-range dependencies on
high-resolution inputs/outputs to estimate the highly nonlinear pixel-wise
semantics. While attention mechanisms in Deep Convolutional Neural Networks
(DCNNs) have become popular for boosting long-range dependencies,
element-specific attention, such as Nonlocal blocks, is highly complex and
noise-sensitive to learn, and most simplified attention hybrids try to reach
the best compromise among multiple types of tasks. In this paper, we present
the Polarized Self-Attention (PSA) block, which incorporates two critical designs
towards high-quality pixel-wise regression: (1) Polarized filtering: keeping
high internal resolution in both channel and spatial attention computation
while completely collapsing input tensors along their counterpart dimensions.
(2) Enhancement: composing non-linearity that directly fits the output
distribution of typical fine-grained regression, such as the 2D Gaussian
distribution (keypoint heatmaps) or the 2D Binomial distribution (binary
segmentation masks). PSA appears to have exhausted the representation capacity
within its channel-only and spatial-only branches, such that there are only
marginal metric differences between its sequential and parallel layouts.
Experimental results show that PSA boosts standard baselines by $2-4$ points,
and boosts state-of-the-arts by $1-2$ points on 2D pose estimation and semantic
segmentation benchmarks.
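To make the "polarized filtering" design concrete, here is a simplified sketch
of a channel-only attention branch that keeps high internal channel resolution
(C/2) while fully collapsing the spatial dimension with a softmax-weighted sum;
the exact layer choices are illustrative, not the paper's reference
implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelPolarizedAttention(nn.Module):
    """Channel-only branch: collapse space with a softmax-weighted sum while
    keeping channel resolution at C/2, then re-weight the input channels."""
    def __init__(self, channels):
        super().__init__()
        mid = channels // 2
        self.to_v = nn.Conv2d(channels, mid, kernel_size=1)
        self.to_q = nn.Conv2d(channels, 1, kernel_size=1)
        self.expand = nn.Conv2d(mid, channels, kernel_size=1)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x):                      # x: (B, C, H, W)
        b, c, h, w = x.shape
        v = self.to_v(x).flatten(2)            # (B, C/2, HW)
        q = self.to_q(x).flatten(2)            # (B, 1, HW)
        q = F.softmax(q, dim=-1)               # collapse the spatial dimension
        z = torch.bmm(v, q.transpose(1, 2))    # (B, C/2, 1)
        z = self.expand(z.unsqueeze(-1))       # (B, C, 1, 1)
        w_ch = torch.sigmoid(self.norm(z.flatten(1))).view(b, c, 1, 1)
        return x * w_ch                        # channel-wise re-weighting

attn = ChannelPolarizedAttention(64)
y = attn(torch.randn(2, 64, 32, 32))           # output has the input's shape
```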
|
The cosmological term, $\Lambda$, was introduced $104$ years ago by Einstein
in his gravitational field equations. Whether $\Lambda$ is a rigid quantity or
a dynamical variable in cosmology has been a matter of debate for many years,
especially after the introduction of the general notion of dark energy (DE).
$\Lambda$ is associated with the vacuum energy density, $\rho_{\rm vac}$, and one
may expect that it evolves slowly with the cosmological expansion. Herein we
present a devoted study testing this possibility using the promising class of
running vacuum models (RVM's). We use a large string of modern cosmological
data, $SNIa+BAO+H(z)+LSS+CMB$, in which for the first time the CMB part
involves the full Planck 2018 likelihood for these models. We test the dependence of the
results on the threshold redshift $z_*$ at which the vacuum dynamics is
activated in the recent past and find positive signals up to $\sim4.0\sigma$
for $z_*\simeq 1$. The RVM's prove very competitive against the standard
$\Lambda$CDM model and give a handle for solving the $\sigma_8$ tension and
alleviating the $H_0$ one.
|
Task-parallel programs often enjoy deadlock freedom under certain
restrictions, such as the use of structured join operations, as in Cilk and
X10, or the use of asynchronous task futures together with deadlock-avoiding
policies such as Known Joins or Transitive Joins. However, the promise, a
popular synchronization primitive for parallel tasks, does not enjoy
deadlock-freedom guarantees. Promises can exhibit deadlock-like bugs; however,
the concept of a deadlock is not currently well-defined for promises.
To address these challenges, we propose an ownership semantics in which each
promise is associated with the task that currently intends to fulfill it.
Ownership immediately enables the identification of bugs in which a task fails
to fulfill a promise for which it is responsible. Ownership further enables the
discussion of deadlock cycles among tasks and promises and allows us to
introduce a robust definition of deadlock-like bugs for promises.
Cycle detection in this context is non-trivial because it is concurrent with
changes in promise ownership. We provide a lock-free algorithm for precise
runtime deadlock detection. We show how to obtain the memory consistency
criteria required for the correctness of our algorithm under TSO and the Java
and C++ memory models. An evaluation compares the execution time and memory
usage overheads of our detection algorithm on benchmark programs relative to an
unverified baseline. Our detector exhibits a 12% (1.12$\times$) geometric mean
time overhead and a 6% (1.06$\times$) geometric mean memory overhead, which are
smaller overheads than in past approaches to deadlock cycle detection.
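To illustrate the ownership idea, the sketch below builds the waits-for
relation induced by ownership (a blocked task points to the owner of the
promise it waits on) and searches it for cycles. Unlike the paper's lock-free
concurrent detector, this toy version assumes a quiescent snapshot of the
runtime state.

```python
class Promise:
    def __init__(self, owner):
        self.owner = owner        # task responsible for fulfilling it
        self.value = None
        self.fulfilled = False

def find_deadlock_cycle(waiting_on):
    """waiting_on maps task -> promise it is blocked on.
    Follow task -> promise.owner edges and report any cycle found."""
    for start in waiting_on:
        seen, task = [], start
        while task is not None and task not in seen:
            seen.append(task)
            p = waiting_on.get(task)
            if p is None or p.fulfilled:
                break             # this chain ends without a cycle
            task = p.owner        # the task we are transitively waiting for
        else:
            if task is not None:
                return seen[seen.index(task):]   # the deadlock cycle
    return None

# Two tasks each owning a promise the other waits on -> a cycle.
p1, p2 = Promise(owner="task_a"), Promise(owner="task_b")
print(find_deadlock_cycle({"task_a": p2, "task_b": p1}))
# -> ['task_a', 'task_b']
```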
|
Program verification consists in developing a proof system for a program and
proving the soundness of that proof system with respect to a trusted
operational semantics of the program. However, many practical program
verifiers are not based on operational semantics and cannot rigorously
validate the program. Matching logic was proposed to base program verification
on operational semantics. In this paper, following Grigore Ro{\c{s}}u's work,
we consider matching logic for a parallel imperative language (PIMP).
According to our investigation, this paper is the first study of matching
logic for PIMP. In our matching logic, we redefine "interference-free" to
characterize the parallel rule and prove the soundness of the matching logic
with respect to the operational semantics of PIMP. We also formally link
PIMP's operational semantics and PIMP's verification by constructing a
matching logic verifier for PIMP, which executes rewriting logic semantics
symbolically on configuration patterns and is sound and complete with respect
to matching logic for PIMP. That is, our matching logic verifier for PIMP is
sound with respect to the operational semantics of PIMP. Finally, we validate
the matching logic verifier on an example that is a standard problem in
parallel programming.
|
Ordinary differential equations (ODEs) are widely used to model complex
dynamics that arise in biology, chemistry, engineering, finance, physics, etc.
Calibration of a complicated ODE system using noisy data is generally very
difficult. In this work, we propose a two-stage nonparametric approach to
address this problem. We first extract the de-noised data and their higher
order derivatives using a boundary kernel method, and then feed them into a
sparsely connected deep neural network with ReLU activation function. Our
method is able to recover the ODE system without being subject to the curse of
dimensionality and complicated ODE structure. When the ODE possesses a general
modular structure, with each modular component involving only a few input
variables, and the network architecture is properly chosen, our method is
proven to be consistent. Theoretical properties are corroborated by an
extensive simulation study that demonstrates the validity and effectiveness of
the proposed method. Finally, we use our method to simultaneously characterize
the growth rate of Covid-19 infection cases from 50 states of the USA.
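The two-stage pipeline can be sketched on a one-dimensional toy ODE as follows,
with local polynomial smoothing standing in for the boundary kernel method of
stage one and a small ReLU network regressing the estimated derivative on the
de-noised state in stage two; all sizes are illustrative.

```python
import numpy as np
import torch
import torch.nn as nn

# Toy data from x' = -0.5 x, observed with noise.
t = np.linspace(0, 4, 400)
x_noisy = np.exp(-0.5 * t) + 0.02 * np.random.randn(t.size)

def local_poly(t, x, h=0.2, deg=2):
    """Smoothed values and first derivatives via local polynomial fits
    (a simple stand-in for the paper's boundary kernel method)."""
    xs, dxs = [], []
    for ti in t:
        w = np.abs(t - ti) <= h              # local window around ti
        c = np.polyfit(t[w] - ti, x[w], deg) # polynomial fit centred at ti
        xs.append(c[-1])                     # fitted value at ti
        dxs.append(c[-2])                    # fitted derivative at ti
    return np.array(xs), np.array(dxs)

x_hat, dx_hat = local_poly(t, x_noisy)

# Stage two: regress the derivative estimate on the de-noised state.
net = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
X = torch.tensor(x_hat, dtype=torch.float32).unsqueeze(1)
Y = torch.tensor(dx_hat, dtype=torch.float32).unsqueeze(1)
for _ in range(500):
    opt.zero_grad()
    loss = ((net(X) - Y) ** 2).mean()
    loss.backward()
    opt.step()
# net(x) now approximates the right-hand side f(x) = -0.5 x
```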
|
We calculate the gluon self-energy using quark energy projectors in a general
quark-gluon plasma. By separating the quark field into a positive- and a
negative-energy mode, the quark loop constructed with the same mode is always
convergent, and the divergence appears only in the mixed loop with different
modes and is medium independent. After removing the divergence in vacuum, we
obtain the one-loop gluon self-energy at finite temperature, chemical potential
and quark mass without approximation. With the method of quark loop
resummation, we calculate non-perturbatively the gluon Debye mass and
thermodynamic potential. In the limit of small gluon momentum in comparison
with the temperature, chemical potential and quark mass, our calculation
recovers the known HTL/HDL results in the literature.
|
Braiding operations are challenging to realise, yet they are essential for
creating topological quantum computers. It is unclear whether braiding
operations can be executed with any existing materials. Although various
calculations based on Majorana fermions show braiding possibilities, a
braiding operation with a Majorana fermion has not yet been experimentally
proven. Herein, braiding operations are demonstrated using a molecular
topological superconductor (MTSC) that utilizes the topological properties
intrinsic in molecules. The braiding operations were implemented by
controlling, through n-MOSFETs, two MTSC modules made by pelletizing crystals
of 4,5,9,10-tetrakis(dodecyloxy)pyrene, which is proposed as the first MTSC
material. This demonstrates elements of topological quantum computers
operating without an external magnetic field at room temperature.
|
Most water in the universe may be superionic, and its thermodynamic and
transport properties are crucial for planetary science but difficult to probe
experimentally or theoretically. We use machine learning and free energy
methods to overcome the limitations of quantum mechanical simulations, and
characterize hydrogen diffusion, superionic transitions, and phase behaviors of
water at extreme conditions. We predict that a close-packed superionic phase
with mixed stacking is stable over a wide temperature and pressure range, while
a body-centered cubic phase is only thermodynamically stable in a small window
but is kinetically favored. Our phase boundaries, which are consistent with
the existing, albeit scarce, experimental observations, help resolve the fractions of
insulating ice, different superionic phases, and liquid water inside of ice
giants.
|
Real-time rendering and animation of humans is a core function in games,
movies, and telepresence applications. Existing methods have a number of
drawbacks we aim to address with our work. Triangle meshes have difficulty
modeling thin structures like hair, volumetric representations like Neural
Volumes are too low-resolution given a reasonable memory budget, and
high-resolution implicit representations like Neural Radiance Fields are too
slow for use in real-time applications. We present Mixture of Volumetric
Primitives (MVP), a representation for rendering dynamic 3D content that
combines the completeness of volumetric representations with the efficiency of
primitive-based rendering, e.g., point-based or mesh-based methods. Our
approach achieves this by leveraging spatially shared computation with a
deconvolutional architecture and by minimizing computation in empty regions of
space with volumetric primitives that can move to cover only occupied regions.
Our parameterization supports the integration of correspondence and tracking
constraints, while being robust to areas where classical tracking fails, such
as around thin or translucent structures and areas with large topological
variability. MVP is a hybrid that generalizes both volumetric and
primitive-based representations. Through a series of extensive experiments we
demonstrate that it inherits the strengths of each, while avoiding many of
their limitations. We also compare our approach to several state-of-the-art
methods and demonstrate that MVP produces superior results in terms of quality
and runtime performance.
|
We make quantitative improvements to recently obtained results on the
structure of the image of a large difference set under certain quadratic forms
and other homogeneous polynomials. Previous proofs used deep results of
Benoist-Quint on random walks in certain subgroups of
$\operatorname{SL}_r(\mathbb{Z})$ (the symmetry groups of these quadratic
forms) that were not of a quantitative nature. Our key observation is that,
rather than studying random walks, one can obtain more quantitative results
by considering polynomial orbits of these group actions
that are not contained in cosets of submodules of $\mathbb{Z}^r$ of small
index. Our main new technical tool is a uniform Furstenberg-S\'{a}rk\"{o}zy
theorem that holds for a large class of polynomials not necessarily vanishing
at zero, which may be of independent interest and is derived from a density
increment argument and Hua's bound on polynomial exponential sums.
|
Person image synthesis, e.g., pose transfer, is a challenging problem due to
large variation and occlusion. Existing methods have difficulties predicting
reasonable invisible regions and fail to decouple the shape and style of
clothing, which limits their applications in person image editing. In this
paper, we propose PISE, a novel two-stage generative model for Person Image
Synthesis and Editing, which is able to generate realistic person images with
desired poses, textures, or semantic layouts. For human pose transfer, we first
synthesize a human parsing map aligned with the target pose to represent the
shape of clothing by a parsing generator, and then generate the final image by
an image generator. To decouple the shape and style of clothing, we propose
joint global and local per-region encoding and normalization to predict the
reasonable style of clothing for invisible regions. We also propose
spatial-aware normalization to retain the spatial context relationship in the
source image. The results of qualitative and quantitative experiments
demonstrate the superiority of our model on human pose transfer. Besides, the
results of texture transfer and region editing show that our model can be
applied to person image editing.
|
Given a closed manifold of positive Yamabe invariant and, for instance,
positive Morse functions upon it, the conformally prescribed scalar curvature
problem raises the question of whether such functions can be realised as the
scalar curvature of this manifold by conformally changing the metric. As we
shall quantify, depending on the shape and structure of such functions, every
lack of a solution for some candidate function leads to the existence of
energetically uniformly bounded solutions for entire classes of related
candidate functions.
|
In the animation industry, cartoon videos are usually produced at a low frame
rate, since hand drawing such frames is costly and time-consuming. Therefore,
it is desirable to develop computational models that can automatically
interpolate the in-between animation frames. However, existing video
interpolation methods fail to produce satisfying results on animation data.
Compared to natural videos, animation videos possess two unique characteristics
that make frame interpolation difficult: 1) cartoons comprise lines and smooth
color pieces. The smooth areas lack textures and make it difficult to estimate
accurate motions on animation videos. 2) cartoons express stories via
exaggeration. Some of the motions are non-linear and extremely large. In this
work, we formally define and study the animation video interpolation problem
for the first time. To address the aforementioned challenges, we propose an
effective framework, AnimeInterp, with two dedicated modules in a
coarse-to-fine manner. Specifically, 1) Segment-Guided Matching resolves the
"lack of textures" challenge by exploiting global matching among color pieces
that are piece-wise coherent. 2) Recurrent Flow Refinement resolves the
"non-linear and extremely large motion" challenge by recurrent predictions
using a transformer-like architecture. To facilitate comprehensive training and
evaluations, we build a large-scale animation triplet dataset, ATD-12K, which
comprises 12,000 triplets with rich annotations. Extensive experiments
demonstrate that our approach outperforms existing state-of-the-art
interpolation methods for animation videos. Notably, AnimeInterp shows
favorable perceptual quality and robustness for animation scenarios in the
wild. The proposed dataset and code are available at
https://github.com/lisiyao21/AnimeInterp/.
|
Tensor network states have been a very prominent tool for the study of
quantum many-body physics, thanks to their physically relevant entanglement
properties and their ability to encode symmetries. In the last few years, the
formalism has been extended and applied to theories with local symmetries,
namely lattice gauge theories. In the contraction of tensor network states as well as
correlation functions of physical observables with respect to them, one uses
the so-called transfer operator, whose local properties dictate the long-range
behaviour of the state. In this work we study transfer operators of tensor
network states (in particular, PEPS - projected entangled pair states) in the
context of lattice gauge theories, and consider the implications of the local
symmetry on their structure and properties. We focus on the Wilson loop - a
nonlocal, gauge-invariant observable which is central to pure gauge theories,
whose long range decay behaviour probes the confinement or deconfinement of
static charges. Using the symmetry, we show how to handle its contraction, and
formulate conditions relating local properties to its decay fashion.
|
We optimised the magnetic field homogeneity of two canonical designs for
mobile microfluidic NMR applications: two parallel magnets with an air gap and
a modified Halbach array. Along with the influence of the sample length,
general design guidelines will be presented. For a fair comparison the
sensitive length of the sample has been chosen to be the same as the gap size
between the magnets to ensure enough space for the transmitting and receiving
unit, as well as basic electric shimming components. Keeping the compactness of
the final device in mind, a box with an edge length 5 times the gap size has
been defined, in which the complete magnet configuration should fit. With the
chosen boundary conditions, the simple parallel cuboid configuration reaches
the best homogeneity without active shimming (0.5$\mathrm{B_{s}}$, 41 ppm),
while the Pseudo-Halbach configuration has the highest field strength
(0.9$\mathrm{B_{s}}$, 994 ppm), assuming perfect magnets. However, permanent
magnet configurations suffer from imperfections, such as magnetisation,
fabrication and positioning errors, which result in worse magnetic field
homogeneities than expected from simulations using a fixed optimised parameter
set. We present a sensitivity analysis for a magnetic cube and the results of
studies of the variations in the magnetisation and angle of magnetisation of
magnets purchased from different suppliers, composed of different materials and
coatings, and of different sizes. We performed a detailed Monte Carlo
simulation on the effect of the measured distribution of magnetic properties on
the mentioned configurations. The cuboid design shows a mean homogeneity of
430 ppm (std. dev. 350 ppm), while the Pseudo-Halbach design has a mean
homogeneity of 1086 ppm (std. dev. 8 ppm).
|
Over the last decade, Programmable Logic Controllers (PLCs) have been
increasingly targeted by attackers to obtain control over industrial processes
that support critical services. Such targeted attacks typically require
detailed knowledge of system-specific attributes, including hardware
configurations, adopted protocols, and PLC control-logic, i.e. process
comprehension. The consensus from both academics and practitioners suggests
stealthy process comprehension obtained from a PLC alone, to conduct targeted
attacks, is impractical. In contrast, we assert that current PLC programming
practices open the door to a new vulnerability class based on control-logic
constructs. To support this, we propose the concept of Process Comprehension at
a Distance (PCaaD), as a novel methodological and automatable approach for
system-agnostic exploitation of PLC library functions, leading to the targeted
exfiltration of operational data, manipulation of control-logic behavior, and
establishment of covert command and control channels through unused memory. We
validate PCaaD on widely used PLCs by identifying practical attacks.
|
Emil Post's tag system problem is the question of whether or not a tag system
$\{N=3, P(0)=00, P(1)=1101\}$ has a configuration, simulation of which will
never halt or end up in a loop. Over the past decades, there have been several
attempts to find an answer to this question, including a recent study by
Wolfram (2021), during which the first $2^{84}$ initial configurations were
checked. This paper presents a family of configurations of this type in the
form of strings $a^{n} b c^{m}$, which evolve to $a^{n+1} b c^{m+1}$ after a
finite number of steps. The proof of this behavior for all non-negative $n$
and $m$ is described further in the paper as a finite verification procedure,
which is computationally bounded by 20000 iterations of the tag system. All
corresponding code can
be found at
https://github.com/nikitakurilenko/post-tag-infinitely-growing-configurations.
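For reference, the tag system in question is easy to simulate directly: read
the first symbol, delete the first $N=3$ symbols, and append the corresponding
production. A small checker in this spirit is sketched below; the sample
configuration is arbitrary, not one of the paper's $a^{n} b c^{m}$ strings.

```python
# Post's tag system: deletion number N = 3, productions P(0)=00, P(1)=1101.
PRODUCTIONS = {"0": "00", "1": "1101"}
N = 3

def run_tag(config, max_steps=20000):
    """Classify a configuration as halting, looping, or undecided
    within the given step budget."""
    seen = set()
    for step in range(max_steps):
        if len(config) < N:
            return "halts", step
        if config in seen:
            return "loops", step
        seen.add(config)
        # append the production of the first symbol, delete the first N
        config = config[N:] + PRODUCTIONS[config[0]]
    return "undecided", max_steps

print(run_tag("10"))         # shorter than N: halts immediately
print(run_tag("100100100"))  # behaviour of a sample configuration
```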
|
3D reconstruction has lately attracted increasing attention due to its wide
application in many areas, such as autonomous driving, robotics and virtual
reality. As a dominant technique in artificial intelligence, deep learning has
been successfully adopted to solve various computer vision problems. However,
deep learning for 3D reconstruction is still in its infancy due to its unique
challenges and varying pipelines. To stimulate future research, this paper
presents a review of recent progress in deep learning methods for Multi-view
Stereo (MVS), which is considered a crucial task of image-based 3D
reconstruction. It also presents comparative results on several publicly
available datasets, with insightful observations and inspiring future research
directions.
|
Traditional approaches to activity recognition involve the use of wearable
sensors or cameras in order to recognise human activities. In this work, we
extract fine-grained physical layer information from WiFi devices for the
purpose of passive activity recognition in indoor environments. While such data
is ubiquitous, few approaches are designed to utilise large amounts of
unlabelled WiFi data. We propose the use of self-supervised contrastive
learning to improve activity recognition performance when using multiple views
of the transmitted WiFi signal captured by different synchronised receivers. We
conduct experiments where the transmitters and receivers are arranged in
different physical layouts so as to cover both Line-of-Sight (LoS) and
non-Line-of-Sight (NLoS) conditions. We compare the proposed contrastive
learning system with
non-contrastive systems and observe a 17.7% increase in macro-averaged F1 score
on the task of WiFi based activity recognition, as well as significant
improvements in one- and few-shot learning scenarios.
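A minimal sketch of the multi-view contrastive objective is given below,
assuming each receiver's view of the same transmission has already been
encoded into an embedding vector; an NT-Xent style loss is used as the
illustrative objective, which may differ in detail from the trained system.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.1):
    """z1, z2: (batch, dim) embeddings of the same WiFi signals captured
    by two synchronised receivers; counterpart views are the positives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)           # 2B embeddings
    sim = z @ z.t() / temperature            # scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))        # exclude self-pairs
    b = z1.size(0)
    # the positive for sample i is its counterpart from the other receiver
    targets = torch.cat([torch.arange(b, 2 * b), torch.arange(0, b)])
    return F.cross_entropy(sim, targets)

loss = nt_xent(torch.randn(8, 64), torch.randn(8, 64))
```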
|
3D printing has revolutionized the manufacturing of volumetric components and
structures for use in various fields. Owing to the advent of photo-curable
resins, several fully volumetric light-based techniques have been recently
developed to push further the resolution and speed limitations of 3D printing.
However, these new approaches only work with homogeneous and relatively
transparent resins so that the incident light patterns used for
photo-polymerization are not impacted along their propagation through the
material. Herein, we describe a strategy to print in scattering materials. It
consists of characterizing how light is distorted by the curable resin and then
applying a digital correction to the light patterns to counteract the effect of
scattering. Using a tomographic volumetric printer, we experimentally
demonstrate the importance of taking light scattering into account when
computing the projected patterns and show that our applied correction
significantly improves printability, even when the object size exceeds the
scattering mean free path of light.
|
Brain function relies on a precisely coordinated and dynamic balance between
the functional integration and segregation of distinct neural systems.
Characterizing the way in which neural systems reconfigure their interactions
to give rise to distinct but hidden brain states remains an open challenge. In
this paper, we propose a Bayesian model-based characterization of latent brain
states and showcase a novel method based on posterior predictive discrepancy
using the latent block model to detect transitions between latent brain states
in blood oxygen level-dependent (BOLD) time series. The set of estimated
parameters in the model includes a latent label vector that assigns network
nodes to communities, and also block model parameters that reflect the weighted
connectivity within and between communities. Besides extensive in-silico model
evaluation, we also provide empirical validation (and replication) using the
Human Connectome Project (HCP) dataset of 100 healthy adults. Our results
obtained through an analysis of task-fMRI data during working memory
performance show appropriate lags between external task demands and
change-points between brain states, with distinctive community patterns
distinguishing fixation, low-demand and high-demand task conditions.
|
For globally connected devices like smart phones, personal computers and
Internet-of-things devices, the ability to generate random numbers is essential
for execution of cryptographic protocols responsible for information security.
Generally, a random number generator should be small, robust, utilize as few
hardware and energy resources as possible, yet provide excellent randomness at
a high enough speed (bitrate) for a given purpose. In this work we present a
quantum random number generator (QRNG) which makes use of a photoelectric
effect in single-photon avalanche diodes (SPADs) as a source of randomness and
is scalable to any desired bitrate. We use the random flip-flop method in which
random bits are obtained by periodic sampling of a randomly toggling flip-flop.
For the first time we investigate this method in detail and find that, out of
two main imperfections, bias is due only to hardware imperfections while
autocorrelation predominantly resides with the method itself. SPADs are
integrated on a silicon chip together with passive quenching and digital
pulse-shaping circuitry, using a standard 180 nm CMOS process. A separate FPGA
chip derives random numbers from the detection signals. The basic QRNG cell,
made of only two SPADs and a few logic circuits, can generate up to 20 Mbit/s
that pass NIST statistical tests without any further postprocessing. This
technology allows integration of a QRNG on a single silicon chip using readily
available industrial processes.
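The random flip-flop method can be illustrated with a toy simulation: a T
flip-flop toggles on Poisson-timed detections and is sampled with a fixed
clock, so each output bit is the parity of the detection count so far. The
rates and clock period below are illustrative, not the chip's actual values.

```python
import numpy as np

rng = np.random.default_rng(42)
detection_rate = 2.0e6          # mean SPAD detection rate [counts/s]
clock_period = 1.0e-6           # sampling period [s]
n_bits = 100_000

# Poisson process: exponential inter-arrival times of detections.
arrivals = np.cumsum(rng.exponential(1 / detection_rate, size=4 * n_bits))
sample_times = np.arange(1, n_bits + 1) * clock_period

# Flip-flop state at a sample time = parity of detections so far.
counts = np.searchsorted(arrivals, sample_times)
bits = counts % 2

bias = bits.mean() - 0.5
autocorr = np.corrcoef(bits[:-1], bits[1:])[0, 1]
print(f"bias: {bias:+.4f}, lag-1 autocorrelation: {autocorr:+.4f}")
```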
|
The rise of personal assistants has made conversational question answering
(ConvQA) a very popular mechanism for user-system interaction. State-of-the-art
methods for ConvQA over knowledge graphs (KGs) can only learn from crisp
question-answer pairs found in popular benchmarks. In reality, however, such
training data is hard to come by: users would rarely mark answers explicitly as
correct or wrong. In this work, we take a step towards a more natural learning
paradigm - from noisy and implicit feedback via question reformulations. A
reformulation is likely to be triggered by an incorrect system response,
whereas a new follow-up question could be a positive signal on the previous
turn's answer. We present a reinforcement learning model, termed CONQUER, that
can learn from a conversational stream of questions and reformulations. CONQUER
models the answering process as multiple agents walking in parallel on the KG,
where the walks are determined by actions sampled using a policy network. This
policy network takes the question along with the conversational context as
inputs and is trained via noisy rewards obtained from the reformulation
likelihood. To evaluate CONQUER, we create and release ConvRef, a benchmark
with about 11k natural conversations containing around 205k reformulations.
Experiments show that CONQUER successfully learns to answer conversational
questions from noisy reward signals, significantly improving over a
state-of-the-art baseline.
|
Let $K$ be a field and $X$, $Y$ denote matrices such that, the entries of $X$
are either indeterminates over $K$ or $0$ and the entries of $Y$ are
indeterminates over $K$ which are different from those appearing in $X$. We
consider ideals of the form $I_{1}(XY)$, which is the ideal generated by the
$1\times 1$ minors of the matrix $XY$. We prove that the quotient ring $K[X,
Y]/I_{1}(XY)$ admits an ASL structure for certain $X$ and $Y$.
|
We apply the diabatic approach, specially suited for a QCD based study of
conventional (quark-antiquark) and unconventional (quark-antiquark +
meson-meson) meson states, to the description of hidden-bottom mesons. A
spectral analysis of the $I=0$, $J^{++}$ and $1^{--}$ resonances with masses up
to about $10.8$ GeV is carried out. Masses and widths of all the experimentally
known resonances, including conventional and unconventional states, can be well
reproduced. In particular, we predict a significant $B\bar{B}^{\ast}$ component
in $\Upsilon(10580)$. We also predict the existence of a not yet discovered
unconventional $1^{++}$ narrow state, with a significant
$B_{s}\bar{B}_{s}^{\ast}$ content making it decay into $\Upsilon(1S)\phi$,
whose experimental discovery would provide definite support to our theoretical
analysis.
|
In a previous paper, we computed the energy density and the non-linear energy
cascade rate for transverse kink waves using Elsasser variables. In this paper,
we focus on the standing kink waves, which are impulsively excited in coronal
loops by external perturbations. We present an analytical calculation to
compute the damping time due to the non-linear development of the
Kelvin-Helmholtz instability. The main result is that the damping time is
inversely proportional to the oscillation amplitude. We compare the damping
times from our formula with the results of numerical simulations and
observations. In both cases we find a reasonably good match. The comparison
with the simulations shows that the non-linear damping dominates in the high
amplitude regime, while the low amplitude regime shows damping by resonant
absorption. In the comparison with the observations, we find a power law
inversely proportional to the amplitude $\eta^{-1}$ as an outer envelope for
our Monte Carlo data points.
|
Multilingual Neural Machine Translation (MNMT) has aroused widespread
interest due to its efficiency. An exciting advantage of MNMT models is that
they can also translate between unsupervised (zero-shot) language directions.
Language tag (LT) strategies are often adopted to indicate the translation
directions in MNMT. In this paper, we demonstrate that the LTs are not only
indicators for translation directions but also crucial to zero-shot translation
qualities. Unfortunately, previous work tends to ignore the importance of LT
strategies. We demonstrate that a proper LT strategy could enhance the
consistency of semantic representations and alleviate the off-target issue in
zero-shot directions. Experimental results show that by ignoring the source
language tag (SLT) and adding the target language tag (TLT) to the encoder,
zero-shot translations can achieve a +8 BLEU score difference over other LT
strategies on the IWSLT17, Europarl, and TED talks translation tasks.
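The contrast between LT strategies is easy to state in code: the sketch below
formats a training source sentence under three common tagging schemes,
including the favoured one (no SLT, TLT on the encoder input); token spellings
such as "<2de>" are conventional placeholders, not prescribed by the paper.

```python
def make_example(src_text, src_lang, tgt_lang, strategy="tlt_encoder"):
    """Format one training source under a given language-tag strategy."""
    if strategy == "tlt_encoder":   # no SLT; TLT prepended on the encoder side
        return f"<2{tgt_lang}> {src_text}"
    if strategy == "slt_and_tlt":   # both tags on the encoder input
        return f"<{src_lang}> <2{tgt_lang}> {src_text}"
    if strategy == "tlt_decoder":   # tag moved to the decoder start token
        return src_text             # decoder would begin with <2{tgt_lang}>
    raise ValueError(strategy)

print(make_example("How are you?", "en", "de"))
# -> "<2de> How are you?"
```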
|