Columns: text (string, lengths 57 to 2.88k); labels (sequence of length 6).
Title: An Integrated Decision and Control Theoretic Solution to Multi-Agent Co-Operative Search Problems, Abstract: This paper considers the problem of autonomous multi-agent cooperative target search in an unknown environment using a decentralized framework under a no-communication scenario. The targets are considered static and the agents homogeneous. The no-communication scenario means that the agents exchange neither information about the environment nor their actions among themselves. We propose an integrated decision and control theoretic solution for the search problem which generates feasible agent trajectories. In particular, a perception-based algorithm is proposed which allows an agent to estimate the probable strategies of other agents and to choose a decision based on such estimation. The algorithm is robust with respect to the estimation accuracy to a certain degree. The performance of the algorithm is compared with random strategies, and numerical simulation shows considerable advantages.
[ 1, 0, 0, 0, 0, 0 ]
Title: Fast Multi-frame Stereo Scene Flow with Motion Segmentation, Abstract: We propose a new multi-frame method for efficiently computing scene flow (dense depth and optical flow) and camera ego-motion for a dynamic scene observed from a moving stereo camera rig. Our technique also segments out moving objects from the rigid scene. In our method, we first estimate the disparity map and the 6-DOF camera motion using stereo matching and visual odometry. We then identify regions inconsistent with the estimated camera motion and compute per-pixel optical flow only at these regions. This flow proposal is fused with the camera motion-based flow proposal using fusion moves to obtain the final optical flow and motion segmentation. This unified framework benefits all four tasks (stereo, optical flow, visual odometry and motion segmentation), leading to overall higher accuracy and efficiency. Our method is currently ranked third on the KITTI 2015 scene flow benchmark. Furthermore, our CPU implementation runs in 2-3 seconds per frame, which is 1-3 orders of magnitude faster than the top six methods. We also report a thorough evaluation on challenging Sintel sequences with fast camera and object motion, where our method consistently outperforms OSF [Menze and Geiger, 2015], which is currently ranked second on the KITTI benchmark.
[ 1, 0, 0, 0, 0, 0 ]
Title: Pointed $p^2q$-dimensional Hopf algebras in positive characteristic, Abstract: Let $\K$ be an algebraically closed field of positive characteristic $p$. We mainly classify pointed Hopf algebras over $\K$ of dimension $p^2q$, $pq^2$ and $pqr$, where $p,q,r$ are distinct prime numbers. We obtain a complete classification of such Hopf algebras except for two subcases where they are not generated by the first terms of the coradical filtration. In particular, we obtain many new examples of non-commutative and non-cocommutative finite-dimensional Hopf algebras.
[ 0, 0, 1, 0, 0, 0 ]
Title: Experimental Design of a Prescribed Burn Instrumentation, Abstract: Observational data collected during experiments, such as the planned Fire and Smoke Model Evaluation Experiment (FASMEE), are critical for progressing and transitioning coupled fire-atmosphere models like WRF-SFIRE and WRF-SFIRE-CHEM into operational use. Historical meteorological data, representing typical weather conditions for the anticipated burn locations and times, have been processed to initialize and run a set of simulations representing the planned experimental burns. Based on an analysis of these numerical simulations, this paper provides recommendations on the experimental setup that include the ignition procedures, size and duration of the burns, and optimal sensor placement. New techniques are developed to initialize coupled fire-atmosphere simulations with weather conditions typical of the planned burn locations and time of the year. Analysis of variation and sensitivity analysis of simulation design to model parameters by repeated Latin Hypercube Sampling are used to assess the locations of the sensors. The simulations provide the locations of the measurements that maximize the expected variation of the sensor outputs with the model parameters.
[ 0, 0, 0, 1, 0, 0 ]
Title: Seifert surgery on knots via Reidemeister torsion and Casson-Walker-Lescop invariant III, Abstract: For a knot $K$ in a homology $3$-sphere $\Sigma$, let $M$ be the result of $2/q$-surgery on $K$, and let $X$ be the universal abelian covering of $M$. Our first theorem is that if the first homology of $X$ is finite cyclic and $M$ is a Seifert fibered space with $N\ge 3$ singular fibers, then $N\ge 4$ if and only if the first homology of the universal abelian covering of $X$ is infinite. Our second theorem is that under an appropriate assumption on the Alexander polynomial of $K$, if $M$ is a Seifert fibered space, then $q=\pm 1$ (i.e.\ integral surgery).
[ 0, 0, 1, 0, 0, 0 ]
Title: Joint Power and Admission Control based on Channel Distribution Information: A Novel Two-Timescale Approach, Abstract: In this letter, we consider the joint power and admission control (JPAC) problem by assuming that only the channel distribution information (CDI) is available. Under this assumption, we formulate a new chance (probabilistic) constrained JPAC problem, where the signal to interference plus noise ratio (SINR) outage probability of the supported links is enforced to be not greater than a prespecified tolerance. To efficiently deal with the chance SINR constraint, we employ the sample approximation method to convert it into finitely many linear constraints. Then, we propose a convex approximation based deflation algorithm for solving the sample approximation JPAC problem. Compared to the existing works, this letter proposes a novel two-timescale JPAC approach, where admission control is performed by the proposed deflation algorithm based on the CDI in a large timescale and transmission power is adapted instantly with fast fading in a small timescale. The effectiveness of the proposed algorithm is illustrated by simulations.
[ 1, 0, 1, 0, 0, 0 ]
Title: A Closer Look at the Alpha Persei Coronal Conundrum, Abstract: A ROSAT survey of the Alpha Per open cluster in 1993 detected its brightest star, mid-F supergiant Alpha Persei: the X-ray luminosity and spectral hardness were similar to coronally active late-type dwarf members. Later, in 2010, a Hubble Cosmic Origins Spectrograph SNAPshot of Alpha Persei found the far-ultraviolet coronal proxy SiIV unexpectedly weak. This, and a suspicious offset of the ROSAT source, suggested that a late-type companion might be responsible for the X-rays. Recently, a multi-faceted program tested that premise. Ground-based optical coronagraphy and near-UV imaging with HST Wide Field Camera 3 searched for any close-in faint candidate coronal objects, but without success. Then, a Chandra pointing found the X-ray source single and coincident with the bright star. Significantly, the SiIV emissions of Alpha Persei, in a deeper FUV spectrum collected by HST COS as part of the joint program, aligned well with chromospheric atomic oxygen (which must be intrinsic to the luminous star), within the context of cooler late-F and early-G supergiants, including Cepheid variables. This pointed to the X-rays as the fundamental anomaly. The over-luminous X-rays still support the case for a hyperactive dwarf secondary, albeit now spatially unresolved. However, an alternative is that Alpha Persei represents a novel class of coronal source. Resolving the first possibility now has become more difficult, because the easy solution -- a well separated companion -- has been eliminated. Testing the other possibility will require a broader high-energy census of the early-F supergiants.
[ 0, 1, 0, 0, 0, 0 ]
Title: The challenge of realistic music generation: modelling raw audio at scale, Abstract: Realistic music generation is a challenging task. When building generative models of music that are learnt from data, typically high-level representations such as scores or MIDI are used that abstract away the idiosyncrasies of a particular performance. But these nuances are very important for our perception of musicality and realism, so in this work we embark on modelling music in the raw audio domain. It has been shown that autoregressive models excel at generating raw audio waveforms of speech, but when applied to music, we find them biased towards capturing local signal structure at the expense of modelling long-range correlations. This is problematic because music exhibits structure at many different timescales. In this work, we explore autoregressive discrete autoencoders (ADAs) as a means to enable autoregressive models to capture long-range correlations in waveforms. We find that they allow us to unconditionally generate piano music directly in the raw audio domain, which shows stylistic consistency across tens of seconds.
[ 0, 0, 0, 1, 0, 0 ]
Title: GENFIRE: A generalized Fourier iterative reconstruction algorithm for high-resolution 3D imaging, Abstract: Tomography has made a radical impact on diverse fields ranging from the study of 3D atomic arrangements in matter to the study of human health in medicine. Despite its very diverse applications, the core of tomography remains the same, that is, a mathematical method must be implemented to reconstruct the 3D structure of an object from a number of 2D projections. In many scientific applications, however, the number of projections that can be measured is limited due to geometric constraints, tolerable radiation dose and/or acquisition speed. Thus it becomes an important problem to obtain the best-possible reconstruction from a limited number of projections. Here, we present the mathematical implementation of a tomographic algorithm, termed GENeralized Fourier Iterative REconstruction (GENFIRE). By iterating between real and reciprocal space, GENFIRE searches for a global solution that is concurrently consistent with the measured data and general physical constraints. The algorithm requires minimal human intervention and also incorporates angular refinement to reduce the tilt angle error. We demonstrate that GENFIRE can produce superior results relative to several other popular tomographic reconstruction techniques by numerical simulations, and experimentally by reconstructing the 3D structure of a porous material and a frozen-hydrated marine cyanobacterium. Equipped with a graphical user interface, GENFIRE is freely available from our website and is expected to find broad applications across different disciplines.
[ 0, 1, 0, 0, 0, 0 ]
Title: GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium, Abstract: Generative Adversarial Networks (GANs) excel at creating realistic images with complex models for which maximum likelihood is infeasible. However, the convergence of GAN training has still not been proved. We propose a two time-scale update rule (TTUR) for training GANs with stochastic gradient descent on arbitrary GAN loss functions. TTUR has an individual learning rate for both the discriminator and the generator. Using the theory of stochastic approximation, we prove that the TTUR converges under mild assumptions to a stationary local Nash equilibrium. The convergence carries over to the popular Adam optimization, for which we prove that it follows the dynamics of a heavy ball with friction and thus prefers flat minima in the objective landscape. For the evaluation of the performance of GANs at image generation, we introduce the "Fréchet Inception Distance" (FID) which captures the similarity of generated images to real ones better than the Inception Score. In experiments, TTUR improves learning for DCGANs and Improved Wasserstein GANs (WGAN-GP) outperforming conventional GAN training on CelebA, CIFAR-10, SVHN, LSUN Bedrooms, and the One Billion Word Benchmark.
[ 1, 0, 0, 1, 0, 0 ]
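The FID introduced in the abstract above has a closed form as the Fréchet distance between two Gaussians fitted to network activations. A minimal NumPy/SciPy sketch, assuming the Inception activations have already been extracted (the feature extractor itself is omitted):

```python
import numpy as np
from scipy import linalg

def fid(act_real, act_fake):
    """Frechet distance between Gaussians fitted to two sets of
    activations, each of shape (n_samples, n_features)."""
    mu1, mu2 = act_real.mean(axis=0), act_fake.mean(axis=0)
    sigma1 = np.cov(act_real, rowvar=False)
    sigma2 = np.cov(act_fake, rowvar=False)
    diff = mu1 - mu2
    covmean = linalg.sqrtm(sigma1 @ sigma2)  # matrix square root
    if np.iscomplexobj(covmean):             # drop numerical imaginary dust
        covmean = covmean.real
    return diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean)
```

This implements $\|\mu_1-\mu_2\|^2 + \mathrm{Tr}(\Sigma_1+\Sigma_2-2(\Sigma_1\Sigma_2)^{1/2})$, the quantity the abstract calls the Fréchet Inception Distance.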
Title: Time-Series Adaptive Estimation of Vaccination Uptake Using Web Search Queries, Abstract: Estimating vaccination uptake is an integral part of ensuring public health. It was recently shown that vaccination uptake can be estimated automatically from web data, instead of slowly collected clinical records or population surveys. All prior work in this area assumes that features of vaccination uptake collected from the web are temporally regular. We present the first method to remove this assumption from vaccination uptake estimation: our method dynamically adapts to temporal fluctuations in the time series of web data used to estimate vaccination uptake. We show that our method outperforms the state of the art, including competitive baselines that use not only web data but also curated clinical data. This performance improvement is more pronounced for vaccines whose uptake has been irregular due to negative media attention (HPV-1 and HPV-2), problems in vaccine supply (DiTeKiPol), or because they target children of 12 years of age (whose vaccination is more irregular compared to younger children).
[ 1, 0, 0, 1, 0, 0 ]
Title: Over Recurrence for Mixing Transformations, Abstract: We show that every invertible strong mixing transformation on a Lebesgue space has strictly over-recurrent sets. Also, we give an explicit procedure for constructing strong mixing transformations with no under-recurrent sets. This answers both parts of a question of V. Bergelson. We define $\epsilon$-over-recurrence and show that given $\epsilon > 0$, any ergodic measure preserving invertible transformation (including discrete spectrum) has $\epsilon$-over-recurrent sets of arbitrarily small measure. Discrete spectrum transformations and rotations do not have over-recurrent sets, but we construct a weak mixing rigid transformation with strictly over-recurrent sets.
[ 0, 0, 1, 0, 0, 0 ]
Title: Joint Atlas-Mapping of Multiple Histological Series combined with Multimodal MRI of Whole Marmoset Brains, Abstract: Development of a mesoscale neural circuitry map of the common marmoset is an essential task due to the ideal characteristics of the marmoset as a model organism for neuroscience research. To facilitate this development there is a need for new computational tools to cross-register multi-modal data sets containing MRI volumes as well as multiple histological series, and to register the combined data set to a common reference atlas. We present a fully automatic pipeline for same-subject-MRI guided reconstruction of image volumes from a series of histological sections of different modalities, followed by diffeomorphic mapping to a reference atlas. We show registration results for Nissl, myelin, CTB, and fluorescent tracer images using a same-subject ex-vivo MRI as our reference and show that our method achieves accurate registration and eliminates artifactual warping that may result from the absence of a reference MRI data set. Examination of the determinant of the local metric tensor of the diffeomorphic mapping between each subject's ex-vivo MRI and resultant Nissl reconstruction allows an unprecedented local quantification of geometrical distortions resulting from the histological processing, showing a slight shrinkage, a median linear scale change of ~-1% in going from the ex-vivo MRI to the tape-transfer generated histological image data.
[ 0, 0, 0, 0, 1, 0 ]
Title: A Practical Approach for Successive Omniscience, Abstract: The system that we study in this paper contains a set of users that observe a discrete memoryless multiple source and communicate via noise-free channels with the aim of attaining omniscience, the state in which all users recover the entire multiple source. We adopt the concept of successive omniscience (SO), i.e., letting the local omniscience in some user subset be attained before the global omniscience in the entire system, and consider the problem of how to efficiently attain omniscience in a successive manner. Based on the existing results on SO, we propose a CompSetSO algorithm for determining a complementary set, a user subset in which the local omniscience can be attained first without increasing the sum-rate (the total number of communications) for the global omniscience. We also derive a sufficient condition for a user subset to be complementary, so that running the CompSetSO algorithm only requires a lower bound, instead of the exact value, of the minimum sum-rate for attaining global omniscience. The CompSetSO algorithm returns a complementary user subset in polynomial time. We show by example how to recursively apply the CompSetSO algorithm so that the global omniscience can be attained by multiple stages of SO.
[ 1, 0, 0, 0, 0, 0 ]
Title: Photonic topological pumping through the edges of a dynamical four-dimensional quantum Hall system, Abstract: When a two-dimensional electron gas is exposed to a perpendicular magnetic field and an in-plane electric field, its conductance becomes quantized in the transverse in-plane direction: this is known as the quantum Hall (QH) effect. This effect is a result of the nontrivial topology of the system's electronic band structure, where an integer topological invariant known as the first Chern number leads to the quantization of the Hall conductance. Interestingly, it was shown that the QH effect can be generalized mathematically to four spatial dimensions (4D), but this effect has never been realized for the obvious reason that experimental systems are bound to three spatial dimensions. In this work, we harness the high tunability and control offered by photonic waveguide arrays to experimentally realize a dynamically-generated 4D QH system using a 2D array of coupled optical waveguides. The inter-waveguide separation is constructed such that the propagation of light along the device samples over higher-dimensional momenta in the directions orthogonal to the two physical dimensions, thus realizing a 2D topological pump. As a result, the device's band structure is associated with 4D topological invariants known as second Chern numbers which support a quantized bulk Hall response with a 4D symmetry. In a finite-sized system, the 4D topological bulk response is carried by localized edge modes that cross the sample as a function of the modulated auxiliary momenta. We directly observe this crossing through photon pumping from edge-to-edge and corner-to-corner of our system. These are equivalent to the pumping of charge across a 4D system from one 3D hypersurface to the opposite one and from one 2D hyperedge to another, and serve as a first experimental realization of higher-dimensional topological physics.
[ 0, 1, 0, 0, 0, 0 ]
Title: Coherence for lenses and open games, Abstract: Categories of polymorphic lenses in computer science, and of open games in compositional game theory, have a curious structure that is reminiscent of compact closed categories, but differs in some crucial ways. Specifically they have a family of morphisms that behave like the counits of a compact closed category, but have no corresponding units; and they have a `partial' duality that behaves like transposition in a compact closed category when it is defined. We axiomatise this structure, which we refer to as a `teleological category'. We precisely define a diagrammatic language suitable for these categories, and prove a coherence theorem for them. This underpins the use of diagrammatic reasoning in compositional game theory, which has previously been used only informally.
[ 1, 0, 0, 0, 0, 0 ]
Title: Streaming Algorithm for Euler Characteristic Curves of Multidimensional Images, Abstract: We present an efficient algorithm to compute Euler characteristic curves of gray scale images of arbitrary dimension. In various applications the Euler characteristic curve is used as a descriptor of an image. Our algorithm is the first streaming algorithm for Euler characteristic curves. The usage of streaming removes the necessity to store the entire image in RAM. Experiments show that our implementation handles terabyte scale images on commodity hardware. Due to lock-free parallelism, it scales well with the number of processor cores. Our software---CHUNKYEuler---is available as open source on Bitbucket. Additionally, we put the concept of the Euler characteristic curve in the wider context of computational topology. In particular, we explain the connection with persistence diagrams.
[ 1, 0, 1, 0, 0, 0 ]
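For orientation, the Euler characteristic curve in the abstract above can be computed naively (without the paper's streaming or lock-free parallelism) by sweeping a threshold and counting the cells of the sublevel cubical complex. A small 2D reference sketch, with pixels modeled as unit squares:

```python
import numpy as np

def euler_characteristic(binary):
    """Euler characteristic V - E + F of the cubical complex whose 2-cells
    are the True pixels of a 2D boolean array (pixels as unit squares)."""
    verts, edges = set(), set()
    faces = 0
    for i, j in zip(*np.nonzero(binary)):
        faces += 1
        for di in (0, 1):
            for dj in (0, 1):
                verts.add((i + di, j + dj))
        # the four sides of the unit square at (i, j)
        edges.add(((i, j), (i + 1, j)))
        edges.add(((i, j), (i, j + 1)))
        edges.add(((i + 1, j), (i + 1, j + 1)))
        edges.add(((i, j + 1), (i + 1, j + 1)))
    return len(verts) - len(edges) + faces

def ecc(image, thresholds):
    """Euler characteristic curve: chi of the sublevel set at each threshold."""
    return [euler_characteristic(image <= t) for t in thresholds]

img = np.array([[0, 2], [3, 1]])
print(ecc(img, thresholds=[0, 1, 2, 3]))
```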
Title: GroupReduce: Block-Wise Low-Rank Approximation for Neural Language Model Shrinking, Abstract: Model compression is essential for serving large deep neural nets on devices with limited resources or applications that require real-time responses. As a case study, a state-of-the-art neural language model usually consists of one or more recurrent layers sandwiched between an embedding layer used for representing input tokens and a softmax layer for generating output tokens. For problems with a very large vocabulary size, the embedding and the softmax matrices can account for more than half of the model size. For instance, the bigLSTM model achieves state-of-the-art performance on the One-Billion-Word (OBW) dataset with around 800k vocabulary, and its word embedding and softmax matrices use more than 6GB of space, and are responsible for over 90% of the model parameters. In this paper, we propose GroupReduce, a novel compression method for neural language models, based on vocabulary-partition (block) based low-rank matrix approximation and the inherent frequency distribution of tokens (the power-law distribution of words). The experimental results show our method can significantly outperform traditional compression methods such as low-rank approximation and pruning. On the OBW dataset, our method achieved 6.6 times compression rate for the embedding and softmax matrices, and when combined with quantization, our method can achieve 26 times compression rate, which translates to a factor of 12.8 times compression for the entire model with very little degradation in perplexity.
[ 0, 0, 0, 1, 0, 0 ]
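As a rough illustration of the idea in the GroupReduce abstract (not the authors' implementation), one can partition the embedding rows into frequency-ordered blocks and give the frequent blocks a higher SVD rank. The block count and rank schedule below are hypothetical:

```python
import numpy as np

def block_lowrank(embedding, freq, n_blocks=4, base_rank=8):
    """Compress an embedding matrix block-wise: rows are grouped by token
    frequency and each block gets its own truncated SVD, with higher rank
    for more frequent blocks. Returns per-block factors."""
    order = np.argsort(-freq)                 # most frequent tokens first
    blocks = np.array_split(order, n_blocks)
    factors = []
    for b, rows in enumerate(blocks):
        rank = max(1, base_rank * (n_blocks - b))  # frequent blocks get more rank
        u, s, vt = np.linalg.svd(embedding[rows], full_matrices=False)
        factors.append((rows, u[:, :rank] * s[:rank], vt[:rank]))
    return factors

def reconstruct(factors, shape):
    out = np.zeros(shape)
    for rows, us, vt in factors:
        out[rows] = us @ vt
    return out

E = np.random.default_rng(0).normal(size=(10_000, 64))
f = np.random.default_rng(1).zipf(2.0, 10_000)    # power-law token frequencies
parts = block_lowrank(E, f)
print(np.linalg.norm(E - reconstruct(parts, E.shape)) / np.linalg.norm(E))
```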
Title: Morphological characterization of Ge ion implanted SiO2 matrix using multifractal technique, Abstract: 200 nm thick SiO2 layers were grown on Si substrates, and Ge ions of 150 keV energy were implanted into the SiO2 matrix at different fluences. The implanted samples were annealed at 950 C for 30 minutes in Ar ambience. The topographies of the implanted as well as annealed samples were captured by atomic force microscopy (AFM). Two-dimensional (2D) multifractal detrended fluctuation analysis (MFDFA) based on the partition function approach has been used to study the surfaces of ion implanted and annealed samples. The partition function is used to calculate the generalized Hurst exponent as a function of the segment size. Moreover, it is seen that the generalized Hurst exponents vary nonlinearly with the moment, thereby exhibiting the multifractal nature. The multifractality of the surface is pronounced after annealing for the surface implanted with fluence 7.5×10^16 ions/cm^2.
[ 0, 1, 0, 0, 0, 0 ]
Title: Magnetocapillary self-assemblies: locomotion and micromanipulation along a liquid interface, Abstract: This paper presents an overview and discussion of magnetocapillary self-assemblies. New results are presented, in particular concerning the possible development of future applications. These self-organizing structures possess the notable ability to move along an interface when powered by an oscillatory, uniform magnetic field. The system is constructed as follows. Soft magnetic particles are placed on a liquid interface, and submitted to a magnetic induction field. An attractive force due to the curvature of the interface around the particles competes with an interaction between magnetic dipoles. Ordered structures can spontaneously emerge from these conditions. Furthermore, time-dependent magnetic fields can produce a wide range of dynamic behaviours, including non-time-reversible deformation sequences that produce translational motion at low Reynolds number. In other words, due to a spontaneous breaking of time-reversal symmetry, the assembly can turn into a surface microswimmer. Trajectories have been shown to be precisely controllable. As a consequence, this system offers a way to produce microrobots able to perform different tasks. This is illustrated in this paper by the capture, transport and release of a floating cargo, and the controlled mixing of fluids at low Reynolds number.
[ 0, 1, 0, 0, 0, 0 ]
Title: On asymptotically minimax nonparametric detection of signal in Gaussian white noise, Abstract: For the problem of nonparametric detection of signal in Gaussian white noise we point out strong asymptotically minimax tests. The sets of alternatives are a ball in Besov space $B^r_{2\infty}$ with "small" balls in $L_2$ removed.
[ 0, 0, 1, 1, 0, 0 ]
Title: From Natural to Artificial Camouflage: Components and Systems, Abstract: We identify the components of bio-inspired artificial camouflage systems including actuation, sensing, and distributed computation. After summarizing recent results in understanding the physiology and system-level performance of a variety of biological systems, we describe computational algorithms that can generate similar patterns and have the potential for distributed implementation. We find that the existing body of work predominantly treats component technology in an isolated manner that precludes a material-like implementation that is scale-free and robust. We conclude with open research challenges towards the realization of integrated camouflage solutions.
[ 1, 0, 0, 0, 1, 0 ]
Title: Bayesian nonparametric inference for the M/G/1 queueing systems based on the marked departure process, Abstract: In the present work we study Bayesian nonparametric inference for the continuous-time M/G/1 queueing system. The focus of the study is the unobservable service time distribution. We assume that the only available data of the system are the marked departure process of customers, with the marks being the queue lengths just after departure instants. These marks constitute an embedded Markov chain whose distribution may be parametrized by stochastic matrices of a special delta form. We develop the theory in order to obtain integral mixtures of Markov measures with respect to suitable prior distributions. We have found a sufficient statistic with a distribution of a so-called S-structure, shedding some new light on the inner statistical structure of the M/G/1 queue. Moreover, it allows updating suitable prior distributions to the posterior. Our inference methods are validated by large sample results such as posterior consistency and posterior normality.
[ 0, 0, 1, 1, 0, 0 ]
Title: On some polynomials and series of Bloch-Polya Type, Abstract: We will show that $(1-q)(1-q^2)\dots (1-q^m)$ is a polynomial in $q$ with coefficients from $\{-1,0,1\}$ iff $m=1,\ 2,\ 3,$ or $5$ and explore some interesting consequences of this result. We find explicit formulas for the $q$-series coefficients of $(1-q^2)(1-q^3)(1-q^4)(1-q^5)\dots$ and $(1-q^3)(1-q^4)(1-q^5)(1-q^6)\dots$. In doing so, we extend certain observations made by Sudler in 1964. We also discuss the classification of the products $(1-q)(1-q^2)\dots (1-q^m)$ and some related series with respect to their absolute largest coefficients.
[ 0, 0, 1, 0, 0, 0 ]
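The first claim in the abstract above is easy to check by machine; a short sketch that expands $(1-q)(1-q^2)\cdots(1-q^m)$ and tests whether every coefficient lies in $\{-1,0,1\}$ (true exactly for $m = 1, 2, 3, 5$ in the printed range):

```python
def mult(p, q):
    """Multiply two polynomials given as coefficient lists (lowest degree first)."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def poly_coeffs(m):
    """Coefficients of (1-q)(1-q^2)...(1-q^m)."""
    p = [1]
    for k in range(1, m + 1):
        p = mult(p, [1] + [0] * (k - 1) + [-1])   # multiply by 1 - q^k
    return p

for m in range(1, 12):
    ok = all(c in (-1, 0, 1) for c in poly_coeffs(m))
    print(m, "coefficients in {-1,0,1}:", ok)
```

For instance, $m=4$ fails because the product contains the term $2q^5$.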
Title: Improvement in the UAV position estimation with low-cost GPS, INS and vision-based system: Application to a quadrotor UAV, Abstract: In this paper, we develop a position estimation system for Unmanned Aerial Vehicles formed by hardware and software. It is based on low-cost devices: GPS, commercial autopilot sensors and dense optical flow algorithm implemented in an onboard microcomputer. Comparative tests were conducted using our approach and the conventional one, where only fusion of GPS and inertial sensors are used. Experiments were conducted using a quadrotor in two flying modes: hovering and trajectory tracking in outdoor environments. Results demonstrate the effectiveness of the proposed approach in comparison with the conventional approaches presented in the vast majority of commercial drones.
[ 1, 0, 0, 0, 0, 0 ]
Title: A Categorical Approach for Recognizing Emotional Effects of Music, Abstract: Recently, digital music libraries have been developed and can be easily accessed. Recent research has shown that the current organization and retrieval of music tracks based on album information is inefficient, and that people use emotion tags for music tracks in order to search and retrieve them. In this paper, we discuss the separability of a set of emotional labels, proposed in the categorical emotion expression, using Fisher's separation theorem. We determine a set of adjectives to tag music parts: happy, sad, relaxing, exciting, epic and thriller. Temporal, frequency and energy features have been extracted from the music parts. It could be seen that the maximum separability within the extracted features occurs between relaxing and epic music parts. Finally, we have trained a classifier using Support Vector Machines to automatically recognize and generate emotional labels for a music part. Accuracy for recognizing each label has been calculated; the results show that epic music can be recognized more accurately (77.4%) compared to the other types of music.
[ 1, 0, 0, 1, 0, 0 ]
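Fisher's separation criterion used in the abstract above has a simple one-dimensional form: the squared gap between class means over the sum of class variances. A toy sketch with hypothetical feature values (the class names and distributions are illustrative, not the paper's data):

```python
import numpy as np

def fisher_ratio(a, b):
    """1-D Fisher criterion: squared mean gap over summed variances,
    for one feature measured on two emotion classes."""
    return (a.mean() - b.mean()) ** 2 / (a.var() + b.var())

rng = np.random.default_rng(0)
relaxing = rng.normal(0.2, 0.05, 100)   # e.g., RMS energy of "relaxing" parts
epic = rng.normal(0.7, 0.10, 100)       # e.g., RMS energy of "epic" parts
print(fisher_ratio(relaxing, epic))     # large value -> well separated classes
```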
Title: Deformable Generator Network: Unsupervised Disentanglement of Appearance and Geometry, Abstract: We propose a deformable generator model to disentangle the appearance and geometric information from images into two independent latent vectors. The appearance generator produces the appearance information, including color, illumination, identity or category, of an image. The geometric generator produces displacement of the coordinates of each pixel and performs geometric warping, such as stretching and rotation, on the appearance generator to obtain the final synthesized image. The proposed model can learn both representations from image data in an unsupervised manner. The learned geometric generator can be conveniently transferred to the other image datasets to facilitate downstream AI tasks.
[ 0, 0, 0, 1, 0, 0 ]
Title: Gaussian Kernel in Quantum Paradigm, Abstract: The Gaussian kernel is a very popular kernel function used in many machine-learning algorithms, especially in support vector machines (SVM). For nonlinear training instances in machine learning, it often outperforms polynomial kernels in model accuracy. The Gaussian kernel is used extensively in formulating nonlinear classical SVMs. In recent research, P. Rebentrost et al. discussed a very elegant quantum version of the least squares support vector machine using a quantum version of the polynomial kernel, which is exponentially faster than its classical counterparts. In this paper, we demonstrate a quantum version of the Gaussian kernel and analyze its complexity in the context of quantum SVM. Our analysis shows that the computational complexity of the quantum Gaussian kernel is $O(\epsilon^{-1}\log N)$ for $N$-dimensional instances with accuracy $\epsilon$ and a Taylor remainder error term $|R_m(\epsilon^{-1}\log N)|$.
[ 1, 0, 0, 0, 0, 0 ]
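The quantum circuit itself is beyond a short example, but the Taylor expansion behind the remainder term $|R_m|$ can be illustrated classically: the Gaussian kernel factors as $e^{-\gamma\|x\|^2} e^{-\gamma\|y\|^2} e^{2\gamma x\cdot y}$, and truncating the series of the last factor after $m$ terms leaves the Taylor remainder. A sketch, assuming $\gamma > 0$:

```python
import numpy as np
from math import factorial

def gaussian_kernel(x, y, gamma):
    return np.exp(-gamma * np.sum((x - y) ** 2))

def truncated_gaussian_kernel(x, y, gamma, m):
    """Gaussian kernel via its Taylor expansion in the inner product,
    truncated after m terms; the discarded tail is the remainder R_m."""
    s = sum((2 * gamma * np.dot(x, y)) ** k / factorial(k) for k in range(m))
    return np.exp(-gamma * np.dot(x, x)) * np.exp(-gamma * np.dot(y, y)) * s

x, y = np.array([0.3, 0.1]), np.array([0.2, 0.4])
for m in (2, 4, 8):
    err = abs(gaussian_kernel(x, y, 1.0) - truncated_gaussian_kernel(x, y, 1.0, m))
    print(m, err)   # the truncation error shrinks factorially with m
```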
Title: Learning to Succeed while Teaching to Fail: Privacy in Closed Machine Learning Systems, Abstract: Security, privacy, and fairness have become critical in the era of data science and machine learning. More and more we see that achieving universally secure, private, and fair systems is practically impossible. We have seen for example how generative adversarial networks can be used to learn about the expected private training data; how the exploitation of additional data can reveal private information in the original one; and how seemingly unrelated features can teach us about each other. Confronted with this challenge, in this paper we open a new line of research, where security, privacy, and fairness are learned and used in a closed environment. The goal is to ensure that a given entity (e.g., the company or the government), trusted to infer certain information with our data, is blocked from inferring protected information from it. For example, a hospital might be allowed to produce a diagnosis for the patient (the positive task), without being able to infer the gender of the subject (negative task). Similarly, a company can guarantee that internally it is not using the provided data for any undesired task, an important goal that does not contradict the virtually impossible challenge of blocking everybody from the undesired task. We design a system that learns to succeed on the positive task while simultaneously failing at the negative one, and illustrate this with challenging cases where the positive task is actually harder than the negative one being blocked. Fairness, with respect to the information in the negative task, is often automatically obtained as a result of this proposed approach. The particular framework and examples open the door to security, privacy, and fairness in very important closed scenarios, ranging from private data accumulation companies like social networks to law-enforcement and hospitals.
[ 1, 0, 0, 1, 0, 0 ]
Title: On Convergence Rate of a Continuous-Time Distributed Self-Appraisal Model with Time-Varying Relative Interaction Matrices, Abstract: This paper studies a recently proposed continuous-time distributed self-appraisal model with time-varying interactions among a network of $n$ individuals which are characterized by a sequence of time-varying relative interaction matrices. The model describes the evolution of the social-confidence levels of the individuals via a reflected appraisal mechanism in real time. We first show by example that when the relative interaction matrices are stochastic (not doubly stochastic), the social-confidence levels of the individuals may not converge to a steady state. We then show that when the relative interaction matrices are doubly stochastic, the $n$ individuals' self-confidence levels will all converge to $1/n$, which indicates a democratic state, exponentially fast under appropriate assumptions, and provide an explicit expression of the convergence rate.
[ 0, 0, 1, 0, 0, 0 ]
Title: Synchronous Observation on the Spontaneous Transformation of Liquid Metal under Free Falling Microgravity Situation, Abstract: The unusually high surface tension of room temperature liquid metal is molding it into a unique material for diverse newly emerging areas. However, unlike its practice on earth, such a metal fluid displays very different behaviors when working in space, where gravity disappears and surface properties dominate the major physics. So far, little direct evidence is available to understand such effects, which impedes further exploration of liquid metal use in space. Here, to preliminarily probe into this intriguing issue, a low-cost experimental strategy to simulate a microgravity environment on earth was proposed, adopting bridges with a high enough free-fall distance as the test platform. Then, using digital cameras mounted along the x, y, z directions on the outside wall of a transparent container with liquid metal and allied solution inside, synchronous observations of the transient flow and transformational activities of the liquid metal were performed. Meanwhile, an unmanned aerial vehicle was adopted to record the whole free-fall dynamics of the test capsule from the far end, which helps justify subsequent experimental procedures. A series of typical fundamental phenomena were thus observed: (a) A relatively large liquid metal object spontaneously transforms from its original planar pool state into a sphere and floats in the container upon initiation of the free fall; (b) The liquid metal changes its three-dimensional shape as the microgravity strength varies dynamically during the free fall and rebound of the test capsule; and (c) A quick spatial transformation of liquid metal immersed in the solution can easily be induced via external electrical fields. The mechanisms of the surface-tension-driven liquid metal actuation in space were interpreted. All these findings indicate that the microgravity effect should be fully treated in developing future generations of liquid metal space technologies.
[ 0, 1, 0, 0, 0, 0 ]
Title: Automated Synthesis of Safe Digital Controllers for Sampled-Data Stochastic Nonlinear Systems, Abstract: We present a new method for the automated synthesis of digital controllers with formal safety guarantees for systems with nonlinear dynamics, noisy output measurements, and stochastic disturbances. Our method derives digital controllers such that the corresponding closed-loop system, modeled as a sampled-data stochastic control system, satisfies a safety specification with probability above a given threshold. The proposed synthesis method alternates between two steps: generation of a candidate controller pc, and verification of the candidate. pc is found by maximizing a Monte Carlo estimate of the safety probability, and by using a non-validated ODE solver for simulating the system. Such a candidate is therefore sub-optimal but can be generated very rapidly. To rule out unstable candidate controllers, we prove and utilize Lyapunov's indirect method for instability of sampled-data nonlinear systems. In the subsequent verification step, we use a validated solver based on SMT (Satisfiability Modulo Theories) to compute a numerically and statistically valid confidence interval for the safety probability of pc. If the probability so obtained is not above the threshold, we expand the search space for candidates by increasing the controller degree. We evaluate our technique on three case studies: an artificial pancreas model, a powertrain control model, and a quadruple-tank process.
[ 1, 0, 0, 0, 0, 0 ]
Title: Magnus integrators on multicore CPUs and GPUs, Abstract: In the present paper we consider numerical methods to solve the discrete Schrödinger equation with a time-dependent Hamiltonian (motivated by problems encountered in the study of spin systems). We will consider both short-range interactions, which lead to evolution equations involving sparse matrices, and long-range interactions, which lead to dense matrices. Both of these settings show very different computational characteristics. We use Magnus integrators for time integration and employ a framework based on Leja interpolation to compute the resulting action of the matrix exponential. We consider both traditional Magnus integrators (which are extensively used for these types of problems in the literature) as well as the recently developed commutator-free Magnus integrators and implement them on modern CPU and GPU (graphics processing unit) based systems. We find that GPUs can yield a significant speed-up (up to a factor of $10$ in the dense case) for these types of problems. In the sparse case GPUs are only advantageous for large problem sizes and the achieved speed-ups are more modest. In most cases the commutator-free variant is superior but especially on the GPU this advantage is rather small. In fact, none of the advantage of commutator-free methods on GPUs (and on multi-core CPUs) is due to the elimination of commutators. This has important consequences for the design of more efficient numerical methods.
[ 1, 1, 0, 0, 0, 0 ]
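For context, the simplest traditional Magnus integrator referenced above is the second-order exponential midpoint rule. A dense-matrix NumPy/SciPy sketch, using a direct matrix exponential rather than the Leja-interpolation framework of the paper; the driven two-level Hamiltonian is a stand-in example:

```python
import numpy as np
from scipy.linalg import expm

def magnus_midpoint(H, psi0, t0, t1, n_steps):
    """Second-order Magnus (exponential midpoint) integrator for
    i d/dt psi = H(t) psi, where H(t) returns a Hermitian matrix."""
    h = (t1 - t0) / n_steps
    psi = psi0.astype(complex)
    t = t0
    for _ in range(n_steps):
        psi = expm(-1j * h * H(t + h / 2)) @ psi   # freeze H at the midpoint
        t += h
    return psi

sx = np.array([[0, 1], [1, 0]], dtype=float)
sz = np.array([[1, 0], [0, -1]], dtype=float)
H = lambda t: sz + 0.5 * np.cos(t) * sx            # time-dependent Hamiltonian
psi = magnus_midpoint(H, np.array([1.0, 0.0]), 0.0, 10.0, 1000)
print(np.linalg.norm(psi))                          # unitary evolution keeps norm 1
```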
Title: High Dimensional Estimation and Multi-Factor Models, Abstract: This paper re-investigates the estimation of multiple factor models relaxing the convention that the number of factors is small and using a new approach for identifying factors. We first obtain the collection of all possible factors and then provide a simultaneous test, security by security, of which factors are significant. Since the collection of risk factors is large and highly correlated, high-dimension methods (including the LASSO and prototype clustering) have to be used. The multi-factor model is shown to have a significantly better fit than the Fama-French 5-factor model. Robustness tests are also provided.
[ 0, 0, 0, 1, 0, 1 ]
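A hedged sketch of the LASSO selection step the abstract above describes, on synthetic data: the factor returns, planted loadings, and noise level are all hypothetical, and the paper additionally uses prototype clustering to handle highly correlated factors, which is omitted here:

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(1)
n_periods, n_factors = 500, 200
factors = rng.normal(size=(n_periods, n_factors))   # candidate factor returns
beta = np.zeros(n_factors)
beta[[3, 17, 42]] = [0.8, -0.5, 0.3]                # only three factors matter
security = factors @ beta + 0.1 * rng.normal(size=n_periods)

# LASSO with cross-validated penalty selects a sparse factor set,
# security by security in the paper's setting
model = LassoCV(cv=5).fit(factors, security)
print(np.nonzero(model.coef_)[0])   # should recover the planted factor indices
```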
Title: An Expanded Local Variance Gamma model, Abstract: The paper proposes an expanded version of the Local Variance Gamma model of Carr and Nadtochiy by adding drift to the governing underlying process. Still in this new model it is possible to derive an ordinary differential equation for the option price which plays a role of Dupire's equation for the standard local volatility model. It is shown how calibration of multiple smiles (the whole local volatility surface) can be done in such a case. Further, assuming the local variance to be a piecewise linear function of strike and piecewise constant function of time this ODE is solved in closed form in terms of Confluent hypergeometric functions. Calibration of the model to market smiles does not require solving any optimization problem and, in contrast, can be done term-by-term by solving a system of non-linear algebraic equations for each maturity, which is fast.
[ 0, 0, 0, 0, 0, 1 ]
Title: Auto-Meta: Automated Gradient Based Meta Learner Search, Abstract: Fully automating machine learning pipelines is one of the key challenges of current artificial intelligence research, since practical machine learning often requires costly and time-consuming human-powered processes such as model design, algorithm development, and hyperparameter tuning. In this paper, we verify that automated architecture search synergizes with the effect of gradient-based meta learning. We adopt the progressive neural architecture search \cite{liu:pnas_google:DBLP:journals/corr/abs-1712-00559} to find optimal architectures for meta-learners. The gradient based meta-learner whose architecture was automatically found achieved state-of-the-art results on the 5-shot 5-way Mini-ImageNet classification problem with $74.65\%$ accuracy, which is $11.54\%$ improvement over the result obtained by the first gradient-based meta-learner called MAML \cite{finn:maml:DBLP:conf/icml/FinnAL17}. To our best knowledge, this work is the first successful neural architecture search implementation in the context of meta learning.
[ 0, 0, 0, 1, 0, 0 ]
Title: Convergence of the Forward-Backward Algorithm: Beyond the Worst Case with the Help of Geometry, Abstract: We provide a comprehensive study of the convergence of forward-backward algorithm under suitable geometric conditions leading to fast rates. We present several new results and collect in a unified view a variety of results scattered in the literature, often providing simplified proofs. Novel contributions include the analysis of infinite dimensional convex minimization problems, allowing the case where minimizers might not exist. Further, we analyze the relation between different geometric conditions, and discuss novel connections with a priori conditions in linear inverse problems, including source conditions, restricted isometry properties and partial smoothness.
[ 0, 0, 1, 1, 0, 0 ]
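The forward-backward algorithm analyzed above iterates $x_{k+1} = \mathrm{prox}_{\gamma g}(x_k - \gamma \nabla f(x_k))$. A minimal sketch for one standard instance, $f(x) = \tfrac12\|Ax-b\|^2$ with $g = \lambda\|\cdot\|_1$, where the proximal map is soft thresholding:

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def forward_backward(A, b, lam, n_iter=500):
    """x_{k+1} = prox_{gamma*g}(x_k - gamma*grad f(x_k)) for the lasso."""
    gamma = 1.0 / np.linalg.norm(A, 2) ** 2   # step <= 1/L with L = ||A||_2^2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - gamma * A.T @ (A @ x - b), gamma * lam)
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 100))
x_true = np.zeros(100); x_true[:5] = 1.0
b = A @ x_true
print(np.round(forward_backward(A, b, lam=0.1)[:8], 2))  # sparse recovery
```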
Title: Calibration-Free Relaxation-Based Multi-Color Magnetic Particle Imaging, Abstract: Magnetic Particle Imaging (MPI) is a novel imaging modality with important applications such as angiography, stem cell tracking, and cancer imaging. Recently, there have been efforts to increase the functionality of MPI via multi-color imaging methods that can distinguish the responses of different nanoparticles, or nanoparticles in different environmental conditions. The proposed techniques typically rely on extensive calibrations that capture the differences in the harmonic responses of the nanoparticles. In this work, we propose a method to directly estimate the relaxation time constant of the nanoparticles from the MPI signal, which is then used to generate a multi-color relaxation map. The technique is based on the underlying mirror symmetry of the adiabatic MPI signal when the same region is scanned back and forth. We validate the proposed method via extensive simulations, and via experiments on our in-house Magnetic Particle Spectrometer (MPS) setup at 550 Hz and our in-house MPI scanner at 9.7 kHz. Our results show that nanoparticles can be successfully distinguished with the proposed technique, without any calibration or prior knowledge about the nanoparticles.
[ 0, 1, 0, 0, 0, 0 ]
Title: Neural Machine Translation, Abstract: Draft of a textbook chapter on neural machine translation: a comprehensive treatment of the topic, ranging from an introduction to neural networks and computation graphs, through a description of the currently dominant attentional sequence-to-sequence model, to recent refinements, alternative architectures, and challenges. Written as a chapter for the textbook Statistical Machine Translation. Used in the JHU Fall 2017 class on machine translation.
[ 1, 0, 0, 0, 0, 0 ]
Title: Vocabulary-informed Extreme Value Learning, Abstract: Novel unseen classes can be formulated as the extreme values of known classes. This has inspired recent work on open-set recognition \cite{Scheirer_2013_TPAMI,Scheirer_2014_TPAMIb,EVM}, which, however, has no way of naming the novel unseen classes. To solve this problem, we propose the Extreme Value Learning (EVL) formulation to learn the mapping from visual features to the semantic space. To model the margin and coverage distributions of each class, Vocabulary-informed Learning (ViL) is adopted, using a vast open vocabulary in the semantic space. Essentially, by incorporating EVL and ViL, we for the first time propose a novel semantic embedding paradigm, Vocabulary-informed Extreme Value Learning (ViEVL), which embeds visual features into the semantic space in a probabilistic way. The learned embedding can be directly used to solve supervised learning, zero-shot and open-set recognition simultaneously. Experiments on two benchmark datasets demonstrate the effectiveness of the proposed frameworks.
[ 1, 0, 1, 1, 0, 0 ]
Title: A Team-Formation Algorithm for Faultline Minimization, Abstract: In recent years, the proliferation of online resumes and the need to evaluate large populations of candidates for on-site and virtual teams have led to a growing interest in automated team-formation. Given a large pool of candidates, the general problem requires the selection of a team of experts to complete a given task. Surprisingly, while ongoing research has studied numerous variations with different constraints, it has overlooked a factor with a well-documented impact on team cohesion and performance: team faultlines. Addressing this gap is challenging, as the available measures for faultlines in existing teams cannot be efficiently applied to faultline optimization. In this work, we meet this challenge with a new measure that can be efficiently used for both faultline measurement and minimization. We then use the measure to solve the problem of automatically partitioning a large population into low-faultline teams. By introducing faultlines to the team-formation literature, our work creates exciting opportunities for algorithmic work on faultline optimization, as well as on work that combines and studies the connection of faultlines with other influential team characteristics.
[ 1, 0, 0, 0, 0, 0 ]
Title: New quantum MDS constacyclic codes, Abstract: This paper is devoted to the study of the construction of new quantum MDS codes. Based on constacyclic codes over $F_{q^2}$, we derive four new families of quantum MDS codes, one of which is an explicit generalization of the construction given in Theorem 7 in [22]. We also extend the result of Theorem 3.3 given in [17].
[ 1, 0, 0, 0, 0, 0 ]
Title: Infinitary first-order categorical logic, Abstract: We present a unified categorical treatment of completeness theorems for several classical and intuitionistic infinitary logics with a proposed axiomatization. This provides new completeness theorems and subsumes previous ones by Gödel, Kripke, Beth, Karp, Joyal, Makkai and Fourman/Grayson. As an application we prove, using large cardinals assumptions, the disjunction and existence properties for infinitary intuitionistic first-order logics.
[ 0, 0, 1, 0, 0, 0 ]
Title: Gini estimation under infinite variance, Abstract: We study the problems related to the estimation of the Gini index in presence of a fat-tailed data generating process, i.e. one in the stable distribution class with finite mean but infinite variance (i.e. with tail index $\alpha\in(1,2)$). We show that, in such a case, the Gini coefficient cannot be reliably estimated using conventional nonparametric methods, because of a downward bias that emerges under fat tails. This has important implications for the ongoing discussion about economic inequality. We start by discussing how the nonparametric estimator of the Gini index undergoes a phase transition in the symmetry structure of its asymptotic distribution, as the data distribution shifts from the domain of attraction of a light-tailed distribution to that of a fat-tailed one, especially in the case of infinite variance. We also show how the nonparametric Gini bias increases with lower values of $\alpha$. We then prove that maximum likelihood estimation outperforms nonparametric methods, requiring a much smaller sample size to reach efficiency. Finally, for fat-tailed data, we provide a simple correction mechanism to the small sample bias of the nonparametric estimator based on the distance between the mode and the mean of its asymptotic distribution.
[ 0, 0, 0, 1, 0, 0 ]
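The conventional nonparametric estimator discussed above is the mean absolute difference normalized by twice the mean. A short sketch on hypothetical Pareto data with tail index $\alpha \in (1,2)$, the infinite-variance regime the paper studies:

```python
import numpy as np

def gini(x):
    """Nonparametric Gini estimator: mean absolute difference over 2*mean,
    in its equivalent O(n log n) sorted form."""
    x = np.sort(x)
    n = len(x)
    return (2 * np.arange(1, n + 1) - n - 1) @ x / (n * x.sum())

rng = np.random.default_rng(0)
alpha = 1.2                               # tail index in (1, 2): infinite variance
sample = rng.pareto(alpha, 10_000) + 1.0  # Pareto(alpha) supported on [1, inf)
# True Gini for this Pareto law is 1/(2*alpha - 1) ~ 0.714; per the abstract,
# the nonparametric estimate tends to come out below it under fat tails.
print(gini(sample))
```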
Title: Trajectories and orbital angular momentum of necklace beams in nonlinear colloidal suspensions, Abstract: Recently, we have predicted that the modulation instability of optical vortex solitons propagating in nonlinear colloidal suspensions with exponential saturable nonlinearity leads to formation of necklace beams (NBs) [S.~Z.~Silahli, W.~Walasik and N.~M.~Litchinitser, Opt.~Lett., \textbf{40}, 5714 (2015)]. Here, we investigate the dynamics of NB formation and propagation, and show that the distance at which the NB is formed depends on the input power of the vortex beam. Moreover, we show that the NB trajectories are not necessarily tangent to the initial vortex ring, and that their velocities have components stemming both from the beam diffraction and from the beam orbital angular momentum. We also demonstrate the generation of twisted solitons and analyze the influence of losses on their propagation. Finally, we investigate the conservation of the orbital angular momentum in necklace and twisted beams. Our studies, performed in ideal lossless media and in realistic colloidal suspensions with losses, provide a detailed description of NB dynamics and may be useful in studies of light propagation in highly scattering colloids and biological samples.
[ 0, 1, 0, 0, 0, 0 ]
Title: A XGBoost risk model via feature selection and Bayesian hyper-parameter optimization, Abstract: This paper aims to explore models based on the extreme gradient boosting (XGBoost) approach for business risk classification. Feature selection (FS) algorithms and hyper-parameter optimizations are simultaneously considered during model training. The five most commonly used FS methods, including weight by Gini, weight by Chi-square, hierarchical variable clustering, weight by correlation, and weight by information, are applied to alleviate the effect of redundant features. Two hyper-parameter optimization approaches, random search (RS) and Bayesian tree-structured Parzen Estimator (TPE), are applied in XGBoost. The effect of different FS and hyper-parameter optimization methods on the model performance is investigated by the Wilcoxon Signed Rank Test. The performance of XGBoost is compared to the traditionally utilized logistic regression (LR) model in terms of classification accuracy, area under the curve (AUC), recall, and F1 score obtained from the 10-fold cross validation. Results show that hierarchical clustering is the optimal FS method for LR while weight by Chi-square achieves the best performance in XGBoost. Both TPE and RS optimization in XGBoost outperform LR significantly. TPE optimization shows a superiority over RS since it results in a significantly higher accuracy and a marginally higher AUC, recall and F1 score. Furthermore, XGBoost with TPE tuning shows a lower variability than the RS method. Finally, the ranking of feature importance based on XGBoost enhances the model interpretation. Therefore, XGBoost with Bayesian TPE hyper-parameter optimization serves as a practical and powerful approach for business risk modeling.
[ 1, 0, 0, 1, 0, 0 ]
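A sketch of the TPE tuning loop the abstract describes, using the hyperopt and xgboost packages; the search space, synthetic data, and evaluation budget are hypothetical stand-ins, with 10-fold cross-validated accuracy as in the paper:

```python
import numpy as np
import xgboost as xgb
from hyperopt import fmin, tpe, hp, Trials
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

space = {
    "max_depth": hp.choice("max_depth", [3, 4, 5, 6]),
    "learning_rate": hp.loguniform("learning_rate", np.log(0.01), np.log(0.3)),
    "subsample": hp.uniform("subsample", 0.5, 1.0),
}

def objective(params):
    model = xgb.XGBClassifier(n_estimators=100, **params)
    # minimize negative 10-fold CV accuracy
    return -cross_val_score(model, X, y, cv=10, scoring="accuracy").mean()

best = fmin(objective, space, algo=tpe.suggest, max_evals=50, trials=Trials())
print(best)
```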
Title: Rheology of High-Capillary Number Flow in Porous Media, Abstract: Immiscible fluids flowing at high capillary numbers in porous media may be characterized by an effective viscosity. We demonstrate that the effective viscosity is well described by the Lichtenecker-Rother equation. The exponent $\alpha$ in this equation takes either the value 1 or 0.6 in two- and 0.5 in three-dimensional systems depending on the pore geometry. Our arguments are based on analytical and numerical methods.
[ 0, 1, 0, 0, 0, 0 ]
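The Lichtenecker-Rother equation referred to above is a power-mean mixing law; writing it with the wetting-fluid saturation as the weight (the weighting variable is an assumption for illustration):

```python
def effective_viscosity(mu_w, mu_n, s_w, alpha):
    """Lichtenecker-Rother power mean: mu_eff^alpha is the saturation-weighted
    average of the two fluid viscosities raised to the power alpha."""
    return (s_w * mu_w ** alpha + (1 - s_w) * mu_n ** alpha) ** (1 / alpha)

# Per the abstract, alpha = 1 or 0.6 in 2D and 0.5 in 3D, depending on geometry
print(effective_viscosity(mu_w=1.0, mu_n=10.0, s_w=0.4, alpha=0.5))
```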
Title: Fermi acceleration of electrons inside foreshock transient cores, Abstract: Foreshock transients upstream of Earth's bow shock have been recently observed to accelerate electrons to many times their thermal energy. How such acceleration occurs is unknown, however. Using THEMIS case studies, we examine a subset of acceleration events (31 of 247 events) in foreshock transients with cores that exhibit gradual electron energy increases accompanied by low background magnetic field strength and large-amplitude magnetic fluctuations. Using the evolution of electron distributions and the energy increase rates at multiple spacecraft, we suggest that Fermi acceleration between a converging foreshock transient's compressional boundary and the bow shock is responsible for the observed electron acceleration. We then show that a one-dimensional test particle simulation of an ideal Fermi acceleration model in fluctuating fields prescribed by the observations can reproduce the observed evolution of electron distributions, energy increase rate, and pitch-angle isotropy, providing further support for our hypothesis. Thus, Fermi acceleration is likely the principal electron acceleration mechanism in at least this subset of foreshock transient cores.
[ 0, 1, 0, 0, 0, 0 ]
Title: ACVAE-VC: Non-parallel many-to-many voice conversion with auxiliary classifier variational autoencoder, Abstract: This paper proposes a non-parallel many-to-many voice conversion (VC) method using a variant of the conditional variational autoencoder (VAE) called an auxiliary classifier VAE (ACVAE). The proposed method has three key features. First, it adopts fully convolutional architectures to construct the encoder and decoder networks so that the networks can learn conversion rules that capture time dependencies in the acoustic feature sequences of source and target speech. Second, it uses an information-theoretic regularization for the model training to ensure that the information in the attribute class label will not be lost in the conversion process. With regular CVAEs, the encoder and decoder are free to ignore the attribute class label input. This can be problematic since in such a situation, the attribute class label will have little effect on controlling the voice characteristics of input speech at test time. Such situations can be avoided by introducing an auxiliary classifier and training the encoder and decoder so that the attribute classes of the decoder outputs are correctly predicted by the classifier. Third, it avoids producing buzzy-sounding speech at test time by simply transplanting the spectral details of the input speech into its converted version. Subjective evaluation experiments revealed that this simple method worked reasonably well in a non-parallel many-to-many speaker identity conversion task.
[ 1, 0, 0, 1, 0, 0 ]
Title: Spectral analysis of jet turbulence, Abstract: Informed by LES data and resolvent analysis of the mean flow, we examine the structure of turbulence in jets in the subsonic, transonic, and supersonic regimes. Spectral (frequency-space) proper orthogonal decomposition is used to extract energy spectra and decompose the flow into energy-ranked coherent structures. The educed structures are generally well predicted by the resolvent analysis. Over a range of low frequencies and the first few azimuthal mode numbers, these jets exhibit a low-rank response characterized by Kelvin-Helmholtz (KH) type wavepackets associated with the annular shear layer up to the end of the potential core, excited by forcing in the very-near-nozzle shear layer. These modes, too, have been observed experimentally and predicted by quasi-parallel stability theory and other approximations, and they comprise a considerable portion of the total turbulent energy. At still lower frequencies, particularly for the axisymmetric mode, and again at high frequencies for all azimuthal wavenumbers, the response is not low-rank, but consists of a family of similarly amplified modes. These modes, which are primarily active downstream of the potential core, are associated with the Orr mechanism. They occur also as sub-dominant modes in the range of frequencies dominated by the KH response. Our global analysis helps tie together previous observations based on local spatial stability theory, and explains why quasi-parallel predictions were successful at some frequencies and azimuthal wavenumbers, but failed at others.
[ 0, 1, 0, 0, 0, 0 ]
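To make the decomposition concrete, here is a bare-bones spectral POD on surrogate data: segment the time series, FFT each windowed, zero-mean segment, form the cross-spectral density matrix at each frequency, and eigendecompose it into energy-ranked modes. The array sizes, the Hann window, and the low-rank diagnostic are assumptions for illustration, not the paper's LES setup.

```python
import numpy as np

def spod(q, n_seg=8, dt=1.0):
    """q: array of shape (n_time, n_space). Returns frequencies and SPOD energies."""
    n_time, n_space = q.shape
    n_fft = n_time // n_seg
    window = np.hanning(n_fft)[:, None]
    # FFT of each windowed, zero-mean segment: shape (n_seg, n_freq, n_space)
    segs = np.stack([np.fft.rfft((q[i*n_fft:(i+1)*n_fft] - q.mean(0)) * window, axis=0)
                     for i in range(n_seg)])
    freqs = np.fft.rfftfreq(n_fft, d=dt)
    energies = []
    for k in range(len(freqs)):
        Qk = segs[:, k, :]                       # segment realizations at frequency k
        csd = Qk.conj().T @ Qk / n_seg           # cross-spectral density matrix
        lam = np.linalg.eigvalsh(csd)[::-1]      # energy-ranked SPOD eigenvalues
        energies.append(lam.real)
    return freqs, np.array(energies)

q = np.random.randn(1024, 16)                    # surrogate data
freqs, energies = spod(q)
print((energies[:, 0] / energies[:, 1]).round(2)[:5])
```

A large gap between the first and second eigenvalues at a given frequency is precisely the "low-rank response" diagnostic discussed in the abstract.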
Title: La notion d'involution dans le Brouillon Project de Girard Desargues, Abstract: The purpose of this article is to propose a coherent thesis on how Girard Desargues arrived at the notion of involution in his Brouillon Project of 1639. To this purpose we give a detailed analysis of the first ten pages of the Brouillon, including developments of particular cases which help to understand the goal of Desargues, as well as to clarify the links between the notion of involution and that of harmonic division. We compare the conclusions of this analysis with the very critical reading Jean de Beaugrand made of the Brouillon Project in the Advis Charitables of 1640.
[ 0, 0, 1, 0, 0, 0 ]
Title: Framing U-Net via Deep Convolutional Framelets: Application to Sparse-view CT, Abstract: X-ray computed tomography (CT) using sparse projection views is a recent approach to reduce the radiation dose. However, due to the insufficient projection views, an analytic reconstruction approach using the filtered back projection (FBP) produces severe streaking artifacts. Recently, deep learning approaches using large receptive field neural networks such as U-Net have demonstrated impressive performance for sparse-view CT reconstruction. However, theoretical justification is still lacking. Inspired by the recent theory of deep convolutional framelets, the main goal of this paper is, therefore, to reveal the limitation of U-Net and propose new multi-resolution deep learning schemes. In particular, we show that the alternative U-Net variants such as the dual frame and tight frame U-Nets satisfy the so-called frame condition, which makes them better suited for effective recovery of high-frequency edges in sparse-view CT. Using extensive experiments with a real patient data set, we demonstrate that the new network architectures provide better reconstruction performance.
[ 1, 0, 0, 1, 0, 0 ]
Title: Assessing inter-modal and inter-regional dependencies in prodromal Alzheimer's disease using multimodal MRI/PET and Gaussian graphical models, Abstract: A sequence of pathological changes takes place in Alzheimer's disease, which can be assessed in vivo using various brain imaging methods. Currently, there is no appropriate statistical model available that can easily integrate multiple imaging modalities, being able to utilize the additional information provided from the combined data. We applied Gaussian graphical models (GGMs) for analyzing the conditional dependency networks of multimodal neuroimaging data and assessed alterations of the network structure in mild cognitive impairment (MCI) and Alzheimer's dementia (AD) compared to cognitively healthy controls. Data from N=667 subjects were obtained from the Alzheimer's Disease Neuroimaging Initiative. Mean amyloid load (AV45-PET), glucose metabolism (FDG-PET), and gray matter volume (MRI) were calculated for each brain region. Separate GGMs were estimated using a Bayesian framework for the combined multimodal data for each diagnostic category. Graph-theoretical statistics were calculated to determine network alterations associated with disease severity. The network measures clustering coefficient, path length, and small-world coefficient were significantly altered across diagnostic groups, with a biphasic u-shape trajectory, i.e. increased small-world coefficient in early MCI, intermediate values in late MCI, and decreased values in AD patients compared to controls. In contrast, no group differences were found for clustering coefficient and small-world coefficient when estimating conditional dependency networks on single imaging modalities. GGMs provide a useful methodology to analyze the conditional dependency networks of multimodal neuroimaging data.
[ 0, 0, 0, 1, 1, 0 ]
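A hedged miniature of the pipeline above: estimate a sparse conditional dependency network from regional measures, then compute the graph statistics reported in the abstract. The paper estimates GGMs in a Bayesian framework; the graphical lasso below is a convenient frequentist stand-in, and the region count and data are synthetic.

```python
import numpy as np
import networkx as nx
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 12))          # 200 subjects x 12 regional measures
X[:, 1] += 0.8 * X[:, 0]                # inject one conditional dependency

model = GraphicalLassoCV().fit(X)
prec = model.precision_                 # nonzero entries = conditional dependencies
adj = (np.abs(prec) > 1e-4) & ~np.eye(len(prec), dtype=bool)
G = nx.from_numpy_array(adj.astype(int))

print("clustering coefficient:", nx.average_clustering(G))
if nx.is_connected(G):
    print("characteristic path length:", nx.average_shortest_path_length(G))
```

Comparing such statistics across diagnostic groups (controls, early/late MCI, AD) is the group-level analysis the study performs.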
Title: On the economics of knowledge creation and sharing, Abstract: This work bridges the technical concepts underlying distributed computing and blockchain technologies with their profound socioeconomic and sociopolitical implications, particularly on academic research and the healthcare industry. Several examples from academia, industry, and healthcare are explored throughout this paper. The limiting factor in contemporary life sciences research is often funding: for example, to purchase expensive laboratory equipment and materials, to hire skilled researchers and technicians, and to acquire and disseminate data through established academic channels. In the case of the U.S. healthcare system, hospitals generate massive amounts of data, only a small minority of which is utilized to inform current and future medical practice. Similarly, corporations, too, expend large amounts of money to collect, secure and transmit data from one centralized source to another. In all three scenarios, data moves under the traditional paradigm of centralization, in which data is hosted and curated by individuals and organizations and is of benefit to only a small subset of people.
[ 1, 0, 0, 0, 0, 0 ]
Title: Seismic fragility curves for structures using non-parametric representations, Abstract: Fragility curves are commonly used in civil engineering to assess the vulnerability of structures to earthquakes. The probability of failure associated with a prescribed criterion (e.g. the maximal inter-storey drift of a building exceeding a certain threshold) is represented as a function of the intensity of the earthquake ground motion (e.g. peak ground acceleration or spectral acceleration). The classical approach relies on assuming a lognormal shape of the fragility curves; it is thus parametric. In this paper, we introduce two non-parametric approaches to establish the fragility curves without employing the above assumption, namely binned Monte Carlo simulation and kernel density estimation. As an illustration, we compute the fragility curves for a three-storey steel frame using a large number of synthetic ground motions. The curves obtained with the non-parametric approaches are compared with respective curves based on the lognormal assumption. A similar comparison is presented for a case when a limited number of recorded ground motions is available. It is found that the accuracy of the lognormal curves depends on the ground motion intensity measure, the failure criterion and most importantly, on the employed method for estimating the parameters of the lognormal shape.
[ 0, 0, 0, 1, 0, 0 ]
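The contrast between the parametric and non-parametric estimates in the abstract above can be sketched in a few lines: fit the lognormal fragility curve by maximum likelihood and compare it with binned Monte Carlo (the kernel density route the paper also develops is omitted here). The data are synthetic stand-ins for the structural simulations.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

rng = np.random.default_rng(2)
im = rng.uniform(0.05, 2.0, 500)                 # intensity measure, e.g. PGA
# synthetic failures from a "true" lognormal fragility (median 0.8, beta 0.4)
fail = rng.random(500) < stats.norm.cdf((np.log(im) - np.log(0.8)) / 0.4)

# (a) classical parametric approach: lognormal fit by maximum likelihood
def neg_loglik(theta):
    p = stats.norm.cdf((np.log(im) - theta[0]) / np.exp(theta[1]))
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -np.sum(fail * np.log(p) + (1 - fail) * np.log(1 - p))

theta = minimize(neg_loglik, x0=[0.0, np.log(0.5)]).x

# (b) binned Monte Carlo: empirical failure fraction per intensity bin
bins = np.linspace(0.05, 2.0, 11)
idx = np.digitize(im, bins) - 1
binned = [fail[idx == b].mean() for b in range(10)]

print("lognormal median, beta:", np.exp(theta[0]), np.exp(theta[1]))
print("binned MC fragility:", np.round(binned, 2))
```

When the true fragility is not lognormal, the two estimates diverge, which is the paper's motivation for the non-parametric alternatives.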
Title: Metastable Markov chains: from the convergence of the trace to the convergence of the finite-dimensional distributions, Abstract: We consider continuous-time Markov chains which display a family of wells at the same depth. We provide sufficient conditions which entail the convergence of the finite-dimensional distributions of the order parameter to the ones of a finite state Markov chain. We also show that the state of the process can be represented as a time-dependent convex combination of metastable states, each of which is supported on one well.
[ 0, 0, 1, 0, 0, 0 ]
Title: A System of Three Super Earths Transiting the Late K-Dwarf GJ 9827 at Thirty Parsecs, Abstract: We report the discovery of three small transiting planets orbiting GJ 9827, a bright (K = 7.2) nearby late K-type dwarf star. GJ 9827 hosts a $1.62\pm0.11$ $R_{\rm \oplus}$ super Earth on a 1.2 day period, a $1.269^{+0.087}_{-0.089}$ $R_{\rm \oplus}$ super Earth on a 3.6 day period, and a $2.07\pm0.14$ $R_{\rm \oplus}$ super Earth on a 6.2 day period. The radii of the planets transiting GJ 9827 span the transition between predominantly rocky and gaseous planets, and GJ 9827 b and c fall in or close to the known gap in the radius distribution of small planets between these populations. At a distance of 30 parsecs, GJ 9827 is the closest exoplanet host discovered by K2 to date, making these planets well-suited for atmospheric studies with the upcoming James Webb Space Telescope. The GJ 9827 system provides a valuable opportunity to characterize interior structure and atmospheric properties of coeval planets spanning the rocky to gaseous transition.
[ 0, 1, 0, 0, 0, 0 ]
Title: State Sum Invariants of Three Manifolds from Spherical Multi-fusion Categories, Abstract: We define a family of quantum invariants of closed oriented $3$-manifolds using spherical multi-fusion categories. The state sum nature of this invariant leads directly to $(2+1)$-dimensional topological quantum field theories ($\text{TQFT}$s), which generalize the Turaev-Viro-Barrett-Westbury ($\text{TVBW}$) $\text{TQFT}$s from spherical fusion categories. The invariant is given as a state sum over labeled triangulations, which is mostly parallel to, but richer than the $\text{TVBW}$ approach in that here the labels live not only on $1$-simplices but also on $0$-simplices. It is shown that a multi-fusion category in general cannot be a spherical fusion category in the usual sense. Thus we introduce the concept of a spherical multi-fusion category by imposing a weakened version of sphericity. Besides containing the $\text{TVBW}$ theory, our construction also includes the recent higher gauge theory $(2+1)$-$\text{TQFT}$s given by Kapustin and Thorngren, which was not known to have a categorical origin before.
[ 0, 1, 1, 0, 0, 0 ]
Title: Deep Multimodal Image-Repurposing Detection, Abstract: Nefarious actors on social media and other platforms often spread rumors and falsehoods through images whose metadata (e.g., captions) have been modified to provide visual substantiation of the rumor/falsehood. This type of modification is referred to as image repurposing, in which often an unmanipulated image is published along with incorrect or manipulated metadata to serve the actor's ulterior motives. We present the Multimodal Entity Image Repurposing (MEIR) dataset, a substantially challenging dataset over that which has been previously available to support research into image repurposing detection. The new dataset includes location, person, and organization manipulations on real-world data sourced from Flickr. We also present a novel, end-to-end, deep multimodal learning model for assessing the integrity of an image by combining information extracted from the image with related information from a knowledge base. The proposed method is compared against state-of-the-art techniques on existing datasets as well as MEIR, where it outperforms existing methods across the board, with AUC improvement up to 0.23.
[ 1, 0, 0, 0, 0, 0 ]
Title: DeepFense: Online Accelerated Defense Against Adversarial Deep Learning, Abstract: Recent advances in adversarial Deep Learning (DL) have opened up a largely unexplored surface for malicious attacks jeopardizing the integrity of autonomous DL systems. With the wide-spread usage of DL in critical and time-sensitive applications, including unmanned vehicles, drones, and video surveillance systems, online detection of malicious inputs is of utmost importance. We propose DeepFense, the first end-to-end automated framework that simultaneously enables efficient and safe execution of DL models. DeepFense formalizes the goal of thwarting adversarial attacks as an optimization problem that minimizes the rarely observed regions in the latent feature space spanned by a DL network. To solve the aforementioned minimization problem, a set of complementary but disjoint modular redundancies are trained to validate the legitimacy of the input samples in parallel with the victim DL model. DeepFense leverages hardware/software/algorithm co-design and customized acceleration to achieve just-in-time performance in resource-constrained settings. The proposed countermeasure is unsupervised, meaning that no adversarial sample is leveraged to train modular redundancies. We further provide an accompanying API to reduce the non-recurring engineering cost and ensure automated adaptation to various platforms. Extensive evaluations on FPGAs and GPUs demonstrate up to two orders of magnitude performance improvement while enabling online adversarial sample detection.
[ 1, 0, 0, 1, 0, 0 ]
Title: A mean value formula and a Liouville theorem for the complex Monge-Ampère equation, Abstract: In this paper, we prove a mean value formula for bounded subharmonic Hermitian matrix-valued functions on a complete Riemannian manifold with nonnegative Ricci curvature. As its application, we obtain a Liouville type theorem for the complex Monge-Ampère equation on product manifolds.
[ 0, 0, 1, 0, 0, 0 ]
Title: Yu-Shiba-Rusinov bands in superconductors in contact with a magnetic insulator, Abstract: Superconductor-Ferromagnet (SF) heterostructures are of interest due to numerous phenomena related to the spin-dependent interaction of Cooper pairs with the magnetization. Here we address the effects of a magnetic insulator on the density of states of a superconductor based on a recently developed boundary condition for strongly spin-dependent interfaces. We show that the boundary to a magnetic insulator has an effect similar to the presence of magnetic impurities. In particular we find that the impurity effects of strongly scattering localized spins leading to the formation of Shiba bands can be mapped onto the boundary problem.
[ 0, 1, 0, 0, 0, 0 ]
Title: Bayesian Optimization for Probabilistic Programs, Abstract: We present the first general purpose framework for marginal maximum a posteriori estimation of probabilistic program variables. By using a series of code transformations, the evidence of any probabilistic program, and therefore of any graphical model, can be optimized with respect to an arbitrary subset of its sampled variables. To carry out this optimization, we develop the first Bayesian optimization package to directly exploit the source code of its target, leading to innovations in problem-independent hyperpriors, unbounded optimization, and implicit constraint satisfaction; delivering significant performance improvements over prominent existing packages. We present applications of our method to a number of tasks including engineering design and parameter optimization.
[ 1, 0, 0, 1, 0, 0 ]
Title: Long-Term Load Forecasting Considering Volatility Using Multiplicative Error Model, Abstract: Long-term load forecasting plays a vital role for utilities and planners in terms of grid development and expansion planning. An overestimate of long-term electricity load will result in substantial wasted investment in the construction of excess power facilities, while an underestimate of future load will result in insufficient generation and unmet demand. This paper presents a first-of-its-kind approach to using a multiplicative error model (MEM) for forecasting load over a long-term horizon. MEM originates from the structure of the autoregressive conditional heteroscedasticity (ARCH) model, where the conditional variance is dynamically parameterized and multiplicatively interacts with an innovation term of the time series. Historical load data, accessed from a U.S. regional transmission operator, and recession data for the years 1993-2016 are used in this study. The superiority of considering volatility is proven by out-of-sample forecast results as well as directional accuracy during the great economic recession of 2008. To incorporate future volatility, backtesting of the MEM model is performed. Two performance indicators used to assess the proposed model are the mean absolute percentage error (for both in-sample model fit and out-of-sample forecasts) and directional accuracy.
[ 0, 0, 0, 1, 0, 0 ]
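The two performance indicators named above are straightforward to compute; the sketch below applies them to a toy load series whose forecast error enters multiplicatively, x_t = mu_t * eps_t, mirroring the MEM structure. The series and error magnitudes are invented, not the operator's data.

```python
import numpy as np

rng = np.random.default_rng(3)
actual = 100 + np.cumsum(rng.normal(0.5, 2.0, 24))                # 24 periods of load
forecast = actual * rng.lognormal(mean=0.0, sigma=0.02, size=24)  # multiplicative error

# mean absolute percentage error
mape = np.mean(np.abs((actual - forecast) / actual)) * 100
# directional accuracy: fraction of periods where the forecast moves the same way
same_dir = np.sign(np.diff(actual)) == np.sign(np.diff(forecast))
print(f"MAPE: {mape:.2f}%  directional accuracy: {same_dir.mean():.2f}")
```

Directional accuracy is the metric that distinguishes models around turning points such as the 2008 recession, where the sign of the change matters more than its size.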
Title: Graph Clustering using Effective Resistance, Abstract: We design a polynomial-time algorithm that for any weighted undirected graph $G = (V, E, \boldsymbol{w})$ and sufficiently large $\delta > 1$, partitions $V$ into subsets $V_1, \ldots, V_h$ for some $h\geq 1$, such that $\bullet$ at most $\delta^{-1}$ fraction of the weights are between clusters, i.e. \[ w(E - \cup_{i = 1}^h E(V_i)) \lesssim \frac{w(E)}{\delta};\] $\bullet$ the effective resistance diameter of each of the induced subgraphs $G[V_i]$ is at most $\delta^3$ times the average weighted degree, i.e. \[ \max_{u, v \in V_i} \mathsf{Reff}_{G[V_i]}(u, v) \lesssim \delta^3 \cdot \frac{|V|}{w(E)} \quad \text{ for all } i=1, \ldots, h.\] In particular, it is possible to remove one percent of the weight of edges of any given graph such that each of the resulting connected components has effective resistance diameter at most the inverse of the average weighted degree. Our proof is based on a new connection between effective resistance and low conductance sets. We show that if the effective resistance between two vertices $u$ and $v$ is large, then there must be a low conductance cut separating $u$ from $v$. This implies that very mildly expanding graphs have constant effective resistance diameter. We believe that this connection could be of independent interest in algorithm design.
[ 1, 0, 0, 0, 0, 0 ]
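The central quantity has a compact formula via the pseudoinverse of the graph Laplacian: $\mathsf{Reff}(u, v) = L^+_{uu} + L^+_{vv} - 2L^+_{uv}$. A small sketch of that computation follows; the partitioning algorithm itself is not reproduced here.

```python
import numpy as np

def effective_resistance(W):
    """W: symmetric weighted adjacency matrix. Returns the matrix of Reff(u, v)."""
    L = np.diag(W.sum(axis=1)) - W          # weighted graph Laplacian
    Lp = np.linalg.pinv(L)                  # Moore-Penrose pseudoinverse
    d = np.diag(Lp)
    return d[:, None] + d[None, :] - 2 * Lp

# path graph on 4 vertices with unit weights: Reff(0, 3) should be 3
W = np.zeros((4, 4))
for i in range(3):
    W[i, i + 1] = W[i + 1, i] = 1.0
print(effective_resistance(W)[0, 3])        # ~3.0, unit resistors in series
```

The series-resistor check makes the electrical interpretation concrete: large effective resistance between two vertices signals, per the paper's main lemma, a low conductance cut separating them.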
Title: A Systematic Approach for Exploring Tradeoffs in Predictive HVAC Control Systems for Buildings, Abstract: Heating, Ventilation, and Cooling (HVAC) systems are often the most significant contributor to the energy usage, and the operational cost, of large office buildings. Therefore, to understand the various factors affecting the energy usage, and to optimize the operational efficiency of building HVAC systems, energy analysts and architects often create simulations (e.g., EnergyPlus or DOE-2) of buildings prior to construction or renovation to determine energy savings and quantify the Return-on-Investment (ROI). While useful, these simulations usually use static HVAC control strategies such as lowering room temperature at night, or reactive control based on simulated room occupancy. Recently, advances have been made in HVAC control algorithms that predict room occupancy. However, these algorithms depend on costly sensor installations and the tradeoffs between predictive accuracy, energy savings, comfort and expenses are not well understood. Current simulation frameworks do not support easy analysis of these tradeoffs. Our contribution is a simulation framework that can be used to explore this design space by generating objective estimates of the energy savings and occupant comfort for different levels of HVAC prediction and control performance. We validate our framework on a real-world occupancy dataset spanning 6 months for 235 rooms in a large university office building. Using the gold standard of energy use modeling and simulation (Revit and EnergyPlus), we compare the energy consumption and occupant comfort in 29 independent simulations that explore our parameter space. Our results highlight a number of potentially useful tradeoffs with respect to energy savings, comfort, and algorithmic performance among predictive, reactive, and static schedules, for a stakeholder of our building.
[ 1, 0, 0, 0, 0, 0 ]
Title: On types of degenerate critical points of real polynomial functions, Abstract: In this paper, we consider the problem of identifying the type (local minimizer, maximizer or saddle point) of a given isolated real critical point $c$, which is degenerate, of a multivariate polynomial function $f$. To this end, we introduce the definition of faithful radius of $c$ by means of the curve of tangency of $f$. We show that the type of $c$ can be determined by the global extrema of $f$ over the Euclidean ball centered at $c$ with a faithful radius. We propose algorithms to compute a faithful radius of $c$ and determine its type.
[ 0, 0, 1, 0, 0, 0 ]
Title: Contribution of Data Categories to Readmission Prediction Accuracy, Abstract: Identification of patients at high risk for readmission could help reduce morbidity and mortality as well as healthcare costs. Most of the existing studies on readmission prediction did not compare the contribution of data categories. In this study we analyzed the relative contribution of 90,101 variables across 398,884 admission records corresponding to 163,468 patients, including patient demographics, historical hospitalization information, discharge disposition, diagnoses, procedures, medications and laboratory test results. We established an interpretable readmission prediction model based on Logistic Regression in scikit-learn, and added the available variables to the model one by one in order to analyze the influences of individual data categories on readmission prediction accuracy. Diagnosis related groups (c-statistic increment of 0.0933) and discharge disposition (c-statistic increment of 0.0269) were the strongest contributors to model accuracy. Additionally, we identified the top ten contributing variables in every data category.
[ 0, 0, 0, 0, 1, 0 ]
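A schematic version of the incremental analysis described above: add one feature category at a time to a scikit-learn LogisticRegression and record the c-statistic (ROC AUC) increment. The category names, sizes, and data are synthetic placeholders, not the study's 90,101 variables.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n = 2000
categories = {                                   # hypothetical data categories
    "demographics": rng.normal(size=(n, 3)),
    "diagnosis_groups": rng.normal(size=(n, 5)),
    "discharge_disposition": rng.normal(size=(n, 2)),
}
signal = categories["diagnosis_groups"][:, 0] + 0.5 * categories["discharge_disposition"][:, 0]
y = (signal + rng.normal(scale=1.0, size=n)) > 0  # synthetic readmission label

X_parts, auc_prev = [], None
for name, block in categories.items():
    X_parts.append(block)                        # add the category to the model
    X = np.hstack(X_parts)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    gain = 0.0 if auc_prev is None else auc - auc_prev
    print(f"+ {name:22s} c-statistic {auc:.3f} (increment {gain:+.3f})")
    auc_prev = auc
```

Fixing the train/test split across iterations keeps the increments attributable to the added category rather than to resampling noise.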
Title: Learning to Adapt by Minimizing Discrepancy, Abstract: We explore whether useful temporal neural generative models can be learned from sequential data without back-propagation through time. We investigate the viability of a more neurocognitively-grounded approach in the context of unsupervised generative modeling of sequences. Specifically, we build on the concept of predictive coding, which has gained influence in cognitive science, in a neural framework. To do so we develop a novel architecture, the Temporal Neural Coding Network, and its learning algorithm, Discrepancy Reduction. The underlying directed generative model is fully recurrent, meaning that it employs structural feedback connections and temporal feedback connections, yielding information propagation cycles that create local learning signals. This facilitates a unified bottom-up and top-down approach for information transfer inside the architecture. Our proposed algorithm shows promise on the bouncing balls generative modeling problem. Further experiments could be conducted to explore the strengths and weaknesses of our approach.
[ 1, 0, 0, 1, 0, 0 ]
Title: Haro 11: Where is the Lyman continuum source?, Abstract: Identifying the mechanism by which high energy Lyman continuum (LyC) photons escaped from early galaxies is one of the most pressing questions in cosmic evolution. Haro 11 is the best known local LyC leaking galaxy, providing an important opportunity to test our understanding of LyC escape. The observed LyC emission in this galaxy presumably originates from one of the three bright, photoionizing knots known as A, B, and C. It is known that Knot C has strong Ly$\alpha$ emission, and Knot B hosts an unusually bright ultraluminous X-ray source, which may be a low-luminosity AGN. To clarify the LyC source, we carry out ionization-parameter mapping (IPM) by obtaining narrow-band imaging from the Hubble Space Telescope WFC3 and ACS cameras to construct spatially resolved ratio maps of [OIII]/[OII] emission from the galaxy. IPM traces the ionization structure of the interstellar medium and allows us to identify optically thin regions. To optimize the continuum subtraction, we introduce a new method for determining the best continuum scale factor, derived from the mode of the continuum-subtracted image flux distribution. We find no conclusive evidence of LyC escape from Knots B or C, but instead, we identify a high-ionization region extending over at least 1 kpc from Knot A. Knot A shows evidence of an extremely young age ($\lesssim 1$ Myr), perhaps containing very massive stars ($>100$ M$_\odot$). It is weak in Ly$\alpha$, so if it is confirmed as the LyC source, our results imply that LyC emission may be independent of Ly$\alpha$ emission.
[ 0, 1, 0, 0, 0, 0 ]
Title: Typed Closure Conversion for the Calculus of Constructions, Abstract: Dependently typed languages such as Coq are used to specify and verify the full functional correctness of source programs. Type-preserving compilation can be used to preserve these specifications and proofs of correctness through compilation into the generated target-language programs. Unfortunately, type-preserving compilation of dependent types is hard. In essence, the problem is that dependent type systems are designed around high-level compositional abstractions to decide type checking, but compilation interferes with the type-system rules for reasoning about run-time terms. We develop a type-preserving closure-conversion translation from the Calculus of Constructions (CC) with strong dependent pairs ($\Sigma$ types)---a subset of the core language of Coq---to a type-safe, dependently typed compiler intermediate language named CC-CC. The central challenge in this work is how to translate the source type-system rules for reasoning about functions into target type-system rules for reasoning about closures. To justify these rules, we prove soundness of CC-CC by giving a model in CC. In addition to type preservation, we prove correctness of separate compilation.
[ 1, 0, 0, 0, 0, 0 ]
Title: Untangling Planar Curves, Abstract: Any generic closed curve in the plane can be transformed into a simple closed curve by a finite sequence of local transformations called homotopy moves. We prove that simplifying a planar closed curve with $n$ self-crossings requires $\Theta(n^{3/2})$ homotopy moves in the worst case. Our algorithm improves the best previous upper bound $O(n^2)$, which is already implicit in the classical work of Steinitz; the matching lower bound follows from the construction of closed curves with large defect, a topological invariant of generic closed curves introduced by Aicardi and Arnold. Our lower bound also implies that $\Omega(n^{3/2})$ facial electrical transformations are required to reduce any plane graph with treewidth $\Omega(\sqrt{n})$ to a single vertex, matching known upper bounds for rectangular and cylindrical grid graphs. More generally, we prove that transforming one immersion of $k$ circles with at most $n$ self-crossings into another requires $\Theta(n^{3/2} + nk + k^2)$ homotopy moves in the worst case. Finally, we prove that transforming one noncontractible closed curve to another on any orientable surface requires $\Omega(n^2)$ homotopy moves in the worst case; this lower bound is tight if the curve is homotopic to a simple closed curve.
[ 1, 0, 1, 0, 0, 0 ]
Title: Greedy-Merge Degrading has Optimal Power-Law, Abstract: Consider a channel with a given input distribution. Our aim is to degrade it to a channel with at most L output letters. One such degradation method is the so-called "greedy-merge" algorithm. We derive an upper bound on the reduction in mutual information between input and output. For fixed input alphabet size and variable L, the upper bound is within a constant factor of an algorithm-independent lower bound. Thus, we establish that greedy-merge is optimal in the power-law sense.
[ 1, 0, 1, 0, 0, 0 ]
Title: Deep Learning for Classification Tasks on Geospatial Vector Polygons, Abstract: In this paper, we evaluate the accuracy of deep learning approaches on geospatial vector geometry classification tasks. The purpose of this evaluation is to investigate the ability of deep learning models to learn from geometry coordinates directly. Previous machine learning research applied to geospatial polygon data did not use geometries directly, but instead used properties derived from them by extracting geometry features such as Fourier descriptors. In contrast, our introduced deep neural net architectures are able to learn on sequences of coordinates mapped directly from polygons. In three classification tasks we show that the deep learning architectures are competitive with common learning algorithms that require extracted features.
[ 0, 0, 0, 1, 0, 0 ]
Title: The Distance Standard Deviation, Abstract: The distance standard deviation, which arises in distance correlation analysis of multivariate data, is studied as a measure of spread. New representations for the distance standard deviation are obtained in terms of Gini's mean difference and in terms of the moments of spacings of order statistics. Inequalities for the distance variance are derived, proving that the distance standard deviation is bounded above by the classical standard deviation and by Gini's mean difference. Further, it is shown that the distance standard deviation satisfies the axiomatic properties of a measure of spread. Explicit closed-form expressions for the distance variance are obtained for a broad class of parametric distributions. The asymptotic distribution of the sample distance variance is derived.
[ 0, 0, 1, 1, 0, 0 ]
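A quick numerical check of the ordering stated in the abstract, distance standard deviation bounded above by both the classical standard deviation and Gini's mean difference, using the sample distance variance defined as the mean squared entry of the double-centered pairwise distance matrix. The sample size and distribution are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(6)
x = rng.exponential(size=400)

D = np.abs(x[:, None] - x[None, :])                 # pairwise distance matrix
A = D - D.mean(0) - D.mean(1)[:, None] + D.mean()   # double centering
dvar = (A * A).mean()                               # sample distance variance
dsd = np.sqrt(dvar)                                 # distance standard deviation

std = x.std()                                       # classical standard deviation
n = len(x)
gini = D.sum() / (n * (n - 1))                      # Gini's mean difference

print(f"distance sd {dsd:.3f} <= std {std:.3f} and <= Gini {gini:.3f}")
```

Repeating the check across distributions (normal, uniform, heavy-tailed) gives an empirical feel for how tight each inequality is.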
Title: Leavitt path algebras: Graded direct-finiteness and graded $Σ$-injective simple modules, Abstract: In this paper, we give a complete characterization of Leavitt path algebras which are graded $\Sigma$-$V$ rings, that is, rings over which a direct sum of arbitrary copies of any graded simple module is graded injective. Specifically, we show that a Leavitt path algebra $L$ over an arbitrary graph $E$ is a graded $\Sigma$-$V$ ring if and only if it is a subdirect product of matrix rings of arbitrary size but with finitely many non-zero entries over $K$ or $K[x,x^{-1}]$ with appropriate matrix gradings. We also obtain a graphical characterization of such a graded $\Sigma$-$V$ ring $L$. When the graph $E$ is finite, we show that $L$ is a graded $\Sigma$-$V$ ring $\Longleftrightarrow$ $L$ is graded directly-finite $\Longleftrightarrow$ $L$ has bounded index of nilpotence $\Longleftrightarrow$ $L$ is graded semi-simple. Examples show that the equivalence of these properties in the preceding statement no longer holds when the graph $E$ is infinite. Following this, we also characterize Leavitt path algebras $L$ which are non-graded $\Sigma$-$V$ rings. Graded rings which are graded directly-finite are explored and it is shown that if a Leavitt path algebra $L$ is a graded $\Sigma$-$V$ ring, then $L$ is always graded directly-finite. Examples show the subtle differences between graded and non-graded directly-finite rings. Leavitt path algebras which are graded directly-finite are shown to be directed unions of graded semisimple rings. Using this, we give an alternative proof of a theorem of Vaš \cite{V} on directly-finite Leavitt path algebras.
[ 0, 0, 1, 0, 0, 0 ]
Title: Social media mining for identification and exploration of health-related information from pregnant women, Abstract: Widespread use of social media has led to the generation of substantial amounts of information about individuals, including health-related information. Social media provides the opportunity to study health-related information about selected population groups who may be of interest for a particular study. In this paper, we explore the possibility of utilizing social media to perform targeted data collection and analysis from a particular population group -- pregnant women. We hypothesize that we can use social media to identify cohorts of pregnant women and follow them over time to analyze crucial health-related information. To identify potentially pregnant women, we employ simple rule-based searches that attempt to detect pregnancy announcements with moderate precision. To further filter out false positives and noise, we employ a supervised classifier using a small number of hand-annotated data. We then collect their posts over time to create longitudinal health timelines and attempt to divide the timelines into different pregnancy trimesters. Finally, we assess the usefulness of the timelines by performing a preliminary analysis to estimate drug intake patterns of our cohort at different trimesters. Our rule-based cohort identification technique collected 53,820 users over thirty months from Twitter. Our pregnancy announcement classification technique achieved an F-measure of 0.81 for the pregnancy class, resulting in 34,895 user timelines. Analysis of the timelines revealed that pertinent health-related information, such as drug-intake and adverse reactions can be mined from the data. Our approach to using user timelines in this fashion has produced very encouraging results and can be employed for other important tasks where cohorts, for which health-related information may not be available from other sources, are required to be followed over time to derive population-based estimates.
[ 1, 0, 0, 0, 0, 0 ]
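A hedged miniature of the two-stage cohort identification described above: a high-recall rule-based pass over posts, followed by a supervised filter trained on a few annotated examples. The regular expression, the posts, and the "annotations" are invented stand-ins for the paper's rules and hand-labeled data.

```python
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# stage 1: rule-based pregnancy-announcement detector (moderate precision)
ANNOUNCE = re.compile(r"\b\d+\s*weeks?\s*(pregnant|along)\b|\bwe'?re expecting\b", re.I)

posts = ["I'm 12 weeks pregnant today!", "expecting great things from this team",
         "We're expecting our first baby", "20 weeks along and feeling good"]
candidates = [p for p in posts if ANNOUNCE.search(p)]

# stage 2: supervised filter trained on a handful of hand-annotated examples
train_texts = ["12 weeks pregnant", "we're expecting a baby girl",
               "expecting a big announcement", "expecting delays on the highway"]
train_labels = [1, 1, 0, 0]
clf = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(train_texts, train_labels)

print([(p, int(clf.predict([p])[0])) for p in candidates])
```

Users passing both stages would then have their posts collected over time to build the longitudinal timelines the paper divides into trimesters.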
Title: Neeman's characterization of K(R-Proj) via Bousfield localization, Abstract: Let $R$ be an associative ring with unit and denote by $K({\rm R \mbox{-}Proj})$ the homotopy category of complexes of projective left $R$-modules. Neeman proved that $K({\rm R \mbox{-}Proj})$ is $\aleph_1$-compactly generated, with the category $K^+ ({\rm R \mbox{-}proj})$ of left bounded complexes of finitely generated projective $R$-modules providing an essentially small class of such generators. Another proof of Neeman's theorem is explained, using recent ideas of Christensen and Holm, and Emmanouil. The strategy of the proof is to show that every complex in $K({\rm R \mbox{-}Proj})$ vanishes in the Bousfield localization $K({\rm R \mbox{-}Flat})/\langle K^+ ({\rm R \mbox{-}proj}) \rangle.$
[ 0, 0, 1, 0, 0, 0 ]
Title: Curvature in Hamiltonian Mechanics And The Einstein-Maxwell-Dilaton Action, Abstract: Riemannian geometry is a particular case of Hamiltonian mechanics: the orbits of the hamiltonian $H=\frac{1}{2}g^{ij}p_{i}p_{j}$ are the geodesics. Given a symplectic manifold $(\Gamma,\omega)$, a hamiltonian $H:\Gamma\to\mathbb{R}$ and a Lagrangian sub-manifold $M\subset\Gamma$, we find a generalization of the notion of curvature. The particular case $H=\frac{1}{2}g^{ij}\left[p_{i}-A_{i}\right]\left[p_{j}-A_{j}\right]+\phi$ of a particle moving in gravitational, electromagnetic and scalar fields is studied in more detail. The integral of the generalized Ricci tensor w.r.t. the Boltzmann weight reduces to the action principle $\int\left[R+\frac{1}{4}F_{ik}F_{jl}g^{kl}g^{ij}-g^{ij}\partial_{i}\phi\partial_{j}\phi\right]e^{-\phi}\sqrt{g}\,d^{n}q$ for the scalar, vector and tensor fields.
[ 0, 0, 1, 0, 0, 0 ]
Title: Roche-lobe overflow in eccentric planet-star systems, Abstract: Many giant exoplanets are found near their Roche limit and in mildly eccentric orbits. In this study we examine the fate of such planets through Roche-lobe overflow as a function of the physical properties of the binary components, including the eccentricity and the asynchronicity of the rotating planet. We use a direct three-body integrator to compute the trajectories of the lost mass in the ballistic limit and investigate the possible outcomes. We find three different outcomes for the mass transferred through the Lagrangian point $L_{1}$: (i) self-accretion by the planet, (ii) direct impact on the stellar surface, (iii) disk formation around the star. We explore the parameter space of the three different regimes and find that at low eccentricities, $e\lesssim 0.2$, mass overflow leads to disk formation for most systems, while for higher eccentricities or retrograde orbits self-accretion is the only possible outcome. We conclude that the assumption often made in previous work that when a planet overflows its Roche lobe it is quickly disrupted and accreted by the star is not always valid.
[ 0, 1, 0, 0, 0, 0 ]
Title: Deep Learning Scooping Motion using Bilateral Teleoperations, Abstract: We present a bilateral teleoperation system for task learning and robot motion generation. Our system includes a bilateral teleoperation platform and deep learning software. The deep learning software draws on human demonstrations performed on the bilateral teleoperation platform to collect visual images and robot encoder values. It leverages these datasets of images and robot encoder information to learn the inter-modal correspondence between visual images and robot motion. In detail, the deep learning software uses a combination of Deep Convolutional Auto-Encoders (DCAE) over image regions, and a Recurrent Neural Network with Long Short-Term Memory units (LSTM-RNN) over robot motor angles, to learn motion taught by human teleoperation. The learnt models are used to predict new motion trajectories for similar tasks. Experimental results show that our system can adapt to generate motion for similar scooping tasks. Detailed analysis is performed based on failure cases of the experimental results. Some insights about the capabilities and limitations of the system are summarized.
[ 1, 0, 0, 0, 0, 0 ]
Title: XFlow: 1D-2D Cross-modal Deep Neural Networks for Audiovisual Classification, Abstract: We propose two multimodal deep learning architectures that allow for cross-modal dataflow (XFlow) between the feature extractors, thereby extracting more interpretable features and obtaining a better representation than through unimodal learning, for the same amount of training data. These models can usefully exploit correlations between audio and visual data, which have a different dimensionality and are therefore nontrivially exchangeable. Our work improves on existing multimodal deep learning methodologies in two essential ways: (1) it presents a novel method for performing cross-modality (before features are learned from individual modalities) and (2) it extends the previously proposed cross-connections, which only transfer information between streams that process compatible data. Both cross-modal architectures outperformed their baselines (by up to 7.5%) when evaluated on the AVletters dataset.
[ 1, 0, 0, 1, 0, 0 ]
Title: Making Sense of Physics through Stories: High School Students' Narratives about Electric Charges and Interactions, Abstract: Educational research has shown that narratives are useful tools that can help young students make sense of scientific phenomena. Based on previous research, I argue that narratives can also become tools for high school students to make sense of concepts such as the electric field. In this paper I examine high school students' visual and oral narratives in which they describe the interaction among electric charges as if they were characters of a cartoon series. The study investigates: given the prompt to produce narratives for electrostatic phenomena during a classroom activity prior to receiving formal instruction, (1) what ideas of electrostatics do students attend to in their narratives?; (2) what role do students' narratives play in their understanding of electrostatics? The participants were a group of high school students engaged in an open-ended classroom activity prior to receiving formal instruction about electrostatics. During the activity, the group was asked to draw comic strips for electric charges. In addition to individual work, students shared their work within small groups as well as with the whole group. After the activity, six students from a small group were interviewed individually about their work. In this paper I present two cases in which students produced narratives to express their ideas about electrostatics in different ways. In each case, I present student work from the comic strip activity (visual narratives), their oral descriptions of their work (oral narratives) during the interview and/or to their peers during class, and their ideas about the electric interactions expressed through their narratives.
[ 0, 1, 0, 0, 0, 0 ]
Title: On orbifold constructions associated with the Leech lattice vertex operator algebra, Abstract: In this article, we study orbifold constructions associated with the Leech lattice vertex operator algebra. As an application, we prove that the structure of a strongly regular holomorphic vertex operator algebra of central charge $24$ is uniquely determined by its weight one Lie algebra if the Lie algebra has the type $A_{3,4}^3A_{1,2}$, $A_{4,5}^2$, $D_{4,12}A_{2,6}$, $A_{6,7}$, $A_{7,4}A_{1,1}^3$, $D_{5,8}A_{1,2}$ or $D_{6,5}A_{1,1}^2$ by using the reverse orbifold construction. Our result also provides alternative constructions of these vertex operator algebras (except for the case $A_{6,7}$) from the Leech lattice vertex operator algebra.
[ 0, 0, 1, 0, 0, 0 ]
Title: Phase correction for ALMA - Investigating water vapour radiometer scaling: The long-baseline science verification data case study, Abstract: The Atacama Large millimetre/submillimetre Array (ALMA) makes use of water vapour radiometers (WVR), which monitor the atmospheric water vapour line at 183 GHz along the line of sight above each antenna to correct for phase delays introduced by the wet component of the troposphere. The application of WVR-derived phase corrections improves the image quality and facilitates successful observations in weather conditions that were classically marginal or poor. We present work to indicate that a scaling factor applied to the WVR solutions can act to further improve the phase stability and image quality of ALMA data. We find reduced phase noise statistics for 62 out of 75 datasets from the long-baseline science verification campaign after a WVR scaling factor is applied. The improvement of phase noise translates to an expected coherence improvement in 39 datasets. When imaging the bandpass source, we find 33 of the 39 datasets show an improvement in the signal-to-noise ratio (S/N) between a few and ~30 percent. There are 23 datasets where the S/N of the science image is improved: 6 by <1%, 11 between 1 and 5%, and 6 above 5%. The higher frequencies studied (band 6 and band 7) are those most improved, specifically datasets with low precipitable water vapour (PWV), <1mm, where the dominance of the wet component is reduced. Although these improvements are not profound, phase stability improvements via the WVR scaling factor come into play for the higher frequency (>450 GHz) and long-baseline (>5km) observations. These inherently have poorer phase stability and are taken in low PWV (<1mm) conditions, for which we find the scaling to be most effective. A promising explanation for the scaling factor is the mixing of dry and wet air components, although other origins are discussed. We have produced a python code to allow ALMA users to undertake WVR scaling tests and make improvements to their data.
[ 0, 1, 0, 0, 0, 0 ]
Title: Scatteract: Automated extraction of data from scatter plots, Abstract: Charts are an excellent way to convey patterns and trends in data, but they do not facilitate further modeling of the data or close inspection of individual data points. We present a fully automated system for extracting the numerical values of data points from images of scatter plots. We use deep learning techniques to identify the key components of the chart, and optical character recognition together with robust regression to map from pixels to the coordinate system of the chart. We focus on scatter plots with linear scales, which already have several interesting challenges. Previous work has done fully automatic extraction for other types of charts, but to our knowledge this is the first approach that is fully automatic for scatter plots. Our method performs well, achieving successful data extraction on 89% of the plots in our test set.
[ 1, 0, 0, 1, 0, 0 ]
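The final mapping step described above lends itself to a compact sketch: given axis-tick positions in pixels and their OCR'd values, a robust regressor recovers the pixel-to-coordinate transform while rejecting OCR misreads. The tick data below are fabricated, with one deliberate misread playing the outlier.

```python
import numpy as np
from sklearn.linear_model import RANSACRegressor

# (pixel_x, ocr_value) pairs for x-axis ticks; true mapping: value = 0.05 * px - 5
px = np.array([100, 200, 300, 400, 500], dtype=float).reshape(-1, 1)
val = np.array([0.0, 5.0, 10.0, 150.0, 20.0])      # "150.0" is a misread "15.0"

model = RANSACRegressor(random_state=0).fit(px, val)
print("inlier mask:", model.inlier_mask_)           # the misread tick is flagged
print("pixel 250 maps to value", model.predict([[250.0]])[0])
```

Applying the recovered transform to the detected marker pixels yields the numerical values of the plotted data points.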
Title: Polygons with prescribed edge slopes: configuration space and extremal points of perimeter, Abstract: We describe the configuration space $\mathbf{S}$ of polygons with prescribed edge slopes, and study the perimeter $\mathcal{P}$ as a Morse function on $\mathbf{S}$. We characterize critical points of $\mathcal{P}$ (these are \textit{tangential} polygons) and compute their Morse indices. This setup is motivated by a number of results about critical points and Morse indices of the oriented area function defined on the configuration space of polygons with prescribed edge lengths (flexible polygons). As a by-product, we present an independent computation of the Morse index of the area function (obtained earlier by G. Panina and A. Zhukova).
[ 0, 0, 1, 0, 0, 0 ]
Title: L lines, C points and Chern numbers: understanding band structure topology using polarization fields, Abstract: Topology has appeared in different physical contexts. The most prominent application is topologically protected edge transport in condensed matter physics. The Chern number, the topological invariant of gapped Bloch Hamiltonians, is an important quantity in this field. Another example of topology, in polarization physics, is polarization singularities, called L lines and C points. By establishing a connection between these two theories, we develop a novel technique to visualize and potentially measure the Chern number: it can be expressed either as the winding of the polarization azimuth along L lines in reciprocal space, or in terms of the handedness and the index of the C points. For mechanical systems, this is directly connected to the visible motion patterns.
[ 0, 1, 0, 0, 0, 0 ]
Title: Space dependent adhesion forces mediated by transient elastic linkages: new convergence and global existence results, Abstract: In the first part of this work we show the convergence with respect to an asymptotic parameter $\epsilon$ of a delayed heat equation. It represents a mathematical extension of works considered previously by the authors [Milisic et al. 2011, Milisic et al. 2016]. Namely, this is the first result involving delay operators approximating protein linkages coupled with a spatial elliptic second order operator. For the sake of simplicity we choose the Laplace operator, although more general results could be derived. The main arguments are (i) new energy estimates and (ii) a stability result extended from the previous work to this more involved context. They allow us to prove convergence of the delay operator to a friction term together with the Laplace operator in the same asymptotic regime considered without the space dependence in [Milisic et al. 2011]. In a second part we extend fixed-point results for the fully non-linear model introduced in [Milisic et al. 2016] and prove global existence in time. This shows that the blow-up scenario observed previously does not occur. Since the latter result was interpreted as a rupture of adhesion forces, we discuss the possibility of bond breaking both from the analytic and numerical points of view.
[ 0, 0, 1, 0, 0, 0 ]
Title: Gravitational wave production from preheating: parameter dependence, Abstract: Parametric resonance is among the most efficient phenomena generating gravitational waves (GWs) in the early Universe. The dynamics of parametric resonance, and hence of the GWs, depend exclusively on the resonance parameter $q$. The latter is determined by the properties of each scenario: the initial amplitude and potential curvature of the oscillating field, and its coupling to other species. Previous works have only studied the GW production for fixed value(s) of $q$. We present an analytical derivation of the GW amplitude dependence on $q$, valid for any scenario, which we confront against numerical results. By running lattice simulations in an expanding grid, we study for a wide range of $q$ values, the production of GWs in post-inflationary preheating scenarios driven by parametric resonance. We present simple fits for the final amplitude and position of the local maxima in the GW spectrum. Our parametrization allows to predict the location and amplitude of the GW background today, for an arbitrary $q$. The GW signal can be rather large, as $h^2\Omega_{\rm GW}(f_p) \lesssim 10^{-11}$, but it is always peaked at high frequencies $f_p \gtrsim 10^{7}$ Hz. We also discuss the case of spectator-field scenarios, where the oscillatory field can be e.g.~a curvaton, or the Standard Model Higgs.
[ 0, 1, 0, 0, 0, 0 ]
Title: Adaptive Model Predictive Control for High-Accuracy Trajectory Tracking in Changing Conditions, Abstract: Robots and automated systems are increasingly being introduced to unknown and dynamic environments where they are required to handle disturbances, unmodeled dynamics, and parametric uncertainties. Robust and adaptive control strategies are required to achieve high performance in these dynamic environments. In this paper, we propose a novel adaptive model predictive controller that combines model predictive control (MPC) with an underlying $\mathcal{L}_1$ adaptive controller to improve trajectory tracking of a system subject to unknown and changing disturbances. The $\mathcal{L}_1$ adaptive controller forces the system to behave in a predefined way, as specified by a reference model. A higher-level model predictive controller then uses this reference model to calculate the optimal reference input based on a cost function, while taking into account input and state constraints. We focus on the experimental validation of the proposed approach and demonstrate its effectiveness in experiments on a quadrotor. We show that the proposed approach has a lower trajectory tracking error compared to non-predictive, adaptive approaches and a predictive, non-adaptive approach, even when external wind disturbances are applied.
[ 1, 0, 0, 0, 0, 0 ]
Title: Eigenvalues of symmetric tridiagonal interval matrices revisited, Abstract: In this short note, we present a novel method for computing exact lower and upper bounds of eigenvalues of a symmetric tridiagonal interval matrix. Compared to the known methods, our approach is fast, simple to present and to implement, and avoids any assumptions. Our construction explicitly yields those matrices for which particular lower and upper bounds are attained.
[ 1, 0, 0, 0, 0, 0 ]
Title: Topology of Large-Scale Structures of Galaxies in Two Dimensions - Systematic Effects, Abstract: We study the two-dimensional topology of the galactic distribution when projected onto two-dimensional spherical shells. Using the latest Horizon Run 4 simulation data, we construct the genus of the two-dimensional field and consider how this statistic is affected by late-time nonlinear effects -- principally gravitational collapse and redshift space distortion (RSD). We also consider systematic and numerical artifacts such as shot noise, galaxy bias, and finite pixel effects. We model the systematics using a Hermite polynomial expansion and perform a comprehensive analysis of known effects on the two-dimensional genus, with a view toward using the statistic for cosmological parameter estimation. We find that the finite pixel effect is dominated by an amplitude drop and can be made less than $1\%$ by adopting pixels smaller than $1/3$ of the angular smoothing length. Nonlinear gravitational evolution introduces time-dependent coefficients of the zeroth, first, and second Hermite polynomials, but the genus amplitude changes by less than $1\%$ between $z=1$ and $z=0$ for smoothing scales $R_{\rm G} > 9 {\rm Mpc/h}$. Non-zero terms are measured up to third order in the Hermite polynomial expansion when studying RSD. Differences in shapes of the genus curves in real and redshift space are small when we adopt thick redshift shells, but the amplitude change remains a significant $\sim {\cal O}(10\%)$ effect. The combined effects of galaxy biasing and shot noise produce systematic effects up to the second Hermite polynomial. It is shown that, when sampling, the use of galaxy mass cuts significantly reduces the effect of shot noise relative to random sampling.
[ 0, 1, 0, 0, 0, 0 ]
Title: Wiki-index of authors' popularity, Abstract: This paper introduces a new index for estimating an author's popularity, calculated on the basis of an analysis of the Wikipedia encyclopedia (the Wikipedia Index, WI). Unlike existing citation indices, the suggested measure evaluates not only an author's citation record, as is done by counting total citations or by the Hirsch index often used to rate a researcher's output, but also the author's popularity and influence within a given knowledge area on the Internet, namely in Wikipedia. The index is intended to be calculated within a subject domain, which on the one hand avoids the mistaken counting of homonymous authors, and on the other hand covers the subject area as a whole. We propose algorithms and a technique for calculating the Wikipedia Index by probing the online encyclopedia, present example calculations of the index for prominent researchers, and describe methods for building information networks, that is, models of subject domains, through automatic monitoring and analysis of networked information reference resources. The resulting concept network corresponds to the terms heading the Wikipedia articles.
[ 1, 0, 0, 0, 0, 0 ]
Title: Belyi map for the sporadic group J1, Abstract: We compute the genus 0 Belyi map of degree 266 for the sporadic Janko group J1 and describe the applied method. This yields explicit polynomials having J1 as a Galois group over $K(t)$, where $[K:\mathbb{Q}] = 7$.
[ 0, 0, 1, 0, 0, 0 ]
Title: Lefschetz duality for intersection (co)homology, Abstract: We prove the Lefschetz duality for intersection (co)homology in the framework of $\partial$-pseudomanifolds. We work with general perversities and without restriction on the coefficient ring.
[ 0, 0, 1, 0, 0, 0 ]
Title: Empirical determination of the optimum attack for fragmentation of modular networks, Abstract: All possible removals of $n=5$ nodes from networks of size $N=100$ are performed in order to find the optimal set of nodes which fragments the original network into the smallest largest connected component. The resulting attacks are ordered according to the size of the largest connected component and compared with the state-of-the-art methods of network attacks. We chose attacks of size $5$ on relatively small networks of size $100$ because the number of all possible attacks, $\binom{100}{5} \approx 10^8$, is at the limit of what is feasible to compute with available standard computers. Besides, we applied the procedure to a series of networks with controlled and varied modularity, comparing the resulting statistics with the effect of removing the same number of vertices according to the known most efficient disruption strategies, i.e., High Betweenness Adaptive attack (HBA), Collective Index attack (CI), and Modular Based Attack (MBA). Results show that modularity has an inverse relation with robustness, with $Q_c \approx 0.7$ being the critical value. For modularities lower than $Q_c$, all heuristic methods give mostly the same results as random attacks, while for larger $Q$, networks are less robust and highly vulnerable to malicious attacks.
[ 1, 0, 0, 0, 0, 0 ]
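The exhaustive procedure itself is a few lines of Python. The sketch below runs it on a smaller graph and attack size than the paper's 100-node, 5-removal sweep, since $\binom{100}{5}$ evaluations are expensive; the graph model and sizes are illustrative choices.

```python
import itertools
import networkx as nx

G = nx.barabasi_albert_graph(30, 2, seed=7)      # toy network, 30 nodes

def lcc_after_removal(G, nodes):
    """Size of the largest connected component after removing the given nodes."""
    H = G.copy()
    H.remove_nodes_from(nodes)
    return max((len(c) for c in nx.connected_components(H)), default=0)

# brute-force search over all 3-node attacks (30 choose 3 = 4060 combinations)
best = min(itertools.combinations(G.nodes, 3),
           key=lambda s: lcc_after_removal(G, s))
print("optimal 3-node attack:", best, "-> LCC size", lcc_after_removal(G, best))
```

Ranking all attacks by the resulting LCC size, rather than keeping only the minimum, gives the full attack ordering the paper compares against HBA, CI, and MBA.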
Title: Random Forests, Decision Trees, and Categorical Predictors: The "Absent Levels" Problem, Abstract: One advantage of decision tree based methods like random forests is their ability to natively handle categorical predictors without having to first transform them (e.g., by using feature engineering techniques). However, in this paper, we show how this capability can lead to an inherent "absent levels" problem for decision tree based methods that has never been thoroughly discussed, and whose consequences have never been carefully explored. This problem occurs whenever there is an indeterminacy over how to handle an observation that has reached a categorical split which was determined when the level of the observation in question was absent during training. Although these incidents may appear to be innocuous, by using Leo Breiman and Adele Cutler's random forests FORTRAN code and the randomForest R package (Liaw and Wiener, 2002) as motivating case studies, we examine how overlooking the absent levels problem can systematically bias a model. Furthermore, by using three real data examples, we illustrate how absent levels can dramatically alter a model's performance in practice, and we empirically demonstrate how some simple heuristics can be used to help mitigate the effects of the absent levels problem until a more robust theoretical solution is found.
[ 1, 0, 0, 1, 0, 0 ]
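A toy illustration of the indeterminacy described above: with integer-encoded categories, an unseen level is routed through a split according to its arbitrary code, so the prediction depends on the encoding rather than on the data. This uses scikit-learn for brevity; the paper's actual case studies are the randomForest R package and Breiman and Cutler's FORTRAN code, whose tie-breaking behaviour differs.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# training data only contains the levels coded 0 and 2; the level coded 1 is absent
X_train = np.array([[0]] * 50 + [[2]] * 50)
y_train = np.array([0] * 50 + [1] * 50)

tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
# the tree splits at the midpoint 1.0; where an absent level lands is arbitrary
print("prediction for absent level coded 1:", tree.predict([[1]])[0])
# re-coding the same absent level as 3 flips it to the other side of the split
print("prediction for absent level coded 3:", tree.predict([[3]])[0])
```

The two prints disagree even though both codes denote the same never-seen category, which is the systematic bias the paper dissects.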
Title: Complexity and capacity bounds for quantum channels, Abstract: We generalise some well-known graph parameters to operator systems by considering their underlying quantum channels. In particular, we introduce the quantum complexity as the dimension of the smallest co-domain Hilbert space a quantum channel requires to realise a given operator system as its non-commutative confusability graph. We describe quantum complexity as a generalised minimum semidefinite rank and, in the case of a graph operator system, as a quantum intersection number. The quantum complexity and a closely related quantum version of orthogonal rank turn out to be upper bounds for the Shannon zero-error capacity of a quantum channel, and we construct examples for which these bounds beat the best previously known general upper bound for the capacity of quantum channels, given by the quantum Lovász theta number.
[ 0, 0, 1, 0, 0, 0 ]
Title: Deep Within-Class Covariance Analysis for Robust Audio Representation Learning, Abstract: Convolutional Neural Networks (CNNs) can learn effective features, though they have been shown to suffer from a performance drop when the distribution of the data changes from training to test data. In this paper we analyze the internal representations of CNNs and observe that the representations of unseen data in each class spread more (with higher variance) in the embedding space of the CNN compared to representations of the training data. More importantly, this difference is more extreme if the unseen data comes from a shifted distribution. Based on this observation, we objectively evaluate the degree of the representations' variance in each class via eigenvalue decomposition on the within-class covariance of the internal representations of CNNs and observe the same behaviour. This can be problematic, as larger variances might lead to misclassification if a sample crosses the decision boundary of its class. We apply nearest neighbor classification on the representations and empirically show that the embeddings with high variance actually have significantly worse KNN classification performance, although this could not be foreseen from their end-to-end classification results. To tackle this problem, we propose Deep Within-Class Covariance Analysis (DWCCA), a deep neural network layer that significantly reduces the within-class covariance of a DNN's representation, improving performance on unseen test data from a shifted distribution. We empirically evaluate DWCCA on two datasets for Acoustic Scene Classification (DCASE2016 and DCASE2017). We demonstrate that not only does DWCCA significantly improve the network's internal representation, it also increases the end-to-end classification accuracy, especially when the test set exhibits a distribution shift. By adding DWCCA to a VGG network, we achieve around 6 percentage points of improvement in the case of a distribution mismatch.
[ 1, 0, 0, 0, 0, 0 ]