In this paper, we show that existing recognition and localization deep architectures, which have never been exposed to eye-tracking data or any saliency datasets, are capable of predicting human visual saliency. We term this implicit saliency in deep neural networks. We calculate this implicit saliency using the expectancy-mismatch hypothesis in an unsupervised fashion. Our experiments show that extracting saliency in this fashion provides performance comparable to state-of-the-art supervised algorithms. Moreover, our approach is more robust than those algorithms when large noise is added to the input images. We also show that semantic features contribute more than low-level features to human visual saliency detection.
Implicit Saliency in Deep Neural Networks
A Kerr microresonator frequency comb, in conjunction with an ultrafast photodiode, has enabled the generation of low-phase-noise millimeter- and terahertz-waves. It is intriguing to employ this new light source in wireless communication in the band above 100 GHz, where a carrier signal with a high signal-to-noise ratio is desired to achieve higher data rates. In this study, we demonstrate two simple and efficient architectures for wireless links based on a microresonator comb. We show experimentally that simultaneous modulation and detection of multiple comb lines yields a modulation signal more than 10 times stronger than two-line detection at the receiver. Successful transmission of complex modulation formats up to 64-state quadrature amplitude modulation proves that a microresonator comb and the proposed modulation method are effective in modern wireless communication.
300 GHz wireless link based on an integrated Kerr soliton comb
Dark matter neutralinos in the constrained minimal supersymmetric standard model (CMSSM) may account for the recent cosmic ray electron and positron observations reported by the PAMELA and ATIC experiments either through self-annihilation or via decay. However, both scenarios require new physics beyond the 'standard' CMSSM, and a unified explanation of the two experiments suggests a neutralino mass of order 700 GeV - 2 TeV. A relatively light neutralino with mass around 100 GeV (300 GeV) can accommodate the PAMELA but not the ATIC observations in a model of annihilating (decaying) neutralinos. We study the implications of these scenarios for Higgs and sparticle spectroscopy in the CMSSM and highlight some benchmark points. An estimate of the neutrino flux expected from the annihilating and decaying neutralino scenarios is provided.
CMSSM Spectroscopy in light of PAMELA and ATIC
We explicitly compute the Dolbeault cohomologies of certain domains in complex space generalizing the classical Hartogs figure. The cohomology groups are non-Hausdorff topological vector spaces, and it is possible to identify the reduced (Hausdorff) and the indiscrete part of the cohomology.
Some non-pseudoconvex domains with explicitly computable non-Hausdorff Dolbeault cohomology
We study a mathematical relationship between the holographic Wilsonian renormalization group (HWRG) and the stochastic quantization (SQ) of a scalar field with arbitrary mass in AdS spacetime. In the stochastic theory, the field is described by an equation of harmonic-oscillator form with time-dependent frequency, and its Euclidean action likewise contains an explicitly time-dependent kernel. We obtain the stochastic 2-point correlation function and demonstrate that it correctly reproduces the radial evolution of the double-trace operator via the relation suggested in arXiv:1209.2242. Moreover, we justify our stochastic procedure with a time-dependent kernel by showing that it can be mapped to a new stochastic frame with a standard, time-independent kernel. Finally, we consider more general boundary conditions for the stochastic field in order to reproduce the radial evolution of the holographic boundary effective action when alternative quantization is allowed. We study the Neumann boundary condition case extensively and confirm that even in this case the relation between HWRG and SQ holds precisely.
Stochastic quantization and holographic Wilsonian renormalization group of scalar theories with arbitrary mass
A particular case of initial data for the two-dimensional Euler equations is studied numerically. The results show that the Godunov method does not always converge to the physical solution, at least not on feasible grids. Moreover, they suggest that entropy solutions (in the weak entropy inequality sense) are not well-posed.
A possible counterexample to wellposedness of entropy solutions and to Godunov scheme convergence
This thesis concerns the study of random walks in random environments (RWRE). Since there are two levels of randomness for random walks in random environments, there are two different distributions for the random walk that can be studied. The quenched distribution is the law of the random walk conditioned on a given environment. The annealed distribution is the quenched law averaged over all environments. The main results of the thesis fall into two categories: quenched limiting distributions for one-dimensional, transient RWRE and annealed large deviations for multidimensional RWRE. The analysis of the quenched distributions for transient, one-dimensional RWRE falls into two separate cases. First, when an annealed central limit theorem holds, we prove that a quenched central limit theorem also holds but with a random (depending on the environment) centering. In contrast, when the annealed limit distribution is not Gaussian, we prove that there is no quenched limiting distribution for the RWRE. Moreover, we show that for almost every environment, there exist two random (depending on the environment) sequences of times along which the random walk has different quenched limiting distributions. While an annealed large deviation principle for multidimensional RWRE was known previously, very little qualitative information was available about the annealed large deviation rate function. We prove that if the law on environments is non-nestling, then the annealed large deviation rate function is analytic in a neighborhood of its unique zero (which is the limiting velocity of the RWRE).
Limiting distributions and large deviations for random walks in random environments
The electric quadrupole moment and the magnetic moment of the 11Li halo nucleus have been measured with more than an order of magnitude higher precision than before, |Q| = 33.3(5) mb and mu = 3.6712(3) mu_N, revealing an 8.8(1.5)% increase of the quadrupole moment relative to that of 9Li. This result is compared to various models that aim at describing the halo properties. In the shell model, an increased quadrupole moment points to a significant occupation of the 1d orbits, whereas in a simple halo picture it can be explained by relating the quadrupole moments of the proton distribution to the charge radii. Advanced models so far fail to reproduce simultaneously the trends observed in the radii and quadrupole moments of the lithium isotopes.
Precision Measurement of 11Li moments: Influence of Halo Neutrons on the 9Li Core
This is an expository article of our work on analogies between knot theory and algebraic number theory. We shall discuss foundational analogies between knots and primes, 3-manifolds and number rings mainly from the group-theoretic point of view.
Analogies between Knots and Primes, 3-Manifolds and Number Rings
We present a new diagnostic diagram for local ultraluminous infrared galaxies (ULIRGs) and quasars, analysing in particular the Spitzer Space Telescope's Infrared Spectrograph (IRS) spectra of 102 local ULIRGs and 37 Palomar Green quasars. Our diagram is based on a special non-linear mapping of these data, employing the kernel principal component analysis method. The novelty of this map lies in the fact that it distributes the galaxies under study on the surface of a well-defined ellipsoid, which, in turn, links basic concepts from geometry to physical properties of the galaxies. In particular, we find that the equatorial direction of the ellipsoid corresponds to the evolution of the power source of ULIRGs, starting from the pre-merger phase, moving through the starburst-dominated coalescing stage towards the active galactic nucleus (AGN)-dominated phase, and finally terminating with the post-merger quasar phase. The meridian directions, on the other hand, distinguish deeply obscured power sources of the galaxies from unobscured ones. These observations have also been verified by comparison with simulated ULIRGs and quasars using radiative transfer models. The diagram correctly identifies unique galaxies with extreme features that lie distinctly away from the main distribution of the galaxies. Furthermore, special two-dimensional projections of the ellipsoid recover almost monotonic variations of the two main physical properties of the galaxies, the silicate and PAH features. This suggests that our diagram naturally extends the well-known Spoon diagram and can serve as a diagnostic tool for existing and future infrared spectroscopic data, such as those provided by the James Webb Space Telescope.
Classification of local ultraluminous infrared galaxies and quasars with kernel principal component analysis
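The kernel principal component analysis step underlying the diagram above can be sketched in a few lines. This is a minimal numpy sketch: the RBF kernel choice, the synthetic two-cluster data standing in for galaxy spectra, and all parameter values are illustrative assumptions, not the kernel or data actually used in the paper.

```python
import numpy as np

def kernel_pca(X, n_components=3, gamma=0.5):
    """Embed rows of X using the leading eigenvectors of a centred RBF kernel."""
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one  # centre the kernel in feature space
    vals, vecs = np.linalg.eigh(Kc)             # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:n_components]
    return vecs[:, idx] * np.sqrt(np.abs(vals[idx]))  # embedded coordinates

# Hypothetical stand-in for the spectra: two well-separated clusters.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (20, 5)), rng.normal(2.0, 0.3, (20, 5))])
Y = kernel_pca(X)
print(Y.shape)  # (40, 3)
```

For two well-separated groups, the first non-linear component already separates them, which mirrors how the mapping in the paper spreads galaxies along physically meaningful directions of the ellipsoid.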
We compute an explicit algebraic deformation quantization for an affine Poisson variety described by an ideal in a polynomial ring, and inheriting its Poisson structure from the ambient space.
On the deformation quantization of affine algebraic varieties
To understand the essence of the exciton Mott transition in three-dimensional electron-hole systems, the metal-insulator transition is studied for a two-band Hubbard model in infinite dimensions with electron-electron (hole-hole) repulsion U and electron-hole attraction -U'. Using dynamical mean-field theory, which is exact in infinite dimensions, we obtain the phase diagram in the U-U' plane under the assumption that electron-hole pairs do not condense. When both the electron and hole bands are half-filled, two types of insulating states appear: the Mott-Hubbard insulator for U > U' and a biexciton-like insulator for U < U'. Even away from half-filling, we find a phase transition between the exciton- or biexciton-like insulator and a metallic state. This transition can be assigned to the exciton Mott transition, whereas the Mott-Hubbard transition is absent.
Phase diagram for the exciton Mott transition in infinite-dimensional electron-hole systems
We study correlations of hydrodynamic fluctuations in shear flow analytically and also by dissipative particle dynamics (DPD) simulations. The hydrodynamic equations are linearized around the macroscopic velocity field and then solved by a perturbation method in Fourier-transformed space. The autocorrelation functions (ACFs) from the analytical method are compared with results obtained from DPD simulations under the same shear-flow conditions. Up to a moderate shear rate, the various ACFs from the two approaches agree well with each other. At large shear rates, discrepancies between the two methods are observed, revealing strong additional coupling between different fluctuating variables that is not considered in the analytical approach. In addition, the results at low and moderate shear rates can serve as benchmarks for developing multiscale algorithms for coupling heterogeneous solvers, such as a hybrid simulation coupling molecular dynamics with a fluctuating-hydrodynamics solver, where thermal fluctuations are indispensable.
Analytical and Computational Studies of Correlations of Hydrodynamic Fluctuations in Shear Flow
Green algae of the $Volvocine$ lineage, spanning from unicellular $Chlamydomonas$ to vastly larger $Volvox$, are models for the study of the evolution of multicellularity, flagellar dynamics, and developmental processes. Phototactic steering in these organisms occurs without a central nervous system, driven solely by the response of individual cells. All such algae spin about a body-fixed axis as they swim; directional photosensors on each cell thus receive periodic signals when that axis is not aligned with the light. The flagella of $Chlamydomonas$ and $Volvox$ both exhibit an adaptive response to such signals in a manner that allows for accurate phototaxis, but in the former the two flagella have distinct responses, while the thousands of flagella on the surface of spherical $Volvox$ colonies have essentially identical behaviour. The planar 16-cell species $Gonium~pectorale$ thus presents a conundrum, for its central 4 cells have a $Chlamydomonas$-like beat that provides propulsion normal to the plane, while its 12 peripheral cells generate rotation around the normal through a $Volvox$-like beat. Here, we combine experiment, theory, and computations to reveal how $Gonium$, perhaps the simplest differentiated colonial organism, achieves phototaxis. High-resolution cell tracking, particle image velocimetry of flagellar-driven flows, and high-speed imaging of flagella on micropipette-held colonies show how, in the context of a recently introduced model for $Chlamydomonas$ phototaxis, an adaptive response of the peripheral cells alone leads to photo-reorientation of the entire colony. The analysis also highlights the importance of local variations in flagellar beat dynamics within a given colony, which can lead to enhanced reorientation dynamics.
Motility and Phototaxis of $Gonium$, the Simplest Differentiated Colonial Alga
We report the detection of the sulfur-bearing species NCS, HCCS, H2CCS, H2CCCS, and C4S for the first time in space. These molecules were found towards TMC-1 through the observation of several lines for each species. We also report the detection of C5S for the first time in a cold cloud through the observation of five lines in the 31-50 GHz range. The derived column densities are N(NCS) = (7.8 +/- 0.6)e11 cm-2, N(HCCS) = (6.8 +/- 0.6)e11 cm-2, N(H2CCS) = (7.8 +/- 0.8)e11 cm-2, N(H2CCCS) = (3.7 +/- 0.4)e11 cm-2, N(C4S) = (3.8 +/- 0.4)e10 cm-2, and N(C5S) = (5.0 +/- 1.0)e10 cm-2. The observed abundance ratio between C3S and C4S is 340, that is to say a factor of approximately one hundred larger than the corresponding value for CCS and C3S. The observational results are compared with a state-of-the-art chemical model, which is only partially successful in reproducing the observed abundances. These detections underline the need to improve chemical networks dealing with S-bearing species.
TMC-1, the starless core sulfur factory: Discovery of NCS, HCCS, H2CCS, H2CCCS, and C4S and detection of C5S
In this note we generalize a result from a recent paper of Hajac, Reznikoff and Tobolski (2020). In that paper they give conditions they call admissibility on a pushout diagram in the category of directed graphs implying that the $C^*$-algebras of the graphs form a pullback diagram. We consider a larger category of relative graphs that correspond to relative Toeplitz graph algebras. In this setting we give necessary and sufficient conditions on the pushout to get a pullback of $C^*$-algebras.
Relative graphs and pullbacks of relative Toeplitz graph algebras
This Signal Processing Grand Challenge (SPGC) targets a difficult automatic prediction problem of societal and medical relevance, namely, the detection of Alzheimer's Dementia (AD). Participants were invited to employ signal processing and machine learning methods to create predictive models based on spontaneous speech data. The Challenge has been designed to assess the extent to which predictive models built on speech in one language (English) generalise to another language (Greek). To the best of our knowledge, no previous work has investigated acoustic features of the speech signal in multilingual AD detection. Our baseline system used conventional machine learning algorithms with an Active Data Representation of acoustic features, achieving an accuracy of 73.91% on AD detection and a root mean squared error of 4.95 on cognitive score prediction.
Multilingual Alzheimer's Dementia Recognition through Spontaneous Speech: a Signal Processing Grand Challenge
In this paper, we analyze polarized muon decay at rest (PMDaR) and elastic neutrino-electron scattering (ENES), admitting a non-standard V+A interaction in addition to the standard V-A interaction. Considerations are made for a Dirac massive muon neutrino and electron antineutrino. Moreover, the muon neutrinos are transversely polarized: the outgoing muon-neutrino beam is a mixture of left- and right-chirality muon neutrinos and has a fixed direction of transverse spin polarization with respect to the production plane. We show that the angle-energy distribution of muon neutrinos contains interference terms between the standard V-A and exotic V+A couplings, which are proportional to the transverse components of the muon neutrino spin polarization. They do not vanish in the massless-neutrino limit and include relative phases that can be used to test CP violation. Consequently, this allows one to calculate the neutrino flux and the expected event number in the ENES (detection process), both for the standard model prediction and for the case of a neutrino left-right mixture.
Polarized Muon Decay at Rest with V+A Interaction
By using Girsanov transformation and martingale representation, Talagrand-type transportation cost inequalities, with respect to both the uniform and the $L^2$ distances on the global free path space, are established for the segment process associated to a class of neutral functional stochastic differential equations. Neutral functional stochastic partial differential equations are also investigated.
Transportation Cost Inequalities for Neutral Functional Stochastic Equations
From a linear stability analysis of the Gross-Pitaevskii equation for binary Bose-Einstein condensates, we find that the uniform state becomes unstable to a periodic perturbation of wave number k if k exceeds a critical value k_c. However, we find that a stationary spatially periodic state does not exist. We show the existence of pulse-type solutions, in which the pulse structure of one condensate is strongly influenced by the presence of the other.
Stability and the Existence of Coherent Structures in the Demixed State of a Binary BEC
It is an old and challenging problem to determine for which discrete groups G the full group C*-algebra C*(G) is residually finite-dimensional (RFD). In particular, not much is known about how the RFD property behaves under fundamental constructions such as amalgamated free products and HNN-extensions. In [CS19] it was proved that central amalgamated free products of virtually abelian groups are RFD. In this paper we prove that this holds far beyond that case. Our method is based on establishing a certain approximation property for characters induced from central subgroups. In particular, it allows us to prove that free products of polycyclic-by-finite groups amalgamated over finitely generated central subgroups are RFD. On the other hand, we prove that the class of RFD C*-algebras (and groups) is not closed under central amalgamated free products. Namely, we give an example of RFD groups (in fact, finitely generated amenable RF groups) whose central amalgamated free product is not RFD; moreover, it is not even maximally almost periodic. This answers a question of Khan and Morris [KM82].
Central amalgamation of groups and the RFD property
Photo-induced processes are fundamental in nature, but accurate simulations are seriously limited by the cost of the underlying quantum chemical calculations, hampering their application for long time scales. Here we introduce a method based on machine learning to overcome this bottleneck and enable accurate photodynamics on nanosecond time scales, which are otherwise out of reach with contemporary approaches. Instead of expensive quantum chemistry during molecular dynamics simulations, we use deep neural networks to learn the relationship between a molecular geometry and its high-dimensional electronic properties. As an example, the time evolution of the methylenimmonium cation for one nanosecond is used to demonstrate that machine learning algorithms can outperform standard excited-state molecular dynamics approaches in their computational efficiency while delivering the same accuracy.
Machine learning enables long time scale molecular photodynamics simulations
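The core idea above — replacing the expensive electronic-structure call with a learned geometry-to-energy mapping — can be sketched with a toy surrogate. The Morse-like reference potential, the tiny one-hidden-layer tanh network, and the plain gradient-descent loop below are simplifying assumptions for illustration, not the deep architecture or excited-state properties used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def reference_energy(r):
    """Toy stand-in for a quantum-chemistry call: a Morse-like potential."""
    return (1.0 - np.exp(-1.5 * (r - 1.0))) ** 2

# Training data: geometries (here a single bond length) and reference energies.
r = rng.uniform(0.5, 3.0, (256, 1))
E = reference_energy(r)

# One-hidden-layer surrogate network, trained by full-batch gradient descent.
W1 = rng.normal(0.0, 1.0, (1, 32)); b1 = np.zeros(32)
W2 = rng.normal(0.0, 0.1, (32, 1)); b2 = np.zeros(1)

losses, lr = [], 0.05
for _ in range(2000):
    h = np.tanh(r @ W1 + b1)            # hidden activations
    err = h @ W2 + b2 - E               # prediction error
    losses.append(float(np.mean(err ** 2)))
    # Backpropagation (gradients up to a constant factor).
    dh = (err @ W2.T) * (1.0 - h ** 2)
    W2 -= lr * (h.T @ err) / len(r); b2 -= lr * err.mean(0)
    W1 -= lr * (r.T @ dh) / len(r); b1 -= lr * dh.mean(0)

print(losses[0], "->", losses[-1])
```

Once trained, evaluating the network is orders of magnitude cheaper than the reference call, which is what makes nanosecond-scale dynamics feasible in the approach the abstract describes.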
The network studied here is based on a standard model in physics, but it appears in various applications ranging from spintronics to neuroscience. When the network is forced by an external signal common to all its elements, we show that there are two potential (gradient) functions: one for amplitudes and one for phases. The phase potential, however, disappears when the forcing is removed. The phase potential describes the distribution of in-phase/anti-phase oscillations in the network, as well as resonances in the form of phase locking. A valley in a potential surface corresponds to a memory that may be accessed by associative recall. The two potentials derived here exhibit two different forms of memory: structural memory (time-domain memory), which is sustained in the free problem, and evoked memory (frequency-domain memory), which is sustained by the phase potential and appears only when the system is illuminated by common external forcing. The common forcing organizes the network into elements that are locked to the forcing frequencies and other elements that may form secluded sub-networks. The secluded networks may perform independent operations such as pattern recognition and logic computations. Various control methods for shaping the network's outputs are demonstrated.
A Frequency-Phase Potential for a Forced STNO Network: an Example of Evoked Memory
The standard notion of NS-NS 3-form flux is lifted to Hitchin's generalized geometry. This generalized flux is given in terms of an integral of a modified Nijenhuis operator over a generalized 3-cycle. Explicitly evaluating the generalized flux in a number of familiar examples, we show that it can compute three-form flux, geometric flux and non-geometric Q-flux. Finally, a generalized connection that acts on generalized vectors is described and we show how the flux arises from it.
NS-NS fluxes in Hitchin's generalized geometry
We provide convincing empirical evidence that long-range interactions strongly enhance the rectification effect which takes place in mass-graded systems. Even more importantly, the rectification does not decrease as the system size increases. Large rectification is also obtained for the equal-mass case with a graded on-site potential. These results overcome current limitations of the rectification mechanism and open the way for a realistic implementation of efficient thermal diodes.
Ingredients for an efficient thermal diode
We present a new family of exact four-dimensional Taub-NUT spacetimes in Einstein-$\Lambda$ theory supplemented with a conformally coupled scalar field exhibiting a power-counting super-renormalizable potential. The construction proceeds as follows: a solution of a conformally coupled theory with a conformal potential, henceforth the seed $(g_{\mu\nu},\phi)$, is transformed by the action of a specific change of frame together with a simultaneous shift of the seed scalar. The new configuration, $(\bar{g}_{\mu\nu},\bar{\phi})$, solves the field equations of a conformally coupled theory with the aforementioned super-renormalizable potential. The solution spectrum of the seed is considerably enhanced. We highlight the existence of two types of exact black bounces, given by de Sitter and anti-de Sitter geometries, each transiting across three different configurations. The de Sitter geometries transit from a regular black hole with event and cosmological horizons to a bouncing cosmology connecting two de Sitter Universes with different values of the asymptotic cosmological constant. An intermediate phase, represented by a de Sitter wormhole or by a bouncing cosmology that connects two de Sitter Universes, appears in the presence of a cosmological horizon. On the other hand, the anti-de Sitter geometries transit from a regular black hole with inner and event horizons to a wormhole that connects two asymptotic boundaries with different constant curvatures. The intermediate phase is given by an anti-de Sitter regular black hole with a single event horizon, which appears in two different settings: as a regular anti-de Sitter black hole inside an anti-de Sitter wormhole, or as an anti-de Sitter regular black hole with an internal cosmological bounce. These geometries are smoothly connected by the mass parameter alone. Other black holes, bouncing cosmologies and wormholes are also found.
AdS-Taub-NUT spacetimes and exact black bounces with scalar hair
Assume that abelian categories $A, B$ over a field admit countable direct limits and that these limits are exact. Let $F: D^+_{dg}(A) \to D^+_{dg}(B)$ be a DG quasi-functor such that the functor $Ho(F): D^+(A) \to D^+(B)$ carries $D^{\geq 0}(A)$ to $D^{\geq 0}(B)$ and such that, for every $i>0$, the functor $H^i F: A \to B$ is effaceable. We prove that $F$ is canonically isomorphic to the right derived DG functor $RH^0(F)$. We also prove a similar result for bounded derived DG categories in a more general setting. We give an example showing that the corresponding statements for triangulated functors are false. We prove a formula that expresses the Hochschild cohomology of the categories $D^b_{dg}(A)$, $D^+_{dg}(A)$ as $Ext$ groups in the abelian category of left exact functors $A \to Ind B$.
On the derived DG functors
We develop the theory of linear algebra over a (Z_2)^n-commutative algebra (n in N), which includes the well-known super linear algebra as a special case (n=1). Examples of such graded-commutative algebras are the Clifford algebras, in particular the quaternion algebra H. Following a cohomological approach, we introduce analogues of the notions of trace and determinant. Our construction reduces in the classical commutative case to the coordinate-free description of the determinant by means of the action of invertible matrices on the top exterior power, and in the supercommutative case it coincides with the well-known cohomological interpretation of the Berezinian.
Cohomological Approach to the Graded Berezinian
In this paper, we compare the regularities of symbolic and ordinary powers of edge ideals of weighted oriented graphs. For a weighted oriented graph $D$, we give a lower bound for $\reg(I(D)^{k})$ when the vertices of $V^+$ are sinks. If $D$ has an induced directed path $(x_i,x_j),(x_j,x_r) \in E(D)$ of length $2$ with $w(x_j)\geq 2$, then we show that $\reg(I(D)^{(k)})\leq \reg(I(D)^{k})$ for all $k\geq 2$. In particular, if $D$ is bipartite, then the above inequality holds for all $k\geq 2$. For any weighted oriented graph $D$, if the vertices of $V^+$ are sinks, then we show that $\reg(I(D)^{(k)}) \leq \reg(I(D)^{k})$ for $k=2,3$. We further study when these regularities are equal. As a consequence, we give sharp upper bounds for the regularity of symbolic powers of certain classes of weighted oriented graphs.
Regularity comparison of symbolic and ordinary powers of weighted oriented graphs
We report on the realization and characterization of a magnetic microtrap for ultracold atoms near a straight superconducting Nb wire with a circular cross section. The trapped atoms are used to probe the magnetic field outside the superconducting wire. The Meissner effect shortens the distance between the trap and the wire, reduces the radial magnetic-field gradients, and lowers the trap depth. Measurements of the trap position reveal a complete exclusion of the magnetic field from the superconducting wire for temperatures below 6 K. As the temperature is increased further, the magnetic field partially penetrates the superconducting wire; hence the microtrap position shifts towards the position expected for a normal-conducting wire.
Meissner effect in superconducting microtraps
Let $u$ be a positive harmonic function in the unit ball $B_1 \subset \mathbb{R}^n$ and let $\mu$ be the boundary measure of $u$. Consider a point $x\in \partial B_1$ and let $n(x)$ denote the unit normal vector at $x$. Let $\alpha$ be a number in $(-1,n-1]$ and $A \in [0,+\infty)$. We prove that $u(x+n(x)t)t^{\alpha} \to A$ as $t \to +0$ if and only if $\frac{\mu({B_r(x)})}{r^{n-1}} r^{\alpha} \to C_\alpha A$ as $r\to+0$, where ${C_\alpha= \frac{\pi^{n/2}}{\Gamma(\frac{n-\alpha+1}{2})\Gamma(\frac{\alpha+1}{2})}}$. For $\alpha=0$ this follows from the theorems of Rudin and Loomis, which state that a positive harmonic function has a limit along the normal iff the boundary measure has a derivative at the corresponding point of the boundary. For $\alpha=n-1$ it concerns the point mass of $\mu$ at $x$ and follows from the Beurling minimal principle. For the general case $\alpha \in (-1,n-1)$ we prove it with the help of the Wiener Tauberian theorem, in a manner similar to Rudin's approach. This approach, however, works only for a ball or a half-space, not for general domains. In dimension $2$ one can use conformal mappings to generalise the statement above to sufficiently smooth domains; in dimension $n\geq 3$ we show that this generalisation is possible for $\alpha\in [0,n-1]$ thanks to harmonic measure estimates. The latter method leads to an extension of the theorems of Loomis, Ramey and Ullrich on non-tangential limits of harmonic functions to positive solutions of elliptic differential equations with Hölder continuous coefficients.
On the Boundary Behavior of Positive Solutions of Elliptic Differential Equations
In this paper, we study geometric properties of basins of attraction of monotone systems. Our results are based on a combination of monotone systems theory and spectral operator theory. We exploit the framework of the Koopman operator, which provides a linear infinite-dimensional description of nonlinear dynamical systems and spectral operator-theoretic notions such as eigenvalues and eigenfunctions. The sublevel sets of the dominant eigenfunction form a family of nested forward-invariant sets and the basin of attraction is the largest of these sets. The boundaries of these sets, called isostables, allow studying temporal properties of the system. Our first observation is that the dominant eigenfunction is increasing in every variable in the case of monotone systems. This is a strong geometric property which simplifies the computation of isostables. We also show how variations in basins of attraction can be bounded under parametric uncertainty in the vector field of monotone systems. Finally, we study the properties of the parameter set for which a monotone system is multistable. Our results are illustrated on several systems of two to four dimensions.
Geometric Properties of Isostables and Basins of Attraction of Monotone Systems
Phase transitions are characterized by a sharp change in the type of dynamics of microparticles, and their description usually requires quantum mechanics. Recently, a peculiar type of conductors was discovered in which two-dimensional (2D) electrons form a viscous fluid. In this work we reveal that such electron fluid in high-quality samples can be formed from ballistic electrons via a phase transition. For this purpose, we theoretically study the evolution of a ballistic flow of 2D weakly interacting electrons with an increase of magnetic field and trace an emergence of a fluid fraction at a certain critical field. Such restructuring of the flow manifests itself in a kink in magnetic-field dependencies of the longitudinal and the Hall resistances. It is remarkable that the studied phase transition has a classical-mechanical origin and is determined by both the ballistic size effects and the electron-electron scattering. Our analysis shows that this effect was apparently observed in the recent transport experiments on 2D electrons in graphene and high-mobility GaAs quantum wells.
Ballistic-hydrodynamic phase transition in flow of two-dimensional electrons
Oscillations of high-energy photons into light pseudoscalar particles in an external magnetic field are expected to occur in some extensions of the standard model. It is usually assumed that those axionlike particles (ALPs) could produce a drop in the energy spectra of gamma ray sources and possibly decrease the opacity of the Universe for TeV gamma rays. We show here that these assumptions are in fact based on an average behavior that cannot happen in real observations of single sources. We propose a new method to search for photon-ALP oscillations, taking advantage of the fact that a single observation would deviate from the average expectation. Our method is based on the search for irregularities in the energy spectra of gamma ray sources. We predict features that are unlikely to be produced by known astrophysical processes and a new signature of ALPs that is easily falsifiable.
Irregularity in gamma ray source spectra as a signature of axionlike particles
With the emergence of new pandemic threats, scientific frameworks are needed to understand the unfolding of the epidemic. The use of mobile apps that are able to trace contacts is of utmost importance in order to control new infected cases and contain further propagation. Here we present a theoretical approach, using both percolation and message-passing techniques, to the role of contact tracing in mitigating an epidemic wave. We show how increasing the app adoption level raises the value of the epidemic threshold, which is eventually maximized when high-degree nodes are preferentially targeted. Analytical results are compared with extensive Monte Carlo simulations, showing good agreement for both homogeneous and heterogeneous networks. These results are important to quantify the level of adoption needed for contact-tracing apps to be effective in mitigating an epidemic.
A message-passing approach to epidemic tracing and mitigation with apps
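The threshold-raising effect of app adoption described above can be illustrated with a toy bond-percolation simulation (this is only an illustration, not the paper's message-passing formalism): contacts between two app users are assumed traced and blocked, and all parameter values below are invented for the sketch.

```python
import random

def mean_outbreak(n, avg_deg, T, adoption, trials=30, seed=1):
    """Mean final outbreak size on an Erdos-Renyi contact network.

    Each contact transmits with probability T, except that a contact
    between two app users is assumed to be traced and blocked."""
    rng = random.Random(seed)
    p_edge = avg_deg / (n - 1)
    total = 0.0
    for _ in range(trials):
        has_app = [rng.random() < adoption for _ in range(n)]
        adj = [[] for _ in range(n)]
        for u in range(n):
            for v in range(u + 1, n):
                if rng.random() >= p_edge:
                    continue  # no contact between u and v
                if has_app[u] and has_app[v]:
                    continue  # traced contact: transmission blocked
                if rng.random() < T:
                    adj[u].append(v)
                    adj[v].append(u)
        # outbreak = transmission cluster of a random seed node
        start = rng.randrange(n)
        seen, stack = {start}, [start]
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        total += len(seen)
    return total / trials

print(mean_outbreak(200, 6, 0.5, adoption=0.0))  # large outbreaks
print(mean_outbreak(200, 6, 0.5, adoption=0.9))  # tracing suppresses spread
```

Raising `adoption` sharply reduces the mean cluster size once the effective reproduction number drops below one, mirroring the threshold shift discussed in the abstract.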
Optimal caching of files in a content distribution network (CDN) is a problem of fundamental and growing commercial interest. Although many different caching algorithms are in use today, the fundamental performance limits of network caching algorithms from an online learning point of view remain poorly understood to date. In this paper, we resolve this question in the following two settings: (1) a single user connected to a single cache, and (2) a set of users and a set of caches interconnected through a bipartite network. Recently, an online gradient-based coded caching policy was shown to enjoy sub-linear regret. However, due to the lack of known regret lower bounds, the question of the optimality of the proposed policy was left open. In this paper, we settle this question by deriving tight non-asymptotic regret lower bounds in both of the above settings. In addition, we propose a new Follow-the-Perturbed-Leader-based uncoded caching policy with near-optimal regret. Technically, the lower bounds are obtained by relating the online caching problem to the classic probabilistic paradigm of balls-into-bins. Our proofs make extensive use of a new result on the expected load in the most populated half of the bins, which might also be of independent interest. We evaluate the performance of the caching policies by experimenting with the popular MovieLens dataset and conclude the paper with design recommendations and a list of open problems.
Fundamental Limits of Online Network-Caching
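The balls-into-bins quantity underlying the lower bounds, the expected load in the most populated half of the bins, is easy to estimate by Monte Carlo. The sketch below only illustrates the quantity itself, not the paper's proof technique, and the parameter values are arbitrary.

```python
import random

def top_half_load(n_balls, n_bins, trials=2000, seed=0):
    """Monte Carlo estimate of the expected number of balls that
    land in the most populated n_bins // 2 bins."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        loads = [0] * n_bins
        for _ in range(n_balls):
            loads[rng.randrange(n_bins)] += 1
        loads.sort(reverse=True)            # heaviest bins first
        total += sum(loads[: n_bins // 2])  # load of the top half
    return total / trials

# With 100 balls thrown into 100 bins, far more than half of the
# balls end up in the most popular half of the bins.
print(top_half_load(100, 100))
```

The estimate sits well above `n_balls / 2`, which is the kind of imbalance the regret lower bounds exploit.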
HTTP/2 (h2) is a new standard for Web communications that already delivers a large share of Web traffic. Unlike HTTP/1, h2 uses only one underlying TCP connection. In a cellular network with high loss and sudden spikes in latency, which the TCP stack might interpret as loss, using a single TCP connection can negatively impact Web performance. In this paper, we perform an extensive analysis of real world cellular network traffic and design a testbed to emulate loss characteristics in cellular networks. We use the emulated cellular network to measure h2 performance in comparison to HTTP/1.1, for webpages synthesized from HTTP Archive repository data. Our results show that, in lossy conditions, h2 achieves faster page load times (PLTs) for webpages with small objects. For webpages with large objects, h2 degrades the PLT. We devise a new domain-sharding technique that isolates large and small object downloads on separate connections. Using sharding, we show that under lossy cellular conditions, h2 over multiple connections improves the PLT compared to h2 with one connection and HTTP/1.1 with six connections. Finally, we recommend content providers and content delivery networks to apply h2-aware domain-sharding on webpages currently served over h2 for improved mobile Web performance.
Domain-Sharding for Faster HTTP/2 in Lossy Cellular Networks
Two different constructions of an invariant of an odd dimensional hyperbolic manifold in the K-group $K_{2n-1}(\bar \Bbb Q)\otimes \Bbb Q$ are given. The volume of the manifold is equal to the value of the Borel regulator on that element. The scissor congruence groups in non-Euclidean geometries are studied and their relationship with algebraic K-theory of the field of complex numbers is discussed.
Volumes of hyperbolic manifolds and mixed Tate motives
Pulmonary embolus (PE) refers to obstruction of the pulmonary arteries by blood clots. PE accounts for approximately 100,000 deaths per year in the United States alone. The clinical presentation of PE is often nonspecific, making the diagnosis challenging. Thus, rapid and accurate risk stratification is of paramount importance. High-risk PE is caused by right ventricular (RV) dysfunction from acute pressure overload, so detecting RV dysfunction can help identify which patients require more aggressive therapy. Reconstructed four-chamber views of the heart on chest CT can detect right ventricular enlargement. CT pulmonary angiography (CTPA) is the gold standard in the diagnostic workup of suspected PE. Therefore, it can serve as a link between diagnosis and risk stratification strategies. We developed a weakly supervised deep learning algorithm, with an emphasis on a novel attention mechanism, to automatically classify RV strain on CTPA. Our method is a 3D DenseNet model with integrated 3D residual attention blocks. We evaluated our model on a dataset of CTPAs of emergency department (ED) PE patients. This model achieved an area under the receiver operating characteristic curve (AUC) of 0.88 for classifying RV strain. The model showed a sensitivity of 87% and a specificity of 83.7%. Our solution outperforms state-of-the-art 3D CNN networks. The proposed design allows for a fully automated network that can be trained easily in an end-to-end manner without requiring computationally intensive and time-consuming preprocessing or strenuous labeling of the data. We infer that unmarked CTPAs can be used for effective RV strain classification. This could be used as a second reader, alerting for high-risk PE patients. To the best of our knowledge, there are no previous deep learning-based studies that attempted to solve this problem.
Weakly Supervised Attention Model for RV Strain Classification from Volumetric CTPA Scans
In the present article we propose a new hybrid shape function for wormholes (WHs) in modified $f(R,T)$ gravity. The proposed shape function satisfies the conditions of WH geometry. The geometrical behavior of the WH solutions is discussed in both the anisotropic and isotropic cases. The stability of this model is also established by determining the equilibrium condition. The radial null energy condition and the weak energy condition are validated for the proposed shape function, indicating the absence of exotic matter in modified $f(R,T)$ gravity.
Wormhole model with a hybrid shape function in f(R,T) gravity
Audio-visual multi-modal modeling has been demonstrated to be effective in many speech related tasks, such as speech recognition and speech enhancement. This paper introduces a new time-domain audio-visual architecture for target speaker extraction from monaural mixtures. The architecture generalizes the previous TasNet (time-domain speech separation network) to enable multi-modal learning and meanwhile extends the classical audio-visual speech separation from the frequency domain to the time domain. The main components of the proposed architecture include an audio encoder, a video encoder that extracts lip embeddings from video streams, a multi-modal separation network and an audio decoder. Experiments on simulated mixtures based on the recently released LRS2 dataset show that our method can bring 3 dB+ and 4 dB+ Si-SNR improvements on two- and three-speaker cases respectively, compared to audio-only TasNet and frequency-domain audio-visual networks.
Time Domain Audio Visual Speech Separation
The sum of the time component of the $\Sigma$ term and the induced pseudoscalar term in the axial current is shown to be the t-channel pion pole in the Born terms for pion electroproduction near threshold. We also show that this $\Sigma$ term represents the charged pseudoscalar quark density matrix elements in the nucleon and manifests itself in the $L_0^+$ amplitude in this reaction.
$\Sigma$-Like Term in Pion Electroproduction Near Threshold
We indicate the tentative source of instability in the two-dimensional black hole background. There are relevant operators among the tachyon and the higher level vertex operators in the conformal field theory. Connection of this instability with Hawking radiation is not obvious. The situation is somewhat analogous to fields in the background of a negative mass Euclidean Schwarzschild solution (in four dimensions). Speculation is made about decay of the Minkowski black hole into finite temperature flat space.
Instabilities in the gravitational background and string theory
This paper considers the distributed information bottleneck (D-IB) problem for a primitive Gaussian diamond channel with two relays and MIMO Rayleigh fading. The channel state is an independent and identically distributed (i.i.d.) process known at the relays but unknown to the destination. The relays are oblivious, i.e., they are unaware of the codebook and treat the transmitted signal as a random process with known statistics. The bottleneck constraints prevent the relays from communicating the channel state information (CSI) perfectly to the destination. To evaluate the bottleneck rate, we provide an upper bound by assuming that the destination node knows the CSI and that the relays can cooperate with each other, as well as two achievable schemes with simple symbol-by-symbol relay processing and compression. Numerical results show that the lower bounds obtained by the proposed achievable schemes can come close to the upper bound on a wide range of relevant system parameters.
Distributed Information Bottleneck for a Primitive Gaussian Diamond MIMO Channel
We consider global monopoles as well as black holes with global monopole hair in Einstein-Goldstone model with a cosmological constant in four spacetime dimensions. Similar to the $\Lambda=0$ case, the mass of these solutions defined in the standard way diverges. We use a boundary counterterm subtraction method to compute the mass and action of $\Lambda \neq 0$ configurations. The mass of the asymptotically de Sitter solutions computed in this way turns out to take positive values in a specific parameter range and, for a relaxed set of asymptotic boundary conditions, yields a counterexample to the maximal mass conjecture.
Global monopoles, cosmological constant and maximal mass conjecture
Besov-type and Triebel-Lizorkin-type spaces $\dot B^{s,\tau}_{p,q}$ and $\dot F^{s,\tau}_{p,q}$ on $\mathbb{R}^n$ consist of a general family of function spaces that cover not only the well-known Besov and Triebel-Lizorkin spaces $\dot B^{s}_{p,q}$ and $\dot F^{s}_{p,q}$ (when $\tau=0$) but also several other spaces of interest, such as Morrey spaces and $Q$ spaces. In this memoir, we introduce and study matrix-weighted versions $\dot B^{s,\tau}_{p,q}(W)$ and $\dot F^{s,\tau}_{p,q}(W)$ of these general function spaces on $\mathbb{R}^n$, where $W$ is a matrix-valued Muckenhoupt $A_p$ weight on $\mathbb R^n$. Our contributions include several characterizations of these spaces in terms of both the $\varphi$-transform of Frazier and Jawerth and the related sequence spaces $\dot b^{s,\tau}_{p,q}(W)$ and $\dot f^{s,\tau}_{p,q}(W)$, almost diagonal conditions that imply the boundedness of weakly defined operators on these spaces, and consequences for the boundedness of classical operators like pseudo-differential operators, trace operators, and Calderon-Zygmund operators. Results of this type are completely new on this level of generality, but many of them also improve the known results in the unweighted spaces $\dot B^{s,\tau}_{p,q}$ and $\dot F^{s,\tau}_{p,q}$ or, with $\tau=0$, in the weighted spaces $\dot B^{s}_{p,q}(W)$ and $\dot F^{s}_{p,q}(W)$. Several of our results are conveniently stated in terms of a new concept of the $A_p$-dimension $d\in[0,n)$ of a matrix weight $W\in A_p$ on $\mathbb R^n$ and, in several cases, the obtained estimates are shown to be sharp. In particular, for certain parameter ranges, we are able to characterize the sharp almost diagonal conditions that imply the boundedness of operators on these spaces.
Matrix-Weighted Besov-Type and Triebel-Lizorkin-Type Spaces
I comment on Ulvi Yurtsever's result, which states that the entropy of a truncated bosonic Fock space is given by a holographic bound when the energy of the Fock states is constrained gravitationally. The derivation given in Yurtsever's paper contains a subtle mistake, which invalidates the result. A more restrictive, non-holographic entropy bound is derived.
Entropy bound and local quantum field theory
We present multiwavelength (X-ray/optical/near-infrared/millimetre) observations of GRB 051022 between 2.5 hours and ~1.15 yr after the event. It is the most intense gamma-ray burst (~ 10^-4 erg cm^-2) detected by HETE-2, with the exception of the nearby GRB 030329. Optical and near-infrared observations did not detect the afterglow despite a strong afterglow at X-ray wavelengths. Millimetre observations at Plateau de Bure (PdB) detected a source and a flare, confirming the association of this event with a moderately bright (R = 21.5) galaxy. Spectroscopic observations of this galaxy show strong [O II], Hbeta and [O III] emission lines at a redshift of 0.809. The spectral energy distribution of the galaxy implies Av (rest frame) = 1.0 and a starburst occurring ~ 25 Myr ago, during which the star formation rate reached >= 25 Msun/yr. In conjunction with the spatial extent (~ 1''), this suggests a very luminous (Mv = -21.8) blue compact galaxy, for which we also find with Z Zsun. The X-ray spectrum shows evidence of considerable absorption by neutral gas with NH, X-ray = 3.47(+0.48/-0.47) x 10^22 cm^-2 (rest frame). Absorption by dust in the host galaxy at z = 0.809 certainly cannot account for the non-detection of the optical afterglow, unless the dust-to-gas ratio is quite different from that seen in our Galaxy (i.e. large dust grains). It is likely that the afterglow of the dark GRB 051022 was extinguished along the line of sight by an obscured, dense star-forming region in a molecular cloud within the parent host galaxy. This galaxy differs from most GRB hosts in being brighter than L* by a factor of 3. We have also derived a SFR ~ 50 Msun/yr and predict that this host galaxy will be detected at sub-mm wavelengths.
The dark nature of GRB 051022 and its host galaxy
Over the last few years, we have witnessed an increasing amount of data generated from non-Euclidean domains, usually represented as graphs with complex relationships, and Graph Neural Networks (GNNs) have gained high interest because of their potential in processing graph-structured data. In particular, there is a strong interest in exploring the possibility of performing convolution on graphs using an extension of the GNN architecture, generally referred to as Graph Convolutional Neural Networks (ConvGNNs). Convolution on graphs has been achieved mainly in two forms: spectral and spatial convolutions. Due to the higher flexibility in exploring and exploiting the graph structure of data, there has recently been increasing interest in investigating the possibilities that the spatial approach can offer. The idea of adapting the network behaviour to the inputs it processes, so as to maximize overall performance, has aroused much interest in the neural network literature over the years. This paper presents a novel method to adapt the behaviour of a ConvGNN to the input, proposing a way to perform spatial convolution on graphs using input-specific filters, which are dynamically generated from node feature vectors. The experimental assessment confirms the capabilities of the proposed approach, which achieves satisfactory results using a low number of filters.
Adaptive Filters in Graph Convolutional Neural Networks
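A minimal numpy sketch of the dynamic-filter idea: a spatial graph convolution in which each node's filter weights are generated from its own feature vector. The single linear generating map, the toy graph, and all dimensions are invented for illustration and are not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy graph: 4 nodes, feature dim 3, output dim 2
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = rng.normal(size=(4, 3))

# filter-generating network: maps a node's features to its own
# (in_dim x out_dim) convolution weights (a single linear map here)
W_gen = rng.normal(size=(3, 3 * 2)) * 0.1

def dynamic_graph_conv(A, X, W_gen):
    n, d = X.shape
    out_dim = W_gen.shape[1] // d
    A_hat = A + np.eye(n)                      # add self-loops
    A_hat /= A_hat.sum(axis=1, keepdims=True)  # row-normalise
    H = A_hat @ X                              # aggregate neighbour features
    # each node applies its own dynamically generated filter
    filters = (X @ W_gen).reshape(n, d, out_dim)
    return np.einsum('nd,ndo->no', H, filters)

out = dynamic_graph_conv(A, X, W_gen)
print(out.shape)  # (4, 2)
```

In contrast to a standard ConvGNN layer, where one shared weight matrix is applied at every node, here the weights vary per node as a function of the input features.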
Terahertz time-domain spectroscopy (THz-TDS) is a non-invasive, non-contact and label-free technique for biological and chemical sensing, as THz radiation is less energetic and lies in the characteristic vibrational frequency regime of proteins and DNA molecules. However, THz-TDS is less sensitive for the detection of micro-organisms of size equal to or less than $ \lambda/100 $ (where $ \lambda $ is the wavelength of the incident THz wave) and of molecules in extremely low-concentration solutions (e.g., a few femtomolar). After the advent of high-throughput fabrication of nanostructures, nanoantennas and metamaterials were found to be indispensable in enhancing the sensitivity of conventional THz-TDS. These nanostructures lead to strong THz field enhancement which, when in resonance with the absorption spectrum of absorptive molecules, causes significant changes in the magnitude of the transmission spectrum, thereby enhancing the sensitivity and allowing detection of molecules and biomaterials in extremely low-concentration solutions. Here, we review the recent developments in ultra-sensitive and selective nanogap biosensors. We also provide an in-depth review of various high-throughput nanofabrication techniques and discuss the physics behind the field enhancement in sub-skin-depth as well as sub-nanometer-sized nanogaps. We introduce finite-difference time-domain (FDTD) and molecular dynamics (MD) simulation tools to study THz biomolecular interactions. Finally, we provide a comprehensive account of nanoantenna-enhanced sensing of viruses (such as H1N1) and of biomolecules such as artificial sweeteners, which are addictive and carcinogenic.
Nanoantenna Enhanced Terahertz Interaction of Biomolecules
We investigate the nature of the double color-magnitude sequence observed in the Gaia DR2 HR diagram of stars with high transverse velocities. The stars in the reddest-color sequence are likely dominated by the dynamically-hot tail of the thick disk population. Information from Nissen & Schuster (2010) and from the APOGEE survey suggests that stars in the blue-color sequence have elemental abundance patterns that can be explained by this population having a relatively low star-formation efficiency during its formation. In dynamical and orbital spaces, such as the `Toomre diagram', the two sequences show a significant overlap, but with a tendency for stars on the blue-color sequence to dominate regions with no or retrograde rotation and high total orbital energy. In the plane defined by the maximal vertical excursion of the orbits versus their apocenters, stars of both sequences redistribute into discrete wedges. We conclude that stars which are typically assigned to the halo in the solar vicinity are actually both accreted stars lying along the blue sequence in the HR diagram, and the low velocity tail of the old Galactic disk, possibly dynamically heated by past accretion events. Our results imply that a halo population formed in situ and responsible for the early chemical enrichment prior to the formation of the thick disk is yet to be robustly identified, and that what has been defined as the stars of the in situ stellar halo of the Galaxy may be in fact fossil records of its last significant merger.
In disguise or out of reach: first clues about in-situ and accreted stars in the stellar halo of the Milky Way from Gaia DR2
Two major causes of death in the United States and worldwide are stroke and myocardial infarction. The underlying cause of both is thrombi released from ruptured or eroded unstable atherosclerotic plaques that occlude vessels in the heart (myocardial infarction) or the brain (stroke). Clinical studies show that plaque composition plays a more important role than lesion size in plaque rupture or erosion events. To determine the plaque composition, various cell types in 3D cardiovascular immunofluorescent images of plaque lesions are counted. However, counting these cells manually is expensive, time-consuming, and prone to human error. These challenges of manual counting motivate the need for an automated approach to localize and count the cells in images. The purpose of this study is to develop an automatic approach to accurately detect and count cells in 3D immunofluorescent images with minimal annotation effort. In this study, we used a weakly supervised learning approach to train the HoVer-Net segmentation model using point annotations to detect nuclei in fluorescent images. The advantage of using point annotations is that they require less effort than pixel-wise annotations. To train the HoVer-Net model using point annotations, we adopted a widely used cluster labeling approach to transform point annotations into accurate binary masks of cell nuclei. Traditionally, these approaches have generated binary masks from point annotations, leaving a region around the object unlabeled (which is typically ignored during model training). However, these areas may contain important information that helps determine the boundary between cells. Therefore, we used the entropy minimization loss function in these areas to encourage the model to output more confident predictions on the unlabeled areas. Our comparison studies indicate that the HoVer-Net model trained using our weakly ...
Weakly Supervised Deep Instance Nuclei Detection using Points Annotation in 3D Cardiovascular Immunofluorescent Images
Single visual object tracking from an unmanned aerial vehicle (UAV) poses fundamental challenges such as object occlusion, small-scale objects, background clutter, and abrupt camera motion. To tackle these difficulties, we propose to integrate the 3D structure of the observed scene into a detection-by-tracking algorithm. We introduce a pipeline that combines a model-free visual object tracker, a sparse 3D reconstruction, and a state estimator. The 3D reconstruction of the scene is computed with an image-based Structure-from-Motion (SfM) component that enables us to leverage a state estimator in the corresponding 3D scene during tracking. By representing the position of the target in 3D space rather than in image space, we stabilize the tracking during ego-motion and improve the handling of occlusions, background clutter, and small-scale objects. We evaluated our approach on prototypical image sequences, captured from a UAV with low-altitude oblique views. For this purpose, we adapted an existing dataset for visual object tracking and reconstructed the observed scene in 3D. The experimental results demonstrate that the proposed approach outperforms methods using plain visual cues as well as approaches leveraging image-space-based state estimations. We believe that our approach can be beneficial for traffic monitoring, video surveillance, and navigation.
Integration of the 3D Environment for UAV Onboard Visual Object Tracking
Let $A$ be an abelian variety over an algebraically closed field. We show that $A$ is the automorphism group scheme of some smooth projective variety if and only if $A$ has only finitely many automorphisms as an algebraic group. This generalizes a result of Lombardo and Maffei for complex abelian varieties.
Abelian varieties as automorphism groups of smooth projective varieties in arbitrary characteristics
In the development of modern cockpits, there is a trend towards the use of large displays that combine information about air navigation and the status of aircraft equipment. Flight and equipment performance information generated by multiple flight control systems should be graphically displayed in an easy-to-read form on widescreen multifunction displays. It is usually generated by independent systems whose outputs must not interfere with one another, in accordance with the requirements of the ARINC 653 standard. This paper presents a solution to the problem of displaying ARINC 653 applications, which further improves security and portability when running multiple applications on a single screen of one physical device.
Cross-platform graphics subsystem for an ARINC 653-compatible real-time operating system
English translation of Paul Drude's 1902 investigation into the factors affecting the self-resonance of single-layer solenoid coils. The ratio of the self-resonant half-wavelength to the conductor length is found to be principally dependent on the height-to-diameter ratio of the coil and the dielectric constants of any insulating materials involved in the construction.
On the construction of Tesla transformers. Period of oscillation and self-inductance of the coil. (Zur construction von Teslatransformatoren. Schwingungsdauer und Selbstinduction von Drahtspulen)
We revisit supersymmetric solutions to five dimensional ungauged N=1 supergravity with dynamic hypermultiplets. In particular we focus on a truncation to the axion-dilaton contained in the universal hypermultiplet. The relevant solutions are fibrations over a four-dimensional Kahler base with a holomorphic axion-dilaton. We focus on solutions with additional symmetries and classify Killing vectors which preserve the additional structure imposed by supersymmetry; in particular we extend the existing classification of solutions with a space-like U(1) isometry to the case where the Killing vector is rotational. We elaborate on general geometrical aspects which we illustrate in some simple examples. We especially discuss solutions describing the backreaction of M2-branes, which for example play a role in the black hole deconstruction proposal for microstate geometries.
Unlocking the Axion-Dilaton in 5D Supergravity
Hierarchical structure formation inevitably leads to the formation of supermassive binary black holes (BBHs) with a sub-parsec separation in galactic nuclei. However, to date there has been no unambiguous detection of such systems. In an effort to search for potential observational signatures of supermassive BBHs, we performed high-resolution smoothed particle hydrodynamics (SPH) simulations of two black holes in a binary of moderate eccentricity surrounded by a circumbinary disk. Building on our previous work, which has shown that gas can periodically transfer from the circumbinary disk to the black holes when the binary is on an eccentric orbit, the current set of simulations focuses on the formation of the individual accretion disks, their evolution and mutual interaction, and the predicted radiative signature. The variation in mass transfer with orbital phase from the circumbinary disk induces periodic variations in the light curve of the two accretion disks at ultraviolet wavelengths, but not in the optical or near-infrared. Searches for this signal offer a promising method to detect supermassive BBHs.
A supermassive binary black hole with triple disks
Typically, locally repairable codes (LRCs) and regenerating codes have been studied independently of each other, and it has not been clear how the parameters of one relate to those of the other. In this paper, a novel connection between locally repairable codes and exact regenerating codes is established. Via this connection, locally repairable codes are interpreted as exact regenerating codes. Further, some of these codes are shown to perform better than time-sharing codes between minimum bandwidth regenerating and minimum storage regenerating codes.
A Connection Between Locally Repairable Codes and Exact Regenerating Codes
This paper presents a new hypothesis on a macro law in the universe, the law of increasing complexity, to formulate the assumption that the universe we observe and the biosphere on Earth are getting more diverse and complex with time. This formulation utilizes a quantitative definition of the complexity of organized matter, organized complexity (OC) [6]. We then apply this law to the coincidence (or fine-tuning) problem of the fundamental physical constants. We introduce a new principle, the principle of increasing complexity, based on the law of increasing complexity, and explain the coincidence with this new principle without using the anthropic principle. The principle implies that an (approximate) reduction of this macro law to fundamental physical laws would lead to a concrete analysis of the coincidence problem of the fundamental physical constants.
On the Arrow of Time and Organized Complexity in the Universe
Path sampling approaches have become invaluable tools to explore the mechanisms and dynamics of so-called rare events that are characterized by transitions between metastable states separated by sizeable free energy barriers. Their practical application, in particular to ever more complex molecular systems, is, however, not entirely trivial. Focusing on replica exchange transition interface sampling (RETIS) and forward flux sampling (FFS), we discuss a range of analysis tools that can be used to assess the quality and convergence of such simulations which is crucial to obtain reliable results. The basic ideas of a step-wise evaluation are exemplified for the study of nucleation in several systems with different complexity, providing a general guide for the critical assessment of RETIS and FFS simulations.
Practical guide to replica exchange transition interface sampling and forward flux sampling
A "book with k pages" consists of a straight line (the "spine") and k half-planes (the "pages"), such that the boundary of each page is the spine. If a graph is drawn on a book with k pages in such a way that the vertices lie on the spine, and each edge is contained in a page, the result is a k-page book drawing (or simply a k-page drawing). The k-page crossing number nu_k(G) of a graph G is the minimum number of crossings in a k-page drawing of G. In this paper we investigate the k-page crossing numbers of complete graphs K_n. We use semidefinite programming techniques to give improved lower bounds on nu_k(K_n) for various values of k. We also use a maximum satisfiability reformulation to calculate the exact value of nu_k(K_n) for several values of k and n. Finally, we investigate the best construction known for drawing K_n in k pages, calculate the resulting number of crossings, and discuss this upper bound in the light of the new results reported in this paper.
Improved lower bounds on book crossing numbers of complete graphs
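The basic combinatorial fact behind k-page drawings, that two edges on the same page cross exactly when their endpoints interleave along the spine, makes crossing counting straightforward. A brute-force counter (an illustration only; the vertex ordering, edge list, and page assignment below are arbitrary):

```python
from itertools import combinations

def page_crossings(spine, edges, page_of):
    """Count crossings of a k-page book drawing.

    spine   : list of vertices in spine order
    edges   : list of (u, v) pairs
    page_of : dict mapping each edge to its page index
    Two edges on the same page cross iff their endpoints
    strictly interleave along the spine."""
    pos = {v: i for i, v in enumerate(spine)}
    crossings = 0
    for e, f in combinations(edges, 2):
        if page_of[e] != page_of[f]:
            continue  # edges on different pages never cross
        a, b = sorted((pos[e[0]], pos[e[1]]))
        c, d = sorted((pos[f[0]], pos[f[1]]))
        if a < c < b < d or c < a < d < b:
            crossings += 1
    return crossings

# K4 on 2 pages: edges (0,2) and (1,3) interleave on the spine;
# placing them on different pages removes the only crossing.
spine = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (2, 3), (0, 3), (0, 2), (1, 3)]
page_of = {e: 0 for e in edges}
page_of[(1, 3)] = 1
print(page_crossings(spine, edges, page_of))  # -> 0
```

With every edge on a single page the same drawing has one crossing, which is why nu_1(K_4) = 1 while nu_2(K_4) = 0.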
The theory of (tight) wavelet frames has been extensively studied in the past twenty years and they are currently widely used for image restoration and other image processing and analysis problems. The success of wavelet frame based models, including balanced approach and analysis based approach, is due to their capability of sparsely approximating piecewise smooth functions like images. Motivated by the balanced approach and analysis based approach, we shall propose a wavelet frame based $\ell_0$ minimization model, where the $\ell_0$ "norm" of the frame coefficients is penalized. We adapt the penalty decomposition (PD) method to solve the proposed optimization problem. Numerical results showed that the proposed model solved by the PD method can generate images with better quality than those obtained by either analysis based approach or balanced approach in terms of restoring sharp features as well as maintaining smoothness of the recovered images. Some convergence analysis of the PD method will also be provided.
$\ell_0$ Minimization for Wavelet Frame Based Image Restoration
Under a nonlinear regression model with univariate response an algorithm for the generation of sequential adaptive designs is studied. At each stage, the current design is augmented by adding $p$ design points where $p$ is the dimension of the parameter of the model. The augmenting $p$ points are such that, at the current parameter estimate, they constitute the locally D-optimal design within the set of all saturated designs. Two relevant subclasses of nonlinear regression models are focused on, which were considered in previous work of the authors on the adaptive Wynn algorithm: firstly, regression models satisfying the `saturated identifiability condition' and, secondly, generalized linear models. Adaptive least squares estimators and adaptive maximum likelihood estimators in the algorithm are shown to be strongly consistent and asymptotically normal, under appropriate assumptions. For both model classes, if a condition of `saturated D-optimality' is satisfied, the almost sure asymptotic D-optimality of the generated design sequence is implied by the strong consistency of the adaptive estimators employed by the algorithm. The condition states that there is a saturated design which is locally D-optimal at the true parameter point (in the class of all designs).
A $p$-step-ahead sequential adaptive algorithm for D-optimal nonlinear regression design
We extend the symbol calculus and study the limit operator theory for $\sigma$-compact, \'{e}tale and amenable groupoids, in the Hilbert space case. This approach not only unifies various existing results, which include the cases of exact groups and discrete metric spaces with Property A, but also establishes new limit operator theories for group/groupoid actions and uniform Roe algebras of groupoids. In the process, we extend a monumental result by Exel, Nistor and Prudhon, showing that the invertibility of an element in the groupoid $C^*$-algebra of a $\sigma$-compact amenable groupoid with a Haar system is equivalent to the invertibility of its images under regular representations.
Limit operator theory for groupoids
Nowadays, people have started using online reservation systems to plan their vacations, since these systems offer a vast number of choices. Selecting when and where to go from this large set of options is getting harder. In addition, consumers can sometimes miss the better options due to the wealth of information found on online reservation systems. In this sense, personalized services such as recommender systems play a crucial role in decision making. Two traditional recommendation techniques are content-based and collaborative filtering. While both methods have their advantages, they also have certain disadvantages, some of which can be solved by combining the two techniques to improve the quality of the recommendation. The resulting system is known as a hybrid recommender system. This paper presents a new hybrid hotel recommendation system, developed by combining content-based and collaborative filtering approaches, that recommends customers the hotels they need and saves them time.
Hotel Recommendation System Based on User Profiles and Collaborative Filtering
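The combination of content-based and collaborative filtering described in this abstract can be sketched as a simple weighted hybrid. The scoring functions and the blending weight `alpha` below are illustrative assumptions, not the paper's actual system:

```python
import numpy as np

def content_scores(user_profile, item_features):
    """Cosine similarity between a user's preference vector and item feature vectors."""
    norms = np.linalg.norm(item_features, axis=1) * np.linalg.norm(user_profile) + 1e-12
    return item_features @ user_profile / norms

def collaborative_scores(ratings, user_idx):
    """User-based CF: weight other users' ratings by cosine similarity to the target user."""
    target = ratings[user_idx]
    sims = ratings @ target / (
        np.linalg.norm(ratings, axis=1) * np.linalg.norm(target) + 1e-12)
    sims[user_idx] = 0.0                       # exclude the target user themself
    return sims @ ratings / (np.abs(sims).sum() + 1e-12)

def hybrid_scores(ratings, user_idx, user_profile, item_features, alpha=0.5):
    """Weighted hybrid: alpha blends the collaborative and content-based scores."""
    return (alpha * collaborative_scores(ratings, user_idx)
            + (1 - alpha) * content_scores(user_profile, item_features))
```

Items (here, hotels) would then be ranked by the resulting score vector; more sophisticated hybrids switch or cascade between the two components instead of blending linearly.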
In this work, we introduce the notion of Context-Based Prediction Models. A Context-Based Prediction Model determines the probability of a user's action (such as a click or a conversion) solely by relying on user and contextual features, without considering any specific features of the item itself. We have identified numerous valuable applications for this modeling approach, including training an auxiliary context-based model to estimate click probability and incorporating its prediction as a feature in CTR prediction models. Our experiments indicate that this enhancement brings significant improvements in offline and online business metrics while having minimal impact on the cost of serving. Overall, our work offers a simple and scalable, yet powerful approach for enhancing the performance of large-scale commercial recommender systems, with broad implications for the field of personalized recommendations.
Unleash the Power of Context: Enhancing Large-Scale Recommender Systems with Context-Based Prediction Models
We briefly review the recent progress in the exclusive determination of |V(ub)| using QCD sum rules on the light cone.
Obtaining |V(ub)| exclusively: a theoretical perspective
The period enforcer algorithm for self-suspending real-time tasks is a technique for suppressing the "back-to-back" scheduling penalty associated with deferred execution. Originally proposed in 1991, the algorithm has attracted renewed interest in recent years. This note revisits the algorithm in the light of recent developments in the analysis of self-suspending tasks, carefully re-examines and explains its underlying assumptions and limitations, and points out three observations that have not been made in the literature to date: (i) period enforcement is not strictly superior (compared to the base case without enforcement) as it can cause deadline misses in self-suspending task sets that are schedulable without enforcement; (ii) to match the assumptions underlying the analysis of the period enforcer, a schedulability analysis of self-suspending tasks subject to period enforcement requires a task set transformation for which no solution is known in the general case, and which is subject to exponential time complexity (with current techniques) in the limited case of a single self-suspending task; and (iii) the period enforcer algorithm is incompatible with all existing analyses of suspension-based locking protocols, and can in fact cause ever-increasing suspension times until a deadline is missed.
A Note on the Period Enforcer Algorithm for Self-Suspending Tasks
Partially Observable Markov Decision Processes (POMDPs) are rich environments often used in machine learning. But the issue of information and causal structures in POMDPs has been relatively little studied. This paper presents the concepts of equivalent and counterfactually equivalent POMDPs, where agents cannot distinguish which environment they are in through any observations and actions. It shows that any POMDP is counterfactually equivalent, for any finite number of turns, to a deterministic POMDP with all uncertainty concentrated into the initial state. This allows a better understanding of POMDP uncertainty, information, and learning.
Counterfactual equivalence for POMDPs, and underlying deterministic environments
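The central construction here, that any stochastic environment is equivalent over finitely many turns to a deterministic one with all uncertainty moved into the initial state, can be illustrated by pre-drawing the random transition "tape" into the initial state. The toy environment below is a hypothetical example, not taken from the paper:

```python
import numpy as np

def run_stochastic(policy, T, seed):
    """A toy environment with random transition kicks drawn on the fly."""
    rng = np.random.default_rng(seed)
    s, trace = 0, []
    for _ in range(T):
        a = policy(s)
        noise = int(rng.random() < 0.5)        # stochastic transition kick
        s = (s + a + noise) % 5
        trace.append(s)
    return trace

def run_deterministic(policy, T, seed):
    """The same environment, but all randomness is concentrated in the initial
    state: the transition tape is drawn once up front, and every later step is
    a deterministic function of (state, tape)."""
    tape = (np.random.default_rng(seed).random(T) < 0.5).astype(int)
    s, trace = 0, []
    for t in range(T):
        a = policy(s)
        s = (s + a + int(tape[t])) % 5
        trace.append(s)
    return trace

policy = lambda s: s % 2
```

For any policy and horizon, both runs produce identical trajectories: from the agent's perspective the two environments are indistinguishable.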
Machine learning has proved invaluable for a range of different tasks, yet it also proved vulnerable to evasion attacks, i.e., maliciously crafted perturbations of input data designed to force mispredictions. In this paper we propose a novel technique to verify the security of decision tree models against evasion attacks with respect to an expressive threat model, where the attacker can be represented by an arbitrary imperative program. Our approach exploits the interpretability property of decision trees to transform them into imperative programs, which are amenable for traditional program analysis techniques. By leveraging the abstract interpretation framework, we are able to soundly verify the security guarantees of decision tree models trained over publicly available datasets. Our experiments show that our technique is both precise and efficient, yielding only a minimal number of false positives and scaling up to cases which are intractable for a competitor approach.
Certifying Decision Trees Against Evasion Attacks by Program Analysis
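The tree-to-program transformation at the heart of this approach can be sketched as follows; the tuple encoding of trees and the generated function are illustrative assumptions, not the authors' implementation:

```python
# A toy tree: internal node = (feature_index, threshold, left, right); leaf = ("leaf", label)
def tree_to_program(node, indent=1):
    """Emit the decision tree as nested if/else statements in Python source."""
    pad = "    " * indent
    if node[0] == "leaf":
        return f"{pad}return {node[1]}\n"
    feat, thr, left, right = node
    return (f"{pad}if x[{feat}] <= {thr}:\n" + tree_to_program(left, indent + 1)
            + f"{pad}else:\n" + tree_to_program(right, indent + 1))

tree = (0, 1.5, ("leaf", 0), (1, 0.5, ("leaf", 0), ("leaf", 1)))
src = "def classify(x):\n" + tree_to_program(tree)
namespace = {}
exec(src, namespace)          # the tree is now an ordinary imperative function
classify = namespace["classify"]
```

Once the tree is an imperative program, off-the-shelf program analyses (here, abstract interpretation) can reason about how perturbations of `x` flow through the branches.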
We study the long-distance asymptotic behavior of various correlation functions for the one-dimensional (1D) attractive Hubbard model in a partially polarized phase through the Bethe ansatz and conformal field theory approaches. We particularly find the oscillating behavior of these correlation functions with spatial power-law decay, of which the pair (spin) correlation function oscillates with a frequency $\Delta k_F$ ($2\Delta k_F$). Here $\Delta k_F=\pi(n_\uparrow-n_\downarrow)$ is the mismatch in the Fermi surfaces of spin-up and spin-down particles. Consequently, the pair correlation function in momentum space has peaks at the mismatch $k=\Delta k_F$, which has been observed in recent numerical work on this model. These singular peaks in momentum space together with the spatial oscillation suggest an analog of the Fulde-Ferrell-Larkin-Ovchinnikov (FFLO) state in the 1D Hubbard model. The parameter $\beta$ representing the lattice effect becomes prominent in critical exponents which determine the power-law decay of all correlation functions. We point out that the backscattering of unpaired fermions and bound pairs within their own Fermi points gives a microscopic origin of the FFLO pairing in 1D.
Asymptotic correlation functions and FFLO signature for the one-dimensional attractive Hubbard model
Tur\'an, Mitrinovi\'c-Adamovi\'c and Wilker type inequalities are deduced for regular Coulomb wave functions. The proofs are based on a Mittag-Leffler expansion for the regular Coulomb wave function, which may be of independent interest. Moreover, some complete monotonicity results concerning the Coulomb zeta functions and some interlacing properties of the zeros of Coulomb wave functions are given.
Tur\'an type inequalities for regular Coulomb wave functions
We discuss the static axially symmetric regular solutions, obtained recently in Einstein-Yang-Mills and Einstein-Yang-Mills-dilaton theory [1]. These asymptotically flat solutions are characterized by the winding number $n>1$ and the node number $k$ of the purely magnetic gauge field. The well-known spherically symmetric solutions have winding number $n=1$. The axially symmetric solutions satisfy the same relations between the metric and the dilaton field as their spherically symmetric counterparts. Exhibiting a strong peak along the $\rho$-axis, the energy density of the matter fields of the axially symmetric solutions has a torus-like shape. For fixed winding number $n$ with increasing node number $k$ the solutions form sequences. The sequences of magnetically neutral non-abelian axially symmetric regular solutions with winding number $n$ tend to magnetically charged abelian spherically symmetric limiting solutions, corresponding to ``extremal'' Einstein-Maxwell-dilaton solutions for finite values of $\gamma$ and to extremal Reissner-Nordstr\"om solutions for $\gamma=0$, with $n$ units of magnetic charge.
Static Axially Symmetric Einstein-Yang-Mills-Dilaton Solutions: I.Regular Solutions
We estimate a characteristic timescale for star formation in the spiral arms of disk galaxies, going from atomic hydrogen (HI) to dust-enshrouded massive stars. Drawing on high-resolution HI data from The HI Nearby Galaxy Survey and 24$\mu$m images from the Spitzer Infrared Nearby Galaxies Survey, we measure the average angular offset between the HI and 24$\mu$m emissivity peaks as a function of radius, for a sample of 14 nearby disk galaxies. We model these offsets assuming an instantaneous kinematic pattern speed, $\Omega_p$, and a timescale, t(HI-->24$\mu$m), for the characteristic time span between the dense HI phase and the formation of massive stars that heat the surrounding dust. Fitting for $\Omega_p$ and t(HI-->24$\mu$m), we find that the radial dependence of the observed angular offset (of the HI and 24$\mu$m emission) is consistent with this simple prescription; the resulting corotation radii of the spiral patterns are typically $R_{cor}\simeq 2.7 R_{s}$, consistent with independent estimates. The resulting values of t(HI-->24$\mu$m) for the sample are in the range 1--4 Myr. We have explored the possible impact of non-circular gas motions on the estimate of t(HI-->24$\mu$m) and have found it to be substantially less than a factor of 2. This implies a short timescale for the most intense phase of the ensuing star formation in spiral arms, and that a considerable fraction of molecular clouds exist only for a few Myr before forming stars. However, our analysis does not preclude that some molecular clouds persist considerably longer. If much of the star formation in spiral arms occurs within this short interval t(HI-->24$\mu$m), then star formation must be inefficient, in order to avoid the short-term depletion of the gas reservoir.
Geometrically Derived Timescales for Star Formation in Spiral Galaxies
A new $\mu$TCA DAQ system was introduced in the CANDLES experiment, with a SpaceWire-to-GigabitEthernet (SpaceWire-GigabitEthernet) network for data readout and Flash Analog-to-Digital Converters (FADCs). With SpaceWire-GigabitEthernet, we can construct a flexible DAQ network with multi-path access to the FADCs by using off-the-shelf computers. The FADCs are equipped with 8 event buffers, which act as de-randomizers to detect sequential decays from the background. SpaceWire-GigabitEthernet has high latency (about 100 $\mu$sec) due to the long turnaround time, while GigabitEthernet has high throughput. To reduce dead-time, we developed the DAQ system with 4 "crate-parallel" (modules in crates are read in parallel) reading threads. As a result, the readout time is reduced by a factor of 4: from 40 msec down to 10 msec. With this improved performance, higher background suppression is expected for the CANDLES experiment. Moreover, for energy calibration, an "event-parallel" reading process (events are read in parallel) is also introduced to reduce the measurement time. With 2 "event-parallel" reading processes, the data rate is doubled.
$\mu$TCA DAQ system and parallel reading in CANDLES experiment
Linear representations for a subclass of boolean symmetric functions selected by a parity condition are shown to constitute a generalization of the linear constraints on probabilities introduced by Boole. These linear constraints are necessary to compute probabilities of events whose mutual relations are arbitrarily specified by boolean formulas of the propositional calculus.
Generalizing Fuzzy Logic Probabilistic Inferences
The relationship between the densities of ground-state wave functions (i.e., the minimizers of the Rayleigh--Ritz (RR) variation principle) and the ground-state densities in density-functional theory (i.e., the minimizers of the Hohenberg--Kohn (HK) variation principle) is studied within the framework of convex conjugation, in a generic setting covering molecular systems, solid-state systems, and more. Having introduced admissible density functionals as functionals that produce the exact ground-state energy for a given external potential by minimizing over densities in the HK variation principle, necessary and sufficient conditions on such functionals are established to ensure that the RR ground-state densities and the HK ground-state densities are identical. We apply the results to molecular systems in the Born--Oppenheimer approximation. For any given potential $v \in L^{3/2}(\mathbb{R}^3) + L^{\infty}(\mathbb{R}^3)$, we establish a one-to-one correspondence between the mixed ground-state densities of the RR variation principle and the mixed ground-state densities of the HK variation principle when the Lieb density-matrix constrained-search universal density functional is taken as the admissible functional. A similar one-to-one correspondence is established between the pure ground-state densities of the RR variation principle and the pure ground-state densities obtained using the HK variation principle with the Levy--Lieb pure-state constrained-search functional. In other words, all physical ground-state densities (pure or mixed) are recovered with these functionals, and no false densities (i.e., minimizing densities that are not physical) exist. The importance of topology (i.e., the choice of Banach space of densities and potentials) is emphasized and illustrated. The relevance of these results for current-density-functional theory is examined.
Ground-state densities from the Rayleigh--Ritz variation principle and from density-functional theory
Each neutrino has a non-zero mass and, regardless of whether it is a Dirac or a Majorana mass, can possess both anapole and electric dipole moments. A connection between their form factors appears, for example, in the scattering of longitudinal neutrinos on spinless nuclei. We discuss a theory in which a mass consists of vector and axial-vector components responsible for the separation of the leptonic current into the vector and axial-vector parts of the same charge or dipole moment. Such a model can explain the absence of vector interactions of truly neutral neutrinos and the availability of an axial-vector structure of a Majorana mass. Thereby it relates the two neutrinos of a different nature. We derive an equation which unites the masses to a ratio of the anapole and electric dipole form factors of any lepton and its neutrino, as a consequence of their unification in families of doublets and singlets. This testifies in favor of the existence of left (right) dileptons and paradileptons of the axial-vector currents. Each of them answers to conservation of an axial-vector charge and any lepton flavor. Therefore, an axial-vector mass, anapole and electric dipole moment of the neutrino become proportional, respectively, to an axial-vector mass, anapole and electric dipole moment of a particle of the same families.
Family Structure of Leptons and Their Currents of an Axial Vector Nature
We investigate the formation of the methane line at 2.3 ${\rm \mu m}$ in the brown dwarf Gliese 229B. Two sets of model parameters, (a) $T_{\rm eff}=940 $K and $\log (g) =5.0$, and (b) $T_{\rm eff}=1030 $K and $ \log(g)=5.5$, are adopted, both of which provide an excellent fit of the synthetic continuum spectra to the observed flux over a wide range of wavelengths. In the absence of observational data for individual molecular lines, we set the additional parameters that are needed in order to model the individual lines by fitting the calculated flux with the observed flux at the continuum. A significant difference in the amount of flux at the core of the line is found between the two models, although the flux at the continuum remains the same. Hence, we show that if spectroscopic observation at $2.3{\rm \mu m}$ with a resolution as high as $R \simeq 200,000$ is possible, then a much better constraint on the surface gravity and on the metallicity of the object could be obtained by fitting the theoretical model of an individual molecular line to the observed data.
Line Formation in the Atmosphere of Brown Dwarf Gliese 229B: $CH_4$ at 2.3 micron
The energy losses at collisions of heavy multiply charged ions with light atoms, and the polarization losses at motion through matter, are considered under the circumstances that the ion charge Z>>1 and the relative colliding velocity v>>1, so that Z~v=<c, where c is the light velocity (atomic units are used). In this region of parameters the Born approximation is not justified. Simple formulas for the effective stopping are obtained, and a comparison with other theoretical results and experiments is given.
The Energy Losses of Relativistic Highly Charged Ions
We consider the coincident root loci consisting of the polynomials with at least two double roots and present a linear basis of the corresponding ideal in the algebra of symmetric polynomials in terms of the Jack polynomials with the special value of the parameter $\alpha = -2.$ As a corollary we present an explicit formula for the Hilbert--Poincar\'e series of this ideal and the generator of the minimal degree as a special Jack polynomial. A generalization to the case of the symmetric polynomials vanishing on the double shifted diagonals and the Macdonald polynomials specialized at $t^2 q = 1$ is also presented. We also give similar results for the interpolation Jack polynomials.
Coincident root loci and Jack and Macdonald polynomials for special values of the parameters
In this paper, we discuss the definition of Q factor for nonlinear oscillators. While available definitions of Q are often limited to linear resonators or oscillators with specific topologies, our definition is applicable to any oscillator as a figure of merit for its amplitude stability. It can be formulated rigorously and computed numerically from oscillator equations. With this definition, we calculate and analyze the Q factors of several oscillators of different types. The results confirm that the proposed Q formulation is a useful addition to the characterization techniques for oscillators.
Rigorous Q Factor Formulation and Characterization for Nonlinear Oscillators
It is demonstrated that there is a dynamic isospin breaking effect in the near threshold $\gamma^{*} N\to \pi N$ reaction due to the mass difference of the up and down quarks, which also causes isospin breaking in the $\pi N$ system. The photopion reaction is affected through final state $\pi N$ interactions (formally implemented by unitarity and time reversal invariance). It is also demonstrated that the near threshold $\gamma \vec{N} \to \pi N$ reaction is a practical reaction to measure isospin breaking in the $\pi N$ system, which was first predicted by Weinberg about 20 years ago but has never been experimentally tested.
Light Quark Mass Difference and Isospin Breaking In Electromagnetic Pion Production
One proves existence and uniqueness of strong solutions to stochastic porous media equations under minimal monotonicity conditions on the nonlinearity. In particular, we do not assume continuity of the drift or any growth condition at infinity.
Existence of Strong Solutions for Stochastic Porous Media Equation under General Monotonicity Conditions
This article recalls the birth of the first electron-positron storage ring AdA, and the construction of the higher energy collider ADONE, where early photon-photon collisions were observed. The events which led the Austrian physicist Bruno Touschek to propose and construct AdA are recalled, starting with early work on Wideroe's betatron during World War II, up to the construction of ADONE, and the theoretical contributions to radiative corrections to electron-positron collisions.
Birth of colliding beams in Europe, two photon studies at Adone
We consider a retailer selling a single product with limited on-hand inventory over a finite selling season. Customer demand arrives according to a Poisson process, the rate of which is influenced by a single action taken by the retailer (such as price adjustment, sales commission, advertisement intensity, etc.). The relationship between the action and the demand rate is not known in advance. However, the retailer is able to learn the optimal action "on the fly" as she maximizes her total expected revenue based on the observed demand reactions. Using the pricing problem as an example, we propose a dynamic "learning-while-doing" algorithm that only involves function value estimation to achieve a near-optimal performance. Our algorithm employs a series of shrinking price intervals and iteratively tests prices within the current interval using a set of carefully chosen parameters. We prove that the convergence rate of our algorithm is among the fastest of all possible algorithms in terms of asymptotic "regret" (the relative loss compared to the full information optimal solution). Our result closes the performance gaps between parametric and non-parametric learning and between a post-price mechanism and a customer-bidding mechanism. An important managerial insight from this research is that the values of information on both the parametric form of the demand function as well as each customer's exact reservation price are less important than prior literature suggests. Our results also suggest that firms would be better off performing dynamic learning and action concurrently rather than sequentially.
Close the Gaps: A Learning-while-Doing Algorithm for a Class of Single-Product Revenue Management Problems
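The shrinking-interval, function-value-only search described in this abstract can be sketched as follows. The Poisson demand model, grid size and shrinkage schedule below are illustrative assumptions, not the paper's exact algorithm or tuning:

```python
import numpy as np

def learn_price(demand_rate, lo, hi, rounds=6, grid=5, samples=200, seed=0):
    """Shrinking-interval search: test a grid of prices in the current interval,
    keep the empirical revenue maximizer, and halve the interval around it."""
    rng = np.random.default_rng(seed)
    for _ in range(rounds):
        prices = np.linspace(lo, hi, grid)
        # empirical revenue at each price, from observed Poisson demand counts
        revenue = [p * rng.poisson(demand_rate(p), samples).mean() for p in prices]
        best = prices[int(np.argmax(revenue))]
        half = (hi - lo) / 4                   # next interval has half the width
        lo, hi = max(lo, best - half), min(hi, best + half)
    return (lo + hi) / 2
```

Only noisy revenue observations are used: no parametric form of the demand curve and no knowledge of customers' reservation prices, which is the point of the "learning-while-doing" framing.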
In this paper we develop a relative version of T-duality in generalized complex geometry which we propose as a manifestation of mirror symmetry. Let M be an n-dimensional smooth real manifold, V a rank n real vector bundle on M, and nabla a flat connection on V. We define the notion of a nabla-semi-flat generalized complex structure on the total space of V. We show that there is an explicit bijective correspondence between nabla-semi-flat generalized complex structures on the total space of V and nabla(dual)-semi-flat generalized complex structures on the total space of the dual of V. Similarly we define semi-flat generalized complex structures on real n-torus bundles with section over an n-dimensional base and establish a similar bijective correspondence between semi-flat generalized complex structures on pairs of dual torus bundles. Along the way, we give methods of constructing generalized complex structures on the total spaces of vector bundles and torus bundles with sections. We also show that semi-flat generalized complex structures give rise to a pair of transverse Dirac structures on the base manifold. We give interpretations of these results in terms of relationships between the cohomology of torus bundles and their duals. We also study the ways in which our results generalize some well established aspects of mirror symmetry as well as some recent proposals relating generalized complex geometry to string theory.
Mirror Symmetry and Generalized Complex Manifolds
Quantum work fluctuation theorems are known to hold when the work is defined as the difference between the outcomes of projective measurements carried out on the Hamiltonian of the system at the initial and the final time instants of the experimental realization of the process. A recent study showed that the theorem breaks down if the measurement is of a more general nature, i.e. if a positive operator valued measurement is used, and the deviation vanishes only in the limit where the operators become projective in nature. We study a simple two-state system subjected to a unitary evolution under a Hamiltonian that is linearly dependent on time, and verify the validity of the above statement. We further define a weak value of work and show that the deviation from the exact work fluctuation theorems is much smaller in this formalism.
Exploring the extent of validity of quantum work fluctuation theorems in the presence of weak measurements
We have developed two computer algebra systems, meditor [Jolly:2007] and JAS [Kredel:2006]. These CAS systems are available as Java libraries. For the use-case of interactively entering and manipulating mathematical expressions, there is a need for a scripting front-end for our libraries. Most other CAS invent and implement their own scripting interface for this purpose. We, however, do not want to reinvent the wheel and propose to use a contemporary scripting language with access to Java code. In this paper we discuss the requirements for a scripting language in computer algebra and check whether the languages Python, Ruby, Groovy and Scala meet these requirements. We conclude that, with minor problems, any of these languages is suitable for our purpose.
How to turn a scripting language into a domain specific language for computer algebra
In the last decade or so there has been debate over the possibility that the fuzzy quantum nature of spacetime might decohere wavefronts emanating from very distant sources. Consequences of that could be "blurred" or "faded" images of compact structures in galaxies, primarily at z>1 for their emitted X-rays and gamma-rays, but perhaps even in ultraviolet through optical light at higher redshift. So far there are only inconclusive hints of this from z~4 active galactic nuclei and gamma-ray bursts viewed with Fermi and the Hubble Space Telescope. If correct though, that would impose a significant, fundamental resolution limit for galaxies out to z~8 in the era of the James Webb Space Telescope and the next generation of ground-based telescopes using adaptive optics.
Limits to Seeing High-Redshift Galaxies Due to Planck-Scale-Induced Blurring
We apply a new method based upon thermofield dynamics (TFD) to study entanglement of finite-spin systems with non-competitive external fields for both equilibrium and non-equilibrium cases. For the equilibrium finite-spin systems, the temperature dependence of the extended density matrices is derived using this method, and the effect of the non-competitive external field is demonstrated. For the non-equilibrium finite-spin systems, the time dependence of the extended density matrices and the extended entanglement entropies is derived in accordance with the von Neumann equation, and the dissipative dynamics of the finite-spin systems is discussed. Consequently, the applicability of the TFD-based method to describe entanglement is confirmed in both equilibrium and non-equilibrium cases with the external fields.
Dissipative dynamics of entangled finite-spin systems with non-competitive external fields
We consider a special solution to the 3D compressible Navier-Stokes system with and without the Coriolis force and dry friction, and find the respective initial data implying a finite-time gradient catastrophe.
Exact solutions to the compressible Navier-Stokes equations with the Coriolis and friction terms
I study theoretically quadrupolar topological insulators under an applied static electric field rotated along the crystal axis. I demonstrate that the energy spectrum of this structure is a Wannier-Stark ladder that is quantized and directly distinguishes between the topological phase, possessing localized corner states, and the trivial phase, lacking the corner states. These results may find applications in the characterization of rapidly emerging higher-order topological phases of light and matter.
Distinguishing trivial and topological quadrupolar insulators by Wannier-Stark ladders
Jets, jet-medium interaction and hydrodynamic evolution of fluctuations in initial parton density all lead to the final anisotropic dihadron azimuthal correlations in high-energy heavy-ion collisions. We remove the harmonic flow background and study the net correlations from different sources with different initial conditions within the AMPT model. We also study $\gamma$-hadron correlations which are only influenced by jet-medium interactions.
Initial fluctuations and dihadron and $\gamma$-hadron correlations in high-energy heavy ion collisions
With the tremendous growth of computing due to the wide usage of the internet, it is observed that some users are not able to keep their desktops properly protected with antivirus software. It also happens that we allow our friends, students and colleagues to sit at our networked PC. Users are often unaware that their workstations are unsecured, so someone else could be monitoring their flow of information, and their most important data could go haywire, resulting in the leakage of highly confidential data to unwanted or malicious users. Examples of such documents are the question papers designed by faculty members at various universities. Nowadays, question papers and many other confidential documents designed by faculty members pose one of the biggest security concerns for universities. In this paper we present a solution to overcome such situations using the concept of steganography. Steganography is a technique through which one can hide information inside some cover object. This technique, if used in a positive direction, could be of great help in solving this problem and others like it.
An approach to secure highly confidential documents of any size in the corporate or institutes having unsecured networks
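The kind of hiding that steganography provides can be illustrated with a minimal least-significant-bit scheme over raw bytes; this sketch is a generic textbook example, not the paper's specific method:

```python
def embed(cover, secret):
    """Hide secret bytes in the least significant bits of a cover byte string."""
    bits = [(b >> i) & 1 for b in secret for i in range(8)]   # LSB-first bit stream
    if len(bits) > len(cover):
        raise ValueError("cover object too small for the secret")
    out = bytearray(cover)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit        # overwrite only the lowest bit
    return bytes(out)

def extract(stego, n_bytes):
    """Recover n_bytes of hidden data from the LSBs of the stego object."""
    bits = [b & 1 for b in stego[: 8 * n_bytes]]
    return bytes(sum(bits[8 * k + i] << i for i in range(8)) for k in range(n_bytes))
```

In practice the cover object would be the pixel data of an image or the samples of an audio file, where flipping the lowest bit is imperceptible.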
In this paper we examine predictions from different models of nondiagonal parton distributions. This will be achieved by examining whether certain predictions of relationships between diagonal and nondiagonal parton distributions also hold after having evolved the different distributions.
Study of Nondiagonal Parton Distribution Models
In our previous work [Y. Angelopoulos, S. Aretakis, and D. Gajic, Late-time asymptotics for the wave equation on spherically symmetric stationary backgrounds, in Advances in Mathematics 323 (2018), 529-621] we showed that the coefficient in the precise leading-order late-time asymptotics for solutions to the wave equation with smooth, compactly supported initial data on Schwarzschild backgrounds is proportional to the time-inverted Newman-Penrose constant (TINP), that is, the Newman-Penrose constant of the associated time integral. The time integral (and hence the TINP constant) is canonically defined in the domain of dependence of any Cauchy hypersurface along which the stationary Killing field is non-vanishing. As a result, an explicit expression of the late-time polynomial tails was obtained in terms of initial data on Cauchy hypersurfaces intersecting the future event horizon to the future of the bifurcation sphere. In this paper, we extend the above result to Cauchy hypersurfaces intersecting the bifurcation sphere via a novel geometric interpretation of the TINP constant in terms of a modified gradient flux on Cauchy hypersurfaces. We show, without appealing to the time integral construction, that a general conservation law holds for these gradient fluxes. This allows us to express the TINP constant in terms of initial data on Cauchy hypersurfaces for which the time integral construction breaks down.
Asymptotics for scalar perturbations from a neighborhood of the bifurcation sphere
Many data-fitting applications require the solution of an optimization problem involving a sum of a large number of functions of a high-dimensional parameter. Here, we consider the problem of minimizing a sum of $n$ functions over a convex constraint set $\mathcal{X} \subseteq \mathbb{R}^{p}$ where both $n$ and $p$ are large. In such problems, sub-sampling as a way to reduce $n$ can offer substantial computational efficiency. Within the context of second-order methods, we first give quantitative local convergence results for variants of Newton's method where the Hessian is uniformly sub-sampled. Using random matrix concentration inequalities, one can sub-sample in a way that preserves the curvature information. Using such a sub-sampling strategy, we establish locally Q-linear and Q-superlinear convergence rates. We also give additional convergence results for when the sub-sampled Hessian is regularized by modifying its spectrum or by Levenberg-type regularization. Finally, in addition to Hessian sub-sampling, we consider sub-sampling the gradient as a way to further reduce the computational complexity per iteration. We use approximate matrix multiplication results from randomized numerical linear algebra (RandNLA) to obtain the proper sampling strategy, and we establish locally R-linear convergence rates. In such a setting, we also show that a very aggressive sample-size increase results in an R-superlinearly convergent algorithm. While the sample size depends on the condition number of the problem, our convergence rates are problem-independent, i.e., they do not depend on quantities related to the problem. Hence, our analysis here can be used to complement the results of our basic framework from the companion paper, [38], by exploring algorithmic trade-offs that are important in practice.
Sub-Sampled Newton Methods II: Local Convergence Rates
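The first variant the abstract mentions, keeping the full gradient while estimating the Hessian from a uniformly sub-sampled subset of the $n$ component functions, can be sketched for a toy problem. This is not the authors' code: the one-dimensional least-squares objective, the function name, and all sizes below are hypothetical, chosen only to show the sampling structure.

```python
# Illustrative sketch of uniform Hessian sub-sampling within Newton's
# method, for the toy problem  minimize F(x) = (1/n) * sum_i (a_i*x - b_i)^2
# over x in R. The full gradient is used; the Hessian is estimated from
# a uniformly sampled subset S of the n component functions.
import random

def subsampled_newton(a, b, x0=0.0, sample_size=4, iters=30, seed=0):
    rng = random.Random(seed)
    n, x = len(a), x0
    for _ in range(iters):
        # Full gradient: F'(x) = (2/n) * sum_i a_i * (a_i*x - b_i)
        grad = 2.0 / n * sum(ai * (ai * x - bi) for ai, bi in zip(a, b))
        # Sub-sampled Hessian: average f_i''(x) = 2*a_i^2 over a
        # uniformly drawn subset S instead of all n components.
        S = [rng.randrange(n) for _ in range(sample_size)]
        hess = 2.0 / sample_size * sum(a[i] ** 2 for i in S)
        # (Approximate) Newton step.
        x -= grad / hess
    return x
```

When the sampled curvature stays close to the true curvature (the role played by the matrix concentration inequalities in the paper), the iteration contracts toward the minimizer even though each step uses only `sample_size` of the `n` component Hessians.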
The implementation of a proof-of-concept Lattice Quantum Chromodynamics kernel on the Cell processor is described in detail, illustrating issues encountered in the porting process. The resulting code performs up to 45 GFlop/s per socket, indicating that the Cell processor is likely to be a good platform for future Lattice QCD calculations.
Performance of a Lattice Quantum Chromodynamics Kernel on the Cell Processor