text (string, lengths 121–2.54k) · summary (string, lengths 23–219)
Using the critical Casimir force, we study the attractive-strength dependence of diffusion-limited colloidal aggregation in microgravity. By means of near-field scattering we measure both the static and dynamic structure factor of the aggregates as the aggregation process evolves. The simultaneous measurement of both structure factors under ideal microgravity conditions allows us to uniquely determine the ratio of the hydrodynamic radius to the gyration radius as a function of the fractal dimension of the aggregate, enabling us to elucidate the internal structure of the aggregates as a function of the interaction potential. We find that the mass is evenly distributed in all objects, with fractal dimension ranging from 2.55 for a shallow potential to 1.75 for the deepest potential.
Dynamics of colloidal aggregation in microgravity by critical Casimir forces
Heat fluxes in district heating pipeline systems need to be controlled on timescales from minutes to an hour to adjust to evolving demand. There are two principal ways to control the heat flux: keep the temperature fixed but adjust the velocity of the carrier (typically water), or keep the velocity steady and adjust the temperature at the heat-producing source (heat plant). We study the latter scenario, commonly used for operations in Russia and Nordic countries, and analyze the dynamics of the heat front as it propagates through the system. Steady velocity flows in district heating pipelines are typically turbulent and incompressible. Changes in the heat, on either the consumption or the production side, lead to slow transients which last from tens of minutes to hours. We classify the relevant physical phenomena in a single pipe, e.g. the turbulent spread of the heat front. We then explain how to describe the dynamics of temperature and heat-flux evolution over a network efficiently, and illustrate the network solution on a simple example involving one producer and one consumer of heat connected by "hot" and "cold" pipes. We conclude the manuscript by motivating future research directions.
Thermal Transients in District Heating Systems
We investigate percolation in the Boolean model with convex grains in high dimension. For each dimension d, one fixes a compact, convex and symmetric set K $\subset$ R^d with non-empty interior. In the first setting, the Boolean model is a union of translates of K. In the second setting, the Boolean model is a union of translates of K or $\rho$K for a further parameter $\rho$ $\in$ (1, 2). We give the asymptotic behavior of the percolation probability and of the percolation threshold in the two settings.
Percolation in the Boolean model with convex grains in high dimension
We use de-identified data from Facebook Groups to study and provide a descriptive analysis of local gift-giving communities, in particular buy nothing (BN) groups. These communities allow people to give items they no longer need, reduce waste, and connect to local community. Millions of people have joined BN groups on Facebook, with an increasing pace through the COVID-19 pandemic. BN groups are more popular in dense and urban US counties with higher educational attainment. Compared to other local groups, BN groups have lower Facebook friendship densities, suggesting they bring together people who are not already connected. The interaction graphs in BN groups form larger strongly connected components, indicative of norms of generalized reciprocity. The interaction patterns in BN groups are similar to other local online gift-giving groups, with names containing terms such as "free stuff" and "pay it forward". This points to an interaction signature for local online gift-giving communities.
Community gifting groups on Facebook
We prove that, for the moduli space of flat SU(2)-connections on the torus, the Weyl quantization and the quantization using the quantum group of SL(2,C) are unitarily equivalent. This is done by comparing the matrices of the operators associated by the two quantizations to cosine functions. We also discuss the *-product of the Weyl quantization and show that it satisfies the product-to-sum formula for noncommutative cosines on the noncommutative torus.
The Weyl quantization and the quantum group quantization of the moduli space of flat SU(2)-connections on the torus are the same
An overarching scientific challenge for the coming decade is to discover the meaning of confinement, its relationship to dynamical chiral symmetry breaking (DCSB) - the origin of visible mass - and the connection between them. In progressing toward meeting this challenge, significant progress has been made using continuum methods in QCD. For example, a novel understanding of gluon and quark confinement and its consequences has begun to emerge from quantum field theory; a clear picture is being drawn of how hadron masses emerge dynamically in a universe with light quarks; and ground-state hadron wave functions with a direct connection to QCD are becoming available, which reveal that quark-quark correlations are crucial in hadron structure. There is growing experimental support for this body of predictions in both elastic and nucleon-to-resonance-transition form factors.
Running Masses in the Nucleon and its Resonances
We have used 2D Fabry-Perot absorption-line spectroscopy of the SB0 galaxy NGC 7079 to measure its bar pattern speed, $\Omega_p$. As in all previous cases of bar pattern speed measurements, we find a fast bar. We estimate that NGC 7079 has been undisturbed for at least the past Gyr or roughly 8 bar rotations, long enough for the bar to have slowed down significantly through dynamical friction if the disk is sub-maximal.
The Bar Pattern Speed in NGC 7079
Community structure is a commonly observed feature of real networks. The term refers to the presence in a network of groups of nodes (communities) that feature high internal connectivity but are poorly connected to each other. Whereas the issue of community detection has been addressed in several works, the problem of validating a partition of nodes as a good community structure for a real network has received considerably less attention and remains an open issue. We propose a set of indices for community structure validation of network partitions, based on a hypothesis testing procedure that assesses the distribution of links between and within communities. Using both simulations and real data, we illustrate how the proposed indices can be employed to compare the adequacy of different partitions of nodes as community structures in a given network, to assess whether two networks share the same or similar community structures, and to evaluate the performance of different network clustering algorithms.
On community structure validation in real networks
The high reflection of land vegetation in the near-infrared, the vegetation red edge (VRE), is often cited as a spectral biosignature for surface vegetation on exoplanets. The VRE is only a few percent change in reflectivity for a disk-integrated observation of present-day Earth. Here we show that the strength of Earth's VRE has increased over the past ~500 million years of land plant evolution and may continue to increase as solar luminosity increases and the planet warms, until either vegetation coverage is reduced, or the planet's atmosphere becomes opaque to light reflected off the surface. Early plants like mosses and liverworts, which dominated on land 500-400 million years ago, produce a weaker VRE, approximately half as strong as that of modern vegetation. We explore how the changes in land plants, as well as geological changes like ice coverage during ice-ages and interglacial periods, influence the detectability of the VRE through Earth's geological past. Our results show that the VRE has varied through the evolutionary history of land plants on Earth, and could continue to change into the future if hotter climate conditions became dominant, encouraging the spread of vegetation. Our findings suggest that older and hotter Earth-like planets are good targets for the search for a VRE signature. In addition, hot exoplanets and dry exoplanets with some water could be the best targets for a successful vegetation biosignature detection. As well as a strong red edge, lower cloud-fractions and low levels of atmospheric water vapor on such planets could make it easier to detect surface features in general.
The Vegetation Red Edge Biosignature Through Time on Earth and Exoplanets
The $\mu\tau$-reflection symmetric neutrino mass matrix can accommodate all known neutrino mixing angles, with the maximal atmospheric angle fixed, and predicts all the unknown CP phases of the lepton sector, but is unable to predict the absolute neutrino mass scale. Here we present a highly predictive scenario where $\mu\tau$-reflection is combined with a discrete abelian symmetry to enforce a texture-zero in the mass matrix of the heavy right-handed neutrinos that generate the light neutrino masses. Such a restriction reduces the free parameters of the low energy theory to zero, and the absolute neutrino mass scale is restricted to a few discrete regions, three in the few-meV range and one extending up to around 30 meV. The heavy neutrino sector depends only on two free parameters, which are further restricted to small regions by the requirement of successful leptogenesis. Mass-degenerate heavy neutrinos are possible in one case but there is no resonant enhancement of the CP asymmetry.
Mu-tau reflection symmetry with a high scale texture-zero
Analog meters equipped with one or multiple pointers are widely utilized to monitor vital devices' status in industrial sites for safety concerns. Reading these legacy meters autonomously remains an open problem, since estimating pointer origin and direction under the image-degrading factors encountered in the wild can be challenging. Nevertheless, high accuracy, flexibility, and real-time performance are demanded. In this work, we propose the Vector Detection Network (VDN) to detect analog meters' pointers given their images, eliminating the barriers to autonomously reading such meters using intelligent agents like robots. We treat the pointer as a two-dimensional vector, whose initial point coincides with the tip and whose direction runs tail-to-tip. The network estimates a confidence map, wherein the peak pixels are treated as the vectors' initial points, along with a two-layer scalar map, whose pixel values at each peak form the scalar components in the directions of the coordinate axes. We established the Pointer-10K dataset, comprising real-world analog meter images, to evaluate our approach, since no similar dataset is currently available. Experiments on the dataset demonstrate that our method generalizes well to various meters, is robust to harsh imaging factors, and runs in real time.
Vector Detection Network: An Application Study on Robots Reading Analog Meters in the Wild
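The vector representation described above lends itself to a simple decoding step. The following NumPy sketch (not the authors' implementation; the function name, the 3x3 peak test and the threshold are illustrative assumptions) shows how pointer vectors could be read off a confidence map and a two-layer scalar map.

import numpy as np

def decode_pointer_vectors(conf_map, scalar_map, thresh=0.5):
    """Decode 2D pointer vectors from a VDN-style output.

    conf_map   : (H, W) confidence map; peaks mark vector initial points.
    scalar_map : (2, H, W) per-pixel x/y scalar components of the direction.
    Returns a list of ((x0, y0), (dx, dy)) tuples.
    """
    H, W = conf_map.shape
    vectors = []
    for y in range(1, H - 1):
        for x in range(1, W - 1):
            c = conf_map[y, x]
            if c < thresh:
                continue
            # keep only local maxima in a 3x3 neighbourhood
            if c < conf_map[y - 1:y + 2, x - 1:x + 2].max():
                continue
            dx, dy = scalar_map[0, y, x], scalar_map[1, y, x]
            vectors.append(((x, y), (dx, dy)))
    return vectors

# toy usage with random maps
conf = np.random.rand(64, 64)
scal = np.random.randn(2, 64, 64)
print(len(decode_pointer_vectors(conf, scal, thresh=0.95)))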
Cooperative transmission in vehicular networks is studied using coalitional games and pricing in this paper. There are several vehicles and roadside units (RSUs) in the networks. Each vehicle has a desire to transmit with a certain probability, which represents its data burstiness. The RSUs can enhance the vehicles' transmissions by cooperatively relaying the vehicles' data. We consider two kinds of cooperation: cooperation among the vehicles and cooperation between the vehicles and the RSUs. First, vehicles cooperate to avoid interfering transmissions by scheduling the transmissions of the vehicles in each coalition. Second, an RSU can join a coalition to cooperatively relay the transmissions of the vehicles in that coalition. Moreover, due to the mobility of the vehicles, we introduce the notion of an encounter between a vehicle and an RSU to indicate the availability of the relay in space. To stimulate the RSUs' cooperative relaying for the vehicles, a pricing mechanism is applied. A non-transferable utility (NTU) game is developed to analyze the behaviors of the vehicles and RSUs. The stability of the formulated game is studied. Finally, we present and discuss numerical results for the 2-vehicle, 2-RSU scenario; the numerical results verify the theoretical analysis.
Coalitional Game Theoretic Approach for Cooperative Transmission in Vehicular Networks
In noncommutative algebraic geometry, noncommutative quadric hypersurfaces are major objects of study. In this paper, we focus on studying noncommutative conics $\operatorname{Proj_{nc}} A$ embedded into Calabi-Yau quantum projective planes. In particular, we give complete classifications of homogeneous coordinate algebras $A$ of noncommutative conics up to isomorphism of graded algebras, and of noncommutative conics $\operatorname{Proj_{nc}} A$ up to isomorphism of noncommutative schemes.
Noncommutative conics in Calabi-Yau quantum projective planes
Quantum key distribution (QKD) offers a reliable solution to communication problems that require long-term data security. For its widespread use, however, the rate and reach of QKD systems must be improved. Twin-field (TF) QKD is a step forward in this direction, with early demonstrations suggesting it can beat the current rate-versus-distance records. A recently introduced variant of TF-QKD is particularly suited for experimental implementation, and has been shown to offer a higher key rate than other variants in the asymptotic regime where users exchange an infinite number of signals. Here, we extend the security of this protocol to the finite-key regime, showing that it can overcome the fundamental bounds on point-to-point QKD with around $10^{10}$ transmitted signals. Within distance regimes of interest, our analysis offers higher key rates than those of alternative variants. Moreover, some of the techniques we develop are applicable to the finite-key analysis of other QKD protocols.
Tight finite-key security for twin-field quantum key distribution
The importance of lattice gauge field interpolation for our recent non-perturbative formulation of chiral gauge theory is emphasized. We illustrate how the requisite properties are satisfied by our recent four-dimensional non-abelian interpolation scheme, by going through the simpler case of U(1) gauge fields in two dimensions.
Lattice Gauge Field Interpolation for Chiral Gauge Theories
We present a method for determining mean light-weighted ages and abundances of Fe, Mg, C, N, and Ca, from medium resolution spectroscopy of unresolved stellar populations. The method, pioneered by Schiavon (2007), is implemented in a publicly available code called EZ_Ages. The method and error estimation are described, and the results tested for accuracy and consistency, by application to integrated spectra of well-known Galactic globular and open clusters. Ages and abundances from integrated light analysis agree with studies of resolved stars to within +/-0.1 dex for most clusters, and to within +/-0.2 dex for nearly all cases. The results are robust to the choice of Lick indices used in the fitting to within +/-0.1 dex, except for a few systematic deviations which are clearly categorized. The realism of our error estimates is checked through comparison with detailed Monte Carlo simulations. Finally, we apply EZ_Ages to the sample of galaxies presented in Thomas et al. (2005) and compare our derived values of age, [Fe/H], and [alpha/Fe] to their analysis. We find that [alpha/Fe] is very consistent between the two analyses, that ages are consistent for old (Age > 10 Gyr) populations, but show modest systematic differences at younger ages, and that [Fe/H] is fairly consistent, with small systematic differences related to the age systematics. Overall, EZ_Ages provides accurate estimates of fundamental parameters from medium resolution spectra of unresolved stellar populations in the old and intermediate-age regime, for the first time allowing quantitative estimates of the abundances of C, N, and Ca in these unresolved systems. The EZ_Ages code can be downloaded at http://www.ucolick.org/~graves/EZ_Ages.html
Measuring Ages and Elemental Abundances from Unresolved Stellar Populations: Fe, Mg, C, N, and Ca
We show that a 2-subset-regular self-complementary 3-uniform hypergraph with $n$ vertices exists if and only if $n\ge 6$ and $n$ is congruent to 2 modulo 4.
A note on 2-subset-regular self-complementary 3-uniform hypergraphs
This paper deals with the possible motion of nucleons in the nucleus that is due to realistic inter-nucleonic forces. This approach provides new, or better substantiated, conclusions about nuclear structure than those based on an effective interaction of nucleons, whereas the shell model of the nucleus may lead to questionable conclusions regarding nuclear structure and nuclear reaction mechanisms.
On the nucleons' motion in the nucleus as being due to realistic inter-nucleonic forces
Parkinson's disease (PD) is a common neurological disorder characterized by gait impairment. PD has no cure, and an impediment to developing a treatment is the lack of any accepted method to predict disease progression rate. The primary aim of this study was to develop a model using clinical measures and biomechanical measures of gait and postural stability to predict an individual's PD progression over two years. Data from 160 PD subjects were utilized. Machine learning models, including XGBoost and Feed Forward Neural Networks, were developed using extensive model optimization and cross-validation. The highest performing model was a neural network that used a group of clinical measures, achieved a positive predictive value (PPV) of 71% in identifying fast progressors, and explained a large portion (37%) of the variance in an individual's progression rate on held-out test data. This demonstrates the potential to predict individual PD progression rate and enrich trials by analyzing clinical and biomechanical measures with machine learning.
Prediction of individual progression rate in Parkinson's disease using clinical measures and biomechanical measures of gait and postural stability
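For readers who want to reproduce the flavour of such an analysis, here is a hedged sketch (not the study's actual pipeline; the features and labels below are synthetic placeholders) of estimating the positive predictive value of a gradient-boosted classifier for fast progressors using out-of-fold predictions.

import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import precision_score

rng = np.random.default_rng(0)
# synthetic stand-in for 160 subjects with a handful of clinical/gait features
X = rng.normal(size=(160, 12))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=1.0, size=160) > 0.8).astype(int)

clf = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.05)
# out-of-fold predictions give an honest estimate of PPV (precision)
y_oof = cross_val_predict(clf, X, y, cv=5)
print("PPV on held-out folds:", precision_score(y, y_oof))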
We consider the problem of updating the SVD when augmenting a "tall thin" matrix, i.e., a rectangular matrix $A \in \mathbb{R}^{m \times n}$ with $m \gg n$. Supposing that an SVD of $A$ is already known, and given a matrix $B \in \mathbb{R}^{m \times n'}$, we derive an efficient method to compute and store the SVD of the augmented matrix $[A \; B] \in \mathbb{R}^{m \times (n+n')}$. This is an important tool for two types of applications: in the context of principal component analysis, the dominant left singular vectors provided by this decomposition form an orthonormal basis for the best linear subspace of a given dimension, while from the right singular vectors one can extract an orthonormal basis of the kernel of the matrix. We also describe two concrete applications of these concepts which motivated the development of our method and to which it is very well adapted.
SVD update methods for large matrices and applications
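A minimal NumPy sketch of the standard column-update recipe for this problem is given below; the notation and helper name are ours, and the paper's method may differ in how it organizes and stores the factors. The point of reusing the known factors of A is that nothing of size m x (n+n') ever has to be re-decomposed from scratch.

import numpy as np

def augment_svd(U, s, Vt, B):
    """Given a thin SVD A = U @ diag(s) @ Vt (A is m x n, m >> n) and new
    columns B (m x n'), return a thin SVD of [A B] without reforming A."""
    n = s.size
    M = U.T @ B                      # components of B inside span(U)
    R = B - U @ M                    # residual orthogonal to span(U)
    Q, Rr = np.linalg.qr(R)          # orthonormal basis for the residual
    # small ((n+n') x (n+n')) core matrix whose SVD gives the update
    K = np.block([[np.diag(s), M],
                  [np.zeros((Rr.shape[0], n)), Rr]])
    Uk, s_new, Vkt = np.linalg.svd(K, full_matrices=False)
    U_new = np.hstack([U, Q]) @ Uk
    # right factor: block-diagonal embedding of the old Vt
    W = np.block([[Vt, np.zeros((Vt.shape[0], B.shape[1]))],
                  [np.zeros((B.shape[1], Vt.shape[1])), np.eye(B.shape[1])]])
    Vt_new = Vkt @ W
    return U_new, s_new, Vt_new

# quick check that the updated factors reproduce [A B]
m, n, n2 = 500, 20, 5
A = np.random.randn(m, n); B = np.random.randn(m, n2)
U, s, Vt = np.linalg.svd(A, full_matrices=False)
U2, s2, Vt2 = augment_svd(U, s, Vt, B)
print(np.allclose(U2 @ np.diag(s2) @ Vt2, np.hstack([A, B])))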
Subspace codes form the appropriate mathematical setting for investigating the Koetter-Kschischang model of fault-tolerant network coding. The Main Problem of Subspace Coding asks for the determination of a subspace code of maximum size (proportional to the transmission rate) if the remaining parameters are kept fixed. We describe a new approach to finding good subspace codes, which surpasses the known size limit of lifted MRD codes and is capable of yielding an alternative construction of the currently best known binary subspace code of packet length 7, constant dimension 3 and minimum subspace distance 4.
A New Approach to the Main Problem of Subspace Coding
Transverse NMR relaxation in a macroscopic sample is shown to be extremely sensitive to the structure of mesoscopic magnetic susceptibility variations. Such a sensitivity is proposed as a novel kind of contrast in the NMR measurements. For suspensions of arbitrary shaped paramagnetic objects, the transverse relaxation is found in the case of a small dephasing effect of an individual object. Strong relaxation rate dependence on the objects' shape agrees with experiments on whole blood. Demonstrated structure sensitivity is a generic effect that arises in NMR relaxation in porous media, biological systems, as well as in kinetics of diffusion limited reactions.
Transverse NMR relaxation as a probe of mesoscopic structure
The explicit formulas for the maps interconnecting the sets of solutions of the special double confluent Heun equation and the equation of the RSJ model of an overdamped Josephson junction in the case of a shifted sinusoidal bias are given. The approach on which these formulas are based relies on the extensive application of eigenfunctions of a certain linear operator acting on functions holomorphic on the universal cover of the punctured complex plane. The functional equation obeyed by these eigenfunctions is derived, and the matrix form of the monodromy transformation they induce is given.
The interrelation of the special double confluent Heun equation and the equation of RSJ model of Josephson junction revisited
Within the framework of quantum optics, we analyze the properties of the spontaneous emission of a two-level atom in media with an indefinite permittivity tensor, where the geometry of the dispersion relation is characterized by an ellipsoid or a hyperboloid (hyperbolic medium). The decay rate is given explicitly, with the orientation of the dipole transition matrix element taken into account. For the ellipsoid case the intensity of the photons coupled into different modes can be tuned by changing the direction of the matrix element, while for the hyperboloid case it is found that spontaneous emission in a hyperbolic medium can be dramatically enhanced compared to the dielectric background. Moreover, the spontaneous emission exhibits strong directivity and reaches its maximum along the asymptote direction.
Controlling spontaneous emission of a two-level atom by hyperbolic metamaterials
Flavour oscillations of sub-GeV atmospheric neutrinos and antineutrinos, traversing different distances inside the Earth, are a promising source of information on the leptonic CP phase $\delta$. In that energy range, the oscillations are very fast, far beyond the resolution of modern neutrino detectors. However, the necessary averaging over the experimentally typical energy and azimuthal angle bins does not wash out the CP violation effects. In this paper we derive very accurate, compact analytic expressions for the averaged oscillation probabilities. Assuming a spherically symmetric Earth, the averaged oscillation probabilities are described in terms of two analytically calculable effective parameters. Based on those expressions, we estimate the maximal magnitude of CP-violation effects in such measurements and propose optimal observables best suited to determining the value of the CP phase in the PMNS mixing matrix.
Analytical description of CP violation in oscillations of atmospheric neutrinos traversing the Earth
The Projected Dynamics method was originally developed to study metastable decay in ferromagnetic discrete spin models. Here, we apply it to a classical, continuous Heisenberg model with anisotropic ferromagnetic interactions, which evolves under Monte Carlo dynamics. The anisotropy is sufficiently large to allow comparison with the Ising model. We describe the Projected Dynamics method and how to apply it to this continuous-spin system. We also discuss how to extract metastable lifetimes and how to extrapolate from small systems to larger systems.
Application of the Projected Dynamics Method to an Anisotropic Heisenberg Model
We consider quantum analogs of the relativistic Toda lattices and give new $2\times 2$ $L$-operators for these models. Making use of separation of variables, the spectral problem for the quantum integrals of motion is reduced to solving one-dimensional separation equations.
Separation of variables for the quantum relativistic Toda lattices
In the absence of acceleration, the velocity formula gives "distance travelled equals speed multiplied by time". For a broad class of Markov chains such as circulant Markov chains or random walk on complete graphs, we prove a probabilistic analogue of the velocity formula between entropy and hitting time, where distance is the entropy of the Markov trajectories from state $i$ to state $j$ in the sense of [L. Ekroot and T. M. Cover. The entropy of Markov trajectories. IEEE Trans. Inform. Theory 39(4): 1418-1421.], speed is the classical entropy rate of the chain, and the time variable is the expected hitting time between $i$ and $j$. This motivates us to define new entropic counterparts of various hitting time parameters such as average hitting time or commute time, and prove analogous velocity formulae and estimates between these quantities.
Velocity formulae between entropy and hitting time for Markov chains
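In symbols, the analogy reads as follows (hedged notation, ours rather than necessarily the paper's: $\bar{H}$ is the entropy rate of the chain, $H_{ij}$ the Ekroot-Cover entropy of trajectories from $i$ to $j$, and $\mathbb{E}_i[\tau_j]$ the expected hitting time of $j$ from $i$), valid for the classes of chains named in the abstract, such as circulant chains and the random walk on the complete graph:
\[
  \underbrace{H_{ij}}_{\text{``distance''}}
  \;=\;
  \underbrace{\bar{H}}_{\text{``speed''}} \,\cdot\,
  \underbrace{\mathbb{E}_i[\tau_j]}_{\text{``time''}},
  \qquad i \neq j .
\]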
Violation of Mermin's and Svetlichny's inequalities can rule out the predictions of local hidden variable theory and can confirm the existence of true nonlocal correlations for n-particle pure quantum systems. Here we demonstrate the experimental violation of the above inequalities for W- and GHZ-class states. We use IBM's five-qubit quantum computer for the experimental implementation of these states and the illustration of the inequalities' violations. Our results clearly show the violations of Mermin's and Svetlichny's inequalities for W and GHZ states, respectively. Being a superconducting-qubit-based quantum computer, the platform used here opens up the opportunity to explore multipartite inequalities which are beyond the reach of other existing technologies.
Experimental demonstration of the violations of Mermin's and Svetlichny's inequalities for W- and GHZ-class of states
The ability to achieve near-unity light extraction efficiency is necessary for a truly deterministic single photon source. The most promising method to reach such high efficiencies is based on embedding single photon emitters in tapered photonic waveguides defined by top-down etching techniques. However, light extraction efficiencies in current top-down approaches are limited by fabrication imperfections and etching induced defects. The efficiency is further tempered by randomly positioned off-axis quantum emitters. Here, we present perfectly positioned single quantum dots on the axis of a tailored nanowire waveguide using bottom-up growth. In comparison to quantum dots in nanowires without waveguide, we demonstrate a 24-fold enhancement in the single photon flux, corresponding to a light extraction efficiency of 42 %. Such high efficiencies in one-dimensional nanowires are promising to transfer quantum information over large distances between remote stationary qubits using flying qubits within the same nanowire p-n junction.
Bright single-photon sources in bottom-up tailored nanowires
In this paper, we propose a methodology for early recognition of human activities from videos taken with a first-person viewpoint. Early recognition, which is also known as activity prediction, is the ability to infer an ongoing activity at its early stage. We present an algorithm to perform recognition of activities targeted at the camera from streaming videos, enabling the system to predict the intended activities of the interacting person and to avoid harmful events before they actually happen. We introduce the novel concept of 'onset' that efficiently summarizes pre-activity observations, and design an approach to consider event history in addition to the ongoing video observation for early first-person recognition of activities. We propose to represent onset using cascade histograms of time series gradients, and we describe a novel algorithmic setup to take advantage of onset for early recognition of activities. The experimental results clearly illustrate that the proposed concept of onset enables better/earlier recognition of human activities from first-person videos.
Early Recognition of Human Activities from First-Person Videos Using Onset Representations
The recent Planck Legacy 2018 release has confirmed the presence of an enhanced lensing amplitude in CMB power spectra compared to that predicted in the standard $\Lambda$CDM model. A closed universe can provide a physical explanation for this effect, with the Planck CMB spectra now preferring a positive curvature at more than $99 \%$ C.L. Here we further investigate the evidence for a closed universe from Planck, showing that positive curvature naturally explains the anomalous lensing amplitude and demonstrating that it also removes a well-known tension within the Planck data set concerning the values of cosmological parameters derived at different angular scales. We show that since the Planck power spectra prefer a closed universe, discordances higher than generally estimated arise for most of the local cosmological observables, including BAO. The assumption of a flat universe could, therefore, mask a cosmological crisis where disparate observed properties of the Universe appear to be mutually inconsistent. Future measurements are needed to clarify whether the observed discordances are due to undetected systematics, or to new physics, or simply are a statistical fluctuation.
Planck evidence for a closed Universe and a possible crisis for cosmology
We report the results of Cu and Cl nuclear magnetic resonance experiments (NMR) and thermal expansion measurements in magnetic fields in the coupled dimer spin system TlCuCl3. We found that the field-induced antiferromagnetic transition as confirmed by the splitting of NMR lines is slightly discontinuous. The abrupt change of the electric field gradient at the Cl sites, as well as the sizable change of the lattice constants, across the phase boundary indicate that the magnetic order is accompanied by simultaneous lattice deformation.
Field-Induced Magnetic Order and Simultaneous Lattice Deformation in TlCuCl3
We study the symmetry resolved entanglement entropies in gapped integrable lattice models. We use the corner transfer matrix to investigate two prototypical gapped systems with a U(1) symmetry: the complex harmonic chain and the XXZ spin-chain. While the former is a free bosonic system, the latter is genuinely interacting. We focus on a subsystem being half of an infinitely long chain. In both models, we obtain exact expressions for the charged moments and for the symmetry resolved entropies. While for the spin chain we found exact equipartition of entanglement (i.e. all the symmetry resolved entropies are the same), this is not the case for the harmonic system where equipartition is effectively recovered only in some limits. Exploiting the gaussianity of the harmonic chain, we also develop an exact correlation matrix approach to the symmetry resolved entanglement that allows us to test numerically our analytic results.
Symmetry resolved entanglement in gapped integrable systems: a corner transfer matrix approach
There is a large change in surface rotation rates of sun-like stars on the pre-main sequence and early main sequence. Since these stars have dynamo driven magnetic fields, this implies a strong evolution of their magnetic properties over this time period. The spin-down of these stars is controlled by interactions between stellar winds and magnetic fields, thus magnetic evolution in turn plays an important role in rotational evolution. We present here the second part of a study investigating the evolution of large-scale surface magnetic fields in this critical time period. We observed stars in open clusters and stellar associations with known ages between 120 and 650 Myr, and used spectropolarimetry and Zeeman Doppler Imaging to characterize their large-scale magnetic field strength and geometry. We report 15 stars with magnetic detections here. These stars have masses from 0.8 to 0.95 Msun, rotation periods from 0.326 to 10.6 days, and we find large-scale magnetic field strengths from 8.5 to 195 G with a wide range of geometries. We find a clear trend towards decreasing magnetic field strength with age, and a power-law decrease in magnetic field strength with Rossby number. There is some tentative evidence for saturation of the large-scale magnetic field strength at Rossby numbers below 0.1, although the saturation point is not yet well defined. Comparing to younger classical T Tauri stars, we support the hypothesis that differences in internal structure produce large differences in observed magnetic fields, however for weak lined T Tauri stars this is less clear.
The evolution of surface magnetic fields in young solar-type stars II: The early main sequence (250-650 Myr)
We study the quantum and classical scattering of Hamiltonian systems whose chaotic saddle is described by binary or ternary horseshoes. We are interested in parameters of the system for which a stable island, associated with the inner fundamental periodic orbit of the system, exists and is large, but chaos around this island is well developed. In this situation, in classical systems, decay from the interaction region is algebraic, while in quantum systems it is exponential due to tunneling. In both cases, the most surprising effect is a periodic response to an incoming wave packet. The period of this self-pulsing effect, or scattering echoes, coincides with the mean period with which the scattering trajectories rotate around the stable orbit. This period of rotation is directly related to the development stage of the underlying horseshoe. Therefore the predicted echoes will provide experimental access to topological information. We numerically test these results in kicked one-dimensional models and in open billiards.
Self-pulsing effect in chaotic scattering
A diabatic (configuration-fixed) constrained approach to calculate the potential energy surface (PES) of the nucleus is developed in the relativistic mean field model. As an example, the potential energy surfaces of $^{208}$Pb obtained from both adiabatic and diabatic constrained approaches are investigated and compared. It is shown that the diabatic constrained approach enables one to decompose the segmented PES obtained in usual adiabatic approaches into separate parts uniquely characterized by different configurations, to follow the evolution of single-particle orbits up to the very deformed region, and to obtain several well defined deformed excited states which can hardly be expected from the adiabatic PES's.
Constrained relativistic mean field approach with fixed configurations
Inflationary models can correlate small-scale density perturbations with the long-wavelength gravitational waves (GW) in the form of the Tensor-Scalar-Scalar (TSS) bispectrum. This correlation affects the mass distribution in the Universe and leads to off-diagonal correlations of the density field modes in the form of a quadrupole anisotropy. Interestingly, this effect survives even after the tensor mode decays when it re-enters the horizon, which is known as the fossil effect. As a result, the off-diagonal correlation function between different Fourier modes of the density fluctuations can be thought of as a way to probe the large-scale GW and the mechanism of inflation behind the fossil effect. Models of single field slow roll inflation generically predict a very small quadrupole anisotropy in TSS, while in models of multiple field inflation this effect can be observable. Therefore this large scale quadrupole anisotropy can be thought of as a spectroscopy for different inflationary models. In addition, in models of anisotropic inflation there exists quadrupole anisotropy in the curvature perturbation power spectrum. Here we consider TSS in models of anisotropic inflation and show that the shape of the quadrupole anisotropy is different from that in single field models. In addition, in these models the quadrupole anisotropy is projected onto the preferred direction and its amplitude is proportional to $g_* N_e$, where $N_e$ is the number of e-folds and $g_*$ is the amplitude of the quadrupole anisotropy in the curvature perturbation power spectrum. We use this correlation function to estimate the large scale GW as well as the preferred direction and discuss the detectability of the signal in galaxy surveys like Euclid and 21 cm surveys.
Clustering Fossil from Primordial Gravitational Waves in Anisotropic Inflation
We prove that Crisp and Gow's quiver operation on a finite quiver Q produces a new quiver Q' with fewer vertices, such that the finite dimensional algebras kQ/J^2 and kQ'/J^2 are singularly equivalent. This operation is a general quiver operation which includes as specific examples some operations which arise naturally in symbolic dynamics (e.g., (elementary) strong shift equivalence, (in-out) splitting, source elimination, etc.).
Singular equivalence of finite dimensional algebras with radical square zero
Cluster algebras were introduced by S. Fomin and A. Zelevinsky in connection with dual canonical bases. To a cluster algebra of simply laced Dynkin type one can associate the cluster category. Any cluster of the cluster algebra corresponds to a tilting object in the cluster category. The cluster tilted algebra is the algebra of endomorphisms of that tilting object. Viewing the cluster tilted algebra as a path algebra of a quiver with relations, we prove in this paper that the quiver of the cluster tilted algebra is equal to the cluster diagram. We study also the relations. As an application of these results, we answer several conjectures on the connection between cluster algebras and quiver representations.
Quivers with relations and cluster tilted algebras
Combinatorial characterisations of minimal rigidity are obtained for symmetric 2-dimensional bar-joint frameworks with either $\ell^1$ or $\ell^\infty$ distance constraints. The characterisations are expressed in terms of symmetric tree packings and the number of edges fixed by the symmetry operations. The proof uses new Henneberg-type inductive construction schemes.
Symmetric isostatic frameworks with $\ell^1$ or $\ell^\infty$ distance constraints
We show that if an ample line bundle L on a nonsingular toric 3-fold satisfies h^0(L+2K)=0, then L is normally generated. As an application, we show that the anti-canonical divisor on a nonsingular toric Fano 4-fold is normally generated.
Projective normality of nonsingular toric varieties of dimension three
A distinctive property of human and animal intelligence is the ability to form abstractions by neglecting irrelevant information, which allows structure to be separated from noise. From an information-theoretic point of view, abstractions are desirable because they allow for very efficient information processing. In artificial systems, abstractions are often implemented through computationally costly formations of groups or clusters. In this work we establish the relation between the free-energy framework for decision making and rate-distortion theory, and demonstrate how the application of rate-distortion theory to decision-making leads to the emergence of abstractions. We argue that abstractions are induced by a limit in information processing capacity.
Abstraction in decision-makers with limited information processing capabilities
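The rate-distortion route to abstraction can be made concrete with a Blahut-Arimoto-style iteration; the sketch below (names and the toy problem are ours, not the paper's) computes the free-energy-optimal policy p(a|x), which collapses onto coarse, "abstract" behaviour when the inverse temperature beta, i.e. the information-processing capacity, is small. At large beta the policy approaches the deterministic optimum; lowering beta trades expected distortion against the rate I(X;A).

import numpy as np

def rate_distortion_policy(p_x, d, beta, n_iter=200):
    """Blahut-Arimoto-style iteration for a bounded-rational decision maker.

    p_x  : (X,) prior over world states
    d    : (X, A) distortion (negative utility) of action a in state x
    beta : inverse temperature trading expected distortion against rate
    Returns the conditional policy p(a|x) of shape (X, A).
    """
    X, A = d.shape
    q_a = np.full(A, 1.0 / A)                # marginal over actions
    for _ in range(n_iter):
        # free-energy-optimal conditional: softmax of -beta*d tilted by q(a)
        logits = np.log(q_a)[None, :] - beta * d
        p_a_given_x = np.exp(logits - logits.max(axis=1, keepdims=True))
        p_a_given_x /= p_a_given_x.sum(axis=1, keepdims=True)
        q_a = p_x @ p_a_given_x              # update the action marginal
    return p_a_given_x

# toy example: 4 states, 3 actions; low beta -> coarse "abstracted" policy
p_x = np.full(4, 0.25)
d = np.random.rand(4, 3)
print(rate_distortion_policy(p_x, d, beta=0.5).round(2))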
At present, the task of searching for compounds with a high superconducting transition temperature $T_c$ is a very relevant scientific direction. Usually, the calculation of $T_c$ is carried out by numerically solving the system of Eliashberg equations. In this paper, a set of programs for solving this system, written in various forms on the imaginary axis, is presented. As an example of applications of the developed methods, calculation results for $T_c$ and the thermodynamic properties of the I41/AMD phase of metallic hydrogen and some other substances under high pressure are presented.
Software Complex for the Numerical Solution of the Isotropic Imaginary-Axis Eliashberg Equations
Integration between magnetism and topology is an exotic phenomenon in condensed-matter physics. Here, we propose an exotic phase named the topological crystalline antiferromagnetic state, in which antiferromagnetism intrinsically integrates with nontrivial topology, and we suggest that such a state can be realized in tetragonal FeS. A combination of first-principles calculations and symmetry analyses shows that the topological crystalline antiferromagnetic state arises from band reconstruction induced by pair checker-board antiferromagnetic order together with band-gap opening induced by intrinsic spin-orbit coupling in tetragonal FeS. The topological crystalline antiferromagnetic state is protected by the product of fractional translation symmetry, mirror symmetry, and time-reversal symmetry, and presents some unique features. In contrast to strong topological insulators, the topological robustness is surface-dependent. These findings indicate that non-trivial topological states could emerge in pure antiferromagnetic materials, which sheds new light on potential applications of topological properties in fast-developing antiferromagnetic spintronics.
Topological crystalline antiferromagnetic state in tetragonal FeS
Splines and subdivision curves are flexible tools in the design and manipulation of curves in Euclidean space. In this paper we study generalizations of interpolating splines and subdivision schemes to the Riemannian manifold of shell surfaces in which the associated metric measures both bending and membrane distortion. The shells under consideration are assumed to be represented by Loop subdivision surfaces. This enables the animation of shells via the smooth interpolation of a given set of key frame control meshes. Using a variational time discretization of geodesics efficient numerical implementations can be derived. These are based on a discrete geodesic interpolation, discrete geometric logarithm, discrete exponential map, and discrete parallel transport. With these building blocks at hand discrete Riemannian cardinal splines and three different types of discrete, interpolatory subdivision schemes are defined. Numerical results for two different subdivision shell models underline the potential of this approach in key frame animation.
Smooth Interpolation of Key Frames in a Riemannian Shell Space
Fine-grained visual classification (FGVC) is much more challenging than traditional classification tasks due to the inherently subtle intra-class object variations. Recent works mainly tackle this problem by focusing on how to locate the most discriminative parts, more complementary parts, and parts of various granularities. However, less effort has been placed on determining which granularities are the most discriminative and how to fuse information across multiple granularities. In this work, we propose a novel framework for fine-grained visual classification to tackle these problems. In particular, we propose: (i) a progressive training strategy that effectively fuses features from different granularities, and (ii) a random jigsaw patch generator that encourages the network to learn features at specific granularities. We obtain state-of-the-art performances on several standard FGVC benchmark datasets, where the proposed method consistently outperforms existing methods or delivers competitive results. The code will be available at https://github.com/PRIS-CV/PMG-Progressive-Multi-Granularity-Training.
Fine-Grained Visual Classification via Progressive Multi-Granularity Training of Jigsaw Patches
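As a concrete illustration of the second ingredient, here is a minimal NumPy sketch of a random jigsaw patch generator (the granularities in the usage line are illustrative; the paper's training schedule and patch sizes may differ).

import numpy as np

def jigsaw_shuffle(image, n):
    """Split an image (H, W, C) into an n x n grid of patches and return a
    new image with the patches randomly permuted."""
    H, W, C = image.shape
    assert H % n == 0 and W % n == 0, "image size must be divisible by n"
    ph, pw = H // n, W // n
    # (n*n, ph, pw, C) stack of patches
    patches = image.reshape(n, ph, n, pw, C).transpose(0, 2, 1, 3, 4)
    patches = patches.reshape(n * n, ph, pw, C)
    patches = patches[np.random.permutation(n * n)]
    # reassemble the shuffled grid
    out = patches.reshape(n, n, ph, pw, C).transpose(0, 2, 1, 3, 4)
    return out.reshape(H, W, C)

# e.g. granularities applied progressively, coarse to fine: n = 1, 2, 4, 8
img = np.random.rand(224, 224, 3)
print(jigsaw_shuffle(img, n=4).shape)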
We prove a universal approximation theorem that allows one to approximate continuous functionals of c\`adl\`ag (rough) paths uniformly in time and on compact sets of paths via linear functionals of their time-extended signature. Our main motivation to treat this question comes from signature-based models for finance that allow for the inclusion of jumps. Indeed, as an important application, we define a new class of universal signature models based on an augmented L\'evy process, which we call L\'evy-type signature models. They extend continuous signature models for asset prices as proposed e.g. by Arribas et al. (2020) in several directions, while still preserving universality and tractability properties. To analyze this, we first show that the signature process of a generic multivariate L\'evy process is a polynomial process on the extended tensor algebra and then use this for pricing and hedging approaches within L\'evy-type signature models.
Universal approximation theorems for continuous functions of c\`adl\`ag paths and L\'evy-type signature models
This paper presents Non-Attentive Tacotron based on the Tacotron 2 text-to-speech model, replacing the attention mechanism with an explicit duration predictor. This improves robustness significantly as measured by unaligned duration ratio and word deletion rate, two metrics introduced in this paper for large-scale robustness evaluation using a pre-trained speech recognition model. With the use of Gaussian upsampling, Non-Attentive Tacotron achieves a 5-scale mean opinion score for naturalness of 4.41, slightly outperforming Tacotron 2. The duration predictor enables both utterance-wide and per-phoneme control of duration at inference time. When accurate target durations are scarce or unavailable in the training data, we propose a method using a fine-grained variational auto-encoder to train the duration predictor in a semi-supervised or unsupervised manner, with results almost as good as supervised training.
Non-Attentive Tacotron: Robust and Controllable Neural TTS Synthesis Including Unsupervised Duration Modeling
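A minimal NumPy sketch of Gaussian upsampling in the duration-based setting is shown below; variable names and the toy inputs are ours, and the model's learned range parameters are replaced by fixed values.

import numpy as np

def gaussian_upsample(h, durations, sigmas):
    """Upsample per-phoneme encoder outputs to frame level with Gaussian weights.

    h         : (N, D) phoneme-level representations
    durations : (N,) durations in frames (need not be integers)
    sigmas    : (N,) per-phoneme ranges (standard deviations)
    Returns (T, D) frame-level features, with T = round(sum(durations)).
    """
    ends = np.cumsum(durations)
    centers = ends - 0.5 * durations            # c_i = sum_{j<=i} d_j - d_i/2
    T = int(round(ends[-1]))
    t = np.arange(T)[:, None] + 0.5             # frame positions
    # unnormalized Gaussian weight of phoneme i on frame t
    w = np.exp(-0.5 * ((t - centers[None, :]) / sigmas[None, :]) ** 2)
    w = w / w.sum(axis=1, keepdims=True)        # normalize over phonemes
    return w @ h

# toy usage: 3 phonemes, 8-dim features -> 10 frames
h = np.random.randn(3, 8)
frames = gaussian_upsample(h, durations=np.array([3.0, 5.0, 2.0]),
                           sigmas=np.array([1.0, 1.5, 0.8]))
print(frames.shape)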
Quantized vortex states of a weakly interacting Bose-Einstein condensate of atoms with attractive interatomic interaction in an axially symmetric harmonic oscillator trap are investigated using the numerical solution of the time-dependent Gross-Pitaevskii (GP) equation obtained by the semi-implicit Crank-Nicolson method. Collapse of the condensate is studied in the presence of deformed traps with a larger frequency along the radial as well as along the axial direction. The critical number of atoms for collapse is calculated as a function of the vortex quantum $L$. The critical number increases with the angular momentum $L$ of the vortex state but tends to saturate for large $L$.
Collapse of attractive Bose-Einstein condensed vortex states in a cylindrical trap
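For reference, with a vortex ansatz $\psi(\mathbf{r},t)=\varphi(r,z,t)\,e^{iL\theta}$ the axially symmetric GP equation takes the standard form below (notation ours; the paper may work in different reduced units), where the $L^2/r^2$ centrifugal term underlies the $L$ dependence of the critical number discussed above:
\[
  i\hbar\,\frac{\partial \varphi}{\partial t}
  = \left[ -\frac{\hbar^2}{2m}\left(\frac{1}{r}\frac{\partial}{\partial r}\,r\frac{\partial}{\partial r}
  + \frac{\partial^2}{\partial z^2}\right)
  + \frac{\hbar^2 L^2}{2 m r^2}
  + \frac{m}{2}\left(\omega_r^2 r^2 + \omega_z^2 z^2\right)
  + \frac{4\pi\hbar^2 a N}{m}\,|\varphi|^2 \right]\varphi ,
\]
with scattering length $a<0$ for attractive interactions.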
We prove an algebra property under pointwise multiplication for Besov spaces defined on Lie groups of polynomial growth. When the setting is restricted to the case of H-type groups, this algebra property is generalized to paraproduct estimates.
Besov algebras on Lie groups of polynomial growth
Noncommutative geometry, an offshoot of string theory, replaces point-like objects by smeared objects. The resulting uncertainty may cause a black hole to be observationally indistinguishable from a traversable wormhole, while the latter, in turn, may become observationally indistinguishable from a gravastar. The same noncommutative-geometry background allows the theoretical construction of thin-shell wormholes from gravastars and may even serve as a model for dark energy.
Seeking connections between wormholes, gravastars, and black holes via noncommutative geometry
The TOTEM experiment at the LHC is dedicated to the measurement of the total pp cross section and to the study of elastic scattering and of diffractive dissociation processes. TOTEM is here presented with a general overview on the main features of its experimental apparatus and of its physics programme.
The TOTEM Experiment at the LHC
We consider quantum quenches in models of free scalars and fermions with a generic time-dependent mass $m(t)$ that goes from $m_0$ to zero. We prove that, as anticipated in MSS \cite{Mandal:2015jla}, the post-quench dynamics can be described in terms of a state of the generalized Calabrese-Cardy form $|\psi \rangle = \exp[-\kappa_2 H -\sum_{n>2}^\infty \kappa_n W_n]\,|\hbox{Bd}\rangle$. The $W_n$ ($n=2,3,...$, $W_2=H$) here represent the conserved $W_\infty$ charges and $|\hbox{Bd}\rangle$ represents a conformal boundary state. Our result holds irrespective of whether the pre-quench state is a ground state or a squeezed state, and is proved without recourse to perturbation expansion in the $\kappa_n$'s as in MSS. We compute exact time-dependent correlators for some specific quench protocols $m(t)$. The correlators explicitly show thermalization to a generalized Gibbs ensemble (GGE), with inverse temperature $\beta = 4\kappa_2$, and chemical potentials $\mu_n=4\kappa_n$. In case the pre-quench state is a ground state, it is possible to retrieve the exact quench protocol $m(t)$ from the final GGE, by an application of inverse scattering techniques. Another notable result, which we interpret as a UV/IR mixing, is that the long distance and long time (IR) behaviour of some correlators depends crucially on all $\kappa_n$'s, although they are highly irrelevant couplings in the usual RG parlance. This indicates subtleties in RG arguments when applied to non-equilibrium dynamics.
Thermalization in 2D critical quench and UV/IR mixing
We prove a differential Harnack inequality for the solution of the parabolic Allen-Cahn equation $ \frac{\partial f}{\partial t}=\triangle f-(f^3-f)$ on a closed n-dimensional manifold. As a corollary we find a classical Harnack inequality. We also formally compare the standing wave solution to a gradient estimate of Modica from the 1980s for the elliptic equation.
A Harnack inequality for the parabolic Allen-Cahn equation
A basic component of Internet applications is electronic mail and its various implications. This paper proposes a mechanism for automatically classifying e-mail messages and creating dynamic groups to which these messages belong. The proposed mechanisms will be based on natural language processing techniques and will be designed to facilitate human-machine interaction in this direction.
Distributed classification of e-mail messages
This paper presents a fast and effective computer algebraic method for analyzing and verifying non-linear integer arithmetic circuits using a novel algebraic spectral model. It introduces the concept of an algebraic spectrum, a numerical form of a polynomial expression; it uses the distribution of coefficients of the monomials to determine the type of arithmetic function under verification. In contrast to previous works, the proof of functional correctness is achieved by computing an algebraic spectrum combined with a local rewriting of word-level polynomials. The speedup is achieved by propagating coefficients through the circuit using an And-Inverter Graph (AIG) data structure. The effectiveness of the method is demonstrated with experiments including standard and Booth multipliers, and other synthesized non-linear arithmetic circuits up to 1024 bits containing over 12 million gates.
Spectral Approach to Verifying Non-linear Arithmetic Circuits
Perfect cloning of a known set of states with arbitrary prior probabilities is possible if we allow the cloner to sometimes fail completely. In the optimal case the probability of failure is at its minimum allowed by the laws of quantum mechanics. Here we show that it is possible to lower the failure rate below that of the perfect probabilistic cloner but the price to pay is that the clones are not perfect; the global fidelity is less than one. We determine the optimal fidelity of a cloner with a Fixed Failure Rate (FFR cloner) in the case of a pair of known states. Optimality is shown to be attainable by a measure-and-prepare protocol in the limit of infinitely many clones. The optimal protocol consists of discrimination with a fixed rate of inconclusive outcome followed by preparation of the appropriate clones. The convergence shows a symmetry-breaking second-order phase transition in the fidelity of the approximate infinite clones.
Optimal Cloning of Quantum States with a Fixed Failure Rate
We introduce two applications of polygraphs to categorification problems. We compute first, from a coherent presentation of an $n$-category, a coherent presentation of its Karoubi envelope. For this, we extend the construction of Karoubi envelope to $n$-polygraphs and linear $(n,n-1)$-polygraphs. The second problem treated in this paper is the construction of Grothendieck decategorifications for $(n,n-1)$-polygraphs. This construction yields a rewriting system presenting for example algebras categorified by a linear monoidal category. We finally link quasi-convergence of such rewriting systems to the uniqueness of direct sum decompositions for linear $(n-1,n-1)$-categories.
Linear polygraphs applied to categorification
We determine the metric dimension of the annihilating-ideal graph of a local finite commutative principal ring and a finite commutative principal ring with two maximal ideals. We also find the bounds for the metric dimension of the annihilating-ideal graph of an arbitrary finite commutative principal ring.
The metric dimension of the annihilating-ideal graph of a finite commutative ring
This paper presents a distributed control architecture for voltage and frequency stabilization in AC islanded microgrids. In the primary control layer, each generation unit is equipped with a local controller acting on the corresponding voltage-source converter. Following the plug-and-play design approach previously proposed by some of the authors, whenever the addition/removal of a distributed generation unit is required, feasibility of the operation is automatically checked by designing local controllers through convex optimization. The update of the voltage-control layer when units plug in or out is therefore automated, and stability of the microgrid is always preserved. Moreover, local control design is based only on the knowledge of parameters of power lines and it does not require storing a global microgrid model. In this work, we focus on bus-connected microgrid topologies and enhance the primary plug-and-play layer with local virtual impedance loops and secondary coordinated controllers ensuring bus voltage tracking and reactive power sharing. In particular, the secondary control architecture is distributed, hence mirroring the modularity of the primary control layer. We validate the primary and secondary controllers by performing experiments with balanced, unbalanced and nonlinear loads, on a setup composed of three bus-connected distributed generation units. Most importantly, the stability of the microgrid after the addition/removal of distributed generation units is assessed. Overall, the experimental results show the feasibility of the proposed modular control design framework, where generation units can be added/removed on the fly, thus enabling the deployment of virtual power plants that can be resized over time.
Plug-and-play and coordinated control for bus-connected AC islanded microgrids
The impulse response function (IRF) of a localized bolus in cerebral blood flow codes important information on the tissue type. It is indirectly accessible both from MR- and CT-imaging methods, at least in principle. In practice, however, noise and limited signal resolution render standard deconvolution techniques almost useless. Parametric signal descriptions look more promising, and it is the aim of this contribution to develop some improvements along this line.
Signal analysis of impulse response functions in MR- and CT-measurements of cerebral blood flow
Metamaterials and metasurfaces are at the pinnacle of wave propagation engineering, yet their design has thus far been mainly focused on deep-subwavelength periodicities, practically forming an effective medium. Such an approach overlooks important structural degrees of freedom, e.g. the interplay between the corrugation periodicity and depth and how it affects the beam transport. Here, we present Slack Metasurfaces - weakly modulated metal-dielectric interfaces unlocking all structural degrees of freedom that affect the wave propagation. We experimentally demonstrate control over the anisotropy of surface waves in such metasurfaces, leading to as yet unexplored dual-stage topological transitions. We further utilize these metasurfaces to show unique backward focusing of surface waves driven by an umklapp process - momentum relaxation empowered by the periodic nature of the structure. Our findings can be applied to any type of guided waves, introducing a simple and diverse method for controlling wave propagation in artificial media.
Topological transitions and surface umklapp scattering in Slack Metasurfaces
The effect of deposition oxygen pressure (P$_{O}$) on phase separation (PS) induced in epitaxial La$_{0.67}$Ca$_{0.33}$MnO$_{3}$/NdGaO$_{3}$(001) films was investigated. Fully oxygenated films grown at high P$_{O}$ are anisotropically strained. They exhibit PS over a wide temperature range, because of the large orthorhombicity of NdGaO$_{3}$ substrates. The paramagnetic insulator-to-ferromagnetic metal (FM) and FM-to-antiferromagnetic insulator (AFI) transitions gradually shift to lower temperatures with decreasing P$_{O}$. The AFI state is initially weakened (P$_{O}$ $\geq$ 30 Pa), but then becomes more robust against the magnetic field (P$_{O}$ < 30 Pa). The out-of-plane film lattice parameter increases with decreasing P$_{O}$. For films grown at P$_{O}$ $\geq$ 30 Pa, the slight oxygen deficiency may enlarge the lattice unit cell, reduce the anisotropic strain and suppress the AFI state. Films deposited at P$_{O}$ < 30 Pa instead experience an average compressive strain. The enhanced compressive strain and structural defects in the films may lead to the robust AFI state. These results aid our understanding of PS in manganite films.
Effect of growth oxygen pressure on anisotropic-strain-induced phase separation in epitaxial La$_{0.67}$Ca$_{0.33}$MnO$_{3}$/NdGaO$_{3}$(001) films
A wide variety of methods have been used to compute percolation thresholds. In lattice percolation, the most powerful of these methods consists of microcanonical simulations using the union-find algorithm to efficiently determine the connected clusters, and (in two dimensions) using exact values from conformal field theory for the probability, at the phase transition, that various kinds of wrapping clusters exist on the torus. We apply this approach to percolation in continuum models, finding overlaps between objects with real-valued positions and orientations. In particular, we find precise values of the percolation transition for disks, squares, rotated squares, and rotated sticks in two dimensions, and confirm that these transitions behave as conformal field theory predicts. The running time and memory use of our algorithm are essentially linear as a function of the number of objects at criticality.
Continuum Percolation Thresholds in Two Dimensions
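As an illustration of the union-find ingredient described in the abstract above, here is a minimal, hedged sketch for overlapping disks in a unit square. The paper itself works microcanonically with wrapping clusters on a torus; this toy left-to-right spanning check does not reproduce that, and all parameters below are illustrative.

```python
# Minimal sketch: union-find clustering of overlapping disks in a unit square,
# checking for a left-to-right spanning cluster. Illustrative parameters only;
# the actual method in the abstract uses wrapping probabilities on a torus.
import random

def find(parent, i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]  # path halving
        i = parent[i]
    return i

def union(parent, i, j):
    ri, rj = find(parent, i), find(parent, j)
    if ri != rj:
        parent[rj] = ri

def spans_horizontally(n_disks=2000, radius=0.02, seed=0):
    rng = random.Random(seed)
    pts = [(rng.random(), rng.random()) for _ in range(n_disks)]
    parent = list(range(n_disks))
    d2 = (2 * radius) ** 2                      # two disks overlap if centre distance < 2r
    for i in range(n_disks):
        for j in range(i + 1, n_disks):         # O(N^2) for clarity; use a cell list in practice
            dx = pts[i][0] - pts[j][0]
            dy = pts[i][1] - pts[j][1]
            if dx * dx + dy * dy < d2:
                union(parent, i, j)
    touches_left = {find(parent, i) for i, (x, _) in enumerate(pts) if x < radius}
    touches_right = {find(parent, i) for i, (x, _) in enumerate(pts) if x > 1 - radius}
    return bool(touches_left & touches_right)

print(spans_horizontally())
```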
Bialgebroids (resp. Hopf algebroids) are bialgebras (Hopf algebras) over noncommutative rings. Drinfeld twist techniques are particularly useful in the (deformation) quantization of Lie algebras as well as of the underlying module algebras (=quantum spaces). The smash product construction combines these two into a new algebra which, in fact, does not depend on the twist. However, we can turn it into a bialgebroid in a twist-dependent way. Alternatively, one can use Drinfeld twist techniques in a category of bialgebroids. We show that both techniques indicated in the title - twisting of a bialgebroid or constructing a bialgebroid from the twisted bialgebra - give rise to the same result in the case of a normalized cocycle twist. This can be useful for a better description of a quantum deformed phase space. We argue that within this bialgebroid framework one can justify the use of deformed coordinates (i.e. spacetime noncommutativity), which are frequently postulated in order to explain quantum gravity effects.
Twisted bialgebroids versus bialgebroids from a Drinfeld twist
The entropy of the Gram matrix of a joint purification of an ensemble of K mixed states yields an upper bound for the Holevo information Chi of the ensemble. In this work we combine geometrical and probabilistic aspects of the ensemble in order to obtain useful bounds for Chi. This is done by constructing various correlation matrices involving fidelities between every pair of states from the ensemble. For K=3 quantum states we design a matrix of root fidelities that is positive and the entropy of which is conjectured to upper bound Chi. Slightly weaker bounds are established for arbitrary ensembles. Finally, we investigate correlation matrices involving multi-state fidelities in relation to the Holevo quantity.
Matrices of fidelities for ensembles of quantum states and the Holevo quantity
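For orientation, here is one common way to write the quantities named in the abstract above, in our own notation (the paper's conventions may differ): the Holevo quantity of an ensemble $\{p_i,\rho_i\}_{i=1}^{K}$ and the Gram-matrix bound it refers to.

```latex
% Hedged formalization; notation is ours, not necessarily the paper's.
\[
\chi\bigl(\{p_i,\rho_i\}\bigr)
  = S\!\Bigl(\sum_{i=1}^{K} p_i \rho_i\Bigr) - \sum_{i=1}^{K} p_i S(\rho_i),
\qquad
\chi \le S(G),\quad
G_{ij} = \sqrt{p_i p_j}\,\langle \psi_j \mid \psi_i \rangle ,
\]
% where S is the von Neumann entropy and the |psi_i> are purifications of the
% rho_i entering a joint purification of the ensemble.
```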
A comparative study of high-temperature and zero-temperature plasmas is presented for the damping rate and the drag and diffusion coefficients. For each of these quantities, it is revealed how the magnetic interaction dominates over the electric one at zero temperature, unlike at high temperature.
Drag and Diffusion coefficients in extreme scenarios of temperature and chemical potential
Blue noise error patterns are well suited to human perception, and when applied to stochastic rendering techniques, blue noise masks (blue noise textures) minimize unwanted low-frequency noise in the final image. Current methods of applying blue noise masks at each frame independently produce white noise frequency spectra temporally. This white noise results in slower integration convergence over time and unstable results when filtered temporally. Unfortunately, achieving temporally stable blue noise distributions is non-trivial since 3D blue noise does not exhibit the desired 2D blue noise properties, and alternative approaches degrade the spatial blue noise qualities. We propose novel blue noise patterns that, when animated, produce values at a pixel that are well distributed over time, converge rapidly for Monte Carlo integration, and are more stable under temporal anti-aliasing (TAA), while still retaining spatial blue noise properties. To do so, we propose an extension to the well-known void-and-cluster algorithm that reformulates the underlying energy function to produce spatiotemporal blue noise masks. These masks exhibit blue noise frequency spectra in both the spatial and temporal domains, resulting in visually pleasing error patterns, rapid convergence speeds, and increased stability when filtered temporally. We demonstrate these improvements on a variety of applications, including dithering, stochastic transparency, ambient occlusion, and volumetric rendering. By extending spatial blue noise to spatiotemporal blue noise, we overcome the convergence limitations of prior blue noise works, enabling new applications for blue noise distributions.
Scalar Spatiotemporal Blue Noise Masks
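One plausible reading of the reformulated void-and-cluster energy described in the abstract above is an energy with separate spatial and temporal terms: samples interact spatially only within the same frame and temporally only along the same pixel, so each 2D slice stays blue noise while each pixel's value sequence is also well distributed over time. The kernel widths, toroidal wrapping, and the exact split below are assumptions of this sketch, not the paper's published formulation.

```python
# Hedged sketch of a void-and-cluster style energy with separate spatial and
# temporal Gaussian terms; illustrative only, not the paper's exact energy.
import numpy as np

def energy_field(mask, sigma_s=1.9, sigma_t=1.1):
    """mask: binary array of shape (T, H, W); returns the energy at every voxel."""
    T, H, W = mask.shape
    energy = np.zeros((T, H, W))
    for t0, y0, x0 in zip(*np.nonzero(mask)):
        # spatial term: Gaussian falloff within frame t0 only (toroidal distances)
        dy = np.minimum(np.abs(np.arange(H) - y0), H - np.abs(np.arange(H) - y0))
        dx = np.minimum(np.abs(np.arange(W) - x0), W - np.abs(np.arange(W) - x0))
        energy[t0] += np.exp(-(dy[:, None] ** 2 + dx[None, :] ** 2) / (2 * sigma_s ** 2))
        # temporal term: Gaussian falloff along the same pixel across frames
        dt = np.minimum(np.abs(np.arange(T) - t0), T - np.abs(np.arange(T) - t0))
        energy[:, y0, x0] += np.exp(-dt ** 2 / (2 * sigma_t ** 2))
    return energy
```

In a void-and-cluster style loop, one would repeatedly move samples from the highest-energy (tightest cluster) voxel to the lowest-energy (largest void) voxel using this field.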
Canonical correlation analysis investigates linear relationships between two sets of variables, but often works poorly on modern data sets due to high dimensionality and mixed data types such as continuous, binary and zero-inflated. To overcome these challenges, we propose a semiparametric approach for sparse canonical correlation analysis based on the Gaussian copula. Our main contribution is a truncated latent Gaussian copula model for data with excess zeros, which allows us to derive a rank-based estimator of the latent correlation matrix for mixed variable types without estimating the marginal transformation functions. The resulting canonical correlation analysis method works well in high-dimensional settings, as demonstrated via numerical studies and in an application to the analysis of associations between gene expression and microRNA data of breast cancer patients.
Sparse semiparametric canonical correlation analysis for data of mixed types
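A hedged illustration of the rank-based idea mentioned in the abstract above, restricted to the classical continuous-continuous case, where the Gaussian-copula bridge recovers the latent correlation from Kendall's tau as r = sin(pi*tau/2). The paper's actual contribution, the analogous bridge for truncated/zero-inflated and binary variables, is not reproduced here; the toy data are assumptions.

```python
# Rank-based latent correlation for continuous margins under a Gaussian copula.
import numpy as np
from scipy.stats import kendalltau

def latent_corr_continuous(x, y):
    tau, _ = kendalltau(x, y)
    return np.sin(np.pi * tau / 2.0)

rng = np.random.default_rng(0)
z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=5000)
x, y = np.exp(z[:, 0]), z[:, 1] ** 3          # monotone transforms of latent Gaussians
print(latent_corr_continuous(x, y))           # close to the latent correlation 0.6
```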
In this expository article we relate the presentation of weighted estimates in the book of Martinez to the Bergman kernel approach of Sj\"ostrand. It is meant as an introduction to the Helffer--Sj\"ostrand theory (designed for the study of quantum resonances) in the simplest setting and to its adaptations to compact manifolds.
An introduction to microlocal complex deformations
We define the hitting time for a model of continuous-time open quantum walks in terms of quantum jumps. Our starting point is a master equation in Lindblad form, which can be taken as the quantum analogue of the rate equation for a classical continuous-time Markov chain. The quantum jump method is well known in the quantum optics community and has also been applied to simulate open quantum walks in discrete time. This method, however, is naturally suited to continuous-time problems. It is shown here that a continuous-time hitting problem is amenable to analysis via quantum jumps: the hitting time can be defined as the time of the first jump. Using this fact, we derive the distribution of hitting times and explicit expressions for its statistical moments. Simple examples are considered to illustrate the final results. We then show that the hitting statistics obtained via quantum jumps are consistent with a previous definition for a measured walk in discrete time [Phys.~Rev.~A~{\bf 73}, 032341 (2006)] (when generalised to allow for non-unitary evolution and in the limit of small time steps). A caveat of the quantum-jump approach is that it relies on the final state (the state which we want to hit) sharing only incoherent edges with other vertices in the graph. We propose a simple remedy to restore the applicability of quantum jumps when this is not the case and show that the hitting-time statistics again converge to those obtained from the measured discrete walk in appropriate limits.
Hitting statistics from quantum jumps
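A schematic of the standard quantum-jump construction that the abstract above builds on, in our notation (not necessarily the paper's exact conventions): with jump operators $L_k$ and the associated effective non-Hermitian Hamiltonian, the hitting time defined as the time of the first jump has the density below.

```latex
% Standard quantum-jump unravelling of a Lindblad equation (schematic).
\[
H_{\mathrm{eff}} = H - \frac{i}{2}\sum_k L_k^\dagger L_k ,
\qquad
P_{\mathrm{no\,jump}}(t) = \bigl\| e^{-iH_{\mathrm{eff}}t}\,|\psi_0\rangle \bigr\|^2 ,
\qquad
f_{\mathrm{hit}}(t) = -\frac{d}{dt} P_{\mathrm{no\,jump}}(t),
\]
% so the first-jump (hitting) time has density f_hit and moments
% <t^n> = \int_0^\infty t^n f_hit(t) dt; identifying this with hitting the
% target state requires the incoherent-edge caveat noted in the abstract.
```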
In this paper we study some properties of Fibonacci-sum set-graphs. The aforesaid graphs are an extension of the notion of Fibonacci-sum graphs to the notion of set-graphs. The colouring of Fibonacci-sum graphs is also discussed. A number of challenging research problems are posed in the conclusion.
Some Properties of Fibonacci-Sum Set-Graphs
Supersymmetric gauge theories in four dimensions can display interesting non-perturbative phenomena. Although the superpotential dynamically generated by these phenomena can be highly nontrivial, it can often be exactly determined. We discuss some general techniques for analyzing the Wilsonian superpotential and demonstrate them with simple but non-trivial examples.
Exact Superpotentials in Four Dimensions
In this paper we study the regularized analytic torsion of finite volume hyperbolic manifolds. We consider sequences of coverings $X_i$ of a fixed hyperbolic orbifold $X_0$. Our main result is that for certain sequences of coverings and strongly acyclic flat bundles, the analytic torsion divided by the index of the covering converges to the $L^2$-torsion. Our results apply to certain sequences of arithmetic groups, in particular to sequences of principal congruence subgroups of $\mathrm{SO}^0(d,1)(\mathbb{Z})$ and to sequences of principal congruence subgroups or Hecke subgroups of Bianchi groups.
The analytic torsion and its asymptotic behaviour for sequences of hyperbolic manifolds of finite volume
Fluctuation measurements are important sources of information on the mechanism of particle production at LHC energies. This article reports the first experimental results on third-order cumulants of the net-proton distributions in Pb$-$Pb collisions at a center-of-mass energy $\sqrt{s_{\rm NN}} = 5.02$ TeV recorded by the ALICE detector. The results on the second-order cumulants of net-proton distributions at $\sqrt{s_{\rm NN}} = 2.76$ and $5.02$ TeV are also discussed in view of effects due to the global and local baryon number conservation. The results demonstrate the presence of long-range rapidity correlations between protons and antiprotons. Such correlations originate from the early phase of the collision. The experimental results are compared with HIJING and EPOS model calculations, and the dependence of the fluctuation measurements on the phase-space coverage is examined in the context of lattice quantum chromodynamics (LQCD) and hadron resonance gas (HRG) model estimations. The measured third-order cumulants are consistent with zero within experimental uncertainties of about 4% and are described well by LQCD and HRG predictions.
Closing in on critical net-baryon fluctuations at LHC energies: cumulants up to third order in Pb$-$Pb collisions
Several Riemannian metrics and families of Riemannian metrics have been defined on the manifold of Symmetric Positive Definite (SPD) matrices. Firstly, we formalize a common general process for defining families of metrics: the principle of deformed metrics. We relate the recently introduced family of alpha-Procrustes metrics to the general class of mean kernel metrics by providing a sufficient condition under which elements of the former belong to the latter. Secondly, we focus on the principle of balanced bilinear forms that we recently introduced. We give a new sufficient condition under which the balanced bilinear form is a metric. It allows us to introduce the Mixed-Euclidean (ME) metrics, which generalize the Mixed-Power-Euclidean (MPE) metrics. We unveil their link with the (u, v)-divergences and the ($\alpha$, $\beta$)-divergences of information geometry and we provide an explicit formula for the Riemann curvature tensor. We show that the sectional curvature of all ME metrics can take negative values, and we show experimentally that the sectional curvature of all MPE metrics but the log-Euclidean, power-Euclidean and power-affine metrics can take positive values.
The geometry of mixed-Euclidean metrics on symmetric positive definite matrices
We briefly summarize motivations for testing the weak equivalence principle and then review recent torsion-balance results that compare the differential accelerations of beryllium-aluminum and beryllium-titanium test body pairs with precisions at the part in $10^{13}$ level. We discuss some implications of these results for the gravitational properties of antimatter and dark matter, and speculate about the prospects for further improvements in experimental sensitivity.
Torsion-balance tests of the weak equivalence principle
The scarce knowledge of the initial stages of quark-gluon plasma before the thermalization is mostly inferred through the low-$p_\perp$ sector. We propose a complementary approach in this report - the use of high-$p_\perp$ probes' energy loss. We study the effects of four commonly assumed initial stages, whose temperature profiles differ only before the thermalization, on high-$p_\perp$ $R_{AA}$ and $v_2$ predictions. The predictions are based on our Dynamical Radiative and Elastic ENergy-loss Approach (DREENA) framework. We report insensitivity of $v_2$ to the initial stages, making it unable to distinguish between different cases. $R_{AA}$ displays sensitivity to the presumed initial stages, but current experimental precision does not allow resolution between these cases. We further revise the commonly accepted procedure of fitting the energy loss parameters, for each individual initial stage, to the measured $R_{AA}$. We show that the sensitivity of $v_2$ to various initial stages obtained through such procedure is mostly a consequence of fitting procedure, which may obscure the physical interpretations. Overall, the simultaneous study of high-$p_\perp$ observables, with unchanged energy loss parametrization and restrained temperature profiles, is crucial for future constraints on initial stages.
Utilizing high-$p_\perp$ theory and data to constrain the initial stages of quark-gluon plasma
We report observations of nanosecond nanometer scale heterogeneous dynamics in a free flowing colloidal jet revealed by ultrafast x-ray speckle visibility spectroscopy. The nanosecond double-bunch mode of the Linac Coherent Light Source free electron laser enabled the production of pairs of femtosecond coherent hard x-ray pulses. By exploring the anisotropic summed speckle visibility which relates to the correlation functions, we are able to evaluate not only the average particle flow rate in a colloidal nanoparticle jet, but also the heterogeneous flow field within. The reported methodology presented here establishes the foundation for the study of nano- and atomic-scale heterogeneous fluctuations in complex matter using x-ray free electron laser sources.
Nanoscale heterogeneous dynamics probed by nanosecond x-ray speckle visibility spectroscopy
Astronomers have proposed a number of mechanisms to produce supernova explosions. Although many of these mechanisms are no longer considered primary engines behind supernovae, they do produce transients that will be observed by upcoming ground-based surveys and NASA satellites. Here we present the first radiation-hydrodynamics calculations of the spectra and light curves from three of these "failed" supernovae: supernovae with considerable fallback, accretion-induced collapse of white dwarfs, and energetic helium flashes (also known as type .Ia supernovae).
Spectra and Light Curves of Failed Supernovae
Differentially private deep learning has recently witnessed advances in computational efficiency and privacy-utility trade-off. We explore whether further improvements along the two axes are possible and provide affirmative answers leveraging two instantiations of \emph{group-wise clipping}. To reduce the compute time overhead of private learning, we show that \emph{per-layer clipping}, where the gradient of each neural network layer is clipped separately, allows clipping to be performed in conjunction with backpropagation in differentially private optimization. This results in private learning that is as memory-efficient and almost as fast per training update as non-private learning for many workflows of interest. While per-layer clipping with constant thresholds tends to underperform standard flat clipping, per-layer clipping with adaptive thresholds matches or outperforms flat clipping under given training epoch constraints, hence attaining similar or better task performance within less wall time. To explore the limits of scaling (pretrained) models in differentially private deep learning, we privately fine-tune the 175 billion-parameter GPT-3. We bypass scaling challenges associated with clipping gradients that are distributed across multiple devices with \emph{per-device clipping} that clips the gradient of each model piece separately on its host device. Privately fine-tuning GPT-3 with per-device clipping achieves a task performance at $\epsilon=1$ better than what is attainable by non-privately fine-tuning the largest GPT-2 on a summarization task.
Exploring the Limits of Differentially Private Deep Learning with Group-wise Clipping
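A minimal NumPy sketch of the per-layer clipping idea described in the abstract above: each layer's per-example gradients are clipped to a layer-specific threshold before averaging and noising, rather than clipping the concatenated flat gradient. The thresholds, noise calibration, and toy gradient shapes are assumptions of this sketch, not the paper's settings or code, and the calibration needed for a target (epsilon, delta) is not derived here.

```python
# Per-layer clipping with Gaussian noise, shown on synthetic per-example gradients.
import numpy as np

def private_step(per_example_grads, thresholds, noise_multiplier, rng):
    """per_example_grads: {layer_name: array of shape (batch, *param_shape)}."""
    noisy_mean = {}
    for name, g in per_example_grads.items():
        c = thresholds[name]
        norms = np.sqrt((g.reshape(g.shape[0], -1) ** 2).sum(axis=1))   # per-example L2 norm
        scale = np.minimum(1.0, c / np.maximum(norms, 1e-12))           # clip factor per example
        clipped = g * scale.reshape(-1, *([1] * (g.ndim - 1)))
        mean = clipped.mean(axis=0)
        # noise shown schematically, scaled to this layer's threshold and batch size
        sigma = noise_multiplier * c / g.shape[0]
        noisy_mean[name] = mean + rng.normal(0.0, sigma, size=mean.shape)
    return noisy_mean

rng = np.random.default_rng(0)
grads = {"layer1.weight": rng.normal(size=(8, 4, 3)),
         "layer2.weight": rng.normal(size=(8, 2, 4))}
update = private_step(grads, {"layer1.weight": 1.0, "layer2.weight": 0.5},
                      noise_multiplier=1.0, rng=rng)
```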
The different forms of propagation of relativistic electron plasma wavepackets in terms of Airy functions are studied. It is shown that exact solutions can be constructed showing accelerated propagation along coordinates transverse to the thermal speed cone coordinate. Similarly, Airy propagation is a solution for relativistic electron plasma waves in the paraxial approximation. This regime is considered in the time domain, when the paraxial approximation is applied to the frequency, and in the space domain, when the paraxial approximation is applied to the wavelength. In both cases, the wavepackets remain structured in the transverse plane. Using these solutions we are able to define generalized and arbitrary Airy wavepackets for electron plasma waves, depending on arbitrary spectral functions. Examples of this construction are presented. These electron plasma Airy wavepackets are the most general solutions of this kind.
Exact and paraxial Airy propagation of relativistic electron plasma wavepackets
The Javalambre Photometric Local Universe Survey (J-PLUS) is an ongoing 12-band photometric optical survey, observing thousands of square degrees of the Northern Hemisphere from the dedicated JAST80 telescope at the Observatorio Astrof\'isico de Javalambre (OAJ). The observational strategy is a critical point in this large survey. To plan the best observations, it is necessary to select pointings depending on object visibility, pointing priority and status, and the location and phase of the Moon. In this context, the J-PLUS Tracking Tool, a web application, has been implemented, which includes tools to plan the best observations, as well as tools to create the command files for the telescope, to track the observations, and to report the status of the survey. In this environment, robustness is an important point. To achieve it, a feedback software system has been implemented. This software automatically decides and marks which observations are valid and which must be repeated. It bases its decision on the data obtained from the data management pipeline database, using a complex system of pointing and filter statuses. This contribution presents the J-PLUS Tracking Tool and the full feedback software system.
J-PLUS Tracking Tool: Scheduler and Tracking software for the Observatorio Astrof\'isico de Javalambre (OAJ)
In 1935, Einstein, Podolsky and Rosen (EPR) questioned the completeness of quantum mechanics by devising a quantum state of two massive particles with maximally correlated space and momentum coordinates. The EPR criterion qualifies such continuous-variable entangled states, where a measurement of one subsystem seemingly allows for a prediction of the second subsystem beyond the Heisenberg uncertainty relation. Up to now, continuous-variable EPR correlations have only been created with photons, while the demonstration of such strongly correlated states with massive particles is still outstanding. Here, we report on the creation of an EPR-correlated two-mode squeezed state in an ultracold atomic ensemble. The state shows an EPR entanglement parameter of 0.18(3), which is 2.4 standard deviations below the threshold 1/4 of the EPR criterion. We also present a full tomographic reconstruction of the underlying many-particle quantum state. The state presents a resource for tests of quantum nonlocality and a wide variety of applications in the field of continuous-variable quantum information and metrology.
Satisfying the Einstein-Podolsky-Rosen criterion with massive particles
We demonstrate, for the first time, a scheme that generates radially polarized light using the Goos-Hanchen shift of a cylindrically symmetric total internal reflection. It allows ultra-broadband radial polarization conversion for wavelengths differing by more than 1 micron.
Ultra-Broadband Radial Polarization Conversion based on Goos-Hanchen Shift
We report a structural transition found in Ca10(Ir4As8)(Fe2-xIrxAs2)5, which exhibits superconductivity at 16 K. The c-axis parameter is doubled below a structural transition temperature of approximately 100 K, while the tetragonal symmetry with space group P4/n (No. 85) is unchanged at all temperatures measured. Our synchrotron x-ray diffraction study clearly shows that iridium ions at a non-coplanar position shift along the z-direction at the structural phase transition. We discuss how the iridium displacements affect superconductivity in the Fe2As2 layers.
Synchrotron X-ray Diffraction Study of Structural Phase Transition in Ca10(Ir4As8)(Fe2-xIrxAs2)
A preliminary result of the solar axion search experiment at the University of Tokyo is presented. We searched for axions which could be produced in the solar core by exploiting the axion helioscope. The helioscope consists of a superconducting magnet with field strength of 4 Tesla over 2.3 meters. From the absence of the axion signal we set a 95 % confidence level upper limit on the axion coupling to two photons $g_{a\gamma\gamma} < 6.0 \times 10^{-10} GeV^{-1}$ for the axion mass $m_a < 0.03$ eV. This is the first solar axion search experiment whose sensitivity to $g_{a\gamma\gamma}$ exceeds the limit inferred from the solar age consideration.
The Tokyo Axion Helioscope Experiment
We report on the development of commercially fabricated multi-chroic antenna-coupled Transition Edge Sensor (TES) bolometer arrays for Cosmic Microwave Background (CMB) polarimetry experiments. CMB polarimetry experiments have deployed instruments in stages. Stage-II experiments deployed with O(1,000) detectors and reported successful detection of the B-mode (divergence-free) polarization pattern in the CMB. Stage-III experiments have recently started observing with O(10,000) detectors with wider frequency coverage. A concept for a Stage-IV experiment, CMB-S4, is emerging to make a definitive measurement of CMB polarization from the ground with O(400,000) detectors. The orders-of-magnitude increase in detector count for CMB-S4 requires a new approach in detector fabrication to increase fabrication throughput and reduce cost. We report on collaborative efforts with two commercial micro-fabrication foundries to fabricate antenna-coupled TES bolometer detectors. The detector design is based on the sinuous-antenna-coupled dichroic detector from the POLARBEAR-2 experiment. The TES bolometers showed the expected I-V response and the RF performance agrees with simulation. We discuss the motivation, design considerations, fabrication processes, test results, and how industrial detector fabrication could be a path to fabricating hundreds of detector wafers for future CMB polarimetry experiments.
Commercialization of micro-fabrication of antenna-coupled Transition Edge Sensor bolometer detectors for studies of the Cosmic Microwave Background
We produce synthetic images and SEDs from radiation hydrodynamical simulations of radiatively driven implosion. The synthetically imaged bright-rimmed clouds (BRCs) are morphologically similar to those observed in star forming regions. Using nebular diagnostic line ratios, simulated Very Large Array (VLA) radio images, H$\alpha$ imaging and SED fitting, we compute the neutral cloud and ionized boundary layer gas densities and temperatures and perform a virial stability analysis for each model cloud. We determine that the neutral cloud temperatures derived by SED fitting are hotter than the dominant neutral cloud temperature by 1-2 K due to emission from warm dust. This translates into a change in the calculated cloud mass of 8-35%. Using a constant mass conversion factor (C$_{\nu}$) for BRCs of different class is found to give rise to errors in the cloud mass of up to a factor of 3.6. The ionized boundary layer (IBL) electron temperature calculated using diagnostic line ratios is more accurate than assuming the canonical value of $10^4$ K adopted for radio diagnostics. Both radio diagnostics and diagnostic line ratios are found to underestimate the electron density in the IBL. Each system is qualitatively correctly found to be in a state in which the pressure in the ionized boundary layer is greater than the supporting cloud pressure, implying that the objects are being compressed. We find that observationally derived mass loss estimates agree with those on the simulation grid and introduce the concept of using the mass loss flux to give an indication of the relative strength of the photo-evaporative flow between clouds. The effect of beam size on these diagnostics in radio observations is found to be a mixing of the bright rim and ambient cloud and HII region fluxes, which leads to an underestimate of the cloud properties relative to a control diagnostic.
Testing diagnostics of triggered star formation
In this work, we consider weighted anisotropic Hardy inequalities and trace Hardy inequalities involving a general Finsler metric. We follow a unifying approach, by first establishing a sharp interpolation between them, extending the corresponding unweighted version, which was recently established by a different approach. Then, passing to bounded domains, we obtain successive sharp improvements by adding remainder terms involving sharp weights and optimal constants, resulting in an infinite series-type improvement. The results extend, into the Finsler context, the earlier known ones within the Euclidean setting. The generalization of our results to cones is also discussed.
Series expansion of weighted Finsler-Kato-Hardy inequalities
As learning-based methods make their way from perception systems to planning/control stacks, robot control systems have started to enjoy the benefits that data-driven methods provide. Because control systems directly affect the motion of the robot, data-driven methods, especially black box approaches, need to be used with caution considering aspects such as stability and interpretability. In this paper, we describe a differentiable and hierarchical control architecture. The proposed representation, called \textit{multi-abstractive neural controller}, uses the input image to control the transitions within a novel discrete behavior planner (referred to as the visual automaton generative network, or \textit{vAGN}). The output of a vAGN controls the parameters of a set of dynamic movement primitives which provides the system controls. We train this neural controller with real-world driving data via behavior cloning and show improved explainability, sample efficiency, and similarity to human driving.
Multi-Abstractive Neural Controller: An Efficient Hierarchical Control Architecture for Interactive Driving
We give a unified description of twisted forms of classical reductive groups schemes. Such group schemes are constructed from algebraic objects of finite rank, excluding some exceptions of small rank. These objects, augmented odd form algebras, consist of $2$-step nilpotent groups with an action of the underlying commutative ring, hence we develop basic descent theory for them. In addition, we describe classical isotropic reductive groups as odd unitary groups up to an isogeny.
Twisted forms of classical groups
Ontology matching is a core task when creating interoperable and linked open datasets. In this paper, we explore a novel structure-based mapping approach based on knowledge graph embeddings: the ontologies to be matched are embedded, and an approach known as absolute orientation is used to align the two embedding spaces. Next to the approach, the paper presents a first, preliminary evaluation using synthetic and real-world datasets. We find in experiments with synthetic data that the approach works very well on similarly structured graphs; it handles alignment noise better than size and structural differences in the ontologies.
Ontology Matching Through Absolute Orientation of Embedding Spaces
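A hedged sketch of the absolute-orientation alignment step mentioned in the abstract above, i.e. the orthogonal Procrustes solution obtained via SVD of the cross-covariance of matched anchor embeddings. The embedding training itself is not shown, and the toy anchors below are assumptions.

```python
# Align two embedding spaces with a rotation found by orthogonal Procrustes.
import numpy as np

def absolute_orientation(A, B):
    """Return rotation R such that (A - mean) @ R approximates (B - mean); A, B are (n, d)."""
    A0, B0 = A - A.mean(0), B - B.mean(0)       # centre both point sets
    U, _, Vt = np.linalg.svd(A0.T @ B0)
    return U @ Vt                               # orthogonal map minimizing the residual

rng = np.random.default_rng(1)
src = rng.normal(size=(50, 5))
true_R, _ = np.linalg.qr(rng.normal(size=(5, 5)))    # random orthogonal map
tgt = src @ true_R + 0.01 * rng.normal(size=(50, 5))
R = absolute_orientation(src, tgt)
print(np.linalg.norm((src - src.mean(0)) @ R - (tgt - tgt.mean(0))))   # small residual
```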
Our work aims to study the tools offered to students and tutors involved in face-to-face or blended project-based learning activities. Project-based learning is often applied in the case of complex learning (i.e. learning which aims at making learners acquire various linked skills or develop their behaviours). In comparison to traditional learning, this type of learning relies on co-development, collective responsibility and co-operation. Learners are the principal actors of their learning. Such trainings rest on rich and complex organizations, particularly for tutors, and it is difficult to apply innovative educational strategies. Our aim, in a bottom-up approach, is (1) to observe, according to Knowledge Management methods, a course characterized by these three criteria. The observed course concerns project management learning. Its observation allows us (2) to highlight and analyze the problems encountered by the actors (students, tutors, designers) and (3) to propose tools to solve or alleviate them. We particularly study the relevance and the limits of the existing monitoring and experience-sharing tools. We finally propose a result in the form of the tool MEShaT (Monitoring and Experience Sharing Tool) and end with the perspectives opened by this research.
Combining Activity Monitoring and Experience Sharing in Project-Based Learning for Tutors and Learners
The Automunge open source python library platform for tabular data pre-processing automates feature engineering data transformations of numerical encoding and missing data infill. Transformations are applied to received tidy data on bases fit to the properties of columns in a designated train set, for consistent and efficient application to subsequent data pipelines such as inference, where transformations may be applied to distinct columns in "family tree" sets with generations and branches of derivations. Included in the library of transformations are methods to extract structure from bounded categorical string sets by way of automated string parsing, in which comparisons between entries in the set of unique values are parsed to identify character subset overlaps, which may be encoded by appended columns of boolean overlap detection activations or by replacing string entries with identified overlap partitions. Further string parsing options, which may also be applied to unbounded categoric sets, include extraction of numeric substring partitions from entries and search functions to identify the presence of specified substring partitions. The aggregation of these methods into "family tree" sets of transformations is demonstrated for use to automatically extract structure from categoric string compositions in relation to the set of entries in a column, such as may be applied to prepare categoric string set encodings for machine learning without human intervention.
Parsed Categoric Encodings with Automunge
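A hedged, self-contained illustration of the overlap-parsing concept described in the abstract above: shared character substrings among a column's unique string entries are encoded as boolean activation columns. This is not the Automunge API or its actual parsing rules; the helper names, threshold, and toy data are hypothetical.

```python
# Detect shared substrings among unique categoric strings and encode them
# as boolean activation columns (conceptual illustration only).
import itertools
import pandas as pd

def shared_substrings(values, min_len=4):
    found = set()
    for a, b in itertools.combinations(values, 2):
        for i in range(len(a)):
            for j in range(i + min_len, len(a) + 1):
                if a[i:j] in b:
                    found.add(a[i:j])
    # keep only maximal overlaps (drop substrings of longer detected overlaps)
    return {s for s in found if not any(s != t and s in t for t in found)}

def parse_column(series, min_len=4):
    overlaps = shared_substrings(series.unique().tolist(), min_len)
    out = pd.DataFrame(index=series.index)
    for s in sorted(overlaps):
        out[f"{series.name}_has_{s}"] = series.str.contains(s, regex=False)
    return out

col = pd.Series(["north-east", "north-west", "south-east", "south-west"], name="region")
print(parse_column(col))
```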
Optical losses degrade the sensitivity of laser interferometric instruments. They reduce the number of signal photons and introduce technical noise associated with diffuse light. In quantum-enhanced metrology, they break the entanglement between correlated photons. Such decoherence is one of the primary obstacles in achieving high levels of quantum noise reduction in precision metrology. In this work, we compare direct measurements of cavity and mirror losses in the Caltech 40m gravitational-wave detector prototype interferometer with numerical estimates obtained from semi-analytic intra-cavity wavefront simulations using mirror surface profile maps. We show a unified approach to estimating the total loss in optical cavities (such as the LIGO gravitational detectors) that will lead towards the engineering of systems with minimum decoherence for quantum-enhanced precision metrology.
Scattering Loss in Precision Metrology due to Mirror Roughness
The sampling of the configuration space in diffusion Monte Carlo (DMC) is done using walkers moving randomly. In a previous work on the Hubbard model [\href{https://doi.org/10.1103/PhysRevB.60.2299}{Assaraf et al.~Phys.~Rev.~B \textbf{60}, 2299 (1999)}], it was shown that the probability for a walker to stay a certain amount of time in the same state obeys a Poisson law and that the on-state dynamics can be integrated out exactly, leading to an effective dynamics connecting only different states. Here, we extend this idea to the general case of a walker trapped within domains of arbitrary shape and size. The equations of the resulting effective stochastic dynamics are derived. The larger the average (trapping) time spent by the walker within the domains, the greater the reduction in statistical fluctuations. A numerical application to the Hubbard model is presented. Although this work presents the method for finite linear spaces, it can be generalized without fundamental difficulties to continuous configuration spaces.
Diffusion Monte Carlo using domains in configuration space
Liquid capillary-bridge formation between solid particles has a critical influence on the rheological properties of granular materials and, in particular, on the efficiency of fluidized bed reactors. The available analytical and semi-analytical methods have inherent limitations, and often do not cover important aspects, like the presence of non-axisymmetric bridges. Here, we conduct numerical simulations of the capillary bridge formation between equally and unequally sized solid particles using the lattice Boltzmann method, and provide an assessment of the accuracy of different families of analytical models. We find that some of the models considered perform better than others. However, all of them fail to predict the capillary force for contact angles larger than $\pi/2$, where a repulsive capillary force attempts to push the solid particle outwards to minimize the surface energy, especially at a small separation distance.
Capillary-bridge Forces Between Solid Particles: Insights from Lattice Boltzmann Simulations
The advances in deep neural networks (DNN) have significantly enhanced real-time detection of anomalous data in IoT applications. However, the complexity-accuracy-delay dilemma persists: complex DNN models offer higher accuracy, but typical IoT devices can barely afford the computation load, and the remedy of offloading the load to the cloud incurs long delay. In this paper, we address this challenge by proposing an adaptive anomaly detection scheme with hierarchical edge computing (HEC). Specifically, we first construct multiple anomaly detection DNN models with increasing complexity, and associate each of them to a corresponding HEC layer. Then, we design an adaptive model selection scheme that is formulated as a contextual-bandit problem and solved by using a reinforcement learning policy network. We also incorporate a parallelism policy training method to accelerate the training process by taking advantage of distributed models. We build an HEC testbed using real IoT devices, implement and evaluate our contextual-bandit approach with both univariate and multivariate IoT datasets. In comparison with both baseline and state-of-the-art schemes, our adaptive approach strikes the best accuracy-delay tradeoff on the univariate dataset, and achieves the best accuracy and F1-score on the multivariate dataset with only negligibly longer delay than the best (but inflexible) scheme.
Adaptive Anomaly Detection for Internet of Things in Hierarchical Edge Computing: A Contextual-Bandit Approach
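A hedged toy sketch of the contextual-bandit model-selection idea described in the abstract above: a context vector summarizing the incoming data window is used to pick one of K detection models, and the chosen arm's value estimate is updated from a reward trading off accuracy against delay. The paper trains a reinforcement-learning policy network; the epsilon-greedy linear stand-in, the context, and the reward shape below are assumptions of this sketch.

```python
# Epsilon-greedy contextual selection among K anomaly-detection models (toy).
import numpy as np

class EpsilonGreedyModelSelector:
    def __init__(self, n_models, context_dim, epsilon=0.1, lr=0.05, seed=0):
        self.w = np.zeros((n_models, context_dim))   # one linear value estimate per model/arm
        self.epsilon, self.lr = epsilon, lr
        self.rng = np.random.default_rng(seed)

    def select(self, context):
        if self.rng.random() < self.epsilon:
            return int(self.rng.integers(len(self.w)))
        return int(np.argmax(self.w @ context))

    def update(self, arm, context, reward):
        # move the chosen arm's value estimate toward the observed reward
        error = reward - self.w[arm] @ context
        self.w[arm] += self.lr * error * context

selector = EpsilonGreedyModelSelector(n_models=3, context_dim=4)
ctx = np.array([0.2, 1.0, 0.5, 0.1])       # hypothetical statistics of the data window
arm = selector.select(ctx)
reward = 0.9 - 0.1 * arm                   # toy reward: accuracy minus a delay penalty
selector.update(arm, ctx, reward)
```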