We show that the increasingly popular nonlinear optical technique of time-domain coherent anti-Stokes Raman scattering (CARS), which is usually understood in terms of the semiclassical time-dependent third-order polarization, can be equally explained in terms of the time-delayed version of the Yuratich equation so popular in traditional frequency-domain CARS. The method brings out the strong dependence of CARS time traces and time-delayed CARS lineshapes on the spectral envelope of the probe laser electric field. Analytic examples are given for experimental results that are otherwise treated only by numerical methods.
Spectral model of time-domain coherent anti-Stokes Raman scattering
We obtain closed form expressions for the expected conditional degree distribution and the joint degree distribution of the linear preferential attachment model for network growth in the steady state. We consider the multiple-destination preferential attachment growth model, where incoming nodes at each timestep attach to $\beta$ existing nodes, selected by degree-proportional probabilities. By the conditional degree distribution $p(\ell| k)$, we mean the degree distribution of nodes that are connected to a node of degree $k$. By the joint degree distribution $p(k,\ell)$, we mean the proportion of links that connect nodes of degrees $k$ and $\ell$. In addition to this growth model, we consider the shifted-linear preferential growth model and solve for the same quantities, as well as a closed form expression for its steady-state degree distribution.
Degree Correlation in Scale-Free Graphs
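The growth model described above (each incoming node attaches to $\beta$ existing nodes with degree-proportional probabilities) is easy to simulate directly. The sketch below is an illustrative implementation under my own assumptions (seed graph, function name); it is not the authors' code.

```python
import random
from collections import Counter

def preferential_attachment(T, beta=2, seed=0):
    """Grow a network for T nodes: at each timestep a new node attaches to
    `beta` distinct existing nodes, chosen with degree-proportional
    probability. Returns a Counter mapping node -> degree.
    Illustrative sketch only; seed graph is a complete graph on beta+1 nodes."""
    rng = random.Random(seed)
    targets = []          # each node appears once per unit of its degree
    degree = Counter()
    for i in range(beta + 1):          # complete seed graph
        for j in range(i):
            targets += [i, j]
            degree[i] += 1
            degree[j] += 1
    for t in range(beta + 1, T):
        chosen = set()
        while len(chosen) < beta:
            chosen.add(rng.choice(targets))   # degree-proportional sampling
        for c in chosen:
            targets += [t, c]
            degree[t] += 1
            degree[c] += 1
    return degree

deg = preferential_attachment(5000, beta=2)
print(min(deg.values()) == 2)   # True: the last node added has exactly beta edges
```

From such a simulated network one can tabulate the empirical conditional degree distribution $p(\ell|k)$ and joint distribution $p(k,\ell)$ for comparison with the closed-form steady-state expressions.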
Generalizing visual recognition models trained on a single distribution to unseen input distributions (i.e. domains) requires making them robust to superfluous correlations in the training set. In this work, we achieve this goal by altering the training images to simulate new domains and imposing consistent visual attention across the different views of the same sample. We discover that the first objective can be simply and effectively met through visual corruptions. Specifically, we alter the content of the training images using the nineteen corruptions of the ImageNet-C benchmark and three additional transformations based on Fourier transform. Since these corruptions preserve object locations, we propose an attention consistency loss to ensure that class activation maps across original and corrupted versions of the same training sample are aligned. We name our model Attention Consistency on Visual Corruptions (ACVC). We show that ACVC consistently achieves the state of the art on three single-source domain generalization benchmarks, PACS, COCO, and the large-scale DomainNet.
Attention Consistency on Visual Corruptions for Single-Source Domain Generalization
The thermodynamic properties of an ideal gas of charged vector bosons (with mass m and charge e) are studied in a strong external homogeneous magnetic field no greater than the critical value B_{cr}=m^2/e. The thermodynamic potential, after appropriate analytic continuation, is then used in the study of the spontaneous production of charged spin-one boson pairs from vacuum in the presence of a supercritical homogeneous magnetic field at finite temperature.
Pair Production of charged vector bosons in supercritical magnetic fields at finite temperatures
A slope $\frac pq$ is called a characterizing slope for a given knot $K_0\subset S^3$ if whenever the $\frac pq$--surgery on a knot $K\subset S^3$ is homeomorphic to the $\frac pq$--surgery on $K_0$ via an orientation preserving homeomorphism, then $K=K_0$. In a previous paper, we showed that, outside a certain finite set of slopes, only the negative integers could possibly be non-characterizing slopes for the torus knot $T_{5,2}$. Applying recent work of Baldwin--Hu--Sivek, we improve our result by showing that a nontrivial slope $\frac pq$ is a characterizing slope for $T_{5,2}$ if $\frac pq>-1$ and $\frac pq\notin \{0,1, \pm\frac12,\pm\frac13\}$. In particular, every nontrivial L-space slope of $T_{5,2}$ is characterizing for $T_{5,2}$. As a consequence, if a nontrivial $\frac pq$-surgery on a non-torus knot in $S^3$ yields a manifold of finite fundamental group, then $|p|>9$.
Characterizing slopes for torus knots, II
Let $G$ be an affine algebraic group with a reductive identity component $G^{0}$ acting regularly on an affine Krull scheme $X = {Spec} (R)$ over an algebraically closed field. Let $T$ be an algebraic subtorus of $G$ and suppose that the equality ${Q}(R)^{T}= {Q}(R^{T})$ of quotient fields holds. We will show: if $G$ is the centralizer of $T$ in $G$, then the pseudo-reflections of the action of $G$ on $R^{T}$ can be lifted to those on $R$.
Liftings of pseudo-reflection groups of toric quotients of Krull schemes
We study the phase diagram of a system of soft-core dipolar bosons confined to a two-dimensional optical lattice layer. We assume that dipoles are aligned perpendicular to the layer such that the dipolar interactions are purely repulsive and isotropic. We consider the full dipolar interaction and perform Path Integral Quantum Monte Carlo simulations using the Worm Algorithm. Besides a superfluid phase, we find various solid and supersolid phases. We show that, unlike what was found previously for the case of nearest-neighbor interactions, supersolid phases are stabilized not only by doping the solids with particles but with holes as well. We further study the stability of these quantum phases against thermal fluctuations. Finally, we discuss pair formation and the stability of the pair checkerboard phase formed in a bilayer geometry, and suggest experimental conditions under which the pair checkerboard phase can be observed.
Quantum Phases of Soft-Core Dipolar Bosons in Optical Lattices
In the quest for applicable quantum information technology, miniaturised, compact and scalable sources are of paramount importance. Here, we present a concept for the generation of 2-photon N00N states, without further post-processing, in a single non-linear optical element. Based upon a periodically poled waveguide coupler, we present the principle of state generation via type-0 parametric down-conversion inside this type of device. With the eigenmode description of the linear optical element, we utilise the delocalised photon-pair generation to generate a N00N state in the measurement basis. We show that we are able to eliminate the need for narrow-band spectral filtering, as well as for phase-stabilisation of the pump light, making this approach an elegant way to produce 2-photon N00N states.
N00N states from a single non-linear directional coupler
A comparison of structural features of quantum and classical physical theories, such as the information capacity of systems subject to these theories, requires a common formal framework for the presentation of corresponding concepts (such as states, observables, probability, entropy). Such a framework is provided by the notion of statistical model developed in the convexity approach to statistical physical theories. Here we use statistical models to classify and survey all possible types of embedding and extension of quantum probabilistic theories subject to certain reasonable constraints. It will be shown that the so-called canonical classical extension of quantum mechanics is essentially the only `good' representation of the quantum statistical model in a classical framework. All quantum observables are thus identified as fuzzy classical random variables.
Less (precision) is more (information): quantum information in fuzzy probability theory
We study the influence of thermal fluctuations in the phase diagram of a recently introduced two-dimensional phase field crystal model with an external pinning potential. The model provides a continuum description of pinned lattice systems allowing for both elastic deformations and topological defects. We introduce a non-conserved version of the model and determine the ground-state phase diagram as a function of lattice mismatch and strength of the pinning potential. Monte Carlo simulations are used to determine the phase diagram as a function of temperature near commensurate phases. The results show a rich phase diagram with commensurate, incommensurate and liquid-like phases with a topology strongly dependent on the type of ordered structure. A finite-size scaling analysis of the melting transition for the $c(2 \times 2)$ commensurate phase shows that the thermal correlation length exponent $\nu$ and specific heat behavior are consistent with the Ising universality class as expected from analytical arguments.
Thermal fluctuations and phase diagrams of the phase field crystal model with pinning
The high-Reynolds number stratified wake of a slender body is studied using a high-resolution hybrid simulation. The wake generator is a 6:1 prolate spheroid with a tripped boundary layer, the diameter-based body Reynolds number is $Re= U_\infty D/\nu = 10^5$, and the body Froude numbers are $Fr=U_\infty/ND=\{2,10,\infty\}$. The wake defect velocity ($U_d$) decays following three stages with different wake decay rates \citep{Spedding1997} as for a bluff body. However, the transition points among stages do not follow the expected $Nt = Nx/U_\infty$ values. Comparison with the wake of a circular disk in similar conditions \citep{Chongsiripinyo2020} quantifies the influence of the wake generator - bluff versus slender - in stratified flow. The strongly stratified $Fr=2$ wake is in a resonant state. The steady lee waves strongly modulate the mean flow and, relative to the disk, the $Fr=2$ spheroid wake shows an earlier transition from the non-equilibrium (NEQ) stage to the quasi-two-dimensional (Q2D) stage. The NEQ-Q2D transition is followed by a sharp increase in the turbulent kinetic energy and horizontal wake meanders. At $Fr=10$, the start of the NEQ stage is delayed. Transfers between kinetic energy and potential energy reservoirs (both mean and turbulence) are analyzed and the flows are compared in phase space (local Froude and Reynolds number as coordinates). Overall, the results of this study point to the difficulty of finding a universal framework for stratified wake evolution, independent of the features of the body, and provide insights into how buoyancy effects depend on the wake generator.
The high-Reynolds-number stratified wake of a slender body and its comparison with a bluff-body wake
We say that a finite group $G$ satisfies the independence property if, for every pair of distinct elements $x$ and $y$ of $G$, either $\{x,y\}$ is contained in a minimal generating set for $G$ or one of $x$ and $y$ is a power of the other. We give a complete classification of the finite groups with this property, and in particular prove that every such group is supersoluble. A key ingredient of our proof is a theorem showing that all but three finite almost simple groups $H$ contain an element $s$ such that the maximal subgroups of $H$ containing $s$, but not containing the socle of $H$, are pairwise non-conjugate.
Finite groups satisfying the independence property
Industrial robots play an increasingly important role in a growing number of fields. For example, robotics is used to increase productivity while reducing costs in various aspects of manufacturing. Since robots are often set up in production lines, the breakdown of a single robot has a negative impact on the entire process, in the worst case bringing the whole line to a halt until the issue is resolved, leading to substantial financial losses due to the unforeseen downtime. Therefore, predictive maintenance systems based on the internal signals of robots have gained attention as an essential component of robotics service offerings. The main shortcoming of existing predictive maintenance algorithms is that the extracted features typically differ significantly from the learnt model when the operation of the robot changes, incurring false alarms. In order to mitigate this problem, predictive maintenance algorithms require the model to be retrained with normal data of the new operation. In this paper, we propose a novel solution based on transfer learning to pass the knowledge of the trained model from one operation to another in order to prevent the need for retraining and to eliminate such false alarms. The deployment of the proposed unsupervised transfer learning algorithm on real-world datasets demonstrates that the algorithm can not only distinguish between operational and mechanical condition changes, but also yields a sharper deviation from the trained model in the case of a mechanical condition change, and thus detects mechanical issues with higher confidence.
Domain Adaptation for Robot Predictive Maintenance Systems
In nonlinear state-space models, sequential learning about the hidden state can proceed by particle filtering when the density of the observation conditional on the state is available analytically (e.g. Gordon et al., 1993). This condition need not hold in complex environments, such as the incomplete-information equilibrium models considered in financial economics. In this paper, we make two contributions to the learning literature. First, we introduce a new filtering method, the state-observation sampling (SOS) filter, for general state-space models with intractable observation densities. Second, we develop an indirect inference-based estimator for a large class of incomplete-information economies. We demonstrate the good performance of these techniques on an asset pricing model with investor learning applied to over 80 years of daily equity returns.
State-Observation Sampling and the Econometrics of Learning Models
Photosystem 0 (PS0) concerns a primitive mechanism for free-energy gain as ATP from fluctuating light during early evolution. The PS0 reaction centers had no reducing power: charge transport was only temporary. Light induced metastable dipoles within the reaction centers that generated a membrane potential. This in turn drove ATP synthesis by protons moving through the ATP synthase enzyme. After the decay of the dipole potential in the dark, the protons either (1) returned across the membrane by conduction or (2) were pumped back by ATP synthase, running backwards as an ATPase at a higher H+/ATP ratio. PS0 constitutes a link to previously proposed free-energy sources for early evolution that worked on thermal cycling. Several contemporary photosynthetic phenomena may be relics of PS0.
Photosystem 0, a proposed ancestral photosystem without reducing power that synthesized ATP during light-dark cycling
In this paper we report the effect of the jet-medium interplay as implemented in EPOS 3 on the ridge-like structure observed in high-multiplicity p-Pb collisions at $\sqrt{s_{NN}} = $ 5.02 TeV. EPOS 3 takes into account hydrodynamically expanding bulk matter, jets and the jet-medium interaction. The basis of this model is multiple scatterings, where each scattering finally produces a flux tube/string. In the higher-multiplicity event classes, where the flux-tube/string density is higher, there is a finite probability that the strings will pick up quarks and antiquarks (or diquarks) from the bulk (core) for flux-tube breaking to produce jet hadrons (corona) instead of producing them via the usual Schwinger mechanism. This will eventually create a correlation between core and corona and also influence the corona-corona correlation, as the corona particles containing quarks and antiquarks (or diquarks) from the bulk also carry the fluid information. We report the relative contributions of the core-core, core-corona, corona-core and corona-corona correlations towards the ridge in the high- and low-multiplicity p-Pb collisions at $\sqrt{s_{NN}} = $ 5.02 TeV using the data generated by EPOS 3. The multiplicity evolution of the ridges in all the cases is also reported.
Ridge from jet-medium interaction in p-Pb collisions at $\sqrt{s_{NN} }$ = 5.02 TeV
Taking into account the mixing effects between left- and right-handed top-squarks, we calculate the genuine supersymmetric electroweak correction to top quark production at the Tevatron in the minimal supersymmetric model. The analytic expressions of the corrections to both the parton-level cross section and the total hadronic cross section are presented. Some numerical examples are also given to show the size of the corrections.
Top-squark mixing effects in the supersymmetric electroweak corrections to top quark production at the Tevatron
Atomically thin layers of two-dimensional (2D) materials such as graphene, MoS2 and h-BN have immense potential as sensors and electronic devices thanks to their highly desirable electronic, mechanical, optical and heat transport properties. In particular, their extreme stiffness, tensile strength and low density allow for high-frequency electronic devices, resonators and ultra-sensitive detectors, providing realistic avenues for down-scaling electronic devices and nanoelectromechanical systems (NEMS). Whilst the nanoscale morphology and electronic properties of 2D materials can be studied using existing electron or scanning probe microscopy approaches, time-dependent phenomena on ns and shorter time-scales cannot be readily explored. Here we use the heterodyne principle to reach into this ns time-scale and create a local nanoscale probe for electrostatically induced actuation of a graphene resonator, with amplitude sensitivity down to the pm range and time sensitivity in the ns range. We experimentally observed response times of 20-120 ns for resonators with beam lengths of 180 nm to 2.5 um, in line with the theoretical predictions for such NEMS devices.
Nanoscale Mapping of Nanosecond Time-scale Electro-Mechanical Phenomena in Graphene NEMS
High-speed spectroscopy of two pulsating subdwarf B stars, KPD 2109+4401 and PB 8783, is presented. Radial motions are detected with the same frequencies as reported from photometric observations and with amplitudes of ~2 km/sec in two or more independent modes. These represent the first direct observations of surface motion due to multimode non-radial oscillations in subdwarf B stars. In the case of the sdB+F binary PB 8783, the velocities of both components are resolved; high-frequency oscillations are found only in the sdB star and not the F star. There also appears to be evidence for mutual motion of the binary components. If confirmed, it implies that the F-type companion is >~1.2 times more massive than the sdB star, while the amplitude of the F star acceleration over 4 hours would constrain the orbital period to lie between 0.5 and 3.2d.
Radial velocities of pulsating subdwarf B stars: KPD 2109+4401 and PB 8783
We report on the deterministic fabrication of sub-um mesa structures containing single quantum dots by in-situ electron-beam lithography. The fabrication method is based on a two-step lithography process using a low-temperature cathodoluminescence (CL) spectroscopy setup. In the first step the position and spectral features of single InGaAs quantum dots (QDs) are detected by CL. Then circular sub-um mesa-structures are exactly defined by high-resolution electron-beam lithography and subsequent etching in the second step. CL spectroscopy and micro-photoluminescence spectroscopy demonstrate the high optical quality of the single-QD mesa-structures with emission linewidths below 15 ueV and g(2)(0) = 0.04. Our lithography method allows for an alignment precision better than 100 nm, which paves the way for a fully-deterministic device technology using in-situ CL lithography.
In-situ electron-beam lithography of deterministic single-quantum-dot mesa-structures using low-temperature cathodoluminescence spectroscopy
For a multivariate random walk with i.i.d. jumps satisfying the Cramer moment condition and having a mean vector with at least one negative component, we derive the exact asymptotics of the probability of ever hitting the positive orthant that is being translated to infinity along a fixed vector with positive components. This problem is motivated by and extends results from a paper by F. Avram et al. (2008) on a two-dimensional risk process. Our approach combines the large deviation techniques from a recent series of papers by A. Borovkov and A. Mogulskii with new auxiliary constructions, which enable us to extend their results on hitting remote sets with smooth boundaries to the case of boundaries with a "corner" at the "most probable hitting point". We also discuss how our results can be extended to the case of more general target sets.
The exact asymptotics of the large deviation probabilities in the multivariate boundary crossing problem
We study the absorption of scalar fields by extreme/exotic compact objects (ECOs) -- horizonless alternatives to black holes -- via a simple model in which dissipative mechanisms are encapsulated in a single parameter. Trapped modes, localized between the ECO core and the potential barrier at the photosphere, generate Breit-Wigner-type spectral lines in the absorption cross section. Absorption is enhanced whenever the wave frequency resonates with a trapped mode, leading to a spectral profile which differs qualitatively from that of a black hole. We introduce a model based on Nariai spacetime, in which properties of the spectral lines are calculated in closed form. We present numerically calculated absorption cross sections and transmission factors for example scenarios, and show how the Nariai model captures the essential features. We argue that, in principle, ECOs can be distinguished from black holes through their absorption spectra.
Spectral lines of extreme compact objects
We update the bounds on fermions with electric charge $\epsilon e$ and mass $m_\epsilon$. For $m_\epsilon\lesssim m_e$ we find $10^{-15}\lesssim\epsilon<1$ is excluded by laboratory experiments, astrophysics and cosmology. For larger masses, the limits are less restrictive and depend on $m_\epsilon$. For milli-charged neutrinos, the limits are stronger, especially if the different flavors mix as suggested by current experimental evidence.
Updated Bounds on Milli-Charged Particles
Interferences are not positive-definite and can therefore change sign over the phase space. If the contributions of the regions where the interference is positive and negative nearly cancel each other, interference effects are hard to measure. In this paper, we propose a method to quantify the ability of an observable to separate an interference's positive and negative contributions and therefore to revive the interference effects in measurements. We apply this method to the anomalous gluon operator in the SMEFT, for which the interference suppression is well known. We show that we can get constraints on its coefficient, using the interference only, similar to those obtained by including the square of the new-physics amplitude.
Reviving the interference: framework and proof-of-principle for the anomalous gluon self-interaction in the SMEFT
Suppose that $m$ drivers each choose a preferred parking space in a linear car park with $n$ spots. In order, each driver goes to their chosen spot and parks there if possible, and otherwise takes the next available spot if it exists. If all drivers park successfully, the sequence of choices is called a parking function. Classical parking functions correspond to the case $m=n$; we study here combinatorial and probabilistic aspects of this generalized case. We construct a family of bijections between parking functions $\text{PF}(m, n)$ with $m$ cars and $n$ spots and spanning forests $\mathscr{F}(n+1, n+1-m)$ with $n+1$ vertices and $n+1-m$ distinct trees having specified roots. This leads to a bijective correspondence between $\text{PF}(m, n)$ and monomial terms in the associated Tutte polynomial of a disjoint union of $n-m+1$ complete graphs. We present an identity between the "inversion enumerator" of spanning forests with fixed roots and the "displacement enumerator" of parking functions. The displacement is then related to the number of graphs on $n+1$ labeled vertices with a fixed number of edges, where the graph has $n+1-m$ disjoint rooted components with specified roots. We investigate various probabilistic properties of a uniform parking function, giving a formula for the law of a single coordinate. As a side result we obtain a recurrence relation for the displacement enumerator. Adapting known results on random linear probes, we further deduce the covariance between two coordinates when $m=n$.
Parking functions: From combinatorics to probability
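The parking rule defined in the abstract above (each driver goes to their preferred spot and, if it is taken, rolls forward to the next free one) can be checked directly. The following is a minimal sketch of that definition; the function name and the counting check are my own illustration, not the authors' code.

```python
def is_parking_function(pref, n):
    """m drivers with 1-indexed preferences `pref` in a linear car park with
    n spots; each driver parks at the first free spot >= their preference.
    Returns True iff all drivers park successfully."""
    occupied = [False] * (n + 1)
    for p in pref:
        spot = p
        while spot <= n and occupied[spot]:
            spot += 1            # roll forward to the next available spot
        if spot > n:
            return False         # driver falls off the end of the car park
        occupied[spot] = True
    return True

print(is_parking_function([2, 1, 2], 3))   # True: drivers park at spots 2, 1, 3
print(is_parking_function([3, 3], 3))      # False: the second driver cannot park
```

Brute-force enumeration over all preference sequences agrees with the classical count $(n+1-m)(n+1)^{m-1}$ for $|\text{PF}(m,n)|$: for $m=2$, $n=3$ this gives $2\cdot 4 = 8$ of the $9$ possible sequences.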
In real-world dialogue systems, the ability to understand the user's emotions and interact anthropomorphically is of great significance. Emotion Recognition in Conversation (ERC) is one of the key ways to accomplish this goal and has attracted growing attention. How to model the context in a conversation is a central aspect and a major challenge of ERC tasks. Most existing approaches are unable to capture both global and local contextual information efficiently, and their network structures are unnecessarily complex in design. For this reason, in this work, we propose a straightforward Dual-stream Recurrence-Attention Network (DualRAN) based on Recurrent Neural Network (RNN) and Multi-head ATtention network (MAT). The proposed model eschews the complex network structure of current methods and focuses on combining recurrence-based methods with attention-based methods. DualRAN is a dual-stream structure mainly consisting of local- and global-aware modules, modeling a conversation from distinct perspectives. To achieve the local-aware module, we extend the structure of RNN, thus enhancing the expressive capability of the network. In addition, we develop two single-stream network variants for DualRAN, i.e., SingleRANv1 and SingleRANv2. We conduct extensive experiments on four widely used benchmark datasets, and the results reveal that the proposed model outperforms all baselines. Ablation studies further demonstrate the effectiveness of each component.
A Dual-Stream Recurrence-Attention Network with Global-Local Awareness for Emotion Recognition in Textual Dialogue
In high-energy nuclear collisions, light nuclei can be regarded as clusters of baryons and their yields are sensitive to the baryon density fluctuations. Thus, the production of light nuclei can be used to study the QCD phase transition, at which the baryon density fluctuation will be enhanced. The yield ratio of light nuclei, defined as $N(t)$$\times$$N(p)$/$N^2(d)$, is predicted to be a sensitive observable in the search for the first-order phase transition and/or the QCD critical point in heavy-ion collisions. In this paper, we present the energy and centrality dependence of (anti)deuteron and triton production in Au+Au collisions at $\sqrt{s_{\mathrm{NN}}}$ = 7.7, 11.5, 14.5, 19.6, 27, 39, 54.4, 62.4, and 200 GeV measured by the STAR experiment at RHIC. We show the beam-energy dependence of the coalescence parameters, $B_2(d)$ and $B_3(t)$, the particle ratios, $d/p$, $t/p$, and $t/d$, and the yield ratio $N(t)$$\times$$N(p)$/$N^2(d)$. More importantly, a non-monotonic energy dependence is observed for the yield ratio, $N(t)$$\times$$N(p)$/$N^2(d)$, in 0-10\% central Au+Au collisions, with a peak around 20-30 GeV. The physics implications for the QCD critical point search and for changes in the equation of state will be discussed.
Light Nuclei ($d$, $t$) Production in Au + Au Collisions at $\sqrt{s_{NN}}$ = 7.7 - 200 GeV
We have used the Goddard IRAM 2-Millimeter Observer (GISMO) with the 30 m IRAM telescope to carry out a 2 mm survey of the Galaxy's central molecular zone (CMZ). These observations detect thermal emission from cold ISM dust, thermal free-free emission from ionized gas, and nonthermal synchrotron emission from relatively flat-spectrum sources. Archival data sets spanning $3.6 \mu$m to 90 cm are used to distinguish different emission mechanisms. After the thermal emission of dust is modeled and subtracted, the remaining 2 mm emission is dominated by free-free emission, with the exception of the brightest nonthermal filament (NTF) that runs through the middle of the bundle of filaments known as the Radio Arc. This is the shortest wavelength at which any NTF has been detected. The GISMO observations clearly trace this NTF over a length of ~0.2$^\circ$, with a mean 2 mm spectral index which is steeper than at longer wavelengths. The 2 mm to 6 cm (or 20 cm) spectral index steepens from $\alpha \approx -0.2$ to $-0.7$ as a function of distance from the Sickle H II region, suggesting that this region is directly related to the NTF. A number of unresolved (at $21''$) 2 mm sources are found nearby. One appears to be thermal dust emission from a molecular cloud that is associated with an enigmatic radio point source whose connection to the Radio Arc is still debated. The morphology and colors at shorter IR wavelengths indicate that other unresolved 2 mm sources are likely to be compact H II regions.
2 mm GISMO Observations of the Galactic Center. II. A Nonthermal Filament in the Radio Arc and Compact Sources
The characteristics of electron refrigeration by means of tunnel junctions between superconducting and normal-metal electrodes are studied theoretically. A suitable approximation of the basic expression for the heat current across such tunnel junctions allows the investigation of several features of the device, such as its optimal bias voltage, its maximal heat current, its optimal working point, and the maximal attainable temperature reduction. Finally, the obtained results are compared with those of a recent experiment.
Electron Refrigeration in the Tunneling Approach
The photocatalytic water splitting reaction on the TiO2 surface is one of the fundamental issues that bears significant implications for hydrogen energy technology and has been extensively studied. However, the existence of the very first reaction step, the direct photo-dissociation of water, has been disregarded. Here, we provide unambiguous experimental evidence that adsorbed water molecules on the reduced rutile TiO2(110)-1\times1 surface can be dissociated under UV irradiation, using low-temperature scanning tunneling microscopy. It is identified that a water molecule at a fivefold-coordinated Ti (Ti5c) site can be photocatalytically dissociated, resulting in a hydroxyl at the Ti5c site and another hydroxyl at the bridge-oxygen row. Our findings reveal a missing link in the photocatalytic water splitting reaction chain, contributing greatly to the detailed understanding of the underlying mechanism.
Evidence of Photocatalytic Dissociation of Water on TiO2 with Atomic Resolution
We investigate the observational viability of a class of $\alpha$-attractor inflationary models in light of the most recent Cosmic Microwave Background (CMB) and Large-Scale Structure (LSS) data. By considering a double-well potential we perform a slow-roll analysis to study the behavior of this class of models, which is a continuous interpolation between chaotic inflation for large values of $\alpha$ and the universal attractor, i.e., $n_s=1- 2/N$ and $r=12\alpha/N^2$ for small $\alpha$, where $n_s$ is the scalar spectral index, $r$ is the tensor-to-scalar ratio, and $N$ is the e-fold number. In order to explore the parameter space of the model, we also perform an MCMC analysis and find $\alpha=7.56\pm 5.15$ ($1\sigma$).
Observational constraints on $\alpha$-attractor inflationary models with a Higgs-like potential
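The small-$\alpha$ attractor limit quoted in the abstract, $n_s = 1 - 2/N$ and $r = 12\alpha/N^2$, is simple enough to evaluate numerically. The sketch below (function name and chosen parameter values are my own, for illustration) shows the predictions for a typical e-fold number $N=60$ and $\alpha=1$.

```python
def attractor_predictions(alpha, N):
    """Universal small-alpha attractor predictions quoted in the abstract:
    scalar spectral index n_s = 1 - 2/N and tensor-to-scalar ratio
    r = 12*alpha/N**2. Illustrative only."""
    n_s = 1.0 - 2.0 / N
    r = 12.0 * alpha / N ** 2
    return n_s, r

n_s, r = attractor_predictions(alpha=1.0, N=60)
print(round(n_s, 4), round(r, 4))  # 0.9667 0.0033
```

For $N=60$ this gives $n_s \approx 0.967$ and, for $\alpha$ of order unity, $r$ of order $10^{-3}$, which is why such models sit comfortably inside current CMB constraints on $r$.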
We study the thermodynamics of a crystalline solid by applying intermediate statistics manifested by q-deformation. We base part of our study on both the Einstein and Debye models, exploring primarily deformed thermal and electrical conductivities as a function of the deformed Debye specific heat. The results reveal that the q-deformation acts in two different ways, though not necessarily as independent mechanisms. It acts as a factor of disorder or impurity, modifying the characteristics of a crystalline structure, which are phenomena described by q-bosons, and also as a manifestation of intermediate statistics, the B-anyons (or B-type systems). For the latter case, we have identified the Schottky effect, normally associated with high-Tc superconductors in the presence of rare-earth-ion impurities, and also the increase of the specific heat of solids beyond the Dulong-Petit limit at high temperature, usually related to the anharmonicity of interatomic interactions. Alternatively, since for q-bosons the statistics are in principle maintained, the effect of the deformation acts more slowly due to a small change in the crystal lattice. On the other hand, B-anyons, which belong to modified statistics, are more sensitive to the deformation.
Intermediate statistics in thermoelectric properties of solids
Multimodal semantic understanding often has to deal with uncertainty, meaning that the obtained messages tend to refer to multiple targets. Such uncertainty, which arises both between and within modalities, is problematic for interpretation. Little effort has been devoted to modeling this uncertainty, particularly in pre-training on unlabeled datasets and fine-tuning on task-specific downstream datasets. In this paper, we project the representations of all modalities as probabilistic distributions via a Probability Distribution Encoder (PDE) by utilizing sequence-level interactions. Compared to existing deterministic methods, such uncertainty modeling can convey richer multimodal semantic information and more complex relationships. Furthermore, we integrate uncertainty modeling with popular pre-training frameworks and propose suitable pre-training tasks: Distribution-based Vision-Language Contrastive learning (D-VLC), Distribution-based Masked Language Modeling (D-MLM), and Distribution-based Image-Text Matching (D-ITM). The fine-tuned models are applied to challenging downstream tasks, including image-text retrieval, visual question answering, visual reasoning, and visual entailment, and achieve state-of-the-art results.
MAP: Multimodal Uncertainty-Aware Vision-Language Pre-training Model
In this paper we discuss how the question about the rationality of L^2-Betti numbers is related to the Isomorphism Conjecture in algebraic K-theory and why in this context noncommutative localization appears as an important tool.
L^2-Betti numbers, isomorphism conjectures and noncommutative localization
We study torsion subgroups of elliptic curves with complex multiplication (CM) defined over number fields which admit a real embedding. We give a complete classification of the groups which arise up to isomorphism as the torsion subgroup of a CM elliptic curve defined over a number field of odd degree: there are infinitely many. Restricting to the case of prime degree, we show that there are only finitely many isomorphism classes. More precisely, there are six "Olson groups" which arise as torsion subgroups of CM elliptic curves over number fields of every degree, and there are precisely 17 "non-Olson" CM elliptic curves defined over a prime degree number field.
Torsion Points on CM Elliptic Curves Over Real Number Fields
In this reply we answer the comment by A. Dhar (cond-mat/0203077) on our Letter "Simple one-dimensional model of heat conduction which obeys Fourier's law" (Phys. Rev. Lett. 86, 5486 (2001), cond-mat/0104453).
Reply to comment on "Simple one-dimensional model of heat conduction which obeys Fourier's law"
We investigate the critical curve of the string tension sigma(T) as a function of temperature in quenched gauge invariant SU(3) lattice gauge theory. We extract sigma(T) from the colour averaged free energy of a static quark-antiquark pair. To compute the free energy, we utilize a pair of gauge invariant Polyakov loop and antiloop correlations, and apply the multihit procedure to enhance the signal to noise ratio. We find that the string tension departs from the zero temperature sigma(0) at T close to 0.5 Tc. We cover the relevant temperature range from 0.5 Tc up to the confinement temperature Tc using 57 different sets of pure gauge lattice configurations with four temporal extensions (4,6,8,12), different beta and a spatial volume of 48^3 in lattice units.
Lattice QCD computation of the SU(3) String Tension critical curve
The Kepler object KIC 12557548 shows irregular eclipsing behaviour with a constant 15.685 hr period, but strongly varying transit depth. In this paper we fit individual eclipses, in addition to fitting binned light curves, to learn more about the process underlying the eclipse depth variation. Additionally, we put forward observational constraints that any model of this planet-star system will have to match. We find two quiescent spells of ~30 orbital periods each where the transit depth is <0.1%, followed by relatively deep transits. Additionally, we find periods of on-off behaviour where >0.5% deep transits are followed by apparently no transit at all. Apart from these isolated events we find neither significant correlation between consecutive transit depths nor a correlation between transit depth and stellar intensity. We find a three-sigma upper limit for the secondary eclipse of 4.9*10^-5, consistent with a planet candidate with a radius of less than 4600 km. Using the short cadence data we find that a 1-D exponential dust tail model is insufficient to explain the data. We improved our model to a 2-D, two-component dust model with an opaque core and an exponential tail. Using this model we fit individual eclipses observed in short cadence mode. We find an improved fit of the data, quantifying earlier suggestions by Budaj (2013) of the necessity of at least two components. We find that deep transits have most absorption in the tail, and not in a disk-shaped, opaque coma, but the transit depth and the total absorption show no correlation with the tail length.
Analysis and interpretation of 15 quarters of Kepler data of the disintegrating planet KIC 12557548 b
A potential crewed mission to Mars would require us to solve a number of problems, including how to protect astronauts against the devastating effects of energetic charged particles from Solar and Galactic sources. The radiation environment on Mars is of particular interest, since maintaining optimal absorbed doses by astronauts is crucial to their survival. Here, we give an overview of the conditions on Mars, as determined by theoretical models and in-situ measurements, and present the main proposed strategies to mitigate radiation exposure while on Mars. Specifically, we focus on the passive shielding technique. Several widely used materials, along with some innovative ones and combinations of those, are studied for their behavior against Solar Energetic Particle Events and Galactic Cosmic Rays in the Martian environment. For that purpose, we implement GEANT4, a Monte Carlo toolkit developed at CERN specifically for simulating the interactions of radiation with matter. A description of our model is given, followed by the outputs of the numerical simulations. We conclude that hydrogen-rich materials act as better attenuators, as expected, but other materials can be helpful against cosmic rays too.
Radiation protection and shielding materials for crewed missions on the surface of Mars
The CP-violating phases in the soft supersymmetry-breaking sector in orbifold compactifications with a continuous Wilson line are investigated. In this case the modular symmetry is the Siegel modular group $Sp(4,Z)$ of genus two. In particular, we study the case that the hidden sector non-perturbative superpotential is determined by the Igusa cusp form ${\cal C}_{12}$ of modular weight 12. The effect of large non-perturbative corrections to the dilaton K\"ahler potential on the resulting CP-violating phases is also investigated.
The effect of Wilson line moduli on CP-violation by soft supersymmetry breaking terms
The compress-and-forward relay scheme developed by Cover and El Gamal (1979) is improved with a modification of the decoding process. The improvement follows from the realization that it is not necessary for the destination to decode the compressed observation of the relay; and even if the compressed observation is to be decoded, this can be done more easily by joint decoding with the original message, rather than successively. An extension to multiple relays is also discussed.
An Improvement of Cover/El Gamal's Compress-and-Forward Relay Scheme
This survey article is concerned with the study of bifurcations of piecewise-smooth maps. We review the literature on circle maps and quasi-contractions and provide paths through this literature to prove sufficient conditions for the occurrence of two types of bifurcation scenarios involving rich dynamics. The first scenario consists of the appearance of periodic orbits whose symbolic sequences and "rotation" numbers follow a Farey tree structure; the periods of the periodic orbits are given by consecutive addition. This is called the {\em period adding} bifurcation, and its proof relies on results for maps on the circle. In the second scenario, symbolic sequences are obtained by consecutive attachment of a given symbolic block and the periods of periodic orbits are incremented by a constant term. This is called the {\em period incrementing} bifurcation, and its proof relies on results for maps on the interval. We also discuss the expanding cases, as some of the partial results found in the literature also hold when these maps lose contractiveness. The higher-dimensional case is discussed by means of {\em quasi-contractions}. We also provide applied examples from control theory, power electronics and neuroscience where these results can be applied to obtain precise descriptions of their dynamics.
The Period adding and incrementing bifurcations: from rotation theory to applications
Hierarchical multi-label text classification aims to classify the input text into multiple labels, among which the labels are structured and hierarchical. It is a vital task in many real-world applications, e.g. scientific literature archiving. In this paper, we survey the recent progress of hierarchical multi-label text classification, including the open-source datasets, the main methods, evaluation metrics, learning strategies and the current challenges. A few future research directions are also listed for the community to further improve this field.
Recent Advances in Hierarchical Multi-label Text Classification: A Survey
Coulomb interaction between charged particles is a well-known phenomenon in many areas of research. In general, the Coulomb repulsion force broadens the pulse width of an electron bunch and limits the temporal resolution of many scientific facilities such as ultrafast electron diffraction and x-ray free-electron lasers. Here we demonstrate a scheme that actually makes use of the Coulomb force to compress a relativistic electron beam. Furthermore, we show that the Coulomb-driven bunch compression process does not introduce additional timing jitter, in sharp contrast to the conventional radio-frequency buncher technique. Our work not only leads to enhanced temporal resolution in electron-beam-based ultrafast instruments that may provide new opportunities in probing material systems far from equilibrium, but also opens a promising direction for advanced beam manipulation through self-field interactions.
Coulomb-driven relativistic electron beam compression
A weighted digraph is a digraph such that every arc is assigned a nonnegative number, called the weight of the arc. The weighted outdegree of a vertex $v$ in a weighted digraph $D$ is the sum of the weights of the arcs with $v$ as their tail, and the weight of a directed cycle $C$ in $D$ is the sum of the weights of the arcs of $C$. In this note we prove that if every vertex of a weighted digraph $D$ with order $n$ has weighted outdegree at least 1, then there exists a directed cycle in $D$ with weight at least $1/\log_2 n$. This proves a conjecture of Bollob\'{a}s and Scott up to a constant factor.
A note on heavy cycles in weighted digraphs
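The bound above can be checked by brute force on small instances; a sketch, where the toy digraph and its weights are illustrative, and the exhaustive enumeration is exponential-time, so it is suitable for tiny $n$ only:

```python
import math
from itertools import permutations

def max_cycle_weight(n, w):
    """Brute-force the maximum weight of a directed cycle in a digraph on
    vertices 0..n-1; w maps arcs (u, v) to nonnegative weights (absent arcs
    are simply missing from w)."""
    best = 0.0
    for length in range(1, n + 1):
        for cyc in permutations(range(n), length):
            arcs = list(zip(cyc, cyc[1:] + cyc[:1]))  # close the cycle
            if all(a in w for a in arcs):
                best = max(best, sum(w[a] for a in arcs))
    return best

# Toy digraph on n = 4 vertices in which every vertex has weighted
# outdegree at least 1, matching the theorem's hypothesis
w = {(0, 1): 0.5, (0, 2): 0.5, (1, 2): 1.0, (2, 3): 1.0, (3, 0): 1.0}
heaviest = max_cycle_weight(4, w)
bound = 1.0 / math.log2(4)  # the guaranteed lower bound 1/log2(n)
```

Here the cycle 0→1→2→3→0 has weight 3.5, comfortably above the guaranteed bound of 0.5.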
Electrically driven spin resonance (EDSR) is an established tool for controlling semiconductor spin qubits. Here, we theoretically study a frequency-mixing variant of EDSR, where two driving tones with different drive frequencies are applied, and the resonance condition connects the spin Larmor frequency with the sum of the two drive frequencies. Focusing on flopping-mode operation, we calculate the parameter dependence of the Rabi frequency and the Bloch-Siegert shift. A shared-control spin qubit architecture could benefit from this bichromatic EDSR scheme, as it enables simultaneous single-qubit gates.
Electrically driven spin resonance with bichromatic driving
Understanding relationships between feature variables is one important way humans use to make decisions. However, state-of-the-art deep learning studies either focus on task-agnostic statistical dependency learning or do not model explicit feature dependencies during prediction. We propose a deep neural network framework, dGAP, to learn neural dependency Graph and optimize structure-Aware target Prediction simultaneously. dGAP trains towards a structure self-supervision loss and a target prediction loss jointly. Our method leads to an interpretable model that can disentangle sparse feature relationships, informing the user how relevant dependencies impact the target task. We empirically evaluate dGAP on multiple simulated and real datasets. dGAP is not only more accurate, but can also recover correct dependency structure.
Relate and Predict: Structure-Aware Prediction with Jointly Optimized Neural DAG
We summarise the status and recent results of the European Twisted Mass Collaboration (ETMC). The collaboration has generated gauge configurations for three different values of the lattice spacing smaller than or equal to 0.1 fm, and values of the charged pseudo scalar mass as low as 300 MeV, with two flavours of maximally twisted mass quarks. We provide evidence that O(a) improvement works very well with maximally twisted mass fermions and that also higher order lattice artifacts appear to be small. Currently the only quantity in the light meson and baryon sector where cut-off effects are visible is the neutral pseudo scalar meson mass, and we present an attempt to understand this from a theoretical point of view. We describe the finite size effects and the quark mass dependence of the mass and decay constant of the (charged) pseudo scalar meson with chiral perturbation theory formulae, and our current estimate for the low energy constants l_{3,4} is l_3=3.44(8)(35) and l_4=4.61(4)(11). Results for the average up-down, the strange and the charm quark mass and the chiral condensate are also presented.
Lattice QCD with two light Wilson quarks and maximally twisted mass
We calculate the radiated energy to $O(\hbar)$ from a charged wave-packet in a uniform magnetic field. In the high-speed and weak-field limit, while the non-commutativity of the system reduces the classical radiation, the additional corrections originating from the velocity uncertainty of the wave-packet lead to an enhancement of the radiation.
Quantum Corrections to Synchrotron Radiation from Wave-Packet
We apply Monte Carlo Renormalization Group techniques to the crumpling transition in random surface models of fixed connectivity. This transition is notoriously difficult to treat numerically. We employ here a Fourier-accelerated Langevin algorithm in conjunction with a novel blocking procedure in momentum space which has proven extremely successful in $\lambda\phi^4$ theory. We perform two successive renormalizations in lattices with up to $64^2$ sites. We obtain a result for the critical exponent $\nu$ in general agreement with previous estimates and with similar error bars, but with much less computational effort. We also measure $\eta$ with great accuracy. As a by-product we are able to determine the fractal dimension $d_H$ of random surfaces at the crumpling transition.
M.C.R.G. Study of Fixed-connectivity Surfaces
We consider a family of slightly extended versions of Raynaud's surfaces X over a field of positive characteristic with Mumford-Szpiro type polarizations Z, which have Kodaira non-vanishing H^1(X, Z^{-1}) \ne 0. The surfaces are at least normal, and smooth under a special condition. We compute the cohomologies H^i(X, Z^n) for integers i and n, and study their (non-)vanishing. Finally, we give a fairly large family of non-Mumford-Szpiro type polarizations Z_{a,b} with Kodaira non-vanishing.
On non-vanishing of cohomologies of generalized Raynaud polarized surfaces
Keyphrase extraction from a given document is the task of automatically extracting salient phrases that best describe the document. This paper proposes a novel unsupervised graph-based ranking method to extract high-quality phrases from a given document. We obtain the contextualized embeddings from pre-trained language models enriched with topic vectors from Latent Dirichlet Allocation (LDA) to represent the candidate phrases and the document. We introduce a scoring mechanism for the phrases using the information obtained from contextualized embeddings and the topic vectors. The salient phrases are extracted using a ranking algorithm on an undirected graph constructed for the given document. In the undirected graph, the nodes represent the phrases, and the edges between the phrases represent the semantic relatedness between them, weighted by a score obtained from the scoring mechanism. To demonstrate the efficacy of our proposed method, we perform several experiments on open source datasets in the science domain and observe that our novel method outperforms existing unsupervised embedding based keyphrase extraction methods. For instance, on the SemEval2017 dataset, our method advances the F1 score from 0.2195 (EmbedRank) to 0.2819 at the top 10 extracted keyphrases. Several variants of the proposed algorithm are investigated to determine their effect on the quality of keyphrases. We further demonstrate the ability of our proposed method to collect additional high-quality keyphrases that are not present in the document from external knowledge bases like Wikipedia for enriching the document with newly discovered keyphrases. We evaluate this step on a collection of annotated documents. The F1-score at the top 10 expanded keyphrases is 0.60, indicating that our algorithm can also be used for 'concept' expansion using external knowledge.
Topic Aware Contextualized Embeddings for High Quality Phrase Extraction
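Graph-based ranking of the kind described above can be illustrated with a weighted PageRank power iteration; this is a generic sketch of ranking phrases on a relatedness-weighted undirected graph, not the paper's exact scoring mechanism, and the phrase names and weights are made up:

```python
def weighted_pagerank(adj, damping=0.85, iters=100):
    """Power iteration for PageRank on an undirected weighted graph.
    adj[u][v] is the (symmetric) relatedness weight between phrases u and v."""
    nodes = list(adj)
    wdeg = {u: sum(adj[u].values()) for u in nodes}  # total incident weight
    score = {u: 1.0 / len(nodes) for u in nodes}
    for _ in range(iters):
        score = {v: (1.0 - damping) / len(nodes)
                    + damping * sum(score[u] * adj[u][v] / wdeg[u]
                                    for u in nodes if v in adj[u])
                 for v in nodes}
    return score

# Toy phrase graph: two strongly related phrases and one weakly related one
adj = {
    "deep learning":  {"neural network": 1.0, "topic model": 0.1},
    "neural network": {"deep learning": 1.0, "topic model": 0.1},
    "topic model":    {"deep learning": 0.1, "neural network": 0.1},
}
scores = weighted_pagerank(adj)
```

Phrases with strong semantic ties to the rest of the graph accumulate the highest scores, which is the intuition behind extracting them as salient keyphrases.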
The KKLT construction of dS vacua relies on an uplift term that arises from an anti-D3-brane. It was argued by Kachru, Pearson and Verlinde that this anti-D3-brane is an excited state in a supersymmetric theory since it can decay to a supersymmetric ground state. Hence the anti-D3-brane breaks supersymmetry spontaneously and one should be able to package all the world-volume fields on the anti-D3-brane into a four dimensional $\cal{N}=1$ supersymmetric action. Here we extend previous results and identify the constrained superfields that correspond to all the degrees of freedom on the anti-D3-brane. In particular, we show explicitly that the four 4D worldvolume spinors give rise to constrained chiral multiplets $S$ and $Y^i$, $i=1,2,3$ that satisfy $S^2=SY^i=0$. We also conjecture (and provide evidence in a forthcoming publication) that the vector field $A_\mu$ and the three scalars $\phi^i$ give rise to a field strength multiplet $W_\alpha$ and three chiral multiplets $H^i$ that satisfy the constraints $S W_\alpha= \bar{D}_{\dot \alpha} (S \bar H^i)=0$. This is the first time that such constrained multiplets appear in string theory constructions.
Constrained superfields from an anti-D3-brane in KKLT
In this paper we study the set of digit frequencies that are realised by elements of the set of $\beta$-expansions. The main result of this paper demonstrates that as $\beta$ approaches $1,$ the set of digit frequencies that occur amongst the set of $\beta$-expansions fills out the simplex. As an application of our main result, we obtain upper bounds for the local dimension of certain biased Bernoulli convolutions.
Exceptional digit frequencies and expansions in non-integer bases
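For $\beta \in (1,2)$, the digit frequency along an individual expansion is easy to sample with the greedy algorithm; a sketch, where the point $x=0.6$ and base $\beta=1.8$ are arbitrary illustrative choices:

```python
def greedy_digits(x, beta, n):
    """Greedy beta-expansion of x in [0, 1) for beta in (1, 2):
    d_k = floor(beta * x_{k-1}), x_k = beta * x_{k-1} - d_k, so every digit
    is 0 or 1 and x = sum_k d_k beta^{-k} + x_n beta^{-n}."""
    digits = []
    for _ in range(n):
        x *= beta
        d = int(x)
        digits.append(d)
        x -= d
    return digits

# Empirical frequency of the digit 1 along one greedy expansion
digits = greedy_digits(0.6, 1.8, 1000)
freq_of_one = sum(digits) / len(digits)
```

The greedy expansion is only one element of the (typically uncountable) set of $\beta$-expansions of $x$; the paper's result concerns the range of frequencies achievable over that whole set as $\beta \to 1$.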
We identify the class of elementary groups: the smallest class of totally disconnected locally compact second countable (t.d.l.c.s.c.) groups that contains the profinite groups and the discrete groups, is closed under group extensions of profinite groups and discrete groups, and is closed under countable increasing unions. We show this class enjoys robust permanence properties. In particular, it is closed under group extension, taking closed subgroups, taking Hausdorff quotients, and inverse limits. A characterization of elementary groups in terms of well-founded descriptive-set-theoretic trees is then presented. We conclude with three applications. We first prove structure results for general t.d.l.c.s.c. groups. In particular, we show a compactly generated t.d.l.c.s.c. group decomposes into elementary groups and topologically characteristically simple groups via group extension. We then prove two local-to-global structure theorems: Locally solvable t.d.l.c.s.c. groups are elementary and [A]-regular t.d.l.c.s.c. groups are elementary.
Elementary totally disconnected locally compact groups
The Kondo effect, a hallmark of strong correlation physics, is characterized by the formation of an extended cloud of singlet states around magnetic impurities at low temperatures. While many implications of the Kondo cloud's existence have been verified, the existence of the singlet cloud itself has not been directly demonstrated. We suggest a route for such a demonstration by considering an observable that has no classical analog, but is still experimentally measurable: "singlet weights", or projections onto particular entangled two-particle states. Using approximate theoretical arguments, we show that it is possible to construct highly specific energy- and position-resolved probes of Kondo correlations. Furthermore, we consider a quantum transport setup that can be driven away from equilibrium by a bias voltage. There, we show that singlet weights are enhanced by voltage even as the Kondo effect is weakened by it. This exposes a patently nonequilibrium mechanism for the generation of Kondo-like entanglement that is inherently different from its equilibrium counterpart.
Resolving the nonequilibrium Kondo singlet in energy- and position-space using quantum measurements
Cell spreading requires a major reorganisation of the actin cytoskeleton, from a cortical structure to a lamellipodium where filaments are mostly parallel to the substrate. We propose a model inspired by the physics of nematic liquid crystals and fluid membranes, in which the coupling between actin mechanics, filament orientation, and the local curvature of the cell membrane naturally yields the collective reorientation of actin filaments at the highly curved edge of a spreading cell. Filament orientation increases the traction force exerted by the frictional flow of polymerising actin on the substrate, creating a positive feedback loop between edge curvature, filament alignment, and traction force that promotes cell spreading. We establish the condition under which this feedback leads to a full wetting transition, which we interpret as the initiation of a lamellipodium, and we uncover the existence of bi-stability between partial and full spreading, which could trigger spontaneous cell polarization and lead to migration.
Model of lamellipodium initiation during cell spreading
Using the formalism of the conditional amplitude, we study the response part of the exchange-correlation potential in the strong-coupling limit of density functional theory, analysing its peculiar features and comparing it with the response potential averaged over the coupling constant for small atoms and for the hydrogen molecule. We also use a simple one-dimensional model of a stretched heteronuclear molecule to derive exact properties of the response potential in the strong-coupling limit. The simplicity of the model allows us to unveil relevant features also of the exact Kohn-Sham potential and its different components, namely the appearance of a second peak in the correlation kinetic potential on the side of the most electronegative atom.
Response potential in the strong-interaction limit of DFT: Analysis and comparison with the coupling-constant average
We construct a formalism for evolving spherically symmetric black hole initial data sets within a canonical approach to quantum gravity. This problem can be formulated precisely in quantum reduced loop gravity, a framework which has been successfully applied to give a full theory derivation of loop quantum cosmology. We extend this setting by implementing a particular choice of partial gauge which is then used to select a kinematical Hilbert space where the symmetry reduction is imposed through semiclassical states. The main result of this investigation is an effective Hamiltonian that can be used to solve for quantum black hole geometries by evolving classical black hole initial data sets.
Quantum evolution of black hole initial data sets: Foundations
A detailed study of the event-by-event fluctuation of the maximum particle density of the produced particles in a narrow pseudo-rapidity interval, in terms of the scaled variance $\omega$, has been carried out for 16O-AgBr, 28Si-AgBr and 32S-AgBr interactions at an incident momentum of 4.5 AGeV/c. For all the interactions the values of the scaled variance are found to be greater than zero, indicating the presence of strong event-by-event fluctuations of the maximum particle density in the multiparticle production process. The event-by-event fluctuations are found to decrease with increasing pseudo-rapidity interval. The experimental analysis has been compared with the results obtained from events simulated by the Ultra-relativistic Quantum Molecular Dynamics (UrQMD) model. The UrQMD model could not replicate the experimental results.
Event-By-Event Fluctuation of Maximum Particle Density in Narrow Pseudo-Rapidity Interval at a Few AGeV/c
A microscopic model of atomic diffusion is considered to describe the short-range order relaxation kinetics in f.c.c. Ni-Fe permalloys. The model takes into account both the discrete and anisotropic character of atomic jumps within the long-range field of concentration heterogeneities of the interacting atoms. The diffusion coefficients and activation energies for the disordered Ni-Fe permalloy are estimated from the evaluated atomic-jump probabilities. As shown, increasing the temperature at fixed composition influences the 'potential' field of interatomic interaction ambiguously: the field 'potential' increases for certain coordination shells and decreases for others. Although a temperature increase generally promotes all atomic-jump probabilities, the weakening of the 'potential' field that the atoms of a given element generate at distant sites, caused by its concentration heterogeneities, increases the jump probabilities of that element primarily into sites more distant from the 'source' of the heterogeneity. Within the framework of the static concentration waves method along with the self-consistent field approximation, an Onsager-type kinetics equation is obtained to describe the long-range order relaxation of the L12-type superstructure. To calculate diffusivities for the ordered Ni3Fe permalloy, independent diffraction data on the long-range order parameter relaxation are used. Theoretical curves of the long-range order time evolution for non-stoichiometric f.c.c. Ni-Fe permalloys are plotted. Decreasing the concentration of the alloying element decelerates the change of the long-range order parameter and increases its relaxation time.
Diffusivities and kinetics of short-range and long-range orderings in Ni-Fe permalloys
We search for possible correlations between neutron star observables and thermodynamic quantities that characterize high density nuclear matter. We generate a set of model-independent equations of state describing stellar matter from a Taylor expansion around saturation density. Each equation of state, which is a functional of the nuclear matter parameters, is thermodynamically consistent, causal and compatible with astrophysical observations. We find that the neutron star tidal deformability and radius are strongly correlated with the pressure, the energy density and the sound velocity at different densities. Similar correlations are also exhibited by a large set of mean-field models based on non-relativistic and relativistic nuclear energy density functionals. These model-independent correlations can be employed to constrain the equation of state at different densities above saturation from measurements of NS properties with multi-messenger observations. In particular, precise constraints on the radius of PSR J0030+0451 thanks to NICER observations would allow us to better infer the properties of matter around twice the nuclear saturation density.
Empirical constraints on the high-density equation of state from multi-messenger observables
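The Taylor expansion around saturation density that generates these model-independent equations of state can be sketched for symmetric matter as follows; the parameter values $E_{sat}$, $K_{sat}$, $Q_{sat}$ and $n_{sat}$ below are typical empirical numbers assumed for illustration, not the paper's constraints:

```python
def energy_per_nucleon(n, e_sat=-15.8, k_sat=230.0, q_sat=300.0, n_sat=0.16):
    """Taylor expansion of the symmetric nuclear matter energy per nucleon
    (MeV) around the saturation density n_sat (fm^-3):
        e(n) = E_sat + (K_sat/2) x^2 + (Q_sat/6) x^3,
        x = (n - n_sat) / (3 n_sat).
    There is no linear term because the energy is minimal at saturation."""
    x = (n - n_sat) / (3.0 * n_sat)
    return e_sat + 0.5 * k_sat * x ** 2 + q_sat * x ** 3 / 6.0
```

Varying the expansion coefficients within their empirical uncertainties is what produces the family of model-independent equations of state referred to in the abstract.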
We study the single and double lepton polarization asymmetries in the semileptonic $B$ meson decays $B \to K_1(1270) \ell^+ \ell^-$ ($\ell \equiv e$, $\mu$, $\tau$), where the strange $P$-wave meson $K_1(1270)$ is a mixture of the $K_{1A}$ and $K_{1B}$, which are the $1^3P_1$ and $1^1P_1$ states, respectively. The lepton polarization asymmetries show a relatively strong dependence in various regions of the dileptonic invariant mass. They can also be used to determine the $K_1(1270)$--$K_1(1400)$ mixing angle $\theta_{K_1}$ and to probe new physics effects. Furthermore, it is shown that these asymmetries in the $B\to K_1(1270)\ell^+\ell^-$ decay are more sensitive to the dileptonic invariant mass than those of the $B\to K^*\ell^+\ell^-$ decay.
Lepton polarization in $B \to K_1 \ell^+ \ell^-$ Decays
Large language models (LLMs) have played a pivotal role in revolutionizing various facets of our daily existence. Solving attention regression is a fundamental task in optimizing LLMs. In this work, we focus on giving a provable guarantee for the one-layer attention network objective function $L(X,Y) = \sum_{j_0 = 1}^n \sum_{i_0 = 1}^d ( \langle \langle \exp( \mathsf{A}_{j_0} x ) , {\bf 1}_n \rangle^{-1} \exp( \mathsf{A}_{j_0} x ), A_{3} Y_{*,i_0} \rangle - b_{j_0,i_0} )^2$. Here $\mathsf{A} \in \mathbb{R}^{n^2 \times d^2}$ is the Kronecker product between $A_1 \in \mathbb{R}^{n \times d}$ and $A_2 \in \mathbb{R}^{n \times d}$, $A_3$ is a matrix in $\mathbb{R}^{n \times d}$, and $\mathsf{A}_{j_0} \in \mathbb{R}^{n \times d^2}$ is the $j_0$-th block of $\mathsf{A}$. The $X, Y \in \mathbb{R}^{d \times d}$ are the variables we want to learn. $B \in \mathbb{R}^{n \times d}$, and $b_{j_0,i_0} \in \mathbb{R}$ is the entry at the $j_0$-th row and $i_0$-th column of $B$; $Y_{*,i_0} \in \mathbb{R}^d$ is the $i_0$-th column vector of $Y$, and $x \in \mathbb{R}^{d^2}$ is the vectorization of $X$. In a multi-layer LLM network, the matrix $B \in \mathbb{R}^{n \times d}$ can be viewed as the output of a layer, and $A_1= A_2 = A_3 \in \mathbb{R}^{n \times d}$ can be viewed as the input of a layer. The matrix version of $x$ can be viewed as $QK^\top$ and $Y$ can be viewed as $V$. We provide an iterative greedy algorithm to train the loss function $L(X,Y)$ up to accuracy $\epsilon$ that runs in $\widetilde{O}( ({\cal T}_{\mathrm{mat}}(n,n,d) + {\cal T}_{\mathrm{mat}}(n,d,d) + d^{2\omega}) \log(1/\epsilon) )$ time. Here ${\cal T}_{\mathrm{mat}}(a,b,c)$ denotes the time of multiplying an $a \times b$ matrix by a $b \times c$ matrix, and $\omega\approx 2.37$ denotes the exponent of matrix multiplication.
A Fast Optimization View: Reformulating Single Layer Attention in LLM Based on Tensor and SVM Trick, and Solving It in Matrix Multiplication Time
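The objective $L(X,Y)$ above can be evaluated numerically; a pure-Python sketch for small $n$ and $d$, where the row-major vec convention for $x$ is an assumption on our part:

```python
import math

def attention_loss(X, Y, A1, A2, A3, B):
    """Evaluate the abstract's objective L(X, Y) for small n, d.
    With A = A1 (x) A2 (Kronecker product) and x = vec(X), the i-th entry
    of A_{j0} x is sum_{k0,k1} A1[j0][k0] * A2[i][k1] * X[k0][k1]."""
    n, d = len(A1), len(A1[0])
    loss = 0.0
    for j0 in range(n):
        u = [sum(A1[j0][k0] * A2[i][k1] * X[k0][k1]
                 for k0 in range(d) for k1 in range(d)) for i in range(n)]
        e = [math.exp(v) for v in u]
        z = sum(e)                  # <exp(A_{j0} x), 1_n>
        s = [v / z for v in e]      # softmax-normalized vector
        for i0 in range(d):
            # <s, A3 Y_{*,i0}> - b_{j0, i0}
            pred = sum(s[i] * sum(A3[i][k] * Y[k][i0] for k in range(d))
                       for i in range(n))
            loss += (pred - B[j0][i0]) ** 2
    return loss
```

For $n = d = 1$ the softmax of a single entry is 1, so the prediction reduces to $A_3 Y$ and the loss to $(A_3 Y - b)^2$, which gives a quick sanity check of the implementation.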
Taxonomies are semantic hierarchies of concepts. One limitation of current taxonomy learning systems is that they define concepts as single words. This position paper argues that contextualized word representations, which recently achieved state-of-the-art results on many competitive NLP tasks, are a promising method to address this limitation. We outline a novel approach for taxonomy learning that (1) defines concepts as synsets, (2) learns density-based approximations of contextualized word representations, and (3) can measure similarity and hypernymy among them.
Learning Taxonomies of Concepts and not Words using Contextualized Word Representations: A Position Paper
Normative modelling is an emerging method for understanding the heterogeneity underlying brain disorders like Alzheimer's disease (AD) by quantifying how each patient deviates from the expected normative pattern learned from a healthy control distribution. Since AD is a multifactorial disease involving more than one biological pathway, multimodal magnetic resonance imaging (MRI) neuroimaging data can provide complementary information about the disease heterogeneity. However, existing deep-learning-based normative models for multimodal MRI data use unimodal autoencoders with a single encoder and decoder, which may fail to capture the relationships between brain measurements extracted from different MRI modalities. In this work, we propose a multimodal variational autoencoder (mmVAE)-based normative modelling framework that can capture the joint distribution between modalities to identify abnormal brain structural patterns in AD. Our multimodal framework takes as input FreeSurfer-processed brain region volumes from T1-weighted (cortical and subcortical) and T2-weighted (hippocampal) scans of cognitively normal participants to learn the morphological characteristics of the healthy brain. The estimated normative model is then applied to AD patients to quantify the deviations in brain volumes and identify the abnormal brain structural patterns associated with the different AD stages. Our experimental results show that modelling the joint distribution of the MRI modalities generates deviation maps that are more sensitive to disease staging within AD, correlate better with patient cognition, and yield a larger number of brain regions with statistically significant deviations than a unimodal baseline model with all modalities concatenated as a single input.
Normative Modeling using Multimodal Variational Autoencoders to Identify Abnormal Brain Structural Patterns in Alzheimer Disease
We propose a novel mechanism for the Kondo effect driven by a chirality imbalance (or chiral chemical potential) of relativistic light fermions. This effect is realized by the mixing between a right- or left-handed fermion and a heavy impurity in chirality-imbalanced matter, even at zero density, and is thus distinct from the usual Kondo effect induced by finite density. We derive the Kondo effect from both a perturbative calculation and a mean-field approach, and we also discuss its temperature dependence. The Kondo effect at nonzero chiral chemical potential can be tested by future lattice simulations.
Kondo effect driven by chirality imbalance
The second-generation (2G) mobile systems were developed in response to the growing demand for a system that met mobile communication needs while providing greater interoperability with other systems. International organizations were crucial in developing a system that would offer better services, be more transparent, and interoperate better with other networks. Sadly, the aim of having a single set of standards for networks worldwide was not realized by the 2G standards, and the third generation (3G) was born. In Europe it was called the Universal Mobile Telecommunications System (UMTS), driven by the European Telecommunications Standards Institute (ETSI); IMT-2000 is the International Telecommunication Union Telecommunication Standardization Sector (ITU-T) name for the 3G network. Wideband code division multiple access (WCDMA) is the air-interface technology for UMTS. This platform offers many Internet-based services, along with video calling, imaging, and more. Further advancement of mobile network technology led to Long Term Evolution (LTE), a technology referred to as 4G. The primary goal of LTE was to improve the speed and capacity of mobile networks while lowering latency; with the move to an all-IP system, the design of mobile networks becomes much simpler. LTE uses orthogonal frequency division multiplexing (OFDM) in its air interface. This paper details all of these mobile generations, as well as the differences between them in terms of hardware and software architectures.
From 2G to 4G Mobile Network: Architecture and Key Performance Indicators
CRExplorer version 1.6.7 was released on July 5, 2016. This version includes the following new features and improvements. Scopus: using "File" - "Import" - "Scopus", CRExplorer reads files from Scopus; the file format "CSV" (including citations, abstracts, and references) should be chosen in Scopus when downloading records. Export facilities: using "File" - "Export" - "Scopus", CRExplorer exports files in the Scopus format; using "File" - "Export" - "Web of Science", it exports files in the Web of Science format; these files can be imported into other bibliometric programs (e.g. VOSviewer). Space bar: select a specific cited reference in the cited-references table and press the space bar, and all bibliographic details of the CR are shown. Internal file format: using "File" - "Save", working files are saved in the internal file format "*.cre"; these files contain all data, including matching results and manual matching corrections, and can be opened using "File" - "Open".
New features of CitedReferencesExplorer (CRExplorer)
We report the first measurement of the parity-violating asymmetry A_PV in the elastic scattering of polarized electrons from 208Pb. A_PV is sensitive to the radius of the neutron distribution (Rn). The result A_PV = 0.656 \pm 0.060 (stat) \pm 0.014 (syst) ppm corresponds to a difference between the radii of the neutron and proton distributions Rn - Rp = 0.33 +0.16 -0.18 fm and provides the first electroweak observation of the neutron skin which is expected in a heavy, neutron-rich nucleus.
Measurement of the Neutron Radius of 208Pb Through Parity-Violation in Electron Scattering
We show how to compute analytically time- and space-dependent correlations in one-dimensional quantum integrable systems with an impurity. Our approach is based on a description of these systems in terms of massless scattering of quasiparticles. Correlators then follow from matrix elements of local operators between multiparticle states, the ``massless form factors''. Although an infinite sum of these form factors has to be considered in principle, we find that for current, spin, and energy operators, only a few (typically two or three) are necessary to obtain an accuracy of better than $1\%$, for {\bf arbitrary coupling strength}, that is, all the way from short to large distances. As examples we compute, at zero temperature, the frequency-dependent conductance in a Luttinger liquid with an impurity, the spectral function in the double-well problem of dissipative quantum mechanics, and part of the space-dependent susceptibility in the Kondo model.
Form factors approach to current correlations in one dimensional systems with impurities
We consider the joint SPX-VIX calibration within a general class of Gaussian polynomial volatility models in which the volatility of the SPX is assumed to be a polynomial function of a Gaussian Volterra process defined as a stochastic convolution between a kernel and a Brownian motion. By performing joint calibration to daily SPX-VIX implied volatility surface data between 2012 and 2022, we compare the empirical performance of different kernels and their associated Markovian and non-Markovian models, such as rough and non-rough path-dependent volatility models. In order to ensure an efficient calibration and a fair comparison between the models, we develop a generic unified method in our class of models for fast and accurate pricing of SPX and VIX derivatives based on functional quantization and Neural Networks. For the first time, we identify a \textit{conventional one-factor Markovian continuous stochastic volatility model} that is able to achieve remarkable fits of the implied volatility surfaces of the SPX and VIX together with the term structure of VIX futures. What is even more remarkable is that our conventional one-factor Markovian continuous stochastic volatility model outperforms, in all market conditions, its rough and non-rough path-dependent counterparts with the same number of parameters.
Joint SPX-VIX calibration with Gaussian polynomial volatility models: deep pricing with quantization hints
The dynamics of a one-dimensional growing front with an unstable straight profile are analyzed. We argue that a coarsening process occurs if and only if the period $\lambda$ of the steady-state solution is an increasing function of its amplitude $A$. This statement is rigorously proved for two important classes of conserved and nonconserved models by investigating the phase diffusion equation of the steady pattern. We further provide clear numerical evidence for the growth equation of a stepped crystal surface.
When does coarsening occur in the dynamics of one-dimensional fronts?
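In symbols, the coarsening criterion stated in the abstract reads:

```latex
\text{coarsening occurs} \iff \frac{\mathrm{d}\lambda}{\mathrm{d}A} > 0 ,
```

where $\lambda(A)$ is the period of the branch of steady-state solutions viewed as a function of their amplitude $A$.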
Building general-purpose robots that perform a diverse range of tasks in a large variety of physical-world environments at the human level is extremely challenging. It requires robot learning to be sample-efficient, generalizable, compositional, and incremental. In this work, we introduce a systematic learning framework called the SAGCI-system that targets these four requirements. Our system first takes the raw point clouds gathered by the camera mounted on the robot's wrist as input and produces an initial model of the surrounding environment, represented as a Unified Robot Description Format (URDF) file. Our system adopts a learning-augmented differentiable simulation that loads the URDF. The robot then uses interactive perception to interact with the environment, verifying and modifying the URDF online. Leveraging the differentiable simulation, we propose a model-based learning algorithm that combines object-centric and robot-centric stages to efficiently produce policies for manipulation tasks. We apply our system to articulated-object manipulation tasks, both in simulation and in the real world. Extensive experiments demonstrate the effectiveness of the proposed learning framework. Supplemental materials and videos are available at https://sites.google.com/view/egci.
SAGCI-System: Towards Sample-Efficient, Generalizable, Compositional, and Incremental Robot Learning
Context. Apertif is a multi-beam receiver system for the Westerbork Synthesis Radio Telescope that operates at 1.1-1.5 GHz, which overlaps with various radio services, resulting in contamination of astronomical signals with radio-frequency interference (RFI). Aims. We analyze approaches to mitigate Apertif interference and design an automated detection procedure for its imaging mode. Using this approach, we present long-term RFI detection results for over 300 Apertif observations. Methods. Our approach is based on the AOFlagger detection approach. We introduce several new features, including ways to deal with ranges of invalid data (e.g. caused by shadowing) in both the SumThreshold and scale-invariant rank operator steps; pre-calibration bandpass calibration; auto-correlation flagging; and HI flagging avoidance. These methods are implemented in a new framework that uses the Lua language for scripting, which is new in AOFlagger version 3. Results. Our approach removes RFI fully automatically and is robust and effective enough for further calibration and (continuum) imaging of these data. Analysis of 304 observations shows an average of 11.1% of data lost to RFI, with a large spread. We observe 14.6% RFI in auto-correlations. Computationally, AOFlagger achieves a throughput of 370 MB/s on a single computing node. Compared to published machine-learning results, the method is one to two orders of magnitude faster.
An interference detection strategy for Apertif based on AOFlagger 3
Air free-cooled data centers (DCs) have not existed in the tropical zone due to the unique challenges of year-round high ambient temperature and relative humidity (RH). The increasing availability of servers that can tolerate higher temperatures and RH, following regulatory bodies' prompts to raise DC temperature setpoints, sheds light upon the feasibility of air free-cooled DCs in the tropics. However, due to the complex psychrometric dynamics, operating an air free-cooled DC in the tropics generally requires adaptive control of the supply air condition to maintain the computing performance and reliability of the servers. This paper studies the problem of controlling the supply air temperature and RH in a free-cooled tropical DC below certain thresholds. To achieve this goal, we formulate the control problem as a Markov decision process and apply deep reinforcement learning (DRL) to learn a control policy that minimizes the cooling energy while satisfying the requirements on the supply air temperature and RH. We also develop a constrained DRL solution for further performance improvements. Extensive evaluation based on real data traces collected from an air free-cooled testbed, and comparisons among the unconstrained and constrained DRL approaches as well as two other baseline approaches, show the superior performance of our proposed solutions.
Deep Reinforcement Learning for Tropical Air Free-Cooled Data Center Control
The moduli space of semi-stable vector bundles on a Riemann surface carries a natural line bundle, the determinant bundle. The space of sections of this line bundle (or of its multiples) constitutes a natural non-abelian generalization of the spaces of theta functions on the Jacobian. There has been much progress in the last few years towards a better understanding of these spaces, including a rigorous proof of the celebrated Verlinde formula, which gives their dimension. This survey paper tries to explain what is now known and what remains open.
Vector bundles on curves and generalized theta functions: recent results and open problems
Interpreting the inner workings of neural networks is crucial for the trustworthy development and deployment of these black-box models. Prior interpretability methods focus on correlation-based measures to attribute model decisions to individual examples. However, these measures are susceptible to noise and spurious correlations encoded in the model during training (e.g., biased inputs, model overfitting, or misspecification). Moreover, this process has proven to yield noisy and unstable attributions that prevent any transparent understanding of the model's behavior. In this paper, we develop a robust interventional method grounded in causal analysis to capture cause-effect mechanisms in pre-trained neural networks and their relation to the prediction. Our novel approach relies on path interventions to infer the causal mechanisms within hidden layers and to isolate the information that is relevant and necessary to the model's prediction while discarding noisy information. The result is task-specific causal explanatory graphs that can audit model behavior and express the actual causes underlying its performance. We apply our method to vision models trained on classification tasks, providing extensive quantitative experiments to show that our approach captures more stable and faithful explanations than standard attribution-based methods. Furthermore, the underlying causal graphs reveal the neural interactions in the model, making it a valuable tool in other applications (e.g., model repair).
Causal Analysis for Robust Interpretability of Neural Networks
In the presence of strong magnetic fields the electronic bandstructure of graphene drastically changes. The Dirac cone collapses into discrete non-equidistant Landau levels, which can be externally tuned by changing the magnetic field. In contrast to conventional materials, specific Landau levels are selectively addressable using circularly polarized light. Exploiting these unique properties, we propose the design of a tunable laser operating in the technologically promising terahertz spectral range. To uncover the many-particle physics behind the emission of light, we perform a fully quantum mechanical investigation of the non-equilibrium dynamics of electrons, phonons, and photons in optically pumped Landau-quantized graphene embedded into an optical cavity. The gained microscopic insights allow us to predict optimal experimental conditions to realize a technologically promising terahertz laser.
Proposal for a tunable graphene-based terahertz Landau-level laser
We show that every abelian Polish group is the topological factor-group of a closed subgroup of the full unitary group of a separable Hilbert space with the strong operator topology. It follows that all orbit equivalence relations induced by abelian Polish group actions are Borel reducible to some orbit equivalence relations induced by actions of the unitary group.
On a universality property of some abelian Polish groups
This paper explores the grounding issue concerning multimodal semantic representation from a computational cognitive-linguistic view. Five perceptual properties of groundedness are annotated and analyzed: Affordance, Perceptual salience, Object number, Gaze cueing, and Ecological Niche Association (ENA). We annotated selected images from the Flickr30k dataset and performed exploratory analyses and statistical modeling of their captions. Our findings suggest that a comprehensive understanding of an object or event requires cognitive attention, semantic distinctions in linguistic expression, and multimodal construction. During this construction process, viewers integrate situated meaning and affordance into the multimodal semantics that is consolidated into the image captions of image-text datasets combining visual and textual elements. These results indicate that situated meaning and affordance grounding are critical for grounded natural language understanding systems to generate appropriate responses, and they show the potential to advance the understanding of human construal in diverse situations.
Exploring the Grounding Issues in Image Caption
In this paper we discuss four problems regarding Markov equivalences for subclasses of loopless mixed graphs. We classify these four problems as finding conditions for internal Markov equivalence, which is Markov equivalence within a subclass, for external Markov equivalence, which is Markov equivalence between subclasses, for representational Markov equivalence, which is the possibility of a graph from a subclass being Markov equivalent to a graph from another subclass, and finding algorithms to generate a graph from a certain subclass that is Markov equivalent to a given graph. We particularly focus on the class of maximal ancestral graphs and its subclasses, namely regression graphs, bidirected graphs, undirected graphs, and directed acyclic graphs, and present novel results for representational Markov equivalence and algorithms.
Markov Equivalences for Subclasses of Loopless Mixed Graphs
Fluorodeoxyglucose Positron Emission Tomography (FDG-PET) combined with Computed Tomography (CT) is critical in oncology for the identification of solid tumours and the monitoring of their progression. However, precise and consistent lesion segmentation remains challenging, as manual segmentation is time-consuming and subject to intra- and inter-observer variability. Despite their promise, automated segmentation methods often struggle with false-positive segmentation of regions of healthy metabolic activity, particularly when presented with such a complex range of tumours across the whole body. In this paper, we explore the application of nnUNet to tumour segmentation of whole-body PET-CT scans and conduct different experiments on optimal training and post-processing strategies. Our best model obtains a Dice score of 69\% and false-negative and false-positive volumes of 6.27 and 5.78 mL, respectively, on our internal test set. This model is submitted as part of the autoPET 2023 challenge. Our code is available at: https://github.com/anissa218/autopet\_nnunet
Autopet Challenge 2023: nnUNet-based whole-body 3D PET-CT Tumour Segmentation
We analyze the data and discuss their implications for the microscopic origin of the low-frequency flux noise in superconducting circuits. We argue that this noise is produced by spins at the superconductor-insulator boundary whose dynamics are due to the RKKY interaction. We show that this mechanism explains the size independence of the noise and the different frequency dependences of the spectra reported in large and small SQUIDs, and that it gives the correct intensity for realistic parameters.
Microscopic origin of low frequency flux noise in Josephson circuits
This paper introduces a new type of unsupervised learning algorithm, based on the alignment of sentences and Harris's (1951) notion of interchangeability. The algorithm is applied to an untagged, unstructured corpus of natural language sentences, resulting in a labelled, bracketed version of the corpus. First, the algorithm aligns all sentences in the corpus in pairs, partitioning each pair into parts that are shared by both sentences and parts in which they differ. This information is used to find (possibly overlapping) constituents. Next, the algorithm selects (non-overlapping) constituents. Several instances of the algorithm are applied to the ATIS corpus (Marcus et al., 1993) and the OVIS corpus (Bonnema et al., 1997; Openbaar Vervoer Informatie Systeem, Dutch for Public Transport Information System). Apart from the promising numerical results, the most striking result is that even the simplest algorithm based on alignment learns recursion.
Bootstrapping Syntax and Recursion using Alignment-Based Learning
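The pairwise alignment step can be illustrated by matching two token sequences: the shared runs give the constituent context, while the differing stretches are candidates for interchangeable constituents. This is only a sketch of the alignment idea using Python's difflib, not the paper's implementation, and the example sentences are invented.

```python
from difflib import SequenceMatcher

def align(tokens1, tokens2):
    """Align two token sequences into shared and differing parts.

    Returns (same, diff): `same` lists the token runs common to both
    sentences; `diff` pairs up the unequal stretches, the candidate
    (interchangeable) constituents.
    """
    matcher = SequenceMatcher(None, tokens1, tokens2)
    same, diff = [], []
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag == "equal":
            same.append(tokens1[i1:i2])
        else:
            diff.append((tokens1[i1:i2], tokens2[j1:j2]))
    return same, diff

same, diff = align("show me the morning flights".split(),
                   "show me the evening flights".split())
# shared context: [['show', 'me', 'the'], ['flights']]
# interchangeable parts: [(['morning'], ['evening'])]
```

Here the aligned pair suggests that "morning" and "evening" are interchangeable within the same context, which is exactly the kind of evidence the constituent-finding step builds on.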
This work introduces efficient symbolic algorithms for quantitative reactive synthesis. We consider resource-constrained robotic manipulators that need to interact with a human to achieve a complex task expressed in linear temporal logic. Our framework generates reactive strategies that not only guarantee task completion but also seek cooperation with the human when possible. We model the interaction as a two-player game and consider regret-minimizing strategies to encourage cooperation. We use a symbolic representation of the game to enable scalability. For synthesis, we first introduce value iteration algorithms for such games with min-max objectives, and then extend our method to the regret-minimizing objectives. Our benchmarks reveal that our symbolic framework not only significantly improves computation time (up to an order of magnitude) but also scales to much larger instances of manipulation problems, with up to twice the number of objects and locations of the state of the art.
Efficient Symbolic Approaches for Quantitative Reactive Synthesis with Finite Tasks
Kernel methods form a theoretically grounded, powerful, and versatile framework for solving nonlinear problems in signal processing and machine learning. The standard approach relies on the \emph{kernel trick} to perform pairwise evaluations of a kernel function, leading to scalability issues for large datasets due to its linear and superlinear growth with respect to the training data. Recently, we proposed \emph{no-trick} (NT) kernel adaptive filtering (KAF), which leverages explicit feature-space mappings using a data-independent basis with constant complexity. The inner product defined by the feature mapping corresponds to a positive-definite finite-rank kernel that induces a finite-dimensional reproducing kernel Hilbert space (RKHS). Information theoretic learning (ITL) is a framework in which information-theoretic descriptors based on non-parametric estimators of Renyi entropy replace conventional second-order statistics in the design of adaptive systems. An RKHS for ITL defined on a space of probability density functions simplifies statistical inference for supervised or unsupervised learning. ITL criteria take into account the higher-order statistical behavior of the systems and signals as desired; however, this comes at the cost of increased computational complexity. In this paper, we extend the NT kernel concept to ITL for improved information extraction from the signal without compromising scalability. Specifically, we focus on a family of fast, scalable, and accurate estimators for ITL using explicit inner product space (EIPS) kernels. We demonstrate the superior performance of EIPS-ITL estimators, and of combined NT-KAF using EIPS-ITL cost functions, through experiments.
Fast Estimation of Information Theoretic Learning Descriptors using Explicit Inner Product Spaces
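As a generic illustration of the "no-trick" idea of replacing pairwise kernel evaluations with an explicit, data-independent feature map, the sketch below uses random Fourier features to approximate a Gaussian kernel by a finite-dimensional inner product. The paper's EIPS kernel constructions may differ in detail; the function name and parameters here are illustrative.

```python
import numpy as np

def rff_map(X, num_features, gamma, rng):
    """Explicit feature map z(.) with z(x) . z(y) ~= exp(-gamma * ||x - y||^2).

    Random Fourier features: sample W ~ N(0, 2*gamma*I) and uniform phases b,
    then map x -> sqrt(2/D) * cos(W^T x + b).  Inner products in the resulting
    finite-dimensional space approximate the Gaussian kernel, so no pairwise
    kernel-trick evaluation over the training data is needed.
    """
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, num_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=num_features)
    return np.sqrt(2.0 / num_features) * np.cos(X @ W + b)
```

The mapping is computed once per sample with cost independent of the training-set size, which is the scalability property that motivates explicit feature-space methods.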
The self-adjoint and $m$-sectorial extensions of coercive Sturm-Liouville operators are characterised, under minimal smoothness conditions on the coefficients of the differential expression.
Self-adjoint and $m$-sectorial extensions of Sturm-Liouville operators
Exoplanet detection in the past decade, by efforts including NASA's Kepler and TESS missions, has discovered many worlds that differ substantially from planets in our own Solar system, including more than 400 exoplanets orbiting binary or multi-star systems. This not only broadens our understanding of the diversity of exoplanets, but also promotes the study of exoplanets in complex binary and multi-star systems and provides motivation to explore their habitability. In this study, we analyze the orbital stability of exoplanets in non-coplanar circumbinary systems using numerical simulations, generating a large number of circumbinary planet samples in order to quantify the effects of various orbital parameters on orbital stability. We also train a machine learning model that can quickly determine the stability of circumbinary planetary systems. Our results indicate that larger planetary inclinations tend to increase the stability of the orbit, whereas varying the planet's mass between that of Earth and that of Jupiter has little effect on the stability of the system. In addition, we find that deep neural networks (DNNs) achieve higher accuracy and precision than the other machine learning algorithms.
Analyzing the Stability of Non-coplanar Circumbinary Planets using Machine Learning
The mysteries of sunspot penumbrae have been under intense scrutiny for the past 10 years. During this time, some models have been proposed and refuted, while the surviving ones had to be modified, adapted, and evolved to explain the ever-increasing array of observational constraints. In this contribution I review two of the present models, emphasizing their contributions to this field but also pinpointing some of their inadequacies in explaining a number of recent observations at very high spatial resolution. To help explain these new observations I propose some modifications to each of them. These modifications bring the two seemingly opposite models closer together into a general picture that agrees well with recent 3D magneto-hydrodynamic simulations.
Models and Observations of Sunspot Penumbrae
Within the NRQCD factorization framework, we compute the next-to-leading-order QCD corrections to the gluon fragmentation into the ${}^1S_0^{(1,8)}$ Fock components of a quarkonium, at the lowest order in velocity expansion. We follow the operator definition of the fragmentation function advanced by Collins and Soper. The key technique underpinning our calculation is the sector decomposition method widely used in the area of multi-loop computation. It is found that the NLO QCD corrections have significant effects, and qualitatively modify the profiles of the corresponding leading-order fragmentation functions.
Next-to-leading-order QCD corrections to gluon fragmentation into ${}^1S_0^{(1,8)}$ quarkonia
Maintenance of existing software requires a large amount of time for comprehending the source code. The architecture of a software system, however, may not be clear to maintainers if up-to-date documentation is not available. Software clustering is often used as a remodularisation and architecture recovery technique to help recover a semantic representation of the software design. Due to the diverse domains, structures, and behaviours of software systems, the suitability of different clustering algorithms for different software systems has not been investigated thoroughly. Research that introduces new clustering techniques usually validates them on a specific domain, which might limit generalisability: if the chosen test subjects only represent a narrow perspective of the whole picture, researchers risk not being able to address the external validity of their findings. This work aims to fill this gap by introducing a new approach, Explaining Software Clustering for Remodularisation, to evaluate the effectiveness of different software clustering approaches. This work focuses on hierarchical clustering and Bunch clustering algorithms and provides information about their suitability according to the features of the software, which in turn enables the selection of the optimal algorithm and configuration from our existing pool of choices for a particular software system. The proposed framework is tested on 30 open-source software systems with varying sizes and domains, and demonstrates that it can characterise both the strengths and weaknesses of the analysed software clustering algorithms using software features extracted from the code. The proposed approach also provides a better understanding of the algorithms' behaviour through the application of dimensionality reduction techniques.
E-SC4R: Explaining Software Clustering for Remodularisation
We consider the problem of designing an Ansatz for the fermion-photon vertex function, using three-dimensional quantum electrodynamics as a test case. In many existing studies, restrictions have been placed on the form of the vertex Ansatz by making the unsubstantiated assumption that in the quenched, massless limit the Landau gauge Dyson-Schwinger equations admit a trivial solution. We demonstrate, without recourse to this assumption, the existence of a non-local gauge in which the fermion propagator is the bare propagator. This result is used to provide a viable Ansatz for part of the vertex function.
Deconstructing the vertex Ansatz in three dimensional quantum electrodynamics
Motivated by a recent proposal by Potter et al. [Phys. Rev. X 6, 031026 (2016)] concerning possible thermoelectric signatures of Dirac composite fermions, we perform a systematic experimental study of the thermoelectric transport of an ultrahigh-mobility GaAs/AlxGa1-xAs two-dimensional electron system at filling factor v = 1/2. We demonstrate that the thermopower Sxx and the Nernst signal Sxy are symmetric and antisymmetric with respect to B = 0 T, respectively. The measured properties of the thermopower Sxx at v = 1/2 are consistent with previous experimental results. The Nernst signals Sxy at v = 1/2, which have not been reported previously, are non-zero and show a power-law relation with temperature in the phonon-drag-dominated region. In the electron-diffusion-dominated region, the Nernst signals Sxy at v = 1/2 are found to be significantly smaller than the linearly temperature-dependent values predicted by Potter et al., and they decrease with temperature faster than linearly.
Thermopower and Nernst measurements in a half-filled lowest Landau level
Capsule Networks have great potential to tackle problems in structural biology because of their attention to hierarchical relationships. This paper describes the implementation and application of a Capsule Network architecture to the classification of RAS protein family structures on GPU-based computational resources. The proposed Capsule Network trained on 2D and 3D structural encodings can successfully classify HRAS and KRAS structures. The Capsule Network can also classify a protein-based dataset derived from a PSI-BLAST search on sequences of KRAS and HRAS mutations. Our results show an accuracy improvement compared to traditional convolutional networks, while improving interpretability through visualization of activation vectors.
Capsule Networks for Protein Structure Classification and Prediction
This paper considers the cooperative output regulation problem for linear multi-agent systems with a directed communication graph, heterogeneous linear subsystems, and an exosystem whose output is available to only a subset of subsystems. Both the cases with nominal and uncertain linear subsystems are studied. For the case with nominal linear subsystems, a distributed adaptive observer-based controller is designed, where the distributed adaptive observer is implemented for the subsystems to estimate the exogenous signal. For the case with uncertain linear subsystems, the proposed distributed observer and the internal model principle are combined to solve the robust cooperative output regulation problem. Compared with the existing works, one main contribution of this paper is that the proposed control schemes can be designed and implemented by each subsystem in a fully distributed fashion for general directed graphs. For the special case with undirected graphs, a distributed output feedback control law is further presented.
Fully Distributed Adaptive Controllers for Cooperative Output Regulation of Heterogeneous Linear Multi-agent Systems with Directed Graphs
Particle acceleration and heating at mildly relativistic magnetized shocks in electron-ion plasma are investigated with unprecedentedly high-resolution two-dimensional particle-in-cell simulations that include ion-scale shock rippling. Electrons are super-adiabatically heated at the shock, and most of the energy transfer from protons to electrons takes place at or downstream of the shock. We are the first to demonstrate that shock rippling is crucial for the energization of electrons at the shock. The electron energies remain well below equipartition with the protons. The downstream electron spectra are approximately thermal with a limited supra-thermal power-law component. Our results are discussed in the context of wakefield acceleration and the modelling of electromagnetic radiation from blazar cores.
Mildly relativistic magnetized shocks in electron-ion plasmas -- II. Particle acceleration and heating
The Neptune Trojans are the most recent addition to the panoply of Solar system small body populations. The orbit of the first discovered member, 2001 QR$_{322}$, was investigated shortly after its discovery, based on early observations of the object, and it was found to be dynamically stable on timescales comparable to the age of the Solar system. As further observations were obtained of the object over the following years, the best-fit solution for its orbit changed. We therefore carried out a new study of 2001 QR$_{322}$'s orbit in 2010, finding that it lay on the boundary between dynamically stable and unstable regions in Neptune's Trojan cloud, and concluding that further observations were needed to determine the true stability of the object's orbit. Here we follow up on that earlier work, and present the preliminary results of a dynamical study using an updated fit to 2001 QR$_{322}$'s orbit. Despite the improved precision with which the orbit of 2001 QR$_{322}$ is known, we find that the best-fit solution remains balanced on a knife-edge, lying between the same regions of stability and instability noted in our earlier work. In the future, we intend to carry out new observations that should hopefully refine the orbit to an extent that its true nature can finally be disentangled.
2001 QR$_{322}$ - an update on Neptune's first unstable Trojan companion
The continuous imaginary-time quantum Monte Carlo method with the worm update algorithm is applied to explore the ground-state properties of the spin-1/2 Heisenberg model on the honeycomb lattice with antiferromagnetic (AF) coupling $J>0$ and ferromagnetic (F) coupling $J^{\prime}<0$ along the zigzag and armchair directions, respectively. It is found that by enhancing the F coupling $J^{\prime}$ between zigzag AF chains, the system smoothly crosses over from one-dimensional zigzag spin chains to a two-dimensional magnetically ordered state. In the absence of an external field, the system is in a stripe-ordered phase. In the presence of uniform and staggered fields, uniform and staggered out-of-plane magnetizations appear while the stripe order persists in the $xy$ plane, and a second-order quantum phase transition (QPT) at a critical staggered field is observed. The critical exponents of the correlation length for the QPTs induced by a staggered field in the cases with $J>0$, $J^{\prime}<0$ and $J<0$, $J^{\prime}>0$ are obtained to be $\nu=0.677(2)$ and $0.693(0)$, respectively, indicating that both cases belong to the O(3) universality class. The scaling behavior in a staggered field is analyzed, and the ground-state phase diagrams in the plane of coupling ratio and staggered field are presented for both cases. The temperature dependence of the susceptibility and specific heat of both systems in external magnetic fields is also discussed.
Quantum Monte Carlo Study on the Spin-1/2 Honeycomb Heisenberg Model with Mixing Antiferromagnetic and Ferromagnetic Interactions in External Magnetic Fields
High-spectral-purity, frequency-agile, room-temperature sources in the terahertz spectrum are foundational elements for imaging, sensing, metrology, and communications. Here we present a chip-scale optical parametric oscillator based on an integrated nonlinear microresonator that provides broadly tunable single-frequency and multi-frequency oscillators in the terahertz regime. Through optical-to-terahertz down-conversion using a plasmonic nanoantenna array, coherent terahertz radiation spanning 2.8 octaves is achieved from 330 GHz to 2.3 THz, with a 20 GHz cavity-mode-limited frequency tuning step and a 10 MHz intracavity-mode continuous frequency tuning range at each step. By controlling the microresonator intracavity power and pump resonance detuning, tunable multi-frequency terahertz oscillators are also realized. Furthermore, by stabilizing the microresonator pump power and wavelength, a sub-100-Hz linewidth of the terahertz radiation with $10^{-15}$ residual frequency instability is demonstrated. The room-temperature generation of both single-frequency, frequency-agile terahertz radiation and multi-frequency terahertz oscillators in a chip-scale platform offers unique capabilities in metrology, sensing, imaging, and communications.
Coherent terahertz radiation with 2.8-octave tunability through chip-scale photomixed microresonator optical parametric oscillation