Recursive Quantization for ℒ_2 Stabilization of a Finite Capacity Stochastic Control Loop with Intermittent State Observations Shrija Karmakar, and Ritwik Kumar Layek The authors are with Department of Electronics and Electrical Communication Engineering, Indian Institute of Technology, Kharagpur, West Bengal, India (e-mail: ritwik@ece.iitkgp.ac.in). ^1 Université Paris-Saclay, CEA, Service de Thermo-hydraulique et de Mécanique des Fluides, 91191 Gif-sur-Yvette, France ^2 Laboratoire de Physique de la Matière Condensée, CNRS, École Polytechnique, Institut Polytechnique de Paris, 91120 Palaiseau, France ======================================================================================================================================================================================================================================================================= § ABSTRACT The problem of ℒ_2 stabilization of a state feedback stochastic control loop is investigated under different constraints. The discrete time linear time invariant (LTI) open loop plant is chosen to be unstable. The additive white Gaussian noise is assumed to be stationary. The link between the plant and the controller is assumed to be a finite capacity stationary channel, which puts a constraint on the bit rate of the transmission. Moreover, the state of the plant is observed only intermittently keeping the loop open some of the time. In this manuscript both scalar and vector plants under Bernoulli and Markov intermittence models are investigated. Novel bounds on intermittence parameters are obtained to ensure ℒ_2 stability. Moreover, novel recursive quantization algorithms are developed to implement the stabilization scheme under all the constraints. Suitable illustrative examples are provided to elucidate the main results. Keywords: Stochastic control, intermittent observation, control over communication channel, ℒ_2 stability, recursive quantization, capacity, differential entropy, spectral norm. § INTRODUCTION The theory of Linear Quadratic Regulator / Linear Quadratic Gaussian <cit.> was developed to solve the optimal control problem of a linear stochastic system, where a quadratic cost functional of the state and the input is minimized. Here the closed loop system is considered local, and the real valued signals are assumed to flow through the loop. Initially control theory didn't consider the bandwidth related challenges faced by communication engineers. On that front, information theory<cit.> was developed to understand the physical limit of point-to-point communication. The accuracy resolutions or the noise processes present in the physical channels impose fundamental limit on the maximum feasible data rate of the channel which is known as `capacity’<cit.>. Clearly, an LQG control system with finite capacity channel in loop doesn’t provide simple answers on performance and stability. However, it is a well-known fact now that there exists a duality between stochastic control system and a communication system with feedback <cit.>. In <cit.>, the authors have demonstrated that stochastic control systems utilize randomized strategies for encoding and transmitting information, similar to Shannon's coding for noisy channels, in order to achieve control-coding capacity. The trade-off between the best achievable data rate through the channel and the best achievable control cost generates considerable interest in recent literature <cit.>. 
It has been shown in <cit.> how Massey's directed information <cit.> between the plant and the controller can be minimized while keeping the quadratic cost within some specified upper bound. In <cit.>, some interesting results are developed on trade-offs between data rate and different control goals, including stability and asymptotic performances. However, the genesis of the dual optimization of the error cost function and the information functional lies in rate distortion theory <cit.>. The authors in <cit.> investigate Sequential Rate-Distortion (SRD) theory in the context of time-varying Gauss-Markov processes. The study in <cit.> investigates minimum data rates for mean square stabilization of linear systems over lossy digital channels, presenting an innovative method for addressing temporal correlations and fluctuating data rates. For an unlimited quadratic cost function, <cit.> shows that the minimum directed information reaches a lower bound which is dependent on the large eigenvalues (outside the unit circle) of the discrete time dynamical system. A similar but more operational result is demonstrated in <cit.>, where a quantized and encoded state is used to stabilize the control loop. For an unstable scalar linear stochastic system with limited information about the system state, the paper establishes the relation between the cardinality of a finite state space and the system gain, which is necessary and sufficient for β-moment stability. The scheme proposed in <cit.> utilizes a uniform quantizer for data rate efficiency, analyzed using probabilistic and information-theoretic techniques. Another practical limitation of a control-communication system lies in sensor or link failure. If the state or output of the plant is not measured due to sensor failure, or the transmitted signal over the channel is not received at the estimator or controller, the controller tends to lose its ability to stabilize the loop. A related work on the optimal estimation problem is developed in <cit.>, where the problem of the intermittent Kalman filter is introduced. The output is sensed through a random switch which is controlled via a Bernoulli random variable. In <cit.>, the statistical convergence properties of the estimation error covariance are examined, and a critical intermittence probability is determined at which the estimation error covariance diverges. Upper and lower bounds are provided for the critical intermittence. Several other research works have been pursued in this realm thereafter. The precise critical intermittence probability for a bounded error covariance matrix is determined by the known lower bound of the intermittence probability when the observation matrix within the observable subspace is invertible <cit.>. The authors of <cit.> have established conditions for bounded estimation error by examining the error behavior of the discrete-time extended Kalman filter in the context of nonlinear systems with intermittent observations. New conditions for the stability of a Kalman filter are introduced in <cit.> when intermittent measurements result from communication constraints. The general conditions permit diagonalizable state matrices and finite order Markov processes, thereby expanding the applicability of the current findings to encompass degenerate systems and a wider variety of network models. In <cit.>, the authors analyze the asymptotic behavior of the random Riccati equation obtained from the random intermittent switching of the Kalman filter. 
Similarly, <cit.> delves into the dynamics of the random Riccati equation, showing that under certain observability and controllability conditions the distribution functions of the random conditional error covariance matrices converge to an invariant distribution which satisfies a moderate deviation principle. The study in <cit.> examines the control of a nonlinear system using quantized measurements to accurately follow or reject external signals. It utilizes passivity and output regulation techniques to guarantee the convergence of the quantized error and achieve zero asymptotic error in the actual output. <cit.> examines the optimal control of a linear stochastic system with both additive and multiplicative noise over a finite time horizon, offering a solution for optimal covariance steering, and establishes the existence and uniqueness of the optimal control, along with presenting a result in the form of a matrix Riccati differential equation. The efforts to tackle the problem of intermittent observation in a stochastic control loop are ongoing with some success <cit.>. With packet acknowledgment, the control is linear and follows the separation principle, while critical switching probabilities determine system stability. <cit.> addresses a linear quadratic Gaussian (LQG) tracking problem with safety and reachability constraints when a subset of sensors is subject to an adversarial false data injection attack. A control policy is suggested, which limits the control input by utilizing approximated states that are resolved using a quadratically constrained quadratic program. In this manuscript, the constraints posed by intermittent observations and a finite capacity channel are combined. The contributions of this manuscript are given in section <ref>. §.§ Contributions In this manuscript, the controller is considered to be the infinite horizon linear quadratic Gaussian (LQG) controller. When the observation switch is `ON', the state is measured, quantized, and transmitted to the controller to generate the control signal, which will make the closed loop stable. However, when the observation switch is `OFF', the state is not measured and the loop is open and unstable. The main contributions of this manuscript are twofold. Firstly, the necessary and sufficient conditions on the intermittence parameters for ℒ_2 stability of the overall system are given. Secondly, when the intermittence conditions are met, a recursive quantization algorithm is designed to transmit the state through the finite capacity channel. Both of these contributions are novel and they extend several works from the contemporary literature as discussed in section <ref>. §.§ Organization of the paper This paper is organized as follows. Section <ref> introduces the problem with a survey of the relevant literature along with the main contribution of the manuscript, and some useful notations. In section <ref>, the mathematical description of the problem is introduced. The main theoretical contributions of the manuscript are given in section <ref>. Illustrative examples and simulations are shown in section <ref>. A few concluding remarks are given in section <ref>. The proofs of all the lemmas and theorems are presented in section <ref>. §.§ Notations Some useful notations, terminologies, and concepts are mentioned here. * The real line is denoted by ℝ. All logarithms in this manuscript are assumed to have base 2. δ(k) is the Kronecker delta function. Matrices are written in capital letters. 
Vectors and scalars are written using small letters. For a vector x, the ℒ_2 norm is denoted by x_2. A' is the transpose of A. The determinant of matrix A is denoted by |A|. A zero vector of size (n× 1) is denoted by 0_n. A zero matrix of size (n× m) is denoted by 0_n × m. For an (n× n) matrix A, an (n× 1) vector v, and a scalar (real or complex) λ, if the following equation is satisfied: Av = λ v, then λ and v are called an eigenvalue and an eigenvector of the matrix A, respectively. A ≻ B means (A-B) is a positive definite matrix. A ≽ B means (A-B) is a positive semi-definite matrix. The set of all n× n positive definite matrices is denoted by M_n^+. The set of all n× n positive semi-definite matrices is denoted by M_n. For two matrices A and B, Minkowski's inequality of determinants is given by: |A+B|^1/n≥ |A|^1/n + |B|^1/n. A set C is called a cone if for every point x ∈ C and for all θ≥ 0, θ x ∈ C. A cone is called a convex cone if for any x_1,x_2 ∈ C and θ∈ [0,1], θ x_1 + (1-θ) x_2 ∈ C. For a linear map 𝒯 : ℝ^n→ℝ^m, the spectral norm <cit.> of 𝒯 is defined as: 𝒯_2 = inf{c≥0 : 𝒯(x)_2 ≤ cx_2 ∀ x ∈ℝ^n }. An operator 𝒯 : ℝ^n →ℝ^m is called a contraction operator if 𝒯_2 < 1. * A triple (Ω,ℬ,ℙ) is called a probability space where Ω is the sample space, ℬ is the σ-algebra, and ℙ is the probability measure. The map x: (Ω,ℬ,ℙ)→(ℝ,ℬ(ℝ), F_ x) is called a random variable, where the domain space is some underlying probability space, and the range space is the induced measure space of the real line. ℬ(ℝ) is the Borel σ-algebra of the real line, and F_ x is the probability distribution function of x characterizing the induced measure. In this manuscript, probability is denoted by P. Here, F_ x(ξ) = P( x≤ξ) = ℙ( x^-1((-∞, ξ])). Random variables/vectors are written using bold small letters. For notational simplicity we will not mention the probability spaces explicitly everywhere. The standard measure of the real line is the Lebesgue measure. 𝒰, ℬern, and 𝒩 denote the uniform distribution, Bernoulli distribution, and Gaussian distribution, respectively. 𝔼 is the expectation operator defined by the Stieltjes integral: 𝔼( x) = ∫_ℝξ dF_ x(ξ). The variance of a random variable x is defined by: σ_ x^2 = Var( x) = 𝔼( x-𝔼( x))^2. For an indexed random variable x_k, the variance is denoted by the shorthand notation σ_k^2. If x is a Gaussian random variable with mean μ and variance σ^2, it is denoted as: x∼𝒩(μ,σ^2). A random vector x of size (n× 1) is an n-tuple of random variables. The covariance matrix of a random vector x is defined as: P_ x=Cov( x) = 𝔼(( x-𝔼( x))( x-𝔼( x))'). Again, for an indexed random vector x_k, the covariance matrix is denoted by the shorthand notation P_k. x∼𝒩(μ,P_ x) denotes a Gaussian random vector with mean vector μ and covariance matrix P_ x. x_k is a random variable/vector at time k. x_k is the scalar/vector value at time k. Unless stated otherwise, all random variables/vectors used in this manuscript are assumed to be of zero mean. A sequence of random variables/vectors is denoted by: x_0:n = { x_0, x_1, … x_n}. The conditional expectation is defined as: 𝔼( x| y) = ∫_ℝ_ x x dF_ x| y(x| y). Here, ℝ_ x denotes the range space of x. Clearly, 𝔼( x| y) is a random variable. The law of total expectation suggests that 𝔼( x)= 𝔼(𝔼( x| y)). Conditional variance of the random variable x_k conditioned on the sequence of random variables γ_0:k-1 is denoted by: σ_k|γ_0:k-1^2 = 𝔼( x_k^2|γ_0:k-1). 
Conditional covariance matrix of the vector x_k conditioned on the sequence of random variables γ_0:k-1 is denoted by: P_k|γ_0:k-1=𝔼( x_k x'_k|γ_0:k-1). For zero mean random variables, a Hilbert space of random variables can be constructed by defining the inner product as: < x, y> = Cov( x, y). Hence, the ℒ_2 norm is given by: x_2 = √(Var( x)) = σ_ x. A sequence of random variables x_0:∞ is said to be ℒ_2 convergent to y if lim_n→∞ x_n- y_2 = 0. * The differential entropy of a continuous random variable x with a probability density function f_ x(x) is defined as h( x) = 𝔼[-log(f_ x( x))] = -∫_ℝ_ xlog(f_ x(x)) f_ x(x) dx. The entropy of a discrete random variable z with probability mass function p_ z(z) is given by H( z)=-Σ_z∈ supp{ z}p_ z(z) log(p_ z(z)). Quantization is the process of countably partitioning the sample space of a continuous random variable and assigning a value for every partition to generate a discrete random variable. If a continuous random variable x with differential entropy h( x) is quantized uniformly using the quantization step-size Δ (Lebesgue measure of the partition over the range of x), the entropy H( x^Δ) of the quantized discrete random variable x^Δ is given by H( x^Δ) = h( x) - log(Δ). The typical support set of a random variable x is a set S⊆ℝ such that P( x∈ S) ≥ 1-ϵ for some arbitrarily small ϵ > 0. * A discrete time LTI autonomous system x_k+1=Ax_k, x_k ∈ℝ^n, k = 0,1,2,… is called asymptotically stable if and only if there exists a unique solution P ≻ 0_n× n for the Lyapunov equation APA' - P + Q = 0_n× n, where Q ≻ 0_n× n. § PROBLEM FORMULATION Consider the discrete time LTI system x_k+1= A x_k+ B u_k+ w_k, k=0,1,2,… where x_k∈ℝ^n is the state vector, u_k∈ℝ^m is the input vector with m<n, and w_k∈ℝ^n is the additive white Gaussian noise vector. The matrix A is assumed to be diagonalizable, and the system is (A, B) controllable. The initial state x_0∼𝒩(0, P_0), P_0≻ 0_n× n, the noise process w_k∼𝒩(0,W), W≻ 0_n × n, and 𝔼( w_k w_j^⊤)=Wδ(k-j). It is also assumed that 𝔼( x_k w_j^⊤)= 0_n× n ∀ j≥ k. In infinite horizon LQG control synthesis, the objective is to design the optimal control policy which minimizes the expected quadratic cost function: J( x_0:∞, u_0:∞) = ∑_k =0^∞𝔼( x_k^⊤Q x_k+ u_k^⊤R u_k) Here the costs of the state and the input are given by the matrices Q ≻ 0_n× n and R ≻ 0_m × m. In state feedback optimal control it is assumed that the state x_k is accessible to the controller to synthesize the optimal control law, which takes the linear form u_k = -L x_k, where L is the optimal gain matrix. However, in the control loop shown in Fig. <ref>, there exists a channel with finite capacity (𝒞) between the plant and the controller. Hence, the observation of the real valued state is no longer feasible. For error-free communication through the channel, the state signal at the plant needs to be quantized and encoded to keep the data rate below the capacity value 𝒞 <cit.>. Furthermore, due to possible link failure and packet drops, the communication link is modeled with a random switch which controls the transmission through the channel, leading to intermittent observations of the quantized state at the controller. Two different models of intermittence are considered, namely the Bernoulli switch and the Markov switch. The block diagram of the entire system is shown in Fig. <ref>. The plant is the dynamical system which generates the state according to Equation <ref>. 
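As an aside on the controller block, the gain matrix L above is the standard infinite horizon LQR/LQG state feedback gain obtained from the discrete algebraic Riccati equation. The following minimal sketch is not taken from the manuscript: the plant matrices are illustrative placeholders, and SciPy's Riccati solver is used as one possible synthesis route. The closed loop spectral norm printed at the end is the quantity that becomes relevant for the vector plant results later.

```python
# Minimal sketch (not from the manuscript): computing the infinite horizon LQR gain L
# for a placeholder unstable plant, and inspecting the closed loop spectral norm.
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[1.2, 0.3],
              [0.0, 1.1]])      # placeholder unstable open loop plant (eigenvalues 1.2, 1.1)
B = np.array([[0.0],
              [1.0]])           # m = 1 < n = 2, as assumed in the manuscript
Q = np.eye(2)                   # state cost, Q > 0
R = np.array([[1.0]])           # input cost, R > 0

P = solve_discrete_are(A, B, Q, R)                  # stabilizing solution of the DARE
L = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # L = (R + B'PB)^{-1} B'PA

print("LQR gain L:", L)
print("closed loop eigenvalues:", np.linalg.eigvals(A - B @ L))
# Note: the closed loop can be Schur while ||A - BL||_2 still exceeds unity.
print("spectral norm ||A - BL||_2:", np.linalg.norm(A - B @ L, 2))
```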
The intermittent switch is controlled by a binary random variable whose `0' state implies the switch is `ON' and the state signal can be transmitted to the controller closing the loop. On the other hand, switch state `1' implies the switch is `OFF' and the signal cannot be further processed, which will eventually make the loop open. The link failure event is comparatively rare, and the corresponding switching probabilities will be studied; this will make the further discussion more convenient. The mathematical description and relevant derivations are given in section <ref>. Sensing the intermittence, the quantizer-encoder block will update the quantization step-size such that the capacity constraint is satisfied. If the switch is `ON', the state will be quantized and encoded before sending it through the channel. The decoder-controller block will receive the signal and construct the linear state feedback optimal control law using two pieces, namely the gain matrix computed from the infinite horizon linear quadratic regulator analysis, and the quantized state information. This control law will close the loop driving the system towards stability. The analysis with a scalar plant is introduced first. The objective will be to find corresponding bounds of the intermittence parameters such that the stochastic system is ℒ_2 stabilized. § MAIN RESULTS §.§ Bernoulli intermittence model §.§.§ Scalar Plant The scalar version of the system introduced in section <ref> is given below. x_k+1= α x_k+ b u_k+ w_k, k=0,1,2,… The initial state is represented as x_0 ∼𝒩(0,σ_0 ^2) where σ_0^2 is the variance of the initial state. An important assumption of the optimal control design problem is |α| > 1; also, the control applied to the system is the infinite horizon LQR. The scalar noise process is w_k ∼𝒩(0,σ_ w ^2) where σ_ w^2 is the variance of the noise. Moreover, 𝔼( w_k w_l)=0 ∀ k ≠ l, and 𝔼( x_k w_l)=0 ∀ l≥ k. The observed state sequence up to time k is denoted by: x_0:k = {x_0,x_1,…,x_k}. The realized input sequence up to time k is denoted by: u_0:k = {u_0,u_1,…,u_k}. The random sequences are denoted likewise. The intermittence and capacity constraints are briefly introduced in section <ref>. The state of the intermittent switch at time k is the random variable γ_k ∼ℬern(p), 0≤ p≤ 1. P(γ_k =1)=p is the probability of the switch being open at time k, and then the state x_k cannot be observed and transmitted to the decoder-controller block. The Bernoulli intermittence sequence up to time k is given by: γ_0:k = {γ_0, γ_1,…,γ_k}. The capacity of the channel between the quantizer-encoder and the decoder-controller is 𝒞. The necessary and sufficient conditions for ℒ_2 stabilizability of the system are discussed in the following lemmas and theorems. For the scalar plant described in section <ref>, in absence of quantization (i.e., assuming 𝒞→∞), the conditional state variance at the (k+1)^th time step is given by: σ_k+1|γ_0:k^2 = (∏_j=0^kξ_j^2 )σ_0^2 + [∑_j=0^k-1( ∏_i=j+1^kξ_i^2 ) + 1 ] σ_ w^2 , and the (expected) state variance at the (k+1)^th time step is given by: σ_k+1^2 = ω^2(k+1)σ_0^2 + 1 - ω^2(k+1)/1 - ω^2σ_ w^2 Here, ξ_k = αγ_k + (α - bl)(1 - γ_k) , l is the infinite horizon LQR gain, and ω^2 = α^2 p+(α-bl)^2(1-p). Proof: The proof is provided in section <ref>. The scenario will be a little different when the finite capacity channel is considered between the switch and the controller. 
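Before that finite capacity case is taken up, the expected-variance expression of the lemma above can be checked numerically. The sketch below is not part of the manuscript; it uses the scalar gains of the later examples with an intermittence probability chosen below the stability bound of the next theorem, and compares a Monte Carlo estimate of the state variance with the closed-form expression.

```python
# Numerical sanity check (not from the manuscript) of the expected-variance formula
# sigma_{k+1}^2 = omega^{2(k+1)} sigma_0^2 + (1 - omega^{2(k+1)})/(1 - omega^2) * sigma_w^2
# for the scalar plant under Bernoulli intermittence with an infinite capacity channel.
import numpy as np

rng = np.random.default_rng(0)
alpha, cl = 3.3, 0.4            # open loop gain alpha and closed loop gain (alpha - b*l)
p = 0.05                        # P(gamma_k = 1), chosen below the bound of the next theorem
sigma02, sigmaw2 = 4.0, 1.0     # initial state and noise variances (illustrative)
N, n_mc = 20, 200_000           # horizon and number of Monte Carlo runs

omega2 = alpha**2 * p + cl**2 * (1 - p)

x = rng.normal(0.0, np.sqrt(sigma02), size=n_mc)
for _ in range(N):
    gamma = rng.random(n_mc) < p                 # gamma_k = 1 -> switch open, no control
    xi = np.where(gamma, alpha, cl)              # xi_k = alpha*gamma_k + (alpha - bl)*(1 - gamma_k)
    x = xi * x + rng.normal(0.0, np.sqrt(sigmaw2), size=n_mc)

var_mc = x.var()
var_formula = omega2**N * sigma02 + (1 - omega2**N) / (1 - omega2) * sigmaw2
print(f"Monte Carlo variance at k = {N}: {var_mc:.3f}")
print(f"closed-form value (Lemma)     : {var_formula:.3f}")
```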
The random state x_k, if observed will be quantized and encoded such that the signal data rate is below the capacity of the channel to ensure asymptotic error free detection of the quantized signal at the detector-controller block. For any random map whose range is not of finite support, finite quantization can be done using the notion of `typicality' of the range space. For a zero mean Gaussian random variable x_k, the `typical range' set can be assumed to be 𝒮_ x_k=[-2^(h( x_k)-1), +2^(h( x_k)-1)]. Here, x_k ∼𝒩(0,σ_k ^2), and h( x_k) = 1/2log(2π e σ_k^2) is the differential entropy of x_k. Another popular choice of typicality set is of course the 6σ support set, which is in fact a little bigger than the set derived from differential entropy. However, to derive the bounds analytically, the typical set is used in this manuscript. Suppose, the quantization step size at time k is Δ_k, which is obviously a random variable. Denote, Δ_k=𝔼(Δ_k) In this manuscript, uniform quantization is considered. In this setup, the one step state variance update equation is stated in Lemma <ref>. For the system discussed in section <ref>, if 𝒞 < ∞, and if the quantization step-size at the k^th time step is Δ_k, conditioning on the intermittence random sequence, the one step conditional state variance update equation at the (k+1)^th time step is given by: σ_k+1|γ_0:k^2=ξ_k^2σ_k|γ_0:k-1^2+b^2 l^2(1-γ_k)Δ_k|γ_0:k-1^2/12+σ_ w^2 , and the one step (expected) state variance update equation at the (k+1)^th time step is given by: σ_k+1^2 = ω^2σ_k^2 + b^2l^2(1-p)Δ_k^2/12 + σ_ w^2 Proof: The proof is provided in section <ref>. For the system discussed in section <ref>, the one step conditional quantization step size update equation for the (k+1)^th time step is given by: Δ_k+1|γ_0:k^2 =Δ_k|γ_0:k-1^2[ξ_k^2 + η G(1-γ_k)] + ησ_ w^2 + ϵ(1 - ξ_k^2) , and the one step (expected) quantization step size update equation for the (k+1)^th time step is given by: Δ_k+1^2 = Δ_k^2[ω^2 + η G(1-p)] + ησ_ w^2 + ϵ(1 - ω^2) . Here, η = 2π e/2^2𝒞, G = b^2 l^2/12, and ϵ>0 is a small constant. Proof: The proof is provided in section <ref>. Based on these lemmas, the result on intermittence probability upper bound for the Bernoulli model is stated in the next theorem. For the system discussed in section <ref>, the necessary and sufficient conditions for ℒ_2 stabilization are given below. * For 𝒞→∞, the necessary and sufficient condition for ℒ_2 stabilizability of the system is given by: p < 1 - (α-bl)^2 /α^2 - (α-bl)^2 , and if the condition is satisfied, the state variance asymptotically converges as follows: lim_n→∞σ_n^2 = (σ_ w^2 /1 - ω^2 ) . * For 1/2log(π e α^2/6)<𝒞 < ∞, the necessary and sufficient condition for ℒ_2 stabilizability of the system is given by: p < 1 - (α-bl)^2 - η G/α^2 - (α-bl)^2 - η G , and if the condition is satisfied, the state variance asymptotically converges as follows: lim_n→∞σ_n^2 = (σ_ w^2 + G(1-p)ϵ/1 - ω^2 - η G(1-p)) . Here, η = 2π e/2^2𝒞, G = b^2 l^2/12, and ϵ>0 is a small constant. * For 𝒞≤1/2log(π e α^2/6), the system is always ℒ_2 unstable. Proof: The proof is provided in section <ref>. §.§.§ Vector Plant The details of the vector plant is already sketched in section <ref>. The Bernoulli switching is already introduced in section <ref>. The following lemmas will help to develop the main theorem for the vector plant under Bernoulli intermittence. 
For the vector plant described in section <ref>, the conditional state covariance matrix in absence of quantization, (i.e., when the channel capacity (𝒞) is infinite) at the (k+1)^th time step is given by: P_k+1|γ_0:k = 𝔼( x_k+1 x'_k+1|γ_0:k) = (∏_j=k^0Λ_j) P_0(∏_j=0^kΛ'_j) +[∑_j=0^k-1{(∏_i=k^j+1Λ_i)W(∏_i=j+1^kΛ'_i)}+W] Where, P_0 = 𝔼( x_0 x'_0), and Λ_k = γ_k A + (1-γ_k)(A-BL). Proof: The proof is provided in section <ref>. For the vector plant described in section <ref>, the one step dynamics of the state covariance matrix (P_k) in absence of quantization, (i.e., when the channel capacity (𝒞) is infinite) is given by: P_k+1 = pAP_kA'+(1-p)(A-BL)P_k(A-BL)'+W where, P_k = 𝔼( x_k x'_k). Proof: The proof is provided in section <ref>. Next, the main theorem for ℒ_2 stabilization of the vector plant without quantization under Bernoulli intermittence can be stated. For the vector plant described in section <ref>, if capacity is infinite (𝒞→∞), the necessary and sufficient condition on the intermittence probability (p) for the asymptotic state covariance matrix to be bounded and unique is given by: p< 1-A-BL_2^2/A_2^2-A-BL_2^2 Where, A_2 is the spectral/operator norm of the matrix A. Proof: The proof is provided in section <ref>. However, one important point to mention is that the condition of Theorem <ref> will be satisfied if and only if A-BL_2<1. (A-BL) being Schur does not automatically satisfy the spectral norm criterion. Therefore, it is possible to design a different methodology to find a more suitable candidate for the gain matrix L, which may not minimize the LQR cost functional, but will satisfy the spectral norm criterion always. This will be investigated by the authors of this manuscript in the near future. Moreover, it is evident that Theorem <ref> can be immediately derived from Theorem <ref> as the spectral norm of the open loop state transition matrix A is reduced to the absolute open loop gain |α| of the scalar plant, and the spectral norm of the closed loop state transition matrix (A-BL) is reduced to the absolute closed loop gain |α-bl| of the scalar plant. If the intermittence probability of the Bernoulli switch is below the threshold given in Theorem <ref>, the asymptotic state covariance matrix (P_∞ = lim_n→∞P_n) can be computed by solving the following n(n+1)/2 linear equations. ∑_l=2^n∑_k=1^l-1[p(a_ila_jk+a_ika_jl)+(1-p)(g_ilg_jk+g_ikg_jl)]s_lk +∑_l=1^n [pa_ila_jl+(1-p)g_ilg_jl]s_ll+w_ij = s_ij ∀ i =1,2,…,n and j = 1,2,…, i. Here, a_ij, g_ij, w_ij, s_ij are the (i,j)^th elements of the matrices A, (A-BL), W, P_∞ respectively. These equations are obtained by expanding the generalized Lyapunov equation pAP_∞ A'+(1-p)(A-BL)P_∞(A-BL)'+W = P_∞ and the non-redundant equations are kept. Once the expected system dynamics, and the condition of boundedness of the state covariance matrix (P_n) are obtained, the result towards the design of the recursive quantizer can be stated such that the system can be ℒ_2 stabilized even in the presence of a finite capacity channel. However, for a vector plant with the input dimensionality less than the state dimensionality (m<n), it is not possible to find the covariance matrix of the quantized signal analytically using the posterior distribution of the quantized signal following the technique of Lemma <ref>. This is stated in the next lemma. Lemma <ref> cannot be extended to the vector plant with m<n. Proof: The proof is provided in section <ref>. 
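As a numerical aside (not part of the manuscript), the sketch below evaluates the intermittence bound of the theorem above for placeholder matrices and computes P_∞ by iterating the generalized Lyapunov recursion to its fixed point, rather than solving the n(n+1)/2 linear equations written out above. The gain L is chosen here so that the spectral norm of (A-BL) is smaller than unity; it is not the LQR gain, in line with the remark following the theorem.

```python
# Sketch: Bernoulli bound for the vector plant and asymptotic covariance by fixed-point
# iteration of P_{k+1} = p A P_k A' + (1-p) (A-BL) P_k (A-BL)' + W.
# A, B, W and the gain L are illustrative placeholders.
import numpy as np

A = np.array([[0.5, 0.4],
              [1.0, 1.2]])          # unstable open loop (one eigenvalue ~ 1.57)
B = np.array([[0.0],
              [1.0]])               # m = 1 < n = 2
L = np.array([[1.0, 1.2]])          # gain giving A - BL = [[0.5, 0.4], [0, 0]], ||A-BL||_2 < 1
W = 0.1 * np.eye(2)

Acl = A - B @ L
nA = np.linalg.norm(A, 2)
nAcl = np.linalg.norm(Acl, 2)
p_max = (1 - nAcl**2) / (nA**2 - nAcl**2)
print(f"||A||_2 = {nA:.3f}  ||A-BL||_2 = {nAcl:.3f}  p_max = {p_max:.4f}")

p = 0.1                              # admissible, since p < p_max here
P = np.eye(2)                        # any positive definite initial guess
for _ in range(10_000):
    P_next = p * A @ P @ A.T + (1 - p) * Acl @ P @ Acl.T + W
    if np.linalg.norm(P_next - P, 2) < 1e-12:
        break
    P = P_next
print("asymptotic covariance P_inf:\n", P_next)
```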
From the scalar example, it has been seen that the upper bound of the intermittence probability is an increasing function of channel capacity 𝒞. However, for vector plant with m<n, the upper bound of p is obtained only for 𝒞→∞. Clearly Theorem <ref> is only a necessity and not sufficiency here. In this situation, the recursive quantization methodology to mitigate the finite capacity channel constraint can only be designed based on the dynamics of the conditional state covariance matrix when 𝒞→∞. The recursion is given in Theorem <ref>. For the vector plant described in section <ref>, for 𝒞< ∞, the recursive update equation of the uniform quantizer step-size per dimension is given by Δ_k+1|γ_0:k^2 = Δ_k|γ_0:k-1^2 [γ_k |A|^2/n+(1-γ_k)|A-BL|^2/n] + 2π e (2^-2𝒞/n)|W|^1/n+ϵ Where, Δ_k|γ_0:k-1 is the conditional quantization step-size per state dimension at k^th time step, and ϵ >0 is a small constant. Proof: The proof is provided in section <ref>. The algorithm for this recursive quantization scheme is given in section <ref>. §.§ Markov intermittence model §.§.§ Scalar Plant A system is taken similar to the one described for the Bernoulli intermittence model. The only difference is that the intermittent switch state γ_k is considered to be a Markov chain whose transition probability matrix (TPM) is given by: T = [ 1-p p; q 1-q; ] , where p,q∈[0,1]. p and q need not be equal. The state of the Markov chain is defined by ζ_k=[(1-π_k),π_k], where π_k=P(γ_k=1). The initial condition of the Markov chain is assumed to be ζ_0 =[(1-π_0), π_0], π_0∈[0,1]. The following lemmas and theorems are used to obtain the conditions for ℒ_2 stabilization of the system. For the scalar plant described in section <ref>, in absence of quantization (i.e., assuming 𝒞→∞), the conditional state variance σ^2_k+1|γ_0:k at the (k+1)^th time step is given by: σ_k+1|γ_0:k^2= (∏_j=0^kξ_j^2)σ_0^2 + [∑_j=0^k-1(∏_i=j+1^kξ_i^2) + 1]σ_ w^2 , and the (expected) state variance σ_k+1^2 at the (k+1)^th time step is given by: σ_k+1^2 = (∏_j=0^kω_j^2)σ_0^2 + [∑_j=0^k-1(∏_i=j+1^kω_i^2) + 1]σ_ w^2 where, ξ_i^2 = α^2 γ_i + (α-bl)^2(1 - γ_i), and ω_i^2 = α^2π_i + (α-bl)^2(1 - π_i). Proof: The proof is provided in section <ref>. For the system discussed in section <ref>, if 𝒞<∞, and if the quantization step size at the k^th time step is Δ_k, conditioning on the intermittence random sequence, the one step conditional state variance update equation at the (k+1)^th time step is given by: σ_k+1|γ_0:k^2=ξ_k^2σ_k|γ_0:k-1^2+b^2 l^2(1-γ_k)Δ_k|γ_0:k-1^2/12+σ_ w^2 , and the one step (expected) state variance update equation at the (k+1)^th time step is given by: σ_k+1^2 = ω_k^2σ_k^2 + b^2l^2(1-π_k)Δ_k^2/12 + σ_ w^2 Proof: The proof is provided in section <ref>. For the system discussed in section <ref>, the one step conditional quantization step size update equation for the (k+1)^th time step is given by: Δ_k+1|γ_0:k^2 = Δ_k|γ_0:k-1^2[ξ_k^2 +η G(1-γ_k)] + ησ_ w^2 + ϵ(1 - ξ_k^2) , and the one step (expected) quantization step size update equation for the (k+1)^th time step is given by: Δ_k+1^2 = Δ_k^2[ω_k^2 + η G(1-π_k)] + ησ_ w^2 + ϵ(1 - ω_k^2) . Here, η = 2π e/2^2𝒞, G = b^2 l^2/12, and ϵ>0 is a small constant. Proof: The proof is provided in section <ref>. Based on these lemmas, the theorem on the conditions for ℒ_2 stabilization can be stated. For the system discussed in section <ref>, the necessary and sufficient conditions for ℒ_2 stabilization are given below. 
* For 𝒞→∞, the necessary and sufficient condition for ℒ_2 stabilization of the system is given by: p/q < 1 - (α-bl)^2/α^2 - 1 , and if the condition is satisfied, the state variance asymptotically converges as follows: lim_k→∞σ_k^2=(1+p/q)/1 - (α-bl)^2 - p/q(α^2 - 1)σ_ w^2 * For 1/2log(α^2/6π e)<𝒞 < ∞, the necessary and sufficient condition for ℒ_2 stabilization of the system is given by: p/q < 1 -(α-bl)^2 - η G/α^2 - 1 , and if the condition is satisfied, the state variance asymptotically converges as follows: lim_k→∞σ_k^2 = σ_ w^2(1+p/q) + ϵ G/(1+p/q)-[α^2p/q+(α-bl)^2+η G] * If 𝒞≤1/2log(α^2/6π e), the system will always be ℒ_2 unstable. Here, η = 2π e/2^2𝒞, G = b^2 l^2/12, and ϵ>0 is a small constant. Proof: The proof is provided in section <ref>. §.§.§ Vector Plant The vector plant with Markov intermittence switching is given by: x_k+1 = {γ_k A +(1-γ_k)(A-BL)} x_k+ w_k Here, γ_k is the intermittent switch binary random variable with Markovian property as discussed in section <ref>. The following lemmas will lead to the main theorems. For the vector plant described in section <ref>, the conditional state covariance matrix in the absence of quantization, (i.e., when the channel capacity (𝒞) is infinite) is given by: P_k+1|γ_0:k = 𝔼( x_k+1 x'_k+1|γ_0:k) = (∏_j=k^0Λ_j) P_0(∏_j=0^kΛ'_j) +[∑_j=0^k-1{(∏_i=k^j+1Λ_i)W(∏_i=j+1^kΛ'_i)}+W] Where, Λ_k = γ_k A + (1-γ_k)(A-BL), P_0 = 𝔼( x_0 x'_0), and W = 𝔼( w_k w'_k). Proof: The proof is provided in section <ref>. For the vector plant described in section <ref>, the one step update equation of the state covariance matrix (P_k) in absence of quantization, (i.e., 𝒞→∞) is given by: P_k+1 = π_k AP_kA'+(1-π_k)(A-BL)P_k(A-BL)'+W where, P_k = 𝔼( x_k x'_k), and π_k = P(γ_k =1). Proof: The proof is provided in section <ref>. Now the main theorem providing the necessary and sufficient condition for ℒ_2 stabilization of the vector plant without quantization under Markov intermittence is stated. For the system described in section <ref>, if 𝒞→∞, the necessary and sufficient condition for the asymptotic state covariance matrix to be bounded and unique is given by: p/q< 1-A-BL_2^2/A_2^2-1 Where, A_2 is the spectral/operator norm of the matrix A. Proof: The proof is provided in section <ref>. Moreover, it is evident that the first part of Theorem <ref> (Eqn. <ref>) can be immediately derived from Theorem <ref> as the spectral norm of the open loop state transition matrix A is reduced to the absolute open loop gain |α| of the scalar plant, and the spectral norm of the closed loop state transition matrix (A-BL) is reduced to the absolute closed loop gain |α-bl| of the scalar plant. If the intermittence parameters of the Markov switch (p and q) satisfy the necessary and sufficient condition as given in Theorem <ref>, the asymptotic state covariance matrix (P_∞) can be computed by solving the following n(n+1)/2 linear equations. ∑_l=2^n∑_k=1^l-1[p(a_ila_jk+a_ika_jl)+q(g_ilg_jk+g_ikg_jl)]s_lk+ ∑_l=1^n [pa_ila_jl+qg_ilg_jl]s_ll = (p+q)(s_ij-w_ij) ∀ i =1,2,…,n and j = 1,2,…, i. Here, a_ij, g_ij, w_ij, s_ij are the (i,j)^th elements of the matrices A, (A-BL), W, and P_∞ respectively. These equations are obtained by expanding the following generalized asymptotic Lyapunov equation and keeping the non-redundant terms. π_∞ AP_∞ A'+(1-π_∞)(A-BL)P_∞(A-BL)'+W = P_∞ Once the system dynamics, and the conditions for convergence of the state covariance matrix are obtained, the main result towards the design of the recursive quantizer can be stated. 
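Before stating that result, the Markov-intermittence conditions above can be illustrated numerically. The sketch below is not from the manuscript: it reuses the same placeholder matrices as in the Bernoulli sketch, with illustrative transition probabilities p and q, forms the stationary probability π_∞ = p/(p+q), checks the p/q bound of the theorem, and obtains P_∞ from the asymptotic generalized Lyapunov equation by fixed-point iteration.

```python
# Sketch: Markov-intermittence counterpart of the Bernoulli check above.
# A, B, L, W are illustrative placeholders; p, q are illustrative TPM parameters.
import numpy as np

A = np.array([[0.5, 0.4], [1.0, 1.2]])
B = np.array([[0.0], [1.0]])
L = np.array([[1.0, 1.2]])
W = 0.1 * np.eye(2)
Acl = A - B @ L

p, q = 0.02, 0.90                      # TPM T = [[1-p, p], [q, 1-q]]
pi_inf = p / (p + q)                   # stationary probability of the 'OFF' state

nA, nAcl = np.linalg.norm(A, 2), np.linalg.norm(Acl, 2)
ratio_max = (1 - nAcl**2) / (nA**2 - 1)
print(f"p/q = {p/q:.4f}, bound (1-||A-BL||_2^2)/(||A||_2^2-1) = {ratio_max:.4f}")

# P_inf from pi_inf*A P A' + (1-pi_inf)*(A-BL) P (A-BL)' + W = P, by iteration.
P = np.eye(2)
for _ in range(10_000):
    P_next = pi_inf * A @ P @ A.T + (1 - pi_inf) * Acl @ P @ Acl.T + W
    if np.linalg.norm(P_next - P, 2) < 1e-12:
        break
    P = P_next
print("asymptotic covariance P_inf:\n", P_next)
```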
However, for a vector plant with m<n, it is not possible to derive the covariance matrix of the quantized signal using the posterior distribution of the quantized signal. Lemma <ref> cannot be extended to the vector plant with m<n. Proof: The proof is provided in section <ref>. In this situation, the quantization to mitigate the finite capacity channel constraint can only be designed based on the dynamics of the conditional state covariance matrix when the channel is of infinite capacity. The result is given in the theorem below. For the system discussed in section <ref>, for 𝒞< ∞, the recursive update equation of the uniform quantizer step-size per dimension is given by Δ_k+1|γ_0:k^2 = Δ_k|γ_0:k-1^2 [γ_k |A|^2/n+(1-γ_k)|A-BL|^2/n] + 2π e (2^-2𝒞/n)|W|^1/n+ϵ Where, Δ_k| γ_0:k-1 is the conditional quantization step-size per state dimension at the k^th time step, and ϵ >0 is a small constant. Proof: The proof is provided in section <ref>. Clearly, for the 𝒞<∞ scenario, the condition of ℒ_2 stabilization of the vector plant under Markov intermittence as stated in Theorem <ref> is only necessary, not sufficient. The algorithm for this recursive quantization scheme is given in section <ref>. §.§ Algorithms Since the recursive quantization procedures are the same for the Bernoulli and Markov intermittence models, the algorithm is presented for both of them together. The algorithm for the scalar plant is presented in Algorithm <ref>, and the algorithm for the vector plant is presented in Algorithm <ref>. In the algorithms, the symbols are not bold. This is to signify that here the realized values are mentioned, not the random variables. § EXAMPLES §.§ Bernoulli intermittence To study the scalar plant with Bernoulli intermittence switching, first it is pertinent to study Theorem <ref>. For the entire range of the channel capacity, i.e., 𝒞∈ [0,∞), the upper bound of the intermittence parameter p is plotted in Fig. <ref> for different absolute values of the plant parameter |α|. All the curves approach their corresponding asymptotes. The next interesting study is of the convergences and divergences of sample state variances for different choices of p. When p is below the upper bound of p as found from Theorem <ref>, and shown in Fig. <ref>, the sample variances for different time samples converge to the theoretically predicted value. If p goes above the bound, the sample state variance starts diverging for larger time points. This result is shown in Fig. <ref>. In this experiment, the following values are used for the system: open loop gain α = 3.3, closed loop gain (α-bl)=0.4, σ_0^2 = 4, and σ_ w^2 = 1. Although Fig. <ref> and Fig. <ref> demonstrate the asymptotic behaviors and the bounds, it is still important to show the dynamics of the system, albeit for just two sample trajectories of the stochastic system. First, a convergent trajectory is shown by choosing the parameters in the convergent region. For α = 3.3, (α-bl)=0.4, p=0.1, 𝒞 = 6 bits/channel use, the state trajectory sample function is not diverging for 1000 time points as seen in Fig. <ref>. However, by modifying the intermittence probability to p=0.3, the divergence of the sample function is observed at a time point of 51 as seen in Fig. <ref>. These simulations are only for tutorial purposes, though; a few sample functions do not say much about the stochastic dynamics. §.§ Markov intermittence For the scalar plant with Markov intermittent switching, Theorem <ref> is studied. 
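Before the Markov plots are discussed, the following sketch gives one possible reading of the scalar recursive quantization loop, since Algorithm <ref> itself is not reproduced in this extract. Bernoulli switching is shown (under the Markov model only the way γ_k is drawn changes); b = 1 is assumed, ϵ is an arbitrarily small constant, and the parameters follow the Bernoulli example above except that p is taken below the bound of Theorem <ref>.

```python
# Sketch: one possible reading of the scalar recursive quantization loop
# (not the paper's Algorithm verbatim). Bernoulli switching; b = 1 assumed.
import numpy as np

rng = np.random.default_rng(1)
alpha, cl = 3.3, 0.4
bl = alpha - cl                        # b*l ; with b = 1, l = bl
C, p = 6.0, 0.05
sigma02, sigmaw2, eps = 4.0, 1.0, 1e-3
eta, G = 2 * np.pi * np.e / 2**(2 * C), bl**2 / 12.0

def quantize(x, step):
    """Uniform quantization of x to the midpoint of its bin of width `step`."""
    return (np.floor(x / step) + 0.5) * step

x = rng.normal(0.0, np.sqrt(sigma02))
delta2 = 2 * np.pi * np.e * sigma02 / 2**(2 * C) + eps   # Delta_0^2 from the capacity constraint
for k in range(1000):
    gamma = rng.random() < p                  # gamma_k = 1 -> switch OFF, loop open
    if gamma:
        u = 0.0                               # no observation received, no control applied
    else:
        u = -bl * quantize(x, np.sqrt(delta2))    # u_k = -l * x_k^Delta (b = 1)
    x = alpha * x + u + rng.normal(0.0, np.sqrt(sigmaw2))
    xi2 = alpha**2 if gamma else cl**2
    # conditional step-size recursion of the quantization lemma, driven by the observed gamma_k
    delta2 = delta2 * (xi2 + eta * G * (1 - gamma)) + eta * sigmaw2 + eps * (1 - xi2)

print(f"final |x| = {abs(x):.3f}, final quantization step = {np.sqrt(delta2):.3f}")
```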
For the entire range of the channel capacity, i.e., 𝒞∈ [0,∞), the upper bound of (p/q) is plotted for different absolute values of the plant parameter |α|. All the curves approach their corresponding asymptotes. The next simulation study is of the convergences and divergences of sample state variances for different choices of (p/q). When p/q is below the upper bound of p/q as found from Theorem <ref>, and shown in Fig. <ref>, the sample variances for different time samples converge to the theoretically predicted value. If p goes above the bound, the sample state variance starts diverging for larger time points. This result is shown in Fig. <ref>. In this experiment, the following values are used for the system: open loop gain α = 3.3, closed loop gain (α-bl)=0.4, σ_0^2 = 1, and σ_ w^2 = 1. First, a convergent trajectory is shown by choosing the parameters in the convergent region. For α = 3.3, (α-bl)=0.4, p=0.05, q=0.95, 𝒞 = 6 bits/channel use, the state trajectory sample function is not diverging for 1000 time points as seen in Fig. <ref>. However, by modifying one parameter p to p=0.25, the divergence of the sample function is observed at a time point of 224 as seen in Fig. <ref>. Unfortunately, for the vector system, there is no necessary and sufficient condition on the intermittence parameters for a finite capacity channel setting. So, simulations such as Fig. <ref> or Fig. <ref> are not possible. Only sample function trajectory simulations could have been shown, but those would have been similar to the scalar ones; hence, they are omitted. § CONCLUSION The work presented in this manuscript is novel and it opens up various fundamental questions in the intersection of control and communication theory. One open question is already discussed after stating Theorem <ref>. The design of an optimal controller for the intermittent vector system such that the spectral norm of the closed loop gain matrix remains smaller than unity is an open problem to the best of the knowledge of the authors. Moreover, the sufficient condition on the intermittence parameters for the vector plant with a finite capacity channel is still open. The lower bound on the capacity for the vector plant has also not been established. Furthermore, the immediate next big problem is that of output feedback. The joint behavior of the Kalman filter and the controller should be studied under the given constraints. The finite horizon counterparts of the problems are also important. On the quantization part, non-uniform entropy coded quantization needs to be undertaken when the capacity value is small. Systems with nonlinear plant, non-Gaussian additive noise, multiplicative noise, or time varying channels are much harder problems which can be investigated in the future. Finally, robust optimal control strategies such as H_∞ can be researched under the constraints of intermittent observations and finite capacity channel. § PROOFS OF THE MAIN RESULTS §.§ Proof of Lemma <ref> The intermittent system dynamics under the 𝒞→∞ assumption is described by: x_k+1 = αγ_k x_k + (α-bl) (1-γ_k) x_k + w_k = [αγ_k + (α-bl) (1 - γ_k) ] x_k + w_k = ξ_k x_k + w_k where, ξ_k is a binary random variable, and l is the infinite horizon LQR gain. Now, ξ_k^2 = α^2 γ_k + (α-bl)^2 (1 - γ_k). Clearly, ξ_k^2 is also a binary random variable and 𝔼(ξ_k^2ξ_j^2) = 𝔼(ξ_k^2)𝔼(ξ_j^2) ∀ j ≠ k. Therefore, from the stationarity of ξ_k^2, define ω^2:= 𝔼(ξ_k^2) = α^2 p + (α-bl)^2 (1 - p). 
So, the one step conditional variance update equation at the (k+1)^th time step is given by: 𝔼( x_k+1^2|γ_k) = [α^2 γ_k + (α-bl)^2 (1 - γ_k)]𝔼( x_k^2) + σ_ w^2 Conditioning over γ_0:k-1, the conditional state variance is computed as: σ_k+1|γ_0:k^2=𝔼[ x_k+1^2|γ_0:k] = ξ_k^2 𝔼[ x_k^2|γ_0:k-1] + σ_ w^2 = (∏_j=0^kξ_j^2 )σ_0^2 + [∑_j=0^k-1( ∏_i=j+1^kξ_i^2 ) + 1 ] σ_ w^2 The state variance at (k+1)^th time step is computed by taking expectation of the conditional state variance with respect to γ_0:k. σ_k+1^2 = 𝔼[𝔼( x_k+1^2|γ_0:k)] = [ω^2(k+1)σ_0^2 + (1 + ω^2 + ω^4 + ... + ω^2k)σ_ w^2 ] = ω^2(k+1)σ_0^2 + 1-ω^2(k+1)/1-ω^2σ_ w^2 That concludes the proof. ▪ §.§ Proof of Lemma <ref> Considering an infinite horizon LQR control formulation, the random control input is u_k = -l x_k. x_k is quantized to x_k^Δ. Therefore, the quantization error is e_k = x_k - x_k^Δ. The closed loop equation is given by: x_k+1 = α x_k + b(-l x_k^Δ) + w_k = α x_k - bl( x_k - e_k) + w_k = (α-bl) x_k + bl e_k + w_k The open loop equation is given by: x_k+1 = α x_k + w_k Thus, the combined random dynamics can be expressed as: x_k+1 = [αγ_k + (α-bl) (1 - γ_k)] x_k + bl(1 - γ_k) e_k + w_k = ξ_k x_k + bl(1 - γ_k) e_k + w_k At the (k+1)^th time step, the `typical' domain of the conditional probability density function f_ x_k+1|γ_0:k(x_k+1|γ_0:k) is divided uniformly into n_k+1|γ_0:k number of bins for quantization with the width of each bin being Δ_k+1|γ_0:k. At the (k+1)^th time step, consider the j^th bin ℛ_k+1|γ_0:k^j. If x_k+1∈ℛ_k+1|γ_0:k^j, assign the quantized value x_k+1^Δ = x_k+1|γ_0:k^j, where x_k+1|γ_0:k^j is chosen to be the midpoint of the interval ℛ_k+1|γ_0:k^j. The conditional density of x_k+1 given the bin ℛ_k+1|γ_0:k^j is: f_ x_k+1| x_k+1^Δ = x_k+1|γ_0:k^j(x_k+1| x_k+1^Δ = x_k+1|γ_0:k^j) = f_ x_k+1|γ_0:k(x_k+1|γ_0:k)/P( x_k+1^Δ = x_k+1|γ_0:k^j) = f_ x_k+1|γ_0:k(x_k+1|γ_0:k)/∫_ℛ_k+1|γ_0:k^jf_ x_k+1|γ_0:k(x_k+1|γ_0:k) dx_k+1 . The conditional density function of the error e_k+1 given the bin ℛ_k+1|γ_0:k^j is: f_ e_k+1| x_k+1^Δ = x_k+1|γ_0:k^j(e_k+1| x_k+1^Δ = x_k+1|γ_0:k^j) = f_ e_k+1| x_k+1^Δ = x_k+1|γ_0:k^j(x_k+1 - x_k+1|γ_0:k^j| x_k+1^Δ = x_k+1|γ_0:k^j) = f_ x_k+1| x_k+1^Δ = x_k+1|γ_0:k^j(x_k+1| x_k+1^Δ = x_k+1|γ_0:k^j) For large enough capacity, if x_k+1∈ℛ_k+1|γ_0:k^j, f_ x_k+1| x_k+1^Δ = x_k+1|γ_0:k^j(x_k+1| x_k+1^Δ = x_k+1|γ_0:k^j) = f_ x_k+1| x_k+1^Δ = x_k+1|γ_0:k^j( x_k+1|γ_0:k^j| x_k+1^Δ = x_k+1|γ_0:k^j) . Therefore, f_ e_k+1| x_k+1^Δ = x_k+1|γ_0:k^j( e_k+1| x_k+1^Δ = x_k+1|γ_0:k^j) = 1/Δ_k+1|γ_0:k Hence, the marginal density function of e_k+1 over the choice of j∈{1,2,…, n_k+1|γ_0:k} is given by: f_ e_k+1|γ_0:k(e_k+1|γ_0:k) = ∑_j=1^ n_k+1|γ_0:k1/Δ_k+1|γ_0:kP( x_k+1^Δ = x_k+1|γ_0:k^j) = 1/Δ_k+1|γ_0:k∑_j=1^ n_k+1|γ_0:k P( x_k+1^Δ = x_k+1|γ_0:k^j) = 1/Δ_k+1|γ_0:k Thus, e_k+1∼𝒰([-Δ_k+1|γ_0:k/2,+Δ_k+1|γ_0:k/2]) is a (conditionally) uniform random variable with 𝔼( e_k+1|γ_0:k) = 0, and Var( e_k+1|γ_0:k) = Δ_k+1|γ_0:k^2/12. The conditional state variance update equation is: σ_k+1|γ_0:k^2 = 𝔼[ x_k+1^2|γ_0:k] = ξ_k^2σ_k|γ_0:k-1^2 + b^2 l^2(1 - γ_k) Δ_k|γ_0:k-1^2/12 + σ_ w^2 The state variance update equation is obtained by taking expectation over γ_0:k. σ_k+1^2 = 𝔼[σ_k+1|γ_0:k^2] = ω^2σ_k^2 + b^2 l^2 (1 - p)Δ_k^2/12 + σ_ w^2 That completes the proof. ▪ §.§ Proof of Lemma <ref> Let the initial step size for quantization be Δ_0. From Shannon's capacity theorem <cit.>, 2^h( x_0)/Δ_0≤ 2^𝒞 ⇒Δ_0 = √(2π e)σ_0/2^𝒞 + ϵ, where ϵ is a small positive constant. Similarly, for (k+1)^th time step, Δ_k+1 = √(2π e)σ_k+1/2^𝒞 + ϵ. 
Conditioning the quantization step-sizes with respect to the intermittence random sequences , the one step conditional update equation becomes Δ_k+1|γ_0:k^2 = 2π e σ_k+1|γ_0:k^2/2^2𝒞 + ϵ =2π e/2^2𝒞[ξ_k^2σ_k|γ_0:k-1^2 + b^2 l^2(1 - γ_k)Δ_k|γ_0:k-1^2/12 + σ_ w^2] + ϵ = Δ_k|γ_0:k-1^2 [ξ_k^2 + η G(1 - γ_k)] + ησ_ w^2 + ϵ(1 - ξ_k^2) where, η = 2π e/2^2𝒞, G = b^2 l^2/12, and ϵ is a small positive constant. Taking expectation over γ_0:k, the expected one step quantization update equation is obtained. Δ_k+1^2 = Δ_k^2[ω^2 + η G(1 - p)] + ησ_w^2 + ϵ(1 - ω^2) That concludes the proof. ▪ §.§ Proof of Theorem <ref> * For ℒ_2 stabilization of the system of section <ref>, and assuming 𝒞→∞, Eqn. <ref> from Lemma <ref> states that the asymptotic state variance lim_n→∞σ_n^2 < ∞ if and only if ω^2 < 1. So, the necessary and sufficient condition is: α^2 p + (α-bl)^2 (1 - p) < 1 ⇒ p < 1 - (α-bl)^2/α^2 - (α-bl)^2 Moreover, the asymptotic state variance will be: lim_n→∞σ_n^2 =lim_n→∞[ ω^2(n+1)σ_0^2 + (1 - ω^2(n+1)/1 - ω^2)σ_ w^2] = σ_ w^2/(1 - ω^2) * Now consider 1/2log(π e α^2/6)<𝒞<∞. Conditioning Eqn. <ref> on the intermittence random vector γ_0:k-1, the following random equation is obtained. σ_k+1|γ_0:k^2 = ξ_k^2σ_k|γ_0:k-1^2 + G(1 - γ_k) Δ_k|γ_0:k-1^2+ σ_ w^2 Moreover, using Eqn. <ref> for the k^th time step, the following equation is obtained. Δ_k|γ_0:k-1^2 = ησ_k|γ_0:k-1^2 + ϵ Combining the above two equations, the following recursion is obtained. σ_k+1|γ_0:k^2 = [ξ_k^2 + η G(1 - γ_k)]σ_k|γ_0:k-1^2 + [σ_ w^2 + G(1 - γ_k)ϵ] Taking expectation with respect to γ_0:k, and extending the recursion up to the initial condition, the variance formula is obtained. σ_n^2 = [ω^2 + η G(1 - p)]^nσ_0^2 + ∑_k=0^n-1[σ_ w^2 + G(1-p)ϵ] [ω^2 + η G(1-p)]^k Clearly the necessary and sufficient condition for lim_n→∞σ_n^2 <∞ is given by: [ω^2 + η G(1-p)]< 1 Here, it will be imperative to observe the influence of the finite capacity 𝒞 on the closed loop even in absence of any intermittent unstable open loop. Also consider an infinite control cost which will make the closed loop gain very small. With these ideal conditions of p=0, and (α-bl)→ 0, Eqn. <ref> will become: η G <1 ⇒2 π e/2^2𝒞b^2l^2/12 <1 ⇒π e α^2/6<2^2𝒞 ⇒𝒞> 1/2log(π e α^2/6) Clearly, in non-ideal condition, the necessary (not sufficient) condition for ℒ_2 stabilization of the system is given in Eqn. <ref>. For large enough 𝒞, the upper bound of p for ℒ_2 stabilization of the system is thus obtained by rearranging Eqn. <ref>. p < 1 - (α-bl)^2 - η G/α^2 - (α-bl)^2 - η G Moreover, the asymptotic state variance is given by: lim_n →∞σ_n^2 = σ_ w^2 + G(1 - p)ϵ/1 - (ω^2 + η G(1 - p)) * If 𝒞≤1/2log(π e α^2/6), from Eqn. <ref>, the necessary condition breaks down, and even in the ideal condition of p=0, and α = bl, the system goes ℒ_2 unstable. That concludes the proof. ▪ §.§ Proof of Lemma <ref> The stochastic system of section <ref> with control law u_k=-L x_k is given by: x_k+1 = γ_k A x_k + (1 - γ_k)(A-BL) x_k + w_k = Λ_k x_k + w_k Where, Λ_k = γ_k A+ (1 - γ_k)(A-BL) is a random matrix with every element a Bernoulli random variable. The one step conditional state covariance matrix equation is given by: P_k+1|γ_k = 𝔼( x_k+1 x'_k+1|γ_k) = Λ_k P_kΛ'_k +W where, P_k=𝔼( x_k x'_k) is the state covariance matrix at k^th time step. 
Conditioning on γ_0:k-1, the equation becomes: P_k+1|γ_0:k = Λ_k P_k|γ_0:k-1Λ'_k +W Expanding the recursion till the initial condition, the following expression is obtained: P_k+1|γ_0:k=(∏_j=k^0Λ_j) P_0(∏_j=0^kΛ'_j) +[∑_j=0^k-1{(∏_i=k^j+1Λ_i)W(∏_i=j+1^kΛ'_i)}+W] . That concludes the proof. ▪ §.§ Proof of Lemma <ref> The Bernoulli random sequence {γ_0, γ_1, …γ_n,…} is stationary with 𝔼(γ_k) = p. From Eqn <ref>, it can be seen that P_k+1|γ_k = Λ_k P_k Λ'_k +W = γ_k AP_k A'+(1-γ_k) (A-BL)P_k(A-BL)'+W Where, P_k=𝔼( x_k x_k'). Taking expectation with respect to γ_k, the desired form (Eqn.<ref>) is obtained. That concludes the proof. ▪ §.§ Proof of Theorem <ref> The dynamics of the state covariance matrix P_k as discussed in lemma <ref> (Eqn. <ref>) can be visualized as a flow in the open convex cone of positive definite matrices (M_n^+). Obviously the set is not a metric space. However, the open set can be completed by including the singular positive semi-definite matrices to form the closed cone (M_n). Since the set is not compact, not all flows lead to a fixed point. However, if a contraction operator can be constructed to define the dynamics, it will asymptotically converge to a fixed point. For any p∈ [0,1], define the operator 𝒯_p:M_n→ M_n as: 𝒯_p(P) = pAPA'+(1-p)(A-BL)P(A-BL)' Then the state covariance update equation can be written as: P_k+1 = 𝒯_p(P_k)+W Since W∈ M_n^+, and 𝒯_p(P_k) ∈ M_n, it implies that P_k+1∈ M_n^+. Clearly, lim_k→∞ P_k will be unique and bounded if and only if 𝒯_p is a contraction or equivalently 𝒯_p(P)_2<P_2 for all P∈ M_n. Now, 𝒯_p(P)_2 = pAPA'+(1-p)(A-BL)P(A-BL)'_2 ≤ pAPA'_2+(1-p)(A-BL)P(A-BL)'_2 = pAPA'_2+(1-p)(A-BL)P(A-BL)'_2 ≤ pA_2^2P_2+(1-p)A-BL_2^2P_2 = P_2 (pA_2^2+(1-p)A-BL_2^2) Here Eqn.41 is obtained from the triangle inequality of spectral norm, Eqn. 42 is obtained from the scalar multiplication of spectral norm, and Eqn. 43 is obtained from the submultiplicative inequality of spectral norm. Therefore, the necessary and sufficient condition for 𝒯_p to be a contraction is given by: [pA_2^2+(1-p)A-BL_2^2]<1. By rearrangement, the expression of the theorem (Eqn. <ref>) is obtained. That concludes the proof. ▪ §.§ Proof of Lemma <ref> When γ_k = 1, the open loop system is given by: x_k+1 = A x_k+ w_k, and for γ_k = 0, the loop is closed using the control signal u_k = -L x_k^Δ. Here, x_k^Δ is the quantized state at time k. The quantization error is e_k = x_k- x_k^Δ. So, the closed loop dynamics is given by: x_k+1 = A x_k+B(-L x_k^Δ)+ w_k = (A-BL) x_k+BL e_k+ w_k Hence the stochastic dynamics based on the Bernoulli switching random variable γ_k is given by: x_k+1 = Λ_k x_k+(1-γ_k)BL e_k+ w_k Where, Λ_k = {γ_kA+(1-γ_k)(A-BL)}. The one step update of the conditional state covariance matrix at time (k+1) is given by: P_k+1|γ_0:k = γ_k A P_k|γ_0:k-1A' + (1-γ_k)(A-BL) P_k|γ_0:k-1(A-BL)' + (1-γ_k)BL𝔼( e_k e_k'|γ_0:k-1)L'B'+W Since m<n, the matrix BL is singular, and hence the innovation contribution (differential entropy) of the quantization error vector e_k to the state differential entropy is -∞, resulting in the volume of the corresponding typical support set being zero. Hence, this method cannot be used to relate the state covariance matrix and the quantization step-size like Lemma <ref>. That concludes the proof. 
▪ §.§ Proof of Theorem <ref> For a Gaussian random vector x_k with conditional covariance matrix P_k|γ_0:k-1, the conditional differential entropy is given by: h( x_k|γ_0:k-1) = 1/2log((2π e)^n| P_k|γ_0:k-1|) ⇒| P_k|γ_0:k-1|^1/n = (1/2 π e) 2^[2/n h( x_k|γ_0:k-1)] From Lemma <ref>, the one step update rule for the conditional covariance is obtained. P_k+1|γ_0:k =γ_k A P_k|γ_0:k-1A' +(1-γ_k)(A-BL) P_k|γ_0:k-1(A-BL)'+W Applying Minkowski's inequality to Eqn. <ref>, the following inequality is obtained. | P_k+1|γ_0:k|^1/n≥[γ_k |A|^2/n+(1-γ_k) |A-BL|^2/n]| P_k|γ_0:k-1|^1/n+|W|^1/n Using the conditional differential entropy from Eqn. <ref> in this form gives a version of the Entropy Power Inequality (EPI) <cit.>: 2^[2/n h( x_k+1|γ_0:k)]≥ 2^[2/n h( w_k)] + 2^[2/n h( x_k|γ_0:k-1)][γ_k |A|^2/n+(1-γ_k) |A-BL|^2/n] The quantization step-size depends on the typical conditional support set of the state vector at every instance. If the conditional quantization step-size per dimension at time instance k+1 is Δ_k+1|γ_0:k, and the typical conditional support set cardinality is 2^[2/n h( x_k+1|γ_0:k)], it is deduced that: log[2^[2/n h( x_k+1|γ_0:k)]/Δ^n_k+1|γ_0:k]<𝒞 Combining the above two inequalities (<ref> and <ref>), the theorem statement is obtained, completing the proof. ▪ §.§ Proof of Lemma <ref> For Markov intermittence, the stochastic system can be described by x_k+1 = ξ_k x_k + w_k, where the initial state x_0∼𝒩(0,σ_0^2), and ξ_k = αγ_k + (α-bl)(1 - γ_k). Since γ_k is a Markov chain with transition probability matrix (TPM) T, ξ_k is also a binary Markov chain with the same TPM T with the two states being the open loop gain α and the closed loop gain (α-bl). Clearly, ξ_k^2 = α^2 γ_k+(α-bl)^2(1-γ_k), and ω_k^2 = 𝔼(ξ_k^2)=α^2π_k+(α-bl)^2(1-π_k). The probability state vector ζ_k = [(1-π_k), π_k] of the Markov chain follows the dynamics ζ_k+1=ζ_k T with the initial condition ζ_0. Since the conditional state variance σ_k+1|γ_0:k^2 at the (k+1)^th time step depends only on the initial condition and the sequence γ_0:k, the formulation is identical to the derivation under the Bernoulli intermittence assumption as shown in section <ref>. Therefore, the repetition is omitted to prove Eqn. <ref>. However, the sequence γ_0:k for a Markov switching scheme is not a sequence of independent and/or identically distributed random variables like the Bernoulli example. Hence, taking expectation of Eqn. <ref> with respect to γ_0:k will not reduce the formula like the Bernoulli one. Instead, the form of Eqn. <ref> is obtained. That concludes the proof. ▪ §.§ Proof of Lemma <ref> The first part of the proof is similar to that of the Bernoulli intermittence model as shown in section <ref>. The one step conditional variance update equation (Eqn. <ref>) is identical to Eqn. <ref>. Hence, the proof is omitted. Now, taking expectation of Eqn. <ref> with respect to γ_0:k results in Eqn. <ref>. That concludes the proof. ▪ §.§ Proof of Lemma <ref> The proof of Eqn. <ref> is exactly the same as that of Eqn. <ref>, which is elaborated in section <ref>. Next, taking expectation of Eqn. <ref> with respect to γ_0:k results in Eqn. <ref>. That concludes the proof. ▪ §.§ Proof of Theorem <ref> * The necessary and sufficient condition for ℒ_2 stabilization (i.e., lim_k→∞σ_k^2<∞) of the system under the assumption of 𝒞→∞ can be obtained from Eqn. <ref> as: lim_k→∞∏_j=0^kω_j^2 < ∞. This will be satisfied if and only if lim_k→∞ω_k^2 < 1. 
Now, ω_k^2 = α^2 π_k + (α-bl)^2(1 - π_k), and ζ_∞=[(1-π_∞),π_∞]=lim_k→∞ζ_k is the steady state distribution of the intermittent switch Markov Chain. The steady state distribution ζ_∞ is the left eigenvector of the transition probability matrix T with unity eigenvalue. So, π_∞ is obtained by solving the eigen-equation ζ_∞ = ζ_∞ T, and is given by: π_∞ = p/p+q. Therefore, the condition is: lim_k→∞ω_k^2 = α^2π_∞+ (α-bl)^2(1-π_∞) < 1 That provides the upper bound of the ratio of the two transition probability parameters of the Markov chain for ℒ_2 stabilization of the system under the assumption 𝒞→∞: p/q < 1 - (α-bl)^2/α^2 - 1 Now, if lim_k→∞ω_k^2 < 1, then lim_k→∞∏_j=0^kω_j^2 → 0. Therefore the asymptotic behavior of the state variance can be obtained from Eqn. <ref> by taking limit k→∞. lim_k→∞σ_k^2 = lim_k→∞(σ_ w^2/1 - ω_k^2) = σ_ w^2/1 - [α^2π_∞ + (α-bl)^2(1 - π_∞)] = [(1 + p/q)/1 - (α-bl)^2 - (α^2 - 1)p/q]σ_ w^2 * For finite capacity system (𝒞<∞), Eqn. <ref> gives the one step conditional state variance update equation under the quantized scheme. Also, from Shannon's capacity theorem <cit.>, the relationship of state variance with the quantization step-size is obtained as: Δ_k|γ_0:k-1^2 = ησ_k|γ_0:k-1^2+ϵ Combining this result with Eqn. <ref>, the following recursion is obtained: σ_k+1|γ_0:k^2 = [ξ_k^2+η G (1-γ_k)]σ_k|γ_0:k-1^2+[σ_ w^2+G(1-γ_k)ϵ] Taking expectation with respect to γ_0:k results in σ_k+1^2 = [ω_k^2+η G (1-π_k)]σ_k^2+[σ_ w^2+G(1-π_k)ϵ] The necessary and sufficient condition for lim_k→∞σ_k^2<∞ is lim_k→∞ [ω_k^2+η G (1-π_k)]<1. The steady state distribution of the intermittent switch Markov chain is ζ_∞ = [(1-π_∞), π_∞] = [q/p+q, p/p+q]. Hence, the necessary and sufficient condition for ℒ_2 stabilizability of the system is given by: α^2π_∞+[(α-bl)^2 + η G](1-π_∞)<1 The condition in terms of the ratio of the transition probability parameters is given by: p/q < 1 - (α-bl)^2 - η G/α^2 - 1 Then the asymptotic state variance is given by: lim_k→∞σ_k^2 = σ_ w^2 + G(1 - π_∞)ϵ/1 - (ω_∞^2 + η G(1 - π_∞)) = (1 + p/q)σ_ w^2 + ϵ G/(1 + p/q) - [α^2 p/q + (α-bl)^2 + η G] This proves Eqn. <ref>. Moreover, in idealized condition of p=0 (i.e., the transition probability of the switch from `ON' to `OFF' state is zero), q=1 (i.e., the transition probability of the switch from `OFF' to `ON' state is unity), and (α-bl)→ 0 (i.e., the closed loop gain is extremely small, which is feasible with infinite control cost), the condition for ℒ_2 stability of the system becomes: η G <1. That gives the lower bound of the capacity for ℒ_2 stabilization. Assuming α = b l, 2π e/2^2𝒞b^2l^2/12<1 ⇒𝒞>1/2log(α^2/6π e) * If 𝒞≤1/2log(α^2/6π e), even in the idealized situation the state variance will diverge. The system will be always ℒ_2 unstable. Thus the proof is concluded. ▪ §.§ Proof of Lemma <ref> The system as discussed in section <ref> is given by: x_k+1 = γ_k A x_k + (1 - γ_k)(A-BL) x_k + w_k = Λ_k x_k + w_k Hence, P_k+1|γ_0:k= Λ_k P_k|γ_0:k-1Λ'_k + W. Eqn. <ref> is obtained by expanding the recursion till the initial condition . That concludes the proof. ▪ §.§ Proof of Lemma <ref> The Markov sequence {γ_0, γ_1, …γ_n,…} is ergodic with P(γ_k = 1) = π_k. From Lemma <ref>, P_k+1|γ_0:k = γ_k A P_k|γ_0:k-1 A' +(1-γ_k) (A-BL) P_k|γ_0:k-1(A-BL)'+W Eqn. <ref> is obtained by taking expectation of the above equation with respect to γ_0:k. That concludes the proof. ▪ §.§ Proof of Theorem <ref> The dynamics of the state covariance matrix P_k as given in Eqn. 
<ref> in Lemma <ref> can be visualized as a flow in the open convex cone of positive definite matrices (M_n^+). Obviously, the open cone is not a complete metric space. However, the open set can be completed by including the singular positive semi-definite matrices to form the closed cone (M_n). Since the set is not compact, not all flows lead to a fixed point. Therefore, if a sequence of contraction operators can be constructed to define the dynamics of the covariance matrix, it will asymptotically converge to a fixed point in M_n^+. For the sequence {π_k∈ [0,1]}, ∀ k∈ℕ, define a sequence of operators 𝒯_k:M_n→ M_n as: 𝒯_k(P) = π_k APA'+(1-π_k)(A-BL)P(A-BL)' Then the state covariance update equation can be written as: P_k+1 = 𝒯_k(P_k)+W Since W∈ M_n^+ and 𝒯_k(P_k) ∈ M_n, it follows that P_k+1∈ M_n^+. Clearly, lim_k→∞ P_k will be unique and bounded if and only if lim_k→∞𝒯_k is a contraction, or equivalently, lim_k→∞‖𝒯_k(P)‖_2<‖P‖_2 for all P∈ M_n. Now, lim_k→∞‖𝒯_k(P)‖_2 = ‖π_∞ APA'+(1-π_∞)(A-BL)P(A-BL)'‖_2 ≤ ‖π_∞ APA'‖_2+‖(1-π_∞)(A-BL)P(A-BL)'‖_2 = π_∞‖APA'‖_2+(1-π_∞)‖(A-BL)P(A-BL)'‖_2 ≤ π_∞‖A‖_2^2‖P‖_2+(1-π_∞)‖A-BL‖_2^2‖P‖_2 = ‖P‖_2 [π_∞‖A‖_2^2+(1-π_∞)‖A-BL‖_2^2] Here the first inequality is obtained from the triangle inequality of the spectral norm, the subsequent equality from the scalar multiplication property of the spectral norm, and the second inequality from the submultiplicative inequality of the spectral norm. Moreover, π_∞ = (p/p+q). Clearly, the necessary and sufficient condition for 𝒯_∞ to be a contraction is given by: [(p/p+q)‖A‖_2^2+(q/p+q)‖A-BL‖_2^2]<1. By rearrangement, Eqn. <ref> of Theorem <ref> is obtained. That concludes the proof. ▪ §.§ Proof of Lemma <ref> The stochastic dynamics based on the Markov switching random variable γ_k is given by: x_k+1 = Λ_k x_k+(1-γ_k)BL e_k+ w_k where Λ_k = {γ_kA+(1-γ_k)(A-BL)} is the random state transition matrix. The conditional state covariance matrix given the sequence γ_0:k is given by: P_k+1|γ_0:k = (1-γ_k)(A-BL)P_k|γ_0:k-1(A-BL)' + γ_k A P_k|γ_0:k-1A' + (1-γ_k)BL𝔼( e_k e_k'|γ_0:k-1)L'B'+W Since m<n, the matrix BL is singular; hence, the innovation contribution (differential entropy) of the quantization error vector to the state differential entropy is -∞, resulting in the volume of the corresponding typical support set being zero. That concludes the proof. ▪ §.§ Proof of Theorem <ref> The proof is identical to the proof given in section <ref> for Theorem <ref>. That concludes the proof. ▪ bellman1954theory Richard Bellman, “The theory of dynamic programming,” Bulletin of the American Mathematical Society, vol. 60, no. 6, pp. 503–515, 1954. anderson2007optimal Brian D.O. Anderson, and John B. Moore, Optimal control: linear quadratic methods, Courier Corporation, 2007. horn2012matrix Roger A. Horn, and Charles R. Johnson, Matrix analysis, Cambridge University Press, 2012. bertsekas2011dynamic Dimitri P. Bertsekas, Dynamic programming and optimal control, vol. II, Athena Scientific, Belmont, MA, 2011. aastrom2012introduction Karl J. Åström, Introduction to stochastic control theory, Courier Corporation, 2012. athans1971role Michael Athans, “The role and use of the stochastic linear-quadratic-Gaussian problem in control system design,” IEEE Transactions on Automatic Control, vol. 16, no. 6, pp. 529–552, 1971. shannon1948mathematicalf Claude E. Shannon, “A mathematical theory of communication,” The Bell System Technical Journal, vol. 27, no. 3, pp. 379–423, 1948. cover1999elements Thomas M. Cover, and Joy A. Thomas, Elements of information theory, John Wiley & Sons, 2006. gallager2008principles Robert G.
Gallager, Principles of digital communication, Cambridge University Press, Cambridge, UK, 2008. tanaka2016semidefinite Takashi Tanaka, Kwang-Ki K. Kim, Pablo A. Parrilo, and Sanjoy K. Mitter, “Semidefinite programming approach to Gaussian sequential rate-distortion trade-offs,” IEEE Transactions on Automatic Control, vol. 62, no. 4, pp. 1896-1910, 2016. basar2013 Serdar Yüksel, and Tamer Basar, Stochastic networked control systems: Stabilization and optimization under information constraints, Springer Science & Business Media, 2013. charalambous2017information Charalambos D. Charalambous, Christos K. Kourtellaris, and Ioannis Tzortzis, “Information transfer of control strategies: Dualities of stochastic optimal control theory and feedback capacity of information theory,” IEEE Transactions on Automatic Control, vol. 62, no. 10, pp. 5010–5025, 2017. tanaka2018 Takashi Tanaka, Peyman M. Esfahani, and Sanjoy K. Mitter, “LQG Control With Minimum Directed Information: Semidefinite Programming Approach,” IEEE Transactions on Automatic Control, vol. 63, no. 1, pp. 37-52, 2018. nair2007feedback Girish N. Nair, Fabio Fagnani, Sandro Zampieri, and Robin J. Evans, “Feedback control under data rate constraints: An overview,” Proceedings of the IEEE, vol. 95, no. 1, pp. 108-137, 2007. kalman1960new Rudolph E. Kalman, “A new approach to linear filtering and prediction problems,” J. Basic Eng., vol. 82, no. 1, pp. 35-45, 1960. you2010minimum Keyou You, and Lihua Xie, “Minimum data rate for mean square stabilizability of linear systems with Markovian packet losses,” IEEE Transactions on Automatic Control, vol. 56, no. 4, pp. 772-785, 2010. kostina2021exact Victoria Kostina, Yuval Peres, Gireeja Ranade, and Mark Sellke, “Exact minimum number of bits to stabilize a linear system,” IEEE Transactions on Automatic Control, vol. 67, no. 10, pp. 5548-5554, 2022. takashi2022 Photios A. Stavrou, Mikael Skoglund, and Takashi Tanaka, “Sequential Source Coding for Stochastic Systems Subject to Finite Rate Constraints,” IEEE Transactions on Automatic Control, vol. 67, no. 8, pp. 3822-3835, 2022. massey1990causality James Massey, “Causality, feedback and directed information,” in Proc. Int. Symp. Inf. Theory Applic. (ISITA-90), vol. 2, Waikiki, Hawaii, USA, 1990. sinopoli2004kalman Bruno Sinopoli, Luca Schenato, Massimo Franceschetti, Kameshwar Poolla, Michael I. Jordan, and Shankar S. Sastry, “Kalman filtering with intermittent observations,” IEEE Transactions on Automatic Control, vol. 49, no. 9, pp. 1453–1464, 2004. plarre2009kalman Kurt Plarre, and Francesco Bullo, “On Kalman filtering for detectable systems with intermittent observations,” IEEE Transactions on Automatic Control, vol. 54, no. 2, pp. 386–390, 2009. kluge2010stochastic Sebastian Kluge, Konrad Reif, and Martin Brokate, “Stochastic stability of the extended Kalman filter with intermittent observations,” IEEE Transactions on Automatic Control, vol. 55, no. 2, pp. 514-518, 2010. rohr2014kalman Eduardo Rath Rohr, Damián Marelli, and Minyue Fu, “Kalman filtering with intermittent observations: On the boundedness of the expected error covariance,” IEEE Transactions on Automatic Control, vol. 59, no. 10, pp. 2724-2738, 2014. kar2011kalman Soummya Kar, Bruno Sinopoli, and José M.F. Moura, “Kalman filtering with intermittent observations: Weak convergence to a stationary distribution,” IEEE Transactions on Automatic Control, vol. 57, no. 2, pp. 405-420, 2011. kar2012moderate Soummya Kar, and José M.F.
Moura, “Moderate deviations of a random Riccati equation,” IEEE Transactions on Automatic Control, vol. 57, no. 9, pp. 2250-2265, 2012. castro2024regulation Rafael S. Castro, Aurélio T. Salton, Minyue Fu, and Alberto Isidori, “Regulation of Nonlinear Systems Subject to Uniform Output Quantization,” IEEE Transactions on Automatic Control, 2024. liu2024optimal Fengjiao Liu, and Panagiotis Tsiotras, “Optimal covariance steering for continuous-time linear stochastic systems with multiplicative noise,” IEEE Transactions on Automatic Control, 2024. sinopoli2005lqg Bruno Sinopoli, Luca Schenato, Massimo Franceschetti, Kameshwar Poolla, and Shankar Sastry, “LQG control with missing observation and control packets,” IFAC Proceedings Volumes, vol. 38, no. 1, pp. 1-6, 2005. li2022lqg Zhouchi Li, Luyao Niu, and Andrew Clark, “LQG Reference Tracking With Safety and Reachability Guarantees Under Unknown False Data Injection Attacks,” IEEE Transactions on Automatic Control, vol. 68, no. 2, pp. 1245-1252, 2022.
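To make the scalar Markov-intermittence result above easier to sanity-check numerically, the following Python sketch evaluates the stability bound on p/q from the proof of the theorem (infinite-capacity case) and iterates the expected-variance recursion until it settles at the closed-form asymptotic variance. All gains, noise variances, and switch probabilities below are illustrative choices, not values taken from the paper.

```python
import numpy as np

# Illustrative parameters (not from the paper): unstable open-loop gain alpha,
# stabilizing closed-loop gain (alpha - b*l), process noise variance, and the
# Markov switch transition probabilities p = P(ON -> OFF), q = P(OFF -> ON).
alpha, a_cl = 1.5, 0.2
sigma_w2 = 1.0
p, q = 0.05, 0.70

# Stability bound on p/q for the infinite-capacity case:
# p/q < (1 - (alpha - b*l)^2) / (alpha^2 - 1)
bound = (1.0 - a_cl**2) / (alpha**2 - 1.0)
print(f"p/q = {p/q:.4f}, bound = {bound:.4f}, stabilizable: {p/q < bound}")

# Closed-form asymptotic variance from the proof: sigma_w^2 / (1 - omega_inf^2)
pi_inf = p / (p + q)
omega_inf2 = alpha**2 * pi_inf + a_cl**2 * (1.0 - pi_inf)
sigma_inf2 = sigma_w2 / (1.0 - omega_inf2)

# Iterate sigma_{k+1}^2 = omega_k^2 * sigma_k^2 + sigma_w^2 with the switch
# distribution zeta_k propagated through the TPM (state order: gamma=0 closed, gamma=1 open).
T = np.array([[1.0 - p, p],
              [q, 1.0 - q]])
zeta = np.array([1.0, 0.0])   # assume the loop starts closed
sigma2 = 1.0                  # arbitrary initial variance sigma_0^2
for _ in range(500):
    pi_k = zeta[1]
    omega_k2 = alpha**2 * pi_k + a_cl**2 * (1.0 - pi_k)
    sigma2 = omega_k2 * sigma2 + sigma_w2
    zeta = zeta @ T

print(f"iterated variance after 500 steps: {sigma2:.4f}")
print(f"closed-form asymptotic variance:   {sigma_inf2:.4f}")
```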
http://arxiv.org/abs/2409.03535v1
20240905134854
Interactive Surgical Liver Phantom for Cholecystectomy Training
[ "Alexander Schuessler", "Rayan Younis", "Jamie Paik", "Martin Wagner", "Franziska Mathis-Ullrich", "Christian Kunz" ]
cs.RO
[ "cs.RO", "cs.CY" ]
Interactive Surgical Liver Phantom for Cholecystectomy Training (as presented at CURAC 2023) Alexander Schüßler^1, Rayan Younis^2, Jamie Paik^1, Martin Wagner^3, Franziska Mathis-Ullrich^4, and Christian Kunz^4 ^1 Reconfigurable Robotics Laboratory, École Polytechnique Fédérale de Lausanne, 1015 Lausanne, Switzerland, E-Mail: alexander.schuessler@epfl.ch ^2 Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, 69120 Heidelberg, Germany ^3 Department of Visceral-, Thoracic and Vascular Surgery, Faculty of Medicine, University Hospital Carl Gustav Carus & Center for the Tactile Internet with Human in the Loop (CeTI), Technische Universität Dresden, 01307 Dresden, Germany, E-Mail: martin.wagner@ukdd.de ^4 Friedrich-Alexander-Universität Erlangen-Nürnberg, Department of Artificial Intelligence in Biomedical Engineering (AIBE), 91052 Erlangen, Germany, E-Mail: franziska.mathis-ullrich@fau.de, christian.kunz@fau.de Training and prototype development in robot-assisted surgery requires appropriate and safe environments for the execution of surgical procedures. Current dry lab laparoscopy phantoms often lack the ability to mimic complex, interactive surgical tasks. This work presents an interactive surgical phantom for the cholecystectomy. The phantom enables the removal of the gallbladder during cholecystectomy by allowing manipulations and cutting interactions with the synthetic tissue. The force-displacement behavior of the gallbladder is modelled based on retraction demonstrations. The force model is compared to the force model of ex-vivo porcine gallbladders and evaluated on its ability to estimate retraction forces. § INTRODUCTION With the rise of robot-assisted surgery (RAS), there is an increasing need for safe environments for the replication of surgical procedures. While telemanipulation-based RAS requires extensive training for surgeons to learn complex surgical skills without tactile feedback, machine learning-based algorithms for autonomous task execution on RAS systems require extensive data collection to learn specific surgical tasks <cit.>. Training of surgical procedures is usually performed in virtual reality simulations, dry lab training environments, or wet lab training environments with real tissue <cit.>. A recent study by Wang et al. <cit.> shows that dry lab training on the actual robotic system results in significantly higher muscle activation and path length compared to training in virtual reality. The study indicates that virtual simulations are cost-effective, scalable and safe, but the absence of the physical system might over-simplify the training tasks, which could lead to ineffective training of the surgeon. However, dry lab training platforms are currently limited to the training of basic surgical skills such as cutting, suturing, and grasping. More complex surgical tasks that require tissue handling and understanding of the tissue-instrument interaction are currently performed in wet lab environments <cit.>. The interactive surgical phantom presented in this work aims to increase the capabilities of dry lab training phantoms in order to close the gap between dry and wet lab training environments. Several laparoscopic training platforms exist. The Heidelberg laparoscopy phantom (OpenHELP) mimics the abdomen of a male patient with realistic anatomy.
While the phantom offers the possibility to grasp organs with laparoscopic instruments, it does not enable a realistic retraction and dissection task for the cholecystectomy <cit.>. Wang et al. <cit.> propose a laparoscopic training platform based on ex-vivo tissue that provides a realistic experience of the laparoscopic cholecystectomy. Dayan et al. <cit.> and Ulrich et al. <cit.> developed low-cost laparoscopic training platforms to practice basic laparoscopic skills, such as rope passing, peg transfer, and knot tying. The Lübeck Toolbox trainer is a commercially available laparoscopic video trainer for learning basic skills in minimally invasive surgery, such as peg transfer, cutting and suturing <cit.>. However, the integration of interactive features in surgical phantoms for the simulation of complex surgical tasks remains a challenge in dry lab environments. Interaction force modeling provides surgeons with valuable information about the behavior of synthetic tissue compared to the behavior of real tissue. However, the modeling and evaluation of accurate force interactions is often difficult to realize. In this work, we present a liver phantom with attached gallbladder for the realistic execution of the gallbladder dissection step in cholecystectomy (as depicted in Fig. <ref>). The main contributions of our work can be summarized as: * A novel phantom design with interactive features to reproduce the retraction and dissection of the gallbladder during the cholecystectomy. * A non-linear force model for the gallbladder retraction of the phantom and its comparison to the gallbladder retraction on an ex-vivo porcine liver. * An experimental validation of the phantom and the presented force model. § MATERIALS & METHODS The following section gives an overview of the design and manufacturing process used for the presented phantom. A force model for the manipulation of the gallbladder and its experimental validation are presented. §.§ Phantom Design The goal of the interactive phantom is the reproduction of the retraction and dissection process during the cholecystectomy. The interactive liver phantom presented in Fig. <ref> is designed as an artificial replica of an ex-vivo porcine liver with attached gallbladder (Fig. <ref>). The phantom consists of three main parts: a silicone liver, silicone connective tissue, and a gallbladder made from latex. The latex gallbladder is connected to the silicone liver through the silicone connective tissue layer, which replicates the attachment of the gallbladder to the liver bed. Similar to the real gallbladder, the latex gallbladder can be filled with liquid and closed, e.g., with staples or glue. The dissection of the gallbladder from the liver bed during the cholecystectomy is emulated by severing the silicone connective tissue. For safe dissection of the gallbladder from the liver, tension at the dissection line of the gallbladder is necessary. The tension is realized through a retraction movement, which can be achieved by grasping and manipulating the latex gallbladder of the liver phantom. §.§ Manufacturing Process The interactive liver phantom is manufactured in a multi-step process (Fig. <ref>). The manufacturing process is based on three molds: the negative mold of the liver, the negative mold of the connective tissue, and the positive mold of the gallbladder. The molds of the liver and connective tissue are 3D printed from PLA material using Fused Deposition Modelling (FDM).
The positive gallbladder mold is manufactured using Stereolithography (SLA) with clear resin material (Formlabs GmbH, Germany). In the subsequent step, the molds are used to cast the silicone liver and silicone connective tissue with Ecoflex 00-30 silicone (Smooth-On Inc., USA). The silicone used for the liver is dyed with red color pigments (Smooth-On Inc., USA). Multiple layers of latex are coated and dried onto the positive gallbladder form to produce the latex gallbladder. The latex gallbladder is glued onto the silicone connective tissue by using a combination of primer (HG pro-innovations GmbH, Salzburg, Austria) and cyanoacrylate adhesive. The combination of gallbladder and connective tissue is subsequently glued onto the liver by using the same primer for pretreatment and cyanoacrylate adhesive. While the silicone liver can be reused for multiple surgery simulations, the silicone connective tissue and the latex gallbladder are single-use parts. §.§ Force Model & Experimental Validation The gallbladder retraction behavior of the phantom is investigated using telemanipulated robot-assisted retraction demonstrations. Experiments on one ex-vivo porcine liver are used for comparison of the results. The retraction demonstrations are conducted by controlling a surgical gripper with a Franka Panda robot (Franka Emika GmbH, Germany). The robot is equipped with a force-torque sensor (Koris Force & Safety Components GmbH, Germany) at its wrist to collect force data during the retraction process. The gripper position is determined using the forward kinematics of the robot. Overall, ten retraction demonstrations on the phantom and the ex-vivo porcine liver were conducted and recorded. All demonstrations are performed with half of the gallbladder being attached to the liver and half of the gallbladder being removed from the liver. The initial position of the gripper is located above the dissection line. The gallbladder neck is grasped and retracted behind the dissection line (positive x-direction in the right image of Fig. <ref>) until the necessary tension for removal at the dissection line is reached. Similar to the force modeling for needle insertion of Okamura et al. <cit.>, the retraction forces of the phantom and the ex-vivo porcine liver are modeled with a non-linear model. In this case, a second-order polynomial (with c=0) is fitted to the average of six retraction demonstrations: F(x)=a*x^2 + b*x + c. § RESULTS The displacement and force after reaching the desired tension at the dissection line are evaluated based on ten retraction demonstrations on the phantom and the ex-vivo porcine liver. The retraction demonstrations are split into six demonstrations to model the retraction force behavior and four demonstrations to evaluate the model. The force models of the phantom and of the ex-vivo porcine liver are normalized with the average retraction force (F_t_max) after reaching the desired tension at the dissection line: F_norm(x)= F(x)/F_t_max. The average retraction force after reaching the desired tension at the dissection line is F_t_max,p=9.97N for the phantom and F_t_max,ex=2.06N for the ex-vivo porcine liver. The displacements after reaching the desired tension are x_t_max,p=56mm for the phantom scenario and x_t_max,ex=64mm for the ex-vivo scenario. The normalized force-displacement diagram with data points of the demonstrations and force models for the phantom and the ex-vivo scenario is presented in Fig. <ref>.
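For readers who want to reproduce the fitting step, the sketch below fits the second-order force model F(x) = a*x^2 + b*x (c = 0) to displacement-force samples with ordinary least squares and then normalizes it by the force at the desired tension, as described above. The numbers are synthetic placeholders, since the actual demonstration data are not part of this text; only the fitting and normalization procedure is illustrated.

```python
import numpy as np

# Synthetic displacement-force samples standing in for the averaged retraction
# demonstrations (placeholder values, not the measured data from the paper).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 0.056, 30)                               # displacement in metres (up to ~56 mm)
f = 2800.0 * x**2 + 20.0 * x + rng.normal(0.0, 0.1, x.size)   # "measured" force in newtons

# Fit F(x) = a*x^2 + b*x (c = 0) by linear least squares on the basis [x^2, x].
basis = np.column_stack([x**2, x])
(a, b), *_ = np.linalg.lstsq(basis, f, rcond=None)

def force_model(disp):
    """Second-order retraction force model with zero intercept."""
    return a * disp**2 + b * disp

# Normalize by the retraction force at the desired tension, F_norm(x) = F(x) / F_t_max.
f_t_max = force_model(0.056)
f_norm = force_model(x) / f_t_max

# Use the model to estimate the force at a held-out displacement.
x_test = 0.050
print(f"fitted a = {a:.1f}, b = {b:.2f}")
print(f"estimated force at 50 mm: {force_model(x_test):.3f} N "
      f"(normalized: {force_model(x_test) / f_t_max:.3f})")
```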
The presented gallbladder retraction force model is validated on four retraction demonstrations for the phantom and the ex-vivo scenario. The force model is used to estimate the retraction force after reaching the desired tension on the dissection line based on the displacement information. The estimated force is compared to the actual retraction force of each demonstration. The average force estimation error is 1.64 ± 0.93 N for the phantom and 0.16 ± 0.10 N for the ex-vivo scenario. § DISCUSSION The force model of the presented phantom gives surgeons information about the phantom's behavior during the gallbladder retraction. It can also be used for the creation of a virtual simulation of the presented phantom, for usage in sim-to-real reinforcement learning methods that transfer a learned control policy from a virtual simulation to a real scenario as presented in Scheikl et al. <cit.>. However, there are several limitations to the presented force model. The normalized force-displacement curves show a very similar gallbladder retraction behavior for the phantom and the ex-vivo scenario, but the average retraction force after reaching the desired tension at the dissection line used for the normalization varies between the phantom and the ex-vivo scenario (phantom: 9.97N, ex-vivo 2.06N). The higher forces during the retraction of the latex gallbladder indicate stiffer material properties of the latex gallbladder compared to ex-vivo gallbladders. In order to further close the gap between dry lab and wet lab surgical training, future work will investigate alternative materials to latex (e.g., polyisoprene and polyurethane) for achieving retraction forces closer to those of ex-vivo gallbladders. The limited retraction demonstrations for deriving the force model have a similar initial distance between the dissection line and the grasping point at the gallbladder neck, as well as the same direction of retraction. Retraction movements to the side are not considered in the current model, but could be taken into account with a 3-DoF force model. The force model does not consider different volumes of liquid inside the gallbladder, which may influence the retraction behavior. § CONCLUSION This work presents a novel surgical phantom for the cholecystectomy. The significance of this work lies in the interactive features of the phantom, which enable the reproduction of the gallbladder retraction and dissection process during the surgical procedure. The dry lab phantom enables more realistic training compared to current dry lab phantoms and presents an alternative to wet lab training based on ex-vivo tissue. The retraction behavior was characterized by a force model and compared to the behavior of an ex-vivo gallbladder. §.§ Author Statement Research funding: This work was supported by the German Federal Ministry of Education and Research under the grant 13GW0471C and by the German Research Foundation (DFG, Deutsche Forschungsgemeinschaft) as part of Germany’s Excellence Strategy – EXC 2050/1 – Project ID 390696704 – Cluster of Excellence “Centre for Tactile Internet with Human-in-the-Loop” (CeTI) of Technische Universität Dresden. Conflict of interest: Authors state no conflict of interest. 1 Sridhar A, Briggs T, Kelly J, Nathan S. Training in Robotic Surgery-an Overview. Curr Urol Rep. 2017;18 2 Haidegger T, Speidel S, Stoyanov D, Satava R. Robot-Assisted Minimally Invasive Surgery - Surgical Robotics in the Data Age. Proceedings of the IEEE.
2022;110:835-846 3 Wang Z, Kasman M, Martinez M, Rege R, Zeh H, Scott D, et al. A comparative human-centric analysis of virtual reality and dry lab training tasks on the da Vinci surgical platform. Journal of Medical Robotics Research. 2019;4 4 Kenngott H G, Wünscher J J, Wagner M, Preukschas A, Wekerle A L, Neher P, et al. OpenHELP (Heidelberg laparoscopy phantom): development of an open-source surgical evaluation and training tool. Surgical Endoscopy. 2015;29:3338-3347 5 Wang X, Zhang K, Hu W, Kuang M, Teo S, Guo Z, et al. A new platform for laparoscopic training: initial evaluation of the ex-vivo live multivisceral training device. Surgical Endoscopy. 2021;35:374-382 6 Dayan A, Ziv A, Berkenstadt H, Munz Y. A simple, low-cost platform for basic laparoscopic skills training. Surgical Innovation. 2008;15:136-142 7 Ulrich A, Cho M Y, Lam C, Lerner V T. A Low-Cost Platform for Laparoscopic Simulation Training. Obstetrics and Gynecology. 2020;136:77-82 8 Laubert T, Esnaashari H, Auerswald P, Höfer A, Thomaschewski M, Bruch H-P, et al. Conception of the Lübeck Toolbox curriculum for basic minimally invasive surgery skills. Langenbeck's Archives of Surgery. 2018;403:271-278 9 Okamura A M, Simone C, O'Leary M D. Force modeling for needle insertion into soft tissue. IEEE Transactions on Biomedical Engineering. 2004;51:1707-1716 10 Scheikl P M, Tagliabue E, Gyenes B, Wagner M, Dall D, Fiorini P, et al. Sim-to-Real Transfer for Visual Reinforcement Learning of Deformable Object Manipulation for Robot-Assisted Surgery. IEEE Robotics and Automation Letters. 2023;8
http://arxiv.org/abs/2409.02877v1
20240904170102
Configurable Foundation Models: Building LLMs from a Modular Perspective
[ "Chaojun Xiao", "Zhengyan Zhang", "Chenyang Song", "Dazhi Jiang", "Feng Yao", "Xu Han", "Xiaozhi Wang", "Shuo Wang", "Yufei Huang", "Guanyu Lin", "Yingfa Chen", "Weilin Zhao", "Yuge Tu", "Zexuan Zhong", "Ao Zhang", "Chenglei Si", "Khai Hao Moo", "Chenyang Zhao", "Huimin Chen", "Yankai Lin", "Zhiyuan Liu", "Jingbo Shang", "Maosong Sun" ]
cs.AI
[ "cs.AI", "cs.CL", "cs.LG" ]
§ ABSTRACT Advancements in large language models (LLMs) have recently unveiled challenges tied to computational efficiency and continual scalability due to their huge parameter requirements, making the application and evolution of these models on devices with limited computation resources and in scenarios requiring various abilities increasingly cumbersome. Inspired by modularity within the human brain, there is a growing tendency to decompose LLMs into numerous functional modules, allowing for inference with only part of the modules and dynamic assembly of modules to tackle complex tasks, such as mixture-of-experts. To highlight the inherent efficiency and composability of the modular approach, we coin the term brick to represent each functional module, designating the modularized structure as configurable foundation models. In this paper, we offer a comprehensive overview and investigation of the construction, utilization, and limitation of configurable foundation models. We first formalize modules into emergent bricks - functional neuron partitions that emerge during the pre-training phase, and customized bricks - bricks constructed via additional post-training to improve the capabilities and knowledge of LLMs. Based on diverse functional bricks, we further present four brick-oriented operations: retrieval and routing, merging, updating, and growing. These operations allow for dynamic configuration of LLMs based on the instruction to handle complex tasks. To verify our perspective, we conduct an empirical analysis on widely-used LLMs, Llama-3-8B-Instruct and Mistral-7B-Instruct-v0.3. We find that the FFN layers follow modular patterns with functional specialization of neurons and functional neuron partitions. Finally, as the domain of configurable LLMs remains nascent and evolving, we highlight several open issues and directions for future research, including the correlation between emergent and customized bricks, general brick development protocols, evaluation of configurable LLMs, efficient brick computing frameworks, and systems consisting of multiple model-level bricks. Overall, this paper aims to offer a fresh modular perspective on existing LLM research and inspire the future creation of more efficient and scalable foundation models. “Rome was not built in a day, but they were laying bricks every hour.”— John Heywood § INTRODUCTION Large pre-trained models, especially large pre-trained language models (LLMs), have achieved remarkable success in a variety of tasks <cit.>. LLMs have become the foundation models of artificial intelligence applications by providing vast amounts of world knowledge <cit.> and powerful reasoning capabilities <cit.>. Current advanced LLMs, such as GPT-4 <cit.>, are deployed on large-scale central servers with high-bandwidth memory and GPUs to address various user instructions. With the development of LLMs, the future applications of LLMs will inevitably face the following trends, which in turn present challenges for LLMs: (1) Deployment on end devices. With the capabilities of LLMs continuing to improve, the trend of deploying these models on devices with limited computing power, such as smartphones and personal computers, is attracting increasing attention, allowing LLMs to serve as personal assistants for millions of users <cit.>.
The use of monolithic LLMs that require substantial computational resources is gradually becoming infeasible, and improving the computational efficiency of LLMs is a significant challenge. (2) Widespread application across multiple domains. LLMs are widely applied in various fields and applications to enhance people's work efficiency <cit.>. However, the knowledge and capabilities required by different domains, users, and even different instructions vary greatly. Storing all world knowledge in monolithic LLMs and serving all scenarios with full parameters often leads to redundant computations, and conflicts between different domain knowledge may even result in sub-optimal performance. (3) Rapid evolution in new scenarios. As application scenarios increase and time progresses, we usually need LLMs to efficiently adapt to new tasks and learn from environment feedbacks <cit.>. Meanwhile, the world knowledge stored in LLMs is constantly updating and expanding. This demands that LLMs are able to evolve efficiently and continuously, learning new knowledge and skills while avoiding forgetting existing knowledge. To address these issues, studying and analyzing LLMs from a modular perspective has gradually become an important focus for current researchers <cit.>. These works decompose LLMs into functional modules. In this way, for each computation step, we can only involve part of modules to save computation costs and achieve efficient continual training by constructing or updating modules. Modularity has long been an endogenous mechanism or a central principle in diverse fields, ranging from biomedical sciences to engineering fields <cit.>. A module is conceived as an independent component with specialized functionality that can coordinate and interface with other modules, thereby giving rise to complex systemic behavior. Owing to modularity, many complex systems become more understandable and scalable. For instance, in cognitive neuroscience, the modularity of mind hypothesis posits that the human brain comprises functionally specialized modules, such as the visual cortex for processing visual inputs <cit.> and Broca's area for speech production <cit.>. These modules operate on distinct types of information while collaborating to generate integrated cognition and behavior. In software engineering, decomposing programs into logical modules can significantly enhance development efficiency, reduce project complexity, and encourage code reuse <cit.>. In industrial manufacturing, items like electrical appliances and automobiles are also produced by assembling modular components <cit.>. The core characteristics of modularity are independence, specificity, and composability. Modules are decoupled from each other and can thus be specified to focus on certain functionality. By constructing and composing modules, complex systems can be built and maintained more easily. Owing to modularity, it becomes possible to simplify the systems and enhance their efficiency. It also allows for a streamlined process in developing and updating systems, ensuring they remain adaptable and scalable over time. Inspired by existing observations in other disciplines, modularity is becoming a promising conceptual perspective for designing and analyzing the next generation of foundation models <cit.>. Some preliminary efforts indicate that LLMs have the potential to be decomposed into various specialized functional modules. 
For example, we can find that LLMs adopt single neurons as memory modules to store a specific structured knowledge triplet like (China, capital, Beijing) (i.e., Beijing is the capital of China) <cit.>. Recent efforts in parameter-efficient tuning have demonstrated that constructing modules with several dozen neurons can equip LLMs with specific task abilities <cit.>. To augment LLMs with multi-modal processing capabilities, some researchers attempt to adopt visual models as the modules of LLMs to analyze visual semantics <cit.>. Therefore, building LLMs with modules from the neuron level to the model level can significantly enhance the abilities of LLMs without costly re-training from scratch. Generally, a module comprises a collection of function-specific neurons, the size of which can vary according to functional requirements. In some cases, we can even regard entire pre-trained models as a functional module, beyond the definition of a module in the general sense. To avoid ambiguity and more intuitively show the benefits of modularity for LLMs, we term the functional components used to build LLMs as bricks instead of modules. Training and deploying LLMs from the perspective of brick combination enable configurable model usage. As shown in Figure <ref>, for a given instruction, unlike computing the monolithic LLMs, we can select a subset of bricks with specific functionalities for computation according to the instruction functionality requirements <cit.>, which significantly benefits the computational efficiency. Besides, the adaptation and augmentation of LLMs can be further formalized as the problem of constructing new bricks or updating existing bricks <cit.>, which are more cost-efficient and scalable than continual training of full models. Owing to the high scalability brought by brick combination, we term the LLMs built on bricks as configurable foundation models. These beneficial endeavors can effectively propel the utilization of LLMs in daily applications, thereby offering fresh insights and approaches for the development of next-generation architectures for foundation models. Therefore, to facilitate the progress of configurable foundation models, this paper places its emphasis on a comprehensive analysis of existing efforts, future directions, and potential challenges. To take advantage of configurable foundation models, this paper focuses on addressing two problems: Problem 1: how can we formalize and construct bricks for configurable foundation models? (<ref>) From the micro parameter level rather than the macro model level, both pre-training and post-training essentially involve constructing bricks for LLMs. In recent years, the prevailing paradigm for building LLMs involves two steps: pre-training and post-training. Here, post-training includes fine-tuning and preference learning. During the pre-training phase, LLMs acquire versatile knowledge and learn general language reasoning through self-supervised learning on massive unsupervised data. During the subsequent post-training phase, LLMs are adapted to obtain additional capabilities, including downstream task abilities and domain knowledge <cit.>. Despite the pre-training and post-training processes being conducted on monolithic LLMs, recent works indicate that the impact of these processes on the internal parameters of LLMs tends to be modular <cit.>. Pre-training is to construct emergent bricks, the functional specialization of which emerges during pre-training. (<ref>) It includes both dense and sparse pre-training processes.
(1) Prior analyses on the typical dense pre-training of LLMs reveal that model parameters undergo differentiation throughout the process <cit.>. This dynamic and implicit process leads to the emergence of functional partitions, giving rise to an implicit brick structure. Notably, <cit.> identify distinct groups of parameters within LLMs dedicated to semantic, knowledge, and task functions. <cit.> uncover the presence of language-neutral sub-networks for multilingual foundation models. Moreover, using the pruning technique, previous works have explored to discover task-specific subnetworks from LLMs <cit.>. (2) Some recent efforts have attempted to build LLMs with the sparse mixture-of-expert (MoE) structure <cit.>. In the MoE structure, an LLM is composed of multiple experts, each of which has the same architecture as the feed-forward network or the attention network in the original model. Only a subset of experts is activated at a time by a gating network to process input data. While the primary intent of MoE methods is to enhance model capacity without escalating computational cost, these methods invisibly formalize LLMs into a pre-defined structure akin to combining bricks, where each expert functions as a distinct and specialized brick. Further studies show that the MoE structure can yield comparable results to conventional dense structures <cit.> and even prove more advantages for understanding task-specific instructions due to the functional specialization <cit.>. Based on such evidence, whether we intentionally or unintentionally design the structure of LLMs, bricks are spontaneously formed to target specific functions during the pre-training process. Post-training essentially is to construct additional customized bricks for the whole model, the functional specialization of which is manually defined to meet specific human-defined requirements. (<ref>) To enhance models with additional abilities, such as domain knowledge and task-specific capabilities, traditional methods involve full-parameter fine-tuning. Recent research shows that parameter changes are intrinsically low-rank <cit.>, which implies that only a small proportion of the parameters necessitate tuning for further capabilities adaptation. Inspired by these findings, parameter-efficient fine-tuning (PEFT) freezes LLMs and introduces extra parameters to achieve efficient task adaptation <cit.>. Beyond PEFT, many studies find that the additional parameters can not only endow LLMs with task-specific capabilities, but also supplement them with much extra knowledge and functionalities , such as knowledge bricks for world knowledge injection <cit.>, modality bricks for multi-modal composition <cit.>, memory bricks for long-text processing <cit.>, and compression bricks for inference acceleration <cit.>. Therefore, the essence of fine-tuning LLMs is to customize bricks, which can fully supplement and stimulate knowledge and capabilities for LLMs to meet specific requirements. Furthermore, each LLM itself can also become a customized brick in a multi-model system. For example, in a multi-agent system, each model is responsible for a specific sub-task <cit.>; in a combination of multi-modal models, each model is tasked with processing data from a specific modality <cit.>. After formalizing pre-training and fine-tuning into constructing bricks, we delve into a discussion on selecting brick granularity for building configurable foundation models. 
This entails a thoughtful evaluation of both brick capabilities and brick management, highlighting the relationship between the granularity and the capability complexity and the inclusion among different bricks (<ref>). Subsequently, we further summarize five major advantages of configurable foundation models, including efficiency, reusability, traceability, sustainability, and distributed computation (<ref>). Problem 2: how can we leverage existing bricks to build configurable foundation models for the ever-increasing complex requirements of real-world tasks? (<ref>) Bricks consist of a collection of parameters with specialized functionalities. In real-world tasks, singular knowledge and capabilities frequently fall short of meeting task requirements, implying that we need to combine multiple bricks to understand instructions and accomplish specific tasks effectively. In this paper, we summarize four primitive operations on bricks and argue that through the composition of these primitive operations, configurable foundation models can be built based on bricks to fulfill complex task instructions efficiently. The operations are summarized as follows: * Routing and Retrieving. The routing and retrieving operation involves the dynamic selection of specific bricks from a brick repository based on instructions. This operation acts as a dynamic brick gatekeeper and allows the configurable foundation model to adapt its brick composition to the instruction at hand. (<ref>) * Combining. The combining operation involves the synergistic integration of multiple bricks, facilitating collaborative and comprehensive processing. This can be achieved by directly merging isomorphic bricks to enhance abilities, or by simultaneously inserting multiple bricks to create composite skills. This operation empowers the model to harness the collective capabilities of various bricks, facilitating the creation of informative responses. (<ref>) * Updating. The updating operation involves the refinement and adaptation of bricks over time, based on new knowledge and feedback. This operation enables bricks to be fine-tuned or adjusted to improve their performance continually. It also empowers foundation models to adapt and remain pertinent in dynamic real-world scenarios. (<ref>) * Growing. The growing operation pertains to the expansion of the brick repository itself. New bricks with specialized functionalities can be added to the repository to address emerging requirements. By incorporating new modules, configurable models can keep up with the increasing complexity of real-world applications and offer effective solutions to a broader range of challenges. (<ref>) Besides the above two important problems, in this paper, we conduct empirical analysis on widely-used LLMs to investigate whether these existing well-trained LLMs exhibit functional partitioning similar to the human brain. The experimental results from two models, Llama-3-8B-Instruct and Mistral-7B-Instruct-v0.3, demonstrate that: (1) Neuron activation is sparse, meaning that processing each instruction requires only a small subset of neurons. (2) Neurons are specialized for specific functionalities, with the removal of these neurons having minimal impact on other capabilities. (3) There is evidence of neuronal partitioning, indicating that different capabilities require distinct sets of neurons. 
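To make the sparsity finding above concrete, here is a minimal sketch of how one might probe it: it registers forward pre-hooks on the down-projection of each FFN block and reports the fraction of near-zero neuron activations per layer for a single prompt. It assumes a Hugging Face checkpoint with the Llama-style gate/up/down MLP layout and enough memory to load it; the model id and the 1e-2 "near-zero" threshold are illustrative assumptions rather than the exact protocol behind the analysis in this paper.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed model id (any Llama-style checkpoint with gate/up/down MLP projections
# should expose the same module names).
model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)
model.eval()

stats = []  # per-layer fraction of near-zero neuron activations

def make_hook(layer_idx):
    def pre_hook(module, inputs):
        act = inputs[0]  # input to down_proj = post-activation FFN output
        frac = (act.abs() < 1e-2).float().mean().item()
        stats.append((layer_idx, frac))
    return pre_hook

handles = [layer.mlp.down_proj.register_forward_pre_hook(make_hook(i))
           for i, layer in enumerate(model.model.layers)]

prompt = "Explain why the sky is blue in one sentence."
with torch.no_grad():
    model(**tok(prompt, return_tensors="pt"))

for h in handles:
    h.remove()
for layer_idx, frac in stats:
    print(f"layer {layer_idx:2d}: {frac:.1%} of FFN neuron activations are near zero")
```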
In the end, we discuss the future research directions for the application of configurable foundation models (<ref>), including: * Analyzing the correlation between emergent and customized bricks: Here, we focus on delineating roles between emergent and customized bricks, as well as identifying and handling knowledge conflicts and redundancies arising from their interaction. (<ref>) * Unifying the protocol to construct bricks: We engage in a discourse on a novel paradigm for developing foundation models, wherein the shift moves from training a whole model to training individual bricks. This envisioned paradigm entails a shared core foundation model for open-source communities, enabling individuals to develop their bricks based on a unified protocol and openly share bricks for collective utilization. (<ref>) * Evaluating configurable models: This facet centers on evaluating the foundation models from the perspective of bricks and discussing the evaluation metrics for configurable bricks. (<ref>) * Implementing the framework for efficient computing: Our deliberation encompasses the foundational computational operators of configurable models, characterized by sparsity and computational decoupling. Additionally, we delve into the prospects of distributed center-edge computing frameworks for configurable foundation models. (<ref>) * Combining multiple model-level bricks for composite capability: In the rapidly evolving AI community, a vast array of large pre-trained models has been open-sourced, which can serve as model-level bricks for completing complex instructions. We discuss the potential and challenges for scalable multi-model cooperation systems. (<ref>) In summary, we present the concepts coined in this paper in Table <ref> and present existing representative efforts for configurable foundation models in Table <ref>. We aspire for our paper to serve as an inspiration for future researchers, driving forward the progress of efficient and scalable foundation models. § CONFIGURABLE FOUNDATION MODELS In this section, we elaborate on the general framework for configurable foundation models, consisting of various bricks. These bricks encompass both the emergent bricks from the pre-training process and customized bricks from post-processing to enhance LLMs. Specifically, we first present that the pre-trained LLMs naturally possess the property of modularity and can be split into bricks with pre-defined structures or self-organized functional neuron clusters (<ref>). Then, in the pursuit of advancing LLMs, it is promising to parameterize the external knowledge and capacities into neural bricks, which can be inserted into LLMs in a plug-and-play manner (<ref>). Subsequently, we discuss how to select the granularity of bricks to trade off efficiency and effectiveness (<ref>). Lastly, we present five benefits to constructing LLMs with configurable bricks, including high efficiency, reusability, traceability, sustainability, and distributed computation (<ref>). §.§ Emergent Bricks The emergent property of modularity has been observed in the pre-training process of language models <cit.>, which indicates that a subset of the parameters can function properly as the entire model for specific instructions. Such property makes it possible to break down the gigantic LLMs, including both dense models <cit.> and sparse models <cit.>, into tiny modules. 
With the breakdown, the aforementioned issues of efficiency and scalability can be tackled via module dropping <cit.>, subnetwork extraction <cit.>, and recombination <cit.>. We term these modules directly broken down from the pre-trained models as emergent bricks, which acquire certain capabilities of the entire model from the pre-training process. In this subsection, we summarize the potential inspirations for emergent bricks and introduce two different categories of emergent bricks, including bricks with human-defined and self-organized structures. The discussion may boost our understanding of the working mechanism inside the LLMs and help us better configure the LLMs with various internal modules. §.§.§ Observations on Parameter Differentiation LLMs tend to be over-parameterized when performing some specific tasks <cit.>, which indicates that there exists a sub-module functioning nearly the same as the entire model with the rest parameters being redundant. This over-parameterization phenomenon leads to two general questions: (1) “Which part of the model is actually functioning?” (2) “What kind of ability does it have?”. In this subsection, we discuss existing observations about the functional specialization of internal parameters in LLMs, that is, each parameter is only responsible for a specific function. Activation Sparsity Inspired by the sparsity in human brains <cit.> that only a small portion of the neurons activate at each time, special architectures such as sparsely-activated Mixture-of-Experts (MoE) are introduced into Transformers to enforce activation sparsity and thus improve model efficiency <cit.>. Different from the sparsity of expert activation in MoE, researchers also explore the activation sparsity of model neurons, which is in finer granularity. Specifically, the neuron “activation” refers to the intermediate output of the fully connected layer after the non-linear activation function, and “sparsity” indicates that only a few entries of the activation values are nonzero for each given input. <cit.> inspect the computational pattern of pre-trained Transformers and find that the activation sparsity naturally exists in pre-trained dense Transformers. Specifically, they delve into the feed-forward networks (FFNs), which constitute two-thirds of the Transformer model parameters, and find the emergence of sparse activation (e.g., only around 5% of the neurons are with nonzero activation values for 90% of the input for a fine-tuned T5-Large model <cit.>). <cit.> comprehensively investigate sparse activation in Transformers and conclude that it is a ubiquitous phenomenon that emerges for both natural language and vision models, on both training and evaluation data, on datasets of varying scale, on Transformers of varying configurations, and across all layers of a Transformer. Although the above works focus on the sparsity within ReLU-based models, an increasing number of modern LLMs have been trained with non-ReLU activations, and it is more tricky to explore activation sparsity for them since there are typically many near-zero but nonzero small activation values. However, recent research shows that these non-ReLU models may be converted to ReLU versions without major performance degradation by fine-tuning with ReLU activations <cit.>, making activation sparsity pragmatic. 
Moreover, <cit.> shows that for non-ReLU models, there are still some neurons whose outputs are close to zero and can be discarded without performance degradation, which also indicates the existence of activation sparsity in non-ReLU models. In summary, activation sparsity refers to the phenomenon that only a small portion of weights play a role for each input, including both expert activation sparsity in MoE and neuron activation sparsity in dense models, and it is different from the sparsity in the weight matrix leading to pruning <cit.>. Besides, activation sparsity can greatly accelerate the inference process by involving only parts of the parameters for computation <cit.>. Function Localization In addition to activation sparsity, neurons in the human brain exhibit a modular characteristic: neurons with similar functions tend to cluster together to form specific functional partition <cit.>. Similarly, it is widely reported that substantial functions are specifically localized in a small number of parameters within pre-trained models, i.e., “neurons” or “circuits”: (1) Knowledge Neuron. <cit.> and <cit.> find that factual knowledge tuplets are stored in neurons of FFNs, and manipulating the activations or weights of these “knowledge neurons” can effectively edit the knowledge-related predictions of LLMs. (2) Skill Neuron. Some researchers dive into finding skill neurons, of which the activations are highly predictive of the task labels. These skill neurons are task-specific and perturbations to skill neurons can drastically impair the performance of corresponding tasks, implying that the task skills are surely localized in these neurons <cit.>. (3) Linguistic Neuron. <cit.> and <cit.> study the linguistic features encoded in neurons and observe that neuron activations have correlations with a wide range of features like n-grams and positions. <cit.> and <cit.> discover language regions of LLMs that are specialized in multilingual text processing. Inspired by these observations, <cit.> expand the neuron analysis to include MoE (Mixture of Experts) experts, which represent clusters of neurons. It demonstrates that these sparsely-activated experts are specialized in different functions including knowledge storing, task skills, and semantic understanding. §.§.§ Human-Defined Emergent Bricks Neural networks are usually constructed by stacking multiple human-designed network modules at different granularity levels. For example, the original Transformer <cit.> model consists of multiple identical blocks, within which there are multi-head attention (MHA) layers and FFNs. Simultaneously, each of the MHA layers is a combination of multiple attention heads and each FFN can be viewed as a collection of single artificial neurons that can be further regrouped based on their activations. Here, the structures and connections of these stacked modules, including neurons, attention heads, and whole layers, are explicitly designed by humans. Thus, we term these bricks as human-defined emergent bricks. After defining the structures of neural models, the next pivotal step to empower each brick with problem-solving capabilities is training, to which there are generally two main approaches: End-to-end Training End-to-end training of the entire network, which is the most common practice in the deep learning community. In this case, the skills or abilities of each module are not explicitly defined. 
Existing intriguing observations find that these human-defined bricks can gradually become functionally specialized during the end-to-end training process of the full model <cit.>. For a specific task or input, only a subset of these emergent bricks are functional or informative with others being redundant, and there can be multiple granularity levels of the potential emergent bricks. Here we give three examples from top to bottom. <cit.> show that it is possible to reduce the model computational costs by selecting only some of the layers from a pre-trained model. <cit.> find that simply using one attention head can achieve comparable performance to the full model on certain tasks. <cit.> identify the important neurons in the FFNs for a specific task and then reorganize the sub-network based on the importance of different neurons to achieve better efficiency. The intrinsic modular characteristic of these human-defined bricks ensures the effectiveness of the parameter pruning methods. Modular Training Modular training conducts training for different modules separately, with their functionalities predefined in advance. <cit.> first propose the neural module networks for visual question answering, where multiple question-specific modular networks are dynamically initiated according to different reusable functional components (e.g., for recognizing dogs, and classifying colors). In this case, each of the reusable components represents a human-defined emergent brick that possesses predefined functionality. Such training paradigm also appeals to Transformer-based models, characterized by the MoE models <cit.>, where each expert brick is responsible for one domain or one task. Based on these human-defined emergent bricks with predefined functionalities, it is cost-efficient to construct a new task-specific model by intentionally combining part of the bricks within a general-purpose pre-trained model. Generally, human-defined emergent bricks are commonly observed and hierarchically structured, which naturally expedites the research on various brick configurations. However, there still exist inescapable obstacles to configuring foundation models with human-defined bricks: (1) The functionalities of human-defined bricks acquired by end-to-end training are hard to interpret and localize, preventing them from being clearly and effectively utilized; (2) Modular training of human-defined bricks requires delicate design of each single brick, such as the structure and scale of the modules, to ensure that each brick can effectively attain the functionality predefined by humans in advance. §.§.§ Self-Organized Emergent Bricks Language models acquire various capabilities from the pre-training process, which are further stored as parametric knowledge within the model parameters. Observations from the sparse activation and function localization phenomenon imply that the internal parametric knowledge or capabilities are not universally distributed among the entire model, but instead are stored in a centralized manner. However, such centralized distribution of the parametric knowledge does not strictly follow the human-defined module structure, as most modern LLMs adopt the end-to-end training paradigm which intrinsically does not constrain the learning objective of each module explicitly. In this case, there must exist an implicit structure of parametric knowledge distribution that differs from the human-defined module structure and the universal distribution over the entire model. 
Such an implicit structure is naturally formed during the training process, where different parts of the model interact and collaborate without explicit instructions by humans. Therefore, though language models are built upon human-defined bricks, dependencies and connections between bricks also emerge during the training process, resulting in self-organized emergent bricks. Distinct from any individual human-defined brick, the concept of self-organized bricks emerges from the interaction between multiple human-defined bricks. There have already been preliminary explorations of self-organized bricks in the literature shedding light on their characteristics and implications. For example, <cit.> find that some small proportion of the neurons within the Transformer feed-forward networks tend to activate together while the rest of the neurons are inactivated, indicating that such subset of neurons are self-organized to function properly after pre-training and thus give rise to a new form of emergent bricks that is different from those pre-defined by humans. Following this finding, <cit.> enforce the activation sparsity and only adopt the self-organized bricks for improvement in inference efficiency. Furthermore, <cit.> demonstrate that these self-organized groups of neurons possess high productivity for specific functions and any perturbations to them can lead to drastic degradation of the performance in the corresponding function, implying that such self-organized neuron clusters can serve as emergent bricks and are functionally specialized. <cit.> dive deeper into the concept of self-organized bricks by exploring both Transformer feed-forward networks and muti-head attention layers. Specifically, based on the discovery that the residual connections <cit.> in LLMs make token embeddings barely change across different layers, they envision input-dependent subsets of neurons in feed-forward networks and attention heads to yield performance comparable to employing the entire model. Moreover, <cit.> introduce ReLU non-linear activation functions into the layer normalization of Transformers, which further enhances the collaboration among human-defined bricks and leads to more compact self-organized bricks. These works suggest that the functionality of the monolithic LLMs relies on the interaction and collaboration between multiple human-defined bricks at different granularities, and these newly clustered parameters form the self-organized bricks that are specialized in certain functions. Introducing self-organized bricks is beneficial to improving both the efficiency and interpretability of language models. First, it is possible to decompose a task-specific sub-model with minimal cost from an existing LLM based on its self-organized bricks, which are more compact compared with its human-defined bricks. For example, we can replace the conventional layer-wise pruning with a more concise selection of functional neuron subsets in the feed-forward networks to improve efficiency. Second, we can align the functionality to the self-organized bricks more flexibly than the human-defined bricks without the explicit structure constraint. Hence, the inner working mechanism of the model can be better interpreted by analyzing the status of the various self-organized bricks. 
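The decomposition idea sketched above can be illustrated with a toy feed-forward block: for a given input, keep only the neurons that actually fire and slice the corresponding rows and columns out of the two projection matrices. The sizes and the ReLU activation below are illustrative choices, not the configuration of any model analyzed in the cited works; the point is only that inactive neurons contribute nothing, so the sliced sub-brick reproduces the full output for that input.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
d_model, d_ff = 16, 64          # toy sizes standing in for a real FFN block

# A standard two-layer ReLU feed-forward block: the i-th "neuron" is row i of w_in
# together with column i of w_out.
w_in = nn.Linear(d_model, d_ff)
w_out = nn.Linear(d_ff, d_model)
ffn = lambda t: w_out(torch.relu(w_in(t)))

x = torch.randn(1, d_model)                     # a probe input
act = torch.relu(w_in(x))                       # neuron activations for this input
kept = (act[0] > 0).nonzero(as_tuple=True)[0]   # the neurons that actually fire
print(f"{kept.numel()} of {d_ff} neurons active for this input")

# Slice out the active-neuron sub-brick: keep rows of w_in and columns of w_out.
sub_in = nn.Linear(d_model, kept.numel())
sub_out = nn.Linear(kept.numel(), d_model)
with torch.no_grad():
    sub_in.weight.copy_(w_in.weight[kept])
    sub_in.bias.copy_(w_in.bias[kept])
    sub_out.weight.copy_(w_out.weight[:, kept])
    sub_out.bias.copy_(w_out.bias)

sub_ffn = lambda t: sub_out(torch.relu(sub_in(t)))
print("max |full - sub| on this input:",
      (ffn(x) - sub_ffn(x)).abs().max().item())   # ~0: inactive neurons contribute nothing
```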
Though preliminary progress has been made on self-organized emergent bricks, there are still several unexplored aspects that demand further attention from the community: (1) Inspecting cross-layer organization: Currently investigated self-organized emergent bricks are relatively flat and are mostly constructed in parallel within homogeneous layers, whereas they can also be organized across different layers. For example, the sparse activation of Transformer feed-forward layers and multi-head attention layers, as observed in <cit.>, could be correlated with each other through chained dependencies. (2) Enhancing training strategies: Though the existence of self-organized emergent bricks can be revealed in models trained in an end-to-end manner, models with explicit modular structure still struggle to learn modular data distributions with conventional end-to-end training algorithms <cit.>. Enhanced training strategies should be proposed to explicitly encourage the emergence of self-organized bricks. (3) Guiding network design: Scrutinizing self-organized emergent bricks can provide valuable guidance for designing networks with improved efficiency and interpretability in the future. For example, it is possible to identify the minimum combination of bricks that is necessary for a specific task, which can be further employed as a reference for exploring the scaling law and emergent abilities of existing LLMs. §.§ Customized Bricks A monolithic LLM can be split into several emergent bricks, even when LLM layers are trained end-to-end with fully connected neurons. As LLM parameters continue to scale, the number of emergent bricks acquired during pre-training also increases, which enables satisfactory downstream performance. However, as the world continually changes, the capacities and knowledge that models need to master are constantly being updated and expanded. For example, the world knowledge contained in the widely-used Wikipedia is edited and updated daily <cit.>; new academic papers published on scholarly websites continuously advance domain knowledge <cit.>; and novel tasks emerging in different application scenarios also demand ever-growing task capacities from LLMs <cit.>. For ease of introduction, we use the term knowledge to refer to both knowledge and capacities, where the latter can be regarded as a type of abstract knowledge. Training the whole model to incorporate new knowledge is computation-intensive and requires massive storage space. To address this issue, many efforts have been devoted to parameterizing external knowledge as customized bricks, which can be injected into LLMs for performance promotion <cit.>. Specifically, customized bricks are usually constructed after pre-training, with the original parameters frozen. Customized bricks can serve as an external knowledge bank for LLMs. Given an instruction, we first retrieve bricks with relevant knowledge and then insert them into LLMs for better responses. Different from training LLMs to store knowledge in emergent bricks, customized bricks possess a plug-and-play characteristic for dynamic and reusable knowledge injection. Therefore, customized bricks are usually named “plugins”. In this subsection, we will first summarize the underlying reasons for the feasibility of customized bricks and then introduce several typical customized bricks. §.§.§ Observations on Intrinsic Dimension of LLMs Customized bricks aim to inject external knowledge or new task adaptations into foundation models via tiny neural modules with only a few parameters.
This raises a natural question: "Can the external knowledge be represented in limited parameters?" Intrinsic Dimensionality It has been widely recognized that LLMs are highly over-parameterized <cit.>, as is the case for almost all modern neural network models. This raises the question of how many parameters are minimally needed to describe the training of these models. <cit.> introduce the concept of the "Intrinsic Dimension" of an objective landscape. For a given training objective landscape, its intrinsic dimension is defined as the minimal number of free variables required to well define the optimization problem, i.e., the minimal possible dimension into which the objective can be reparameterized. <cit.> propose to estimate the intrinsic dimension by randomly projecting the parameters of the original model into a low-dimensional subspace and observing whether the random subspace contains a good enough solution for the training objective. If there is such a solution, the dimension of the current subspace serves as an upper bound on the intrinsic dimension. With this method, <cit.> examine the intrinsic dimension of pre-trained language models and find that the fine-tuning of PLMs has a very low intrinsic dimension (∼ 200 for RoBERTa) and that pre-training implicitly minimizes the intrinsic dimension. It has also been observed that larger models tend to have lower intrinsic dimensions. <cit.> further find that PLM adaptations to many different tasks not only commonly have low intrinsic dimensions but can also be reparameterized into a shared, universal low-dimensional subspace, which partially explains the prevalent effectiveness of foundation models and the transferability of parameter-efficient tuning <cit.>. <cit.> also find that fine-tuning can be performed within a low-dimensional subspace and that some outlier dimensions play an important role. The low intrinsic dimensionality and the universal existence of such intrinsic subspaces make us believe that adding new abilities and knowledge into foundation models with relatively small-scale customized bricks is possible. §.§.§ Typical Customized Bricks Customized bricks have emerged as a significant avenue for enhancing LLMs during their post-processing phase, which involves the insertion of tiny bricks after the pre-training or fine-tuning procedures. This approach aims at efficiently enhancing the LLM's customized capabilities. As shown in Figure <ref>, we categorize widely used bricks into three types based on their capabilities: task bricks, knowledge bricks, and modality bricks. §.§.§ Task Bricks Task bricks, also known as parameter-efficient tuning or delta tuning <cit.>, are widely explored as a substitute for full-parameter fine-tuning, which usually requires substantial computational and storage costs. Task bricks achieve task adaptation by tuning only a small portion of parameters. As the number of model parameters grows, the performance gap between task bricks and full-parameter fine-tuning narrows. Consequently, in the realm of LLMs, employing task bricks for adaptation has become a widely accepted paradigm. Following <cit.>, we divide task bricks into three types according to the operations on the tunable parameters. Besides, recent efforts demonstrate that task bricks can also be obtained by extracting task vectors from the internal representations without tuning, and we term these efforts training-free task bricks. Addition-based Bricks Addition-based bricks introduce extra parameters into the LLM for fine-tuning.
Within this category, the most extensively studied methods are adapter tuning <cit.> and prompt tuning <cit.>. The fundamental structure of adapter tuning consists of two linear layers with a notably low intermediate dimension, which enables efficient computation and storage. This layer can be inserted into the standard Transformer architecture, for instance, following the self-attention layers or the feed-forward network layers, to facilitate task adaptation. Different from adapter layers, which modify the model architecture, prompt tuning conducts task adaptation by inserting token embeddings in the input layers <cit.>. Early research prepends hard discrete tokens to the inputs, aiming to bridge the gap between pre-training and fine-tuning by formalizing all NLP tasks as a sequence generation problem <cit.>. Then, to make prompts tunable, soft prompts are proposed, which prepend randomly initialized continuous embeddings to the inputs and optimize task objectives via gradient descent <cit.>. Prompt tuning is a human-interaction-friendly algorithm, as users can drive LLMs to accomplish various tasks by utilizing different prompts, eliminating the need to modify the model's architecture. Specification-based Bricks Specification-based bricks specify some existing parameters in LLMs to be tunable and do not introduce additional parameters. BitFit <cit.> shows that only optimizing the bias vectors inside the linear projections can achieve satisfactory performance. <cit.> attempt to only tune the output layer to protect data privacy and enable an efficient API-based tuning framework. Besides manually specifying which parameters are adjustable, <cit.> and <cit.> learn a binary mask over the parameters, determining which ones should be optimized for a given task. When the model scales to billions of parameters, the performance gap introduced by these design differences becomes negligible, and even arbitrarily specifying modules to be tunable can lead to results comparable to full-parameter fine-tuning <cit.>. Reparameterization-based Bricks Reparameterization-based bricks rewrite the computational formula of existing layers into a parameter-efficient form and specify part of the parameters as tunable. The most widely adopted approaches within this category assume that the variations in some parameters during training are low-rank, and subsequently optimize these low-rank variations with tiny parameter sets. For instance, the intrinsic-dimension method mentioned earlier maps a low-dimensional vector into the parameter space, allowing the model training process to solely optimize this low-dimensional vector <cit.>. Similarly, LoRA models the variation of a particular parameter matrix as the product of two matrices with a significantly low intermediate dimension <cit.>. Training-free Task Bricks In addition to the aforementioned methods that require additional training, many researchers attempt to activate the intrinsic task capabilities of foundation models without any training. Recent progress shows that the representation space of intermediate layers in LLMs possesses semantically meaningful structures <cit.>. It indicates that we can directly control the behaviors of LLMs by operating on the intermediate representations.
Inspired by these findings, many efforts reveal that the demonstrations in in-context learning can be transformed into a function vector with simple representation arithmetic, and inserting the function vector into the intermediate representations of inputs can trigger the foundation model to generate the task predictions <cit.>. §.§.§ Knowledge Bricks Knowledge bricks aim to supplement LLMs with external knowledge. While it is well-documented that LLMs internalize vast amounts of world knowledge to facilitate robust language comprehension <cit.>, their finite parameter space inevitably limits the capacity to encapsulate the nearly infinite spectrum of external knowledge. This limitation often manifests itself in the form of “hallucinations”, where the model generates erroneous information in responses due to a lack of relevant knowledge <cit.>. Moreover, the computational overhead of LLMs makes them less agile in adapting to the ever-changing landscape of world knowledge <cit.>. In the following paragraphs, we will present methods for representing knowledge as compact neural bricks and discuss the advantages of knowledge bricks compared to traditional knowledge injection methods. Based on the knowledge type, we can divide knowledge bricks into structured knowledge graph (KG) bricks and unstructured text bricks. Structured KG Bricks Leveraging structured KGs to enhance pre-trained language models has long been a pivotal direction in NLP <cit.>. Compared to traditional models that incorporate knowledge during the pre-training phase, the construction of tiny structured KG bricks is cost-efficient, with stronger scalability to build bricks across diverse knowledge types. The core of structured KG bricks lies in pluggable knowledge representations, which can be injected into LLMs to provide external knowledge. One main line of research attempts to concatenate informative entity representations with the original input embedding sequences for knowledge fusion. To this end, <cit.> average the output vectors of a masked entity's occurrences as pluggable representations. <cit.> and <cit.> learn a neural projection to bridge the gap between KG embeddings <cit.> and token embeddings in pre-trained models. Besides, instead of incorporating knowledge features directly, K-Adapter adopts knowledge-specific objectives, such as entity relation extraction for KGs and dependency prediction for linguistic trees, to optimize adapters that produce knowledge-enriched representations from the original hidden vectors <cit.>. Unstructured Text Bricks Unstructured text is the primary medium for humans to record and store world knowledge. An increasing number of researchers are exploring ways to augment LLMs with an external retriever, where the relevant textual knowledge is concatenated with input instructions <cit.>. However, these approaches often suffer from poor reusability of the encoded knowledge, requiring redundant knowledge re-encoding for different instructions. Therefore, constructing cross-task reusable, plug-and-play textual knowledge bricks offers an efficient alternative. <cit.> adopt the activations of long documents as reusable representations. During downstream inference, the activations are directly fed into the top layers of pre-trained models to reduce computational overhead. <cit.> represent documents as prefix tokens and conduct self-supervised training to make document bricks suitable for both inference and fine-tuning.
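A minimal sketch of this pluggable-representation pattern is given below: externally encoded knowledge (entity or document vectors) is mapped by a small trainable projection and prepended to the token embeddings of a frozen LLM. The class name, dimensions, and the choice to prepend rather than inject into intermediate layers are illustrative assumptions, not the design of any specific method.

```python
import torch

class KnowledgePrefix(torch.nn.Module):
    """Projects pluggable knowledge vectors (e.g., KG entity or document
    representations) into the LLM embedding space and prepends them to the
    input sequence, leaving the backbone frozen."""

    def __init__(self, knowledge_dim: int, model_dim: int):
        super().__init__()
        self.proj = torch.nn.Linear(knowledge_dim, model_dim)

    def forward(self, knowledge_vecs: torch.Tensor, token_embeds: torch.Tensor):
        # knowledge_vecs: [batch, num_bricks, knowledge_dim]
        # token_embeds:   [batch, seq_len, model_dim]
        prefix = self.proj(knowledge_vecs)               # map into the token embedding space
        return torch.cat([prefix, token_embeds], dim=1)  # [batch, num_bricks + seq_len, model_dim]

# Usage sketch: the concatenated sequence replaces the ordinary input embeddings
# of the frozen LLM; only `proj` (and optionally the knowledge vectors) is trained.
```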
§.§.§ Modality Bricks Multimodal large language models (MLLMs), which utilize LLMs as the brain for reasoning and are capable of processing various perceptual signals such as images and speech, have become pivotal in the pursuit of artificial general intelligence. With the continuous growth in LLM training data and parameter scale, LLMs have exhibited numerous surprising emergent abilities, including instruction following, in-context learning, and chain-of-thought reasoning <cit.>. To leverage these remarkable abilities in multimodal tasks and scenarios, many researchers have shifted their focus towards the training of MLLMs <cit.>. However, building an MLLM from scratch necessitates substantial computation and multimodally aligned data pairs. As a result, much of the current work treats pre-trained models from other modalities as bricks for LLMs, effectively leveraging them to transform multi-modal signals into features that the LLMs can readily process and understand. Based on the type of interface features communicated between models, these methods can be categorized into bricks with a textual interface and bricks with a continuous interface. Bricks with Textual Interface This line of work first converts multi-modal data into text, which is then combined with textual instructions and fed into the LLMs. For example, <cit.> adopt open-source video captioning and detection models to convert videos into textual descriptions, which are then fed into the LLM to generate responses. Recent popular tool-augmented LLMs treat models for other modalities as APIs. Given the functional descriptions and input-output formats of these models, LLMs decompose the input instruction into multiple sub-tasks, which are then solved one by one by invoking the corresponding models <cit.>. These approaches require no additional training and do not necessitate access to the model's parameters, making them particularly suitable for API-based models such as ChatGPT and GPT-4 <cit.>. However, converting other modalities into text often results in inevitable information loss, which can considerably impact the model's performance. Bricks with Continuous Interface To mitigate this information loss, many researchers attempt to construct a learnable continuous interface between LLMs and other pre-trained models. As the models are trained separately on single-modality data, their representation spaces are quite different, which poses challenges for the learnable interface in translating visual and audio inputs into continuous prompts for the LLM. As for the model architecture, a widely-used method is to adopt an attention mechanism that extracts important information with several learnable query vectors <cit.>. Further investigation indicates that a simple multi-layer perceptron is also powerful enough to bridge modalities <cit.>. Different from bricks with a textual interface, a learnable continuous interface relies heavily on multimodally aligned data <cit.>. Besides, due to the limited representation capacity of continuous prompts, the learnable interface sometimes suffers from fine-grained information loss. Besides the three types of customized bricks mentioned above, many researchers devote efforts to developing plugins with diverse functionalities. These include plugins that enable models to manipulate external tools <cit.>, debias the model response <cit.>, reduce the computational costs <cit.>, and transform the style of generated text <cit.>.
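Returning to the continuous interface described above, it can be as small as a two-layer MLP. A minimal sketch follows; the hidden width, GELU activation, and class name are illustrative assumptions rather than the design of any specific MLLM.

```python
import torch

class ModalityProjector(torch.nn.Module):
    """A two-layer MLP that maps features from a frozen modality encoder
    (e.g., a vision backbone) into the embedding space of a frozen LLM,
    so that perceptual features can be consumed as continuous prompt tokens."""

    def __init__(self, vision_dim: int, llm_dim: int, hidden_dim: int = 2048):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(vision_dim, hidden_dim),
            torch.nn.GELU(),
            torch.nn.Linear(hidden_dim, llm_dim),
        )

    def forward(self, vision_feats: torch.Tensor) -> torch.Tensor:
        # vision_feats: [batch, num_patches, vision_dim]
        # returns:      [batch, num_patches, llm_dim], later prepended to text embeddings
        return self.net(vision_feats)
```

Only the projector is trained on multimodally aligned pairs, while both the modality encoder and the LLM remain frozen.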
The practice of constructing tiny customized bricks, i.e., plugins, for LLMs to supplement their functionalities and knowledge has become a widely accepted paradigm. Despite this success, plugin learning for LLMs still faces the following challenges: (1) Combining multiple plugins: In real-world scenarios, it often becomes necessary to combine multiple plugins to execute complex commands. However, since different types of plugins are trained independently, combining them during the inference stage can lead to out-of-distribution (OOD) problems. Exploring the combination of multiple plugins is therefore crucial to unlocking the full potential of large model plugins. (2) Unified training strategy: Currently, different training methods, datasets, and insertion points are required for plugins with different capabilities. Discussing the construction of different types of plugins from a unified perspective could greatly benefit the future development of numerous plugins. A standardized approach to training would streamline the process, ensuring consistency and efficiency across different plugin types, and would also alleviate compatibility issues. §.§ Brick Granularity As stated previously, the granularity of a brick is highly customizable, ranging from a solitary neuron to a whole pre-trained model. As the size of bricks increases, their capacity expands correspondingly, and the computational resources required also increase. This presents a challenge in selecting the optimal brick size, necessitating a careful balance between efficiency and effectiveness. In this section, we will first review existing observations on the capabilities of bricks at four different granularities: the solitary neuron (<ref>), the neuron group (<ref>), the layer (<ref>), and the full model (<ref>). Furthermore, we discuss how to choose the brick granularity properly (<ref>). §.§.§ Solitary Neuron Granularity The neuron, defined as a row or a column in the weight matrix of a linear layer, is often considered the finest functional unit in Transformer-based foundation models <cit.>. After being trained properly, neurons can carry certain skills or knowledge, laying a solid foundation for the complex behavior of the entire deep learning system. From the perspective of skills, neurons in well-trained neural networks are demonstrated to capture specific input patterns and to be predictive for some basic NLP tasks. Some early works find that neurons can learn the position of words <cit.> or parts of speech <cit.> such as nouns, verb forms, articles, numbers, etc. Others demonstrate the specialization of certain neurons in capturing groups of words with similar meanings (e.g., electronic items or legislative terms) <cit.>. Further, recent studies demonstrate the potential of visual model neurons to learn meaningful perceptual concepts such as tall structures in images <cit.>. The sensitivity of neurons to various input patterns constitutes their high predictivity for some fundamental NLP tasks, including sentiment analysis, natural language inference, topic classification, etc. <cit.>. Moreover, factual knowledge is also an important aspect of the information that can be captured by neurons. An important work in this field is by <cit.>, who demonstrate the potential storage of factual knowledge in specific neurons (i.e., knowledge neurons).
The activation of these knowledge neurons is positively correlated with the expression of the corresponding knowledge triplet, which sheds light on a promising approach to training-free knowledge editing and model manipulation <cit.>. Another interesting finding in previous work is that the skills or knowledge contained in a solitary neuron can be non-singular. Polysemous neurons capturing multiple concepts or word senses widely exist in deep neural networks <cit.>. The knowledge neurons responsible for different pieces of factual knowledge are also shown to have intersections <cit.>. This observation underscores the potential of exploring smaller units within LLMs for understanding the storage of knowledge and skills. §.§.§ Neuron Group Granularity Neuron groups, namely tiny sublayers involving a group of neurons, can often display more complex behaviors than solitary neurons. As demonstrated in <cit.>, neurons can be emergently clustered into different functional groups during pre-training. Besides, customized bricks usually consist of tiny sublayers to store certain knowledge and abilities. One of the most popular organizational forms of neuron groups is Mixture-of-Experts (MoE), where each expert is a specialized neuron group and the MoE output is aggregated from the expert outputs through a routing function. As for Transformers, pre-defined MoE is usually implemented by replacing a single linear layer in the attention module <cit.> or the feed-forward network <cit.> with multiple linear experts. In some previous works <cit.>, these experts are demonstrated to possess specialized functions at different levels, ranging from simple semantic functions (e.g., word sense classification) and knowledge functions (e.g., factual knowledge recognition) to more complex language understanding tasks such as GLUE <cit.>. Other works also provide clues for expert specialization through analyses of expert activations, expert usage, or ablation studies <cit.>. They also demonstrate that the effectiveness of MoE and expert specialization is consistent across textual, visual, and multimodal models. In addition to the pre-defined MoE structure, we can also uncover MoE structures inside an already pre-trained model without introducing additional parameters. For instance, <cit.> construct experts by splitting the FFN parameters into functional partitions, which reduces the computation significantly without harming the overall performance. Another line of research focusing on groups of neurons is parameter-efficient tuning (PET). To reduce the huge computational costs of tuning large language models, PET only updates a small number of neurons (inherently inside the model or additionally introduced) while freezing the remaining parts of the model <cit.>. The tunable neuron groups in representative PET methods (e.g., Adapter <cit.>, Prefix-Tuning <cit.>, BitFit <cit.>, and LoRA <cit.>) are demonstrated to have high versatility and satisfactory performance on over 100 NLP tasks, from simple text classification to complex conditional generation <cit.>. PET neuron groups can also carry external knowledge to empower the frozen language model in a plug-and-play manner <cit.>. §.§.§ Layer Granularity The utilization of stacked layers within deep models has consistently showcased superior performance across numerous scenarios <cit.>. LLMs typically comprise multiple layers with the same architecture, each possessing unique parameters <cit.>.
Understanding and manipulating different layers are both crucial to maximizing the potential of LLMs. Numerous studies have delved into the examination of the functions of distinct model layers. For instance, <cit.> show that word order information is mostly contained in lower layers, while <cit.> propose the structural probing framework and find that syntactic knowledge is most prominent in middle layers. <cit.> also find that final layers are usually more related to specific tasks. <cit.> provide a relatively thorough survey of the function of each layer in BERT <cit.>. In addition to linguistic functions, <cit.> demonstrate that the feed-forward layers in Transformer models <cit.> can be construed as key-value memories, which inspired a new way of addressing and editing the knowledge stored in language models. As models progress from lower to higher layers, the functional scope transitions from local, lexical aspects to global, semantic dimensions. Based on these observations, there exist several approaches for manipulating inner model layers to improve efficiency, among which a straightforward one is to skip some layers to accelerate model training or inference. <cit.> present a Switchable-Transformer block, which introduces a gate to determine whether the corresponding layer is disabled or not in the training stage, based on which they further propose a progressive-layer-dropping approach that can effectively reduce the training cost. Regarding inference, <cit.> introduce DeeBERT, which reduces inference time by bypassing certain upper layers rather than passing through the entire model. Another branch of layer manipulation is knowledge editing, which aims to change the existing knowledge within pre-trained models by modifying some specific layers. For example, <cit.> propose to inject multilingual knowledge into the feed-forward layers. Recently, <cit.> present EasyEdit, which supports various cutting-edge knowledge editing methods and applies to representative large language models, such as Llama 2 <cit.>. §.§.§ Full Model Granularity Various types of models frequently exhibit distinct advantages and drawbacks. For instance, large models excel in performance, while small models offer higher speed and demand fewer computational resources. Combining existing models is an efficient strategy for harnessing the strengths of individual models. <cit.> propose to dynamically select models of different sizes for input samples of different difficulties. Inspired by the dual-process theory of human cognition, <cit.> propose to employ a small model that performs fast and intuitive thinking and a large model for complex and slow thinking. In addition to amalgamating models of varying sizes, as discussed in <ref>, the integration of models from different modalities offers a practical approach to constructing multimodal models <cit.>. The studies mentioned above primarily concentrate on the integration of independent models. However, there are also notable works that involve the extraction of sub-networks from larger models. <cit.> explore the out-of-distribution generalization capabilities of sub-networks and find that even in biased models there still exist unbiased sub-networks. <cit.> identify a sub-network, which they call the child network, in a pre-trained model and only update the weights of the child network for downstream tasks.
<cit.> also propose S^4-Tuning, a technique that partitions the entire model into sub-networks dedicated to each target language. It exclusively updates the relevant sub-network for tasks in a specific language, thereby enhancing language-specific task performance. §.§.§ Discussion Based on the above statements, we return to the issue of selecting the appropriate granularity according to the required ability. Intuitively, coarser-grained bricks with more parameters are better suited for addressing complex tasks. For instance, solitary neurons can discern specific patterns of low-level linguistic units <cit.>. By contrast, full-model bricks are typically associated with higher-level capabilities, encompassing the general understanding of specific modalities <cit.>, languages, or textual corpora (e.g., code). However, there exists evidence supporting the presence of certain abilities shared by multiple granularities. Both solitary neurons and neuron groups have been established as having predictive functions in some language understanding tasks, including sentiment analysis and natural language inference <cit.>. Consequently, the alignment of diverse granularity levels with specific abilities remains an open question. Scaling laws, the empirical studies on the relationships between task performance, model scale, dataset size, and computation, may provide insights into the granularity-ability association <cit.>. Another point worth noting is that different levels of brick granularity and ability have potentially inclusive relationships. Specifically, high-level abilities, such as general language understanding, can be decomposed into low-level NLP tasks. Similarly, bricks of coarser granularities can be viewed as combinations of multiple finer-grained ones. Therefore, a possible approach involves structuring foundation models with hierarchical bricks. Consequently, when retrieving, constructing, or updating bricks, operations can be efficiently executed at the appropriate granularity, guided by the hierarchical functional partitions. Nevertheless, some works also demonstrate that the functional bricks in LLMs can appear emergently during the training process. Given the yet-to-be-fully-explored association between granularity and abilities, the manual design of well-suited brick hierarchies presents a challenge. Thus, it becomes necessary to conduct comparative analyses of the performance and costs of different brick granularities, which we leave for future work. §.§ Benefits of Configurable Bricks As elucidated earlier in this section, we can decompose pre-trained models into emergent bricks and enhance their capabilities by constructing customized bricks, thereby realizing a configurable LLM architecture. In this subsection, we will highlight five advantages of configurable LLMs compared to traditional monolithic LLMs. Efficiency The vast number of parameters in language models, encapsulating knowledge and capabilities, is essential for executing a wide range of tasks. However, for a specific task or instruction, we often only need to utilize a subset of these parameters for language comprehension and task inference. This means that, given a particular command, we can dynamically select the relevant bricks associated with the current instruction for computation. For instance, early-exiting models treat each layer of the model as a brick; for each instruction, they decide whether to engage subsequent layers for computation based on the predicted confidence <cit.>.
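A minimal sketch of this early-exiting pattern is shown below; the per-layer classifier heads, the maximum-probability confidence measure, and the threshold are illustrative assumptions rather than the design of any particular system. The expert-selection mechanism described next follows the same selective-computation idea.

```python
import torch

def early_exit_forward(layers, exit_heads, hidden, threshold: float = 0.9):
    """Run Transformer layer bricks one by one and stop as soon as an
    intermediate classifier head is confident enough, skipping the rest.

    layers:     non-empty list of Transformer blocks (hidden -> hidden)
    exit_heads: one lightweight classifier per layer (hidden -> class logits)
    hidden:     [1, seq_len, d_model] hidden states for a single instruction
    """
    probs = None
    for layer, head in zip(layers, exit_heads):
        hidden = layer(hidden)
        probs = torch.softmax(head(hidden[:, -1]), dim=-1)  # predict from the last position
        if probs.max().item() >= threshold:                 # confident enough: exit early
            break
    return probs, hidden
```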
Models utilizing a mixture-of-experts (MoE) approach regard each expert network (typically an FFN layer) as a brick. For every token, they select a few experts that match the token's characteristics from a set of expert networks to participate in the computation <cit.>. Consequently, even as the number of model parameters grows in response to expanding knowledge and capabilities, the computational demand for a given instruction or task remains relatively low. Reusability Configurable LLMs decompose the model into several distinct functional bricks, facilitating the fulfillment of complex real-world requirements through combinations of these bricks. In various configurations, knowledge and capability transfer can be achieved by reusing different bricks. For instance, <cit.> decompose multi-lingual task fine-tuning into language bricks and task bricks. Training a task brick on a high-resource language and then combining it with a low-resource language brick enables task knowledge transfer across different languages. Both <cit.> and <cit.> model KG and textual knowledge as task-agnostic bricks, allowing for efficient knowledge injection across different tasks without the need for further task-specific adaptations. Traceability Decomposing black-box LLMs into bricks with interpretable functions allows us to trace the underlying mechanisms behind their superior performance by monitoring the activation of bricks. As aforementioned, efforts have been made to identify the knowledge <cit.>, concepts <cit.>, and task-specific skills <cit.> encoded in specialized neurons or expert units. When a particular brick is activated, it indicates that the knowledge contained within the brick has been utilized to generate a response to the given instruction. Such observations provide a fresh perspective on understanding the behavior of LLMs. After tracing the source, we can predictively control model behavior by manipulating relevant bricks without affecting other parts of the architecture <cit.>. For example, <cit.> update or erase learned relational facts by directly modifying the parameters in the corresponding knowledge neurons. This provides a novel viewpoint for the oversight and alignment of LLMs: instead of focusing on a holistic ethical and safety review of the entire LLM, the focus can shift to a brick-by-brick examination, allowing for the repair or replacement of bricks that may induce the model to generate unethical responses <cit.>. Sustainability Continuously enhancing LLMs with new capabilities and knowledge to adapt to the ever-evolving global environment remains a focal point of research. Unlike monolithic LLMs, which necessitate updating all parameters and can suffer catastrophic forgetting of existing knowledge, configurable LLMs can achieve continual learning through the growth and updating of specific bricks without undermining previously acquired knowledge. For instance, in the realm of multi-domain pre-training, many scholars focus on MoE models, leveraging the expansion of experts to assimilate knowledge across an increasing array of domains <cit.>. Furthermore, when knowledge embedded within the LLM requires updating or overwriting, strategically modifying specific emergent bricks <cit.>, or constructing a supplementary customized brick <cit.>, stands as an efficient solution. Distributed Computation Configurable LLMs decompose the monolithic computation into modularized operations.
This distributed computation trait makes configurable LLMs more practical to deploy on computational clusters: each machine can be tasked with the computation of a specific brick, exchanging information with others through hidden vectors. Such distribution can harness the full computational capacity of each device, thereby reducing deployment costs. For instance, many researchers propose to distribute different bricks across distinct machines, train them with domain-specific data, and eventually merge all the trained bricks to produce a larger language model with enhanced capabilities <cit.>. Moreover, the nature of distributed computing can serve as a safeguard for model and data privacy. For example, <cit.> and <cit.> place the main LLM on a central server endowed with substantial computational resources, while the task-specific bricks and output layers reside on the user's machine. This setup allows users to reap the benefits of the LLM's superior performance without compromising data confidentiality. § OPERATIONS FOR CONFIGURABLE BRICKS In the previous section, we introduced the construction of emergent and customized bricks for LLMs. As the number and variety of bricks increase, it becomes crucial to configure LLMs for intricate requirements in real-world applications. This involves utilizing multiple different bricks to execute complex instructions. In this section, we mainly describe several operations associated with configurable bricks in LLMs: routing and retrieving from a vast array of bricks based on instructions (<ref>); combining multiple single-purpose bricks to endow the system with composite capabilities (<ref>); updating or refining bricks to align with shifts in world knowledge and requirements (<ref>); and growing the bricks to accommodate new capabilities acquired from continuously emerging data (<ref>). §.§ Routing and Retrieval Given the abundance of emergent and customized bricks, as shown in Figure <ref>, it is essential to establish suitable routing and retrieval methods to utilize these bricks effectively in various situations. In this subsection, we first provide an overview of existing routing and retrieval methods, examining them from two perspectives: the categories of bricks and their granularity. Then, we engage in a discussion concerning the improvement of current retrieval methods. §.§.§ Emergent Brick Routing Regarding emergent bricks, whether they are defined by humans or self-organized, the main objective of retrieving these bricks is usually to enhance the efficiency of the current model, where only limited parameters are selected for computation. Because emergent bricks are generated during pre-training, the number of emergent bricks for an LLM is typically limited, often amounting to only a few dozen. Consequently, emergent bricks are selected through a routing function, which assigns a score to each brick based on the given instructions or tokens. The bricks with the highest scores are then engaged in the computation process. By selectively activating the retrieved bricks, it becomes possible to significantly reduce both training and inference FLOPs, thereby improving computational efficiency. Current routing methods for emergent bricks mainly focus on the pre-defined brick architecture, MoE, and can be divided into two main categories: trainable routing functions and fixed routing functions.
Trainable Routing Function In many MoE models, a brick refers to an expert, and a trainable routing function is employed to determine the assignment of each token to its corresponding brick. Typically, the routing function in SwitchTransformer <cit.> and GShard <cit.> consists of a trainable projection layer, which takes the token representation as input to calculate the gate values for different bricks. The token is then routed to the corresponding experts based on the top-k gate values. However, this method can lead to multiple tokens being assigned to the same brick, resulting in an imbalance of FLOPs among different bricks. To address this issue, alternative approaches have been proposed. One such approach, suggested by <cit.>, involves solving a linear assignment problem to route tokens, rather than simply selecting the top-k bricks. <cit.> adopt a recurrent router to establish associations between the routing choices of different layers. <cit.> propose that bricks select tokens instead of tokens selecting bricks. This method achieves a more balanced distribution of FLOPs among bricks by controlling the number of tokens each brick selects. Additionally, <cit.> propose the use of soft slots to gather information from all tokens, which are then further processed by different expert bricks. Although these trainable routing functions exhibit meaningful patterns after training <cit.>, fully explaining the routing behavior remains a challenging task. Fixed Routing Function Instead of learning an unexplainable routing function during training, some researchers explore alternative fixed routing functions that do not introduce any trainable parameters. In their work, <cit.> utilize pre-computed hash functions to route tokens to different bricks in a perfectly balanced manner. Similarly, <cit.> find that the behavior of the routing functions in SwitchTransformer <cit.> is akin to random routing. As a result, they suggest employing random routing without any additional parameters. Aside from these random routing methods, <cit.> propose a token routing approach based on the domain of the current instance, which offers better explainability and encourages specialization among different bricks. However, it should be noted that this method requires domains of nearly equal size to maintain a balance across the different bricks. §.§.§ Customized Brick Retrieval For customized bricks, the retrieval objective is to enhance LLMs with specific external capabilities that are relevant to the current situation. Retrieval methods are predominantly used for knowledge bricks, as these can be quite numerous, and it is crucial to retrieve the most pertinent knowledge from vast sources such as Wikidata. Several studies <cit.> have focused on augmenting LLMs with entity knowledge bricks from structured knowledge graphs and have investigated entity linking as a retrieval method to incorporate specific entity bricks into the current model. In addition, <cit.> explore encoding Wikipedia into an external memory of knowledge bricks, and they utilize Maximum Inner Product Search (MIPS) to retrieve the most suitable brick for different instances. For other types of customized bricks, such as task-specific modules, the scale is generally not as vast. Therefore, previous works <cit.> have focused on combining and merging all bricks rather than specifically retrieving the most relevant ones.
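Despite their differences, emergent brick routing and customized brick retrieval share the same primitive: score the candidate bricks against the current token or instruction and keep the top-k. A minimal, hedged sketch of this primitive is given below; the linear scorer, softmax weighting, and top-k size are illustrative assumptions rather than the design of any particular system.

```python
import torch

class TopKBrickSelector(torch.nn.Module):
    """Scores candidate bricks against an input representation and keeps the
    top-k, returning normalized weights for combining their outputs."""

    def __init__(self, d_model: int, num_bricks: int, k: int = 2):
        super().__init__()
        # Trainable scorer; a fixed scorer (hashing, random, or a domain lookup)
        # could be substituted without changing the rest of the computation.
        self.gate = torch.nn.Linear(d_model, num_bricks)
        self.k = k

    def forward(self, x: torch.Tensor):
        # x: [batch, d_model] token representation (MoE routing) or
        #    instruction embedding (customized brick retrieval)
        scores = self.gate(x)                               # [batch, num_bricks]
        top_scores, top_idx = torch.topk(scores, self.k, dim=-1)
        weights = torch.softmax(top_scores, dim=-1)         # weights over the selected bricks
        return top_idx, weights                             # only these bricks are executed
```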
Considering the growing number of task bricks, <cit.> propose to retrieve and then compose multiple LoRA modules relevant to the input instructions. Overall, retrieval methods play a crucial role in incorporating customized bricks, especially knowledge bricks, into LLMs, and further advancements in retrieval techniques can significantly contribute to the effective utilization of these customized bricks. §.§.§ Routing and Retrieval Granularity Different routing and retrieval methods can be applied at different levels of granularity, including the token level, sentence level, and task level. Each level of granularity offers distinct advantages and considerations. Token-level routing and retrieval provide greater flexibility and enable precise control over the specific information required for a given task and instance. Many MoE architectures <cit.> employ token-level routing to ensure that different experts acquire more generalized and fundamental capabilities. Additionally, token-level retrieval can be utilized within the context of knowledge brick retrieval to enhance token-level knowledge, such as entity bricks <cit.>. However, it is important to note that token-level retrieval and routing can be time-consuming, requiring significant computation and communication costs when applied to every token and every layer. Sentence-level routing and retrieval provide a higher-level view of the information within a sentence. <cit.> utilize sentence-level information about the domain of the current instance to route each token to its corresponding domain bricks and promote expert specialization in specific domains. <cit.> adopt a retrieve-then-compose framework to utilize massive numbers of LoRA task bricks, where the model first retrieves several related task bricks based on the input instructions and averages these bricks for the final computation. Furthermore, sentence-level information can also be valuable for text-based knowledge bricks, as semantic coherence makes the related knowledge of tokens within a sentence highly likely to be the same. <cit.> attempt to construct a sentence-level representation from the average representation of all tokens, which serves as the query to retrieve knowledge bricks for whole-sequence understanding. Task-level routing and retrieval are specifically designed for downstream tasks. Unlike token-level and sentence-level retrieval, which are used during training or inference, task-level retrieval occurs before training and inference. In this approach, retrieval methods aim to identify the most relevant task bricks given all the data associated with a specific task. The retrieved task bricks can then be utilized to perform the task without the need for additional tuning, or can serve as a better starting point for subsequent training <cit.>. §.§.§ Discussion Based on the development of configurable bricks, more advanced routing and retrieval methods need to be proposed. Efficient Routing and Retrieval Brick routing and retrieval during training and inference can be time-consuming due to the increased computation and communication. <cit.> show that the training speed of an MoE model can be 3 times slower than that of a compute-matched dense model due to the additional computation and communication for brick routing. Even for customized bricks, compromise solutions, such as retrieving only at specific tokens <cit.> or using higher-level representations <cit.>, have been proposed to reduce the frequency of routing and retrieval.
Better retrieval methods need to be designed to balance accuracy and efficiency. Multi-Level Routing and Retrieval Most current retrieval methods are based on single-level retrieval, which may not capture all the information required for accurate retrieval. Additionally, multi-level retrieval is crucial for the collaboration between emergent bricks and customized bricks, as token-level information is empirically used for emergent bricks, while sentence-level and task-level information is more relevant to customized bricks. <cit.> have made preliminary attempts to incorporate task-level and token-level information into routing various emergent bricks. However, there is still much to explore in combining multi-level information for retrieval purposes. Active Routing and Retrieval Current routing and retrieval methods typically decide in advance where to conduct routing and retrieval based on fixed rules, such as the positions of entity mentions or simply every token or sentence. We refer to these as passive routing and retrieval methods. In contrast, more proactive methods can allow the LLM to decide where to conduct routing and retrieval during the generation process, which we call active routing and retrieval methods. Active methods can also address the efficiency problem by significantly reducing the frequency of retrieval. <cit.> have attempted to dynamically augment entity memory when the model generates a specific special token. Furthermore, it is important to investigate whether current LLMs can determine when to retrieve bricks and how to enhance this ability for more advanced configurable foundation models. §.§ Combination Single-function bricks often fall short of fulfilling complex instruction demands. Brick combination aims to fuse multiple bricks so that the result possesses their combined abilities. For example, combining a brick trained on English NER datasets with a brick proficient in Chinese enables cross-lingual transfer for the named entity recognition (NER) task. In this way, brick combination can obviate the need to build high-quality annotated datasets for a specific requirement and to train models from scratch, thus significantly reducing both human effort and computational costs. In this section, we divide brick combination methods into two categories based on their operations: parameter weighted averaging and brick stitching. Besides, we also provide a discussion of future directions for brick combination. §.§.§ Parameter Weighted Averaging Parameter weighted averaging obtains a merged brick by directly performing an element-wise weighted average of multiple bricks with the same structure (homogeneous brick combination). In early research, parameter averaging was applied to ensembles of models trained on the same task, aiming to boost the model's robustness and performance. Due to the inherent randomness in the training of deep neural networks and the non-convex nature of loss functions, linearly weighting parameters from two distinct training processes usually fails to yield satisfactory results. As a result, many researchers explore the "mode connectivity" of deep neural networks <cit.>. These explorations seek to uncover interconnected paths between the parameters of two models, such that the parameters along these paths achieve commendable accuracy.
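The averaging operation itself is simple to state. A minimal, hedged sketch over state dictionaries is given below; the uniform treatment of all parameters and the scaling factor in the subtraction variant are illustrative assumptions. The findings discussed next concern when such averaging actually succeeds.

```python
import torch

def weighted_average_bricks(bricks, coeffs):
    """Element-wise weighted average of homogeneous bricks (same architecture).

    bricks: list of state_dicts with identical keys and tensor shapes
    coeffs: one weight per brick (e.g., uniform, Fisher-based, or tuned)
    """
    merged = {}
    for name in bricks[0]:
        merged[name] = sum(c * sd[name] for c, sd in zip(coeffs, bricks))
    return merged

def subtract_brick(base, brick, alpha: float = 1.0):
    """Subtraction variant: remove an undesired capability by negating the
    parameter difference that `brick` introduced relative to `base`."""
    return {name: base[name] - alpha * (brick[name] - base[name]) for name in base}
```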
Further, <cit.> discover that if two models are initialized from the same well-trained parameters, a straightforward linear weighting can enable the merged model to exhibit superior performance, which inspires subsequent research on parameter averaging based on pre-trained models. Models fine-tuned from the same pre-trained model, though with different training configurations, including various hyperparameters and data sampling, can be linearly weighted together <cit.>. The resulting merged brick achieves performance comparable or superior to a multi-brick ensemble. Besides, averaging bricks from the same task but different domains has been demonstrated to effectively enhance the domain generalization capacity <cit.>. Thus, parameter averaging is also employed in efficient federated learning, allowing for a generalized merged brick while simultaneously protecting private data <cit.>. Furthermore, parameter weighted averaging has been introduced to combine bricks from different tasks to facilitate knowledge transfer. In such settings, the contributions of the source task bricks to the target task often vary, which implies that careful design is required to determine the weighting coefficients for the different bricks. <cit.> employ Fisher-weighted averaging to transfer capabilities from intermediate tasks to target tasks. <cit.> leverage a combinatorial optimization algorithm to optimize the weighting coefficients, aiming to reduce the number of training instances required for the target task. <cit.> determine the coefficients by minimizing the L2 distance between the parameters of the merged brick and the source bricks. Beyond parameter addition, some researchers find that subtracting parameters can enable a model to unlearn an undesired capability <cit.>. For instance, <cit.> perform detoxification by subtracting a brick trained on toxic instructions. Similarly, <cit.> mitigate hallucinations by subtracting a brick trained on hallucinated examples. §.§.§ Brick Stitching Brick stitching involves concatenating several bricks together in sequence based on functional requirements, such that the output of one brick serves as the input for the next. Brick stitching can be applied to combine bricks with different structures (heterogeneous brick combination). Given the substantial discrepancies between the features processed by different bricks, a crucial aspect of brick stitching is the training of interaction interfaces between them. This ensures that the outputs from preceding bricks can be effectively interpreted by subsequent ones. While the types of interfaces have been discussed in prior sections (Modality Bricks in <ref>), in this section our primary focus is on how to determine the order and structure of stitched bricks. Heuristic Stitching The manual definition of the stitching order based on the inference sequence is the most common method of brick stitching. This approach often involves explicitly decomposing a task into several inference steps. For instance, in a visual question-answering task, one first needs to comprehend the objects and scenes within the given image and then answer questions based on the image content. Correspondingly, concatenating an image encoding model before a language model has become a widely adopted structure for multi-modal understanding <cit.>.
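One simple instantiation of this two-step decomposition, using a textual interface between the bricks, is sketched below. The captioner and the LLM are abstract callables, and the prompt format is an illustrative assumption rather than the design of any particular system.

```python
def stitched_vqa(image, question, captioner, llm):
    """Heuristic stitching of two model-level bricks for visual question answering:
    brick 1 describes the image, brick 2 answers conditioned on that description."""
    caption = captioner(image)  # image -> textual description (textual interface)
    prompt = (
        f"Image description: {caption}\n"
        f"Question: {question}\n"
        "Answer:"
    )
    return llm(prompt)          # text -> answer produced by the language model brick
```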
Besides, recent popular LLM-based multi-agent collaboration systems are also based on heuristic brick stitching, where each brick is a whole LLM-based agent and is required to solve a subtask, such as front-end design in game development <cit.>. Heuristic stitching is generally adopted for concatenating model-level bricks, which can independently accomplish specific inference steps. In contrast, fine-grained bricks tend to have abstract functions and are usually dependent on surrounding bricks. Arbitrarily concatenating any two bricks typically results in suboptimal collaboration. Recent studies observe that the hidden spaces of two pre-trained models with the same structure and task but different sizes can be linearly mapped to each other <cit.>. Based on this insight, <cit.> propose an approach that concatenates layer-level bricks from a family of pre-trained models of different sizes. This allows for optimal utilization of computational resources, ensuring maximum performance under given computational constraints. To further improve the flexibility of stitching different layers, <cit.> adopt evolutionary optimization algorithms to select optimal composition architectures from predefined stitching search spaces. Planner-based Stitching While heuristic stitching is suitable for tasks with fixed inference steps, real-world instructions often demand varied inference sequences. This implies that we usually need to determine the execution order of different bricks based on the specific given instruction. Inspired by early neural module networks <cit.>, many brick stitching models are composed of three components: a task planner, which is responsible for decomposing an instruction into several sub-tasks; a controller, which is tasked with generating and receiving the signals for each sub-task and ultimately produces the final answer; and multiple bricks, each of which handles a specific type of sub-task. In such a framework, the stitching order is determined by the task planner. An intuitive approach is to have the planner generate the execution sequence based on the functional descriptions and usage demonstrations of each brick before performing instruction reasoning <cit.>. Furthermore, many scholars propose to dynamically decide which brick to call after each reasoning step, allowing the planner to leverage intermediate reasoning results for a more precise determination of the execution sequence <cit.>. However, a linear execution sequence can be problematic: any error in the selection of bricks can directly impact the final prediction. To address this issue, recent research attempts to incorporate search strategies into inference, which require the planner to provide multiple options at each step and sequentially test them until a satisfactory response is generated <cit.>. §.§.§ Discussion Brick combination aims to fuse multiple single-function bricks to fulfill complex instructions. In this section, we delve into parameter weighted averaging applied to homogeneous brick combinations and brick stitching suitable for heterogeneous brick combinations. The essence of parameter weighted averaging lies in determining the weights for each brick, whereas the key to brick stitching is the alignment of feature spaces across different bricks. While current efforts have initiated preliminary exploration of brick combination, several challenges remain to be addressed.
Combination of Fine-grained Heterogeneous Bricks Most efforts on combining heterogeneous bricks focus on integration at the model level. However, parameter redundancies also exist between different models; for instance, both language and image models internally possess neural bricks responsible for understanding real-world concepts <cit.>. Combining fine-grained heterogeneous bricks holds the potential to further reduce parameter redundancies and enhance the reusability of merged bricks across diverse scenarios. A Universal Brick Interaction Interface Within the context of brick stitching, there are two primary interaction interfaces between different bricks: one utilizes discrete, human-readable signals, and the other communicates through continuous hidden vectors. The former offers a low training cost but can suffer from information loss; the latter typically results in superior data representation but demands extensive training data and lacks generalizability across different scenarios. Hence, devising a more universally applicable and efficient module interaction method is crucial to enhancing the practicality of brick stitching algorithms. Besides, a universal interaction interface could boost the scalability of multi-brick systems, allowing bricks that support the interface to be seamlessly stitched with others. §.§ Updating The continuous growth and evolution of world knowledge over time presents a unique challenge for LLMs. These models, once trained, may contain outdated information, leading to the phenomenon of hallucination. Therefore, LLMs need to adapt to shifts in world knowledge. As LLMs grow ever larger, retraining the model for every new knowledge update request becomes prohibitively expensive. To this end, methods for quickly updating the knowledge encoded in LLMs have been developed in recent years. One significant advantage of configurable LLMs is that they allow updating bricks in an isolated manner, which is more efficient (in terms of computation or data) than full-parameter fine-tuning. Keeping the other parameters frozen may also help minimize unwanted detrimental impacts on other capabilities of the model <cit.>. Existing works related to updating bricks have largely focused on editing the knowledge encoded in neural models <cit.>. Therefore, the discussion in this section primarily revolves around knowledge brick updating. However, many concepts for updating knowledge bricks can also be applied to updating other kinds of bricks as well. From the perspective of configurable bricks, we can categorize knowledge editing methods into (1) methods that locate and update naturally emergent knowledge bricks, and (2) methods that inject new customized bricks. §.§.§ Locating and Updating Emergent Bricks To address the above challenges, some recent works have explored methods to edit the knowledge encoded in the emergent bricks of LLMs. To this end, we need to first locate the emergent bricks storing the target knowledge and then update their parameters for knowledge editing. Locating Knowledge Bricks Inspired by the hypothesis that FFNs can be regarded as key-value memories <cit.>, existing research mainly focuses on exploring emergent knowledge bricks at the neuron level in the FFN layers. <cit.> use integrated gradients <cit.> to discover that some factual associations are positively correlated with the activation of a "knowledge neuron" in FFNs.
This enables the deletion of knowledge by zeroing the activation of knowledge neurons or amplifying knowledge by scaling up the activation. <cit.> strengthens this hypothesis by causal intervention on activation values and finds that middle-layer FFNs are most responsible for fact recalling. <cit.> show that editing the layers identified by causal intervention does not result in better editing performance. They find that this is because causal intervention identifies which brick carries the target knowledge, but manipulating the parameters of these bricks does not lead to superior performance. It indicates that the knowledge storage of even one entity or tuple involves multiple neurons and defining knowledge bricks at the neuron-group level and layer level may be more suitable. These works shed light on the challenges of knowledge bricks localization methods. Updating Knowledge Bricks After locating the target knowledge neurons, we need to edit the parameters for injecting the updated knowledge. Specifically, the whole FFN layer is regarded as a key-value memory network, and a knowledge neuron is treated as a key-value pair <cit.>. The editing operation is usually performed by replacing the value vector with the target knowledge-enriched representation. Notably, one emergent knowledge brick usually is responsible for more than one target knowledge, which means knowledge editing usually results in undesirable changes for unrelated knowledge. To alleviate this issue, extra efforts are made to minimize the impact on unrelated knowledge and capabilities. <cit.> apply L2 normalization loss on parameter updates to reduce the drift in parameter space. <cit.> utilize the layer statistics on a large corpus to directly infer an update that induces a given key-value mapping while minimizing the norm of the weight updates. <cit.> improves upon this by applying multiple key-value mapping at the same time. However, a small change in the parameter space might not translate to a small change in the function space. Hence, many works use a KL divergence loss with adversarial examples to minimize the change of predictions on unrelated inputs after the update <cit.>. Some works also investigate the possibility of using a hyper-network to produce a better parameter update a given gradient information on the representations of a piece of knowledge <cit.>. §.§.§ Injecting New Customized Bricks An alternative to the locate-and-update paradigm is to inject new bricks that override the existing knowledge. This generally does not require knowledge attribution of neurons or the knowledge update requests are known in advance, since the bricks are created on demand. The main challenges lie in the efficiency of the injection process and its effectiveness in replacing existing knowledge. Many knowledge-injection methods use full-parameter updates <cit.> and they are typically regarded too compute-heavy for updating knowledge. One alternative is plug-and-play methods that inject knowledge into a frozen LLM by adding new bricks. <cit.> propose to train entity embeddings that can be prepended to the hidden states after the input layer. These can be trained on demand, but the update can only happen at the entity level. <cit.> build on the work of knowledge neurons and propose to insert and train a new FFN neuron for every knowledge update. <cit.> re-route examples concerning updated knowledge to a small model conditioned on a memory of updated knowledge. 
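As a simplified sketch of the locate-and-update step described above, treating one FFN projection as a plain linear key-to-value map and omitting the corpus covariance statistics, batched edits, and locality losses used by the cited methods, the minimal-Frobenius-norm rank-one update that forces a chosen key to map to a new value can be written directly:

```python
import numpy as np

def rank_one_edit(W, k_star, v_star):
    """Minimal-norm edit of a key->value map so that k_star maps to v_star.

    W      : (d_out, d_in) weight of the FFN output projection
    k_star : (d_in,)  key vector associated with the fact to rewrite
    v_star : (d_out,) value vector encoding the new fact
    The returned matrix W' satisfies W' @ k_star == v_star, and ||W' - W||_F
    is minimal among all matrices satisfying that constraint.
    """
    residual = v_star - W @ k_star
    return W + np.outer(residual, k_star) / (k_star @ k_star)

# Toy check with random dimensions (purely illustrative).
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))
k_star, v_star = rng.normal(size=4), rng.normal(size=8)
W_new = rank_one_edit(W, k_star, v_star)
assert np.allclose(W_new @ k_star, v_star)
```

Because the update is rank-one and tied to a single key direction, its side effects on unrelated keys are limited but not zero, which is precisely the motivation for the norm penalties and KL-divergence safeguards mentioned above.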
Additionally, a line of works has explored knowledge bricks in the representations instead of the parameters. <cit.> add an external brick that produces an updated hidden representation that induces the target knowledge when the subject representation is replaced with it. <cit.> and <cit.> investigate the possibility of using the difference between the hidden representations of two prompts as steering vectors to induce the target knowledge. §.§.§ Discussion Currently, updating methods have been largely limited to knowledge bricks. This is because the capabilities of other widely used bricks (see <ref>), such as task and modality bricks, rarely require frequent updates. Moreover, these bricks are often trained in isolation without updating the base model. Thus, directly retraining the bricks for the capabilities of interest is typically effective enough. Of course, it is not hard to imagine that as we scale up LLMs, even efficient adaptation methods may be too expensive for many applications. Therefore, we believe that exploring more efficient methods for updating tasks, modalities, and other kinds of bricks is a promising future research direction. Besides, during deployment, we often encounter undesired behaviors of LLMs, such as generating offensive responses when subjected to jailbreaking attacks. Therefore, quickly locating the bricks that lead to the undesired behaviors and correcting them is a promising research direction for efficient alignment. §.§ Growing In light of continually growing new knowledge and tasks, the demand for LLM growth becomes imperative, necessitating enhancements in the knowledge and capabilities of LLMs while avoiding catastrophic forgetting of existing capabilities. A straightforward strategy to address this challenge involves repeatedly training from scratch on both old and new data, which incurs prohibitively high costs. To this end, many efforts have been devoted to continual learning strategies that aim to enable LLMs to acquire information from new data effectively and efficiently. In this section, we mainly focus on continual learning strategies that increase the number of bricks. §.§.§ Growing for Pre-Training Continually learning new knowledge and capabilities from the new pre-training corpus is also important. Moreover, the relationship between the performance of LLMs and model scale has been well established through scaling laws <cit.>. Based on these observations, increasing the number of emergent bricks presents an effective approach to improving model performance. Early works attempt to achieve continual pre-training by expanding the existing dense parameters, especially the width, i.e., the hidden dimension (neuron-level bricks), and the depth, i.e., the number of layers (layer-level bricks). The new bricks and the original parameters are then trained on the new pre-training corpus. A straightforward way to expand the number of parameters is to initialize the expanded parameters with the original parameters and conduct continual pre-training with a mixed corpus <cit.>. Many efforts have then been devoted to further avoiding the forgetting of old knowledge learned in the original parameters and to improving training efficiency. ELLE <cit.> enlarges the pre-trained model in width and depth and carefully recovers its capabilities on old tasks through a recovering warmup process. Next, the expanded model undergoes training on a mixture of new data and replayed old data to acquire new information.
Similarly, LiGO <cit.> employs a linear mapping of parameters from an existing model to initialize a model with increased width and depth. <cit.> freeze the original parameters and train only the expanded parameters on the new training corpus to avoid forgetting. In this way, the enlarged LLM can inherit the knowledge of the original smaller models and thus reduce the cost of continual pre-training. Another promising direction is to employ a sparsely activated modular architecture, where only the related bricks are selected for computation. Therefore, constructing new bricks while keeping the original bricks frozen does not introduce knowledge forgetting and avoids additional computation costs during inference. Among these works, the sparse MoE architecture is widely used. The growing operation is performed by increasing the number of experts and keeping the parameters of the other layers and experts fixed. During inference, each token only selects the most related experts for computation <cit.>. §.§.§ Growing for Post-Training Continual learning has been studied for decades in multi-task learning, where the model is required to acquire new knowledge for new task instances while preserving its abilities on existing old tasks <cit.>. Among them, a popular line of research attempts to build an episodic memory, which can be regarded as a memory brick, to store a few representative and informative instances of old tasks <cit.>. In this way, the model can be continually trained only on instances of new tasks and instances saved in the memory brick, which saves computational costs. Nowadays, benefiting from the plug-and-play characteristic of task bricks, we can continually train LLMs for multiple tasks by constructing a new brick for each new task <cit.>. For example, <cit.> and <cit.> increase the model capacity with pluggable Adapter and LoRA modules, respectively. To differentiate between the forward propagation routes of new and old data, both works adopt a router to select the appropriate plugin. Besides, constructing new plugins can also introduce more world knowledge, domain knowledge, and complex capabilities for LLMs, as discussed in <ref>. §.§.§ Discussion Based on the insights provided by the aforementioned studies, we suggest choosing the means of brick growing with consideration of the following factors: Task Complexity The task complexity serves as a pivotal determinant in the selection of an appropriate model growth strategy. For less complex tasks such as acquiring modest amounts of knowledge, recognizing a new entity category, or manipulating an unseen tool, a viable approach involves growing the model at finer granularities (e.g., introducing a plugin). However, scenarios may arise where a large volume of information should be injected into the model or its knowledge capacity and general performance should be substantially boosted, necessitating growth on a larger scale. Computation Budget The computation budget is a rigid constraint on the model scale. While model performance generally improves with growth, the training and deployment costs associated with an expanded model must not exceed the computational budget. Strategies such as plugins and sparse MoE architectures are representative approaches to model growth at acceptable expense. Application Targets Finally, the growth of the model is always linked to the application targets. For instance, lightweight plugins prove highly advantageous in scenarios emphasizing user-oriented customization.
Conversely, when the target is to scale up a model to achieve heightened general AI capabilities, the introduction of a more extensive parameter set, through a direct increase in width and depth or through other sophisticated architectures (e.g., MoE or progressive networks), becomes imperative. § EMPIRICAL ANALYSIS In previous sections, we have discussed that LLMs can be decomposed into emergent bricks and custom bricks from a modular perspective. Similar to the human brain, neurons in LLMs exhibit the characteristics of sparse activation and functional differentiation, meaning each neuron is responsible for a specific functionality and is activated when an input instruction requires that functionality. Previous works have explored the sparsity <cit.>, functionality specialization on specific classification tasks <cit.>, and modular grouping <cit.> on the encoder model BERT <cit.> or the encoder-decoder model T5 <cit.>. In this paper, we focus on the analysis of widely-used decoder-only models with instruction-following chat data. Specifically, we conduct a detailed analysis of the following questions: (1) Are LLMs sparsely activated, meaning that only a few neurons influence the final output when processing each token? (2) Do neurons exhibit functional specialization, with their activation values highly correlated to the capabilities required by the instruction? (3) Do LLMs have the potential to be modularly split, which means that different capabilities activate different partitions of neurons? In this section, we present our empirical analysis, beginning with the formal definition of neurons and their activation values and the functionality localization of neurons; finally, we present the experimental results. §.§ Functionality Localization In this section, we analyze the neurons in the feedforward layers. Previous works indicate that feedforward layers in the Transformer can be regarded as key-value memory networks <cit.> and provide world knowledge for sequence understanding. Therefore, we mainly focus on the feedforward layers for analysis. Neurons and Activations The feedforward layers (FFNs) employ two-layer projections or gated projections for each token in the sequence. The calculation can be written as FFN(𝐱) = FFN^O(FFN^I(𝐱)) = 𝐖^𝐎 FFN^I(𝐱) + 𝐛^𝐎. Here, 𝐖^𝐎∈ℝ^d × d_ff and 𝐛^𝐎∈ℝ^d are the weight matrix and bias vector for the output linear layer FFN^O(·). As for FFN^I(·), there are two variants: Vanilla FFN: FFN^I(𝐱) = σ(𝐖^𝐈𝐱 + 𝐛^𝐈), Gated FFN: FFN^I(𝐱) = σ(𝐖_𝐆𝐱 + 𝐛_𝐆) ⊙(𝐖^𝐈𝐱 + 𝐛^𝐈). Here, 𝐖_𝐆, 𝐖^𝐈∈ℝ^d_ff× d and 𝐛^𝐈, 𝐛_𝐆∈ℝ^d_ff are the weight matrices and bias vectors for the input linear layer FFN^I(·) and the gate linear layer FFN_G(·). Following previous works <cit.>, we can split an FFN layer into d_ff neurons, each consisting of a row in the input and gate layers as well as a column in the output layer. The outputs of FFN layers can be rewritten as the sum of all neuron outputs: FFN(𝐱) = ∑_i^d_ff FFN^I(𝐱)_i 𝐖^O_:,i + 𝐛^𝐎. We define the intermediate output, FFN^I(𝐱)_i, as the activation value of the i-th neuron. Intuitively, if the magnitude of the activation value is small, then the corresponding neuron will have a limited impact on the final output, and vice versa. Therefore, the activation values are widely used as indicators for the functionality of neurons. Functionality Score In the following paragraphs, we will introduce how to locate neurons with specific functionalities.
As mentioned before, the activation values can reflect the contributions of each neuron to the FFN layer output and thus are usually used as the indicator for the functionality. Then we will present the process to calculate the functionality scores of neurons on given functionalities. Specifically, we denote the functionality, such as coding ability, as f, a collection of chat instances as 𝒞 = {(p_0, r_0), ... , (p_n, r_n)}, where p_i and r_i are user input prompt and model-generated response. We define the functionality label for each chat instance (p_i, r_i) as y_i^f, where y_i^f = 1 when p_i requires ℳ to have the capability f to generate the correct answer; otherwise, y_i^f = 0. For example, given the prompt p=“How can we select unique elements from a list in Python?”, its functionality label for code ability is 1 and its functionality label for translation ability is 0. We denote the LLM requiring to be analyzed as ℳ, which has L Transformer layers and L× d_ff neurons. Given a neuron n, as the FFN layers are computed in a token-wise manner, we can collect the activation values of the neuron n on the collection 𝒞. For the instance, (p_i, r_i), there are l_i activation values, and we define the collection of the absolute value of activation values as: 𝒜_i = {|a^i_0|, ..., |a^i_l_i|} from the l_i tokens in p_i. We define the activation value of neuron n on the instance (p_i, r_i) as the average value of 𝒜_i. Then, following <cit.>, we use the average precision score as the functionality score of n on the functionality f: FuncScore(n, f) = AvgPrecision({𝒜_0, ..., 𝒜_i}, {y_0^f, ..., y_n^f}). Intuitively, a higher functionality score suggests a stronger correlation between neuron n and capability f. That is to say, if the FuncScore(n, f) is high, neuron n exhibits higher activation levels when the input prompt necessitates capability f and lower activation when it does not. §.§ Experimental Settings To analyze the functionality specialization and partition, we need a dataset annotated with abilities required by each instance. Therefore, in the experiments, we adopt the Infinity-Instruct dataset [<https://huggingface.co/datasets/BAAI/Infinity-Instruct>]. Each instance in Infinity-Instruct consists of user prompts, model-generated responses, and several abilities required by the given user prompts. There are thousands of different abilities across the entire dataset and we summarize 7 typical and widely-used functionalities and its corresponding data labels for our analysis. The detailed functionalities and data labels are listed in Table <ref>. (1) Coding: Specializes in a variety of programming languages including Python, Java, and C++, with expertise in object-oriented programming and effective code documentation. (2) Math: Encompasses a wide range of mathematical skills, from basic calculations to complex problem-solving and theoretical proofs. (3) Linguistic: Focuses on the analysis and generation of syntactic structures, enhancing understanding and applying linguistic knowledge effectively. (4) Knowledge: Covers an extensive array of subjects such as science, literature, and religion, reflecting the deep understanding and application of specific disciplinary knowledge. (5) Translation: Showcases multilingual capabilities, specializing in Chinese-English translations among other languages, with proficiency in machine translation systems. 
(6) Ethics and Moral: Concentrates on ethical reasoning and judgment, exploring concepts from ethical analysis to the implications of unethical behaviors and moral standards. (7) Writing: Spans a variety of genres and formats including scriptwriting, creative writing, and technical documentation, emphasizing creativity and clear communication. We manually select data labels that meet our specific functionality requirements. Since each instance may demand various abilities, our goal is to analyze the specialization and distribution of functionality across different abilities. Therefore, we retain only those instances that belong exclusively to one of the aforementioned types. We randomly sample 1,000 instances from each data type for further analysis. As for the backbone model, we adopt two widely used models, Llama-3-8B-Instruct and Mistral-7B-Instruct-v0.3, for analysis. Both models are trained with a large-scale corpus and chat data. §.§ Sparse Activation In this subsection, we focus on the first question: are LLMs sparsely activated, just like human brains? If LLMs are sparsely activated, we can select a subset of neurons for the computation of each input to reduce the computational costs. To evaluate the sparsity of LLMs, we calculate the distribution of neuron activations and the impact of the neurons with low activation values. Besides, as the activation values are intermediate results of the FFN, following <cit.>, we also directly observe the impact of each neuron on the output, i.e., the output magnitude, for sparsity evaluation. Specifically, for the i-th neuron in an FFN, the output magnitude is defined as ||FFN(x)_i||_2. For convenience, we refer to the activation values and output magnitudes as indicators. Then, to further evaluate the impact of the neurons with low indicators on model performance, we assess the loss variation with these neurons masked. Specifically, given an input sequence, we first compute the intermediate indicators for all tokens in all FFN layers. Then we select and mask the k% of neurons with the lowest indicators for each token and each layer. The results are shown in Figure <ref> and Figure <ref>. The results indicate that: (1) The normalized indicators for 80% of neurons are lower than 0.2. This indicates that the impact of neurons follows a long-tail distribution, and only a few neurons have a significant impact on the output of FFN layers. (2) The distributions are similar across different layers and across the two decoder-only models with different training data. This indicates that sparsity widely exists in LLMs. (3) Across all data from the seven functionalities, as long as the proportion of masked neurons does not exceed a threshold (70% when using activation values as indicators, 80% when using output magnitudes as indicators), there is almost no decline in model performance. This further demonstrates that, similar to human brains, LLMs exhibit sparsity in their parameter usage. This suggests the potential to significantly reduce computational costs without compromising performance by using only part of the parameters for each input. (4) Compared to activation values, using output magnitudes as indicators results in a greater number of neurons with normalized indicators below 0.2, allowing more neurons to be masked without a performance drop. This indicates that output magnitudes can achieve a higher degree of sparsity. This finding encourages further research to identify more effective indicators for assessing neuron usefulness.
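The masking procedure above can be sketched on a randomly initialized gated FFN; this toy stand-in for a real LLM layer reproduces the bookkeeping only, so the numbers it prints are not comparable to the reported results. It computes the per-neuron activation values FFN^I(x)_i and the output magnitudes |FFN^I(x)_i| · ||W^O_:,i||_2, masks the bottom k% of neurons per token under each indicator, and reports the relative change of the layer output.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedFFN(nn.Module):
    """Gated FFN as defined above: W_O [ sigma(W_G x + b_G) * (W_I x + b_I) ] + b_O."""
    def __init__(self, d=64, d_ff=256):
        super().__init__()
        self.w_gate = nn.Linear(d, d_ff)
        self.w_in = nn.Linear(d, d_ff)
        self.w_out = nn.Linear(d_ff, d)

    def activations(self, x):
        # FFN^I(x)_i: the per-neuron intermediate activation values.
        return F.silu(self.w_gate(x)) * self.w_in(x)

    def forward(self, x, keep_mask=None):
        act = self.activations(x)
        if keep_mask is not None:
            act = act * keep_mask          # zero out the masked neurons
        return self.w_out(act)

def mask_bottom(indicator, k_frac):
    """Keep-mask that zeroes the k_frac fraction of neurons with the smallest
    per-token indicator values."""
    n_drop = int(k_frac * indicator.shape[-1])
    idx = indicator.argsort(dim=-1)[..., :n_drop]
    return torch.ones_like(indicator).scatter_(-1, idx, 0.0)

torch.manual_seed(0)
ffn, x = GatedFFN(), torch.randn(16, 64)                      # 16 "tokens"
act = ffn.activations(x)
indicators = {
    "activation value": act.abs(),
    "output magnitude": act.abs() * ffn.w_out.weight.norm(dim=0),
}
full = ffn(x)
for name, ind in indicators.items():
    masked = ffn(x, keep_mask=mask_bottom(ind, k_frac=0.7))
    change = (masked - full).norm() / full.norm()
    print(f"{name}, 70% of neurons masked: relative output change = {change:.3f}")
```

On a trained model the same masks would be applied through forward hooks on every FFN layer, and the loss variation rather than the raw output change would be tracked.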
§.§ Functionality Specialization In this subsection, we discuss the functionality specialization of neurons in LLMs. First, we attempt to locate neurons corresponding to different functionalities within LLMs. We calculate the functionality scores of all neurons in each layer for the seven functionalities. As shown in Figure <ref>, we present the best functionality scores for neurons in each layer. We also provide the functionality scores of randomly activated neurons and the average functionality scores of all neurons in a single FFN layer as baselines. From the results, we can observe that: (1) The best functionality scores across these seven functionalities are significantly higher than those of randomly activated neurons and the mean functionality scores. This demonstrates that each FFN layer contains neurons highly associated with these seven functionalities, indicating that these neurons have differentiated into distinct functionalities during the pre-training and alignment processes. (2) The functionalities of Coding, Math, and Translation achieve functionality scores higher than 0.8 in most layers. This suggests that neurons associated with these three functionalities are more specialized compared to the other four functionalities. Instructions for these functionalities require LLMs to understand or generate sequences distinctly different from natural English, hence the activated neurons show high specificity. (3) We also present the top 5‰ functionality scores across different layers. We can observe that there are large gaps between the best functionality scores and the top 5‰ scores. As mentioned before, the neurons are sparsely activated, and only a few neurons are highly associated with each specific functionality. Thus, the functionality scores drop quickly as the number of selected neurons increases. (4) The average functionality scores are almost equivalent to the functionality scores of randomly activated neurons, indicating that the functionality scores of most neurons are close to the random baseline. This further suggests that for each capability, only a small subset of neurons is highly associated with it. Besides, inspired by <cit.>, we further conduct a perturbation study to verify the functionality specialization of neurons with high functionality scores. In this study, for a given functionality, we prune the 5% of neurons with the highest functionality scores and evaluate the pruned models on all data from the 7 functionalities. For instance, given the coding functionality, we manually set the activation values of the neurons with high functionality scores to zero and evaluate the impact of the pruned neurons on all functionalities. Theoretically, if the pruned neurons are highly specialized to a specific functionality, they should only have an impact on the corresponding functionality and have minor impacts on other functionalities. As all data used in our experiments come from generation tasks without a clear task format, we adopt perplexity as our evaluation metric. The results of the perturbation study are shown in Figure <ref>. From the results, we can observe that: (1) Values on the diagonal are generally higher than those off the diagonal. This indicates that after pruning neurons for a specific functionality, the model’s performance significantly deteriorates in the corresponding functionality while other functionalities are less affected.
(2) Pruning neurons for knowledge in Llama and neurons for translation in Mistral significantly affect all other functionalities. This may be due to the presence of resident neurons in the FFN <cit.>, which are frequently activated for most inputs. Including these neurons when selecting for functional specificity results in a substantial impact on model performance. In the future, we will explore more effective methods to locate function-specific neurons. §.§ Functionality Partition From experimental results in previous subsections, we can observe that neurons are sparsely activated, and each neuron exhibits specific functionalities. Based on these observations, in this sub-section, we further explore the potential for modular partitioning in LLMs. Similar to the human brain, neurons can be divided into several regions, each region containing neurons specialized for specific capabilities, collaborating yet not interfering with each other. Therefore, in this subsection, we attempt to visualize whether there are distinct partitions within the LLMs across the aforementioned seven capabilities. Specifically, we compute the distribution similarity between top-5% neurons of different functionalities. The results are shown in Figure <ref>. From the results, we can observe that: (1) Values on the diagonal significantly outperform those off the diagonal, indicating that neurons for different functionalities are distinctly different. (2) The similarity between neurons for translation and linguistic functionalities is greater than the random value, which is due to the need to ensure grammatical correctness in the output language during the translation process. In the future, an important research direction is to explore how to accurately cluster different neurons into distinct groups. This approach could avoid the need to select parameters at a neuron level. § OPEN PROBLEMS AND FUTURE DIRECTIONS §.§ Correlation between Emergent and Customized Bricks The emergent and customized bricks are the essence of configurable foundation models that make the training and updating of LLMs more flexible and scalable. Configuring LLMs with both emergent and customized bricks can promote decomposing and recombining functionalities for existing LLMs. However, as these two types of bricks acquire capabilities through different stages, there exist subtle discrepancies between their properties. For instance, emergent bricks can learn some outdated factual knowledge from the pre-training corpus, while customized bricks post-processed with updated documents may have the latest but also overlapped knowledge. This could lead to unexpected collisions and redundancy in their functionalities, resulting in potential performance degradation and extra computation costs. We advocate further efforts to better manage the integration and cooperation of emergent and customized bricks for ensuring optimal performance and efficiency in configurable LLMs. Construction The potential collision and redundancy between the functionalities of emergent and customized bricks can be traced back to their construction process. Though emergent bricks can be human-defined or self-organized, their capabilities are attained through the large-scale pre-training procedure, which is typically conducted in an end-to-end manner, making them relatively hard to interpret and localize. 
For adaptations to new tasks and knowledge that the existing model does not have, customized bricks are constructed after the pre-training stage with delicately designed structures and learning objectives. However, since it is impossible to enumerate the capabilities and knowledge of existing models, incorporating multiple customized bricks for new capabilities and knowledge can also introduce redundancy and collision. In addition, the granularities of both emergent and customized bricks have several variations and each of them may possess distinct abilities at different levels. The diverse combinations of emergent and customized bricks with different granularities may lead to varying extents of redundancy and collision of capabilities and knowledge. Therefore, detecting the underlying collision and redundancy between bricks is necessary for constructing customized bricks that effectively align with emergent bricks, which makes it possible to achieve optimal performance at minimal cost. Utilization The other perspective of avoiding collision and reducing redundancy lies in the joint operations of emergent and customized bricks. As mentioned in <ref>, emergent bricks tend to be selected by routing due to their relatively limited number. In contrast, customized bricks are retrieved to augment the current model with various external capabilities. Currently, the routing and retrieval processes of emergent and customized bricks are typically conducted independently, ignoring the potential collaboration. Jointly routing and retrieving emergent and customized bricks can benefit mutually, optimizing collision detection and selection efficiency. In addition, compared with integrating bricks at the model level, stitching emergent and customized bricks with varying granularities may improve the efficiency and reusability of configurable LLMs. §.§ Brick Construction Protocols Configurable LLMs transform the paradigm of LLM alignment and adaptation from a full-parameter training approach to one focusing on the construction and updating of a limited number of bricks. However, most existing algorithms for brick construction and updating, while not requiring the entire LLMs to be updated, still necessitate involving all LLM parameters in the error backpropagation to compute the gradients of bricks. This means that the brick training process demands substantial computational resources. This leads to a contradiction where the bricks offer computational benefits for inference while still being constrained by traditional, resource-intensive training methods. Efficient brick construction has emerged as a critical challenge. In configurable LLMs, different bricks exchange information through continuous hidden vectors. The primary objective in constructing a brick is to enable it to comprehend the input hidden vectors and generate output hidden vectors that are information-rich and understandable by subsequent modules. It implies that if one can effectively define the input and output hidden vector spaces of a brick, its construction can be independent of the massive parameters of the original LLM. To this end, <cit.> and <cit.> make preliminary exploration by introducing a small auxiliary model that serves as an emulator for the original LLM. The emulator shares the same brick structure as the original LLM and the hidden vector spaces for inter-brick communication are also pre-aligned with LLM. Each brick in the emulator has significantly fewer parameters compared to its counterpart in the original LLM. 
Therefore, we can utilize the emulator to construct functional bricks efficiently, which can be directly applied to the original LLM. Here, the emulator can be regarded as the brick construction protocol designed for LLMs, and bricks built following the protocol can be seamlessly integrated into the LLM. A unified and efficient brick construction protocol holds immense potential for the collaborative construction of future LLMs, enabling a paradigm akin to open-source code repository development. The protocol allows multiple developers to engage in collaborative LLM training, and brings two key benefits: (1) Protection of Data Privacy: Developers can utilize their private data to construct high-quality bricks without exposing privacy to a central training process. (2) Distributed Model Training: Each developer can develop and share bricks based on a unified protocol, without the need for gradient or data communication between different computational nodes. Despite these advantages, developing more effective and efficient protocols still requires considerable future efforts to fully realize the potential of this collaborative approach: (1) Universal Protocols: The emulator-based approach is usually limited to the inherent structure of the emulator, restricting its applicability to the construction of specific types of bricks. For instance, existing studies develop emulators that preserve only the layer-wise structure of origin LLMs, tailored for bricks that are inserted between transformer layers. However, due to the loss of intra-layer vector spaces, the emulator falls short when it comes to constructing bricks within a layer, such as prefixes inserted in attention mechanisms <cit.>. Therefore, how to design universal protocols suitable for multiple types of bricks remains a great challenge. (2) Effective Protocols: Existing emulators created via pruning or distillation, despite their smaller parameter scale, struggle to accurately represent the vector space of LLMs, thus leading to a performance loss. Therefore, a key focus of future research lies in enhancing the ability of small emulators to better approximate the vector space of LLMs. §.§ Evaluation of Configurable Foundation Models Configurable foundation models consist of various functional bricks. It introduces a fresh methodology to evaluate models from the perspective of bricks for existing metrics. Besides, the modular structure and further operations for bricks require evaluating the brick decomposition performance, i.e., whether the bricks can effectively support complex brick operations. Evaluation from the Perspective of Bricks Traditional evaluation methods and metrics usually treat the given LLMs as black-box systems, assessing the ability to generate responses that meet predefined requirements given specific instructions. However, such evaluations typically employ coarse-grained metrics that fail to capture the fine-grained performance. For instance, many efforts use quality scores of model responses or the winning rate against reference responses as metrics for LLM alignment. Such methods can provide coarse-grained performance evaluation but cannot measure the performance in intention understanding, multi-step reasoning, and other fine-grained capabilities. Configurable foundation models provide a new perspective to model evaluation, allowing us to shift from end-to-end black-box evaluation to brick-by-brick capability evaluations. 
This approach enables more precise identification of model shortages and directly updating bricks in urgent need of improvement. In this regard, some researchers have begun to explore the brick evaluation. Such as, <cit.> examine the functionality of neurons in LLMs, and find that some neurons are responsible for generating undesired toxic language, and deactivating them can achieve effective detoxicity. Evaluation for the Brick Decomposition The configurable foundation model also introduces new requirements for model evaluation, particularly regarding whether the bricks within the model can effectively support the diverse brick operations. In this context, we propose the following evaluation metrics for configurable foundation models: (1) Sparsity: A configurable foundation model, during inference, only needs to select a small subset of bricks with relevant functionalities for computation, thereby enhancing inference efficiency. Thus, the goal is to achieve high performance with the least possible number of bricks engaged for given instructions. The fewer bricks required, the more efficient and sparse the model is considered to be. Some existing efforts focus on actively enhancing model sparsity to minimize the parameters involved for each instruction, thereby improving the computational efficiency of LLMs <cit.>. (2) Coupling: As a core concept in software development, decoupling aims to isolate the code that performs a specific task from the code that performs another task <cit.>. Indeed, decoupling makes the code more maintainable, reusable, and easier to test <cit.>, which is also important for LLMs. In a configurable foundation model, different bricks are required to be combined to achieve complex capabilities. Besides, the updating and growing operation also needs to update the brick parameters for continual learning. These operations require low-dependency relationships between different bricks, allowing each brick to cooperate with others and be reused across various scenarios multiple times. Additionally, low coupling ensures that changes in one module do not adversely affect the performance of others. In this regard, some efforts have been made to construct task-decoupled knowledge plugins, enabling the reuse of knowledge encoding across various tasks <cit.>. §.§ Efficient Brick Computing Frameworks Decomposing LLMs into bricks allows for computation with only a fraction of the parameters, thereby reducing computational load. However, this approach also introduces additional time for brick selection and memory scheduling. Moreover, decoupling the computations of different bricks shows potential for distributed training. Consequently, to enhance the practicality of configurable foundation models, it is crucial to develop corresponding sparse and heterogeneous computing operators. These research directions are vital for optimizing the efficiency and effectiveness of configurable LLMs, making them more feasible and scalable in diverse computational environments. Sparse Operator We have introduced numerous bricks. However, to handle specific inputs, we do not need to use all bricks. If only the bricks that are most effective for specific inputs are used and other bricks are ignored, the computational cost can be significantly reduced. However, if the size of the brick is small, sequentially calling multiple bricks will result in a lower utilization rate of CUDA computing units. 
Therefore, <cit.> and <cit.> cluster bricks that are frequently used simultaneously and fuse them into one kernel for parallel execution. <cit.> implements sparse operators dynamically based on actual input to aggregate bricks. Given an input, whether a brick is suitable or not generally needs to be judged based on the actual activation value after the calculation of the brick. To avoid these calculations, the above solutions need to perform statistical analysis on a large amount of data beforehand to quickly predict the brick selection plan based on the input. However, during the training process, the parameters of the brick continue to change, and the applicability for input also changes dynamically. The above solutions that require prior statistical analysis can only be applied to the inference stage after the model is fixed, how to apply them to the training stage remains a challenging problem. Heterogeneous Operator Due to the independence between bricks, bricks can be distributed across different machines for collaborative training and inference. Gshard <cit.> and Switch Transformers <cit.> leverage the MoE architecture <cit.> to distribute multiple bricks across different GPUs for parallel pre-training, efficiently scaling up the model size. In particular, the parameter count of the Switch Transformer has reached the trillion level, which is far beyond the size of single-module models. Recent work <cit.> has attempted to solve the problem of load imbalance across different GPUs during MoE training by optimizing brick allocation strategies and scheduling schemes. During inference, we can place core bricks on servers and custom bricks on user machines <cit.>. In this way, users can conveniently adjust the sub-functions of the model according to their personalized needs, while leaving the core, general, and computation-intensive modules to be computed by the model service provider. On the other hand, when the personalized modules and core modules are placed on different machines, more personalized problems such as how to avoid privacy concerns when transfering private data over the network and how to reduce the inference latency caused by cross-machine communication remain to be solved. §.§ Multi-Model Cooperation System In configurable LLMs, individual bricks collaborate to complete complex instructions. However, building bricks from scratch requires the collection of massive data and consumes significant computational resources. In the rapidly evolving AI community, numerous researchers have open-sourced various pre-trained models with unique capabilities, such as image generation, speech recognition, etc. Reusing and combining these models as model-level bricks can cost-effectively construct a multimodal system capable of handling complex instructions <cit.>. As discussed in <ref> and <ref>, there have already been many attempts to combine multiple models to achieve composite capabilities. For example, different modality models are concatenated to achieve multimodal understanding and generation, or different models act as agents that interact with each other through human-readable signals. However, implementing a multi-model cooperation system still faces the following challenges: Scalable Cooperation Most current works focus on the cooperation of a limited number of models and adapt each model to the entire system, often requiring training of the whole system, which incurs significant overhead. 
Therefore, designing a highly scalable multi-model framework is an important future direction, which enables the system to efficiently integrate an independently trained model into this multi-model system. Effective Scheduling and Communication A complex instruction requires different models to perform their duties and coordinate with each other, necessitating that the multi-model system effectively schedules different models and ensures efficient information communication between them. Using human-readable signals for information exchange among different models often leads to information loss. However, the representational spaces of different models vary significantly, and direct interaction using intermediate representations makes it difficult for models to understand each other. To effectively address this issue, one possible approach is to introduce an intermediary model that acts as a bridge and information relay between the different models. Another possible approach is to design a unified intermediate representation form for interactions between different models. Overall, achieving efficient collaboration in complex multi-model systems is an important topic that warrants further research. § CONCLUSION In this paper, we explore configurable foundation models that consist of emergent bricks generated during pre-training and customized bricks created during post-training. We first describe how the bricks constituting a foundational model are trained and further discuss the capabilities of bricks at different granularities. We summarize the advantages of decomposing the foundation models from a modular perspective, including computational efficiency, parameter reusability, traceable results, sustainable capability growth, and optimization for distributed computing. Furthermore, we define four fundamental operations for configurable bricks, including routing and retrieval, brick combination, brick growing, and brick updating. These four operations enable the completion of complex instructions even when each brick is responsible for a single capability. Finally, we discuss the open problems and challenges that remain unresolved for configurable foundation models. We hope this paper will stimulate further research to construct more efficient and scalable foundation models from a modular perspective. § CONTRIBUTIONS The contributions of the authors are listed as follows: Chaojun Xiao, Zhengyan Zhang, Xu Han, Zhiyuan Liu, and Maosong Sun initiated and organized the research. Chaojun Xiao drafted the abstract. Chaojun Xiao, Zhengyan Zhang, and Xu Han drafted the introduction. Zhengyan Zhang, Feng Yao, and Xiaozhi Wang drafted <ref>. Chaojun Xiao and Xiaozhi Wang drafted <ref>. Chenyang Song and Shuo Wang drafted <ref>. Chaojun Xiao and Yuge Tu drafted <ref>. Yufei Huang drafted <ref>. Chaojun Xiao drafted <ref>. Yingfa Chen drafted <ref>. Chenyang Song drafted <ref>. Chaojun Xiao, Dazhi Jiang, and Chenyang Song conducted experiments for <ref>. Feng Yao drafted <ref>. Chaojun Xiao drafted <ref>. Guanyu Lin and Chaojun Xiao drafted <ref>. Weilin Zhao drafted <ref>. Chaojun Xiao drafted <ref>. Chaojun Xiao drafted the conclusion. Jingbo Shang, Huimin Chen, Yankai Lin, Zexuan Zhong, Ao Zhang, and Chenglei Si proofread the paper and provided valuable feedback on the paper structure. Khai Hao Moo and Chenyang Zhao proofread the paper and provided valuable suggestions for grammar correction. citation
http://arxiv.org/abs/2409.03670v1
20240905162413
Loop corrections for hard spheres in Hamming space
[ "Abolfazl Ramezanpour", "Saman Moghimi-Araghi" ]
cond-mat.dis-nn
[ "cond-mat.dis-nn", "cond-mat.stat-mech", "cs.DM", "math-ph", "math.MP" ]
aramezanpour@gmail.com Department of Physics, College of Science, Shiraz University, Shiraz 71454, Iran Medical Systems Biophysics and Bioengineering, Leiden Academic Centre for Drug Research, Faculty of Science, Leiden University, Leiden, The Netherlands samanimi@sharif.edu Physics Department, Sharif University of Technology, P.O. Box 11155-9161 Tehran, Iran § ABSTRACT We begin with an exact expression for the entropy of a system of hard spheres within the Hamming space. This entropy relies on probability marginals, which are determined by an extended set of Belief Propagation (BP) equations. The BP probability marginals are functions of auxiliary variables which are introduced to model the effects of loopy interactions on a tree-structured interaction graph. We explore various reasonable and approximate probability distributions, ensuring they align with the exact solutions of the BP equations. Our approach is based on an ansatz of (in)homogeneous cavity marginals respecting the permutation symmetry of the problem. Through thorough analysis, we aim to minimize errors in the BP equations. Our findings support the conjecture that the maximum packing density asymptotically conforms to the lower bound proposed by Gilbert and Varshamov, further validated by the solution of the loopy BP equations. Loop corrections for hard spheres in Hamming space Saman Moghimi-Araghi September 9, 2024 ================================================== § INTRODUCTION Finding the maximum packing density of hard spheres is a challenging problem in physics and coding theory, especially in high dimensions <cit.>. For example, exact solutions are only known for very small and specific dimensions like 1, 2, 3, 8, and 24<cit.>. Currently, there's a significant gap between the best lower and upper bounds on the maximum packing density in very high dimensions <cit.>. One approach to tackle this problem is to start with a crystalline structure made of elementary cells with a flexible basis. These cells form a periodic density distribution of the spheres, determined by the shape of the elementary cell and the arrangement of the spheres in the basis. By adjusting these parameters, we can find an upper bound for the maximum packing density that satisfies certain equations for the spatial density distribution. On the other hand, the problem can be understood as mapping to a physical system of interacting particles, wherein the relevant packing configurations, weighted by the Boltzmann factor, are examined as the number density of hard spheres increases. Mean-field approximations are commonly employed to construct a phase diagram of the system, particularly in very high (infinite) dimensions where this approximation is expected to perform well if handled with care <cit.>. In the following, we will focus on the packing problem within the binary Hamming space. At zero temperature, this translates into a constraint satisfaction problem where any two spheres cannot overlap. The Bethe approximation (cavity method) has been utilized to estimate the maximum packing density in the Hamming space. Nevertheless, the degree to which these results accurately reflect the behaviour as the dimensionality approaches infinity in this fully-connected model with numerous interconnected interactions remains uncertain. In this study we try a different approach by first introducing some auxiliary variables to map the original problem to an extended problem with a tree interaction graph where the Bethe approximation is exact <cit.>. 
Then we write an exact expression for the entropy in terms of the (unique) solution to the Bethe, or Belief Propagation (BP), equations <cit.>. To solve these equations and estimate the system's entropy, we make reasonable and manageable approximations. The structure of the paper is as follows. We begin with a more precise statement of the problem and a summary of the results. In Sec. <ref> we present the mapping to a tree interaction graph and write the associated Bethe equations and the entropy. Section <ref> is devoted to finding approximate BP solutions, starting with a naive homogeneous (liquid) solution and ending with inhomogeneous solutions which respect the permutation symmetry of the problem. The concluding remarks and some details of the results are given in Sec. <ref> and Appendices <ref>-<ref>, respectively. §.§ The problem statement Consider the binary Hamming space in n dimensions. In other words, an n-dimensional discrete space where each dimension can take either zero or one. We want to place N hard spheres of diameter d on the points of this lattice in way that the spheres do not overlap. However, if they have no overlap, they are considered completely non-interacting. Placing a hard sphere at any point σ⃗_i∈ (0,1)^n produces a forbidden or occupied space which is the set of discrete points within the ball of radius d at which the other hard spheres cannot be located. This volume is given by V_d(σ⃗_i)={σ⃗_j:D_ij<d}, as the distance between any two points in Hamming space is equal to D_ij=∑_a=1^n(σ_i^a-σ_j^a)^2. And if the coordinates of any two points are not identical in exactly q dimensions, their distance from each other is equal to D_ij=q. Thus, the number of inaccessible points is equal to V_d=|V_d(σ⃗_i)|=∑_l=0^d-1C(l:n), where C(l:n)=n!/(l!(n-l)!). Note that this number differs from the volume of a spherical shell with radius d. Let's write the partition function of this system. Since the spheres are considered non-interacting, the partition function of the system is just the number of possible configurations for the packing. Any arbitrary configuration of these N spheres in the Hamming space is represented by σ⃗={σ⃗_i ∈ (0,1)^n: i=1,⋯,N}. and thus, the partition function is given by Z_n(N,d)=∑_σ⃗∏_i<j𝕀(D_ij≥ d)=e^S_n(N,d). where the indicator function 𝕀(C)=1 if constraint C is satisfied, otherwise 𝕀(C)=0. As mentioned, the partition function represents the number of possible configurations for packing. Naturally, in a space of a certain size, if we increase the number of spheres, we reach a point where no valid configuration can be found and the spheres would surely have overlaps. We call the maximum number of spheres for which a valid configuration is found N_max. In other words, we must have Z_n(N_max,d)> 0,1cm Z_n(N_max+1,d)= 0. This quantity and other properties of the system depend on the space dimension n and the diameter considered for the spheres d. Increasing n enlarges the space, while increasing d reduces the number of states. We are interested in the case where d, n →∞ while their ratio remains constant and equal to δ. Within the Bethe approximation and assuming the replica symmetry <cit.>, the entropy S_n(N,d)=ln Z_n(N,d) is estimated by S_n^LBP(N,d)=∑_iΔ S_i-∑_i<jΔ S_ij. The entropy contributions of the nodes (spheres) Δ S_i and edges (interactions) Δ S_ij depend on the solutions to the Bethe equations η_i→ j(σ⃗_i) ∝∏_k i,j(∑_σ⃗_k𝕀(D_ik≥ d)η_k→ i(σ⃗_k)). 
Here the cavity marginal η_i→ j(σ⃗_i) is the probability of finding sphere i in position σ⃗_i in the absence interaction with sphere j. Given a fixed point of the above equation, one computes the entropy contributions e^Δ S_i = ∑_σ⃗_i∏_j i(∑_σ⃗_j𝕀(D_ij≥ d)η_j→ i(σ⃗_j)), e^Δ S_ij = ∑_σ⃗_i,σ⃗_j𝕀(D_ij≥ d)η_i→ j(σ⃗_i)η_j→ i(σ⃗_j). For a tree interaction graph, the Bethe equations with the unique solution η_i→ j(σ⃗_i)=1/2^n provide the exact entropy: S_n^T(N,d)=Nln(2^n)+(N-1)ln (1-V_d/2^n), which is greater than zero for any d<n, irrespective of the tree structure. The entropy obtained by the loopy Belief Propagation (LBP) equations with a homogeneous or liquid solution which respects the translation symmetry of the problem (η_i→ j(σ⃗_i)=1/2^n) is: S_n^LBP(N,d)=Nln(2^n)+N(N-1)/2ln (1-V_d/2^n). The maximum N here is given by N_max^LBP=1-2ln 2^n/ln (1-v_d)→ (2ln 2)n/v_d, where v_d=V_d/2^n. Note that the Gilbert-Varshamov (GV) lower bound states that N_max≥2^n/V_d=1/v_d=N_max^GV. It seems that even other solutions to the loopy Bethe equations do not result in an exponentially larger maximum number of spheres than the GV lower bound <cit.>. §.§ Summary of the results The main results of this paper are listed below: * The interaction graph of spheres is partitioned into a tree interaction graph and a set of induced loopy interactions. The effect of loopy interactions is represented by messages passing along the edges of the tree graph. We formulate the BP equations with these auxiliary variables in an extended space to obtain an exact expression for the entropy of the packing problem in the Hamming space. * A naïve ansatz for the BP probability marginals reproduces the entropy, previously obtained by the loopy Bethe equations (assuming replica symmetry or one step of replica symmetry breaking). This asymptotically coincides with the GV lower bound for the maximum number of spheres. * We observe that the aforementioned naïve ansatz asymptotically satisfies (“on average”) the extended BP equations. Similar results are obtained with other reasonable candidates for homogeneous (liquid) BP marginals, which are expected to be closer to the exact solution of the extended BP equations. * Numerically, we observe that an initially inhomogeneous solution to the BP equations approaches the homogeneous one as the number of spheres increases, starting from two neighboring spheres localized in the Hamming space. However, this does not conclusively prove that there is no packing density that asymptotically exceeds the GV lower bound. § AN EXACT TREE REPRESENTATION Consider a connected tree graph T of N nodes and the associated local branches or cavity trees T_i→ j. More precisely, T_i→ j is the subgraph that is obtained after removing edge (ij) with root node i. This can recursively be defined as follows T_i→ j=i ∪_k∈∂ i∖ j T_k→ i. Here ∂ i denotes the set of neighbors of node i in the graph. Figure <ref> displays such a tree interaction graph. The size of such cavity tree is denoted by N_i→ j=|T_i→ j|. For a given configuration σ⃗, let us define the cavity messages h_i→ j to represent the space occupied by all the spheres in the cavity tree T_i→ j∖ i, that is h_i→ j=∪_k∈ T_i→ j∖ i V_d(σ⃗_k). Or, in terms of the other incoming messages h_i→ j=∪_k∈∂ i ∖ j( h_k→ i∪ V_d(σ⃗_k) ). Equivalently, the messages h_i→ j can be considered as the positions of all the spheres in T_i→ j except sphere i. These message are defined to have access to the positions of all the spheres locally at each node of the tree graph T. 
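As a numerical aside, the scales quoted above are easy to evaluate. The sketch below (with δ=0.3 and a few moderate values of n chosen arbitrarily) computes v_d = V_d/2^n in the log domain and compares N_max^GV = 1/v_d with the liquid-solution zero N_max^LBP = 1 - 2 ln 2^n / ln(1 - v_d).

```python
import math

def log_V_d(n, d):
    """log of V_d = sum_{l=0}^{d-1} C(n, l), via a log-sum-exp over lgamma."""
    logs = [math.lgamma(n + 1) - math.lgamma(l + 1) - math.lgamma(n - l + 1)
            for l in range(d)]
    m = max(logs)
    return m + math.log(sum(math.exp(x - m) for x in logs))

def packing_scales(n, delta):
    d = int(delta * n)
    v_d = math.exp(log_V_d(n, d) - n * math.log(2.0))          # v_d = V_d / 2^n
    n_gv = 1.0 / v_d                                           # Gilbert-Varshamov scale
    n_lbp = 1.0 - 2.0 * n * math.log(2.0) / math.log1p(-v_d)   # zero of S_n^LBP
    return v_d, n_gv, n_lbp

for n in (100, 200, 400):
    v_d, n_gv, n_lbp = packing_scales(n, delta=0.3)
    print(f"n={n:4d}  v_d={v_d:.3e}  N_GV={n_gv:.3e}  N_LBP={n_lbp:.3e}  "
          f"ratio={n_lbp / n_gv:.1f}  2n*ln2={2 * n * math.log(2):.1f}")
```

The two scales differ only by the polynomial factor (2 ln 2) n, so they share the same exponential rate in n.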
Now with the extended set of variables σ⃗ and 𝐡={h_i→ j: i=1,…,N, j∈∂ i}, we rewrite the partition function of the sphere packing problem, Z_n(N,d)=∑_σ⃗∏_(ij)∈ T𝕀(D_ij≥ d) ×(∑_𝐡∏_i ∏_j∈∂ i𝕀(V_d(σ⃗_i)∩ h_j→ i=∅)𝕀(h_i→ j=∪_k∈∂ i∖ j h_k→ i∪ V_d(σ⃗_k))). The constraints are to ensure that the messages h_i→ j satisfy the necessary equations and at the same time any two spheres have a distance larger than or equal to d. Note that there is only one solution to the constraints on the messages h_i→ j for a given configuration σ⃗ of the spheres. The Bethe equations for the above problem are recursive equations for the cavity probability distributions μ_i→ j(σ⃗_⃗i⃗,h_i→ j:σ⃗_⃗j⃗,h_j→ i) along the directed edges of the tree graph. Here μ_i→ j(σ⃗_⃗i⃗,h_i→ j:σ⃗_⃗j⃗,h_j→ i) is the probability of having (σ⃗_⃗i⃗,h_i→ j) as the position of sphere i and the message h_i→ j conditioned on the values of (σ⃗_⃗j⃗,h_j→ i). In a tree graph, the incoming variables are independent of each other, therefore <cit.>, μ_i→ j(σ⃗_⃗i⃗,h_i→ j:σ⃗_⃗j⃗,h_j→ i) ∝𝕀(σ⃗_⃗i⃗,h_j→ i) ×∑_{σ⃗_k,h_k→ i:k∈∂ i∖ j}𝕀(h_i→ j)∏_k∈∂ i∖ j[𝕀(D_ik) 𝕀(σ⃗_⃗i⃗,h_k→ i)𝕀(h_i→ k)μ_k→ i(σ⃗_⃗k⃗,h_k→ i:σ⃗_⃗i⃗,h_i→ k)], where for brevity's sake, we defined 𝕀(σ⃗_⃗i⃗,h_k→ i) =𝕀(V_d(σ⃗_i)∩ h_k→ i=∅), 𝕀(D_ik) =𝕀(D_ik≥ d), 𝕀(h_i→ k) =𝕀(h_i→ k=∪_j∈∂ i∖ k h_j→ i∪ V_d(σ⃗_j)). In the following, the messages h_i→ j are called internal messages and the cavity probabilities μ_i→ j are called the external BP messages. In practice, the BP equations are solved by iteration in a random sequential way starting with initial BP messages μ_i→ j. Here the tree structure of the interaction graph insures that there a unique solution to the above equations. It is straightforward to start from the partition function Z_n(N,d) of the tree interaction graph T and relate the free entropy S_n(N,d) to the BP cavity marginals <cit.>. This can be written in term of the variables (nodes) and interactions (edges) contributions to the entropy S_n(N,d)=∑_i Δ S_i-∑_(ij)∈ TΔ S_ij, where e^Δ S_i=∑_σ⃗_i∑_{σ⃗_j,h_j→ i:j∈∂ i}∏_j∈∂ i[𝕀(D_ij) 𝕀(σ⃗_⃗i⃗,h_j→ i)𝕀(h_i→ j)μ_j→ i(σ⃗_⃗j⃗,h_j→ i:σ⃗_⃗i⃗,h_i→ j)], and e^Δ S_ij= ∑_σ⃗_i,h_i→ j,σ⃗_j,h_j→ i𝕀(D_ij) μ_i→ j(σ⃗_⃗i⃗,h_i→ j:σ⃗_⃗j⃗,h_j→ i)μ_j→ i(σ⃗_⃗j⃗,h_j→ i:σ⃗_⃗i⃗,h_i→ j). In the following we shall work with a tree structure T which is represented by a chain of interacting spheres. This allows us to simplify the BP equations and obtain simpler expressions for the BP cavity marginals and the entropy. §.§ Chain representation Here we assume that the spheres are arranged in a chain from i=1,…,N. Besides the σ⃗_i we need the messages h_i passing from left to right. Thus on each edge (i,i+1) we have three constraints: D_i,i+1≥ d, h_i+1=h_i∪ V_d(σ⃗_i), and h_i ∩ V_d(σ⃗_i+1)=∅. Let the indicator function 𝕀_i,i+1 represent these constraints. Then, the left-to-right BP messages μ_i(σ⃗_i,h_i) and right-to-left BP messages ν_i(σ⃗_i,h_i) are given by the following BP equation, μ_i(σ⃗_i,h_i) ∝∑_σ⃗_i-1,h_i-1𝕀_i-1,iμ_i-1(σ⃗_i-1,h_i-1), ν_i(σ⃗_i,h_i) ∝∑_σ⃗_i+1,h_i+1𝕀_i,i+1ν_i+1(σ⃗_i+1,h_i+1). The boundary messages are μ_1(σ⃗,h_1) = 1/2^nδ_h,∅, ν_N(σ⃗,h_N) = 1/2^n1/2^n(N-1). Here we are considering h_i as the set of positions of spheres j=1,…,i-1. In this way, the right-to-left messages are uniform distributions. Also away from the boundary ν_i(σ⃗,h) = 1/2^n1/2^n(i-1). 
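Before turning to the chain entropy, the construction can be cross-checked at toy sizes: for very small n and N the original partition function Z_n(N,d) can be enumerated directly and compared with the liquid loopy-BP expression quoted in the introduction. The sketch below (with n=4, d=2 chosen arbitrarily) is only such a numerical sanity check, not part of the derivation; for N=2 the two values coincide exactly, since a single pair constraint involves no loops.

```python
import math
from itertools import product

def exact_log_Z(n, N, d):
    """Brute-force ln Z_n(N,d): labelled N-tuples of points of {0,1}^n with all
    pairwise Hamming distances >= d."""
    points = list(product((0, 1), repeat=n))
    dist = lambda a, b: sum(x != y for x, y in zip(a, b))
    count = sum(
        1
        for cfg in product(points, repeat=N)
        if all(dist(cfg[i], cfg[j]) >= d
               for i in range(N) for j in range(i + 1, N))
    )
    return math.log(count) if count else float("-inf")

def lbp_log_Z(n, N, d):
    """Liquid loopy-BP estimate: N ln 2^n + N(N-1)/2 ln(1 - V_d/2^n)."""
    V_d = sum(math.comb(n, l) for l in range(d))
    return N * n * math.log(2) + 0.5 * N * (N - 1) * math.log(1 - V_d / 2 ** n)

n, d = 4, 2
for N in (2, 3):
    print(f"n={n}, d={d}, N={N}: exact ln Z = {exact_log_Z(n, N, d):.4f}, "
          f"liquid LBP = {lbp_log_Z(n, N, d):.4f}")
```

Beyond such toy sizes the enumeration cost grows as 2^(nN), and the exact representation developed in this section becomes the only practical handle.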
The entropy within the chain representation reads S_n(N,d)=∑_i=1^N Δ S_i-∑_i=1^N-1Δ S_i,i+1, where now the node contribution is e^Δ S_i=∑_σ⃗_i,h_i(∑_σ⃗_i-1,h_i-1𝕀_i-1,iμ_i-1(σ⃗_i-1,h_i-1))(∑_σ⃗_i+1,h_i+1𝕀_i,i+1ν_i+1(σ⃗_i+1,h_i+1)), and the edge contribution is given by e^Δ S_i,i+1= ∑_σ⃗_i,h_i,σ⃗_i+1,h_i+1𝕀_i,i+1μ_i(σ⃗_i,h_i)ν_i+1(σ⃗_i+1,h_i+1). In the following we shall work with the chain representation. § APPROXIMATE SOLUTIONS OF THE BETHE EQUATIONS §.§ A naive approximation of the BP messages Let us write the BP probability marginals μ_i(σ⃗_i,h_i) in term of probability of position of sphere i, p_i(σ⃗_i) and a conditional probability q_i(h_i:σ⃗_i), μ_i(σ⃗_i,h_i)= p_i(σ⃗_i) q_i(h_i:σ⃗_i), with h_i={σ⃗_j=1,…,i-1}. Then, by symmetry we take a uniform distribution for σ⃗_i, p_i(σ⃗_i)=1/2^n. We also assume that the probability distribution of the other spheres is factorized and uniform q_i(h_i:σ⃗_i)=∏_j=1^i-1𝕀(D_i,j≥ d)/2^n-V_d. In this way, for the node contributions to the entropy from Eq. <ref> we obtain e^Δ S_i=1/2^n+ni∑_l_1,l_2,l_12=d^n C(l_12:n) Ω(l_1,l_2:l_12) ×(1-2V_d-O_01(l_1)-O_02(l_2)-O_12(l_12)+2O_012(l_1,l_2,l_12)/2^n-V_d)^i-2. The other terms in the entropy come from the interactions along the chain and are obtained from Eq. <ref> e^Δ S_i,i+1= 1/2^n+ni∑_l=d^n C(l:n) (1-V_d-O_01(l)/2^n-V_d)^i-1. Here O_ij(l) is the overlap of two spheres of radius d at distance l. O_ijk(l_i,l_j,l_ij) is the overlap of three spheres of radius d when (i,j) have distance l_ij and the other sphere is at distances l_i,l_j form i and j, respectively. The function Ω(l_i,l_j:l_ij) is the number of possible points for the third sphere given the distances. See Fig. <ref> for a schematic representation of these quantities. Let us take the limit n,d→∞ with δ=d/n finite. All distances are scaled with n, for instance r_ij=l_ij/n. Then v_d=V_d/2^n=e^-n[ln(2)-H(δ)], with H(δ)=-δln(δ)-(1-δ) ln(1-δ) for the binary entropy, is an exponentially small quantity. In Appendix <ref> we obtain the asymptotic entropy contributions of the nodes and edges in the total entropy, Δ S_i ≃ nN(3/2ln 2-ṽ_d), Δ S_i,i+1 ≃ nN1/2(ln 2-ṽ_d), S^naive =Δ S_i-Δ S_i,i+1≃ nN(ln 2-1/2ṽ_d). where we defined ṽ_d=Nv_d/n. Therefore, the entropy vanishes at N_max^naive=(2ln 2)n/v_d, at the same point that the loopy BP entropy S_n^LBP(N,d) in Eq. <ref> goes to zero. §.§ Beyond the naive approximation Note that the probability distribution of the spheres is invariant under the permutation of the spheres. Therefore, by de Finetti theorem <cit.> the probabilities q_i(h_i:σ⃗_i) can well be approximated by a convex combination of factorized distributions, q_i(h_i:σ⃗_i)=∑_αc_i(α)∏_j=1^i-1f_i,j^α(σ⃗_j:σ⃗_i), where ∑_αc_i(α) =1, ∑_σ⃗_jf_i,j^α(σ⃗_j:σ⃗_i) =1. Recall that h_i is equivalent to the set of positions {σ⃗_j=1,…,i-1}. Let us for simplicity consider only one of the factorized terms. Then the BP equations are p_i(σ⃗_i)∏_j=1^i-1f_ij(σ⃗_j:σ⃗_i) ∝∑_σ⃗_i-1,h_i-1𝕀_i-1,i p_i-1(σ⃗_i-1)∏_j=1^i-2f_i-1,j(σ⃗_j:σ⃗_i-1). It means that p_i(σ⃗_i)f_i,i-1(σ⃗_i-1:σ⃗_i)∏_j=1^i-2f_i,j(σ⃗_j:σ⃗_i) = p_i-1(σ⃗_i-1)/z_i𝕀(D_i,i-1≥ d)∏_j=1^i-2𝕀(D_i,j≥ d)f_i-1,j(σ⃗_j:σ⃗_i-1), where the normalization factor z_i is z_i= ∑_σ⃗_i,h_i p_i-1(σ⃗_i-1)𝕀(D_i,i-1≥ d)∏_j=1^i-2𝕀(D_i,j≥ d)f_i-1,j(σ⃗_j:σ⃗_i-1). Taking p_i(σ⃗_i)=p_i-1(σ⃗_i-1)=1/2^n, the above equation suggests the following solution f_i,j(σ⃗_j:σ⃗_i)=𝕀(D_i,j≥ d)/2^n-V_d, for j=1,…,i-2, and f_i,i-1(σ⃗_i-1:σ⃗_i)=𝕀(D_i,i-1≥ d)g_i,i-1(σ⃗_i-1:σ⃗_i), for j=i-1. 
Given the {f_i-1,j:j=1,…,i-2}, the function g_i,i-1(σ⃗_i-1:σ⃗_i) should be chosen according to the following constraints, ∑_σ⃗_i-1g_i,i-1(σ⃗_i-1:σ⃗_i)𝕀(D_i,i-1≥ d) =1, g_i,i-1(σ⃗_i-1:σ⃗_i) =(2^n-V_d)^i-2/z_i∏_j=1^i-2f_i-1,j(σ⃗_j:σ⃗_i-1). The equations can not be true for any configuration. To overcome the problem of dependence on σ⃗_j, we can simply average over these variables using a uniform measure, leading to g_i,i-1(σ⃗_i-1:σ⃗_i) = (2^n-V_d)^i-2/z_i⟨∏_j=1^i-2f_i-1,j(σ⃗_j:σ⃗_i-1)⟩ =(1-v_d)^i-2/z_i∑_σ⃗_i-2 g_i-1,i-2(σ⃗_i-2:σ⃗_i-1)𝕀(D_i-1,i-2≥ d) =(1-v_d)^i-2/z_i. In this way z_i=2^n(1-v_d)^i-1, thus f_i,i-1(σ⃗_i-1:σ⃗_i)=𝕀(D_i,i-1≥ d)/2^n-V_d. It means that all the f_i,j for j=1,…,i-1 are given by the expression we used in the naive approximation of the messages. Moreover, the f_i,j are the same function, as expected from the permutation symmetry of the problem. Note that given the above f_i,j, according to Eq. <ref>, z_i=1/(2^n-V_d)^i-2∑_l_12=d^nC(l_12:n) (∑_l_1,l_2=d^n Ω(l_1,l_2:l_12))^i-2. For n→∞, after ignoring the exponentially small overlaps, z_i≃1/(2^n-V_d)^i-2(2^n-V_d)(2^n-2V_d)^i-2=2^n(1-v_d)(1-V_d/2^n-V_d)^i-2≃ 2^n(1-v_d)^i-1, which is consistent with the expression we obtained for this quantity after Eq. <ref>. §.§ Minimizing the BP errors In this subsection, we look for a better solution for f_i,j(σ⃗_j:σ⃗_i)=𝕀(D_i,j≥ d)g_i,j(σ⃗_j:σ⃗_i) by minimizing the distance ℒ[{g_i,j=1,…,i-1}]=∑_{σ⃗_j=1,…,i-1:D_i,j≥ d}( ∏_j=1^i-1g_i,j-1/z_i∏_j=1^i-2𝕀(D_i-1,j≥ d)g_i-1,j)^2, constrained with ∑_σ⃗_j:D_i,j≥ dg_i,j=1. The stationary equations for the g_i,j(σ⃗_j:σ⃗_i) result in λ_i,j=g_i,j(σ⃗_j:σ⃗_i)∏_k=1:k≠ j^i-1(∑_σ⃗_k:D_i,k≥ dg_i,k^2) -1/z_i∑_σ⃗_i-1:D_i,i-1,D_i-1,j≥ dg_i,i-1g_i-1,j∏_k=1:k≠ j^i-2(∑_σ⃗_k:D_i,k,D_i-1,k≥ dg_i,kg_i-1,k), for j=1,…,i-2, and λ_i,i-1=g_i,i-1(σ⃗_i-1:σ⃗_i)∏_k=1^i-2(∑_σ⃗_k:D_i,k≥ dg_i,k^2)-1/z_i∏_k=1^i-2(∑_σ⃗_k:D_i,k≥ d,D_i-1,k≥ dg_i-1,kg_i,k). The Lagrange multipliers λ_ij are to ensure the normalization constraints. Let us assume that the g_i,j depend only on the Hamming distances D_i,j. By symmetry we also assume that g_i,j=g_i is the same for all j=1,…,i-1 and consider only the later equation for g_i,i-1. In this way, from Eq. <ref> we get g_i(l)=λ_i+1/z_i(∑_l_1,l_2=d^n Ω(l_1,l_2:l)g_i-1(l_1)g_i(l_2))^i-2/(∑_l_1=d^nC(l_1:n)g_i(l_1)^2)^i-2, z_i=∑_l=d^nC(l:n) (∑_l_1,l_2=d^nΩ(l_1,l_2:l)g_i-1(l_1) )^i-2, and for the Lagrange multiplier λ_i=1/2^n-V_d(∑_l_1=d^nC(l_1:n)g_i(l_1)^2)^i-2 -1/z_i(2^n-V_d)∑_l=d^nC(l:n)(∑_l_1,l_2=d^n Ω(l_1,l_2:l)g_i-1(l_1)g_i(l_2))^i-2. The above equations can be rewritten in a compact form as g_i(l)=1/2^n-V_d+(X_i(l)-⟨ X_i(l)⟩), where we defined X_i(l)=1/z_i( ∑_l_1,l_2=d^n Ω(l_1,l_2:l)g_i-1(l_1)g_i(l_2)/∑_l_1=d^nC(l_1:n)g_i(l_1)^2)^i-2, and ⟨ X_i(l)⟩=1/2^n-V_d∑_l=d^nC(l:n)X_i(l). Given the g_i-1(l), the equations can be solved by iteration for g_i(l), starting from g_2(l)=1/(2^n-V_d). We see that the trial solution g_i(l)=1/(2^n-V_d), gives ⟨ X_i(l)⟩ =1/2^n-V_d, and X_i(l)=(1-v_d1-o_ij(l)/1-v_d)^i-2/∑_l=d^nC(l:n)(1-v_d1-o_ij(l)/1-v_d)^i-2, where v_d=V_d/2^n and o_ij=O_ij/V_d are exponentially small quantities. Approximating (1-o_ij(l))/(1-v_d)≈ 1 is therefore consistent with the solution g_i(l)=1/(2^n-V_d) obtained in the previous subsection. §.§ Considering the permutation symmetry Let us consider the general case where the conditional probability marginal in the BP messages is a convex combination of factorized forms q_i(h_i:σ⃗_i)=∑_αc_i(α)∏_j=1^i-1( 𝕀(D_i,j≥ d)g_i,j^α(σ⃗_j:σ⃗_i) ). 
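Before developing this permutation-symmetric ansatz, we note that the compact distance-only fixed point g_i(l)=1/(2^n-V_d)+(X_i(l)-⟨X_i(l)⟩) derived in the previous subsection is easy to iterate numerically at small n. The sketch below is only an illustration: it implements C(l:n) and Ω(l_1,l_2:l_12) from the explicit binomial expressions given in Appendix <ref>, takes V_d as the volume within distance d-1 (a boundary convention that is immaterial here), and solves for g_i by damped iteration starting from the liquid value.

```python
from math import comb
import numpy as np

def C(k, n):
    """The text's C(k : n): number of points at Hamming distance k from a fixed point."""
    return comb(n, k) if 0 <= k <= n else 0

def Omega(l1, l2, l12, n):
    """Points at distance l1 from x and l2 from y, for x, y at distance l12
    (explicit binomial expression from Appendix A)."""
    a, b = l1 - l2 + l12, l1 + l2 - l12
    if a < 0 or b < 0 or a % 2 or b % 2:
        return 0
    return C(a // 2, l12) * C(b // 2, n - l12)

def sphere_volume(n, d):
    """V_d, taken here as the points within distance d-1 of a centre (a convention)."""
    return sum(C(l, n) for l in range(d))

def solve_g(n, d, i_max=8, sweeps=300, damping=0.5):
    """Damped iteration for g_i(l), i = 3..i_max, starting from the liquid value."""
    ells = np.arange(d, n + 1)
    Cn = np.array([C(l, n) for l in ells], dtype=float)
    Om = np.array([[[Omega(l1, l2, l, n) for l2 in ells] for l1 in ells] for l in ells],
                  dtype=float)                                 # Om[l, l1, l2]
    liquid = 1.0 / (2**n - sphere_volume(n, d))
    g = {2: np.full(len(ells), liquid)}                        # g_2(l) = 1/(2^n - V_d)
    for i in range(3, i_max + 1):
        z = np.sum(Cn * (Om.sum(axis=2) @ g[i - 1]) ** (i - 2))
        gi = np.full(len(ells), liquid)
        for _ in range(sweeps):
            num = np.einsum("lab,a,b->l", Om, g[i - 1], gi)    # sum over l1 (a) and l2 (b)
            den = np.dot(Cn, gi**2)
            X = (num / den) ** (i - 2) / z
            gi = damping * gi + (1 - damping) * (liquid + X - liquid * np.dot(Cn, X))
        g[i] = gi
    return ells, liquid, g

ells, liquid, g = solve_g(n=10, d=3)
print("max |g_8(l)/liquid - 1| =", np.max(np.abs(g[8] / liquid - 1)))
```

Consistent with the discussion above, the iterate stays close to the liquid value, the residual deviation being of the order of the neglected overlap corrections.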
Then we try to minimize the consistency error of the above trail solutions in the BP equations ℒ[{c_i^α,g_i,j=1,…,i-1^α}] =∑_{σ⃗_j=1,…,i-1:D_i,j≥ d}( ∑_αc_i(α)∏_j=1^i-1g_i,j^α-1/z_i∑_αc_i-1(α)∏_j=1^i-2𝕀(D_i-1,j≥ d)g_i-1,j^α)^2, with respect to the variables c_i(α), and g_i,j^α. The stationary equations for the unknown variables lead to the following expressions λ_i=∑_α'c_i(α')∏_j=1^i-1(∑_σ⃗_j:D_i,j≥ dg_i,j^αg_i,j^α') -1/z_i∑_α'c_i-1(α')∑_σ⃗_i-1:D_i,i-1≥ dg_i,i-1^α∏_j=1^i-2(∑_σ⃗_j:D_i,j,D_i-1,j≥ dg_i,j^αg_i-1,j^α'), and λ_ij^α=c_i(α)∑_α'c_i(α')g_i,j^α'∏_k=1:k≠ j^i-1(∑_σ⃗_k:D_i,k≥ dg_i,k^αg_i,k^α') -1/z_ic_i(α)∑_α'c_i-1(α')∑_σ⃗_i-1:D_i,i-1≥ dg_i,i-1^αg_i-1,j^α'∏_k=1:k≠ j^i-2(∑_σ⃗_k:D_i,k,D_i-1,k≥ dg_i,k^αg_i-1,k^α'), if j=1,…,i-2, and for j=i-1 λ_i,i-1^α=c_i(α)∑_α'c_i(α')g_i,i-1^α'∏_k=1^i-2(∑_σ⃗_k:D_i,k≥ dg_i,k^αg_i,k^α') -1/z_ic_i(α)∑_α'c_i-1(α')∏_k=1^i-2(∑_σ⃗_k:D_i,k,D_i-1,k≥ dg_i,k^αg_i-1,k^α'). The Lagrange multipliers λ_i and λ_ij^α ensure that ∑_αc_i(α) =1, ∑_σ⃗_j:D_i,j≥ dg_i,j^α(σ⃗_j:σ⃗_i) =1. To be specific, we take ∑_αc_i(α)[…]=∑_l_j=1,…,i-1=d^nc_i(l_1,…,l_i-1)1/(i-1)!∑_P[…]. This means that the i-1 spheres are distributed in distances l_1,…,l_i-1, and to ensure the permutation symmetry we sum over all the (i-1)! permutations with equal weights. More precisely, now the the conditional part of the BP messages read as follows q_i(h_i:σ⃗_i)=∑_l_j=1,…,i-1≥ dc_i(l_1,…,l_i-1)1/(i-1)!∑_P∏_j=1^i-1g_i,j^l_Pj(σ⃗_j:σ⃗_i), where as before h_i={σ⃗_1:i-1}, and j'=Pj shows the effect of permutation P on j. By symmetry, the coefficients c_i(l_1,…,l_i-1)=c_i({l_1:i-1}) depend on the number of spheres at distance l. As before, we assume that the g_i,j^l_Pj depend on Hamming distances D_i,j and consider a complete and normalized representation g_i,j^l_Pj(σ⃗_j:σ⃗_i)=1/w(l_Pj)δ_l_ij,l_Pj. This fixes the values of the g_i,j variables in the cavity marginals. We also defined w(l)=C(l:n) for the number of points at distance l from an arbitrary point in the Hamming space. In this way, the stationary equations are given by λ_i=1/(i-1)!(i-1)!∑_{l_1:i-1'}∑_P,P'c_i({l_1:i-1'})∏_j=1^i-1(δ_l_Pj,l_P'j'/w(l_Pj)) -1/z_i1/(i-1)!(i-2)!∑_{l_1:i-2'}∑_P,P'c_i-1({l_1:i-2'})∑_σ⃗_i-1:D_i,i-1≥ dg_i,i-1^l_P(i-1)∏_j=1^i-2(δ_l_Pj,l_P'j'/w(l_Pj)), which can be simplified to λ_i=1/∏_j=1^i-1w(l_j)c_i({l_1:i-1}) -1/z_i1/(i-1)∑_k=1^i-11/∏_j kw(l_j)c_i-1({l_1:i-1}∖ l_k). Or, in terms of the c_i variables we have c_i({l_1:i-1})=λ_i∏_j=1^i-1w(l_j) +1/z_i1/(i-1)∑_k=1^i-1w(l_k)c_i-1({l_1:i-1}∖ l_k). On the other hand, from the normalization constraints ∑_l_1,…,l_i-1=d^n c_i({l_1:i-1}) =1, 1/2^n∑_σ⃗_i,{σ⃗_1:i-1}q_i({σ⃗_1:i-1}:σ⃗_i) =1, one obtains λ_i=1/(2^n-V_d)^i-1(1-2^n-V_d/z_i), and z_i=∑_l_1,…,l_i-1=d^n c_i-1({l_1:i-2})w(l_i-1)∏_j=1^i-2(∑_l'=d^nΩ(l',l_j:l_i-1)/w(l_j) ). Finally, after some straightforward algebra, we obtain a recursive equation for the c_i in terms of the previously defined quantities c_i({l_1:i-1})=∏_j=1^i-1(w(l_j)/2^n-V_d) +∑_k=1^i-1w(l_k)c_i-1({l_1:i-1}∖ l_k)/(i-1)-(2^n-V_d)∏_j=1^i-1(w(l_j)/(2^n-V_d))/∑_l_1',…,l_i-1'=d^n c_i-1({l_1:i-2'})w(l_i-1')∏_j=1^i-2(∑_l'=d^nΩ(l',l_j':l_i-1')/w(l_j') ). A reasonable initial condition for the above equation is c_2(l_1)=w(l_1)/2^n-V_d. This makes the second term for computing c_3(l_1,l_2) zero and results in c_3(l_1,l_2)=w(l_1)/2^n-V_dw(l_2)/2^n-V_d. The same happens for other coefficients and we obtain c_i({l_1:i-1})=∏_j=1^i-1(w(l_j)/2^n-V_d), which is equivalent to a uniform distribution of the sphere positions {σ⃗_1:i-1}. 
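The cancellation claimed above is easy to verify numerically: with c_2(l)=w(l)/(2^n-V_d), the numerator of the correction term in the recursion for c_3(l_1,l_2) vanishes identically, so the factorized (liquid) form propagates. A minimal check (ours; the identity is algebraic, so it does not depend on the boundary convention used for V_d):

```python
# Check that, with c_2(l) = w(l)/(2^n - V_d), the correction term in the
# recursion for c_3(l1, l2) vanishes, so c_3 reduces to the factorized form.
from math import comb

n, d = 10, 3
Vd = sum(comb(n, l) for l in range(d))      # boundary convention as before
denom = 2**n - Vd

def w(l):
    return comb(n, l)

def c2(l):
    return w(l) / denom

for l1 in range(d, n + 1):
    for l2 in range(d, n + 1):
        correction = (0.5 * (w(l1) * c2(l2) + w(l2) * c2(l1))
                      - denom * (w(l1) / denom) * (w(l2) / denom))
        assert abs(correction) < 1e-9
print("c_3(l1, l2) reduces to the factorized (liquid) form, as claimed.")
```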
From the above solutions we obtain the node contributions to the entropy Eq. (<ref>), e^Δ S_i=1/2^n(i+2)∑_l_1,…,l_i-2=d^n c_i-1({l_1:i-2})1/(i-2)!∑_P ∑_σ⃗_i-1,σ⃗_i,σ⃗_i+1:D_i,i-1,D_i,i+1,D_i-1,i+1≥ d∏_j=1^i-2(∑_σ⃗_j:D_j,i-1,D_j,i,D_j,i+1≥ d g_i-1,j^l_Pj(σ⃗_j:σ⃗_i-1)), which simplifies to e^Δ S_i=1/2^n(i+1)∑_l_1,l_2,l_3=d^n C(l_1:n)Ω(l_2,l_3:l_1)(1/2^n-V_d∑_l_i-1,l_i,l_i+1=d^n Y(l_i-1,l_i,l_i+1:l_1,l_2,l_3))^i-2. The edge contributions are obtained from Eq. (<ref>), e^Δ S_i,i+1= 1/2^n(i+2)∑_l_1,…,l_i-1=d^n c_i({l_1:i-1})1/(i-1)!∑_P ∑_σ⃗_i,σ⃗_i+1:D_i,i+1≥ d∏_j=1^i-1(∑_σ⃗_j:D_j,i,D_j,i+1≥ d g_i,j^l_Pj(σ⃗_j:σ⃗_i)), or, after simplification e^Δ S_i,i+1=1/2^n(i+1)∑_l=d^n C(l:n)(1/2^n-V_d∑_l_i,l_i+1=d^n Ω(l_i,l_i+1:l))^i-1. As expected, these are identical expressions to those obtained for the entropy using the naive approximation of the BP messages. §.§ Inhomogeneous solutions Let us break the translation symmetry and look for more general solutions to the BP equations μ_i(σ⃗_i,h_i)= p_i(σ⃗_i) q_i(h_i:σ⃗_i) where p_i(σ⃗_i) is not necessarily 1/2^n. Assuming again that the conditional part is given by q_i(h_i:σ⃗_i)=∑_αc_i(α)∏_j=1^i-1( 𝕀(D_i,j≥ d)g_i,j^α(σ⃗_j:σ⃗_i) ), we minimize the resulted error in the BP equations ℒ[p_i(σ⃗_i),{c_i^α,g_i,j=1,…,i-1^α}]=∑_σ⃗_i,{σ⃗_j=1,…,i-1:D_i,j≥ d} ( p_i(σ⃗_i)∑_αc_i(α)∏_j=1^i-1g_i,j^α-1/z_ip_i-1(σ⃗_i-1)∑_αc_i-1(α)∏_j=1^i-2𝕀(D_i-1,j≥ d)g_i-1,j^α)^2. The main equations and details of computations for this general case are given in Appendix <ref>. In the following, we continue with the last expression for the q_i(h_i:σ⃗_i) in the previous subsection which respects the permutation symmetry, that is q_i(h_i:σ⃗_i)=∑_l_j=1,…,i-1≥ dc_i(l_1,…,l_i-1)1/(i-1)!∑_P∏_j=1^i-1g_i,j^l_Pj(σ⃗_j:σ⃗_i). As before g_i,j^l_Pj(σ⃗_j:σ⃗_i)=1/w(l_Pj)δ_l_ij,l_Pj is fixed and we have only two stationary equations for the p_i(σ⃗_i) and c_i({l_1:i-1}). Derivation of the error function Eq. (<ref>) with respect to the p_i(σ⃗_i) in presence of the normalization constraints leads to p_i(σ⃗_i)=γ_i' +1/∑_{l_1:i-1≥ d}c_i^2({l_1:i-1})/∏_j=1^i-1w(l_j)∑_{l_1:i-1≥ d}c_i({l_1:i-1})/∏_j=1^i-1w(l_j) ×1/(i-1)!∑_P1/z_i∑_{l_1:i-2'≥ d}c_i-1({l_1:i-2'}) ∑_σ⃗_i-1:D_i,i-1=l_P(i-1)p_i-1(σ⃗_i-1) ∏_j=1^i-2( Ω(l_Pj,l_j':l_P(i-1))/w(l_j')). The stationary equations for the c_i read as follows c_i({l_1:i-1})=λ_i'∏_j=1^i-1w(l_j) +1/∑_σ⃗_ip_i^2(σ⃗_i)∑_σ⃗_ip_i(σ⃗_i) ×1/(i-1)!∑_P1/z_i∑_{l_1:i-2'≥ d}c_i-1({l_1:i-2'})∑_σ⃗_i-1:D_i,i-1=l_P(i-1)p_i-1(σ⃗_i-1)∏_j=1^i-2( Ω(l_Pj,l_j':l_P(i-1))/w(l_j')). The normalization factor z_i is z_i=∑_{l_1:i-2'≥ d}^n c_i-1({l_1:i-2'})∑_σ⃗_i,σ⃗_i-1:l_i,i-1≥ dp_i-1(σ⃗_i-1)∏_j=1^i-2(∑_l=d^nΩ(l,l_j':l_i,i-1)/w(l_j')), and the coefficients γ_i' and λ_i' are Lagrange multipliers. Note that the first terms of the two equations are expected from a homogeneous solution and in both equations we have a term like Q(σ⃗_i,{l_1:i-1})≡1/(i-1)!∑_P1/z_i∑_{l_1:i-2'≥ d}c_i-1({l_1:i-2'}) ×∑_σ⃗_i-1:D_i,i-1=l_P(i-1)p_i-1(σ⃗_i-1)∏_j=1^i-2( Ω(l_Pj,l_j':l_P(i-1))/w(l_j')). Normalization conditions give the Lagrange multipliers γ_i' =1/2^n(1- 1/∑_{l_1:i-1≥ d}c_i^2({l_1:i-1})/∏_j=1^i-1w(l_j)∑_{l_1:i-1≥ d}c_i({l_1:i-1})/∏_j=1^i-1w(l_j)∑_σ⃗_i Q(σ⃗_i,{l_1:i-1}) ), λ_i' =1/(2^n-V_d)^i-1(1-1/∑_σ⃗_ip_i^2(σ⃗_i)∑_σ⃗_ip_i(σ⃗_i)∑_{l_1:i-1≥ d}Q(σ⃗_i,{l_1:i-1}) ). 
We rewrite the above equations for the p_i and c_i in a more compact form p_i(σ⃗_i) =1/2^n+Δ p_i(σ⃗_i), c_i({l_1:i-1}) =∏_j=1^i-1(w(l_j)/2^n-V_d)+Δ c_i({l_1:i-1}), with deviations Δ p_i,Δ c_i from the liquid solution defined by Δ p_i(σ⃗_i)≡1/∑_{l_1:i-1≥ d}c_i^2({l_1:i-1})/∏_j=1^i-1w(l_j) ×∑_{l_1:i-1≥ d}c_i({l_1:i-1})/∏_j=1^i-1w(l_j)(Q(σ⃗_i,{l_1:i-1})-1/2^n∑_σ⃗_i'Q(σ⃗_i',{l_1:i-1})), and Δ c_i({l_1:i-1})≡1/∑_σ⃗_ip_i^2(σ⃗_i)∑_σ⃗_ip_i(σ⃗_i) ×(Q(σ⃗_i,{l_1:i-1})-∏_j=1^i-1(w(l_j)/2^n-V_d)∑_{l_1:i-1'≥ d}Q(σ⃗_i,{l_1:i-1'})). Let us rewrite the equations for the p_i and c_i as [p_i(σ⃗_i)-γ_i']( ∑_{l_1:i-1≥ d}c_i^2({l_1:i-1})/∏_j=1^i-1w(l_j)) =∑_{l_1:i-1≥ d}c_i({l_1:i-1})/∏_j=1^i-1w(l_j)Q(σ⃗_i,{l_j}), [c_i({l_1:i-1})-λ_i'∏_j=1^i-1w(l_j)](∑_σ⃗_ip_i^2(σ⃗_i)) =∑_σ⃗_ip_i(σ⃗_i)Q(σ⃗_i,{l_1:i-1}). Now, we multiply the first equation by p_i(σ⃗_i) and sum over σ⃗_i. Using the normalization condition and the second equation we get [∑_σ⃗_ip_i^2(σ⃗_i)-γ_i']( ∑_{l_1:i-1≥ d}c_i^2({l_1:i-1})/∏_j=1^i-1w(l_j)) =(∑_σ⃗_ip_i^2(σ⃗_i))∑_{l_1:i-1≥ d}c_i({l_1:i-1})/∏_j=1^i-1w(l_j)[c_i({l_1:i-1})-λ_i'∏_j=1^i-1w(l_j)]. Simplifying the equation results in γ_i'( ∑_{l_1:i-1≥ d}c_i^2({l_1:i-1})/∏_j=1^i-1w(l_j)) =λ_i'(∑_σ⃗_ip_i^2(σ⃗_i)), where we also used normalization condition ∑_{l_1:i-1≥ d}c_i({l_1:i-1})=1. For the liquid case p_i(σ⃗_i) =1/2^n, c_i({l_1:i-1}) =∏_j=1^i-1(w(l_j)/(2^n-V_d)), a consistent solution to the equations is given by Q(σ⃗_i,{l_1:i-1}=(i-1)/2^n∏_j=1^i-1(w(l_j)/2^n-V_d), with Δ p_i=Δ c_i=0. In addition, for Eqs. <ref>, <ref>, and <ref> we obtain γ_i' =1/2^n(1-∑_σ⃗_i,{l_1:i-1≥ d}Q(σ⃗_i,{l_1:i-1})), λ_i' =1/(2^n-V_d)^i-1(1-∑_σ⃗_i,{l_1:i-1≥ d}Q(σ⃗_i,{l_1:i-1})), λ_i'/2^n =γ_i'/(2^n-V_d)^i-1, which are satisfied by the liquid solution. On the other side, we may consider a frozen solution, where the probability distributions are concentrated on a single configuration, for instance, p_i(σ⃗_i) =δ_σ⃗_i,σ⃗_i^*, c_i({l_1:i-1}) =δ_{l_1:i-1},{l_1:i-1^*}. In this case, from Eqs. <ref>, <ref>, and <ref> we find γ_i' =1/2^n(1-∑_σ⃗_iQ(σ⃗_i,{l_1:i-1^*})), λ_i' =1/(2^n-V_d)^i-1(1-∑_{l_1:i-1≥ d}Q(σ⃗_i^*,{l_1:i-1})), λ_i' =γ_i'/∏_j=1^i-1w(l_j^*). This results in a consistency equation which should be satisfied by the (σ⃗_i^*,{l_1:i-1^*}) given the previous assignments for j=1,⋯,i-1, 1/2^n(1-∑_σ⃗_iQ(σ⃗_i,{l_1:i-1^*}))= ∏_j=1^i-1w(l_j^*)/(2^n-V_d)^i-1(1-∑_{l_1:i-1≥ d}Q(σ⃗_i^*,{l_1:i-1})). Starting from an initial condition one can try to satisfy the above equation by iteration to find a configuration (σ⃗_i^*,{l_1:i-1^*}) for sphere i. An initial condition starting from i=2 could be p_2(σ⃗_2)=δ_σ⃗_2,0⃗, c_2(l_1)=δ_l_1,d. Let us assume that the probabilities c_i({l_j}) are concentrated on a single configuration {l_j^*}. Then we compute the p_i(σ⃗_i) from Eq. <ref> using the above initial condition. The deviation Δ P=∑_σ⃗|p_i(σ⃗)-1/2^n| is then maximized over all possible configurations of {l_j^*}. Figure <ref> shows that the above process converges quickly to a uniform solution p_i(σ⃗_i)=1/2^n as the number of spheres N increases. The minimum of deviation is zero for all the points which are reported in the figure. § CONCLUSION We used an exact representation of the entropy of a system of hard spheres in the Hamming space to investigate the asymptotic behavior of the maximum packing density of the spheres. 
The method is based on a decomposition of the interaction graph into a tree structure and the induced loopy interactions, which their effects are taken into account by passing some (internal) messages along the tree interaction graph. The solutions to the Bethe equations for reasonable approximations of the BP cavity marginals are asymptotically consistent with the Gilbert-Varshamov lower bound for the packing density, but we can not exclude the possibility of solutions of higher densities. We considered an ansatz of BP marginals which are represented by a linear superposition of factorized probability distributions which respect the permutation symmetry of the problem. It would be interesting to try other classes of tractable cavity marginals as trial BP messages which are constrained by satisfying the exact BP equations. For instance, another possibility is working with simpler internal messages h_i→ j instead of approximating the BP cavity marginals. Given an ansatz of reasonable BP probability marginals, one can also try to maximize the Bethe entropy to find an upper bound within a specific class of solutions to the BP equations. We would like to thank Parsa Rangriz and Francesco Zamponi for helpful discussions and useful comments. This work was performed using the Academic Leiden Interdisciplinary Cluster Environment (ALICE), the compute resources provided by Leiden University. § THE NAIVE APPROXIMATION OF THE BP MESSAGES: ASYMPTOTIC ENTROPY The BP probability marginals μ_i(σ⃗_i,h_i) in general depend on the position probability of sphere i, p_i(σ⃗_i) and a conditional probability q_i(h_i:σ⃗_i), μ_i(σ⃗_i,h_i)= p_i(σ⃗_i) q_i(h_i:σ⃗_i). The h_i={σ⃗_j=1,…,i-1} represent the positions of all spheres j=1,⋯,i-1. By symmetry we take a uniform distribution for σ⃗_i, p_i(σ⃗_i)=1/2^n. We also assume that the probability distribution of the other spheres is factorized and uniform q_i(h_i:σ⃗_i)=∏_j=1^i-1𝕀(D_i,j≥ d)/2^n-V_d. In this way, for the nodes' contribution to the entropy we have e^Δ S_i=1/2^n+ni∑_l_1,l_2,l_12=d^n C(l_12:n) Ω(l_1,l_2:l_12) ×(1-2V_d-O_01(l_1)-O_02(l_2)-O_12(l_12)+2O_012(l_1,l_2,l_12)/2^n-V_d)^i-2. The edges' contribution in the entropy are e^Δ S_i,i+1= 1/2^n+ni∑_l=d^n C(l:n) (1-V_d-O_01(l)/2^n-V_d)^i-1. Here O_ij(l) is the overlap of two spheres of radius d at distance l. And O_ijk(l_i,l_j,l_ij) is the overlap of three spheres of radius d when (i,j) have distance l_ij and the other sphere is at distances l_i,l_j form (i,j). The function Ω(l_i,l_j:l_ij) is the number of possible points for the third sphere given the distances, Ω(l_i,l_j:l_ij)= C(l_i-l_j+l_ij/2:l_ij)C(l_i+l_j-l_ij/2:n-l_ij). Thus for the overlaps we get O_ij(l_ij)=∑_l_i,l_j=0^dΩ(l_i,l_j:l_ij). Let us define Y(l_i,l_j,l_k:l_ij,l_ik,l_jk) as the number of points at distances l_i,l_j,l_k from spheres i,j,k with distances l_ij,l_ik,l_jk, Y(l_i,l_j,l_k:l_ij,l_ik,l_jk) =∑_x_00=0^n C(x_11:x)C(x_01:l_ik-x)C(x_00:n-l_ik-l_jk+x)C(x_10:l_jk-x) with 2x =l_ik+l_jk-l_ij, 2x_01 =l_j+l_k-l_jk-2x_00, 2x_10 =l_i+l_k-l_ik-2x_00, 2x_11 =l_ik+l_jk-l_i-l_j+2x_00. Then O_ijk(l_i,l_j,l_ij)=∑_l_i,l_j,l_k=0^dY(l_i,l_j,l_k:l_ij,l_ik,l_jk). Let us take the limit n,d→∞ with δ=d/n finite. All distances are scaled with n, for instance r_ij=l_ij/n. Then v_d=V_d/2^n=e^-n[ln(2)-H(δ)] with H(δ)=-δln(δ)-(1-δ) ln(1-δ) is an exponentially small quantity. 
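Before taking this limit, the finite-n counting functions defined above admit a quick sanity check (ours, reusing C() and Omega() from the sketch in the main text): classifying every point of {0,1}^n by its distances (l_1,l_2) to two reference points at distance l_12 must recover all 2^n points, and the same helpers give the pairwise overlap O_ij.

```python
# Sanity check of the counting function Omega and the pairwise overlap O_ij,
# reusing C() and Omega() defined in the earlier sketch.
def O_pair(l12, d, n):
    # Overlap of two spheres of radius d at distance l12, with the summation
    # limits written as in the text above (the boundary shell is a convention).
    return sum(Omega(l1, l2, l12, n) for l1 in range(d + 1) for l2 in range(d + 1))

n, d, l12 = 12, 3, 5
assert sum(Omega(l1, l2, l12, n)
           for l1 in range(n + 1) for l2 in range(n + 1)) == 2**n
print("O_01(l12=5) =", O_pair(l12, d, n))
```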
The node's contribution in the entropy is given by Δ S_node=∑_i=1^NΔ S_i=≃ nN∫_0^1 dx[H^*(x)+H_1^*(x)+H_2^*(x) -xṽ_d(2-o_01^*(x)-o_02^*(x)-o_12^*(x)+2o_012^*(x))] -nNln(2)/2, after approximating ∑_i ≈ N∫_0^1 dx. The overlaps are scaled o=O/V_d and ṽ_d=Nv_d/n. The star means the above quantities are computed at r^*(x),r_1^*(x),r_2^*(x) which r^*(x),r_1^*(x),r_2^*(x)= max_r,r_1,r_2 ∈ (δ,1)[ H(r)+H_1(r_1,r_2,r)+H_2(r_1,r_2,r) -xṽ_d(2-o_01(r_1)-o_02(r_2)-o_12(r)+2o_012(r_1,r_2,r))]. Here H_1(r_1,r_2,r)=rln r-r_1-r_2+r/2lnr_1-r_2+r/2- r_2-r_1+r/2lnr_2-r_1+r/2, and H_2(r_1,r_2,r)=(1-r)ln(1-r) -r_1+r_2-r/2lnr_1+r_2-r/2- (1-r_1+r_2+r/2)ln(1-r_1+r_2+r/2). The overlaps are exponentially small here. Consider for instance the maximum overlap o_ij(r) at r=δ, o_ij(δ)=O_ij/V_d≃ e^n[H_1(r',r',δ)+H_2(r',r',δ)-H(δ)]. By symmetry, the maximum is for r_1=r_2=r'. A plot of the exponent H_1(r',r',δ)+H_2(r',r',δ)-H(δ) as a function of r' in Fig. <ref> shows that this quantity is always negative for δ<1/2. Since the overlaps are exponentially small, they can be ignored to get Δ S_node≃ nN(H(1/2)+H_1(1/2,1/2,1/2)+H_2(1/2,1/2,1/2)-1/2(2ṽ_d+ln(2)) ), The link's contribution in the entropy is given by Δ S_link=∑_i=1^N-1Δ S_i,i+1≃ nN(∫_0^1 dx[H^*(x)-xṽ_d(1-o_ij^*(x))]-ln(2)/2). The star means the above quantities are computed at r^*(x) which r^*(x)=max_r∈ (δ,1)( H(r)-xṽ_d(1-o_ij(r))). The exponentially small o_ij can be ignored to get Δ S_link≃ nN(H(1/2)-1/2(ṽ_d+ln(2)) ), Putting all together the entropy reads Δ S_node≃ nN(3/2ln 2-ṽ_d), Δ S_link≃ nN1/2(ln 2-ṽ_d), S^naive=Δ S_node-Δ S_link≃ nN(ln 2-1/2ṽ_d). Therefore, the entropy vanishes at N_max^naive=(2ln 2)n/v_d, at the same point that the Bethe entropy in Eq. <ref> goes to zero. § THE BP EQUATIONS FOR INHOMOGENEOUS SOLUTIONS Let us break the translation symmetry and look for more general solutions to the BP equations μ_i(σ⃗_i,h_i)= p_i(σ⃗_i) q_i(h_i:σ⃗_i) where p_i(σ⃗_i) is not necessarily 1/2^n. Assuming that q_i(h_i:σ⃗_i)=∑_αc_i(α)∏_j=1^i-1( 𝕀(D_i,j≥ d)g_i,j^α(σ⃗_j:σ⃗_i) ), we minimize the expected error in the BP equations ℒ[p_i(σ⃗_i),{c_i^α,g_i,j=1,…,i-1^α}]=∑_σ⃗_i,{σ⃗_j=1,…,i-1:D_i,j≥ d} ( p_i(σ⃗_i)∑_αc_i(α)∏_j=1^i-1g_i,j^α-1/z_ip_i-1(σ⃗_i-1)∑_αc_i-1(α)∏_j=1^i-2𝕀(D_i-1,j≥ d)g_i-1,j^α)^2, The above function should be minimized with respect to the variables p_i(σ⃗_i),{c_i^α,g_i,j=1,…,i-1^α} conditioned to the following normalization constraints: ∑_σ⃗_ip_i(σ⃗_i) =1, ∑_αc_i(α) =1, ∑_σ⃗_j:D_i,j≥ dg_i,j^α(σ⃗_j:σ⃗_i) =1. The stationary equations with respect to the variables p_i(σ⃗_i),{c_i^α,g_i,j=1,…,i-1^α} read as follows γ_i=p_i(σ⃗_i)∑_α,α'c_i(α)c_i(α')∏_j=1^i-1(∑_σ⃗_j:D_i,j≥ dg_i,j^αg_i,j^α') -1/z_i∑_α,α'c_i(α)c_i-1(α')∑_σ⃗_i-1:D_i,i-1≥ dp_i-1(σ⃗_i-1)g_i,i-1^α∏_j=1^i-2(∑_σ⃗_j:D_i,j,D_i-1,j≥ dg_i,j^αg_i-1,j^α'), λ_i=∑_σ⃗_ip_i^2(σ⃗_i)∑_α'c_i(α')∏_j=1^i-1(∑_σ⃗_j:D_i,j≥ dg_i,j^αg_i,j^α') -1/z_i∑_σ⃗_ip_i(σ⃗_i)∑_α'c_i-1(α')∑_σ⃗_i-1:D_i,i-1≥ dp_i-1(σ⃗_i-1)g_i,i-1^α∏_j=1^i-2(∑_σ⃗_j:D_i,j,D_i-1,j≥ dg_i,j^αg_i-1,j^α'), and λ_ij^α=p_i^2(σ⃗_i)c_i(α)∑_α'c_i(α')g_i,j^α'∏_k=1:k≠ j^i-1(∑_σ⃗_k:D_i,k≥ dg_i,k^αg_i,k^α') -1/z_ip_i(σ⃗_i)c_i(α)∑_α'c_i-1(α')∑_σ⃗_i-1:D_i,i-1≥ dp_i-1(σ⃗_i-1)g_i,i-1^αg_i-1,j^α'∏_k=1:k≠ j^i-2(∑_σ⃗_k:D_i,k,D_i-1,k≥ dg_i,k^αg_i-1,k^α'), if j=1,…,i-2, and for j=i-1 λ_i,i-1^α=p_i^2(σ⃗_i)c_i(α)∑_α'c_i(α')g_i,i-1^α'∏_k=1^i-2(∑_σ⃗_k:D_i,k≥ dg_i,k^αg_i,k^α') -1/z_ip_i(σ⃗_i)p_i-1(σ⃗_i-1)c_i(α)∑_α'c_i-1(α')∏_k=1^i-2(∑_σ⃗_k:D_i,k,D_i-1,k≥ dg_i,k^αg_i-1,k^α'). The Lagrange multipliers γ_i, λ_i and λ_ij^α are to ensure the normalization constraints. 
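Returning to the overlap exponent of Appendix <ref>: the claim that H_1(r',r',δ)+H_2(r',r',δ)-H(δ) stays negative for δ<1/2 (the content of Fig. <ref>) can be checked directly on a grid over the admissible range δ/2 ≤ r' ≤ δ, outside of which Ω vanishes. The following snippet is only a numerical illustration of that plot:

```python
# Numerical check that the exponent H1(r',r',delta) + H2(r',r',delta) - H(delta)
# is negative for delta < 1/2, so the scaled overlap o_ij(delta) is
# exponentially small in n.
import numpy as np

def xlogx(x):
    return np.where(x > 0, x * np.log(np.maximum(x, 1e-300)), 0.0)

def H(r):
    return -(xlogx(r) + xlogx(1 - r))

def H1(r1, r2, r):
    return xlogx(r) - xlogx((r1 - r2 + r) / 2) - xlogx((r2 - r1 + r) / 2)

def H2(r1, r2, r):
    return xlogx(1 - r) - xlogx((r1 + r2 - r) / 2) - xlogx(1 - (r1 + r2 + r) / 2)

for delta in (0.1, 0.2, 0.3, 0.4, 0.45, 0.49):
    rp = np.linspace(delta / 2, delta, 500)        # admissible range for the overlap sum
    exponent = H1(rp, rp, delta) + H2(rp, rp, delta) - H(delta)
    print(f"delta = {delta:4.2f}   max exponent = {exponent.max():+.4f}")
```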
prsty CS-book-1999 Conway JH, Sloane NJA. Sphere packings, lattices and groups, third edition, Grundlehren der Mathematischen Wissenschaften 290, Springer-Verlag, New York, 1999. Z-book-2008 Zong C. Sphere packings. Springer Science & Business Media; 2008 Jan 20. PZ-revmp-2010 Parisi G, Zamponi F. Mean-field theory of hard sphere glasses and jamming. Reviews of Modern Physics. 2010 Mar 16;82(1):789. TS-revmp-2010 Torquato S, Stillinger FH. Jammed hard-particle packings: From Kepler to Bernal and beyond. Reviews of modern physics. 2010 Sep 15;82(3):2633. F-mathz-1940 Fejes L. Über einen geometrischen Satz. Mathematische Zeitschrift. 1940 Dec;46(1):83-5. H-anmath-2005 Hales TC. A proof of the Kepler conjecture. Annals of mathematics. 2005 Nov 1:1065-185. V-anmath-2017 Viazovska MS. The sphere packing problem in dimension 8. Annals of mathematics. 2017 May 1:991-1015. CKMRV-2017 Cohn H, Kumar A, Miller S, Radchenko D, Viazovska M. The sphere packing problem in dimension 24. Annals of Mathematics. 2017 May 1;185(3):1017-33. MRRW-ieee-1977 McEliece R, Rodemich E, Rumsey H, Welch L. New upper bounds on the rate of a code via the Delsarte-MacWilliams inequalities. IEEE transactions on Information Theory. 1977 Mar;23(2):157-66. B-mathres-1992 Ball K. A lower bound for the optimal density of lattice packings. International Mathematics Research Notices. 1992 Jan 1;1992(10):217-21. DL-ieee-1998 Delsarte P, Levenshtein VI. Association schemes and coding theory. IEEE Transactions on Information Theory. 1998 Oct;44(6):2477-504. S-jcomb-2001 Samorodnitsky A. On the optimum of Delsarte's linear program. Journal of Combinatorial Theory, Series A. 2001 Nov 1;96(2):261-87. JV-ieee-2004 Jiang T, Vardy A. Asymptotic improvement of the Gilbert-Varshamov bound on the size of binary codes. IEEE Transactions on Information Theory. 2004 Jul 26;50(8):1655-64. SST-jmat-2008 Scardicchio A, Stillinger FH, Torquato S. Estimates of the optimal density of sphere packings in high dimensions. Journal of Mathematical Physics. 2008 Apr 1;49(4). C-geo-2002 Cohn H. New upper bounds on sphere packings II. Geometry & Topology. 2002 Jun 25;6(1):329-53. KLV-mathres-2004 Krivelevich M, Litsyn S, Vardy A. A lower bound on the density of sphere packings via graph theory. International Mathematics Research Notices. 2004;2004(43):2271-9. ACHT-jhep-2020 Afkhami-Jeddi N, Cohn H, Hartman T, de Laat D, Tajdini A. High-dimensional sphere packing and the modular bootstrap. Journal of High Energy Physics. 2020 Dec;2020(12):1-45. CT-mathc-2022 Cohn H, Triantafillou N. Dual linear programming bounds for sphere packing via modular forms. Mathematics of Computation. 2022 Jan;91(333):491-508. PS-jstat-1999 Procacci A, Scoppola B. Statistical mechanics approach to coding theory. Journal of statistical physics. 1999 Aug;96:907-12. PS-pre-2000 Parisi G, Slanina F. Toy model for the mean-field theory of hard-sphere liquids. Physical Review E. 2000 Nov 1;62(5):6554. PZ-jstat-2006 Parisi G, Zamponi F. On the high density behavior of Hamming codes with fixed minimum distance. Journal of statistical physics. 2006 Jun;123(6):1145-67. CKPUZ-anrevcm-2017 Charbonneau P, Kurchan J, Parisi G, Urbani P, Zamponi F. Glass and jamming transitions: From exact results to finite-dimensional descriptions. Annual Review of Condensed Matter Physics. 2017 Mar 31;8:265-88. RZ-pre-2012 Ramezanpour A, Zecchina R. Cavity approach to sphere packing in Hamming space. Physical Review E. 2012 Feb 6;85(2):021106. R-pre-2013 Ramezanpour A. 
Computing loop corrections by message passing. Physical Review E. 2013 Jun 28;87(6):060103. MP-physc-2001 Mezard M, Parisi G. The Bethe lattice spin glass revisited. The European Physical Journal B-Condensed Matter and Complex Systems. 2001 Mar;20:217-33. KFL-ieee-2001 Kschischang FR, Frey BJ, Loeliger HA. Factor graphs and the sum-product algorithm. IEEE Transactions on information theory. 2001 Feb;47(2):498-519. YFW-artint-2003 Yedidia JS, Freeman WT, Weiss Y. Understanding belief propagation and its generalizations. Exploring artificial intelligence in the new millennium. 2003 Jan 8;8(236-239):0018-9448. MM-book-2009 Mezard M, Montanari A. Information, physics, and computation. Oxford University Press; 2009 Jan 22. baldassi-polito-2009 Baldassi, C., Braunstein, A., Ramezanpour, A., Zecchina, R. (2015). Statistical Physics and Network Optimization Problems. In: Fagnani, F., Fosson, S., Ravazzi, C. (eds) Mathematical Foundations of Complex Networked Information Systems. Lecture Notes in Mathematics(), vol 2141. Springer, Cham. KLC-pra-2013 Kraus CV, Lewenstein M, Cirac JI. Ground states of fermionic lattice Hamiltonians with permutation symmetry. Physical Review A. 2013 Aug 28;88(2):022335.
http://arxiv.org/abs/2409.02812v1
20240904153123
Packing and finding paths in sparse random graphs
[ "Vesna Iršič", "Julien Portier", "Leo Versteegen" ]
math.CO
[ "math.CO" ]
Packing and finding paths in sparse random graphs Vesna Iršič [mailto:vesna.irsic@fmf.uni-lj.sivesna.irsic@fmf.uni-lj.si, Faculty of Mathematics and Physics, University of Ljubljana, Slovenia] Julien Portier [mailto:jp899@cam.ac.ukjp899@cam.ac.uk, Department of Pure Mathematics and Mathematical Statistics (DPMMS), University of Cambridge, Wilberforce Road, Cambridge, CB3 0WA, United Kingdom] Leo Versteegen [mailto:lversteegen.math@gmail.comlversteegen.math@gmail.com, Department of Pure Mathematics and Mathematical Statistics (DPMMS), University of Cambridge, Wilberforce Road, Cambridge, CB3 0WA, United Kingdom] September 9, 2024 ============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================ § ABSTRACT Let G∼ G(n,p) be a (hidden) Erdős-Rényi random graph with p=(1+)/n for some fixed constant >0. Ferber, Krivelevich, Sudakov, and Vieira showed that to reveal a path of length ℓ=Ω(log(1/)/) in G with high probability, one must query the adjacency of Ω(ℓ/plog(1/)) pairs of vertices in G, where each query may depend on the outcome of all previous queries. Their result is tight up to the factor of log(1/) in both ℓ and the number of queries, and they conjectured that this factor could be removed. We confirm their conjecture. The main ingredient in our proof is a result about path-packings in random labelled trees of independent interest. Using this, we also give a partial answer to a related question of Ferber, Krivelevich, Sudakov, and Vieira. Namely, we show that when ℓ=o((t/log t)^1/3), the maximum number of vertices covered by edge-disjoint paths of length at least ℓ in a random labelled tree of size t is Θ(t/ℓ) with high probability. Keywords: Adaptive algorithms, Path-packing, Random graphs, Random trees AMS Subj. Class. (2020): 05C80, 05C85 § INTRODUCTION In the Erdős–Rényi random graph model a graph G ∼ G(n,p) is obtained by independently adding an edge between each pair of n labelled vertices with probability p ∈ [0,1]. We say that an event whose probability depends on n occurs with high probability (or whp for short) if its probability goes to 1 as n goes to infinity. In the companion papers <cit.> Ferber, Krivelevich, Sudakov, and Vieira introduced the following type of algorithmic problem on Erdős–Rényi random graphs. Let 𝒫 be an increasing graph property, i.e., a graph property such that when G_1 is a subgraph of G_2 and G_1 has the property 𝒫, then G_2 does as well. Suppose that p=p(n) is chosen such that when G is sampled according to the Erdős-Rényi graph model G(n,p), then G has the property 𝒫 with high probability. An adaptive algorithm queries for pairs of vertices in V(G) whether or not they are an edge of G. Each query is allowed to depend on the outcome of previous queries, and the algorithm terminates once the revealed edges witness that G has 𝒫. The principal objective in this setting is to determine the minimum number of queries that an adaptive algorithm requires to terminate with high probability. 
After the research of adaptive algorithms on random graphs was initiated in <cit.>, problems of this type were studied in multiple papers, see for instance <cit.>. If all graphs with property 𝒫 have at least m edges, then it is obvious that one needs to query at least (1+o(1)) m/p pairs of vertices, as the algorithm requires at least m positive responses in order to satisfy the target property. In <cit.>, where a question of this type appeared implicitly, Krivelevich and Sudakov showed that this trivial lower bound is tight in the supercritical regime if 𝒫 is the property of having a giant component. More precisely, they showed that there is an adaptive algorithm which finds a connected component of size at least n/2 in G∼ G(n,p) in at most n^2/2 queries with high probability when p=1+/n. Similarly, Ferber, Krivelevich, Sudakov, and Vieira proved in <cit.> that this trivial lower bound is also tight for the property of having a Hamilton cycle. They established that when p≥ln n+lnln n+ω(1) there exists an adaptive algorithm that finds a Hamiltonian cycle in G∼ G(n,p) in at most (1+o(1))n/p queries with high probability. The naive bound, however, is not always tight. In <cit.>, Ferber, Krivelevich, Sudakov, and Vieira showed that if the property of interest is to have a path of length ℓ, then one may need substantially more than ℓ/p queries. There exists an absolute constant C >0 such that the following holds. For every constant q ∈ (0,1), there exists n_0,_0 such that for every fixed ∈ (0, _0) and any n ≥ n_0, there is no adaptive algorithm which reveals a path of length ℓ≥3C/log(1/) with probability at least q in G ∼ G(n,p), where p=1+/n, by quering at most qℓ/8640Cplog(1/) pairs of vertices. They conjectured that the log(1/) factor in the denominator could be removed, and noticed that this would be optimal, since an argument from <cit.> can be adapted to show that there exists an adaptive algorithm which finds a path of length ℓ≤1/5^2n in G(n,p) when p=1+/n with probability at least 1-exp(-Ω(ℓ/)) in at most O(ℓ/p) queries. In this paper, we confirm their conjecture in form of the following theorem. For all δ∈ (0,1] and all q ∈ [0,1], there exists n_0∈,_0 >0 such that for every ∈ (0, _0) and any n ≥ n_0 the following holds. Any adaptive algorithm which reveals a path of length ℓ≥δ/ with probability at least q in G ∼ G(n,1+/n) requires Ω(qℓδ/p ) queries. We prove <Ref> by making the following improvement on a technical result that is the main ingredient in the proof of <Ref> in <cit.>. There exists _0>0 such that for all δ∈ (0,1] and ∈ (0, _0), with high probability, the following holds. Let S be a set of vertex-disjoint paths in G ∼ G(n,1+/n), each of length at least δ/. Then S covers O(^2 δ^-2 n) vertices. For a graph G and an integer ℓ, let _ℓ(G) denote the maximum number of vertices that can be covered by a system of vertex-disjoint paths of length at least ℓ in G. In <cit.>, it was observed that to estimate _ℓ(G) when G∼ G(n,1+/n) it is actually sufficient to bound _ℓ(T) for a random labelled tree T of any given size t. Somewhat surprisingly, the insight that _ℓ(T)=0 when T does not contain any path of length ℓ, together with the trivial bound _ℓ(T)≤ t for all other trees, is ultimately sufficient to prove <Ref>. The key ingredient of our improvement is a more sophisticated bound on _ℓ(T) for a typical labelled tree T. 
We obtain this bound through <Ref> by counting the number of pairs (e,T) such that e is an edge that lies near the centre of a long path of size at least ℓ of a random labelled tree on t vertices. The deduction of <Ref> from <Ref> proceeds in the same manner as that of <Ref> from the result in <cit.> that is analogous to <Ref>. For completeness, we include this proof in Appendix <ref>. Having identified the number of paths of length at least ℓ that can be packed into a random labelled tree as useful information for progress on <Ref>, the authors of <cit.> asked the following question, which we believe is of independent interest. Given a=a(t) ∈ℕ and b=b(t) ∈ℕ, what is the probability that a random labelled tree on t vertices contains b vertex-disjoint paths, each of length at least a? Because each path of length longer than 2ℓ may be split into paths of length in [ℓ,2ℓ], the maximum number of vertex-disjoint paths of length at least ℓ in a tree T lies between _ℓ(T)/2ℓ and _ℓ(T)/ℓ. Thus, by our aforementioned result that [_ℓ(T)]=Θ(t/ℓ), the average number of paths of length at least ℓ that can be packed into T is Θ(t/ℓ^2), as long as ℓ=O(√(t)). What is more, if ℓ = o((t/log t)^1/3), we can use Talagrand's inequality to show that _ℓ(T) is tightly concentrated around its mean, thus giving a partial answer to <Ref>. For the avoidance of doubt, the following theorem is phrased in terms of (unlabelled) trees on the fixed vertex set [t] which is conceptually equivalent to labelled trees on t unspecified vertices. There exist C_1,C_2>0 such that for all t∈ the following hold. Let T be sampled uniformly at random from all trees on vertex set [t]. For all integers ℓ∈ [1,t], we have [_ℓ(T)]≤C_1t/ℓ, and for all integers ℓ∈ [1,C_2t], we have [_ℓ(T)]≥ C_2t/ℓ. Furthermore, there exists c>0 such that for all integers ℓ≤ c√(t) and δ∈ (0,1), if T is sampled uniformly at random from all trees on vertex set [t], then we have _T(|_ℓ(T) - [_ℓ(T)]| > δ t/ℓ)≤ct^2/δ e^cδ^2t/ℓ^3. The remainder of the paper is structured as follows. In <Ref> we introduce some notation and tools, most of them already appearing in <cit.>. In <Ref> we state and prove the main counting lemma which is the crux of our proofs. In <Ref> we proceed to the proof of <Ref>. In <Ref> we prove <Ref>. § PRELIMINARIES For a positive integer n, let [n] = {1, …, n}, and for a set S, let S^(n) denote the set of all subsets of S of size n. For functions f and g, we denote f ≪ g if and only if there exist constants N, k > 0 such that for every x > N, we have f(x) < k g(x). Note that this is different from the usual use of ≪ when studying random graphs. We record the following version of Stirling's formula for later use. There exists two absolute positive constants K_1 and K_2 such that for every integer N, we have K_1 √(N) (N/e)^N ≥ N! ≥ K_2 √(N) (N/e)^N. We will make use of the following concentration inequality for the edge exposure martingale in G(n,p) which is an easy consequence of <cit.>. Let X be a random variable in the probability space G(n,p) such that we have |X(H_1)-X(H_2)| ≤ C if H_1 and H_2 differ in at most one edge. Then, for any positive α < 2 √(n^2p), we have (|X-[X]| > C α√(n^2p)) ≤ 2e^-α^2/4 The 2-core of a graph G is the maximal induced subgraph of G in which all vertices have degree at least 2. The next lemma is a simple consequence of Theorem 5.4 of <cit.> and Theorem 3 of <cit.>. Let >0 be fixed, and let p=1+/n. Then there exists _0>0 such that for every ∈ (0, _0), the following three properties hold whp in G ∼ G(n,p). 
* the largest connected component of G has size between n and 3 n, * the second largest connected component of G has size at most 20 log n/^2, * the 2-core of the largest connected component of G has size at most 2^2 n. For n ≥ 1, a path P_n is a graph on n vertices v_1, …, v_n with edges v_1 v_2, …, v_n-1v_n. A rooted tree is a connected graph with no cycles, where one of the vertices of T has been marked as the root. Recall that Cayley's formula[Cayley's formula is often phrased as counting the number of labelled trees on t vertices.] states that the number of trees on vertex set [t] is t^t-2. It follows that the number of rooted trees on vertex set [t] vertices is t^t-1. If T is a tree rooted at v, then the depth (w) of a vertex w is the distance from w to v in T. The height of T, denoted by h(T) is the maximum depth of a vertex in T, while the width of T is given by w(T)=max_k∈ [h(T)]|{w∈ V(T):(w)=k}|. If u is a vertex in a tree T without a root, we refer to the height of T rooted at u as the height of u. A Galton-Watson tree is a rooted tree which is constructed recursively at random. Starting from the root, each vertex is given a random number of children independently of the other vertices according to a distribution ξ on ℕ_0. The distribution ξ is called the offspring distribution and the tree is referred to as the ξ-Galton-Watson tree. For concreteness, we say that the vertex set of a Galton-Watson tree T is the integer interval [| T|]. In <cit.>, Addario-Berry, Devroye, and Janson established sub-gaussian tail bounds for the distributions of the width and height of a Galton-Watson tree conditioned on the number of its vertices. In the following two theorems, ξ is a distribution on _0 with expectation 1 and positive, but finite, variance. There exist c_1, C_1>0 such that for a ξ-Galton-Watson tree T and all x,t∈, (h(T) ≥ x || V(T)|=t) ≤ C_1e^-c_1 x^2/t. There exist c_2, C_2>0 such that for a ξ-Galton-Watson tree T and all x,t∈, (w(T) ≥ x|| V(T)|=t) ≤ C_2e^-c_2 x^2/t. The relevance of these theorems for our paper stems from the following theorem from <cit.>. If ξ is the Poisson(1) distribution and T is a ξ-Galton-Watson tree, then the distribution of T conditioned on the event | V(T)| = t is the same as the uniform distribution over rooted trees on vertex set [t]. The following lemma is a corollary of the previous three results, and it will be central in our proof of <Ref>. There exist constants c_1, c_2, C_1,C_2 >0 such that for all x,t∈, if T is sampled uniformly at random from all rooted trees on [t], then (h(T) ≥ x) ≤ C_1e^-c_1 x^2/t, and (h(T) ≤ t/x) ≤ C_2e^-c_2 x^2/t. The first equation (<ref>) follows immediately from Theorems <ref> and <ref>, while (<ref>) follows from Theorems <ref>, <ref> and the additional observation that h(T)w(T)≥ t. We need the following well-known fact about Poisson(μ)-Galton-Watson tree (see for instance Section 6 of <cit.>). Let 𝒯 be a Poisson(μ)-Galton-Watson tree. Then (|𝒯|=t)=t^t-1(μ e^-μ)^t/μ t!. Ding, Lubetzky and Peres <cit.> gave a full characterisation of the structure of the giant component of G ∼ G(n,p) in the strictly supercritical regime, i.e. where p=1+/n for some constant >0. We will use the following consequence of their work, which was already used in a slightly different form in <cit.>. Let p=1+/n for some constant >0. Let 𝒞_1 be the largest connected component of G ∼ G(n,p), let 𝒞^(2)_1 be the 2-core of 𝒞_1, and let 𝒞_1 ∖𝒞^(2)_1 be the graph obtained from 𝒞_1 by deleting all edges from 𝒞^(2)_1. 
Let 0 < μ <1 be such that μ e^-μ=(1+)e^-(1+) and consider 2 ^2 n independent Poisson(μ)-Galton-Watson trees T_1, …, T_2^2 n. Then, for every ℓ and M, if whp the disjoint union of T_1, …, T_2 ^2 n does not contain a set of vertex-disjoint paths of length ℓ covering at least M edges, then the same holds whp for 𝒞_1 ∖𝒞^(2)_1. Finally, to prove the concentration part of <Ref>, we will also require Talagrand's inequality. Let Ω=∏_i=1^nΩ_i be a probability space where each Ω_i is a finite probability space and Ω has the product measure. A random variable XΩ→ is called K-Lipschitz if | X(ω)-X(ω')|≤ K whenever ω,ω'∈Ω differ in at most one coordinate. Let f→. We say that X is f-certifiable if for all ω∈Ω and s∈, if X(ω)≥ s there exists I ⊂ [n] with | I|≤ f(s) so that all ω' ∈Ω that agree with ω on I satisfy X(ω')≥ s. The following theorem is a simple corollary to Talagrand's inequality (see Theorem 7.7.1 of <cit.>). Let f→ be a function and let XΩ→ be K-Lipschitz and f-certifiable. Then for all μ, α∈, (X≤μ - α K√(f(μ)))(X≥μ) ≤ e^-α^2/4. § THE MAIN COUNTING LEMMA For a graph G, we say that an edge uv of G is m-centred if there exist two vertex-disjoint paths of length at least m-1 in G - uv starting at u and v, respectively. For t≥ m, we define (t,m) as the set of pairs (uv,T) such that T is a tree on [t] and uv is an m-centred edge of T. Recall that for a graph G and an integer ℓ, _ℓ(G) is the maximum number of vertices that can be covered by a system of vertex-disjoint paths of length at least ℓ in G. The key insight of our proofs is that the number of m-centred edges in a tree T can be used to estimate _ℓ(T). Indeed, if v_0… v_k is a path in T for k≥ 3m and m-1 ≤ i ≤ k-m, then T-v_iv_i+1 contains two vertex-disjoint paths of length m-1 that start at v_i and v_i+1, respectively. In other words, v_iv_i+1 is m-centred in T. Since the interval [m-1,k-m] has length k-2m+1>k+1/3, it follows that _3m(T) ≤ 3|{m-centred edges in T}|. Conversely, we may choose an arbitrary root r in T and construct a system of vertex-disjoint paths in T greedily by taking the longest monotone (in the tree order) path that is disjoint from all previously included paths until no path of length ℓ is left. The resulting systems of paths will include all vertices that are incident to ℓ-centred edges in T. Thus, we have |{ℓ-centred edges in T}|≤_ℓ(T). The following lemma will allow us to estimate the number of m-centred edges in a typical tree. There exist constants c_1, C_1>0 such that for all t∈ and m<t, |(t,m)| < C_1∑_k=m^t/2 t^t-1k^-3/2 e^-c_1m^2/k. Furthermore, there exist constants c_2, C_2>0 such that |(t,m)| > C_2∑_k=c_2 m^2^t/2 t^t-1k^-3/2. For m,t∈, S⊂ [t] and u∈ S let _m(S;u) be the set of trees on S rooted at u whose height is at least m-1. For v∈ [t]∖ S and a rooted tree T_v∈_m(S,v), we write T_u+T_v for the tree on [t] with edge set E(T_u)∪ E(T_v)∪{uv}. Note that uv is an m-centred edge of T_u+T_v so that (uv,T_u+T_v)∈(t,m). Thus the following map is well-defined. ϕ⋃_S⊂ [t] u∈ S v∈ [t]∖ S_m(S;u)×_m([t]∖ S;v) →(t,m) (T_u,T_v) ↦ (uv,T_u + T_v). For each (uv,T)∈(t,m), if T_u and T_v are the components of T-uv rooted at u and v, respectively, then ϕ^-1(uv,T)={(T_u,T_v),(T_v,T_u)}. Thus, the domain of ϕ is exactly twice as large as its codomain (t,m), and since _m(S;u) is empty unless | S|≥ m, we have |(t,m)| = 1/2∑_k=m^t-m∑_S∈ [t]^(k) u∈ S v∈ [t]∖ S|_m(S;u)|·|_m([t]∖ S;v)| =1/2∑_k=m^t-mtkk(t-k) |_m([k];1)|·|_m([t-k];1)|. 
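For very small t the quantity |ℳ(t,m)| can be computed exhaustively, which provides a ground truth for the bounds above. The snippet below (ours) enumerates all t^{t-2} labelled trees through their Prüfer sequences and tests each edge for being m-centred; in a tree, uv is m-centred exactly when each endpoint reaches depth at least m-1 inside its own component of T-uv. Vertices are labelled 0,…,t-1 for convenience.

```python
# Exhaustive count (tiny t only) of |M(t, m)|: pairs (e, T) with T a labelled
# tree and e an m-centred edge of T.  Trees are enumerated via Pruefer
# sequences.
from itertools import product
from collections import defaultdict
import heapq

def tree_from_pruefer(seq, t):
    degree = [1] * t
    for v in seq:
        degree[v] += 1
    leaves = [v for v in range(t) if degree[v] == 1]
    heapq.heapify(leaves)
    edges = []
    for v in seq:
        leaf = heapq.heappop(leaves)
        edges.append((leaf, v))
        degree[v] -= 1
        if degree[v] == 1:
            heapq.heappush(leaves, v)
    edges.append((heapq.heappop(leaves), heapq.heappop(leaves)))
    return edges

def height_in_component(adj, start, banned):
    """Longest distance from `start` once the edge `banned` (a 2-set) is removed."""
    seen, frontier, h = {start}, [start], 0
    while frontier:
        nxt = []
        for x in frontier:
            for y in adj[x]:
                if {x, y} == banned or y in seen:
                    continue
                seen.add(y)
                nxt.append(y)
        if nxt:
            h += 1
        frontier = nxt
    return h

def count_centred_pairs(t, m):
    total = 0
    for seq in product(range(t), repeat=t - 2):
        edges = tree_from_pruefer(seq, t)
        adj = defaultdict(list)
        for u, v in edges:
            adj[u].append(v)
            adj[v].append(u)
        for u, v in edges:
            if (height_in_component(adj, u, {u, v}) >= m - 1 and
                    height_in_component(adj, v, {u, v}) >= m - 1):
                total += 1
    return total

print(count_centred_pairs(6, 2))   # |M(6, 2)| by brute force
```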
To bound this quantity from above, note that since there are exactly k^k-2 trees on [k] overall, we may infer from <Ref> that |_m([k];1)|≪ k^k-2e^-cm^2/k. Inserting this into (<ref>), we find that |(t,m)| ≪∑_k=m^t-mtkk^k-1(t-k)^t-k-1 e^-cm^2(1/k+1/(t-k)) ≪∑_k=m^t/2tkk^k-1(t-k)^t-k-1 e^-cm^2/k =∑_k=m^t/2t!k^k-1(t-k)^t-k-1/k!(t-k)! e^-cm^2/k. By Stirling's formula (<Ref>), this can be bounded as |(t,m)| ≪∑_k=m^t/2t^t+1/2/k^3/2(t-k)^3/2 e^-cm^2/k≪ t^t-1∑_k=m^t/2 k^-3/2 e^-cm^2/k, as desired. To bound |(t,m)| from below, we observe that by <Ref>, |_m([k];1) |≥ k^k-2 (1-Ce^-ck/m^2). In particular, there exists a constant c_2 such that |_m([k];1) |≥ k^k-2/2 when k>c_2 m^2. Therefore, we deduce from (<ref>) that |(t,m)|≫∑_k=c_2 m^2^t/2tkk^k-1(t-k)^t-k-1. As before, it follows from Stirling's formula that |(t,m)| ≫ t^t-1∑_k=c_2 m^2^t/2 k^-3/2. § PROOF OF <REF> For every >0, there exists a unique μ=μ()∈ (0,1) such that μ e^-μ=(1+)e^-(1+). We choose _0 such that for all <_0, we have 1-μ()>/2 and 1+≤ e^ - ^2/3. Let δ∈ (0,1] and ∈ (0,_0). As a path of length m consists of m-1 edges and m vertices, it is equivalent to show that whp any set of vertex-disjoint paths of lengths at least ℓ=δ/ covers at most D^2 δ^-2 n edges, for some absolute constant D > 0. Note that the claim is trivial if ℓ<100 so that we may assume that ℓ≥ 100. Here and throughout the proof, we ignore rounding unless it is critical to the correctness of the proof. Let 𝒞_1 be the largest connected component of G, let 𝒞^(2)_1 be the 2-core of 𝒞_1, and let 𝒞_1 ∖𝒞^(2)_1 be the graph obtained from 𝒞_1 by deleting the edges of 𝒞^(2)_1. Furthermore, we define the following random variables: * Let X_ℓ be the maximum number of edges covered by vertex-disjoint paths of length at least ℓ in components of G of size at most 20 log n/^2. * Let Y_ℓ be the maximum number of edges covered by vertex-disjoint paths of length at least ℓ in 𝒞_1. * Let Z_ℓ be the maximum number of edges covered by vertex-disjoint paths of length at least ℓ/3 in 𝒞_1 ∖𝒞^(2)_1. By <Ref>(b), we have that whp the number of edges of G covered by vertex-disjoint paths of length at least ℓ is at most X_ℓ+Y_ℓ. As in <cit.>, a simple counting argument shows that Y_ℓ≤ 6|𝒞^(2)_1|+6Z_ℓ, and hence by <Ref>(c), we have whp Y_ℓ≤ 12 ^2 n+6Z_ℓ. Therefore, to finish the proof, it suffices to prove that whp we have X_ℓ≪^2δ^-2 n and Z_ℓ≪^2δ^-2 n. We start by proving that X_ℓ≪^2 δ^-2 n whp. Let S⊂ [n] and set m=ℓ/3. We define a random variable X_S whose value depends on a case distinction. If G[S] is connected and there are no edges between S and [n]∖ S, then X_S is defined as the number of edges in G[S] that are m-centred. If either condition is not satisfied, we define X_S to be 0. Observe next that if G[S] is connected and e is an m-centred edge in G[S], then G[S] has a spanning tree T such that e is also m-centred in T. Therefore, we have 𝔼[X_S] ≤ |ℳ(t,m)|p^t-1 (1-p)^t(n-t). Indeed, |ℳ(t,m)| counts the number of pairs (e,T) such that T is a spanning tree on S and e is m-centred in T, p^t-1 is the probability that T⊂ G[S], and (1-p)^t(n-t) is the probability that there are no edges between S and [n]∖ S. Since X_ℓ is bounded by the sum of all X_S for which S⊂ [n] has size between m and 20log n/^2, inserting the upper bound from <Ref> into (<ref>), yields [X_ℓ] ≪∑_t=m^20/^2log nnt∑_k=m^t/2 t^t-1k^-3/2 e^-cm^2/k p^t-1 (1-p)^t(n-t) ≪∑_t=m^20/^2log nn!/t!(n-t)!∑_k=m^t/2 t^t-1k^-3/2 e^-cm^2/ke^(t-1)/n^t-1 e^-1+/nt(n-t). 
Simplifying and applying <Ref>, we obtain [X_ℓ] ≪ n ∑_t=m^20/^2log nn^n-t+1/2/t^t+1/2(n-t)^n-t+1/2∑_k=m^t/2 t^t-1k^-3/2 e^-cm^2/k e^-t. Moreover, we have n^n-t+1/2/(n-t)^n-t+1/2 = 1/(1-t/n)^n-t+1/2 = e^t+O(t^2/n)≪ e^t. Inserting (<ref>) into (<ref>), we obtain [X_ℓ] ≪ n ∑_t=m^20/^2log n∑_k=m^t/2 k^-3/2t^-3/2 e^-cm^2/k ≪ n ∑_t=m^∞∑_k=m^t/2 k^-3/2t^-3/2 e^-cm^2/k ≪ n ∑_t=m^∞ t^-3/2∫_m^t/2 k^-3/2e^-cm^2/k k, where the integral comparison that yields the last inequality is valid because the integrand is positive and for all x,y≥ m ≥ 1 such that | x-y|≤ 1, we have x^-3/2e^-cm^2/x/y^-3/2e^-cm^2/y=(y/x)^3/2e^c( m^2/y-m^2/x)≤ 2^3/2 e^c. Performing the substitution k=m^2/x^2, we obtain [X_ℓ] ≪ nm^-1∑_t=m^∞ t^-3/2∫_√(2/t)m^√(m) e^-cx^2 x ≪ nm^-1∑_t=m^∞ t^-3/2∫_0^√(m)x ≥√(2/t)m e^-cx^2 x ≪ nm^-1∫_0^√(m) e^-cx^2∑_t=max(2m^2/x^2, m)^∞ t^-3/2 x ≪ nm^-1∫_x=0^√(m) e^-cx^2∫_t=2m^2/x^2^∞ t^-3/2 t x ≪ nm^-1∫_x=0^√(m) e^-cx^2 (2m^2/x^2)^-1/2 x ≪ nm^-2∫_0^∞ xe^-cx^2 x. Since the moments of a Gaussian random variable exist and are finite, we may insert the value of m to obtain [X_ℓ] ≪ n ^2δ^-2. An application of <Ref> gives the desired conclusion. We now move on to bounding Z_ℓ. Let 0 < μ <1 be such that μ e^-μ=(1+)e^-(1+) and consider 2 ^2 n independent Poisson(μ)-Galton-Watson trees T_1, …, T_2^2 n. By <Ref>, to prove that Z_ℓ≪^2 δ^-2 n whp, it is enough to show that M_ℓ, the number of edges that can be covered by vertex-disjoint paths in T_1∪…∪ T_2^2 n of length at least ℓ/3, satisfies M_ℓ≪^2 δ^-2 n whp. To bound M_ℓ, we employ the second moment method, as was done in <cit.>. However, <Ref> will allow us to obtain better bounds once more. Set m=ℓ/9 and let T be a Poisson(μ)-Galton-Watson tree and let S_ℓ be the number of m-centred edges in T. Thus, [M_ℓ]≤ 6^2n [S_ℓ]. It is clear that [S_ℓ] ≤ 3∑_t ≥ mℙ(|T|=t)||ℳ(t,m)|/t^t-2. Inserting <Ref> into the above, we obtain [S_ℓ] ≪∑_t ≥ m1/t^t-2t^t-1(μ e^-μ)^t/μ t!∑_k=m^t/2 t^t-1k^-3/2 e^-cm^2/k ≪∑_t ≥ mt (1+)^te^-(1+)t/t!∑_k=m^t/2 t^t-1k^-3/2 e^-cm^2/k. Using that (1+)^t ≤ e^ t- ^2/3t and <Ref>, we have [S_ℓ] ≪∑_t ≥ m∑_k=m^t/2 k^-3/2t^-1/2 e^-^2/3t e^-cm^2/k ≪∑_t ≥ m e^-^2/3t t^-1/2∫_k=m^t/2 k^-3/2e^-cm^2/k k. As for the computation of [X_ℓ], we substitute k=m^2/x^2 in the integral to obtain [S_ℓ] ≪m^-1∑_t=m^∞ e^-^2/3t t^-1/2∫_√(2/t)m^√(m) e^-cx^2 x ≪m^-1∑_t=m^∞∫_x=√(2/t)m^√(m)x ≥√(2/t)m e^-^2/3t t^-1/2 e^-cx^2 x ≪m^-1∫_x=0^√(m) e^-cx^2∑_t=2m^2/x^2^∞ e^-^2/3t t^-1/2 x ≪m^-1∫_x=0^√(m) e^-cx^2∫_t=2m^2/x^2^∞ e^-^2/3t t^-1/2 t x. Performing the second substitution t=y^2/^2 in the integral, we obtain [S_ℓ] ≪m^-1∫_x=0^√(m) e^-cx^2∫_y=√(2)m /x^∞e^-y^2/3/ y x ≪δ^-1∫_x=0^∞ e^-cx^2∫_y=0^∞ e^-y^2/3 y x ≪δ^-1, showing that [M_ℓ]≪^2δ^-1n. For the second moment, we use the same computation as <cit.>. Using <Ref>, which specifies the distribution of | T|, we can calculate that [| T|^2]=(1-μ)^-3. Thus, [M_ℓ] ≤ 18^2 n [S^2_ℓ] ≤ 18^2 n[|T|^2] =18^2 n/(1-μ)^3≤144n/, where the last inequality follows from 1-μ≥/2. Since [M_ℓ]≪^2δ^-1n and [M_ℓ] ≪ n^-1 as well, it follows from Chebyshev's inequality that M_ℓ≪^2 δ^-1 n whp, as desired. § PROOF OF <REF> Recall that _ℓ(T) is the maximum number of vertices in T that can be covered by vertex-disjoint paths of length at least ℓ. We begin by estimating the expected value of _ℓ(T). There exists a constant C such that for all integers 1 ≤ℓ≤ t, we have [_ℓ(T)]≤Ct/ℓ, where T is sampled uniformly at random from all trees on vertex set [t]. Furthermore, there exists a constant c>0 such that for all t∈ and 1≤ℓ<c√(t), we have [_ℓ(T)]≥ ct/ℓ. 
For a tree T on [t] and m≤ t, let X_m(T) be the number of pairs (e,T) in (t,m), i.e., the number of m-centred edges in T. Recall that by (<ref>), we have _3m(T)≤ 3X_m(T), while by (<ref>), we have X_ℓ(T)≤_ℓ(T). Observe further that since _ℓ'(T)≤_ℓ(T) for ℓ'>ℓ, it is enough to show the upper bound in the claim when ℓ is divisible by three. Thus, to prove the claim, it suffices to show that [X_m]=O(t/m) for 1≤ m≤ t, and that there exists c>0 such that [X_m]=Ω(t/m) for 1≤ m ≤ c√(t). To bound [X_m] from above, we make similar calculations to those in the proof of <Ref>. By <Ref>, we have [X_m(T)] ≪ t∑_k=m^t/2 k^-3/2 e^-ct^2/k≪ t∫_m^t/2 k^-3/2 e^-ct^2/k k ≪ t∫_m^t/2 k^-3/2 e^-cm^2/k k. By substituting k=m^2/x^2, we obtain [X_m(T)] ≪t/m∫_m/√(t)^√(m) e^-cx^2 x≪t/m, as desired. It remains to bound [X_m(T)] from below. By the lower bound in <Ref>, there exists a constant c_2 such that [X_m(T)] ≫ t∑_k=c_2ℓ^2^t/2 k^-3/2. Assuming that c_2m^2<t/4, this gives [X_m(T)] ≫ t ∫_c_2 m^2^t/2 k^-3/2 k ≫t/√(c_2 m^2)-t/√(t/2)≫t/m, as desired. As a function of the edges of T, _ℓ(T) is continuous in the sense that changing a single edge can change _ℓ by at most 2ℓ. If ℓ is sufficiently small compared to t, we can use Talagrand's inequality (see <Ref>) to exploit this continuity to show that _ℓ is concentrated around its mean. For v∈ [t-1] let Ω_v=[t] and consider the product space Ω=∏_v=1^t-1Ω_v with uniform probability measure _ω. Each element ω∈Ω can be associated with a graph with vertex set [t] and edge set {uv:ω_u=vω_v=u}. In particular, each tree T on [t] is associated with the unique element ω∈Ω for which ω_v is the first vertex on the path from v to t. For ω∈Ω, we define _ℓ(ω) as _ℓ(G_ω) for the unique graph G_ω associated with ω. Observe that the random variable ω↦_ℓ(G_ω) is 2ℓ-Lipschitz. Furthermore, if G_ω has a system of vertex-disjoint paths of length at least ℓ that cover m vertices, then the m-1 edges of this path system certify that _ℓ(G_ω)≥ m. Hence, _ℓ is f-certifiable for f(s)=s-1. Therefore, we may apply <Ref> to see that for any α,μ >0, _ω(_ℓ(G_ω)≤μ - αℓ√(μ)) _ω(_ℓ(G_ω)≥μ)≤ e^-α^2/4. Since the size of Ω is t^t-1 and there are t^t-2 trees on [t], the set {ω:G_ω is a tree} has density 1/t in Ω, it follows that for the uniform probability measure _T over all trees on [t], we have _T(_ℓ(T)≤μ - α√(ℓ^2 μ)) _T(_ℓ(T)≥μ)≤ t^2e^-α^2/4. Let δ∈ (0,1). By <Ref>, there exists c>0 such that _T[_ℓ(T)]≥ ct/ℓ, independently of t as long as ℓ≪ c√(t). Thus, to bound _T(_ℓ(T)≥ (1+δ)_T[_ℓ(T)]), we may apply (<ref>) with μ=(1+δ)_T[_ℓ(T)] and α=δ/4√(_T[_ℓ(T)]/ℓ^2)≥δ/4√(ct/ℓ^3), so that μ-α√(ℓ^2μ)=(1+δ)_T[_ℓ(T)]-δ/4√((1+δ)_T[_ℓ(T)]^2)≥ (1+δ/2)_T[_ℓ(T)]. Inserting this into (<ref>), we obtain _T(_ℓ(T)≥ (1+δ)_T[_ℓ(T)])≤t^2e^-ctδ^2/64ℓ^3/_T(_ℓ(T)≤ (1+δ/2)_T[_ℓ(T)]). By Markov's inequality, the denominator is at least 1-1/(1+δ/2)≥δ/3 so that _T(_ℓ(T)≥ (1+δ)_T[_ℓ(T)])≤3t^2e^-ctδ^2/64ℓ^3/δ. On the other hand, if we let μ=_T[_ℓ(T)] and α=δ√(ct/ℓ^3) and denote _T(_ℓ(T)<(1-δ)_T[_ℓ(T)]) by q_δ, then _T(_ℓ(T)≥_T[_ℓ(T)])≤ t^2q_δ^-1e^-ctδ^2/ℓ^3. Since _ℓ(T) is bounded by t, _T[_ℓ(T)] can be bounded as _T[_ℓ(T)] ≤ q_δ(1-δ)_T[_ℓ(T)]+(1-q_δ)_T[_ℓ(T)]+_T(_ℓ(T) > _T[_ℓ(T)])t ≤ (1-q_δδ)_T[_ℓ(T)] + t^3q_δ^-1e^-ctδ^2/ℓ^3, which implies that _T(_ℓ(T)<(1-δ)_T[_ℓ(T)])=q_δ≤(t^3/δ_T[_ℓ(T)]e^-ctδ^2/ℓ^3)^1/2≤ t√(ℓ/cδ) e^-ctδ^2/2ℓ^3. We may now apply a union bound for (<ref>) and (<ref>), and by renaming the constant c appropriately, we obtain the claim. 
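The Θ(t/m) behaviour of [X_m(T)] is also easy to see in simulation. The sketch below (ours, unoptimized) samples uniform labelled trees from random Prüfer sequences, reusing tree_from_pruefer() and height_in_component() from the exhaustive-count snippet, and averages the number of m-centred edges; the rescaled quantity m·X̄_m/t should settle to a constant of order one.

```python
# Monte Carlo illustration of E[X_m] = Theta(t/m) for uniform labelled trees,
# reusing tree_from_pruefer() and height_in_component() defined earlier.
import random
from collections import defaultdict

def centred_edge_count(t, m, rng):
    seq = [rng.randrange(t) for _ in range(t - 2)]
    edges = tree_from_pruefer(seq, t)
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    return sum(1 for u, v in edges
               if height_in_component(adj, u, {u, v}) >= m - 1
               and height_in_component(adj, v, {u, v}) >= m - 1)

rng = random.Random(0)
t, samples = 200, 50
for m in (2, 4, 8, 16):
    avg = sum(centred_edge_count(t, m, rng) for _ in range(samples)) / samples
    print(f"m = {m:2d}   mean X_m ~ {avg:7.1f}   m * mean / t ~ {m * avg / t:.2f}")
```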
The fact that the absence or presence of a single edge can change _ℓ(T) by as much as 2ℓ prevents us from using Talagrand's inequality to show that _ℓ(T) is tightly concentrated when ℓ is larger than t^1/3. Nonetheless, we believe that ℓ=o(√(t)) is sufficient for tight concentration. If ℓ=ℓ(t) is o(√(t)) as t approaches infinity, then |_ℓ(T) - [_ℓ]| < δ t/ℓ with high probability for all δ >0. § ACKNOWLEDGEMENTS The first author was supported by the Slovenian Research and Innovation Agency (ARIS) under the grants Z1-50003, P1-0297, and N1-0285, and by the European Union (ERC, KARST, 101071836). The second author is funded by EPSRC (Engineering and Physical Sciences Research Council) and by the Cambridge Commonwealth, European and International Trust. The third author was funded through a Trinity External Researcher Studentship by Trinity College of the University of Cambridge. § PROOF OF <REF> We first recall the following result. Let P=(V,E) be a path of length ℓ and let B ⊆ E, |B| ≤αℓ, where α≥1/ℓ. Let Q be a graph obtained from P by removing the edges from B. Then there exist vertex-disjoint subpaths {Q^j}_j ∈ J of Q such that each Q^j is of length at least 1/3 α and they cover at least (1/3 - α) ℓ vertices of V. By <Ref>, there exist D>1,_0'>0 such that for all δ∈ (0,1] and ∈ (0,_0') the following holds. With high probability, G∼ G(n,1+/n) contains no system of vertex-disjoint paths of length at least δ/3 that covers more than D^2 δ^-2 n vertices. For δ, q∈ (0,1], let _0 be the minimum of _0' and q δ^2/192 D and let p=1+/n. Suppose that Alg is an adaptive algorithm that reveals a path of length ℓ≥δ/ with probability at least q by querying at most δ q ℓ/1152 D p pairs of vertices in a graph G ∼ G(n,p). Let n' = (1 + 96 D ^2/q δ^2)n, and note that this is at most (1+/2)n, as ≤q δ^2/192 D. Let further V_0 = [n'], I_0 = ∅, and s = 96 D ^2 n/δ^2 q (ℓ + 1). We will now construct a sequence of sets V_0⊃ V_1⊃…⊃ V_s recursively, by repeating the following steps for i ∈{1, …, s}. * Sample an injection f_i [n]→ V_i-1 uniformly at random and sample G_i∼ G(V_i-1, p). We let f_i(G_i) denote the graph with vertex set [n] and edge set {uv∈ [n]^(2):f(uv)∈ E(G_i)}. We run Alg on f_i(G_i), noting that since f_i(G_i)∼ G(n,p), the probability of success is at least q. * Let L_i be the graph on V_i-1 with edge set {f(uv):uv was queried by Alg} (note that |E(L_i)| ≤q ℓδ/1152 D p), and let K_i ⊆ L_i be the intersection of L_i and G_i. * If K_i contains a path of length ℓ, then we fix one such path which we call P_i and let V_i = V_i-1∖ V(P_i) and I_i = I_i-1∪{i}. Otherwise, we let V_i = V_i-1 and I_i = I_i-1. Observe that we are indeed able to repeat these steps s times since for every i ∈ [s], |V_i-1| ≥ |V_s| ≥ n' - (ℓ+1) s = n. We now define a graph H on the vertex set V_0 so that {u,v}∈ E(H) if and only if {u,v}∈ E(L_i) for some i ∈ [s] and for the smallest such i_0 it holds that {u,v}∈ E(K_i_0). For every e∈ [n']^(2), we have (e∈ E(H))≤1+/n = 1+/n'+1+/n'(n'/n-1)≤1+2/n', and while the presence of edges in H is not necessarily independent, one can check that for any distinct e_1,…,e_r∈ [n']^(2), (e_1,…,e_r ∈ E(H)) ≤(1+2/n')^r. Thus, since the property of having a family of vertex-disjoint paths with length at least ℓ that cover M vertices is monotone for all ℓ and M, the following holds. 
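As a small numerical illustration of the path-splitting lemma just recalled (ours; it simply keeps the segments of P-B of length at least 1/(3α), which in every trial already meets the stated coverage bound):

```python
# Randomized check of the path-splitting lemma: remove at most alpha*l edges
# from a path of length l and keep the surviving segments of length at least
# 1/(3*alpha); these segments cover at least (1/3 - alpha)*l vertices.
import random

def check_once(l, alpha, rng):
    b = rng.randint(1, int(alpha * l))          # |B| <= alpha*l, with alpha >= 1/l
    B = set(rng.sample(range(l), b))            # edge i joins vertices i and i+1
    covered, seg = 0, 0
    for i in range(l + 1):                      # walk the path, tracking segment lengths
        if i < l and i not in B:
            seg += 1
        else:                                   # a banned edge (or the end) closes a segment
            if seg >= 1 / (3 * alpha):
                covered += seg + 1              # a segment of length seg has seg+1 vertices
            seg = 0
    return covered >= (1 / 3 - alpha) * l

rng = random.Random(1)
assert all(check_once(l=1000, alpha=a, rng=rng)
           for a in (0.001, 0.01, 0.05, 0.1) for _ in range(200))
print("Coverage bound of the path-splitting lemma held in all trials.")
```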
If H contains a set of vertex-disjoint paths with length at least δ/ that cover at least 4 D ^2 δ^-2 n' = D (2 )^2 δ^-2 n' vertices with probability at least q^2/4, the same holds with probability at least q^2/4 for G∼ G(n', 1+2/n'). Thus, it is enough to prove that H has this property with probability at least q^2/4 since this will then be in contradiction with <Ref> for sufficiently large n. For every i ∈ I_s let H_i be the graph on vertices V_i-1 with edges ( ⋃_j=1^i-1 E(L_j) ) ∩ V_i-1^(2). Observe that |E(H_i)| ≤/6 δ|V_i-1|2. By definition, V_i-1∖ V_i is a path P_i in K_i for every i ∈ I_s. Thus, we can define B_i = E(P_i) ∩ E(H_i) and Q_i to be the graph obtained from P_i by deleting all edges in B_i. Clearly, E(Q_i) ⊆ E(H), and the graphs {Q_i}_i ∈ I_s are vertex-disjoint. Let I = {i ∈ I_s : |B_i| ≤/δℓ}. Applying <Ref> with α=δ/, we get that for every i ∈ I, there exist vertex-disjoint subpaths {Q_i^j}_j∈ J_i of Q_i such that each Q_i^j has length at least δ/3 and they cover at least (1/3 - /δ) ℓ≥1/4 (ℓ + 1) vertices of P_i, where the last inequality holds since <q δ^2/192 D, and thus, ℓ≥ 192. If |I| ≥sq/3, then {Q_i^j}_j ∈ J_i is a collection of paths of length at least δ/3 that cover at least 1/4 (ℓ+1) |I| ≥ 4 D ^2 δ^-2 n'. So it suffices to show that |I| ≥sq/3 with probability at least q^2/4. Let I' = [s] ∖ I. For every i ∈ [s], we have (i ∈ I') = (i ∉ I_s) + (i ∈ I' | i ∈ I_s) (i ∈ I_s). Recall that (i ∈ I_s) is simply the success probability of Alg, which is at least q. For i ∈ I_s, consider the random embedding f_i [n]→ V_i-1 that was chosen to pull G_i back onto [n]. Since f_i was sampled independently from all random variables that determined H_i and P_i is the image of a path under f_i, we have by (<ref>), that (e ∈ E(H_i)) ≤/6 δ for every e ∈ E(P_i). By the linearity of expectation we have that [|B_i|] = [|E(P_i) ∩ E(H_i)|] ≤/6 δℓ. Using Markov's inequality, we obtain that (i ∈ I' | i ∈ I_s) = (|B_i| ≥/δℓ| i ∈ I_s) ≤[|B_i| | i ∈ I_s]//δℓ≤/6 δℓ//δℓ < 1/2. Using these inequalities in (<ref>) yields (i ∈ I') ≤ 1 - (i ∈ I_s) + 1/2(i ∈ I_s) = 1 - 1/2(i ∈ I_s) ≤ 1 - q/2. Linearity of expectation now gives [|I'|] ≤ s (1 - q/2), and Markov's inequality implies that (|I'| ≥s/1 + q/2) ≤ 1 - q^2/4. Therefore, as q ∈ (0,1), (|I| ≥sq/3) ≥(|I| ≥sq/2+q) = (|I'| ≤ s - sq/2+q) = (|I'| ≤s/1+q/2) ≥q^2/4, as desired.
http://arxiv.org/abs/2409.03121v1
20240904231125
QHDOPT: A Software for Nonlinear Optimization with Quantum Hamiltonian Descent
[ "Samuel Kushnir", "Jiaqi Leng", "Yuxiang Peng", "Lei Fan", "Xiaodi Wu" ]
quant-ph
[ "quant-ph", "cs.MS", "math.OC" ]
1] Samuel Kushnir 2,3,4,†,[Samuel Kushnir and Jiaqi Leng contributed equally to this work. Most of the work was completed at the University of Maryland.]] Jiaqi Leng 1,3] Yuxiang Peng 5,6] Lei Fan 1,3] Xiaodi Wu [1]Department of Computer Science, University of Maryland [2]Department of Mathematics, University of Maryland [3]Joint Center for Quantum Information and Computer Science, University of Maryland [4]Department of Mathematics and Simons Institute for the Theory of Computing, University of California, Berkeley [5]Department of Engineering Technology, University of Houston [6]Department of Electrical and Computer Engineering, University of Houston [†]mailto:jiaqil@terpmail.umd.edujiaqil@terpmail.umd.edu QHDOPT: A Software for Nonlinear Optimization with Quantum Hamiltonian Descent [ ============================================================================== § ABSTRACT We develop an open-source, end-to-end software (named QHDOPT), which can solve nonlinear optimization problems using the quantum Hamiltonian descent (QHD) algorithm. QHDOPT offers an accessible interface and automatically maps tasks to various supported quantum backends (i.e., quantum hardware machines). These features enable users, even those without prior knowledge or experience in quantum computing, to utilize the power of existing quantum devices for nonlinear and nonconvex optimization tasks. In its intermediate compilation layer, QHDOPT employs SimuQ, an efficient interface for Hamiltonian-oriented programming, to facilitate multiple algorithmic specifications and ensure compatible cross-hardware deployment. The detailed documentation of QHDOPT is available at <https://github.com/jiaqileng/QHDOPT>. § INTRODUCTION Nonlinear optimization, also known as nonlinear programming, is a branch of mathematical optimization concerned with solving problems in which the objective function, constraints, or both, exhibit nonlinearity. While nonlinear optimization problems are common in various application fields such as engineering, management, economics, and finance, these problems are in general nonconvex with complicated landscape features like multiple local stationary points, valleys, and plateaus. As the number of variables grows, the complexity of the problem could grow rapidly, posing a significant challenge in obtaining globally optimal solutions. Several open-source and commercial software packages, including Ipopt <cit.>, Gurobi <cit.>, and CPLEX <cit.>, have been developed to tackle large-scale nonlinear optimization problems. While these optimizers can incorporate powerful heuristics to enhance performance for certain problem instances, there is no polynomial-time guarantee for these optimizers because nonlinear optimization is generally -hard. Often, the problem structure is unknown, and there is no commonly agreed-upon go-to optimizer for nonlinear optimization in practice. Quantum computers are emerging technologies that can leverage the laws of quantum mechanics to offer theoretical and practical advantages over classical computers in solving large-scale computational problems. Unlike their classical counterparts, quantum computers utilize a unique phenomenon known as quantum tunneling to accelerate the solution of nonconvex optimization problems. Specifically, a quantum particle can pass through a high potential barrier that would be insurmountable classically due to insufficient energy. 
This exotic behavior enables a quantum computer to bypass sub-optimal solutions, efficiently navigating the complex landscape of nonlinear optimization. Recently, <cit.> proposes a novel quantum algorithm named Quantum Hamiltonian Descent (QHD). QHD is inspired by the observation that many first-order (i.e., gradient-based) methods can be interpreted as dynamical processes governed by physical laws. For example, it has been shown that the celebrated Nesterov's accelerated gradient descent algorithm can be modeled by a time-dependent Lagrangian mechanical system that would find local minima in the system <cit.>. By upgrading the classical Lagrangian mechanics to quantum mechanics, we end up with a minimum-finding quantum process, just like gradient descent. Additionally, this quantum dynamical process demonstrates the quantum tunneling effect, making it a competitive candidate for solving nonconvex optimization problems. Simulating this quantum dynamical process on a quantum computer gives rise to QHD, a simple but powerful quantum algorithm for continuous optimization, especially nonlinear problems with nonconvex objective functions. A follow-up work by <cit.> shows that QHD can solve a family of hard optimization instances in polynomial time, while an empirical study suggests that these problem instances are intractable for many classical optimization algorithms such as branch-and-bound, stochastic gradient descent, interior point method, etc. A key feature of QHD is that it is formulated as a quantum evolution, which can be simulated on both digital and analog quantum computers. This feature allows us to implement QHD to tackle real-world tasks with near-term realizable quantum computers. Digital quantum computers perform computation by applying a sequence of elementary quantum gates to an initial quantum state. These machines exhibit provable quantum advantages over classical (digital) computers for certain computational tasks, however, they require a large number of digital (i.e., error-corrected) qubits. Although there has recently been a groundbreaking experimental demonstration of early fault tolerance <cit.>, existing digital quantum computers have not yet reached the size necessary to accelerate the solution of real-world problems in application domains such as management, finance, and engineering <cit.>. Analog quantum computers solve computational tasks by configuring and emulating a real quantum system and then performing quantum measurements. These devices are easier to fabricate, control, and scale <cit.>, while they are unavoidably noisy, and no general error correction technique is currently practical <cit.>. <cit.> proposed a systematic technique named Hamming encoding that enables us to implement QHD to solve quadratic programming (QP) problems on analog quantum computers with Ising Hamiltonian. This technique is exemplified in solving 75-dimensional nonconvex QP problems, where the noisy real-machine implementation of QHD outperforms existing open-source nonlinear optimization software like Ipopt. In this paper, we develop , an end-to-end implementation of QHD for nonlinear optimization. A notable feature of is that it supports the deployment of QHD to multiple quantum computing hardware, including gate-based quantum computers such as IonQ, and analog quantum computers such as D-Wave. provides a user-friendly interface, with which a nonlinear optimization problem can be specified via either matrix/numeric or symbolic description. 
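For orientation, a minimal end-to-end call sequence looks roughly as follows. The method names (QP, optimize, post_processed_samples) follow the descriptions given later in this paper, but the import path and exact signatures should be treated as assumptions for illustration rather than authoritative API documentation:

import numpy as np
from qhdopt import QHD                       # assumed import path for the package

Q = np.array([[-2.0, 1.0], [1.0, -1.0]])     # numeric (matrix) description of a 2-variable QP
b = np.array([0.75, -0.25])

model = QHD.QP(Q, b)                         # QP input: f(x) = 1/2 x^T Q x + b^T x on the unit box
# ... backend, resolution r and embedding scheme are configured here (device-dependent, omitted) ...
model.optimize()                             # quantum sampling followed by classical refinement
print(model.post_processed_samples)          # refined candidate solutions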
Then, the implementation of QHD is fully automatized and the (approximate) optimal solutions will be returned once the computation is completed. The mid-level compilation and cross-hardware deployment are achieved by utilizing for Hamiltonian-oriented programming <cit.>. Organization. The rest of the paper is organized as follows.[This paper is not intended to be a comprehensive tutorial or documentation on . Instead, we direct the readers to <https://github.com/jiaqileng/QHDOPT> for the source code, examples, tutorials, and documentation.] In problem-formulation, we explain the general problem formulation for nonlinear optimization problems that can be processed and solved by . In workflow-main, we discuss the workflow of including the quantum backend and classical refinement. In hop, we discuss several unique design features of , especially the multi-backend compatibility achieved by incorporating the Hamiltonian-oriented programming (HOP) framework. In qhd, we briefly review the QHD algorithm and its implementation on both digital and analog quantum computers. Then, in workflow, we sketch the workflow of the software, including all major steps in the implementation of QHD and classical post-processing. example provides two worked examples of modeling and solving nonlinear optimization problems. In state, we review the current state and trend of quantum optimization software. We conclude with a comparison of with other available open-source optimizers in comparison. §.§ Problem formulation: box-constrained nonlinear optimization The package solves nonlinear programming problems of the following form: min_x f(x_1, …, x_n) = ∑^n_i=1 g_i(x_i)_univariate part +∑^m_j=1 p_j(x_k_j)q_j(x_ℓ_j)_bivariate part, s.t. L_i≤ x_i ≤ U_i, ∀ i ∈{1,…,n}, where x_1,…,x_n are n variables subject to the box constraint x_i ∈ [L_i,U_i] ⊂ℝ for each i = 1,…,n, and the indices k_j, ℓ_j ∈{1,…,n} and k_j ≠ℓ_j for each j = 1,…, m. The functions g_i(x_i), p_j(x_k_j), and q_j(x_ℓ_j) are real univariate differentiable functions defined on ℝ. Note that the univariate part in obj has at most n terms because we can always combine separate univariate functions of a fixed variable x_i into a single one. However, there is no upper bound for the integer m (i.e., the number of bivariate terms).[It is generally impossible to combine a sum of products into a single product form. For example, we can not find two univariate functions p(x) and q(y) such that p(x)q(y) = sin(x)y + xe^y.] The nonlinear optimization problem primal is in general NP-hard <cit.> and can be used to model several common classes of optimization problems, including linear programming, quadratic programming, and polynomial optimization (with box constraints). In the following examples, we show how to formulate some standard nonlinear optimization problems in the form of obj. [Box-constrained Quadratic programming] A quadratic programming problem with a box constraint takes the form: min_x f(x) 1/2x Qx + b x s.t. 0 ≤ x ≤ 1, where Q∈^n× n is a symmetric matrix and b is a real-valued vector of dimension n. The objective function can be written as f(x) = ∑^n_i = 1(1/2Q_i,i x^2_i + b_i x_i) + ∑^n_1 ≤ k < ℓ≤ n Q_k, ℓx_k x_ℓ. This function is represented by obj by choosing g_i(x_i) = 1/2Q_i,i x_i^2 + b_i x_i, ∀ i = 1,…, n, p_j(x_k_j) = Q_k_j,ℓ_j x_k_j, q_j(x_ℓ_j) = x_ℓ_j, ∀ j ∈{1,…, n(n-1)/2}. Here (k_j, ℓ_j) are the j-th pair in the enumeration {(k, ℓ): 1≤ k<ℓ≤ n}. 
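A quick numerical check of this decomposition (a standalone NumPy sketch with arbitrary example data, independent of the package) confirms that the univariate/bivariate form above reproduces the usual matrix expression:

import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
Q = (A + A.T) / 2.0                  # symmetric matrix, as required
b = rng.standard_normal(n)
x = rng.uniform(0.0, 1.0, n)

f_direct = 0.5 * x @ Q @ x + b @ x   # f(x) = 1/2 x^T Q x + b^T x
f_uni = sum(0.5 * Q[i, i] * x[i] ** 2 + b[i] * x[i] for i in range(n))
f_biv = sum(Q[k, l] * x[k] * x[l] for k in range(n) for l in range(k + 1, n))
assert np.isclose(f_direct, f_uni + f_biv)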
While the problem formulation can only handle box constraints, we note that many optimization problems with more sophisticated constraints can be reformulated in the form of primal by adding the constraints as a penalty term in the objective function. [Spherical constraints] Consider the optimization problem with n variables: min_x f(x) ∑^n_j=1α_j x_j s.t. ∑^n_j=1x^2_j = 1, where α_j are real scalars for all j = 1,…,n. The feasible set of this problem is the n-dimensional sphere with radius 1, which can not be directly recast as a box in the form of box. Meanwhile, we observe that all the variables must take values between 0 and 1 because the unit sphere is contained in the unit (hyper-)cube. Therefore, we can reformulate spherical to a box-constrained optimization problem by the penalty method: min_x f(x) ∑^n_j=1α_j x_j + λ(∑^n_j=1x^2_j - 1)^2, s.t. 0 ≤ x ≤ 1. This new problem can be handled by our software since the objective function penalty-obj only involves uni- and bi-variate monomials. As the penalty coefficient λ > 0 grows, we can show that the solution to the box-constrained problem penalty will eventually converge to the optimal solution to the original problem spherical. It is worth noting that the problem formulation supported by is restrictive, and there exist many general nonlinear optimization problems that cannot be directly expressed in primal. For example, our formulation cannot deal with objective functions involving trivariate monomials (e.g., xyz). While, in theory, QHD can handle box-constrained optimization models given access to ideal quantum hardware, in we limit the appearance of trivariate parts or higher to cater to the current quantum hardware restrictions. Additionally, we note that there may be several corner cases that are representable by primal but would require an excessively long time for to parse and solve. For example, when the objective function involves thousands of bivariate functions, it might take minutes to compile and implement the automatic differentiation subroutine based on JAX. We advise users to prioritize the use cases with low-degree polynomials, bounded exponential functions, and simple trigonometric functions. §.§ Solving problems in utilizes the Quantum Hamiltonian Descent algorithm to facilitate the solution of nonlinear and nonconvex optimization problems. Theoretically, Quantum Hamiltonian Descent, when running with an ideal fault-tolerant quantum computer, can solve many optimization problems up to global optimality given sufficiently long runtime <cit.>. However, at the current stage, due to the lack of fault tolerance, we can only implement Quantum Hamiltonian Descent in a low-precision and noisy manner, which significantly reduces the solution quality promised by the theoretical guarantee. To mitigate the noisy performance of near-term quantum hardware with limited resources, we adopt a hybrid quantum-classical computing workflow in to achieve optimal performance, as illustrated in workflowB. Pre-processing and problem encoding. First, we map a box-constrained nonlinear optimization problem to a quantum-mechanical system with finite degrees of freedom. This reduced quantum model can be regarded as a finite-precision approximation of the original QHD model. Then, this quantum model is embedded into a larger quantum system that is natively executable using one of the supported quantum backends. This process is called Hamiltonian programming. 
While the quantum hardware only “sees” a reduced version of the original problem, the Hamiltonian embedding technique <cit.> ensures that the spatial structure inherited from the original problem is preserved and naturally encoded in the quantum operator. Therefore, allows us to run a coarse-grained version of QHD on near-term quantum devices. Deployment and decoding. Then, the quantum operator that encodes the original nonlinear optimization problem is constructed and executed on a quantum backend. Currently, supports three backends: the D-Wave quantum computer, the IonQ quantum computer, and a classical simulator based on QuTiP. The measurement results from quantum devices are in 0-1 format (i.e., binaries), which requires a decoder to recover the corresponding solution in the continuous space (e.g., the unit box). Classical refinement. Limited by the size and coherent time of current quantum devices, the quantum-generated solutions are of low precision and intrinsically noisy. relies on classical local search algorithms, such as first- and second-order methods, to improve numerical precision. Currently, supports two local optimizers: a general-purpose interior point method (Ipopt) and a truncated Newton method (TNC) implemented in SciPy. While we do not include other local search subroutines in , we note that generic local optimizers allowing box constraints should work as well. Since leverages local search algorithms as refiners, the output solutions are necessarily locally optimal (i.e., first- or second-order stationary points, depending on the choice of refinement subroutine). That being said, we would like to note that the Quantum Hamiltonian Descent (QHD) algorithm, when executed on a large fault-tolerant quantum computer, is able to find the global minimum for a large family of nonconvex functions with mild assumptions, provided that the runtime is sufficiently long <cit.>. The performance of for practical problems, however, heavily depends on the quality of near-term quantum devices, which are often of limited scale and prone to physical noise. Meanwhile, it is also possible to refine the quantum-generated solutions using a global solver (e.g., Gurobi, BARON); in this case, the global optimality is guaranteed, but the post-processing time could be significantly longer. Due to the limited time frame, we leave a global-solver-based refinement as future work. §.§ Unique design features In what follows, we discuss a few unique design features of our software. Hamiltonian-oriented programming (HOP). exploits the Quantum Hamiltonian Descent (QHD) algorithm to solve nonlinear optimization problems. This quantum algorithm is formulated as a Hamiltonian simulation (i.e., simulating the evolution of a quantum-mechanical system), encompassing a novel abstraction of computation on quantum devices, which we call Hamiltonian-Oriented Programming (HOP). In contrast to the conventional circuit-based quantum computation paradigm where theorists describe quantum algorithms in terms of quantum circuits, the HOP paradigm describes quantum algorithms as a single or a sequence of quantum Hamiltonian evolution. This new paradigm enables us to build a stack of quantum applications by leveraging the native programmability of quantum hardware in the development of quantum algorithms and software, as illustrated in workflowA. The HOP paradigm is empowered by , a recent framework for programming and compiling quantum Hamiltonian systems by <cit.>. 
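In its simplest, classically simulable form, this abstraction amounts to writing down a Hamiltonian and evolving a state under it. The toy sketch below (our own illustration using dense linear algebra, not SimuQ or QHDOPT code) evolves a two-qubit uniform superposition under a small Ising-type Hamiltonian and reads out measurement probabilities:

import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

H = np.kron(Z, Z) + 0.5 * (np.kron(X, I2) + np.kron(I2, X))   # a small Ising-type Hamiltonian
psi0 = np.full(4, 0.5, dtype=complex)                         # uniform superposition of 2 qubits
T = 1.0
psi_T = expm(-1j * H * T) @ psi0                              # state after evolving for time T
probs = np.abs(psi_T) ** 2                                    # computational-basis measurement probabilities
print(probs)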
In , the programming and simulation of quantum Hamiltonian systems are wrapped in user-friendly Python methods. This makes the high-level programming and deployment of Hamiltonian-oriented quantum algorithms accessible to users with little exposure to real-machine engineering and manipulation. A detailed discussion on the Hamiltonian programming and compilation in is available in workflow. Multi-backend compatibility. In , we utilize as an intermediate layer for the programming of QHD and leverage the compiler to realize multi-backend compatibility. Through , initially constructs a hardware-agnostic Hamiltonian representation of QHD (i.e., Hamiltonian embedding) that can be deployed on various quantum backends, including D-Wave devices, IonQ devices, and classical simulators via QuTiP <cit.>. Automatic differentiation. relies on JAX, a high-performance numerical computing library, to perform automatic differentiation of smooth, nonlinear objective functions. This feature enables to seamlessly post-process quantum-generated solutions using local search optimizers. § QUANTUM HAMILTONIAN DESCENT In our software, we utilize QHD to solve box-constrained nonlinear optimization problems as described in primal. QHD solves a continuous optimization problem by simulating a quantum dynamical system governed by an evolutionary partial differential equation called Schrödinger equation. Here, we give a high-level review of this quantum algorithm and more details can be found in <cit.>. §.§ Mathematical formulation and interpretation Consider a nonlinear objective function f(x) with a box constraint Ω= {(x_1,…,x_n)∈^n L_i ≤ x_i ≤ U_i, ∀ i = 1,…, n}. To solve this optimization problem, QHD requires simulating the following Schrödinger equation over the feasible set Ω with Dirichlet boundary condition, i.e., Ψ(t,x) = 0 for x ∈∂Ω, i ∂/∂ tΨ(t,x) = [e^φ_t(-1/2Δ) + e^χ_tf(x)]Ψ(t,x), subject to an initial state Ψ(t,x) = Ψ_0(x). Here, the operator Δ∑^n_i=1∂^2/∂ x^2_i is the Laplacian operator defined in the interior of Ω, and the time-dependent functions e^φ_t and e^χ_t control the total energy distribution of the quantum system. In practice, the initial state Ψ_0(x) is often chosen as a quantum state that is easy to prepare, for example, a Gaussian state or a uniformly random state. For general (nonconvex) optimization problems, it is observed that an inverse polynomially decaying e^φ_t and polynomially increasing e^χ_t (e.g., φ_t = -log(1+γ t^2), χ_t = log(1+γ t^2) with a positive γ) work well for many test problems <cit.>. With a Gaussian initial state and smooth time-dependent functions, the dynamics generated by qhd-pde can be simulated using 𝒪(nT) elementary gates and 𝒪(T) queries to the objective function f <cit.>. Physically, the equation qhd-pde describes the time evolution of a quantum particle in the box Ω. The time-dependent functions e^φ_t and e^χ_t control the total energy distribution of this quantum particle: when their ratio e^φ_t/e^χ_t is large, the kinetic energy dominates and the particle tends to bounce around; otherwise, the potential energy takes over and the particle tends to stay still. If we choose these functions such that lim_t→∞ e^φ_t/e^χ_t = 0, the kinetic energy of the system is dissipated over time and eventually the quantum particle will take a low-energy configuration. At this point, if we measure this quantum particle, the measured position (which must lie in the feasible set Ω) is likely to give an approximate solution to the problem f(x). 
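For instance, with the schedule quoted above, φ_t = -log(1+γ t^2) and χ_t = log(1+γ t^2), the kinetic-to-potential energy ratio decays polynomially in t; a few lines of NumPy (with an arbitrary illustrative γ and time grid) make this explicit:

import numpy as np

gamma = 2.0                                  # arbitrary illustrative value
t = np.linspace(0.0, 10.0, 6)
phi_t = -np.log(1.0 + gamma * t ** 2)        # phi_t = -log(1 + gamma t^2)
chi_t = np.log(1.0 + gamma * t ** 2)         # chi_t =  log(1 + gamma t^2)
ratio = np.exp(phi_t - chi_t)                # e^{phi_t} / e^{chi_t} = 1/(1 + gamma t^2)^2 -> 0
print(ratio)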
In some sense, QHD can be regarded as a quantum version of Polyak's heavy ball method <cit.>. QHD describes a quantum particle exploring the optimization landscape f(x). When a high-energy barrier emerges, the quantum particle may leverage the quantum tunneling effect to go through the barrier and find a lower local minimum. However, simulating the quantum evolution qhd-pde with a classical computer would require exponential space and time, making this idea impractical as a classical optimization algorithm. On the other hand, the evolution qhd-pde can be efficiently simulated using a quantum computer, which makes QHD a genuine quantum algorithm that can leverage the quantum tunneling effect for nonconvex optimization. Theoretically, it has been shown that QHD can efficiently find the global minimum for certain nonconvex problems with exponentially many local minima, while many classical optimizers such as simulated annealing and SGD appear to require a much longer time to obtain a global solution <cit.>. Numerical experiments also show that QHD outperforms classical first- and second-order methods in a broad class of nonconvex problems with many local stationary points <cit.>. §.§ Real-machine implementation Quantum Hamiltonian Descent is formulated as a Hamiltonian simulation task, i.e., solving a quantum Schrödinger equation as in qhd-pde. While efficient quantum algorithms, such as those proposed by <cit.>, can tackle this simulation task exponentially faster than any known classical algorithms, these quantum simulation algorithms require large fault-tolerant quantum computers. Such ideal quantum computing hardware has not yet been realized due to the immature progress of quantum technology. To fully exploit the limited programmability of current quantum hardware such as D-Wave and IonQ, employs a technique named Hamiltonian embedding <cit.> to implement QHD. This technique enables us to map the QHD Hamiltonian to a larger Hamiltonian, and the latter can be natively simulated on existing quantum devices. This real-machine implementation technique is detailed in ham-program-compile. § THE WORKFLOW OF §.§ Modeling of nonlinear problems offers support for two Python-based input formats: the SymPy format for symbolic input and the QP format for numerical input (i.e., arrays). These two input formats enable users to define their target optimization problems both efficiently and with great flexibility. SymPy format. <cit.> is a Python package that supports symbolic expression processes. Users can specify f(x) in primal by declaring variables in and constructing the expression, as in the code snippet in sympy-exp. Here, we import necessary functions like exp from and QHD from package in lines 1 and 2. We declare variables x and y in using symbols in line 3 where the passed string is for to print the expressions. In line 4, we construct the function f(x), where y**1.5 represents the exponential y^1.5, exp(4*x) represents e^4x, and so on. Lastly, we create a QHD model instance in line 5 and pass f and a symbol list [x, y] to it, informing the QHD model the target optimization function is f with symbols x and y. QP format. For users with specific interests in quadratic programming (QP), we provide a more efficient input model for them. To specify a QP instance with objective function f(x) = 1/2x Q x + b x, we can directly input the matrices Q and b, as in the code snippet in qp-exp. First, we construct Q by a nested Python list or a NumPy array in lines 2 and 3. 
It is required that Q forms a symmetric square matrix. Then we input the vector b as b. Similar to , we construct the instance by calling the QP method from and pass Q, b into it. §.§ Hamiltonian programming and compilation Once a nonlinear optimization problem f(x) is defined using one of the supported input formats, will form a Hamiltonian description of the corresponding QHD algorithm, as described in qhd-pde. This Hamiltonian description serves as an intermediate layer in the compilation stack and is independent of the choice of the backend (i.e., hardware-agnostic). Although automates this process, making manual execution unnecessary in most cases, we provide detailed discussions for readers who are interested in gaining a deeper understanding of our software's design. There are two major steps in the construction of the Hamiltonian description of QHD, namely, spatial discretization and Hamiltonian embedding. §.§.§ Spatial discretization First, we need to perform spatial discretization of the QHD Hamiltonian (which is an unbounded operator) so that it can be described by a finite-dimensional quantum system. For a thorough and mathematically rigorous discussion, readers are encouraged to refer to <cit.>. Given a nonlinear optimization in the form of primal, the QHD Hamiltonian reads the following, H(t) = e^φ_t(-1/2Δ) + e^χ_t(∑^n_i=1 g_i(x_i) +∑^m_j=1 p_j(x_k_j)q_j(x_ℓ_j)), which acts on any L^2-integrable functions over the feasible set Ω = {(x_1,…,x_n)∈^n L_i ≤ x_i ≤U_i, ∀ i = 1,…, n}. Here, for simplicity, we assume the feasible set is the unit box, i.e., L_i = 0 and U_i = 1 for all i = 1,…,n. We utilize the centered finite difference scheme to discretize this differential operator. Suppose that we divide each dimension of the unit box Ω using N quadrature points {0, h,…, (N-2)h, 1} (where h = 1/(N-1)), the resulting discretized QHD Hamiltonian is an N^n-dimensional operator of the form, Ĥ(t) = e^φ_t(-1/2L_d) + e^χ_tF_d, where (assuming k_j < ℓ_j for all j = 1,…,m) L_d = ∑^n_i=1 I⊗…⊗L_the i-th operator⊗… I, F_d = ∑^n_i=1 I⊗…⊗D(g_i)_the i-th operator⊗… I + ∑^m_j=1 I⊗…⊗D(p_j)_the k_j-th operator⊗…⊗D(q_j)_the ℓ_j-th operator⊗… I. Here, I is the N-dimensional identity operator, L and D(g) are N-dimensional matrices given by (g is a differentiable function defined on [0,1] and g_i g(ih) for i = 0,…,N-1), L = 1/h^2[ -2 1 ; 1 -2 1 ; ... ... ... ...; 1 -2 1; 1 -2; ], D(g) = [ g_0 ; g_1 ; ... ... ... ...; g_N-2 ; g_N-1; ]. The tridiagonal L matrix corresponds to the finite difference discretization of the second-order differential operator d^2/d x^2, and the diagonal matrix D(g) corresponds to the finite difference discretization of the univariate function g(x). Note that L has a global phase -2/h^2, i.e., L = L' - 2/h^2 with L' only contains the off-diagonal part of L. Since the global phase does not affect the quantum evolution (therefore, the result of the QHD algorithm), we replace L with L' in the rest of the discussion. §.§.§ Hamiltonian embedding The discretized QHD Hamiltonian, as described in discretized_qhd, is a Hermitian matrix with an explicit tensor product decomposition structure. This particular structure allows us to leverage the Hamiltonian embedding technique <cit.> to construct a surrogate Hamiltonian H(t) such that the QHD algorithm (i.e., simulating the Hamiltonian Ĥ(t)) can be executed by simulating H(t). In our case, the surrogate Hamiltonian H(t) is an Ising-type quantum Hamiltonian that involves at most nN qubits and max(n,m) N two-body interaction terms. 
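Before turning to the embedding itself, the discretized operators above are easy to write out explicitly. The following NumPy sketch (an illustration of ours for n = 2 variables, N = 4 grid points, and the toy objective f(x_1,x_2) = x_1^2 + x_2 + x_1 x_2) constructs L, L', D(g), and the Kronecker sums L_d and F_d:

import numpy as np

N = 4                                                     # grid points per variable (n = 2 variables)
h = 1.0 / (N - 1)
main = np.full(N, -2.0 / h ** 2)
off = np.full(N - 1, 1.0 / h ** 2)
L = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)    # centered finite-difference matrix
Lp = L - np.diag(main)                                    # L': off-diagonal part of L only

def D(g):
    return np.diag([g(i * h) for i in range(N)])          # D(g) = diag(g(0), g(h), ..., g(1))

I = np.eye(N)
L_d = np.kron(Lp, I) + np.kron(I, Lp)                     # sum_i I x ... x L' x ... x I  (n = 2)
F_d = (np.kron(D(lambda x: x ** 2), I) + np.kron(I, D(lambda x: x))
       + np.kron(D(lambda x: x), D(lambda x: x)))         # univariate terms + one bivariate product
print(L_d.shape, F_d.shape)                               # both N^n x N^n = 16 x 16

The embedding described next replaces each N-dimensional factor above by a small register of qubits, yielding the nN-qubit Ising-type surrogate H(t) just mentioned.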
This means H(t) can be efficiently simulated on current quantum computers, including IonQ's trapped ion systems and D-Wave's quantum annealer. To construct the Hamiltonian embedding of Ĥ(t), the first step is to build the Hamiltonian embeddings of the N-by-N matrices L' and D(g) (for arbitrary differentiable function g). Both are sparse matrices so we can utilize the embedding schemes provided in <cit.>. allows users to choose from three embedding schemes: Hamming[The details of Hamming embedding can be found in <cit.>. Note that this embedding scheme is referred to as “Hamming encoding” in <cit.>.], unary, and one-hot[More precisely, the one-hot embedding we implemented in is referred to as “penalty-free one-hot” embedding in <cit.>.]. In embedding-scheme, we list the details of these embedding schemes when applied to L' and D(g). We note that the Hamming embedding scheme only works for quadratic programming, while the other two schemes (unary, one-hot) work for a broader class of nonlinear functions such as exponential functions. To be consistent with our source code, we adopt the left-to-right 0-indexing system for bits/qubits, e.g., 1_0 0_1 1_2 1_3. In embedding-scheme, the integer r represents the number of qubits used to embed an N-dimensional matrix. The operators 𝐗_k, 𝐘_k, and 𝐧_k are the Pauli-X, Pauli-Y, and number operator acting at site k, respectively, 𝐗 = [ 0 1; 1 0 ], 𝐘 = [ 0 -i; i 0 ], 𝐧 = [ 0 0; 0 1 ]. Since the Hamming embedding scheme is only allowed for quadratic programming, we do not consider the Hamming embedding for general nonlinear functions g. Instead, we only consider the embedding of the identity and quadratic functions, i.e., g(x) = x and g(x) = x^2, their corresponding Hamming embeddings are ℰ_1 = 1/r∑^r-1_k=0𝐧_k and ℰ_2 = (ℰ_1)^2, respectively. Now, we denote ℰ^(i)[A] as a Hamiltonian embedding of an N-by-N Hermitian matrix A acting on sites (i-1)r,(i-1)r+1,…,ir-1, where i = 1,…,n. Then, using the rules of building Hamiltonian embeddings <cit.>, we obtain a nr-qubit Hamiltonian that embeds the discretized QHD Hamiltonian Ĥ(t), H(t) = e^φ_t(-1/2∑^n_i=1ℰ^(i)[L']) + e^χ_t(∑^n_i=1ℰ^(i)[D(g_i)] + ∑^m_j=1ℰ^(k_j)[D(p_j)]ℰ^(ℓ_j)[D(q_j)]). [One-hot embedding for x_1x_2] We give a simple example for the Hamiltonian embedding of the discretized QHD Hamiltonian when the objective function is f(x_1,x_2) = x_1x_2. This objective only involves a single bivariate term with p(x) = q(x) = x. We use the one-hot embedding with N = r = 3. Then, the Hamiltonian embeddings of L' and D(x) are, ℰ[L'] = 1/2h^2(𝐗_0𝐗_1 + 𝐗_1𝐗_2 + 𝐘_0𝐘_1 + 𝐘_1𝐘_2), ℰ[D(x)] = 𝐧_0 + 1/2𝐧_1, respectively. As a result, the full Hamiltonian embedding reads H(t) = 2e^φ_t/h^2(𝐗_0𝐗_1 + 𝐗_1𝐗_2 + 𝐗_3𝐗_4 + 𝐗_4𝐗_5 + 𝐘_0𝐘_1 + 𝐘_1𝐘_2 + 𝐘_3𝐘_4 + 𝐘_4𝐘_5) + e^χ_t(𝐧_0 + 1/2𝐧_1)(𝐧_3 + 1/2𝐧_4). In , we use to construct the Hamiltonian embedding H(t). The users only need to specify the number of qubits r (for each continuous variable), the embedding scheme, and a desired backend in the model.optimize() function, as detailed in the next subsection. §.§ Deployment and post-processing When the Hamiltonian embedding H(t) of a given problem is built, it can be executed on a supported quantum backend by running optimize() (refer to example for sample code). The quantum measurement results are then retrieved from the executing backend in the form of bitstrings. Following this, implements a series of classical post-processing subroutines. 
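The operators in the example above can be assembled directly with Kronecker products; the sketch below (our own check, not package code) builds the one-hot embeddings ℰ[L'] and ℰ[D(x)] for N = r = 3 (so h = 1/2):

import numpy as np
from functools import reduce

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
NUM = np.array([[0, 0], [0, 1]], dtype=complex)    # number operator n
I2 = np.eye(2, dtype=complex)

def site_op(op, site, total):
    # Place a single-qubit operator at position `site` in a `total`-qubit register.
    ops = [I2] * total
    ops[site] = op
    return reduce(np.kron, ops)

h = 0.5
total = 3
E_Lp = (1 / (2 * h ** 2)) * (
    site_op(X, 0, total) @ site_op(X, 1, total) + site_op(X, 1, total) @ site_op(X, 2, total)
    + site_op(Y, 0, total) @ site_op(Y, 1, total) + site_op(Y, 1, total) @ site_op(Y, 2, total)
)
E_Dx = site_op(NUM, 0, total) + 0.5 * site_op(NUM, 1, total)
print(E_Lp.shape, E_Dx.shape)                      # both 8 x 8, i.e. operators on 2^3 dimensions

In the full example, one such register acts on qubits 0-2 (for x_1) and a second copy on qubits 3-5 (for x_2); after the embedded Hamiltonian is executed and measured, the resulting bitstrings are handed to the classical post-processing subroutines.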
These include decoding the raw measurement results (i.e., bitstrings) into low-resolution solutions and refining them via a classical local solver. The refined solutions are then returned to the users as final results. §.§.§ Deployment on quantum devices Currently, supports three backend devices for deployment, including classical simulators (e.g., QuTiP), IonQ, and D-Wave. For all three backend devices, the quantum register is initialized to the uniform superposition state. On the IonQ device, the uniform superposition state can be prepared using a single layer of Hadamard gate; on the D-Wave device, the uniform superposition state is the default initial state and it can be prepared in microseconds. When deployed on IonQ, uses φ_t = -log(1+γ t^2) and χ_t = log(1+γ t^2) for the time-dependent functions (see math-formulation for details). The time-dependent functions on D-Wave are more restricted and they can only be specified as piece-wise linear functions. We find the default annealing schedule (20 microseconds) provided by the D-Wave device usually works well in practice. We also showcase user-specified time-dependent functions (annealing schedules) in a notebook in the “examples” folder. Here, we demonstrate the deployment procedures in using the D-Wave backend, while the same process applies to the other two backends. In dwave-execute, a snippet of the source code for the function QHD.dwave_exec() is displayed. After programming the Hamiltonian embedding and the quantum system realizing QHD (lines 2-3), we initiate an abstract D-Wave machine (line 5). Then employ to compile the Hamiltonian embedding into low-level device instructions readable by D-Wave (line 7) using 's DWaveProvider(), effectively generating Hamiltonian H_dev(t) on the D-Wave devices. Next, the instructions are sent to the D-Wave quantum computer to execute (line 9), and the raw quantum samples are collected by as bitstrings (line 12-17). §.§.§ Decoding As we have seen, the real-machine results are in the bitstring format because they are retrieved by computational basis measurements in the quantum computer. These bitstrings need to be converted to floating-point arrays via the built-in decoder, as presented in decoder. This decoder maps a bitstring to a floating-point array that represents a low-resolution solution to the input optimization problem. For example, if we use the unary embedding for a 2-dimensional problem with resolution parameter r = 4, the decoder will map an 8-bit string to a length-2 array. For example, 00010011 is mapped to [0.25, 0.5]. More details of the embedding schemes and their decoding are available in <cit.>. §.§.§ Refinement Limited by the size of current quantum devices, in most cases, we can only use a small resolution parameter (e.g., r =8) in the real-machine implementation of QHD. Therefore, the retrieved measurement results are merely low-resolution solutions to the specified optimization problem. To improve the precision of the solutions, then post-processes the measurement results using local search classical optimization methods. In principle, any generic local optimizers allowing box constraints should work as well; due to the limited resources, we provide two classical refinement options for the users in , including the truncated Newton method (TNC) using SciPy and the interior point method using Ipopt. TNC is a quasi-Newton method, and the interior point method exploited by Ipopt is a second-order method. 
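To make the decoding and refinement steps concrete, the following standalone sketch reproduces the unary decoding rule quoted above and then refines the decoded point with SciPy's TNC method. It is written for this description only (not the package's internal code), and the objective used for the refinement is the nonlinear example appearing later in the paper:

import numpy as np
from scipy.optimize import minimize

def decode_unary(bitstring, r):
    # Each block of r bits encodes one variable as (number of ones) / r.
    blocks = [bitstring[i:i + r] for i in range(0, len(bitstring), r)]
    return np.array([block.count("1") / r for block in blocks])

print(decode_unary("00010011", r=4))         # -> [0.25 0.5], as in the text

f = lambda z: z[1] ** 1.5 - np.exp(4 * z[0]) * (z[1] - 0.75)   # nonlinear example used later in the paper
x0 = decode_unary("00010011", r=4)                             # low-resolution quantum sample
res = minimize(f, x0, method="TNC", bounds=[(0.0, 1.0), (0.0, 1.0)])
print(res.x, res.fun)                                          # refined, high-precision local solution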
These classical refiners require the gradient and/or Hessian information of the objective functions. For quadratic programming problems (using the QP input format), the gradient and Hessian can be computed explicitly: ∇ f(x) = Q x, H f(x) = Q. For more general nonlinear optimization problems specified using the SymPy input format, computes the gradient and Hessian information by employing Jax <cit.>, a high-performance numerical computing library developed by Google, to perform auto-differentiation. The post-processed results can be retrieved from model.post_processed_samples. By default, the post-processing subroutine is enabled and automatically executed by running optimize(). However, users can also disable post-processing by specifying optimize(fine_tune=False). In this case, model.post_processed_samples returns None type. § EXAMPLES USING In this section, we exhibit two simple examples showcasing the use cases of . §.§ Quadratic programming We first consider a 2-dimensional quadratic programming problem, whose objective function is defined as follows, f(x,y) = -x^2 + xy - 1/2y^2 + 3/4x - 1/4y =1/2[ x y ][ -2 1; 1 -1 ][ x; y ]+[ 3/4 -1/4 ][ x; y ], for x, y ∈ [0,1]. In example-1, we exemplify using to solve this QP problem with all three backends. Note that the API key (not included in ) is required to access cloud-based quantum computers such as D-Wave and IonQ. By default, QHD.optimize() automatically executes the Scipy TNC method to fine-tune the raw quantum measurement data. To switch to the Ipopt optimizer in the fine-tuning step, one can specify post_processing_method="IPOPT" in the setup, as shown in line 17. §.§ Nonlinear optimization involving exponential function Next, we consider the following nonlinear minimization problem with the objective function: f(x,y) = y^3/2 - e^4x(y-3/4), x,y∈ [0,1]. This objective function f(x,y) is not a polynomial; it involves a fractional power and an exponential function. In example-2, we illustrate a sample code that runs to solve the problem defined above. The function is constructed using the SymPy input format, then deployed on the D-Wave quantum computer with resolution parameter r=8. We may set verbose=1 to print a detailed summary of this execution, including the best-so-far coarse and fine-tuned solutions, as well as a total runtime breakdown, as shown in example2-readout. § THE STATE OF SOFTWARE FOR QUANTUM OPTIMIZATION Software packages are crucial for lowering the barrier to developing and implementing quantum programs across broad user communities. Upon examining the current landscape of quantum software for mathematical optimization, we observe that the majority of the software dedicated to quantum optimization focuses on addressing combinatorial and discrete optimization problems, with limited options available for continuous optimization. Generally, a combinatorial optimization problem can be reformulated as a Quadratically Unconstrained Binary Optimization (QUBO) problem, the solution of which is believed to be a promising application of quantum computing <cit.>. There is a rich collection of libraries for quantum and quantum-inspired optimization that can be employed to generate QUBO reformulations, including  <cit.>,  <cit.>,  <cit.>,  <cit.>. These QUBO problems can be tackled by several methods, such as quantum annealing, Quantum Approximate Optimization Algorithms (QAOA) <cit.>, and other hybrid approaches <cit.>. 
D-Wave's  <cit.> enables users to interface with their direct QPU (i.e., quantum annealer) and hybrid solvers and retrieve results. QuEra's  <cit.> is a high-level language for configuring programmable Rydberg atom arrays, which can be used to implement annealing-type quantum algorithms and discrete optimization problems like QUBO <cit.>. Los Alamos Advanced Network Science Initiative has also released a package named  <cit.> for simulation and execution of quantum annealing. Besides, several software packages have been published for programming quantum circuits, including IBM's  <cit.>, Google's  <cit.>, Amazon's  <cit.>, Microsoft's  <cit.>, and Xanadu's  <cit.>. These tools can be used to deploy QAOA on gate-based quantum computers. While there have been a few proposals for solving continuous optimization problems using quantum or hybrid computing devices such as photonic quantum computers <cit.> and coherent continuous variable machines <cit.>, we are not aware of a software library customized for nonlinear continuous optimization problems. In practice, some nonlinear optimization problems, such as quadratic programming, may be reformulated as QUBO problems and handled by the aforementioned software tools. However, it remains unclear whether this approach could lead to robust quantum advantages. § COMPARISON WITH EXISTING TOOLS As we discussed in workflow, first obtains some low-resolution solutions by executing the QHD algorithm for a nonlinear optimization problem through Hamiltonian embedding. Next, the software employs a classical local search strategy for fast post-processing of the raw quantum results. It is of interest to understand to what extent the quantum component (i.e., the noisy implementation of QHD) improves the overall performance of . To this end, we have designed a benchmark test to evaluate the performance of  for nonlinear and nonconvex optimization problems. §.§ Test problems We demonstrate the performance of  using fifteen randomly generated nonlinear optimization instances, all with unit box constraints. Problem instances 1 - 5 are nonlinear programming (NLP) problems involving two or three continuous variables, as detailed in test-info. Problem instances 6 - 10 are quadratic programming (QP) problems drawn from the benchmark devised in <cit.>. Problems instances 11 - 15 are nonlinear programming (NLP) problems involving exponential functions, as specified in the following expression: f(x) = 1/2∑^N_i=1∑^N_j=1Q_i,je^x_ie^x_j + ∑^N_i=1 b_i e^-x_i. The last ten test instances (6 - 15) are intermediate-scale problems with 50 continuous variables, ranging from 0 to 1. To ensure the successful mapping of these test problems to quantum computers with limited connectivity, these test problems are generated in a way such that their Hessians are sparse matrices.[Detailed expressions of these test instances are provided in the software repository, see the “examples” folder.] These instances were generated in a largely random manner, and each possesses multiple local solutions, making them fairly challenging for classical optimization software. In our experiment, we observed that local solvers, such as Ipopt, cannot find globally optimal (or even approximately optimal) solutions unless a large number of random initial guesses are tried. 
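For concreteness, an instance of this exponential type can be generated and evaluated in a few lines (a sketch with arbitrary sparse random data; this is not the authors' exact benchmark generator):

import numpy as np

rng = np.random.default_rng(1)
N = 50
Q = np.zeros((N, N))
idx = rng.integers(0, N, size=(3 * N, 2))          # a few random couplings -> sparse Hessian
Q[idx[:, 0], idx[:, 1]] = rng.uniform(-1, 1, size=3 * N)
Q = (Q + Q.T) / 2.0                                # symmetrize

b = rng.uniform(-1, 1, size=N)

def f(x):
    e = np.exp(x)
    return 0.5 * e @ Q @ e + b @ np.exp(-x)        # f(x) = 1/2 sum_ij Q_ij e^{x_i} e^{x_j} + sum_i b_i e^{-x_i}

x = rng.uniform(0.0, 1.0, N)
print(f(x))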
BARON, a highly optimized commercial solver for global optimization, can find globally optimal solutions to sparse quadratic programming problems in under 1 second, but it takes several minutes to certify global optimality for nonlinear programming problems that involve exponential-type objectives. §.§ Experiment setup and results In this subsection, we discuss the basic setup of the experiment and the numerical results. We test with two different post-processing optimizers (i.e., Ipopt and Scipy-TNC) using the randomly generated nonlinear programming instances discussed in the previous section. As a comparison, we also run the two classical optimizers on the same test instances using uniformly random initialization. These two classical optimizers are assessed as baselines to illustrate the quantum advantage introduced by the D-Wave-implemented Quantum Hamiltonian Descent (QHD). The classical components in both experiments, including the decoding of D-Wave samples and classical refinement, were executed on a 2022 MacBook Pro laptop with an Apple M2 chip. Our findings assert that QHD, when implemented with D-Wave, brings a significant advantage compared to the standalone use of classical optimizers. §.§.§ Experiment setup for We evaluate on this benchmark using the D-Wave Advantage_system6.3 as the quantum backend. For a fair comparison, we use the unary embedding scheme for all instances, including quadratic and non-quadratic problems. The anneal time is set to be the default value, i.e., 20 microseconds. The total quantum runtime per shot (see the “QPU” columns in benchmark) is calculated as the arithmetic mean of the “qpu_access_time” reported by D-Wave, which includes the programming, state preparation, annealing, and decoding. Note that we do not include the transmission time and the task queuing time in our report. We test with two post-processing optimizers (i.e., Ipopt and Scipy-TNC), and the classical post-processing (more precisely, classical refinement) time is reported in the “Classical Refine” columns in benchmark. The standard deviation of the classical refinement time is also reported in the parenthesis. Except for the initial guesses, both solvers use the default parameters as provided with their Python API. §.§.§ Baseline using classical optimizers As a comparison, we also test three classical optimizers for the same set of problems: Ipopt, Scipy-TNC, and BARON. The first two optimizers are initialized with 1000 uniformly random guesses in the unit box [0,1]^d (where d is the problem dimension), and the runtime data for the 1000 runs have been collected. For a fair comparison, we use the same random seeds for both methods. BARON is executed to generate global solutions with a 2-minute timeout. Except for the initial guesses, all solvers use the default parameters as provided with their Python API. control-group shows the runtime of the three classical solvers: for Ipopt and TNC, the arithmetic mean (and standard deviation) of the runtime is reported; for BARON, the total runtime is reported. Note that for the last five test instances, BARON failed to certify the global optimality of the obtained solutions within the 2-minute timeout window. In baron-sol-quality, we further investigate BARON's solution quality. Our results suggest that, while BARON can find solutions as good as those from the other tested solvers in a comparable timescale, a much longer time is required to prove the global optimality of the obtained solutions. 
Therefore, we regard the solution returned by BARON as the global minimum. §.§.§ Performance metric In the experiments, we use time-to-solution (TTS) as the key metric to evaluate the performance of various optimization methods. TTS is defined as the total runtime required by a method to achieve at least 0.99 success probability. It can be calculated using the following formula, TTS = t_0 ×⌈ln(1-0.99)/ln(1-p_s)⌉, where t_0 is the (average) runtime per shot, and p_s is the success probability. For all the 15 test instances, the time-to-solution data of four methods (QHD+Ipopt, QHD+TNC, Ipopt, and TNC) are presented in tts-summary. For the experiments involving QHD (i.e., the results in benchmark), t_0 is calculated as the sum of average QPU time and average classical refinement time; for the experiments that only involve classical optimizers (i.e., the results in control-group), t_0 is equivalent to the (average) classical runtime. The success probability p_s is estimated by the fraction of “successful” events in the 1000 samples/trials. Here, a result x' is considered successful if the optimality gap f(x') -f(x^*) is less than 0.001, where x^* is the solution obtained by BARON. §.§ Interpretation of the experiment results Based on the time-to-solution data as reported in tts-summary, we observed that (QHD + a classical optimizer) always outperforms the standalone use of a classical optimizer for the 15 randomly generated test instances. As for the two local optimizers, we find that Scipy-TNC works better than Ipopt as a post-processing subroutine. While the two optimizers usually return refined samples with comparable success probability, Scipy-TNC always shows a lower TTS due to a notably faster runtime. This is potentially because TNC is a quasi-Newton method that does not need to solve the full Newton linear system in the iterations. Another interesting finding is that the classical refinement times of quantum-generated samples (see “Classical Refine” in benchmark) in are significantly shorter than the average runtime of the direct use of local optimizers (see “Avg. Runtime” in control-group). For example, in test instance 1, the average post-processing time (using Ipopt) for a quantum-generated initial guess is 0.2 s, while the average runtime of Ipopt given uniformly random guesses is 13 s. To further investigate this phenomenon, we plot the distribution of objective function values corresponding to three different sample groups, including (1) randomly generated initial guesses, (2) quantum (D-Wave) generated samples, and (3) TNC refined samples (using quantum-generated samples as initial guesses), as shown in solution-quality.[In solution-quality, we only plot objective function values for high-dimensional problems, i.e., test instances 6 - 15.] While the quantum-generated solutions are limited by low precision, it is observed that they are still qualitatively better than random initial guesses. In the subsequent post-processing, performs a local search subroutine to refine solution quality by improving numerical accuracy. In other words, the quantum sampler in can be regarded as a fast and efficient warm-start that devises initial guesses of better quality. § CONCLUSION AND FUTURE WORK is the first open-source software leveraging quantum devices for nonconvex nonlinear optimization problems, providing an accessible interface for domain experts without quantum computing knowledge. 
Exploiting the idea of Hamiltonian-oriented programming, it efficiently uses quantum devices by implementing the Quantum Hamiltonian Descent algorithm with the framework. We demonstrated 's effectiveness through examples and benchmarks, showing its advantage over classical solvers, especially in large, complex instances. However, the current limitations of quantum device programmability and scalability constrain our benchmarks' scale. While QHD shows promise in solving complex optimization problems, further empirical studies are needed for real-world performance evaluation. There are several avenues for future development of . First, it is desired to broaden the problem class that can be handled by . Currently, due to hardware limitations, supports only the optimization of box-constrained nonlinear problems defined as a sum of univariate and bivariate functions. We anticipate that, shortly, can be extended to address more complicated objective functions as quantum technology and quantum algorithm design continue to co-evolve. Second, while local search algorithms work well to improve the precision of quantum-generated samples, to obtain a global optimality guarantee, it might be promising to replace the refinement/post-processing subroutine in with global optimizers (for example, those based on branch-and-bound). Third, with further progress in quantum engineering, is expected to support more quantum devices from different platforms, including commercial or laboratory devices, which is essential to understanding the advantage of given different combinations of embedding schemes and quantum devices. Last but not least, can be expanded into a plugin for various domain-specific tools, including those in engineering, management, finance, and economics. Adaptions to specific domains are invaluable for users to better utilize quantum devices for their domain problems. Our overarching goal is to establish a user-friendly tool, empowering individuals and organizations to harness the power of quantum devices to solve challenging problems in the real world. § ACKNOWLEDGMENT This work was partially funded by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, Accelerated Research in Quantum Computing under Award Number DE-SC0020273, the Air Force Office of Scientific Research under Grant No. FA95502110051, the U.S. National Science Foundation grant CCF-1816695, CCF-1942837 (CAREER), ECCS-2045978, a Sloan research fellowship, the Simons Quantum Postdoctoral Fellowship, and a Simons Investigator award through Grant No. 825053. J.L. and Y.P. are also supported by an open-source quantum software grant from the Unitary Fund. plainnat
http://arxiv.org/abs/2409.03676v1
20240905162931
Signature of maturity in cryptocurrency volatility
[ "Asim Ghosh", "Soumyajyoti Biswas", "Bikas K. Chakrabarti" ]
physics.soc-ph
[ "physics.soc-ph", "q-fin.CP" ]
[Email: ]asimghosh066@gmail.com [Email: ]soumyajyoti.b@srmap.edu.in [Email: ]bikask.chakrabarti@saha.ac.in § ABSTRACT We study the fluctuations, particularly the inequality of fluctuations, in cryptocurrency prices over the last ten years. We calculate the inequality in the price fluctuations through different measures, such as the Gini and Kolkata indices, and also the Q factor (given by the ratio between the highest value and the average value) of these fluctuations. We compare the results with the equivalent quantities in some of the more prominent national currencies and see that while the fluctuations (or inequalities in such fluctuations) for cryptocurrencies were initially significantly higher than those of national currencies, over time the fluctuation levels of cryptocurrencies tend towards the levels characteristic of national currencies. We also compare similar quantities for a few prominent stock prices. Signature of maturity in cryptocurrency volatility Bikas K. Chakrabarti September 9, 2024 ================================================== § INTRODUCTION Since its conceptualization in 2008 <cit.>, the distributed ledger and its application to cryptocurrencies have gathered a lot of attention from financial sectors as well as from data scientists and physicists (see e.g., <cit.>). The primary idea, that of a currency without central bank backing, is revolutionary and has consequently gathered a lot of interest and skepticism. From the point of view of a purely collective dynamical system showing emergent behavior, therefore, it is an important case study. It is known that the fluctuation properties (in terms of prices) of cryptocurrencies have shown a different behavior from regulated currencies <cit.>. It is also known that their transaction networks, the dynamical properties of such networks and the responses of the prices to world events are far more prominent than those of regular national currencies <cit.>. It is important, therefore, to analyse how these properties change over time and whether the price fluctuations of cryptocurrencies are likely to move closer to those of the regulated currencies in the future. In this paper, we address these issues through a study of the dynamics of daily price fluctuations in cryptocurrencies and the quantification of the inequality of such fluctuating time series, and we finally compare that behavior with regulated currencies and some stocks in the markets. The quantification of inequality is done by first constructing a Lorenz curve L(p), which denotes, for any fluctuating series of values, the ratio of the sum of the p fraction of the smallest values in the series to the sum of all values in the series. First introduced in the context of wealth inequality, it meant that L(p) denoted the ratio of the total wealth owned by the p fraction of the poorest individuals to the total wealth of the society. Of course, the definition can be extended to the distribution or series of any fluctuating quantity. Having defined the Lorenz function, the common ways of quantifying the inequality would be to calculate: (a) The Gini index <cit.>, which refers to the ratio of the area between the Lorenz curve <cit.> and the equality line (the form of the Lorenz curve if all values in the series were exactly the same) to the area under the equality line. The Lorenz curve, by definition, is bounded by L(0)=0 and L(1)=1, and the equality line is simply the straight line between (0,0) and (1,1).
(b) The Kolkata index k <cit.>, on the other hand, is defined as the fraction k of the sum of all values in the series that comes from the 1-k fraction of the largest events. In terms of wealth, again, it means that the 1-k fraction of the richest individuals own a k fraction of the total wealth. Note that it is a generalization of Pareto's 80-20 law, which states that 80% of a society's wealth is owned by 20% of the richest individuals <cit.>. A lot of analysis has been done using real data on the validity of Pareto's law and its above mentioned generalization, starting from income inequality assessed through tax returns in the US, shares of individual citations of academic scholars, shares of movie incomes, and so on <cit.>. Remarkably, a near universal pattern of g = k = 0.84 ± 0.03 was seen in many of these cases. It was conjectured <cit.> that such a pattern is an outcome of unrestricted competition among `agents' for some `resources'. In a way, the system would self-organize to a point where the inequality among the entities is high, but also bound within a range of about 0.82 in terms of g and k. It was shown later that in generic models of self-organized critical systems, particularly in the Bak-Tang-Wiesenfeld and Manna sandpile models, the inequality among the avalanches (responses of the system) would indeed have a near-universal value of about 0.85 <cit.>. Subsequently, such inequalities were calculated in the cases of physical systems (Ising magnets, percolation, fiber bundle model of fracture) and it was shown that systems showing power-law responses would indeed have values of g and k that are nearly equal close to the critical point, where the fluctuations in the responses are the highest (divergent in the thermodynamic limit, and limited by the system size in finite systems) <cit.>. Also, the value at which the two indices become equal is only weakly dependent on the exponent value of the diverging response function for which the inequality indices are calculated, signalling the origin of the near universal pattern seen in the real data. The crossing of g and k, or the approach of either one towards 0.82, could be used as a signal of an approaching critical point or large responses/fluctuations, as was shown numerically and experimentally <cit.>. Finally, another measure of inequality used here is (c) the Q factor <cit.>, which is the ratio between the highest and the average value of signals in a series. In a way, it is a background-adjusted signal strength of the collective response of a system. It has been shown to be indicative of extraordinary signals from a system near the critical point, in physical as well as social systems. In the rest of the paper, we first discuss the result of unrestricted competition across different sectors of the market and show the near-universal inequality pattern among the competing players of these sectors, indicating a self-organized collective behavior. Then, we focus particularly on the cryptocurrencies' behavior, show their price fluctuation inequalities, and draw comparisons with similar measures for national currencies. § INEQUALITY OF MARKET CAPITALIZATIONS ACROSS DIFFERENT SECTORS As discussed above, an indication of an unrestricted competition in markets is a resulting near-universal inequality among the competing agents. In Table <ref>, we show the inequality among the market-caps of different stocks in 27 sectors of the Indian stock market. 
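The three measures defined above are straightforward to evaluate for any positive-valued series. The following minimal sketch (the helper names and the synthetic lognormal input are ours, not the authors'; the actual analysis uses daily price data) shows one way g, k and Q could be computed from the empirical Lorenz curve:

```python
import numpy as np

def lorenz_curve(values):
    """Empirical Lorenz curve: cumulative share of the sorted (ascending) values."""
    v = np.sort(np.asarray(values, dtype=float))
    p = np.insert(np.arange(1, v.size + 1) / v.size, 0, 0.0)
    L = np.insert(np.cumsum(v) / v.sum(), 0, 0.0)
    return p, L

def gini_index(values):
    p, L = lorenz_curve(values)
    return 1.0 - 2.0 * np.trapz(L, p)          # g = 1 - 2 * (area under L(p))

def kolkata_index(values):
    p, L = lorenz_curve(values)
    f = L + p - 1.0                            # k solves 1 - k = L(k)
    i = np.argmax(f >= 0.0)                    # first grid point past the root
    return p[i - 1] - f[i - 1] * (p[i] - p[i - 1]) / (f[i] - f[i - 1])

def q_factor(values):
    v = np.asarray(values, dtype=float)
    return v.max() / v.mean()                  # highest value / average value

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    daily_range = rng.lognormal(sigma=1.0, size=365)   # stand-in for |p_h - p_l|
    print(f"g = {gini_index(daily_range):.3f}, "
          f"k = {kolkata_index(daily_range):.3f}, "
          f"Q = {q_factor(daily_range):.2f}")
```

The same three functions, applied year by year to the daily price ranges |p_h - p_l|, give the type of numbers discussed in the following sections.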
It is seen that the inequalities of the market-caps, quantified by g and k, among the different stocks of various sectors show a near-universal pattern of g = k ≈ 0.82 (with some exceptions). In rare occasions, the inequalities are significantly higher. The Q value, on the other hand, while showing occurrences of large fluctuations among the market-cap values, do not immediately compare across different sectors. In spite of wide variations in the nature of the sectors, the numbers of stocks in all sectors and their market-cap values, the inequalities of the market-caps within individual sectors show near-universal patterns. The near-universal pattern in the market-cap inequalities of stocks in various sectors indicates a self-organized critical state of the market. Apart from most of the sectors having inequality (g and k) close to 0.82, there is indeed a spread in those values with some sectors having significantly less and some having significantly higher inequality. While an immediate reason is not apparent, when g and k are plotted against each other (see Fig. <ref>) they follow a straight line for smaller values of g until the limit g=k, beyond which the two values more or less stay equal. A Landau theory like phenomenological expansion proposed earlier that keeps a minimal power series expansion of the Lorenz function in the form L(p)=Ap+Bp^2, which then leading to <cit.> k=1/2+3g/8, which is very close to what is seen in Fig. <ref>. Another non-linear form for the relation between g versus k was also proposed (see <cit.>). However, it is worth emphasising at this point that for the over all state of the market, the inequality of market-cap for the individual stocks within a sector is so adjusted that there is almost a level of universal inequality. The small fraction of sectors that deviate from the near universal region also fall within a very regular relation between g and k that are widely seen elsewhere ( see e.g., <cit.>). The manifestation of remarkable regularity signal an underlying self-organized critical behavior of the markets that could be immensely helpful in understanding its dynamics. § INEQUALITY OF DAILY PRICE FLUCTUATION AND Q VALUE Here we focus on the price fluctuations in different cryptocurrencies, some prominent stocks and some national currencies. §.§ Cryptocurrencies We take the 10 most prominent cryptocurrencies in terms of their total wealth and look at the daily price fluctuation over a period of about 8 years (from 2015 or later). For each working day in a year, we take the highest (p_h) and lowest (p_l) prices and find their absolute difference |p_h-p_l|. Then for the values obtained for a year, we construct the Lorenz curve L(p) as stated above. Then the inequality indices g and k are calculated respectively, for each year, using g=1-2∫_0^1L(p)dp and solving 1-k=L(k). Also, the value of Q for a year is calculated as the ratio of the highest value of price difference in that and the average price difference throughout the year. The results are shown in Fig. <ref>. The results show that barring a couple of outliers, the Q values have generally decreased over the years for various cryptocurrencies. This suggests a stabilizing behavior in them. We will come back to this point later, while comparing with the behavior seen in national currencies. It is also seen that there is a positive correlation, indeed a possible exponential growth, between the Q values and g, k values, as seen in the semi-log plots in Fig. <ref> showing a near straight line trend. 
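For completeness, the quoted k = 1/2 + 3g/8 relation can be reconstructed from the minimal Lorenz form used in the text (this is our own re-derivation of the published result, assuming L(p) = Ap + Bp^2 and small g): the normalisation L(1) = 1 gives A + B = 1; the Gini index g = 1 - 2∫_0^1 L(p) dp = 1 - A - (2/3)B = B/3 then fixes B = 3g and A = 1 - 3g; the Kolkata condition 1 - k = L(k) becomes 3g k^2 + (2 - 3g)k - 1 = 0, and writing k = 1/2 + δ and keeping terms linear in δ gives 2δ - 3g/4 ≈ 0, i.e. k ≈ 1/2 + 3g/8. The positive correlation between Q and the inequality indices noted above is taken up next.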
This is somewhat expected in the sense that, first of all, a high inequality in the fluctuations is expected to be manifested through the different measures. Also, the Q value in the physical system of percolating clusters <cit.> has shown almost the same behavior as the usual response functions, which are known to grow significantly as g or k approach the critical point <cit.>. Here, however, the values of g and k do not rise to values high enough where they are expected to be equal. That is why the variation between g and k, also shown in Fig. <ref>, follows a robust linear trend. Other than the daily price fluctuation characteristics of the individual cryptocurrencies discussed above, we also look at the inequality in the cryptocurrency sector itself, similar to what is reported in Table <ref> for various sectors. The difference for cryptocurrencies is that there are over 2500 cryptocurrencies and many have very little market-cap. Therefore, the overall inequality is extremely high. In Fig. <ref>, we show the inequality (g, k) of the cryptocurrency wealth values for the top n-ranked cryptocurrencies, as a function of n. The values of g and k cross as n is increased, and the crossing is seen around 0.82. §.§ International stocks Given that cryptocurrencies are not regulated by a central bank, it is worthwhile to compare their behavior with that of stock prices. Of course, there is no regulation in the stock prices of private companies. Therefore, we measure the inequality of daily price fluctuations in 10 prominent international stocks using the same methods mentioned above, for a period of about 50 years. The results of the international stock price inequalities are shown in Fig. <ref>. The Q values show a much higher range of values than what is seen for cryptocurrencies. This is the first indication that, in spite of the intuition that unregulated cryptocurrencies might behave similarly to stock prices, in effect they are somewhat different. The positive correlations between Q and g, k still exist, albeit with less prominence. The linearity of the g vs k plot, however, robustly appears here as well. §.§ Indian stocks Here we do the same analysis of price fluctuation inequalities, but with 10 prominent stocks from India. Perhaps not surprisingly, we see a similar range of values for Q and similar relations for g and k (see Fig. <ref>). An interesting point to note here is that, given that the stocks are from the same country, national events are expected to be mirrored in the stock prices and their fluctuations. A tendency to peak in Indian national election years seems to be present. §.§ National currencies As mentioned before, we need to compare the fluctuation characteristics of cryptocurrencies with national currencies that are backed up by central banks. We take some of the prominent national currencies and measure their daily price fluctuations as before. Here we take the US dollar as the reference currency, i.e., all prices are measured in terms of USD; hence it is absent from the plots. Fig. <ref> shows the inequality of the daily price fluctuations for different currencies. It is interesting to note that the Q values here are closer to the ones seen for the cryptocurrencies than in the cases of the various stock prices. Indeed, by comparing Fig. <ref> and Fig. <ref>, we see that the Q values for the cryptocurrencies started off from a higher range in their initial stages, but over time show a tendency towards approaching the values seen in national currencies rather than the stock prices studied here. 
This is the most interesting property of the cryptocurrencies' price fluctuation characteristics seen here. Even though these are not regulated by any central bank, their price fluctuation characteristics tend towards those seen for other regulated currencies rather than unregulated stock prices. § DISCUSSIONS AND CONCLUSIONS The overall tendency of a multi-component interacting system to manifest emergent collective behavior is well studied in self-organized critical phenomena. Without a fine tuning of a driving parameter to a finite value, the system sets itself in a way that results in a diverging correlation and often an efficient point of operation, e.g. in the human brain <cit.>. The diverging correlation can also be manifested in terms of the inequalities of the response functions/fluctuations in such systems. The inequality indices, used to quantify the inequality of the fluctuations of responses, are then known to show near-universal signals in various social and physical systems (see e.g., <cit.>). Here we looked at the fluctuation properties of cryptocurrencies and tried to compare their dynamical tendencies with those of national currencies as well as some prominent stocks in the market. Particularly, we have seen that market-caps within various sectors show a near-universal inequality (see Table <ref>) that is reminiscent of what was observed <cit.> for the response functions (avalanches) in self-organized critical systems. This suggests a self-organized critical state of the market as a whole. Then we looked at the inequalities of daily price fluctuations in various cryptocurrencies, prominent stocks and prominent national currencies, all estimated against the US dollar (of the corresponding year) as a reference. Some observations of regularity are true across these cases: In all cases g and k show a linear relationship (k = 1/2 + 3/8g, as obtained earlier from a Landau-like expansion of the Lorenz function <cit.>), particularly towards the smaller values of g (see Fig. 2D for cryptocurrencies, Fig. 4D for international stocks, Fig. 5D for Indian stocks, and Fig. 6D for some national currencies). The inequality indices g and k show a positive correlation with the other measure of inequality, the Q factor (see Figs. 2B and 2C for cryptocurrencies, Figs. 4B and 4C for international stocks, Figs. 5B and 5C for Indian stocks, and Figs. 6B and 6C for some national currencies). What is remarkable, however, is the dynamical behavior of Q for cryptocurrencies. Unlike the intuitive expectation of varying like unregulated stocks, the cryptocurrencies, at least in terms of their Q values of daily price fluctuations, behave more like national currencies as time progresses. This signals a maturity in some of the prominent cryptocurrencies and a trend towards stable currencies. § ACKNOWLEDGEMENT This paper is dedicated to the memory of our colleague Prof. Manipushpak Mitra. BKC is grateful to the Indian National Science Academy for a Senior Scientist Research Grant. 99 nakamoto S. Nakamoto, Bitcoin: A peer-to-peer electronic cash system, 2008. https://bitcoin.org/bitcoin.pdf btc1 J. Yli-Huumo, D. Ko, S. Choi, S. Park, K. Smolander, Where Is Current Research on Blockchain Technology?—A Systematic Review, PLoS ONE 11(10): e0163477 (2016). btc2 Giovanni Santostasi, The physics of Bitcoin, https://giovannisantostasi.medium.com/btc-is-a-power-law-because-it-is-an-infinite-recursive-feedback-loop-5f54de5e501b btc3 C.R. da Cunha, R. 
da Silva, Relevant stylized facts about bitcoin: Fluctuations, first return probability, and natural phenomena, Physica A 550, 124155 (2020). collect Darko Stosic, Dusan Stosic, Teresa B. Ludermir, Tatijana Stosic, Collective behavior of cryptocurrency price changes, Physica A 507, 499 (2018). btc4 N. Vallarano, C. J. Tessone, T. Squartini, Bitcoin Transaction Networks: An Overview of Recent Results, Front. Phys. 8, 286 (2020). btc5 Jiajing Wu, Jieli Liu, Yijing Zhao, Zibin Zheng, Analysis of cryptocurrency transactions from a network perspective: An overview, Journal of Network and Computer Applications, 190, 103139 (2021). gini C. Gini, Measurement of inequality of incomes, Economics Journal 31, 124126 (1921). lor M. O. Lorenz, Methods of Measuring the Concentration of Wealth, Publ. Am. Stat. Assoc. 9, 209–219 (1905). kolkata A. Ghosh, N. Chattopadhyay, B. K. Chakrabarti, Inequality in societies, academic institutions and science journals: Gini and k-indices, Physica A 410, 3034 (2014). pareto V. Pareto, Cours D’´economie Politique. Reprinted as a Volume of Oeuvres Compl‘etes; Droz: Geneva, Switzerland, 1965; Volume 1896, Available online: <https://www.britannica.com/biography/Vilfredo-Pareto> banerjee23 S. Banerjee, S. Biswas, B. K. Chakrabarti, A.Ghosh and M. Mitra, Sandpile Universality in Social Inequality: Gini and Kolkata Measures, Entropy, 25, 735 (2023) <https://doi.org/10.3390/e25050735> succ_front A. Ghosh, S. Biswas, B. K. Chakrabarti, Success of social inequality measures in predicting critical or failure points in some models of physical systems, Front. Phys. 10, 803 (2022) <https://www.frontiersin.org/journals/physics/articles/10.3389/fphy.2022.990278/full>. manna S. S. Manna, S. Biswas, B. K. Chakrabarti, Near universal values of social inequality indices in self-organized critical models, Physica A 596, 127121 (2022). soumyaditya S. Das, S. Biswas, Critical scaling through Gini index, Phys. Rev. Lett. 131, 157101 (2023). baro24 Diksha, J Baró, S Biswas, Inequalities of energy release rates in compression of nano-porous materials predict its imminent breakdown, arXiv preprint arXiv:2406.06200 (2024) q-fact A. Ghosh, S. S. Manna, B. K. Chakrabarti, Q factor: A measure of competition between the topper and the average in percolation and in self-organized criticality, Phys. Rev. E 110, 014131 (2024). joseph22 B. Joseph, B. K. Chakrabarti, Variation of Gini and Kolkata indices with saving propensity in the Kinetic Exchange model of wealth distribution: An analytical study. Physica A: Stat Mech its Appl (2022) 594:127051 <https://doi.org/10.1016/j.physa.2022.127051>. brain Jordan O’Byrne, Karim Jerbi, How critical is brain criticality?, Trends in Neurosciences 45, 820 (2022).
http://arxiv.org/abs/2409.03592v1
20240905144836
Stabilization of Ambient Pressure Rocksalt Crystal Structure and High Critical Field Superconductivity in ReC via Mo and W Substitution
[ "P. K. Meena", "S. Jangid", "R. K. Kushwaha", "P. Manna", "S. Sharma", "P. Mishra", "R. P. Singh" ]
cond-mat.supr-con
[ "cond-mat.supr-con" ]
Department of Physics, Indian Institute of Science Education and Research Bhopal, Bhopal, 462066, India Department of Physics, Indian Institute of Science Education and Research Bhopal, Bhopal, 462066, India Department of Physics, Indian Institute of Science Education and Research Bhopal, Bhopal, 462066, India Department of Physics, Indian Institute of Science Education and Research Bhopal, Bhopal, 462066, India Department of Physics, Indian Institute of Science Education and Research Bhopal, Bhopal, 462066, India Department of Physics, Indian Institute of Science Education and Research Bhopal, Bhopal, 462066, India []rpsingh@iiserb.ac.in Department of Physics, Indian Institute of Science Education and Research Bhopal, Bhopal, 462066, India § ABSTRACT Transition-metal-based carbides (TMCs), renowned for their exceptional hardness, mechanical strength, and thermal properties, have recently emerged as promising candidates for topological superconductivity. In this study, we synthesized ReC in the NaCl structure at ambient pressure by substituting Mo or W at the Re-site. We investigated the superconducting properties of Re_1-xT_xC (where T = Mo, W) for x = 0.5 using magnetization, resistivity and specific heat measurements. These compounds display type-II, fully gapped, weakly coupled superconductivity with high critical fields, establishing them as new members of superconducting ultra-hard materials at ambient pressure and paving the way for superconducting device applications under extreme conditions. Stabilization of Ambient Pressure Rocksalt Crystal Structure and High Critical Field Superconductivity in ReC via Mo and W Substitution R. P. Singh 0000-0003-2548-231X September 9, 2024 ======================================================================================================================================= Transition-metal carbides (TMCs) are a class of refractory, ultra-hard materials with a unique combination of complex covalent, ionic, and metallic bonding, resulting in exceptional mechanical and electronic properties and making TMCs promising candidates for advanced material applications. Similar to 2D materials such as graphene and transition-metal dichalcogenides <cit.>. Renowned for their superconductivity, high melting points, outstanding strength, corrosion resistance, and excellent thermal and electrical conductivity, TMCs have found applications in cutting tools, wear-resistant coatings, and catalysts. Their ultra-hardness provides a viable alternative to diamonds, which are often costly and difficult to synthesize <cit.>. Moreover, the properties of TMCs can be tailored by controlling their valence electron concentration (VEC)<cit.>. However, the synthesis of TMCs can be challenging due to the off-stoichiometric carbon content. The superconductivity of these refractory materials makes TMCs suitable candidates for device applications in extreme conditions. Recently, rocksalt-structured superconducting TMCs have emerged as prime candidates for hosting topological superconductivity due to their non-trivial band topology <cit.>. Some of these carbides, such as XC (where X = Nb, Mo, Ta, V, and Cr) <cit.>, exhibit intriguing non-trivial features, including Fermi surface nesting, nodal line, and the semimetallic nature of Dirac <cit.>, along with a high superconducting transition temperature (T_C) compared to other reported topological superconductors. Isostructural ScS shows unconventional properties with time-reversal symmetry breaking (TRSB) <cit.>. 
Rocksalt-structured carbides like (Nb, Ta, Mo, W)C show a transition temperature (T_C) of up to 14 K <cit.>. Recent theoretical calculations also suggest the possibility of unconventional superconductivity in TMCs <cit.>. Re-based carbides are particularly notable for their exceptional high-temperature resistance and superior mechanical properties, which surpass those of diamond under high-pressure conditions, making them highly desirable for industrial applications. Although the superconductivity and ultra-incompressibility of ReC have been studied theoretically, stabilizing the rocksalt structure of ReC under ambient conditions remains a challenge <cit.>. In this paper, we report the stabilization of the NaCl-type structure at ambient pressure and the induction of superconductivity in non-superconducting ReC through Mo and W substitution. We present a comprehensive study of the structural and superconducting properties of the compounds Re_1-xT_xC (where T = Mo and W) for x = 0.5. These compounds have been predicted to exhibit remarkable mechanical properties, including ultra-incompressibility and superhardness, making them promising candidates for advanced applications <cit.>. The superconductivity of these compounds was examined by resistivity, magnetization, and specific heat measurements, revealing bulk type-II superconductivity with weakly coupled BCS characteristics. Polycrystalline Re_1-xT_xC (T = Mo, W) was synthesized using an arc melter with high-purity (4N) Mo, W, Re, and C in stoichiometric ratios of 1-x:x:1. The materials were melted in an Ar atmosphere on a water-cooled copper hearth and remelted several times for homogeneity. The crystal structure and phase purity were analyzed by powder X-ray diffraction (XRD) with CuKα radiation. Magnetic properties were measured with a Quantum Design MPMS-3, while electrical resistivity and specific heat were measured using the four-probe method and the two-tau time relaxation method on a Quantum Design PPMS 9T system. The XRD patterns of Re_1-xT_xC compounds (where T = Mo, W; Mo: x = 0.1, 0.2, 0.4, 0.5, 0.6, 0.8, 0.9 and W: x = 0.2, 0.5, 0.8) are presented in Fig. 1(a) and (e). This suggests that a single-phase NaCl structure is stabilized for x = 0.2-0.6 with Mo substitution and for x = 0.5 with W substitution. Structural refinement of the XRD patterns was performed by the Rietveld method using the FULLPROF software <cit.>, confirming a NaCl-type face-centered cubic crystal structure with space group Fm3̅m (no. 225) and the phase purity of the samples. The refined XRD patterns for Re_0.5T_0.5C (T = Mo, W) are shown in Fig. 1(b) and (f). The inset illustrates the NaCl-type structure, with mixed T and Re atoms at the 4a (0, 0, 0) site and carbon atoms at the 4b (0.5, 0.5, 0.5) site. The lattice parameters, listed in Table <ref>, are consistent with published data for compositions x = 0.5 <cit.> and slightly smaller than those of the binary carbides <cit.>. This reduction is attributed to the smaller atomic radius of Re. The observed instability with 20% W substitution compared to Mo substitution might be due to the inherently stable NaCl structure of MoC and the intrinsically unstable NaCl structure of WC under normal pressure. These findings emphasize the complex relationship between substitution levels and structural stability in these materials. Fig. 1(c) and (g) show magnetization measurements in zero-field-cooled warming (ZFCW) and field-cooled cooling (FCC) modes with a 1.0 mT applied field, which confirm superconductivity in both compounds. 
The superconducting transition temperatures (T_C) are listed in Table <ref> for Re_1-xT_xC (T = Mo, W). The weaker diamagnetic response in FCC compared to ZFCW, caused by flux trapping, confirms the type-II nature of these superconductors. The superconducting fractions exceed 100% due to the irregular sample shape and demagnetization effects <cit.>. The hysteresis magnetization loops for the x = 0.5 compositions (insets in Fig. 1(c) and (g)) further support type-II superconductivity, with H_irr values of 1.6 and 1.8 T for the Mo- and W-based carbides, respectively. Figs. <ref>(d) and (h) show the temperature-dependent AC electrical resistivity ρ(T) of the Re_0.5T_0.5C (T = Mo, W) carbides, measured from 1.9 to 300 K in zero magnetic field. Above the superconducting transition temperature (T_C), ρ(T) remains nearly constant, indicating poor metallic behavior. The residual resistivity ratios (RRR = ρ_300/ρ_10) for the Mo and W variants are 1.03(7) and 1.01(8), respectively, indicating high intrinsic disorder in the compounds. These values are similar to those reported for binary carbides and high-entropy alloys. The inset of Figs. <ref>(d) and (h) shows a drop in resistivity at T_C = 3.86(4) K for Re_0.5Mo_0.5C and 3.92(7) K for Re_0.5W_0.5C, confirming superconductivity. The observed T_C values are consistent with the magnetization measurements. The resistivity data above T_C can be well understood by the Bloch-Gruneisen (BG) model with n = 3, expressed as <cit.>, ρ(T)= ρ_0 + A (T/θ_D)^3 ∫_0^θ_D/T x^3/[(e^x-1)(1-e^-x)] dx where ρ_0 is the residual resistivity, A is the degree of electron correlation in the substance <cit.> and θ_D is the Debye temperature. Fitting the resistivity data with the BG model gives A = 2.27 μΩ.cm, θ_D = 283.06 K, ρ_0 = 142.64 μΩ.cm for Re_0.5Mo_0.5C and A = 2.32 μΩ.cm, θ_D = 287.06 K, ρ_0 = 307.12 μΩ.cm for Re_0.5W_0.5C. These values of θ_D are relatively close to those obtained from the heat capacity data (discussed later). Low-field magnetization curves were recorded at various temperatures to determine the lower critical field, H_C1(0). The onset of vortex formation and the mixed state was identified as the point of deviation from the Meissner line in the M-H curves (insets in Figs. <ref>(a) and (c)). The temperature-dependent H_C1 data was analyzed using the Ginzburg-Landau (GL) equation, expressed as: H_C1(T) = H_C1(0)[1-(T/T_C)^2], providing H_C1(0) values of 2.68(1) and 3.15(3) mT for Re_0.5T_0.5C (T = Mo, W), respectively. To calculate the upper critical field, H_C2(0), the effect of the applied magnetic field on T_C was measured through magnetization and electrical resistivity (insets in Figs. <ref>(b) and (d)). The temperature-dependent H_C2 data was analyzed using the GL equation, expressed as: H_C2(T) = H_C2(0)[(1-t^2)/(1+t^2)], where t = T/T_C, yielding H_C2(0) values of 3.60(3) and 3.92(2) T for Re_0.5Mo_0.5C, and 4.06(4) and 4.42(3) T for Re_0.5W_0.5C, from magnetization and resistivity measurements, respectively. The temperature-dependent H_C1(T) and H_C2(T) data, along with the GL fits, are shown for both samples in Figs. <ref>(a) and (c) and Figs. <ref>(b) and (d), respectively. The obtained H_C2(0) and H_C1(0) values were used to calculate the superconducting characteristic length scales, the penetration depth λ_GL and the coherence length ξ_GL <cit.>. The expressions for these parameters are given by: H_C2(0) = Φ_0/(2πξ_GL^2), H_C1(0) = Φ_0/(4πλ_GL^2)( ln(λ_GL/ξ_GL) + 0.12), where Φ_0 is the magnetic flux quantum. The obtained parameters λ_GL and ξ_GL for the Mo-variant are 5047(5) Å and 95.6(6) Å. 
For the W variant, they are 4592(7) Å and 90.1(8) Å. The derived GL parameters κ_GL (= λ_GL/ξ_GL) are 52.20(3) and 50.98(5) for Re_0.5Mo_0.5C and Re_0.5W_0.5C. These values exceed 1 /√(2), confirming these superconductors' type II nature. Furthermore, using H_C1(0), H_C2(0) and κ_GL, the thermodynamic critical field H_C was calculated using the relation: H_C^2 ln[k_GL(0)+0.08] = H_C1(0) H_C2(0) <cit.>, yielding values of 48.8(6) and 57.1(3) mT for the Mo and W-based carbides, respectively. Superconductivity can be destroyed by orbital and Pauli paramagnetic limiting field effects. The orbital limit, H_C2^orb(0), is associated with the kinetic energy of the Cooper pair. The Werthamer-Helfand-Hohenberg (WHH) theory provides the expression <cit.>: H^orb_C2(0) = -α T_C.dH_C2(T)/dT|_T=T_C. where α is the purity factor (0.69 for dirty, 0.73 for clean limits). The initial slope, -dH_C2(T)/dT|_T=T_C, was obtained from the H_C2-T phase diagram (Figs. <ref>(b) and (d)). The slope values obtained are 0.96(7) and 0.78(6) T / K for carbides doped with Mo and W, respectively. For dirty limit superconductors (α = 0.69), the calculated H_C2^orb(0) values are 2.54(7) and 2.06 (9) T for carbides based on Mo and W. The Pauli paramagnetic limiting field, H_C2^P(0), arises from the Zeeman splitting and is given by: H_C2^P(0) = 1.86*T_C for weakly coupled BCS superconductors <cit.>. The estimated H_C2^P values are 7.01(2) and 7.03(1) T for Mo and W-based carbides. Since the upper critical field is lower than the Pauli limiting field, the orbital limiting field is responsible for breaking the Cooper pairs in both compounds. To quantify spin paramagnetic effects, we calculated the Maki parameter, (α_m), using the expression α_m = √(2) H_C2^orb(0)/H_C2^P(0). The values of α_m are 0.51(2) and 0.41(4) for Mo and W variants, respectively. The Ginzburg number G_i, a ratio of thermal energy to condensation energy related to the coherence volume, is given by: G_i = 1/2[k_Bμ_0τ T_C/4 πξ^3_GL(0) H^2_C(0)]^2, where τ is an anisotropic ratio and assumes 1 for cubic structure. Using the values of T_C, ξ_GL(0) and H_C, we calculated G_i = 3.15(1) and 2.43(2) × 10^-6 for Mo and W-based carbides. These values are lower than those of high-T_C cuprate superconductors (10^-2) <cit.> but higher than those of low-T_C superconductors (10^-8) <cit.> suggesting that weak thermal fluctuations contribute to vortex unpinning in these carbides <cit.>. Specific heat measurements were performed in zero magnetic field for Re_0.5T_0.5C (T = Mo, W), as shown in Figs. <ref>(a) and (b). A clear transition from the normal to the superconducting state was observed at T_C = 3.7(1) K for both Mo- and W-based carbides, consistent with magnetization and resistivity measurements. The Debye-Sommerfeld relation, C/T = γ_n + β_3T^2, where γ_n is the Sommerfeld coefficient and β_3 is the phononic coefficient β_3, used to analyze normal state behavior. By extrapolating the normal state behavior to low temperatures, we obtained γ_n and β_3 are 6.60(2) mJ/mol K^2 and 0.219(8) mJ/mol K^4 for Re_0.5Mo_0.5C and 6.18(3) mJ/mol K^2 and 0.209(3) mJ/mol K^4 for Re_0.5W_0.5C, respectively. β_3, was used to calculate the Debye temperature, θ_D using the relation θ_D = (12π^4 R N/5 β_3)^1/3, where N is the number of atoms per formula unit and R = 8.314 J mol^-1 K^-1 is a gas constant. The calculated values for θ_D are 328(2) and 333(7) K for Mo and W-based carbides, consistent with resistivity data. 
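As a cross-check of the field and length scales quoted above, the chain from H_C2(0) and H_C1(0) to ξ_GL, λ_GL and κ_GL, and from T_C and the slope of the phase boundary to the orbital and Pauli limiting fields, can be reproduced with a short script. This is a sketch using the Mo-variant magnetization values quoted in the text; small differences from the published numbers reflect rounding and the exact T_C used:

```python
import numpy as np
from scipy.optimize import brentq

phi0 = 2.0678e-15            # magnetic flux quantum (T m^2)
Hc2, Hc1 = 3.60, 2.68e-3     # upper / lower critical fields (T), Re0.5Mo0.5C (magnetization)
Tc, slope = 3.86, 0.96       # T_C (K) and -dHc2/dT at T_C (T/K), from the text

# Ginzburg-Landau length scales
xi = np.sqrt(phi0 / (2 * np.pi * Hc2))                               # coherence length from Hc2
f = lambda lam: phi0 / (4 * np.pi * lam**2) * (np.log(lam / xi) + 0.12) - Hc1
lam = brentq(f, 1.1 * xi, 1e-5)                                      # penetration depth from Hc1
print(f"xi = {xi*1e10:.1f} A, lambda = {lam*1e10:.0f} A, kappa = {lam/xi:.1f}")

# Orbital (WHH, dirty limit) and Pauli limiting fields, Maki parameter
H_orb = 0.69 * Tc * slope
H_P = 1.86 * Tc
alpha_m = np.sqrt(2) * H_orb / H_P
print(f"H_orb = {H_orb:.2f} T, H_P = {H_P:.2f} T, alpha_m = {alpha_m:.2f}")
```

Running this gives values close to the quoted ξ_GL ≈ 96 Å, λ_GL ≈ 5000 Å, κ_GL ≈ 52, H_C2^orb ≈ 2.5 T and α_m ≈ 0.5 for the Mo variant.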
The Sommerfeld coefficient γ_n is related to the density of states at the Fermi energy, D_C(E_F), as γ_n = (π^2 k_B^2/3) D_C(E_F). Here, k_B = 1.38 × 10^-23 J K^-1 is the Boltzmann constant. The calculated values of D_C(E_F) are 2.79(8) and 2.62(2) states eV^-1 f.u.^-1 for Re_0.5T_0.5C (T = Mo, W, respectively). These values are higher than those of the topological isostructural rocksalt carbides (Nb, Ta)C <cit.>. McMillan's theory <cit.> provides a formula to calculate the dimensionless electron-phonon coupling parameter λ_ep, which represents the strength of the attractive interaction between electrons and phonons. The expression is given by: T_C = (θ_D/1.45) exp[-1.04(1+λ_ep)/(λ_ep - μ^*(1+0.62λ_ep))]; where μ^* is the repulsive screened Coulomb potential parameter (0.13 for intermetallic compounds). The estimated λ_ep values for the Mo and W variants are 0.574(2) and 0.571(8), respectively, indicating weakly coupled superconductivity. Specific heat measurements also provide insights into the superconducting gap symmetry. By subtracting the phononic terms from the total specific heat in zero field, we calculated the electronic contribution C_el. The specific heat jump ΔC_el/γ_nT_C was found to be 1.30(2) and 1.29(4) for the Mo- and W-based carbides, slightly lower than the BCS value of 1.43. The low-temperature C_el was fitted by the α-model of the isotropic, weakly coupled BCS superconductor <cit.>, represented by the equation: S/γ_n T_C= -6/π^2(Δ(0)/k_B T_C) ∫_0^∞[ fln(f)+(1-f)ln(1-f)] dy where f(ξ) = (exp(E(ξ)/k_BT)+1)^-1 is the Fermi function, and y = ξ/Δ(0), with E(ξ) = √(ξ^2+Δ^2(t)), which is the energy of normal electrons relative to the Fermi energy, where t = T/T_C is the normalized temperature. The normalized temperature-dependent gap function is written as Δ(t)=tanh[1.82(1.018(1/t-1))^0.51]. Using the relation between the entropy S and C_el, given as C_el = t dS/dt, we calculated the electronic specific heat. The superconducting gap ratio α = Δ(0)/k_BT_C was found to be 1.74(3) and 1.73(5) for the Mo- and W-based carbides, close to the BCS value of 1.76 for isotropic, weakly coupled superconductors, suggesting phonon-mediated superconductivity in these ultra-hard materials. The best fit of the isotropic BCS α-model, shown in Figs. <ref>(a) and (b), provides this ratio. Specific heat data extending down to T_C/10 and microscopic measurements are required to determine the exact pairing mechanism. To study the electronic properties of these Re-based carbides, we employed a set of equations considering a spherical Fermi surface. These equations allow us to calculate the mean free path, l_e, and the effective mass, m^* <cit.>. The equations are as follows: γ_n = (π/3)^2/3 k_B^2 m^* V_f.u. n^1/3/(ħ^2 N_A), l_e = 3π^2ħ^3/(e^2ρ_0 m^*2 v_F^2), n = (1/3π^2)(m^*v_F/ħ)^3 where k_B, V_f.u., and N_A are the Boltzmann constant, the volume of a formula unit, and the Avogadro number, respectively. For a dirty-limit superconductor, the effective magnetic penetration depth λ_GL(0) can be expressed in terms of the London penetration depth λ_L(0) as: λ_GL(0) = λ_L(1+ξ_0/l_e)^1/2, λ_L = (m^*/μ_0 n e^2)^1/2, ξ_GL(0)/ξ_0 = (π/2√(3))(1+ξ_0/l_e)^-1/2. The high value of ξ_0/l_e suggests that both Re-based carbides are dirty-limit superconductors. The calculated electronic parameters are summarized in Table <ref>. The Uemura plot helps categorize superconductors as conventional or unconventional based on the Fermi temperature and the superconducting transition temperature <cit.>. 
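Before turning to the Uemura classification, the two normal-state numbers just quoted can be reproduced in a few lines (a sketch; the μ^* = 0.13 value and the T_C ≈ 3.7 K and θ_D from the specific heat analysis are taken from the text, and the published uncertainties are ignored):

```python
import numpy as np

k_B = 1.380649e-23      # J/K
N_A = 6.02214076e23     # 1/mol
eV = 1.602176634e-19    # J

def lambda_ep(Tc, theta_D, mu_star=0.13):
    """Electron-phonon coupling from inverting the McMillan formula."""
    L = np.log(theta_D / (1.45 * Tc))
    return (1.04 + mu_star * L) / ((1 - 0.62 * mu_star) * L - 1.04)

def dos_fermi(gamma_n_mJ_per_molK2):
    """D_C(E_F) in states/(eV f.u.) from the Sommerfeld coefficient."""
    gamma = gamma_n_mJ_per_molK2 * 1e-3                 # J / (mol K^2)
    return 3 * gamma / (np.pi**2 * k_B**2 * N_A) * eV

print(f"lambda_ep = {lambda_ep(Tc=3.7, theta_D=328):.3f}")   # ~0.574 (Re0.5Mo0.5C)
print(f"D_C(E_F)  = {dos_fermi(6.60):.2f} states/eV/f.u.")   # ~2.8  (Re0.5Mo0.5C)
```

The same two functions applied to the W-variant inputs (T_C ≈ 3.7 K, θ_D = 333 K, γ_n = 6.18 mJ/mol K^2) give λ_ep ≈ 0.57 and D_C(E_F) ≈ 2.6, matching the quoted values.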
We calculated the Fermi temperature for these carbides using the equation <cit.>: k_BT_F = ħ^2/2(3π^2)^2/3n^2/3/m^*. The values obtained for carbides based on Mo and W are T_F are 3.38(6) and 3.61(4) × 10^4 K. The corresponding T_C/T_F ratios, 0.00011 and 0.00010, fall outside the unconventional band range (0.01 < T_C/T_F < 0.05) in the Uemura plot, indicating that these carbides are conventional superconductors. In conclusion, we successfully synthesized ReC, a rocksalt-type crystal structure, at ambient pressure through Mo and W doping. Magnetization, electrical resistivity, and specific heat measurements revealed bulk type-II superconductivity in Re_0.5T_0.5C (T = Mo, W) with T_C ≈ 3.8 K. These compounds exhibit face-centered-cubic NaCl-type structures and weak-coupling s-wave superconductivity. The superconducting properties of Re_0.5T_0.5C are comparable to those observed in Mo and W-based borides with the same valence electron count <cit.>. Further exploration of their mechanical properties is warranted. The combination of potential non-trivial band topology, high spin-orbit coupling elements, enhanced hardness, and high critical field superconductivity in Re_0.5T_0.5C provides valuable insights into the relationships between these properties and their potential for practical applications in extreme conditions. Given the time-reversal symmetry breaking observed in ScS <cit.>, these Re-based carbides may offer insights into the frequent occurrence of unconventional superconducting ground states in Re-based superconductors. Additionally, this study paves the way for the development of ultra-hard superconducting materials by exploring the formation of high-entropy carbides through the addition of suitable elements. Acknowledgments: P.K.M. acknowledges the financial support provided by the CSIR, Government of India, through the SRF Fellowship (Award No: 09/1020(0174)/2019-EMR-I). R.P.S. is thankful to the SERB, Government of India, for the Core Research Grant (CRG/2023/000817). 50 TMCs1 E. Wuchina, E. Opila, M. Opeka, W. Fahrenholtz, and I. Talmy, Electrochem. Soc. Interface 16, 30 (2007). TMCs2 L. E. Toth, Transition Metal Carbides and Nitrides (Academic, New York, 1971). Carbides1 V. A. Gubanov, A. L. Ivanovsky, and V. P. Zhukov (Cambridge, New York: Cambridge University Press) pp. 18, 57 (1994). Carbides2 X. Y. Yang, Y. Lu, F. W. Zheng, and P. Zhang, Chin. Phys. B 24, 116301 (2015). 2D-TMCs T. Qin, Z. Wang, Y. Wang, F. Besenbacher, M. Otyepka, and M. Dong, Nano-Micro Letters 13, 183 (2021). Legar J. M. Leger, J. Low Temp. Phys. 14, 297 (1974). TopologicalSC X. L. Qi and S. C. Zhang, Rev. Mod. Phys. 83, 1057 (2011). Majorana-fermion A. Y. Kitaev, Phys. Usp. 44, 131 (2001). TSC C. Nayak, S. H. Simon, A. Stern, M. Freedman, and S. D. Sarma, Rev. Mod. Phys. 80, 1083 (2008). Nb/TaC T. Shang, J. Z. Zhao, D. J. Gawryluk, M. Shi, M. Medarde, E. Pomjakushina, and T. Shiroka, Phys. Rev. B 101, 214518 (2020). V/CrC R. Zhan and X. Luo, J. Appl. Phys. 125, 053903 (2019). MoC1 C. I. Sathish, Y. Shirako, Y. Tsujimoto, H. L. Feng, Y. Sun, M. Akaogi, and K. Yamaura, Solid State Commun. 177, 33 (2014). TaC-hexagonal1 D. Y. Yan, M. Yang, C. X. Wang, P. B. Song, C. J. Yi, Y. G. Shi, Supercond. Sci. Tech. 34, 035025 (2021). TaC-2 Z. Cui, Y. Qian, W. Zhang, H. Weng, Z. Fang, Chinese Phys. Lett. 37, 087103 (2020). Dirac-semimetal-NbC D. Yan, D. Geng, Q. Gao, Z. Cui, C. Yi, Y. Feng, C. Song, H. Luo, M. Yang, M. Arita, S. Kumar, E. F. Schwier, K. Shimada, L. Zhao, K. Wu, H. Weng, L. Chen, X. J. Zhou, Z. 
Wang, Y. Shi, and B. Feng Phys. Rev. B 102, 205117 (2020). MoC A. Huang, A. D. Smith, M. Schwinn, Q. Lu, T. Chang, W. Xie, H. Jeng, and G. Bian, Phys. Rev. Materials 2, 054205 (2018). Nodalsemimetal N. Karn, M. Sharma, P. Sharma, G. Gurjar, S. Patnaik, and V. Awana, J. Supercond. Novel Magn. 34, 2717 (2021). Fermisurface-nesting C. Li, N. K. Ravichandran, L. Lindsay, and D. Broido, Phys. Rev. Lett. 121, 175901 (2018). ScS Arushi, R. K. Kushwaha, D. Singh, A. D. Hillier, M. S. Scheurer, and R. P. Singh, Phys. Rev. B 106, L020504 (2022). Mo/WC R. H. Willens, E. Buehler, and B. T. Matthias, Phys. Rev. 159, 327 (1967). 3d-carbides N. J. Szymanski, I. Khatri, J. GAmar, D. Gall, S. V. Khare, J. Mater. Chem. C 7, 12619 (2019). ReC4 H. Y. Gou, L. Hou, J. W. Zhang, and F. M. Gao, Appl Phys Lett 92, 241901 (2008). ReC M. Kavitha, G. S. Priyanga, R. Rajeswarapalanichamy, K. Iyakuttti, Int. J. Refract. Met. Hard Mater. 52, 219 (2015). Re4C Z. Zhao, L. Xu, L. M. Wang, B. Xu, M. Wang, Z. Liu, and J. He, Comput. Mater. Sci. 50, 1592 (2011). ReC1 S. V. Popova and L. G. Boiko, High Temp. High Pressures 3, 237 (1971). ReC2 Z. Zho, L. Cui, L. M. Wang, B. Xu, Z. Liu, D. Yu, J. He, X. F. Zhou, H. T. Wang, and Y. Tian, Cryst. Growth Des. 10, 5024 (2010). ReC-WC P. Hu, X. Xie, L. Bai, R. Zhang, X. Zhang, J. Sun, H. Dong, M. Wen, and F. Wu, J. Appl. Phys. 131, 165102 (2022). Mo/WReC2 A. C. Lawson, J. Less-Common. Met. 23, 103 (1971). ReWC2-1 A. M. Tehrani, A. O. Oliynyk, M. Parry, Z. Rizvi, S. Couper, F. Lin, L. Miyagi, T. D. Sparks, and J. Brgoch, J. Am. Chem. Soc. 140, 9844 (2018). ReWC2-2 S. G. Baird, M. Liu, H. M. Sayeed, and T. D. Sparks, arXiv:2202.02380, (2022). FULLPROF J. Rodríguez-Carvajal, Physica B 192, 55 (1993). ReC-latticeparameters S. V. Popova. Acta Crystallogr. Sect. A Cryst. Physics, Diffraction, Theor. Gen. Crystallogr. 31, 99 (1975). WC1-x Z. Abdullaeva, E. Omurzak, C. Iwamoto, H. Okudera, M. Koinuma, S. Takebe, S. Sulaimankulova, and T. Mashimo, RSC Adv. 3, 513 (2012). demagnetisation-factor A. Aharoni, J. Appl. Phys. 83, 3432 (1998). BG-Model G. Grimvall, The Electron-Phonon Interaction in Metals (North-Holland, Amsterdam, 1981). n A. Bid, A. Bora, and A. K. Raychaudhuri, Phys. Rev. B 74, 035426 (2006). Tin1 M. Tinkham, Introduction to Superconductivity, 2nd ed. (McGraw-Hill, New York, 1996). Tin2 T. Klimczuk, F. Ronning, V. Sidorov, R. J. Cava, and J. D. Thompson, Phys. Rev. Lett. 99, 257004 (2007). Tin3 D. Singh, J. A. T. Barker, A. Thamizhavel, D. McK. Paul, A. D. Hillier, and R. P. Singh, Phys. Rev. B 96, 180501(R) (2017). WHHM1 N. R. Werthamer, E. Helfand, and P. C. Hohenberg, Phys. Rev. 147, 295 (1966). WHHM2 E. Helfand and N. R. Werthamer, Phys. Rev. 147, 288 (1966). Pauli1 B. S. Chandrasekhar, Appl. Phys. Lett. 1, 7 (1962). Pauli2 A. M. Clogston, Phys. Rev. Lett. 9, 266 (1962). Gi-1 G. Blatter, M. V. Feigel’man, V. B. Geshkenbein, A. I. Larkin, and V. M. Vinokur, Rev. Mod. Phys. 66, 1125 (1994). Gi-2 A. E. Koshelev, K. Willa, R. Willa, M. P. Smylie, J. K. Bao, D. Y. Chung, M. G. Kanatzidis, W. K. Kwok, and U. Welp, Phys. Rev. B 100, 094518 (2019). Gi-3 O. Prakash, A. Thamizhavel, and S. Ramakrishnan, Supercond. Sci. Technol. 28, 115012 (2015). el-ph W. L. McMillan, Phys. Rev. 167, 331 (1968). BCS H. Padamsee, J. E. Neighbor, and C. A. Shiffman, J. Low Temp. Phys. 12, 387 (1973). 5-equations D. A. Mayoh, J. A. T. Barker, R. P. Singh, G. Balakrishnan, D. McK. Paul, and M. R. Lees, Phys. Rev. B 96, 064521 (2017). CS/US Y. J. Uemura, G. M. Luke, B. J. Sternlieb, J. H. Brewer, J. F. 
Carolan, W. N. Hardy, R. Kadono, J. R. Kempton, R. F. Kiefl, S. R. Kreitzman, P. Mulhern, T. M. Riseman, D. L. Williams, B. X. Yang, S. Uchida, H. Takagi, J. Gopalakrishnan, A. W. Sleight, M. A. Subramanian, C. L. Chien, M. Z. Cieplak, G. Xiao, V. Y. Lee, B. W. Statt, C. E. Stronach, W. J. Kossler, and X. H. Yu, Phys. Rev. Lett. 62, 2317 (1989). Tf A. D. Hillier and R. Cywinski, Appl. Magn. Reson. 13, 95 (1997). Mo-WReB Y. Cui, J. Wu, B. Liu, Q. Zhu, G. Xiao, S. Wu, G. Cao, and Z. Ren, J. Alloy. Compd. 832, 154855 (2020).
http://arxiv.org/abs/2409.02072v1
20240903172035
Spectral Representation of Cosmological Correlators
[ "Denis Werth" ]
hep-th
[ "hep-th", "astro-ph.CO", "gr-qc" ]
=15.5pt empty 0 pt 1818 Spectral Representation of 1818 Cosmological Correlators 15pt 1218Denis Werth[werth@iap.frwerth@iap.fr] Institut d'Astrophysique de Paris, Sorbonne Université, CNRS, Paris, F-75014, France Abstract Cosmological correlation functions are significantly more complex than their flat-space analogues, such as tree-level scattering amplitudes. While these amplitudes have simple analytic structure and clear factorisation properties, cosmological correlators often feature branch cuts and lack neat expressions. In this paper, we develop off-shell perturbative methods to study and compute cosmological correlators. We show that such approach not only makes the origin of the correlator singularity structure and factorisation manifest, but also renders practical analytical computations more tractable. Using a spectral representation of massive cosmological propagators that encodes particle production through a suitable iϵ prescription, we remove the need to ever perform nested time integrals as they only appear in a factorised form. This approach explicitly shows that complex correlators are constructed by gluing lower-point off-shell correlators, while performing the spectral integral sets the exchanged particles on shell. Notably, in the complex mass plane instead of energy, computing spectral integrals amounts to collecting towers of poles as the simple building blocks are meromorphic functions. We demonstrate this by deriving a new, simple, and partially resummed representation for the four-point function of conformally coupled scalars mediated by tree-level massive scalar exchange in de Sitter. Additionally, we establish cosmological largest-time equations that relate different channels on in-in branches via analytic continuation, analogous to crossing symmetry in flat space. These universal relations provide simple consistency checks and suggest that dispersive methods hold promise for developing cosmological recursion relations, further connecting techniques from modern scattering amplitudes to cosmology. 1.2 1.1 § INTRODUCTION The complexity of real physical processes is often made simple when these processes are extended to the complex plane. While thinking about particles propagating with complex momenta and scattering at imaginary angles may seem grotesque, this perspective can offer profound insights into the real world. Even more remarkably, fundamental physical principles themselves can be read off from analytic properties of scattering amplitudes. 4pt The development of off-shell methods for scattering amplitudes is deeply rooted in the power of complex analysis. This approach traces its origins back to the celebrated Kramers-Kronig relations, which link absorption and dissipation in a medium by relating the real and imaginary parts of the response function, thereby establishing the foundational connection between causality and analyticity. Building on this foundation, the complex deformation of momenta has led to significant developments, such as Berends-Giele recursion relations <cit.>, recovering tree-level amplitudes by stitching together lower-point building blocks with one leg off-shell, or crossing symmetry, revealing that different processes are related through analytic continuation. From this perspective, further rethinking kinematic constraints and symmetries has led to the discovery of fascinating internal mathematical beauty of scattering amplitudes. 
Methods such as the use of spinor-helicity variables, improved on-shell recursion relations including BCFW <cit.>, or unitarity methods and generalised cuts <cit.> not only deepen our understanding of quantum field theory but also facilitate the computation of complicated physical processes, as evidenced by recent achievements in high-loop calculations (see <cit.> for nice reviews). 4pt In the context of primordial cosmology, the foundations of cosmological correlation functions have not yet achieved the same degree of understanding. Pushing forward the state-of-the-art of computational techniques, even at first order in perturbation theory, demands gruelling mathematical dexterity, for at least three interrelated reasons. First, computing equal-time correlation functions requires integrating over the entire bulk time evolution, as time translation symmetry is lost and an asymptotically free future is absent. Second, the curved geometry of the cosmological background distorts the free propagation of particles. Finally, unlike tree-level scattering amplitudes, which are meromorphic functions of complex kinematics with branch cuts appearing only at the loop level, tree-level cosmological correlators already exhibit branch cuts. This crucial distinction arises from spontaneous particle production. 4pt Despite these challenges, several powerful techniques have recently been developed. At tree level, approaches such as bootstrapping correlators by solving boundary kinematic equations <cit.> and reformulating perturbative diagrammatic rules in Mellin space <cit.> have proven effective. The use of partial Mellin-Barnes representations <cit.>, leveraging the global dilatation symmetry of de Sitter space that replaces time translations, has further improved these methods. Additionally, numerical techniques that trace the time evolution of correlation functions have been developed to explore and carve out the space of most correlators <cit.>. At the loop order, dispersive integrals have been used to tame loop diagrams <cit.> (also see <cit.> for other techniques).[The resummation of massless loops in de Sitter space—and the associated secular infrared divergences—has a rich and extensive story that goes beyond our focus in this work.] 4pt At the same time, a deeper understanding of the structure of cosmological correlators is gradually emerging as insights from flat-space scattering amplitudes are imported to cosmology <cit.>. Essentially, flat-space amplitudes are embedded within cosmological correlators as residues on the total energy pole, when energies of the external particles add up to zero <cit.>. Similarly, on partial energy poles, when the energy of a subdiagram vanishes, correlators factorise into a product of a (shifted) lower-point correlator and a lower-point scattering amplitude. Notably, these singularities can only be reached by analytically continuing (some) external energies to negative values. Perturbative unitarity leads to a set of cutting rules <cit.>, while its non-perturbative counterpart is captured through the Källén–Lehmann representation in de Sitter space <cit.>. This opens up new possibilities to sew together simple building blocks to construct more complex correlators. Furthermore, simple cosmological correlators can be conceptualised as canonical forms of a polytope <cit.> and progress has been made in understanding the differential equations they satisfy, from which “time emerges" <cit.> (with related aspects discussed in <cit.>). 
As we will argue in this paper, employing an off-shell formulation of cosmological correlators not only makes their analytic and factorisation properties more explicit, but also streamlines practical computational techniques. 4pt Most of the analytic properties and the encoding of physical principles are phrased at the level of wavefunction coefficients, which loosely are the cosmological counterparts of scattering amplitudes.[Still, wavefunction coefficients are sensitive to total derivatives in the action and are not invariant under field redefinitions. To cure this, an S-matrix for de Sitter space has been recently proposed in <cit.>.] Yet, it remains unclear how these properties can be translated to actual observables, at least beyond leading orders in perturbation theory. Interestingly, cosmological correlators, when pushed to higher order in perturbation theory, are structurally simpler than wavefunction coefficients. For example, they share the same transcendentality as their corresponding flat-space amplitudes <cit.>. Additionally, these objects are most directly connected to cosmological data as these are the main observables. As such, they can be viewed as the cosmological analogue of cross sections. 4pt In this paper, we propose a systematic off-shell study of cosmological correlation functions. Our approach relies on a spectral representation of massive bulk-to-bulk propagators in de Sitter, as first introduced in <cit.> (with previous spectral representations in de Sitter proposed in <cit.>).[Integral representations of cosmological propagators have been proposed in e.g. <cit.>, mostly for the wavefunction coefficients. But all these simple representations do not capture particle production as they were derived either for flat-space objects or for the special case of a conformally coupled field in de Sitter.] Essentially, time ordering is replaced by an appropriate contour integral. While spontaneous particle production makes it impossible to use the usual complex energy plane because of branch cuts, the key insight is instead to shift to the complex mass domain, where the dispersive integral takes the form of a split representation of off-shell bulk-to-boundary propagators. In this plane, mode functions are analytic and the integration contour can be safely closed. This approach uses a suitable iϵ prescription to account for particle production. By incorporating particle production, unlike the usual Feynman propagator, the number of poles effectively doubles, with different Boltzmann weights that either suppress or enhance their effects. This encodes a crossing symmetry between incoming and outgoing modes separated by a Stokes line, fully capturing particle production. In the flat-space limit, where particle production is turned off, this propagator naturally reduces to the standard Feynman propagator. At the level of correlators, this off-shell propagator trivialises time integrals as they only appear in a factorised manner. 4pt On a practical level, we explicitly compute the spectral integral for the four-point function of conformally coupled scalars mediated by the tree-level exchange of massive scalars, and introduce a new, partially resummed representation for this correlator. This new approach also offers a more intuitive understanding of the underlying physics. For example, the single tower of poles that we find maps directly to the quasi-normal modes of the exchanged massive field. 
Fundamentally, as we will show, this dispersive representation of cosmological correlators clearly illustrates the analytic structure of individual contributions and their factorisation properties. It makes explicit that exchange correlators are constructed by gluing lower-point off-shell correlators with a sewing kernel that encodes both particle production (from the off-shell leg) and a tower of EFT contributions that emerge after integrating out the exchanged field. By explicitly performing this spectral integral, we set the exchanged particle on-shell, allowing us to reconstruct more complex diagrams. 4pt Finally, we leverage the fundamental properties of in-in Schwinger-Keldysh diagrammatics to derive the largest-time equation satisfied by cosmological correlators. This equation relates processes occurring on different in-in branches through the analytic continuation of external energies, mirroring crossing symmetry in flat space. This relationship is universally applicable to all cosmological correlators, regardless of the order in perturbation theory or whether the theory is unitary, and it can serve as a consistency check when computing complicated diagrams. We conclude by illustrating how the largest-time equation is satisfied through a series of explicit examples. Outline. The outline of the paper is as follows: In Sec. <ref>, we briefly review the Schwinger-Keldysh formalism and cosmological propagators, with a particular focus on encoding time ordering as a contour integral. We then introduce the spectral representation of massive propagators in de Sitter and discuss their properties. In Sec. <ref>, we derive the cosmological largest-time equation and prove it for both tree-level and loop-level graphs. For simple correlators, we show how their analytic structure, factorisation properties, and consistency requirements like the largest-time equation become clear when viewed from an off-shell perspective. In Sec. <ref>, we explicitly perform the spectral integral to obtain a new representation of a massive exchange correlator in de Sitter. We summarise our conclusions in Sec. <ref>. Notation and Conventions. We use the mostly-plus signature for the metric (-, +, +, +). Spatial three-dimensional vectors are written in boldface k. We use letters (a, b, …) for Schwinger-Keldysh indices. External and internal energies of a graph are denoted by E_i and K_i, respectively. Cosmic (physical) time is denoted by t and conformal time, such that dτ = dt/a, by τ. Throughout most of the paper, we work in the Poincaré patch of de Sitter space with the metric ṣ^2 = a^2(τ)(-τ̣^2+x^2), where a(τ) = -(Hτ)^-1 is the scale factor and H is the Hubble parameter. We will often set the Hubble scale to unity H=1. Additional definitions will be introduced as needed in the main text. § FROM TIME ORDERING TO CONTOUR INTEGRALS We begin by introducing Schwinger-Keldysh propagators, which play a crucial role in calculating cosmological correlators. We then present the associated spectral representations, which encode time ordering through contour integrals in the complex plane. §.§ Cosmological propagators We are interested in connected equal-time correlation functions of a bulk scalar field φ(t, x). The field φ should be thought of as the fluctuation around a classical background, as usual in cosmological settings. In the following, we consider a weakly coupled field theory described by a Lagrangian density ℒ[φ]. Generating functional. 
Following the standard Schwinger-Keldysh path integral formalism to compute correlation functions <cit.>, two copies φ_± of the field φ are introduced on the two branches of the in-in, with the (+) branch representing the forward evolution from an initial time t_0 to t and the (-) branch the backward evolution from t to t_0. The condition that both branches are sewn at the time t imposes that the field configurations φ_± should coincide at t, i.e. φ_+(t, x) = φ_-(t, x). Equal-time correlation functions ⟨Ω|φ̂_a_1(t, x_1) …φ̂_a_n(t, x_n)|Ω|$⟩, witha_1, …, a_n=±and where|Ω⟩is the vacuum of the fully interacting theory, are computed by taking functional derivatives ⟨Ω|φ̂_a_1(t, x_1) …φ̂_a_n(t, x_n)|Ω|=⟩1/ia_1… ia_n δ^n Z[J_+, J_-]/δ J_a_1(t, x_1) …δ J_a_n(t, x_n)|_J_±=0 , where the generating functionZ[J_+, J_-]is defined as Z[J_+, J_-] = ∫_𝒞_iϵφ_± exp(i∫_-∞^tṭ' ^̣3x (ℒ[φ_+] + J_+ φ_+ - ℒ[φ_-] - J_- φ_-)) . The contour_iϵdenotes the usual in-in contour that goes in the upper-half complex plane from-∞^+totand reverts towards-∞^-in the lower-half complex plane, with-∞^±≡-∞(1∓iϵ)andϵ>0an infinitesimal parameter. The vacuum can only be defined in the asymptotic past when fluctuations have wavelengths (in Fourier space) much smaller than the size of the cosmological horizon. As such, in Eq. (<ref>), the contour encodes the standardiϵprescription to which we will come back later. Free propagators. Resorting to perturbation theory, the full Lagrangian is split into a free partℒ_0that can be solved exactly, defining the free propagators, and an interacting partℒ_int, i.e.ℒ = ℒ_0 + ℒ_int. The first simplest object that can be defined is the free Wightman function⟨0|φ̂(t_1, x_1) φ̂(t_2, x_1)|0|$⟩ where the expectation value is over the vacuum of the free theory |0⟩. In Fourier space for the spatial coordinates only, after expanding the operator φ̂_k(t) in terms of annihilation and creation operators φ̂_k(t) = u_k(t) â_k + u_k^*(t) â^†_-k , where u_k(t) is the mode function, the Wightman function is given by (k; t_1, t_2) = u_k(t_1) u_k^*(t_2) . We have used invariance under spatial translations and rotations for a homogeneous and isotropic background so that the function only depends on a single 3-momentum magnitude k ≡ |k|. Although this object does not itself have a natural physical interpretation, it is the building block for the four types of Schwinger-Keldysh propagators _ab(k; t_1, t_2) where a, b = ±, defined as _++(k; t_1, t_2) = (k; t_1, t_2) Θ(t_1-t_2) + ^*(k; t_1, t_2) Θ(t_2-t_1) , _+-(k; t_1, t_2) = ^*(k; t_1, t_2) , _-+(k; t_1, t_2) = (k; t_1, t_2) , _–(k; t_1, t_2) = ^*(k; t_1, t_2) Θ(t_1-t_2) + (k; t_1, t_2) Θ(t_2-t_1) . These propagators being not independent is a consequence of the Wightman function satisfying the property (k; t_2, t_1) = ^*(k; t_1, t_2), as can be seen from Eq. (<ref>). From these “bulk-to-bulk" propagators, one can define “bulk-to-boundary" propagators by sending t_2→ t (for t_1<t) and using the condition that the two in-in branches coincide at the external time φ_+(t, x) = φ_-(t, x). They read _+(k; t_1) = ^*(k; t_1, t) , _-(k; t_1) = (k; t_1, t) . Following the diagrammatic rules exposed in <cit.>, we represent these propagators with a black dot 0pt[line width=1. pt, scale=2] [fill=black] (0, 0) circle (.05cm); denoting +, a white dot 0pt[line width=1. pt, scale=2] [fill=white] (0, 0) circle (.05cm); denoting -, and a square 0pt[line width=1. 
pt, scale=2] [fill=white] ([xshift=0pt,yshift=-1.5pt]1, 0) rectangle ++(3pt,3pt); denoting the boundary at external time t where + and - are indistinguishable: [c] 0pt[line width=1. pt, scale=2] [fill=black] (0, 0) circle (.05cm) node[above=0.5mm] t_1; [black] (0.05, 0) – (1, 0); [fill=black] (1, 0) circle (.05cm) node[above=0.5mm] t_2; = _++(k; t_1, t_2) , 0pt[line width=1. pt, scale=2] [fill=black] (0, 0) circle (.05cm) node[above=0.5mm] t_1; [black] (0.05, 0) – (1, 0); [fill=white] (1, 0) circle (.05cm) node[above=0.5mm] t_2; = _+-(k; t_1, t_2) , 0pt[line width=1. pt, scale=2] [fill=white] (0, 0) circle (.05cm) node[above=0.5mm] t_1; [black] (0.05, 0) – (1, 0); [fill=black] (1, 0) circle (.05cm) node[above=0.5mm] t_2; = _-+(k; t_1, t_2) , 0pt[line width=1. pt, scale=2] [fill=white] (0, 0) circle (.05cm) node[above=0.5mm] t_1; [black] (0.05, 0) – (1, 0); [fill=white] (1, 0) circle (.05cm) node[above=0.5mm] t_2; = _–(k; t_1, t_2) , [c] 0pt[line width=1. pt, scale=2] [fill=black] (0, 0) circle (.05cm) node[above=0.5mm] t_1; [black] (0.05, 0) – (1, 0); [fill=white] ([xshift=0pt,yshift=-1.5pt]1, 0) rectangle ++(3pt,3pt); = _+(k; t_1) , 0pt[line width=1. pt, scale=2] [fill=white] (0, 0) circle (.05cm) node[above=0.5mm] t_1; [black] (0.05, 0) – (1, 0); [fill=white] ([xshift=0pt,yshift=-1.5pt]1, 0) rectangle ++(3pt,3pt); = _-(k; t_1) . A double-coloured dot 0pt[line width=1. pt, scale=2] (0, 0) circle (.05cm); [black] (0,0) – (90:0.05) arc (90:270:0.05) – cycle; means either a black or a white dot. §.§ Multiple propagators: a matter of contour The previously defined propagators are nothing but Green's functions of the free equation of motion with appropriate boundary conditions, and therefore can be written as contour integrals. Feynman propagator. To illustrate how a frequency-space representation of the propagators can be constructed and for the sake of simplicity, let us first consider a massless scalar field φ in flat space. We will upgrade the presented integral representations of propagators to massive fields in de Sitter space in the next section. The corresponding positive-frequency mode function reads u_k(t) = e^-ik t√(2k). The generalisation to massive fields is straightforward after setting k→ω_k=√(k^2+m^2). The key insight is to realise that time ordering can be written as a frequency-space integral using the mathematical identity e^-ik(t_1 - t_2)Θ(t_1-t_2) + e^+ik(t_1 - t_2)Θ(t_2-t_1) = -2k ∫_-∞^+∞ω̣/2iπe^iω(t_1-t_2)/(ω^2 - k^2)_iϵ , where the iϵ prescription is implemented as follows 1/(ω^2-k^2)_iϵ≡1/2k[1/ω - (k-iϵ) - 1/ω - (-k+iϵ)] = 1/ω^2-k^2+iϵ , with ϵ>0 an infinitesimal positive parameter, and where in the second equality we have dropped terms of order 𝒪(ϵ^2) and set 2kϵ = ϵ, which is valid in the limit ϵ→ 0. This procedure effectively sets the two poles slightly off the real axis in the complex plane. To see that we indeed recover the correct Feynman propagator, we suppose that t_1>t_2 and close the contour in the upper-half complex plane. The first term in Eq. (<ref>) gives zero and only the second term selects the residue, effectively setting the exponential on shell, i.e. e^iω(t_1-t_2)→ e^-ik(t_1-t_2). One can proceed in the same way for t_1<t_2, this way closing the contour in the lower-half complex plane, to find e^ik(t_1-t_2). 
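As a quick cross-check of this contour argument, the surviving residue can be picked up with a computer algebra system. The following minimal sketch is our own illustration (not part of the original derivation); the symbol Deltat stands for t_1 - t_2 > 0.

```python
import sympy as sp

# Deltat = t1 - t2 > 0: the contour closes in the upper-half plane, where only
# the pole of 1/(omega^2 - k^2 + i*eps) sitting near omega = -k contributes.
w, k, dt = sp.symbols("omega k Deltat", positive=True)
res = sp.residue(sp.exp(sp.I * w * dt) / (w**2 - k**2), w, -k)

# Right-hand side of the identity: -2k * (1/(2*i*pi)) * (2*pi*i) * Res = -2k * Res
print(sp.simplify(-2 * k * res))   # exp(-I*Deltat*k), i.e. the theta(t1 - t2) branch
```

Closing the contour downwards for t_1 < t_2 instead picks up the pole near ω = +k and returns the complex-conjugate phase, as stated above.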
Importantly, this requires that the mode function should be analytic, which is valid here for the exponential in the entire complex plane, and that the integrand decays at least faster than 1/ω when ω→∞ so that the semi-circle at infinity does not contribute. 4pt In the end, the time-ordered propagator _++ can be re-written as _++(k; t_1, t_2) = i ∫_-∞^+∞ω̣/2πũ^*_ω(t_1) ũ_ω(t_2)/(ω^2 - k^2)_iϵ , where ũ_k(t) = e^-ikt is the mode function after stripping off the overall normalisation. Naturally, we have _–(k; t_1, t_2) = _++^*(k; t_1, t_2). Some comments are in order. First, the use of tilted mode functions ũ_k(t) avoids introducing unnecessary branch cuts in the integrand coming from √(k), thereby preserving its analytic property. Second, both times t_1 and t_2 appear factorised so that time integrals within correlators become trivial. This property will be illustrated with concrete examples in Sec. <ref>. Finally, the iϵ prescription is just a simple way for representing time ordering and specifying a pole prescription. Equivalently, one may deform the contour slightly from -∞^- = -∞(1+iϵ) to +∞^+ = +∞(1+iϵ), while keeping the poles on the real axis. We show in App. <ref> how this iϵ prescription can be recovered directly from the Schwinger-Keldysh path integral. Retarded and advanced propagators. Various Green's functions with different boundary conditions can be recovered from the usual integral representation of the Feynman propagator. This can be easily done by deforming and combining diverse integration contours. For example, using the standard identity satisfied by the Heaviside function Θ(t_1-t_2) + Θ(t_2-t_1) = 1, the time-ordered propagator _++ can be written as _++(k; t_1, t_2) = ^*(k; t_1, t_2) + [(k; t_1, t_2) - ^*(k; t_1, t_2)]Θ(t_1-t_2) . The first term is the non-time-ordered propagator _+-(k; t_1, t_2) and therefore is factorised in time, whereas the second one is nothing but the causal retarded Green's function _R(k; t_1, t_2). In terms of integration contours, Eq. (<ref>) reads [line width=1. pt, scale=2] [black, -] (-0.5,0) – (0.5,0) coordinate (xaxis); [black, -] (0,-0.4) – (0,0.4) coordinate (yaxis); [black, fill = black] (0.2, -0.1) circle (.02cm); [black, fill = black] (-0.2, 0.1) circle (.02cm); [pyred, draw, line width = 1pt, postaction = decorate, decoration=markings, mark=at position 0.7 with [line width=1pt]>] (-0.4, 0) – (0.4, 0); = [line width=1. pt, scale=2] [black, -] (-0.5,0) – (0.5,0) coordinate (xaxis); [black, -] (0,-0.4) – (0,0.4) coordinate (yaxis); [black, fill = black] (0.2, 0) circle (.02cm); [black, fill = black] (-0.2, 0) circle (.02cm); [pyred, draw, line width = 1pt, postaction = decorate, decoration=markings, mark=at position 0.4 with [line width=1pt]>] (0.3, 0) arc (0:-360:0.1) – (0.3, 0); - [line width=1. pt, scale=2] [black, -] (-0.5,0) – (0.5,0) coordinate (xaxis); [black, -] (0,-0.4) – (0,0.4) coordinate (yaxis); [black, fill = black] (0.2, 0) circle (.02cm); [black, fill = black] (-0.2, 0) circle (.02cm); [pyred, draw, line width = 1pt, postaction = decorate, decoration=markings, mark=at position 0.7 with [line width=1pt]>] (-0.4, -0.2) – (0.4, -0.2); . Importantly, non-local processes lead to a vanishing retarded Greens's function. This observation has consequences for the cosmological case, and enables to extract various signals from correlators with different origin. 
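As a small explicit illustration (writing G for the Wightman function introduced above), for a massless field in flat space one has G(k; t_1, t_2) = e^{-ik(t_1-t_2)}/(2k), so the retarded Green's function appearing in the decomposition above reads [G(k; t_1, t_2) - G^*(k; t_1, t_2)] Θ(t_1-t_2) = -(i/k) sin[k(t_1-t_2)] Θ(t_1-t_2) . It is built purely from the field commutator, is supported only on t_1 > t_2, and the real non-local pieces mentioned above drop out of it.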
Similarly, one can flip the second Heaviside function in the expression of _++ to obtain _++(k; t_1, t_2) = (k; t_1, t_2) + [^*(k; t_1, t_2) - (k; t_1, t_2)]Θ(t_2-t_1) , which, in terms of integration contours, reads [line width=1. pt, scale=2] [black, -] (-0.5,0) – (0.5,0) coordinate (xaxis); [black, -] (0,-0.4) – (0,0.4) coordinate (yaxis); [black, fill = black] (0.2, -0.1) circle (.02cm); [black, fill = black] (-0.2, 0.1) circle (.02cm); [pyred, draw, line width = 1pt, postaction = decorate, decoration=markings, mark=at position 0.7 with [line width=1pt]>] (-0.4, 0) – (0.4, 0); = [line width=1. pt, scale=2] [black, -] (-0.5,0) – (0.5,0) coordinate (xaxis); [black, -] (0,-0.4) – (0,0.4) coordinate (yaxis); [black, fill = black] (0.2, 0) circle (.02cm); [black, fill = black] (-0.2, 0) circle (.02cm); [pyred, draw, line width = 1pt, postaction = decorate, decoration=markings, mark=at position 0.4 with [line width=1pt]>] (-0.1, 0) arc (0:360:0.1) – (-0.1, 0); - [line width=1. pt, scale=2] [black, -] (-0.5,0) – (0.5,0) coordinate (xaxis); [black, -] (0,-0.4) – (0,0.4) coordinate (yaxis); [black, fill = black] (0.2, 0) circle (.02cm); [black, fill = black] (-0.2, 0) circle (.02cm); [pyred, draw, line width = 1pt, postaction = decorate, decoration=markings, mark=at position 0.7 with [line width=1pt]>] (-0.4, 0.2) – (0.4, 0.2); . §.§ Spectral representation of massive cosmological propagators We now show how the previous integration contour prescription is modified when taking into account spontaneous particle production. For concreteness in what follows, we consider a real massive scalar field in de Sitter, see App. <ref> for more details.[We consider fields in the principal series m/H≥ 3/2 so that μ≡√(m^2/H^2 - 9/4) is real. In particular, we will not deal with late-time secular divergences appearing when exchanging light fields.] Contour prescription. Spontaneous particle production manifests itself as non-analyticity in the complex energy domain k→ω once the momentum is set off shell. This renders impossible to construct an integral representation of massive cosmological propagators for the reason that implementing time ordering requires crossing the branch cut. This challenge can be alleviated by instead analytically continuing the massive mode function to the complex μ plane, allowing the field to acquire any complex mass, as shown in <cit.>. The integral representation of the time-ordered massive cosmological propagator reads[The spectral parameter ν should not be confused with ν = iμ that is commonly used to label light fields in the complementary series.] _++(k; τ_1, τ_2) = i ∫_-∞^+∞ν̣ _ν u_k^*(τ_1, ν) u_k^*(τ_2, ν)/(ν^2 - μ^2)_iϵ , where _ν≡νπsinh(πν) is the de Sitter density of states, u_k^*(τ, μ) is the negative-frequency massive mode function defined in (<ref>), and the contour prescription to select the correct pole structure is given by[The explicit pole structure (and the corresponding residues) can be immediately obtained from the usual flat-space prescription (<ref>) or by noticing that 1/ν^2 - μ^2 + iϵ = Γ(iν-iμ-ϵ)Γ(-iν-iμ-ϵ)/Γ(iν-iμ-ϵ+1)Γ(-iν-iμ-ϵ+1) . ] 1/(ν^2-μ^2)_iϵ≡1/2sinh(πμ)[e^+πμ/ν^2-μ^2+iϵ - e^-πμ/ν^2-μ^2-iϵ] . Naturally, the anti-time ordered propagator is given by _–(k; τ_1, τ_2) = _++^*(k; τ_1, τ_2). The detailed derivation of (<ref>) is presented in App. <ref>. 
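To make the pole structure of this prescription fully explicit, it is useful to expand it in partial fractions. To leading order in ϵ (relabelling 2μϵ → ϵ, as done in the flat-space case), one finds 1/(ν^2-μ^2)_iϵ = 1/(4μ sinh(πμ)) [ e^+πμ/(ν-μ+iϵ) - e^+πμ/(ν+μ-iϵ) - e^-πμ/(ν-μ-iϵ) + e^-πμ/(ν+μ+iϵ) ] , so that the poles at ν = μ-iϵ and ν = -μ+iϵ carry the weight e^+πμ, while the mirrored poles at ν = μ+iϵ and ν = -μ-iϵ carry e^-πμ. This doubling of poles with Boltzmann-like weights encodes spontaneous particle production and is represented diagrammatically below; in the flat-space limit the e^-πμ poles decouple and one recovers the usual two-pole structure.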
The integral over the spectral parameter ν can be viewed as an integration over the scaling dimension Δ = 32+iν in the principal series that labels infinite-dimensional (scalar) representations of the de Sitter group, which defines eigenvalues of the quadratic Casimir operator Δ (3-Δ), see e.g. <cit.>. In this sense, this integral is closely related to the de Sitter Källén–Lehmann representation albeit in spatial Fourier space <cit.>. Such spectral representation has its analogue in anti-de Sitter space, see e.g. <cit.>, and in other curved geometries <cit.>. Flat-space limit. The spectral representation (<ref>) extends the familiar flat-space prescription (<ref>) by capturing spontaneous particle production while at the same time having a well-defined flat-space limit. Indeed, in the limit H→0, the Boltzmann suppression is turned off e^-πμ→0 as μ∼ m/H for a fixed mass. The additional unusual poles ν = ±μ± iϵ vanish and the contour prescription reduces to the flat-space one (<ref>) 1/(ν^2-μ^2)_iϵ1/ν^2-μ^2+iϵ . Following the derivation of (<ref>) presented in App. <ref>, for τ_1 being in the future of τ_2, and using limiting forms given in App. <ref>, the massive propagator reduces to _++(k; τ_1, τ_2) = i/2∫_-∞^+∞ν̣ ν J_iν(-kτ_1) H_iν^(2)(-kτ_2)/ν^2-μ^2+iϵ = π/2 J_iμ(-kτ_1) H_iμ^(2)(-kτ_2) e^-iμ(t_1-t_2)/√(2μ) , where we have selected the single pole at ν = μ-iϵ by closing the contour in the lower-half complex plane, and have set H=1. We recover the properly-normalised flat-space propagator composed of plane waves oscillating in cosmic time t. In the flat-space limit, this propagator captures the free super-horizon evolution of the massive field when spatial gradients are negligible, leading to a dispersion relation dominated by the mass ω_k ∼μ. This provides insight into why analytically continuing the mass, rather than the momentum, allows for defining a spectral representation of massive fields. Particle production. Particle production of massive fields manifests itself as the emergence of negative-frequency modes over time, when initialised with a positive-frequency mode as selected by the Bunch-Davies vacuum. This process leads to mode mixing, reflected in a non-zero average particle number observed at later times. The Bessel function defines the mode function in the infinite future when the massive field reaches a steady state, as v_k(τ, μ) = Γ(1+iμ)/√(2μ)(k/2)^-iμ J_iμ(-kτ) e^-iμ t/√(2μ) . Notice in passing that the late-time limit τ→0 equivalently also corresponds to the large-mass limit μ→∞ as can easily recovered using (<ref>) and the Stirling formula Γ(z) ∼ e^-z z^z √(2π/z). This is not surprising as both variables are conjugate to each other. Decomposing this outgoing mode function onto the basis of ingoing ones, we obtain v_k(τ, μ) = α_k u_k(τ, μ) + β_k u_k^*(τ, μ) , where the Bogolyubov coefficients are α_k = Γ(1+iμ)/√(2πμ)(k/2)^-iμ e^+πμ/2-iπ/2 , β_k = Γ(1+iμ)/√(2πμ)(k/2)^-iμ e^-πμ/2+iπ/2 , from which we obtain the mean particle density of “out" excitations with wavenumber k in the “in" vacuum state |β_k|^2 = 1/(e^2πμ-1). We recover the usual Bose-Einstein distribution, when identifying the energy with the rest mass μ in a thermal bath set by the de Sitter temperature T_dS = 1/2π. Local and non-local parts. The iϵ prescription (<ref>) projects ingoing particle states onto outgoing states, which enables the mixing of positive- and negative-frequency mode functions. 
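Before turning to the local/non-local split, a quick numerical cross-check of the particle-production discussion above may be useful. The sketch below is our own (the value μ = 1.7 is arbitrary); it verifies that the Bogolyubov coefficients quoted above satisfy |α_k|^2 - |β_k|^2 = 1 and reproduce the Bose-Einstein occupation.

```python
import mpmath as mp

mu = mp.mpf("1.7")                                        # arbitrary principal-series mass
# the (k/2)^{-i mu} phases drop out of the moduli, so k does not appear below
norm = abs(mp.gamma(1 + 1j * mu))**2 / (2 * mp.pi * mu)   # |Gamma(1 + i mu)|^2 / (2 pi mu)
alpha2 = norm * mp.e**(+mp.pi * mu)                       # |alpha_k|^2
beta2 = norm * mp.e**(-mp.pi * mu)                        # |beta_k|^2

print(mp.chop(alpha2 - beta2 - 1))                        # 0: canonical normalisation
print(mp.chop(beta2 - 1 / (mp.e**(2 * mp.pi * mu) - 1)))  # 0: Bose-Einstein occupation
```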
This is made manifest by rewriting the off-shell ingoing mode function in terms of off-shell outgoing ones using the connection formula (<ref>), so that the propagator can be written <cit.> _++(k; τ_1, τ_2) = _++^local(k; τ_1, τ_2) + _++^non-local(k; τ_1, τ_2) , where both contributions read _++^local(k; τ_1, τ_2) = +iπ/4∫_-∞^+∞ν̣ _ν/sinh^2(πν) J_iν(-kτ_1) J_iν^*(-kτ_2) + (ν↔-ν)/(ν^2-μ^2)_iϵ , _++^non-local(k; τ_1, τ_2) = -iπ/4∫_-∞^+∞ν̣ _ν/sinh^2(πν) e^-πνJ_iν(-kτ_1) J_iν(-kτ_2) + (ν↔-ν)/(ν^2-μ^2)_iϵ . The local part is interpreted as the accumulated dynamical phase of a single particle propagating since in the late-time limit we have J_iν(-kτ_1) J_iν^*(-kτ_2) ∼sinh(πν)/πν(τ_1/τ_2)^iν∝ e^-iν(t_1-t_2) . Notably, the local part is independent of the momentum so that it describes correlation at coincident points in position space. Assuming τ_1>τ_2, we close the contour in the lower-half complex plane for the term J_iν(-kτ_1) J_iν^*(-kτ_2) and in the upper-half complex plane for the complex conjugate piece, which leads to _++^local(k; τ_1, τ_2) = π/4sinh^2(πμ)[e^+πμ J_iμ(-kτ_1)J_iμ^*(-kτ_2) + e^-πμ J_iμ^*(-kτ_1)J_iμ(-kτ_2)] Γ(+iμ)Γ(-iμ)/4π[e^+πμ(τ_1/τ_2)^+iμ + e^-πμ(τ_1/τ_2)^-iμ] . For the reverse time ordering, the contours should be closed in the opposite directions for both contributions. The doubling of poles projects the off-shell positive-frequency mode into positive- and negative-frequency ones as (τ_1τ_2)^± iν→ e^±πμ(τ_1τ_2)^± iμ + e^∓πμ(τ_1τ_2)^∓ iμ, which then combine to reconstruct the correct mode mixing. This is illustrated in Fig. <ref>. On the other hand, the non-local part corresponds to the creation and the propagation of two particles since J_iν(-kτ_1) J_iν(-kτ_2) ∼1/Γ(1+iν)^2(k^2τ_1τ_2/4)^iν∝ e^-iν(t_1-t_⋆) e^-iν(t_2-t_⋆) , in the late-time limit, where t_⋆=log k is the pair production (cosmic) time. Being non-analytic in the momentum, this piece describes long-range non-local correlations in position space. Similarly to the local contribution, we close the contour in the lower-half complex plane for the term e^-πνJ_iν(-kτ_1) J_iν(-kτ_2) and in the opposite direction for the second one. We obtain _++^non-local(k; τ_1, τ_2) = π/4sinh^2(πμ)[J_iμ(-kτ_1)J_iμ(-kτ_2) + J_iμ^*(-kτ_1)J_iμ^*(-kτ_2)] 1/4π[Γ(-iμ)^2 (kτ_1τ_2/4)^+iμ + Γ(+iμ)^2 (kτ_1τ_2/4)^-iμ] . Importantly, at late times, the choice of closing the contour is completely fixed so that time ordering does not matter. The Feynman propagator _++ effectively reduces to the Wightman function . Non-local processes being real, as can be explicitly seen from the last equation, they do not enter the advanced or retarded Green's functions. This reflects the fact that the endpoints of a soft propagator are in space-like separation. With the spectral representation of the time-ordered propagator, subtracting the commutator of field operators simply amounts to selecting the residues at the poles in the contour prescription _++^non-local(k; τ_1, τ_2) = _++(k; τ_1, τ_2) - _R(k; τ_1, τ_2) , [line width=1. 
pt, scale=2] [black, -] (-0.5,0) – (0.5,0) coordinate (xaxis); [black, -] (0,-0.4) – (0,0.4) coordinate (yaxis); [black, fill = black] (0.2, 0.1) circle (.02cm); [black, fill = black] (0.2, -0.1) circle (.02cm); [black, fill = black] (-0.2, 0.1) circle (.02cm); [black, fill = black] (-0.2, -0.1) circle (.02cm); [pyred, draw, line width = 1pt, postaction = decorate, decoration=markings, mark=at position 0.4 with [line width=1pt]>] (0.3, -0.1) arc (0:-360:0.1) – (0.3, -0.1); [pyred, draw, line width = 1pt, postaction = decorate, decoration=markings, mark=at position 0.4 with [line width=1pt]>] (-0.1, -0.1) arc (0:-360:0.1) – (-0.1, -0.1); = [line width=1. pt, scale=2] [black, -] (-0.5,0) – (0.5,0) coordinate (xaxis); [black, -] (0,-0.4) – (0,0.4) coordinate (yaxis); [black, fill = black] (0.2, 0.1) circle (.02cm); [black, fill = black] (0.2, -0.1) circle (.02cm); [black, fill = black] (-0.2, 0.1) circle (.02cm); [black, fill = black] (-0.2, -0.1) circle (.02cm); [pyred, draw, line width = 1pt, postaction = decorate, decoration=markings, mark=at position 0.7 with [line width=1pt]>] (-0.4, 0) – (0.4, 0); - [line width=1. pt, scale=2] [black, -] (-0.5,0) – (0.5,0) coordinate (xaxis); [black, -] (0,-0.4) – (0,0.4) coordinate (yaxis); [black, fill = black] (0.2, 0.1) circle (.02cm); [black, fill = black] (0.2, -0.1) circle (.02cm); [black, fill = black] (-0.2, 0.1) circle (.02cm); [black, fill = black] (-0.2, -0.1) circle (.02cm); [pyred, draw, line width = 1pt, postaction = decorate, decoration=markings, mark=at position 0.7 with [line width=1pt]>] (-0.4, -0.2) – (0.4, -0.2); . Of course, selecting the poles in upper-half complex plane requires subtracting the advanced Green's function _A. This contour deformation is completely equivalent to the bulk cutting rules that extract non-local signals <cit.>. Conformally coupled field. The special case of a conformally coupled field in de Sitter m^2=2H^2 is related to the flat-space one by a conformal transformation. Physically, particle production of a conformally coupled field in de Sitter is effectively turned off so that its mode function is analytic in the entire complex k plane. Therefore, from the flat-space propagator (<ref>), one immediately obtains _++(k; τ_1, τ_2) = i τ_1 τ_2 ∫_-∞^+∞ω̣/2πũ^*_ω(τ_1) ũ_ω(τ_2)/(ω^2 - k^2)_iϵ , where ũ_k(τ) = e^-ikτ and where we have set H=1. § COSMOLOGICAL LARGEST-TIME EQUATION Unitarity at the quantum level, often described as the conservation of probability, imposes powerful constraints on observables, even when the full set of states in the Hilbert space is unknown. At a perturbative level, unitarity implies relations between cosmological correlators <cit.> and conserved quantities under unitary time evolution <cit.>. These relations are analogous to the flat-space optical theorem <cit.>. It has been shown that these relations can be organised into a set of cutting rules <cit.>, which stem from the Hermitian analytic nature of propagators and the straightforward factorisation of a combination of bulk-to-bulk propagators, bearing resemblance to the standard S-matrix Cutkosky rules, see e.g. <cit.> for a recent review. However, these cutting rules were derived for the quantum mechanical wavefunction of the universe, as relations satisfied by the corresponding wavefunction coefficients. 4pt In this section, we derive similar relations directly at the level of cosmological correlators, and relaxing the assumption of unitarity. 
We show that individual in-in branch components of cosmological correlators, once external legs are properly analytically continued to negative energies on the negative branch, satisfy the largest-time equation <cit.>. We then showcase how this equation is satisfied for relatively simple diagrams. §.§ Propagator identity from an off-shell perspective Using the spectral representation of bulk-to-bulk propagators presented in the previous section, deriving the largest-time equation exactly parallels that for the flat-space S-matrix <cit.>. The essence of this equation lies in a specific trivial identity satisfied by propagators. Using the standard distributional identity 1/x+iϵ - 1/x-iϵ = -2iπδ(x) , in the limit ϵ→ 0, which follows from the Sokhotski-Plemelj identity, the flat-space bulk-to-bulk propagators (<ref>) satisfy the known relation[In addition, we have used the following identity for the delta Dirac function δ(ω^2 - k^2) = 1/2k [δ(ω+k) + δ(ω-k)] , for k>0.] _++(k; t_1, t_2) + _–(k; t_1, t_2) = _+-(k; t_1, t_2) + _-+(k; t_1, t_2) , that is, since _–(k; t_1, t_2) = _++^*(k; t_1, t_2), the real part does not contain time ordering. In terms of integration contour, this identity reads [line width=1. pt, scale=2] [black, -] (-0.5,0) – (0.5,0) coordinate (xaxis); [black, -] (0,-0.4) – (0,0.4) coordinate (yaxis); [black, fill = black] (0.2, -0.1) circle (.02cm); [black, fill = black] (-0.2, 0.1) circle (.02cm); [pyred, draw, line width = 1pt, postaction = decorate, decoration=markings, mark=at position 0.7 with [line width=1pt]>] (-0.4, 0) – (0.4, 0); - [line width=1. pt, scale=2] [black, -] (-0.5,0) – (0.5,0) coordinate (xaxis); [black, -] (0,-0.4) – (0,0.4) coordinate (yaxis); [black, fill = black] (0.2, 0.1) circle (.02cm); [black, fill = black] (-0.2, -0.1) circle (.02cm); [pyred, draw, line width = 1pt, postaction = decorate, decoration=markings, mark=at position 0.7 with [line width=1pt]>] (-0.4, 0) – (0.4, 0); = [line width=1. pt, scale=2] [black, -] (-0.5,0) – (0.5,0) coordinate (xaxis); [black, -] (0,-0.4) – (0,0.4) coordinate (yaxis); [black, fill = black] (0.2, 0) circle (.02cm); [black, fill = black] (-0.2, 0) circle (.02cm); [pyred, draw, line width = 1pt, postaction = decorate, decoration=markings, mark=at position 0.4 with [line width=1pt]>] (0.3, 0) arc (0:-360:0.1) – (0.3, 0); - [line width=1. pt, scale=2] [black, -] (-0.5,0) – (0.5,0) coordinate (xaxis); [black, -] (0,-0.4) – (0,0.4) coordinate (yaxis); [black, fill = black] (0.2, 0) circle (.02cm); [black, fill = black] (-0.2, 0) circle (.02cm); [pyred, draw, line width = 1pt, postaction = decorate, decoration=markings, mark=at position 0.4 with [line width=1pt]>] (-0.1, 0) arc (0:-360:0.1) – (-0.1, 0); , with the additional factor i imposing the correct closed contour orientation. Without branch cuts in the energy domain, the additional identity _–(k; t_1, t_2) = -_++(-k; t_1, t_2)—which can be found directly from the frequency-space representation using 1/(ω^2 - k^2)_iϵ→1/ω^2-k^2-iϵ as k→ -k or from the explicit split representation in terms of mode functions—enables us to interpret the combination of propagators (<ref>) as cutting the internal line of a diagram on the same in-in branch since time integrals factorise. At the level of correlators, the additional input of unitarity relates individual diagrams to their complex conjugate version (with flipped external energies) on the same branch. 
Consequently, the latest-time equation we derive should be understood as only a direct outcome of locality, as it is fundamentally determined by the structure of propagators. This general property remains valid in the presence of dissipation, fluctuations and non-unitary time evolution. 4pt In de Sitter space, we have seen in Sec. <ref> that the effect of particle production amounts to double the usual poles while assigning distinct weights e^±πμ to the associated residues. From the spectral representation (<ref>), it is easy to observe that bulk-to-bulk propagators also satisfy the identity (<ref>), which, in terms of integration contours, is illustrated by [line width=1. pt, scale=2] [black, -] (-0.5,0) – (0.5,0) coordinate (xaxis); [black, -] (0,-0.4) – (0,0.4) coordinate (yaxis); [black, fill = black] (0.2, 0.1) circle (.02cm); at (0.35, 0.3) e^-πμ; [black, fill = black] (0.2, -0.1) circle (.02cm); at (0.35, -0.3) e^+πμ; [black, fill = black] (-0.2, 0.1) circle (.02cm); at (-0.35, 0.3) e^+πμ; [black, fill = black] (-0.2, -0.1) circle (.02cm); at (-0.35, -0.3) e^-πμ; [pyred, draw, line width = 1pt, postaction = decorate, decoration=markings, mark=at position 0.7 with [line width=1pt]>] (-0.4, 0) – (0.4, 0); - [line width=1. pt, scale=2] [black, -] (-0.5,0) – (0.5,0) coordinate (xaxis); [black, -] (0,-0.4) – (0,0.4) coordinate (yaxis); [black, fill = black] (0.2, 0.1) circle (.02cm); at (0.35, 0.3) e^+πμ; [black, fill = black] (0.2, -0.1) circle (.02cm); at (0.35, -0.3) e^-πμ; [black, fill = black] (-0.2, 0.1) circle (.02cm); at (-0.35, 0.3) e^-πμ; [black, fill = black] (-0.2, -0.1) circle (.02cm); at (-0.35, -0.3) e^+πμ; [pyred, draw, line width = 1pt, postaction = decorate, decoration=markings, mark=at position 0.7 with [line width=1pt]>] (-0.4, 0) – (0.4, 0); = [line width=1. pt, scale=2] [black, -] (-0.5,0) – (0.5,0) coordinate (xaxis); [black, -] (0,-0.4) – (0,0.4) coordinate (yaxis); [black, fill = black] (0.2, 0) circle (.02cm); at (0.35, 0.3) e^+πμ; [black, fill = black] (-0.2, 0) circle (.02cm); at (-0.35, 0.3) e^-πμ; [pyred, draw, line width = 1pt, postaction = decorate, decoration=markings, mark=at position 0.4 with [line width=1pt]>] (0.3, 0) arc (0:-360:0.1) – (0.3, 0); [pyred, draw, line width = 1pt, postaction = decorate, decoration=markings, mark=at position 0.4 with [line width=1pt]>] (-0.1, 0) arc (0:-360:0.1) – (-0.1, 0); - [line width=1. pt, scale=2] [black, -] (-0.5,0) – (0.5,0) coordinate (xaxis); [black, -] (0,-0.4) – (0,0.4) coordinate (yaxis); [black, fill = black] (0.2, 0) circle (.02cm); at (0.35, 0.3) e^-πμ; [black, fill = black] (-0.2, 0) circle (.02cm); at (-0.35, 0.3) e^+πμ; [pyred, draw, line width = 1pt, postaction = decorate, decoration=markings, mark=at position 0.4 with [line width=1pt]>] (0.3, 0) arc (0:-360:0.1) – (0.3, 0); [pyred, draw, line width = 1pt, postaction = decorate, decoration=markings, mark=at position 0.4 with [line width=1pt]>] (-0.1, 0) arc (0:-360:0.1) – (-0.1, 0); . Crucially, note that particle production poles change their respective weight under complex conjugation. Sending μ→∞, one immediately recovers the flat-space limit (<ref>).[The combination _K ≡_+++_– is nothing but the standard Keldysh propagator. The advanced, retarded and Keldysh propagators can be recovered after performing a rotation in the φ_± field basis, introducing a new pair of fields φ_r = 1√(2)(φ_+ + φ_-) and φ_a = 1√(2)(φ_+ - φ_-), which is referred to the Keldysh basis <cit.>. 
The advantage of this basis is that the physical interpretation of the various propagators becomes transparent. Indeed, _R and _A characterise the dynamical properties of the particles in the system under consideration, while _K characterises their statistical distribution and encodes the mean particle density <cit.>.] The identity (<ref>) can of course be recovered from the form of the propagators written in the usual time-ordered representation (<ref>) using Θ(t_1-t_2)+Θ(t_2-t_1)=1. Here, we have derived this identity from an off-shell perspective. §.§ Largest-time equation Our aim is to translate the propagator identity (<ref>) to the level of correlators, resulting in an identity satisfied by individual branch contributions of cosmological correlators. In the context of flat-space scattering amplitudes, this identity is known as the largest-time equation <cit.>.[Consider F({x_i}) an amputated (real-space) Feynman diagram with n vertices in flat space. Let us perform the following manipulations on this diagram: * Duplicate the Feynman diagram 2^n times by colouring vertices either in black or in white in all possible ways. * For each vertex, a black-coloured vertex brings a factor i, and a white-coloured one brings a factor (-i). * The propagator between two black vertices is the usual Feynman propagator Δ_F(x-y) = Θ(x^0-y^0)Δ^+(x-y) + Θ(y^0-x^0)Δ^-(x-y), where Δ^±(x-y) are the usual positive and negative frequency commutation functions, while the propagator between two white vertices is taken to be Δ_F^*(x-y). The propagator between black and white vertices is replaced by Δ^+(x-y), and Δ^-(x-y) for white and black vertices. Clearly, the all-black diagram is the usual Feynman diagram while the all-white one is its complex conjugate. The flat-space largest-time equation states that the sum of all these contributions must vanish, F({x_i}) + F^*({x_i}) + F̃({x_i}) = 0 , where F denotes the usual Feynman diagram, F^* its complex conjugate, and F̃ the sum of all other 2^n-2 contributions. This is often taken as the starting point to derive recursion relations, see e.g. <cit.>, or cutting rules, see e.g. <cit.>. From the above colouring rules, it comes as no surprise that the Schwinger-Keldysh diagrammatic rules, and therefore cosmological correlators, are the most natural objects in which to phrase the largest-time equation. ] Graph. Computing correlators boils down to evaluating a set of multi-nested time integrals, whose number equals the number of vertices. Thus, instead of working with correlators, we define a graph with V vertices and N internal lines by its collection of external energies {E_i} (i = 1, …, V) and internal energies {K_j} (j = 1, …, N). The resulting graph integrals are found after removing the overall coupling constants and all overall factors of 1/2k from the bulk-to-boundary and bulk-to-bulk propagators, hence defining the rescaled propagators _a(k; t) ≡ 2k _a(k; t) and _ab(k; t, t') ≡ 2k _ab(k; t, t') . We also multiply each graph integral by (-i)^V. Of course, in practice, the external energies are functions of the external momentum magnitudes. The main motivation is that the number of bulk-to-boundary propagators attached to a vertex, which we take to be massless in flat space and conformally coupled in de Sitter, does not matter, as they satisfy the trivial relation ∏_i=1^n _a(k_i; t) = _a(∑_i=1^n k_i; t) , where n is the number of external legs of a vertex. The sum of all external momentum magnitudes is defined to be the energy of the vertex, i.e. k_1 + … + k_n = E.
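For instance, with the flat-space massless mode functions used above and the external time set to t = 0, the rescaled bulk-to-boundary propagators reduce to the pure phases e^{± i k t'}, so that a vertex with n external legs simply contributes ∏_{i=1}^n e^{± i k_i t'} = e^{± i E t'}. This is the exponential that weights each vertex in the graph integrals below, and it is the reason why only the total vertex energy E matters.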
Diagrammatically, we represent a graph the same way as a correlator (with each vertex carrying a Schwinger-Keldysh index) albeit with external legs removed, and instead labelling each vertex with its energy. For example, a one-site graph integral in flat space reads 0pt[line width=1. pt, scale=2] (0, 0) circle (.05cm) node[above=1mm] E; [black] (0,0) – (90:0.05) arc (90:270:0.05) – cycle; ≡ (-i) ∑_a=±a∫_-∞^a^0ṭ e^i aEt , which represents any contact correlator with arbitrary number of external legs. We also label internal lines with their corresponding internal energies. A simple illustrative example. Before stating the general largest-time equation, we illustrate the consequence of the identity (<ref>) on a simple example, namely the two-site chain. Even though the precise nature of the field φ, interactions, and the background is unimportant for what follows, we consider polynomial self-interactions in flat space and set coupling constants to unity. 4pt First, let us consider the sum of both time-ordered contributions with flipped external energies on the negative branch 0pt [line width=1. pt, scale=2] [fill=black] (0, 0) circle (.05cm) node[above=0.5mm] E_1; [fill=black] (1, 0) circle (.05cm) node[above=0.5mm] E_2; [black] (0.05, 0) – node[above] K (1, 0); + 0pt [line width=1. pt, scale=2] [fill=white] (0, 0) circle (.05cm) node[above=0.5mm] -E_1; [fill=white] (1, 0) circle (.05cm) node[above=0.5mm] -E_2; [black] (0.05, 0) – node[above] K (0.95, 0); . It is given by -∫_-∞^+^0 ṭ_1 ∫_-∞^+^0 ṭ_2 e^i E_1 t_1[_++(K; t_1, t_2) + _–(K; t_1, t_2)] e^i E_2 t_2 . Note that flipping external energies amounts to exchanging both in-in branches. As such, the iϵ prescription that makes the integrals converge needs to be modified. Therefore, we have also deformed the lower bound of the time integrals accordingly so that both graph integrals can be combined. Using the propagator identity (<ref>) and flipping the external energies (and the corresponding lower bound of the time integral) for each integral appearing on the negative branch, we directly obtain -∫_-∞^+^0 ṭ_1 ∫_-∞^-^0 ṭ_2 e^i E_1 t_1_+-(K; t_1, t_2) e^-i E_2 t_2 - ∫_-∞^-^0 ṭ_1 ∫_-∞^+^0 ṭ_2 e^-i E_1 t_1_-+(K; t_1, t_2) e^i E_2 t_2 , which we recognise as -0pt [line width=1. pt, scale=2] [fill=black] (0, 0) circle (.05cm) node[above=0.5mm] E_1; [fill=white] (1, 0) circle (.05cm) node[above=0.5mm] -E_2; [black] (0.05, 0) – node[above] K (0.95, 0); - 0pt [line width=1. pt, scale=2] [fill=white] (0, 0) circle (.05cm) node[above=0.5mm] -E_1; [fill=black] (1, 0) circle (.05cm) node[above=0.5mm] E_2; [black] (0.05, 0) – node[above] K (0.95, 0); . Finally, let us group the factorised contributions in (<ref>) with the nested ones on the same side. The resulting diagrammatic formula reads 0pt [line width=1. pt, scale=2] [fill=black] (0, 0) circle (.05cm) node[above=0.5mm] E_1; [fill=black] (1, 0) circle (.05cm) node[above=0.5mm] E_2; [black] (0.05, 0) – node[above] K (1, 0); + 0pt [line width=1. pt, scale=2] [fill=white] (0, 0) circle (.05cm) node[above=0.5mm] -E_1; [fill=white] (1, 0) circle (.05cm) node[above=0.5mm] -E_2; [black] (0.05, 0) – node[above] K (0.95, 0); +0pt [line width=1. pt, scale=2] [fill=black] (0, 0) circle (.05cm) node[above=0.5mm] E_1; [fill=white] (1, 0) circle (.05cm) node[above=0.5mm] -E_2; [black] (0.05, 0) – node[above] K (0.95, 0); + 0pt [line width=1. 
pt, scale=2] [fill=white] (0, 0) circle (.05cm) node[above=0.5mm] -E_1; [fill=black] (1, 0) circle (.05cm) node[above=0.5mm] E_2; [black] (0.05, 0) – node[above] K (0.95, 0); = 0 . Note that in this simple flat-space example, the rotation direction to negative energies does not matter as the mode functions are analytic. However, for massive fields in de Sitter space, one needs to specify the analytic continuation to negative energies so that it does not cross any branch cuts. In particular, energies should be rotated in opposite directions for the two branches _+(e^-iπ k; t) = _-(k; t) , _-(e^+iπ k; t) = _+(k; t) . In most of the cases, we are interested in graphs with analytic external mode functions so that this prescription to evade the branch cut and define negative energies remains implicit. Largest-time equation. The previous simple example actually generalises to arbitrary graphs. In words, the sum of all individual graph contributions with negative external energies on the negative branch vanishes. More formally and at tree-level, individual graph integral contributions can be written as _a_1 …a_V({E_i}, {K_j}) ≡ (-i)^V ∫∏_i=1^V ṭ_i f_i(t_i) _a_i(E_i; t_i) ∏_j=1^I _a_j a_j+1(K_j; t_j, t_j+1) , where V is the number of vertices, I ≡ V-1 is the number of internal lines (at tree-level), E_i and K_j denote external and internal energies, respectively, and f_i(t_i) are form factors associated to each vertex that account for possible (time-dependent) couplings. We leave the dependence on momenta implicit. The labels a_j and a_j+1 are to be understood as two consecutive Schwinger-Keldysh vertex indices that match those of the bulk-to-boundary propagators a_i (for a_j) and a_i+1 (for a_j+1). Taking the values ±, they indicate on which in-in branch the corresponding vertices are located. With these definitions, the latest-time equation can be simply stated as the following schematic identity ∑_a_1, …, a_V = ±_a_1 …a_V({a_i E_i}, {K_j}) = 0 . The above sum contains 2^V terms. In the presence of spatial derivative interactions, parity-violating interactions, or with external spinning fields, correlators can explicitly depend on external momenta k_i, that can be contracted with e.g. polarisation tensors. In such cases, analytically continuing the energies k_i → -k_i also implies flipping the signs of the corresponding momenta k_i → - k_i.[It should be pointed out that the latest-time equation can be understood as a consequence of the following combinatoric operator identity <cit.> ∑_k=0^n (-1)^k ∑_σ∈Π(k, n-k)T̅[Ø(t_σ_1) ⋯Ø(t_σ_k)] T[Ø(t_σ_k+1) ⋯Ø(t_σ_n)] = 0 , where T̅ and T are the (anti-)time-ordering operators, and Π(k, n-k) denotes the set of bi-partitions of {1, …, n} with size k and n-k. This “operator optical theorem" can be proved by induction. This means that the identity relating various in-in branches holds more generally at the operator level.] Proof at tree-level. To prove this identity for tree-level graphs, we use an inductive approach, as any tree graph can be constructed iteratively by successively adding internal lines. We follow the same lines as in the original work <cit.> where Cutkosky rules were derived for amputated Feynman diagrams, also see <cit.>. 4pt As seen above, the largest-time equation is satisfied for the simple two-site graph (the even more simple case of a one-site graph is trivial). 
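Indeed, for the one-site graph introduced above the check is a one-line computation: the (+) branch gives -i ∫_{-∞^+}^0 dt e^{iEt} = -1/E, while the (-) branch evaluated at the flipped energy -E (with the correspondingly flipped iϵ deformation that makes the integral converge) gives +i ∫_{-∞^+}^0 dt e^{iEt} = +1/E, so the two contributions cancel identically.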
We now assume that the largest-time equation (<ref>) is valid for a tree-level graph with V vertices and show that it remains valid when we add an additional vertex to the graph. To this end, given a graph with V vertices, we choose a vertex (say the farthest right one) and denote its corresponding external energy by E_i. Of course, this vertex carries ± labels. For what follows, it is useful to introduce some diagrammatic notation. We denote by the grey circle [line width=1. pt, scale=2] [fill=gray] (0, 0) circle (0.3cm); (0.3, 0) circle (.05cm) node[right=1mm] E_i; [black] (0.3,0) – (0.3, 0.05) arc (90:270:0.05) – cycle; , the sum of all individual graph contributions with V vertices, where the picked vertex 0pt[line width=1. pt, scale=2] (0, 0) circle (.05cm); [black] (0,0) – (90:0.05) arc (90:270:0.05) – cycle; has been brought forward. We now attach to it an additional vertex with external energy E and label the internal energy flowing through the bulk-to-bulk propagator by K. This doubles the number of individual contributions, i.e. from 2^V to 2^(V+1). As in the simple illustrative example treated above, we now consider the special combination of attaching both time-ordered bulk-to-bulk propagators with negative energies on the negative branch, which pictorially is represented by [line width=1.
pt, scale=2] [fill=gray] (0, 0) circle (0.3cm); [fill=black] (0.3, 0) circle (.05cm) node[above right=0.1mm] E_i; [fill=white] (1.3, 0) circle (.05cm) node[above=0.5mm] -E; [black] (0.35, 0) – node[below] K (1.25, 0); - [line width=1. pt, scale=2] [fill=gray] (0, 0) circle (0.3cm); [fill=white] (0.3, 0) circle (.05cm) node[above right=0.1mm] -E_i; [fill=black] (1.3, 0) circle (.05cm) node[above=0.5mm] E; [black] (0.35, 0) – node[below] K (1.35, 0); . Combining all terms on the same side, we finally obtain the desired diagrammatic identity. Since the simplest graph explicitly verifies this identity, the largest-time equation is proved for all tree-level graphs. Loop-level diagrams. Cuts, in the broad sense, are at the heart of modern flat-space scattering amplitude methods, especially at the loop level. For cosmological correlators, we will see here that the largest-time equation naturally generalises to loop graphs, as this identity stems from general properties of bulk-to-bulk propagators. 4pt A loop diagram can essentially be generated from a tree-level diagram by merging two vertices. However, this process involves addressing two key points. First, the propagator identity (<ref>), which lies at the core of the largest-time equation at tree-level, must be extended to accommodate products of propagators. Second, loop diagrams contain unconstrained momenta flowing in the loops that need to be integrated over, which often results in divergences. As we are not primarily interested in loops, we will not explicitly perform loop integrals nor paying attention to regularising them. Instead, we will show how the largest-time equation holds at the loop level for a few simple graphs. 4pt One-site graph. Let us consider the following simplest one-site graph at one-loop level with the vertex energy E [line width=1. pt, scale=2] [fill=white] (0, 0) circle (0.3cm); (0.3, 0) circle (.05cm) node[right=1mm] E; [black] (0.3,0) – (0.3, 0.05) arc (90:270:0.05) – cycle; . This diagram consists of two distinct contributions that are complex conjugate to each other. As usual, we sum these contributions after flipping the energy on the negative branch, resulting in [line width=1. pt, scale=2] [fill=white] (0, 0) circle (0.3cm); [fill=black] (0.3, 0) circle (.05cm)node[right=1mm] E; + [line width=1. pt, scale=2] [fill=white] (0, 0) circle (0.3cm); [fill=white] (0.3, 0) circle (.05cm)node[right=1mm] -E; = -i∫_-∞^+^0ṭ e^iE t∫^̣3ℓ/(2π)^3[_++(ℓ; t, t) - _–(ℓ; t, t)] . This combination of graphs vanishes as both propagators are indistinguishable at equal times, i.e. _++(ℓ; t, t) = _–(ℓ; t, t). Notice that this straightforwardly extends to all one-site graphs to any loop order. 4pt Two-site graph. We then consider the two-site graph at one-loop level [line width=1. pt, scale=2] [fill=white] (0, 0) circle (0.3cm); (-0.3, 0) circle (.05cm) node[left=1mm] E_1; [black] (-0.3,0) – (-0.3, 0.05) arc (90:270:0.05) – cycle; (0.3, 0) circle (.05cm) node[right=1mm] E_2; [black] (0.3,0) – (0.3, 0.05) arc (90:270:0.05) – cycle; . Each contribution contains a product of bulk-to-bulk propagators. The key insight is to realise that bulk-to-bulk propagators satisfy ∏_i=1^L+1_++(k_i, t_1, t_2) +∏_i=1^L+1_–(k_i, t_1, t_2) = ∏_i=1^L+1_+-(k_i, t_1, t_2) +∏_i=1^L+1_-+(k_i, t_1, t_2) , where L denotes the number of loops. Of course, this relation is also valid for the rescaled (tilted) propagators. For L=0, we recover the identity (<ref>) that was used in the tree-level case. 
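Before proving this relation in general, one can quickly verify it numerically. The following minimal sketch is ours; it checks the L = 1 case with flat-space massless Wightman functions at an arbitrarily chosen kinematic point (the momenta and times below are illustrative values, not quantities from the text).

```python
import numpy as np

def G(k, t1, t2):                       # massless flat-space Wightman function
    return np.exp(-1j * k * (t1 - t2)) / (2 * k)

def Gpp(k, t1, t2):                     # time-ordered (++) propagator
    return G(k, t1, t2) if t1 >= t2 else np.conj(G(k, t1, t2))

k1, k2, t1, t2 = 0.7, 1.9, -2.3, -0.4
# LHS: (++)(++) + (--)(--); the (--) product is the conjugate of the (++) one
lhs = Gpp(k1, t1, t2) * Gpp(k2, t1, t2) + np.conj(Gpp(k1, t1, t2) * Gpp(k2, t1, t2))
# RHS: (+-)(+-) + (-+)(-+), i.e. conjugated and unconjugated Wightman products
rhs = np.conj(G(k1, t1, t2)) * np.conj(G(k2, t1, t2)) + G(k1, t1, t2) * G(k2, t1, t2)
print(np.isclose(lhs, rhs))             # True
```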
Proving this identity is simply a matter of manipulating and rearranging time orderings. For unequal times t_1 ≠ t_2, one further requires the use of the trivial formula Θ(t_1-t_2)Θ(t_2-t_1) = 0, and at equal times t_1 = t_2, one should notice that the Wightman function is real (k_i, t, t) = ^*(k_i, t, t). The special value Θ(0)=1/2 is used. At one-loop order L=1, we therefore have [line width=1. pt, scale=2] [fill=white] (0, 0) circle (0.3cm); [fill=black] (-0.3, 0) circle (.05cm)node[left=1mm] E_1; [fill=black] (0.3, 0) circle (.05cm)node[right=1mm] E_2; + [line width=1. pt, scale=2] [fill=white] (0, 0) circle (0.3cm); [fill=white] (-0.3, 0) circle (.05cm)node[left=1mm] -E_1; [fill=white] (0.3, 0) circle (.05cm)node[right=1mm] -E_2; = -i ∫_-∞^+^0ṭ_1 e^iE_1t_1ṭ_2 e^iE_2t_2 ×∫^̣3 ℓ/(2π)^3[_++(l; t_1, t_2) _++(|k-l|; t_1, t_2) + _–(l; t_1, t_2) _–(|k-l|; t_1, t_2)] . Using (<ref>), we recognise the sum of non-time-ordered contributions with flipped energies on the negative branch - [line width=1. pt, scale=2] [fill=white] (0, 0) circle (0.3cm); [fill=black] (-0.3, 0) circle (.05cm)node[left=1mm] E_1; [fill=white] (0.3, 0) circle (.05cm)node[right=1mm] -E_2; - [line width=1. pt, scale=2] [fill=white] (0, 0) circle (0.3cm); [fill=white] (-0.3, 0) circle (.05cm)node[left=1mm] -E_1; [fill=black] (0.3, 0) circle (.05cm)node[right=1mm] E_2; , albeit with an overall minus sign. Putting all terms on the same side, we recover the largest-time equation. Using (<ref>) for L>2, this naturally generalises to any loop order. §.§ Application to simple diagrams The primary challenge in performing perturbative calculations of cosmological correlators lies in the presence of nested time integrals, which are generally difficult to evaluate. However, as we have seen previously, time-ordering can be interpreted as a specific choice of contour to set the exchanged fields on shell. We can do this by considering a suitable integral representation for the bulk-to-bulk propagators, so that the off-shell propagators appear explicitly factorised. 4pt Here, before turning to correlators in a cosmological background which are of primary interest in Sec. <ref>, we first consider correlators in flat space.[In flat-space, we are interested in computing equal-time correlators at a fixed time t = 0 so that interactions are building up during the time interval -∞ < t < 0. For concreteness, we consider a theory containing a single self-interacting massless scalar field φ. At tree level, the generalisation to (possible different) massive fields is straightforward after setting k→√(k^2+m^2). However, adding a mass to a mediated particle inside a loop is much more complicated.] In this case, the corresponding mode functions are analytic in the time (or energy) domain, and the frequency-space representation of the bulk-to-bulk propagator takes the exact same form as the usual Feynman propagator. Therefore, conventional flat-space methods can be used to compute correlators and reveal their analytic structure. In practice, once the factorised time integrals are evaluated, the remaining frequency integral can be finished with the residue theorem. We illustrate this off-shell factorisation with the simplest of all exchanged diagrams, and explicitly show that the computed correlators consistently satisfy the largest-time equation. Two-site chain. 
Let us start with the simplest two-chain diagram[Concretely, for the interaction Ł = -g3!φ^3, the s-channel tree-level exchange correlator reads ⟨φ_k_1φ_k_2φ_k_3φ_k_4|'⟩ = g^2/16k_1k_2k_3k_4∑_a, b = ±_ab(E_1=k_12, E_2=k_34, K=s) , where s = |k_1+k_2| is the magnitude of the exchange momentum, and k_ij = k_i+k_j.] 0pt [line width=1. pt, scale=2] (0, 0) circle (.05cm) node[above=1mm] E_1; [black] (0,0) – (0, 0.05) arc (90:270:0.05) – cycle; (1, 0) circle (.05cm) node[above=1mm] E_2; [black] (1,0) – (1, 0.05) arc (90:270:0.05) – cycle; [black] (0.05, 0) – node[above] K (1, 0); = ∑_a, b = ±_ab , _ab = -ab∫_-∞^a^0 ṭ_1 ∫_-∞^b^0 ṭ_2 e^iaE_1 t_1_ab(K; t_1, t_2) e^ibE_2 t_2 . In order to compute the time integrals, the iϵ prescription in the in-in contour can be translated to a shift in energy through the formula ∫_-∞^±^0 ṭ e^± izt = ∫_-∞^0 ṭ e^± i(z ∓ iϵ)t = ∓ i/z∓ iϵ , for a real positive energy z>0. This is equivalent to Wick rotating time, as is usually done. However, it gives the correct pole prescription for the later frequency integration when computing the nested contribution. In the limit ϵ→0, the unnested integral is simply given by _+- = ∫_-∞^+^0 ṭ_1 e^i(E_1+K)t_1∫_-∞^-^0 ṭ_2 e^-i(E_2+K)t_2 = 1/E_L E_R , where E_L ≡ E_1+K and E_R ≡ E_2+K are the sum of energies at the left and right vertices. Note that this contribution has only partial energy poles when E_L, R→0, which emerge from the factorised form. Since the result is purely real, we have _-+ = _+-. Similarly, introducing the frequency-space representation of the time-ordered bulk-to-bulk propagator (<ref>) and performing the time integrals using (<ref>), the time-ordered contribution reads _++ = -2K∫_-∞^+∞ω̣/2iπ1/(E_1+ω-iϵ)(ω^2-K^2)_iϵ(E_2-ω-iϵ) . This representation makes it explicit that both time integrals factorise and that time ordering is encoded in the additional frequency integral. The symmetry under E_1 ↔ E_2 can be restored after changing variables ω→ -ω. The analytic structure of the integrand in the complex ω plane exhibits several interesting features that we show in Fig. <ref>: (i) the pole ω = +E_2-iϵ is the off-shell collinear (folded) limit, (ii) the pole ω = -E_1+iϵ is the off-shell (left) partial energy pole, and (iii) the poles ω = ± (K-iϵ) are the on-shell conditions. Notice that poles that can only be reached after analytically continuing the energies to the unphysical kinematic configuration (after setting the exchanged particle on shell) are located in the upper-half complex ω plane. Therefore, selecting for example the physical on-shell condition requires closing the contour in the lower-half complex plane, picking the residues at ω=+K-iϵ and ω=+E_2-iϵ, we obtain _++ = 1/E(1/E_L + 1/E_R) , where E ≡ E_1+E_2 is the total energy of the graph. Of course, closing the contour upwards yields the same result. This shows that partial energy singularities reached for unphysical kinematic configurations are related to folded singularities reached for physical ones by different time ordering choices. In this representation, the total energy pole E→0 is made explicit through the identity 1/(E_1+ω-iϵ)(E_2-ω-iϵ) = 1/E(1/E_1+ω-iϵ + 1/E_2-ω-iϵ) , and partial energy poles are reconstructed from the frequency integral. By reality of the result, we have _– = _++. Summing all contributions, the final result reads ({E_i}, K) = f({E_i}, K)/E E_L E_R , with f({E_i}, K) = 4(E_1+E_2+K) . 
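As a simple algebraic cross-check (our own sketch, with variable names chosen for readability), the four contributions computed above can be summed symbolically and indeed reproduce this result:

```python
import sympy as sp

E1, E2, K = sp.symbols("E1 E2 K", positive=True)
E, EL, ER = E1 + E2, E1 + K, E2 + K

G_pm = 1 / (EL * ER)              # factorised (+-) contribution; (-+) is identical
G_pp = (1 / EL + 1 / ER) / E      # nested (++) contribution; (--) is identical

total = 2 * G_pp + 2 * G_pm
print(sp.simplify(total - 4 * (E1 + E2 + K) / (E * EL * ER)))   # 0
```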
This graph integral is a rational function of the energies and has a singularity when the total energy vanishes E→0, and when partial energies flowing into the left or right vertices add up to zero E_L, R→0, see e.g. <cit.> for more details. The residue at the total energy poles is the corresponding scattering amplitude <cit.>. Indeed, using the identity E_L+E_R=2K in the limit E→0, the graph integral can be written lim_E → 0 14K ({E_i}, K) = /E , where = 1- is the corresponding s-channel scattering amplitude with =-(k_1^μ + k_2^μ)^2=E_1^2-K^2 being the usual Mandelstam variable. Note that in order to extract the scattering amplitude in Eq. (<ref>), we have imposed the energy conservation condition E_1+E_2=0—valid in the asymptotic past—to write K^2-E_1^2 = E_L E_R. Eventually, the identity (<ref>) makes it explicit that the graph integrals _ab satisfy the largest-time equation _++(E_1, E_2, K) + _–(-E_1, -E_2, K) + _+-(E_1, -E_2, K) + _-+(-E_1, E_2, K) = 0 . Three-site chain. We now examine the three-site chain, which presents additional complexities compared to the two-site chain. Notably, this requires evaluating two frequency integrals for the fully nested contribution. The corresponding graph reads 0pt [line width=1. pt, scale=2] (0, 0) circle (.05cm) node[above=1mm] E_1; [black] (0,0) – (0, 0.05) arc (90:270:0.05) – cycle; (1, 0) circle (.05cm) node[above=1mm] E_2; [black] (1,0) – (1, 0.05) arc (90:270:0.05) – cycle; [black] (0.05, 0) – node[above] K_1 (1, 0); (2, 0) circle (.05cm) node[above=1mm] E_3; [black] (2,0) – (2, 0.05) arc (90:270:0.05) – cycle; [black] (1.05, 0) – node[above] K_2 (2, 0); = ∑_a, b, c = ±_abc , with _abc = i abc∫_-∞^a^0 ṭ_1 ∫_-∞^b^0 ṭ_2 ∫_-∞^c^0 ṭ_3 e^iaE_1t_1 _ab(K_1; t_1, t_2) e^ibE_2t_2_bc(K_2; t_2, t_3) e^icE_3t_3 . We have in total 2^3=8 integrals to evaluate, among which half of them are related by complex conjugation. Using Eq. (<ref>) and the previous result for evaluating a single frequency integral, the fully factorised and partially nested contributions read _+-+ = -1/E_L E_M E_R , _++- = -1/E̅_LE_R(1/E_L + 1/E_M) , _-++ = -1/E_L E̅_R(1/E_M + 1/E_R) , where E_L ≡ E_1+K_1, E_M ≡ E_2+K_1+K_2 and E_R ≡ E_3+K_2 are the sum of energies at the left, middle and right vertices, respectively, and E̅_L ≡ E_1+E_2+K_2 and E̅_R ≡ E_2+E_3+K_1 are the energies of the joined two left and right vertices, respectively. Similar integrals were computed in <cit.> for the corresponding wavefunction coefficient. After using the frequency-space representation for the time-ordered bulk-to-bulk propagators and evaluating the time integrals, the fully nested contribution reads _+++ = -(2K_1)(2K_2) ∫_-∞^+∞ω̣_1/2iπω̣_2/2iπ1/(ω_1^2-K_1^2)_iϵ(ω_2^2-K_2^2)_iϵ ×1/(E_1+ω_1-iϵ)(E_2-ω_1+ω_2-iϵ)(E_3-ω_2-iϵ) . This double integral can be evaluated after expanding the integrand into partial fractions, rendering the off-shell poles manifest, and then picking up residues. We obtain _+++ = -1/E(1/E_2+E_3+K_1-1/E_2+E_3-K_1)(1/E_3+K_2-1/E_3-K_2) -1/(E_3-K_2)(E_1+E_2+K_2)(1/E_2+K_1+K_2 -1/E_2-K_1+K_2) - 1/(E_1+K_1)(E_2+E_3-K_1)(1/E_3+K_2-1/E_3-K_2) - 1/(E_1+K_1)(E_2-K_1+K_2)(E_3-K_2) . Notice that it is important to keep track of the correct iϵ pole prescription to collect the correct residues. Eventually, since all contributions are real, the remaining ones are identical. Summing all contributions, the graph integral is ({E_i}, {K_i}) = f({E_i}, {K_i})/E E_L E̅_L E_M E̅_R E_R , with f({E_i}, {K_i}) = -8[E̅_LE_M E̅_R + E_M(E_1E̅_L+E_3 E̅_R)+E_1E_3(E̅_L+E̅_R)] . 
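Given the length of these expressions, a numerical spot check is reassuring. The sketch below is our own; the kinematic point is chosen arbitrarily, and it confirms that the eight branch contributions sum to the quoted closed form.

```python
E1, E2, E3, K1, K2 = 1.0, 2.0, 3.0, 0.5, 0.7             # arbitrary kinematic point
E = E1 + E2 + E3
EL, EM, ER = E1 + K1, E2 + K1 + K2, E3 + K2
ELb, ERb = E1 + E2 + K2, E2 + E3 + K1                     # "barred" vertex energies

G_pmp = -1 / (EL * EM * ER)                               # (+-+)
G_ppm = -(1 / EL + 1 / EM) / (ELb * ER)                   # (++-)
G_mpp = -(1 / EM + 1 / ER) / (EL * ERb)                   # (-++)
G_ppp = (-1 / E) * (1 / ERb - 1 / (E2 + E3 - K1)) * (1 / ER - 1 / (E3 - K2)) \
        - (1 / EM - 1 / (E2 - K1 + K2)) / ((E3 - K2) * ELb) \
        - (1 / ER - 1 / (E3 - K2)) / (EL * (E2 + E3 - K1)) \
        - 1 / (EL * (E2 - K1 + K2) * (E3 - K2))           # (+++)

total = 2 * (G_ppp + G_ppm + G_mpp + G_pmp)               # conjugate contributions are equal
f = -8 * (ELb * EM * ERb + EM * (E1 * ELb + E3 * ERb) + E1 * E3 * (ELb + ERb))
print(abs(total - f / (E * EL * ELb * EM * ERb * ER)))    # ≈ 0 (machine precision)
```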
The fairly non-trivial form factor f is well symmetric under (E_1 ↔ E_3, K_1 ↔ K_2), and the final result displays the correct E_L, E_M, E_R, E̅_L, E̅_R and E poles, as expected. One can explicitly verify that individual contributions satisfy the largest-time equation 0 = _+++(E_1, E_2, E_3, K_1, K_2) + _—(-E_1, -E_2, -E_3, K_1, K_2) + _++-(E_1, E_2, -E_3, K_1, K_2) + _–+(-E_1, -E_2, E_3, K_1, K_2) + _-++(-E_1, E_2, E_3, K_1, K_2) + _+–(E_1, -E_2, -E_3, K_1, K_2) + _+-+(E_1, -E_2, E_3, K_1, K_2) + _-+-(-E_1, E_2, -E_3, K_1, K_2) , providing a non-trivial check of the final result. In principle, this procedure can be generalised to arbitrary tree-level diagrams. However, the number of required integrals and the complexity of multi-dimensional frequency integrals rapidly become intractable. Conformal two-site chain in de Sitter. As a last example, we consider the two-site chain of conformally coupled scalar fields (with mass m^2=2H^2) in de Sitter, whose computation conceptually follows the same line as in flat space, as the corresponding time-ordered bulk-to-bulk propagators can be recasted as frequency-space integrals.[It is possible to map correlators of conformally coupled fields to correlators of massive scalars with half-integer values of ν by applying a set of weight-shifting operators. This procedure follows from the property of the Hankel function that drastically simplifies for ν=n/2 with n integer (n=1 is the conformally coupled case).] The corresponding graph integrals are 0pt [line width=1. pt, scale=2] (0, 0) circle (.05cm) node[above=1mm] E_1; [black] (0,0) – (0, 0.05) arc (90:270:0.05) – cycle; (1, 0) circle (.05cm) node[above=1mm] E_2; [black] (1,0) – (1, 0.05) arc (90:270:0.05) – cycle; [black] (0.05, 0) – node[above] K (1, 0); = ∑_a, b = ±_ab , _ab = -ab∫_-∞^a^τ_0τ̣_1/τ_1^2∫_-∞^b^τ_0τ̣_2/τ_2^2 e^iaE_1 τ_1_ab(K; τ_1, τ_2) e^ibE_2 τ_2 , where τ_0<0 is a small late-time cutoff. Shifting the energy and neglecting analytic terms in the late-time limit, we use the formula ∫_-∞^±^τ_0τ̣/τ e^± i z τ = ∫_-∞^τ_0τ̣/τ e^± i (z∓ i ϵ) τ = log[± i (z∓ iϵ)τ_0] , for a real positive energy z>0, to write the factorised contribution as _+- = log(+i E_L τ_0)log(-iE_Rτ_0) . Similarly, after performing the time integrals for the nested contribution, we obtain _++ = 2K ∫_-∞^+∞ω̣/2iπlog[i(E_1+ω-iϵ)]log[i(E_2-ω-iϵ)]/(ω^2-K^2)_iϵ . This integral, besides being formally divergent, is hard to evaluate because of the branch cuts spanning over ω∈ (-E_1+iϵ-i∞, -E_1+iϵ) and ω∈ (E_2-iϵ+i∞, E_2-iϵ) (choosing the log branch cut to be the principal one). These branch cuts notably cross the real axis. As we will see in Sec. <ref>, the presence of branch cuts, rather than poles, indicates particle production. In this scenario, the integrand can be transformed into a meromorphic function by transitioning to the complex mass plane instead of the complex frequency plane. However, for the special case of conformally coupled fields, as first noticed in <cit.>, this integral can be written as an energy integral over the flat-space result _++ = 2K ∫_-∞^+∞ω̣/2iπ1/(ω^2-K^2)_iϵ∫_E_1+ω-iϵ^∞∫_E_2-ω-iϵ^∞x̣ỵ/xy = 2K ∫_E_1^∞x̣∫_E_2^∞ỵ∫_-∞^+∞ω̣/2iπ1/(x+ω-iϵ)(ω^2-K^2)_iϵ(y-ω-iϵ) , where we have shifted the kinematic integrals so that their limits do not depend on the off-shell frequency. We recognise the flat-space frequency integral that we previously computed. 
The energy integrals were first computed in <cit.> and the result reads[In order to extract the divergence, one can subtract and add the term ∫_-∞^+^τ_0τ̣_1/τ_1τ̣_2/τ_2 e^i(E_Lτ_1+E_Rτ_2) = log(iE_L τ_0)log(i E_R τ_0) = ∫_E_1^∞x̣∫_E_2^∞ỵ 1/(x+K-iϵ)(y+K-iϵ) , which can either be written as a product of logarithms (hence isolating the divergence) or as energy integrals that can be combined with the original ones. This effectively mimics the corresponding wavefunction coefficient calculation. The remaining convergent integral to be computed is ∫_0^∞x̣ỵ/(x+y+E)(x+E_L)(y+E_R) . Sophisticated techniques to evaluate such integrals are detailed in <cit.>.] _++ = _2(E-E_L/E) + _2(E-E_R/E) + log(E_L/E)log(E_R/E) - π^2/6 - log(i E_L τ_0)log(i E_Rτ_0) , where _2 is the dilogarithm function. When summing all contributions, the kinematic parts of divergent pieces cancel against each other thanks to the property log(iz)-log(-iz)=-iπ for z<0. Using the Euler's identity _2(z)+_2(1-z)+log(z)log(1-z)=π^2/6 , both nested contributions combine to give _++(E_1, E_2, K) + _–(-E_1, -E_2, K) = -log[i(E_1-K)τ_0]log[i(E_2+K)τ_0] -log[i(E_1+K)τ_0]log[i(E_2-K)τ_0] , which is exactly minus the combination _+-(E_1, -E_2, K)+_-+(-E_1, E_2, K). Consequently, the graph integrals _ab satisfy the largest-time equation. § MASSIVE EXCHANGE CORRELATOR IN DE SITTER We now turn to the case of the two-site chain of conformally coupled scalars mediated by the tree-level exchange of massive scalars in de Sitter. By explicitly performing the spectral integral, we derive a new closed-form expression for this correlator. As we will show, the branch cut in the energy domain due to particle production translates into a tower of poles in the complex mass domain. Summing over these residues leads to a new simple and partially resummed series solution. 4pt We are interested in computing the de Sitter tree-level exchange graph[Concretely, considering a conformally coupled scalar field φ coupled to a massive scalar field ϕ by Ł_int/a^3(t) = -g2ϕφ^2 in de Sitter, the full correlator reads ⟨φ_k_1φ_k_2φ_k_3φ_k_4|'⟩ = g^2 τ_0^4/16k_1k_2k_3k_4 F(E_1 = k_12, E_2 = k_34, K = s) + t + u channels , where we have introduced a late-time cutoff τ_0 and defined F ≡∑_a, b=± F_ab. Notice that we have defined the integrals F_ab without the factor 2 to ease comparison with the literature.] 0pt [line width=1. pt, scale=2] (0, 0) circle (.05cm) node[above=1mm] E_1; [black] (0,0) – (0, 0.05) arc (90:270:0.05) – cycle; (1, 0) circle (.05cm) node[above=1mm] E_2; [black] (1,0) – (1, 0.05) arc (90:270:0.05) – cycle; [black] (0.05, 0) – node[above] K (1, 0); = ∑_a, b = ± F_ab , with F_ab = -abK ∫_-∞^a^0 τ̣'/(-τ')^1/2e^iaE_1τ' ∫_-∞^b^0 τ̣”/(-τ”)^1/2e^ibE_2τ”_ab(K; τ', τ”) . Recall that _ab are the propagators for the canonically normalised field σ(τ, x) ≡ (-τ)^-3/2ϕ(τ, x), hence the unusual power for the conformal time within the integral. We define the usual dimensionless kinematic variables u ≡K/E_1 , v ≡K/E_2 , which lie in the unit disk, i.e. |u|, |v| ≤ 1, and the three-point function F^(3)(z, μ) = 1√(2) |Γ(12+iμ)|^2 P_iμ-1/2(z) , where P_iμ-1/2 is the Legendre function defined in Eq. (<ref>). The non-time-ordered contribution, for which both time integrals factorise, is therefore given by F_+- = F^(3)(u^-1, μ) F^(3)(v^-1, μ) . We have used the useful formula (<ref>) to perform the time integrals. Notice that the Legendre P function is real, i.e. [P_iμ-1/2(z)]^* = P_iμ-1/2(z) for z≥ 0 and μ real. Therefore, the on-shell three-point function F^(3)(z, μ) is real. 
As a consequence, we obtain the same expression for F_-+. However, anticipating what follows, the off-shell function F^(3)(z, ν) where the mass parameter ν is analytically continued to the complex plane is not real. §.§ Off-shell three-point function For pedagogical reasons, before computing the spectral integral for the time-ordered contribution F_++, we first study the following simpler integral F̅^(3)(z) = ∫_-∞^+∞ν̣ _ν F^(3)(z, ν)/(ν^2-μ^2)_iϵ , with z≥ 1, and F^(3) defined in Eq. (<ref>). The object F̅^(3) is typically proportional to F_++ in the folded limit E_2→ K (or E_1→ K by symmetry), equivalently v→ 1 (or u→ 1), and already exhibits an interesting analytic structure. To be able to perform the spectral integral, it is essential to understand the analytic structure of the function F^(3)(z, ν) in the complex ν plane, as well as its behaviour at infinity, to appropriately determine how to close the contour. 4pt The crucial observation is that the off-shell three-point function F^(3) is a meromorphic function in the complex ν plane. At a fixed kinematic configuration z, the function F^(3)(z, ν) has poles at ν = ± i (12+n) with integer n≥0, which corresponds to the poles of |Γ(12+iν)|^2 = Γ(12+iν) Γ(12-iν), since the Legendre function P is analytic in the entire complex ν plane. However, the Legendre P representation of F^(3) makes the large-ν asymptotic form not neat, i.e. it depends on the phase of z. In order to disentangle both behaviours at infinity, we project the off-shell three point function onto the Legendre Q basis using the connection formula (<ref>), yielding F^(3)(z, ν) = e^-iπ2/√(2)sinh(πν)[Q_-iν-1/2(z) - Q_+iν-1/2(z)] . The function Q_± iν - 1/2(z) has purely imaginary poles located at ν = ± i (12+n) (which come from Γ(12± iμ) in Eq. (<ref>)), and should be closed in the lower (upper) complex plane since Q_± iν -1/2(z) ∼ e^∓ iνξ with ξ >0. Therefore, when performing the spectral integral, we never need to pick up these poles because the contour is closed to avoid the infinite tower.[The factor 1/sinh(πν) = |Γ(1+iν)|^2/(πν) also brings additional poles to the integrand. However, they are precisely cancelled by the density of states _ν.] We illustrate the analytic structure of the integrand (<ref>) and the contour prescription in Fig <ref>. Eventually, picking up only the particle production poles, we obtain F̅^(3)(z) = F^(3)(z e^+iπ, μ) , where we have used the analytic continuation of the Legendre function (<ref>) to re-express the result in the Legendre P basis. As already seen in Sec. <ref>, the particle production pole structure projects ingoing modes on outgoing ones and vice versa. At the level of the three-point function, this amounts to rotate the kinematic configuration z → ze^+iπ. Actually, we show in App. <ref> that the function F^(3)(z, μ) and its analytic continuation F^(3)(ze^+iπ, μ), viewed as functions of z for fixed μ, are related by a dispersive integral in the complex energy domain. As we will now see, for the time-ordered contribution F_++, the infinite tower of poles cannot be evaded by the integration contour, and will result in series solution for the correlator. §.§ Bootstrapping via the spectral representation We now explicitly compute the integral F_++ = - ∫_-∞^+∞ν̣ _ν F^(3)(u^-1, ν) F^(3)(v^-1, ν)/(ν^2-μ^2)_iϵ . This spectral representation makes it explicit that the time-ordered contribution F_++ is the spectral integral of the off-shell function F_+- given in Eq. (<ref>). 
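As an aside, the Q-basis projection above is easy to spot-check numerically. The sketch below uses mpmath (our choice of tool), whose legenp and legenq functions with type=3 implement the Legendre functions with the branch cut on (-∞, 1]; it compares the Legendre P and Q representations of the off-shell three-point function at a few kinematic points.

```python
# Numerical check of the Q-basis projection of the off-shell three-point function:
#   (1/sqrt(2)) |Gamma(1/2+i nu)|^2 P_{i nu - 1/2}(z)
#   vs  e^{-i pi/2}/(sqrt(2) sinh(pi nu)) [Q_{-i nu - 1/2}(z) - Q_{+i nu - 1/2}(z)].
from mpmath import mp, mpc, gamma, sinh, sqrt, pi, legenp, legenq

mp.dps = 30

def F3_P(z, nu):
    # P-basis form of the off-shell three-point function
    g = gamma(mpc(0.5, nu)) * gamma(mpc(0.5, -nu))     # |Gamma(1/2 + i nu)|^2 for real nu
    return g / sqrt(2) * legenp(mpc(-0.5, nu), 0, z, type=3)

def F3_Q(z, nu):
    # Q-basis form, with e^{-i pi/2} = -i
    Qm = legenq(mpc(-0.5, -nu), 0, z, type=3)
    Qp = legenq(mpc(-0.5, +nu), 0, z, type=3)
    return mpc(0, -1) / (sqrt(2) * sinh(pi * nu)) * (Qm - Qp)

for z, nu in [(1.7, 0.9), (3.2, 2.4), (12.0, 1.1)]:
    print(abs(F3_P(z, nu) - F3_Q(z, nu)))   # should vanish at working precision
```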
In practice, it means that solutions for the time-ordered contributions are of higher-transcendentality than the factorised ones. We expect that the result contains a local and non-local contributions which take a simple resummed form, as well as an EFT series contribution coming from infinite towers of poles. Proceeding the same way as previously for the three-point function, we decompose the functions F^(3) in the Legendre Q basis to isolate well-behaved asymptotic forms at large ν. The integrand takes the form _ν F^(3)(u^-1, ν) F^(3)(v^-1, ν) = -ν/2πsinh(πν) ×[Q_-iν-1/2(u^-1)Q_-iν-1/2(v^-1) - Q_-iν-1/2(u^-1)Q_+iν-1/2(v^-1) + (ν↔-ν)] . First, we immediately observe that both terms related by the shadow transformation ν↔ -ν contribute equally to the integral. Thus, it suffices to evaluate only one of these terms. Second, we notice that that the term Q_-iν-1/2(u^-1)Q_-iν-1/2(v^-1) has a well-defined large-ν behaviour that is independent of the hierarchy between the kinematic variables u and v. However, the choice of whether to close the contour upwards of downwards for the term Q_-iν-1/2(u^-1)Q_+iν-1/2(v^-1) depends on the relative magnitude between u and v. As such, we will obtain a solution for |u|<|v| and another one for |u|>|v|. The matching condition at u = v is inherently satisfied, as it is encoded in the spectral integral. Therefore, we decompose the spectral integral into two pieces F_++ = F_++^0 + F_++^<, > , with (we have accounted for the factor 2 coming from the ν↔ -ν symmetry) F_++^0 = 1/π∫_-∞^+∞ν̣ ν/sinh(πν)Q_-iν-1/2(u^-1) Q_-iν-1/2(v^-1)/(ν^2-μ^2)_iϵ , F_++^<, > = -1/π∫_-∞^+∞ν̣ ν/sinh(πν)Q_-iν-1/2(u^-1) Q_+iν-1/2(v^-1)/(ν^2-μ^2)_iϵ . Despite the appearance, the contribution F_++^<, > is well symmetric under u↔ v, which can be explicitly checked after changing variables ν↔ -ν. The analytic structure of these integrands reveals the presence of several poles with different origins: * A first tower of poles comes from the factor ν/sinh(πν) = Γ(1+iν) Γ(1-iν)/π. They are located on the entire imaginary axis ν = ± i (1+n) with n≥ 0, and cannot be evaded by the contour prescription. * The factor Q_-iν-1/2(u^-1) Q_-iν-1/2(v^-1) also generates an infinite tower of (second-order) poles located on the imaginary axis ν=-i (12+n) with n≥0. However, at large ν, the integrand behaves as ∼ e^+iν(ξ_u+ξ_v) with ξ_u = cosh^-1(u^-1) ≥ 0 for u^-1≥1 (and similarly for ξ_v), so the contour should be closed in the upper-half complex plane. As a result, we do not collect these poles. This case is similar to the off-shell three-point function case seen in Sec. <ref>. Here, both off-shell exchanged modes interfere constructively so that their spectrum spans only over half the imaginary axis, with residues doubling their weight. * Poles of the term Q_-iν-1/2(u^-1) Q_+iν-1/2(v^-1) are located at ν=± i (12+n) with n≥0. At large ν, it behaves as ∼ e^+iν(ξ_u-ξ_v). For |u|<|v|, we have ξ_u-ξ_v≥ 0 and we close the contour in the upper-half complex plane, picking the tower of (first-order) poles. This case reflects the destructive interference between off-shell exchanged modes, resulting in the spectrum spanning over the entire imaginary axis. * Eventually, we also have the usual particle production poles coming from the iϵ prescription. Let us now collect these various poles in turn. Particle production residues. The poles coming from the iϵ prescription are the easiest to evaluate. We obtain F_++^0 ⊃-i/2sinh^2(πμ)[e^+πμ Q_+iμ-1/2(u^-1)Q_+iμ-1/2(v^-1) + (μ↔ -μ)] . 
For the contribution F_++^< for which |u|<|v|, collecting both poles on the upper-half complex plane results in F_++^< ⊃i/2sinh^2(πμ)[e^+πμ Q_+iμ-1/2(u^-1)Q_-iμ-1/2(v^-1) + (μ↔ -μ)] . Combining both contributions, and projecting back onto the Legendre P basis using (<ref>) and (<ref>) yields F_++^collider = - F^(3)(-u^-1, μ) F^(3)(v^-1, μ) , for |u|<|v|. The contribution for which |u|>|v| is simply found after swapping u and v. The found factorised form is not surprising as in the late-time limit time ordering vanishes, as seen in Sec. <ref>. The result (<ref>) can be directly found by performing the time integrals after substituting _++→_-+. EFT residues. Contributions coming from EFT residues are analytic in both u and v as u, v→0, and therefore admits series representations. We first sum over the residues from the factor 1/sinh(πν). The integration contour picks up only half of this tower of first-order poles on the imaginary axis. Explicitly, we obtain F_++^0 ⊃ -2/π ∑_n=0^+∞ (-1)^n (n+1)/(n+1)^2+μ^2 Q_n+1/2(u^-1) Q_n+1/2(v^-1) , where we have used that the residue of Γ(1+iν) at ν = +i(1+n) is (-1)^nn!. The terms in 1/μ^2, characteristic of an EFT expansion, come from 1/(ν^2-μ^2)_iϵ-1/(n+1)^2 - μ^2 . The large n behaviour of the series coefficients is given by |(n+1)/(n+1)^2+μ^2 Q_n+1/2(u^-1) Q_n+1/2(v^-1)| ∼1/(n+1)^2+μ^2e^-(n+1)(ξ_u+ξ_v)/√(sinhξ_u sinhξ_v) , which is well convergent. Notice that close to u, v=1, the exponential damping becomes less efficient and the series converges slower, i.e. as 1/n^2. Similarly, the contribution of these residues to F_++^< is found to be F_++^< ⊃2/π ∑_n=0^+∞ (-1)^n (n+1)/(n+1)^2+μ^2 Q_n+1/2(u^-1) Q_-n-3/2(v^-1) . Quite interestingly, the two towers of residues (<ref>) and (<ref>) exactly cancel each other. Indeed, both contributions can be combined in a single series, and using (<ref>), each series coefficient is proportional to Q_-n-3/2(v^-1) - Q_n+1/2(v^-1) ∝sin[π(n+1)] . This effect, which generally occurs in odd spatial dimensions, was previously noted in <cit.> and is applicable to contact diagrams as well. Notably, while this phenomenon was initially understood as interference between the two branches of the in-in contour, we show here that this cancellation happens within a single branch. We finish with the poles located at ν= ± i (12+n) which only enter the contribution F_++^<, >. They are the only ones contributing to the final expression. For |u|<|v|, we obtain the rather simple expression F_++^EFT = u ∑_n=0^+∞ (-1)^n/(n+ 12)^2+μ^2 (u/v)^n 211+n2, 1+n232+nu^2 211-n2, -n212-nv^2 . The Legendre Q function being not defined for negative integer order, the above expression is found after evaluating the residue of the Γ function entering (<ref>). Full result. The full result is found by summing all contributions, namely the non-time ordered pieces (<ref>) and the time-ordered pieces combining the collider signal (<ref>) with the EFT series (<ref>). A few comments are in order. First, this method yields a convergent result in all kinematic configurations. Although not obvious from the analytical expression, we have checked numerically that the found result perfectly matches the one originally found in <cit.>. Fig. <ref> shows the full closed-form result when keeping only a few terms in the series. We observe that the series convergence rate is the same as the bootstrap result but slower than the series solution found using the partial Mellin-Barnes method <cit.>. 
Second, compared to the bootstrap result of <cit.>, the spectral integration naturally resums one of the two series, which in practice makes the evaluation much faster. Lastly, since the spectral representation is solution to the bootstrap equation, there is a one-to-one correspondence between the basis of functions onto which the homogeneous solution of the bootstrap equation is expanded and the Legendre P or Q basis. Additionally, when solving the boundary differential equation, fixing the free coefficients requires two boundary conditions for specific values (or limits) of u and v. Here, these coefficients are fully encoded in the spectral integral. Eventually, the spectral method does not requires matching different solutions at u=v because the transition of closing the contour, whether upwards or downwards, is continuous. However, the first derivative of the result is discontinuous at the transition, resulting in a noticeable kink. In short, rather than integrating the boundary differential equation, we directly perform the spectral integral, with the bounds inherently determining the free integration constants. Largest-time equation. Assuming |u|<|v|, it is trivial to observe that F_++(u, v) + F_–(-u, -v) + F_+-(u, -v) + F_-+(-u, v) = 0 , from their explicit forms in (<ref>), (<ref>) and (<ref>). Indeed, both EFT series cancel against each other thanks to the overall factor u, i.e. F_–^EFT(-u, -v) = - F_++^EFT(u, v). Similarly, F_++^collider(u, v) cancels against F_-+(-u, v), and F_–^collider(-u, -v) against F_+-(u, -v). Of course, swapping u and v yields the same result. This provides an additional consistency check for the derived formula. §.§ Singularities and analytic structure The spectral representation of the four-point function enables straightforward examination of various limits by directly inspecting the spectral integral. A first observation is that removing the dynamical part of the propagating exchanged field—that is, discarding the iϵ prescription encoding particle production—, the spectral integral reduces to a contact four-point function generated by the leading bulk interaction φ^4 in a gradient expansion. This simplification is revealed through the generalised Mehler-Fock transformation (Eq. (14.20.14) of <cit.>) ∫_-∞^+∞ν̣ _ν F^(3)(u^-1, ν) F^(3)(v^-1, ν) = uv/u+v . This identity is analogous to setting the differential operator of the bootstrap equation to unity and therefore recovering the usual source term. Let us now probe the following limits: (i) collapsed limit u, v→0, i.e. internal soft momentum K→0 while keeping all external momenta hard, and (ii) the partial-energy pole u→ -1 while keeping v fixed. Collapsed limit. In the limit u, v→ 0, we use the asymptotic behaviour of the Legendre P function (<ref>) for large argument to easily obtain lim_u, v→0 F_+- = (uv/4)^12+iμ Γ(12+iμ)^2Γ(-iμ)^2/2π . Taking the limit inside the spectral integral, we obtain lim_u, v→0 F_++ = -1/2π(uv/4)^12∫_-∞^+∞ν̣/(ν^2-μ^2)_iϵ(uv/4)^+iνΓ(12+iν)^2 Γ(-iν)/Γ(+iν) . At large ν, the integrand scales as ∼ e^iνlog(uv/4) (up to a phase) so that the contour is closed in the lower-half complex plane (recall log(uv/4)≤ 0). The usual particle production poles give lim_u, v→0 F_++⊃i/2[e^+πμ(uv/4)^12+iμΓ(12+iμ)^2Γ(-iμ)^2/2π + (μ↔-μ)] . Notice that two infinite towers of poles arise: one from Γ(12+iν) located at ν = +i (12+n) that we do not collect as they lie in the upper-half complex plane, and another from Γ(-iν) located at ν=-i n with n≥ 0 integer, which we do collect. 
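As a brief aside, the generalised Mehler-Fock identity invoked earlier in this subsection can be confirmed by direct quadrature. The sketch below assumes the density of states ρ_ν = ν sinh(πν)/π, which is our reading of the measure appearing in the spectral integrands above, and should reproduce uv/(u+v) for any |u|, |v| ≤ 1.

```python
# Quadrature check of the Mehler-Fock-type identity quoted above:
#   integral over nu of rho_nu F3(1/u, nu) F3(1/v, nu) = u v / (u + v),
# with rho_nu = nu sinh(pi nu)/pi taken as an assumption on our part.
from mpmath import mp, mpc, mpf, gamma, sinh, sqrt, pi, legenp, quad

mp.dps = 15

def F3(z, nu):
    # off-shell three-point function (1/sqrt(2)) |Gamma(1/2 + i nu)|^2 P_{i nu - 1/2}(z)
    g = gamma(mpc(0.5, nu)) * gamma(mpc(0.5, -nu))
    return (g / sqrt(2) * legenp(mpc(-0.5, nu), 0, z, type=3)).real

u, v = mpf('0.4'), mpf('0.7')        # |u|, |v| <= 1
x, y = 1/u, 1/v

def integrand(nu):
    rho = nu * sinh(pi * nu) / pi    # assumed density of states
    return rho * F3(x, nu) * F3(y, nu)

# the integrand is even in nu and decays like exp(-pi nu); integrate over nu >= 0
lhs = 2 * quad(integrand, [0, 3, 8, 16])
print(lhs, u * v / (u + v))          # the two values should agree
```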
By definition, this EFT signal is analytic in both u and v as it admits a series representation. Its u, v→0 limit can be directly recovered from its full expression (<ref>). Eventually, keeping only the terms which are non-analytic in uv as u, v→0 (and therefore also non-analytic in K), we obtain lim_u, v→0 F = (uv/4)^12+iμ (1+isinhπμ) Γ(12+iμ)^2Γ(-iμ)^2/2π + c.c. , which exactly reproduces the expression found in <cit.>. Partial-energy pole. At fixed v, let us probe the limit u→ -1. The three-point function F^(3)(z, μ) has a branch point at z=-1 so we analyse its behaviour directly at the level of the time integral. The limit E_1+K→0 probes the early time-limit of the left vertex, so we can take the early-time limit of the Hankel function inside the integral F^(3)(u^-1, μ) ∼1/√(2)lim_z→0∫_z^+∞x̣/x e^-ix(1+u^-1) = 1/√(2)lim_z→0E_1[(1+u^-1)iz] , where E_1 is the exponential integral, which has a logarithmic branch. Isolating the leading term, we obtain lim_u→-1 F^(3)(u^-1, μ) = -1√(2)log(1+u) . The non-time-ordered contributions give lim_u→-1(F_+- + F_-+) = -√(2)log(1+u) × F^(3)(v^-1, μ) . Similarly, the time-ordered contribution is found to be lim_u→-1 F_++ = 1/√(2)log(1+u) ∫_-∞^+∞ν̣ _ν F^(3)(v^-1, ν)/(ν^2-μ^2)_iϵ = 1/√(2)log(1+u) × F^(3)(-v^-1, μ) , where we have used (<ref>). Summing all contributions, we eventually obtain lim_u→-1F = -√(2)log(1+u) ×[F^(3)(v^-1, μ) - F^(3)(-v^-1, μ)] . In this limit, the residue of the partial-energy pole is proportional to the shifted three-point function. This behaviour generalises to any exchanged process where the sum of all “energies" entering a vertex vanishes, see e.g. <cit.>. Similarly, the correlator has a singularity in the flat-space limit u→ -v, i.e. E = E_1+E_2→0 or equivalently μ→∞ as seen in Sec. <ref>, with a coefficient that is related to the flat-space scattering amplitude <cit.>. § CONCLUSIONS In this paper, we have explored an off-shell approach to cosmological correlators. For massive field experiencing particle production, going to the complex mass plane allows one to write a dispersive integral for the propagator that naturally encodes time ordering. Using this object, we have shown that exchanged correlators can be obtained by gluing off-shell lower-point correlators. In this procedure, performing the resulting spectral integral, which amounts to collecting poles as the integrand is meromorphic, effectively sets the exchanged particle on shell. We argued that this representation not only clarifies the analytic structure and factorisation properties of cosmological correlators but also simplifies explicit calculations. As a specific example, we derived a new, simple closed-form formula for the four-point correlator of conformally coupled fields exchanging a massive field in de Sitter space and examined its analytic properties. Additionally, we derived cosmological largest-time equations, which relate individual correlator in-in branch channels through the analytic continuation of certain external energies. These relations, explicitly illustrated for simple diagrams, can serve as consistency checks for more complex correlators. 4pt Eventually, our work opens several interesting avenues for future investigation: * First of all, the spectral representation of massive propagators used in this work does not crucially rely on de Sitter isometries. Meanwhile, a large level of non-Gaussianity can be reached in cosmological processes that strongly break de Sitter boosts. 
It would therefore be natural to use this spectral representation to obtain clean and simpler closed-form solutions for correlators of particles featuring reduced sound speeds or in the presence of a chemical potential. The latter case is known to boost the particle production rate, which can lead to enhanced cosmological collider signals. However, since mode functions are described by Whittaker functions, the spectral representation should be upgraded accordingly. * In the context of modern scattering amplitude techniques, complex amplitudes can be efficiently constructed by sewing together lower-point amplitudes using sophisticated recursion relations. We have shown a glimpse of how such procedure can be applied to cosmological correlators in simple cases. Since time integrals are trivialised, computing higher-point correlation functions primarily involves performing a series of spectral integrals, which effectively reduces to summing over several towers of poles, given that off-shell lower-point correlators are meromorphic in the complex mass plane. It will be interesting to explore how this off-shell approach can render computations of more complex correlation functions more tractable. Ultimately, we believe this approach lays the groundwork for deriving more general cosmological recursion relations beyond rational correlators. * Finally, as in flat space, states in a unitary quantum field theory in de Sitter are classified by unitary irreducible representations of the isometries, which are parametrised by the conformal dimension (related to the mass) and spin. Imposing unitarity as an additional consistency condition imposes a set of positivity constraints. While previous such bounds have been largely theoretical, exploring their phenomenological implications could yield valuable insights. The spectral representation employed in this work can be viewed as the leading-order perturbative Källén-Lehmann representation in spatial Fourier space. By perturbatively computing massive self-energy corrections, it may be possible to extend this approach further in perturbation theory, potentially deriving useful bounds from the positivity and meromorphic properties of the spectral density. Ultimately, one would hope to use this spectral representation to “dress" massive propagators in de Sitter space by resumming bubble diagrams. Acknowledgements. We thank Guillaume Faye, Jean-Baptiste Fouvry, Mang Hei Gordon Lee, Scott Melville, Enrico Pajer, Sébastien Renaux-Petel, Xi Tong and Zhong-Zhi Xianyu for helpful discussions. We also thank Lucas Pinol, Arthur Poisson and Sébastien Renaux-Petel for comments on the draft. DW is supported by the European Research Council under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 758792, Starting Grant project GEODESI). This article is distributed under the Creative Commons Attribution International Licence (https://creativecommons.org/licenses/by/4.0/CC-BY 4.0). § FEYNMAN IΕ PRESCRIPTION FROM VACUUM WAVE-FUNCTIONAL In this appendix, we come back to the iϵ prescription defining the in-in generating function (<ref>). As is well known, deforming the time integration contour around the infinite past explicitly breaks unitarity <cit.>. This is a consequence of both in-in branches not being invariant under time reversal, producing a commutator of time evolution operators and not identity. 
This exact same prescription, when thought of as analytically continuing energy instead of time, not only preserves unitarity, but leads to the correct iϵ prescription in the Feynman propagator (<ref>). 4pt Reaching convergence in the infinite past is reminiscent of adiabatically projecting the vacuum of the fully interacting theory |Ω⟩ onto the free one |0⟩. Within the in-in path integral (<ref>), this procedure is achieved with the additional term Ψ[φ_+](t_0)Ψ^*[φ_-](t_0) where Ψ[φ](t) ≡⟨φ(t)|Ω|$⟩ is the vacuum wave-functional that describes the transition amplitude from the vacuum|Ω⟩to a specific field configurationφ(x)at some timet_0<cit.>. Let us therefore compute this object and take the limitt_0 →-∞. The key insight is to use the defining property of the annihilation operatorâ_k|Ω⟩ = 0. After inverting the canonical quantisation of the fieldφ̂_kand its conjugate momentump̂_k, and using the Wronskian conditionu_k p_k^* - u_k^*p_k = i, valid at all time and fixed by the commutation relation (p_kis the mode function associated top̂_k), one obtains â_k = 1/i (p_k^* φ̂_k - u_k^* p̂_k) . Using the field-space representation of the momentum operatorp̂_k →-iδ/δφ_k, withδφ_q/δφ_k = (2π)^3 δ^(3)(q+k), the conditionâ_k|Ω⟩ = 0projected onto⟨φ(t_0)|yields the functional differential equation (u_k^* δ/δφ_k - i p_k^* φ_k)Ψ[φ](t_0) = 0 . Solving this equation gives the following Gaussian solution for the vacuum wave-functional Ψ[φ](t_0) = exp(-1/2∫^̣3k/(2π)^3φ_k(t_0)ω_k(t_0)φ_-k(t_0)) , whereω_k(t_0) = -ip_k^*/u_k^*, with the mode functions evaluated att_0. The overall normalisation is fixed to unity due to the correct vacuum normalisation⟨Ω|Ω|=⟩ 1. We now take the limitt_0→-∞. Reaching the Bunch-Davies state imposes that mode functions behave as simple plane-wavesu_k(t) ∼e^-iktso thatω_k = kis nothing but the dispersion relation. As for the remaining terms, we introduce the following regularisation scheme f(-∞) = ϵ→ 0lim ϵ∫_-∞^t ṭ' f(t') e^ϵ t' , valid for any smooth-enough functionf(t), that can be recovered after integration by parts, to rewrite the contribution from vacuum wave-functionals entering (<ref>) as t_0 → -∞lim Ψ[φ_+](t_0) Ψ^*[φ_-](t_0) = ϵ→ 0lim exp(-ϵ/2∫_-∞^tṭ' ∫^̣3k/(2π)^3 k[φ_k, +(t') φ_-k, +(t') + (+↔ -)]) . We have sete^ϵt' = 1since we only care about the leading term asϵ→0. Obtaining a Gaussian weight in the fields with negative real values, as opposed to purely imaginary ones in the standard action, turns out to be crucial to ensure convergence of the path integral in the infinite past. Indeed, let us consider the free quadratic Lagrangian for the fieldsφ_±written in Fourier spaceℒ[φ_k, ±] = -1/2φ_k, ±(t)(-∂_t^2 - k^2)φ_-k, ±(t). Adding the term (<ref>) amounts to modifying the weight in the exponential of the generating function (<ref>) to ±i/2∫^̣3k/(2π)^3∫_-∞^t ṭ' φ_k, ±(t')[-∂_t'^2 - (k± iϵ)^2]φ_-k, ±(t') , where we have used(k±iϵ)^2 ≈k^2 ±2iϵand relabelled2ϵ→ϵ. In the end, performing the converging Gaussian path integral over the fieldsφ_±leads to the correctiϵprescription for the Feynman propagator (<ref>). § DETAILS ON MASSIVE FIELDS IN DE SITTER Consider a real scalar fieldϕ(τ, x)of massm, where-∞<τ≤0is the conformal time, evolving in de Sitter space with three spatial dimensions. We will focus on fields in the principal series so thatμ≡√(m^2/H^2 - 9/4)is real. In what follows, we setH=1for convenience. 
Defining the canonically normalised fieldσ(τ, x) ≡(-τ)^-3/2 ϕ(τ, x), its positive-frequency mode function is given by u_k(τ, μ) = i√(π)/2 e^-πμ/2 H_iμ^(1)(-kτ) e^-ikτ/√(2kτ) , whereH_iμ^(1)(z)is the Hankel function of the first kind, such that it reduces to the usual Bunch-Davies vacuum in the far past, up to an overall unimportant phase. The in-in contour_iϵin (<ref>) requires analytically continuing the positive-frequency mode function (<ref>) to the lower-halfkτ-complex plane to ensure reaching the asymptotic vacuum. This precisely avoids crossing the branch cut of the Hankel functionH_iμ^(1)(z), that is chosen to lie on the real negative axis, away from the physical regime. Therefore, using the complex conjugate relation (<ref>), the negative-frequency mode can be written u_k^*(τ, μ) = -i√(π)/2 e^+πμ/2 H_iμ^(2)(-kτ) . §.§ Useful formulae The following mathematical identities are useful for manipulating mode functions of a massive scalar field in de Sitter. Complex conjugate relations. The Hankel functions of the first and second kind are related by the complex conjugate relations [H_iμ^(1)(z)]^* = e^+πμH_iμ^(2)(z^*) , [H_iμ^(2)(z)]^* = e^-πμH_iμ^(1)(z^*) , forz ∈ℂ92{ℝ^-}. Notice that these relations are not verified when the argumentzlies on the branch cut. Symmetries. The Hankel functions obey the following relations H_-iμ^(1)(z) = e^-πμH_iμ^(1)(z) , H_-iμ^(2)(z) = e^+πμH_iμ^(2)(z) , forz ∈ℂ. These identities render manifest an underlying shadowμ↔-μsymmetry for the positive- and negative-frequency mode functions u_k(τ, -μ) = u_k(τ, μ) , u_k^*(τ, -μ) = u_k^*(τ, μ) . Upon rotating time, the Hankel functions satisfy H_-iμ^(1)(e^+iπ z) = -H_iμ^(2)(z) , H_-iμ^(2)(e^-iπz) = -H_iμ^(1)(z) . The first relation is valid forzin the lower-half complex plane excluding the negative real and imaginary axis, and the second relation is valid forzin the upper-half complex plane excluding the positive real and imaginary axis. In the physical time domain, these identities relate the mode functions by a CPT symmetry u_k(z, μ) = u_k^*(e^-iπz, -μ) , u_k^*(z, μ) = u_k(e^+iπz, -μ) . Additional analytic continuations of the Hankel function are given by H_iμ^(1)(e^-iπz) = e^+πμH_iμ^(2)(z) + 2cosh(πμ) H_iμ^(1)(z) , H_iμ^(2)(e^+iπz) = e^-πμH_iμ^(1)(z) + 2cosh(πμ) H_iμ^(2)(z) . Connection formulae. Hankel functions can be expanded in terms of Bessel functions with the following connection formulae H_iμ^(1)(z) = e^+πμJ_+iμ(z) - J_-iμ(z)/sinh(πμ) , H_iμ^(2)(z) = J_-iμ(z) - e^-πμJ_+iμ(z)/sinh(πμ) . The Bessel function of the first kind satisfies J_iμ^*(z) = J_-iμ(z^*) , forz ∈ℂ92{ℝ^-}, and has the following power-law asymptotic form for small argumentz →0J_iμ(z) ∼1/Γ(1+iμ)(z/2)^iμ , whereas the Hankel function has the following plane-wave asymptotic form for large argumentz→∞H_iμ^(1)(z) ∼√(2/π z) e^i(z - π4) + πμ2 . Large-order asymptotic forms. For largeμwithz(≠0) fixed, we have J_iμ(z) ∼1/√(2iπμ)(ez/2iμ)^iμ , H_iμ^(1)(z) ∼ - H_iμ^(2)(z) ∼ -i √(2/iπμ)(ez/2iμ)^-iμ . Legendre functions. The Legendre functions are equivalent to Bessel (and Hankel) functions with higher transcendentality. Explicitly, they are given by P_iμ-1/2(z) ≡2112+iμ, 12-iμ11-z/2 , Q_iμ-1/2(z) ≡√(π/2) Γ(12+iμ)/Γ(1 + iμ) 2^-iμ z^-12-iμ 213/2+iμ/2, 1/2+iμ21+iμ1/z^2 , in terms of the hypergeometric function_2 F_1that is defined by the following formal series 21a, bcz = ∑_n=0^∞(a) _n (b)_n/(c)_nz^n/n! , on the disk|z|<1and by analytic continuation elsewhere, where(a)_n ≡Γ(a+n)/Γ(a)is the Pochhammer symbol. 
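The Hankel-function relations collected above lend themselves to a quick numerical spot-check. A minimal sketch with mpmath (whose hankel1 and hankel2 accept complex order) is given below; it tests the complex-conjugation relations and the shadow symmetry at a few sample points.

```python
# Spot-check of the complex-conjugation and shadow relations for Hankel
# functions of imaginary order quoted above (a numerical sketch only).
from mpmath import mp, mpc, hankel1, hankel2, exp, pi

mp.dps = 25
cc = lambda w: mpc(w).conjugate()

def residuals(mu, z):
    nu = mpc(0, mu)                                            # order i*mu
    return [abs(cc(hankel1(nu, z)) - exp(+pi*mu) * hankel2(nu, cc(z))),   # conjugation
            abs(cc(hankel2(nu, z)) - exp(-pi*mu) * hankel1(nu, cc(z))),
            abs(hankel1(-nu, z) - exp(-pi*mu) * hankel1(nu, z)),          # shadow symmetry
            abs(hankel2(-nu, z) - exp(+pi*mu) * hankel2(nu, z))]

for mu, z in [(0.8, 1.7), (2.3, 0.4), (1.1, mpc(2.0, 0.6))]:
    print(residuals(mu, z))   # each entry should be zero to working precision
```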
These functions areP_α(z)= andQ_α(z)= in Mathematica. The optional argument “3" selects the correct branch cut structure as it is falsely implemented by default.[The Legendre functions P and Q generalise to associated Legendre functions (also called Ferrers function of the first and second kind) P_iμ-1/2(z)→ P_iμ-1/2^1/2-j(z) and Q_iμ-1/2(z)→ Q_iμ-1/2^1/2-j(z), see Chap. 14 of <cit.>, that are useful when dealing with spatial derivative interactions, i.e. for different powers of conformal time in the correlator time integrals. The non-derivative interaction case simply corresponds to j=1/2.] At fixed argument, the LegendrePfunction is analytic in the entire complexμplane. However, the LegendreQfunction is meromorphic as it has poles coming from the factorΓ(12+iμ). Integral formula. The Legendre functionPis the solution of the following integrals -i √(π)/2e^+πν2∫_-∞^+^0 τ̣ e^i kτ/(-τ)^1/2 H_iν^(2)(-sτ) = e^-iπ4/√(2s) |Γ(12+iν)|^2 P_iν-1/2(k/s) , +i √(π)/2e^-πν2∫_-∞^-^0 τ̣ e^-i kτ/(-τ)^1/2 H_iν^(1)(-sτ) = e^+iπ4/√(2s) |Γ(12+iν)|^2 P_iν-1/2(k/s) . Connection formula. Analogously to Bessel and Hankel functions, the Legendre functions satisfy the connection formula |Γ(12+iμ)|^2 P_iμ-1/2(z) = e^-iπ2/sinh(πμ)[Q_-iμ-1/2(z) - Q_+iμ-1/2(z)] . Its analytic continuation is defined by |Γ(12+iμ)|^2 P_iμ-1/2(z e^+iπ) = 1/sinh(πμ)[e^+πμQ_+iμ-1/2(z) - e^-πμ Q_-iμ-1/2(z)] . Large-order asymptotic forms. The LegendrePfunction has a simple behaviour atz→0(similar to BesselJfunction), while the LegendreQfunction has a simple behaviour atz→∞(similar to HankelHfunction). Indeed, they are given by P_iν-1/2(coshξ) ∼(ξ/sinhξ)^1/2 , Q_iν-1/2(coshξ) ∼π/2i(2ξ/πνξsinhξ)^1/2 e^-i(νξ - π4) . These expressions equivalently give the large-order asymptotic forms. The asymptotic behaviour of the LegendrePfunction reads P_iν-1/2(z) ∼1/√(π)Γ(-iν)/Γ(12-iν)(1/2z)^12+iν , forz→∞. §.§ Derivation of the spectral representation For completeness, we present the proof of the spectral representation of massive scalar field in de Sitter, closely following <cit.>. Let us first consider the case where the timeτ_1is in the future ofτ_2, i.e.-kτ_1<-kτ_2, so that the time-ordered propagator reduces to_++(k; τ_1, τ_2) = u_k(τ_1, μ) u_k^*(τ_2, μ). Expanding the first Hankel functionH_iν^(2)(-kτ_1)in (<ref>) in terms of Bessel functions using (<ref>), using Eq. (<ref>), and the symmetry of the pole prescription with respect toν↔-ν, the integral can be written _++(k; τ_1, τ_2) = i/2∫_-∞^+∞ν̣ ν J_iν(-kτ_1) H_iν^(2)(-kτ_2)/(ν^2-μ^2)_iϵ . The crucial stage is to observe that the numeratorνJ_iν(-kτ_1) H_iν^(2)(-kτ_2)is analytic in the entire complexνplane, so that we choose to close the integration contour in the lower-half complex plane <cit.>. Since the large-νexpansion (<ref>) gives ν J_iν(z_1) H_iν^(2)(z_2) ∼e^iνlog(z_1/z_2)/π , the arc at infinity does not contribute forz_1<z_2andIm(ν) →-∞. By Cauchy's residue theorem, only the two poles located atν= ±μ-iϵare selected by the closed contour integral, which yields _++(k; τ_1, τ_2) = π/4sinh(πμ)(e^+πμJ_+iμ(-kτ_1) H_+iμ^(2)(-kτ_2) - e^-πμ J_-iμ(-kτ_1) H_-iμ^(2)(-kτ_2)) = u_k(τ_1, μ) u_k^*(τ_2, μ) , where we have again used Eqs. (<ref>) and (<ref>). For the case where the timeτ_2is in the future ofτ_1, i.e.-kτ_2<-kτ_1, one needs to expand the second Hankel functionH_iν^(2)(-kτ_2)in (<ref>) in terms of Bessel functions and then close the contour, picking up the corresponding residues. §.§ Non-analyticity and dispersive integral in the energy domain In Sec. 
<ref>, by performing the spectral integral in the mass domain, we showed that the effect of the particle production poles on the off-shell three-point function is to rotate its external energyF^(3)(z, μ) →F^(3)(ze^+iπ, μ). Similarly, in Sec. <ref> for the four-point function, we have observed that both the factorised contributionF_-+and the collider contribution of the nested channelF_++^colliderare connected through the analytic continuation of external energies. In the following, we show that this connection is manifested through a dispersive integral in the energy domain <cit.>. Complex energy domain. Let us define the off-shell three-point function by its integral representation only F^(3)(z, μ) ≡ -i √(π)/2 e^πμ2+ iπ4√(K) ∫_-∞^0 τ̣ e^iE τ/(-τ)^1/2 H_iμ^(2)(-Kτ) , wherez ≡E/K, withEandKdenoting the external and internal energies, respectively. The physical kinematic region is given byz ≥1. We now want to analytically continue this function in the entire complexzplane, and in particular outside the physical region, without actually evaluating the integral explicitly (the result is given in (<ref>)). First, from the asymptotic form of the Hankel function, we notice that the UV convergence asτ→-∞is controlled by the phase ofE+K, i.e. the sum of the energies entering this three-point function. The integral converges only whenE+Khas a negative imaginary part. As well explained in <cit.>, the integral can be regularised by deforming the contour-∞→-∞^+(E+K)^-1, with-∞^+ ≡-∞(1-iϵ), so that the lower boundτ=-∞is approached from a direction that depends on the argument ofE+K. Then, we note that the integrand exhibits a branch cut, from the Hankel function and the square root, that we choose to be on the positive real axisτ∈(0, +∞), as usual. This gives rise to a discontinuity in the energy domain alongE+K ∈(-∞, 0), equivalently alongz ∈(-∞, -1). Discontinuity. We now show that the full functionF^(3)(z, μ)can be recovered from the knowledge of its discontinuity only. By slightly deforming the time integration, the discontinuity of the integral along the branch cutz ∈(-∞, -1)is given by the discontinuity of the integrand itself z[F^(3)(z, μ)] = ϵ→0lim[F^(3)(ze^-iϵ, μ) - F^(3)(ze^+iϵ, μ)] = i √(π)/2 e^πμ2+ iπ4√(K) ∫_∞^0τ̣ τ[ e^iE τ/τ^1/2 H_iμ^(2)(Kτ)] . Using the known analytic continuation of the Hankel function (<ref>) and (<ref>), the discontinuity of the integrand is given by τ[ e^iE τ/τ^1/2 H_iμ^(2)(Kτ)] = -2icosh(πμ) e^iE τ/τ^1/2 H_iμ^(2)(Kτ) . Eventually, after changing variables, we obtain z[F^(3)(z, μ)] = -2icosh(πμ) F^(3)(-z, μ) . Dispersive integral. At fixedμ, since the functionF^(3)(z, μ)is analytic everywhere except on the branch cut, we can use Cauchy's integral formula to write F^(3)(z, μ) = ∮_ẓ'/2iπF^(3)(z', μ)/z'-z = ∫_-∞^-1ẓ'/2iπ_z'[F^(3)(z', μ)]/z'-z , where the integral contourencircleszcounterclockwise. The second equality is found by deforming the contour as shown in Fig. <ref>, as the integral along the large arc vanishes. Using the found discontinuity of the three-point function (<ref>), we finally get F^(3)(z, μ) = -2icosh(πμ) ∫_-∞^-1ẓ'/2iπF^(3)(-z', μ)/z'-z . Inversely,F^(3)(-z, μ)is obtained fromF^(3)(z, μ)in the same way. The same approach can be employed to reconstruct the complete four-point correlator, including the tower of EFT poles, using only the non-local signal. This powerful method was used and further explored in <cit.>. tocsectionReferencesutphys
http://arxiv.org/abs/2409.02738v1
20240904141417
SOAR: Simultaneous Exploration and Photographing with Heterogeneous UAVs for Fast Autonomous Reconstruction
[ "Mingjie Zhang", "Chen Feng", "Zengzhi Li", "Guiyong Zheng", "Yiming Luo", "Zhu Wang", "Jinni Zhou", "Shaojie Shen", "Boyu Zhou" ]
cs.RO
[ "cs.RO" ]
* Michiel Rollier, Aisling J. Daly, Jan M. Baetens Received 2024; accepted 2024 ==================================================== § ABSTRACT Unmanned Aerial Vehicles (UAVs) have gained significant popularity in scene reconstruction. This paper presents SOAR, a LiDAR-Visual heterogeneous multi-UAV system specifically designed for fast autonomous reconstruction of complex environments. Our system comprises a LiDAR-equipped explorer with a large field-of-view (FoV), alongside photographers equipped with cameras. To ensure rapid acquisition of the scene's surface geometry, we employ a surface frontier-based exploration strategy for the explorer. As the surface is progressively explored, we identify the uncovered areas and generate viewpoints incrementally. These viewpoints are then assigned to photographers through solving a Consistent Multiple Depot Multiple Traveling Salesman Problem (Consistent-MDMTSP), which optimizes scanning efficiency while ensuring task consistency. Finally, photographers utilize the assigned viewpoints to determine optimal coverage paths for acquiring images. We present extensive benchmarks in the realistic simulator, which validates the performance of SOAR compared with classical and state-of-the-art methods. For more details, please see our project page at https://sysu-star.github.io/SOARsysu-star.github.io/SOAR. § INTRODUCTION With the increasing demand for three-dimensional (3D) reconstruction in various fields, including urban planning, digital cultural heritage, and structural inspection, the utilization of unmanned aerial vehicles (UAVs) for autonomous reconstruction has garnered significant attention. Due to their agility and flexibility, UAVs have emerged as ideal platforms for rapidly acquiring images and reconstructing 3D models. An effective planning method is pivotal to fully realizing the potential of UAVs and advancing the efficiency and quality of reconstruction. Most UAV planning methods for reconstruction can be categorized into two categories: model-based and model-free methods. Model-based methods employ an "explore-then-exploit" strategy <cit.>, typically involving pre-scanning the field or relying on up-to-date prior information, such as satellite images<cit.>, to construct a rough prior model. The prior model is then utilized for viewpoint generation and path planning. However, constructing a prior model can be time-consuming and relies heavily on the available data. On the other hand, model-free methods <cit.> partially eliminate the need for a prior model through autonomous exploration. Unfortunately, without the guidance of a prior model, they are limited to conducting local planning to explore unknown regions while simultaneously scanning the known surface. As a result, the efficiency of the reconstruction process is limited. In this paper, we propose SOAR, a LiDAR-Visual heterogeneous multi-UAV planner that enables simultaneous exploration and photographing for fast autonomous reconstruction of complex scenes. Our approach combines the strengths of both model-based and model-free methods. By utilizing a team of collaborative UAVs, it allows for the object to be scanned in parallel with the coarse model generation, which significantly enhances the efficiency of the reconstruction process. The system incorporates an explorer UAV equipped with a LiDAR sensor that has a large sensing range, which enables rapid acquisition of surface geometry. 
Similar to the prior model in model-based methods, the surface provides abundant information for conducting long-range viewpoint generation and path planning. Simultaneously, the task of scanning the already-explored surface is assigned to multiple photographers equipped with RGB cameras, working collaboratively to achieve comprehensive scene coverage. As the explorer progressively acquires surface information, we propose an efficient viewpoint generation method capable of incrementally generating a minimal number of viewpoints necessary to cover the surface. These viewpoints are then clustered and assigned to the photographers by solving a Consistent-MDMTSP. This iterative process optimizes the scanning efficiency of the photographers while ensuring consistency in consecutive task assignments. Finally, each photographer plans the shortest path to capture images based on the assigned clusters, utilizing them as global guidance for efficient image acquisition. We compare our method with classic and state-of-the-art methods in simulation. Results demonstrate that our method achieves higher efficiency and superior reconstruction quality in benchmark scenarios. In summary, the contributions of this paper are summarized as follows: 1) A novel LiDAR-Visual heterogeneous multi-UAV system that enables rapid and efficient completion of reconstruction tasks. 2) An incremental viewpoint generation method that produces a minimal number of viewpoints to ensure full coverage as the surface information is incrementally acquired. 3) A task assignment method that iteratively optimizes the scanning efficiency while ensuring consistency in consecutive task assignments. 4) The proposed method has been extensively validated in two realistic simulation environments. The source code[<https://github.com/SYSU-STAR/SOAR>] of our system will be released. § RELATED WORK §.§ UAV-based Reconstruction In UAV-based reconstruction research, identifying suitable imaging positions and devising efficient paths are extensively explored topics. Most methods can be classified as either model-based or model-free. The majority of model-based methods utilize the "explore-then-exploit" strategy, which consists of two phases. The first phase is called the exploration phase, which acquires the coarse prior model from pre-flight <cit.> or satellite images<cit.>. In the exploitation phase, global optimal viewpoints and paths are generated based on the coarse prior model to acquire proper images for 3D reconstruction. However, decomposing tasks into two stages incurs high costs and adds complexity to the process. The model-free method means reconstructing target scenes without a prior model. Therefore, this method has to find the best scanning trajectory in an online manner from a partially constructed model.<cit.> adopt surface-based planning, which concentrates on reconstructing a precise 3D surface model instead of exploring whole unknown spaces.<cit.> extract incomplete surface elements via TSDF and generate a list of candidate viewpoints to cover them. Due to the absence of a prior model, model-free methods cannot avoid the occurrence of a local optimal dilemma. In this study, we leverage the strengths of both model-free and model-based approaches by employing a heterogeneous multi-UAV system. The explorer rapidly explores the environment, supplying ample scene information to photographers, thus facilitating efficient online planning. 
§.§ Multi-robot Planning Multi-robot systems have been extensively explored in various studies related to scene reconstruction, where they offer reduced reconstruction time. The efficiency of a multi-robot system heavily relies on the effectiveness of task assignment. In prior works, unknown areas or viewpoints are typically treated as the tasks to be assigned. For instance, Jing et al. <cit.> formulate a multi-agent Coverage Path Planning (CPP) problem, solved using a Set-Covering Vehicle Routing Problem (SC-VRP) approach, to inspect structures. Additionally, Hardouin et al. <cit.> utilize a TSP-Greedy assignment algorithm to assign viewpoints to each robot. To achieve better task partitioning and enhance robustness against communication loss and failure, Zhou et al. <cit.> devise a Capacitated Vehicle Routing Problem (CVRP) formulation to minimize the overall lengths of the robot coverage paths. However, in scenarios involving rapid and incremental generation of viewpoints, the accumulation of unvisited viewpoints becomes significant. Including all unvisited viewpoints in every task assignment is impractical and harms consistency, since some of them may already have been assigned appropriately in previous iterations. In this paper, we introduce the task assignment method Consistent-MDMTSP, which leverages previous assignment results for rapid iterative optimization of scanning efficiency. Additionally, we introduce costs related to task consistency to enhance the uniformity of consecutive assignments. § PROBLEM FORMULATION We consider a heterogeneous multi-UAV system to reconstruct the scene in an unknown and spatially limited 3D space V⊂ℝ^3 with a bounding box B. Our system comprises one explorer and N_p photographers. The explorer is equipped with a large-scale perception LiDAR for rapid exploration. Photographers are equipped with a gimbaled RGB camera with 2 degrees of freedom (pitch angle θ_cam and yaw angle ψ_cam) and a limited FoV for scene scanning. Due to the explorer's faster speed and larger perception range compared to the photographers, we assume that the speed of rough exploration exceeds that of fine scanning. Additionally, we assume that all agents are allowed to communicate with each other at any time. The space V is represented as a set of cubic voxels v ∈ V, initially designated as unknown and continuously updated by the explorer to be occupied or free. Let V_ukn⊆ V, V_occ⊆ V, and V_free⊆ V denote the sets of unknown, occupied, and free voxels, respectively. Also, let S be the set of surface voxels. A voxel v ∈ S if and only if: v ∈ V_occ ∧ ∃ nbr_v^6∈ V_free Here, nbr_v^6 denotes a 6-connected neighbor of v. Our objective is to leverage the explorer to identify all surfaces S while simultaneously deploying the photographers to achieve full coverage of all surfaces S in the shortest time possible. § SYSTEM OVERVIEW As depicted in Fig. <ref>, the system comprises an explorer and multiple photographers. The explorer utilizes a surface frontier-based exploration approach (Sect.<ref>) to rapidly acquire geometric information about the scene. Concurrently, as more surfaces are explored, viewpoints are incrementally generated (Sect.<ref>). These viewpoints are uniformly and efficiently distributed to each photographer through the Consistent-MDMTSP method with high task consistency (Sect.<ref>).
Photographers utilize the received viewpoint cluster tasks as global guidance for local path planning and trajectory generation (Sect.<ref>), ensuring the completion of their assigned tasks in the shortest time possible. Finally, the image-pose pairs are sent for offline 3D reconstruction, resulting in textured 3D models. § METHODOLOGY §.§ Surface Frontier-based Exploration To provide sufficient prior structure information for scene coverage, we aim for the explorer to focus on the surface of the scene. Inspired by <cit.>, we adopt surface frontier-guided planning for rapid exploration. §.§.§ Surface Frontier Detection and Viewpoint Generation. A surface frontier voxel v_sf is defined as a free voxel with an occupied neighbor v_o and an unknown neighbor v_u, where v_o and v_u are also neighbors. Similar to <cit.>, all v_sf are first clustered based on connectivity, and larger clusters are then split into smaller ones using a PCA-based clustering approach. As the map is gradually updated, outdated clusters are deleted and new clusters are detected. Exploration ends when no surface frontier remains. We calculate the centroids of the N_c clusters and sample a set of viewpoints with different yaw angles at a certain radius from each centroid on horizontal planes. To better observe inclined surfaces such as roofs, we also sample at different heights during viewpoint generation. For each cluster, we select the viewpoint with the most visible v_sf as the representative of this cluster, denoted as vp_i = (p_i, ψ_i), where p_i and ψ_i respectively represent the position and the yaw angle of the viewpoint. vp_i is reserved for the cluster to guide the path planning. §.§.§ Exploration Planning. Finding the shortest tour that visits the N_c viewpoints increases the efficiency of exploration. We model this problem as an Asymmetric Traveling Salesman Problem (ATSP) and design the cost matrix 𝐂_atsp required by the Lin-Kernighan-Helsgaun heuristic solver <cit.> (LKH-Solver). 𝐂_atsp is an (N_c + 1)-dimensional square matrix whose major (N_c + 1) × N_c block is composed of the costs between each pair of viewpoints and from the explorer's current state (p_0, ψ_0) to each viewpoint, given by: 𝐂(i,j) = max{Len(P(p_i,p_j))/v_max, |ψ_i-ψ_j|/ψ̇_max} where P(p_i, p_j) denotes the collision-free path between p_i and p_j searched by the A* algorithm, and v_max and ψ̇_max are the maximum velocity and the maximum yaw change rate. The cost from each vp_i back to the current position is set to zero, since our method does not consider the cost of returning. Finally, 𝐂_atsp can be presented as: 𝐂_atsp(i,j) = { 0, if j = 0; 𝐂(i,j), if 0 ≤ i ≤ N_c, 0 < j ≤ N_c }. By solving this ATSP, we determine the next viewpoint to visit and then utilize MINCO <cit.> to generate a continuous collision-free trajectory from the current position to the next viewpoint, thereby exploring the scene surface rapidly. §.§ Incremental Viewpoint Generation To comprehensively cover the scene, it is essential to generate a reasonable set of covering viewpoints. Since the scene surface information is obtained by the explorer progressively, the viewpoint generation process must adapt to the dynamically changing surface. To this end, we propose an incremental viewpoint generation method aimed at achieving full scene coverage with a minimal number of viewpoints. §.§.§ Uncovered Surface Extraction.
As the environment is gradually explored, the surfaces of the scene also expand, including some unstable or uncertain surfaces that are not suitable for generating corresponding viewpoints at the current moment. Therefore, we incrementally select stable surfaces, as depicted in Fig. <ref>: First, we detect known surfaces based on connectivity and then utilize a clustering method based on Euclidean distance to partition the detected surfaces into smaller surface clusters (Fig. <ref>-(a)). Subsequently, we identify which cluster has been fully explored (Fig. <ref>-(b)), designating it as a "completely explored surface," denoted as S_exp. Here, the S_exp is defined as a surface cluster devoid of any surrounding surface frontiers. For finer coverage, we extract the point cloud PT_new within each voxel of S_exp from the point cloud map maintained by the ikdtree <cit.>, while concurrently labeling each voxel as "extracted." During the next update of the map, we repeat the aforementioned process. However, it's worth noting that the extracted surface will not be involved in the above operation again. This ensures that each point cloud on the surface of the entire scene is only extracted once, thereby avoiding redundant computations (Fig. <ref>-(c),<ref>-(d)). To reduce the generation of invalid viewpoints, we need to filter out the points in PT_new that have already been covered by point clouds. The specific operations are as follows: We utilize all CV_hq to perform ray-casting on PT_new within the camera FoV, where CV_hq are all the high-quality viewpoints obtained through the updates in Sect. <ref>. We merge the point cloud from PT_new, which has not been intersected by collision-free ray-casting, with the previously uncovered point cloud PT_unc,prev to obtain the current uncovered surface point cloud PT_unc. §.§.§ Viewpoint Sampling and Iterative Update. Our method employs 5-DoF viewpoints, denoted as 𝐜𝐯 = [p_c, θ_c, ψ_c], where p_c represents the position of the camera, while θ_c and ψ_c respectively indicate the gimbal's pitch and yaw angles. We conduct viewpoint sampling guided by the normal vectors of the uncovered point cloud P_unc. For each point cloud 𝐩𝐭 = [pt_x, pt_y, pt_z] within P_unc and its associated normal vector 𝐧𝐯 = [n_x, n_y, n_z], we sample a viewpoint at a distance D away according to the following procedure: p_c = 𝐩𝐭 + D ·𝐧𝐯 θ_c = arctan(n_z,√(n_x^2 + n_y^2)) ψ_c = arctan(-n_y, -n_x) However, as the direction of each normal vector 𝐧𝐯 cannot be determined, we sample viewpoints in both directions. We then filter them based on whether they are within a free area and whether a collision-free ray can be projected to the corresponding point cloud. This process yields the initial set of viewpoints CV_ini. Below, we evaluate the coverage capability of each viewpoint. We enumerate through each viewpoint cv in CV_ini to compute the number of point clouds from P_unc that can be observed, denoted as n_obs. Concurrently, for each observed point cloud, we identify the viewpoint with the maximum n_obs as its truly covering viewpoint, labeled as cv_cover. The count of point clouds truly covered by each viewpoint, n_cover, is updated accordingly. In the above process, frequent queries of the correspondence between point clouds and viewpoints are required to perform update operations. Therefore, we maintain a pair of hash tables for both viewpoints and point clouds, enabling quick mapping from the position of a point cloud to the corresponding covering viewpoint. 
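For concreteness, the sampling rule above can be condensed into a few lines. The sketch below uses our own helper names and leaves the free-space and ray-casting validity checks as stubs, since they depend on the maintained occupancy and point-cloud maps.

```python
# Sketch of the 5-DoF viewpoint sampling rule described above: place the camera
# a distance D along the surface normal and aim the gimbal back at the point.
import numpy as np

def sample_viewpoints(pt, nv, D):
    """Return candidate viewpoints (p_c, theta_c, psi_c) for both normal signs."""
    nv = np.asarray(nv, dtype=float)
    nv = nv / np.linalg.norm(nv)                           # the normal's sign is ambiguous
    candidates = []
    for n in (nv, -nv):
        p_c = np.asarray(pt, dtype=float) + D * n
        theta_c = np.arctan2(n[2], np.hypot(n[0], n[1]))   # gimbal pitch
        psi_c = np.arctan2(-n[1], -n[0])                   # gimbal yaw, looking back at pt
        candidates.append((p_c, theta_c, psi_c))
    return candidates

def is_valid(viewpoint, occupancy_map=None):
    # Placeholder: keep a viewpoint only if it lies in free space and a
    # collision-free ray reaches the surface point, as required in the text.
    return True

pt, nv = [2.0, 1.0, 3.5], [0.0, -1.0, 0.2]
initial_set = [vp for vp in sample_viewpoints(pt, nv, D=4.0) if is_valid(vp)]
print(initial_set)
```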
Building on our previous work <cit.>, we use a gravitation-like model to update the viewpoints in CV_ini. This model merges viewpoints that cover fewer areas into those covering more, thus replacing the less effective viewpoints and eliminating redundancy. Specifically, we first sort the viewpoints in CV_ini in descending order based on n_cover. Then, for each viewpoint cv_i, we sequentially query the neighboring viewpoints CV_q within a radius r_q. The pose of cv_i is then updated using the gravitational force exerted by CV_q: p_i = p_i + ∑_cv_q∈CV_qn_cover,q/n_cover,i(p_q - p_i) where p_i is the updated position of cv_i. Similarly, we obtain the updated pitch θ_i and yawψ_i. Then, we label each viewpoint in CV_q as "dormant," ensuring that these viewpoints no longer participate in the aforementioned update process. After one round of enumeration, we obtain the updated viewpoint set CV_u and update the uncovered point cloud PT_unc. We repeat the above procedure of viewpoint sampling and updating until the current coverage rate reaches a threshold. §.§ Consistent-MDMTSP for Task Assignment In this section, we introduce a novel task assignment approach that is based on a viewpoint cluster task structure (Sect. <ref>), while simultaneously ensuring scanning efficiency and consistency in consecutive task assignments (Sect. <ref>). §.§.§ Viewpoint Clustering Task Structure. The broad perception range and rapid exploration pace of the explorer result in a considerable number of viewpoints needing assignment to photographers, imposing significant time overhead if assigned directly. To address this challenge, we draw inspiration from <cit.> and employ a viewpoint clustering method to partition the entire set of viewpoints into several subsets. Additionally, we design a viewpoint clustering task (VCT) structure to incrementally maintain the status of VCTs. Each VCT_i consists of four parameters: VP_i, p_avg,i, h_cost,i, and L_cost,i. VP_i represents the positions of all viewpoints contained in VCT_i. p_avg,i denotes the average position of VP_i. h_cost,i stands for the execution cost of VCT_i, which is approximated to be dependent only on the number of viewpoints in VP_i. The mathematical expression is as follows: h_cost,i = λ_h * (NUM(VP_i)-1) * d_thr Here, NUM(VP_i) represents the number of viewpoints in VCT_i, and d_thr denotes the distance threshold for viewpoint clustering. L_cost,i represents the A* path distance between p_avg,i and all the average positions of other VCTs. The mathematical expression is as follows: L_cost,i,j = Len[P(p_avg,i,p_avg,j)] Note that d_thr is relatively small compared to the entire scene, so it is assumed that p_avg,i remains relatively stable throughout subsequent computations. Therefore, we can maintain L_cost,i incrementally. The proposed viewpoint clustering method primarily relies on visibility and distance for clustering. It ensures that there are no obstructions between any pair of viewpoints within a cluster, and the distance between them is less than the distance threshold d_thr. Whenever a new viewpoint is added, it is first iteratively matched with the existing VCTs based on distance priority within the range of d_thr. If the new viewpoint can undergo unobstructed ray-casting with VP_i in VCT_i and the pairwise distances are all less than the threshold d_thr, then the viewpoint is merged into VCT_i. Otherwise, it initializes itself as a new VCT. 
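A minimal sketch of this bookkeeping is given below. The names, thresholds, and the stubbed visibility test are ours; it only illustrates the execution-cost heuristic h_cost and the distance-plus-visibility insertion rule described above.

```python
# Sketch of the viewpoint-cluster-task (VCT) structure: the execution-cost
# heuristic h_cost = lambda_h * (|VP| - 1) * d_thr and the insertion rule.
import numpy as np

LAMBDA_H, D_THR = 1.0, 3.0

class VCT:
    def __init__(self, vp):
        self.VP = [np.asarray(vp, dtype=float)]
        self.update()

    def update(self):
        self.p_avg = np.mean(self.VP, axis=0)
        self.h_cost = LAMBDA_H * (len(self.VP) - 1) * D_THR

    def can_absorb(self, vp, mutually_visible):
        # every pairwise distance below d_thr and no occlusion within the cluster
        return all(np.linalg.norm(vp - q) < D_THR and mutually_visible(vp, q)
                   for q in self.VP)

    def absorb(self, vp):
        self.VP.append(np.asarray(vp, dtype=float))
        self.update()

def insert_viewpoint(vcts, vp, mutually_visible=lambda a, b: True):
    vp = np.asarray(vp, dtype=float)
    # try existing clusters by increasing distance to their centers
    for vct in sorted(vcts, key=lambda c: np.linalg.norm(vp - c.p_avg)):
        if np.linalg.norm(vp - vct.p_avg) < D_THR and vct.can_absorb(vp, mutually_visible):
            vct.absorb(vp)
            return
    vcts.append(VCT(vp))   # otherwise initialize a new VCT

clusters = []
for v in [[0, 0, 2], [1.0, 0.5, 2], [10, 0, 2]]:
    insert_viewpoint(clusters, v)
print([len(c.VP) for c in clusters])   # e.g. [2, 1]
```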
When a viewpoint in VP_j is visited, we remove the viewpoint from VP_j in VCT_j, and update p_avg,j and 𝐡_cost,j. If VP_j has no viewpoints, VCT_j will be removed. §.§.§ Consistent-MDMTSP. The optimization problem of assigning multiple tasks to multiple drones while minimizing the maximum travel time of each drone can be formulated as a Multiple Traveling Salesman Problem (MTSP). Despite the availability of mature solvers <cit.> for solving MTSP, there are two main issues with using them directly: 1) Due to incomplete map information, obtaining only a locally optimal solution each time leads to poor consistency, resulting in unnecessary detours for photographers. 2) Since tasks are updated incrementally with minor changes each time, recalculating the overall assignment results is unnecessary. To address these issues, we propose the Consistent-MDMTSP method based on genetic algorithms (GA). This method incorporates the cost term related to task consistency and iteratively generates new assignment results by leveraging the previous results. In our method, we adopt a multi-chromosome genetic representation<cit.>, which enables efficient decoding and encoding of the problem. As depicted in the example illustrated in Fig. <ref>, each individual with multiple chromosomes in the population represents a single solution to the problem. Suppose there are N_p photographers and N_vct VCTs that have not been completed yet. Let a single individual I = {PATH_1, …, PATH_N_p}, where PATH_i = {x_i,1, …, x_i, M_i} and ∑_l=1^N_p M_l = N_vct. Here, PATH_i represents the visit path sequence of the ith photographer, x_i,j denotes the id of the VCT to be visited jth in PATH_i, and M_i represents the number of VCTs in PATH_i. Our fitness function is designed as a combination of distance cost and consistency cost. To evaluate the distance cost, we introduce a weighted directed graph G = (V_d, E_d), where V_d contains N_p photographer nodes and N_vct VCT nodes, and E_d represents the set of edges. We maintain two weight matrices, 𝐂_d,vct and 𝐂_vct: the former represents the distance costs between all photographers and all VCTs: 𝐂_d,vct(k_1,k_2) = Len[P(p_d,k_1,p_avg,k_2)] + h_cost,k_2 k_1 ∈{1,2,…,N_p}, k_2 ∈{1,2,…,N_vct} and the latter represents the distance costs among all VCTs: 𝐂_vct(k_3,k_4) = 𝐋_cost,k_3,k_4 + h_cost,k_4 k_3, k_4∈{1,2,…,N_vct} The calculation of the cost_dis,i for PATH_i is as follows: cost_dis,i = 𝐂_d,vct(i,x_i,1) + ∑_j=1^M_i-1𝐂_vct(x_i,j, x_i,j+1) To improve task assignment consistency, we aim for the current assignment result to closely approximate the previous result when the distance costs are relatively similar. Given the ith photographer’s previous visit path sequence, denoted as PATH_i^* = {x_i,1^*, …, x_i, M_i^*^*}, our objective is to maximize the length of the common prefix between PATH_i and PATH_i^*, assigning higher weights to initial segments, thereby enhancing overall planning consistency. cost_con,i = -∑_k=1^K_same R · e^-α·DSUM(k) The length of this common prefix is denoted by K_same, while DSUM(k) denotes the cumulative distance along PATH_i for the first k VCTs of the ith photographer. The parameters R and α control the weight of consistency and the distance decay rate, respectively. A lower cost_con,i signifies greater task consistency. 
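A sketch of the per-photographer cost evaluation is given below; it assumes the two cost matrices have already been filled as defined above, uses R = 50 and alpha = 0.1 as in the implementation details reported later, and treats the cost-matrix entries as the cumulative-distance proxy DSUM(k), which is an assumption on our part.

import numpy as np

def path_cost(path, prev_path, drone_id, C_d_vct, C_vct, R=50.0, alpha=0.1):
    # Distance plus consistency cost of one photographer's VCT visit sequence.
    # path, prev_path: current and previous VCT index sequences for this drone;
    # C_d_vct[k1, k2]: photographer-to-VCT cost, C_vct[k3, k4]: VCT-to-VCT cost,
    # both already including the execution cost h_cost of the target VCT.
    if len(path) == 0:
        return 0.0
    cost_dis = C_d_vct[drone_id, path[0]]
    for a, b in zip(path[:-1], path[1:]):
        cost_dis += C_vct[a, b]
    # length of the common prefix with the previous assignment
    k_same = 0
    while (k_same < min(len(path), len(prev_path))
           and path[k_same] == prev_path[k_same]):
        k_same += 1
    cost_con, dsum = 0.0, 0.0
    for k in range(k_same):
        dsum += C_d_vct[drone_id, path[0]] if k == 0 else C_vct[path[k - 1], path[k]]
        cost_con -= R * np.exp(-alpha * dsum)   # earlier common VCTs weigh more
    return cost_dis + cost_con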
The overall cost of PATH_i is given by: cost_all,i = cost_dis,i + cost_con,i To achieve a balanced assignments of VCTs among photographers and maintain high task consistency, we define the fitness function for individual I as follows: Fit(I) = -(max{cost_all,i}_i=1^M_i + ϵ * ∑_i=1^M_i cost_all,i) To minimize the maximum cost incurred by any photographer, we identify the cost component with the highest value and optimize it accordingly.Additionally, a small penalty term is added to minimize the overall cost when the maximum costs are similar. The negative sign serves to invert the cost function into a fitness function. Given that only a small subset of VCTs is modified between map updates, we adopt a strategy that leverages the previous best individual. Rather than randomly initializing the population for each iteration, we utilize the highest fitness individual from the preceding iteration, denoted as I_best,prev, as a foundation for generating the initial population P_init,cur. Specifically, we construct I_tmp by excluding executed VCTs from I_best,prev. Then, we randomly insert all newly added VCTs into the chromosomes of I_tmp, thereby obtaining one individual in P_init,cur. Repeating this random operation multiple times yields complete initial population. This approach significantly reduces the iteration times. Finally, after K_GA iterations, the optimization process concludes, and the individual with the highest fitness is selected as the assignment result. This result is then communicated to all photographers. §.§ Coverage Planning for Photographers In this section, the planning strategy for all photographers remains consistent. The strategy involves photographers receiving assigned viewpoint clusters and visitation order through communication between UAVs. Subsequently, they utilize this information as global guidance for local path planning (Sect. <ref>), generating collision-free trajectories (Sect. <ref>) to achieve rapid coverage. §.§.§ Local Path Planning. Each photographer performs refined local path planning guided by the received global viewpoint cluster path. We select all M_local viewpoints from the first K_local VCTs along the global path to plan a local path. We construct an (M_local+1)-dimensional square cost matrix to solve an ATSP with the photographer’s current position as the starting point and the center of the (K_local+1)-th VCT in the global path as the endpoint (K_local≥ 1). This ATSP cost matrix resembles <ref>. §.§.§ Trajectory Optimization. To achieve smooth navigation, we generate a collision-free and continuous trajectory passing through the first M_kc viewpoints P_kc = {v_c^1,…,v_c^M_kc} of the path P_c. Specifically, we partition the trajectory into M_kc pieces and enforce boundary conditions between trajectory pieces: tp_c^i(0)=p_c^i-1, tp_c^i(T_c^i) = p_c^i, ∀ 1≤ i ≤ M_kc Specifically, p_c^0 represents the current position of the drone. To ensure safety, we maintain an ESDF map to impose position constraints: D_ESDF(tp_c^i(t)) ≥ r_s, ∀ t ∈ [0,T_c^i], ∀ 1≤ i ≤ M_kc where D_ESDF(·) represents the signed distance between the drone and the nearest obstacle boundary, tp_c^i denotes the i-th trajectory segment with a duration of T_c^i, and r_s denotes the drone's safe distance. We also impose kinodynamic constraints, including: v(t)≤ v_max, a(t)≤ a_max, and j(t)≤ j_max, to mitigate visual blur caused by aggressive flight. Additionally, The yaw and pitch trajectories are generated with similar constraints as mentioned above. 
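The kinodynamic limits listed above can be spot-checked on a candidate polynomial piece by dense sampling; the sketch below does only that (it is not the trajectory optimizer, and the ESDF clearance constraint is omitted). The limit values mirror the photographer settings quoted in the experiments, and the coefficient layout is an assumption.

import numpy as np

def satisfies_limits(coeffs, T, v_max=1.0, a_max=1.0, j_max=1.0, n_samples=50):
    # Check velocity/acceleration/jerk limits of one polynomial trajectory piece
    # by dense sampling (a simple sufficient check, not the optimizer itself).
    # coeffs: (3, K) array of per-axis polynomial coefficients, highest degree
    # first; T: duration of the piece.
    for t in np.linspace(0.0, T, n_samples):
        v = np.array([np.polyval(np.polyder(c, 1), t) for c in coeffs])
        a = np.array([np.polyval(np.polyder(c, 2), t) for c in coeffs])
        j = np.array([np.polyval(np.polyder(c, 3), t) for c in coeffs])
        if (np.linalg.norm(v) > v_max or np.linalg.norm(a) > a_max
                or np.linalg.norm(j) > j_max):
            return False
    return True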
We employ MINCO <cit.> to optimize the trajectory while satisfying the aforementioned constraints. § EXPERIMENTS §.§ Implementation Details We set D = 5m in Eq.<ref>, λ_h = 0.6 and d_thr = 6.0m in Eq.<ref>, ϵ = 10^-4 in Eq.<ref>, R = 50, α = 0.1 in Eq.<ref>, and the number of iterations in the genetic algorithm K_GA = 700. All experiments were conducted using the MARSIM <cit.> simulator to simulate quadrotor UAVs equipped with a MID360 LiDAR. A 2-axis gimbal camera was employed as the sensor for each photographer. In exploration planning (Sect. <ref>) and local path planning (Sect. <ref>), the ATSPs are solved using LKH-Solver<cit.>. All the above modules run on an Intel Core i7-13700F CPU. §.§ Benchmark Comparisons and Analysis To evaluate our proposed framework, we conduct simulations in two complex environments: the Sydney Opera House (30 x 36 x 14 m³) and the Pisa Cathedral (29 x 37 x 15 m³). Our proposed method is compared to both a model-free method and a model-based method, namely the multi-robot version of Star-Searcher <cit.> (SSearchers) and Multi-EE, respectively. All experiments employ four UAVs, with sensor configurations detailed in Table <ref>. The LiDAR-equipped UAV, not involved in image capture, has relaxed dynamic limits of v_max = 2.0 m/s, ω_max = 2.0 rad/s, a_max = 2.0 m/s^2, and j_max = 2.0 m/s^3. The image capture UAVs adhere to stricter limits of v_max = 1.0 m/s, ω_max = 1.0 rad/s, a_max = 1.0 m/s^2, and j_max = 1.0 m/s^3 to ensure image quality. All cameras possess a [80^∘, 60^∘] FoV and capture images at a resolution of 640 × 480 pixels. Star-Searcher <cit.> extends exploration by incorporating surface observation. We implement SSearchers by partitioning the scene into bounding boxes based on prior knowledge and independently applying the Star-Searcher planner within each box. The Multi-EE approach involves an exploration phase where multiple UAVs capture images along predefined trajectories to construct a coarse 3D model using Reality Capture[<https://www.capturingreality.com/>]. Subsequently, in the exploitation phase, global viewpoints are generated based on the coarse model using our proposed method and then distributed among UAVs by solving the MTSP with LKH-Solver <cit.>. Fig. <ref> provides a detailed overview of our approach. To simulate image acquisition, we rendered images in Blender[<https://www.blender.org/>] at a 2 Hz frequency along the drone trajectories and processed these image-pose pairs using Reality Capture to produce the final 3D model. We evaluate performance using two metrics: efficiency (flight time and path length) and reconstruction quality (recall, precision, and F-score)<cit.>, with an F-score threshold of 0.01 m. Due to SSearchers lacking an explicit assignment algorithm, reported times represent the average across all UAVs, while for Ours and Multi-EE, the maximum time among the four UAVs is considered. Table <ref> and Fig. <ref> present the comparison results. Our proposed heterogeneous system outperforms competing methods in both flight time and reconstruction quality, demonstrating its suitability and potential for reconstructing complex, large-scale scenes. This improvement is primarily attributed to our incremental viewpoint generation approach and efficient task assignment strategy. §.§ Ablation Study 1) Incremental viewpoint generation: To validate the superiority of our incremental viewpoint generation strategy, we compared it to the global viewpoint generation approach of FC-Planner <cit.> (Global). 
In our method, viewpoints are incrementally generated as the explorer gradually explores the environment, while the global method generates viewpoints directly based on the entire map information. As shown in <ref> and <ref>, our method generates a comparable number of viewpoints and achieves a similar coverage rate to the global method, indicating its ability to maintain a high level of global optimality. 2) Task assignment: To evaluate the impact of our Consistent-MDMTSP, we conducted an ablation study (Exp.MTSP) by replacing it with LKH-Solver's MTSP <cit.> while maintaining other experimental settings. As shown in Table <ref>, our method demonstrates shorter computation time, flight time, and flight length, especially in the complex scenario (Sydney). These improvements are attributed to our Consistent-MDMTSP's iterative optimization process, which leverages the previous optimal solution and accelerates the iteration process. Additionally, by increasing the consistency cost, our method ensures higher task consistency. § CONCLUSION This paper presents a LiDAR-Visual heterogeneous multi-UAV system for rapid and autonomous aerial reconstruction. An explorer provides comprehensive scene information through surface frontier-based exploration, while viewpoints are incrementally generated from uncovered surfaces and assigned to photographers using Consistent-MDMTSP. Our approach exhibits superior efficiency and reconstruction quality compared to state-of-the-art methods, as demonstrated through rigorous evaluations in complex simulation environments. While SOAR demonstrates promising results, several limitations remain to be addressed in future research. The current system primarily relies on simulated environments and assumes ideal communication conditions. Real-world applications necessitate considering additional factors such as image overlap and inter-UAV occlusion during the reconstruction process. To address these limitations, future work will concentrate on optimizing the system architecture to enable robust operation in complex, real-world communication environments.
http://arxiv.org/abs/2409.02898v1
20240904174136
Classification of spin-$1/2$ fermionic quantum spin liquids on the trillium lattice
[ "Ming-Hao Li", "Sounak Biswas", "S. A. Parameswaran" ]
cond-mat.str-el
[ "cond-mat.str-el" ]
Rudolf Peierls Centre for Theoretical Physics, Parks Road, Oxford, OX1 3PU, UK
Rudolf Peierls Centre for Theoretical Physics, Parks Road, Oxford, OX1 3PU, UK
Institut für Theoretische Physik und Astrophysik, Universität Würzburg, 97074 Würzburg, Germany
Rudolf Peierls Centre for Theoretical Physics, Parks Road, Oxford, OX1 3PU, UK

§ ABSTRACT

We study fermionic quantum spin liquids (QSLs) on the three-dimensional trillium lattice of corner-sharing triangles. We are motivated by recent experimental and theoretical investigations that have explored various classical and quantum spin liquid states on similar networks of triangular motifs with strong geometric frustration. Using the framework of Projective Symmetry Groups (PSG), we obtain a classification of all symmetric 𝖹_2 and 𝖴(1) QSLs on the trillium lattice. We find 2 𝖹_2 spin liquids, and a single 𝖴(1) spin liquid which is proximate to one of the 𝖹_2 states. The small number of solutions reflects the constraints imposed by the two non-symmorphic symmetries in the space group of trillium. Using self-consistency conditions of the mean-field equations, we obtain the spinon band structure and spin structure factors corresponding to these states. All three of our spin liquids are gapless at their saddle points: the 𝖹_2 QSLs are both nodal, while the 𝖴(1) case hosts a spinon Fermi surface. One of our 𝖹_2 spin liquids hosts a stable gapless nodal star that is protected by projective symmetries against additions of further-neighbour terms in the mean-field ansatz. We comment on directions for further work.

§ INTRODUCTION

Spin liquids are magnetic systems that fail to order at the temperatures expected on the basis of their exchange energy scale, while exhibiting cooperative behaviour that distinguishes them from a high-temperature paramagnet <cit.>. A natural ingredient leading to such lack of ordering is geometric frustration, wherein the lattice structure eliminates simple ground state configurations which minimise the exchange interaction energy between magnetic moments <cit.>, and which would typically lead to symmetry breaking in the thermodynamic limit. This is manifest, for instance, in a system of three classical spins with pairwise antiferromagnetic Heisenberg interactions: the lowest-energy state of the triangle cannot be described in terms of minimal-energy configurations of each of the individual bonds. Frustrated lattices can be assembled by tiling space with such elementary units — typically triangles or tetrahedra — in order to form edge-sharing or corner-sharing structures: common examples are the triangular and kagome lattices in two dimensions (2D), and the pyrochlore and hyperkagome lattices in 3D. Classical ground states of antiferromagnets on such lattices are macroscopically degenerate <cit.>. These degeneracies can often be understood in the exactly solvable large-N limit: frustration is signaled by a macroscopically degenerate manifold of continuously connected ordering wavevectors <cit.>. Such “classical spin liquids” often order at very low temperatures T (much lower than the scale set by exchange couplings), in accord with the third law of thermodynamics, which forbids the finite T→ 0 entropy associated with an extensive ground-state degeneracy.
Typically, thermal or quantum fluctuations select an ordering wave vector out of this manifold, in a phenomenon termed “order by disorder" <cit.>. However, in some cases the system is sufficiently frustrated that the quantum mechanical ground state selected as T→ 0 continues to exhibit no broken symmetries, and instead is a quantum spin characterized by an emergent deconfined gauge structure. The resulting quantum spin liquid (QSL) is often strikingly characterized by the appearance of fractionalized excitations, whereas its gauge structure is more subtly encoded in certain long-range entanglement properties of the ground-state wavefunction. A powerful framework to understand QSL ground states of quantum spin systems is provided by the projective symmetry group <cit.>. This framework, which makes the emergent gauge structure especially transparent, builds on the so-called “parton construction" <cit.>, and proceeds by representing each spin in terms of auxiliary fermionic `spinons', S⃗=1/2f^†_i σ⃗_ij f_j. The physical Hilbert space of quantum spins is recovered via the projection i.e. by imposing the constraint that there is exactly one fermion per site. The resulting Hamiltonian of these auxiliary (or Abrikosov) fermions is generically quartic and can then be studied within a mean field decoupling wherein operators corresponding to fermion hopping, fermion-pair creation, and fermion-pair annihilation are self-consistently determined, leading to a quadratic mean-field “ansatz". By construction, ground states of such ansatzes correspond to symmetric, disordered wavefunctions, i.e., candidate QSL states. This parton (or “projective”) construction suggests low-energy effective descriptions for these phases in terms of spinons coupled to gauge fields. Of course, the resulting strongly-coupled problem can be challenging to treat in a controlled fashion, particularly in cases where the spinon degrees of freedom are gapless. Nevertheless, a key feature of the parton approach is that it provides a systematic framework to enumerate and classify candidate variational wavefunctions in terms of their topological structure, in much the same way that the Landau-Ginzburg formalism provides a useful starting point to investigate broken symmetries as captured by conventional mean-field wavefunctions. Such a classification is facilitated by the projection of the mean-field Hamiltonian from the large Hilbert space of auxiliary fermions back to the physical spin Hilbert space — essential in order to obtain a sensible spin wave function — which confers a “gauge structure" to the fermion Hilbert space. Specifically, mean field ansatzes which correspond to the same physical wave function after projection are related by a gauge transformation. Consequently, the mean field fermion ansatzes are only required to be invariant under physical symmetries up to an associated gauge transformation. In other words, the mean-field ansatz is invariant under a projective symmetry group (PSG) which is usually larger than the physical symmetry group of the QSL wave function. However, there exists a group of pure gauge transformations — typically 𝖹_2, 𝖴(1) or 𝖲𝖴(2)) — termed the invariant gauge group (IGG), which leaves the mean field anstaz invariant. 
The IGG and PSG together characterize the low-energy, long-wavelength fluctuations around the mean field ansatz: these involve fermions coupled to gauge fields, with the gauge group specified precisely by the IGG, and fermions in a mean-field dispersion classified by representations of the PSG. In other words, different PSGs capture distinct “quantum orders" of QSL phases with a specified IGG, in much the same way that the physical symmetry groups characterise broken symmetries. Notably, there can be distinct PSGs corresponding to the same physical symmetry manifest in the spin wavefunction, underscoring the fact that these “quantum orders” can be richer than their classical counterparts. Experimental evidence for QSLs and the resulting need to characterize their emergent low-energy properties has driven a systematic program of applying the projective construction to a variety of frustrated lattices <cit.>. The resulting mean-field ansatzes provide starting points for more refined calculations where the projected fermion wavefunctions can be calculated variationally using Monte Carlo approaches <cit.>. (Alternative parton constructions that split the spins into bosons <cit.> offer a complementary set of insights into the phenomenology of possible QSLs and their possible proximate phases.) In this work, we continue this program by classifying symmetric spin liquid states on the trillium lattice <cit.>, a three-dimensional network of corner sharing triangles displayed in Fig. <ref>. A natural theoretical motivation of this problem is that the motif of corner-sharing triangles is expected to seed significant magnetic frustration, like the better-known kagome and hyperkagome lattices. At a more experimentally-grounded level, trillium is the magnetic lattice of MnSi, or that of the Ce moments in CeIrSi, which has been considered before in the context of frustrated magnetism <cit.>. Recent characterisations of the quantum spin liquid material K2Ni2(SO4)3 <cit.> show that the magnetic Ni^2+ ions, with S=1, lie on two interconnected trillium lattices, having exactly the same set of symmetries as single trillium lattice— implying that these structures share the classification of symmetric spin liquid states in terms of projective symmetry groups. Another compound KSrFe2(PO4)3 with structures similar to K2Ni2(SO4)3, with S=5/2, has been shown to exhibit no long range order down to T=0.19K <cit.>. Our interest in trillium is also seeded by its remarkable similarity to the hyperhyperkagome (HHK) lattice which describes the network of coupled Cu2+ ions in PbCuTe2O6, which was shown to host a QSL ground state in a series of recent experiments <cit.>, leading to theoretical work on spin liquid states on the underlying HHK structure <cit.>. Both trillium and HHK are three-dimensional networks of corner-sharing triangles with a cubic Bravais lattice where each site belongs to three corner-sharing triangles. Classical frustrated magnets on these lattices share similar phenomenology <cit.>: a large regime with classical spin liquid behaviour, eventually yielding to co-planar ordering at very low temperatures. For both lattices, large-N approaches yield “partial ordering" <cit.>, characterized by a macroscopic but sub-extensive set of ordering wave vectors, whose manifold forms a line (HHK) or surface (trillium) in three dimensional reciprocal space. This is distinct from the large-N signatures of a classical spin liquid, where this manifold would be extensive <cit.>, as obtained, e.g. 
for the pyrochlore, kagome and hyperkagome lattices. [Note that a recent study of classical Ising models with three-spin interactions on the trillium and HHK showed that both host very similar classical fractal spin liquids, with “fractonic” glassy behaviour arising out of kinetic constraints <cit.>; however this is unlikely to be directly relevant to the QSL problem studied here.] The HHK lattice has the same space group and hence the same classification of PSGs as the three-dimensional hyperkagome lattice <cit.>, in which each site is shared by two (rather than three) corner sharing triangles. The latter has been the subject of numerous theoretical investigations of its ordered and spin-liquid states <cit.> motivated by its relevance to the candidate QSL material Na4Ir3O8 <cit.>. The corresponding 𝖯4_1 32 space group has both 3-fold rotations and a 4-fold non-symmorphic screw rotation, with the latter known to cause a drastic reduction of total number of QSL states <cit.>. In contrast, the 𝖯2_1 3 space group of trillium has a three-fold rotation, along with two non-symmorphic screw rotations <cit.>. In the light of the preceding discussion, it is natural to ask what QSL phases are consistent with symmetries of the trillium lattice. To this end, in this paper we undertake a classification of PSGs for spin-1/2 QSLs on this lattice. Although experiments <cit.> indicate that on K2Ni2SO4 is best understood as an effective spin-1 system, understanding the simpler spin-1/2 case is an important first step towards a more comprehensive study of the higher-spin problem. Accordingly, we hope that the present work will guide the interpretation of results of future experiments, and add to our theoretical understanding of QSL phases in three dimensions. The rest of this paper is organised as follows. In Sec. <ref> we introduce the crystal structure of the trillium lattice and the symmetry generators of its space group. In Sec. <ref> we present the symmetry group relations of trillium, and outline the classification of its PSGs using them. We also present the gauge transformations accompanying physical symmetries for all of the PSGs. The details are relegated to Appendices  <ref> and  <ref>. § BACKGROUND: PROJECTIVE SYMMETRY GROUP FORMALISM We briefly review the projective symmetry group classification of parton mean-field theories, as applied to spin models with Heisenberg exchange interactions. Readers familiar with the parton approach can jump ahead to the next section, but may wish to quickly skim this section to orient themselves with our notation and conventions. We begin with the Heisenberg model on a given spatial lattice, H=∑_{i, j}J_ijS⃗_i·S⃗_j. In order to implement the PSG, we first enlarge the Hilbert space by decomposing spins into Abrikosov fermions as follows: S⃗_i=∑_α,β1/2f^†_iασ⃗_αβf_iβ. The above equation maps the spin Hilbert space to the subspace of the Abrikosov fermion Hilbert space in which the fermion occupation number on each site is 1. This means that, on the operator level, we strictly have ∑_αf^†_iαf_iα=Id. Indeed, by using the identity, we can verify that [S^m,S^n]=iϵ_lmnS^l. In fact, a second constraint, is also introduced as a consequence of the first: ∑_α, β f_iαf_iβϵ_αβ = 0. (One can verify by considering ∑_α, β f_iαf_iβϵ_αβ∑_γf^†_iγf_iγψ, where ∑_γf^†_iγf_iγψ = ψ by virtue of single-occupancy.) 
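The algebra asserted here is straightforward to verify numerically. The sketch below builds the two fermion modes of a single site as 4 x 4 matrices (a Jordan-Wigner-type construction of our own choosing, not taken from the text), forms the spin operators, and checks both the SU(2) commutation relation and the fact that the spin-1/2 Casimir S^2 = 3/4 is recovered only on the singly occupied subspace:

import numpy as np

I2 = np.eye(2)
a = np.array([[0., 1.], [0., 0.]])        # fermionic annihilation, basis (|0>, |1>)
Z = np.diag([1., -1.])                    # Jordan-Wigner string
f_up, f_dn = np.kron(a, I2), np.kron(Z, a)
fd_up, fd_dn = f_up.conj().T, f_dn.conj().T

# Abrikosov-fermion spin operators S^a = (1/2) f^dag sigma^a f on one site
Sx = 0.5 * (fd_up @ f_dn + fd_dn @ f_up)
Sy = 0.5 * (-1j * fd_up @ f_dn + 1j * fd_dn @ f_up)
Sz = 0.5 * (fd_up @ f_up - fd_dn @ f_dn)
print(np.allclose(Sx @ Sy - Sy @ Sx, 1j * Sz))     # [S^x, S^y] = i S^z  -> True

# S^2 = 3/4 holds only on the singly occupied (physical) states
n_tot = np.real(np.diag(fd_up @ f_up + fd_dn @ f_dn))
P = np.diag((np.round(n_tot) == 1).astype(float))  # projector onto n_i = 1
S2 = Sx @ Sx + Sy @ Sy + Sz @ Sz
print(np.allclose(S2 @ P, 0.75 * P))               # True on the physical subspace
print(np.allclose(S2, 0.75 * np.eye(4)))           # False: empty/double occupancy is unphysical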
In terms of the Abrikosov fermions, the Heisenberg Hamiltonian reads (up to some constants) H =∑_{i, j}∑_αβμνJ_ij1/4(f^†_iασ⃗_αβf_iβ)·(f^†_jμσ⃗_μνf_jν) =∑_{i, j}∑_αβ-1/2J_ij(f^†_iαf_jαf^†_jβf_iβ+1/2f^†_iαf_iαf^†_jβf_jβ), We now study H within a mean-field approximation, by introducing parameters for expectation values of operators η_ijϵ_αβ=-2⟨ f_iαf_jβ⟩, χ_ijδ_αβ=2⟨ f^†_iαf_jβ⟩; where η_ij=η_ji and χ_ij=χ^†_ji. As is usual, we expand operators in Eq. <ref> in terms of fluctuations about their expectation values and ignore terms which are quadratic in fluctuations, leading to H_MFT = - ∑_⟨ i, j⟩3/8J_ij(χ_ji f^†_iμf_jμ + η_ij f^†_iμf^†_jμ + h.c. - |χ_ij|^2 - |η_ij|^2) +∑_i (μ^3_i(f^†_i↑f_i↑ - f_i↓f^†_i↓) + 1/2(μ^1_i+i μ^2_i)f_iμf_iνϵ_μν+ h.c.). , where we have introduced the Lagrange multipliers μ^1,2,3_i to impose the one-fermion-per-site constraint at a mean-field level. These, as well as the parameters χ_ij and η_ij, are determined self consistently. To discuss the 𝖲𝖴(2) gauge structure of the mean-field Hamiltonian, it is convenient to introduce a spinor representation ψ≡[ ψ_1; ψ_2 ]≡[ f_↑; f^†_↓ ]. In terms of these spinors, the H_MFT can be be compactly rewritten as: H_MFT = ∑_⟨ i, j⟩3/8 J_ij[1/2Tr(U^†_ijU_ij) - (ψ^†_i U_ijψ_j + h.c.)] + ∑_i,lμ^l_iψ^†_iτ^l ψ_i , where the matrix U_ij captures both mean-field parameters via U_ij≡[ χ^†_ij η_ij; η^†_ij -χ_ij ] . The constraint implementing projection into the spin Hilbert space at the mean-field level now has the form: ⟨ψ^†_i τ^l ψ_i ⟩ = 0, l=1,2,3. The {U_ij} and {μ^m_i } together constitute variational parameters that specify the mean-field “ansatz”for the Hamiltonian and the corresponding ground state wavefunction. Variationally optimizing the parameters to obtain the lowest energy ground state is equivalent to determining the parameters self-consistently. It is crucial to realize that the ground state of the mean field spinon Hamiltonian is not a valid spin wave function, since the on-site constraints are only enforced on average. The final wavefunction in terms of the physical spin degrees of freedom is constructed from the mean-field spinon state by Gutzwiller projection, i.e. Ψ_spin = P_GΨ_MFT. The spinor representation makes the 𝖲𝖴(2) gauge redundancy of the mean-field Hamiltonian manifest. The Hamiltonian is invariant, trivially, under the site-dependent gauge transformation ψ_i↦ W_iψ_i, U_ij↦ W_i U_ij W_j^† and μ^m_i ↦ W_i μ^m_i W_i^†, with W_i∈𝖲𝖴(2) since this leaves physical spin operator invariant [cf.  <ref>]. Therefore, the mean-field anstaze parametrised by U_ij,μ^m_i and W_i U_ij W_j^†, W_i μ^m_i W_i^† share the same physical spin wavefunctions, i.e. after projection into the spin Hilbert space. This has significant consequences for what we require of symmetric mean-field ansatzes. Consider the action of a symmetry g: U_ij↦ U_g(i)g(j). For a symmetric ansatz we no longer require U_g(i)g(j)=U_ij; instead, we only require that there exist transformations G_g(n) ∈𝖲𝖴(2) for all sites n, such that G_g(g(i)) U_g(i)g(j)G^†_g(g(j)) = U_ij. The gauge redundancy then implies that physical properties of the state represented by the ansatz has not changed. The physical transformations g together with the gauge transformation, (G_g(i),g), which leaves the ansatz invariant, constitute the projective symmetry group (PSG). The PSG characterises the symmetries of the ansatz, and serves to classify and characterise different mean field spin liquid states. 
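For later reference, the symmetry requirement just stated can be phrased as a short routine that tests whether a stored ansatz is invariant under a given projective transformation; the data layout (a dictionary of 2 x 2 link matrices keyed by directed site pairs) and the function name are our own conventions, not those of any particular code:

import numpy as np

def check_projective_symmetry(U, g, G, tol=1e-9):
    # Check G_g(g(i)) U_{g(i) g(j)} G_g(g(j))^dag == U_{ij} on every link.
    # U : dict mapping a directed link (i, j) -> 2x2 complex matrix U_ij,
    #     with hashable site labels; both link orientations are assumed stored.
    # g : callable mapping a site label to its image under the lattice symmetry.
    # G : callable mapping a site label n to the SU(2) gauge matrix G_g(n).
    for (i, j), U_ij in U.items():
        gi, gj = g(i), g(j)
        if (gi, gj) not in U:
            return False                  # symmetry maps onto a link not in the ansatz
        lhs = G(gi) @ U[(gi, gj)] @ G(gj).conj().T
        if not np.allclose(lhs, U_ij, atol=tol):
            return False
    return True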
The PSG also determines the low-energy description of fluctuations about the mean field states. From the preceding discussion on the gauge structure it is clear that not all fluctuations of the mean-field parameters { U_ij} are physical: the unphysical fluctuations between gauge inequivalent states must be described by gauge fields in the effective theory. The effective theories, then, are likely to be fermions coupled to gauge fields. The gauge structure of the low energy theory is in general not given by the high energy gauge group 𝖲𝖴(2), but is instead determined by the “invariant gauge group” (IGG) <cit.>. The IGG is a subgroup of the PSG comprised of pure gauge transformations which leave the ansatz invariant, i.e., 𝒢 = { W_i|W_i U_ijW^†_j=u_ij, W∈𝖲𝖴(2)}. Given the central importance of the IGG, one usually labels QSLs by the IGG, leading to the terminology of “𝖹_2, 𝖴(1), or 𝖲𝖴(2)" QSLs. The PSGs, therefore, play a role for mean-field QSL phases akin to that of ordinary symmetry groups for broken-symmetry phases, distinguishing quantum disordered states with the same physical symmetries but different emergent properties.This is a good place to flag one final complication: namely, that that certain PSGs do not correspond to non-zero mean field ansatzes. Therefore, simply tabulating the list of PSGs does not conclude the classification of PSGs; it is essential to investigate the physical constraints that each imposes on the mean field ansatzes. Despite this complication, it is nevertheless useful to organize the investigation of symmetric spin liquid ground states on a given spatial lattice in terms of an enumeration of all PSGs, for a given set of physical symmetries (typically, the full lattice space group as well as time reversal symmetry) and the IGG. This allows the construction of the corresponding mean-field ansatzes and spin liquid wavefunctions. In the balance of this paper, we implement such a program for the trillium lattice. § PSGS OF THE TRILLIUM LATTICE §.§ The trillium lattice We begin by describing the trillium lattice and its spatial symmetries. These, along with time reversal, will constitute the physical symmetries that our QSL ground states (after projection to the correct Hilbert space) must respect, and are hence central to the PSG construction. The trillium lattice, shown in Fig. <ref>, has a simple cubic Bravais lattice with four sub-lattices: α, β, γ and δ. The positions of the sublattice sites relative to the unit cell center are given by: r⃗^0_α = (κ , κ , κ ), r⃗^0_β = (1/2+κ , 1/2-κ , 1-κ), r⃗^0_γ = (1-κ , 1/2+κ , 1/2-κ), r⃗^0_δ = (1/2-κ , 1-κ , 1/2+κ), where κ is a free parameter. As mentioned before, the nearest neighbor bonds on the lattice form a network of corner sharing triangles, with each site participating in three triangles, which are the elementary frustrated motifs. We denote the position of a unit cell i by the vector u⃗_⃗i⃗=(x,y,z), where x,y,z are integers. A generic lattice site i is referred to by specifying its unit cell position and sublattice as i≡ (x,y,z;s); such a site lies at position u⃗_i+r⃗^0_s. Since the mean-field parameters {U_ij} specifying the ansatz are associated with the links, it is convenient to uniquely label all links for the purpose of further discussion. We do so by exploiting lattice translation invariance: there are 12 links per unit cell, all of which are translationally inequivalent. We introduce the labels ζ=(1,2… 12) for these links, and specify each of these links in Tab. <ref>. 
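For concreteness, the site positions can be generated directly from the sublattice vectors above; a minimal sketch follows, in which the numerical value kappa = 0.138 is only an illustrative placeholder for the free parameter (it is not fixed by the text):

import numpy as np

def trillium_sites(L, kappa=0.138):
    # Generate site positions of an L x L x L trillium lattice (unit lattice
    # constant), returned as a dict (x, y, z, s) -> 3-vector position.
    r0 = {
        "alpha": np.array([kappa, kappa, kappa]),
        "beta":  np.array([0.5 + kappa, 0.5 - kappa, 1.0 - kappa]),
        "gamma": np.array([1.0 - kappa, 0.5 + kappa, 0.5 - kappa]),
        "delta": np.array([0.5 - kappa, 1.0 - kappa, 0.5 + kappa]),
    }
    sites = {}
    for x in range(L):
        for y in range(L):
            for z in range(L):
                for s, off in r0.items():
                    sites[(x, y, z, s)] = np.array([x, y, z]) + off
    return sites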
Trillium has space group 𝖯2_1 3, with the symmetry generators {T_x, T_y, T_z, g_a, g_b, g_c }. Here, T_is are the three translational generators. g_c is a threefold rotation about the (1,1,1) axis passing through the origin of an unit cell. g_a and g_b are the generators of the 2-fold non-symmorphic screw rotations. g_a involves a π rotation about an axis in the (0,0,1) direction passing through the point (1/2,0,0), followed by a translation by 1/2 of the unit-cell distance along the rotation axis. g_b involves a π rotation about an axis in the direction (0,1,0) passing through the point (0,0,1/2) followed by a translation of 1/2 of the untit-cell distance along the rotation axis. It has been noted previously <cit.> that non-symmorphic symmetries generally lead to strong constraints on possible PSGs, and a consequent reduction of their number. The generators g_a,g_b and g_c act on a lattice site i≡(x,y,z,s) via g_a: (x,y,z;α)↦ (-x, -y-1, z;δ), (x,y,z;β)↦ (-x-1, -y-1, z+1;γ), (x,y,z;γ)↦ (-x-1, -y-1, z;β), (x,y,z;δ)↦ (-x, -y-1, z+1;α), g_b: (x,y,z;α)↦ (-x-1, y, -z;γ), (x,y,z;β)↦ (-x-1, y, -z-1;δ), (x,y,z;γ)↦ (-x-1, y+1, -z;α), (x,y,z;δ)↦ (-x-1, y+1, -z-1;β), g_c: (x,y,z;α)↦ (z, x, y;α), (x,y,z;β)↦ (z, x, y;γ), (x,y,z;γ)↦ (z, x, y;δ), (x,y,z;δ)↦ (z, x, y;β). §.§ PSG classification on the trillium lattice The PSG involves the group of the transformations (G_g(n),g) that leaves the mean-field ansatz invariant. Here g is a physical symmetry transformation, and G_g(n) ∈𝖲𝖴(2) is the associated site-dependent gauge transformation, with n denoting the physical site. (G_g(n),g) acts on a mean-field parameter U_ij as (G_g(n),g): U_ij↦ G_g(g(i)) U_g(i)g(j)G^†_g(g(j)). It follows from consecutive action on the ansatz that the product of two PSG elements is given by the group compatibility condition (G_g_1(n),g_1) ∘ (G_g_2(n),g_2) = (G_g_1(n)G_g_2(g^-1_1 n),g_1 g_2), and the group inverse by (G_g_1(n), g_1 )^-1 = (G^†_g_1(g_1(n)),g^-1_1 ). Since the elements of the IGG 𝒢 are pure gauge transformations which leave the ansatz invariant, it is clear that whenever (G_g(i),g) is an element of the PSG, (W G_g(i),g), for all W ∈𝒢, is also an element of the PSG. If one considers a gauge-equivalent ansatz, W_i U_ij W^†_j, the PSG element (G_g(i),g) changes to (W_i G_g(i) W^†_g(i)). PSGs related by such gauge transformations are equivalent; they are associated with gauge-equivalent ansatzes and represent the same QSL phase. Our task is to find all such equivalence classes; in other words, to single out one representative from each class by fixing the gauge freedom. It is convenient to carry out this task purely “algebraically”, i.e., by making no reference to the ansatz. To do this, we note that given a physical symmetry group and the IGG 𝒢, the PSG can be viewed as a group equipped with a projection 𝒫 to the physical symmetry group, such that 𝒫: (G_g(i),g) ↦ g. From the discussion in the previous paragraph, 𝒫: (W G_g(i),g) ↦ g for W ∈𝒢 . As a corollary, 𝒫 projects pure gauge transformations in the IGG back to the identity element, 𝒫: (W ,e) ↦ e for W ∈𝒢. The projection map between the PSG and the physical symmetry group implies that the gauge transformation G_g(i) associated with the symmetry transformation g is constrained by the relations between symmetry group elements g. These constraints on G_g can be used to enumerate all gauge-inequivalent choices of G_g for all symmetry transformations g, and hence enumerate all PSGs. 
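The generator actions just listed are simple to implement and spot-check; the sketch below encodes them as maps on site labels (x, y, z, s) and verifies numerically that g_c has order three and that the screw rotations square to the expected unit translations, g_a^2 = T_z and g_b^2 = T_y:

import random

def g_a(site):
    x, y, z, s = site
    return {"alpha": (-x, -y - 1, z, "delta"),
            "beta":  (-x - 1, -y - 1, z + 1, "gamma"),
            "gamma": (-x - 1, -y - 1, z, "beta"),
            "delta": (-x, -y - 1, z + 1, "alpha")}[s]

def g_b(site):
    x, y, z, s = site
    return {"alpha": (-x - 1, y, -z, "gamma"),
            "beta":  (-x - 1, y, -z - 1, "delta"),
            "gamma": (-x - 1, y + 1, -z, "alpha"),
            "delta": (-x - 1, y + 1, -z - 1, "beta")}[s]

def g_c(site):
    x, y, z, s = site
    cyc = {"alpha": "alpha", "beta": "gamma", "gamma": "delta", "delta": "beta"}
    return (z, x, y, cyc[s])

# spot-check g_c^3 = e, g_a^2 = T_z and g_b^2 = T_y on random sites
for _ in range(1000):
    x, y, z = (random.randint(-5, 5) for _ in range(3))
    s = random.choice(["alpha", "beta", "gamma", "delta"])
    site = (x, y, z, s)
    assert g_c(g_c(g_c(site))) == site
    assert g_a(g_a(site)) == (x, y, z + 1, s)
    assert g_b(g_b(site)) == (x, y + 1, z, s)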
To see this, one begins with the relations between the symmetry generators {T_x, T_y, T_z, g_a, g_b, g_c, 𝒯} which completely specify the group. Each such relation will lead to an equation constraining the associated PSG elements. The minimal set of such relations that specify the group is called the “presentation” of the group. Using the GAP computer algebra package <cit.>, we obtain the finite presentation of trillium space group of : g^3_c=e, T^-1_zg^2_a=e, T^-1_yg^2_b=e, T^-1_xT^-1_yT_xT_y=e, T^-1_yT^-1_zT_yT_z=e, T^-1_zT^-1_xT_zT_x=e, g^-1_aT_xg_aT_x=e, g^-1_aT_yg_aT_y=e, g^-1_aT^-1_zg_aT_z=e, g^-1_bT_xg_bT_x=e, g^-1_bT^-1_yg_bT_y=e, g^-1_bT_zg_bT_z=e, g^-1_cT^-1_yg_cT_x=e, g^-1_cT^-1_zg_cT_y=e, g^-1_cT^-1_xg_cT_z=e, g^-1_ag^-1_cg^-1_bT^-1_xT_yg_ag_c=e, g^-1_bg^-1_aT_xT^-1_yT_zg_bg_a=e, g^-1_cg^-1_bT^-1_xT_yg_ag_bg_cg_b=e, where e denotes the identity of the symmetry group.We focus further on QSLs on the trillium lattice which respect time-reversal symmetry (TRS). The TRS operator 𝒯 acts on the mean-field ansatz by complex conjugating the mean-field parameters U_ij and μ_i. It is convenient to include a global gauge transformation i τ_2 in the definition of G_𝒯, such that we have (G_𝒯,𝒯): U_ij↦ G_𝒯(i) i τ_2 U^*_ij (-i τ_2) G^†_𝒯(j) = -G_𝒯(i) U_ij G^†_𝒯(j). Including TRS introduces the following additional relations, which express the fact that 𝒯 commutes with generators in the space group: 𝒯^2=e, 𝒯^-1T^-1_x𝒯T_x=e, 𝒯^-1T^-1_y𝒯T_y=e, 𝒯^-1T^-1_z𝒯T_z=e, 𝒯^-1g^-1_a𝒯g_a=e, 𝒯^-1g^-1_b𝒯g_b=e, 𝒯^-1g^-1_c𝒯g_c=e. Chiral spin liquids, which break TRS and some lattice symmetries separately while preserving their combinations, have also been considered in the literature <cit.>. For chiral PSGs, one considers the symmetry group generated by g 𝒯^ϵ_g instead of the usual symmetry group generated by {g} <cit.>. ϵ_g={0,1} specifies whether the lattice symmetry g is preserved on its own (ϵ_g=0), or preserved only up to TRS (ϵ_g=1). The trillium SG relations given by Eqs. <ref>-<ref> impose the constraint ϵ_g=0 for all generators. This can be easily seen from the fact that for each generator g, there exists one SG relation which has only an odd number of appearances of that generator, which forces ϵ_g=0. One could still consider spin liquids which respect all lattice symmetries but not TRS. In all our PSG calculations, we first derive the PSG classification without TRS, and then impose TRS at the end. While this immediately gives us the PSGs without TRS, we forego a consideration of mean-field ansatzes corresponding to such PSGs, restricting ourselves to the study of physical fully symmetric spin liquids. Ground states for classical spins on the trillium lattice <cit.> are also known to be non-chiral (which, for classical spin configurations, is equivalent to co-planarity). The projective relation between the symmetry group elements and the corresponding PSG elements allow us to translate the above symmetry relations (Eqs <ref>- <ref>) into constraint equations for the PSG elements. Consider a general symmetry group relation among a set of elements, ∏_νg_ν= e. The product of the corresponding PSG elements are given by (G̃, ∏_νg_ν=e), where G̃ can be constructed from the matrices G_g_ν(i) using Eq. <ref>. Under the projection 𝒫 to the symmetry group elements (G̃,e)↦ e); this immediately implies a constraint equation expressing that G̃ must be a member of the IGG, G̃∈𝒢. 
The unknowns in these equations are of two kinds: first, the site-dependent gauge transformation matrices { G_x , G_y, G_z, G_a, G_b, G_c, G_𝒯} accompanying each symmetry transformation in { T_x, T_y, T_z, g_a, g_b, g_z, 𝒯}; and second, an element of the IGG W∈𝒢 corresponding to each symmetry group relation in Eqs <ref>-<ref>. Solving these equations, along with choice of gauge described earlier, leads to the different inequivalent PSGs. Explicit procedures for solving these equations in a fixed gauge are detailed for specific lattices in Ref. <cit.>, as well as several later works that classifying PSGs in different spatial lattices <cit.>. We have undertaken this procedure to enumerate and classify all symmetric spin liquids with the IGG set to both 𝖹_2 and 𝖴(1). The calculations are tedious, and hence we have relegated their details to the Appendices <ref> and  <ref> for conciseness. Each inequivalent PSG is uniquely specified by the expressions for site-dependent gauge transformations { G_x , G_y, G_z, G_a, G_b, G_c, G_𝒯} which accompany the symmetry transformation. We now summarize our results by specifying these gauge transformations for all the PSGs that we identify. When the IGG is fixed to 𝖹_2, we find 4 inequivalent PSGs. Once the global gauge freedoms are fixed, the gauge transformation matrices associated with lattice translations are uniform, with no position or sublattice dependence for all PSGs, i.e., G_x=G_y=G_z=1. The PSGs can be uniquely indexed by constraints on the gauge transformation matrices obtained from the PSG equations. First, the transformation corresponding to time-reversal G_𝒯 takes the values τ_0 or i τ_z, though the PSGs corresponding to G_𝒯=τ_0 do not lead to any non-zero mean-field ansatzes. Gauge transformation matrices associated with other symmetry generators are also unit-cell independent, although they retain a sublattice dependence. Second, the gauge transformation associated with the rotation g_c acting on sites of sublattice α, G_c(α)=𝒜, takes the 2 values exp(i k(2π/3) τ_z) for k={0,1}. All other gauge transformations can be specified in terms of these three, as detailed in Tab. <ref>. The 2 possible values of G_c(α) and the 2 possible values of G_𝒯 lead to 4 inequivalent PSGs, out of which only 2 (corresponding to G_𝒯=i τ_z) lead to non-zero mean field ansatzes. Next, we fix the IGG to 𝖴(1). As in the case of 𝖹_2, the gauge transformations corresponding to the three translations are uniform, G_x=G_y=G_z=1. The other gauge transformations, however, acquire both a unit-cell and a sublattice dependence. The PSGs can again be indexed by the parameters specifying certain gauge transformations. The gauge transformation associated with the rotation g_c acting on sites of sublattice α, G_c(α)=𝒜, takes the 2 values exp(i k(2π/3) τ_z) for k={0,1}. On including time reversal, we find two possibilities for the associated gauge-transformation G_𝒯. First, G_𝒯 can be iτ_z uniformly, and this case does not lead to any physical spin-liquid ansatz with non-zero mean-field parameters, and so we do not consider these PSGs further. Second, G_𝒯 can acquire a space-dependent form depending on A, which leads to physical spin liquids. Following the second possibility, therefore, we have 1 𝖴(1) spin liquid, as we will later show that only the k=0 case leads to nearest neighbor mean field spin liquid states. We specify the PSGs by expressing all gauge transformations in terms of the parameters A in Tab. <ref>. 
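As a small consistency check on the tabulated representatives, the sketch below constructs the matrices A = exp(i k (2π/3) τ_z) for k = 0, 1 and the time-reversal gauge matrix G_T = i τ_z, and verifies that A^3 and (G_T)^2 lie in the 𝖹_2 IGG {± τ_0}, as the corresponding group relations require; this is a numerical illustration only, not part of the derivation in the appendices:

import numpy as np

tau0, tauz = np.eye(2), np.diag([1.0 + 0j, -1.0 + 0j])

def A_matrix(k):
    # representative gauge matrix A = exp(i k (2*pi/3) tau_z), k in {0, 1}
    phi = k * 2.0 * np.pi / 3.0
    return np.diag([np.exp(1j * phi), np.exp(-1j * phi)])

for k in (0, 1):
    A = A_matrix(k)
    assert np.allclose(np.linalg.matrix_power(A, 3), tau0)   # here A^3 = +tau0, an IGG element

G_T = 1j * tauz
assert np.allclose(G_T @ G_T, -tau0)                         # (G_T)^2 = -tau0, also in the Z2 IGG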
In the next section, we will construct mean-field ansatzes for spin liquids corresponding to these PSGs and proceed to investigate them. § MEAN-FIELD SPIN LIQUID PHASES In this section, we construct the mean field QSL solutions to the Heisenberg Hamiltonian on the trillium lattice. For this purpose, we will consider only nearest-neighbor interactions and set J=8/3 henceforth. The form of the ansatzes are constrained by the PSG. Concretely, a given PSG (G_g,g),g∈𝖯2_13×𝖹^𝒯_2 requires that, ∀ g: G_g(g(i)) U_g(i)g(j)G^†_g(g(j)) = U_ij, G_g(g(i)) μ_g(i)G^†_g(g(i)) = μ_i. The ansatz for each PSG is derived by systematically imposing Eq. <ref> using the gauge transformations detailed in Tab. <ref> and Tab. <ref>; the results of this procdure, detailed in Appendix <ref>, are tabulated in Table <ref>. The labeling scheme in the table is such that, Given a generic label 𝖹_2x, we can read off the phases in our PSG solutions: 𝒜 = exp(ix2π/3τ_z). There is only 1 𝖴(1) QSL, which is labeled as such. The primed parameters, i.e. quantities like U^x', are defined as follows: [ U^x'; U^y' ] = 1/2[ -1 √(3); -√(3) -1 ][ U^x; U^y ], [ U^x”; U^y” ] = 1/2[ -1 -√(3); √(3) -1 ][ U^x; U^y ]. Concretely, this means we need to find {U_ij, μ_i} such that the following self-consistency equations and on-site constraints are satisfied: χ_ij =⟨ f^†_i↑f_j↑⟩+ ⟨ f^†_i↓f_j↓⟩, η_ij =⟨ f_i↓f_j↑⟩- ⟨ f_i↑f_j↓⟩, 1 = ⟨ f^†_i↑f_i↑⟩+ ⟨ f^†_i↓f_i↓⟩, 0 = ⟨ f_i↑f_i↓⟩. We can impose symmetry conditions on the mean-field solutions to reduce the number of parameters, and the PSG determines these symmetry conditions by requiring that Eq. <ref> is respected. We discuss how the symmetry conditions constrain the mean-field ansatzes in detail in Appendix <ref> and Appendix <ref>, and the results are tabulated in Tab. <ref>. Once we obtain the form of the mean-field Hamiltonians, we assemble the Hamiltonian using our variational parameters, and solve the non-linear equation set Eq. <ref> using the NLSolve package <cit.> available in julia <cit.>. We set up our system with periodic boundary conditions (PBC), with L=99 in the three directions. The numerical values of mean-field parameters and the corresponding energies obtained from the self-consistent solutions are summarized in Table <ref>. Note that the mean-field energies of 𝖹_20 and 𝖴(1) state are the same, while 𝖹_21 has a higher energy. A comment on the close energies of the 𝖹_20 and 𝖴(1) state is in order. The ansatzes corresponding to these states are uniform, i.e, U_ij=U^x τ_x +U^y τ_y (μ^i= μ^x τ_x + μ^y τ_y) for all links (sites) of the 𝖹_20 state while U_ij=U^z τ_z (μ_i = μ^z τ_z) uniformly on all links (sites) of the 𝖴(1) state. The saddle-point solutions for the mean-field parameters of the 𝖹_20 state have the property |μ^x/μ^y| = |U^x/U^y|. Thus we can use a global 𝖲𝖴(2) transformation to change this ansatz — only at the saddle point — to the 𝖴(1) ansatz displayed in Table  <ref>. Similar phenomenon has been noted in previous works such as <cit.>. Therefore, while the nearest-neighbor ansatzes for both QSLs are the same at the saddle point, the general ansatzes displayed in Table  <ref> refer to different QSL states with different IGGs. §.§ Relations between the QSLs We can also infer connections between the 3 distinct QSLs. First, we note that the 𝖴(1) QSL is the parent state of the 𝖹_20 QSL. This can be seen by first performing a gauge transformation on 𝖴(1): W(α)= W(β)= W(γ)= W(δ)= e^-iπ/4τ_y, so that for this QSL we now have U_ζ = U^z τ_x, ζ∈{1,…,12 }, μ_s = μ^z τ_x, s∈{α,…,δ}. 
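The global rotation used here is easy to verify explicitly; the short check below confirms that conjugation by W = exp(-i π/4 τ_y) maps τ_z to τ_x, so a uniform U^z τ_z (and μ^z τ_z) ansatz is carried to U^z τ_x (and μ^z τ_x) under U_ij ↦ W U_ij W^†:

import numpy as np
from scipy.linalg import expm

taux = np.array([[0., 1.], [1., 0.]], dtype=complex)
tauy = np.array([[0., -1j], [1j, 0.]])
tauz = np.diag([1.0 + 0j, -1.0 + 0j])

W = expm(-1j * np.pi / 4.0 * tauy)                # uniform gauge rotation W(s)
print(np.allclose(W @ tauz @ W.conj().T, taux))   # True: tau_z -> tau_x on every link and site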
From this, we see that introducing the perturbations Δ U_ζ∼τ_y and Δμ_s ∼τ_y breaks the 𝖴(1) symmetry down in a manner that results in the 𝖹_20 state. The 𝖹_21 QSL state cannot be obtained by perturbing around the 𝖴(1) QSL state. §.§ Spinon spectra and the nodal star Our mean-field ansatz allow us to determine the structure of excitations on the mean-field ground state. We compute the eigenvalues of the mean-field Hamiltonian for different wave-vectors to obtain the spinon dispersion spectra in the Brillouin zone. Fig. <ref> shows the spinon band structures for the different mean-field QSL states. As explained earlier, the nearest-neighbour ansatzes for the 𝖹_20 and 𝖴(1) states are related by a global 𝖲𝖴(2) at the saddle point, leading to identical band structures. To illustrate the structure of possible gapless modes, we plot the set of gapless points in the Brillouin zone (BZ) for 𝖴(1) and 𝖹_21 states in Fig. <ref>. The collection of gapless points at the saddle point for the 𝖹_20 QSL is identical to that of the 𝖴(1) QSL. We note that all the mean field states we obtained are gapless at saddle point. The 𝖴(1) possesses a spinon fermi surface at the center of the BZ, with another sheet of spinon fermi surface at the corners. The 𝖹_2 mean-field states are also gapless. The 𝖹_20 state has similar spectrum to those of the 𝖴(1). We note that the gaplessness of the 𝖹_20 state seems to not be protected by symmetries and one might generically expect a gap to open up when our nearest-neighbor ansatzes are extended to include further neighbor terms. The 𝖹_21 state hosts a spectrum with a “nodal star" of gapless points, with dispersion-less bands running from the center of the BZ to its 8 corners. This can be seen from Fig. <ref>. This nodal star is not a specific property of the short-range ansatz we use to display the bands in Fig. <ref>; rather it is robust to the addition of arbitrary links in the ansatz. In App. <ref> we prove that the gapless nodal star is protected by projective symmetries of the 𝖹_21 phase. Such gapless nodal stars have received significant attention in the pyrochlore lattice <cit.>, where two gapless bands along the nodal star were recently proven to be protected by the projective symmetries <cit.>. Such lines were also observed in FCC structures in Ref. <cit.>, where the whole mean field Hamiltonian vanishes along the nodal star. Gapless nodal loops were observed in diamond lattice  <cit.> where strong evidence of symmetry-protection was provided by showing that the gapless nodal loops persist despite longer range bond amplitudes being included in the ansatz. Our proof of the protected nodal star is algebraic and close in spirit to that of Ref. <cit.>. We look at the symmetries of the mean-field Hamiltonian directly in momentum-space H_ MFT=∑_k⃗ψ^†(k⃗) H_ MFT (k⃗) ψ (k⃗), When the spinors (Eq. <ref>) are arranged as ψ(k⃗)=(ψ^α_1(k⃗),ψ^β_1(k⃗),ψ^γ_1(k⃗),ψ^δ_1(k⃗),ψ^α_2(k⃗),ψ^β_2(k⃗),ψ^γ_2(k⃗),ψ^δ_2(k⃗)), time-reversal already implies that H_ MFT(k⃗) takes the form H_ MFT(k⃗)= [ 0_4×4 h_4 × 4(k⃗); h^†_4×4(k⃗) 0_4×4 ]. This block off-diagonal hermitian structure implies that the eigenvalues come in symmetric pairs of ± E(k⃗) everywhere in the BZ. Next, we work out the most general form of h_4 × 4(k⃗) allowed by the projective representations of the symmetries (G_a,g_a), (G_b,g_b) and (G_c,g_c). Restricting the general form of h_4×4(k⃗) to the “nodal star” wavevectors k⃗=(± k, ± k, ± k), we show, using elementary linear algebraic techniques, that it has a maximum rank of 3. 
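Before completing the argument, the two linear-algebra facts used here can be illustrated numerically: a Hermitian matrix with the block off-diagonal form above has a spectrum symmetric about zero (its eigenvalues are plus/minus the singular values of h), and any rank deficiency of h forces exact zero modes. The sketch below uses a random rank-3 stand-in for h rather than the actual mean-field block:

import numpy as np

rng = np.random.default_rng(0)
h = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
h[:, 3] = h[:, 0] + h[:, 1]            # force rank(h) = 3, mimicking the nodal-star constraint

H = np.block([[np.zeros((4, 4)), h],
              [h.conj().T, np.zeros((4, 4))]])
evals = np.sort(np.linalg.eigvalsh(H))
print(np.allclose(evals, -evals[::-1]))    # +/-E pairing from the block structure
print(int(np.sum(np.abs(evals) < 1e-10)))  # 2 zero modes: 8 - 2*rank(h)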
This implies that H_ MFT has a maximum rank of 6 along the nodal star, proving the existence of two gapless bands. The complete proof involves explicit expressions for the most general h_4 × 4(k⃗) allowed by projective symmetries, and is fleshed out in Appendix <ref>.By computing the equilibrium state energy of the mean field spinon models, we estimate the temperature dependence of the specific heat for the nodal star spin liquid state. Specifically we see that for 𝖹_20, C_v∼ T^1.22, where as for 𝖹_21, C_v∼ T^0.73. The numerical results are given in Fig. <ref>. The above analysis is not performed for the 𝖴(1) spin liquid, since we expect the gauge field excitations at low energies to modify the results from the calculations of the non-interacting model. §.§ Spin structure factors In Fig. <ref>, we plot the static structure factor of 𝖹_20, 𝖹_21 and 𝖴(1) mean field states in the k_y-k_z plane. The definition of the static structure factor is: 𝔖^s_i,s_j(q⃗) ≡1/N∑_R⃗ e^-iq⃗· (R⃗ + d⃗^0_ij)⟨S⃗_(0;s_i)·S⃗_(R⃗;s_j)⟩, where s_i and s_j are the sub-lattice indices of site i and j, R⃗ is the distance between the two unit cells, and d⃗^0_ij is the distance between the two sub-lattice sites within in the unit cell. And we compute the sum of all these components: 𝔖(q⃗) ≡∑_s_i, s_j𝔖^s_i,s_j(q⃗), then plot the normalized results.The structure factor plots indicate that we have obtained remarkable quantum spin liquid states, especially the 𝖴(1) QSLs, which are visibly featureless, implying a sharp departure from the ordered states. We also note that the 𝖹_21 state is the most featureful among the three. § CONCLUSIONS AND OUTLOOK In this work, we have computed the PSGs for the trillium lattice both with and without time reversal symmetry. In the former case we implement the full construction of the nearest neighbor mean field (fermionic) parton Hamiltonian of the corresponding quantum spin liquid states. We find two distinct such QSLs with a 𝖹_2 gauge group, and a single example of a QSL with a 𝖴(1) gauge group. We also obtained the corresponding mean-field spinon band structures and static structure factors, providing some basic thermodynamic and spectral information on these states. Our main results are reported in Tab. <ref> and Tab. <ref>. As noted in the introduction, one of our principal motivations is the recent report of QSL-type behaviour in K2Ni2SO4; our work represents a stepping stone towards a parton mean-field analysis of this system, which hosts a double trillium lattice with spin-1 moments. Accordingly, a natural next step in this program is to modify the PSG analysis to account for these differences, and perform variational Monte Carlo studies of the Gutzwiller projected mean field QSL wavefunctions to compare with the available experimental data. These tasks are currently underway, and we hope to report on them in the near future. SAP acknowledges useful conversations with Yasir Iqbal, and the hospitality of the Max Planck Institute for Complex Systems (MPI-PKS) Dresden, where part of this work was completed. SB acknowledges useful discussions with Atanu Maity. This work was supported by the European Research Council under the European Union Horizon 2020 Research and Innovation Programme, Starting Grant [Agreement No. 804213-TMCS], a Buckee Scholarship at Merton College (ML), the Deutsche Forschungsgemeinschaft via Grant No. AS120/16-1, Project No. 
493886309 (SB) and the Gutzwiller Fellowship of the MPI-PKS (SAP) § IGG=𝖹_2 One can translate the SG relations to the PSG relations: G_c(g_c^3(i))G_c(g_c^2(i))G_c(g_c(i))=η_c, G^†_z(g_a^2(i))G_a(g_a^2(i))G_a(g_a(i))=η_a, G^†_y(g_b^2(i))G_b(g_b^2(i))G_b(g_b(i))=η_b, G^†_x(T^-1_yT_xT_y(i))G^†_y(T_xT_y(i))G_x(T_xT_y(i))G_y(T_y(i))=η_xy, G^†_y(T^-1_zT_yT_z(i))G^†_z(T_yT_z(i))G_y(T_yT_z(i))G_z(T_z(i))=η_yz, G^†_z(T^-1_xT_zT_x(i))G^†_x(T_zT_x(i))G_z(T_zT_x(i))G_x(T_x(i))=η_zx, G^†_a(T_xg_aT_x(i))G_x(T_xg_aT_x(i))G_a(g_aT_x(i))G_x(T_x(i))=η_ax, G^†_a(T_yg_aT_y(i))G_y(T_yg_aT_y(i))G_a(g_aT_y(i))G_y(T_y(i))=η_ay, G^†_a(T^-1_zg_aT_z(i))G^†_z(g_aT_z(i))G_a(g_aT_z(i))G_z(T_z(i))=η_az, G^†_b(T_xg_bT_x(i))G_x(T_xg_bT_x(i))G_b(g_bT_x(i))G_x(T_x(i))=η_bx, G^†_b(T^-1_yg_bT_y(i))G^†_y(g_bT_y(i))G_b(g_bT_y(i))G_y(T_y(i))=η_by, G^†_b(T_zg_bT_z(i))G_z(T_zg_bT_z(i))G_b(g_bT_z(i))G_z(T_z(i))=η_bz, G^†_c(T^-1_yg_cT_x(i))G^†_y(g_cT_x(i))G_c(g_cT_x(i))G_x(T_x(i))=η_cyx, G^†_c(T^-1_zg_cT_y(i))G^†_z(g_cT_y(i))G_c(g_cT_y(i))G_y(T_y(i))=η_czy, G^†_c(T^-1_xg_cT_z(i))G^†_x(g_cT_z(i))G_c(g_cT_z(i))G_z(T_z(i))=η_cxz, G^†_a(g^-1_cg^-1_bT^-1_xT_yg_ag_c(i))G_c^†(g^-1_bT^-1_xT_yg_ag_c(i)) × G_b^†(T^-1_xT_yg_ag_c(i))G^†_x(T_yg_ag_c(i)) × G_y(T_yg_ag_c(i))G_a(g_ag_c(i))G_c(g_c(i))=η_acb, G^†_b(g^-1_aT_xT^-1_yT_zg_bg_a(i))G^†_a(T_xT^-1_yT_zg_bg_a(i)) × G_x(T_xT^-1_yT_zg_bg_a(i))G^†_y(T_zg_bg_a(i)) × G_z(T_zg_bg_a(i))G_b(g_bg_a(i))G_a(g_a(i))=η_ab, G^†_c(g^-1_bT^-1_xT_yg_ag_bg_cg_b(i))G^†_b(T^-1_xT_yg_ag_bg_cg_b(i)) × G^†_x(T_yg_ag_bg_cg_b(i))G_y(T_yg_ag_bg_cg_b(i))G_a(g_ag_bg_cg_b(i)) × G_b(g_bg_cg_b(i))G_c(g_cg_b(i))G_b(g_b(i))=η_cba. The Gs in the above relations are 𝖲𝖴(2) matrices, and are associated with the 𝖲𝖴(2) gauge symmetry, which transforms the G in the following way: G_g(i)↦ W(g^-1(i))G_g(i)W^†(i), W∈𝖲𝖴(2); This equation can be understood as follows: under gauge transformation, we have: U_ij↦Ũ_ij≡ W(i)U_ijW^†(j), and the requirement for the gauge transformed PSG is: G̃_g(g(i))Ũ_g(i)g(j)G̃^†_g(g(j)) = Ũ_ij. From the above relations we derived the gauge transformation of Gs.Aside from the gauge symmetry, we note that we can replace a generic element G with 𝔤G, where 𝔤∈IGG={τ_0, -τ_0 }. Wisely making use of this fact is going to help us reduce the number of phases on the right hand side of the PSG equations.By performing: G_x↦η_cyx G_x, G_z↦η_czy G_z, G_a↦η_acbη_cbaG_a, G_b↦η_acbη_cyx G_b, G_c↦η_c G_c, we eliminate the phases on the right hand side (RHS) of Eq. <ref>, Eq. <ref>, Eq. <ref>, Eq. <ref> and Eq. <ref>. §.§ Solving for the translational Elements Let us start by considering the following equations Eq.<ref>, Eq.<ref> and Eq.<ref> that arise because of the commutation of translational generators. Canonically, this gives us the following expressions of G_x, G_y, G_z: G_x(x,y,z;s) =τ_0, G_y(x,y,z;s)=η^x_xyτ_0, G_z(x,y,z;s) =η^x_zxη^y_yzτ_0. §.§ Solving for G_c Using the IGG 𝖹_2 gauge symmetry, we had eliminated the phases on the RHS of Eq.<ref>, Eq.<ref>. To solve for G_c, one then plug the canonical expressions of the translational PSG elements into Eq.<ref>, Eq.<ref> and Eq.<ref>. One arrives at the following expressions: G^†_c(T^-1_y(i))η^-x_xyG_c(i) =τ_0, G^†_c(T^-1_z(i))η^-x_zxη^-y_yzG_c(i)G_y(g^-1_c(i)) =τ_0, G^†_c(T^-1_x(i))G_c(i)G_z(g^-1_c(i)) =η_cxz. Further simplifying the expressions, one arrives at: G_c(x,y,z) =η^x_xyG_c(x,y-1,z), G_c(x,y,z) =η^x_zxη^y_yzη^-y_xyG_c(x,y,z-1), G_c(x,y,z) =η^-y_zxη^-z_yzη_cxzG_c(x-1,y,z). 
The above expressions are valid for all sub-lattice indices, and we have suppressed the s indices. One then assumes that the following form is valid for G_c: G_c≡ f_c(x,y,z;s)𝔐_c(s). Because of the mentioned reason, we have f_c(x,y,z;s)=f_c(x,y,z). Then the separation of variables allows one to arrive at: f_c(x,y,z) =η^x_xyf_c(x,y-1,z), f_c(x,y,z) =η^x_zxη^y_yzη^-y_xyf_c(x,y,z-1), f_c(x,y,z) =η^-y_zxη^-z_yzη_cxzf_c(x-1,y,z). For f_c to be a path-independent function, there are certain constraints that the phases have to satisfy. For example, one considers two paths to arrive at f_c(x+1, y+1, z): 1.) f_c(x, y, z)↦ f_c(x+1, y, z)↦ f_c(x+1, y+1, z); 2.) f_c(x, y, z)↦ f_c(x, y+1, z)↦ f_c(x+1, y+1, z). One then compares the phases resulting from the two paths, and enforces them to be identical. Such a process produces the relevant constraints on the phases. We check the path independence on the xy, yz and zx planes respectively, and arrive at the following constraint: η_xy=η^-1_zx;η_yz=η_xy; η_zx=η^-1_yz. It follows then η_xy=η_yz=η_zx=η_1. The previous equations on f_c become: f_c(x,y,z) =η^x_1f_c(x,y-1,z), f_c(x,y,z) =η^x_1f_c(x,y,z-1), f_c(x,y,z) =η^-(y+z)_1η_cxzf_c(x-1,y,z). Therefore, at this point we claim that G_c=η^xy+xz_1η^x_cxz𝔐_c(s).Let us take a look at Eq.<ref>. One can eliminate the phase on the RHS by making use of the IGG gauge symmetry. Plugging the above expression into Eq.<ref>, we arrive at: 𝔐^3_c(α) =τ_0; 𝔐_c(δ)𝔐_c(γ)𝔐_c(β) =τ_0; η_cxz =1. It is useful to make a summary before we close this subsection: 1.) η_c=η_cyx=η_czy=η_cxz=1; 2.) η_xy=η_yz=η_zx=η_1; 3.)G_c=η^xy+xz_1𝔐_c(s), for which the following relations are satisfied: 𝔐^3_c(α) =τ_0; 𝔐_c(δ)𝔐_c(γ)𝔐_c(β) =τ_0; §.§ Solving for G_a To solve for G_a, one plugs the simplified expressions of the translational PSG elements into Eq.<ref>, Eq.<ref> and Eq.<ref>. One arrives at the following expressions: G^†_a(T_x(i))G_a(i) =η_ax, G^†_a(T_y(i))G_y(T_y(i))G_a(i)G_y(g^-1_a(i)) =η_ay, G^†_a(T^-1_z(i))G^†_z(i)G_a(i)G_z(g^-1_a(i)) =η_az. One makes the usual ansatz G_a(i)≡ f_a(x,y,z;s)𝔐_a(s), only this time one does not have f_a(x,y,z;s)=f_a(x,y,z), for the evaluation of G_y/z(g^-1_a(i)) is not s-independent. We have, for s=α/δ, the following conditions for f_a: η^-1_axf_a(x,y,z;α/δ) =f_a(x+1,y,z;α/δ), η^-1_ayf_a(x,y,z;α/δ) =f_a(x,y+1,z;α/δ), η_azη_1f_a(x,y,z;α/δ) =f_a(x,y,z+1;α/δ); and for s=β/γ, the following conditions for f_a: η^-1_axf_a(x,y,z;β/γ) =f_a(x+1,y,z;β/γ), η^-1_ayη^-1_1f_a(x,y,z;β/γ) =f_a(x,y+1,z;β/γ), η_azf_a(x,y,z;β/γ) =f_a(x,y,z+1;β/γ). Note that this time we do not have to check the path independence of f_a, as the phases appearing in the above equations are constants. We then arrive at the following expressions: f_a(x,y,z;α/δ)=η^-x_axη^-y_ayη^z_azη^z_1, f_a(x,y,z;β/γ)=η^-x_axη^-y_ayη^-y_1η^z_az, from which we write: G_a(x,y,z;α/δ)=η^-x_axη^-y_ayη^z_azη^z_1𝔐_a(α/δ), G_a(x,y,z;β/γ)=η^-x_axη^-y_ayη^-y_1η^z_az𝔐_a(β/γ). In plugging these expressions into Eq.<ref>, we first consider i≡ (x,y,z;α). The condition we arrive at is: G^†_z(-x, -y-1, z;δ)G_a(-x, -y-1, z;δ)G_a(x,y,z;α)=η_a, which further simplifies to: η^x+y+1_1η_ay𝔐_a(δ)𝔐_a(α)=η_a. The above equation dictates that η_1=1. Consequently, one has: 𝔐_a(δ)𝔐_a(α)=η_aη^-1_ay. We then consider other sublattice sites, and they give us: G_a(-x-1, -y-1,z+1;γ)G_a(x,y,z;β) =η_a, G_a(-x-1, -y-1,z;β)G_a(x,y,z;γ) =η_a, G_a(-x, -y-1,z+1;α)G_a(x,y,z;δ) =η_a. 
Plugging the explicit forms into the above equations, and we arrive at: 𝔐_a(γ)𝔐_a(β) =η_aη^-1_axη^-1_ayη^-1_az, 𝔐_a(β)𝔐_a(γ) =η_aη^-1_axη^-1_ay, 𝔐_a(α)𝔐_a(δ) =η_aη^-1_ayη^-1_az. It is useful to make a summary again before we close this subsection: 1.) η_xy=η_yz=η_zx=η_1=1, and as a result, G_x=G_y=G_z=τ_0; 2.) We have G_a(x,y,z;s)=η^-x_axη^-y_ayη^z_az𝔐_a(s); for which the following relations are satisfied: 𝔐_a(δ)𝔐_a(α) =η_aη^-1_ay, 𝔐_a(γ)𝔐_a(β) =η_aη^-1_axη^-1_ayη^-1_az, 𝔐_a(β)𝔐_a(γ) =η_aη^-1_axη^-1_ay, 𝔐_a(α)𝔐_a(δ) =η_aη^-1_ayη^-1_az. §.§ Solving for G_b Now we attack Eq.<ref>, Eq.<ref> and Eq.<ref>. Since G_x, G_y and G_z are trivial now, the equations are reduced to the following form: G^†_b(T_x(i))G_b(i) =η_bx, G^†_b(T^-1_y(i))G_b(i) =η_by, G^†_b(T_z(i))G_b(i) =η_bz. We make the ansatz G_b≡ f_b(x,y,z)𝔐_b(s), where the separation of variables is possible because the above conditions are s-independent. One can quickly arrive at the condition that G_b=η^-x_bxη^y_byη^-z_bz𝔐_b(s).Plugging the expression into Eq.<ref>, one ends up with: G_b(g_b(i))G_b(i)=η_b. Considering the individual sublattice sites respectively, one arrives at: 𝔐_b(γ)𝔐_b(α) =η_bη^-1_bx, 𝔐_b(δ)𝔐_b(β) =η_bη^-1_bxη^-1_bz, 𝔐_b(α)𝔐_b(γ) =η_bη^-1_bxη^-1_by, 𝔐_b(β)𝔐_b(δ) =η_bη^-1_bxη^-1_byη^-1_bz. A quick summary: We have G_b(x,y,z;s)=η^-x_bxη^y_byη^-z_bz𝔐_b(s); for which the following relations are satisfied: 𝔐_b(γ)𝔐_b(α) =η_bη^-1_bx, 𝔐_b(δ)𝔐_b(β) =η_bη^-1_bxη^-1_bz, 𝔐_b(α)𝔐_b(γ) =η_bη^-1_bxη^-1_by, 𝔐_b(β)𝔐_b(δ) =η_bη^-1_bxη^-1_byη^-1_bz. §.§ Solving Eq.<ref>, Eq.<ref> and Eq.<ref> To attack the remaining three equations, note that we can use the IGG gauge symmetry of G_a and G_b to eliminate η_acb and η_cba. The equations are reduced to: G^†_a(g^-1_cg^-1_b(i))G^†_c(g^-1_b(i))G^†_b(i) × G_a(T^-1_yT_x(i))G_c(g^-1_aT^-1_yT_x(i))=τ_0, G^†_b(g^-1_a(i))G^†_a(i)G_b(T^-1_zT_yT^-1_x(i)) × G_a(g^-1_bT^-1_zT_yT^-1_x(i))=η_abτ_0, G^†_c(g^-1_bT^-1_xT_y(i))G^†_b(T^-1_xT_y(i))G_a(i) × G_b(g^-1_a(i))G_c(g^-1_bg^-1_a(i))G_b(g^-1_cg^-1_bg^-1_a(i))=τ_0. We consider first i=(x,y,z;α). Plugging this into Eq.<ref> gives us the following: G^†_a(y-1,-z,-x-1;β)𝔐_c(γ)G^†_b(x,y,z;α) × G_a(x+1,y-1,z;α)𝔐_c(δ)=τ_0. Recalling G_a=η^-x_axη^-y_ayη^z_az𝔐_a(s) and G_b=η^-x_bxη^y_byη^-z_bz𝔐_b(s), the LHS of the above expression is evaluated as: η^y-1_axη^-z_ayη^x+1_az𝔐^†_a(β)𝔐^†_c(γ) ×η^x_bxη^-y_byη^z_bz𝔐^†_b(α)η^-x-1_axη^-y+1_ayη^z_az𝔐_a(α)𝔐_c(δ) = (η_azη_bxη^-1_ax)^x(η_axη^-1_ayη^-1_by)^y(η_azη_bzη^-1_ay)^zη_azη_ay ×𝔐^†_a(β)𝔐^†_c(γ)𝔐^†_b(α)𝔐_a(α)𝔐_c(δ). Since the RHS of the previous expression is unit cell independent, we would have the following equations: η_azη_bx =η_ax, η_byη_ay =η_ax, η_azη_bz =η_ay. Naming η_2≡η_ax, η_3≡η_ay and η_4≡η_az, we would have η_bx=η_2η^-1_4, η_by=η_2η^-1_3 and η_bz=η_3η^-1_4. Also, we now have G_a=η^-x_2η^-y_3η^z_4𝔐_a(s) and G_b=η^-x+y_2η^-y-z_3η^x+z_4𝔐_b(s). The previous constraint becomes: 𝔐^†_a(β)𝔐^†_c(γ)𝔐^†_b(α)𝔐_a(α)𝔐_c(δ)=η^-1_3η^-1_4. What happens now for Eq.<ref>? Plugging i=(x,y,z;α) into the equation gives us: 𝔐^†_c(γ)G^†_b(x-1,y+1,z;α) × G_a(x,y,z;α)G_b(-x,-y-1,z-1;δ) ×𝔐_c(β)G_b(-y-1,-z,x-1;δ)=τ_0. Evaluating the LHS of the above equation gives us: η^x-y_2η^y+z+1_3η^-x-z+1_4η^-x_2η^-y_3η^-z_4η^x-y-1_2 ×η^y-z_3η^-x+z-1_4η^y+1-z_2η^z-x+1_3η^-y+x_4 × 𝔐^†_c(γ)𝔐^†_b(α)𝔐_a(α)𝔐_b(δ)𝔐_c(β)𝔐_b(δ) = (η^-1_2η^-1_3η_4)^x(η^-1_3η^-1_4η_2)^y(η_4η^-1_2η_3)^z ×𝔐^†_c(γ)𝔐^†_b(α)𝔐_a(α)𝔐_b(δ)𝔐_c(β)𝔐_b(δ). Again, the unit cell independence gives us an extra condition η_2=η_3η_4. 
The original equation becomes: 𝔐^†_c(γ)𝔐^†_b(α)𝔐_a(α)𝔐_b(δ)𝔐_c(β)𝔐_b(δ)=τ_0. No further conditions on the phases can be derived from the three equations. We are in the position to write G_a=η^-x-y_3η^-x+z_4𝔐_a(s) and G_b=η^-x-z_3η^y+z_4𝔐_b(s). Iterating scenarios with different s for i≡ (x,y,z;s), we arrive at the following constraints: 𝔐^†_a(β)𝔐^†_c(γ)𝔐^†_b(α)𝔐_a(α)𝔐_c(δ)=η^-1_3η^-1_4, 𝔐^†_a(γ)𝔐^†_c(δ)𝔐^†_b(β)𝔐_a(β)𝔐_c(γ)=η_4, 𝔐^†_a(α)𝔐^†_c(α)𝔐^†_b(γ)𝔐_a(γ)𝔐_c(β)=τ_0, 𝔐^†_a(δ)𝔐^†_c(β)𝔐^†_b(δ)𝔐_a(δ)𝔐_c(α)=η_3, 𝔐^†_b(δ)𝔐^†_a(α)𝔐_b(α)𝔐_a(γ)=η_abη_3η_4, 𝔐^†_b(γ)𝔐^†_a(β)𝔐_b(β)𝔐_a(δ)=η_abη_3η^-1_4, 𝔐^†_b(β)𝔐^†_a(γ)𝔐_b(γ)𝔐_a(α)=η_abη_3η^-1_4, 𝔐^†_b(α)𝔐^†_a(δ)𝔐_b(δ)𝔐_a(β)=η_abη_3η^-1_4, 𝔐^†_c(γ)𝔐^†_b(α)𝔐_a(α)𝔐_b(δ)𝔐_c(β)𝔐_b(δ)=τ_0, 𝔐^†_c(δ)𝔐^†_b(β)𝔐_a(β)𝔐_b(γ)𝔐_c(α)𝔐_b(α)=η^-1_3, 𝔐^†_c(α)𝔐^†_b(γ)𝔐_a(γ)𝔐_b(β)𝔐_c(δ)𝔐_b(γ)=η_3η_4, 𝔐^†_c(β)𝔐^†_b(δ)𝔐_a(δ)𝔐_b(α)𝔐_c(γ)𝔐_b(β)=η_4. §.§ Solving for the 𝔐s We start by performing some 𝖲𝖴(2) gauge transformations. Let us consider a type of gauge transformation W(i)≡ W(s). Had we started at a generic gauge, we could always make the following gauge transformation: W(α) = 𝔐_a(α), W(β)=𝔐_c(β), W(γ)=𝔐^†_c(δ), W(δ) = τ_0. so that: g.t.: 𝔐_c(δ)↦ W(β)𝔐_c(β)W^†(β)=τ_0, 𝔐_c(γ)↦ W(δ)𝔐_c(δ)W^†(δ)=τ_0, 𝔐_a(δ)↦ W(α)𝔐_a(α)W^†(α)=τ_0. Now we make use of Eq.<ref>, and arrive at 𝔐_c(β)=𝔐_c(γ)=𝔐_c(δ)=τ_0. It should be noted that we are silent on 𝔐_c(α). Indeed, it is not possible to use gauge symmetry alone to trivialise 𝔐_c(α). However, as we shall see, other equations will bring enough restrictions on the form of 𝔐_c(α).Before we take a step further, let us note that taking traces over Eq.<ref> and Eq.<ref> dictates that η_4=1. After the simplification, we have: 𝔐^3_c(α) =τ_0, 𝔐_a(δ)𝔐_a(α) =η_aη^-1_3, 𝔐_a(γ)𝔐_a(β) =η_a, 𝔐_a(β)𝔐_a(γ) =η_a, 𝔐_a(α)𝔐_a(δ) =η_aη^-1_3, 𝔐_b(γ)𝔐_b(α) =η_bη^-1_3, 𝔐_b(δ)𝔐_b(β) =η_b, 𝔐_b(α)𝔐_b(γ) =η_bη^-1_3, 𝔐_b(β)𝔐_b(δ) =η_b, 𝔐^†_a(β)𝔐^†_b(α)𝔐_a(α) =η^-1_3, 𝔐^†_a(γ)𝔐^†_b(β)𝔐_a(β) =τ_0, 𝔐^†_a(α)𝔐^†_c(α)𝔐^†_b(γ)𝔐_a(γ) =τ_0, 𝔐^†_a(δ)𝔐^†_b(δ)𝔐_a(δ)𝔐_c(α) =η_3, 𝔐^†_b(δ)𝔐^†_a(α)𝔐_b(α)𝔐_a(γ) =η_abη_3, 𝔐^†_b(γ)𝔐^†_a(β)𝔐_b(β)𝔐_a(δ) =η_abη_3, 𝔐^†_b(β)𝔐^†_a(γ)𝔐_b(γ)𝔐_a(α) =η_abη_3, 𝔐^†_b(α)𝔐^†_a(δ)𝔐_b(δ)𝔐_a(β) =η_abη_3, 𝔐^†_b(α)𝔐_a(α)𝔐_b(δ)𝔐_b(δ) =τ_0, 𝔐^†_b(β)𝔐_a(β)𝔐_b(γ)𝔐_c(α)𝔐_b(α) =η^-1_3, 𝔐^†_c(α)𝔐^†_b(γ)𝔐_a(γ)𝔐_b(β)𝔐_b(γ) =η_3, 𝔐^†_b(δ)𝔐_a(δ)𝔐_b(α)𝔐_b(β) =τ_0. Let us first look at Eq.<ref>, which can be rewritten as: 𝔐_b(α)𝔐^†_b(β)𝔐_a(β)𝔐_b(γ)𝔐_c(α)=η^-1_3. The above expression, when combined with Eq.<ref> and Eq.<ref>, gives us: 𝔐_b(α)𝔐_b(γ)=η^-1_a, that, when combined with Eq.<ref>, gives us: η^-1_a=η_bη^-1_3. Now we look at Eq.<ref>, which can be rewritten as: 𝔐_a(δ)𝔐^†_b(γ)𝔐^†_a(β)𝔐_b(β)=η_abη_3. The above expression, when combined with Eq.<ref> and Eq.<ref>, gives us: 𝔐_a(δ)𝔐_a(α)=η_a, that, when combined with Eq.<ref>, gives us: η_3=1. Looking at Eq.<ref> and Eq.<ref>, with the help of Eq.<ref>, we reach: 𝔐_a(β)=η_abη_a𝔐_b(δ). Similarly, Eq.<ref> and Eq.<ref>, with the help of Eq.<ref>, give us: 𝔐_a(γ)=η_abη_a𝔐_b(β). Consider, now, Eq.<ref> and Eq.<ref>. The coupled equations can be manoeuvred to give us: 𝔐_a(α)𝔐_b(β)𝔐_b(γ)=τ_0. The above equation, when paired with Eq.<ref>, gives us (note that η_a=η_b now): 𝔐_b(β)𝔐_a(β)=η^-1_a, which can be coupled with Eq.<ref> and Eq.<ref> to give us: η_ab=η_a. At this point, there is only one phase left in the problem: η_a=η_b=η_ab. Let us look at Eq.<ref>, which gives us: 𝔐^3_b(β)=η^-1_a, which then implies that: 𝔐^3_b(δ)=τ_0. 
We note that, at this point, there are four independent 𝖲𝖴(2) matrices, denoted as follows: 𝒜≡𝔐_c(α), ℬ≡𝔐_b(δ), 𝒞≡𝔐_a(α), 𝒟≡𝔐_b(α). More completely, the 𝔐_as and 𝔐_bs are represented as follows: 𝔐_a(α) =𝒞, 𝔐_a(β) =ℬ, 𝔐_a(γ) =η_a ℬ^†, 𝔐_a(δ) = η_a 𝒞^†; and 𝔐_b(α) =𝒟, 𝔐_b(β) =η_a ℬ^†, 𝔐_b(γ) =η_a 𝒟^†, 𝔐_b(δ) =ℬ. There are, in fact, only five independent constraints for these matrices: 𝒜^3=τ_0, ℬ^3=τ_0, 𝒜=𝒞ℬ𝒞^†, 𝒜=𝒟ℬ𝒟^†, 𝒜=𝒟ℬ^† 𝒞^†. However, note that 𝒞=τ_0 from Eq. <ref>, we immediately have: 𝒜 = ℬ, 𝒟=𝒜^†. In summary: 1.) G_x=G_y=G_z=τ_0; 2.) G_a/b/c(x,y,z;s)=𝔐_a/b/c(s), where the 𝔐s have the following forms: 𝔐_a(α)=τ_0, 𝔐_a(β)=𝒜, 𝔐_a(γ)=η_a 𝒜^†, 𝔐_a(δ)=η_a τ_0; 𝔐_b(α)=𝒜^†, 𝔐_b(β)=η_a 𝒜^†, 𝔐_b(γ)=η_a 𝒜, 𝔐_b(δ)=𝒜; 𝔐_c(α)=𝒜, 𝔐_c(β)=τ_0, 𝔐_c(γ)=τ_0, 𝔐_c(δ)=τ_0. Since we have η_a=± 1, and 𝒜=τ_0, e^i2π/3τ_z, e^i4π/3τ_z, corresponding to apparently 6 solutions. However, let us note that a further gauge transformation W(x,y,z;s)≡η_a^(x+y+z) leads us to: 1.) G_x=G_y=G_z= η_aτ_0; 2.) G_a/b/c(x,y,z;s)=𝔐_a/b/c(s), where the 𝔐s have the following forms: 𝔐_a(α)=τ_0, 𝔐_a(β)=𝒜, 𝔐_a(γ)=𝒜^†, 𝔐_a(δ)=τ_0; 𝔐_b(α)=𝒜^†, 𝔐_b(β)=𝒜^†, 𝔐_b(γ)=𝒜, 𝔐_b(δ)=𝒜; 𝔐_c(α)=𝒜, 𝔐_c(β)=τ_0, 𝔐_c(γ)=τ_0, 𝔐_c(δ)=τ_0. Due to the fact that η_a now becomes global signs which are elements of the IGG, we conclude that η_a=± 1 PSGs are equivalent. We also remark that a gauge transformation W(x,y,z;s)≡ iτ_x maps the PSG solutions in which 𝒜=e^i2π/3τ_z to that in which 𝒜=e^i4π/3τ_z.In conclusion, we have: 1.) G_x=G_y=G_z= τ_0; 2.) G_a/b/c(x,y,z;s)=𝔐_a/b/c(s), where the 𝔐s have the following forms: 𝔐_a(α)=τ_0, 𝔐_a(β)=𝒜, 𝔐_a(γ)=𝒜^†, 𝔐_a(δ)=τ_0; 𝔐_b(α)=𝒜^†, 𝔐_b(β)=𝒜^†, 𝔐_b(γ)=𝒜, 𝔐_b(δ)=𝒜; 𝔐_c(α)=𝒜, 𝔐_c(β)=τ_0, 𝔐_c(γ)=τ_0, 𝔐_c(δ)=τ_0, where 𝒜=τ_0, e^i2π/3τ_z, corresponding to 2 solutions. §.§ Adding Time-Reversal Symmetry Having arrived at the PSG solutions given the space group for the trillium lattice, we are at a position to add time-reversal symmetry (TRS) to the story. The extra relations are translated into the corresponding PSG equations: G_𝒯(i)G_𝒯(i)=η_𝒯, G^†_𝒯(T^-1_x(i))G^†_x(i)G_𝒯(i)G_x(i)=η_x𝒯, G^†_𝒯(T^-1_y(i))G^†_y(i)G_𝒯(i)G_y(i)=η_y𝒯, G^†_𝒯(T^-1_z(i))G^†_z(i)G_𝒯(i)G_z(i)=η_z𝒯, G^†_𝒯(g^-1_a(i))G^†_a(i)G_𝒯(i)G_a(i)=η_a𝒯, G^†_𝒯(g^-1_b(i))G^†_b(i)G_𝒯(i)G_b(i)=η_b𝒯, G^†_𝒯(g^-1_c(i))G^†_c(i)G_𝒯(i)G_c(i)=η_c𝒯. We make the ansatz that G_𝒯≡ f(x,y,z; s)𝔐_𝒯(s), and Eq.<ref>, Eq.<ref> and Eq.<ref> immediately tell us that: G_𝒯(i)=η^x_x𝒯η^y_y𝒯η^z_z𝒯𝔐_𝒯(s). The above form, when plugged into Eq.<ref>, gives us the following constraint: 𝔐^2_𝒯(s)=η_𝒯. We now discuss about the consequences of Eq.<ref>, Eq.<ref> and Eq.<ref>. §.§.§ Solving Eq.<ref> First, we consider i≡(x,y,z;β), since in this case G_c(i)=τ_0. It is straightforward to reach the following constraint on the phases: η_x𝒯=η_y𝒯=η_z𝒯≡η_5. Also the following constraints for 𝔐_𝒯(s) arise if we iterate the sub-lattice indices: 𝔐^†_𝒯(δ)𝔐_𝒯(β)=η_c𝒯, 𝔐^†_𝒯(β)𝔐_𝒯(γ)=η_c𝒯, 𝔐^†_𝒯(γ)𝔐_𝒯(δ)=η_c𝒯, 𝔐^†_𝒯(α)𝔐^†_c(α)𝔐_𝒯(α)𝔐_c(α)=η_c𝒯. From Eq.<ref> and Eq.<ref>, we can reach 𝔐^†_𝒯(δ)𝔐_𝒯(γ)=τ_0. This statement, when coupled with Eq.<ref> gives us η_c𝒯=1. A quick summary: 1.) η_x𝒯=η_y𝒯=η_z𝒯≡η_5, η_c𝒯=1; 2.) We have G_𝒯(x,y,z;s)=η^x+y+z_5𝔐_𝒯(s); for which the following relations are satisfied: 𝔐^†_𝒯(α)𝔐^†_c(α)𝔐_𝒯(α)𝔐_c(α)=τ_0, 𝔐_𝒯(β)=𝔐_𝒯(γ)=𝔐_𝒯(δ). 
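Before continuing with the time-reversal analysis, we note that the 𝖹_2 solution obtained above is easy to check numerically. The following short Python sketch is purely illustrative (the helper names and the use of numpy are our own choices, not part of the derivation): it verifies that the nontrivial choice 𝒜=e^i2π/3τ_z, together with ℬ=𝒜, 𝒞=τ_0 and 𝒟=𝒜^†, satisfies the five independent constraints collected earlier in this appendix, and that conjugation by the gauge transformation W=iτ_x maps this solution onto the one with 𝒜=e^i4π/3τ_z.

```python
import numpy as np

# Pauli matrices; t0 is the 2x2 identity (tau_0)
t0 = np.eye(2, dtype=complex)
tx = np.array([[0, 1], [1, 0]], dtype=complex)
tz = np.array([[1, 0], [0, -1]], dtype=complex)

def expi_tz(phi):
    """exp(i * phi * tau_z)"""
    return np.cos(phi) * t0 + 1j * np.sin(phi) * tz

A = expi_tz(2 * np.pi / 3)          # the nontrivial choice of the matrix A
B, C, D = A, t0, A.conj().T         # B = A, C = tau_0, D = A^dagger

# the five independent constraints on A, B, C, D
assert np.allclose(np.linalg.matrix_power(A, 3), t0)      # A^3 = tau_0
assert np.allclose(np.linalg.matrix_power(B, 3), t0)      # B^3 = tau_0
assert np.allclose(C @ B @ C.conj().T, A)                 # A = C B C^dagger
assert np.allclose(D @ B @ D.conj().T, A)                 # A = D B D^dagger
assert np.allclose(D @ B.conj().T @ C.conj().T, A)        # A = D B^dagger C^dagger

# the gauge transformation W = i tau_x exchanges the two nontrivial choices of A
W = 1j * tx
assert np.allclose(W @ A @ W.conj().T, expi_tz(4 * np.pi / 3))
print("Z2 PSG consistency checks passed")
```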
§.§.§ Solving Eq.<ref> and Eq.<ref> Iterating the sub-lattice indices, we arrive at the following constraints: 𝔐^†_𝒯(δ)𝔐^†_a(α)𝔐_𝒯(α)𝔐_a(α) =η_a𝒯, 𝔐^†_𝒯(γ)𝔐^†_a(β)𝔐_𝒯(β)𝔐_a(β) =η_a𝒯, 𝔐^†_𝒯(β)𝔐^†_a(γ)𝔐_𝒯(γ)𝔐_a(γ) =η_a𝒯η_5, 𝔐^†_𝒯(α)𝔐^†_a(δ)𝔐_𝒯(δ)𝔐_a(δ) =η_a𝒯η_5, 𝔐^†_𝒯(γ)𝔐^†_b(α)𝔐_𝒯(α)𝔐_b(α) =η_b𝒯, 𝔐^†_𝒯(δ)𝔐^†_b(β)𝔐_𝒯(β)𝔐_b(β) =η_b𝒯η_5, 𝔐^†_𝒯(α)𝔐^†_b(γ)𝔐_𝒯(γ)𝔐_b(γ) =η_b𝒯η_5, 𝔐^†_𝒯(β)𝔐^†_b(δ)𝔐_𝒯(δ)𝔐_b(δ) =η_b𝒯. §.§.§ Collection of Constraints We further specify that 𝔐_𝒯(α)≡ℰ and 𝔐_𝒯(β)=𝔐_𝒯(γ)=𝔐_𝒯(δ)≡ℱ. We note that Eq.<ref> and Eq.<ref> immediately imply that η_5=1 since 𝔐_a(γ)=η_a𝔐^†_a(β). Furthermore, comparing Eq.<ref> and Eq.<ref> gives us η_a𝒯=η_b𝒯≡η_6, since 𝔐_a(β)=𝔐_b(δ).In the end, we reach the following five independent constraints: ℰ^2=η_𝒯, ℱ^2=η_𝒯, ℰ^† 𝒜^† ℰ𝒜=τ_0, ℱ^† ℬ^† ℱℬ=η_6, 𝒞^† ℰ^† 𝒞ℱ=η_6. Since 𝒞=τ_0, and 𝒜=ℬ, we can determine that η_6=1 and ℰ=ℱ. Also, when η_𝒯=1, we have ℰ=ℱ=τ_0; when η_𝒯=-1, we have ℰ=ℱ=iτ_z. Since without TRS, we had 2 solutions, now we have 4 solutions, as collected in Tab. <ref>. § IGG=𝖴(1) In this section, our target is to find the PSG solutions with 𝖴(1) IGG. The PSG relations listed in Section <ref> still hold, only with the signs on the RHS being replaced as η_𝔤≡exp[iϕ_𝔤]. To proceed, we mention a fact which is a blessing for us. In <cit.>, Wen proved that for PSG solutions with 𝖴(1) IGG, the Gs always have the following canonical forms: G_g(i)≡ (iτ_x)^n_ge^iθ_g(i)τ_z, n_g=0 or 1, where θ_g∈ [0, 2π). Another thing we would like to mention before moving on is that, in this section θ always stands for a function which depends on position i, whereas ϕ always stands for a constant phase.Let us first look at Eq.<ref>, which can be rewritten as: G_a(g_a(i))G_a(i)=G_z(g_a(i))e^iϕ_aτ_z. The above equation already dictates that n_z=0. Why? This is straightforward to see if n_a=0. Now supposing n_a=1, we would have: LHS_<ref>=e^i(-θ_a(g_a(i))+θ_a(i))τ_z, which also implies that n_z=0 on the RHS_<ref>. Similarly, due to Eq.<ref> and Eq.<ref>, we can conclude that n_y=n_c=0.There is a valuable lesson from the above operation. Given a PSG equation G_𝔤G^†_𝔥…=e^iϕ_𝔦τ_z, we demand that (n_𝔤-n_𝔥… )=0 mod 2. Using the above lesson, we see from Eq.<ref> that n_x=0. And Eq.<ref> tells us that n_b=0, whereas Eq.<ref> implies that n_a=0. This is remarkable, for n_x=n_y=n_z=n_a=n_b=n_c=0! What was for us originally a set of coupled 𝖲𝖴(2) matrix equations now reduces to a set of coupled 𝖴(1) matrix equations, which are equations of compact 𝖴(1) phases. 
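The parity counting behind this "lesson" rests on the commutation rule between iτ_x and a τ_z phase, namely (iτ_x) e^{iθτ_z} = e^{-iθτ_z} (iτ_x), so that all iτ_x factors in a product of canonical forms can be pushed to one side at the cost of flipping the signs of some θs. A minimal numerical sketch of this fact, written by us with arbitrary phases purely for illustration, is:

```python
import numpy as np

t0 = np.eye(2, dtype=complex)
tx = np.array([[0, 1], [1, 0]], dtype=complex)
tz = np.array([[1, 0], [0, -1]], dtype=complex)

def G(n, theta):
    """Canonical U(1)-PSG form (i tau_x)^n * exp(i theta tau_z), n = 0 or 1."""
    return np.linalg.matrix_power(1j * tx, n) @ (np.cos(theta) * t0 + 1j * np.sin(theta) * tz)

rng = np.random.default_rng(0)
th1, th2 = rng.uniform(0, 2 * np.pi, 2)

# key identity: (i tau_x) e^{i th tau_z} = e^{-i th tau_z} (i tau_x)
assert np.allclose(G(1, th1), G(0, -th1) @ (1j * tx))

# a product with even total n is a pure tau_z phase (diagonal);
# with odd total n it is off-diagonal and can never equal e^{i phi tau_z}
even = G(1, th1) @ G(1, th2)
odd = G(1, th1) @ G(0, th2)
assert np.allclose(np.diag(np.diag(even)), even)
assert not np.allclose(np.diag(np.diag(odd)), odd)
print("parity argument for the exponents n_g verified numerically")
```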
§.§ Solving for the Translational Elements and the Simplification Before we take a step further, let us rewrite the remaining PSG equations in terms of θs: θ_c(g^2_c(i))+θ_c(g_c(i))+θ_c(i)=ϕ_c, -θ_z(g_a(i))+θ_a(g_a(i))+θ_a(i)=ϕ_a, -θ_y(g_b(i))+θ_b(g_b(i))+θ_b(i)=ϕ_b, -θ_a(T_x(i))+θ_x(T_x(i))+θ_a(i)+θ_x(g^-1_a(i))=ϕ_ax, -θ_a(T_y(i))+θ_y(T_y(i))+θ_a(i)+θ_y(g^-1_a(i))=ϕ_ay, -θ_a(T^-1_z(i))-θ_z(i)+θ_a(i)+θ_z(g^-1_a(i))=ϕ_az, -θ_b(T_x(i))+θ_x(T_x(i))+θ_b(i)+θ_x(g^-1_b(i))=ϕ_bx, -θ_b(T^-1_y(i))-θ_y(i)+θ_b(i)+θ_y(g^-1_b(i))=ϕ_by, -θ_b(T_z(i))+θ_z(T_z(i))+θ_b(i)+θ_z(g^-1_b(i))=ϕ_bz, -θ_c(T^-1_y(i))-θ_y(i)+θ_c(i)+θ_x(g^-1_c(i))=ϕ_cyx, -θ_c(T^-1_z(i))-θ_z(i)+θ_c(i)+θ_y(g^-1_c(i))=ϕ_czy, -θ_c(T^-1_x(i))-θ_x(i)+θ_c(i)+θ_z(g^-1_c(i))=ϕ_cxz, -θ_a(g^-1_cg^-1_bT^-1_xT_yg_ag_c(i))-θ_c(g^-1_bT^-1_xT_yg_ag_c(i)) -θ_b(T^-1_xT_yg_ag_c(i))-θ_x(T_yg_ag_c(i)) +θ_y(T_yg_ag_c(i))+θ_a(g_ag_c(i)) +θ_c(g_c(i))=ϕ_acb, -θ_b(g^-1_aT_xT^-1_yT_zg_bg_a(i))-θ_a(T_xT^-1_yT_zg_bg_a(i)) +θ_x(T_xT^-1_yT_zg_bg_a(i))-θ_y(T_zg_bg_a(i)) + θ_z(T_zg_bg_a(i))+θ_b(g_bg_a(i))+θ_a(g_a(i))=ϕ_ab, -θ_c(g^-1_bT^-1_xT_yg_ag_bg_cg_b(i))-θ_b(T^-1_xT_yg_ag_bg_cg_b(i)) -θ_x(T_yg_ag_bg_cg_b(i))+ θ_y(T_yg_ag_bg_cg_b(i)) +θ_a(g_ag_bg_cg_b(i))+θ_b(g_bg_cg_b(i)) +θ_c(g_cg_b(i))+θ_b(g_b(i))=ϕ_cba. These θs and ϕs in the above equations are compact 𝖴(1) phase factors, and an equation θ = ϕ means θ = ϕ mod 2π. The θs are associated with the 𝖲𝖴(2) gauge symmetry like before, specifically the 𝖴(1) subgroup of 𝖲𝖴(2) transforms the θ in the following way: θ_U(i)↦θ_U(i)-φ(i)+φ(U^-1(i)), φ∈ [0,2π); note that since we do not want to spoil the choice of τ_z, we consider only the 𝖴(1) subgroup of 𝖲𝖴(2).Similar to the 𝖹_2 case, we eliminate the phases on the right hand side of Eq. <ref>, Eq. <ref>, Eq. <ref>, Eq. <ref> and Eq. <ref>. §.§ Solving for the translational Elements Let us start by considering the equations that arise because of the commutation of translational generators. Canonically, this gives us the following expressions of G_x, G_y, G_z after a gauge fixing: G_x(x,y,z;s)=τ_0, G_y(x,y,z;s)=e^ixϕ_xyτ_z, G_z(x,y,z;s)=e^i(xϕ_zx+yϕ_yz)τ_z. In other words, we have the following representation: θ_x(x,y,z;s)=0, θ_y(x,y,z;s)=xϕ_xy, θ_z(x,y,z;s)=xϕ_zx+yϕ_yz. §.§ Solving for θ_c Using the IGG gauge symmetry, one can eliminate the phases on the RHS of Eq.<ref>, Eq.<ref> and Eq.<ref>, as each of θ_x, θ_y and θ_z appears only once in these equations. To solve for θ_c, one then plug the canonical expressions of the translational PSG elements into Eq.<ref>, Eq.<ref> and Eq.<ref>. We make an ansatz analogous to the one we made in the 𝖹_2 case: θ(i)≡ f(x,y,z;s)+𝔪(s). We realise that the equations under attention are valid for all sub-lattice indices, therefore we have f_c(x,y,z;s)≡ f_c(x,y,z), and: f_c(x,y,z) =f_c(x,y-1,z)+xϕ_xy, f_c(x,y,z) =f_c(x,y,z-1) +xϕ_zx+yϕ_yz-yϕ_xy, f_c(x,y,z) =f_c(x-1,y,z) -yϕ_zx-zϕ_yz + ϕ_cxz. Checking the path-independency of f_c, we arrive at the following constraint: ϕ_xy=ϕ_yz=-ϕ_zx≡ϕ_1. Eventually we arrive at the conclusion that f_c(x,y,z)=(xy-xz)ϕ_1.Let us look at Eq.<ref>. We had eliminated the phase on the RHS by making use of the IGG gauge symmetry. Plugging the above expression into Eq.<ref>, we arrive at: 3𝔪_c(α) =0, 𝔪_c(β)+𝔪_c(γ)+𝔪_c(δ) =0, ϕ_cxz = 0. Before moving on, we make a brief summary: θ_x(i)=0, θ_y(i)=xϕ_1, θ_z(i)=(y-x)ϕ_1, θ_c(i)=(xy-xz)ϕ_1+𝔪_c(s). §.§ Solving for θ_a To solve for θ_a, one plugs the simplified expressions of the translational PSG elements into Eq.<ref>, Eq.<ref> and Eq.<ref>. 
One arrives at the following expressions: θ_a(T_x(i)) =θ_a(i)-ϕ_ax, θ_a(T_y(i)) =θ_a(i)+θ_y(T_y(i)) +θ_y(g^-1_a(i))-ϕ_ay, θ_a(i) =θ_a(T^-1_z(i))+ϕ_az +θ_z(i)-θ_z(g^-1_a(i)). One makes the usual ansatz θ_a≡ f_a(x,y,z;s)+𝔪_a(s), only this time one does not have f_a(x,y,z;s)=f_a(x,y,z), for the evaluation of θ_y/z(g^-1_a(i)) is not s-independent. We have, for s=α/δ, the following conditions for f_a: f_a(x+1,y,z;α/δ) =f_a(x,y,z;α/δ)-ϕ_ax, f_a(x,y+1,z;α/δ) =f_a(x,y,z;α/δ)-ϕ_ay, f_a(x,y,z;α/δ) =f_a(x,y,z-1;α/δ) +ϕ_az+(2y-2x+1)ϕ_1; and for s=β/γ, the following conditions for f_a: f_a(x+1,y,z;β/γ) =f_a(x,y,z;β/γ)-ϕ_ax, f_a(x,y+1,z;β/γ) =f_a(x,y,z;β/γ)-ϕ_ay-ϕ_1, f_a(x,y,z;β/γ) =f_a(x,y,z-1;β/γ)+ϕ_az +(2y-2x)ϕ_1. Checking the path-independency of f_a, we arrive at the following constraint: 2ϕ_1=0⇒ϕ_1=0 or π. After the path-independency is guaranteed, we arrive at the following expressions: f_a(x,y,z;α/δ) =-xϕ_ax-yϕ_ay+z(ϕ_az+ϕ_1), f_a(x,y,z;β/γ) =-xϕ_ax-y(ϕ_ay+ϕ_1)+zϕ_az. Plugging the forms of θ_a≡ f_a(x,y,z;s)+𝔪_a(s) into Eq.<ref>, further constraints can be derived. Specifically, we iterate the sub-lattice index. Let us consider i≡ (x,y,z;α), we have: -θ_z(-x,-y-1,z;δ) + θ_a(-x,-y-1,z;δ) + θ_a(x,y,z;α) = ϕ_a, which gives us: ϕ_a = -(-y-1+x)ϕ_1 + (xϕ_ax + (y+1)ϕ_ay + z(ϕ_az+ϕ_1))+(-xϕ_ax - yϕ_ay + z(ϕ_az+ϕ_1)) + 𝔪_a(α) + 𝔪_a(δ) = -(-y-1+x)ϕ_1 + ϕ_ay + 2zϕ_az + 𝔪_a(α) + 𝔪_a(δ). The implication from the above equation is that: ϕ_1 = 0, 2ϕ_az=0; 𝔪_a(α) + 𝔪_a(δ) = ϕ_a - ϕ_ay. For i≡ (x,y,z;β), we have: θ_a(-x-1,-y-1,z+1;γ) + θ_a(x,y,z;β) = ϕ_a, which gives us: ϕ_a = ((x+1)ϕ_ax + (y+1)ϕ_ay + (z+1)ϕ_az) +(-xϕ_ax - yϕ_ay + zϕ_az) + 𝔪_a(β) + 𝔪_a(γ) = ϕ_ax + ϕ_ay + ϕ_az + 𝔪_a(β) + 𝔪_a(γ). For i≡ (x,y,z;γ), we have: θ_a(-x-1,-y-1,z;β) + θ_a(x,y,z;γ) = ϕ_a, which gives us: ϕ_a = ((x+1)ϕ_ax + (y+1)ϕ_ay + zϕ_az) +(-xϕ_ax - yϕ_ay + zϕ_az) + 𝔪_a(β) + 𝔪_a(γ) = ϕ_ax + ϕ_ay+ 𝔪_a(β) + 𝔪_a(γ). Combining with the equation for i≡ (x,y,z;β), we have: ϕ_az=0; 𝔪_a(β) + 𝔪_a(γ) = ϕ_a - ϕ_ay - ϕ_ax. Lastly, the case for i≡ (x,y,z;δ) does not give us new relations. Before moving on, we make a brief summary: θ_x/y/z(i)=0, θ_a(i)= -xϕ_ax - yϕ_ay +𝔪_a(s). §.§ Solving for θ_b To solve for θ_b, one plugs the simplified expressions of the translational PSG elements into Eq.<ref>, Eq.<ref> and Eq.<ref>. One arrives at the following expressions: θ_b(T_x(i)) =θ_b(i)-ϕ_bx, θ_b(i) =θ_b(T^-1_y(i))+ϕ_by, θ_b(T_z(i)) =θ_b(i)-ϕ_bz. One makes the usual ansatz θ_b≡ f_b(x,y,z;s)+𝔪_b(s), we arrive at f_b(x,y,z;s)=-xϕ_bx + y ϕ_by - zϕ_bz. Plugging the forms of θ_b≡ f_b(x,y,z;s)+𝔪_b(s) into Eq.<ref>, further constraints can be derived. Specifically, we iterate the sub-lattice index. Let us consider i≡ (x,y,z;α), we have: θ_b(-x-1,y,-z;γ) + θ_b(x,y,z;α) = ϕ_b, which gives us: ϕ_b = ((x+1)ϕ_bx +yϕ_by + zϕ_bz) + (-xϕ_bx + y ϕ_by - zϕ_bz) + 𝔪_b(α) + 𝔪_b(γ) = ϕ_bx + 2yϕ_by + 𝔪_b(α) + 𝔪_b(γ). The implication from the above equation is that: 2ϕ_by=0; 𝔪_b(α) + 𝔪_b(γ) = ϕ_b - ϕ_bx. For i≡ (x,y,z;β), we have: θ_b(-x-1,y,-z-1;δ) + θ_b(x,y,z;β) = ϕ_b, which gives us: ϕ_b =((x+1)ϕ_bx +yϕ_by + (z+1)ϕ_bz) + (-xϕ_bx + y ϕ_by - zϕ_bz) + 𝔪_b(β) + 𝔪_b(δ) = ϕ_bx + ϕ_bz + 𝔪_b(β) + 𝔪_b(δ). For i≡ (x,y,z;γ), we have: θ_b(-x-1,y+1,-z;α) + θ_b(x,y,z;γ) = ϕ_b, which gives us: ϕ_b =((x+1)ϕ_bx +(y+1)ϕ_by + zϕ_bz) + (-xϕ_bx + y ϕ_by - zϕ_bz) + 𝔪_b(α) + 𝔪_b(γ) = ϕ_bx + ϕ_by + 𝔪_b(α) + 𝔪_b(γ). Combining with the equation for i≡ (x,y,z;α), we have: ϕ_by=0; 𝔪_b(α) + 𝔪_b(γ) = ϕ_b - ϕ_bx. 
Lastly, the case for i≡ (x,y,z;δ) gives us the same relation from the case for i≡ (x,y,z;β), which is: 𝔪_b(β) + 𝔪_b(δ) = ϕ_b - ϕ_bx - ϕ_bz. Before moving on, we make a brief summary: θ_b(i)= -xϕ_bx- zϕ_bz +𝔪_b(s). §.§ Solving Eq. <ref> The equation Eq. <ref> is then reduced to: -θ_a(g^-1_c g^-1_b T^-1_x T_y(i)) - θ_c(g^-1_b T^-1_x T_y(i)) - θ_b(T^-1_x T_y(i)) + θ_a(i) + θ_c(g^-1_a(i)) = 0. For i≡ (x,y,z;α), the above equation is: 0 =-θ_a(y,-z,-x;β) - θ_c(-x,y,-z;γ) - θ_b(x-1,y+1,z;α) + θ_a(x,y,z;α) +θ_c(-x,-y-1, z-1;δ) = x(ϕ_bx - ϕ_ax)+ y(ϕ_ax - ϕ_ay)+ z(ϕ_bz - ϕ_ay)-ϕ_bx -𝔪_a(β)-𝔪_c(γ)-𝔪_b(α)+𝔪_a(α)+𝔪_c(δ). We can conclude from the above equation that ϕ_2≡ϕ_ax = ϕ_ay = ϕ_bx = ϕ_bz. Thus we have f_a(x,y,z) = -(x+y)ϕ_2 and f_b(x,y,z) = -(x+z)ϕ_2. For i≡ (x,y,z;β), the above equation is: 0 =-θ_a(y,-z-1,-x;γ) - θ_c(-x,y,-z-1;δ) - θ_b(x-1,y+1,z;β) + θ_a(x,y,z;β) +θ_c(-x-1,-y-1, z;γ) = -2ϕ_2 -𝔪_a(γ)-𝔪_c(δ)-𝔪_b(β) +𝔪_a(β)+𝔪_c(γ). For i≡ (x,y,z;γ), the above equation is: 0 =-θ_a(y+1,-z,-x;α) - θ_c(-x,y+1,-z;α) - θ_b(x-1,y+1,z;γ) + θ_a(x,y,z;γ) +θ_c(-x-1,-y-1, z-1;β) = -𝔪_a(α)-𝔪_c(α)-𝔪_b(γ)+𝔪_a(γ)+𝔪_c(β). For i≡ (x,y,z;δ), the above equation is: 0 =-θ_a(y+1,-z-1,-x;δ) - θ_c(-x,y+1,-z-1;β) - θ_b(x-1,y+1,z;δ) + θ_a(x,y,z;δ) +θ_c(-x,-y-1, z;α) = -ϕ_2-𝔪_a(δ)-𝔪_c(β)-𝔪_b(δ) +𝔪_a(δ)+𝔪_c(α). §.§ Solving Eq. <ref> The equation is reduced to: -θ_b(g^-1_aT_xT^-1_yT_z(i))-θ_a(T_xT^-1_yT_z(i)) +θ_b(i)+θ_a(g^-1_b(i))=ϕ_ab. For i≡ (x,y,z;α), the above equation is: ϕ_ab = -θ_b(-x-1,-y,z;δ) - θ_a(x+1,y-1,z+1;α) +θ_b(x,y,z;α)+θ_a(-x-1,y-1,-z;γ) =ϕ_2 -𝔪_b(δ)-𝔪_a(α)+𝔪_b(α)+𝔪_a(γ). For i≡ (x,y,z;β), the above equation is: ϕ_ab = -θ_b(-x-2,-y,z+1;γ) - θ_a(x+1,y-1,z+1;β) +θ_b(x,y,z;β)+θ_a(-x-1,y-1,-z-1;δ) =ϕ_2 -𝔪_b(γ)-𝔪_a(β)+𝔪_b(β)+𝔪_a(δ). For i≡ (x,y,z;γ), the above equation is: ϕ_ab = -θ_b(-x-2,-y,z;β) - θ_a(x+1,y-1,z+1;γ) +θ_b(x,y,z;γ)+θ_a(-x-1,y,-z;α) =-ϕ_2 -𝔪_b(β)-𝔪_a(γ)+𝔪_b(γ)+𝔪_a(α). For i≡ (x,y,z;δ), the above equation is: ϕ_ab = -θ_b(-x-1,-y,z+1;α) - θ_a(x+1,y-1,z+1;δ) +θ_b(x,y,z;δ)+θ_a(-x-1,y,-z-1;β) =ϕ_2 -𝔪_b(α)-𝔪_a(δ)+𝔪_b(δ)+𝔪_a(β). §.§ Solving Eq. <ref> The equation Eq.<ref> is then reduced to: -θ_c(g^-1_bT^-1_xT_y(i))-θ_b(T^-1_xT_y(i))+θ_a(i) +θ_b(g^-1_a(i))+θ_c(g^-1_bg^-1_a(i)) +θ_b(g^-1_cg^-1_bg^-1_a(i)) =0. For i≡ (x,y,z;α), the above equation is: 0 =-θ_c(-x,y,-z;γ)-θ_b(x-1,y+1,z;α) +θ_a(x,y,z;α)+θ_b(-x,-y-1,z-1;δ) +θ_c(x-1,-y-1,-z;β)+θ_b(-y-1,-z,x-1;δ) = 2ϕ_2 -𝔪_c(γ)-𝔪_b(α)+𝔪_a(α) +𝔪_b(δ)+𝔪_c(β)+𝔪_b(δ). For i≡ (x,y,z;β), the above equation is: 0 =-θ_c(-x,y,-z-1;δ)-θ_b(x-1,y+1,z;β) +θ_a(x,y,z;β)+θ_b(-x-1,-y-1,z;γ) +θ_c(x,-y-1,-z;α)+θ_b(-y-1,-z,x;α) = ϕ_2 -𝔪_c(δ)-𝔪_b(β)+𝔪_a(β) +𝔪_b(γ)+𝔪_c(α)+𝔪_b(α). For i≡ (x,y,z;γ), the above equation is: 0 =-θ_c(-x,y+1,-z;α)-θ_b(x-1,y+1,z;γ) +θ_a(x,y,z;γ)+θ_b(-x-1,-y-1,z-1;β )+θ_c(x,-y-2,-z;δ)+θ_b(-y-2,-z,x;γ) = 3ϕ_2 -𝔪_c(α)-𝔪_b(γ)+𝔪_a(γ) +𝔪_b(β)+𝔪_c(δ)+𝔪_b(γ). For i≡ (x,y,z;δ), the above equation is: 0 =-θ_c(-x,y+1,-z-1;β)-θ_b(x-1,y+1,z;δ) θ_a(x,y,z;δ)+θ_b(-x,-y-1,z;α) +θ_c(x-1,-y-2,-z;γ)+θ_b(-y-2,-z,x-1;β) = 2ϕ_2 -𝔪_c(β)-𝔪_b(δ)+𝔪_a(δ) +𝔪_b(α)+𝔪_c(γ)+𝔪_b(β). §.§ Collected equations for 𝔪s In this subsection, we summarize the coupled equations to solve for 𝔪s. Before doing so, we note that we can use the 𝖲𝖴(2) gauge symmetry to fix certain 𝔪s. Recall that the action of the gauge transformation is: g.t.:θ_U(i)↦ w(i) - θ_U(i) + w(U^-1(i)). We start off in a generic gauge where all 𝔪s are non-trivial. We first perform the gauge transformation w(β) = 𝔪_c(β) and w(γ) = -𝔪_c(δ). The consequence is that 𝔪_c(β) = 𝔪_c(δ) = 0. 
And because 𝔪_c(β)+ 𝔪_c(γ) + 𝔪_c(δ)=0 from one of our constraints, we have 𝔪_c(γ)=0. We then perform the gauge transformation w(α) = 𝔪_a(α), such that 𝔪_a(α)=0. And because 𝔪_a(α) + 𝔪_a(δ) = ϕ_a - ϕ_2 from one of our constraints, we have 𝔪_a(δ) = ϕ_a - ϕ_2.The remaining equations after the reductions are: 3𝔪_c(α)=0, 𝔪_a(β) + 𝔪_a(γ) = ϕ_a - 2ϕ_2, 𝔪_b(α) + 𝔪_b(γ) = ϕ_b - ϕ_2, 𝔪_b(β) + 𝔪_b(δ) = ϕ_b - 2ϕ_2, -𝔪_a(β)-𝔪_b(α) = ϕ_2 , -𝔪_a(γ)-𝔪_b(β)+𝔪_a(β) = 2ϕ_2, -𝔪_c(α)-𝔪_b(γ)+𝔪_a(γ) =0, -𝔪_b(δ)+𝔪_c(α) = ϕ_2, -𝔪_b(δ)+𝔪_b(α)+𝔪_a(γ) = ϕ_ab-ϕ_2, -𝔪_b(γ)-𝔪_a(β)+𝔪_b(β) = ϕ_ab-ϕ_a, -𝔪_b(β)-𝔪_a(γ)+𝔪_b(γ) = ϕ_ab+ϕ_2, -𝔪_b(α)+𝔪_b(δ)+𝔪_a(β) = ϕ_ab+ϕ_a-2ϕ_2, -𝔪_b(α)+𝔪_b(δ)+𝔪_b(δ) = - 2ϕ_2, -𝔪_b(β)+𝔪_a(β)+𝔪_b(γ) +𝔪_c(α)+𝔪_b(α) = - ϕ_2, -𝔪_c(α)-𝔪_b(γ)+𝔪_a(γ)+𝔪_b(β)+𝔪_b(γ) = - 3ϕ_2, -𝔪_b(δ)+𝔪_b(α)+𝔪_b(β) = -ϕ_a- ϕ_2. We now set A≡𝔪_c(α), B≡𝔪_a(β), C≡𝔪_b(α) and D≡𝔪_b(β). Eq. <ref> tells us that -B-C=ϕ_2. Also, Eq. <ref> tells us that D = 2B - ϕ_a. Thus all the 𝔪s can be represented using A and B, as deduced from Eq. <ref> to Eq. <ref>: 𝔪_a(α) = 0, 𝔪_a(β) = B, 𝔪_a(γ) = ϕ_a-2ϕ_2 - B, 𝔪_a(δ) = ϕ_a - ϕ_2, 𝔪_b(α) = -ϕ_2-B, 𝔪_b(β) = 2B-ϕ_a, 𝔪_b(γ) = ϕ_b+B, 𝔪_b(δ) = ϕ_b -2ϕ_2+ϕ_a-2B, 𝔪_c(α) = A, 𝔪_c(β) = 0, 𝔪_c(γ) = 0, 𝔪_c(δ) = 0. Eq. <ref> tells us that: A+2B = ϕ_a - ϕ_b -2ϕ_2, whereas Eq. <ref> tells us that: A+2B = ϕ_a + ϕ_b -ϕ_2. From which we can see that ϕ_2=-2ϕ_b. Now if we look at Eq. <ref>, we have ϕ_ab = -ϕ_b. In fact, Eq. <ref> to Eq. <ref> do not tell us more than this. Eq. <ref> tells us that 3B = 4ϕ_b+2ϕ_a, Eq. <ref> tells us that A-B = -ϕ_a-ϕ_b, Eq. <ref> tells us that A-B = -2ϕ_b and finally Eq. <ref> tells us that 3B = ϕ_a + ϕ_b-2ϕ_2. The above relations allow us to assert that: ϕ_3≡ -ϕ_ab=ϕ_a=ϕ_b, ϕ_2 = -2ϕ_3, B = A+2ϕ_3, and: 𝔪_a(α) = 0, 𝔪_a(β) = A + 2ϕ_3, 𝔪_a(γ) =3ϕ_3-A, 𝔪_a(δ) = 3ϕ_3, 𝔪_b(α) = -A, 𝔪_b(β) = -A+3ϕ_3, 𝔪_b(γ) = 3ϕ_3+A, 𝔪_b(δ) = 2ϕ_3+A, 𝔪_c(α) = A, 𝔪_c(β) = 0, 𝔪_c(γ) = 0, 𝔪_c(δ) = 0. We also have: f_a(i) = 2(x+y)ϕ_3, f_b(i) = 2(x+z)ϕ_3. Similar to the 𝖹_2 case, we now consider a further gauge transformation: w(x,y,z;α) =-2ϕ_3+ϕ_3(x+y+z), w(x,y,z;β/γ/δ) =ϕ_3(x+y+z), we see that: 1.) G_x=G_y=G_z= e^-iϕ_3τ_z; 2.) G_a/b/c(x,y,z;s)=e^i𝔪_a/b/c(s)τ_z, where the 𝔪s have the following forms: 𝔪_a(α)=0, 𝔪_a(β)=A, 𝔪_a(γ)=-A, 𝔪_a(δ)=0; 𝔪_b(α)=-A, 𝔪_b(β)=-A, 𝔪_b(γ)=A, 𝔪_b(δ)=A; 𝔪_c(α)=A, 𝔪_c(β)=0, 𝔪_c(γ)=0, 𝔪_c(δ)=0. Due to the fact that ϕ_3 now becomes global signs which are elements of the IGG, we conclude that ϕ_3 is redundant. We also remark that a gauge transformation W(x,y,z;s)≡ iτ_x maps the PSG solutions in which 𝒜=e^i2π/3τ_z to that in which 𝒜=e^i4π/3τ_z.In conclusion, we have: 1.) G_x=G_y=G_z= τ_0; 2.) G_a/b/c(x,y,z;s)=e^i𝔪_a/b/c(s)τ_z, where the 𝔪s have the following forms: 𝔪_a(α)=0, 𝔪_a(β)=A, 𝔪_a(γ)=-A, 𝔪_a(δ)=0; 𝔪_b(α)=-A, 𝔪_b(β)=-A, 𝔪_b(γ)=A, 𝔪_b(δ)=A; 𝔪_c(α)=A, 𝔪_c(β)=0, 𝔪_c(γ)=0, 𝔪_c(δ)=0, where 𝒜=τ_0, e^i2π/3τ_z. §.§ Adding Time-Reversal Symmetry We firstly write the algebraic relations: G_𝒯(i)G_𝒯(i)=e^iϕ_𝒯τ_z, G^†_𝒯(T^-1_x(i))G^†_x(i)G_𝒯(i)G_x(i)=e^iϕ_x𝒯τ_z, G^†_𝒯(T^-1_y(i))G^†_y(i)G_𝒯(i)G_y(i)=e^iϕ_y𝒯τ_z, G^†_𝒯(T^-1_z(i))G^†_z(i)G_𝒯(i)G_z(i)=e^iϕ_z𝒯τ_z, G^†_𝒯(g^-1_a(i))G^†_a(i)G_𝒯(i)G_a(i)=e^iϕ_a𝒯τ_z, G^†_𝒯(g^-1_b(i))G^†_b(i)G_𝒯(i)G_b(i)=e^iϕ_b𝒯τ_z, G^†_𝒯(g^-1_c(i))G^†_c(i)G_𝒯(i)G_c(i)=e^iϕ_c𝒯τ_z. As usual, the canonical form of G_𝒯(i) = (iτ_x)^n_𝒯e^iθ_𝒯(i)τ_z. When n_𝒯=0, we can show that G_𝒯 = iτ_z uniformly much like the case for 𝖹_2. Since the derivation is very similar to the 𝖹_2 case, it is not included here. 
This group of PSG solutions does not produce mean field 𝖴(1) spin liquids if we consider the constraint imposed by TRS. For the rest of the appendix, let us focus on the case when n_𝒯=1.Let us first look at Eq. <ref>, we straightforwardly conclude that ϕ_𝒯=π. We denote θ_𝒯≡ f_𝒯(x,y,z;s) + 𝔪_𝒯(s). Then Eq. <ref> to Eq. <ref> tell us that: f_𝒯(x,y,z;s)=xϕ_x𝒯 + yϕ_y𝒯 + zϕ_z𝒯. We would like to plug the above results into Eq. <ref>. We arrive at the folllowing constraint: -θ_𝒯(g^-1_c(i)) + 2θ_c(i) + θ_𝒯(i) = ϕ_c𝒯. For the case with i=(x,y,z;α), we have: ϕ_c𝒯 =-θ_𝒯(y,z,x;α) + 2A +θ_𝒯(x,y,z;α) =x(ϕ_x𝒯 - ϕ_y𝒯) + y(ϕ_y𝒯-ϕ_z𝒯) + z(ϕ_z𝒯 - ϕ_x𝒯) + 2A, we arrive at ϕ_4≡ϕ_x𝒯 = ϕ_y𝒯 = ϕ_z𝒯, and ϕ_c𝒯=-A, where we used 3A=0. For the case with i=(x,y,z;β), we have: ϕ_c𝒯 =-θ_𝒯(y,z,x;δ) +θ_𝒯(x,y,z;β) = -𝔪_𝒯(δ) + 𝔪_𝒯(β). For the case with i=(x,y,z;γ), we have: ϕ_c𝒯 =-θ_𝒯(y,z,x;β) +θ_𝒯(x,y,z;γ) = -𝔪_𝒯(β) + 𝔪_𝒯(γ). For the case with i=(x,y,z;δ), we have: ϕ_c𝒯 =-θ_𝒯(y,z,x;γ) +θ_𝒯(x,y,z;δ) = -𝔪_𝒯(γ) + 𝔪_𝒯(δ). Thus if we denote 𝔪_𝒯(β)≡ E, we have 𝔪_𝒯(γ)= E-A and 𝔪_𝒯(δ)= E+A.We now look at Eq. <ref>. Similar to the case before: -θ_𝒯(g^-1_a(i)) + 2θ_a(i) + θ_𝒯(i) = ϕ_a𝒯. For the case with i=(x,y,z;α), we have: ϕ_a𝒯 =-θ_𝒯(-x,-y-1,z-1;δ) +2𝔪_a(α)+θ_𝒯(x,y,z;α) = - (-x-y-1+z-1)ϕ_4 -𝔪_𝒯(δ) + (x+y+z)ϕ_4 + 𝔪_𝒯(α) = x(2ϕ_4) + y(2ϕ_4) +2ϕ_4 + 𝔪_𝒯(α) - 𝔪_𝒯(δ), from which we have: 2ϕ_4 = 0, 𝔪_𝒯(α) - 𝔪_𝒯(δ) = ϕ_a𝒯. For the case with i=(x,y,z;β), we have: ϕ_a𝒯 =-θ_𝒯(-x-1,-y-1,z;γ) +2𝔪_a(β)+θ_𝒯(x,y,z;β) = 𝔪_𝒯(β) - 𝔪_𝒯(γ) + 2A = 0. For the case with i=(x,y,z;γ), we have: ϕ_a𝒯 =-θ_𝒯(-x-1,-y-1,z-1;β) +2𝔪_a(γ)+θ_𝒯(x,y,z;γ) = 3ϕ_4 + 𝔪_𝒯(γ) - 𝔪_𝒯(β) -2A = 3ϕ_4. Note that this relation combined with the one before, gives us that ϕ_4=0. For the case with i=(x,y,z;δ), we have: ϕ_a𝒯 =-θ_𝒯(-x,-y-1,z;α) +2𝔪_a(δ)+θ_𝒯(x,y,z;δ) = ϕ_4 + 𝔪_𝒯(δ) - 𝔪_𝒯(α). Combining the above constraints, we arrive at a set of relations summarised here: ϕ_a𝒯 = ϕ_4 = 0 𝔪_𝒯(α) = 𝔪_𝒯(δ), 𝔪_𝒯(β) = 𝔪_𝒯(γ)+A. Let us now look at Eq. <ref>: -θ_𝒯(g^-1_b(i)) + 2θ_b(i) + θ_𝒯(i) = ϕ_b𝒯. For the case with i=(x,y,z;α), we have: ϕ_b𝒯 =-θ_𝒯(-x-1,y-1,-z;γ) +2𝔪_b(α)+θ_𝒯(x,y,z;α) = - 2A + 𝔪_𝒯(α) - 𝔪_𝒯(γ). For the case with i=(x,y,z;β), we have: ϕ_b𝒯 =-θ_𝒯(-x-1,y-1,-z-1;δ) +2𝔪_b(β)+θ_𝒯(x,y,z;β) = - 2A + 𝔪_𝒯(β) - 𝔪_𝒯(δ). For the case with i=(x,y,z;γ), we have: ϕ_b𝒯 =-θ_𝒯(-x-1,y,-z;α) +2𝔪_b(γ)+θ_𝒯(x,y,z;γ) = 2A + 𝔪_𝒯(γ) - 𝔪_𝒯(α). For the case with i=(x,y,z;δ), we have: ϕ_b𝒯 =-θ_𝒯(-x-1,y,-z-1;β) +2𝔪_b(δ)+θ_𝒯(x,y,z;δ) = 2A + 𝔪_𝒯(δ) - 𝔪_𝒯(β). Combining the above constraints, we arrive at ϕ_b𝒯 = 0 and no new relations.We can then summarise: 𝔪_𝒯(α) = E+A , 𝔪_𝒯(β) = E, 𝔪_𝒯(γ) = E-A, 𝔪_𝒯(δ) = E+A . In the above, E is a free 𝖴(1) phase. However, note that we did not make use of the IGG gauge degrees of freedom associated with TRS. Recalling that G_𝒯∼ G_𝒯W_𝒯, where W_𝒯∈𝖴(1). We choose W_𝒯≡exp (-iE), thus eliminating the free phase in our solutions. We collect the 𝖴(1) PSG solutions into Tab. <ref>. Thus we have: G_𝒯(r⃗,α) = iτ_x e^iAτ_z, G_𝒯(r⃗,β) = iτ_x, G_𝒯(r⃗,γ) = iτ_x e^-iAτ_z, G_𝒯(r⃗,δ) = iτ_x e^iAτ_z. § MEAN-FIELD ANSATZES FOR THE PSG SOLUTIONS Our PSG classification obtains a set of gauge-inequivalent transformations G_g for all g ∈𝖯2_13 ×𝖹_2. In this appendix, we derive the constraints imposed on the mean-field parameters U_ij and μ_i by requiring that an element of the PSG leaves the ansatz invariant. We repeat this condition for convenience: ∀ g: G_g(g(i)) U_g(i)g(j)G^†_g(g(j)) = U_ij, G_g(g(i)) μ_g(i)G^†_g(g(i)) = μ_i. 
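As a simple illustration of how these conditions constrain the ansatz (anticipating the time-reversal discussion in the next subsection), the following Python sketch takes the time-reversal action on a link matrix to be U_ij → -U_ij followed by the gauge transformation G_𝒯, as implied by the statement below that G_𝒯=τ_0 forces U_ij=-U_ij. It checks numerically that G_𝒯=τ_0 kills the ansatz entirely, while G_𝒯=iτ_z leaves exactly the τ_x and τ_y components. The function name and the random test matrices are our own choices.

```python
import numpy as np

t0 = np.eye(2, dtype=complex)
tx = np.array([[0, 1], [1, 0]], dtype=complex)
ty = np.array([[0, -1j], [1j, 0]], dtype=complex)
tz = np.array([[1, 0], [0, -1]], dtype=complex)

def trs_invariant(U, G_T):
    """Invariance of a link matrix under time reversal, with the bare action
    taken to be U -> -U followed by the gauge transformation G_T."""
    return np.allclose(G_T @ (-U) @ G_T.conj().T, U)

# generic link matrix U = a t0 + b tx + c ty + d tz
rng = np.random.default_rng(1)
a, b, c, d = rng.normal(size=4)
U_general = a * t0 + b * tx + c * ty + d * tz
U_xy_only = b * tx + c * ty

assert not trs_invariant(U_general, t0)        # G_T = tau_0 would force U = -U = 0
assert not trs_invariant(U_general, 1j * tz)   # tau_0 and tau_z parts are not allowed
assert trs_invariant(U_xy_only, 1j * tz)       # U = U^x tau_x + U^y tau_y survives
print("TRS constraint on the Z2 mean-field ansatz reproduced")
```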
<ref> §.§ 𝖹_2 Here we specify the ansatzes for the PSGs corresponding to the IGG being 𝖹_2. As derived in Appendix <ref> and displayed in Table <ref> in the main text, the four 𝖹_2 PSGs can be indexed by , 𝒜=exp(i 2 π n/3) for i=0,1, and ℰ=τ_0 or ℰ=i τ_z, in terms of which all gauge transformations are listed in Tab. <ref>. We note that if G_𝒯= τ_0 then the invariance of the ansatz under time-reversal requires U_ij=-U_ij and μ_i = -μ_i, leading to no non-zero mean-field ansatzes. For G_𝒯=i τ_z, the invariance under TRS requires the following form for all links and sites: U_ij= U^x_ijτ_x + U^y_ijτ_y μ_i = μ^x_iτ_x + μ^y_iτ_y. Finally, since G_x=G_y=G_z=τ_0 in our solutions, we must require U_ij and μ_ij to be translationally invariant. We then encode the dependence of the parameters U_ij on the link (ij) by determining U for each of the unique links in Tab. <ref>, and determining the functions U^x_i, U^y_i, where i∈{1,2 ⋯ 12} specifies the link in Table <ref>. The on-site parameters are described as μ_α, μ_β, μ_γ and μ_δ, where the subscripts denote the sublattice dependence. Imposing the invariance of the ansatz under the action of (G_c, g_c), we get the following relations between 4 groups of links that are closed under the application of g_c U_1 = U_5 = U_9 U_2 = U_6 = U_10 U_3 = 𝒜 U_7 = 𝒜^2 U_11 U_4 = 𝒜 U_8 = 𝒜^2 U_12 The relations between different groups of links are obtained by the invariance under (G_b,g_b) and (G_a,g_a). The action of (G_a,g_a) gives us U_1 = U_2 U_3 = U_4 U_5 = 𝒜^2 U_7 U_6 = 𝒜^2 U_8 U_9 = 𝒜 U_12 U_10 = 𝒜 U_11 Similarly, the invariance of all links under (G_b,g_b) give us the conditions U_1 = U_3 U_2 = U_4 U_5 = 𝒜^2 U_8 U_6 = 𝒜^2 U_7 U_9 = U_10 U_11 = U_12 Combining the conditions in Eqs. <ref>,  <ref> and  <ref> we find that the U_ij for all links can be specified in terms of only two parameters U^x and U^y: U_1 = U^x τ_x + U^y τ_y; U_2 =U_1; U_3 = U_1 ; U_4 =U_1; U_5 = U_1; U_6 = U_1; U_7 = 𝒜^2 U_1; U_8 = 𝒜^2 U_1; U_9 = U_1; U_10= U_1; U_11 = 𝒜^2 U_1 ; U_12 =𝒜^2 U_1 Similarly, demanding the invariance of μ_i under (G_c,g_c) gives us μ_γ=μ_δ=μ_β, and μ_α=𝒜^2μ_α. Under (G_a, g_a), we have μ_α= μ_δ and μ_β=𝒜^2 μ_γ. This already implies that when 𝒜≠ 1, μ =0 on all sites. When 𝒜=1, site-independent on-site terms of the form μ^x τ_x + μ^y τ_y are allowed in the ansatz. §.§ PSG-protected gapless nodal star in 𝖹_21 spin liquid In this Appendix, we prove that the mean-field Hamiltonian H_ MFT(k⃗) (Eqs. <ref> and Eqs. <ref>) for the 𝖹_21 QSL has two zero-energy eigenvalues for k⃗=(± k, ± k, ± k). To this end, we first work out the most general PSG-allowed H_ MFT(k⃗) for the 𝖹_21 QSL. The rest of the discussion assumes the translation invariance of the ansatzes, which is true for all our QSLs. First, we use the basis (ψ^α_1, ψ^α_2,⋯ψ^δ_1 ψ^δ_2 ) to express the H_ MFT(k⃗) in terms of 2×2 blocks as H_ MFT(k⃗)= [ h_α,α(k⃗) h_α,β(k⃗) h_α,γ(k⃗) h_α,γ(k⃗); h_β ,α(k⃗) h_β ,β(k⃗) h_β ,γ(k⃗) h_β ,γ(k⃗); h_γ,α(k⃗) h_γ,β(k⃗) h_γ,γ(k⃗) h_γ,γ(k⃗); h_δ,α(k⃗) h_δ,β(k⃗) h_δ,γ(k⃗) h_δ,γ(k⃗) ] As just demonstrated in the previous section, when 𝒜≠ 1 and G_𝒯=iτ_z, we have h_α,α=h_β,β=h_γ,γ=h_δ,δ=0 for all k⃗. The block matrices have the form U_x τ^x +U_y τ^y (Eq. <ref>) in real space, leading to h_α,β(r⃗,r⃗')= [ 0 U_α,β(r⃗-r⃗'); U^*_α,β(r⃗-r⃗') 0 ] for a complex amplitude U(r⃗). The fourier-transformed equivalent is given by h_α,β(k⃗) =1/N∑_r⃗,r⃗' h_α,β(r⃗,r⃗')e^ik⃗.(r⃗-r⃗') = [ 0 U_α,β(k⃗); U^*_α,β(-k⃗) 0 ] Foreseeing repeated appearances of the off-diagonal form in Eq. 
<ref>, we introduce the shorthand M[u(k)], defined by M[u(k⃗)] = [ 0 u(k⃗); u^*(-k⃗) 0 ] Also note that from Eq. <ref> we know that h_α,β(r⃗,r⃗') = h_β,α(r⃗',r⃗). So we have from Eq. <ref> h_α,β(k⃗) = h_α,β(-k⃗) The action of symmetries on the block matrices in k-space can be worked from their real-space equivalents, given by Eq. <ref>. To show this explicitly for a general symmetry transformation g, we assume the unit-cell independence of gauge transformations which has been shown for all our QSLs. To reduce cumbersome expressions, we introduce the shorthand α̅ and g_α(r⃗) to denote the sublattice index and unit-cell position of the operation g(r⃗;α). We have, from Eq. <ref>, (G_g,g): ( h_α,β(k⃗) =1/N∑_r⃗,r⃗' h_α,β(r⃗,r⃗')e^ik⃗.(r⃗-r⃗')) ↦1/N∑_r⃗,r⃗'G_g(α̅) h_α̅β̅(g_α(r⃗),g_β(r⃗'))G^†_g(β̅) e^ik⃗.(r⃗-r⃗') = 1/N∑_r⃗,r⃗'G_g(α̅) h_α̅β̅(r⃗,r⃗')G^†_g(β̅)e^ik⃗.(g^-1_α̅r⃗-g^-1_β̅(r⃗')) = 1/N∑_r⃗,r⃗'G_g(α̅) h_α̅β̅(r⃗,r⃗')G^†_g(β̅) e^ik⃗'.(r⃗-r⃗')+ϕ_g(α,β) = G_g(α̅)h_α̅,β̅(k⃗')G^†_g(β̅)e^iϕ_g(α,β) (G_g,g): h_α,β(k⃗↦ G_g(α̅)h_α̅,β̅(k⃗')G^†_g(β̅)e^iϕ_g(α,β) From the third line to the fourth, we have used the fact that one can always write k⃗.(g^-1_α̅r⃗-g^-1_β̅(r⃗')) as k⃗'.(r⃗-r⃗')+ϕ_g(α,β) for some k⃗' and a constant ϕ_g(α,β) independent of r⃗-r⃗'— this is always true for symmetry operations which are linearly represented on the lattice sites. Now, let us consider the symmetry transformation g=g_b . g_a acting on h_α,β. Using Eq. <ref> followed by Eq. <ref> , we find h_α,β(k_x,k_y,k_z) = h_β,α(k_x,-k_y,-k_z) e^i k_x = h_α,β(-k_x,k_y,k_z) e^i k_x Eq. <ref> can only be satisfied if the real space amplitudes u(r⃗,r⃗') in Eq. <ref> satisfy U_α,β(r⃗)= u(y,z)(δ_x,1+ δ_x,0), where δ is the Kronecker delta not to be confused with the sublattice index, and u(y,z) is any complex function of the coordinates y and z. This form implies that, h_α,β = M[u(k_y,k_z)ζ(k_x)], ζ(k_x)= (1+exp(i k_x)) The function u(k_x,k_y) is the fourier transform of u(y,z) defined in Eq. <ref>. All other block matrices in Eq. <ref> can be expressed in terms of u(k_x,k_y) by applying symmetry transformations to h_α,β. Applying g_c to h_α,β using Eq. <ref> gives us h_α,γ=𝒜^2 M[u(k_z,k_x)ζ(k_y)], h_α,δ=𝒜 M[u(k_x,k_y)ζ(k_z)], where 𝒜 =exp(i (2π/3)τ^z). We note that 𝒜^n M[u]=M[ω^n u], where ω = 2 π/3. This gives us h_α,γ= M[ω^2 u(k_z,k_x)ζ(k_y)], h_α,δ= M[ω u(k_x,k_y)ζ(k_z)], where ω =exp(i 2π/3). Applying g_b to h_γ,δ gives us h_γ,δ =M[ω u(k_y,-k_z) ζ(-k_x)exp(i k_x)] Finally, applying g_c and g^2_c to h_γ,δ gives us h_β,γ =M[ω u(k_x,-k_y) ζ(-k_z)exp(i k_y)], h_δ,β =M[ω u(k_z,-k_x) ζ(-k_y)exp(i k_x)]. Eqs. <ref>, <ref>, <ref>, <ref>, along with Eq. <ref> specify the most general form of all block matrices appearing in H_ MFT(k⃗) in Eq. <ref> that is allowed by projective symmetries of the 𝖹_21 state. Now we express H_ MFT(k⃗) in the basis (ψ^α_1, ⋯ψ^δ_1, ψ^α_2, ⋯ψ^δ_2) to have H_ MFT(k⃗) = [ 0 h(k⃗); h^†(k⃗) 0; ]. The matrix the most general h(k⃗) allowed by the PSG is given by h(k⃗) = ( [ 0 ζ(k_x) u(k_y,k_z) ω ^2 ζ(k_y) u(k_z,k_x) ωζ(k_z) u(k_x,k_y); ζ(-k_x) u(-k_y,-k_z) 0 ω e^i k_yζ(-k_z) u(k_x,-k_y) ω e^-i k_xζ(k_y) u(-k_z,k_x); ω ^2 ζ(-k_y) u(-k_z,-k_x) ω e^-i k_yζ(k_z) u(-k_x,k_y) 0 ω e^i k_zζ(-k_x) u(k_y,-k_z); ωζ(-k_z) u(-k_x,-k_y) ω e^i k_xζ(-k_y) u(k_z,-k_x) ω e^-i k_zζ(k_x) u(-k_y,k_z) 0; ]) Now, we proceed to show that h(k⃗) has a maximum rank of 3 on the points (± k, ± k, ± k). First consider k⃗=(k,k,k). 
With the further shorthands u = u(k,k), v = u(-k,-k), ρ = u(k,-k) λ = u(-k,k), ζ = ζ(k), r = e^i k we have h(k,k,k)= ( [ 0 ζ u ζ u ω ^2 ζ u ω; ζ ^* v 0 ζρω ζ ^* λω; ζ ^* v ω ^2 ζ ^* λω 0 ζρω; ζ ^* v ω ζρω ζ ^* λω 0; ]) h(k,k,k) can now be brought to the row-echelon form by the sequence of elementary row-transformations given by R_2 ↔ R_1, R_3 →ω R_3 -R_1, R_4 →ω^2 R_4 -R_1 R_3 → uωζ R_3 - λζ^* R_2 , R_4 → R_4 u - R_2 ρ R_4 = -u ω(ζρ (ω +1)-ζ ^* λω)R_4 +uζω ^2 (ζ ^* λ +ζρ) R_3 . in conjunction with the using the identities ω^3=1, ζ r* =ζ^* and r r^*=1.The result of the row transformations is ( [ ζ ^* v 0 ζ ^* ρ r ω ζλ r^* ω; 0 ζ u ζ u ω ^2 ζ u ω; 0 0 ζ u (ω +1) (ζ ^* λ +ζρ) ζ u (ζ ^* λ +ζρ); 0 0 0 0; ]) The appearance of the 0 in the last diagonal element establishes that the maximum rank of h(k,k,k) can be 3. The analysis need not be repeated for the H_MFT at other gapless points like (-k,k,k), (k,-k,k) etc. All of h(± k, ± k, ± k) are related to h(k,k,k) by elementary “rank-preserving" row and column transformations. h(-k,k,k) can be obtained from h(k,k,k) by the transformations R_1 ↔ R_2, R_3 ↔ R_4, C_1 ↔ C_2, C_3 ↔ C_4, R_4 → r^* C_4, C_4 → r C_4, followed by two re-identifications: u ↔ v, which have been considered independent complex numbers in the proof; and ω↔ω^2 which survive the important properties ω^3=1 and 1+ω+ω^2=0. h(k,-k,k) can be obtained in turn from h(-k,k,k) by the transformations R_2 → R_3, R_3 → R_4, R_4 → R_2, C_2 → C_3, C_3 → C_4, C_4 → C_2, R_1 →ω^2 R_1, C_1 →ω^2 C_1. h(k,k,-k) can be obtained from h(-k,k,k) by the transformations R_2 → R_4, R_4 → R_3, R_3 → R_2, C_2 → C_4, C_4 → C_3, C_3 → C_2, R_1 →ω R_1, C_1 →ω C_1. This completes the proof that h(± k, ± k, ± k) has a maximum rank of 3, and consequently, H_MFT(± k, ± k, ± k) has two gapless bands which are protected by projective symmetries against the addition of arbitrarily long-ranged terms in the mean-field ansatz. §.§ 𝖴(1) The 𝖴(1) spin liquid mean field ansatz has the following form: U_ij = iU^0_ijτ_0 + U^z_ijτ_z, dictated by the fact that the ansatz is invariant under the 𝖴(1) IGG gauge transformation. We would like to investigate how the PSG solutions we obtained constrain the nearest neighbor mean field ansatz by subjecting them to the following test: ∀ g∈𝖯2_13×𝖹^𝒯_2: Ĝ_g ĝ(U_ij) = U_ij. Among the PSGs we have, we first study the class in which A=0. After enumerating all the conditions imposed by the PSG, we arrive at the following nearest neighbor mean field ansatz in the class where A = 0.: U_i = λτ_z, where i ∈{1,…,12}. Next we would like to argue that, when A≠ 0, there would be no nearest neighbor mean field ansatz. We write U_1/3≡ i𝔘_1/3exp [iφ_1/3τ_z]. The TRS conditions on these two bonds give: 2φ_1 + A = π, φ_3 = π/2. However, the condition Ĝ_b ĝ_b(U_1)=U_1 gives us: φ_3 + 2A + 4ϕ_3 = φ_1. We time the above equation by 2, and combined with the last two relations, we would immediately arrive at 5A = 0. Note that we had 3A=0. Thus the ansatz does not vanish only when A=0.We conclude that we obtain only one nearest neighbor 𝖴(1) mean field ansatz: U_ζ = λτ_z, ζ∈{1,…,12 } a_s = ωτ_z, i∈{α,…,δ}.
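As a cross-check of the rank argument for the 𝖹_21 state above, the displayed h(k⃗) can also be evaluated numerically. The short Python sketch below is our own illustration: the function u is an arbitrary short-ranged Fourier series standing in for the PSG-allowed amplitudes, and the tolerance used in the rank computation is a numerical choice. It confirms that h has rank at most 3 on the nodal star (± k, ± k, ± k), and hence that H_MFT has two exact zero modes there, while the rank is 4 at a generic momentum.

```python
import numpy as np

w = np.exp(2j * np.pi / 3)                        # omega
zeta = lambda k: 1 + np.exp(1j * k)

rng = np.random.default_rng(2)
c = rng.normal(size=4) + 1j * rng.normal(size=4)  # random short-ranged amplitudes

def u(k1, k2):
    """A generic (random) Fourier series standing in for u(k1, k2)."""
    return c[0] + c[1] * np.exp(1j * k1) + c[2] * np.exp(1j * k2) + c[3] * np.exp(1j * (k1 + k2))

def h(kx, ky, kz):
    """The most general PSG-allowed h(k) for the Z2-1 state, transcribed from the text."""
    return np.array([
        [0, zeta(kx) * u(ky, kz), w**2 * zeta(ky) * u(kz, kx), w * zeta(kz) * u(kx, ky)],
        [zeta(-kx) * u(-ky, -kz), 0, w * np.exp(1j * ky) * zeta(-kz) * u(kx, -ky),
         w * np.exp(-1j * kx) * zeta(ky) * u(-kz, kx)],
        [w**2 * zeta(-ky) * u(-kz, -kx), w * np.exp(-1j * ky) * zeta(kz) * u(-kx, ky), 0,
         w * np.exp(1j * kz) * zeta(-kx) * u(ky, -kz)],
        [w * zeta(-kz) * u(-kx, -ky), w * np.exp(1j * kx) * zeta(-ky) * u(kz, -kx),
         w * np.exp(-1j * kz) * zeta(kx) * u(-ky, kz), 0]])

k = rng.uniform(0.1, np.pi - 0.1)
for sx, sy, sz in [(1, 1, 1), (-1, 1, 1), (1, -1, 1), (1, 1, -1)]:
    assert np.linalg.matrix_rank(h(sx * k, sy * k, sz * k), tol=1e-8) <= 3  # two zero modes

kx, ky, kz = rng.uniform(0.1, np.pi - 0.1, 3)
print("rank off the nodal star:", np.linalg.matrix_rank(h(kx, ky, kz), tol=1e-8))  # generically 4
```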
Prospects for Revealing Intermediate-Mass Black Holes in NGC 1399 using SKA [short title: IMBHs in NGC 1399 GCs]
B. Karimi (Canada Cambridge Academy, Markham, ON, Canada); P. Barmby (ORCID 0000-0003-2767-0090; Department of Physics & Astronomy, Western University, London, ON, Canada; Institute for Earth and Space Exploration, Western University, London, ON, Canada); S. Abbassi (ORCID 0000-0003-0428-2140; Department of Physics & Astronomy, Western University, London, ON, Canada)
Corresponding author: P. Barmby, pbarmby@uwo.ca
§ ABSTRACT This study investigates the detectability of intermediate-mass black holes (IMBHs) within the mass range 10^2 M_⊙≤ M_ BH≤ 10^5 M_⊙ in the globular star clusters of NGC 1399 at a frequency of 300.00 MHz. Employing the theoretical Bondi accretion model and the empirical fundamental plane of black hole accretion, we estimate IMBH masses based on bolometric luminosity and X-ray/radio luminosities, respectively. By simulating a 3-hour observation of 77 globular cluster candidates using the Square Kilometer Array, we identify radio detection benchmarks indicative of accretion onto IMBHs. Our results show that IMBHs inside the globular star clusters located in NGC 1399 are indeed detectable, with the Bondi accretion model providing IMBH mass estimates ranging from 2.93 × 10^3.0 ± 0.39 M_⊙ to 7.43 × 10^4.0 ± 0.39 M_⊙, and the empirical fundamental-plane relation suggesting a characteristic IMBH mass of 3.41× 10^5.0 ± 0.96 M_⊙. These findings highlight the presence and detectability of IMBHs in globular clusters, offering insights into their role as precursors to supermassive black holes and enriching our understanding of black hole formation and evolution in astrophysical environments. § INTRODUCTION Observational signatures facilitate the widespread identification of two distinct categories of astrophysical black holes: stellar-mass black holes (SBHs) and supermassive black holes (SMBHs) within the Universe. Stellar-mass black holes, with masses up to 10^2 M_⊙, are observed within the Milky Way. Supermassive black holes, with masses exceeding 10^6 M_⊙, reside at the cores of galaxies <cit.>. Intermediate-Mass Black Holes (IMBHs) are defined as having masses 10^2 M_⊙≤ M_ IMBH≤ 10^5 M_⊙ <cit.>. The detection and characterization of IMBHs would provide a crucial link between the extensively studied SBHs and SMBHs <cit.>. If IMBHs indeed exist, they could potentially serve as seeds for the formation of supermassive black holes and the development of galactic bulges <cit.>. Scenarios proposed for the formation of IMBHs include the direct collapse of gas in atomic cooling halos in the early universe <cit.>, the collapse of extremely massive Population III stars <cit.>, and the interaction of stars in the center of dense star clusters <cit.>. Observational attempts to identify IMBHs typically concentrate on astrophysical environments including ultra-/hyper-luminous X-ray sources, globular clusters (GCs), and dwarf galaxies <cit.>. Dwarf galaxies, having undergone minimal mergers and evolving largely in isolation, harbor central black holes in nascent stages, potentially elucidating the evolutionary pathways towards supermassive black holes from IMBH seeds <cit.>. GCs emerge as particularly promising sites for IMBH exploration, having long been recognized as hosts of X-ray binary (XRB) systems <cit.>, and offering critical insights into the formation mechanisms of massive black holes during the early universe <cit.>.
Debates persist regarding the presence of IMBHs in GCs: for example, <cit.> and <cit.> suggested that most GCs are unable to retain IMBHs against gravitational wave or Newtonian recoils from mergers; to explore this issue more deeply, it is necessary to study a large number of GCs per galaxy. This underscores the ongoing importance of resolving the question of IMBH existence within astrophysical contexts <cit.>. The X-ray and radio emission emanating from the central engines of stellar mass accreting X-ray binary systems are predominantly attributed to the presence of accretion disks (including coronae) and collimated jets. These emissions present a valuable avenue for investigating the correlation between the disk-jet dynamics and the mass of the black hole. Radio emission from XRBs within GCs — attributed to distinctive non-thermal synchrotron radiation associated with jets — has been successfully detected by <cit.>. Recent observational efforts have focused on detecting IMBHs in GCs through various methods, including dynamical studies, X-ray emissions, and radio observations. For instance, <cit.> employed stellar dynamics to infer the presence of IMBHs in an NGC 1399 GC, while <cit.> highlighted the importance of multi-wavelength observations in identifying and confirming IMBH candidates. <cit.> proposed that current radio telescopes can not detect radio signals from IMBHs in GCs. <cit.> performed a simulation study on Next Generation Very Large Array (ngVLA) detectability of IMBHs in GCs around NGC 4472 and concluded that individual ∼ 10^5M_⊙ IMBHs would be detectable, with detections to ∼10^4.5M_⊙ possible through radio stacking. Massive elliptical galaxies like NGC 4472 are particularly promising environments for such studies due to their rich GC systems and higher likelihood of hosting IMBHs <cit.>. The aim of this work is to determine whether IMBHs are detectable in NGC 1399 GCs in future observations with the SKA. We chose NGC 1399 due to its rich GC system, its location in the Southern Hemisphere, and the previous detection of a GC black hole <cit.>. These factors make NGC 1399 an excellent candidate for searching for possible IMBHs using current and future radio telescopes. The SKA, with its unprecedented sensitivity and resolution, is well-suited for detecting the faint radio emissions that may be associated with accretion processes onto IMBHs. By targeting NGC 1399, a massive elliptical galaxy with a rich GC system, we aim to improve the statistical constraints on the presence and properties of IMBHs in GCs, thereby enhancing our understanding of their formation and retention mechanisms. In this study, we employed two different methods: the theoretical Bondi accretion model <cit.> and the empirical fundamental-plane relation for the hard X-ray state, along with measurements of X-ray luminosities <cit.> to estimate radio luminosity and black hole mass. If the estimated black hole mass falls within the range of 10^2 M_⊙≤ M_BH≤ 10^5 M_⊙, then the black hole is considered an IMBH. This paper is organized as follows: In Section <ref>, we describe NGC 1399 and the relevant X-ray observations of its GC system. In Section <ref>, we detail the methodologies employed, including the Bondi accretion model, the Fundamental Plane formula, and a simulated observation of NGC 1399 at 300 MHz with the SKA. 
In Section <ref>, we quantify black hole masses using the two different methods and determine the radio luminosity detection thresholds, explaining them as signatures of accretion onto central IMBHs in globular clusters. Finally, in Section <ref> we provide our discussion and conclusions. § GLOBULAR CLUSTERS IN NGC 1399 NGC 1399 is a cD galaxy within the Fornax cluster at a distance of 20.68± 0.50 Mpc and mass of ∼ 10^10.89 ± 0.01M_⊙ <cit.>. It hosts a substantial population of 6000–6500 globular clusters <cit.>, with an effective GC system diameter of a few tens of kiloparsecs <cit.>. As with most large galaxies, NGC 1399's GCs can be divided into red, metal-rich, and blue, metal-poor subpopulations <cit.>. The typical GC half-starlight diameter of ∼ 5 pc corresponds to ∼ 50 mas at NGC 1399's distance. Chandra X-ray observations <cit.> offer invaluable insights into the X-ray properties of GCs in NGC 1399 and the formation characteristics of low-mass X-ray binaries (LMXBs) within them. A significant proportion of the 2–10 keV X-ray emission emanates from the central region of NGC 1399, covering an area of approximately 8'× 8', corresponding to 48.17 × 48.17 kpc. A considerable fraction of this emission is attributed to LMXBs both in the galaxy field and within GCs <cit.>. LMXBs exhibit an association with both blue and red globular clusters within NGC 1399, with the red clusters hosting between 60-70% of all LMXBs <cit.>. Some 16% of red clusters and 5% of blue clusters are associated with LMXBs <cit.>. The large number of GC X-ray sources in NGC 1399 makes this galaxy a great candidate to search for radio emission from IMBHs. In particular, the brightest color-confirmed GC X-ray source in NGC 1399 has an X-ray luminosity exceeding 4× 10^39 erg s^-1 <cit.>. This source is situated within one of the most metal-rich GCs and exhibits consistent X-ray luminosity, suggesting the potential presence of multiple LMXBs within a single GC and providing an important impetus for deep radio observations. For our analysis in this study, we utilize the combined data from Chandra's X-ray observations of NGC 1399, as detailed in <cit.>. This dataset was selected because it represents the deepest available X-ray observation of NGC 1399 GCs, covering the largest area. Figure <ref> illustrates the luminosity distribution of LMXBs associated with globular clusters in NGC 1399. The histogram reveals that the majority of LMXBs exhibit X-ray luminosities between 10^38 erg s^-1 and 10^39 erg s^-1, with a noticeable decline at higher luminosities. This pattern suggests that lower luminosity LMXBs are more common within the globular clusters of NGC 1399, while higher luminosity LMXBs are relatively rare. The presence of LMXBs with X-ray luminosities up to 10^39 erg s^-1 hints at the existence of more massive compact objects or highly efficient accretion processes in some clusters. § METHODOLOGY In this study, we employ two methods to estimate black hole masses in NGC 1399 GCs: * Bondi Accretion Model: We describe the classic Bondi accretion model for black holes, referencing works such as <cit.>. * Fundamental Plane Formula: We demonstrate the application of the fundamental plane formula as a mass predictor. This relation elucidates the correlation between a black hole's mass and its X-ray and radio luminosities, supported by studies such as <cit.>. If an estimated black hole mass falls within 10^2 M_⊙≤ M ≤ 10^5 M_⊙, we classify it as an IMBH. 
After estimating the black hole masses, we calculate their radio luminosities using the methodology outlined in <cit.>, and detectability using the simulated observations described in Section <ref>. Each method is described in detail below. §.§ Simulated Observation To simulate a radio observation of NGC 1399 GCs, we used the SKA continuum sensitivity calculator <cit.>. We assumed a three-hour observation with SKA1-LOW at a wavelength of 1 metre (ν=300 MHz). This frequency ensures adequate resolution (122 mas, corresponding to about 12.2 pc at the distance of NGC 1399) to distinguish individual GCs. The Gaussian full width half maximum beam size ranges from 6 to 300 arcsec, and the field of view is 120 arcmin <cit.>, corresponding to 720 kpc at 20.7 Mpc, sufficient to encompass hundreds of GC candidates in a single SKA pointing. The calculator gives a limiting flux density of S_ν = 100.5 μJy. With σ = 33.5 μJy beam^-1 <cit.>, we have S_ν = 3σ, converting to a radio luminosity of L_R = 1.617 × 10^34 erg s^-1, assuming a spectral index α = 0 <cit.>. §.§ Bondi Accretion Model For our analysis, we employ the classical Bondi accretion model to estimate the black hole mass-accretion rate and subsequently the black hole mass. The gas characteristics at the Bondi radius and the black hole mass are critical parameters in determining the Bondi accretion rate. According to this model, a black hole in a GC accumulates mass from the surrounding tenuous gas at the Bondi accretion rate, which is consistent with the synchrotron radio emission observed from black holes in GCs <cit.>. In the ideal spherical Bondi model, a black hole accretes ideal gas distributed locally from the GC environment. For simplicity, we do not account for the impact of globular star cluster's mass on the black hole accretion rate as it has been discussed in more detail by <cit.>. Additionally, we assume the black hole is non-spinning. Despite its simplicity, this framework allows us to effectively use the Bondi model for our analysis. To estimate the black hole accretion rate, we utilize the classical Bondi accretion model. Hydrodynamical simulations of gas flows in primordial galaxies, as shown by <cit.>, establish an adiabatic process. For a specified black hole mass, gas is accreted at 3% of the Bondi rate for a gas density of 0.2 particles cm^-3 <cit.> and a temperature of 10^4 K <cit.>. After determining the black hole accretion rate, we estimate the black hole mass. §.§ Black Hole Mass Estimation Using Bondi Accretion Model To estimate black hole mass using the Bondi accretion model, we employ the relation between mass accretion rate and black hole mass (Equation <ref>). Unlike previous studies <cit.>, which used the Fundamental Plane formula to estimate black hole mass after describing the Bondi model, we integrate both methods in our approach. To estimate IMBH mass-accretion rate (Ṁ), we establish a relationship between bolometric luminosity (L^B) and Ṁ, considering a linear relation at low accretion rates. For accretion rates below 2% of the Eddington rate, we employ a radiative efficiency of ϵ=0.1 <cit.>. The Bondi accretion rate depends on gas density, temperature, and black hole mass <cit.>. By applying Equation <ref>, derived from <cit.>, with gas density n=0.2 cm^-3 and temperature T=10^4 K as noted by <cit.>, we calculate the black hole mass. The uncertainty in black hole mass estimation using the Bondi accretion model is 0.39 dex <cit.>. 
Ṁ_BH= 3.2 × 10^14×(M_BH/2× 10^3M_⊙)^2 (n/0.2)(T/10^4)^-3/2kg s^-1 In this study, we do not consider the influence of the magnetic field on black hole accretion rates as pointed out by <cit.>, and solve the Bondi model for non-spinning black holes. Our analysis steps are listed below. * We use the equation L^B = ϵṀ c^2, where L^B is the bolometric luminosity, ϵ = 0.1 is the radiative efficiency, Ṁ is the mass accretion rate, and c is the speed of light. * The bolometric luminosity is related to the X-ray luminosity by L_X (1 ∼ 10 keV) = ϵ L_Bol, with ϵ∼ 0.1 <cit.>. * Determining L^B allows us to solve for Ṁ using the equation from step (1). * Using the determined Ṁ, we estimate the black hole mass with the Bondi formula (Equation <ref>), assuming n = 0.2 cm^-3 and T = 10^4 K <cit.>. * We then use Equation <ref> to estimate the radio luminosity L_R of the black hole, where L_X (2–10 keV) is the X-ray luminosity and M_BH is the black hole mass in units of M_⊙. Our Bondi method thus depends on the fundamental plane relationship <cit.> as the only current method to link radio and X-ray luminosity with black hole mass. The Bondi model predicts a mass range for NGC 1399 GC black holes from 2.93 × 10^3.0 ± 0.39 M_⊙ to 7.43 × 10^4.0 ± 0.39 M_⊙. Given a typical globular cluster mass of ∼ 10^5 M_⊙, these results from the Bondi model suggest that the black hole masses are within the expected range for IMBHs. The resulting radio luminosities range from 10^33.41 ± 0.88 erg s^-1 to 10^34.96 ± 0.88 erg s^-1, which is around the detection threshold of radio luminosity L_R = 1.617 × 10^34 erg s^-1. Thus, we predict that radio emission from such IMBHs should be detectable with SKA observations. §.§ Fundamental Plane of Black Hole Activity Detecting non-thermal synchrotron radio emission from gas accretion by a central black hole is another approach to identifying IMBHs <cit.>. To estimate radio luminosities of possible IMBHs, we utilize the fundamental plane (FP) formula, which correlates X-ray luminosity, radio luminosity, and black hole mass <cit.>. The fundamental plane of black hole activity is applicable for the hard X-ray state (0.5–10 keV), validating its use in estimating black hole masses <cit.>. This formula relates radio emission from jet power, X-ray emission from the accretion disk, and black hole mass <cit.>. The typical L_X range used is 1–10 keV <cit.>, representing a fraction of the bolometric luminosity <cit.>. While <cit.> argue that the fundamental plane formula overestimates black hole mass in active galactic nuclei (AGN) with hard X-ray luminosity L_14-195 keV≥ 10^42 erg s^-1, they do not address its validity for globular clusters. Previous studies support its use for estimating IMBH masses in GCs <cit.>. §.§ Quantifying Black Hole Mass using the Fundamental Plane Formula The fundamental plane formula demonstrates the correlation between radio luminosity, X-ray luminosity, and black hole mass <cit.>. One form of this relation, which explains the correlation between radio flux, X-ray luminosity, black hole mass, and source distance, is given by <cit.>: S_ν = 10 (L_X/3× 10^31)^0.6(M_BH/10^2 M_⊙)^0.78(d/10 kpc)^-2μ Jy where S_ν is the radio flux density in μJy, L_X is the X-ray luminosity in erg s^-1, M_BH is the black hole mass in solar masses, and d is the source distance in kpc. <cit.> gives the uncertainty in black hole mass estimation using this formula as 0.96 dex and notes that uncertainty propagation in the FP relationship is complex. 
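For illustration, the steps above can be collected into a short numerical sketch. This is not the analysis code used for this work; it is a minimal example that applies the Bondi relation above with the stated assumptions (radiative efficiency 0.1, bolometric correction L_X = 0.1 L_Bol, n = 0.2 cm^-3, T = 10^4 K) together with the fundamental plane relation log L_R = 7.33 + 0.6 log L_X + 0.78 log M_BH quoted in the next subsection; the input X-ray luminosity is an arbitrary example value rather than a measured source.

import numpy as np

C_CGS = 2.998e10   # speed of light [cm s^-1]
EPS_RAD = 0.1      # radiative efficiency in L_B = eps * Mdot * c^2
X_FRAC = 0.1       # assumed fraction L_X (1-10 keV) = 0.1 * L_Bol

def bondi_mass_and_radio(L_X, n_gas=0.2, T_gas=1e4):
    """L_X in erg s^-1; returns (M_BH in M_sun, L_R in erg s^-1)."""
    # Steps 1-3: bolometric luminosity and accretion rate.
    L_bol = L_X / X_FRAC                                 # erg s^-1
    mdot_kg = L_bol / (EPS_RAD * C_CGS**2) / 1e3         # g s^-1 converted to kg s^-1
    # Step 4: invert Mdot = 3.2e14 (M_BH / 2e3 M_sun)^2 (n/0.2) (T/1e4)^(-3/2) kg s^-1.
    scale = 3.2e14 * (n_gas / 0.2) * (T_gas / 1e4) ** (-1.5)
    M_BH = 2e3 * np.sqrt(mdot_kg / scale)                # M_sun
    # Step 5: fundamental plane estimate of the radio luminosity.
    log_L_R = 7.33 + 0.6 * np.log10(L_X) + 0.78 * np.log10(M_BH)
    return M_BH, 10.0 ** log_L_R

M_BH, L_R = bondi_mass_and_radio(1e38)   # example LMXB X-ray luminosity
print(f"M_BH ~ {M_BH:.2e} M_sun, L_R ~ {L_R:.2e} erg/s")

Comparing the resulting L_R with the 3σ threshold of Section <ref> (L_R = 1.617 × 10^34 erg s^-1) then flags which sources would be detectable in the simulated SKA observation.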
The uncertainties reported may underestimate the true errors due to the linear nature of the FP parameters when expressed in log space. Using the X-ray luminosities of 77 LMXBs from <cit.>, we solve Equation <ref> to estimate black hole masses. It is important to note that this form of the fundamental plane formula is applicable at low frequency. Another form of the fundamental plane relation relates black hole mass, X-ray luminosity, and radio luminosity <cit.>: log L_R = 7.33 + 0.6 log L_X + 0.78 log M_BH In this formula, L_R and L_X are in erg s^-1, and M_BH is in solar masses. The uncertainty of the estimated radio luminosity is 0.88 dex <cit.>, which we consider when presenting radio luminosity values. To estimate the mass of black holes detectable at 300 MHz, we follow these steps: * Set the estimated radio flux density at 20.7 Mpc to S_ν = 100.5 μJy, as explained in Section <ref>. * Use X-ray luminosities, L_X, of 77 LMXBs taken from <cit.> to estimate black hole mass in solar masses using Equation <ref>. * Calculate the black hole radio luminosity using Equation <ref>. § RESULTS Figure <ref> depicts the relationship between black hole mass and X-ray luminosity for IMBHs in NGC 1399, using estimates from both the Bondi accretion model and the fundamental plane method. The Bondi model demonstrates a clear positive correlation, indicating that higher mass IMBHs tend to have greater X-ray emissions. On the other hand, the FP method suggests higher masses across a broader range of X-ray luminosities, highlighting notable discrepancies between the two methods. These differences underscore the inherent uncertainties in the FP method, as noted by <cit.>. Identifying LMXBs with luminosities L_X ≥ 10^41 erg s^-1 as potential IMBH candidates further emphasizes the need for refined models to accurately characterize black hole properties. In Figure <ref>, we observe the relationship between black hole mass and radio luminosity for IMBHs, comparing the Bondi model (blue circles) and the FP method (orange crosses). The Bondi model indicates a positive correlation between mass and radio luminosity, while the FP method does not show a significant correlation. The Bondi model predicts 34 IMBHs are below the threshold of radio luminosity, making them undetectable, whereas the FP model predicts all 77 BHs are above the threshold, suggesting full detectability by SKA. Figure <ref> showcases the spatial distribution of predicted IMBH masses in NGC 1399, based on data from <cit.>. The color gradation represents the estimated Bondi masses of these IMBHs, ranging from 10^3 to 10^4 M_⊙. This distribution indicates that IMBHs are concentrated in specific regions of the galaxy, likely within gravitational potential wells of globular clusters or dense stellar environments. The variation in masses points to diverse accretion histories and environments for these black holes. The fact that many IMBHs exceed the SKA detection threshold highlights the potential for future radio observations to deepen our understanding of their nature, formation, and role in galactic dynamics. Finally, Figure <ref> presents the distribution of IMBHs estimated using both the Bondi accretion model and the FP method. The Bondi model identifies all 77 low-mass X-ray binaries (LMXBs) as IMBHs, with masses ranging from 10^3 to 10^4 solar masses, suggesting a consistent presence of IMBHs across the sample. 
In contrast, the FP method identifies only one IMBH with mass of 10^5 solar masses, and 76 BHs with mass range of 10^6-7 solar masses, highlighting a significant discrepancy between the two approaches. The Bondi model appears more inclusive, while the FP method sets a higher mass threshold. This underscores the importance of employing diverse methodologies in black hole studies to capture the full spectrum of IMBH detection and classification. § DISCUSSION AND CONCLUSIONS Our work is consistent with that of <cit.> in finding a gap between estimated IMBH masses using the Bondi accretion model and the Fundamental Plane method. This gap arises from inherent differences in their assumptions and methodologies. The Bondi model, relying on spherical accretion from hot gas with specific density and temperature, often underestimates the mass when real conditions deviate from this ideal. As noted by <cit.>, gas densities in globular clusters are poorly known, with measurements existing for only two Milky Way GCs: 47 Tuc and M15. Conversely, the FP method, which uses empirical correlations between black hole mass, X-ray luminosity, and radio luminosity, can overestimate the mass for high-luminosity sources due to its broad applicability assumptions <cit.>. These discrepancies underscore the need for multiple estimation methods to cross-verify black hole masses, accounting for the limitations and uncertainties of each approach. To address these limitations, recent advancements suggest modifying the Bondi model to include slow rotation, bridging the gap between classical Bondi and advection-dominated accretion flows (ADAFs). Studies have shown that incorporating slow rotation and external gravitational influences significantly reduces the accretion rate compared to the classical Bondi scenario <cit.>. This reduction could lower estimated black hole masses by one or two orders of magnitude, potentially widening the gap between the Bondi and FP model estimates <cit.>. Observations of dense regions of massive elliptical galaxies, where hot gas rotates very slowly, support the existence of Bondi-type quasi-spherical accretion flows <cit.>. This suggests similar processes could apply to IMBHs, leading to more accurate yet lower mass estimates with improved Bondi models. Therefore, incorporating realistic dynamics and external gravitational potentials into accretion models is crucial for refining our understanding of black hole masses across different regimes. Detailed numerical simulations are essential to validate these models and establish universal scaling relations in black hole accretion physics, ultimately enhancing our comprehension of IMBH characteristics and their role in galactic dynamics. To thoroughly address the discrepancies and refine mass estimates, it is imperative to employ multiple models and diverse samples, cross-verifying results to ensure robustness and accuracy in our understanding of black hole accretion processes. Our study investigates the detectability of IMBHs within the GCs of NGC 1399 using the Square Kilometer Array (SKA). Our findings indicate that IMBHs within the GCs of NGC 1399 are detectable using relatively short observations with SKA-Low (flux density limit of 100.5 μJy in a 3-hour observation), with significant implications for future observational campaigns. 
The SKA offers the possibility of both deeper and higher-frequency observations (with SKA-Mid); we defer detailed examination of these possibilities, and of observations of other systems such as ultra-compact dwarf galaxies <cit.>, to future work. We employed two distinct methods for estimating IMBH masses: the theoretical Bondi accretion model and the empirical FP formula. Our analysis using the Bondi model predicts IMBH masses in NGC 1399's GCs ranging from 2.93 × 10^3.0 ± 0.39 M_⊙ to 7.43 × 10^4.0 ± 0.39 M_⊙, while the FP formula suggests a broader range, with estimates up to 3.41 × 10^5.0 ± 0.96 M_⊙. The FP method also indicates higher radio luminosities, suggesting a greater likelihood of detecting these IMBHs with the SKA. The significant discrepancies between the two methods are attributable to their underlying assumptions and methodologies. The Bondi model, which assumes spherical accretion from hot gas with specific density and temperature, typically estimates lower black hole masses compared to the FP method. For higher X-ray luminosity, the gap between estimated mass from the Bondi model and the FP narrows to only one order of magnitude. The two methods are not entirely independent, since the Bondi model requires use of the fundamental plane to estimate radio luminosity; however their discrepancies highlight the need for robust methods to accurately estimate black hole masses, accounting for the limitations and uncertainties inherent in each approach. Recent advancements propose modifications to the Bondi model that include slow rotation and external gravitational influences. Studies have demonstrated that incorporating these factors significantly reduces the accretion rate compared to the classical Bondi scenario, potentially leading to even lower mass estimates for IMBHs. This underscores the importance of adopting more realistic models that account for the complexities of actual accretion processes. The significant differences between the Bondi and FP model estimates highlight the need for developing more sophisticated models to improve the accuracy of black hole mass estimates. Detailed numerical simulations and multi-wavelength observations are essential to refine these models and establish robust scaling relations across different black hole mass regimes. By utilizing both theoretical and empirical models, our study offers valuable insights into the presence and properties of IMBHs in NGC 1399, contributing to the broader understanding of black hole formation and evolution in astrophysical environments. The SKA's unprecedented sensitivity and resolution will be instrumental in uncovering the faint radio emissions associated with accretion processes onto IMBHs, thereby improving statistical constraints on the presence and properties of IMBHs in GCs. We thank the anonymous referee for a timely and helpful report. CXO, HST astropy <cit.> aasjournal
http://arxiv.org/abs/2409.02842v1
20240904161414
SNNAX -- Spiking Neural Networks in JAX
[ "Jamie Lohoff", "Jan Finkbeiner", "Emre Neftci" ]
cs.NE
[ "cs.NE", "cs.LG" ]
§ ABSTRACT SNN simulators are essential tools to prototype biologically inspired models and neuromorphic hardware architectures and predict their performance. For such a tool, ease of use and flexibility are critical, but so is simulation speed, especially given the complexity inherent in simulating SNNs. Here, we present SNNAX, a JAX-based framework for simulating and training such models with PyTorch-like intuitiveness and JAX-like execution speed. SNNAX models are easily extended and customized to fit the desired model specifications and target neuromorphic hardware. Additionally, SNNAX offers key features for optimizing the training and deployment of SNNs, such as flexible automatic differentiation and just-in-time compilation. We evaluate and compare SNNAX to other commonly used ML frameworks used for programming SNNs. We provide key performance metrics, best practices, and documented examples for simulating SNNs in SNNAX, and implement several benchmarks used in the literature.
§ INTRODUCTION SNN simulators leveraging parallel processing have become indispensable tools to rapidly evaluate and test the performance of SNN models. Furthermore, as most neuromorphic hardware is based on spiking neurons, such simulators have become essential to evaluate the performance of prototypes at large scale. Many state-of-the-art SNNs are commonly trained with gradient-based learning, typically facilitated through a modern machine learning (ML) framework like PyTorch or JAX <cit.> and parallel computing accelerators like GPUs and TPUs. These frameworks are highly modular and extensible and are thus a popular choice for a dynamic and growing field such as neuromorphic computing. Furthermore, ML frameworks are optimized for performance, allowing for efficient training of potentially large-scale SNN models on modern hardware accelerators. A host of recent work has leveraged ML frameworks for simulating SNNs <cit.>. Here, we report SNNAX, our JAX-based library for simulating SNNs that is built on Equinox <cit.>, a thin neural network and numerical computation library.
We chose JAX as the underlying ML framework because it includes several features that are essential for algorithmic exploration while offering high execution performance. Exploring novel learning algorithms is essential to address challenges arising from backpropagation-based methods, e.g. by incorporating results from synaptic plasticity <cit.>. This type of algorithmic exploration is enabled by JAX's extensive automatic differentiation library, its Just-In-Time (JIT) compiler based on XLA, and a paradigm that separates functions and parameters (functional programming). We compare the effectiveness of SNNAX and similar SNN libraries provide notebooks implementing the standard benchmarks inspired by the Neurobench initiative <cit.>. § RELATED WORK Due to neuromorphic computing borrowing from both Neuroscience and AI, there is a significant functional overlap between SNN "brain" simulators like NEST<cit.>, Brian2 <cit.> and GeNN <cit.> and SNN libraries for machine learning frameworks. While these brain simulators enable distributed, highly scalable simulations on supercomputers, their main design principles are biological accuracy and reproducibility and are thus currently unsuited to explore novel AI applications. Missing in such frameworks are traceable dynamic variables that enables the computation of gradients and thus the efficient training of SNN. Thus, most SNN libraries are built around an automatic differentiation (AD) library like PyTorch or JAX that allows for easy and efficient gradient computations. Popular library choices for PyTorch include snnTorch, Norse and Spiking Jelly <cit.>. snnTorch leverages PyTorch's own compiler to accelerate the otherwise impractically slow training of large-scale SNN in Python. Norse also builds on PyTorch but instead of discretizing incoming spikes into a fixed temporal grid, it leverages an event-driven simulation scheme. Both capitalize on many of the already implemented ML features like optimizers, data loading and popular neural network layers such as convolutions. Spiking Jelly leverages custom CUDA kernels for the spiking neuron dynamics implemented through the CuPy interface to further accelerate the ML workloads. It supports both layer-by-layer and step-by-step execution of the neuron models, thereby giving a lot of flexibility to the user. SINABS <cit.> is another example for a PyTorch-based library that is functionally similar to snnTorch, but with a focus on neuromorphic hardware. Gradient computations can be further accelerated with the EXODUS extension <cit.> which uses implicit differentiation and custom CUDA kernels for the SRM model. However, these features only support also only feed-forward architectures and a limited class of neurons. A similar neuromorphic-hardware focused library is SLAYER <cit.>, which includes a functional simulation of the Intel Loihi and custom CUDA kernels for the computation of delay gradients. JAX is a popular alternative to PyTorch which leverages a functional programming paradigm. Treating many of its features as function transformations, JAX puts emphasis on composability and uses the model's source code as a template to generate new source code that supports JIT compilation, vectorization or auto-differentiability. JAX itself does not provide convenience functions and neural network building blocks, but there exist several neural network libraries with different design paradigms such as Haiku, Flax and Equinox. 
While not strictly necessary to create a SNN library as neatly demonstrated by Rockpool, they can massively streamline implementation and improve user experience. Rockpool <cit.> is one of the earliest effort to leverage JAX for efficient SNN simulation. Currently, Rockpool is used as a training platform for the accompanying Xylo SNN inference platform, and so its published functionality is limited to linear layers and LIF neuron dynamics. Likewise, JAXSNN falls into this line of hardware-centric SNN libraries. JAXSNN is designed with a time-continuous approach to SNNs in mind, exploiting event-prop <cit.> and time-to-first-spike methods <cit.> for gradient computation which tie seamlessly with the accompanying BrainScaleS-2 platform to enable hardware-in-the-loop training. For this reason, it is not a general-purpose training framework that supports arbitrary connectivity and neuron types but rather focuses on maximally enhancing performance on the underlying platform. Haiku and Flax <cit.> are frameworks that reimplement several elements of the JAX API such as batching, internal state management and random number generation. One of the most recently published SNN libraries is Spyx <cit.> builds on the Haiku library. It is designed for maximum training speed on feed-forward architectures and achieves this through layer-by-layer execution and Haiku's loop unrolling feature. This reduces the CPU-GPU communication overhead significantly and allows the batch-parallel application of stateless operations. The downside of this method is the lack of support for arbitrary recurrent connectivity, which can be beneficial on some tasks. Thanks to Haiku, Spyx supports a variety of neural network blocks and neuron models while still achieving similar performance to other frameworks that require the implementation of custom CUDA kernels. It also supports the mapping of the trained parameters to neuromorphic hardware via the Neuromorphic Intermediate Representation (NIR) <cit.>. In a similar vein, Slax <cit.> builds on Flax and is geared towards online training and algorithmic exploration. With this goal in mind, Slax not only supports BPTT but various other approximations like OSTL, RTRL, OTTT, OTPE and FPTT. As opposed to Spyx, Slax supports almost arbitrary connectivity thereby enabling recurrent connections. This comes at the price of step-by-step execution of the entire network architecture as opposed to Spyx' layer-by-layer approach. Flax and Haiku come with the downside that users interested in exploring beyond provided models are required to learn Haiku or Flax, and a much less intuitive interface compared to PyTorch. In contrast, Equinox is a minimal library that offers many scientific computation and neural network features, while being fully compatible with all existing JAX functionalities. SNNAX builds on Equinox as underlying deep learning library. Like Spyx, this allows building state-of-the-art networks with custom building blocks in a way similar to PyTorch modules while operating seamlessly with JAX functions. We believe that Equinox' approach is more suitable for the cutting-edge algorithmically-minded user that wishes to explore new brain-inspired algorithms and structures. Many of the discussed libraries share a unified interface provided by the NIR initiative which enables a certain level of cross-compatibility between them thus also partly exposing the interfaces of a number of neuromorphic accelerators. 
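Since the layer-by-layer versus step-by-step trade-off noted above for Spyx and Slax recurs throughout the rest of this comparison, the following toy sketch (generic JAX code, not the API of any of the libraries discussed) illustrates the two ways of organizing the time loop: scanning over time inside each layer lets stateless operations act on all time steps at once, whereas a single scan over time for the whole network is what makes recurrence across layers possible.

import jax
import jax.numpy as jnp

def lif_step(u, x, alpha=0.9, thr=1.0):
    u = alpha * u + x
    s = (u > thr).astype(x.dtype)
    return u - s * thr, s

# Layer-by-layer: each layer consumes the full spike train of the previous layer.
def layer_by_layer(params, inputs):                       # inputs: (T, n_in)
    h1 = inputs @ params["w1"]                            # stateless op over all T steps at once
    _, s1 = jax.lax.scan(lif_step, jnp.zeros(params["w1"].shape[1]), h1)
    h2 = s1 @ params["w2"]
    _, s2 = jax.lax.scan(lif_step, jnp.zeros(params["w2"].shape[1]), h2)
    return s2

# Step-by-step: one scan advances every layer per time step, so spikes of layer 2
# could be fed back into layer 1 at the next step (omitted here for brevity).
def step_by_step(params, inputs):
    def step(carry, x_t):
        u1, u2 = carry
        u1, s1 = lif_step(u1, x_t @ params["w1"])
        u2, s2 = lif_step(u2, s1 @ params["w2"])
        return (u1, u2), s2
    u0 = (jnp.zeros(params["w1"].shape[1]), jnp.zeros(params["w2"].shape[1]))
    _, out = jax.lax.scan(step, u0, inputs)
    return out

For purely feed-forward stacks the two variants produce identical spike trains; only the second generalizes to cross-layer recurrence.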
§ A MACHINE LEARNING LIBRARY FOR SNN While developing SNNAX, we identified a set of key functionalities that a SNN library should satisfy to efficiently train modern spiking neural networks. In the following sections, we will discuss these key functionalities and describe their implementation in SNNAX. §.§ Network Connectivity and Hardware Acceleration A key element in modern deep learning is the specification of the connectivity between neurons. This is typically realized by grouping neurons into layers whose connectivity is then described by highly regular connections, e.g. dense all-to-all connections or convolutions. SNNAX is built on Equinox, a JAX library that already provides the core functionalities to describe the network connectivity. Since the corresponding mathematical operations are very costly, but also easily parallizable, SNNAX exploits JAX' XLA backend to run these on dedicated accelerators such as GPUs and TPUs. XLA (aXelerated Linear Algebra) <cit.> is an increasingly popular ML compiler that was originally designed as Tensorflow backend, but now also serves as the backbone of JAX. It defines a representation of the computational graph, optimizes it, and deploys it onto the target hardware. Furthermore, XLA is particularly amenable to extension with novel devices and computing architectures as demonstrated in <cit.>. SNNAX and Equinox are both able to fully harness JAX' JIT compilation, making it a competitive choice for efficient SNN evaluation. In fact, both support all of JAX' function transformation including automated and intuitive batching through vmap and device parallelism through pmap and array sharding mechanism. §.§ Stateful Layers and Long Sequences SNNs are temporally evolving dynamical systems modeled through differential equations. Discretized SNN can be viewed as a special type of RNN with multiple internal states, akin to an LSTM <cit.>. In its most general case of relevance to the implementation in ML frameworks, SNN dynamics can be written as: U^t+Δ t_i = f(U^t_i, S^t_i, θ) + g(U^t, S^t, ϕ); t=0,...,TΔ t S^t_i = Θ(U^t_i) where U^t ∈ℝ^N× M is the internal state of the neuron which consists of M states (e.g. compartments) and S^t ∈ℝ^N is the output of other neurons activity (including inputs). Additionally, Θ is a discontinuous threshold function, θ and ϕ are learnable parameters of the neuron dynamics, and T is the number of time steps. N denotes the number of neurons and M denotes the number of states within each neuron. Because the dynamics above often result from the discretization of continuous dynamics, Δ t is generally small, leading to T being large (generally T > 100) compared to RNN (generally N < 10). Dealing with the internal states of the neurons is often tedious within JAX' functional programming paradigm. SNNAX therefore handles the state management behind a thin layer that is still easily accessible to the user. §.§ Gradient-based Learning Although Neuroscience provides principled models of biological learning, the state-of-the-art for training SNN for most practical applications is still gradient descent. There, automatic differentiation (AD) and the backpropagation algorithm are used to compute the gradient of a cost function with respect to the models parameters <cit.>. AD allows computation of gradients up to machine precision by decomposing the models computational graph into its elemental operations whose symbolic derivatives are known and then utilizes the chain rule to assemble the functions that computes the derivatives. 
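As a concrete illustration of how such discretized dynamics are expressed and differentiated in JAX, the following minimal sketch (generic toy code rather than the SNNAX implementation) steps a single LIF-type layer with jax.lax.scan and equips the threshold function Θ with a surrogate derivative, anticipating the discussion of surrogate gradients below; differentiating the unrolled loop with jax.grad then yields backpropagation through time.

import jax
import jax.numpy as jnp

@jax.custom_vjp
def spike(u):
    # Theta: Heaviside threshold applied to the membrane state.
    return (u > 0.0).astype(u.dtype)

def _spike_fwd(u):
    return (u > 0.0).astype(u.dtype), u

def _spike_bwd(u, g):
    # Surrogate derivative (fast-sigmoid style) replacing the ill-defined one.
    return (g / (1.0 + 10.0 * jnp.abs(u)) ** 2,)

spike.defvjp(_spike_fwd, _spike_bwd)

def lif_layer(w, spikes_in, alpha=0.95, thr=1.0):
    """spikes_in: (T, n_in) binary inputs; returns the (T, n_out) output spike train."""
    def step(u, s_in):
        u = alpha * u + s_in @ w          # U^{t+dt}: leak plus synaptic input
        s_out = spike(u - thr)            # S^t = Theta(U^t)
        return u - s_out * thr, s_out     # soft reset of the state
    _, spikes_out = jax.lax.scan(step, jnp.zeros(w.shape[1]), spikes_in)
    return spikes_out

def loss(w, spikes_in):
    return lif_layer(w, spikes_in).mean()

key = jax.random.PRNGKey(0)
w = 0.5 * jax.random.normal(key, (8, 4))
x = (jax.random.uniform(key, (100, 8)) < 0.2).astype(jnp.float32)
grads = jax.grad(loss)(w, x)              # BPTT through all 100 time steps

Under jax.jit the unrolled computation is compiled by XLA, and jax.vmap adds batching over samples without modifying the model code.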
§.§.§ Temporal Credit Assignment SNNAX maps spikes into a regular temporal grid through spike time binning, which makes it functionally similar to snnTorch and Spyx. It then treats the discretized form of the SNNs ODE as an RNN which is evaluated using a for-loop over the time dimension with changing input and internal state but fixed computational graph per time step. The correct implementation of the temporal processing becomes particularly relevant when considering layer-by-layer and step-by-step execution of spiking neural network architectures. Unrolling the jax.lax.scan primitive for several steps instead of just one can be beneficial in the layer-by-layer case because it reduces the CPU-GPU communication and allows the batch-parallel application of stateless operations such as matrix-multiplications and convolutions which can incur massive runtime improvements for feed-forward architectures. SNNAX, evaluates the time-loop over the entire network in a step-by-step manner as opposed to the layer-by-layer approach in Rockpool or Spyx. While this is a performance bottleneck in some cases, we believe that this design choice is necessary since it enables recurrent connections across layers, a key feature of bio-plausibe SNNs that is paramount for the successful implementation of learning algorithms such as e-prop <cit.>. Figures <ref> and <ref> show execution times of LIF-based feed-forward MLP and CNN architectures for different SNN libaries on a single RTX 4090 GPU with a batchsize of 32. When differentiated, we arrive at a very efficient version of backpropagation through time (BPTT) <cit.>, the de-facto standard for training SNNs. §.§.§ Gradient Across Spikes A spiking neuron emits a spike when its internal state reaches a certain threshold. Once the threshold is crossed, the state generally resets to a new value, which makes spike emission a apparent non-differentiable process. Several approaches exist to circumvent the non-differentiable threshold with surrogate gradient approaches currently being the most popular solution <cit.>. There, the ill-defined derivative of the spiking function is replaced with a surrogate function that then allows to obtain a smooth gradient for the entire network. Since SNNAX is built on JAX' advanced AD features, it is straightforward to define these surrogates and customize them according to the users requirements and several surrogate functions commonly found in literature are already implemented. [caption=Implementation of a feed-forward SNN in SNNAX with tools from equinox and JAX. Note the consequent use of PyTree filters throughout the entire implementation., captionpos=b, label=lst:codeexample] # ... import equinox as eqx import snnax.snn as snn model = snn.Sequential(eqx.Conv2D(2, 32, 7, 2, key=key1), snn.LIF((8, 8), [.9, .8], key=key2), snn.flatten(), eqx.Linear(64, 11, key=key3), snn.LIF(11, [.9, .8], key=key4)) # ... # Simple batched loss function @partial(jax.vmap, in_axes=(0, 0, 0)) def loss_fn(in_states, in_spikes, tgt_class): out_state, out_spikes = model(in_states, in_spikes) # Spikes from the last layer are summed pred = out_spikes[-1].sum(-1) loss = optax.softmax_cross_entropy(pred, tgt_class) return loss # Calculating the gradient with equinox PyTree filters and # subsequently jitting the resulting function @eqx.filter_jit @eqx.filter_value_and_grad def loss_and_grad(in_states, in_spikes, tgt_class): return jnp.mean(loss_fn(in_states, in_spikes, tgt_class)) # ... 
# Simple training loop for spikes, tgt_cls in tqdm(dataloader): # Initializing the membrane potentials of LIF neurons states = model.init_states(key) # Jitting with equinox PyTree filters loss, grads = loss_and_grad(states, spikes, tgt_cls) # Update parameter PyTree with equinox and optax updates, opt_state = optim.update(grads, opt_state) model = eqx.apply_updates(model, updates) §.§.§ Advanced AD Tools JAX' cutting-edge AD features enable a host of involved learning techniques that have proven to be helpful assets when training SNNs. SNN training can be very memory-intensive due to the small simulation time steps necessary, often resulting in sequence lengths in the hundreds or even thousands. When training with BPTT, this quickly saturates the memory of the underlying accelerator, and thus training large models can become prohibitively slow. JAX, like PyTorch provides a gradient checkpointing tool that reduces memory consumption through recomputation during the backward pass. Furthermore, JAX provides a fully-fledged forward-mode AD implementation that allows to trade memory for compute which enables the design of elaborate gradient-based learning algorithms that save on both memory and compute or might even enable online learning <cit.>. Furthermore, JAX allows, with only very few limitations, to compute arbitrarily high orders of derivatives, thereby facilitating Meta-learning and learning-to-learn approaches which have been demonstrated to be particularly useful for few-shot learning <cit.>. The implementation in SNNAX is straight forward in this case, by just differentiating the already differentiated code once again. §.§ User Interface SNNAX, much like Equinox, has been designed with user experience in mind. A common motive of many JAX libraries is the extensive use of PyTrees, which are simple data structures that package large assemblages of arrays and related structures. Equinox represents parameterised functions as immutable PyTree instances in order to be compatible with JAX' functional programming paradigm. This also allows to directly use JAX' functional transformations like jax.jit or jax.grad on calls to the model's class instance without the need for further modifications as required by Haiku or Flax. Both frameworks require additional steps like Flax' @nn.compact decorator or Haiku's set of hk.transform primitives which convert the object-oriented model implementation into an object that is compatible with JAX' functional programming paradigm. These additional levels of abstraction can introduce unforseeable side-effects, incur additional implementation overhead or severely limit the functionality of certain features such as the use of JAX transformations within hk.transform. SNNAX embraces Equinox' Everything-is-a-PyTree paradigm and provides a set of additional tree primitives and management tools dedicated to the management of stateful neural network models. In this vein, SNNAX also provides a set of convenience function such as snnax.Sequential to enable the rapid prototyping and layer-by-layer execution of feed-forward networks similar to Spyx (see Listing <ref>), while snnax.SequentialRecurrent allows the implementation of recurrent models as demonstrated in Slax. For more elaborate models with complex recurrent connections, SNNAX provides a GraphStructure class that allows the definition of intricate feedback loops and recurrences through a graph object. 
Alternatively, users can implement custom behavior by implementing their own stateful layers based on the StatefulLayer class. The graph representation is then meticulously executed by SNNAX' execution loop while being fully differentiable. Finally, since Equinox attempts to mimic the simplicity of the PyTorch API, SNNAX inherits many of these features as well, which flattens the learning curve and reduces the codebase footprint significantly. § CONCLUSION We introduced SNNAX as a new, fast and user-friendly SNN simulation library that allows efficient training of modern SNNs. Its design principles are in line with existing SNN libraries, and it matches their performance without the need for writing custom CUDA code while being easy to maintain and extend. § ACKNOWLEDGEMENTS This work was sponsored by the Federal Ministry of Education, Germany (project NEUROTEC-II grant no. 16ME0398K and 16ME0399) and NeuroSys as part of the initiative "Clusters4Future", funded by the Federal Ministry of Education and Research BMBF (03ZU1106CB). We also would like to thank Anil Kaya and Anurag K. Mishra for their valuable contributions to the development of the framework.
http://arxiv.org/abs/2409.02232v2
20240903190011
On the $m$th-order Affine Pólya-Szegö Principle
[ "Dylan Langharst", "Michael Roysdon", "Yiming Zhao" ]
math.FA
[ "math.FA", "math.MG", "Primary: 46E30, 46E35 Secondary: 52A20" ]
§ ABSTRACT An affine Pólya-Szegö principle for a family of affine energies, with equality condition characterization, is demonstrated. In particular, this recovers, as special cases, the L^p affine Pólya-Szegö principles due to Cianchi, Lutwak, Yang and Zhang, and subsequently Haberl, Schuster and Xiao. Various applications of this new Pólya-Szegö principle are shown.
§ INTRODUCTION The sharp L^p Sobolev inequality states that for p∈ (1,n) and f∈ W^1,p(), we have a_n,pf_L^np/n-p()≤∇ f_L^p(), where ·_L^q() stands for the usual L^q norm of a function. The constant a_n,p is the best possible and can be explicitly computed, see (<ref>) below. The Sobolev inequality has a wide range of applications across all areas of mathematics, particularly in the study of PDEs. In this sharp form, it can be found in Federer and Fleming <cit.>, Fleming and Rishel <cit.>, Maz'ya <cit.> for p=1, and Aubin <cit.> and Talenti <cit.> for p∈ (1,n). The Sobolev inequality is arguably the most fundamental inequality connecting analysis and geometry. Indeed, the geometric core behind (<ref>), for all p∈ (1,n), is the classical isoperimetric inequality: nω_n^1/nvolume (E)^n-1/n≤surface area(E), where ω_n is the volume of the n-dimensional unit ball. In fact, the two inequalities are equivalent; see, e.g., Gardner <cit.>. Moreover, the standard approaches to both inequalities are perfectly parallel. A critical ingredient for the proof of the Sobolev inequality is the Pólya-Szegö principle, stating that the L^p Dirichlet energy ∇ f_L^p() is non-increasing if f is replaced by its spherically symmetric rearrangement, whereas the classical isoperimetric inequality can be shown by establishing the monotonicity of the surface area with respect to Steiner symmetrization. By now, there is a vast library of literature on Sobolev inequalities and Pólya-Szegö principles. We refer the readers to the survey <cit.> by Talenti, a few recent contributions <cit.>, and the references therein. By convex body, we mean a compact convex set in ^n with non-empty interior. The projection body Π K of a convex body K is an important and natural (see Ludwig <cit.>) object in affine geometry. As the name suggests, the projection body encodes information about the area of the image of the orthogonal projection of K onto (n-1)-dimensional subspaces. Crucially, the fact that the projection body operator is (n)-contravariant, or Πϕ K= ϕ^-tΠ K for all ϕ∈(n), makes its volume and the volume of its polar body fundamental affine invariants. Recalling that, when a convex body K contains the origin in its interior, its polar body K^∘ is well-defined, the polar projection body of a convex body K is precisely Π^∘ K :=(Π K)^∘. Then, the celebrated Petty projection inequality states that for every convex body K in , we have, denoting by the Lebesgue measure on , (Π^∘ K) (K)^n-1≤(Π^∘) ()^n-1, where is the centered unit ball in and equality holds if and only if K is an ellipsoid.
By Cauchy's area formula and a simple application of Hölder's inequality, it can be seen that the Petty projection inequality trivially implies the classical isoperimetric inequality. Quoting Gardner and Zhang <cit.>:“the Petty projection inequality is a dramatic improvement upon the classical isoperimetric inequality.” Part of the dramatic improvements of Petty's inequality over its classical counterpart is that (<ref>) is affine invariant, or, more precisely, changing K into ϕ K for any ϕ∈(n) will not change the value of the volume product (Π^∘ K) (K)^n-1. It is worth pointing out that the inequality involving the volume ratio (Π K) (K)^-n+1, n ≥ 3, known as the Petty conjecture, is still open <cit.>. In a landmark work <cit.>, Zhang, using Minkowski's existence theorem, formulated and derived from the Petty projection inequality, an affine Sobolev inequality that is stronger than (<ref>): a_n,1f_L^n/n-1()≤(nω_n)^n+1/n/2ω_n-1(∫_(∫_ |(∇ f(x))^t u| dx)^-ndu)^-1/n: = ℰ_1(f), where equality holds when f is the characteristic function of an ellipsoid. Zhang <cit.> originally stated (<ref>) for C^1 functions with compact support. However, the inequality holds in general for functions of bounded variation as shown by Wang <cit.>. We emphasize again that not only is (<ref>) sharp, but the energy functional ℰ_1(f) is affine invariant in the sense that ℰ_1(f)=ℰ_1(f∘ϕ) for ϕ∈(n). Moreover ℰ_1(f)≤∇ f_L^1() and therefore, we may view ℰ_1(f) as the affine analog of the classical Dirichlet energy. In fact, Lutwak, Yang, and Zhang <cit.> showed that (<ref>) is only one of a family of sharp L^p affine Sobolev inequality: for p∈ [1,n), a_n,pf_L^np/n-p()≤ℰ_p(f), each of which is stronger than its classical counterpart (<ref>). Here, the precise formulation of ℰ_p(f) can be found in <cit.>, or, by substituting m=1 and Q=[-1/2,1/2] in (<ref>). Curiously, while the geometric core behind the L^p Sobolev inequality for any p∈ [1,n) is the same (the isoperimetric inequality), the geometric core behind (<ref>) is an L^p version of the Petty projection inequality (different for each p) established in Lutwak, Yang, and Zhang <cit.>. An asymmetric (and even stronger) version of (<ref>) is due to Haberl and Schuster <cit.>. The proofs invariably use various elements from the booming L^p Brunn-Minkowski theory initiated by Lutwak <cit.> in the 1990s, including the L^p Minkowski inequality and the L^p Minkowski problem, see, e.g., <cit.>. As in the Euclidean case, an affine Pólya-Szegö principle was established by Lutwak, Yang, and Zhang <cit.> and Cianchi, Lutwak, Yang, and Zhang <cit.> for the L^p affine energy: for p≥ 1, ℰ_p(f)≥ℰ_p(f^⋆), where f^⋆ is the spherically symmetric rearrangement of f. See Section <ref> for its precise definition. An asymmetric (and even stronger) version of (<ref>) is due to Haberl, Schuster, and Xiao <cit.>. An equality condition characterization for p>1 was found in Nguyen <cit.>. Recently, the first two named authors and their collaborators <cit.> established a higher-order version of the sharp L^p affine Sobolev inequality (<ref>). In particular, their work encompasses <cit.> as special cases. While not explicitly stated, it will become clear in the current work that each inequality in this family is stronger than its classical counterpart. One of the goals of the current work is to show the validity of the accompanying higher-order affine Pólya-Szegö principle along with its equality condition characterization. 
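Concretely, once the higher-order energy ℰ_p(Q,f) is introduced in (<ref>) below, the assertion that each member of this family strengthens its classical counterpart amounts to the chain

a_{n,p} ‖f‖_{L^{np/(n-p)}(ℝ^n)} ≤ ℰ_p(Q,f) ≤ ‖∇ f‖_{L^p(ℝ^n)},

where the first inequality is (<ref>) and the second is the comparison established in Theorem <ref> below.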
A convex body K is uniquely determined by its support function h_K:→ given by h_K(u) = max_x∈ K x^t u. The covariogram function of K is given by g_K(x) = (K∩ (K+x)). The projection body is closely connected to the covariogram of a convex body, see Matheron <cit.>: for θ∈, .d/dr|_r=0^+ g_K(rθ) = - [n-1] (P_θ^⊥K) = -h_Π K(u), where P_θ^⊥K is the image of the orthogonal projection of K onto the (n-1)-dimensional subspace θ^⊥ consisting of vectors perpendicular to θ. Let m∈ℕ. For x∈^nm, we will write x= (x_1,…, x_m) where x_i∈. From time to time, we shall identify ℝ^nm with the space of n by m matrices. Consequently, as an example, the notation u^tx for u∈ and x∈ℝ^nm is simply the usual matrix multiplication between a 1 by n row vector u^t and an n by m matrix x and the product yields a 1 by m row vector. The motivations of <cit.> stem from the investigation of Schneider <cit.> of the the following natural generalization of the covariogram function: for each m∈ℕ, the mth-covariogram function of a convex body K in ^n, is given by g_K,m(x) = (K∩⋂_i=1^m(K+x_i)), for x=(x_1,…, x_m)∈ℝ^nm. This seemingly innocent generalization turns out to lead to unexpected things. We will now describe one conjecture following this. Since the support of the classical covariogram g_K gives an alternative definition for the difference body DK= K+(-K), the support of g_K,m, denoted by D^m(K), naturally generalizes the classical difference body. As was noted by Schneider, the set D^m(K) is a convex body in ^nm, and the volume ratio [nm](D^m K)[n](K)^-m is (n)-invariant. It was shown in <cit.> that (<ref>) is maximized by simplices, which in the case of m=1 recovers the celebrated difference body inequality of Rogers and Shephard <cit.>, a certain reverse form of the celebrated Brunn-Minkowski inequality <cit.>. From this point of view, finding the sharp lower bound of (<ref>) may be viewed as a higher-order analog of the Brunn-Minkowski inequality. The conjecture is settled in dimension 2: Schneider showed that it is minimized by origin-symmetric K when n =2 for all m ∈. The case for arbitrary n when m=1 is the Brunn-Minkowski inequality for the difference body. The minimizer of (<ref>) in the case n ≥ 3, m ≥ 2 remains open and is conjectured to be attained by ellipsoids. With this point of view in mind, it seems apt to refer to Schneider's conjecture as the mth-order Brunn-Minkowski conjecture. As observed in <cit.>, this conjecture is intimately connected to Petty's conjecture mentioned above. With the tools developed in <cit.>, Haddad in <cit.> confirmed a version of the mth-order Brunn-Minkowski conjecture by replacing the volume with mean width. It was shown in <cit.> that g_K,m is differentiable in each radial direction at the origin. Inspired by Matheron's formula (<ref>), one naturally obtains, for each m∈ℕ, an mth-order projection body; this corresponds to the case when Q is the orthogonal simplex in the following definition, introduced in <cit.>. Let p ≥ 1, m ∈, and fix a convex body Q containing the origin in ^m. Given a convex body K containing the origin in its interior in ^n, we define the (L^p, Q)-projection body of K, K, to be the convex body whose support function is given by h_ K(u)^p = ∫_ h_Q(v^tu)^p dσ_K,p(v) for u∈. Here σ_K,p is the L^p surface area measure of K, see Section <ref> for its definition. It is simple to see that is (n) “contravariant”; that is, ϕ K = ϕ^-t K for each ϕ∈(n), where ϕ^-t(x_1,…, x_m)= (ϕ^-t x_1, …, ϕ^-t x_m). 
Moreover, when m=1, Q=[-1/2,1/2], it recovers the classical L_p projection body Π_p K and when m=1, Q=[0,1], it recovers the asymmetric L^p projection body Π_p^+K first considered in Lutwak <cit.>. It was shown in <cit.> that K contains the origin as an interior point. Consequently, its polar body ( K)^∘ will simply be denoted by K. In <cit.>, a brand new family of sharp affine isoperimetric inequality (containing the Petty projection inequality as a special case) was shown: for p≥ 1, m∈ℕ, and a convex body K in containing the origin in its interior, ( K) (K)^nm/p-m≤() ()^nm/p-m. For p=1, equality holds if and only if K is an ellipsoid. For p>1, equality holds if and only if K is an origin-symmetric ellipsoid. Naturally, a new collection of sharp affine Sobolev inequality accompanies <cit.>: for p∈ [1,n), m∈ℕ, a convex body Q in ^m containing the origin, and f∈ W^1,p(^n), we have a_n,pf_L^np/n-p()≤ d_n,p(Q)(∫_( ∫_ h_Q((∇ f(z))^tθ)^pdz )^-nm/p dθ)^-1/nm:=ℰ_p(Q,f) where d_n,p(Q) := (nω_n)^1/p(nm( ))^1/nm, and a_n,p=n^1/p(n-p/p-1)^p-1/p[ω_n/Γ(n)Γ(n/p)Γ(n+1-n/p)]^1/n, a_n,1=lim_p→ 1^+a_n,p, We remark that the constant d_n,p(Q) is so that the following comparison is valid. Let p≥ 1,m∈, and f∈ W^1,p(). Then, for every convex body Q in ℝ^m containing the origin, one has ℰ_p(Q,f) ≤∇ f_L^p(), with equality when f is radially symmetric. For p=1, the inequality can be extended to BV(^n). In this case, ∇ f_L^1() should be understood as |Df|(), the total variation of f. An immediate consequence of Theorem <ref> is that (<ref>) is stronger than the classical L^p Sobolev inequality (<ref>). Here, note that the “multiplicity” m is hidden implicitly in the dimension of Q for the notation ℰ_p(Q,f). We remark that the energy ℰ_p(Q,f) can be viewed as a higher-order affine version of the Dirichlet energy ∇ f_L^p(), as well as the L^p affine energy ℰ_p(f). In particular, when m=1,Q=[-1/2,1/2], we have ℰ_p([-1/2,1/2], f)= ℰ_p(f); when m=1, Q=[0,1], this recovers the asymmetric L^p affine energy defined implicitly in Haberl-Schuster <cit.>. A consequence of Theorem <ref> is that ℰ_p(Q,f^⋆) = ∇ f^⋆_L^p(), regardless of the choice of m∈ and convex body Q⊂^m containing the origin. A natural question is: is there a Pólya-Szegö principle for the higher-order affine energy ℰ_p(Q,f)? We answer this question positively. Let p≥ 1,m∈, f∈ W^1,p(), and a convex body Q in ^m containing the origin. Then, ℰ_p(Q,f)≥ℰ_p(Q,f^⋆). When p>1 and f satisfies the minor regularity assumption ({x:|∇ f^⋆(x)|=0}∩{x: 0<f^⋆(x)<f_L^∞()})=0, there is equality if and only if the level sets of f are dilations of an ellipsoid in with respect to its center. Note that Theorem <ref> is valid for all p≥ 1 whereas (<ref>) is only true for p∈ [1,n). We remark here that the regularity assumption (<ref>) for the equality condition cannot be removed due to a classical counterexample constructed by Brothers and Ziemer <cit.>. In the m=1 case, Theorem <ref> is due to Lutwak, Yang, and Zhang <cit.>, Cianchi, Lutwak, Yang, and Zhang <cit.>, Haberl, Schuster, and Xiao <cit.>, and Nguyen <cit.>. The extension to the space of BV() in this case is due to Wang <cit.>. Our methods of proof, therefore, are inevitably influenced by them. Affine Pólya-Szegö principles can be used to “upgrade” classical (non-affine) isoperimetric inequalities. Indeed, Theorem <ref>, <ref>, in combination with the classical L^p Sobolev inequality (<ref>), imply the mth-order affine Sobolev inequality (<ref>) established in <cit.>. 
In fact, ℰ_p(Q,f)≥ℰ_p(Q,f^⋆)= ∇ f^⋆_L^p()≥ a_n,pf^⋆_L^np/n-p() = a_n,pf_L^np/n-p(). The same philosophy will be exploited in Section <ref> to demonstrate a variety of affine versions of classical (non-affine) Sobolev-type inequalities. Finally, in Section <ref>, we consider an associated Poincaré-type inequality for ℰ_p(Q,f) when Q is origin-symmetric. Let Ω⊂ be a bounded domain containing the origin in its interior, m∈, Q∈[m] be origin-symmetric, and 1 ≤ p < ∞. Then, there exists a constant C:=C(n,m,p,Q, Ω) >0 such that, for any function f ∈ W^1,p_0(Ω) it holds that ℰ_p(Q,f) ≥ C f_L^p(). § PRELIMINARIES We first recall some rudimentary facts about convex bodies; here and throughout this section, the book <cit.> by Schneider is the standard reference. We denote by [d]⊃[d] ⊃[d], respectively, the collection of convex bodies in ^d, those that additionally contain the origin, and those that contain the origin in their interiors. We use 𝕊^d-1 for the centered unit sphere in ^d. For each K∈[d], its polar body K^∘∈[d] is given by K^∘ = ⋂_x∈ K{y∈^n: y^tx≤ 1}. The support function h_K:𝕊^d-1→ℝ of a compact, convex set K⊂^d is given by h_K(u)=sup_y∈ Ky^tu, for every u ∈𝕊^d-1. Note that the support function can be extended to ^d by making it 1-homogeneous. The Minkowski functional of K∈[d] is defined to be y_K=inf{r>0:y∈ rK}. Note that ·_K is the possibly asymmetric norm whose unit ball is K. The Minkowski functional, support function, and polar body are related via the following equation: h_K(u) = u_K^∘. By polar coordinates, the volume of K∈[d] can be expressed in the following way through its Minkowski functional: [d](K)=1/d∫_θ_K^-ddθ. The surface area measure, σ_K, for a K ∈[d] is the Borel measure on 𝕊^d given by σ_K(D)=ℋ ^d-1(n^-1_K(D)), for every Borel subset D of , where n_K^-1(D) consists of all boundary points of K with outer unit normals in D. Suppose the boundary ∂ K of a compact set K⊂^d has enough regularity, e.g. K is convex or C^1 smooth. The mixed volume of K with a compact, convex set L is given by V_1(K,L)=1/d∫_∂ Kh_L(n_K(y))dℋ^d-1(y). Minkowski's first inequality (see, for example, <cit.>) states: V_1(K,L)^d≥(K)^d-1(L), with equality if and only if K is homothetic to L. When K∈[d], Lutwak <cit.> introduced the L^p surface area measure, denoted by σ_K,p, of K, dσ_K,p(u)=h_K(u)^1-pdσ_K(u). The L^p surface area measure is arguably one of the cornerstones of the L^p Brunn-Minkowski theory that occupies much of the research in the theory of convex bodies in the last few decades. We now collect some notations and basic facts about a measurable function f ^d →. Throughout the rest of the section, we fix p≥ 1. A function f is said to be non-constant if, for every α∈, ({v ∈ f(x) ≠α}) >0. A function f ^d → is said to have a weak derivative if ∇ f exists in the weak sense; that is, there exists a measurable vector map ∇ f:→ such that ∫_^d f(v) divψ(v) dv = - ∫_^d (∇ f(v))^t ψ(v) dv for every compactly supported, smooth vector field ψ:^d →^d (see <cit.>). We say a function is an L^p Sobolev function, and belongs to the L^p Sobolev space W^1,p(^d), if f,∇ f ∈ L^p(^d). The space of functions of bounded variation, denoted by BV(^d), is very similar, except in the right-hand side of (<ref>), the vector map ∇ f is replaced with a more generic map σ_f:^d↦^d, and integration with respect to the Lebesgue measure is replaced by integration with respect to |Df|, the total variation measure of f. 
The two spaces are related: if f is in BV(^d) and f has a weak derivative, then, d|Df|(v)=|∇ f(v)| dv and σ_f(v)=∇ f(v)/|∇ f(v)| for |Df| almost all v∈^d. The space of compactly supported, infinitely-times differentiable functions on ^d will be denoted by C_0^∞(^d). Federer's coarea formula (see e.g. <cit.>) states: if f:^d→ is Lipschitz and g:^d→ [0,∞) is measurable, then, for any Borel set A⊆, one has ∫_f^-1(A) ∩{|∇ f|>0} g(x) d x=∫_A ∫_f^-1(t)g(y)/|∇ f(y)| d ℋ^d-1(y) d t . Here, ℋ^d-1 denotes the (d-1)-dimensional Hausdorff measure. The distribution function of a measurable function f:^d→ is given by μ_f(t)=[d]({x∈^d: |f(x)| >t}), and is finite for t>0 if f∈ L^p(^d). Its decreasing rearrangement f^∗:[0,∞)→ [0,∞] is defined by f^∗(s)=sup{t>0:μ_f(t)>s} for s≥ 0. The spherically symmetric rearrangement f^⋆ of f is defined as f^⋆(x)=f^∗(ω_d|x|^d) for x∈^d. Spherically symmetric rearrangement preserves the distribution function, that is μ_f=μ_f^⋆. Moreover, for every continuous, increasing function Φ:[0,∞)↦[0,∞), one has ∫_Φ(|f(x)|)dx = ∫_Φ(f^⋆(x))dx. Similarly, one may define the rearrangement of f with respect to any K∈[d]; see, for example, <cit.>. That is, f^K(x)=f^∗([d](K)x^d_K) for x∈^d. Note that when K is the centered unit ball, it recovers f^⋆. Let ν be a finite Borel measure on that is not concentrated in any closed hemisphere, p≥ 1,m∈, and Q∈[m]. Define the (L^p,Q) cosine transform of ν, denoted by C_p,Q^m μ :→ (0,∞) as (C_p,Q^m μ) (θ) = ∫_ h_Q(v^tθ)^pdμ(v). Note that since ν is not concentrated in any closed hemisphere, the function C_p,Q^mν is a positive, continuous function on . Moreover, since h_Q is convex and p≥ 1, when extended 1-homogeneously to ^nm, the function C_p,Q^m μ is convex. We shall use the following definition that slightly generalizes the (L^p,Q) projection body given in Definition <ref>. Let ν be a finite Borel measure on that is not concentrated in any closed hemisphere, p≥ 1,m∈, and Q∈[m]. The (L^p,Q) projection body of ν is the convex body in [nm] whose support function is given by h_ν = (C_p,Q^m ν)^1/p. We then have ν = (ν)^∘. When ν = σ_K,p, the L^p surface area measure of K∈[n] one has σ_K,p = K. We also require the following definition. For p ≥ 1, m ∈, and Q ∈[m], define the (L^p,Q)-LYZ projection body of non-constant f∈ W^1,p(), f to be the convex body in ^nm whose support function is given, for every θ∈, by h_ f(θ) = (∫_^nh_Q((∇ f(x))^tθ )^p dx)^1/p=h_Q(∇ f(·)^tθ)_L^p(). It will be shown in (<ref>) that f in fact contains the origin in its interior and consequently, its polar body f:=( f)^∘ is well-defined. We remark here the link among f, ν, and K. Lutwak, Yang, and Zhang <cit.> showed that there exists a unique measure on the sphere ν_f,p that is not concentrated on any great hemisphere, so that the integration over ^n in (<ref>) can replaced with integration over with respect to ν_f,p (∇ f(x) becomes merely u∈). Historically, Q=[-1/2,1/2], and thus one can apply the even L^p Minkowski problem by Lutwak <cit.> to the even part of ν_f,p to obtain an unique centrally symmetric convex body whose L^p surface area measure is the even part of ν_f,p. This origin-symmetric convex body associated to f via the even part of ν_f,p, called by Ludwig the LYZ body of f <cit.>, became a powerful tool in convex geometry; for example, Wang <cit.> used the LYZ body to characterize, independently of Nguyen, equality conditions for the Pólya-Szegö principle for the functional ℰ_p(f). 
This approach was later extended to another energy functional over the Grassmanian by Kniefacz and Schuster <cit.>. In our situation, we would follow <cit.> and apply, for example, the non-even L^p Minkowski problem by Hug, Lutwak, Yang and Zhang <cit.> to the measure μ_f,p to obtain unique convex bodies ⟨ f ⟩_p∈[n], the so-called asymmetric LYZ bodies. But, the origin may be on the boundary of ⟨ f ⟩_p for 1<p < n. Consequently, f is not necessarily applied to ⟨ f ⟩_p, and, in particular, the Petty projection inequality (<ref>) does not readily apply. Such an issue is not present for p> n or p=1. The following equation is immediate following Definition <ref> and the definition of ℰ_p(Q,f): ℰ_p(Q,f) = d_n,p(Q)(nm)^-1/nm( f)^-1/nm. § AN MTH-ORDER AFFINE PÓLYA-SZEGÖ PRINCIPLE This section is dedicated to proving Theorems <ref> and <ref>, the mth-order affine Pólya-Szegö principle. We shall see in later sections that this is the source of many functional sharp affine isoperimetric inequalities. The approach adopted here is inspired by <cit.>. §.§ Proving Theorem <ref> A bit of simple matrix manipulation is needed. For each T∈ GL(n), consider the map T∈ GL(nm) given as T(x_1, ⋯, x_m) = (Tx_1,…, Tx_m) for any x_1,…, x_m∈^n. It is simple to see that if T∈ O(n), then T∈ O(nm). In particular, if θ∈, then Tθ∈. Write θ = (θ_1, …, θ_m) where θ_i∈ℝ^n. Then Tθ = (Tθ_1, …, Tθ_n) and consequently, for every v∈ℝ^n, we have v^t Tθ = (v^t Tθ_1, ⋯, v^tTθ_n)= ((T^tv)^tθ_1, …, (T^tv)^tθ_n)= (T^tv)^tθ. When integrating on O(n) with respect to dT, we mean integrating with respect to the Haar measure on O(n). We start by deriving a new formula for d_n,p(Q). Note that the integral ∫_O(n) h_Q((T^tv)^tθ)^pdT is independent of v∈. This and the definitions of dT and imply d_n,p(Q)^nm = (nω_n)^nm/p nm () = (nω_n)^nm/p∫_θ_^-nmdθ = ∫_(1/nω_n∫_ h_Q(v^tθ)^pdv )^-nm/pdθ =∫_(∫_O(n) h_Q((T^t e_1)^tθ)^pdT )^-nm/pdθ. Here e_1∈ is, say, the first vector in the canonical basis of ^n. Henceforth, we work with W^1,p(); the proof still works with minor modifications in the p=1,f∈ BV() case. Recall that for every T∈ O(n), we have T∈ O(nm). Thus, a change-of variable formula and (<ref>) imply that for every T∈ O(n), we have, (d_n,p(Q)^-1ℰ_p(Q,f))^-nm =∫_h_Q(∇ f(·)^tθ)_L^p()^-nm dθ = ∫_h_Q(∇ f(·)^tTθ)_L^p()^-nm dθ = ∫_h_Q((T^t∇ f(·))^tθ)_L^p()^-nm dθ. Consequently, by the fact that dT is a probability measure on O(n), the Fubini theorem and Jensen's inequality yield (d_n,p(Q)^-1ℰ_p(Q,f))^-nm = ∫_O(n)∫_h_Q((T^t∇ f(·))^tθ)_L^p()^-nm dθ dT = ∫_∫_O(n)h_Q((T^t∇ f(·))^tθ)_L^p()^-nm dT dθ ≥ ∫_(∫_O(n)h_Q((T^t∇ f(·))^tθ)_L^p()^pdT)^-nm/p dθ. Next, we insert the definition of h_Q((T^t∇ f(·))^tθ)_L^p(). We then use the homogeneity of support functions, the Fubini theorem once again, the fact that (<ref>) is independent of v∈, and finally (<ref>) to obtain from (<ref>) (d_n,p(Q)^-1ℰ_p(Q,f))^-nm ≥ ∫_(∫_O(n)∫_^n |∇ f(z)|^ph_Q((T^t ∇ f(z)/|∇ f(z)|)^tθ)^pdzdT)^-nm/p dθ = ∫_(∫_^n |∇ f(z)|^p∫_O(n)h_Q((T^t ∇ f(z)/|∇ f(z)|)^tθ)^pdTdz)^-nm/p dθ = ∇ f_L^p(^n)^-nm∫_(∫_O(n)h_Q((T^t e_1)^tθ)^pdT)^-nm/p dθ = ∇ f_L^p(^n)^-nm d_n,p(Q)^nm. This yields the desired inequality. When f is radially symmetric, for every T∈ O(n), by a change-of-variable and then the chain rule, we have, h_Q((T^t∇ f(·))^tθ)_L^p()^p = ∫_^n h_Q((T^t∇ f(z))^tθ)^pdz= ∫_^n h_Q((T^t∇ f(Tz))^tθ)^pdz = ∫_^n h_Q((∇ f∘ T(z))^tθ)^pdz= ∫_^n h_Q((∇ f(z))^tθ)^pdz = h_Q((∇ f(·))^tθ)_L^p()^p. Thus, equality holds in (<ref>) by the equality condition of Jensen's inequality. 
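Before proving the inequality of Theorem <ref>, it may help to see the rearrangement facts recalled in the Preliminaries numerically. The following Python sketch is purely illustrative and not part of the argument: the grid, the non-radial test function, and the exponent p=2 are arbitrary choices, and the computation is only a discrete approximation. It builds the spherically symmetric rearrangement f^⋆ by reassigning the sorted values of f to grid cells ordered by distance from the origin; the L^p norm is preserved by construction, while, up to discretization error, the L^p norm of the gradient does not increase, in accordance with the classical Pólya–Szegö principle underlying this section.

```python
import numpy as np

# Illustration only: grid, test function and p are arbitrary choices.
N = 400
xs = np.linspace(-3.0, 3.0, N)
dx = xs[1] - xs[0]
X, Y = np.meshgrid(xs, xs, indexing="ij")
cell = dx * dx
p = 2.0

# A compactly supported, non-radial bump.
f = np.maximum(0.0, 1.0 - (X - 0.5) ** 2 - 2.0 * (Y + 0.3) ** 2)

def lp_norm(g):
    return (np.sum(np.abs(g) ** p) * cell) ** (1.0 / p)

def grad_lp_norm(g):
    gx, gy = np.gradient(g, dx, dx)
    return (np.sum((gx ** 2 + gy ** 2) ** (p / 2.0)) * cell) ** (1.0 / p)

# Discrete spherically symmetric rearrangement f^*: assign the sorted values
# of f (largest first) to grid cells ordered by increasing distance from 0.
r = np.sqrt(X ** 2 + Y ** 2).ravel()
order = np.argsort(r)
f_star = np.empty(f.size)
f_star[order] = np.sort(f.ravel())[::-1]
f_star = f_star.reshape(f.shape)

print("||f||_p      =", lp_norm(f), "  ||f^*||_p      =", lp_norm(f_star))   # equal by construction
print("||grad f||_p =", grad_lp_norm(f), "  ||grad f^*||_p =", grad_lp_norm(f_star))
```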
§.§ Proving the inequality in Theorem <ref> Throughout this section, let p∈ [1,∞) be fixed. We first prove an inequality for the polar projection bodies ν introduced in Definition <ref>. Let ν be a finite Borel measure on that is not concentrated in any closed hemisphere. Then, there exists K∈[n] (dependent on ν and p) such that 1 = 1/n∫_ h_K(u)^p dν(u) and (K)h_K(u)^p-1dμ(u)=dσ_K(u). Moreover, the convex body K and the Borel measure ν satisfy, for every m∈ and Q∈[m], (ν ) (K)^-m≤( ) ()^nm/p -m. Since ν is not concentrated in any closed hemisphere, we may construct a sequence of discrete measures ν_i such that ν_i are not concentrated in any closed hemisphere and ν_i converges to ν weakly. See, for example, <cit.>. By the solution to the discrete L^p Minkowski problem (see, e.g., <cit.>), there exist polytopes P_i∈[n] such that (P_i)h_P_i(u)^p-1dν_i(u)=dσ_P_i(u). Since each P_i∈[n], this means ν_i = (P_i)^-1σ_P_i, p. An application of <cit.> shows that P_i is uniformly bounded. Blaschke's selection theorem <cit.> now implies that, by passing to a subsequence, we may assume the P_i converge, in the Hausdorff metric, to some K∈[n]. By (<ref>), we have 1 = 1/n∫_ h_P_i(u)^p dσ_P_i, p(u)/(P_i) = 1/n∫_ h_P_i(u)^p dν_i(u) →1/n∫_ h_K^p(u) dν(u), where the convergence follows from the fact that h_P_i→ h_K uniformly (since h_P_i→ h_K uniformly and p≥ 1) and that ν_i converges to ν weakly. Taking the limit in (<ref>) yields (<ref>). Since each P_i contains the origin as an interior point, (<ref>) applies: ( P_i ) (P_i)^nm/p -m≤( ) ()^nm/p -m. The weak convergence of ν_i to ν yields (C_p,Q^m (ν_i))^1/p→(C_p,Q^m (ν))^1/p pointwise on and, consequently, uniformly on . By definition, this means that ν_i →ν in the Hausdorff metric. By taking volume, we obtain (ν_i)→(ν). On the other hand, from (<ref>), we have that ν_i = (P_i)^1/p P_i. Taking volume, this yields (P_i)^nm/p( P_i)→(ν). Letting i→∞ in (<ref>) completes the proof. We remark that, if K ∈[n], then σ_K,p exists and, (<ref>) implies, ν = (K)^-1σ_K,p. Thus, ν = (K)^1/p K. Consequently, equation (<ref>) simply states the usual mth-order Petty projection inequality (<ref>) for this K. As another example, if ν=μ_f,p, the measure described above (<ref>), then the resultant K in Lemma <ref> is precisely ⟨ f ⟩_p. The following lemma is critical in reducing Theorem <ref> from W^1,p() to C_0^∞(). We also obtain some useful facts about f from Definition <ref>. Let p≥ 1,m∈, f_k, f∈ W^1,p(), and Q∈[m]. If f_k→ f in W^1,p(), then θ_f_k= h_Q(∇ f_k(·)^tθ)_L^p()→ h_Q(∇ f(·)^tθ)_L^p()=θ_ f uniformly for θ∈, as k→∞, i.e. f_k→ f in the Hausdorff metric. Moreover, if f is not constantly 0 (up to a set of measure 0), then there exists c_0>0 such that for every θ∈ θ_ f= h_Q(∇ f(·)^tθ)_L^p()>c_0, and consequently f ∈[nm]. Since Q∈[m], there exist M>0 such that Q⊂ MB_2^m. In particular, using the definition of support function, we get that for all x,y∈^m, |h_Q(x)-h_Q(y)|≤ M|x-y|. Since is compact, there exists λ>0 such that |x^tθ|≤λ |x|, for all x∈^n and all θ∈. Using (<ref>) and (<ref>), if p≥ 1 and f∈ W^1,p(), h_Q(∇ f_k(·)^tθ) - h_Q(∇ f(·)^tθ)_L^p() ≤ M∇ (f_k-f)(·)^tθ) _L^p() ≤ Mλ∇ (f_k-f)_L^p() → 0, uniformly in θ∈. It remains to show (<ref>). Since Q∈[m], there exist linearly independent u_1, ⋯, u_m∈𝕊^m-1 and γ>0 such that γ u_i∈ Q. This implies, h_Q(x)≥γmax{(x^tu_1)_+, …, (x^tu_m)_+}. Note that for every θ∈, we have max{|θ u_1|, …, |θ u_m|}>0, since u_1, …, u_m are linearly independent and θ is not the zero map. 
This, when combined with the fact that continuous functions on a compact set achieve minima, implies the existence of τ>0 such that for all θ∈, max{|θ u_1|, …, |θ u_m|}>τ. In particular, this means there exists u_i_* (for each θ) such that |θ u_i_*|>τ. It is simple to see the map θ↦ h_Q(∇ f(·)^tθ)_L^p() is continuous on . Moreover, by (<ref>) and (<ref>), h_Q(∇ f(·)^tθ)_L^p() ≥γmax_1≤ i≤ m(∇ f(·)^tθ u_i)_+_L^p() = γ |θ u_i_*|(∇ f(·)^tθ u_i_*/|θ u_i_*|)_+_L^p() ≥γτ(∇ f(·)^tθ u_i_*/|θ u_i_*|)_+_L^p() >0. Here, the last line follows from the first part of the proof in <cit.>. Consequently, the compactness of implies the existence of c_0>0 such that (<ref>) holds. We are now ready to apply the geometric inequality, Lemma <ref>, to prove Theorem <ref>. Let f∈ C_0^∞ (). Then, by Sard's theorem, one has for a.e. t>0 that [f]_t:={|f|≥ t} is a bounded, closed set with C^1 boundary ∂ [f]_t={|f|=t}. We assume f is not constantly 0 (up to a set of measure 0), since otherwise, the desired inequality is trivial. We will prove the case when f∈ C_0^∞(). Recall that by Sard's theorem, for almost all t∈ (0, f_∞) we have ∇ f(y)≠ o for y∈∂ [f]_t. For those t, consider the finite Borel measure ν_t on such that for all g∈ C(), we have ∫_ g(v)dν_t(v) = ∫_∂ [f]_t g(∇ f(y)/|∇ f (y)|) |∇ f(y)|^p-1 dℋ^n-1(y). Note that ν_t is not concentrated in any closed hemisphere. This can be seen by taking g(v) = (v^tu)_+ for an arbitarily fixed u∈ and noting that the integral is positive, since f∈ C_0^∞(^n) and the outer unit normals of ∂ [f]_t form . In fact, on ∂[f]_t, one has that n(y):=n_∂[f]_t(y)=∇ f(y)/|∇ f (y)|=σ_f(y). By Lemma <ref>, there exists a convex body, which we denote by ⟨ f ⟩_t,p∈[n], such that (ν_t ) ≤( ) ()^nm/p -m(⟨ f ⟩_t,p)^m and 1 = 1/n∫_ h_⟨ f ⟩_t,p(v)^p dν_t. Observe that, from the coarea formula (<ref>), Minkowski's integral inequality, and (<ref>): d_n,p(Q)^-pℰ_p^p(Q, f) = (∫_( ∫_0^∞∫_∂[f]_t h_Q^p(n(y)^tθ) |∇ f(y)|^p-1dℋ^n-1(y)dt )^-nm/pdθ)^-p/nm = (∫_( ∫_0^∞∫_ h_Q^p(v^tθ) dν_t(v)dt )^-nm/p dθ)^-p/nm = (∫_( ∫_0^∞θ_ν_t^p dt )^-nm/p dθ)^-p/nm ≥ ∫_0^∞(∫_θ_ν_t^-nm dθ)^-p/nmdt = (nm)^-p/nm∫_0^∞(ν_t)^-p/nm dt ≥ (nm())^-p/nmω_n^p-n/n∫_0^∞(⟨ f ⟩_t,p)^-p/ndt. Inserting the definition of d_n,p(Q) (see (<ref>)), we obtain ℰ_p(Q,f) ≥ n^1/pω_n^1/n(∫_0^∞(⟨ f ⟩_t,p)^-p/ndt)^1/p. Suppose p=1. Then, one obtains from (<ref>) and (<ref>) 1 = 1/n∫_ h_⟨ f ⟩_t,1 dν_t = 1/n∫_∂ [f]_t h_⟨ f ⟩_t,1(n(y)) dℋ^n-1(y) = V_1([f]_t,⟨ f ⟩_t,1) ≥([f]_t)^n-1/n(⟨ f ⟩_t,1)^1/n = μ_f(t)^n-1/n(⟨ f ⟩_t,1)^1/n, where the last equality follows from (<ref>). Similarly, if p>1, we have from Hölder's inequality and again (<ref>) and (<ref>) n^1/p(∫_∂ [f]_t|∇ f(y)|^-1dℋ^n-1(y))^p-1/p = (∫_∂ [f]_th_⟨ f ⟩_t,p(n(y))^p|∇ f(y)|^p-1dℋ^n-1(y))^1/p(∫_∂ [f]_t|∇ f(y)|^-1dℋ^n-1(y))^p-1/p ≥∫_∂ [f]_th_⟨ f ⟩_t,p(n(y))dℋ^n-1(y) ≥ n([f]_t)^n-1/n(⟨ f ⟩_t,p)^1/n = nμ_f(t)^n-1/n(⟨ f ⟩_t,p)^1/n. In either case, we deduce that for p≥ 1 (⟨ f ⟩_t,p)^-p/n≥μ_f(t)^p/n(n-1)(1/n∫_∂ [f]_t|∇ f(y)|^-1dℋ^n-1(y))^1-p. Notice there is equality if and only if ⟨ f ⟩_t,p is homothetic to [f]_t for a.e. t>0. Inserting this into (<ref>), we obtain ℰ_p(Q,f) ≥ n^1/pω_n^1/n(∫_0^∞μ_f(t)^p/n(n-1)(1/n∫_∂ [f]_t|∇ f(y)|^-1dℋ^n-1(y))^1-pdt)^1/p. Next, from another use of the coarea formula (<ref>), we have (see e.g. <cit.>) μ_f(t) = ([f]_t∩{∇ f =0})+∫_t^∞∫_{|f|=τ}|∇ f(y)|^-1dℋ^n-1(y)dτ. Notice both terms on the right-hand side are nonincreasing. Therefore, we may differentiate in the variable t and throw-away the first term to deduce -μ^'_f(t) ≥∫_∂ [f]_t|∇ f(y)|^-1dℋ^n-1(y), for a.e. t>0. 
One has, see e.g. <cit.>, that equality holds in (<ref>) if and only if f=f^⋆. Inserting (<ref>) into (<ref>) yields ℰ_p(Q,f) ≥ nω_n^1/n(∫_0^∞μ_f(t)^p/n(n-1)/(-μ^'_f(t))^p-1dt)^1/p. By concatenating all the equality cases mentioned above, equality in (<ref>) holds if f is replaced by f^*. By the fact that μ_f(t) = μ_f^*(t), this completes the proof in the case of f∈ C_0^∞. The general case then follows from approximation. This is essentially the same as <cit.>, but we recall the approximation for the convenience of the reader. Given f∈ W^1,p(), let {f_k} be a sequence of C_0^∞() functions that converge to f in W^1,p(). By the first part of the proof, for all k=1,2,…, ℰ_p(Q,f_k^*) ≤ℰ_p(Q,f_k) Lemma <ref> and the definition of ℰ_p(Q, ·) show that lim_k→∞ℰ_p(Q,f_k)=ℰ_p(Q,f). On the other hand, since rearrangements are contractive in L^p() (see <cit.>), we know f_k^*→ f^* in L^p() and consequently f_k^⋆→ f^⋆ weakly in W^1,p(). This, when combined with Theorem <ref> and the fact that the L^p gradient norm is lower-semicontinuous with respect to weak convergence in W^1,p(), shows ℰ_p(Q,f)=lim_k→∞ℰ_p(Q,f_k) ≥lim inf_k→∞ℰ_p(Q,f_k^*)= lim inf_k→∞∇ f_k^*_L^p() ≥∇ f^*_L^p()=ℰ_p(Q,f^*). We save the equality conditions for Section <ref> below. Following <cit.>, we call the sets the ⟨ f ⟩_t,p the asymmetric convexification of [f]_t. We isolate from (<ref>) the following inequality: ℰ_p^p(Q, f) > nω_n( )^p/nm∫_0^∞(ν_t)^-p/nm dt ≥ nω_n^p/n∫_0^∞(⟨ f ⟩_t,p)^-p/ndt. This may be viewed as the higher-order analog of the convex Lorentz-Sobolev inequality found in <cit.>. §.§ The equality conditions in Theorem <ref> This section is dedicated to proving the following statement: Let p> 1, f∈ W^1,p(), and Q∈[m]. Suppose f satisfies the minor regularity condition (<ref>). Then, ℰ_p(Q,f) = ℰ_p(Q,f^*) if and only if f(x)=f^E(x+x_0) for some x_0∈ and origin-symmetric ellipsoid E∈[n]. Our approach is heavily inspired by <cit.>. Our first step is to introduce some notation. Let p ≥ 1, m ∈, and fix some Q ∈[m]. Given a compact set L⊂^nm with positive volume, we define the (L^p, Q)-centroid body of L, L, to be the convex body in with the support function h_ L(v)^p= 1/(L)∫_L h_Q(v^tx)^p dx. This of course extends on the classical centroid body operators, which we will discuss in more detail further below. We now derive a different formula for ℰ_p(Q,f). Let p≥ 1,m∈, f∈ W^1,p() not identically zero, and Q∈[m]. Then, ℰ_p(Q,f) = d_n,p(Q) (( f)(nm+p) ∫_^nh_ f(∇ f(x))^p dx)^-1/nm. By definition, we have from Fubini's theorem and polar coordinates ∫_^nh_ f(∇ f(x))^p dx = 1/( f)∫_ f∫_^n h_Q((∇ f(y)^tx)^p dy dx = 1/( f)∫_ fx^p_ f dx = 1/( f)∫_θ^p_ f∫_0^θ^-1_ fr^nm-1+pdrdθ = 1/( f)(nm+p)∫_θ^-nm_ f dθ = 1/( f)(nm+p)∫_(∫_^nh_Q((∇ f(x))^tθ )^p dx)^-nm/pdθ = ℰ_p(Q,f)^-nmd_n,p(Q)^nm/( f)(nm+p). Our next step is to replace ( f) with ( f). We will need the following from <cit.>. We suppress the definition of star body or compact domain; simply note that this applies to elements of [nm]. Fix n,m ∈. Let L⊂^nm be a compact set with positive volume, Q ∈[m], and p ≥ 1. Then ( L)/(L)^1/m≥()/()^1/m. If L is a star body or a compact set with piecewise smooth boundary, then there is equality if and only if L = E up to a set of zero volume for some origin-symmetric ellipsoid E ∈[n]. This of course extends the known m=1 cases: the classical Q=[-1/2,1/2] results from Busemann and Petty <cit.> when p=1 and from Lutwak, Yang and Zhang <cit.> when p>1. By setting Q = [-α_1, α_2], α_1,α_2>0 we obtain the asymmetric L^p case by Haberl and Schuster <cit.>. 
Setting L= f in (<ref>), one obtains ( f)^1/m≤()^1/m/()( f), with equality if and only if f= E for some origin-symmetric ellipsoid E∈[n]. To work with these equality conditions we use <cit.>: ∘ is bijective on the class of centered ellipsoids, in the sense that E = ω_n^1/p(E)^-1/p(m/ω_n(nm+p))^1/p E. Combining this with Lemma <ref>, we obtain the following. Let p≥ 1,m∈, f∈ W^1,p() not identically zero, and Q∈[m]. Then, ℰ_p(Q,f) ≥(ω_n/( f))^1/n(∫_^nh_ f(∇ f(x))^p dx)^1/p. with equality if and only if f = E for some origin-symmetric ellipsoid E∈[n]. Observe that, by using, Lemma <ref>, (<ref>), (<ref>), and (<ref>), ℰ_p(Q,f) = ℰ_p(Q,f)^nm+p/pℰ_p(Q,f)^-nm/p = ℰ_p(Q,f)^nm+p/p d_n,p(Q)^-nm/p(( f)(nm+p) ∫_^nh_ f(∇ f(x))^p dx)^1/p =d_n,p(Q)(nm)^-nm+p/nmp(nm+p)^1/p( f)^-1/nm(∫_^nh_ f(∇ f(x))^p dx)^1/p ≥(ω_nnm+p/m)^1/p(()/( f))^1/n(∫_^nh_ f(∇ f(x))^p dx)^1/p. Using (<ref>) completes the proof. We have the following two facts. The first can be seen as a Pólya-Szegö principle for convex symmetrization. Fix p>1 and origin-symmetric K∈[n]. For any f∈ W^1,p(), one has ∫_h_K(∇ f(x))^p dx ≥∫_h_K(∇ f^K(x))^p dx. Moreover, if f is nonnegative and satisfies the minor regularity assumption of ({x:|∇ f^K(x)|=0}∩{x: 0<f^K(x)<f_L^∞(^n)})=0, then equality holds in (<ref>) if and only if f(x)=f^K(x+x_0) for some x_0∈. Let p > 1, m∈, f∈ W^1,p() not identically zero, and Q∈[m]. Fix any K∈[n] such that (K)=ω_n. Then, one has ℰ_p(Q,f^⋆) = (∫_^nh_K(∇^K f(x))^p dx)^1/p. We use from Theorem <ref> that ℰ_p(Q,f^⋆)=∇ f^*_L^p()=ℰ_p([-1/2,1/2],f^⋆), and then the claim follows from <cit.>. The equality characterization for Theorem <ref> now follows: we set L=(ω_n/( f))^1/n f. Then, (L)=ω_n and from Proposition <ref> we have ℰ_p(Q,f) ≥(∫_^nh_L(∇ f(x))^p dx)^1/p, with equality only when L is an origin-symmetric ellipsoid E. Finally, when L=E, use Lemma <ref> and Proposition <ref> to finish the equality characterizations. § APPLICATIONS OF THE AFFINE PÓLYA-SZEGÖ PRINCIPLE The central theme of this section is to demonstrate a zoo of affine “upgrades” of well-known functional isoperimetric inequalities using the affine Pólya-Szegö principle (Theorem <ref>). The key ingredient is that in the case of radially symmetric functions, the affine energy ℰ_p(Q, f) reduces to its Euclidean counterpart ∇ f_L^p(). This philosophy was demonstrated in (<ref>) when it was shown that Theorem <ref>, Theorem <ref>, and the classical L^p Sobolev inequality (<ref>), imply the mth-order L^p affine Sobolev inequality (<ref>). The same philosophy applies to a wide range of well-known functional inequalities. We shall skip the proofs of most of the theorems to-be-presented, as they are more or less just along the lines of (<ref>). Occasionally, a proof will be given if additional details are needed. We remark that these affine inequalities will imply their classical (nonaffine) counterparts, thanks to Theorem <ref>. Note that the L^p Sobolev inequality (<ref>) and the mth-order affine Sobolev inequality (<ref>) are only valid for p∈ [1,n). For p>n, the Morrey-Sobolev embedding theorem states that any compactly supported function in W^1,p() is essentially bounded. This can be written as a sharp inequality, see, e.g., Talenti <cit.>: f_L^∞() ≤ b_n,p(supp(f))^1/p(p/n-1)∇ f_L^p(), for every f∈ W^1,p() with support supp(f) having finite volume. Here b_n,p=n^-1/pω_n^-1/n(p-1/p-n)^p-1/p. There is equality for functions of the form f_MS(x)=a(1-|x-x_0|^p-n/p-1)_+ for some a∈ and x_0∈. 
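Although the discussion here is entirely analytic, the sharp constant b_{n,p} and the extremal functions f_MS just recalled can be checked numerically. The short Python sketch below is an illustration only: the choices n=2, p=4, a=1 and the use of scipy quadrature are ours. It evaluates both sides of the Morrey–Sobolev inequality for f_MS; up to quadrature error the two printed values agree, reflecting the equality case.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

n, p, a = 2, 4.0, 1.0                                # dimension, exponent p > n, amplitude
omega_n = np.pi ** (n / 2) / gamma(n / 2 + 1)        # volume of the unit ball in R^n
c = (p - n) / (p - 1)                                # f_MS(x) = a (1 - |x|^c)_+, supported on the unit ball

supp_vol = omega_n                                   # |supp f_MS|
f_sup = a                                            # ||f_MS||_infinity

# ||grad f_MS||_p via the radial formula |grad f_MS(x)| = a c |x|^{c-1} on the unit ball
integrand = lambda r: (a * c * r ** (c - 1)) ** p * n * omega_n * r ** (n - 1)
grad_norm_p = quad(integrand, 0.0, 1.0)[0] ** (1.0 / p)   # mild endpoint singularity, still integrable

# sharp constant b_{n,p} and the right-hand side; note (1/p)(p/n - 1) = 1/n - 1/p
b_np = n ** (-1.0 / p) * omega_n ** (-1.0 / n) * ((p - 1) / (p - n)) ** ((p - 1) / p)
rhs = b_np * supp_vol ** (1.0 / n - 1.0 / p) * grad_norm_p
print(f"||f||_inf = {f_sup:.6f},  b_np |supp f|^(1/n - 1/p) ||grad f||_p = {rhs:.6f}")
```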
Chaining Theorems <ref>, <ref>, and (<ref>) as in (<ref>) provides the following affine version of (<ref>). Fix p>n ≥ 1 and f∈ W^1,p() such that supp(f) has finite volume. Then, for every m∈ and Q∈[m] f_L^∞()≤ b_n,p(supp(f))^1/p(p/n-1)ℰ_p(Q,f). Here, the sharp constant b_n,p is given by (<ref>). There is equality for functions of the form f_MS∘ A, where f_MS is given by (<ref>) and A∈(n). This recovers the m=1 cases given by <cit.> when Q=[-1/2,1/2] and <cit.> when Q=[0,1] . It was shown in <cit.> that lim_p→∞ d_n,p(Q) exists. We shall denote the limit as d_n,∞(Q). This motivates the following definition. Let f∈ W^1,∞(), m∈, and let Q∈[m]. Then, the (L^∞,Q) affine Sobolev energy of f is given by ℰ_∞(Q,f)=d_n,∞(Q)(∫_h_Q((∇ f(·))^tθ)_L^∞()^-nm dθ)^-1/nm. Taking the limit in (<ref>) shows the following Faber-Krahn-type inequality for ℰ_∞(Q,f). This recovers the m=1, Q=[0,1] case by Haberl, Schuster and Xiao <cit.>. Let m∈, Q∈[m] and f∈ W^1,∞() such that supp(f) has finite volume. Then, f_L^∞() ≤ω_n^-1/n(supp(f))^1/nℰ_∞(Q,f). Equality holds if f is of the form f(x)=a(1-|A(x-x_0)|)_+ for some a∈,x_0∈ and A∈(n). Note that the fact that equality holds in (<ref>) for f in (<ref>) follows from a direct computation. The inequality follows from Theorem <ref> by taking a limit. Indeed, from Fatou's lemma (and the fact that d_n,p(Q) is continuous in p), f_L^∞() ≤ω_n^-1 / n(supp(f))^1 / nlim sup _q →∞ℰ_q(Q,f) ≤ω_n^-1 / n(supp(f))^1 / nd_n,∞(Q)(lim inf _q →∞∫_h_Q((∇ f(·))^tθ)_L^q()^-nm d θ)^-1/nm ≤ω_n^-1 / n(supp(f))^1 / nd_n,∞(Q)(∫_lim inf _q →∞h_Q((∇ f(·))^tθ)_L^q()^-nm d θ)^-1/nm ≤ω_n^-1 / n(supp(f))^1 / nd_n,∞(Q)(∫_h_Q((∇ f(·))^tθ)_L^∞()^-nm d θ)^-1/nm . Let n>1. Denote m_n = sup_ϕ∫_0^∞ e^ϕ(t)^n/n-1-tdt, where the supremum is taken over all non-decreasing locally absolutely continuous functions in [0,∞) such that ϕ(0)=0 and ∫_0^∞ϕ^'(t)^ndt ≤ 1. The Moser-Trudinger inequality <cit.> states that for every function f∈ W^1,n() with 0<(supp(f))<∞, we have 1/(supp(f))∫_e^(nω_n^1/n|f(x)|/∇ f_L^n())^n/n-1dx ≤ m_n. The constant nω_n^1/n is best possible in the sense that if it were to be replaced by any other larger number, the above inequality would fail for some f∈ W^1,n() with 0<(supp(f))<∞. Carleson and Chang <cit.> showed that spherically symmetric extremals do exist for (<ref>). Theorems <ref> and <ref> now imply the following affine version; when m=1, we recover the Q=[-1/2,1/2] case from <cit.> and the Q=[0,1] case from <cit.>. Let n>1, m∈, and Q∈[m]. Then, for every f∈ W^1,n() with 0<(supp(f)) < ∞, we have 1/(supp(f))∫_e^(nω_n^1/n|f(x)|/ℰ_n(Q,f))^n/n-1dx ≤ m_n. The constant nω_n^1/n is best possible in the sense that if it were to be replaced by any other larger number, the above inequality would fail for some f∈ W^1,n() with 0<(supp(f))<∞. Recall that the sharp logarithmic Sobolev inequality shown in <cit.> states: for f∈ W^1,2() such that f_L^2(^n)=1, one has ∫_|f|^2log |f|dx ≤n/2log((2/neπ)^1/2∇ f_L^2(^n)). Ledoux <cit.> and Del Pino and Dolbeault <cit.> established the following extension of the log-Sobolev inequality: for 1≤ p <n and f∈ W^1,p() such that f_L^p()=1, one has ∫_|f|^plog |f| dx ≤n/plog(c_n,p∇ f_L^p()), where c_n,p =(p/n)^1 / p(p-1/e)^1-1 / p(Γ(1+n/2)/π^n / 2Γ(1+n(p-1)/p))^1 / n, c_n,1=lim_p→ 1^+ c_n,p. As for equality conditions, when p=1, Beckner <cit.> showed that one must look beyond W^1,1() to BV(), where there is equality only for dilates of characteristic functions of centered Euclidean balls. 
Carlen <cit.>, for p=2, and Del Pino and Dolbeault <cit.> for 1<p<n, showed there is equality if and only if there exists a>0 and x∈ such that f_LS(x)=π^n / 2Γ(1+n/2)/a^n(p-1) / pΓ(1+n(p-1)/p)exp(-1/a|x-x_0|^p /(p-1)). The following corollary includes, as special case <cit.> (m=1, Q=[0,1]). It follows from Theorems <ref>, <ref>, and (<ref>). Let 1≤ p <n and f∈ W^1,p(). Then, for every m∈ and Q∈[m], one has ∫_|f|^plog |f| dx ≤n/plog(c_n,pℰ_p(Q,f)). Here, the sharp constant c_n,p is given by (<ref>). If p=1, there is equality for dilates of characteristic functions of centered ellipsoids. On the other-hand, If 1<p<n, then there is equality for functions of the form f_LS∘ A, were f_LS is of the form (<ref>) and A∈(n). Nash's inequality, shown in its optimal form by Carlen and Loss <cit.>, states: for f∈ L^1()∩ W^1,2(), one has f_L^2()(f_L^2()/f_L^1())^2/n≤β_n∇ f_L^2(), where β_n^2=2(1+n/2)^1+n / 2/n λ_n ω_n^2 / n, and λ_n is the first nonzero Neumann eigenvalue of the Laplacian -Δ on radial functions on B_2^n. In fact, let u be the associated eigenfunction. Then, there is equality in (<ref>) if and only if f is a normalized and scaled version of f_N(x)= u(|x-x_0|) - u(1), if |x| ≤ 1, 0, if |x| ≥ 1, for some x_0∈. Applying Theorems <ref>, <ref>, and (<ref>), we immediately obtain the following theorem, which extends the case m=1,Q=[0,1] by Haberl, Schuster and Xiao <cit.>. If f∈ L^1()∩ W^1,2(), then, for every m∈ and Q∈[m], one has f_L^2()(f_L^2()/f_L^1())^2/n≤β_nℰ_p(Q,f), where β_n is given by (<ref>). There is equality for functions of the form f_N∘ A, where f_N is of the form (<ref>) and A∈(n). We recall that the Gagliardo-Nirenberg inequalities are precisely f_L^r()α_n,p(r,q)≤∇ f_L^p()^θf_L^q()^1-θ where 1<p<n, q<r≤np/n-p, θ∈ (0,1) is so that the inequality is scale-invariant. Note that inequality (<ref>) can be deduced from the L^p Sobolev inequality (<ref>), but without the optimal constant α_n,p(r,q); this is still open in general. Clearly, (<ref>) and Nash's inequality (<ref>) are special cases. In fact, the logarithmic Sobolev inequality (<ref>) is also a limiting case. Del Pino and Dolbeault <cit.> at the turn of the century made a breakthrough in establishing sharp constants for a range of parameters in (<ref>). This was followed by <cit.>, where mass transport was used to give an elegant proof of the same results, among other things. Suppose again that 1<p<n and now pick q∈ (p,pn/n-p-p/n-p]. Then, set r=p(q-1)/p-1 and θ=n(q-p)/(q-1)(np-(n-p)q). Let W_0^1,p,q() denote the completion of the space of smooth compactly supported functions with respect to the norm given by f_p,q=∇ f_L^p() + f_L^q(). Then, it was shown <cit.> that (<ref>) holds for every f∈ W_0^1,p,q() with constant α_n,p(r,q)=(p q/δ)^1/r((p √(π)/q-p)(n(q-p)/p q)^1/p(Γ(δ(p-1)/p(q-p)) Γ(1+n(p-1)/p)/Γ(q(p-1)/q-p) Γ(1+n/2))^1/n)^θ, where δ=np-q(n-p). Among the class W_0^1,p,q(), equality holds if and only if f is of the form f_GN(x)=a(1+b|x-x_0|^p/p-1)^-p-1/q-p for some a∈, b>0 and x_0∈. When restricted to the class W_0^1,p,q() (with the aforementioned choices of q and r), (<ref>) interpolates between the sharp L^p Sobolev inequality (<ref>) and the logarithmic Sobolev inequality (<ref>). Indeed, the former is when q=p(n-1)/(n-p) (which yields t=1 and α_n,p(r,q)=a_n,p from (<ref>)), and the latter follows by sending q→ p from above. Applying, like before, Theorems <ref>, <ref>, and (<ref>), we immediately obtain the following, which extends on the case m=1, Q=[0,1] by Haberl, Schuster and Xiao <cit.>. 
This formally interpolates between (<ref>) and (<ref>). Let 1<p<n,p<q≤p(n-1)/(n-p), m∈ and let r and θ be given by (<ref>). If f∈ W_0^1,p,q()∩ W^1,p(), then, for every Q∈[m], one has f_L^r()α_n,p(r,q)≤ℰ_p(Q,f)^θf_L^q()^1-θ. There is equality for functions of the form f_GN∘ A, there A∈(n) and f_GN is of the form (<ref>). § THE MTH-ORDER POINCARÉ INEQUALITY The methods used to establish the results in this section are inspired by those in <cit.>. For Ω⊂, define the spaces L^p(Ω), W^1,p(Ω), and W_0^1,p,q(Ω) analogously to the spaces L^p(), W^1,p() and W_0^1,p,q(), respectively, but with integration over Ω. Set W_0^1,p(Ω)=W_0^1,p,p(Ω). The main result of this section is the next inequality. Note that the constant c_0 might not be best possible. Let Ω⊂ be a bounded domain containing the origin in its interior, m∈, Q∈[m] be origin-symmetric, and p ≥ 1. Then, there exists constant c_0>0 dependent on n, p, Q, and Ω, such that for any C^1 not identically zero function f compactly supported in Ω, ℰ_p(Q,f) ≥ c_0 f_L^p()^nm-1/nm∇ f_L^p()^1/nm. For simplicity, since the constant c_0 in the desired inequality is not best, we will not try to keep track of its precise value, but rather use c_0 to denote a “constant” that depends on n, p, Q, Ω which might change from line to line. Recall the body f from (<ref>). Since Q is symmetric, f will be too. Arguing as in the proof of Lemma <ref> to obtain equations (<ref>) and (<ref>), for each θ∈, we can find a u_i_*∈, with θ u_i_*≠ 0 and c_0 >0 such that h_ f(θ) ≥ c_0 ∇ f(·)^tθ u_i_*/|θ u_i_*|_L^p(). Applying the sharp one-dimensional Poincaré inequality given in <cit.> to the above inequality, we may write h_ f(θ) ≥ c_0 (∫_| ∇ f(z)^tθ u_i_*/|θ u_i_*||^p dz)^1/p = c_0 (∫_( θ u_i_*/|θ u_i_*|)^⊥∫_-∞^∞|d/dt f(y+ tθ u_i_*/|θ u_i_*|)|^p dt dy)^1/p ≥ c_0 f_L^p() w(Ω, θ u_i_*/|θ u_i_*|)^-1 ≥ c_0 f_L^p(). where w(Ω, ξ) is the width of Ω in the direction of ξ and the last line follows from the fact that Ω is compact. The inequality (<ref>) shows that f⊃ c_0 f_L^p()B_2^nm. On the other hand, observe that max_θ∈𝕊^nm-1 h_ f(θ)^p = max_θ∈𝕊^nm-1h_Q(∇ f(·)^tθ)_L^p()^p ≥1/nm ω_nm∫_𝕊^nm-1∫_ h_Q(∇ f(z)^tθ)^p dz dθ = 1/nm ω_nm∫_ |∇ f(z)|^p ∫_𝕊^nm-1∫_O(n) h_Q((T^t ∇ f(z)/|∇ f(z)|)^tθ)^p dT dθ dz = 1/n^2m ω_nmω_n∫_𝕊^nm-1∫_𝕊^n-1 h_Q(u^tθ)^p du dθ∇ f_L^p()^p = c_0 ∇ f_L^p()^p. (Recall here c_0 might change from line to line—it is the existence of a positive constant that we are seeking for.) Equation (<ref>) and the fact that f is origin-symmetric imply that f contains the symmetric segment [-c_0∇ f_L^p(), c_0∇ f_L^p()]. Consequently we have that f contains the entire double cone conv(c_0f_L^p() B_2^nm,[-c_0∇ f_L^p(), c_0∇ f_L^p()] ), and therefore, [nm]( f) ≥ c_0 f_L^p()^nm-1∇ f_L^p(), for some c_0>0. Finally, an application of the Blaschke-Santaló inequality (see e.g. <cit.>) yields ℰ_p(Q,f) =d_n,p(Q)(∫_( ∫_ h_Q((∇ f(z))^tθ)^pdz )^-nm/p dθ)^-1/nm = c_0 ( f)^-1/nm≥ c_0 [nm]( f)^1/nm ≥ c_0f_L^p()^nm-1/nm∇ f_L^p()^1/nm. As an immediate corollary of Theorem <ref>, we obtain the mth-order Poincaré inequality, Theorem <ref>, by using an approximation argument and the classical L^p Poincaré inequality. Another corollary of Theorem <ref> is the following embedding theorem. Let Ω⊂ be a bounded domain containing the origin in its interior, m∈, Q∈[m] be origin-symmetric, and 1 ≤ p < ∞. Consider the class of functions ℬ_Q,p(Ω) := {f ∈ W_0^1,p(Ω) ℰ_p(Q,f) ≤ 1}. 
For any bounded domain Ω⊂ℝ^n with Lipschitz boundary containing the origin in its interior, m∈ℕ, Q∈[m] that is origin-symmetric, and 1 ≤ p < n, the set ℬ_Q,p(Ω) is compactly embedded in L^p(Ω). Consider any sequence of functions f_k ∈ W^1,p_0(Ω), k =1,2,…, such that ℰ_p(Q,f_k) ≤ 1. If some subsequence f_k_j converges in the L^p-norm, then we are done. If this is not the case, then there is a positive constant c for which ‖f_k‖_L^p(Ω)≥ c for all k ≥ 1. However, in this case, according to Theorem <ref>, the sequence f_k must be bounded in W^1,p_0(Ω). Therefore, we may apply the Rellich–Kondrachov embedding theorem to conclude the result.
http://arxiv.org/abs/2409.02888v1
20240904172147
Cost-Effectiveness Analysis for Disease Prevention -- A Case Study on Colorectal Cancer Screening
[ "Yi Xiong", "Kwun C G Chan", "Malka Gorfine", "Li Hsu" ]
stat.ME
[ "stat.ME", "stat.AP" ]
Cost-Effectiveness Analysis for Disease Prevention – A Case Study on Colorectal Cancer Screening

Yi Xiong^1, Kwun C G Chan^2, Malka Gorfine^3, and Li Hsu^4,∗

[1] Department of Biostatistics, University at Buffalo, Buffalo, NY, USA
[2] Department of Biostatistics, University of Washington, Seattle, WA
[3] Department of Statistics and Operations Research, Tel Aviv University, Tel Aviv, Israel
[4] Biostatistics Program, Fred Hutchinson Cancer Center, Seattle, WA
[∗] Corresponding author: lih@fredhutch.org

The authors gratefully acknowledge that this work is partially supported by the National Institutes of Health grants.
September 9, 2024

§ ABSTRACT
Cancer screening has been widely recognized as an effective strategy for disease prevention. Despite its effectiveness, determining when to start screening is complicated, because starting too early increases the number of screenings over a lifetime, and thus the cost, whereas starting too late may miss cancers that could have been prevented. Therefore, to make an informed recommendation on the age to start screening, it is necessary to conduct a cost-effectiveness analysis that weighs the gain in life years against the cost of screenings. As more large-scale observational studies become accessible, there is growing interest in evaluating cost-effectiveness based on empirical evidence. In this paper, we propose a unified measure for evaluating cost-effectiveness and a causal analysis for the continuous intervention of screening initiation age, under a multi-state model with semi-competing risks. Extensive simulation results show that the proposed estimators perform well in realistic scenarios. We perform a cost-effectiveness analysis of colorectal cancer screening, utilizing data from the large-scale Women's Health Initiative. Our analysis reveals that initiating screening at age 50 years yields the highest quality-adjusted life years with an acceptable incremental cost-effectiveness ratio compared to no screening, providing real-world evidence in support of the screening recommendation for colorectal cancer.

Keywords: Causal inference; multi-state model; survival analysis; time-varying intervention

§ INTRODUCTION
Cancer preventive screening is recommended for various cancers in the US and many other countries <cit.>. As a preventive intervention, it targets people without cancer, unlike early-detection cancer screening, which aims to detect cancer at an early stage in individuals who already have it. Preventive screening focuses on identifying and removing precancerous lesions. Consequently, a large number of people typically undergo screening, leading to potentially high costs per cancer prevented. Ideally, the benefits of screening should outweigh its costs <cit.>. Our work is motivated by determining an optimal screening age for colorectal cancer (CRC). CRC is the third most common cancer and the second leading cause of cancer deaths worldwide <cit.>. Yet it is also one of the most preventable cancers due to the effectiveness of endoscopy screening, which both detects and removes precancerous lesions such as polyps <cit.>. Despite its effectiveness, determining when to start screening is complicated. Starting too early increases the costs and risks associated with invasive procedures, because an individual will undergo more screenings over their lifetime, and yet the cancer risk is generally low at younger ages.
On the other hand, starting too late may miss the cancer that could have been prevented and as a result, have a devastating impact on an individual's quality of life and life expectancy. To provide informed recommendation on the optimal age to begin screening, it is necessary to conduct cost-effectiveness analysis to assess health gains relative to the costs of starting screening at different ages. A common approach for the cost-effective analysis for evaluating cancer screening strategies is to use microsimulation models <cit.>, where a large number of individuals are simulated under some models that reflect the natural history of cancer development, and the benefits and costs of different screening strategies are evaluated. For example, microsimulation models for CRC use a structure that builds upon the adenoma-carcinoma sequence <cit.>. Individuals start without polyps and, without screening intervention, may develop one or more adenomas, which can possibly develop into CRC, and then death from CRC or from competing risks that can occur at any time. Various screening strategies are then applied to these persons, and the benefit (measured by the number of quality-adjusted life-years gained) and costs (measured by the number of required colonoscopies) are quantified <cit.>. While microsimulation models are useful for comparing the effectiveness of different interventions, their underlying model structure and parameter values may not fully capture the complex disease process and the impact of screening on that process <cit.>. There is substantial interest in examining cost-effectiveness based on empirical evidence. Our research aims to assess the cost-effectiveness of initiating CRC screening at different ages using data from the Women's Health Initiative (WHI) <cit.>. Launched in 1991, WHI is one of the largest U.S. disease prevention programs addressing various cancer types including CRC, heart diseases, and other outcomes in postmenopausal women. The study enrolled about 100,000 women aged 50 to 79 years across US. It collected a wide range of sociodemographic and epidemiologic factors including well-curated CRC screening status over time. Given the relatively infrequent occurrence of CRC, the extensive follow-up of WHI offers a unique opportunity to investigate the impact of the timing of CRC screening initiation on the risk of CRC and mortality over a lifetime. Methods for analyzing observational data need to address issues including censored outcomes, competing events, and confounding. There is a rich literature in cost-effectiveness analysis that accommodates censoring and adjusts for confounding factors. For example, <cit.> used a regression-based approach, assuming the Cox proportional hazards model <cit.>, to compare the restricted mean survival times between two groups. <cit.> employed the propensity score approach to estimate treatment effects on costs. <cit.> proposed a doubly-robust estimator to assess the effect of interventions based on both cost and effectiveness. However, most existing studies primarily focus on static intervention, whereas our research examines the cost-effectiveness of initiating screening at different ages. Although age is a continuous variable, our problem differs substantially from conventional continuous intervention settings, such as estimating dose-response functions <cit.>. First, the intervention is subject to censoring, as some individuals may die or develop the disease before starting screening, resulting in the intervention being censored. 
Second, since screening can prevent disease progression and prolong the disease-free period ultimately, reducing mortality, it is crucial to investigate its impact on disease development, in addition to mortality. Thus, it is essential to consider a multi-state framework with semi-competing risks, allowing us to examine disease and death jointly. Causal inference holds great significance and is embedded, both implicitly and explicitly, in public health practice. Practitioners often base intervention decisions on presumed causal connections and the subsequent outcomes that arise from them <cit.>. In cost-effectiveness analysis, current methodologies for causal inference in healthcare decision-making are primarily derived from randomized trials <cit.>. While randomized trials provide robust evidence for informing cost-effective interventions, observational data allows for analysis in more realistic settings. When dealing with a possibly censored continuous intervention such as the age at screening, larger sample sizes are necessary. For infrequent diseases like CRC, long follow-up periods are needed, which may be cost prohibitive. Observational studies like WHI offer readily available large-scale data for evaluating the cost-effectiveness of continuous interventions. The combination of observational data and causal inference can provide empirical evidence for informing health policy or guidelines regarding the optimal timing of screening initiation. In the realm of causal inference, there have been a few studies examining the causal effects of the timing of treatment initiation or discontinuation <cit.> and the related but different issue of truncation by death <cit.>. However, to our knowledge, no existing work has simultaneously addressed both the challenges of continuous intervention and multi-state modeling with semi-competing risks. In this paper, we aim to conduct a causal cost-effectiveness analysis to determine the initial screening age for CRC. The primary research question is: what would be the cost and benefit if screening were to begin at a specific age? To address this, we propose a unified measure to evaluate cost-effectiveness under a multi-state modeling framework. This comprehensive measure includes well-known quantities such as the restricted mean survival time and quality-adjusted life years <cit.>, and it can also quantify the cost of intervention, such as the number of screenings during lifetime. We consider a multi-state process to account for multiple (dependent) event times that do not require the semi-Markov assumption often needed in other methods. We then develop a procedure to estimate the cost-effectiveness estimand using structural transition hazard models for the proposed multi-state process. The contributions of this paper are threefold. First, we introduce a comprehensive measure for both the effectiveness and cost of a possibly censored continuous intervention. Second, we extend the measure to multi-state modeling, providing novel insights into how cost-effectiveness is impacted by the initial screening age through disease development and various paths to death, with and without prior disease diagnosis. Third, we establish a clear definition of the causal estimand of interest and outline the necessary assumptions associated with the multi-state model for the possibly censored intervention and outcome. The rest of this paper is structured as follows. 
In Section 2, we describe the proposed cost-effectiveness measure and the causal inference framework, as well as the estimation procedure. Section 3 presents the cost-effectiveness analyses of WHI data regarding the initial screening age for CRC. An extensive simulation study evaluating the performance of our proposed framework is shown in Section 4. Finally, Section 5 provides concluding remarks summarizing the key contributions of this research and potential future directions. § METHODOLOGY §.§ A Unified Measure for the Benefit and Cost Consider an illness-death model (Figure 1), under which individuals can be in the healthy or diseased state during the follow-up. Let K be the number of states, here, under the illness-death model, K=2. Further, let T and D be the age at disease onset and death, respectively. We define the benefit or cost at time t by M(t) = ∑_k=1^K ∫_0^t Y_k(u) dW_k(u), where Y_k(·) is an indicator function, which is 1 if an individual in the kth state and alive (i.e., D ≥ u) and 0 otherwise, and W_k(·) is a cumulative weight function associated with the kth state, k=1, …, K. M(t) encompasses many measures for quantifying benefit or cost. Below we show several examples for M(t) and provide more details in Section <ref>. Example 1: Restricted Mean Survival Time (RMST). The RMST is a popular measure for quantifying the average survival experience up to a certain time point, t<cit.>. Mathematically, it can be defined as the area under the survival curve up to time t. In this case, there is only one event of interest, death. Setting W_k(u) = u, k = 1, …, K, we have E{M(t)} = E{∫_0^t ∑_k=1^K Y_k(u) du} = ∫_0^t (D ≥ u) du, which represents the RMST. We can also set K=1 to obtain RMST, but we opt for K=2 to more effectively capture the impact of screening on the disease process rather than solely modeling overall mortality. Example 2: Cause-Specific Mean Life Years Lost. Life years lost is complementary to the RMST, and it offers an appealing attribute to decompose the years lost due to specific cause pathways leading to mortality before time t<cit.>. Define the expected life years lost E{L(t)} = t - E{M(t)}. Under the illness-death model, an individual may die before the disease occurs, as indicated by the path 0 → 2 or first develop the disease and then die, as indicated by the path 0 → 1 → 3. Let W_k(t) = t, for k=1 and 2, and f_D(·|T,X) be the conditional probability density function of D given T and X, the mean life years lost can then be decomposed into E{L(t)} = ∫_0^t ∫_0^ u f_D(v| T≥ v, X) (T≥ v|X) dv du + ∫_0^t ∫_0^u f_D(v|T<v , X) (T<v|X) dv du. The first term is the mean life years lost through the path 0 → 2 and second term is the mean life years lost through the path 0 → 1 → 3. This is particularly informative when assessing the performance of an intervention. In the CRC example, colonoscopy has been shown to be effective in reducing CRC incidence rates substantially. However, CRC is still relatively infrequent in a population, despite it is a common cancer. As a result, the gain in life years in the population may not be apparent to reflect the effectiveness of colonoscopy because the cancer only affects a small fraction of the population. By examining cause-specific mean life years lost, it can shed a clearer light on the cost-effectiveness of the intervention in relation to the disease that it targets. Example 3: Quality-Adjusted Life Years (QALY). In Example 1, E{M(t)} represents the expected life years up to time t when the time origin is set at birth. 
However, the weight function W_k(u) can be set to reflect changing quality score over time in the kth state. In this situation, E{M(t)} becomes the expected QALY up to time t. The cause-specific quality-adjusted mean life years lost can be defined accordingly. Example 4: Number of Screenings. When W_1(u) is the cumulative number of screenings up to time u and W_2(u) = 0, then E{M(t)} measures the expected number of screenings prior to disease or death, whichever comes first. Having a unified measure for the benefit or cost forms a fundamental basis for evaluating the cost-effectiveness of an intervention. Decision-makers are often faced with the challenge of determining the value of a particular intervention relative to its associated costs. This becomes especially pertinent when resources are limited, and choices must be made about how to optimize these resources to achieve the best outcomes. By establishing a unified measure, it allows us to assess both the benefit and cost or the combination of benefit and cost such as incremental cost effectiveness ratio (incremental cost/incremental benefit) of different interventions systematically. §.§ Causal Inference of a Continuous Intervention under the Multi-State Model Our interest lies in assessing the effects of different initiation timings for screening on both the benefit (e.g., increased survival years) and cost (e.g., number of screenings) from a cost-effectiveness standpoint. Because the screening age for an individual in an observational study cohort is continuous, we cannot directly adopt the conventional counterfactual framework for a binary intervention (e.g., treatment vs. control). Instead, we consider this to be a counterfactual problem with a continuous intervention. However, as mentioned above, age at which intervention is initiated is subject to censoring, creating additional challenges. The question that we ask is: what would the benefit and cost be if a screening policy recommended screening starting at age s? Let T^* and D^* denote the potential age at disease onset and potential age at death had the individual never undergone the screening. For s<min(T^*,D^*), we define T^(s) and D^(s) as the potential age at disease onset and age at death, respectively, if screening had occurred at s. If an individual had not undergone screening prior to the disease onset or death, i.e., s≥min (T^*,D^*), then T^(s)=T^* and D^(s)=D^*. Define X to be a vector of covariates measured at baseline. As shown in Figure <ref> for the illness-death model, individuals start in state 0 (e.g., healthy) and then either move to state 2 (e.g., death without disease) directly, or transit first to state 1 (e.g., disease) and then to state 3 (e.g., death following disease incidence). Given the continuous screening time s, the illness-death model is defined by three hazard functions in terms of potential outcomes T^(s) and D^(s): the hazard functions from healthy to disease and death, respectively, as well as the hazard function from disease to death. Specifically, the hazards functions from healthy to disease and death are defined under the competing risks framework as following: λ^(s)_01(t|X) =lim_Δ t→ 0Δ t^-1(t ≤ T^(s) < t+Δ t | T^(s)≥ t, D^(s)≥ t, X), λ^(s)_02(t|X) =lim_Δ t→ 0Δ t^-1(t ≤ D^(s) < t+Δ t | T^(s)≥ t, D^(s)≥ t, X), for t>0. For the hazard function from disease to death, we define it based on the sojourn time, D^(s) = D^(s) - T^(s). The commonly used semi-Markov assumption implies that D^(s) is independent of T^(s) given risk factors X. 
However, this assumption may not be realistic, as for many diseases the survival is associated with age at disease, see e.g., <cit.>. To account for such possible dependence, we define the hazard function including age at disease as a covariate, λ^(s)_13(t̃|X,T^(s)) = lim_Δt̃→ 0 (Δt̃)^-1(t̃≤D^(s) < t̃+Δt̃ | D^(s)≥t̃, T^(s), X), for t̃ > 0. Here we use a different notation t̃ to indicate that it represents a sojourn time starting at the age of disease onset <cit.>. Both potential time to disease onset and time to death are subject to right censoring time C. We assume that C is independent of both times given the risk factor X, as stated below. Assumption A1 (Conditional Independence of Censoring). The censoring time C is independent of {T^*, D^*, T^(s),D^(s), 0<s ≤min(T^*, D^*) } given the risk factor X. The conditional independence assumption (A1) also implies that given T^(s) and X, the sojourn time D̃^(s)=D^(s)-T^(s) is independent of C-T^(s). This allows us to estimate λ_13^(s)(t̃|X, T^(s)) in (<ref>) by e.g., including T^(s) as a covariate, without requiring additional assumptions about the censoring mechanism. Now, define the potential observational time as U^(s)=min(T^(s),D^(s),C). This represents the potential age at disease onset if the disease indicator δ_1^(s)≡ I(T^(s)≤min(D^(s),C) ) equals 1, the potential age at death if the death indicator δ_2^(s)≡ (1-δ_1^(s))I(D^(s)≤ C ) equals 1, and the time at last observation, i.e., the censoring time C, if both δ_1^(s) and δ_2^(s) are 0. For the rare occasions when the disease is diagnosed at the time of death, we assume T^(s) occurs before D^(s). Further, define V^(s)=δ_1^(s){min(D^(s),C) - T^(s)}, which is the potential time to death since the potential disease occurrence if δ_3^(s)≡δ_1^(s) I(D^(s)≤ C) equals 1, and time to last observation since the potential disease occurrence, otherwise. In addition, we define the following potential outcome counting processes, N^(s)_01(t) = I(U^(s)≤ t, δ_1^(s)=1), N^(s)_02(t) = I(U^(s)≤ t, δ_2^(s)=1), N^(s)_13(t̃) = I(V^(s)≤t̃, δ_3^(s)=1), Y^(s)_0(t) = I(U^(s)≥ t), Y^(s)_1(t̃) = I(V^(s)≥t̃). Note that the time-shifted counting process N^(s)_13(t̃) denotes the potential outcome death up to the sojourn time t̃ from the disease onset when screening occurs at time s. Similarly, Y^(s)_1(t̃) indicates whether an individual is potentially at risk in the disease state at the sojourn time t̃. Under Assumption A1, the potential outcome hazard function can be represented in terms of event counting and at-risk processes. Let dN_0k^(s)(t) = N_0k^(s)(t^- + Δ t) - N_0k^(s)(t^-), k = 1, 2, and dN_13^(s)(t̃) = N_13^(s)(t̃^- + Δt̃) - N_0k^(s)(t̃^-), Eq. (<ref>) – (<ref>) can be written as λ^(s)_0k(t|X) = lim_Δ t → 01/Δ t{ dN_0k^(s)(t) = 1| Y_0^(s)(t), X } λ^(s)_13(t̃|T^(s),X) = lim_Δt̃→ 01/Δt̃{ dN_13^(s)(t̃) = 1 | Y_1^(s)(t̃),T^(s), X } . Now, we define the observed data. Let U=min(T,D,C) and V=δ_1{min(D,C)-T} denote the observed time to the first event of disease or death and the observed time to death since disease, respectively, where δ_1=I(T≤min(D,C)) indicates disease status. Let δ_2=(1-δ_1)I(D≤ C), and δ_3=δ_1I(D≤ C) be the indicators of death without disease and death after disease, respectively. The observed counting processes are given by N_01(t) = I(U ≤ t, δ_1 = 1), N_02(t) = I(U ≤ t, δ_2 = 1), N_13(t̃) = I(V≤t̃, δ_3 = 1), Y_0(t) = I(U ≥ t), Y_1(t̃) = I(V≥t̃), where time τ is the end of a study or a pre-specified time point and (Y_0(τ) Y_1(τ)>0) > 0. 
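To make the observed-data bookkeeping above concrete, the following minimal Python sketch constructs U, V and the indicators δ_1, δ_2, δ_3 from latent onset, death and censoring ages. The data are entirely simulated toy values (not the WHI data, and not the authors' code), and the simple generating distributions are assumptions made only for illustration.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2024)
n = 5000

# Purely illustrative latent ages (in years); the independence of T and D used
# to generate these toy data is a simplification, not an assumption of the paper.
T = 50 + 40 * rng.weibull(2.0, n)        # age at disease onset
D = 55 + 35 * rng.weibull(3.0, n)        # age at death
C = rng.uniform(60.0, 95.0, n)           # censoring age (end of follow-up)

U = np.minimum(np.minimum(T, D), C)              # first of onset / death / censoring
delta1 = (T <= np.minimum(D, C)).astype(int)     # disease observed
delta2 = ((1 - delta1) * (D <= C)).astype(int)   # death observed without prior disease
V = delta1 * (np.minimum(D, C) - T)              # time from onset to death or censoring
delta3 = delta1 * (D <= C).astype(int)           # death observed after disease

obs = pd.DataFrame({"U": U, "delta1": delta1, "delta2": delta2, "V": V, "delta3": delta3})
print(obs.head())
print(obs[["delta1", "delta2", "delta3"]].mean())   # crude check of the event mix
```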
Further let S denote the time at which the individual initiates screening. The observational screening history up to age U is denoted by Z̅(U;S) = {Z(u;S)=I(S < u), 0<u≤ U }. The following consistency assumption links the potential outcomes to the observed processes. Assumption A2 (Consistency). For individuals who have initiated screening prior to U, i.e. S<U, N_01(t) = N^(S)_01(t), N_02(t) = N^(S)_02(t), and N_13(t̃) = N^(S)_13(t̃) as well as the at-risk processes Y_0(t) = Y^(S)_0(t) and Y_1(t̃) = Y^(S)_1(t̃); for individuals who have not undergone screening prior to U, N_01(t) = N^(s)_01(t), N_02(t) = N^(s)_02(t), N_13(t̃) = N^(s)_13(t̃), Y_0(t) = Y^(s)_0(t), and Y_1(t̃) = Y^(s)_1(t̃) where s≥ U, for all t>0 and t̃ > 0. This consistency assumption suggests that since an individual who has not undergone screening prior to U is consistent with any regime s, s ≥ U, his/her observed processes equal the corresponding potential processes if this individual followed regime s. In the absence of censoring, the potential processes would be for T^* and D^* had the individual never undergone the screening. In addition, the following two identifiability assumptions are required. Assumption A3 (No Unmeasured Confounding). Conditional on baseline risk factors X, the potential outcome hazard functions of disease and mortality at time t with screening time s are independent of the observed screening history up to and including time t. In other words given Assumptions A1 and A3, λ^(s)_0k(t|X) = lim_Δ t → 01/Δ t{ dN_0k^(s)(t) = 1| Y_0^(s)(t), X, Z̅(t;S) }, k = 1, 2, λ^(s)_13(t̃|T^(s),X) = lim_Δt̃→ 01/Δt̃(dN_13^(s)(t̃) = 1 | Y_1^(s)(t̃),T^(s), X, Z̅(T^(s);S)). Assumption A4 (Positivity). There exists a positive constant ϵ > 0 such that the hazard rate function of observed screening at time t given X is bounded below by ϵ and finite, i.e., ϵ < lim_Δ t→ 0P(t≤ S < t+Δ t|U≥ t, S ≥ t, X)/Δ t < ∞. Assumption A3 implies that the baseline risk factors are sufficient to account for the self selected screening time in the observational data. Along with Assumption A2, it establishes a connection between the causal estimands to the observed data. This connection facilitates the inference of the causal effect of the timing of screening initiation. While X in this work consists of only baseline risk factors, there could be some relevant time-dependent risk factors (e.g., body mass index). The assumption can be further relaxed by allowing for time-dependent risk factors up to time t. However, when predicting the risk and determining the appropriate timing to initiate screening, for practical reasons, we opt to assume that the decision is made based on the risk factors observed at baseline, rather than considering potential risk factor values that may develop in the future. Assumption A4 implies that every individual has a non-zero probability of undergoing screening at any age prior to the occurrence of disease or death. Given that our primary focus is on the preventive effect of screening, we consider the time to screening for individuals who have not undergone screening before the onset of the disease to be censored by the time of disease onset. To evaluate the cost-effectiveness concerning the recommended screening age s, our interest lies in the following function, E{M^(s)(t)} = E_X [ ∫_0^t E{ I(T^(s)≥ u,D^(s)≥ u)|X}dW_1(u) ] +E_X [ ∫_0^t E{ I(T^(s)≤ u,D^(s)≥ u)|X}dW_2(u) ]. 
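Before turning to identification and estimation, a small numerical sketch may help fix ideas about how E{M^(s)(t)} is evaluated once the transition quantities are available. The Python sketch below uses made-up baseline hazards and regression coefficients (every number is an assumption for illustration, not an estimate from the WHI analysis), takes W_1(u)=W_2(u)=u so that E{M^(s)(t)} reduces to a restricted mean survival time, and evaluates the healthy-state and diseased-state terms on a time grid for a few screening ages s.

```python
import numpy as np

# Toy ingredients -- every number here is an assumption for illustration only.
beta01, beta02, beta13 = -0.8, 0.0, -0.3   # log-hazard effects of the screening indicator
gamma01, gamma02, gamma13 = 0.3, 0.2, 0.1  # effects of a scalar baseline covariate x
alpha = 0.02                               # effect of age at onset on the post-onset hazard
base13 = 0.05                              # constant baseline hazard for the 1 -> 3 transition

def lam01_0(u):  # baseline hazard, healthy -> disease
    return 1e-4 * np.exp(0.08 * u)

def lam02_0(u):  # baseline hazard, healthy -> death without disease
    return 2e-4 * np.exp(0.09 * u)

def rmst(s, x, t=90.0, h=0.1):
    """Plug-in evaluation of E{M^(s)(t)} with W_1(u) = W_2(u) = u for one covariate value x."""
    u = np.arange(h, t + h, h)
    z = (u > s).astype(float)                           # Z(u; S = s) = I(s < u)
    lam01 = lam01_0(u) * np.exp(beta01 * z + gamma01 * x)
    lam02 = lam02_0(u) * np.exp(beta02 * z + gamma02 * x)
    P1 = np.exp(-np.cumsum(lam01 + lam02) * h)          # P(T >= u, D >= u | x, s)
    dF = P1 * lam01 * h                                 # increment of the disease incidence function
    P2 = np.zeros_like(u)                               # P(T < u, D >= u | x, s)
    for j, uj in enumerate(u):
        v = u[: j + 1]                                  # possible onset ages before uj
        scale = np.exp(alpha * v + beta13 * (v > s) + gamma13 * x)
        Lam13 = base13 * (uj - v) * scale               # cumulative 1 -> 3 hazard over the sojourn
        P2[j] = np.sum(np.exp(-Lam13) * dF[: j + 1])
    return h * (P1.sum() + P2.sum())

for s in (50.0, 60.0, 70.0):
    print(f"screening at age {s:.0f}: RMST up to age 90 is approximately {rmst(s, x=0.0):.2f} years")
```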
Let Λ_0k^(s)(t) = ∫_0^t λ_0k^(s)(u) du, Λ_0k(t) = ∫_0^t λ_0k(u) du, k=1, 2, Λ_13^(s)(t) = ∫_0^t λ_13^(s)(u) du, and Λ_13(t) = ∫_0^t λ_13(u) du, the first term on the right hand side of (<ref>) equals E_X [ ∫_0^t exp{-Λ_01^(s)(u|X) - Λ_02^(s)(u|X)} dW_1(u) ] = E_X [ ∫_0^t exp{-Λ_01^(s)(u|X, Z̅(u;S)) - Λ_02^(s)(u|X, Z̅(u;S))} dW_1(u) ], = E_X [ ∫_0^t exp{-Λ_01 (u|X, Z̅(u;S=s)) - Λ_02(u|X, Z̅(u;S=s))} dW_1(u) ], ≡ E_X {∫_0^t (min(T, D) ≥ u | X, Z̅(u;S=s)) dW_1(u) }, where the first equality is due to Assumption A3 and the second equality is due to Assumption A2. Following similar arguments, the second term on the right hand side of (<ref>) equals E_X [ ∫_0^t∫_0^u (D^(s)≥ u|T^(s)=v,X)dF_T^(s)(v|X)dW_2(u) ] = E_X [ ∫_0^t∫_0^u exp{-Λ_13^(s)(u-v|T^(s)=v,X)}dF_T^(s)(v|X)dW_2(u) ] = E_X [ ∫_0^t∫_0^u exp{-Λ_13^(s)(u-v|T^(s)=v,X,Z̅(v;S))}dF_T^(s)(v|X,Z̅(v;S))dW_2(u) ] = E_X [ ∫_0^t∫_0^u exp{-Λ_13(u-v|T=v,X,Z̅(v;S=s))}dF_T(v|X,Z̅(v;S=s))dW_2(u) ], ≡ E_X {∫_0^t (T< u, D ≥ u | X, Z̅(u;S=s)) dW_2(u) }, where F_T^(s)(·) is the cumulative incidence function (CIF) for the potential time at disease onset, defined as F_T^(s)(u|X) = ∫_0^uexp{-Λ_01^(s)(v|X) - Λ_02^(s)(v|X)}dΛ_01^(s)(v|X) = ∫_0^uexp{-Λ_01(v|X,Z̅(v;S=s))-Λ_02(v|X,Z̅(v;S=s))}dΛ_01(v|X,Z̅(v;S=s)), and the second and third equalites are due to Assumptions A3 and A2, respectively. Therefore, to estimate the causal estimand E{M^(s)(t)} for a given screening age s, it has come down to estimate the hazard functions λ_0k(t|X,Z̅(v;S=s)), k = 1, 2, and λ_13(t|X,Z̅(v;S=s)) with observed screening history Z̅(v;S=s) and baseline risk factor X. We employ the Cox proportional hazards models λ_0k(t|Z̅(t;S),X) = λ_0k,0(t)exp(β_0kZ(t;S)+γ'_0kX), k = 1, 2, λ_13(t̃|T,Z̅(T;S),X) = λ_13,0(t̃)exp(α T+β_13Z(T;S)+γ'_13X), where λ_01,0(t), λ_02,0(t) and λ_13,0(t̃) are unspecified baseline hazard functions, and θ_01 = (β_01,γ_01')', θ_02 = (β_02,γ_02')' and θ_13 = (α, β_13, γ_13')' are the regression coefficients. §.§ Estimation The observed data consist of n independently and identically distributed random variables, 𝒪_i, i = 1, …, n, where 𝒪_i={U_i,V_i, δ_1i,δ_2i, δ_3i, Z̅(U_i; S_i) }. The estimation procedure under the Cox proportional hazards model with competing risks and time-dependent covariates has been well established; see e.g., <cit.>. Hence, the maximum partial likelihood estimators θ_01 and θ_02 can be obtained by solving the following respective partial likelihood score equations ∑_i=1^n ∫_0^τ{Z_i(t) - s_01^(1)(t; θ_01)/s_01^(0)(t; θ_01)} N_i01(dt) = 0, and ∑_i=1^n ∫_0^τ{Z_i(t) - s_02^(1)(t; θ_02)/s_02^(0)(t; θ_02)} N_i02(dt) = 0, where N_i01(t), N_i02(t), Y_i0(t) are the realizations of N_01(t), N_02(t) and Y_0(t) for ith individual, and Z_i(t)' = (Z(t;S_i), X_i'), s_01^(j)(t; θ_01) = ∑_i=1^n Y_i0(t) Z_i^⊗ j(t;S_i) exp{β_01Z(t;S_i)+γ'_01X_i}, and s_02^(j)(t; θ_02) = ∑_i=1^n Y_i0(t) Z_i^⊗ j(t) exp{β_02Z(t;S_i)+γ'_02X_i} with Z^⊗ j = 1 and Z for j = 0 and 1, respectively. Under the competing risks framework, the dependence between time to disease and time to death is left unspecified and they are not assumed to be independent <cit.>. While the data are presented as right censoring, the estimation procedure can straightforwardly accommodate left truncation data by appropriately adjusting the risk set Y_i0(t) = I(U_i ≥ t > L_i) where L_i is the left truncation time for the ith individual. Let N_i13(t̃) and Y_1i(t̃) represent realizations of the time-shifted counting processes N_13(t̃) and Y_1(t̃) for the ith individual since disease onset. 
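The sketch below illustrates how the cause-specific models for the 0→1 and 0→2 transitions, with the time-varying screening indicator, and the sojourn-scale model for the 1→3 transition (whose partial likelihood is detailed in the next paragraph) might be fit in practice using the lifelines Python package. The long-format construction splits each subject's follow-up at the screening age so that Z(t;S)=I(S<t) switches from 0 to 1. The simulated toy data, the single baseline covariate, and the omission of left truncation at study entry are all simplifications for illustration; this is not the authors' code.

```python
import numpy as np
import pandas as pd
from lifelines import CoxTimeVaryingFitter, CoxPHFitter

rng = np.random.default_rng(7)
n = 3000
x = rng.normal(size=n)                         # a single baseline covariate (toy)
T = 50 + 40 * rng.weibull(2.0, n)              # toy age at disease onset
D = 55 + 35 * rng.weibull(3.0, n)              # toy age at death
C = rng.uniform(60.0, 95.0, n)                 # toy censoring age
S = rng.uniform(45.0, 75.0, n)                 # toy age at first screening

U = np.minimum(np.minimum(T, D), C)
delta1 = (T <= np.minimum(D, C)).astype(int)
delta2 = ((1 - delta1) * (D <= C)).astype(int)

# Long format for the 0 -> 1 and 0 -> 2 transitions: split follow-up at S so that
# the time-varying covariate screen = Z(t;S) is constant on each (start, stop] interval.
rows = []
for i in range(n):
    if S[i] < U[i]:
        rows.append((i, 0.0, S[i], 0, 0, 0, x[i]))
        rows.append((i, S[i], U[i], 1, delta1[i], delta2[i], x[i]))
    else:
        rows.append((i, 0.0, U[i], 0, delta1[i], delta2[i], x[i]))
long_df = pd.DataFrame(rows, columns=["id", "start", "stop", "screen", "d1", "d2", "x"])

# Cause-specific Cox models with the time-varying screening indicator.
fit01 = CoxTimeVaryingFitter().fit(long_df.drop(columns="d2"), id_col="id",
                                   event_col="d1", start_col="start", stop_col="stop")
fit02 = CoxTimeVaryingFitter().fit(long_df.drop(columns="d1"), id_col="id",
                                   event_col="d2", start_col="start", stop_col="stop")

# 1 -> 3 transition on the sojourn scale, among subjects with an observed disease onset;
# covariates are the onset age, screening before onset, and the baseline covariate.
ill = delta1 == 1
df13 = pd.DataFrame({
    "sojourn": np.minimum(D, C)[ill] - T[ill],
    "d3": (D <= C)[ill].astype(int),
    "T_onset": T[ill],
    "screen_before_onset": (S < T)[ill].astype(int),
    "x": x[ill],
})
fit13 = CoxPHFitter().fit(df13, duration_col="sojourn", event_col="d3")
print(fit01.summary[["coef"]], fit02.summary[["coef"]], fit13.summary[["coef"]], sep="\n")
```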
The time origin has shifted from study entry to the onset of the disease, and the partial likelihood now includes only individuals who have developed the disease. We can then construct a partial likelihood function for the parameters θ_13 in the model (<ref>) given the data N_i13(t̃) and Y_1i(t̃) for all individuals who develop the disease. Notably, this likelihood yields a consistent estimator θ_13, because the model (<ref>) includes T and X as covariates. Under Assumption A1, it is implied that the sojourn time (D-T) and the (sojourn) censoring time (C-T) are independent, given T and X. Consequently, we can obtain the maximum partial likelihood estimator θ_13 by solving the score equation ∑_i=1^n ∫_0^τ{Z_i1 - s_13^(1)(t̃; θ_13)/s_13^(0)(t̃; θ_13)} N_i13(dt̃) = 0, where Z'_i1= (T_i, Z(T_i;S_i), X_i'), s_13^(j)(t̃; θ_13) = ∑_i=1^n Y_i1(t̃) Z_i1^⊗ jexp{α T_i + β_13Z(T_i;S_i)+γ'_13X_i}. The cumulative baseline hazard functions {Λ_01,0(t), Λ_02,0(t), and Λ_13,0(t)} can be estimated with the Breslow estimators provided in Supplemental Materials (SM) Section 1. An estimator of E{M^(s)(t)} can then be obtained by plugging {θ_01, θ_02, θ_13} and the Breslow estimators {Λ_01,0(t), Λ_02,0(t), Λ_13,0(t)} into Eqs. (<ref>) and (<ref>). Specifically, denote P_1(t|X,s) = (min(T,D)≥ t|Z̅(t;S=s),X), P_13(t|r, X, s) = P(D ≥ t|T=r,Z̅(T;S=s),X) and P_2(t|X,s) = (D≥ t,T≤ t|Z̅(t;S=s),X), and the estimators are P_1(t|X, s) = exp{ - Λ_01(t|Z̅(u;S=s),X) - Λ_02(t|Z̅(u;S=s),X) }, P_13(t|r, X, s) = exp{-Λ_13(t-r|T=r,Z̅(r;S=s),X) }, P_2(t|X, s ) = ∫_0^tP_13(t|r, X, s) dF_T(r|Z̅(r;S=s),X), where cumulative hazard function estimators Λ_0k(t|Z̅(t; S=s, X) = ∫_0^t exp{β_0kZ(u; S=s) + γ_0k' X}Λ_0k,0 (du) for k =1, 2 and Λ_13(t-r|T=r,Z̅(r;S=s),X) = ∫_0^t-rexp{αr +β_13Z(r;S=s)+γ'_13X}Λ_13,0(du), as well as cumulative disease incidence estimator F_T(t|Z̅(t;S=s),X) =∫_0^t P_1(u|X, s) exp{β_01Z(u;S=s)+γ'_01X}Λ_01,0(du). The estimator of E{M^(s)(t)} is thus given by E{M^(s)(t)}=1/n∑_i=1^n(∫_0^t P_1(u|X_i, s) dW_1(u)+∫_0^t P_2(u|X_i, s ) dW_2(u) ). To obtain the variance estimator, we draw B bootstrap samples from the observed data {𝒪_i, i=1,…,n} and obtain E{M^(s)(t)} with each bootstrap sample. The variance of E{M^(s)(t)} is estimated by the empirical variance over B bootstrap samples. We have the following theorem: Under Assumptions (A1)–(A4) and regularity conditions (B1)–(B5), for any given s∈ [0,t], as n goes to infinity, * E{M^(s)(t)} converges to E{M^(s)(t)} uniformly over t∈[0,τ]. * √(n)[ E{M^(s)(t)} - E{M^(s)(t)}] converges weakly to a mean-zero Gaussian process over t∈[0,τ]. * The bootstrap-based variance estimator for E{M^(s)(t)} is a consistent estimator of the asymptotic variance of E{M^(s)(t)}. The main step of the proof involves demonstrating that Eq. (<ref>) asymptotically equals a sum of independent and identically distributed martingales. The subsequent part of the proof utilizes martingale theory. The details of the proof are provided in SM Section 2. §.§ Some Common and Useful Estimands In this section, we provide a detailed derivation of examples of E{ M^(s)(t)} as outlined in Section <ref> under the illness-death model (Figure <ref>). §.§.§ Example 1: RMST The causal estimand for the RMST can be obtained by assuming W_1(u) = W_2(u) = u in Eq. (<ref>) for E{M^(s)(t)}. Combining Eqs. (<ref>) and (<ref>) yields the RMST estimand E{M^(s)(t)} = E_X {∫_0^t E{ I(D^(s)≥ u)|X}du}, which equals ∫_0^t (D ≥ u |Z̅(u;S=s),X)du. 
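Operationally, the plug-in construction described above reduces to a few lines: cumulative hazards evaluated on a time grid are turned into P_1 and P_2, combined with the weight increments, and the whole pipeline is re-run on bootstrap resamples for the variance. The inputs below (pre-computed hazard arrays, a callable for the sojourn hazard, a generic estimator function) are hypothetical stand-ins for the fitted Cox/Breslow quantities, not the authors' code.

```python
import numpy as np

def plug_in_M(t_grid, Lam01, Lam02, Lam13, dW1, dW2):
    """Plug-in estimate of E{M^(s)(t)} for one covariate profile and screening age s.

    Lam01, Lam02 : arrays of Lambda_01(u | X, Zbar(u; S=s)) and Lambda_02(...) on t_grid
    Lam13        : callable (u, r) -> Lambda_13(u - r | T=r, X, Zbar(r; S=s))
    dW1, dW2     : weight increments on t_grid (e.g. the grid spacing for RMST-type weights)
    """
    P1 = np.exp(-(Lam01 + Lam02))                    # P(min(T, D) >= u | ...)
    dF_T = P1 * np.diff(Lam01, prepend=0.0)          # increments of the CIF F_T
    P2 = np.array([
        sum(np.exp(-Lam13(u, r)) * dF_T[j] for j, r in enumerate(t_grid) if r <= u)
        for u in t_grid
    ])                                               # P(T < u, D >= u | ...)
    return float(np.sum(P1 * dW1 + P2 * dW2))

def bootstrap_se(rows, estimator, B=100, seed=1):
    """Nonparametric bootstrap SE: rerun the full estimation on resampled subjects."""
    rng = np.random.default_rng(seed)
    n = len(rows)
    reps = [estimator([rows[j] for j in rng.integers(0, n, n)]) for _ in range(B)]
    return float(np.std(reps, ddof=1))
```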
Since it solely concerns mortality, it is sufficient to model only time to death, D, which has been the conventional approach <cit.>. Here, we decompose the overall survival probability into two components to account for the disease an individual could experience. This allows for a better modeling of the screening effect on the disease process, and subsequently, death. Therefore, the RMST can be written as E{M^(s)(t)} = E_X{∫_0^t P_1(u|X,s) du } + E_X{∫_0^t P_2(u|X,s) du }, which can be estimated by E{M^(s)(t)} = 1/n∑_i=1^n(∫_0^t P_1(u|X_i,s)du+∫_0^t P_2(u|X_i,s)du). §.§.§ Example 2: Cause-Specific Mean Life Years Lost We define the estimand for the mean life years lost given recommended screening at age s to be E{L^(s)(t)} = t - E{M^(s)(t)}. It can be further decomposed into sum of two cause-specific estimands: (1) mean life years lost through the disease pathway E{L^(s)_0 → 1 → 3 (t)} = E_X {∫_0^t ∫_0^ u f_ D^(s)(v|T^(s)<v, X) (T^(s)<v|X) dv du}; and (2) mean life years lost due to other causes E{L^(s)_0 → 2 (t)} = E_X {∫_0^t ∫_0^ u f_ D^(s)(v| T^(s)≥ v, X) ( T^(s)≥ v|X )dv du }, where f_D^(s)(v|T^(s), X) is the conditional probability density function of D^(s) given T^(s) and X. After some algebra and under Assumptions A2 and A3 (details are provided in SM Section 3), these two terms can be written as following E{L^(s)_0 → 1 → 3(t)} = E_X {∫_0^t ∫_0^ u {1-P_13(u|v,X,s)} P_1 (v|X,s) Λ_01(dv|X, Z(v;S=s)) du }, and E{L^(s)_0 → 2 (t)} = E_X {∫_0^t ∫_0^ u P_1 (v|X,s) Λ_02(dv|X, Z(v;S=s))du }. Therefore, the expected loss years due to disease (0 → 1 → 3) and other causes (0 → 2) before time t for individuals with screening at age s can be respectively estimated by E{L^(s)_0 → 1 → 3 (t)} = 1/n∑_i=1^n ∫_0^t∫_0^u { 1-P_13(u|v,X_i,s) }P_1(v|X_i,s) dΛ_01(v|Z(v;S_i=s),X_i)du, and E{L^(s)_0 → 2 (t)} = 1/n∑_i=1^n ∫_0^t∫_0^u P_1(v|X_i,s)dΛ_02(v|Z(v;S_i=s),X_i)du. §.§.§ Example 3: QALY and Quality-Adjusted Years Lost Similar to the estimand for the RMST, we can define the estimand for the QALY. Let Q(u) denote a quality score function ranging from 0 to 1 with 1 being healthy and 0 being death, and set W(u) = ∫_0^u Q(v)dv=u to indicate the cumulative quality score up to time u. Under the illness-death model where an individual is healthy, Q(u) = 1. If an individual is in the diseased state, the quality score will be below 1. Following the unified cost-effectiveness measure E{M^(s)(t)} in Eq. (<ref>), the estimand for the QALY can thus be defined as E_X [ ∫_0^t E{ I(T^(s)≥ u,D^(s)≥ u)|X}du ] +E_X [ ∫_0^t E{ I(T^(s)≤ u,D^(s)≥ u)|X}Q(u) du ]. The first term remains the same as the first term in the RMST. However, the second term differs from the RMST as utility loss attributed to the disease is incorporated. We can also extend the QALY concept to cause-specific QALY lost due to the disease as E_X {∫_0^t ∫_0^ u f_ D^(s)( v| T^(s)≥ v, X) ( T^(s)≥ v|X )dv Q(u) du }, which can be estimated by 1/n∑_i=1^n ∫_0^t∫_0^u P_1(v|X_i,s){ 1-P_13(u|v,X_i,s) } dΛ_01(v|Z(v;S_i=s),X_i)Q(u) du. For an individual having the disease, here, in our CRC example, the quality score Q(·) may vary depending on which phase of cancer care the individual is in. We further illustrate estimation for the QALY and quality adjusted life years lost using utility losses due to cancer care in the real data analysis in Section <ref>. §.§.§ Example 4: Number of Screenings The estimand for the number of screenings is a special case of Eq. 
(<ref>) by setting W_1(u) to be the cumulative function for the number of screenings up to time u and W_2(·)=0, because screening, as a preventive measure, is assumed to take place before the cancer diagnosis or any other terminal event. In the example of CRC colonoscopy screening, the US Preventive Services Task Force recommends that individuals should commence screening at the age of 50 and undergo colonoscopy every 10 years <cit.>. Hence for CRC screening, we can write W_1(u;s) = ⌊ (u-s+10)_+/10 ⌋, where u denotes the age in years, s is the age at the first screening (e.g., 50 years old), x_+ is x if x>0 and 0 otherwise, and ⌊ x ⌋ is the largest integer not exceeding x. The estimand is E{M^(s)(t)} = E_X ∫_0^t E{ I(T^(s)≥ u,D^(s)≥ u)|X}dW_1(u; s), and the cost of screening can be estimated by E{M^(s)(t)} = 1/n∑_i=1^n ∫_0^t P_1(u|X_i,s) dW_1(u;s). § COST-EFFECTIVENESS ANALYSIS OF CRC SCREENING We assessed the cost-effectiveness of CRC screening using data from the WHI. Our primary aim is to examine how the starting age for CRC screening affects CRC incidence and mortality. To accomplish this, we conducted the analysis on the 65,062 study participants who had not undergone CRC screening at the beginning of the study. Comprehensive data on socio-demographic and epidemiologic factors were collected at study entry. The mean follow-up duration was 16 years, with a maximum of 25 years. During this period, 1,291 individuals (2.0%) developed CRC, of whom 603 (1.0%) died after being diagnosed with CRC. Additionally, 15,485 participants (23.8%) died without experiencing CRC. Out of the total participants, 44,912 (69.0%) underwent CRC screening, with an average age at screening initiation of 67.1 years. The outcomes of interest were the age at diagnosis of CRC and age at death, both subject to left truncation (age at enrollment) and right censoring (age at the last follow-up). We employed our proposed estimators to assess RMST, QALY, total cost in terms of the number of colonoscopies, and (quality-adjusted) mean life years lost due to CRC and other causes. These estimators were assessed for different ages at initiating CRC screening, specifically, s= 50, 60, and 70 years, over the time interval from 50 years to t= 60, 70, and 80 years. Six baseline risk factors were included, based on risk prediction models developed by <cit.> and <cit.>. These factors were obesity (BMI <30, ≥ 30 kg/m^2), family history of CRC among first-degree relatives (no, yes), exercise (0, 0-2, >2 hours per week), use of aspirin and other nonsteroidal anti-inflammatory drugs (NSAIDs) (nonuser, regular user), vegetable consumption (# of servings per day), and estrogen status within the last 2 years (negative, positive). Two demographic factors were also included: education (high school or less, some college, college/graduate degree) and race and ethnicity (White, Asian/Pacific Islander, Black/African American, Hispanic/Latino, and others). The reference level for each categorical covariate is defined as the first value within the parentheses. SM Table S1 summarizes these risk factors. We fit the multi-state models (<ref>) and (<ref>) in Section <ref> with time-dependent screening status, while adjusting for the aforementioned demographic and risk factors. In model (<ref>) for mortality since cancer, age at diagnosis was also included. The risk sets were adjusted for left truncation by age at enrollment. The results are presented in Table <ref>. CRC screening reduced the risk of developing CRC by 57%.
Obese individuals or those with a positive family history had a higher risk of developing CRC, while regular exercise and a positive estrogen status were associated with reduced risk. For death after CRC, screening decreased the risk by 17%. Obesity remained a significant risk factor for mortality, and older age at CRC diagnosis was associated with a higher risk of earlier mortality. §.§ Restricted Mean Survival Time (RMST) Based on these models, we estimated the RMST for no screening and screening at age 50, 60 and 70 years old (Table <ref>(a)). We used 100 bootstrap samples to calculate the standard errors (SE). Throughout this section, we report life years gained per 1000 individuals, following the convention of cost-effectiveness analyses (see, e.g., <cit.>). Compared to no screening, initiating screening at age 50 would improve the RMST over a 30-year follow-up period (t=80) for the disease-free stage by (28.408 - 28.098)*1000 = 310 years per 1000 individuals (p-value < 0.001, 95% CI, 196 to 424). On the other hand, it would reduce the RMST by (0.142 - 0.309)*1000 = -167 years (95% CI, -208 to -126, p-value <0.001) for individuals who developed CRC, as those who underwent screening and still developed CRC tended to have cancer at an older age, resulting in a shorter RMST since cancer diagnosis. Importantly, the total RMST would still improve by (28.550-28.407) * 1000 = 143 years per 1000 individuals (95% CI, 39 to 247, p-value =0.005). Screening at older ages, specifically at 60 and 70 years, would also lead to improvements in both total RMST and disease-free RMST, albeit to a lesser degree. Similar benefits were observed with shorter follow-up times at t=60 and 70 years. For comparison, we fit a Cox proportional hazards model treating death as a single primary event. We used the same six risk factors as in the multi-state models (SM Table S2), and the associations were consistent with those observed for mortality before CRC in the multi-state modeling approach. The RMST estimates under this model were in line with the total RMST obtained from the multi-state model (Table <ref>(b)). For example, the RMST for screening at 50 years at t=80 was 28.544 vs. the total RMST 28.550 under the multi-state model. This similarity is likely due to the fact that CRC is relatively rare, and the majority would die without CRC. When the disease is not rare, our multi-state modeling-based estimators are more efficient and less biased, as illustrated in one of the simulation scenarios in Section <ref>. §.§ Quality-adjusted Life Years (QALY) For evaluating the cost-effectiveness of cancer screening, studies often report QALY to account for the substantially lower health quality during the cancer stage <cit.>. Our proposed measure can incorporate the quality of life score as a time-varying weight function to obtain QALY. Specifically, for the cancer-free stage, we set Q(u) = 1, the same as the RMST for this stage, as there was no loss of quality of life during the disease-free stage. For the cancer stage, we derived the quality of life score based on Table 1 in <cit.>. Briefly, we considered three clinically relevant phases for CRC care: initial, continuing, and terminal care. The initial care phase was defined as the first year after diagnosis; the terminal care phase was the last year of life; and the continuing care phase was all years in between. For participants surviving ≤ 2 years, the final year was considered the terminal care, and the remaining time was allocated to initial care.
For participants surviving ≤ 1 year, all of the time was considered the terminal care. The quality score for CRC care is measured on a continuous scale and depends on the duration between the age at cancer diagnosis and age at death. Let a, b and c be the quality scores associated with the initial, continuing and terminal care for CRC, respectively (SM Table S3), and we have: Q(u; T, D) = {[ aI(T≤ u ≤ T+1)+bI(T+1<u≤ D-1) ; + cI(D-1<u≤ D), D-T > 2,; aI(T≤ u ≤ D-1)+cI(D-1<u≤ D), 1 < D-T ≤ 2,; cI(T<u≤ D), D-T ≤1. ] . We incorporated Q(u; T, D) and obtained E{ M(t) } accordingly (details are provided in SM Section 4.1), and the results are presented in Table <ref>. The difference in QALY during the cancer-free stage was the same as that of RMST. However, the difference in QALY during the cancer stage between individuals screened at age 50 years old and no screening would be 121 years per 1000 individuals (95% CI, 92 to 150). The overall QALY gain from screening at age 50 compared to no screening would be 189 years per 1000 individuals (95% CI, 83 to 295), which was greater than the increase simply in life years (∼ 143 years per 1000 individuals), as indicated by the RMST. §.§ Cause-Specific (Quality-Adjusted) Mean Life Years Lost An important advantage of multi-state modeling is its ability to quantify mean life years lost due to different causes, such as CRC and other causes. This helps us understand the impact of CRC screening on preventing CRC and, consequently, reducing the years lost due to CRC. This is particularly valuable when the disease incidence rate is low. If we only examine the total RMST or QALY, the impact of screening on disease prevention may be overlooked. As shown in Table <ref> and SM Figure S1, if the population underwent screening at age 50, there would be a significant reduction in years lost due to CRC of (0.099-0.037)/0.099 = 62.6% (p-value<0.001) during the 30-year follow-up, compared to no screening. Screening at older ages would also reduce mean years lost to CRC, but to a lesser extent. There were no statistically significant differences in years lost due to other causes between those who had screening and those who did not (p-value > 0.05). When accounting for the quality score, screening would similarly reduce QALY lost due to CRC compared to no screening (Table <ref> and SM Figure S2). §.§ Cost-Effectiveness Analysis Although earlier screening extends lifetime and reduces mean life years lost due to CRC, it imposes a burden on individuals and strains societal resources. This is because earlier screening results in more screenings over a lifetime, following the guidelines of a colonoscopy every 10 years for CRC screening <cit.>. To understand the (quality-adjusted) life years gained relative to the number of screenings, we estimated the number of screenings assuming individuals would adhere to CRC guidelines as described in Section <ref>. As expected, the earlier the initial screening age, the greater the total number of screenings during the follow-up (SM Table S4) and the greater the QALY gain (left panel in Figure <ref>). We calculated the incremental cost-effectiveness ratio (ICER), dividing the increments in the number of screenings by the increments in QALY compared to no screening (right panel in Figure <ref>). When the WHI was initiated, CRC screening was not widely used in the US, and it was more than a decade later when the US Preventive Services Task Force first recommended starting screening at age 50.
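The screening-count weight of the previous section and the ICER just defined are simple enough to state directly; the sketch below uses toy increments rather than the WHI estimates, and the 10-year colonoscopy interval is taken from the guideline quoted above.

```python
import numpy as np

def n_screenings(u, s, interval=10.0):
    """W_1(u; s) = floor(((u - s + 10)_+) / 10): colonoscopies by age u when
    screening starts at age s and is repeated every 10 years."""
    return np.floor(np.clip(u - s + interval, 0.0, None) / interval)

def icer(extra_screenings, extra_qaly):
    """Incremental cost-effectiveness ratio against the no-screening benchmark."""
    return extra_screenings / extra_qaly

print(n_screenings(u=80.0, s=50.0))                  # 4 screenings by age 80
print(icer(extra_screenings=3.2, extra_qaly=0.189))  # toy numbers: ~16.9 screenings per QALY
```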
In this context, we considered no screening as the benchmark strategy. We considered initial screening ages at 50, 55, 60, 65 and 70 years, adding 55 and 65 years to provide a clearer trend analysis. The incremental cost-effectiveness ratio ranged from 13.9 to 18.9 on average. Following the principle that recommendable strategies should have an acceptable ICER (typically <50) <cit.>, all strategies have an acceptable ICER, with screening at age 50 resulting in the highest QALY. This finding provides real-world evidence in support of colonoscopy screening for CRC at age 50 years. We assessed the proportional hazards assumptions of all models, and model (<ref>) showed a modest violation (p-value = 0.04). A close examination suggests the violation is driven by the non-proportional effect of age at diagnosis (p-value <0.01, SM Figure S3). We thus fit a natural cubic spline to reflect the time-varying effect of age at diagnosis (SM Figure S5 and SM Section 4.4). There was little impact on the regression coefficient estimates of all other risk factors (SM Table S5), and the RMST, QALY and cause-specific mean life years lost estimates for screening at various ages and no screening were similar, though with larger standard errors than the multi-state proportional hazards models (SM Table S6). § SIMULATION We conducted simulation studies to examine the finite-sample performance of the proposed multi-state modeling approach for measuring cost-effectiveness. We considered two simulation settings. The first setting (Setting I) aims to demonstrate the finite-sample properties of our proposed approach by modelling the entire disease progression process under a wide range of scenarios. The second setting (Setting II) is designed to mimic the real-data example, in which the disease incidence is relatively low. In each simulation setting, we evaluated the RMST and cause-specific mean years lost due to the disease or other causes for no screening and screening at age 50 years. §.§ Simulation Setting I In this setting, we generated data under a wide range of scenarios. Specifically, we used the Weibull model for the transition hazard functions λ_01(u|Z(u;S),X)=5.2/56.3^5.2 u^5.2-1exp{-1.4Z(u;S)+0.5X} and λ_02(u|Z(u;S),X)=5.9/83.0^5.9 u^5.9-1exp{-0.05Z(u;S)+0.4X}. Here, X was a risk score that may include lifestyle and environmental risk factors (e.g., smoking, alcohol) and genetic risk predisposition (e.g., polygenic risk score). While individual factors in a risk score may be discrete, the aggregation of these risk factors is typically continuous. For simplicity, we generated X ∼(0,1). The screening age S was generated as a function of an individual's risk score X from an exponential distribution with mean 50exp(0.53X). For those who developed the disease, the duration between disease onset and death was generated from an exponential distribution with mean 45exp(-0.3X+0.05T). For the censoring distribution, we considered two scenarios: (i) the censoring time C was completely independent of T, D, S and X, following a (40,100) distribution; (ii) C was conditionally independent of T and D given X, and it was generated under the Cox model with the baseline hazard function assumed to follow the US population age distribution and hazard ratio exp(X). Censoring scenario (i) resulted in approximately 25.4% of individuals developing the disease, 5.0% experiencing death before disease onset, and an overall death rate of 21.2%. Censoring scenario (ii) yielded similar incidence and death rates.
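For reference, Setting I can be reproduced along the following lines. The sketch inverts the piecewise cumulative hazard implied by the Weibull transition models with the time-dependent screening indicator, and uses latent cause-specific times for the competing risks; the standard-normal risk score and the uniform censoring law are assumptions made here purely for illustration, since the distribution families are not restated in the text.

```python
import numpy as np
rng = np.random.default_rng(0)

def weibull_ph_time(shape, scale, eta0, eta1, s, e):
    """Solve Lambda(t) = e for a Weibull-PH cumulative hazard whose log-hazard
    shifts from eta0 to eta1 at the screening age s (Z(t;S) = I(S < t))."""
    H_s = (s / scale) ** shape * np.exp(eta0)        # cumulative hazard at age s
    if e <= H_s:
        return scale * (e * np.exp(-eta0)) ** (1.0 / shape)
    return scale * ((s / scale) ** shape + (e - H_s) * np.exp(-eta1)) ** (1.0 / shape)

def simulate_one():
    x = rng.standard_normal()                        # risk score (assumed N(0,1))
    s = rng.exponential(50 * np.exp(0.53 * x))       # self-selected screening age
    e1, e2 = rng.exponential(size=2)                 # unit-exponential draws
    t1 = weibull_ph_time(5.2, 56.3, 0.5 * x, -1.4 + 0.5 * x, s, e1)   # 0 -> 1
    t2 = weibull_ph_time(5.9, 83.0, 0.4 * x, -0.05 + 0.4 * x, s, e2)  # 0 -> 2
    c = rng.uniform(40.0, 100.0)                     # censoring (assumed uniform)
    if min(t1, t2) >= c:
        return dict(U=c, event=0, x=x, s=s)
    if t2 < t1:                                      # death before disease onset
        return dict(U=t2, event=2, x=x, s=s)
    d = t1 + rng.exponential(45 * np.exp(-0.3 * x + 0.05 * t1))       # sojourn 1 -> 3
    return dict(U=t1, event=1, D=min(d, c), died=d <= c, x=x, s=s)

sample = [simulate_one() for _ in range(2500)]
```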
The sample size was set to be n=2,500 and a total of 300 datasets were generated under each scenario. To evaluate the performance of bootstrap-based inference, 50 bootstrap samples were generated for each simulated dataset. We compared the proposed multi-state approach to an approach that only models the terminal event, death, which we referred to as the 'overall mortality' approach. We evaluated the RMST over the age range [40,70] for both scenarios, initiating screening at 50 years and having no screening. Table <ref> presents a summary of the proposed multi-state and overall mortality RMST estimators. Under both censoring scenarios, the proposed estimator exhibited no bias. The means of the bootstrap-based SE estimates closely matched with the empirical standard deviations (ESD), and the coverage probabilities of 95% CI were close to 95% for both states (disease-free and disease) and overall. In comparison, the RMST estimates obtained from the overall mortality approach were biased for the screening group with coverage probabilities well below 95%, with 30.0% and 33.3% under the independent censoring and conditional independent censoring scenarios, respectively. For the no-screening group, both the proposed and overall-mortality approaches were unbiased; however, the proposed estimators were more efficient. We also examined the performance of the multi-state approach for estimating the cause-specific mean life years lost due to disease and other causes (Table <ref>). Similarly, the proposed estimators for the mean life years lost exhibited no bias. The bootstrap-based SEs closely approximated the ESD and the coverage probabilities were ∼ 95%. §.§ Simulation Setting II In this setting, we generated the data to mimic the colorectal cancer study described in Section <ref>. Specifically, we generated the time to CRC based on a Weibull model fitted to the CRC age-specific SEER incidence rates and age at death based on a Weibull model fitted to the US mortality data. All other settings were the same as in Setting I. The simulation results were based on 100 simulated datasets, each with n=20,000 individuals, under the conditional independence censoring scenario. On average, approximately 2.0% of individuals developed the disease, 15.0% died before developing the disease, and 1.4% died after developing the disease. Table <ref> presents a summary of the RMST estimators for both the proposed multi-state and overall mortality approaches. Both the proposed and overall mortality estimators performed well in estimating the total RMST, and for the proposed approaches, the RMST estimators for both states also performed well. Both estimators showed that screening extended the total lifetime, although the increments were modest. By modelling the entire process, it can be seen that screening primarily extended the state being disease-free, as it was intended to prevent or delay disease onset. This observation is more evident when examining the mean life years lost due to disease and other causes using our proposed approach (Table <ref>). Screening significantly reduced the mean life years lost due to CRC, whereas there was little difference in the mean life years lost due to other causes. These findings were consistent with the analysis results of the WHI colorectal cancer data, where disease incidence was low, and most deaths occurred before the disease occurs. § CONCLUSIONS In this paper, we develop a causal framework for evaluating the cost-effectiveness of a possibly censored continuous intervention. 
Our approach incorporates a unified measure that encompasses common measures such as restricted mean survival time and quality adjusted life years. The unified measure also includes cost-related measurements, such as the number of screenings during the follow up. We employ a multi-state model and treat the data as semi-competing risks. In contrast to conventional cost-effectiveness analysis approaches that focus solely on modeling the terminal event, our multi-state modeling approach allows for the estimation of benefits and costs at each state during disease progression, which can better capture and disentangle the effect of an intervention on the target disease. We establish the asymptotic properties, namely, consistency and asymptotic normality, of the proposed estimators. An application to a colorectal cancer study shows that cancer screening improves overall life expectancy, particularly healthy life years. Notably, our approach indicates that initiating screening at age 50 would offer the greatest improvement compared to no screening within the acceptable ICER. Extensive simulation results show that our proposed multi-state estimators are more accurate and efficient compared to conventional approaches that primarily focus on the terminal event. The Cox proportional hazards model is a well-studied and widely used model in biomedical applications due to its practical advantages and established utility. However, the Cox model's validity hinges on the proportional hazards assumption, which, if violated, can compromise the accuracy of the results. In our application, we meticulously evaluated the goodness-of-fit for models (<ref>) and (<ref>). We identified a potential violation of the proportional hazards assumption in the sojourn model (<ref>), specifically concerning the effect of age at diagnosis. To address this, we incorporated a time-varying effect using splines for age at disease. Despite this adjustment, the results remained largely unchanged. To accommodate more complex relationships between confounders, treatment effects, and survival outcomes, one may consider extending the analysis using survival trees <cit.>. Alternatively, one may consider incorporating the effect of confounders through inverse probability weighting <cit.>. This approach, however, necessitates modeling the time-varying intervention as a function of confounders, a process complicated by semi-competing risks data. Future research in this area could provide valuable insights. It is important to clarify that this paper focuses on the cost-effectiveness of cancer screening for preventive purposes, as opposed to screening for early detection. Screening for early detection involves detecting cancer that has already developed but is not yet clinically apparent, a stage known as the pre-clinical phase. Early detection generally leads to better prognoses and more effective treatment options. However, this could introduce length bias, as individuals with a longer pre-clinical phase may have a prolonged window for detection, potentially leading to over-representation in screening outcomes. Additionally, some individuals in the pre-clinical phase may never progress to advanced stages, resulting in unnecessary, often-intensive cancer treatment. In contrast, preventive screening aims to prevent cancer from occurring in the first place, for instance, by removing precursor lesions before they can develop into cancer.
For example, in colorectal cancer, lesions are removed during colonoscopy screenings, negating the need for further treatment. Preventive screening targets healthy individuals with no current cancer, making length bias irrelevant in this context. The proposed multi-state model may be overly simplistic for describing the carcinogenesis process. Microsimulation modeling, which simulates the disease process for an individual based on a natural history model, can more accurately reflect the adenoma-carcinoma sequence that describes the progression from benign adenomatous polyps to malignant colorectal cancer. However, some underlying assumptions of this approach cannot be directly verified. Incorporating the adenoma-carcinoma sequence into the multi-state model is challenging, because precursor lesions, such as polyps or adenomas detected during colonoscopy, are removed, thereby truncating their growth as intended. Our method could be expanded to include surveillance screening with follow-up screenings. However, due to the complexity (such as truncated growth of precursor lesions and dependent screening schedules), this would require separate development, which is beyond the scope of this paper. Cost-effectiveness analysis for cancer preventive screening is complex and requires different considerations than those for early detection or treatment interventions. Microsimulation modeling has been the primary tool in this area. However, observational data, such as those from the WHI, despite their complexity and limitations, offer real-world evidence and provide invaluable insights for assessing the cost-effectiveness of cancer preventive screening. Empirical modeling should be viewed as complementary to microsimulation modeling. In this context, our proposed causal inference framework for cost-effectiveness analysis of time-varying screening (here, age at first screening) is a starting point to address these important questions using real data. § ACKNOWLEDGEMENTS The work is supported in part by the grants from the National Institutes of Health (R01 CA189532, R01 CA195789, R01 CA236558, P30 CA015704, and U01 CA86368) and the Scientific Computing Infrastructure at the Fred Hutchinson Cancer Research Center, which is funded by ORIP grant S10OD028685. The authors are grateful for the generosity of the WHI investigators in allowing the WHI data to be used to illustrate the proposed method. The WHI program is funded by the National Heart, Lung, and Blood Institute, National Institutes of Health, U.S. Department of Health and Human Services through contracts HHSN268201600018C, HHSN268201600001C, HHSN268201600002C, HHSN268201600003C, and HHSN268201600004C.
http://arxiv.org/abs/2409.02825v1
20240904154310
Deep Learning Meets Satellite Images -- An Evaluation on Handcrafted and Learning-based Features for Multi-date Satellite Stereo Images
[ "Shuang Song", "Luca Morelli", "Xinyi Wu", "Rongjun Qin", "Hessah Albanwan", "Fabio Remondino" ]
cs.CV
[ "cs.CV" ]
Deep Learning Meets Satellite Images – An Evaluation on Handcrafted and Learning-based Features for Multi-date Satellite Stereo Images. Shuang Song^1,2,4 (0000-0002-0037-1499), Luca Morelli^5,6 (0000-0001-7180-2279), Xinyi Wu^1,3 (0009-0004-4437-7157), Rongjun Qin^1,2,3,4 (0000-0002-5896-1379), Hessah Albanwan^7, Fabio Remondino^5 (0000-0001-6097-5342). ^1 Geospatial Data Analytics Lab, The Ohio State University, Columbus, USA ({song.1634, wu.4988, qin.324}@osu.edu); ^2 Department of Civil, Environmental and Geodetic Engineering, The Ohio State University, Columbus, USA; ^3 Department of Electrical and Computer Engineering, The Ohio State University, Columbus, USA; ^4 Translational Data Analytics Institute, The Ohio State University, Columbus, USA; ^5 3D Optical Metrology (3DOM) Unit, Bruno Kessler Foundation (FBK), Trento, Italy ({lmorelli,remondino}@fbk.eu); ^6 Department of Civil, Environmental and Mechanical Engineering, University of Trento, Italy; ^7 Civil Engineering Department, Kuwait University, Kuwait (hessah.albanwan@ku.edu.kw). § ABSTRACT A critical step in digital surface model (DSM) generation is feature matching. Off-track (or multi-date) satellite stereo images, in particular, can challenge the performance of feature matching due to spectral distortions between images, long baselines, and wide intersection angles. Feature matching methods have evolved over the years from handcrafted methods (e.g., SIFT) to learning-based methods (e.g., SuperPoint and SuperGlue). In this paper, we compare the performance of different features, also known as feature extraction and matching methods, applied to satellite imagery. A wide range of stereo pairs (∼ 500) covering two separate study sites are used. SIFT, as a widely used classic feature extraction and matching algorithm, is compared with seven deep-learning matching methods: SuperGlue, LightGlue, LoFTR, ASpanFormer, DKM, GIM-LightGlue, and GIM-DKM. Results demonstrate that traditional matching methods are still competitive in this age of deep learning, although for particular scenarios learning-based methods are very promising. § INTRODUCTION Satellite stereo images are crucial for applications such as 3D modeling <cit.>, mapping <cit.>, reconstruction <cit.>, change detection <cit.>, etc. Their significant advantages are due to their global coverage, low cost per unit area, and frequent revisiting times <cit.>. Current commercial satellites offer images with a ground sampling distance (GSD) of up to 0.3 meters, potentially producing 1:10,000 topographic maps globally <cit.>. Most satellite images are collected under less-than-ideal conditions, since acquisitions are limited by the orbital track and the limited flexibility of satellite steering, making perspective stereo image collection an expensive process. As a result, most satellite stereo images are constructed from single images of the same scene collected on separate dates, oftentimes months and years apart, and even from different sensors (satellites). Such images are collected under different sun illuminations, sensor responses, atmospheric conditions, anisotropic surfaces, and seasonal landcover variations, as well as a larger baseline and intersection angle <cit.>.
Therefore, satellite stereo pairs from different times/tracks, namely off-track stereo images, face elevated challenges when using traditional (handcrafted) algorithms for feature matching and dense stereo matching <cit.>. As a result, the current practice still largely relies on collections that are designated for in-track stereo images, i.e., satellite images taken on the same track and minutes apart, leaving the vast number of satellite images significantly underutilized. Generally, feature matching methods can be simply categorized as traditional and deep learning-based methods <cit.>. Traditional methods are based on handcrafted features (e.g., SIFT <cit.>), while deep learning methods (e.g., SuperPoint <cit.> and SuperGlue <cit.>) are trained to handle extreme appearance and viewing angle changes between the stereo pair images. In the last few years, learning-based approaches have shown consistent progress in image-matching problems and benchmarks <cit.>. Owing to their ability to learn complex features from examples, learning-based methods have been shown to be effective in addressing correspondence problems between images with significant differences in scale, illumination, and colorimetry <cit.>. However, their ability to address the compounded challenges in satellite off-track stereo pairs has only started to be explored <cit.>. In the latter work <cit.>, the authors compared the performance of handcrafted and learning-based matching methods on some 40 challenging stereo pairs from ultra-large multi-date satellite image sets, selected as the stereo pairs where the SIFT matcher can find only a very small number of inliers. In this paper, we performed a more thorough study by testing 496 stereo pairs from the 2019 Data Fusion Contest (DFC) dataset <cit.>. In our evaluation, we consider SIFT as the representative handcrafted method and compare its performance to seven other learning-based matching methods: SuperGlue <cit.>, LoFTR <cit.>, ASpanFormer <cit.>, LightGlue <cit.>, DKM <cit.>, GIM-LightGlue <cit.>, and GIM-DKM <cit.>. The performance of the matching algorithms is evaluated by checking the resulting geometric accuracy of the relative orientation, and the accuracy of the generated digital surface model (DSM) against a reference airborne LiDAR dataset. § RELATED WORKS Early works on feature extraction and matching in satellite imagery noted the unique challenges of off-track satellite stereo images, while most of them focused on evaluating different dense matching algorithms <cit.> or analyzing stereo configurations under varying acquisition conditions <cit.>. For example, <cit.> found that end-to-end learning-based dense stereo matching networks can better process off-track stereo images, although they may suffer from generalization issues for unseen datasets (e.g., different sensors and resolutions). However, these studies neglected the fact that a feature matcher should be studied in the first place to ensure accurate geo-referencing within a bundle adjustment process. In recent years, new approaches based on convolutional neural networks (CNNs) have been proposed to overcome the limitations of traditional handcrafted local features, such as SIFT <cit.> and ORB <cit.>. Conventional methods exhibit suboptimal performance when matching images characterized by substantial variations in illumination conditions and/or viewing angles. Typically, these CNNs are trained via self-supervised techniques, utilizing multi-temporal datasets derived from diverse sensors and including a broad spectrum of objects and environments <cit.>.
Detection and description have been trained either separately, as in Key.Net <cit.> and HardNet <cit.>, or jointly, as in SuperPoint <cit.>. Concurrently, there is a growing trend towards employing learning-based matchers, such as SuperGlue <cit.> and LightGlue <cit.>, among others. For an overview of deep-learning local features and their accuracy evaluation, see <cit.> for more details. Recently, differing from key point and feature descriptor-based matching methods, detector-free matching processes a pair of images and outputs correspondences in one shot <cit.>. LoFTR <cit.> first skips keypoint detection, employing transformers for global matching to succeed in low-texture areas. ASpanFormer <cit.> introduces an adaptive span transformer and was pre-trained to address both low-texture and large perspective changes. DKM <cit.> introduces a dense kernelized feature matching approach that significantly improves two-view geometry estimation. These detector-free methods offer dense and evenly distributed correspondences compared to key point techniques, making them particularly suitable for satellite relative orientation tasks. As mentioned before, classical photogrammetric images are collected under ideal conditions, i.e., with minimal illumination problems and perspective distortions. Therefore, the adoption of learning-based approaches offers fewer advantages than in challenging cases, and sometimes even results in reduced accuracy, as reported by <cit.>. The advantage of learning-based methods, instead, is evident in challenging multi-temporal datasets <cit.> or under different viewing angles <cit.>. It is noteworthy that these approaches have inherent constraints, including the ability to execute predictions solely on images of limited dimensions determined by GPU capabilities, as well as limitations in rotation and scale invariance, as observed in <cit.>. § METHODOLOGY §.§ The Proposed Processing and Evaluation Framework The processing and evaluation framework, shown in <ref>, aims to assess the performance of classic handcrafted and learning-based feature matching methods. Firstly, the satellite off-track stereo pairs are selected with a proper convergence angle and a challenging appearance difference (see <ref>), from which correspondences have been identified by both traditional (e.g., SIFT) and learning-based features (see <ref>). Considering that the localization accuracy of different methods varies, we refine these identified matches using Least Squares Matching (LSM) <cit.>. Using these point correspondences, an RPC-based (Rational Polynomial Coefficients) relative orientation/bias compensation is performed using the RSP (RPC stereo processor) software <cit.>. RSP incorporates RANSAC and adjusts the RPC coefficients for the image pairs. Our evaluation is based on the success rate of relative orientation, the number of correctly matched points (inliers), and the epipolar error (y-parallax in the epipolar space) (see <ref>). We set a threshold on the epipolar error to identify pairs whose corrected RPC coefficients are good enough for dense stereo reconstruction. To ensure fairness in comparison, statistics are all based on those successful stereo pairs. In addition, the accuracy of the subsequently computed DSM is also assessed. After completing the relative orientation step, dense stereo matching is performed to create a DSM using the RSP software <cit.>, which implements a typical SGM (Semi-Global Matching) algorithm <cit.>.
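To make the framework concrete, a much-simplified counterpart of this loop is sketched below for a single pair, with SIFT as the example matcher. A fundamental-matrix RANSAC and point-to-epipolar-line distances stand in for the RPC bias compensation and y-parallax computation performed by RSP, so the numbers are only indicative; file paths and thresholds are placeholders.

```python
import cv2
import numpy as np

def match_and_check(img1_path, img2_path, ratio=0.95, thresh_px=5.0):
    """SIFT matching, RANSAC filtering and an RMS epipolar-error check (toy version)."""
    img1 = cv2.imread(img1_path, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(img2_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img1, None)
    k2, d2 = sift.detectAndCompute(img2, None)
    good = []
    for pair in cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1, d2, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])
    pts1 = np.float32([k1[m.queryIdx].pt for m in good])
    pts2 = np.float32([k2[m.trainIdx].pt for m in good])
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 3.0, 0.999)
    inl = mask.ravel().astype(bool)
    # distance of each inlier in image 2 to its epipolar line (proxy for y-parallax)
    lines = cv2.computeCorrespondEpilines(pts1[inl].reshape(-1, 1, 2), 1, F).reshape(-1, 3)
    d = np.abs(np.sum(lines[:, :2] * pts2[inl], axis=1) + lines[:, 2])
    rms = float(np.sqrt(np.mean(d ** 2)))
    return dict(n_matches=len(good), inlier_ratio=float(inl.mean()),
                rms_epipolar=rms, success=(inl.sum() >= 5) and (rms <= thresh_px))
```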
The reconstructed DSM is compared to a 3D ground truth DSM, created from an airborne LiDAR sensor <cit.>, using completeness and accuracy (see <ref>). §.§ Satellite Off-track Stereo Pairs - Data Preparation Classic feature matching with handcrafted approaches, such as SIFT, has been widely used in aerial / satellite photogrammetry because of its robustness and efficiency <cit.>. However, as mentioned earlier, it falls short in cases where drastic illumination, scale, and/or view differences are observed. Our evaluation focuses on these challenging cases where images show significant appearance differences. To derive 3D geometry, we select stereo pairs with specific intersection angles in the range of 5° to 35° <cit.>. These selected stereo pairs are ranked based on their seasonal and sun illumination differences, i.e., the sun angle difference and the month-of-year difference computed from metadata attributes, respectively. An example where illumination change leads to a huge difference in appearance is shown in <ref>, whereas seasonal differences are shown in <ref>. The month-of-year difference is computed with <ref>, where month_i refers to the month-of-year of the two paired images. min(|month_1 - month_2|, 12-|month_1 - month_2|). After applying the intersection angle criteria, we randomly select K pairs from the pair pool of each tile, where K=5 in our evaluation. §.§ Pair Matching with Handcrafted and Learning-based Features and Matchers As a popular descriptor in academia and industry for the last few decades, SIFT is selected as the representative of the handcrafted methods. Learning-based methods start from the milestone SuperPoint/SuperGlue combination introduced in 2020. <ref> reports the employed methods. For matching SIFT features, the classic nearest neighbor approach is used with a ratio threshold equal to 0.95 instead of 0.80-0.85. Indeed, preliminary tests have shown that, on these datasets affected by extreme seasonal and illumination changes, a ratio threshold that is too low is overly restrictive in discarding ambiguous matches. With a larger threshold, more matches are retained, leaving the elimination of possible outliers to the test with epipolar geometry. SuperPoint and the X-Glue matchers follow the sparse key point detection and feature description stages. SuperGlue and LightGlue are two matching methods based on features extracted by SuperPoint. These algorithms are available in the DIM (Deep-Image-Matching) library <cit.>. Detector-free matching methods use a different protocol, which does not require a key point detector. Those methods yield semi-dense matches instead of sparse keypoints. We include LoFTR, ASpanFormer, and DKM in the detector-free category. GIM <cit.> is a self-training method for image matching, which provides weights for DKM and LightGlue trained on internet videos with a self-training scheme. Our evaluation includes the LightGlue and DKM networks trained with the GIM method, denoted as GIM-LightGlue and GIM-DKM, respectively. §.§ Evaluation Metrics As described in <ref>, the evaluation metrics are twofold: (1) statistics following RPC-based relative orientation and (2) a comparison of the dense reconstruction to the ground truth DSM. Our first metric is based on the statistics of relative orientation, including the success rate, the inlier ratio and the epipolar error of the inliers. Instead of adjusting the full RPC parameters (80 coefficients in total), we employed a 1st-order bias correction similar to a previous work <cit.>.
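As a side note on the pair selection described above, the seasonal ranking criterion of the month-of-year equation and the per-tile sampling reduce to a few lines; the metadata keys used below are hypothetical.

```python
import numpy as np

def month_difference(month_1, month_2):
    """Cyclic month-of-year difference, ranging from 0 to 6."""
    d = abs(month_1 - month_2)
    return min(d, 12 - d)

def select_pairs(pairs, rng, k=5, min_angle=5.0, max_angle=35.0):
    """Keep pairs with an intersection angle in [5, 35] degrees, then sample K per tile."""
    ok = [p for p in pairs if min_angle <= p["intersection_angle_deg"] <= max_angle]
    idx = rng.choice(len(ok), size=min(k, len(ok)), replace=False) if ok else []
    return [ok[i] for i in idx]

print(month_difference(2, 11))   # February vs. November -> 3
```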
The inlier ratio indicates the ratio between the number of inliers after RANSAC and the initial number of matches, and assesses the effectiveness and precision of the feature matching process. A larger number of inliers increases our confidence in the relative orientation results, as it suggests a smaller number of erroneous matches. The epipolar errors (y-parallax) of the inliers are calculated, i.e., for each matched point, the distance in pixels between the point and its corresponding epipolar line. We use the root mean squared epipolar error of all valid matches as a metric, with a smaller error indicating a better matching quality. This metric has been particularly useful in evaluating matching quality when the number of inliers is too low to warrant a reliable relative orientation, potentially impacting the accuracy of the subsequent dense image matching and DSM generation. We use an empirical threshold T=5 px for the root mean squared epipolar error in our evaluation to ensure the quality of dense stereo matching. Any pair that has a greater value than the threshold is marked as a relative orientation failure and will not proceed to the comparison and further processes (e.g., dense stereo matching). Therefore, to ensure a fair comparison, we excluded failed pairs from the statistics of both the relative orientation stage and the dense reconstruction stage. For image pairs where both classic and learning-based methods provide enough matching points for reliable orientation, we assess the RPCs' quality by creating a DSM through dense stereo matching and comparing it to the actual ground truth DSM. In this scenario the metric is composed of the completeness and the accuracy of the resulting DSM. The completeness of the DSM is defined as the percentage of the ground truth DSM's area that the derived DSM covers. Completeness values range from 0% to 100%, with values closer to 100% indicating superior dense reconstruction. The accuracy of the DSM is the RMSE (Root Mean Square Error) between the derived DSM and the ground truth DSM. To eliminate possible systematic errors due to the misalignment of the generated and ground truth DSMs, we apply least squares surface matching to the DSMs <cit.>. Then, the RMSE of pixel-wise distances is computed on the co-registered DSMs, excluding pixels classified as NaN (Not a Number) in either the generated or the ground truth DSM. § EXPERIMENTS AND EVALUATION §.§ Datasets Satellite pairs have been chosen from the DFC2019 <cit.> track 3 dataset, a multiple-date satellite image data processing challenge. These include stereo images captured by the WorldView-3 satellite sensor, which has a spatial resolution of 0.3 meters. Additionally, airborne LiDAR data (spatial resolution of 0.5 meters) are provided as ground-truth geometry for DSM analyses. The DFC2019 challenge provides 107 tiles covering over 40 square kilometers collected in Jacksonville, Florida (JAX, FL), and Omaha, Nebraska (OMA, NE). The JAX area includes 53 tiles; each image is cropped to 2048x2048 pixels, covering about 600x600 meters, while the OMA area includes 54 tiles. The image coverage for each tile varies slightly due to the differences in footprints of the multi-date satellite images. In JAX, 24 images were collected between October 2014 and February 2016. In OMA, 43 images were collected between September 2014 and November 2015. The collection time of each image is plotted in <ref>. The variety of sun directions and viewing directions is visualized in <ref>.
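The two DSM metrics just defined amount to masked-array bookkeeping; the sketch below assumes the generated and ground-truth DSMs are already resampled to a common grid and co-registered by the least squares surface matching step, with empty cells stored as NaN.

```python
import numpy as np

def dsm_metrics(dsm, dsm_gt):
    """Completeness (%) and RMSE (in height units) of a generated DSM vs. ground truth."""
    valid_gt = ~np.isnan(dsm_gt)
    valid_both = valid_gt & ~np.isnan(dsm)
    completeness = 100.0 * valid_both.sum() / valid_gt.sum()
    diff = dsm[valid_both] - dsm_gt[valid_both]
    rmse = float(np.sqrt(np.mean(diff ** 2)))
    return completeness, rmse
```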
Employing the pair selection method outlined in <ref>, we chose up to 5 stereo pairs from each tile, for a total of 496 sampled pairs in our evaluation. The statistics of the properties of the sampled pairs are shown in <ref>. §.§ Analysis with Relative Orientation The RPC-based relative orientation is evaluated in terms of inlier number and epipolar error. After RANSAC, if the number of inliers is less than 5, the relative orientation result is considered unreliable and therefore discarded. Based on this standard, the success rate of relative orientation is presented in <ref>. An important finding is that SIFT matching shows the lowest success rate by a significant margin. This result is illustrated in <ref>, where a pair of images and their matches are reported. The failure is attributed to significant texture changes caused by seasonal differences. A further examination of the inlier ratio statistics of the feature matching methods, as shown in <ref>, shows that learning-based methods with key point detectors present a lower inlier ratio than detector-free methods. DKM, in particular, consistently provides correspondences with an inlier ratio greater than 95%. An interesting finding is that SIFT outperforms learning-based methods with key point detectors like SuperGlue, LightGlue, and GIM-LightGlue in terms of inlier ratio. When evaluating the epipolar error, as depicted in <ref>, (GIM-)DKM, ASpanFormer and LoFTR inliers demonstrate a smaller epipolar error than the SuperGlue, LightGlue and GIM-LightGlue methods. Compared with the learning-based methods, SIFT ranks second in terms of epipolar error, which is competitive with those state-of-the-art learning-based methods. The larger epipolar error of SuperGlue/LightGlue could be explained by the matches from SuperPoint being extracted at pixel level, while SIFT extracts keypoints with sub-pixel accuracy. §.§ Analysis with Dense Stereo Matching <ref> compares DSMs produced with adjusted RPCs using handcrafted and learning-based matching methods. The completeness and accuracy of the DSMs are plotted in <ref>. In terms of completeness, DKM demonstrates the best performance, while LoFTR and ASpanFormer drop in rank. The remaining methods, including SIFT and the other learning-based methods, show similar performance in DSM completeness. <ref> shows the final accuracy of the DSMs reconstructed based on the relative orientation computed with the different methods, compared with the ground truth. SIFT, SuperGlue, and LightGlue present the best DSM accuracy and outperform the rest of the learning-based methods. §.§ Analysis of the Effectiveness of LSM for Point Localization Refinement Least Squares Matching (LSM) <cit.> is a technique for patch-based point matching. It is often used to refine the positions of matched points to achieve sub-pixel accuracy for geometric processing, e.g., relative orientation or bundle adjustment. Considering that feature extraction may be performed on a low-resolution layer of the pyramid (such as SIFT), in our experiment, we explore the effectiveness of using LSM to enhance the accuracy of the matches by adjusting the point locations. We assess the relative change in evaluation metrics (refer to <ref>) with and without LSM using <ref>. Relative change = (m_LSM - m_plain)/m_plain× 100, where m is one of the previously defined metrics. The relative changes (with and without applying LSM) consider geometric processing statistics including inlier ratio, epipolar error, DSM completeness, and DSM accuracy across all pairs.
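For completeness, the relative-change measure in the equation above, written out (the example uses toy numbers):

```python
def relative_change(m_lsm, m_plain):
    """Relative change (%) of a metric after LSM refinement."""
    return (m_lsm - m_plain) / m_plain * 100.0

print(relative_change(0.93, 0.90))   # toy inlier ratios: about +3.3 %
```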
The relative differences (by applying LSM) are shown in <ref>. It can be seen that the statistics improve notably when the matches are refined by LSM; in particular, regarding the quality of the DSM, all methods may benefit from LSM refinement. § CONCLUSIONS This work evaluated the effectiveness of handcrafted and learning-based features for multi-date satellite stereo images. The evaluation focuses on geometric processing problems with off-track satellite stereo pairs. Using a large set of multi-date satellite images, we assessed the quality of the matched points by evaluating the resulting accuracy of the relative orientation and, subsequently, the generated DSM. Our findings revealed that learning-based methods are generally more robust than the handcrafted method in finding matches. This was especially true in cases where the differences in sunlight and seasonal changes posed a challenge. However, for those cases where a handcrafted method is still able to find correspondences, its inliers are accurate in terms of photogrammetric processing. Considering the computational cost and scale-up capability, handcrafted matchers are still competitive in this age of deep learning. As learning-based results are promising, our future work aims to investigate the performance of other learning-based local features and matchers to support the extraction of geometric information from satellite off-track stereo pairs. § ACKNOWLEDGEMENTS The authors are supported in part by the Office of Naval Research [grant numbers N000142012141 & N000142312670] and Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior/ Interior Business Center (DOI/IBC) contract number 140D0423C0075.
http://arxiv.org/abs/2409.02875v1
20240904165853
RISTRETTO: reflected-light exoplanet spectroscopy at the diffraction limit of the VLT
[ "Christophe Lovis", "Nicolas Blind", "Bruno Chazelas", "Muskan Shinde", "Maddalena Bugatti", "Nathanaël Restori", "Isaac Dinis", "Ludovic Genolet", "Ian Hughes", "Michaël Sordet", "Robin Schnell", "Samuel Rihs", "Adrien Crausaz", "Martin Turbet", "Nicolas Billot", "Thierry Fusco", "Benoit Neichel", "Jean-François Sauvage", "Pablo Santos Diaz", "Mathilde Houelle", "Joshua Blackman", "Audrey Lanotte", "Jonas Kühn", "Janis Hagelberg", "Olivier Guyon", "Patrice Martinez", "Alain Spang", "Christoph Mordasini", "David Ehrenreich", "Brice-Olivier Demory", "Emeline Bolmont" ]
astro-ph.EP
[ "astro-ph.EP", "astro-ph.IM" ]
§ ABSTRACT RISTRETTO is a visible high-resolution spectrograph fed by an extreme adaptive optics (AO) system, to be proposed as a visitor instrument on the ESO VLT. The main science goal of RISTRETTO is to pioneer the detection and atmospheric characterisation of exoplanets in reflected light, in particular the temperate rocky planet Proxima b. RISTRETTO will be able to measure albedos and detect atmospheric features in a number of exoplanets orbiting nearby stars for the first time. It will do so by combining a high-contrast AO system working at the diffraction limit of the telescope with a high-resolution spectrograph, via a 7-spaxel integral-field unit (IFU) feeding single-mode fibers. Further science cases for RISTRETTO include the study of accreting protoplanets such as PDS70b/c through spectrally-resolved H-alpha emission, and spatially-resolved studies of Solar System objects such as icy moons and the ice giants Uranus and Neptune. The project is in the manufacturing phase for the spectrograph sub-system, and the preliminary design phase for the AO front-end. Specific developments for RISTRETTO include a novel coronagraphic IFU combining a phase-induced amplitude apodizer (PIAA) with a 3D-printed microlens array feeding a bundle of single-mode fibers. It also features an XAO system with a dual wavefront sensor aiming at high robustness and sensitivity, including to pupil fragmentation. RISTRETTO is a pathfinder instrument in view of similar developments at the ELT, in particular the SCAO-IFU mode of ELT-ANDES and the future ELT-PCS instrument. § INTRODUCTION In 2016, the discovery of Proxima b clearly marked a milestone in the exoplanet field: the star closest to the Sun, at just 1.3 pc, hosts a temperate rocky planet in its habitable zone <cit.>. Given its proximity, angular separation and favorable contrast with respect to the star, there will never be a "better" potentially habitable planet than this one in terms of observability. This triggered the original idea to develop RISTRETTO, a pioneering experiment for reflected-light exoplanet spectroscopy at the VLT <cit.>. Combining an XAO system with a high-resolution spectrograph in the visible, we could show that Proxima b would be amenable to direct detection with an 8m telescope. RISTRETTO would thus become a pathfinder for this science case, which requires an ensemble of state-of-the-art but existing technologies to be combined together for the first time. This would pave the way towards the development of similar instrumentation at the ELT, namely ANDES and PCS. Technical specifications for RISTRETTO were derived from the requirement to characterize Proxima b, which serves as the sizing science case for the instrument. However, they broadly apply to reflected-light exoplanet spectroscopy in general. They can be summarized as follows: an inner working angle of at most 40 mas, a planet coupling efficiency of >50%, a stellar coupling (contrast) of <10^-4, a spectral resolution R> 100,000, a spectral range Δλ/λ > 25%, and a total system throughput >5%. We thus started to design an instrument that would be made of three essential parts: a high-Strehl XAO system, a coronagraphic integral-field unit working at the diffraction limit, and a visible high-resolution spectrograph.
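As a rough sanity check of these specifications (and not a result taken from the project documentation), the sketch below evaluates the maximum star-planet angular separation and the reflected-light flux ratio for a Proxima b-like planet; the semi-major axis, planet radius, geometric albedo and phase factor are assumed values used only for illustration.

```python
import numpy as np

AU_M, R_EARTH_M = 1.496e11, 6.371e6

def max_separation_mas(a_au, dist_pc):
    """Maximum angular star-planet separation (small-angle approximation)."""
    return 1000.0 * a_au / dist_pc               # arcsec = AU / pc; x1000 for mas

def reflected_contrast(albedo, r_planet_rearth, a_au, phase_factor=1.0):
    """Planet-to-star flux ratio ~ A_g * g(phase) * (R_p / a)^2."""
    rp = r_planet_rearth * R_EARTH_M
    return albedo * phase_factor * (rp / (a_au * AU_M)) ** 2

print(max_separation_mas(a_au=0.049, dist_pc=1.3))                       # ~38 mas
print(reflected_contrast(albedo=0.3, r_planet_rearth=1.1, a_au=0.049))   # ~3e-7
```

Under these assumed values the separation lands just inside the 40 mas inner working angle quoted above, while the planet-to-star flux ratio sits far below the 10^-4 raw stellar coupling requirement, which is why the high spectral resolution of the spectrograph is needed to extract the planetary signal.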
We also developed a simulator that allows us to explore the feasibility of the various RISTRETTO science cases, which are briefly described in this paper. § INSTRUMENT OVERVIEW The main sub-systems of the RISTRETTO instrument are listed below. We refer to the corresponding papers presented at this conference for details on the status of each sub-system. * Ultra-fast, high-Strehl XAO system: the need to efficiently couple the planet light into the spectrograph calls for achieving a Strehl ratio >70% in I-band. Moreover, the need to achieve a raw contrast of 10^-4 close to the diffraction limit calls for a very fast correction loop (>2 kHz) to minimize servo-lag error. The current XAO baseline design foresees a dual wavefront sensor made of an H-band unmodulated Pyramid for high sensitivity to low-order modes, complemented by a zY-band Zernike wavefront sensor for optimized sensing of phase discontinuities across the pupil (e.g. low-wind effect). Both WFS will share a common detector, a fast C-RED ONE near-IR camera. Wavefront correction will be achieved by a woofer-tweeter system comprising an ALPAO 97-15 woofer with 100 actuators and a Boston Micromachines 2K-3.5 tweeter with 2000 actuators. We refer to papers 13097-213 (Blind et al.), 13097-189 (Shinde et al.), 13097-295 (Shinde et al.), and 13097-192 (Motte et al.) for detailed end-to-end simulations of the RISTRETTO XAO system and test results from the C-RED ONE camera. * Coronagraphic IFU: RISTRETTO will use the newly-developed concept of the PIAA nuller<cit.> (PIAA-N) to feed a bundle of 7 single-mode fibers arranged in an hexagonal pattern covering the 2-λ/D annulus around the central star. This solution offers both a high throughput for planets located within this annulus and strong suppression of the stellar PSF. We refer to paper 13100-103 (Restori et al.) for a detailed description of this setup and the presentation of test results from a PIAA-N prototype. * Visible high-resolution spectrograph: the RISTRETTO spectrograph will cover the 620-840 nm wavelength range at a spectral resolution R=140,000. It is a diffraction-limited design accepting 7 single-mode fibers as input<cit.>. The spectrograph is currently in the manufacturing and procurement phase, with assembly, integration and tests planned for 2024-2025. Tests of the vacuum vessel and thermal enclosure are ongoing to demonstrate the required thermo-mechanical stability of the instrument once installed on the VLT Nasmyth platform. We refer to paper 13096-283 (Chazelas et al.) for a detailed status update of this sub-system. * Instrument simulator: we are developing a full end-to-end simulator for RISTRETTO as a tool to prepare the scientific observing programme. It includes realistic XAO and PIAA performances and uses the Pyechelle software package<cit.> to generate simulated raw echelle spectra. Performances on real star-planet systems such as Proxima Centauri are being investigated and advanced data analysis techniques are being developed to optimally extract the planetary signal. We refer to paper 13096-358 (Bugatti et al.) for a detailed description of this tool. § EXOPLANET TARGET LIST FOR RISTRETTO Over the past three decades, RV surveys have discovered many exoplanets orbiting very nearby stars. Some of them reach angular separations at maximum elongation that are large enough to make them resolvable by 8m-class telescopes in the visible. 
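To put rough numbers on the Proxima b case, the following Python sketch estimates the maximum elongation and the reflected-light flux ratio from commonly quoted system parameters (semi-major axis ≈ 0.049 AU, distance ≈ 1.30 pc, an assumed radius of 1.1 Earth radii and an assumed geometric albedo of 0.3; these values are illustrative and not taken from this paper).

import numpy as np

AU = 1.496e11            # m
pc = 3.086e16            # m
R_earth = 6.371e6        # m

a = 0.049 * AU           # Proxima b semi-major axis (assumed)
d = 1.30 * pc            # distance to Proxima Centauri (assumed)
Rp = 1.1 * R_earth       # planet radius (assumed, not measured)
Ag = 0.3                 # geometric albedo (assumed)

max_elong_mas = a / d * 180.0 / np.pi * 3600e3
contrast = Ag * (Rp / a) ** 2            # reflected-light flux ratio at full phase
print(f"maximum elongation ~ {max_elong_mas:.0f} mas")   # ~38 mas, near 2 lambda/D
print(f"planet/star flux ratio ~ {contrast:.1e}")        # ~3e-7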
We used the NASA Exoplanet Archive and the RISTRETTO simulator to derive a target list of exoplanets that RISTRETTO will be able to characterize. As can be seen from Fig. <ref>, a dozen of known objects are accessible to RISTRETTO, from cold Jupiters to warm Neptunes to the temperate rocky planet Proxima b. The cold gas giants GJ876b and GJ876c can be detected in just 1-2 hours of observing time. The warm Saturn HD3651b can be detected in about 1 night of integration. The Neptune-mass objects HD192310b, HD102365b and 61 Vir d can be studied in a few nights. The warm super-Earth GJ887c can be reached in less than a night. And Proxima b, if observed at its brightest accessible phase angle, can be detected in about 5 nights if it has an Earth-like atmosphere. For all these objects, the level of characterization will depend on a priori unknown planet properties such as albedo and atmospheric composition. Broadband and chromatic albedos may be derived from the RISTRETTO detection of the reflected stellar spectral lines. Phase curves may be obtained for planets that can be observed at different phase angles. And molecular absorption by water vapor, methane and oxygen may be detected if present in sufficient quantities. Thus RISTRETTO is by no means a "single-target instrument" but truly has the capability to open a new field in exoplanet studies. § OTHER SCIENCE CASES Besides exoplanets in reflected light, RISTRETTO will address a variety of other science cases that will be of broad interest to the planetary science and stellar communities. We briefly summarize them here: * Accreting protoplanets in H-alpha: the RISTRETTO wavelength range covers the H-alpha line, which is a well-known tracer of accretion in young stars and protoplanets. Thus we will be able to study the spectrally-resolved H-alpha line profile in objects such as PDS70b/c and constrain the geometry and kinematics of the accretion flows onto the planet. Moreover, RISTRETTO will be able to search for new protoplanets at 30-50 mas from young stars, which corresponds to ∼3-5 AU at the distance of nearby star-forming regions. This is the typical orbital distance at which most giant planets are thought to form, and thus RISTRETTO will be able to access this crucial parameter space for the first time. * Kinematics of protoplanetary disks: RISTRETTO will be able to derive spatially-resolved Doppler velocity maps of protoplanetary disks in scattered light. This capability would nicely complement the sub-mm velocity maps obtained by ALMA. Resolved disk kinematics including localized velocity anomalies linked to forming planets could be probed with RISTRETTO. * Spatially-resolved stellar surfaces: with an angular resolution of 19 mas, RISTRETTO will be able to resolve the surfaces of the largest stars in the sky, e.g. Betelgeuse (45 mas in diameter) or Antares (41 mas). It will thus be able to study how spectral lines vary in shape and radial velocity as a function of disk position, and how they evolve with time. * Solar System science: RISTRETTO will be able to study local structures at the surfaces of the icy moons of Jupiter and Saturn (e.g. Io, Europa, Titan) with a spatial resolution of 120 km at Jupiter and 240 km at Saturn. It may also measure local wind velocities and chemical abundances in the atmospheres of Uranus and Neptune. § RELEVANCE AND IMPACT This project will demonstrate novel concepts in XAO and high-contrast spectroscopy that will find direct applications in the ELT era. 
At the European level, RISTRETTO joins the SPHERE upgrade project SAXO+<cit.> as one of two XAO demonstrators for the ELT. RISTRETTO will differentiate itself from SAXO+ in several respects: * XAO in the visible, where exoplanets are expected to be most reflective (higher albedos), and thus a unique wavelength range that ELT-PCS will target as well * Inner working angle down to 2 λ/D or 38 mas, required on both the VLT and ELT for targeting exoplanets around nearby stars in reflected light * Unmodulated H-band PyWFS for maximal sensitivity to low-order aberrations * Integrated dual WFS strategy for enhanced sensitivity and robustness to phase discontinuities across the pupil * Coronagraphic integral-field unit * Efficient coupling to a high-resolution spectrograph In terms of science return, RISTRETTO is well positioned to be the first to explore Proxima b, our nearest neighbour and the most accessible temperate rocky world there will ever be. Beyond Proxima b, the RISTRETTO experiment will pioneer exoplanet reflected-light spectroscopy, which is a completely new observational approach. We will for the first time be able to directly probe the atmospheres and surfaces of the exoplanets that are closest to the Solar System. These comprise a diversity of objects, from cold Jupiters to warm Neptunes to terrestrial worlds. Their study will be highly complementary to the ongoing characterization of transiting exoplanets with e.g. JWST. RISTRETTO will offer a first glimpse into this population of nearby planets, which will be the one also targeted by ELT-ANDES, ELT-PCS, and NASA Habitable Worlds Observatory (HWO) in the future. This work has been carried out within the framework of the National Centre of Competence in Research PlanetS supported by the Swiss National Science Foundation under grants 51NF40_182901 and 51NF40_205606. The RISTRETTO project was partially funded through the SNSF FLARE programme for large infrastructures under grants 20FL21_173604 and 20FL20_186177. The authors acknowledge the financial support of the SNSF.
http://arxiv.org/abs/2409.02350v1
20240904003841
Neighbourhood conditions for network stability with link uncertainty
[ "Simone Mariano", "Michael Cantoni" ]
eess.SY
[ "eess.SY", "cs.SY", "math.DS" ]
[ [ ===== § ABSTRACT The main result relates to structured robust stability analysis of an input-output model for networks with link uncertainty. It constitutes a collection of integral quadratic constraints, which together imply robust stability of the uncertain networked dynamics. Each condition is decentralized in the sense that it depends on model data pertaining to the neighbourhood of a specific agent. By contrast, pre-existing conditions for the network model are link-wise decentralized, with each involving conservatively more localized problem data. A numerical example is presented to illustrate the advantage of the new broader neighbourhood conditions. Integral-Quadratic Constraints (IQCs), Network Robustness, Scalable Analysis. § INTRODUCTION Motivated by problems in power and water distribution, transportation, ecology, and economics, large-scale networks of dynamical systems have been long studied in system and control theoretic terms; e.g., see <cit.> for state-space methods, and <cit.> for input-output methods. In this paper, an input-output approach, based on integral quadratic constraints (IQCs)  <cit.>, is pursued to progress the development of scalable stability certificates for networks with uncertain links. The uncertain network model considered here was recently developed in <cit.>, with a focus on assessing the impact of link uncertainty expressed relative to the ideal (unity gain) link. The work is related to <cit.>, and as in <cit.>, most closely to <cit.>. The main contribution is an alternative approach to the decomposition of a monolithic IQC certificate that implies robust network stability. The new decomposition involves a collection of sufficient conditions, each depending on model data pertaining to a specific agent, its neighbours, and the corresponding links. That is, each condition is local to a specific neighbourhood. As such, compared to the link-wise decomposition presented in <cit.>, each neighbourhood based condition involves more (still localized) model data. This provides scope for reduced conservativeness, as illustrated by a numerical example. The paper is organized as follows: Various preliminaries are established next. In Section <ref>, the structured feedback model of a network with uncertain links, and a correspondingly structured IQC based robust stability condition, which is amenable to decomposition, are recalled from <cit.>. Then, in Section <ref>, the novel neighborhood based decomposition is developed, alongside statement of the link-wise conditions from <cit.> for comparison, including a numerical example. Some concluding remarks are provided in Section <ref>. § PRELIMINARIES §.§ Basic notation The natural, real, and complex numbers are denoted , , and , respectively. For i,j∈, [i:j]:={k∈ | i≤ k ≤ j}, which is empty if j<i, _∙α:={β∈ℝ | β∙α} for given order relation ∙∈{>,≥,<,≤}, and [α,β]:=ℝ_≥α∩ℝ_≤β. With ∈{,}, for p∈, the p-dimensional Euclidean space over is denoted by ^p. Given x∈^p, for i∈[1:p], the scalar x_i∈ denotes the i-th coordinate. The vectors 1_p,0_p∈ℝ^p satisfy 1=(1_p)_i=1+(0_p)_i for every i∈[1:p]. ^p× q denotes the space of p× q matrices over , for p,q∈. The identity matrix is denoted by I_p ∈𝔽^p× p, the square zero matrix by O_p∈𝔽^p× p, and the respective p× q matrices of ones and zeros by 1_p× q and 0_p× q. Given M∈^p× q, for i∈[1:p], j∈[1:q], the respective matrices M_(·,j)∈^p× 1∼^p, and M_(i,·)∈^1× q, denote the j-th column, and i-th row. Further, M_(i,j)∈ denotes the entry in position (i,j). 
Given x∈𝔽^p, the matrix M=diag(x)∈𝔽^p× p is such that M_(i,i)=x_i and M_(i,j)=0, j≠ i∈[1:p], whereby I_p=diag(1_p). The transpose of M∈𝔽^p× q is denoted by M^'∈𝔽^q× p, and M^*=M̅^' denotes the complex conjugate transpose. For M_i=M_i^*∈𝔽^p× p, i∈{1,2}, M_i≻ 0 means there exists ϵ>0 such that x^* M_i x ≥ϵ x^*x for all x∈𝔽^p, M_i≽ 0 means x^* M_i x ≥ 0 for all x∈𝔽^p, and M_1≽ (resp. ≻) M_2 means M_1-M_2 ≽ (resp. ≻) 0. §.§ Signals and systems The Hilbert space of square integrable signals v=(t∈_≥ 0↦ v(t)∈^p) is denoted Ł_2 ^p, where the inner-product ⟨ v, u ⟩ := ∫_0^∞v(t)^' u(t) t and norm v_2 := ⟨ v, v ⟩^1/2 are finite; the superscript is dropped when p=1. The corresponding extended space of locally square integrable signals is denoted by Ł_2e^p; i.e., v:_≥ 0→^p such that π_τ(v)∈Ł_2 ^p for all τ∈_≥ 0, where (π_τ(v))(t):=f(t) for t∈[0,τ), and (π_τ(v))(t):=0 otherwise. The composition of maps F:Ł_2e^p ↦Ł_2e^r and G:Ł_2e^q ↦Ł_2e^p is denoted by F∘ G := (v↦ F(G(v)), and the direct sum by F⊕ G:=((u,v)↦ (F(u),G(v))). Similarly, ⊕_i=1^n G_i = G_1⊕⋯⊕ G_n. When G is linear, in the sense (∀α,β∈) (∀ u,v∈Ł_2e^q) G(α u + β v) = α G(u) + β G(v), the image of v under G is often written Gv, and in composition with another linear map ∘ is dropped. The action of a linear G:Ł_2e^q→Ł_2e^p corresponds to the action of p· q scalar systems G_(i,j):Ł_2e→Ł_2e, i∈[1:p], j∈[1:q], on the coordinates of the signal vector input associated with Ł_2e^q∼Ł_2e×⋯ף_2e; i.e., (Gv)_i = ∑_j=1^q G_(i,j)v_j. Matrix notation is used to denote this. Also, for convenience, the map of pointwise multiplication by a matrix on Ł_2e is not distinguished in notation from the matrix. A system is any map G:Ł_2e^q→Ł_2e^p, with G(0)=0, that is causal in the sense π_τ(G(u)) = π_τ(G (π_τ(u)) for all τ∈_≥ 0. It is called stable if u∈Ł_2 ^q implies G(u) ∈Ł_2 ^p and G:=sup_0≠ uG(u)_2/u_2< ∞ (the composition of stable systems is therefore stable.) The feedback interconnection of G with system Δ:Ł_2e^p →Ł_2e^q is well-posed if for all (d_y,d_u)∈Ł_2e^pף_2e^q, there exists unique (y,u)∈Ł_2e^pף_2e^q, such that y = G(u) + d_y, u = Δ(y) + d_u, and [[G,Δ]] := ((d_y,d_u) ↦ (y,u)) is causal; see Figure <ref>. If, in addition, [[G,Δ]] < ∞, then the closed-loop is called stable. The following result is the well-known IQC robust feedback stability theorem, taken from <cit.>: Given stable system Δ:Ł_2e^p →Ł_2e^q, and bounded linear map Π:Ł_2 ^pף_2 ^q →Ł_2 ^pף_2 ^q that is self-adjoint in the sense (∀ g_1,g_2∈Ł_2 ^pף_2 ^q) ⟨ g_1, Π g_2 ⟩=⟨Π g_1, g_2⟩, suppose ⟨ (y,u), Π (y,u) ⟩≥ 0, u=αΔ(y), for all (y,α)∈Ł_2 ^p × [0,1]. Further, given stable system G:Ł_2e^q→Ł_2e^p, suppose [[G,αΔ]] is well-posed for all α∈[0,1], and there exists ϵ>0 such that ⟨ (y,u), Π (y,u) ⟩≤ -ϵ u_2^2, y = G(u), for all u∈Ł_2 ^q. Then, GΔ is stable. §.§ Graphs Let =(,) be a simple (self-loopless and undirected) graph, where =[1:n] is the set of n∈ℕ∖{1} vertices, and ⊂{{i,j} | i,j∈} is the set of m:=||∈ℕ edges. Bijective κ_:→ with :=[1:m] denotes a fixed enumeration of the edge set for indexing. The set _i:={j | {i,j}∈} comprises the neighbours of i∈, _i:={{i,j} | j∈𝒩_i} is the corresponding neighbourhood edge set, and bijective κ__i:_i→_i is a fixed enumeration of the m_i:=|_i| edges, where _i:=[1:m_i]. The edge indexes associated with i∈ are gathered in the set denoted by _i:={κ_({i,j}) | j ∈_i}. 
For each k∈, the set _k:=(_i ∪_j) ∖{k}, where {i,j} = κ_^-1(k), is the collection of all indexes of the edges associated with either of the neighbouring vertices i and j, excluding the one linking them. The following technical result regarding the given graph 𝒢=(,) is used subsequently. Given arbitrary E_k,F_k:Ł_2 ^p →Ł_2 ^p for k∈: * ∑_i∈∑_k∈_i12F_k= ∑_k∈ F_k; * ∑_i∈∑_k∈_i ∑_ℓ∈_i∖{k} E_k ∘ F_ℓ = ∑_k∈∑_ℓ∈_k E_k ∘ F_ℓ. See Appendix. § NETWORKED SYSTEM MODEL AND ROBUST STABILITY ANALYSIS In this section, the network model and robust stability analysis from <cit.> is recalled first. A new result is then derived to underpin the aforementioned neighbourhood decomposition, which is subsequently developed in Section <ref>. Consider a network of n∈∖{1} dynamic agents, coupled according to the simple graph =(,). The vertex set =[1:n] corresponds to a fixed enumeration of the agents, and m:=|| is the number of edges, defined according to {i,j}∈ if the output of agent i∈ is shared as an input to agent j∈, and vice-versa. It is assumed that the number of neighbours m_i:=|_i|≥ 1 for all i∈. Note that ∑_i∈ m_i=2m. To tame the notation, each agent has a single output and dynamics corresponding to the system H_i:Ł_2e^m_i→Ł_2e, which is taken to be linear and stable, with the vector input signal coordinate order fixed by the neighbourhood edge-set enumerations κ__i:_i→_i. Define the block diagonal systems H:=⊕_i=1^nH_i:Ł_2e^2m→Ł_2e^n, T:=⊕_i=1^n1_m_i× 1:Ł_2e^n→Ł_2e^2m, and R:=⊕_i=1^n(⊕_k=1^m_i R_i,k):Ł_2e^2m→Ł_2e^2m, where 1_m_i× 1:Ł_2e→Ł_2e^m_i denotes pointwise multiplication by 1_m_i× 1∈ℝ^m_i× 1, and the system R_i,k:Ł_2e→Ł_2e represents the stable but possibly nonlinear and time-varying dynamics of the link from agent i∈ to its neighbour j∈_i with {i,j}=κ__i^-1(k). Given these components, the networked system can be modelled as the structured feedback interconnection [[P, R ∘ T ∘ H]], where P:Ł_2e^2m→Ł_2e^2m is pointwise multiplication by a permutation matrix arising from the structure of . More specifically, for each i∈, k∈_i, and r∈[1:2m], the corresponding entry of this permutation matrix is given by P_(∑_h∈ [1:i-1] m_h +k, r)= 1 if r= ∑_h∈ [1:j-1]m_h +κ__j({i,j}) with j∈κ__i^-1(k)∖{i}, 0 otherwise, where by convention the sum over an empty index set is zero. See <cit.> for more details about the model and its components. The `routing' matrix P=P^'=P^-1 is the adjacency matrix of an undirected 1-regular sub-system graph :=(,), with 2m vertices and m edges, corresponding to the disjoint union of the two-vertex subgraphs 𝒢[e] induced by each edge e∈ in the network graph 𝒢=(,). As such, P=D-L=I_2m- ∑_k∈ L_k, where the degree matrix D=I_2m since is 1-regular, and the Laplacian decomposes as L=∑_k∈L_k =∑_k∈B_(·,k)B_(·,k)^' =B B^', where B∈ℝ^2m× m is the incidence matrix defined by B_(r,k)=1=-B_(s,k) for {r,s}=κ^-1_(k), and B_(t,k):=0 for each t ∈ [1:2m]∖{r,s}, over k∈; the enumeration κ_ is taken to be compatible with the enumerations κ__i and the definition of P in (<ref>). The edge orientation is arbitrary. The network with ideal links (i.e., R=I_2m) is stable in the sense that [[P, T∘ H]] is stable. As detailed in <cit.>, one can leverage Assumption <ref> in the analysis of [[P,R∘ T ∘ H]] by considering uncertainty in the links R relative to ideal unity gain links. 
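The routing construction above can be checked on a toy network. The following NumPy sketch is our own illustration for a three-agent path graph (the edge enumeration and orientation are chosen arbitrarily): it builds the sub-system incidence matrix B, the Laplacian L = B B^', and verifies that P = I_2m - L is a symmetric permutation matrix satisfying P = P^' = P^-1.

import numpy as np

# Path network 1 - 2 - 3: edges e1 = {1,2}, e2 = {2,3}; m = 2, so 2m = 4 sub-system vertices:
#   v1 = (agent 1, e1), v2 = (agent 2, e1), v3 = (agent 2, e2), v4 = (agent 3, e2)
B = np.zeros((4, 2))
B[0, 0], B[1, 0] = 1, -1      # sub-system edge corresponding to network edge e1
B[2, 1], B[3, 1] = 1, -1      # sub-system edge corresponding to network edge e2

L = B @ B.T                   # Laplacian of the 1-regular sub-system graph
P = np.eye(4) - L             # routing matrix

assert np.allclose(P, P.T)             # symmetric
assert np.allclose(P @ P, np.eye(4))   # involutive, hence P = P' = P^{-1}
assert set(P.ravel()) <= {0.0, 1.0}    # 0/1 entries: a permutation matrix
print(P)   # swaps v1 <-> v2 and v3 <-> v4, i.e. exchanges the signals across each link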
Indeed, as illustrated in Figure <ref>, to verify robust stability of the networked system [[P,R∘ T ∘ H]] it is sufficient to verify that [[G,Δ]] is stable, where Δ:= (R-I_2m) ∘ T, and G := H ∘ (P - T∘ H )^-1, with the stable systems R, H, and T as per (<ref>), and P as per (<ref>). Since (P - T∘ H )^-1 is stable by Lemma 1 in <cit.>, G:Ł_2e^2m→Ł_2e^n, and the uncertain Δ:Ł_2e^n→Ł_2e^2m, are both stable. (<cit.>) Under Assumption <ref>, if [[G,Δ]] is stable, with G as per (<ref>), and Δ as per (<ref>), then the networked system model [[P,R∘ T ∘ H]] is stable. Further, when Δ is also linear, stability of [[P,R∘ T ∘ H]] implies stability of [[G,Δ]]. Suppose, for all (y,α)∈Ł_2 ^n× [0,1], (y,u)Φ(y,u)≥ 0, u=αΔ(y), where the given bounded linear self-adjoint operator Φ:(Ł_2 ^nף_2 ^2m) → (Ł_2 ^nף_2 ^2m) is structured according to Φ = [ Φ_1 Φ_2; Φ_2^* Φ_3 ] =[ ⊕_i=1^n Φ_1,i ⊕_i=1^n Φ_2,i; ⊕_i=1^nΦ_2,i^* ⊕_i=1^n Φ_3,i ]; the superscript * denotes Hilbert adjoint. For example, this IQC holds by selecting each Φ_i := [ Φ_1,i Φ_2,i; Φ_2,i^* Φ_3,i ] :Ł_2 ף_2 ^m_i→Ł_2 ף_2 ^m_i, i∈, such that (y_i,u_i)Φ_i(y_i,u_i)≥ 0, u_i=αΔ_i(y_i), for all (y_i,α)∈Ł_2× [0,1], with local Δ_i=(⊕_k=1^m_iR_i,k-I_m_i)∘1_m_i× 1. Then, by Theorem <ref>, the stability of GΔ is implied by the existence of ϵ>0 such that for all z∈Ł_2 ^2m, [ N; M ]z[ Φ_1 Φ_2; Φ_2^* Φ_3 ][ N; M ]z≤ -ϵ z _2^2, where N:= H, and M:=(P-T∘ H)=I_2m-L-T∘ H, with T, H, and P, as per (<ref>) and (<ref>). Importantly, N and M are structured coprime factors of G=NM^-1, as discussed in <cit.>. Also refer to <cit.> for more on the use of Theorem <ref> to arrive at (<ref>), and discussion of the obstacle to direct application of the decomposition method proposed in <cit.> to the equivalent condition [ I_2m; L; ]z[ [ Ξ_1 Ξ_2; Ξ_2^* Ξ_3 ]] [ I_2m; L; ]z≤ -ϵ z _2^2, where L=∑_k∈ [1:m] L_k is the sub-system graph Laplacian in (<ref>), Ξ_1 := N^*Φ_1N+ N^*Φ_2 J+J^*Φ_2^*N+J^*Φ_3 J , Ξ_2 :=-N^*Φ_2-J^*Φ_3 , Ξ_3 :=Φ_3 , and J:= I_2m-T∘ H. The block-diagonal structure of N, J, Φ_1, Φ_2, Φ_3, and the Hilbert adjoints when restricted to Ł_2, is such that Ξ_1 = ⊕_i=1^n Ξ_1,i, Ξ_2= ⊕_i=1^n Ξ_2,i, and Ξ_3 =⊕_i=1^n Ξ_3,i, where Ξ_1,i := H_i^*Φ_1,iH_i +(I_m_i - 1_m_iH_i)^*Φ_3,i (I_m_i - 1_m_iH_i) + H_i^*Φ_2,i(I_m_i - 1_m_iH_i) +(I_m_i - 1_m_iH_i)^*Φ_2,i^*H_i , Ξ_2,i :=-H_i^*Φ_2,i-(I_m_i-1_m_iH_i)^*Φ_3,i , Ξ_3,i :=Φ_3,i . For context, an existing link-wise decomposition of (<ref>) is first recalled from <cit.>. (<cit.>) Let W=∑_k∈ W_k ∈ℝ^2m× 2m be such that W_k≽ 0 and W≻ 0. Suppose there exist X_k=X^*_k:Ł_2 ^2m→Ł_2 ^2m, Z_k=Z_k^*:Ł_2 ^2m→Ł_2 ^2m, and ϵ_k>0, k∈, such that for all z∈Ł_2 ^2m, [ I_2m; L_k; ] z[ X_k+ϵ_k W_k Ξ_2; Ξ_2^* Z_k ][ I_2m; L_k; ]z≤ 0,  k∈, zΞ_1 z - ∑_k∈zX_k z≤ 0, L zΞ_3 L z - ∑_k∈L_k zZ_k L_k z≤ 0, with L_k as per (<ref>). Then, there exists ϵ>0 such that (<ref>) for all z∈Ł_2 ^2m. The m=|| conditions in (<ref>) enable use of the structure of each L_k=B_(·,k)B^'_(·,k) for link-wise decentralized verification of (<ref>), although this can be conservative. An alternative is to decompose according to the broader structure of each K_i=∑_k∈_i L_k, i∈, which encompasses the neighborhood of agent i. Let W=∑_i∈ W_i ∈ℝ^2m× 2m be such that W_i≽ 0 and W≻ 0. 
Suppose there exist X_i=X^*_i:Ł_2 ^2m→Ł_2 ^2m, Y_i:Ł_2^2m→Ł_2^2m, Z_i=Z_i^*:Ł_2 ^2m→Ł_2 ^2m, and ϵ_i>0, i∈, such that for all z∈Ł_2 ^2m, [ I_2m; K_i; ] z[ X_i+ϵ_i W_i Y_i; Y_i^* Z_i ][ I_2m; K_i; ]z≤ 0,   i∈, zΞ_1 z - ∑_i∈zX_i z≤ 0, zΞ_2 L z - ∑_i∈zY_i K_i z≤ 0, L zΞ_3 L z - ∑_i∈K_i zZ_i K_i z≤ 0, with K_i as per (<ref>). Then, there exists ϵ∈ℝ_>0 such that (<ref>) for all z∈Ł_2 ^2m. For all z∈Ł_2^2m, (<ref>) implies zX_i z + zY_i K_i z + K_i zY_i^* z + K_i zZ_i K_i z≤ -ϵ_i zW_i z, for each i∈, and therefore, ∑_i∈ ( zX_i z + zY_i K_i z + K_i zY_i^* z + K_i zZ_i K_i z ) ≤ -∑_i∈ϵ_i zW_i z≤ - ϵ || z ||_2^2, where ϵ = (min_i∈ϵ_i) ·(min_x∈ℝ^n x^' W x/x^' x)>0; note that W=∑_i∈ W_i ≻ 0 implies ∑_i∈ϵ_i W_i ≽ (min_i∈ϵ_i) ∑_i∈ W_i≻ 0, because each W_i≽ 0. Combining (<ref>), (<ref>), (<ref>), and (<ref>), gives zΞ_1 z + zΞ_2 L z + L zΞ_2^* z + L zΞ_3 L z≤ -ϵ || z ||_2^2, as claimed. As with <cit.>, the proof of Lemma <ref> expands upon ideas from the proof of <cit.>, which is not directly applicable here for the reasons elaborated in <cit.>. Considering K_i as per Lemma <ref>, instead of L_k as per Lemma <ref>, makes each instance of (<ref>) depend on more network model data, which provides scope for reducing the conservativeness of Lemma <ref>. This comes at the cost of the additional inequality (<ref>), which has no counterpart in Lemma <ref> since ∑_k∈ L_k = L, whereas ∑_i∈ K_i ≠ L. § MAIN RESULT: NEIGHBOURHOOD CONDITIONS A possible selection of W_i, X_i, Y_i, Z_i, i∈, is devised for Lemma <ref>. It yields a decentralized collection of conditions that together imply the stability of GΔ, and thus, stability of the network by Theorem <ref>. The conditions are decentralized in the sense that each depends on model data that is local to the neighbourhood of a specific agent, including corresponding components of the IQC based uncertainty description of the local links. The subsequent matrix definitions, and related properties, lead to the proposed selection of W_i, X_i Y_i and Z_i in Lemma <ref>. The definitions pertain to the structure of the networked system model, encoded by the network graph =(,) and corresponding sub-system graph =(,), with =[1:n], m=||=||, ||=2m=∑_i∈m_i, and m_i=|_i|, as per Section <ref>. First, for each k∈=[1:m], define A_k := (diag(B_(·,k)))^2 ∈ℝ^2m× 2m, where B∈ℝ^2m× 2m is the incidence matrix of the sub-system graph Laplacian matrix L=∑_k∈ L_k = ∑_k∈ B_(·,k)B_(·,k)^' in (<ref>). The matrix A_k is diagonal, with {0,1} entries, and (A_k)_(r,r)=1   if and only if   r∈κ^-1_(k); i.e., the value is 1 only in the two locations corresponding to the sub-systems in associated with link k, as per the definition of the incidence matrix B below (<ref>). Indeed, since =(,) is 1-regular, direct calculation gives A_kB_(·,k)=B_(·,k), B_(·,k)^' A_k = B_(·,k)^', A_k B_(·,ℓ)=0_2m, B_(·,ℓ)^' A_k=0_2m^', A_k L_k = (diag(B_(·,k)))^2 B_(·,k) B_(·,k)^' = L_k = L_k^' = L_k A_k and   A_k L_ℓ = (diag(B_(·,k)))^2 B_(·,ℓ) B_(·,ℓ)^' = O_2m = L_ℓ A_k for all k ≠ℓ∈. As such, given i∈, for k∈_i, (∑_ℓ∈_i L_ℓ) A_k = L_k = A_k (∑_ℓ∈_i L_ℓ), and ( ∑_k∈_i A_k ) ( ∑_ℓ∈_i L_ℓ ) = ∑_k∈_i L_k = ( ∑_ℓ∈_i L_ℓ ) ( ∑_k∈_i A_k ), where the set _i of edge indexes associated with agent i is defined as per Section <ref>. Finally, for each i∈, define C_i:=(T_(·,i)) ∈ℝ^2m× 2m, where T is given in (<ref>). As such, C_i=C_i^' is a diagonal matrix with {0,1} entries. Composing it with Ξ_1 in (<ref>) isolates only the model data related to the agent i∈. 
It can be shown by direct calculation that C_i Ξ_1= Ξ_1 C_i, and ∑_i ∈ C_i=I_2m. The following is used to prove the subsequent main result. For arbitrary linear = ⊕_i=1^n _i, _i:Ł_2^m_i→Ł_2^m_i, and the sub-system graph Laplacian L=∑_k∈ L_k in (<ref>), L L = ∑_k∈ L_k L_k + ∑_k∈∑_ℓ∈_k L_k L_ℓ , where _k collects the edge indexes associated with the two agents linked by edge k, excluding the latter, as per the definition in Section <ref>. See Appendix. For each i∈, let W_i := C_i , X_i := C_i Ξ_1 = Ξ_1 C_i , Y_i := ∑_k∈_i12Ξ_2 A_k , Z_i := ∑_k∈_i12 A_k Ξ_3 A_k + ∑_k∈_i ∑_ℓ∈_i ∖{k}A_k Ξ_3 A_ℓ , with A_k as per (<ref>), C_i as per (<ref>), and the linear block diagonal Ξ_1, Ξ_2, Ξ_3 as per (<ref>). If for all i∈, there exists ϵ_i∈ℝ_>0 such that for all z∈Ł_2 ^2m, [ I_2m; K_i; ]z[ [ X_i+ϵ_i W_i Y_i; Y^*_i Z_i ]][ I_2m; K_i; ] z≤0, with K_i as per (<ref>), then there exists ε∈ℝ_>0 such that (<ref>) for all z∈Ł_2 ^2m. With Z_i as per (<ref>), ∑_i∈K_i zZ_i K_i z =∑_i∈⟨ z, K_i ( ∑_k∈_i12 A_k Ξ_3 A_k +∑_k∈_i∑_ℓ∈_i ∖{k}A_k Ξ_3 A_ℓ ) K_i z ⟩ =∑_i∈⟨ z, ( ∑_k∈_i12 L_k Ξ_3 L_k + ∑_k∈_i∑_ℓ∈_i ∖{k} L_k Ξ_3 L_ℓ)z ⟩ = ⟨ z, ( ∑_i∈∑_k∈_i12 L_k Ξ_3 L_k + ∑_i∈∑_k∈_i∑_ℓ∈_i ∖{k} L_k Ξ_3 L_ℓ) z ⟩ = ⟨ z, ( ∑_k∈ L_k Ξ_3 L_k + ∑_k∈∑_ℓ∈_k L_k Ξ_3 L_ℓ) z ⟩ = L zΞ_3 L z, which implies (<ref>). The second equality holds by the definition of K_i in (<ref>), and the identity (<ref>), whereby (∑_l∈_i L_l) A_k = L_k and A_ℓ (∑_l∈_i L_l) = L_ℓ whenever k,ℓ∈_i. Both parts of Lemma <ref> are used for the fourth equality, and Lemma <ref> for the final equality. Similarly, with X_k as per (<ref>), given (<ref>), ∑_i∈ X_i = ( ∑_i∈ C_i ) Ξ_1 = Ξ_1. As such, ∑_i∈zX_k z = zΞ_1 z for all z∈Ł_2 ^2m, which implies (<ref>). Further, with Y_k as per (<ref>), in view of the identity (<ref>), linearity of Ξ_2, and part <ref>) of Lemma <ref>, for all z∈ L_2^2m, ∑_i∈ Y_i K_i = ∑_i∈12Ξ_2 (∑_k∈_iA_k) (∑_ℓ∈_i L_ℓ ) = ∑_i∈12Ξ_2 ∑_k∈_i L_k = Ξ_2 ∑_k∈ L_k, which implies (<ref>). Finally, with W_i≽ 0 as per (<ref>), ∑_i∈ W_i = ∑_i∈ C_i = I_2m≻ 0; see (<ref>). As such, Lemma <ref> applies, and therefore, (<ref>) for all z∈Ł_2 ^2m, as claimed. For comparison, the link-wise decomposition from <cit.> is recalled below. (<cit.>) For each k∈[1:m], let W_k := A_k , X_k := 1/2(A_kΞ_1 + Ξ_1 A_k) , Y_k := Ξ_2A_k , Z_k := A_k(⊕_i=1^n D_i)A_k , where A_k is defined in (<ref>), and diagonal D_i:Ł_2 ^m_i→Ł_2 ^m_i is such that Ξ_3,i=D_i+S_i for some negative semi-definite S_i:Ł_2 ^m_i→Ł_2 ^m_i, i∈, with Ξ_1, Ξ_2, Ξ_3 as per (<ref>). If for all k∈, there exists ϵ_k∈ℝ_>0 such that for all z∈Ł_2 ^2m [ I_2m; L_k; ]z[ [ X_k+ϵ_k W_k Y_k; Y^*_k Z_k ]][ I_2m; L_k; ] z≤0, then there exists ε∈ℝ_>0 such that (<ref>) for all z∈Ł_2 ^2m. In Proposition <ref>, Z_i is not restricted to depend on only a suitable diagonal component D_3 of Ξ_3. This is one aspect of Proposition <ref> that helps mitigate potential conservativeness of the link-based decomposition in Proposition <ref>. Further, the determination of X_i, Y_i, and Z_i in Proposition <ref> depends on model data available to the neighbors of agent i∈. More specifically, each depends on the dynamics of agent i, and its neighbours in _i, as well as the associated neighbourhood links corresponding to the index set _i. This stems from the block-diagonal structure of the linear Ξ_p, p ∈ [1:3], in (<ref>), and the network structure in each component L_k, k∈, of the sub-system graph Laplacian L=∑_k∈L_k. 
Indeed, the positions and values of the non-zero elements in K_i Z_i K_i reflect the nonzero pattern corresponding to each L_k for k∈_i, and the same for Y_i K_i; see the proof of Lemma <ref> for further detail. The definition of X_i, on the other hand, only depends on model data specific to agent i∈. This additional aspect of Proposition <ref> could potentially reduce the conservativeness of the decentralized conditions, compared to those in Proposition <ref>, as each of the latter relies on more localized information that pertains only to each link k∈. Since the conditions in Proposition <ref> do not require the network to have any particular interconnection structure, the result is generally applicable for the scalable robust stability analysis of sparsely interconnected large-scale systems. To illustrate the scope for reduced conservativenss provided by Proposition <ref>, consider the following path graph network example with n=10 agents; i.e., m=9, m_1=m_10=1 and m_i=2 for i∈[2:9]. Suppose that the agent dynamics is such that the non-zero entries of H in (<ref>) are identical and linear time-invariant, with transfer function 1/(s+25). Further, suppose the links R are such that the block elements of the block diagonal Δ = (R-I_18)∘ T are sector bounded, with Φ_1,i=-2m_iαβ, Φ_2,i=1_m_i(α+β) and Φ_3,i=-2 I_m_i, with α=-2, β=0.15. Linear Matrix Inequality (LMI) conditions for semi-definite programming-based verification of the monolithic IQC (<ref>), or (<ref>) in Proposition, <ref>, or verifying (<ref>) in Proposition <ref>, can be derived, respectively, via the well-known Kalman-Yakubovich-Popov (KYP) Lemma <cit.>, given a state-space model for the agent dynamics. The details have been omitted due to space limitations. The LMIs obtained for (<ref>) in Proposition <ref>, and those for (<ref>), are demonstrably feasible, thereby successfully guaranteeing the robust stability of the network. By contrast, the LMIs for (<ref>) in Proposition <ref> fail to be feasible for a suitable selection of parameters ϵ_k>0, k∈[1:m]. § CONCLUSIONS Neighbourhood decentralized robust stability conditions are devised for networked systems in the presence of link uncertainty. This result is based on input-output IQCs that are used to describe the link uncertainties, and ultimately the structured robust stability certificate. An example is used to illustrate the scope for reduced conservativeness compared to existing results. Future work will explore alternative decompositions and comparisons. It is also of interest to apply the main result to study specific network scenarios where information exchange is impacted by asynchronous time-varying delays, and dynamic quantization (e.g., see <cit.>.) Proof of Lemma <ref>. Item <ref>) holds because ⋃_i∈_i =, and for k∈, the cardinality of _k:={i∈ | k∈_i} is exactly 2, so that ∑_i∈∑_k∈_i1/2 F_k = ∑_k∈ (12F_k + 12F_k) = ∑_k∈ F_k. Further, for all k∈, the edge index set _k = ⋃_i∈_k_i∖{k}, and (_i∖{k}) ⋂ (_j∖{k})=∅ whenever i, j ∈_k with i≠ j. Therefore, ∑_i∈ ∑_k∈_i ∑_ℓ∈_i∖{k} E_k∘ F_ℓ  = ∑_k∈ ∑_i∈_k ∑_ℓ∈_i∖{k}  E_k∘ F_ℓ = ∑_k∈ ∑_ℓ∈_k  E_k∘ F_ℓ, which is item <ref>). □ Proof of Lemma <ref>. First note that L_k L_ℓ = O_2m for every k∈, and ℓ∈∖ (_k∪{k}). Indeed, given the definition of _k, and the block diagonal structure of = ⊕_i=1^n _i, if the sub-system graph vertex index r∈[1:2m] is such ( B_(·,ℓ))_r ≠ 0, then r ∉ [∑_h∈[1:i-1]m_h+1:∑_h∈[1:i]m_h]  ⋃  [∑_h∈[1:j-1]m_h+1:∑_h∈[1:j]m_h] with {i,j}=κ_^-1(k), whereby B_(r,k)=0. 
Therefore, (B_(·,k))^' ( B_(·,ℓ)) = 0, and thus, L_k L_ℓ = B_(·,k)B_(·,k)^' B_(·,ℓ)B_(·,ℓ)^'=O_2m. Since is linear by hypothesis, and pointwise multiplication by each L_k is linear, it follows from the preceding observation that L L = (∑_k∈ L_k)   (∑_ℓ∈ L_ℓ) = ∑_k∈ ∑_ℓ∈ L_k L_ℓ = ∑_k∈ ∑_ℓ∈_k∪{k} L_k L_ℓ = ∑_k∈ ( L_k L_k + ∑_ℓ∈_k L_k L_ℓ ) as claimed. □
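As a complementary check on the sector-bounded uncertainty description used in the path-graph example above, the quadratic form defined by Φ_1,i = -2 m_i αβ, Φ_2,i = (α+β)1_m_i^' and Φ_3,i = -2 I_m_i can be evaluated numerically along u_i = Δ_i(y_i) for static link gains in the sector. The NumPy sketch below is our own illustration; only α = -2, β = 0.15 and m_i = 2 are taken from the example, while the random sampling is not.

import numpy as np

alpha, beta, m_i = -2.0, 0.15, 2        # sector bounds and neighbour count from the example
Phi_1 = -2.0 * m_i * alpha * beta       # scalar block acting on y_i
Phi_2 = (alpha + beta) * np.ones(m_i)   # 1 x m_i block
Phi_3 = -2.0 * np.eye(m_i)              # m_i x m_i block

rng = np.random.default_rng(0)
for _ in range(1000):
    delta = rng.uniform(alpha, beta, size=m_i)   # static sector gains, one per link of agent i
    y = rng.normal()
    u = delta * y                                # u_i = Delta_i(y_i) for static gains
    quad = Phi_1 * y**2 + 2.0 * y * Phi_2 @ u + u @ Phi_3 @ u
    assert quad >= -1e-12                        # the sector quadratic form is nonnegative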
http://arxiv.org/abs/2409.02976v1
20240904135938
Hallucination Detection in LLMs: Fast and Memory-Efficient Finetuned Models
[ "Gabriel Y. Arteaga", "Thomas B. Schön", "Nicolas Pielawski" ]
cs.LG
[ "cs.LG", "cs.AI", "cs.CL" ]
Molecular spin-probe sensing of H-mediated changes in Co nanomagnets L. Limot September 9, 2024 ==================================================================== § ABSTRACT Uncertainty estimation is a necessary component when implementing AI in high-risk settings, such as autonomous cars, medicine, or insurances. Large Language Models (LLMs) have seen a surge in popularity in recent years, but they are subject to hallucinations, which may cause serious harm in high-risk settings. Despite their success, LLMs are expensive to train and run: they need a large amount of computations and memory, preventing the use of ensembling methods in practice. In this work, we present a novel method that allows for fast and memory-friendly training of LLM ensembles. We show that the resulting ensembles can detect hallucinations and are a viable approach in practice as only one GPU is needed for training and inference. Code will be made available upon acceptance. § INTRODUCTION LLMs have recently grown in popularity, thanks to their ability to interpret natural language and generate answers that resemble human discussions, even surpassing human performance in specific tasks <cit.>. However, these models face a significant challenge known as hallucination, where outputs that seem plausible may either deviate from instructions or lack factual accuracy. Hallucinations can broadly be categorized into two types <cit.>: faithfulness hallucinations, where the LLM deviates from provided instructions, and factual hallucinations, where there is a disparity between the generated content and verifiable facts. The risk arises when individuals unaware of these limitations mistakenly treat such outputs as ground-truth, leading to decisions based on erroneous information — a concern particularly relevant to safety-critical areas such as healthcare. Techniques leveraging natural language inference models and retrieval-based methods to detect hallucinations have shown promise in specific applications like summarization and open-domain question answering  <cit.>. However, the effectiveness of these methods is typically limited to a narrow set of tasks, which restricts their generalizability across the broader spectrum of LLM applications. Given these limitations, uncertainty estimation methods emerge as a compelling alternative for detecting both types of hallucinations  <cit.>. Unlike task-specific approaches, uncertainty estimation uses the model's own confidence in its predictions to identify if the outputs are unfaithful or factually incorrect. Recent work in uncertainty quantification in LLMs have emerged, with approaches like deep ensembles  <cit.> and sample-based methods which use stochastic sampling techniques  <cit.>. However, sample-based methods seldom provide reliable uncertainty estimates as they rely on the distribution of a single model's outputs, which may not fully capture the true uncertainty in the model's predictions. While deep ensembles advertise more robust uncertainty estimates by aggregating predictions from multiple independently trained models, they come with significant computational bottlenecks, especially when applied to larger LLMs, as they require substantial resources for training and inference. To address these limitations, we propose a fast and memory-efficient deep ensemble method which is able to provide reliable uncertainty estimates. Figure <ref> describes our proposed method, where low-rank matrices are added on top of a pretrained model and used for fine-tuning. 
A Low-Rank Adaptation (LoRA) matrix allows the whole ensemble to be trained cheaply and flexibly <cit.>, and each ensemble member has an individual rank-one fast weight matrix[LoRA matrices are rank-r matrices added elementwise to the weight matrix, while the fast weights – rank-1 matrices – are multiplied elementwise.]. This allows the members to be diverse. An uncertainty metric is used afterward and fed into a binary classifier, which is trained to discriminate between hallucinated and correct predictions. We demonstrate that our method can effectively detect both factual and faithfulness hallucinations. Our method achieves a 97.8% accuracy in detecting faithfulness hallucinations, outperforming existing baselines. Additionally, our method attains a 68% accuracy in detecting factual hallucinations without compromising overall predictive performance. These results suggest that our method not only enhances the detection of hallucinations in LLMs, but also offers a practical solution for deploying these models in resource-constrained settings. The main contributions of this paper are: (1) a fast and memory-efficient method for fine-tuning pre-trained LLMs using LoRA matrices and component-specific rank-1 matrix modifications, which reduces computational overhead and enables the effective use of ensemble methods; (2) a novel approach to hallucination detection that reformulates it as a binary classification task, leveraging uncertainty estimates from LLMs as features to distinguish between hallucinated and accurate content; and (3) demonstrating the practicality of our method for use with LLMs on minimal hardware, requiring only a single A40 GPU for both training and inference, thereby showcasing its efficiency and scalability. § RELATED WORK We believe that distinguishing between the two types of hallucinations introduced by Huang et al.  <cit.> provides valuable insights into the behavior of LLMs and highlights the distinct challenges associated with each category. Therefore, we adopt this terminology throughout this paper. Hallucination Detection in LLMs Various approaches have been proposed to identify when an LLM diverges from instructions or deviates from contextual cues in the input. This work is especially critical in tasks like summarization, where adherence to the provided context is crucial for generating accurate summaries. These methods often leverage natural language inference models to compute entailment scores, which are then used to detect instances of unfaithfulness in the generated outputs  <cit.>. Similarly, some research already focused on detecting when LLMs produce factual hallucinations, where their generated content deviates from verifiable facts. Some methods have been developed for situations where the correct answer is known in advance, such as summarization and open-domain question answering  <cit.>. These approaches often involve comparing the generated content against a source document that is known to be accurate. When the ground-truth is not available, some methods leverage retrieval techniques to extract reliable information for verification <cit.> or use LLMs themselves, using a prompt pipeline to facilitate hallucination detection <cit.>. Despite their effectiveness in specific contexts, many of these methods are constrained by their task-specific nature, limiting their generalizability across different LLM applications. 
Uncertainty Estimation Methods Uncertainty estimation methods are more general, offering greater versatility in hallucination detection  <cit.>. Sample-based methods use sampling decoding techniques to introduce stochasticity in the LLM's responses, where a higher variance of the output is an indication of the model's uncertainty  <cit.>. Another approach involves deep ensembles, which have been hypothesized to provide more informative uncertainty estimates compared to traditional methods  <cit.>. Deep ensembles leverage multiple model instances to capture a range of predictions, thus enhancing the robustness of uncertainty assessments  <cit.>. However, implementing deep ensembles for LLMs through both pretraining  <cit.> and fine-tuning  <cit.> has primarily been constrained to smaller models. Scaling these ensembles to compete with state-of-the-art LLMs  <cit.> typically requires significant computational resources. Memory-Efficient Approaches To overcome the computational challenges associated with training deep ensembles of LLMs, recent research has focused on memory-efficient alternatives. One such approach, which serves as the backbone of our method, is BatchEnsemble  <cit.>, used to pre-train LLM ensembles more efficiently  <cit.>. However, achieving state-of-the-art performance through pre-training can still be prohibitively expensive. Recent studies have proposed a memory-friendly strategy that fine-tunes LLM ensembles from pre-trained weights, rather than training from scratch  <cit.>. This method, referred to as a LoRA Ensemble, assigns each ensemble member its own set of LoRA matrices  <cit.>. While this approach has been utilized to compute uncertainty estimates  <cit.>, it has not been specifically applied to hallucination detection tasks. Our approach is similar in two ways: it reduces training costs by utilizing pre-trained weights and it employs LoRA matrices during fine-tuning. However, unlike the LoRA Ensemble, our method does not rely on LoRA matrices to represent ensemble members. Instead, after fine-tuning, we merge the LoRA matrices with the pre-trained weights and represent the ensemble using sets of rank-one fast weights. § METHOD Uncertainty Estimation To quantify the uncertainty associated with an LLM's predictions, we use the predictive entropy of the output distribution, a concept rooted in information theory. Let 𝐱_<t={x_1,…,x_t-1} represent the preceding tokens, which serve as the input for predicting the target token 𝐱_t at time step t. The predictive entropy is then defined as: ℋ[P(𝐱_t|𝐱_<t; 𝒟)] = - ∑_x_t P(x_t|𝐱_<t; 𝒟) log P(x_t|𝐱_<t; 𝒟). The predictive entropy can further be divided into its two subcomponents aleatoric and epistemic uncertainties: ℐ[𝐱_t, θ| 𝐱_<t, 𝒟]_Epistemic = ℋ[P(𝐱_t|𝐱_<t; 𝒟)]_Predictive-𝔼_p(θ | 𝒟)[ℋ[P(𝐱_t|𝐱_<t; 𝒟)]]_Aleatoric. Epistemic uncertainty captures the lack of knowledge of a system, which shrinks as more data is made available. Conversely, aleatoric uncertainty represents the noise – the variability of the data – and is therefore irreducible <cit.>. P(𝐱_t|𝐱_<t; 𝒟) cannot be directly computed and is approximated with: P(𝐱_t|𝐱_<t; 𝒟) ≈1/M∑_m=1^M P(𝐱_t|𝐱_<t; θ_m), where θ_m is the parameters associated to the m^th model. The estimate of the predictive entropy is an estimate of the predicted uncertainty, and will yield the exact predictive uncertainty as M →∞  <cit.>. Memory-Efficient Fine-tuning We employ deep ensembles  <cit.> to approximate Equation (<ref>). 
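Concretely, the entropy decomposition above can be estimated directly from the per-member next-token distributions. The NumPy sketch below is a minimal illustration with a toy ensemble (array shapes and names are ours, not the authors' implementation).

import numpy as np

def uncertainty_decomposition(member_probs):
    """member_probs: (M, V) array, one next-token distribution per ensemble member."""
    eps = 1e-12
    mean_p = member_probs.mean(axis=0)                   # ensemble-averaged predictive distribution
    predictive = -np.sum(mean_p * np.log(mean_p + eps))  # entropy of the averaged distribution
    aleatoric = -np.mean(np.sum(member_probs * np.log(member_probs + eps), axis=1))
    epistemic = predictive - aleatoric                   # mutual information term
    return predictive, aleatoric, epistemic

# Toy example: 4 ensemble members, vocabulary of 5 tokens.
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(5), size=4)
print(uncertainty_decomposition(probs))   # epistemic part is nonnegative up to numerical error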
However, training those deep ensembles may require prohibitively high computational resources, available to few organizations. We propose using the BatchEnsemble method  <cit.>, but rather than pre-training each ensemble member  <cit.>, we adapt the method to fine-tune already pre-trained models. BatchEnsemble optimizes memory usage, which is a critical advantage over traditional ensembles. In a standard ensemble, memory requirements grow linearly with the number of ensemble members, with complexity increasing to 𝒪(Mmn) per layer, where M is the number of ensemble members and mn represents the size of the weight matrices. In contrast, BatchEnsemble reduces memory complexity to 𝒪(mn + M(m+n)) per layer, significantly lowering the memory footprint by sharing a single weight matrix U across all ensemble members and augmenting it with trainable vectors r_i ∈ℝ^m × 1 and s_i ∈ℝ^n × 1. The outer product of these vectors yields a fast weight matrix V_i, allowing each ensemble member's weight matrix to be represented as the Hadamard product of the shared weight U and the fast weight V_i: W_i = U ⊙ V_i, where V_i=r_i s_i^T. We further adapt this method by substituting the shared weight with a pre-trained weight, setting U = ω_pretrained. To introduce diversity into the ensemble, we randomly initialize the fast weights. This initialization must be done carefully to preserve the knowledge stored in the pre-trained weights; initializing the fast weights with a mean of 1 is crucial to avoid disrupting the pre-trained knowledge. For more details on the weight initialization procedure, please refer to the Appendix. To minimize the computational demands during fine-tuning, we apply the LoRA method  <cit.>. We retain the pre-trained weight matrix U as U_0 and introduce low-rank matrices B ∈ℝ^m × r and A ∈ℝ^r × n, where r ≪min(m,n). This approach allows us to update U as U_0 + BA, reducing the number of parameters that need to be trained while maintaining model performance. In contrast to the original LoRA paper  <cit.>, our method applies LoRA to all modules, which we believe enhances ensemble diversity and leads to more reliable uncertainty estimates. Noise Injection We hypothesize that injecting noise into the ensemble during training may enhance model diversity and improve uncertainty estimates. To explore this, we employ anchored ensembling  <cit.> as one of our methods. However, this approach can be unstable when scaled to larger models. To address this issue, we incorporate a normalization term, as suggested in  <cit.>, to stabilize the training process. Hallucination Detection To detect hallucinations in an LLM, we design a sub-task for a dedicated classifier. This task is framed as a binary classification problem, where the classifier is trained on a dataset containing uncertainty estimates from our ensemble and corresponding binary labels indicating whether the LLM is hallucinating or not. The choice of classifier depends on the practitioner's needs. For instance, one might choose a fast inference model like a shallow decision tree to minimize computational overhead, or opt for a more expressive model, such as a random forest, for enhanced performance. § DESIGN OF EXPERIMENTS We aim to design experiments that can clearly differentiate between faithfulness and factual hallucinations, enabling us to assess the performance of our method for each type. Baselines We evaluate our proposed adaptations to BatchEnsemble during fine-tuning, including BatchEnsemble with noise injection (NI). 
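All of the fine-tuned variants compared here share the layer parameterization described in the Method section. For reference, the NumPy sketch below (toy dimensions and our own variable names, not the authors' code) checks that the explicit member weight (U_0 + BA) ⊙ r_i s_i^⊤ acting on an input coincides with the memory-efficient form r_i ⊙ ((U_0 + BA)(s_i ⊙ x)), which is what allows all members to be evaluated together in a single forward pass.

import numpy as np

rng = np.random.default_rng(0)
m, n, r = 6, 4, 2                        # toy layer dimensions and LoRA rank
U0 = rng.normal(size=(m, n))             # frozen pre-trained weight
B_lora = rng.normal(size=(m, r)) * 0.01  # LoRA factors, shared across members
A_lora = rng.normal(size=(r, n)) * 0.01
U = U0 + B_lora @ A_lora                 # shared slow weight after merging the LoRA update

r_i = 1 + 0.1 * rng.normal(size=m)       # member-specific fast weights, initialised near 1
s_i = 1 + 0.1 * rng.normal(size=n)
x = rng.normal(size=n)

W_i = U * np.outer(r_i, s_i)             # explicit member weight: (U0 + BA) o (r_i s_i^T)
y_explicit = W_i @ x
y_fast = r_i * (U @ (s_i * x))           # memory-efficient equivalent used at run time
assert np.allclose(y_explicit, y_fast)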
For uncertainty-based experiments, we compare against two baselines: a prompt-based method that approximates the output distribution using repeated prompting with stochastic sampling (temperature = 0.5, top-p = 0.99, top-k = 5)  <cit.>, and a LoRA Ensemble, which applies regularization to LoRA B matrices as described by Wang et al. on MMLU and SQuAD datasets  <cit.>. All models, including baselines, are fine-tuned on all modules and utilize the uncertainty measurements described in Equation (<ref>) for consistency. We also evaluate our methods against a single model fine-tuned from pre-trained weights to ensure fair performance comparison. All models utilize Mistral-7B-Instruct-v0.2 as pre-trained weights  <cit.>. Faithfulness Hallucination Detection To detect faithfulness hallucinations, we use the SQuAD and SQuAD 2.0 datasets  <cit.>. The datasets consist of contexts and questions, with SQuAD featuring answerable questions and SQuAD 2.0 including unanswerable ones. We instructed the LLMs to respond with “I don't know” if the answer was not in the context. Any other response to unanswerable questions indicated a faithfulness hallucination. Initially, we trained the models only on answerable questions, but this led to hallucinations across all test points. We then adjusted our approach by including 1/3 unanswerable questions in training to balance the model's responses. Factual Hallucination Detection For factual hallucination detection, we use the MMLU dataset  <cit.>, as the pre-trained Mistral 7B model  <cit.> used it as a benchmark without training on it. MMLU contains multiple-choice questions from diverse knowledge areas. Models were instructed to select one of the available choices; incorrect answers were labeled as factual hallucinations. Predictive Performance We assess the predictive performance of our models on downstream tasks to evaluate if improved uncertainty estimates impact model accuracy. Models are fine-tuned on SQuAD and MMLU datasets, and evaluated using the F1 score, exact match for SQuAD, and accuracy for MMLU  <cit.>. Out-Of-Distribution Test To test out-of-distribution detection, all models are fine-tuned on answerable questions from SQuAD 2.0 and evaluated on unanswerable ones, assessing the models' capacity to recognize shifts in data distribution. § RESULTS Hallucination and OOD Detection In Table <ref>, we assess the quality of our models' and baselines' uncertainty estimates by using them as features for classifiers to predict hallucinations and OOD instances. The uncertainty estimates are fed into five classifiers: k-Nearest Neighbors, logistic regression, decision trees, random forest, and support vector machine. We report the top-1 classification accuracy for each model's uncertainty estimates. See Appendix for details. All methods, including the baselines, demonstrate a relatively high accuracy—exceeding 92%—in detecting cases where the model faithfully hallucinates. Table <ref> describes an example of our method's response to both answerable and unanswerable questions. It shows a notable increase in uncertainty when the model encounters an unanswerable question. The uncertainty sharply decreases after the model generates the first token, suggesting that once it commits to a hallucinated token, it becomes more prone to continue hallucinating—a phenomenon referred to as the “snowballing effect”  <cit.>. 
More, the high accuracy achieved using the models' uncertainty estimates supports the idea that LLMs' internal states possess an inherent understanding of the generated, hallucinated content, a finding similar to that observed by Azaria & Mitchell  <cit.>. For detecting factual hallucinations, the LoRA Ensemble's uncertainty estimates result in the highest accuracy. A possible explanation is that the high weight decay strategy applied to the LoRA B matrix  <cit.> introduces substantial stochasticity among the ensemble members, leading to improved uncertainty estimates. However, this enhanced expressiveness in uncertainty estimation comes at the cost of predictive performance, which will be discussed further in the next subsection. Additionally, while the sample-based method demonstrates slightly better performance than our proposed approach, this marginal improvement may be attributed to randomness during training or to the task itself. Producing uncertainty estimates for a single token (the choice in a multiple-choice setting) may inherently be easier for the sample-based method. All methods generally performs worse when encountering OOD datapoints. This indicates that, while LLMs are versatile, uncertainty-based approaches still remain limited in their ability to detect hallucinations for examples that does not lie in-distribution, highlighting the need for further research on detecting hallucinations in OOD scenarios. Predictive performance Table <ref> presents the predictive performance of all evaluated models. The results indicate that while all models require fine-tuning to achieve optimal performance on downstream tasks, our proposed BatchEnsemble method with LoRA fine-tuning on shared weights consistently outperforms each baseline across all metrics. Notably, the LoRA ensemble performs worse than the single model despite being an ensemble. This performance discrepancy is likely attributed to the significant weight decay strategy implemented by Wang et al.  <cit.>, which involved fine-tuning only the query and key modules. The combination of high regularization and fine-tuning all modules appears to result in suboptimal performance. Additionally, the results from BatchEnsemble+NI further suggest that implementing regularization strategies effectively in practice poses substantial challenges. Time and Memory footprint All experiments were conducted on a single A40 GPU. Figure <ref> illustrates the performance of BatchEnsemble and highlights its advantages. On the left side of the figure, it is shown that as the number of ensemble members increases, the rate at which inference speed improves for BatchEnsemble is lower compared to the sample-based approach. This suggests that BatchEnsemble, by processing all ensemble members' predictions simultaneously in a single forward pass, is faster in inference than the baseline method. On the right side of the figure, the limitations of using a Vanilla ensemble  <cit.> for uncertainty estimation in LLMs are demonstrated. Specifically, as the number of ensemble members increases, the parameter size grows linearly with the Vanilla method, whereas the increase in BatchEnsemble’s parameter size remains negligible. § CONCLUSION As LLMs are increasingly used across various fields, minimizing harmful predictions is crucial. This work suggests that deep ensembles are effective for quantifying uncertainty and detecting hallucinations. 
Our results indicate that while current uncertainty estimation techniques are versatile, further development is needed to enhance their effectiveness in detecting factual hallucinations and distribution shifts. Nevertheless, uncertainty estimation shows promise for identifying faithful hallucinations. Integrating uncertainty estimates with classifiers could help mitigate instances where LLMs deviate from instructions. Future research could explore combining this approach with strategies such as prompting users for clarification when instructions are ambiguous. While we used entropy-based metrics to measure uncertainty, other methods such as EigenScore  <cit.> or semantic entropy  <cit.> might offer better estimates for specific tasks. BatchEnsemble could incorporate these metrics, potentially reducing inference time by requiring just one forward pass. Finally, we hypothesized that aleatoric and epistemic uncertainty may correspond to faithfulness and factual hallucinations, respectively. Our findings suggest that current methods for measuring epistemic uncertainty are insufficiently diverse to confirm this hypothesis. However, future noise injection methods may open the path to more informative uncertainty estimates while maintaining predictive performance. § ACKNOWLEDGMENTS This project was funded by The Kjell and Märta Beijer Foundation, WASP, ERC grant 101054643. The computations were enabled by resources provided by the National Academic Infrastructure for Supercomputing in Sweden (NAISS), partially funded by the Swedish Research Council through grant agreement no. 2022-06725. § EXPERIMENTAL DETAILS In this section, we will outline the detailed training and evaluation splits used for our experiments §.§ Factual Hallucination Detection Experiment For the factual hallucination detection experiment, we used the MMLU dataset  <cit.>. We created a training set by combining data from the 'all' category and the 'auxiliary_train' split. The dataset was shuffled with a seed of 42, and the first 42,000 data points were selected. Of these, 40,000 were used for training, while the remaining 2,000 were set aside for validation. For evaluation, we used the 'test' split, shuffled with the same seed of 42. We selected the first 5,000 data points from this split for evaluation. §.§ Faithfulness Hallucination Detection Experiment For the faithfulness hallucination detection experiment, we used a modified version of the SQuAD v2 dataset. To create the training and validation splits, we first loaded the SQuAD v2 training set and filtered it into two subsets: answerable and unanswerable questions. Unanswerable questions were relabeled with “I don't know” to train the model to differentiate between the answerable and unanswerable questions. We then shuffled both subsets with a seed of 50 and selected 28,000 answerable and 14,000 unanswerable questions to maintain a distribution similar to the original dataset. These subsets were combined into a single dataset and shuffled again to ensure a balanced distribution of answerable and unanswerable questions. From this combined dataset, we selected 40,000 samples for training and 2,000 samples for validation. For evaluation, we processed the SQuAD v2 validation set and filtered to only keep unanswerable questions. The filtered unanswerable questions were shuffled with a seed of 42, and the first 5,000 data points were selected for evaluation. 
§.§ Out-Of-Distribution Detection Experiment For the OOD detection experiment, we trained our models on the original SQuAD dataset and evaluated them on unanswerable questions from the SQuAD v2 validation set. We began by loading the SQuAD training dataset and shuffling it using a seed of 50 to ensure randomness. From this dataset, we selected 40,000 samples for training and 2,000 samples for validation. For evaluation, we utilized the validation set from SQuAD v2, specifically filtering out unanswerable questions. The unanswerable subset was shuffled with a seed of 42, and the first 5,000 data points were selected for the evaluation phase. §.§ Predictive Performance Experiments To evaluate predictive performance, we used models trained during the previous experiments on both the MMLU and SQuAD datasets. For the MMLU evaluation, we leveraged the models trained from the factual hallucination detection experiment. We loaded all data points from the MMLU test split, shuffled them with a fixed seed of 42, and selected the first 5,000 samples. For the SQuAD dataset, we used models trained from the OOD detection experiment, as these were specifically trained on the SQuAD dataset. For evaluation, we loaded the SQuAD validation set, shuffled it with a seed of 42, and selected 5,000 samples. § DETAILED RESULTS The tables below present the detailed results for each method across five different classifiers: Logistic Regression (LR), Decision Tree (DT) classifier, Support Vector Classifier (SVC), Random Forest (RF), and k-Nearest Neighbors (kNN). All classifiers were implemented using the scikit-learn library  <cit.>. No hyperparameter tuning was performed; instead, we used the default parameters provided by scikit-learn for each classifier. For the features, we used the first token's predictive entropy and aleatoric uncertainty. Additionally, we provided the classifiers with two more features: the average predictive entropy and the average aleatoric uncertainty. We also experimented with including epistemic uncertainty as a feature, but this reduced the performance, likely due to the correlation between the features. Therefore, we chose to exclude epistemic uncertainty and use the aforementioned features instead. § WEIGHT INITIALIZATION We explore two widely-used weight initialization strategies for our fast weights: He initialization  <cit.> and Xavier initialization  <cit.>. We hypothesize that the pre-trained Mistral 7B model  <cit.> may have used one of these methods, making them natural choices for our approach. The right side of Figure <ref> shows the generated responses for a sample data point from the SQuAD dataset, where the answer is nonsensical. This outcome is expected because the ensemble members are formed by the multiplicative product of shared and fast weights. Initializing the fast weights close to 0 effectively nullifies the knowledge embedded in the pre-trained weights. To address this, we adopt a simple solution: we maintain the variance of the He and Xavier initializations but set the mean to 1. This adjustment ensures that the ensemble members do not completely overwhelm the pre-trained weights. The left side of Figure <ref> illustrates this modified approach. We observe that using He initialization with a mean of 1 produces diverse yet coherent predictions. The increased variability results from the higher variance of He initialization compared to Xavier initialization. We argue that greater diversity in the ensemble enhances uncertainty estimates. 
Therefore, we select He initialization with a mean of 1, as shown in option (a) of Figure <ref>. § EXAMPLES OF PROMPTS AND ANSWERS For the SQuAD and SQuAD v2 datasets, we format our prompts to the LLMs using the following template:
[INST] # Answer the question based only on the given context. Keep the answer short. If the answer is not in the context or if you are unsure, respond with 'I don't know'.
# Context: The provided context
# Question: A question based on the provided context [/INST]
For the MMLU dataset we instead use the following template:
[INST] # Instruction You will be given a question followed by four options: A, B, C, and D. Your response should be either A, B, C, or D.
# Question: A knowledge question
# Options:
A)
B)
C)
D) [/INST]
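As a usage illustration, the two templates can be assembled with simple helper functions such as the following; the helper names and string layout are illustrative, and only the wording of the templates is taken from above:

def squad_prompt(context: str, question: str) -> str:
    # SQuAD / SQuAD v2 template from above.
    return (
        "[INST] # Answer the question based only on the given context. "
        "Keep the answer short. If the answer is not in the context or if you are unsure, "
        "respond with 'I don't know'.\n"
        f"# Context: {context}\n"
        f"# Question: {question} [/INST]"
    )

def mmlu_prompt(question: str, options: list) -> str:
    # MMLU multiple-choice template from above; options is a list of four answer strings.
    letters = ["A", "B", "C", "D"]
    formatted = "\n".join(f"{letter}) {option}" for letter, option in zip(letters, options))
    return (
        "[INST] # Instruction You will be given a question followed by four options: "
        "A, B, C, and D. Your response should be either A, B, C, or D.\n"
        f"# Question: {question}\n"
        f"# Options:\n{formatted} [/INST]"
    )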
http://arxiv.org/abs/2409.03137v1
20240905001316
The AdEMAMix Optimizer: Better, Faster, Older
[ "Matteo Pagliardini", "Pierre Ablin", "David Grangier" ]
cs.LG
[ "cs.LG", "stat.ML" ]
§ ABSTRACT Momentum-based optimizers are central to a wide range of machine learning applications. These typically rely on an Exponential Moving Average (EMA) of gradients, which exponentially decays the contribution of older gradients. This accounts for gradients being local linear approximations which lose their relevance as the iterate moves along the loss landscape. This work questions the use of a single EMA to accumulate past gradients and empirically demonstrates how this choice can be sub-optimal: a single EMA cannot simultaneously give a high weight to the immediate past and a non-negligible weight to older gradients. Building on this observation, we propose AdEMAMix, a simple modification of the Adam optimizer with a mixture of two EMAs to better take advantage of past gradients. Our experiments on language modeling and image classification show—quite surprisingly—that gradients can stay relevant for tens of thousands of steps. They help to converge faster, and often to lower minima: e.g., a 1.3B parameter AdEMAMix LLM trained on 101B tokens performs comparably to an AdamW model trained on 197B tokens (+95%). Moreover, our method significantly slows down model forgetting during training. Our work motivates further exploration of different types of functions to leverage past gradients, beyond EMAs. § INTRODUCTION With large neural networks, deep-learning has revolutionized numerous fields, such as computer vision and natural language processing. At the heart of this paradigm lies the challenge of optimizing complex, non-convex loss functions using noisy gradient estimates. This optimization process is typically carried out using variants of Stochastic Gradient Descent (SGD) <cit.> or adaptive methods such as Adam and AdamW <cit.>, which have become ubiquitous in training state-of-the-art models <cit.>. A key component in many of these iterative optimization algorithms is momentum, which has long been shown to accelerate convergence <cit.> and often leads to solutions with superior generalization properties <cit.>. By accumulating gradient vectors over successive optimization steps, momentum helps overcome small local variations of the loss landscape, potentially escaping shallow local minima, and accelerate in plateau regions <cit.>. Both SGD with momentum (SGD+M) and Adam incorporate momentum in the form of Exponential Moving Averages (EMAs) of past gradients g^(0),…,g^(T):
EMA(β, g^(0),…,g^(T)) ≜ β·EMA(β, g^(0),…,g^(T-1)) + (1-β) g^(T) = ∑_i=0^T β^i (1-β) g^(T-i) .
Two considerations support the use of EMAs. From a practical standpoint, the recursive formula of EMA allows for efficient implementations, which do not require maintaining a buffer of past gradients. From a theoretical standpoint, gradient descent with momentum leads to optimal convergence rates for quadratics <cit.>. However, those results do not guarantee any optimality for general non-quadratic cases <cit.>. The use of momentum in optimization is grounded in the varying nature of gradients. As local linear approximations of the loss landscape, their information can quickly become outdated as the optimization process progresses <cit.>. For this reason, practitioners typically employ moderate momentum values (e.g. β≈ 0.8 or 0.9), effectively creating a moving average of recent gradients while discarding older information.
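To make this concrete, a short computation illustrates how an EMA distributes its weight over past gradients for different β; the half-mass window computed below is the quantity formalized as t_half in the method section:

import math

# Weight of the gradient from i steps ago in an EMA with parameter beta: (1 - beta) * beta**i.
# The number of most recent steps that jointly receive half of the total weight
# indicates how quickly older gradients are discarded.
def half_mass_window(beta: float) -> float:
    # Solve (1 - beta) * sum_{i=0}^{t} beta**i = 0.5, i.e. beta**(t+1) = 0.5, for t.
    return math.log(0.5) / math.log(beta) - 1.0

for beta in (0.8, 0.9, 0.99, 0.999, 0.9999):
    print(f"beta={beta}: half of the total weight lies on the last "
          f"{half_mass_window(beta):.0f} gradients")
# beta=0.9 concentrates half of the weight on roughly the last 6 gradients,
# while beta=0.9999 spreads the same mass over roughly the last 6930 gradients.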
Selecting larger β values seems counter-intuitive, as it would suggest that older gradients maintain their relevance over extended periods of training. While it is tempting to see the use of small βs as a confirmation of the limited temporal relevance of gradients, our work reveals instead that older gradients can be used efficiently. When we increase β, we decrease the relative importance of recent gradients, and the iterate now fails to respond to local changes in the loss landscape. We observe that a single EMA cannot both give a significant weight to recent gradients and give a non-negligible weight to older gradients (see Fig. <ref>). However, a linear combination of a “fast-changing” (e.g. β=0.9) and a “slow-changing” (e.g. β=0.9999) EMA allows the iterate to benefit from (i) the great speedup provided by the larger (slow-changing) momentum, while (ii) still being reactive to small changes in the loss landscape (fast-changing). More precisely, we find the following statement to convey an important intuition behind this approach: While changing the direction of the slow momentum is difficult, any adjustment orthogonal to that direction is easy—which favors fast progress in sinuous canyon-like landscapes. A toy illustration of this can be seen in Fig. <ref>. Based on this idea, we propose AdEMAMix (Adaptive EMA Mixture), a novel Adam-based optimizer which successfully leverages very old gradients to reach better solutions. Contributions. Our contributions can be summarized as follows: * We propose AdEMAMix, a novel optimizer which better leverages past gradients by avoiding a common pitfall of EMA-based optimizers (see <ref>). * We empirically demonstrate the superiority of our method over Adam by training ViT and language models (Transformers and Mamba) of up to 1.3B parameters (see <ref>). In addition, we show gains from switching mid-training from Adam to AdEMAMix (see Fig. <ref>). * We show AdEMAMix forgets the training data at a slower pace when compared to Adam (see Fig. <ref>). * More broadly, our findings contribute to a deeper understanding of the optimal balance between using historical gradients and adapting to the rapidly changing loss landscape. Our work invites further research in methods combining old and recent gradients, beyond EMAs. § RELATED WORK Works on understanding momentum. Since the seminal work of <cit.>, many publications have analyzed the effect of gradient descent + momentum (GD+M) in both convex and non-convex settings <cit.>. While the acceleration in the noise-free setting has long been theorized for convex functions, several publications indicate this effect might not necessarily extend to stochastic settings <cit.>, emphasizing instead a link between momentum and effective learning rate. Recent work has sought to understand the impact of momentum on generalization through studying the implicit bias of momentum methods <cit.>, exposing a preference of SGD+M for lower-norm solutions. These works further exposed a link between higher momentum, higher effective learning rates, and stronger variance reduction. Despite the volume of prior work on the subject, our understanding of momentum methods in non-convex stochastic settings is still incomplete <cit.>. Oscillatory behaviours and the sometimes ambiguous effect of variance on optimization render the analysis tedious. From a theoretical standpoint, our work raises several questions.
First, given that we gain from averaging very old gradients, what can it reveal of the loss landscape and the consistency of one batch's gradient during training? Second, would our approach not decrease the variance up to a point that is harming generalization <cit.>? While no answer to those questions is given in this work, we provide a toy justification which indicates that large momentums can have a positive impact in noise-free non-convex settings (see Fig. <ref>)—indicating the improvement of our approach is at least partially explainable without considering variance-reduction effects. We moreover expose a link between momentum and forgetting the training data (see Fig. <ref>), which to our knowledge is novel. Works on deep-learning optimizers. Despite the popularity of Adam and AdamW <cit.> in training deep neural networks, optimizer design is a rich field of research and we focus on a few of the works most relevant to this study. Adafactor <cit.> improves Adam's memory efficiency by factorizing the second moment estimate. Lamb <cit.> extends Adam by adding layerwise normalized updates. <cit.> use algorithm discovery to derive the Lion optimizer. Contrary to Adam, Lion uses a single momentum term and the sign function to produce updates with the same magnitude across dimensions. Interestingly, <cit.> also report better scores are obtained when using a slightly larger momentum term (β=0.99). In this work we show how increasing the momentum well beyond this value can still be beneficial. See App. <ref> for a detailed comparison between AdEMAMix and Lion. Recently, <cit.> introduced Sophia, a scalable second-order optimizer designed for LLM training. Sophia uses a Hessian-based pre-conditioner which better normalizes the step size, penalizing steps in high curvature direction and accelerating in low curvature directions. Understanding in which circumstances those novel optimizers bring improvements is still being investigated <cit.>, and Adam's dominance remains mostly unchallenged. Works incorporating an additional momentum term. <cit.> introduce Grokfast, which uses a pre-filtering step on the gradient to amplify the low frequencies and help solve groking. When combined with Adam, it effectively applies the Adam's EMAs on top of another gradient averaging method (e.g. EMA for Grokfast-EMA). Somewhat similarly, <cit.> refer to the Double EMA (DEMA) used in some finance applications <cit.> as one motivation for their AdMeta optimizer. Our motivation behind AdEMAMix is to combine both a high sensitivity to the recent gradients as well as incorporating very distant gradient, in this respect, using nested EMAs is not the right candidate as it reduces the influence of recent gradients. A more detailed review of AdMeta and DEMA is in App. <ref>. Works on distributed optimization. Perhaps surprisingly, recent work on distributed optimization—DiLoCo and SlowMo <cit.>—are relevant to our work. N workers ^(t)_1,…,^(t)_N are trained independently for K steps (e.g. K=500). Every K steps, the delta updates {Δ_i}_i=1^N ≜{_i^(t+K)-_i^(t)}_i=1^N are averaged and applied to each worker using an outer optimizer with momentum β: _i^(t+K) = _i^(t) - η·Opt(1/N∑_i Δ_i, β). The application of the outer momentum every 500 steps increases the importance of old gradients in the optimization trajectory. We believe this observation might in parts explain the strong results provided by those methods, and further motivated our work. § OUR METHOD: ADEMAMIX Setup & notations. 
Let L_θ: X → ℝ be a loss function parameterized by θ and mapping inputs x ∈ X to ℝ. Given a sampled batch x, let g = ∇_θ L_θ(x) be a stochastic gradient of the loss w.r.t. θ. To minimize the empirical loss, the Adam optimizer <cit.> relies on first and second moment estimates, resp. m and ν, estimated via two EMAs parametrized by (β_1,β_2) ∈ [0,1[^2. A weight-decay parameter λ ∈ ℝ^+ is often used as in <cit.>:
m^(t) = β_1 m^(t-1) + (1-β_1) g^(t) ,   m̂^(t) = m^(t)/(1-β_1^t)
ν^(t) = β_2 ν^(t-1) + (1-β_2) (g^(t))^2 ,   ν̂^(t) = ν^(t)/(1-β_2^t)
θ^(t) = θ^(t-1) - η ( m̂^(t)/(√(ν̂^(t)) + ε) + λ θ^(t-1) ) .   AdamW
with t>0 being the timestep, η the learning rate, and ε a small constant. Initially m^(t=0)=ν^(t=0)=0. To prevent the bias induced by the initial m^(t=0) and ν^(t=0), the outputs of the two EMAs are corrected into m̂^(t) and ν̂^(t). Those are used to compute the final weight update, scaled by the learning rate. How far to look into the past? A typical value for β_1 is 0.9. Fig. <ref> shows the weights given to past gradients for different values of β. The larger the β, the more uniform the average is. To put this in perspective—observing that ∑_i=0^∞ β^i(1-β)=1 for β ∈ [0,1[—the number of successive previous steps receiving a cumulative weight of 0.5 is t_half = ln(0.5)/ln(β) - 1. For β=0.9, t_half ≈ 6, meaning that half of the weight is given to the previous six gradients. This observation can also be extended to SGD with e.g. Polyak or Nesterov momentum <cit.>, which typically relies on similar β values. The value of β_1 is rarely increased beyond ∼ 0.9. In our experiments with AdamW, increasing β_1 further degraded the performance (see App. <ref>). Does this mean older gradients are outdated? We show that this is not the case; rather, increasing β reduces the sensitivity to recent gradients too much. We design AdEMAMix such that the sensitivity to recent gradients is kept, while also incorporating information from much older gradients using an additional momentum term. This allows for the use of much larger β values, e.g. 0.9999. To compare, for β=0.9999, t_half ≈ 6,930, spreading half of the mass over the previous 6,930 past gradients. AdEMAMix. To keep a high sensitivity to recent gradients, while also incorporating information from older gradients, we add a second EMA (changes compared to AdamW are in Blue):
m_1^(t) = β_1 m_1^(t-1) + (1-β_1) g^(t) ,   m̂_1^(t) = m_1^(t)/(1-β_1^t)
m_2^(t) = β_3 m_2^(t-1) + (1-β_3) g^(t)
ν^(t) = β_2 ν^(t-1) + (1-β_2) (g^(t))^2 ,   ν̂^(t) = ν^(t)/(1-β_2^t)
θ^(t) = θ^(t-1) - η ( (m̂_1^(t) + α m_2^(t))/(√(ν̂^(t)) + ε) + λ θ^(t-1) ) .   AdEMAMix
In our experiments, while the values of β_1, β_2 remain similar to those of (<ref>), we often use β_3=0.9999. We find α ∈ [4,10] to work well in practice. Tackling early training instabilities. Early training instabilities are commonplace when training deep models, and empirically motivated the introduction of methods such as learning rate warmup and gradient clipping. <cit.> show how the use of learning rate warmup can be justified from a curvature perspective, allowing the iterates to move to parts of the optimization landscape where larger learning rates are stable. While we use learning rate warmup in all our experiments, we still noticed AdEMAMix models using a large β_3 would diverge early. This, despite not using bias correction over m_2, which lets the momentum buffer fill itself slowly. Those failed runs are characterized by updates of large magnitudes in the early phase of training (see App. <ref>, Fig. <ref>). For this reason, we progressively increase the values of β_3 and α using schedulers. For α we use a linear scheduler.
A linear scheduler for β_3 would be ill-fitted as the same increment of β_3 have a different impact for different values of β_3. For instance, observe that an increase of β of δ_β=0.0001 barely increases the t_half for β=0.9, while 0.999→ 0.999+δ_β increases the t_half of 77. For this reason, we design the β_3 scheduler to increase t_half linearly (see App. <ref> for a derivation of that scheduler). The two schedulers are summarized below: α^(t) = f_α(t,α,T_α) = min(tα/T_α, α), f_α β_3^(t) = f_β_3(t,β_3,β_start,T_β_3) = min(exp(ln(β_start)ln(β_3)/(1-t/T_β_3)ln(β_3)+ t/T_β_3ln(β_start)), β_3) . f_β_3 With T_α and T_β_3 are resp. the warmup times for α^(t) and β^(t)_3 to reach their final and maximal values. In practice we always set those two to the same value: T_α=T_β_3=T_α,β_3, and we typically use T_α,β_3=T, with T being the total number of iterations. β_start is always set to β_1 in our experiments. The use of those schedulers is not always required, especially, we found those have no impact when AdEMAMix is activated later during training (see Fig. <ref>). The full AdEMAMix optimizer, including the schedulers, is shown in Alg. <ref>. Computational overheads. Adding an additional EMA requires additional memory and compute. The added compute is negligible in comparison to what is required for the forward-backward, and has little impact over the total runtime as shown in Fig. <ref>. Moreover—when considering larger distributed setups—it is worth noting that AdEMAMix is not increasing communication (gradient reduction) over Adam. Therefore, we expect the runtime overhead of AdEMAMix to shrink in those settings, as data movements occupy a larger fraction of the total runtime. A more significant overhead is in terms of memory, as AdEMAMix requires to allocate _2, which is of the same size as the model parameters . We believe this issue is of lesser consequences as Fully-Sharded-Data-Paralellism <cit.> can always be used to distribute the optimizer states across compute nodes. In the cases where memory remains an issue, we see two mitigation strategies. The first one consists in setting β_1=0, which removes entirely the need for _1. We show in App. <ref> (Fig. <ref>) that this strategy might work, but at the cost of less stable training. The second mitigation strategy is to apply factorization strategies as in <cit.>. Hyperparameter sensitivity. While we introduce up to four new hyperparameters: α, β_3, T_α, and T_β_3. In practice we always set T_α=T_β_3=T_α,β_3, and use T_α,β_3=T in most cases. We show in App. <ref> that T_α and T_β_3 should only be large enough to prevent instabilities early during training. While all of our experiments on language modeling use β_3=0.9999, other values such as 0.999 or even 0.99999 still can outperform the AdamW baseline (see App. <ref>, Fig. <ref>). On vision tasks, the scatter plots in Fig. <ref> show all the AdEMAMix models trained for those experiments, highlighting how easy it can be to find good α and β_3 values. Overall, we find the ranges of values of α,β_3 and T_α,β_3 providing improvements over AdamW to be wide. See App. <ref> for hyperparameter sweeps. Limitations. AdEMAMix consists in leveraging very old gradients. Therefore, the method is best suited to settings where the number of iterations is important. We report on this effect in App. <ref>, additionally showing how smaller values of β_3 (e.g. β_3=0.999) can be better for low iterations scenarios. 
Moreover, retaining gradient information over many thousands steps can pose a problem in domains requiring fast adaptation to a sudden distribution shift, or general cases in which the distribution is non-stationary. § RESULTS In this section we use AdEMAMix to train language models ( <ref> and  <ref>) and vision transformers ( <ref>) ranging from 24M to 1.3B parameters. §.§ Transformer LLM Training Experimental setup. We use a transformer architecture <cit.> with learnt positional encoding. All of our experiments are using sequences of 1,024 tokens. We experiment with three sizes of transformers: 110M, 335M, and 1.3B. For the learning rate, we use 3k warmup steps followed by—unless specified—a cosine decay to 10^-5. We extensively tuned the hyperparameters for both AdamW and AdEMAMix models (see App. <ref>). We use the RedPajama v2 <cit.> dataset for all of our experiments. We use batch sizes of 64, 96 and 128 for respectively our 110M, 335M, and 1.3B parameter models. Depending on the model, we vary the number of iterations from 256k to 1.5M. For AdEMAMix, we use β_3=0.9999 and α∈{5,8,10} depending on the model. A full description of the architecture and hyperparameters used is in App. <ref>. We train on up to 8 A100 NVIDIA GPUs using data-parallelism. Why not simply increasing AdamW's β_1? While our toy experiment in Fig. <ref> already gives some intuition on why increasing Adam's β_1 is likely not to have the same effect as having an additional EMA as in AdEMAMix, we verify this intuition by training multiple 110M models using Adam with large β_1∈{0.99,0.999,0.9999,0.99999}. When we use a large β_1 from the beginning of training, we observe instabilities for larger β_1 values and no β_1>0.9 improves upon the AdamW baseline. One can imagine this to be due to increasing β_1 too early. Therefore, we also modify AdamW and add the same scheduler on β_1 as we use on AdEMAMix's β_3. β_1 is now increased steadily over the entire training duration. While this mostly stabilizes the training, none of the experiments outperformed the baseline using β_1=0.9. Moreover, to rule out any effect that could be due to early training instabilities, we run the same experiments starting from a pre-trained AdamW checkpoint trained for 300k iterations. We simply resume training and either increase β_1 suddenly or using a scheduler. Here again—unlike when using AdEMAMix—none of those experiments outperform the baseline. The details of those experiments are available in App. <ref>. Those experiments show that simply increasing the β_1 value in AdamW is not enough, which justifies our design of AdEMAMix. Better perplexity for the same number of steps. For all model sizes, all the number of iterations used—ranging from 256k to 1M—, AdEMAMix always outperforms the AdamW baseline. In Fig. <ref>, we show the validation loss curves for AdamW and AdEMAMix models trained—for each size—on various numbers of tokens. For 110M parameter models, training for 256k iterations gives similar results as training an AdamW model for 500k iterations. The gap between baseline and our method seems to be increasing as we increase the number of iterations. For 1.3B parameter models, training using 770k steps is on par with training the baseline for 1.5M iterations—reaching the same performance with 51% fewer tokens (an economy of 96M tokens). In Fig. <ref>, we observe similar improvements when using a constant and then linear decay learning rate scheduler. Training speed comparison. 
We measure the time per iteration for all of our experiments. In Fig. <ref> we plot the time required to train our 110M and 1.3B parameter models for 256k iterations. We observe that the impact of using an additional EMA on the training speed is negligible. If we were to train new models with a fix time budget, the extra iterations of the baseline would not be sufficient to close the gap. For instance, to match a 110M parameter AdEMAMix model trained for 256k iterations, we need to train an AdamW model for 500k iterations, and the time advantage of AdamW would only allow us to do 2,379 additional iterations. Moreover, as mentioned in  <ref>, we expect the time overhead to decrease when multi-node training is used, as IOs would become an important bottleneck, and AdEMAMix is not increasing IOs. Consistency of the gain across token-budgets. In Fig. <ref>, given that enough tokens have been used w.r.t. the model size, we observe consistent gains accross token budgets. It seems our method is able to bring a constant improvement over the baseline. This can be seen more clearly in Fig. <ref>, showing results when using a constant learning rate scheduler—which removes the confounder of the cosine learning rate decay. We observe how, after an initial phase in which the gap grows, this gap becomes seemingly constant after a sufficient number of iterations. Resume from AdamW vs. training AdEMAMix from scratch. So far we trained AdEMAMix models from scratch. We show it is also possible to switch from an AdamW to an AdEMAMix state in the middle of training. When switching to AdEMAMix at step t_switch, we initialize _2^(t_switch)=0 and replace t by t-t_switch in the scheduler equations—if those are used. However, we find that schedulers are not required when resuming training and report results without them in the main paper (see App. <ref> for more details). In Fig <ref> and Fig.<ref>, we show that (i) it is possible to improve upon the baseline when activating AdEMAMix later during training, and (ii) the earlier the switch, the larger the gain, with diminishing returns. This indicates that the improvement of AdEMAMix cannot be attributed solely to early training dynamics, but rather, late training dynamics seem to play an important role. This is further corroborated by the reverse experiment—which switches from AdEMAMix to AdamW mid-training and show a performance degradation (see results in App. <ref>). AdEMAMix models forget the training data slower. As an attempt to understand the reason for AdEMAMix's improvements over AdamW, we study how fast a training batch is forgotten after being used during training. We focus on following one specific batch B. We start by removing B from the RedPajama training data and train AdamW and AdEMAMix models. For those runs, B is therefore akin to a batch from the validation set. We measure the loss on B while training. Now we can resume training from various checkpoints, and inject B into the training data at various times t_B. By comparing the two runs—one having trained on B, while the other never saw B—we can visualize how B is learned, and how it is forgotten. After training on B, we expect the loss on B to decrease suddenly and then increase as the model forgets the contribution of that batch. When comparing the forgetting curves for AdamW and AdEMAMix in Fig. <ref>, we see striking differences. AdamW models forget much faster—the loss over B increases faster—than AdEMAMix models. 
Moreover, at the end of training, batches processed by AdEMAMix see their loss being improved over many thousands of iterations. Additional experiments on forgetting can be found in App. <ref>. §.§ Mamba LM Training Experimental setup & results. Our experimental setup is similar to  <ref>, except that we now train 168M parameter Mamba models <cit.> using the FineWeb dataset <cit.>. See App. <ref> for more details. In Fig. <ref> the improvement using AdEMAMix is consistent with our experiments on Transformer models. This shows how AdEMAMix's gains can extend beyond Transformer models. §.§ ViT Training Experimental setup. In this section we consider a different setting in which the data is now a limited resource, and we do multiple epochs (e.g. 37 or 320). We use two subsets of the ImageNet <cit.> dataset: (i) the widely used ImageNet-1k subset, consisting of 1.3M images and 1,000 possible classes; (ii) a filtered and pre-processed version of the ImageNet-21k <cit.> containing 11M images corresponding to 10,450 classes. For each, we measure the test loss on a held-out test set. We use the ViT architecture <cit.> at two different scales: 24M and 86M parameters. Importantly, if it is common in the vision literature to pre-train large ViT models and finetune them on smaller datasets, this work focuses on pretraining optimization and we therefore train and test on the same distribution. The models' hyperparameters are detailed in App. <ref>, we use a batch size of 4096 for all our experiments. We used training hyperparameters from <cit.> as a starting point and did some additional tuning of the learning-rate, dropout, and weight decay for our AdamW baselines. We then use the hyperparameters of the best AdamW baseline and train AdEMAMix models with various values of α∈{1,5,10,15,20} and β_3∈{0.999,0.9999}. We train models for 320 and 37 epochs for resp. the ImageNet-1k and ImageNet-21k datasets, corresponding in both cases to 100k iterations. Data-augmentation techniques have been shown to be central to the efficient training of ViTs <cit.>. We use simple data-augmentations, which includes mixup <cit.>. We train on 8 A100 NVIDIA GPUs using Pytorch Fully Sharded Data-Parallelism <cit.>. AdEMAMix for different capacity/data ratios. We consider three scenarios that differ by their capacity/data ratios. First, we trained 24M parameter models on 11M images (ImageNet-21k), for 37 epochs. In this setting, as can be seen in Fig. <ref>, it is trivial to find AdEMAMix parameters outperforming the baseline in terms of both training and test accuracy. We now increase the model size to 86M parameters. Fig. <ref> shows it is still easy to find parameters outperforming the baseline. We now keep the model size of 86M parameters and switch to the smaller ImageNet-1k dataset (1.3M images), which—given our 100k iterations—increases the number of epochs to 320. In this setting, Fig. <ref> shows that outperforming the baseline becomes difficult. These experiments show how AdEMAMix seems to perform best in scenarios with large volumes of data w.r.t. the capacity of the model. Overall, we found AdEMAMix to consistently reduce the training loss more efficiently than AdamW. When this decrease in training loss correlates with a decrease in test loss, AdEMAMix outperforms the AdamW baseline. § CONCLUSION In this work, we find that old gradients can be leveraged to efficiently train large language models and ViTs. Our proposed optimizer combines two momentum terms. 
A slow (large β) momentum gathers information over many timestep, while a fast (low β) momentum can adapt the trajectory of the iterates to the rapidly changing loss landscape. We demonstrate the superiority of our optimizer over AdamW through a set of experiments on text modeling and image classification. We moreover reveal how our optimizer forgets the training data at a slower pace. § ACKNOWLEDGEMENTS We thank Alaaeldin Mohamed Elnouby Ali for guiding us throughout the training of our ViT models, as well as Federico Danieli and Abhinav Moudgil for their help training Mamba models. We also thank Angelos Katharopoulos and Ronan Collobert for their insightful comments and feedback. iclr2025_conference § IMPLEMENTATION DETAILS §.§ Deriving the β_3 scheduler Let's consider S(t), the sum of the weights given to the last t gradients by an EMA parameterized by β∈ [0,1[: S(t) = (1-β) ∑_i=0^t β^i We want to know which timestep t would correspond to a cumulative weight of 0.5: (1-β) ∑_i=0^t β^i = 0.5 ⇔β^t+1 = 0.5 ⇔ t = ln(0.5)/ln(β)-1 Let f(β) = ln(0.5)/ln(β)-1. This function provides how many past consecutive gradients receive a cumulative weight of 0.5. Its inverse is: f^-1(t) = 0.5^1/t+1 We want a scheduler which increases β from β_start to β_end such that f(β) increases linearly. Given an interpolating parameter μ∈[0,1], this scheduler can be written as: β(μ) = f^-1((1-μ) f(β_start) + μ f(β_end)) By replacing f and f^-1 by their respective formula, one can arrive to: β(μ) = exp(ln(β_start)ln(β_end)/(1-μ)ln(β_end)+ μln(β_start)) By setting β_end=β_3 and μ=t/T_β_3, we arrive to the β_3-scheduler introduced in  <ref>. We show the shape of our scheduler and compare it to a linear scheduler in Fig. <ref>. §.§ Pytorch implementation The following is a code skeleton for our AdEMAMix optimizer in Pytorch <cit.>. 
[language=Python,breaklines=true,showstringspaces=false,caption=AdEMAMix code skeleton using Pytorch] import math import torch from torch.optim import Optimizer class AdEMAMix(Optimizer): def __init__(self, params, lr=1e-3, betas=(0.9,0.999,0.9999), alpha=5.0, T_beta3=0, T_alpha=0, eps=1e-8, weight_decay=0.0): # init the optimizer @torch.no_grad() def step(self): for group in self.param_groups: lr = group["lr"] lmbda = group["weight_decay"] eps = group["eps"] beta1, beta2, beta3_final = group["betas"] T_beta3 = group["T_beta3"] T_alpha = group["T_alpha"] alpha_final = group["alpha"] for p in group["params"]: grad = p.grad state = self.state[p] # State initialization if len(state) == 0: state["step"] = 0 # step counter used for bias correction state["m1"] = torch.zeros_like(p) # fast EMA state["m2"] = torch.zeros_like(p) # slow EMA state["nu"] = torch.zeros_like(p) # second moment estimate m1, m2, nu = state["m1"], state["m2"], state["nu"] # Bias correction: no correction for beta3's EMA state["step"] += 1 bias_correction1 = 1 - beta1 ** state["step"] bias_correction2 = 1 - beta2 ** state["step"] # Calling the schedulers for alpha and beta3 alpha = alpha_scheduler(state["step"], start=0, end=alpha_final, T=T_alpha) beta3 = beta3_scheduler(state["step"], start=beta1, end=beta3_final, T=T_beta3) # Update the EMAs m1.mul_(beta1).add_(grad, alpha=1 - beta1) m2.mul_(beta3).add_(grad, alpha=1 - beta3) nu.mul_(beta2).addcmul_(grad, grad, value=1 - beta2) # Compute step denom = (nu.sqrt() / math.sqrt(bias_correction2)).add_(eps) update = (m1.div(bias_correction1) + alpha * m2) / denom # Add weight decay update.add_(p, alpha=lmbda) # Apply the update scaled by the learning rate p.add_(-lr * update) return loss §.§ Optax implementation The following is a code skeleton for our AdEMAMix optimizer in Optax, an optimization library based on JAX <cit.>. 
[language=Python,breaklines=true,showstringspaces=false,caption=AdEMAMix code skeleton using JAX and Optax] from typing import NamedTuple import chex from optax._src import transform, combine, base, numerics from jax import tree_util as jtu import jax.numpy as jnp class ScaleByAdemamixState(NamedTuple): count: chex.Array count_m2: chex.Array m1: base.Updates m2: base.Updates nu: base.Updates def ademamix(lr, b1=0.9, b2=0.999, b3=0.9999, alpha=5.0, b3_scheduler=None, alpha_scheduler=None, eps=1e-8, weight_decay=0.0): return combine.chain( scale_by_ademamix(b1, b2, b3, alpha, b3_scheduler, alpha_scheduler, eps), transform.add_decayed_weights(weight_decay), transform.scale_by_learning_rate(lr), ) def scale_by_ademamix(b1, b2, b3, alpha, b3_scheduler, alpha_scheduler, eps): def init_fn(params): m1 = tree_zeros_like(params) # fast EMA m2 = tree_zeros_like(params) # slow EMA nu = tree_zeros_like(params) # second moment estimate return ScaleByAdemamixState( count=jnp.zeros([], jnp.int32), count_mu2=jnp.zeros([], jnp.int32), m1=m1, m2=m2, nu=nu ) def update_fn(updates, state, params=None): del params c_b3 = b3_scheduler(state.count_m2) if b3_scheduler is not None else b3 c_alpha = alpha_scheduler(state.count_m2) if alpha_scheduler is not None else alpha m1 = tree_update_moment(updates, state.m1, b1, 1) # m1 = b1 * m1 + (1-b1) * updates m2 = tree_update_moment(updates, state.m2, c_b3, 1) nu = tree_update_moment_per_elem_norm(updates, state.nu, b2, 2) count_inc = numerics.safe_int32_increment(state.count) count_m2_inc = numerics.safe_int32_increment(state.count_m2) m1_hat = tree_bias_correction(m1, b1, count_inc) nu_hat = tree_bias_correction(nu, b2, count_inc) updates = jtu.tree_map( lambda m1_, m2_, v_: (m1_+c_alpha*m2_)/(jnp.sqrt(v_)+eps), m1_hat, m2, nu_hat ) return updates, ScaleByAdemamixState( count=count_inc, count_m2=count_m2_inc, m1=m1, m2=m2, nu=nu ) return base.GradientTransformation(init_fn, update_fn) § EXPERIMENTAL DETAILS §.§ Transformer LLM experiments Architecture details. Our architecture is based on the transformer decoder of <cit.>. We use learnt positional encoding. We use a SentencePiece <cit.> tokenizer with a vocabulary of 32000 tokens. The model specific parameters for the different sizes are summarized in Table <ref>. Our implementation is using Jax <cit.>, and we train using , except for normalization modules and softmax which use . The optimizer states and operations are in . How did we tune the hyperparameters? Starting from our smallest models (110M parameters), we first tuned hyperparameters for our AdamW models. We then use the best hyperparameters found as a starting point for AdEMAMix and tuned β_3 and α. When we use schedulers for α and β_3, we set T_α=T_β_3=T, with T being the total number of iterations. Table <ref> summarizes the hyperparameters we tried for this model size. The impact of AdEMAMix's hyperparameters is discussed extensively in App. <ref>. For our 330M parameter models, we mostly kept the best hyperparameters found for the 110M model and re-tuned the learning rate and gradient clipping. For AdEMAMix, we additionally tested multiple β_3 and α values. We summarize this process in Table <ref>. Finally, for our 1.3B parameter models, we re-iterated the same process, re-tuning only the learning rate and gradient clipping parameters for our AdamW runs. When trying to transfer the best learning rate found to AdEMAMix, we found it to be too high, causing instabilities we couldn't fix using gradient clipping. 
For this reason, we also tuned the learning rate for AdEMAMix for this model size. This process is described in Table <ref>. Hyperparameters for experiments switching from AdamW and AdEMAMix. For experiments in Fig. <ref> and Fig. <ref>, when we switch from AdamW to AdEMAMix during training, for our 110M parameter models (Fig. <ref>), we use α=2, β_3=0.9999 and T_α,β_3=0. For our 1.3B parameter models (Fig. <ref>), we use α=1, β_3=0.9999 and T_α,β_3=0. Other hyperparameters are inherited from the AdamW model we are switching from. Hyperparameters used our constant learning rate scheduler experiments. For Fig. <ref> we use η=10^-4, the remaining of the hyperparameters are identical as those used for our other 1.3B experiments and provided in Table <ref>. Hyperparameters used in our forgetting experiments. For the experiments in Fig. <ref>, we used 110M parameter models and hyperparameters from Table <ref>. §.§ Mamba LM experiments Architecture details. We use a Mamba architecture <cit.> with an embedding dimension of 768, an expansion factor of 2, a state size of 16, and a depth of 24. We use a batch size of 120 sequences of 1024 tokens. We use the tokenizer from <cit.>, using the HuggingFace library <cit.>. Hyperparameter details. We used parameters mostly taken from <cit.>: * Learning rate: η=0.0006, * Warmup steps: 3k, * η-scheduler type: cosine decay to 10^-5, * Weight-decay: λ=0.1, * Gradient clipping value: 1, * Total training steps T∈{64k,128k}. For experiments with AdamW, we use β_1=0.9 and β_2=0.999. For AdEMAMix, we use β_1=0.9, β_2=0.999, β_3=0.9999, T_α,β_3=T and α=8. §.§ ViT experiments Architecture details. We use a ViT architecture following <cit.>. The architecture details for both our 24M and 86M parameter models can be seen in Table <ref>. We do not use an EMA of the iterates as our final model. Our implementation is in Pytorch <cit.>, and use . Data augmentation and Mixup. While more sophisticated augmentation methods exist <cit.>, we simply use random-resized cropping, random horizontal flip (p=0.5), and random color jitter (applied with a probability p=0.8). How did we tune the hyperparameters? For each model size, we started by tuning the hyperparameters for the AdamW baseline. The hyperparameters we tuned and the values we retained for our three different settings are in Table <ref>. For our experiments on ImageNet-1k, given that all models overfit the dataset, we select the best model according to the minimum validation loss, akin to using early stopping. For AdEMAMix, for each setting, we use the hyperparameters of the best AdamW model and only tune α and β_3. For each setting, we train 8 models using α∈{1,5,10,15,20} and β_3∈{0.999,0.9999}. All of the AdEMAMix models are shown in Fig. <ref> and Fig. <ref>. Only the best AdamW model is shown in Fig. <ref>. § ADDITIONAL RESULTS §.§ LLM experiments §.§.§ In-Context Learning (ICL) results In-Context Learning (ICL) results. We evaluate our largest (1.3B) LLM models on in-context learning tasks. We use the package <cit.>. We consider the following tasks: HellaSwag <cit.>, Winogrande <cit.>, ARC <cit.>, BoolQ <cit.>, LogiQA <cit.>, MathQA <cit.>, MMLU <cit.>, OpenbookQA <cit.>, PIQA <cit.>, PubmedQA <cit.>, RewardBench <cit.>, Sciq <cit.>, TruthfulQA <cit.>. The two AdEMAMis and AdamW models we benchmark have been trained for 1M steps, with a batch size of 128, corresponding to around 131B tokens. The results of the evaluation are in Table <ref>. 
We observe how the model trained with AdEMAMix is outperforming the AdamW model on nearly all of the tasks, sometimes by a large margin for e.g. PubmedQA. In addition to Table <ref>, we track the evolution of some of those scores during training for MMLU, RewardBench, ARC-Challenge, and ARC-Easy, results can be seen in Fig. <ref>. For that figure, we modify the way MMLU is evaluated. Instead of appending all the multiple answers, we concatenate the question prompt and each answer separately, and pick the most likely answer. We found it allows a much better comparison of smaller models, which otherwise fluctuates around random guessing. §.§.§ More results on forgetting Evolution of forgetting during training. In Fig. <ref> in the main paper, we follow the loss on one specific batch over the entire training. In the following experiment, we do a closeup on the forgetting of different batches as training progresses. The goal being to visualize how later batches are ultimately more remembered, and compare the forgetting profiles of AdamW and AdEMAMix. For this, in Fig. <ref> and Fig. <ref>, we follow fixed batches at different stages of the training process. Every 10k iterations, we measure the loss over a specific batch B before and after training on that batch. Why are training batches forgotten more at the end of training? Given the previous observation about how both AdamW and AdEMAMix models are forgetting the training batches at different paces given different stages of training, one can wonder what is causing this phenomenon? To answer this question, we contrast the results of Fig. <ref> with results obtained using a different scheduler with a constant learning rate and a linear decay. We train 110M parameter models for 300k iterations, using a max learning rate of 0.001 and a batch size of 64. Results for those experiments are in Fig. <ref>. Those results indicate that the decaying learning rate might be the main parameter controlling how much a batch is remembered during training. This has interesting implications when selecting which learning rate decay to use. A cosine decay, with a rather long period of decay, might remember more than a constant learning rate scheduler with only a small number of steps of linear decay at the end. §.§.§ Removing the second EMA mid-training Removing the second EMA mid-training. In Fig. <ref> and Fig. <ref>, we looked at what is happening when we switch from AdamW to AdEMAMix in the middle of training. In this section, we will look at the opposite conversion: removing the _2 parameter of AdEMAMix during training, effectively switching back to AdamW. Results for this experiment can be seen in Fig. <ref>. Right after the switch, we observe a drop of the loss, followed by an increase and finally back to convergence. Ultimately, the final loss value is in between the ones obtained training only using AdamW and only using AdEMAMix. §.§.§ Hyperparameter sensitivity Hyperparameter sensitivity. Depending on whether the α and β_3 schedulers are used, AdEMAMix adds up to 4 new hyperparameters: α, β_3, T_α and T_β_3. In all our experiments we tied T_α=T_β_3=T_α,β_3. In this section we analyze the sensitivity of AdEMAMix to those hyperparameters. We study the impact of α and β_3 in Fig. <ref>, revealing wide ranges of values for which hyperparameters are outperforming the AdamW baseline. We study the sensitivity of T_α,β_3 as well as justify the choice of using a scheduler on α in Fig. <ref>. 
When training from scratch T_α,β_3 needs simply to be large enough to avoid early divergence. In Fig. <ref> we study the sensitivity to the gradient clipping and AdEMAMix's β_1 value. While gradient clipping can help stabilize training and smooth the loss curves, it has little impact over the final loss value. Reducing the value of β_1, we observe some loss spikes, yet the final loss value is relatively unaffected. §.§.§ Impact of training for fewer iterations Sensitivity to the number of iterations. As we rely on old gradients, on question that arises is whether AdEMAMix would still perform well when reducing the number of iterations. In this section we compare the loss obtained by 110M parameter models when halving the number of iterations and doubling the batch size, in such a way that the number of tokens consumed for training is always the same (17B). In Fig. <ref>, we observe that decreasing the number of iterations too much increases the final loss of the model. This effect is more pronounced for AdEMAMix. However, at both 32k and 64k iterations, AdEMAMix still outperforms AdamW. In Fig. <ref> we observe that reducing β_3 can mostly alleviate the problem. §.§.§ Limitations of a single EMA Results on a 2D toy example. A natural question that arises from our method is whether it is possible to obtain the same results without the additional EMA. In this section we aim to convince the reader that the two EMAs are required. First, we propose to study a small toy 2D optimization problem: (x^⋆,y^⋆) ∈_(x,y) f(x,y), with f(x,y) = 8(x-1)^2 × (1.3 x^2+2x+1) + 0.5(y-4)^2 This function was introduced by <cit.> as a function with sharp curvature along the x-axis, and flatter curvature along the y-axis. Initializing the first iterate ^(0)=(0.3, 1.5), we start optimizing with (i) Adam with a large β_1=0.999, and (ii) AdEMAMix with β_1=0.9 and β_3=0.999. In both cases β_2=0.999. To make the experiment interesting, we initialize the EMA buffers for both methods to either (-3,0) or (-0.8,-3). This has for effect to give an initial "speed" to the first iterate. As a result of this speed, Adam with a large β_1 is unable to correct his trajectory. In contrast, using two EMAs, one with a small, and one with a large β—as in AdEMAMix—converges to the solution. The fast changing EMA can correct for the bias introduced by the slow changing EMA. See Fig. <ref> for results. Results training 110M parameters LMs. To further demonstrate that simply increasing β_1 in AdamW does not provide nearly the same gains as AdEMAMix, we run several additional experiments. In Fig. <ref> we show what we obtain when training from scratch using a single EMA with a β_1∈{0.99, 0.999, 0.9999, 0.99999}. Naively increasing β_1 in AdamW does not work; adding a scheduler on β_1 to smoothly increase its value during the entirety of the training also fails. In Fig. <ref>, we show results when increasing β_1 in the middle of training, with and without scheduler on β_1. This differs from the previous setting as we bypass the initial training phase capable of causing instabilities (see  <ref>), as well as bypass the initial iterations during which the bias correction done by AdamW can have an impact. Here again we observe increasing the β_1 value does not provide any noticeable gain. §.§.§ Miscellaneous results Impact of β_3 and α schedulers when starting AdEMAMix from AdamW. As mentioned in  <ref>, we do not necessarily need the α and β_3 schedulers when resuming training from a sufficiently late checkpoint. Fig. <ref> and Fig. 
<ref> do not use schedulers, unless we start using AdEMAMix from scratch. In Fig. <ref>, we vary the warmup periods for α and β_3 when starting AdEMAMix from an AdamW checkpoint at t_switch=300k. We observe how increasing T_β_3=T_α only slows down the convergence. This serves to illustrate that the schedulers for α and β_3 are only required to stabilise the optimization during the early training phase. AdEMAMix from scratch without schedulers. In this section we provide more justification for the use of schedulers on α and β_3. We reveal that, without the use of schedulers, the norms of the updates increases significantly, even for small α values. Unstable and large weight updates are known to occur in the early phases of training when learning rate warmup is not used <cit.>. Using large momentum values too early seem to increase this phenomenon. See Fig. <ref>. Impact of the linear decay duration when using a constant η-scheduler. In Fig. <ref> we show results using AdEMAMix with a linear warmup → constant → linear decay learning rate scheduler. We used 100k of linear decay. In this section we experiment with 200k steps of linear decay, the rest of the parameters are the same: we use a 1.3B parameter model with a max learning rate of 10^-4 and remaining hyperparameters as in Table <ref>. Results can be seen in Fig. <ref>. Same figure as Fig. <ref>, including the AdamW trained on 197B tokens. In Fig. <ref> we represent the AdamW experiment trained on 197B tokens by a blue horizontal line for aesthetic reasons. In Fig. <ref> we include this missing curve to the same plot. §.§ ViT experiments Top-k accuracies. In Fig. <ref>, we plot the test and train loss for the final iterates. In Fig. <ref>, we give a more detailed view by reporting the evolution of the training loss, test loss, and top-1 accuracy. Looking at the first row, the training loss for AdEMAMix is systematically better than the AdamW baseline. The second row shows that in cases where the test loss correlates well with the train loss, AdEMAMix works well. The top-1 accuracy carries the same message. §.§ Comparison with other methods §.§.§ Comparison with AdMeta and DEMA Double Exponential Moving Average (DEMA). Originally introduced by <cit.>, a DEMA originally aimed to emphasize the weight of recent asset price fluctuations, making the DEMA indicator more reactive to changes compared to simple EMAs. Given notations introduced in the main paper, let EMA_β^(T-N:T)≜EMA(β, ^(T-N), …, ^(T)), let N be the window size representing how many consecutive values are considered in the average, the formula for DEMA can be written as follows: DEMA(β, ^(T-2N), …, ^(T)) = 2 ·EMA_β^(T-N:T) - EMA(β, EMA_β^(T-2N:T-N), …, EMA_β^(T-N:T)) . If a simple EMA tends to give a significant weight to more recent observations, a DEMA emphasizes this behaviour by removing some of the weight given to older observations. This is not what we suggest doing in this work, we want both high sensitivity to recent observations and non-negligible weights given to older observations. AdMeta. <cit.> take inspiration over DEMA and use nested EMAs in their AdMeta-S optimizer: _1^(t) = β_1 _1^(t-1)+^(t) ^(t) = κ^(t) + μ_1^(t) _2^(t) = β_2 _2^(t-1)+(1-β_2)^(t) ^(t) = ^(t-1)-η_2^(t) . AdMeta-S With μ and κ parameterized by β_1 ∈ [0,1[ as such: μ = 25 - 10 (β_1 + 1/β_1) κ = 10/β_1 - 9 . In their AdMeta-S experiments, they use β_1=0.9, corresponding to (μ,κ)=(4.88,2.11). β_2 takes values ranging from 0.1 to 0.4. 
As a results, unlike AdEMAMix, AdMeta is not leveraging very old gradients. Analysing the AdMeta algorithm, we see that it consists in two nested EMAs. We show the shape of nested EMAs in Fig. <ref>. In sharp contrast with our approach we observe that (i) it reduces the weights given to recent gradients and (ii) it gives a small weight to old gradients. §.§.§ Comparison with Lion The Lion optimizer. <cit.> derived the following optimizer. We change notations to facilitate the comparison with AdEMAMix: ^(t) = ^(t-1) - η·(sign(α^(t-1)+(1-α)^(t)) +λ^(t-1)) ^(t) = β^(t-1) + (1-β) ^(t) . Lion The Lion optimizer uses a sign function and updates its EMA after updating the parameters. Moreover, it does not normalize the updates. While Lion and AdEMAMix are quite different from each others, we can draw one similarity. Indeed, the interpolation α^(t-1)+(1-α)^(t) is similar to combining two EMA, one of them using β=0. While we mention the possibility of setting AdEMAMix's β_1 to 0 in  <ref>, we show in App. <ref> (Fig. <ref>) that this can cause instabilities. Interestingly, <cit.> find that larger β=0.99 values work best. Beside this similarity, the two optimizers behave very differently, the biggest difference being the use of the sign function, which <cit.> claim can help regularize the training. We test the Lion optimizer on language modeling. To tune the hyperparameters, we took values from <cit.> as a starting point, as well as recipes provided by the Optax Jax library. A summary of our hyperparameter tuning is in Table <ref>: The training curve associated to the best hyperparameters is in Fig. <ref>. We observe that Lion is not outperforming our carefully tuned AdamW baseline. Moreover, no attempt to increase β beyond 0.99 was successful, as those models mostly diverged.
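For readability, a minimal single-tensor sketch of the Lion update in the notation above is given below; α interpolates between the EMA m and the current gradient, and the EMA is updated after the parameter step. This is an illustration of the two equations as written, not the reference implementation:

import torch

@torch.no_grad()
def lion_step(theta, m, grad, lr, alpha=0.9, beta=0.99, weight_decay=0.0):
    # Update direction: sign of the interpolation between the EMA m and the current gradient.
    update = torch.sign(alpha * m + (1.0 - alpha) * grad)
    # Parameter step with decoupled weight decay, scaled by the learning rate.
    theta.add_(-lr * (update + weight_decay * theta))
    # The EMA is updated after the parameters, as in the equations above.
    m.mul_(beta).add_(grad, alpha=1.0 - beta)
    return theta, m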
http://arxiv.org/abs/2409.02331v1
20240903230428
A parameterization of anisotropic Gaussian fields with penalized complexity priors
[ "Liam Llamazares-Elias", "Jonas Latz", "Finn Lindgren" ]
stat.ME
[ "stat.ME" ]
A parameterization of anisotropic Gaussian fields with penalized complexity priors
L. Llamazares-Elias^1 (corresponding author: L.S.Llamazares-Elias@sms.ed.ac.uk), J. Latz^2, and F. Lindgren^1
^1 School of Mathematics, University of Edinburgh, United Kingdom
^2 Department of Mathematics, University of Manchester, United Kingdom
==================================================================================
§ ABSTRACT Gaussian random fields (GFs) are fundamental tools in spatial modeling and can be represented flexibly and efficiently as solutions to stochastic partial differential equations (SPDEs). The SPDEs depend on specific parameters, which enforce various field behaviors and can be estimated using Bayesian inference. However, the likelihood typically only provides limited insights into the covariance structure under in-fill asymptotics. In response, it is essential to leverage priors to achieve appropriate, meaningful covariance structures in the posterior. This study introduces a smooth, invertible parameterization of the correlation length and diffusion matrix of an anisotropic GF and constructs penalized complexity (PC) priors for the model when the parameters are constant in space. The formulated prior is weakly informative, effectively penalizing complexity by pushing the correlation range toward infinity and the anisotropy to zero. Keywords: Anisotropy, Bayesian, Penalized Complexity, Prior, Stochastic Partial Differential Equation. § INTRODUCTION Gaussian random fields (GFs) are widely used to model spatial phenomena <cit.> while accounting for the uncertainty that may arise due to measurement error, model misspecification, or incomplete information. The prevalence of GFs is due to the fact that they are well understood theoretically, satisfy desirable properties, and are easily characterized – they are entirely specified by their mean and covariance <cit.>. A convenient way of representing certain GFs is as solutions to stochastic partial differential equations (SPDEs). This representation allows for physical interpretation to be assigned to the parameters of the equation. Furthermore, it allows for computationally efficient inference, prediction, and uncertainty quantification using a finite element method (FEM) approximation of the field <cit.>. In the literature, a common choice is to model using stationary fields. That is, the correlation of the field at two locations only depends on the Euclidean distance between said locations. While this may be an appropriate assumption in some cases, in others, it is inadvisable. This limitation can be overcome by introducing additional parameters to model the anisotropy present in the field as in <cit.>. In the following, we consider the semi-parametric estimation of the random field and its anisotropy parameters. The existing work leaves us with two significant challenges: * The anisotropy parameterization from <cit.> is non-identifiable as it has two parameter combinations for each anisotropy matrix, leading to a multi-modal likelihood, making it unsuitable as a general parameterization. * Given that not all parameters can be recovered under in-fill asymptotics <cit.>, the choice of prior distribution on the parameters of the model may significantly impact the posterior distribution. As a result, suitable priors need to be defined.
To address these issues, we make the following contributions: * Alternative parameterization: We present an alternative parameterization of the anisotropy that preserves parameter interpretability and which is smooth and invertible. * Prior definition: We construct penalized complexity (PC) priors <cit.> for the parameters in the model. An additional benefit of this construction is that it avoids overfitting by favoring simpler base models. * Validation and prediction: We conduct a simulation study that shows that PC priors outperform “non-informative” priors. We then use the derived model to study precipitation in Norway and show that the anisotropic model outperforms the isotropic one in the presence of limited information. The outline of the work is as follows. In <Ref>, we introduce and motivate our anisotropic model. In <Ref>, we address Contribution <ref> and also construct a transformation that renders the parameters a Gaussian vector, providing the convenience of working with Gaussian random variables. In <Ref>, we present Contribution <ref>. Next, in <Ref>, we conduct a simulation study of the designed PC priors and compare the results to other possible priors on the parameters. In <Ref>, we study the performance of the model and priors on a real data set. Finally, in <Ref>, we synthesize the obtained results and discuss future avenues of research. § ANISOTROPIC MODEL §.§ Formulation In this section, we address the first main contribution. How can we introduce anisotropy in our model and parameterize it such that each parameter has an interpretable effect? We work in 2 dimensions and within the framework of the SPDE approach <cit.>. In this approach, the GF u is defined as the solution to a SPDE. Most commonly, the SPDE chosen is (κ^2-Δ)^α/2u=. Here, is Gaussian white noise on L^2(^2), and is defined such that given f,g ∈ L^2(^2) the random measure (f):=⟨ f,⟩ verifies [(f)(g)]=f,g_L^2(^2). The resulting field u has mean zero and is chosen to be isotropic to guarantee uniqueness. We recall that, by definition, the field u is isotropic if there exists some function r such that, for all x,y∈^2, K(x,y):=Cov[u(x),u(y)]=r(y-x). The isotropy of u may be an appropriate assumption in some cases. However, it is inadvisable if the correlation of the field is not equal in all spatial directions. To model this anisotropy, we will consider the model given by (κ^2-∇·H∇)u/σ_u =√(4 π)κ, where the parameters are κ, σ_u ∈ (0,∞) which are positive and bounded away from zero, and a symmetric positive definite matrix H∈^2× 2 with eigenvalues bounded away from zero. The parameters control the length scale, marginal variance, and anisotropy, respectively, as we will explain in the next subsections. The formulation in (<ref>) preserves the advantages of the SPDE approach. Namely, representing u as the solution to a SPDE gives it a physical interpretation. The term κ^2 u represents reaction whereas -∇H·∇ u represents diffusion <cit.>. Furthermore, using a finite element method, u can be projected onto the finite-dimensional Hilbert space _n spanned by the basis functions ψ_1, …, ψ_n linked to a mesh M_n of the domain. This gives a sequence of Gaussian Markov random fields u_n with sparse precision matrices (the precision matrix is the inverse of the covariance matrix), which converges in distribution to u as the mesh becomes finer and finer. This sparsity allows for a significant speed-up in computations (Kriging, posterior simulation, likelihood evaluation, etc.) <cit.>. 
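To illustrate the computational point, the following minimal sketch (our own illustration, not code from the paper) assembles a toy sparse precision matrix with the same reaction–diffusion structure as the discretized SPDE — here simply κ²I plus a lattice graph Laplacian, a stand-in for the isotropic case H = I — and draws one realization of the corresponding GMRF. The grid size, the value of κ, and the dense Cholesky factorization are choices made only to keep the sketch self-contained; at realistic problem sizes one would use a sparse Cholesky factorization (e.g. CHOLMOD).

```python
# Minimal GMRF sketch (not the authors' code): sample u ~ N(0, Q^{-1}) for a sparse
# precision Q with the reaction-diffusion structure of the discretized SPDE (isotropic stand-in).
import numpy as np
import scipy.sparse as sp

def toy_precision(n_side: int, kappa: float) -> sp.csr_matrix:
    """Toy stand-in for Q_u on an n_side x n_side grid: kappa^2 * I plus a 2D lattice Laplacian."""
    d1 = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n_side, n_side))
    lap2d = sp.kronsum(d1, d1)                     # 2D Laplacian via a Kronecker sum
    return (kappa**2 * sp.identity(n_side**2) + lap2d).tocsr()

def sample_gmrf(Q, rng: np.random.Generator) -> np.ndarray:
    """Draw u ~ N(0, Q^{-1}) using Q = L L^T and u = L^{-T} z, z ~ N(0, I)."""
    L = np.linalg.cholesky(Q.toarray())            # replace by a sparse Cholesky (CHOLMOD) for large n
    z = rng.standard_normal(Q.shape[0])
    return np.linalg.solve(L.T, z)

rng = np.random.default_rng(0)
Q = toy_precision(n_side=40, kappa=1.0)
print(f"nonzeros in Q: {Q.nnz} of {Q.shape[0] ** 2}")   # the sparsity that makes computations cheap
u = sample_gmrf(Q, rng).reshape(40, 40)
print(u.shape, round(float(u.std()), 3))
```

For the anisotropic operator only the values of the stiffness entries change; the sparsity pattern, and hence the cost of these computations, is unchanged.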
§.§ Derivation We now motivate the choice of the SPDE (<ref>). By definition, a field u on ^d is stationary if its covariance K depends only on the relative position between two points. That is, for some function r called the covariance function K(x,y):=Cov[u(x),u(y)]=r(y-x), ∀x,y∈^d. Equation (<ref>), which is an extension of (<ref>), requires that the Euclidean translation y-x captures all relevant information related to the covariance of the field between any two spatial locations x and y. However, Euclidean geometry may not fit with the underlying properties of the field. For example, suppose that our field describes the geological properties of some homogeneous terrain. Even if the field was initially stationary, stationarity is lost if the terrain underwent a geological deformation ψ. Instead, the relationship K(x,y)=r(ψ^-1(x)-ψ^-1(y)) would be more appropriate. This is the deformation method <cit.> and is, for instance, sometimes used to connect different layers in deep Gaussian processes. In deep Gaussian processes, this transformation is then random and not diffeomorphic <cit.>. The deformation method provides an easy way to construct non-stationary fields starting from stationary ones. Furthermore, it can be used in a layered approach, where a stationary field is first built through some appropriate method and then deformed into a non-stationary field. As discussed previously, SPDEs provide a convenient framework for the construction of stationary fields, for example, through (<ref>). Let us see what happens when we deform such a field. To this aim, consider an (unknown) d-dimensional manifold 𝒟 and our target manifold 𝒟 obtained through a diffeomorphic transformation ψ:𝒟→𝒟. Further consider the solutions u to (1-Δ)u=√(4π), on 𝒟. Here, either =^d, in which case, to obtain uniqueness, we impose a stationarity condition. Otherwise, is some bounded subset of ℝ^d, in which case Neumann conditions or Dirichlet conditions are imposed on the boundary. Then, a change of variables yields that u:= σ_u ·u∘ψ^-1 verifies the non-stationary SPDE 1/Ψ[1-Ψ∇·ΨΨ^T/Ψ∇] u/σ_u=√(4π)/Ψ^1 / 2𝒲̇ on , where Ψ is the Jacobian of ψ, and we denote its determinant by Ψ <cit.>. Let us write γ for the geometric mean of the eigenvalues of Ψ. Then, Ψ=γ^d and we obtain from (<ref>) that 1/γ^d[1-γ^d ∇·γ^2-dH∇] u(x)/σ_u=√(4π)/γ^d / 2 on , where we defined H:=γ^-2ΨΨ^T. Note that, if we write κ:=γ^-1 and take the dimension d to be 2 we recover our model in (<ref>). For general d, it follows from the definition of H that * H is symmetric. * H has determinant 1. * H is positive definite. From now on, we will impose these three assumptions on H. Furthermore, we will restrict ourselves to the stationary case by imposing that H is constant in space, or equivalently, we impose that ψ is a linear deformation. That is, ψ(x)=Ψx. By construction, the solution field u is thus geometrically anisotropic <cit.>. That is, if we replace the Euclidean metric with the deformed metric x_Ψ^-1:= Ψ^-1x, we obtain analogously to (<ref>) that, for some function r:^d → called the covariance function of u, K(x,y):=Cov[u(x),u(y)] =r(y-x_Ψ^-1), ∀x,y∈^d, In the case =^d, the marginal variance of u is 𝔼[u (x)^2]=1 for all x∈ <cit.> (we recall that the solution to SPDEs of the form (<ref>) have mean 0). As a result, the marginal variance of u is 𝔼[u(x)^2]=σ_u^2 for all x∈. In the case where is a bounded domain, there is a boundary effect that affects the marginal variance of u. 
However, at a distance larger than twice the correlation length from the boundary, the bounded domain model is almost indistinguishable from the unbounded domain model. As a result, the boundary effect can be made negligible by embedding the region of interest in a sufficiently large domain. § PARAMETERIZATION In this section, we parameterize H so that the parameters convey intrinsic geometric meaning about the field u. Recall conditions <ref>-<ref> and suppose for example that κ is fixed to 1 so that Ψ= √(H). Write {(v,λ^2),(v_⊥,λ^-2)} for the eigensystem of H, where v:=(v_1,v_2)∈ℝ^2 and v_⊥=(-v_2,v_1) and we can suppose λ≥ 1 by reordering if λ< 1. Then, the previous discussion shows that u corresponds to deforming and rescaling the stationary field u through u(x)= σ_u ·u(Ψx), Ψx= λx,vv+λ^-1x,v_⊥v_⊥. The rescaling by σ _u corresponds to changing the variance of the field. The deformation corresponds to stretching the initial domain by a factor of λ in the direction of v and contracting, also by a factor of λ, in the orthogonal direction v_⊥. The above shows that the eigensystem of H carries fundamental geometric information and motivates a parameterization of H in terms of its eigensystem. Equation (<ref>) was also considered in <cit.>. Here, the authors defined v(α):=(cos(α),sin(α)) and parameterized H as H_v(α)=γI+βv(α)v(α)^T. However, the map α↦H_v(α) is not injective as H_v(α)=H_-v(α). Because of this, it is impossible to recover the sign of v and, thus, this parameterization is not identifiable and leads to a bimodal likelihood. The crucial step to obtain an identifiable parameterization is to consider the “half-angle” version v of v as an eigenvector of H. Given v=(v_1,v_2) ∈^2 define v:= vexp(i α /2 ), where α := (v) ∈ [0,2 π). Write v=(v_1,v_2) and v_⊥=(-v_2,v_1). Then, the following defines a smooth, invertible parameterization on the space of symmetric positive definite matrices of determinant 1. H_v =exp(v)/v^2 vv^T+exp(-v)/v^2 v_⊥v_⊥^T =cosh( |v|) I + sinh( |v|)/|v|[ v_1 v_2; v_2 -v_1 ]. The link with the parameterization in <cit.> is the following Let H_v(α),H_v be as in (<ref>), (<ref>), then if we set v(α)=±v/v, γ = exp(-v), β =(1-γ^2)/γ. We obtain H_v(α)=H_v. The idea behind the usage of the half angle version v of v is that it avoids any issues of identifiability in the sign as v and -v do not simultaneously belong to the parameter space. Parameterization using a Cholesky decomposition of H is also possible and is more readily generalized to higher dimensions. However, it is not as easily interpretable in terms of intrinsic properties of u. In <Ref>, we show the half-angle vector field together with the parameterized diffusion matrices H_v. Here, each H_v is represented by the ellipse centred at v whose main axis is exp(v)v/v and whose secondary axis is exp(-v)v_⊥/v. That is, the axes of the ellipses correspond to the eigenvectors of H_v scaled by their respective eigenvalues. <Ref> shows visually how the anisotropy increases with v and is directed towards v. It can also be seen how the parameterization is injective (no two ellipses are the same) and smooth (the ellipses vary smoothly with v). In <Ref> (a), we show the plot of the covariance K(x,0)=𝔼[u(x)u(0)] for κ=1 constant and as v is rotated around the X-axis by 90^∘ in each plot. As we can see, the vector field is most correlated in the direction of v, which is rotated by 45^∘ in each image, half the speed of rotation of v, and is least correlated in the direction of v_⊥. 
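As a quick check of the parameterization (a sketch of ours, not the authors' code), the snippet below evaluates H_v from the closed-form expression above and verifies the stated properties: symmetry, unit determinant, eigenvalues exp(±|v|), and that the principal eigenvector is the half-angle direction associated with v, which is what makes the map v ↦ H_v injective.

```python
# Sketch (not from the paper's code): the anisotropy matrix H_v and its advertised properties.
import numpy as np

def H_of_v(v1: float, v2: float) -> np.ndarray:
    """H_v = cosh(|v|) I + sinh(|v|)/|v| [[v1, v2], [v2, -v1]]; the identity at v = 0."""
    r = np.hypot(v1, v2)
    if r == 0.0:
        return np.eye(2)
    return np.cosh(r) * np.eye(2) + (np.sinh(r) / r) * np.array([[v1, v2], [v2, -v1]])

v1, v2 = 0.3, 0.8
r = np.hypot(v1, v2)
H = H_of_v(v1, v2)

print(np.allclose(H, H.T), np.isclose(np.linalg.det(H), 1.0))      # symmetric, det H = 1
lam, vec = np.linalg.eigh(H)
print(np.allclose(lam, [np.exp(-r), np.exp(r)]))                   # eigenvalues exp(-|v|), exp(|v|)

# Principal eigenvector = half-angle direction (cos(alpha/2), sin(alpha/2)), alpha = arg(v) in [0, 2*pi)
alpha = np.arctan2(v2, v1) % (2 * np.pi)
half_angle = np.array([np.cos(alpha / 2), np.sin(alpha / 2)])
principal = vec[:, np.argmax(lam)]
print(np.isclose(abs(principal @ half_angle), 1.0))
```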
In Figure <Ref> (b), we show simulations of the field where again we leave κ=1 constant and rotate v by 90^∘ in each plot. The figure shows that the field diffuses the most in the direction of v and the least in the direction of v_⊥. The realizations of u are obtained using a FEM to solve the SPDE as detailed in <Ref>. Another method that could be used to simulate the field u and obtain the variance plots is the spectral method. The spectral method uses the spectral density of the field (the Fourier transform of the covariance function r if it exists) S(ξ):= ∫_exp(-2π iξ·x)r(x), ξ∈, to sample from the field, where is Fourier domain (that is, =^d if ⊂^d is a box and =^d if =^d.). The field u is then sampled using the stochastic integral. The spectrum for stationary fields u solving (<ref>) is given by (see <Ref> and <cit.>) S(ξ)=4πκ^2σ_u^2/(κ^2+4π^2ξ^THξ)^2. Then, the covariance can be calculated as the Fourier transform of the spectrum, whereas the field u can be sampled using the stochastic integral u(x)=∫_𝒟u(x) 2π iξ·xdZ(ξ), where Z(ξ) is called the spectral process. Using the fast Fourier transform, high-resolution samples of u can be obtained. See <cit.> for the details. Thus far, we have worked with constant v. The same parameterization goes through when H(x) is allowed to be spatially varying. In this case, v(x) is a spatial vector field. In <Ref>, we take κ= σ_u=1 and show the covariance K(x,(2,2)) and field u when v(x) is chosen to be the “twice-angle” field of the rotational field v(x)=(-x_2,x_1) (left of each subfigure), and from when v(x)=(x_1,x_2) (right of each subfigure). The figures show how the information of the field diffuses infinitesimally in the direction of v. The covariance and samples are once more obtained using the finite element method. In this non-stationary setting, the spectral method cannot be used as the spectrum is only defined for stationary fields. In summary, we have parameterized the anisotropic field u using parameters (κ, v_1,v_2,σ_u), where u solves (<ref>) and H:= H_v is given by (<ref>). The parameterization is identifiable and smooth, and the parameters have an intrinsic geometric interpretation. This will be the parameterization used throughout the paper. § PENALIZED COMPLEXITY PRIORS PC priors were originally developed in <cit.> to construct weakly informative priors while adhering to certain principles. The main idea is that one has a parametric family of models M_θ with parameter θ and a base model M_0 (by convention corresponding to θ=0). One views M_0 as the most suitable in the absence of any information. The larger the distance ζ(M_θ, M_0) between a model M_θ and M_0, the smaller the prior density for θ should be, and the decrease is set to be exponential. That is, we choose the prior distribution for θ, such that: d(θ ):=ζ (M_θ,M_0)∼Exp(λ_θ ). Here, the rate λ_θ > 0 of the exponential distribution serves as a hyperparameter selected by the user, which governs the model's flexibility. A smaller value of λ_θ increases the model's flexibility, allowing for greater deviations of M_θ from the base model M_0. Conversely, a larger value of λ_θ imposes stricter penalties on these deviations, thereby reducing the model's flexibility. In the previously cited <cit.>, this notion of “distance” was taken to be η (M_θ,M_0):=√(2 KLD(M_θ | |M _0)), where KLD is the Kullback-Leibler divergence KLD(M_θ | | M_0 ):=∫_E log(Ṃ_θ/Ṃ_0 )Ṃ_θ . 
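The spectral representation just described can be sketched in a few lines for the stationary case (our own illustration, not the authors' implementation): evaluate the spectral density S(ξ) above on a regular frequency grid and recover the covariance function by an inverse FFT. The grid resolution, frequency cut-off, and parameter values below are arbitrary choices; the printed value approximates the marginal variance σ_u².

```python
# Sketch (not the authors' code): covariance of the stationary anisotropic field from its
# spectral density S(xi) = 4*pi*kappa^2*sigma_u^2 / (kappa^2 + 4*pi^2 * xi' H xi)^2 via an inverse FFT.
import numpy as np

kappa, sigma_u = 1.0, 1.0
v1, v2 = 0.3, 0.8                                   # anisotropy vector; H = H_v as above
r = np.hypot(v1, v2)
H = np.cosh(r) * np.eye(2) + (np.sinh(r) / r) * np.array([[v1, v2], [v2, -v1]])

def spectral_density(x1, x2):
    quad_form = H[0, 0] * x1**2 + 2 * H[0, 1] * x1 * x2 + H[1, 1] * x2**2
    return 4 * np.pi * kappa**2 * sigma_u**2 / (kappa**2 + 4 * np.pi**2 * quad_form) ** 2

N, F = 512, 20.0                                    # grid size and frequency cut-off (arbitrary)
xi = (np.arange(N) - N // 2) * (2 * F / N)
X1, X2 = np.meshgrid(xi, xi, indexing="ij")
S = spectral_density(X1, X2)

d_xi = 2 * F / N
cov = np.real(np.fft.ifft2(np.fft.ifftshift(S))) * (N * d_xi) ** 2  # covariance on the dual spatial grid
print(round(float(cov[0, 0]), 3))                   # approximates the marginal variance sigma_u^2
```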
One possible complication of using the KL divergence to define the distance is that the Radon–Nikodym derivative Ṃ_θ/Ṃ_0 does not exist. In fact, in the case of our model, if the variances of M_θ and M_0 are equal, for any possible base model u_0 and θ≠0 the measures M_θ and M_0 are singular. That is, with the usual convention, KLD(M_θ | | M_0 )=∞ . The proof of this fact is given in <Ref>. Equation (<ref>) makes an exact adherence to the previous steps impossible. As a result, we will merely adopt the principles of the PC prior construction (exponential penalization of complexity) while altering how we measure complexity between models. The idea of modifying the metric used to measure complexity has also been carried out in different settings, such as in <cit.>, where a Wasserstein distance was used. The possibility of using the Wasserstein distance to measure the complexity of the model (<ref>) was also considered. However, the Wasserstein distance is bounded in this setting (see <Ref> in the supplementary material) and, as a result, was discarded. Thus, one of the main challenges is to find a computationally feasible distance which captures relevant information about our model. To this aim, we give the following definition. Given two sufficiently regular models u_A∼ M_A,u_B ∼ M_B, with respective spectral densities S_A, S_B and variances σ_A,σ_B, we define the pseudometric D_2(M_A ,M_B):=∫_^22πξ^4S_A(ξ)/σ_A^2-S_B(ξ)/σ_B^2^2 ξ^1/2 . The above definition uses the rescaled spectral densities of each field to define a Sobolev seminorm on the difference of the correlations of M_A, M_B. For more details on how this distance was chosen and other distances that were considered, see <Ref>. Write M_κ,v for the model given by SPDE (<ref>) with parameters (κ,v) and with variance fixed to σ _u=1. Due to the rescaling of the spectral densities in <Ref>, the choice of σ _u does not affect the distance between models and can be set to any other positive value. We define the base model as the limit in distribution of M_κ,v as κ,v go to zero M_0:= lim_(κ_0,v_0) →0M_κ_0,v_0 =0,1⊗1, where 1: 𝒟→ℝ is the constant function , z ∈↦ 1 and (f⊗ g)(x,y):=f(x)g(y). That is, u_0 is constant over space and follows a Gaussian distribution with variance 1. We choose this u_0 as our base model as it is simple, and (κ,v) =0 is the only distinguished point in the parameter space. To reflect the dependency of this distance on the parameters, we use the notation d(κ,v):=D_2(M_κ,v,M_0) := lim_(κ_0,v_0) →0D_2(M_κ,v,M_κ_0,v_0). The exponential penalization in (<ref>) imposes one condition on the prior of (κ,v) whereas, since (κ,v) is three dimensional, two more conditions are necessary to determine the prior distribution uniquely. In <cit.>, this issue was circumvented by working iteratively, fixing one parameter while letting the other vary, building PC priors on each parameter, and then multiplying them together to get a joint prior. However, we prefer to work jointly from the start. This approach is made possible by the structure of d, which is calculated to have the form d(κ,v)=f(v)g(κ) Since the angle α:= (v) does not affect (<ref>), we impose that α is uniformly distributed in [0,2π). This choice guarantees that we do not favor the alignment of the covariance of u in any direction of the plane. Next, since the contribution of f and g to the distance is symmetric, we impose that f-f(0) and g knowing f are exponentially distributed. 
The translation is necessary as f takes a nonzero minimum f(0) at v= 0, whereas g takes a minimum of 0 at κ =0. This construction leads to the distance d(κ,v ) being exponentially distributed and gives us three conditions that uniquely determine (κ,v). A PC prior for (κ,v) with base model (κ,v) =0 is π(κ,v) =λ_θλ_vf'(v ) f(v ) /2πvexp(-λ_v( f(v )-f(0) ) -λ_θ f(v ) κ ), where λ_θ>0 ,λ_v>0 are hyperparameters and f(r):=(π/3 (3 cosh (2 r )+1))^1/2, f'(r)=√(π)sinh (2 r)/√(cosh (2 r)+1/3). See <Ref> for the proof of this and other results of this section. We plot the marginal density of the prior on κ and v for λ_θ=λ_v=16π^2 in <Ref>. The marginal prior densities take a maximum at κ =0 and v=0 respectively, and by construction, the prior on v is radially symmetric. The hyperparameter λ_θ determines the flexibility of the model (how much we penalize large values of d(κ,v )), whereas λ_v controls the degree of anisotropy (how much we penalize large values of v). Their values can be set to agree with desired quantiles using the following two results. The prior for r:=v satisfies ℙ[r >r_0]=β if and only if λ_v =-log(β )/f(r_0)-f(0) . The prior for κ satisfies ℙ[κ >κ _0]=α if and only if λ_θ = 1/κ_0(1/f(0) W_0(λ_v f(0)λ_vf(0)/α ) -λ_v), where W_0 is the principal branch of the Lambert function. That is, W_0(x) is the real-valued inverse of xx for x≥ 0, x=W_0(x)W_0(x), ∀ x ≥ 0. When specifying values of λ_θ,λ_v, it is useful to consider the ratio between the eigenvalues of H_v and the empirical correlation range. These measure respectively how much more correlated the field is in one direction in space and the distance at which the field becomes essentially uncorrelated (see Section <ref>). We denote these by a:= exp(v), ρ:=√(8)κ^-1 . <Ref> and <Ref> can then be rewritten as follows. The PC priors in satisfy that ℙ[a>a_0]=β and ℙ[ρ <ρ_0]=α if and only if λ_v =-log(β )/f(log(a_0)) -f(0), λ_θ = ρ _0/√(8)(1/f(0) W_0(λ_v f(0)λ_vf(0)/α ) -λ_v) . In a practical application, α,β can be chosen to be small (for example 0.01), a_0 can be chosen to be an unexpectedly large amount of anisotropy, and ρ_0 can be chosen as a surprisingly small correlation range for the field. The parameters (κ,v) can be written as a joint transformation of a three-dimensional vector with standard multivariate Gaussian distribution. Thus, it is possible to efficiently generate independent samples (κ,v) through sampling multivariate Gaussian random variables, which is straightforward. Let Y∼(0, 𝐈_3) and write respectively Φ, R for the CDFs of a univariate standard Gaussian and Rayleigh distribution with shape parameter 1. Define A := √(Y_1^2 + Y_2^2), B := f(0) - log(1 - R(A))/λ_v. Then, it holds that (v_1, v_2, κ) d=φ(Y_1, Y_2, Y_3) := ( f^-1(B) Y_1/A, f^-1(B) Y_2/A, -log(1 - Φ(Y_3))/λ_θ B), where f^-1(x) = 1/2cosh^-1( 4x^2 - 1/3). The transformation φ only involves standard functions and can be evaluated efficiently. For the proof, see <Ref>. § SIMULATION STUDY §.§ Framework This section aims to study the performance of the PC priors. To do so, we set different priors π_κ, v on κ, v, observe a noisy realization of the field, and compare the behavior of the posteriors, resulting from each prior. 
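Sampling from the prior via the transformation φ is straightforward; the sketch below (ours, not the authors' code) draws (v_1, v_2, κ) from a standard Gaussian vector as in the result above and checks the tail probability ℙ[|v| > r_0] against its closed form. Two assumptions are made for concreteness: the hyperparameters are set to λ_θ = λ_v = 16π², the values used for the density plots above, and f is inverted numerically rather than through a closed-form inverse. This is convenient for the simulation study described next.

```python
# Sketch (not the authors' code): exact draws of (v1, v2, kappa) from the PC prior through the
# Gaussian transformation phi above; f is inverted numerically, lambda_theta = lambda_v = 16*pi^2.
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

f = lambda r: np.sqrt(np.pi / 3.0 * (3.0 * np.cosh(2.0 * r) + 1.0))   # f(r) from the PC-prior result
f0 = f(0.0)                                                            # = sqrt(4*pi/3)
f_inv = lambda x: 0.0 if x <= f0 else brentq(lambda r: f(r) - x, 0.0, 50.0)

def sample_pc_prior(n, lam_theta, lam_v, rng):
    Y = rng.standard_normal((n, 3))
    A = np.hypot(Y[:, 0], Y[:, 1])
    B = f0 + 0.5 * A**2 / lam_v            # since -log(1 - R(A)) = A^2 / 2 for the Rayleigh(1) CDF
    f_inv_B = np.array([f_inv(b) for b in B])
    v1, v2 = f_inv_B * Y[:, 0] / A, f_inv_B * Y[:, 1] / A
    kappa = -np.log(norm.sf(Y[:, 2])) / (lam_theta * B)   # norm.sf = 1 - Phi
    return v1, v2, kappa

rng = np.random.default_rng(1)
lam = 16 * np.pi**2
v1, v2, kappa = sample_pc_prior(50_000, lam, lam, rng)

# Monte Carlo check of the tail formula P[|v| > r0] = exp(-lambda_v * (f(r0) - f(0)))
r0 = 0.1
print(round(float(np.mean(np.hypot(v1, v2) > r0)), 3),
      round(float(np.exp(-lam * (f(r0) - f0))), 3))
```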
We consider the random field u that solves our model (<ref>) on the square domain =[0,10]^2 and define the observation process y= A u +ε, where u∈ℝ^n is a discrete approximation of the solution to (<ref>) obtained through a FEM on a mesh with n nodes, and A∈^m × n linearly interpolates to observe u at m=15 locations x_j_j=1^m which are obtained by sampling uniformly from , and ε∈^m is a noise vector with ε∼(0,Q_ε ^-1), Q_ε:=σ_ε^-2I_n. The details of how u is sampled from and more detailed results can be found in <Ref>. We set independent priors (κ , v) ∼π_κ , v, σ_u∼Exp(λ_σ_u), σ_ε∼Exp(λ_σ_ε). The priors on σ_u, σ_ε correspond to their respective PC priors (see respectively <cit.> Section 3.3, <cit.> Theorem 2.1) with hyperparameters λ_σ_u, λ_ε chosen so that ℙ[σ_u>σ_0]=0.01, ℙ[σ_ε>σ_1]=0.01, where we take σ_0=10,σ_1=1.5. The prior π_κ, v is one of the priors to be compared (among them the PC prior). We consider the following options for π_κ , v: * The PC priors in (<ref>) where the anisotropy hyperparameters λ_θ , λ_v are chosen so that the anisotropy ratio a=exp(|v|) and the correlation range ρ = √(8)κ ^-1 satisfy ℙ[a> a_0]= 0.01, ℙ[ρ < ρ_0]=0.01, where we take a_0=10, ρ_0=1. This choice of hyperparameters corresponds to allowing with probability 0.01 that the field is 10 times more correlated in any given direction and, with the same probability, that the field has a correlation range smaller than 1. The values of λ_θ,λ_v can then be calculated using <Ref>. * Independent priors κ∼Exp(λ_κ ) and v∼(0,σ_v^2 I_2 ). Under these priors, κ,v have the same mode as the PC priors (0 in each case). Additionally, λ_κ ,σ_v ^2 are chosen such that, under these priors, (<ref>) also holds. We denote this prior by π_EG. * Independent (improper) uniform priors for log(κ),v with infinite support. We denote this prior by π_U. * Independent linear transformations of beta priors on log(κ),v_1,v_2 with shape parameters 1.1 such that the correlation range is supported in [ρ_0/w, wL] and v_1,v_2 are supported in [-wa_0,w a_0]. The shape parameter is chosen so that the distribution is approximately uniform while having smooth density, which is relevant for the optimization. The purpose of w>1 is to extend the support of the parameters past ρ_0,a_0. The same value of ρ_0=1 is taken, L=10 is taken to be the length of , and w is set to 20. We denote this prior by π_β. We simulate θ^(j)^true from π_sim∈{π_PC, π_EG, π_U, π_β}, use the FEM to simulate u^(j) from (<ref>) and then simulate y^(j) from (<ref>). Then, for each of the four priors π_κ,v∈{π_PC, π_EG, π_U, π_β} we approximate the posterior distribution of the parameters θ=(κ, v, σ_u, σ_ε) given the data y^(j) and evaluate the performance of each of the four priors. This process is repeated J=600 times and repeated for each of the four possible values of π_sim. Since the posterior is not available in closed form, it is necessary to approximate it. We considered three choices: approximating the posterior by the Gaussian Z with the same median and whose precision is minus the Hessian of the posterior at its median, using importance sampling with Z as the proposal distribution, and using smoothed importance sampling with Z as the proposal distribution. Of these three approximations, the best performing method was smoothed importance sampling. As a result, this is the method we use in the following section to approximate integrals against the posterior. The details of the different approximations can be found in <Ref> in the supplementary material. 
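For reference, the PC-prior hyperparameters in the first item can be computed from the quantile conditions ℙ[a > a_0] = β and ℙ[ρ < ρ_0] = α via the Lambert-W expression given earlier; a small sketch (ours, not the paper's code) with a_0 = 10, ρ_0 = 1 and α = β = 0.01 as above:

```python
# Sketch (not the authors' code): PC-prior hyperparameters from the quantile conditions
# P[exp(|v|) > a0] = beta and P[sqrt(8)/kappa < rho0] = alpha, using the principal Lambert-W branch.
import numpy as np
from scipy.special import lambertw

f = lambda r: np.sqrt(np.pi / 3.0 * (3.0 * np.cosh(2.0 * r) + 1.0))
f0 = f(0.0)

def pc_hyperparameters(a0, beta, rho0, alpha):
    lam_v = -np.log(beta) / (f(np.log(a0)) - f0)
    w0 = np.real(lambertw(lam_v * f0 * np.exp(lam_v * f0) / alpha))
    lam_theta = (rho0 / np.sqrt(8.0)) * (w0 / f0 - lam_v)
    return lam_v, lam_theta

lam_v, lam_theta = pc_hyperparameters(a0=10.0, beta=0.01, rho0=1.0, alpha=0.01)
print(round(lam_v, 3), round(lam_theta, 3))        # roughly 0.44 and 0.54 for these choices
```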
§.§ Results The focus of our study is the anisotropy parameters (κ, v_1, v_2). As a result, in this section, we will focus on the performance of the different priors on these parameters and relegate the results for the remaining parameters to <Ref>. We first show in <Ref> the empirical cumulative distribution function (eCDF) of the vector of distances of the true anisotropy parameters (log(κ^true),v_1^true,v_2^true) to the MAP estimates (log(κ),v_1,v_2) for each of the four different distributions on θ^true. We observe that π_PC and π_EG perform the best and give almost identical results, to the point where it is difficult to distinguish them from the plots. This behavior is to be expected as both these priors are similar and lead to almost identical posteriors, as is discussed further in <Ref>. The difference in performance between π_PC, π_EG and π_U, π_β is clearest for v_1,v_2 whereas for log(κ), when θ^true∼π_U or θ^true∼π_β all four models give comparable results. The figures relative to the parameters v_1 and v_2 are almost identical. This is expected by the symmetry in the priors and the likelihood, as there is no preferred direction of anisotropy Next, in <Ref>, we show the length of the symmetric equal-tailed 0.95 credible interval for each parameter. As can be seen from the plots, both π_PC and π_EG limit the length of the credible intervals to a similar extent while π_U,π_β give much wider credible intervals. This connects with the motivating factor for the construction of the PC priors which is to penalize the complexity of the model. In <Ref>, we show the eCDF of the posterior mean complexity _π _θ|yd(κ , v) , where the complexity d is defined in (<ref>) for each of the four different true distributions on θ. As can be seen, in all four cases, the PC and exponential-Gaussian priors reduce model complexity as compared to the uniform and beta priors. § AN APPLICATION TO RAINFALL DATA §.§ Framework In this section, we analyze the performance of the anisotropic model and the PC priors on a data set for total annual precipitation in southern Norway between September 1, 2008, and August 31, 2009. This data set was studied in <cit.>, <cit.> using a linear model. y_i = β _0 + β _1 h_i + u(x_i)+ ε _i, i=1,…,m= 233, where y_i is the total annual precipitation at location x_i, h_i is the altitude at location x_i, u(x_i) is a spatially correlated random effect, and ε∼(0,σ_ε^2I) is a noise term. In the articles above, u was modeled using a non-stationary Matérn process (κ^2(x)-Δ)(u(x)/σ_u(x))=√(4π)κ(x). The non-stationarity κ^2(x),σ_u(x) was parameterized by a log-linear model with covariates elevation and gradient of elevation and, in <cit.>, priors motivated through the PC prior approach were constructed for these extra parameters. We will only study the stationary anisotropic setting where u| θ is a solution to (<ref>) with spatially constant parameters. By incorporating β:=(β _0,β _1) into u, our linear model (<ref>) fits into the framework of <Ref>, where now A_β := (1_m, h,A), u_β := (β, u) take the place of A, u in (<ref>). We will consider β∼ (0, Q_β^-1), Q_β = τ_βI_2 independent from u, θ and where we set the precision parameter to be τ_β= 10^-4. §.§ Maximum a posteriori estimates We compare the model in (<ref>) under anisotropic PC and EG priors (see <Ref> and <Ref>) on (κ, v) and isotropic PC priors where v is set to 0. To do so, we must first derive an isotropic PC prior on κ using the distance metric (<ref>) restricted to the case where v=0. 
Using equation (<ref>) we obtain the following result. D_2 (M_κ , 0,M_0)= 1/√(12 π)κ. So, by the principle of exponential penalization, κ∼Exp(λ_iso ). As in <cit.>, we choose the hyperparameters in the three priors so that P(ρ<10)=0.05, P(σ_u>3)=0.05, P(σ_ε>3)=0.05. For the anisotropy parameter v we choose the hyperparameter λ_v so that ℙ[a>10]=0.05. This choice of hyperparameters imposes that with probability 0.05, the field has a correlation range that is 10 times larger in any given direction. In <Ref>, we plot the density of the priors on ρ. Since the marginal density of ρ is the same for the isotropic PC and EG priors, we do not plot it. We also plot the density of the PC and (anisotropic) EG priors on r:=v. The decay of the marginal PC prior on κ and of the EG prior are both exponential. The decay rate of the marginal PC prior on v is exp(- c_1 exp(v )), whereas for the EG prior it decays slower, as exp(-c_2 v^2) for some constants c_1,c_2. The MAP estimates, and symmetric 95% credible intervals for the anisotropic PC, anisotropic EG, and isotropic PC models are shown in <Ref>. The credible intervals for v_1 in the anisotropic models do not contain 0, indicating that with high confidence, anisotropy is present in the precipitation field. The half angle vector of the MAP for the anisotropic model with PC priors v is v=(0.02,0.48), and indicates that the precipitation is a=1.64 times more correlated in the North-South direction than in the East-West direction. In <Ref>, we plot the posterior prediction, latent field, and the covariance function of the anisotropic field u with parameters κ,v. §.§ Model performance To assess the performance of the models, we calculate a variety of scores for each model. We recall that, given a (predictive) distribution P and an (observed) point y, a score is a function S(P,y) that measures how closely the prediction P matches the observation y. If y follows the distribution Q, then the expected score is S(P,Q) := _Q[S(P,y)]. The score should be minimized when the true distribution Q matches the predictive distribution P. In this case, the score is said to be proper. It is strictly proper if it is minimized if and only if P=Q. We will consider the following scores * Given a distribution F̣ and an observation x ∈ the squared error (SE) is defined as SE(F,x) := (x-∫_-∞^∞ tF̣(t))^2. The expression above defines a proper scoring rule (its value is minimized when x follows the predictive distribution). However, it is not strictly proper (the minimum in F is not unique). * Given a CDF F and an observation x ∈, the continuous ranked probability score (CRPS) is defined as <cit.> CRPS(F,x) := ∫_ (F(t)-1{x ≤ t})^2 d t. The CRPS is a strictly proper scoring rule on distributions with finite expectations. * Given a random variable X with mean μ and precision Q, the Dawid-Sebastiani score (DSS) is defined as <cit.> DSS(X;x) := -log(Q ) + (x - μ)^TQ(x - μ). The DSS is similar to a log Gaussian density score and is a strictly proper scoring rule for Gaussian random variables. Still, it also defines a proper score for non-Gaussian random variables. Given a sample sample y=(y_1,...,y_n), a vector of predictive distributions F=(F_1,...,F_n) and a score S , the mean score is defined as S(F, y):= 1/n∑_i=1^n S(F_i,y_i). By the linearity of the expectations, if S is a (strictly) proper scoring rule, then so is S. 
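The scores can be evaluated directly from these definitions. The sketch below (ours, not the authors' code) computes the CRPS of a Gaussian predictive distribution by numerical integration of the definition above, cross-checks it against the known closed form for the Gaussian case, and evaluates the corresponding DSS; the predictive mean, standard deviation, and observation are arbitrary illustrative values.

```python
# Sketch (not the authors' code): CRPS and DSS for a univariate Gaussian predictive distribution.
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def crps_numeric(mu, sigma, x):
    """CRPS(F, x) = int (F(t) - 1{x <= t})^2 dt, integrated numerically on both sides of x."""
    left = quad(lambda t: norm.cdf(t, mu, sigma) ** 2, -np.inf, x)[0]
    right = quad(lambda t: (1.0 - norm.cdf(t, mu, sigma)) ** 2, x, np.inf)[0]
    return left + right

def crps_gaussian(mu, sigma, x):
    """Known closed form for the Gaussian case, used here only as a cross-check."""
    z = (x - mu) / sigma
    return sigma * (z * (2.0 * norm.cdf(z) - 1.0) + 2.0 * norm.pdf(z) - 1.0 / np.sqrt(np.pi))

def dss(mu, sigma, x):
    """DSS(X; x) = -log(Q) + (x - mu)^2 * Q with precision Q = 1 / sigma^2."""
    Q = 1.0 / sigma**2
    return -np.log(Q) + (x - mu) ** 2 * Q

mu, sigma, x = 1.0, 0.8, 2.1                       # arbitrary predictive mean/sd and observation
print(round(crps_numeric(mu, sigma, x), 5),
      round(crps_gaussian(mu, sigma, x), 5),
      round(dss(mu, sigma, x), 5))
```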
§.§ Score results In this section, we calculate the mean scores RMSE:=(SE(F,y))^1/2, CRPS(F,y), DSS(F,y), where F_i:= π(y_i| y_-i) is the LOO predictive distribution under each of the three models in Subsection <ref>, and y is the observation. An initial calculation showed that the scores of the previous sections are very similar under all three priors. From this, we conclude that the model is very informative, and the influence of the priors is limited. As a result, to better compare the priors, we uniformly sub-sample y and observe only y' ∈^n_y' with n_y'< n_y=233. We then calculate the scores from the previous sections. Due to the extra variability introduced by sub-sampling the observations, we repeat this process 10 times. The resulting mean scores are shown in <Ref>. The anisotropic models perform better with lesser data, with the PC prior performing slightly better than the EG prior. Whereas with more data the results are almost equivalent. From the trend of the above results it is reasonable to check whether the isotropic model outperforms the other two models with a larger amount of observations. To check this hypothesis we conducted a simulation study where a larger amount of observations are generated, up to n_y=1000. The results are included in <Ref> in the supplementary material and show that this is not the case. The isotropic model does not outperform the anisotropic models with a larger amount of observations. § DISCUSSION In this study, we constructed a smooth, identifiable, and geometrically interpretable parameterization for a 2D anisotropic spatial Gaussian model. We developed penalized complexity (PC) priors for these parameters, defining model complexity as a Sobolev seminorm of the correlation function. This distance was calculated in a closed form using the spectral density. Our performance comparison of the PC prior against other priors demonstrated its effectiveness in penalizing model complexity. We found that priors designed to match the quantiles of the PC prior yielded similar results. Both these priors significantly outperformed the other non-informative priors considered, highlighting the necessity of incorporating penalization in prior information to prevent overfitting. Applying the anisotropic model to a real dataset of precipitation in Southern Norway, we observed that the anisotropic model outperformed the isotropic model when the number of observations was small, with the PC prior slightly outperforming the other choices. However, with a larger number of observations, the isotropic model performed similarly to the anisotropic models. These results indicate that the anisotropic model is more informative when the data is scarce, but as the number of observations increases, the isotropic model becomes more competitive. The results also suggest that a nonstationary model could be better suited to capture spatially varying patterns in the data. In conclusion, we advocate for the use of informative priors for the anisotropy parameters in spatial Gaussian models. The PC prior is highly effective, but other priors designed to match desired quantiles can also be effective and are simpler to construct. Looking forward, we aim to extend this study to the non-stationary setting where the model parameters vary spatially. This presents a challenge as there is no agreed-upon definition of correlation function or spectral density, necessitating a different definition of complexity. We are also interested in extending the parameterization to higher dimensions. 
The current construction relies on a “half-angle” parameterization of the anisotropy vector, which is not easily extendable to higher dimensions. Finally, we are interested in extending the construction to different orders of regularity. § SUPPLEMENTARY MATERIAL tocsectionSupplementary Material § PROOF OF <REF> By definition, H_v is symmetric with determinant 1. Furthermore, it is positive definite as its eigenvalues are positive. Given a positive definite symmetric matrix A with determinant 1, A has an eigensystem of the form (w,λ ),(w^⊥,λ ^-1), where λ≥ 1. By normalizing and taking -w if necessary we may suppose that w=α i with α∈ [0,π). Then, we obtain A = H_v for v=log(λ)2α i. This proves that the parameterization is surjective. Suppose now that H_v=H_v' then their eigenvalues and eigenvectors must be equal so v=v', v=av'. For some a ∈. From the first condition, we deduce that v=v'. Taking absolute values in the second condition thus gives a=1. By construction v≠ -v' for all v,v' so necessarily v=v'. This proves that the parameterization is invertible. Next, to derive the second equality in (<ref>), we use the half-angle formula cos(α/2)= √(1+cos(α ) /2)sign(sin(α)) , sin(α/2 )= √(1-cos(α) /2). This gives that vv^T= v^2/2I+ v/2[ v_1 v_2; v_2 -v_1 ], v_⊥v^T_⊥= v^2/2I-v/2[ v_1 v_2; v_2 -v_1 ] . The proof follows immediately by using the definition of H_v (first line of (<ref>)). § SELECTION OF A DISTANCE In this section, we discuss the choice of the Sobolev distance in <Ref>. We first examine other possible choices, such as the Kullback-Leibler divergence, the L^2 distance, the Wasserstein distance, and the Hellinger distance, and show that they are unsuitable for our purposes. We then derive the exact form of the Sobolev distance in <ref> and show a possible alternative definition in <ref>. Without pretending full generality we will use the following notation. Firstly, given a domain =[a,b] ⊂^d we denote the orthonormal Fourier basis on by e_k(x):= vol( )^-1/2exp(ω_k·x/b-a), ω_k:= 2 π i k k∈^d, where we used the elementwise division [x/(b-a)]_i:= x_i/(b_i-a_i). Given a stationary field u on , we denote its covariance function r and spectral density, if they exist, by r(x):= Cov[u(x),u(0)], S(k):=∫_ r(x)e_k(x)x. Additionally, we will denote the covariance operator of u by K. For stationary u ∈ L^2([a,b]) this is given by Kf,g_L^2( ):= ∫_∫_ r(y-x)f(x)g(y) xy, f,g ∈ L^2( ). For the calculations, the following lemma will be very useful. Let u be a stationary field in L^2() with spectral density S. Then, the covariance operator of u diagonalizes in the orthonormal Fourier basis e_k_k∈^d with eigenvalues S(k). By the Bochner theorem, the covariance function r of u is the Fourier transform of its spectral density S. As a result, for all f,g ∈ L^2( ) we have Kf,g_L^2( ) =∫_∫_ r(y-x)f(x)g(y) xy=∫_∫_∑_k∈^d S(k)e_k(y-x)f(x)g(y) xy =∑_k∈^d S(k)∫_ f(x)e_k(x)x∫_ g(y)e_k(y) y=∑_k∈^d S(k)f(k)g(k), Applying this to f = e_j and g = e_k shows that K e_j,e_k_L^2( ) = S(k)δ_jk. This concludes the proof. We will also repeatedly use the expression of the spectral density of the solution u to (<ref>). To calculate this, note that the covariance operator of u is given by K= ^-2 where := κ^2- ∇·H∇/√(4 π)κσ _u. Let us consider first the case where = a,b⊂^d. Using that ∇ e_k= i ω_k e_k we obtain that diagonalizes in the Fourier basis [ ]_jk:= e_j,e_k_L^2( ) = κ^2- ω_k·Hω_k/√(4 π)κσ _uδ_jk, [ ^-1]_jk= √(4 π)κσ _u/κ^2- ω_k·Hω_kδ_jk. 
As a result, by <Ref>, the spectral density of u is given by S(k)= 4 πκ ^2 σ _u^2/(κ^2- ω_k^T Hω_k)^2, k∈^d. If now =^d, the defining property of the spectral density is that, analogously to (<ref>), it holds that <cit.> Kf,g_L^2(^d )= ∫_^d S(ξ)f(ξ)g(ξ)ξ, ∀ f,g ∈ L^2(^d). Using that the Fourier transform is an isometry on L^2(^d), Kf,g_L^2(^d ) = ^-1 f, ^-1g_L^2(^d )= ^-1f,^-1g_L^2(^d ) =∫_^d4πκ ^2 σ _u^2/(κ^2- 4π^2ξ^T Hξ)^2f(ξ)g(ξ)ξ. So the stationary solution to (<ref>) on ^d has spectral density S(ξ)= 4πκ ^2 σ _u^2/(κ^2- 4π^2ξ^T Hξ)^2. §.§ Kullback-Leibler divergence As previously discussed, the KLD is unsuitable as a notion of distance between models as it is infinite for every possible parameter value. As a result, a different way to measure distance between models is necessary. This section discusses some of the options considered and how <Ref> was eventually chosen. We begin by showing that the KLD is infinite for every possible parameter value with the same variance. To do so, we first give a sufficient condition for two stationary measures to be mutually singular. This is essentially the converse direction of <cit.>. See also <cit.>. Write =[a,b]⊂^d and let u_A,u_B be two stationary Gaussian fields in L^2( ) with spectral densities S_A,S_B. Suppose that ∑_k ∈^d(S_A(k)/S_B(k)-1)^2 < ∞. Then, the Gaussian measures defined by u_A,u_B are mutually singular. Denote the covariance operators of u_A,u_B by K_A,K_B respectively and let I be the identity on L^2( ). By the Feldman–Hájek theorem, a necessary condition for u_A,u_B to not be mutually singular is that the following operator is Hilbert Schmidt, T:=(K_B^-1/2K_A^1/2)(K_B^-1/2K_A^1/2)^*-I. is a Hilbert-Schmidt operator where K_A,K_B are the covariance operators of u_A,u_B and I is the identity operator on L^2( ). By <Ref>, K_A, K_B both diagonalize in the orthonormal Fourier basis with eigenvalues given respectively by S_A(k) and S_B(k). As a result, the Hilbert-Schmidt norm of T is given by T_HS^2= ∑_k∈^d(S_A(k)/S_B(k)-1)^2. The result follows by the necessary condition for mutual singularity. Using the just proved <ref>, we show that the KLD between two different solutions to (<ref>) is infinite if they have the same variance parameter. In particular, if we were to renormalize the spectral densities of the two solutions to have the same variance, the KLD would be infinite for all different parameter values. Let u_A,u_B be the solutions to (<ref>) with Neumann or Dirichlet boundary conditions and parameters (κ _A,v_A,σ _A),(κ _B,v_B,σ _B) respectively. Then, the measures defined by u_A,u_B are mutually singular unless κ _A/κ _B=σ _A/σ _B and v_A=v_B. In particular, if u_A ≠ u_B both share the same variance then KLD(μ _A||μ _B)=∞. We have by (<ref>) that ∑_k∈^2(S_A(k)/S_B(k)-1)^2 = ∑_k∈^2(κ _A^2 σ _A^2/κ _B^2 σ _B^2(κ _B^2+4π^2k·H_v_Bk)^2/(κ _A^2+4π^2k·H_v_Ak)^2-1)^2 . Since H_v_A, H_v_B are positive definite, if their eigenvectors and eigenvalues are different, the term k·H_v_Bk/k·H_v_Ak can be made arbitrarily large by choosing k large enough. As a result, the sum diverges unless v_A=v_B and ∑_k∈^2(κ _A^2 σ _A^2/κ _B^2 σ _B^2(κ _B^2+4π^2k·H_v_Ak)^2/(κ _A^2+4π^2k·H_v_Ak)^2-1)^2 < ∞. The limit of the terms in the sum above as k→∞ is lim_k→∞κ _A^2 σ _A^2/κ _B^2 σ _B^2(κ _B^2+4π^2k·H_v_Ak)^2/(κ _A^2+4π^2k·H_v_Ak)^2-1= κ _A^2 σ _A^2/κ _B^2 σ _B^2-1. This limit is zero if and only if κ _A/κ _B=σ _A/σ _B. We deduce that the sum in (<ref>), and so also the sum in (<ref>), diverges unless κ _A/κ _B=σ _A/σ _B and v_A=v_B. 
The result follows by <Ref>. §.§ The L^2 distance One possible option is to consider the L^2 distance between u,u_0. An application of Fubini shows that given two Gaussian fields u_A∼(0,K_A),u_B∼(0,K_B) [u_A-u_B^2_L^2()]=Tr (K_A+K_B-2K_AB), where K_AB is the covariance between u_A,u_B and Tr is the trace. It is unclear what choice of K_AB would be the most appropriate. For example, choosing K_AB=0 would be too coarse a measure as it would not consider any of the non-stationarity that occurs off the diagonal of K_A, K_B (see Lemma <ref>). One can also choose K_AB as to minimize the L^2 norm while keeping (u_A,u_B) jointly Gaussian by taking K_AB=(√(K_B)K_A√(K_B))^1/2. This leads to the Wasserstein distance, which we discuss in the following subsection. §.§ Wasserstein distance between Gaussian measures Consider a bounded domain =[0, T]^d. Each solution to SPDE (<ref>) induces a measure on L^2(). As a result, one possible way to measure the distance between u and u_0 is to take the Wasserstein distance between the induced Gaussian measures. It is known that given two Gaussian measures M _A∼(m_A, K_A),μ _B∼(m_B, K_B) on a separable Hilbert space, the Wasserstein distance between them is given by <cit.> W_2(μ _A,μ _B)=m_A-m_B_L^2()+Tr(K_A+K_B-2(√(K_B)K_A√(K_B) )^1/2), where Tr is the trace. However, this approach poses some difficulties. Let us write K_A, K_B for the covariance operators of two stationary solutions u_A∼ M_A, u_B∼ M_B to (<ref>) with parameters (κ _A,v_A,σ _A),(κ _B,v_B,σ _B) respectively. By <Ref>, K_A, K_B diagonalize on the same basis, and using the expression of the spectral density in (<ref>), we obtain that W_2(M_A,M_B) = (√(K)-√(K_0))^2= ∑_j,k∈^d[√(K)-√(K_0)]_jk^2 =∑_j ∈^d(√(4 π)κ_A σ_A /κ_A ^2+ 4 π^2ξ_j^T H_v_Aξ_j-√(4 π)κ_B σ_B /κ_B ^2+ 4 π^2ξ_j^T H_v_Bξ_j)^2 . Making the domain go to infinity shows that, by definition of ξ_j our discrete sum becomes an integral with W_2(M _A,M_B)=vol() /π∫_^2(κ_A /κ_A ^2+ ξ^T H_v_Aξ-κ _B/κ _B^2+ ξ^T H_v_Bξ)^2ξ +O(1). The Wasserstein distance between μ _0,μ _1 scales as vol() times the Hellinger distance of the spectral densities of u_A,u_B. Thus, a reasonable option could be to define the scaling as the distance D_W_2 (M_A,M_B):= lim_T →∞W_2(M_A,M_B)/vol() =Hell(S_A,S_B). However, a calculation in this simplified case shows that this will be bounded. For example, in the case where, as in the base model, v_A=v_B=0, and σ_A = σ _B =1, we have the following. D_W_2 (u_A,u_B)=2(1-κ_Aκ_B (log(κ _A^2)-log(κ_B ^2))/κ _A^2-κ_B ^2). The above expression takes a maximum of 2. This makes putting an exponential distribution on the Wasserstein distance impossible, and as a result, the Wasserstein distance cannot be used to define PC priors for our model (<ref>). §.§ Sobolev distance between fields Given a stationary field u on with s-times differentiable covariance function r(x):= Cov[u(x,u(0))], we define the seminorm ·_s as the Sobolev seminorm of order s of r. That is, u_s:= ∇^s r_L^2()=2πξ^sS(ξ)_L^2( ):=∫_2πξ^2sS(ξ)^2 ξ^1/2, where is the Fourier domain of . That is, =^d if is a box and =^d if =^d. If we think of the L^2() norm of r as a measure of the total “random energy” present in u. Then, ·_s measures the oscillation size in this energy to order s. We now discuss which value of s is appropriate. In our case, as we saw in (<ref>), the solution u to (<ref>) with parameters κ, v, σ_u has spectral density S_κ, v, σ_u(ξ)= 4πκ ^2σ_u^2/(κ ^2+4π^2ξ·H_vξ )^2 . A change of variables ξ→κξ shows that, for dimension d=2, the behavior in κ is u_s∼κ ^s-1 . 
If we take s=0, then u_0 becomes infinite, whereas if we choose s=1, we obtain that u_1 is bounded in κ and thus cannot be exponentially distributed. This leads to the choice s=2 used in <Ref>. The seminorm u_s is different to the Sobolev seminorm of u as we have that, given a multi-index α∈^d, the derivative D^α u of a stationary field u exists as long as ∫_(2πξ)^2α S(ξ) ξ < ∞ . Thus, one possible choice of metric, corresponding to s=1 /2, could also be to set u_*:=∫_2πξ S(ξ) ξ . This distance is unbounded in κ and finite at the base model. A calculation gives a distance of a similar form to the one derived for the distance used in this paper in <Ref>. d_*(κ , v)^2 :=lim_κ_0,v→ 0 ∫_2πξ(S_κ, v, σ_u/σ_u^2-S_κ, v, σ_0/σ _0^2) ξ=2π E(1-2v)-v/2κ, where E is the complete elliptic integral E(m):=∫_0^π /2(1-msin^2(α ))^1/2α̣. This distance is unbounded in κ and v and 0 at the base model. As a result, using d_* instead of d could also provide a valid alternative. § DERIVATION OF PC PRIORS Let M_κ ,v,σ_u,M_κ _0,0,σ_0 be the stationary models given by (<ref>) with parameters (κ,v,σ_u) and (κ_0,0,σ_0) respectively. Then, d(κ ,v):=lim_κ_0 → 0D_2(M_κ ,v,σ_u,M_κ_0,0,σ_0)^2=π/3κ ^2 (3 cosh (2 | v| )+1) . For any (κ ,v,σ_u) the spectral density of u_κ ,v,σ_u is S_κ ,v,σ_u(ξ)=4πκ^2 σ _u^2/(κ ^2+4π^2ξ^T Hξ)^2 . As a result, in a distributional sense lim_κ _0 → 0S_κ_0, 0,σ_0/σ_0^2(ξ)= lim_κ _0 → 04πκ_0^2 /(κ_0 ^2+4π^2ξ^2 )^2=δ0 (ξ), where δ0 is the Dirac delta at 0. In consequence, expanding the square in (<ref>) and using the change of variable ξ→κξ/(2π), we obtain that lim_κ_0 → 0D_2(M_κ ,v,σ_u,M_κ_0,0,σ_0)^2=∫_^22πξ^4 S_κ ,v, σ _u(ξ)^2/σ _u^4ξ =4κ^2∫_^2ξ^4/(1+ξ^T H_vξ)^4ξ. Let α:= (v) and B_α /2 be a rotation by α /2 radians. Then B_α /2H_vB_α /2 ^-1= [ -v 0; 0 exv ] . Thus, by rotating and then changing to polar coordinates, we obtain lim_κ_0 → 0D_2(M_κ ,v,σ_u,M_κ_0,0,σ_0)^2=4κ^2 ∫_0^2π∫_0^∞r^5/(1+r^2 (| v| sin ^2(ϕ )+-| v| cos ^2(ϕ )))^4ṛϕ̣ =∫_0^2π4κ ^2/6 (vsin ^2(ϕ )+-vcos ^2(ϕ ))^3ϕ̣=π/3 (3 cosh (2 | v| )+1)κ ^2 . This concludes the proof. As we see from <Ref>, d(κ ,v) is independent of the argument α of v and depends only on κ and r:=v. For this reason, we consider α to be independent of κ,v and, since a priori, there is no preferred direction for the anisotropy, we set a uniform prior on α π_α (α)=1/2π1_[0,2π ](α) We will now define priors on κ,r following a sequential construction. To do so, we will use the fact that the distance to the base model ρ (κ,v ) decomposes as a product d(v,κ) =f(r)g(κ), f(r) :=(π/3 (3 cosh (2 | v| )+1) )^1/2, g(κ):=κ. Together with the following lemma, Let X,Y be two positive random variables such that Y|X has density f_Y|X(y,x)=λ x exp(-λ x y )1_[0,+∞)(y). Then, the product Z:=XY follows an exponential distribution with parameter λ. By the law of total expectation. the cumulative density function of Z verifies F_Z(z) =ℙ[XY≤ z]=[[1_XY≤ z|X]]=[∫_0^z/Xf_Y|X(y|X) dy] =∫_0^∞∫_0^z /x f_Y|X(y|x)dℙ_X(x), where ℙ_X is the push-forward of ℙ by X. Applying the fundamental theorem of calculus shows that the density of Z is F'_Z(z)=∫_0^∞1/xf_Y|X(z /x|x)dℙ_X(x)=λ-λ z∫_0^∞ dℙ_X(x)=λ-λ z. This concludes the proof. The above lemma states that if we want the distance d=f g to be exponentially distributed, it suffices to let f follow any distribution and have g conditional on f be exponentially distributed with parameter proportional to f. Since f takes a minimum of f(0)=√(4π/3 ), the conditional distribution of f given g cannot follow an exponential distribution. 
Therefore, we will instead select the conditional distribution of g given f to be exponential. While <Ref> allows complete freedom in choosing a distribution for f, the symmetry in the distance fg implies there is no intrinsic reason to prefer penalizing an increase in f either more or less than an increase in g. Consequently, we will also impose an exponential penalty on f by setting its marginal density to be equal to the following, π_f(f)= λ_v-λ_v (f- f(0))1_[f(0),∞). We can set priors in the stationary setting in the same way as in the previous section. §.§ Proof of <Ref> Let us write r=v, α =(v) . We have already determined the prior distributions π_f(f)= λ_v-λ_v(f- f(0))1_[f(0),∞)(f), π_κ |f(κ |f)=λ_θ f -λ_θ f κ1_[0,∞)(κ ) Since f:[0,∞) → [f(0), ∞) is bijective, we may apply a change variables to obtain that the marginal prior for r is π _r(r)=π_f(f(r))f'(r)= λ_v-λ_v( f(r)-f(0))f'(r) . Again, by the bijectivity of f, we obtain that π_κ|r=π_κ|f=f(r)=λ_θ f(r) -λ_θ f(r) κ1_[0,∞)(κ ). Since α is uniformly distributed on [0,2π ] independently of r,κ the joint prior for (κ,α,r ) is π_κ,α,r (κ,α,r )=1_[0,2π](α)/2ππ_r (r)π_κ|r (κ|r ) . Changing variables from polar to Cartesian coordinates gives π_κ,v(κ,v )=1/2πvπ_r (v )π_κ|r (κ|v) . Using equations (<ref>) and (<ref>) in (<ref>) and using the expression for f derived in (<ref>) concludes the proof. §.§ Proof of <Ref> The prior for f is defined in (<ref>). Since f is increasing, we obtain that the cumulative distribution function of r is for all r_0≥ 0 F_r(r_0)=ℙ(r≤ r_0)=ℙ(f(r)≤ f(r_0))=1--λ_θ (f(r_0)-f(0)). The theorem follows by a straightforward algebraic manipulation. §.§ Proof of <Ref> We begin by calculating the prior π_κ for κ. A calculation shows that for all κ >0 π_κ(κ ) =∫_π_f(f)π_κ |f(κ |f)df=∫_f(0)^∞λ_vλ_θ f exp(-λ_v(f- f(0))-λ_θ f κ )df = λ_θλ_v-f(0) κλ_θ(f(0) κλ_θ +f(0) λ_v+1)/(κλ_θ +λ_v)^2, whereas for κ <0 we have π_κ (κ )=0. A further calculation shows that the CDF of κ is F_κ (κ _0 )=∫_-∞^κ_0π_κ (κ) κ̣= 1-λ_v-f(0) λ_θκ _0/λ_θκ _0+λ_v1_[0,∞](κ _0). Now solving for λ_θ gives that for any κ _0>0 ℙ[κ >κ _0]=α⟺λ_θ = 1/f(0)κ _0 W_0(λ_v f(0)λ_vf(0)/α ) -λ_v/κ _0 This concludes the proof. § JOINT TRANSFORMATION In this section, we show how to write (κ,v) jointly as a function of a three-dimensional standard Gaussian Yd=(0, 𝐈_3). Our idea will be to generalize the method of inverse sampling. This is done by building an invertible generalization of the cumulative distribution function. Given a random variable X=(X_1,X_2) ∈^2 we define the generalized CDF of X as φ(X,x):=φ_X(x):=(ℙ[X_1≤ x_1],ℙ[X_2≤ x_2|X_1=x_1]). In the case where X_1, X_2 are independent we obtain that φ_X(x)=(F_X_1(x_1),F_X_2(x_2)). By construction φ_X:^2→ [0,1]^2. The definition can be extended without difficulty to dimensions larger than 2. However, the notation becomes more cumbersome. The motivation for the above definition is given by the following properties, which mimic that of the 1-dimensional CDF. Let X be a two-dimensional random variable, then the following hold * The distribution of X is determined by φ. That is if Y is a two dimensional random variable such that φ_X=φ_Y, then Xd=Y. * Consider a function ϕ:A→^2 defined on some subset A ⊂^2 of the form ϕ(x_1,x_2)=(ϕ_1(x_1),ϕ_2(x_1,x_2)), where ϕ_1 is invertible and ϕ_1,ϕ_2(x_1,·) are monotone functions. Then, φ(ϕ(X),ϕ(x))=φ(X,x), ∀x∈ A. * Let X be a ^2 valued random variable. Then φ_X has a generalized inverse and φ_X, φ_X^-1 are both of the form (<ref>). We prove the above points in order. 
* Suppose φ_X=φ_Y, then by definition of φ we have that for all x=(x_1,x_2) ∈^2 ℙ[X_1≤ x_1]=ℙ[Y_1≤ x_1], ℙ[X_2≤ x_2|X_1=x_1]=ℙ[Y_2≤ x_2|Y_1=x_1] . From the basic theory of random variables, the left-hand side of the above implies that X_1d=Y_1. As a result, we obtain that ℙ[X_2≤ x_2] =∫_ℙ[X_2 ≤ x_2|X_1=x_1]dℙ_X_1(x_1) =∫_ℙ[Y_2 ≤ x_2|Y_1=x_1]dℙ_Y_1(x_1)=ℙ[Y_2≤ x_2]. Since this holds for all x_2 ∈ we deduce as previously that X_2d= Y_2. Thus we obtain that Xd=Y, as desired. * To prove <ref> we work with φ componentwise. By the monotonicity of ϕ_1, it is clear that for the first component, φ_1(ϕ(X),ϕ(x)):=ℙ[ϕ_1(X_1)≤ϕ_1(x_1)]=ℙ[X_1≤ x_1]=φ_1(X,x). The second part is proved similarly. We have that φ_2(ϕ(X),ϕ(x)):=ℙ[ϕ_2(X_1,X_2)≤ϕ_2(x_1,x_2)|ϕ_1(X_1)=ϕ_1(x_1)] =ℙ[ϕ_2(X_1,X_2)≤ϕ_2(x_1,x_2)|X_1=x_1]=ℙ[ϕ_2(x_1,X_2)≤ϕ_2(x_1,x_2)|X_1=x_1] = ℙ[X_2≤ x_2|X_1=x_1]=φ_2(X,x), where in the second equality, we used that ϕ_1 is invertible, so the σ-algebra generated by X_1 is the same as that generated by ϕ(X_1) and ϕ(X_1)=ϕ(x_1) if and only if X_1=x_1. And in the fourth equality, we used that ϕ_2(x_1,· ) is monotone. * We first prove φ_X is invertible. Let us take p=(p_1,p_2) ∈ [0,1]^2. Then, the first component of φ_X is the univariate CDF F_X_1, and we can define x_1:=F_X_1^-1(p_1). Now, since φ_2(X,(x_1,· )) is increasing (as a function of the dot) for each x_1, it has a generalized inverse x_2:=φ_2(X,(x_1,· ))^-1(p_2). An algebraic verification now shows that φ_X has generalized inverse φ_X^-1(p)=(x_1,x_2). Furthermore, since φ_X is monotone in each component, so is φ^-1_X. This completes the proof. Given random variables X,Y valued in ^2 it holds that φ_X^-1∘φ_Y(Y)d=X By <ref> of <Ref>, it suffices to prove that the generalized CDFs of X and Y coincide. To this aim, we have that, by <ref> of the previous lemma φ(φ_X^-1∘φ_Y(Y),x)=φ(Y,φ_Y^-1∘φ_X(x))=φ_Y(φ_Y^-1(φ_X(x)))=φ_X(x). This concludes the proof. We now calculate the transformation which makes (κ,r) Gaussian. Let (v,κ) be distributed according to the PC priors π (v,κ ) and define r:=v. Then, given U∼(0,𝐈_2) it holds that (r,κ)d=( f^-1(f(0)-log(1-Φ(U_1))/λ_v ), (log(1-Φ(U_1)))/λ_v-f(0) )^-1log(1-Φ(U_2))/λ_θ ), where Φ is the CDF of a univariate standard Gaussian and r:=v. The CDF F_r of r was calculated in (<ref>). An inversion gives x_1:=F_r^-1(p_1) = f^-1(f(0)-log(1-p_1)/λ_v ) To calculate the CDF of κ |r, we work with the density in (<ref>). This gives φ_2((r ,κ),(x_1,x_2))= 1--λ_θ f(x_1)x_2. For each fixed x_1, the above has the inverse x_2:=φ_2((r,κ),(x_1,· ))^-1(p_2)=- log(1-p_2)/λ_θ f(x_1) This, combined with (<ref>), the preceding <Ref> (with Y= U ), and <Ref> (applied to U), completes the proof. §.§ Proof of <Ref> Let α∼_[0,2π ] be independent from r. Then, by the definition r= v, vd=(rcos(α ),rsin(α )), The proof then follows from Lemma <ref> and by observing that R(√(Y_1^2+Y_2^2) ),Φ(Y_3) are i.i.d. uniformly distributed on [0,1], and that Y_1/√(Y_1^2+Y_2^2)d=cos(α), Y_1/√(Y_1^2+Y_2^2)d=sin(α) . §.§ Proof of <Ref> The determinant of H_v(α) is (β+γ)γ. Setting this equal to 1 gives β=(1-γ^2)/γ. Next, we note that given any vector e, it holds that 𝐈=ee^T+e_⊥e^T_⊥. Substituting this into (<ref>) with e=v(α) H_v(α)=(γ+β)v(α)v(α)^T+γv(α)_⊥v(α)_⊥ ^T=1/γv(α)v(α)^T+γv(α)_⊥v(α)_⊥ ^T. Setting H_v=H_v(α), we obtain the condition 1/γv(α)v(α)^T+γv(α)_⊥v(α)^T_⊥=v/v^2 vv^T+-v/v^2 v_⊥v_⊥^T. Thus, equality holds in the conditions of the theorem. § SIMULATION STUDY DETAILS In this section, we expand on the simulation study of <Ref>. 
We will discuss the details of the simulation of the SPDE, compare the priors, and show some additional results. §.§ Sampling from the SPDE To simulate the Gaussian field resulting from (<ref>), we use the package. This package uses a FEM method to calculate to approximate the precision matrix of u. That is, we approximate u to be in the Hilbert space H spanned by our finite element basis ϕ_i_i=1^n and impose [ϕ_i, u, i∈{1,...,n}]= [ϕ_i, κ, i∈{1,…,n}] The condition that u∈ H leaves us with u(x)= ∑_i=1^n ϕ_i(x) u_i, where one can show that the vector of weights u=(u_1,…,u_n) is Gaussian. Let us write Q_u for its precision matrix, u∼(0,Q_u^-1). Then, from (<ref>), knowing Q_u determines u. The value of Q_u is determined in turn by (<ref>), as substituting in (<ref>) and applying integration by parts gives Lu∼(0,C_κ), where L=C_κ+G_H, [C_κ]_ij=ϕ_i,κ ^2ϕ_j_L^2( ), [G_H]_ij=∇ϕ_i, H∇ϕ_j_L^2( ) . From (<ref>) we deduce that the precision Q of the weight vector w is Q_u=(C_κ +G_H)^†C_κ^-1 (C_κ +G_H)=C_κ +2G_H+G_HC_κ ^-1G_H, where we used that since H is symmetric, and we are considering real-valued functions, C_κ,G_H are symmetric. The finite element basis is chosen so that C_κ, G_H are sparse. Since C_κ is not sparse, it is approximated by the mass lumped version [C_κ]_ij:=δ_ij∑_k=1^n [C_κ]_ik. After this approximation we obtain a sparse precision Q_u which allows us to sample efficiently from u (or rather, its approximation u∼(0,Q_u)) by using a Cholesky decomposition Q_u=LL^T and solving L^Tz=u where z∼(0,𝐈). It remains to discuss how the integrals defining C_κ,G_H are calculated. Let be the mesh of the domain. For each triangle T∈ we denote the average value of κ , H on the nodes of T as κ^2(T),H(T) and approximate the integrals in (<ref>) by [C_κ]_ij≈∑_T∈→κ ^2(T)∫_Tϕ_i ϕ_j dx, [G_H]_ij≈∑_T∈→ k,l=1,2[H(T)]_kl∫_T∂_kϕ_i ∂_lϕ_j dx. Finally, we take ϕ_i to be piecewise linear equal to 1 on node i and equal to 0 on every other node. An explicit formula for the integrals in (<ref>) can be found in <cit.>. §.§ Comparison of priors for the simulation study To better understand the four priors π_PC,π_EG,π_U,π_β and their corresponding posteriors, in <Ref> and <Ref>, we fix σ_u, σ_ε and plot each of the marginal prior and posterior densities mentioned above (up to a multiplicative constant) in log(κ) and v respectively, for a given observation y. As we can see, π_PC and π_EG give similar prior and (as a result) posterior densities, to the point where it is impossible to distinguish them in the plots. The uniform and beta priors and posteriors are similar to each other and differ mainly in the size of support. The beta priors and posteriors resemble a cutoff version of their uniform analogs. This indicates that choosing π_PC and π_EG as priors will give similar results. Likewise, π_U, π_β will also give similar results to each other. This is borne out in the results. §.§ Posterior sampling Returning to the simulation, our parameters are θ =(logκ,v, log(σ_u),log(σ_ε))∈ℝ^5. Bayes formula gives for any prior π(θ) on θ that π(θ |y) =π(θ , y)/π(y)π(u | θ ,y)/π(u | θ ,y)=π(θ ) π(u | θ ) π(y | u,θ)/π(y) π(u | θ ,y). We note that the right-hand side of (<ref>) involves u. However, the resulting expression is independent of the value of u chosen. As a result, the log posterior ℓ (θ |y) is up to an additive constant given by ℓ (θ |y) = ℓ (θ )+ℓ (u|θ )+ℓ(y|u,θ)- ℓ (u|θ ,y) The maximum of the right-hand side of (<ref>) is independent of u. 
However, in the case where ℓ (u|θ,y) were not known, the value of u would affect the result. If u|θ,y were approximated to be Gaussian, it would make the most sense to take u as the mode m_u of this Gaussian approximation. In our case, all terms in (<ref>) are known exactly. The log prior ℓ(θ) is determined by (<ref>) and the remaining terms are given by ℓ(u|θ ) =1/2(logQ_u-u-m_u_Q_u^2-nlog(2π)) ℓ(y|u,θ) =1/2(logQ_ε -y-Au_Q_ε^2-mlog(2π)) ℓ(u|θ ,y) =1/2(logQ_u|y,θ -u- m_u|y,θ_Q_u|y,θ^2-nlog(2π)), where we defined x_Q^2:= x^TQx and m is the dimension of y. The precision Q_u of u is calculated by FEM and is given by (<ref>) divided by 4πσ_u^2. The remaining precision matrices are Q_u|y,θ:= Q_u+ A^T Q_εA, m_u|y,θ:=Q_u | y,θ^-1(Q_u m_u+A^T Q_εy). Maximizing the expression in (<ref>) is made possible by the fact that the precision matrices are sparse. We obtain the MAP estimate θ:= max_θ∈^3ℓ (θ |y)=max_θ∈^3ℓ (θ |y). Using this, we do the following simulations. We fix as our domain the square [-1,1]^2 from which we sample uniformly and independently m=15 locations x_j_j=1^m and use a mesh with n=2062 degrees of freedom. Then, we do the following. For π_sim∈{π_PC, π_EG, π_U, π_β}: * For j= 1,...,N=200: * We simulate θ^(j)^true from π_sim, use the FEM to simulate u^(j) from (<ref>) and then simulate y^(j) from (<ref>). * For prior π_est∈{π_PC, π_EG, π_U, π_β} on θ: * We calculate a maximum a posteriori estimate θ^(j) using (<ref>). * We calculate θ^(j)^true_i -θ_i. * We calculate a Gaussian approximation to the posterior distribution. To do so, we write Z∼(θ^(j), Σ_θ|y), where the precision, covariance, and marginal standard deviations of Z are, respectively [M_θ^(j)|y^(j)]_ij=-∂^2ℓ(θ|y^(j))/∂θ_i∂θ_j, Σ_θ^(j)|y^(j)=M_θ^(j)|y^(j)^-1. We write π_Z(θ) for the density of Z. If the posterior distribution is Gaussian, then π_Z=π (θ|y^(j)). * We approximate the posterior measure using self-normalized importance sampling. This step is necessary as we only have access to the unnormalized ℓ (see (<ref>)). We do this step both with and without Pareto smoothing <cit.>. In both cases, using Z as our proposal distribution. Let {θ^(s)∼Z,s=1,..., S} be i.i.d. We denote the normalized, unnormalized and self-normalized importance ratios by r(θ)= π(θ|y^(j))/π_ Z(θ), r(θ)= exp(ℓ (θ))/π_ Z(θ), r(θ^(s)):= r (θ^(s))/∑_s=1^S r (θ^(s)). Since we do not have access to the normalizing constant of π (θ|y^(j)), it is the last two ratios that we use. Using r(θ^(s)) calculate the normalized smoothed weights w(θ^(s)) and approximate π (θ|y^(j)) ≈π_IS(θ|y^(j)):= ∑_s=1^S w(θ^(s)) δ_θ^(s)(θ) . In the case where Pareto smoothing is not used, we directly take w(θ^(s))=r(θ^(s)). The reason for using smoothing and no smoothing is to see if Pareto smoothing improves the approximation (for example, by comparing the frequency of times the true parameter is within the 0.95 confidence intervals). With this framework, the expect value of a function f(θ) can be approximated as 𝔼[f(θ)|y^(j)] ≈𝔼_IS[f(θ)|y^(j)]:= ∑_s=1^S w(θ^(s)) f(θ^(s)). Additionally, we will be interested in the Kullback-Leibler divergence between the true posterior and the Gaussian approximation to the posterior. 
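A minimal sketch of the self-normalized step is given below, with the log-densities passed in as arrays (the Pareto smoothing of the raw ratios is applied to the same quantities and is not shown); the Kullback-Leibler computation described next reuses these weights with f taken to be the log importance ratio.

```python
import numpy as np

def snis_weights(log_target_unnorm, log_proposal):
    # unnormalized log ratios log r_tilde(theta^(s)); shift the maximum to 0 for stability
    log_r = log_target_unnorm - log_proposal
    r = np.exp(log_r - log_r.max())
    return r / r.sum()                    # self-normalized weights w(theta^(s))

def snis_expectation(f_vals, weights):
    # E[f(theta) | y] ~= sum_s w_s f(theta^(s))
    return np.sum(weights * f_vals)
```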
Here, f(θ)=log(r(θ|y^(j))) is known up to a normalizing constant, but we can approximate it using the self-normalized importance ratios as follows KLD(π(·|y^(j))||π_Z ) = ∫_^5 π(θ|y^(j)) log(r (θ))θ∼∑_s=1^S r(θ^(s)) log(Sr (θ^(s))) ∼∑_s=1^S w(θ^(s)) log(Sw (θ^(s))), where the first equality is the definition of the Kullback-Leibler divergence and the first and second approximations are the self-normalized importance sampling step (r ∼ S r ) and smoothing step, respectively. This can be seen to converge to the true Kullback-Leibler divergence as S→∞ by using that r=_Zr r and the central limit theorem. * Using π_IS (θ|y^(j)) we approximate the mean, covariance, and KL divergence and build a 0.95 credible interval I for θ. We also check whether the true parameter θ^(j)^true is in I and compare these results against the Gaussian approximation to the posterior in (<ref>). * We calculate a_i=ℙ_est[θ _i≤θ_i^(j)^true|y^(j)] for i=1,…,5 both in the case where θ is sampled from π_IS (θ|y^(j)) and from the Gaussian approximation to the posterior using prior π_est. We then plot the empirical cumulative distribution function of a_i and compare it to the uniform distribution. If the implementation is well calibrated, then a_i∼ U(0,1). See <cit.>. The above was tried using the Gaussian distribution Z to replace the posterior and not using importance sampling. The method using no smoothing in the importance sampling was also tested. But both the methods gave worse results than Pareto smoothed importance sampling and we only display this approximation in the results. Using <Ref>, we can check whether the posterior distribution for θ is accurate, and using <Ref>, we check whether the implementation is well-calibrated. Since the uniform priors give support to extreme values of κ,v, sampling the true parameter θ^true from the uniform prior often leads to singularities in the calculation of the importance samples. These cases made up a high percentage of the simulations (in one case, 90%), for which reason the improper uniform priors were not used to sample from θ^true in the simulations below. Rather, the width of the beta priors was also reduced when simulating these parameters to avoid singularities. The results for the MAP estimate and the CI lengths were already presented for the anisotropy parameters log(κ),v in <Ref> <Ref> and <Ref>. Here, we begin by presenting these results for the remaining parameters log(σ_u),log(σ_ε). In <Ref>, we show the frequency of the true parameter being in the 0.95 credible interval. As can be seen in the graphs, the performance depends on the distribution from which θ^true is sampled. If θ^true is sampled from π_PC or π_EG then these two priors perform better, whereas otherwise π_U,π_β perform better. We also observe that the confidence intervals for σ_ε are not wide enough. This may be due to the importance sampling. The results for the remaining parameters log(σ_u),log(σ_ε) are more similar, which is explained by the fact that the same priors for these parameters are used in all cases. This is a common theme in the results, and we will not comment on it further. In <Ref>, we show the eCDF of probabilities a_i=ℙ[θ_i≤θ_i^true|y] for each of the five parameters. If the implementation is well calibrated, then a_i should be uniform when π_sim=π. Once more, the performance divides into two cases. If θ^true is sampled from π_PC or π_EG then these two priors perform better, whereas otherwise π_U,π_β perform better. 
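The calibration quantities a_i and the uniformity check discussed in the next paragraph amount to a few lines. The sketch below assumes that, for each replicate, an array of (weighted) posterior draws is available; the array names are ours.

```python
import numpy as np
from scipy.stats import kstest

def calibration_probs(theta_true, theta_samples, weights):
    # a_i = P[theta_i <= theta_i^true | y], estimated from weighted posterior draws;
    # theta_samples has shape (S, d), weights has shape (S,) and sums to one
    return np.array([np.sum(weights * (theta_samples[:, i] <= theta_true[i]))
                     for i in range(theta_samples.shape[1])])

# stacking the a_i of the N replicates row-wise into a_all (shape (N, d)),
# each column should be U(0, 1) under correct calibration:
# pvals = [kstest(a_all[:, i], "uniform").pvalue for i in range(a_all.shape[1])]
```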
In <Ref>, we show the p-values for the KS test between a_i=ℙ_est[θ_i≤θ_i^true|y] and the CDF of the uniform distribution. In <Ref>, we show the empirical difference against F(a_i) together with a 95% confidence interval. As can be seen from the plots, the implementation is well-calibrated. In <Ref>, we show the eCDF of the Kullback-Leibler divergence between the posterior and the Gaussian approximation to the posterior around the median. As can be seen from the plots, in all cases π_PC(· |y) and π_EG(· |y) are closer to there respective Gaussian approximations than π_U(· |y) and π_β(· |y). This is because the posterior using π_U,π_β are very much not Gaussian. To better compare PC and exponential-Gaussian priors, we also plot these separately. As can be seen, once more, they behave quite similarly, showing that both formulations lead to a practically identical penalization of complexity. § PRECIPITATION STUDY DETAILS §.§ Calculations In this section, we show how to calculate scores of <Ref>. For what follows, we first define the leave out one mean of y_i having observed all locations except y_i m_y_i|y_-i := ∫_^5( ∫_ y_i π (y_i|y_-i,θ)ỵ_i) π(θ| y) θ = ∫_^5 m_y_i|y_-i,θπ (θ| y) θ. The expression of m_y_i|y_-i is approximately that of y_i|y_-i, with the exception that the integral in (<ref>) is against π(θ| y) as opposed to π(θ| y_-i). This is a common approach to calculating the LOO predictions as it avoids the need to approximate π (θ| y_-i) for each i. The expression is justified by the fact that the posterior distribution of θ does not depend much on any single observation. That is, π (θ| y)≈π (θ| y_-i). Similarly, we will need the posterior leave out one variance σ^2_y_i|y_-i := ∫_^5σ^2_y_i|y_-i,θπ (θ| y) θ, where σ^2_y_i|y_-i,θ is the posterior variance of y_i having observed all locations except y_i. This will be used later in the computations. Because of the independence of u, β from θ we have that . u_β| θ ∼(0,Q_u_β^-1), where Q_u_β:=[ Q_β 0_2 × n; ; 0_n × 2 Q_u ]. We can then calculate θ to maximize the (unnormalized) log-posterior as in (<ref>) ℓ (θ |y) = ℓ (θ )+ℓ (u_β |θ )+ℓ(y|u_β ,θ )- ℓ (u_β |θ ,y). To calculate the RMSE, CRPS, and DSS, it is sufficient to calculate the posterior mean and variance of y_i given y_-i and θ. We show how to do this efficiently. Due to the independence of u and ε and of ε _i and ε_-i knowing θ, we deduce that y_-i = A_-iu+ ε_-i and ε _i are independent knowing θ. As a result, y_i | y_-i,θ = (A_i u) | y_-i,θ + ε _i | θ∼ (A_i m_u|y_-i,θ, A_i Σ_u|y_-i,θA_i^T + σ_ε^2) = (m_y_i|y_-i,θ, σ_y_i|y_-i,θ^2), where A_i ∈^1 × N is the i-th row of A. Furthermore, we have that, see (<ref>) Q_u|y_-i, θ = Q_u + σ_ε ^-2A_-i^TA_-i= Q_u|y, θ - σ_ε ^-2A_i^T A_i m_u|y_-i,θ =σ_ε^-2Σ_u | y_-i,θA_-i^T y_-i, where we wrote A_-i∈^n-1 × N for the matrix A with the i-th row removed. To avoid calculating an inverse for each i, we can calculate Σ_u| y, θ once and then use the rank 1 correction provided by the Sherman-Morrison formula to obtain Σ_u|y_-i,θ = Σ_u|y,θ + Σ_u|y,θA_i^TA_iΣ_u|y,θ/σ_ε^2 -A_iΣ_u|y,θA_i^T. Using (<ref>) in (<ref>) and writing V_i :=A_i Σ_u|y,θA_i^T and q_ε:= σ_ε^-2, gives σ_y_i|y_-i,θ^2 = V_i + V_i^2/σ_ε^2 - V_i + σ_ε^2 = V_i/1-q_εV_i +σ_ε^2 = σ_ε^2 /1-q_εV_i. 
Furthermore, for the mean, we have from (<ref>) that m_y_i|y_-i,θ =q_εA_i Σ_u | y_-i,θA_-i^T y_-i= q_εA_i Σ_u | y_-i,θA^T y - q_εA_i Σ_u | y_-i,θA_i^T y_i Let us write η_i := A_i u| y,θ= A_i m_u|y,θ= q_εA_i Σ_u|y,θA^T y, Then, using the rank 1 correction in (<ref>) we obtain for the first term in (<ref>) that q_εA_i Σ_u | y_-i,θA^T y = q_εA_i Σ_u | y,θA^T y +q_εA_iΣ_u|y,θA_i^TA_iΣ_u|y,θ/σ_ε^2 -A_iΣ_u|y,θA_i^TA^Ty = η_i + V_i/σ_ε^2 -V_iη_i= η_i /1-q_εV_i. Whereas, for the second term in (<ref>), we have that q_εA_i Σ_u | y_-i,θA_i^T y_i = q_εA_i Σ_u | y,θA_i^T y_i +q_εA_iΣ_u|y,θA_i^TA_iΣ_u|y,θ/σ_ε^2 -A_iΣ_u|y,θA_i^TA_i^Ty_i = q_ε V_i y_i + q_εV_i^2/σ_ε^2 - V_iy_i = q_ε V_i y_i(1+ q_εV_i/1-q_εV_i)= q_ε V_i y_i/1-q_εV_i Using (<ref>) and (<ref>) in (<ref>) we obtain m_y_i|y_-i,θ = η_i /1-q_εV_i - q_ε V_i y_i/1-q_εV_i = η_i - q_ε V_i y_i/1-q_εV_i= y_i + η_i-y_i/1- q_εV_i. The term V_i can be calculated efficiently using a Takahashi recursion on the Cholesky factor of the posterior precision Q_u|y,θ without the need to calculate a dense matrix inverse, as implemented by . To calculate η_i, we use a matrix-vector solve. Next, since the expression of π (θ| y_i) is known up to a normalizing constant, we can use importance sampling and Riemann integration together with (<ref>), (<ref>) to calculate (<ref>). To estimate the CRPS, we use importance sampling and the expression for the posterior predictive distribution (<ref>). We have F_i(t) := ∫_^5(∫_-∞^t π(y_i | y_-i, θ)ỵ_i)p(θ| y)θ≈∑_j=1^J Φ(t-m_y_i|y_-i,θ_j/σ_y_i|y_-i,θ_j ) w_j, where Φ is the cumulative distribution function of a standard Gaussian variable, and θ_j,w_j are the importance samples and smoothed self-normalized weights. Using this expression and the fact that there is an exact expression for the CRPS of a Gaussian mixture <cit.>, we obtain the CRPS. §.§ Spatial analysis of the scores We hypothesize that perhaps some of the models perform better than others in different spatial areas. To confirm this hypothesis, we begin by plotting the difference in the scores between the isotropic and anisotropic PC models. We plot the difference of the scores for the RMSE, CRPS, and DSS in <Ref>. We also do the same for the difference in the scores between the isotropic and anisotropic EG models in <Ref>. The results are very similar. We observe that the anisotropic models perform better towards the western coast and worse in the remaining region. This is to be expected as the correlation structure is different in the region with high elevation than the lower elevation parts of the domain and indicates that a non-stationary model might be more appropriate as the anisotropy varies in space. §.§ Precipitation simulation study In this section, we repeat the structure of the simulations in <Ref>, but now, to investigate how the quality of the prediction changes with increased data, we use synthetic data. To do so, we simulate precipitation data on 4000 uniformly distributed locations in Norway from the model in (<ref>) with parameters ρ = 201, v = (-0.45,0.03), σ_u =0.63, σ_ε = 0.14, β_0 = 0.96, β_1 =0.67 ρ = 132, v = (0.00,0.00), σ_u =0.65, σ_ε = 0.13, β_0 = 0.93, β_1 =0.66. These correspond to data generated from anisotropic and isotropic models, respectively, using the MAP obtained for the precipitation data in <Ref> using anisotropic and isotropic PC priors, respectively. The data is shown in <Ref>. We then fit the model to the data and calculate the RMSE, CRPS, and DSS scores. 
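For reference, the leave-one-out predictive moments on which the RMSE, CRPS, and DSS all rely reduce, after the algebra of the Calculations subsection, to two vectorized lines; the names below are illustrative.

```python
import numpy as np

def loo_predictive_moments(eta, V, y, sigma_eps):
    """m_{y_i|y_-i,theta} and sigma^2_{y_i|y_-i,theta} from full-data quantities,
    where eta[i] = A_i m_{u|y,theta} and V[i] = A_i Sigma_{u|y,theta} A_i^T."""
    denom = 1.0 - V / sigma_eps**2        # 1 - q_eps * V_i
    mean = y + (eta - y) / denom
    var = sigma_eps**2 / denom
    return mean, var
```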
We repeat this process for n_y = 25, 50, 100, 150, 200, 400, 600, 800, 1000 observations sampled uniformly from the simulated data, 100 times for each n_y and each prior. The results are shown in <Ref>. As can be seen from the plots, the anisotropic model performs better than the isotropic model when there are few observations. However, as the number of observations increases, the scores become almost equal. The difference between the anisotropic PC and EG models is less pronounced than in the previous section. Finally, we also plot the interval score for the parameters (log(κ), v_1, v_2, log(σ_u)). Given a credible interval (L_F, U_F) with confidence level α, the interval score is defined by S_INT(F, y) = U_F - L_F + (2/α)(L_F - y) 𝕀(y < L_F) + (2/α)(y - U_F) 𝕀(y > U_F). The interval score is a proper scoring rule consistent for equal-tail error probability intervals: S(F, G) is minimized by the narrowest prediction interval that has expected coverage 1-α. The results are shown in <Ref>. As can be seen from the plots, the interval score for the parameters (log(κ), σ_u) is generally better for the anisotropic model. This is especially pronounced for log(κ) when there is less data. The interval score for v_1 and v_2 is not shown for the isotropic model, as these parameters are not estimated in the isotropic model.
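As a final sketch for this section, the interval score can be coded directly from the definition above (vectorized over replicates or parameters):

```python
import numpy as np

def interval_score(lower, upper, y, alpha=0.05):
    # S_INT(F, y) for an equal-tail (1 - alpha) credible interval (L_F, U_F)
    return ((upper - lower)
            + (2.0 / alpha) * (lower - y) * (y < lower)
            + (2.0 / alpha) * (y - upper) * (y > upper))
```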
http://arxiv.org/abs/2409.03679v1
20240905163147
Polar Neptunes are Stable to Tides
[ "Emma Louden", "Sarah Millholland" ]
astro-ph.EP
[ "astro-ph.EP" ]
0000-0003-3179-5320]Emma M. Louden Department of Astronomy, Yale University, New Haven, CT 06511, USA emma.louden@yale.edu 0000-0003-3130-2282]Sarah C. Millholland Department of Physics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA MIT Kavli Institute for Astrophysics and Space Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA § ABSTRACT There is an intriguing and growing population of Neptune-sized planets with stellar obliquities near ∼90^∘. One previously proposed formation pathway is a disk-driven resonance, which can take place at the end stages of planet formation in a system containing an inner Neptune, outer cold Jupiter, and protoplanetary disk. This mechanism occurs within the first ∼10 Myr, but most of the polar Neptunes we see today are ∼Gyrs old. Up until now, there has not been an extensive analysis of whether the polar orbits are stable over ∼Gyr timescales. Tidal realignment mechanisms are known to operate in other systems, and if they are active here, this would cause theoretical tension with a primordial misalignment story. In this paper, we explore the effects of tidal evolution on the disk-driven resonance theory. We use both N-body and secular simulations to study tidal effects on both the initial resonant encounter and long-term evolution. We find that the polar orbits are remarkably stable on ∼Gyr timescales. Inclination damping does not occur for the polar cases, although we do identify sub-polar cases where it is important. We consider two case study polar Neptunes, WASP-107 b and HAT-P-11 b, and study them in the context of this theory, finding consistency with present-day properties if their tidal quality factors are Q ≳ 10^4 and Q ≳ 10^5, respectively. § INTRODUCTION One of the many unexpected discoveries within exoplanetary science is that a planet's orbital angular momentum vector need not be aligned with its host star's spin vector. The angle between these two vectors is called the “stellar obliquity” or stellar spin-orbit misalignment. The solar obliquity is 7^∘ when measured using the mean plane of the planets <cit.>. This relatively small value is consistent with expectations from planet formation. However, for exoplanets, a broad range of stellar obliquities have been observed, and they exhibit various trends with stellar and planetary properties. (See for a review.) For instance, hot Jupiters orbiting hot stars have high obliquities <cit.>, as do super-Earths and sub-Neptunes orbiting hot stars <cit.>. The so-called “polar planets” are a class of systems with ∼90^∘ stellar obliquities. <cit.> computed the 3-dimensional stellar obliquity (Ψ) for 57 systems by combining constraints on stellar inclination angles with Rossiter-McLaughlin measurements of sky-projected stellar obliquities (λ). The findings revealed a significant number of systems with nearly perpendicular orbits (Ψ≈ 80^∘ - 120^∘), indicating a statistical preference for polar orbits over a full range of obliquities when using frequentist tests. Recent works using Bayesian methods have conflicting evidence for an overabundance of perpendicular orbits. <cit.> do not find strong evidence for the polar planets whereas <cit.> finds evidence that such a peak does exist when restricting attention to the sample of sub-Saturn planets and hot Jupiters orbiting F stars. Firm conclusions on the statistical nature of the polar planet population await a larger sample size. Regardless, the known polar planet systems are still dynamically puzzling. 
About half of the polar planets are warm Neptune or super-Neptune-sized planets. These polar Neptunes share a variety of distinguishing characteristics: moderately eccentric (∼0.05 < e ≲ 0.3) and polar orbits, puffy atmospheres that show evidence for mass loss in several cases, and evidence for exterior massive companion planets in some cases (see Figure <ref>). Examples include WASP-107 b <cit.>, HAT-P-11 b <cit.>, HD 3167 c <cit.>, and K2-290 b <cit.>. Moreover, most polar Neptunes reside in or near the hot Neptune desert <cit.>, a sparsely-populated region of period-radius space that is thought to be sculpted by photoevaporation <cit.> and potentially high-eccentricity migration <cit.>. The mechanism(s) responsible for the unusual orbits of the polar Neptunes should also explain these other features. However, there is not yet consensus as to the origins of the polar Neptunes. Several theoretical scenarios have been proposed to explain 3D obliquities near Ψ≈ 90^∘. <cit.> showed that tidal dissipation within the host star can strand a system at a 90^∘ obliquity. This mechanism works through inertial wave dissipation in stars with convective zones, but it probably doesn't explain all perpendicular systems. In addition, primordial magnetic interactions between the protostar and the protoplanetary disk can torque the disk to high inclinations, with the planets then forming in these misaligned states <cit.>. Similarly, <cit.> recently proposed that the central star in a polar planet system may have started as tight binary system that subsequently merged, and the polar planet formed in a protoplanetary disk that was perpendicular to the original binary orbital plane. As for another possibility, von Zeipel-Kozai-Lidov cycles and planet-planet scattering have long been invoked to explain spin-orbit misalignments <cit.>, but they have gained extra attention recently in the context of the polar planets <cit.>. Separate from these ideas, another compelling theory for generating ∼90^∘ stellar obliquities is disk-driven resonance, which was proposed by <cit.> (hereafter ). This mechanism is envisioned to operate within the first ∼ 10 Myr in a system containing an inner Neptune-sized planet, outer giant planet, and a transition disk. As the disk dissipates, a secular inclination resonance is encountered between the inner Neptune and distant giant planet, which tilts the Neptune's orbit up to 90^∘. This theory accounts for the high obliquities, the non-zero eccentricities, and the Jovian companions with large mutual inclinations sometimes found in the polar Neptune systems. Its ability to explain all of these features at once makes disk-driven resonance particularly compelling. This theory is our primary focus in this paper. A key question is whether disk-driven resonance creates polar orbits that are long-term stable. Disk-driven resonance occurs at the very early epochs of formation, but the polar Neptunes we see today are mostly billions of years old. Within those ∼Gyrs of time, the orbits can be modulated by effects like tidal and stellar evolution, and these effects can potentially be significant. For example, tidally-driven orbital realignment is thought to operate for hot Jupiters orbiting cool stars, explaining the differences in the hot star and cool star populations <cit.>. There can also be more subtle effects, like inclination damping driven by interactions between the planet's spin and orbit <cit.>. 
There has not yet been a thorough investigation of tidal effects in the context of the disk-driven resonance mechanism. It is crucial to determine whether there is any impact on the mechanism's viability and the polar planet's long-term stability. In this paper, we explore the degree to which the disk-driven resonance theory for polar Neptunes is influenced by tidal evolution. We investigate both short-term and long-term effects on the orbital and spin evolution. The organization of the paper is as follows. Section <ref> describes the disk-driven resonance mechanism and the properties of a system that affect its outcome. Section <ref> focuses on the system's early evolution while the disk is still present. We explore how tides impact the initial excitation into a polar orbit. Next, in Section <ref>, we study the long-term evolution of the system. We specifically explore whether tidal effects may erase the initial polar orbit. Finally, we discuss the results in Section <ref> and explore two case studies of observed near-polar Neptunes, HAT-P-11 b and WASP-107 b. § FORMATION OF SLIGHTLY ECCENTRIC POLAR NEPTUNES THROUGH DISK-DRIVEN RESONANCE introduced the disk-driven resonance theory to explain polar Neptunes. Here a close-in (≲ 0.1 AU) Neptune-sized planet interacts with both an exterior (∼1-5 AU) giant planet and an outer, slowly-depleting protoplanetary disk of gas and dust with the inner ∼ 5 AU evacuated. This is motivated by the existence of observed transition disks with inner regions depleted of gas and dust <cit.>. Initially, perturbations to the outer planet's orbit are dominated by the disk, while the inner planet's orbit is perturbed primarily by the outer planet and the star's quadrupolar gravitational field (oblateness). The inner planet is assumed to have zero eccentricity at the start (see Figure <ref>). The dynamical transition to a polar orbit begins with the establishment of secular inclination resonance. The disk induces nodal precession of the outer planet's orbit, Ω̇_out, at a rate proportional to the disk mass, which we assume decreases smoothly and slowly. The outer planet, in turn, induces nodal precession of the inner planet's orbit, Ω̇_in. Initially, |Ω̇_out| > |Ω̇_in|. As the transition disk dissipates, |Ω̇_out| decreases until the precession rates of the two planets match and a scanning secular resonance <cit.> is crossed. The inclination of the inner planet is forced to increase because it stays in resonance by maintaining the same nodal precession frequency as the outer planet, and the precession frequency slows as the inclination rises. The inner planet's inclination growth stops at a critical inclination I_crit, which, under certain conditions, is polar. This is a generalization of the critical inclination in the von Zeipel-Kozai-Lidov mechanism <cit.>. That is, as soon as the critical inclination is reached, the inner planet undergoes eccentricity excitation that detunes the secular resonance and halts the inclination growth. If I_crit is near ∼90^∘ or undefined, then the planet can reach a polar orbit before the resonance is detuned. The critical inclination is dictated by the relative perturbations on the inner planet from the outer planet, general relativity (GR), and stellar oblateness. In the absence of GR and stellar oblateness, the critical inclination would be equal to 39.2^∘. 
However, the short-range forces suppress eccentricity excitation due to the periapse precession they induce <cit.>, and thus a larger inclination is required to induce eccentricity excitation. The critical inclination can be calculated using the dimensionless quantities η_GR and η_⋆. Here η_GR measures the strength of GR perturbations relative to the planet-planet interactions, and η_⋆ similarly measures the strength of the stellar quadrupole perturbations relative to the planet-planet interactions. They are defined as η_⋆ = 2 J_2 M_⋆/m_2R_⋆^2 a_2^3/a_2^5(1-e_2^2)^3/2 η_GR = 8 G M_⋆/c^2a_2^3/a_1^4M_⋆/m_2(1-e_2^2)^3/2, where M_⋆ and R_⋆ are the stellar mass and radius, c is the speed of light, a_1 is the semi-major axis of the inner Neptune, a_2 and e_2 are the semi-major axis and eccentricity of the outer cold Jupiter, and m_2 is its mass. In the equation for η_⋆, J_2 is the coefficient of the quadrupole moment of the star's gravitational field, and it depends on the stellar rotation period P_⋆, the Love number k_2⋆ and R_⋆ as <cit.> J_2 ≈k_2⋆/34π^2/P_⋆^2R_⋆^3/G M_⋆. In terms of η_⋆ and η_GR, the critical inclination is given by I_crit = sin^-1(4 + 4 + / 10 + 5 )^1/2. Figure <ref> shows how the combined values of η_⋆ and η_GR determine the critical inclination. For ≥ 6 +, the argument of the arcsine is greater than 1 and thus all inclinations are stable, meaning the planet would reach a polar orbit. We also note that the behavior in the parameter space in Figure <ref> changes at I_crit = 63.4^∘, since apsidal precession from stellar oblateness is prograde for inclinations < 63.4^∘ and retrograde for > 63.4^∘. To summarize, the inner planet can reach a polar and possibly eccentric orbit, but the outcome is dependent upon a “tug-of-war” between the effects of the stellar oblateness, the GR effects, and the cold Jupiter. This competition is summarized succinctly in the quantities η_⋆ and η_GR, which are most sensitively dependent on six key parameters of the system: mass and radius of the star (M_⋆, R_⋆), rotation period of the star (P_⋆), mass and semi-major axis of the cold Jupiter (m_2, a_2), and semi-major axis of the inner Neptune (a_1). § EARLY EVOLUTION WITH TIDES The disk-driven resonance theory could be affected by tidal evolution at multiple epochs in time. First, there is the early ≲10 Myr period during the disk phase when the resonance is encountered and the inclination (and sometimes eccentricity) excitation occurs. Second, after the polar orbit is established, there is the long ∼ Gyrs period during which orbital realignment might occur. We will investigate these phases separately, starting with the early evolution. Here we explore how the initial resonant encounter is impacted by tides. The initial resonance excitation involves a complicated set of interactions between the two planets, disk, and oblate star. We opt to perform direct N-body simulations to fully capture these interactions and the corresponding tidal and spin dynamics. Since such simulations are expensive, we only investigate a representative case study rather than assess the full parameter space. We use a numerical integrator that evolves the system using instantaneous accelerations in the framework of <cit.>. This code self-consistently accounts for the accelerations from (1) standard Newtonian gravity, (2) the oblateness of the host star, (3) constant-time lag tides, and (4) the outer protoplanetary disk. It evolves the planetary orbits and spins simultaneously. 
The code was originally developed and described in <cit.>, and further details can be found therein. The disk effects were added in <cit.>, but the disk was taken to be infinite. Here we model the disk as a finite transition disk between R_in and R_out with surface density Σ = Σ_0 (r/R_out)^-α. The gravitational potential is given by <cit.> as Φ = --α + 2/1 - η^-α + 2G M_disk/R_out[ 1 - η^1 - α/1 - α. . + -1 + η^-1 - α/1+αr_p^2/2 R_out^2(-1 + 3/2sin^2θ_p) ], where η = R_in/R_out, M_disk is the mass of the disk, r_p is the radial position of the planet, and θ_p is the polar angle of the planet. The disk mass decreases over time according to M_disk = M_disk,0/[1 + t/t_disk)]^3/2. We run two simulations of a representative synthetic system with all being equal except one simulation includes tides and the other does not. The parameters of the system are the same as the simulation in the rightmost panel of Figure 2 of , so as to aid comparison to their results. This case corresponds to a resonant encounter that yields a nearly polar and eccentric planet. The system consists of a Solar-mass star, a Neptune-mass and Neptune-radius planet at 0.07 AU with an initial eccentricity of 0.01 and inclination of 1^∘, a 4 M_Jup-mass planet at 2 AU in a circular orbit with an inclination of 5^∘, and a transition disk with edges R_in = 3 AU, R_out = 30 AU and an initial mass of 50 M_Jup. The power law exponent of the disk surface density is set to α = 1, and the disk decay timescale is set to t_disk=1 Myr. The star has a radius of 1.3 R_⊙, rotation period of 7 days, and Love number of k_2⋆ = 0.2. For the simulation of the planet that includes tides, we set the Love number of the planet to k_2p = 0.4 and tidal quality factor equal to Q = 10^5. Figure <ref> shows the results of the simulations. We show the inclination and eccentricity evolution of the inner Neptune-mass planet. Focusing first on the case without tides, we see that this set of parameters corresponds to a case in which the inner Neptune encounters the inclination resonance and tilts to I_crit∼ 70^∘, at which point the eccentricity grows rapidly and the resonance is detuned. The inclination growth is then stalled just above 70^∘. The behavior is the very similar to that seen in , which simultaneously validates our code and stands as an encouraging sign since we are using a different numerical method (N-body, whereas used secular simulations). As for the simulation with tides turned on, the results are pretty similar overall. The inner planet's inclination evolution is nearly identical to the case without tides; it increases to the same near-polar critical inclination with a small lag. The eccentricity also increases when the critical inclination is reached, but the eccentricity growth is slightly delayed and muted by the tidal damping. We expect that further damping would be observed if the simulation was run for longer. For completeness, we note that we explored additional simulations with different system parameters (particularly the tidal quality factor of the inner planet) and found more complicated eccentricity behavior in some simulations. In these cases, the eccentricity decreased rapidly after its initial excitation and then stabilized at a lower value around e∼0.1. Exploring these dynamics across a broader parameter space is beyond the scope of this work. 
However, we can at least take away from this case study that tidal evolution does not significantly affect the inclination evolution but it can impact the eccentricity excitation following the resonant encounter. A more exhaustive parameter exploration could be considered in future work. To round out our analysis of the early evolution, we also explore the behavior of the inner Neptune's spin vector during the disk-driven resonance encounter. Tidal dissipation in a planet is driven not only by its eccentricity but also its “planetary obliquity” (the angle between the planet's spin vector and the orbit normal vector) <cit.>. Obliquity tides are theorized to drive orbital evolution and atmospheric inflation in a variety of exoplanetary systems <cit.>. It is thus relevant to investigate whether we expect an obliquity enhancement in the inner Neptune, which could affect the subsequent orbital evolution. Planetary obliquities can be excited by secular spin-orbit resonances, which occur when the planet's spin-axis precession frequency becomes commensurable with one of its orbit nodal precession frequencies. In Appendix <ref>, we investigate this scenario in detail. We find that the parameter space in which the inner Neptune would obtain a high obliquity through a secular spin-orbit resonance does not overlap with the parameter space for it to obtain a polar orbit through disk-driven resonance. Therefore, we do not anticipate that the inner Neptune would experience significant obliquity tides, at least not due to obliquity excitation from a simultaneous resonance crossing. § LONG-TERM SECULAR EVOLUTION WITH TIDES With a clear picture of the impact of tides on the short-term initial phase of resonance capture, we now turn to the subsequent long-term ∼Gyrs period. The observed polar Neptunes are billions of years old and have been subject to tidal forces throughout their lifetime. Here we explore a wide range of parameters and model the long-term evolution to test if the tides cause a meaningful change like destabilization of the high inclination. Such an outcome would cast doubt on the theory that the polar orbits were established early on. §.§ Simulation set-up The N-body integrator we used in the last section is too slow to run over the long timescales of interest here. Instead, we perform secular (orbit-averaged) simulations using [<https://github.com/djmunoz/kozaipy>]. is a package for integrating the secular equations of motion of few-body, hierarchical systems including tidal friction. The equations are the same as in <cit.> and originally <cit.>. The code evolves the orbits and spins simultaneously and accounts for mutual perturbations due to the Newtonian gravitational interactions, quadrupolar distortion of the star due to rotation and tides, tidal friction in the constant-time lag approximation, and general relativity. We use a Monte Carlo approach to create a suite of simulation outcomes for different system configurations. As discussed in Section <ref>, the orbital configurations resulting from the resonance interactions are dictated by η_⋆ and η_GR. We thus choose key parameters to vary based on η_⋆ and η_GR. All relevant variables are defined in Table <ref>, but not all of these are varied in each simulation, as we will describe in more detail in the next section. 
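Since both the sampling scheme and the initial condition are built around η_⋆, η_GR, and the critical inclination, a short sketch of how these are evaluated is given below. Constants are in SI units, the example values are those of the Section 3 case study, and we write η_⋆ with a_1^5 in the denominator (the a_2^5 appearing in the displayed equation looks like a transcription slip; with a_1^5 the case-study parameters reproduce the near-70° critical inclination quoted above).

```python
import numpy as np

G, c = 6.674e-11, 2.998e8                          # SI units
Msun, Rsun, Mjup = 1.989e30, 6.957e8, 1.898e27
AU, day = 1.496e11, 86400.0

def J2_star(k2s, P_star, R_star, M_star):
    # stellar quadrupole coefficient from the rotation period and Love number
    return (k2s / 3.0) * (4.0 * np.pi**2 / P_star**2) * R_star**3 / (G * M_star)

def etas(J2, M_star, m2, R_star, a1, a2, e2):
    eta_star = 2.0 * J2 * (M_star / m2) * R_star**2 * a2**3 / a1**5 * (1.0 - e2**2)**1.5
    eta_gr = 8.0 * (G * M_star / c**2) * (a2**3 / a1**4) * (M_star / m2) * (1.0 - e2**2)**1.5
    return eta_star, eta_gr

def i_crit(eta_star, eta_gr):
    arg = (4.0 + 4.0 * eta_star + eta_gr) / (10.0 + 5.0 * eta_star)
    return 90.0 if arg >= 1.0 else np.degrees(np.arcsin(np.sqrt(arg)))

# pre-main-sequence star (1.3 R_sun, P = 7 d, k2 = 0.2), Neptune at 0.07 AU, 4 M_Jup at 2 AU
J2 = J2_star(0.2, 7 * day, 1.3 * Rsun, Msun)
es, eg = etas(J2, Msun, 4 * Mjup, 1.3 * Rsun, 0.07 * AU, 2.0 * AU, 0.0)
print(i_crit(es, eg))                              # ~72 deg for the case-study parameters
```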
The viscous timescale t_v is related to the tidal quality factor Q of the planet by[For the stellar tidal quality factor, k_2,1 would be replaced by k_2⋆, m_1 and R_1 would be replaced by M_⋆ and R_⋆, and t_v,1 would be replaced by t_v,⋆. Our standard t_v,⋆ values correspond to Q_⋆≈ 10^6-10^7. ] Q_1 = 4/3k_2,1/(1+2k_2,1)^2Gm_1/R_1^3(a_1^3/GM_⋆)^1/2 t_v,1. The simulations are designed to model the system's evolution just after the disk-driven resonance encounter has taken place. Given a set of parameters, we calculate the critical inclination (equation <ref>) and set this as the inner Neptune's initial inclination. To account for the fact that the initial inclination resonance occurs during the star's pre-main sequence phase while the subsequent long-term evolution is during the main sequence phase, we determine the critical inclination using R_⋆ = 1.3 R_⊙ and P_⋆ =7 days to reflect fiducial pre-main sequence values. Then, for the integration, we set the stellar parameters using the sampled variables and keep them constant. We do not evolve the stellar properties over time because we expect that tides and planet-planet interactions dominate the orbital evolution. The integrations run for 6 Gyr. §.§ “Tug-of-war” simulation suites We define three simulation suites that span the “tug-of-war” between the outer Jupiter, stellar oblateness, and general relativity. In Suite 1, we sample all parameters according to the schemes outlined in Table <ref>. This suite is designed to represent the full range of outcomes given any plausible parameter choices. Suites 2 and 3 are more targeted. In Suite 2, we vary R_⋆ and a_1, and in Suite 3, we vary M_⋆, a_1, and m_2. For all three suites, we run ∼5,000 different simulations. The results of Suites 1, 2, and 3 are shown in Figure <ref>. To understand the degree of variation in the inclination evolution, we compute the standard deviation of the inner planet's inclination for each run. Recall that the planet starts at the critical inclination, so a small standard deviation would indicate that the misaligned orbit is long-term stable. It is clear from Figure <ref> that very small standard deviations are the usual outcome. Specifically, for Suites 1, 2, and 3, there are respectively 375, 292, and 133 runs where the standard deviation of the inner planet's inclination is greater than 1^∘. Suite 2 shows a distinct curve of initial mutual inclination vs. a_1. This is a consequence of initializing the inner planet at the critical inclination. As described earlier, Suite 2 varies R_⋆ and a_1, but R_⋆ is set to 1.3 R_⊙ for the critical inclination calculation. Thus, there is only one degree of freedom in setting I_crit, and that is reflected in the curve of the inclination versus a_1. For η_GR > 6 + η_⋆, I_crit = 90^∘ and fully polar orbits are created. The curve is also seen as an approximate lower envelope in the Suite 1 and 3 simulations where more parameters are varied. Overall, the simulation results offer a few notable takeaways. First, and most importantly for this work, the polar planets are extremely stable to tides in all three suites. While there is some deviation in the inner planet's inclination in some of the runs in each suite, the planets in polar orbits are all stable over time. This is further illustrated in Figure <ref>. The top (bottom) panel shows the subset of runs from all three suites with a standard deviation greater (lower) than one. The top panel includes no runs with planets in polar orbits. 
There are some near-polar runs that exhibit changes in inclination, but the truly polar orbits are stable. This conclusion is further cemented by the bottom panel. Even within the low standard deviation subset, the runs with the highest standard deviation are predominately concentrated around 50^∘. The second key result pertains to the location of the instability hotspot for all three suites. In Figure <ref>, the color bar shows that the cases with the highest standard deviation of the inner planet's inclination have an initial inclination in the range ∼45^∘-80^∘. Looking more closely at these higher standard deviation runs in Figure <ref>, we see an evolution reminiscent of high-eccentricity migration. The eccentricity slowly increases over time, and when it gets very large, the orbit undergoes rapid tidal migration and circularization. The planet ends up in a stable circular orbit closer to the star with a lower inclination. Such outcomes are the only mechanism in which we observed large-scale inclination variations. One caveat of these simulations is that we do not know the exact eccentricities emerging after the initial resonant encounters. As shown in Figure <ref>, the eccentricity can be excited during the resonant encounter but then influenced by tidal evolution to stabilize at smaller values. We set the eccentricities somewhat arbitrarily but informed by the earlier simulation and the observed population of Neptune-sized planets. To test the sensitivity of this assumption, we ran a suite of simulations where the eccentricity was allowed to vary more widely and found that the stability of the polar planets as well as the location of the hotspot was unchanged. There was the same order of magnitude of runs with standard deviation of the inner planet's inclination greater than 1^∘ and the hotspot appears in the same range of initial mutual inclination. It is useful to contrast our results with <cit.>, who also explored the secular dynamics of similar hierarchical two-planet systems but in a slightly different parameter regime. They identified a planetary spin-facilitated mechanism that induced unexpected and significant damping of the mutual inclination in long-term orbital dynamics. To validate our own simulations and compare with their results, we run another suite of long-term simulations similar to those presented in <cit.>. The set-up and results are described in Appendix <ref>. In short, we reproduce their findings and show that a particular regime of parameter space (namely, where the inner planet is more massive than Neptune) results in inclination damping by tens of degrees. The effect is significant, but it doesn't overlap with the parameter regime of interest in this work. § CASE STUDIES Thus far, we have used generalized synthetic planetary systems with randomized parameters to explore the long-term dynamics following disk-driven resonance. Here we turn to observed polar Neptune systems to gain further insights. We analyze two case study planets in polar and slightly eccentric orbits: HAT-P-11 b and WASP-107 b. Both of these planets have distant giant planet companions, making them strong candidates for a disk-driven resonance origin scenario . Moreover, they are actively undergoing atmospheric mass loss <cit.>, which could be a side effect of enhanced internal heating driven by tides from their eccentric orbits <cit.>. We aim to examine the long-term evolution of the inner planets following the initial establishment of their polar orbits via disk-driven resonance. 
We will use dynamical simulations to track the stability of the orbits under the influence of tides. Moreover, we will explore whether the long-term orbital evolution allows us to place bounds on the planets' tidal quality factors. Before proceeding, we note that HAT-P-11 b and WASP-107 b were explored in two recent papers by <cit.> and <cit.>, who showed that these planets could be explained by a high-eccentricity migration pathway induced by von Zeipel-Kozai-Lidov (ZKL) cycles. Disk-driven resonance is a potential alternative to this. We do not necessarily favor a certain formation pathway, as both theories have advantages. One advantage of disk-driven resonance is that the inner and outer planets form coplanar. Conversely, for ZKL cycles to start, the planets must first obtain a large mutual inclination, which could be produced by planet-planet scattering or a stellar flyby. However, we also note that both processes could work in concert, with disk-driven resonance producing an initial tilt and then scattering or secular interactions leading to further eccentricity and inclination changes. The outer planet in HAT-P-11 has a substantial eccentricity and possibly also a misaligned stellar obliquity, which is indicative of a history of scattering <cit.>. §.§ HAT-P-11 b The HAT-P-11 planetary system, comprising two known exoplanets, offers a compelling glimpse into the dynamical processes producing planets in slightly eccentric polar orbits. Closest to the host star is HAT-P-11 b, a Neptune-sized planet discovered utilizing the transit method <cit.>. With a mass approximately 26 times that of Earth and a radius four times larger, HAT-P-11 b orbits its host star every 4.88 days at a close distance of about 0.053 AU with a moderate eccentricity of 0.218^+0.034_-0031 <cit.>. The projected obliquity is λ = 103^+26_-10 degrees, and the 3D obliquity is ψ = 97^+8_-4 degrees <cit.>. <cit.> discovered the outer Jupiter-sized planet and suggested a large mutual inclination between the inner sub-Neptune and outer Jupiter, which was recently confirmed by <cit.>. Here, we explore the idea that HAT-P-11-b's polar orbit was established via a disk-driven resonance. We consider the post-disk evolution of the system and perform simulations that probe the long-term stability of HAT-P-11-b's orbit. The initial eccentricity following resonant excitation is unknown, as is the tidal Q of the planet. The planet may also have undergone tidal migration, ending up in a smaller orbit than it started at. Thus, we construct a suite of 5,000 simulations with different initial eccentricities, semi-major axes, and tidal Q values. Specifically, we allow the eccentricity to vary from 0.02 to 0.95, the semi-major axis from 0.05 AU to 0.06 AU, and Q from 10^3 to 10^7, as summarized in Table <ref>. For all simulations, we start the inner planet at the critical inclination calculated using using the fiducial pre-main sequence stellar parameters (R_⋆ = 1.3 R_⊙, P_⋆ = 7 days). The system is evolved using for 6 Gyr. Of the 5,000 simulations, 2,089 were excluded because the planet was engulfed by the star before reaching the end of the simulation. The orbital evolutions of all simulations are shown in Figure <ref>. The inclination stays remarkably stable over a 6 Gyr timescale (0.003% average change in mutual inclination). The planet's eccentricity and semi-major axis evolves due to the tidal dissipation. 
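Schematically, the construction of a suite and the later classification of runs reduce to the sketch below; whether Q is drawn uniformly or log-uniformly over [10^3, 10^7] is fixed by the sampling scheme in the table referenced above (log-uniform is assumed here), and the matching tolerances are the ones quoted just below.

```python
import numpy as np

rng = np.random.default_rng(0)
n_runs = 5000
e0 = rng.uniform(0.02, 0.95, n_runs)           # initial eccentricity
a0 = rng.uniform(0.05, 0.06, n_runs)           # initial semi-major axis [AU]
Q = 10.0 ** rng.uniform(3.0, 7.0, n_runs)      # tidal quality factor (log-uniform assumed)

# ... each (e0, a0, Q) system is integrated for 6 Gyr with the secular code ...

def matches_observed(a_fin, e_fin, a_obs, e_obs, tol_a=0.005, tol_e=0.05):
    # runs counted as reproducing the present-day semi-major axis and eccentricity
    return (np.abs(a_fin - a_obs) < tol_a) & (np.abs(e_fin - e_obs) < tol_e)
```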
In Figure <ref>, the blue lines show the 120 runs where the final semi-major axis and eccentricity match the observed values with tolerances of 0.005 AU and 0.05, respectively. These cases show some decrease in the semi-major axis of the inner planet. Only a certain range of Q values results in an orbit that agrees with the present-day configuration. We will discuss this further in Section <ref>. §.§ WASP-107 b The WASP-107 system hosts the well-studied super-Neptune WASP-107 b <cit.>. This planet has become a focal point for understanding the atmospheric properties and orbital dynamics of puffy planets in close proximity to their host stars. With a mass only about 30 times that of Earth yet a nearly Jupiter-sized radius, WASP-107 b completes an orbit around its host star every 5.7 days with a semi-major axis of 0.055 AU and eccentricity of 0.06±0.04 <cit.>. The planet's orbit is polar, with a sky-projected obliquity of λ = 118^+38_-19 degrees <cit.> and a 3D obliquity of ψ = 92.6^+30.7_-1.8 degrees <cit.>. <cit.> detected a 0.36 M_Jup companion planet on a wide eccentric orbit. Alongside dynamical studies, WASP-107 b has provided insights into the nature of inflated super-Neptunes through studies of its escaping atmosphere <cit.> and high internal heat flux <cit.>, both suggestive of strong tidal heating <cit.>. The planet's unusual orbit and interior/atmosphere make it a prime subject for detailed characterization. Just like our case study of HAT-P-11, we run 5,000 simulations of the WASP-107 system that explore the long-term evolution of planet b after the hypothesized disk-driven resonance encounter established its polar orbit. We allow the eccentricity to vary from 0.02 to 0.95, the semi-major axis from 0.05 AU to 0.06 AU, and Q from 10^3 to 10^7. Of the 5,000 simulations, 379 were excluded because the planet fell into the star before reaching the end of the 6 Gyr. The system evolution is shown in Figure <ref>. Like HAT-P-11 b, the inclination stays extremely stable over a 6 Gyr timescale (0.005% average change in mutual inclination). The planet's eccentricity and semi-major axis evolve due to tidal dissipation. In Figure <ref>, the blue lines show the 28 runs where the final semi-major axis and eccentricity match the observed values with tolerances of 0.005 AU and 0.05, respectively. §.§ Constraining Tidal Q Our simulations used a range of values for the planet's tidal quality factor Q, but only some of the resulting systems matched the observed system properties. Here we explore this in more detail in an effort to obtain constraints on the Q values. The tidal quality factor is a metric for the efficiency of tidal energy dissipation, defined by Q^-1 = -1/2π E_0∮dE/dt dt, where E_0 encapsulates the peak energy stored within the tidal distortion of the planet, and the integral represents the energy dissipated in a complete cycle <cit.>. Smaller values of Q correspond to more efficient dissipation and vice versa. Within the Solar System, the terrestrial planets and satellites have Q ∼ 10^2 <cit.>, while Uranus and Neptune have Q ∼ 10^4 <cit.>. For Neptune-like exoplanets, we expect ∼10^4-10^5, although the detailed values would depend on the body's specific rheological properties. For both case study systems, we do not know the inner planet's initial eccentricity nor its Q, but we do have measurements for the current eccentricity. Both planets have non-circular orbits, indicating that Q cannot be too low, otherwise the orbits would have circularized. 
We isolate the simulations that produce final eccentricities matching the observed values (blue lines in Figures <ref> and <ref>). By identifying their initial eccentricities and Q values, we can obtain constraints on the Q values for HAT-P-11 b and WASP-107 b within the framework of this theory. These simulations place the first Q constraints for either of these planets. As shown in Figure <ref>, we find that Q≳ 10^5 for HAT-P-11 b, and Q≳10^4 for WASP-107 b. The constraints depend on the initial eccentricity as well. Overall, these results suggest a level of dissipation that is relatively inefficient but consistent with expectations from the Solar System planets. Moreover, we expect that the constraints on Q would further increase if we included the evolution of the planet's radius in our simulations <cit.>, since more significant past radius inflation would need to be balanced by a larger Q to result in the same timescales of tidal evolution. § CONCLUSIONS The polar Neptunes are a mysterious new class of exoplanets. To investigate their origins, we extended the study of the disk-driven resonance theory proposed by <cit.>. This scenario suggests that the orbit tilting happens early, within the first ∼10 Myr of the system's lifetime. Before this work, there had not yet been investigations of the planet's subsequent ∼Gyrs of orbital evolution, during which tides might feasibly realign the orbits. We performed a comprehensive analysis of the impact of tidal evolution on both the initial excitation of the polar orbits and their long-term stability. Our main takeaways are as follows: * Short-term N-body integrations of the initial disk-driven resonance encounter reveal that tidal effects do not change the inclination evolution resulting from the resonance, but they can impact the eccentricity, leading it to stabilize at a smaller value than seen in a tides-free simulation. A broader parameter space exploration would be valuable in future work. * Long-term secular simulations show that polar Neptune orbits are extremely stable. The orbits do not realign as a result of tidal dissipation in the star or in the planet. This result implies that there is no constraint on when the polar misalignment must be initially established, and thus a primordial excitation theory such as disk-driven resonance is in no tension with the observations. * On the other hand, systems emerging from the initial disk-driven resonance with a more moderate, non-polar obliquity in the range of ∼45^∘-80^∘ can often undergo substantial realignment due to tidal interactions. In some cases, these interactions are mediated by effects of spin-orbit coupling. * HAT-P-11 and WASP-107 are exemplary case study systems, as they contain both polar Neptunes and outer giant planets. Long-term secular simulations show stable orbits for the polar Neptunes. Moreover, working within the framework of disk-driven resonance, we constrain the tidal quality factors of the planets to Q ≳ 10^5 for HAT-P-11 b and Q ≳ 10^4 for WASP-107 b, similar to Uranus and Neptune. Disk-driven resonance is not the only possible formation pathway for these planets, but it is consistent with available constraints. Our theoretical exploration of disk-driven resonance can be complemented with further observational tests. Disk-driven resonance predicts that the polar Neptunes are accompanied by distant giant planets with small stellar obliquities and ∼90^∘ mutual inclinations with respect to the inner Neptunes . 
This can be tested with future observations that constrain the full 3D architectures of systems containing polar Neptunes, including the stellar obliquities of the outer giant planets and the mutual orbital inclinations. Such constraints are already possible through the usage of Hipparcos and Gaia astrometry <cit.>, but they will become significantly more prevalent with Gaia DR4 <cit.>. Moreover, in systems where outer giant planets have not yet been discovered, many more long-period perturbers may soon be identified using long-term radial velocity monitoring and Gaia astrometry <cit.>. There is thus a promising future ahead for a more complete understanding of polar Neptune systems and their enigmatic origins. § ACKNOWLEDGMENTS This material is based upon work supported by the National Science Foundation under Grant No. 2306391. We thank the NSF for their support. We thank Cristobal Petrovich, Guðmundur Stefánsson, Alexandre Correia, and James Owen for helpful conversations. We are grateful to the Yale Center for Research Computing for the use of the research computing infrastructure, and we especially thank Tom Langford for guidance and support using the Grace cluster. Additionally, we gratefully acknowledge access to computational resources through the MIT Engaging cluster at the Massachusetts Green High Performance Computing Center (MGHPCC) facility. § SECULAR SPIN-ORBIT RESONANCE The early orbital evolution responsible for the secular inclination resonance encounter could potentially result in other resonances being crossed. In particular, we are interested in the secular spin-orbit resonance, which involves a commensurability between the orbit nodal precession frequency, g = Ω̇, and the planetary spin-axis precession frequency, α. This resonance excites the planetary obliquity, and a non-zero planetary obliquity results in another source of tidal dissipation (in addition to eccentricity). Secular spin-orbit resonances put planets into so-called “Cassini states” <cit.>, equilibrium configurations of a body's spin pole in a uniformly precessing orbit frame. These have been well-studied in both Solar System and exoplanetary system contexts <cit.>. To excite the planetary obliquity, the ratio of frequencies, |g|/α, must cross unity from above. That is, a necessary condition for resonance crossing is that |g|/α > 1 initially. Cassini states are strictly defined only for uniform orbital precession when g = Ω̇ and the inclination I are constant. In cases where there are multiple sources of perturbations, including the scenario considered here, the precession is non-uniform. The planet's inclination and node evolution comprises a superposition of several modes with multiple frequencies {g_i}. However, for well-separated frequencies, the resulting spin vector equilibria are “quasi-Cassini states,” which behave approximately as Cassini states with g equal to one of the g_i modes. In the system considered here, a secular spin-orbit resonance for the inner planet would involve the fastest g_i frequency since α is generally fast due to the inner planet's close-in orbit. The full set of g_i frequencies can only be calculated through a complete secular analysis of the system, but they can be approximated by pairwise interactions. The fastest frequency for the inner planet is g_⋆,1, the nodal precession frequency of the inner planet induced by stellar oblateness <cit.>. 
This frequency decreases over time due to stellar contraction and rotational spin down, meaning that |g_⋆,1|/α_1 evolves in the right direction for resonance crossing. We can thus assess the conditions in which the inner planet encounters the resonance by considering the initial value of |g_⋆,1|/α_1 across the parameter space. Figure <ref> shows the initial value of |g_⋆,1|/α_1 in a parameter space of a_1 on the x-axis and a combination of stellar parameters on the y-axis. Obliquity excitation is expected when |g_⋆,1|/α_1>1 initially. The figure can be compared with Figure 4, where the outcomes of the disk-driven resonance are shown in the same parameter space. We find that the region of parameter space where polar orbits are created correspond to |g_⋆,1|/α_1 ≪ 1, far outside of the region where secular spin-orbit resonance is expected. This indicates that planets that obtain polar orbits through disk-driven resonance probably do not also obtain high obliquities, at least not through a simultaneous secular spin-orbit resonance encounter. § COMPARISON WITH <CIT.> <cit.>, hereafter , studied the dynamics of hierarchical two-planet systems and found that there can be a significant and surprising source of inclination damping. This does not actually result from a direct consequence of tidal effects on the orbit but rather from a complex interaction between tidal effects on the spin and secular eccentricity oscillations that lead to a gradual pumping of eccentricity and damping of inclination. Since we don't see widespread inclination damping in our simulations, it is useful to compare to . They considered cases where the inner planet was more massive and on a wider orbit than the close-in Neptunes we are studying here. This suggests that we do not see inclination damping simply because our systems are in a different parameter regime. We can confirm this interpretation and simultaneously validate our simulations by exploring cases with parameters similar to those used in . We first reproduce a simulation performed by of the system HD 74156, which contains two giant planets on eccentric orbits <cit.>. We use the same set-up as Figure 6, and the results are shown in Figure <ref>. The orbital evolution is nearly identical to that seen in . The inner planet experiences significant semi-major axis, inclination, eccentricity damping as a result of the coupled spin-orbit and tidal evolution. The close agreement of our simulation results with offers a strong validation of our numerical method. We further explore a comparison to 's findings using a larger set of simulations. Similar to Section <ref>, we create a simulation suite of the long-term orbital evolution of synthetic systems where the inner planet is initialized at the critical inclination. However, we use parameters that are more similar to the systems explored in . The parameters that differ include the inner planet's semi-major axis and the planet masses. The outcomes are consistent with in that they show significant inclination damping. About 89% of the runs resulted in a standard deviation of the inner planet's inclination greater than 1^∘, compared to only 6% of our earlier runs in the disk-driven resonance regime that had this high level of inclination variation (e.g. Figure <ref>). Clearly, these systems are in a different regime than the polar Neptunes we explored in Section <ref>. aasjournal
http://arxiv.org/abs/2409.03538v1
20240905135557
Spectral properties of hexagonal lattices with the -R coupling
[ "Pavel Exner", "Jan Pekař" ]
math-ph
[ "math-ph", "math.MP", "math.SP", "quant-ph" ]
label1,label2] Pavel Exner exner@ujf.cas.cz label2,label3] Jan Pekař honzapekar28@gmail.com [label1]Doppler Institute for Mathematical Physics and Applied Mathematics, Czech Technical University, Břehová 7, 11519 Prague, Czechia [label2]Department of Theoretical Physics, Nuclear Physics Institute, Czech Academy of Sciences, 25068 Řež near Prague, Czechia [label3]Faculty of Mathematics and Physics, Charles University, V Holešovičkách 2, 18040 Prague, Czechia § ABSTRACT We analyze the spectrum of the hexagonal lattice graph with a vertex coupling which manifestly violates the time reversal invariance and at high energies it asymptotically decouples edges at even degree vertices; a comparison is made to the case when such a decoupling occurs at odd degree vertices. We also show that the spectral character does not change if the equilateral elementary cell of the lattice is dilated to have three different edge lengths, except that flat bands are absent if those are incommensurate. quantum graph vertex coupling time-reversal invariance violation lattice transport properties [2020] 81Q35 35J10 § INTRODUCTION Quantum graphs represent a wide class of systems with a number of interesting properties; for an introduction and rich bibliography we refer to the monographs <cit.>. The versatility of quantum graph models comes, in particular, from the fact that there are numerous ways in which their Hamiltonians can be made self-adjoint, coming from different choices of the ways in which the wave functions are matched at the vertices. Denoting by Ψ={ψ_j} and Ψ'={ψ'_j} the vectors of boundary values of the functions and their derivatives (conventionally taken in the outward direction), respectively, on edges meeting at a vertex of degree N, conservation of the probability current at the vertex is guaranteed whenever the condition (U-I)Ψ+iℓ(U+I)Ψ'=0 holds, where U is an N× N unitary matrix and ℓ is a parameter fixing the length scale. This fact is usually referred to the paper <cit.> but the condition was known already to Rofe-Beketov <cit.>. In the overwhelming part of the quantum graph literature this richness lies fallow, though, as most often people use the so-called δ coupling where the wave functions are continuous at the vertex (having a common value ψ) and ∑_j ψ'_j=αψ with some α∈, especially the particular choice α=0 referred to as Kirchhoff. There are, however, other choices, some of the potential physical interest. Motivated by attempts to use quantum graphs to model the anomalous Hall effect <cit.>, we proposed in <cit.> a simple vertex coupling violating the time-reversal invariance, the violation `intensity' being maximal at the momentum k=l^-1; the corresponding unitary matrix is R given by (<ref>) below. One of the interesting properties of this coupling is that its high-energy transport properties depend on the vertex parity: if N is odd, the edges become asymptotically decoupled while for even-degree vertices the scattering matrix has a nontrivial limit. One of the consequences concerns the ratio of gap-to-band size in periodic graphs with such a coupling – see, e.g., <cit.>. Note that the R coupling belongs to a wider class in which the matrices U are circulant and transpose non-invariant; such couplings exhibit, for instance, a non-trivial 𝒫𝒯-symmetry even if the corresponding Hamiltonians are self-adjoint <cit.>. The true reason behind the mentioned high-energy dichotomy is not that much the parity itself, but rather the spectrum of the matrix U. 
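This spectral dichotomy is easy to make explicit numerically. A minimal sketch (our own illustration; R is realised as the N x N cyclic-shift matrix, so that -R is the coupling matrix written out below) checks for which N the value -1 belongs to the spectrum:

import numpy as np

def R_matrix(N):
    # cyclic shift: ones on the superdiagonal and in the lower-left corner,
    # so that -R_matrix(N) is the coupling matrix U = -R written out below
    return np.roll(np.eye(N), 1, axis=1)

for N in range(3, 9):
    for label, U in (("R", R_matrix(N)), ("-R", -R_matrix(N))):
        ev = np.linalg.eigvals(U)
        print(N, label, "-1 in spectrum:", bool(np.any(np.abs(ev + 1.0) < 1e-9)))
# -1 belongs to sigma(R) only for even N, but to sigma(-R) for every N, which is what
# reverses the high-energy behaviour of odd- and even-degree vertices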
The asymptotic edge decoupling occurs whenever -1 is not an eigenvalue of U; recall that the spectrum of R consists of the complex roots of unity. We have illustrated this fact in <cit.> on the example of a periodic square lattice graph with the vertex coupling given by U=e^iμR: unless μ was an integer multiple of 2π/N, the gaps in the spectrum expanded so that asymptotically they dominated the spectrum. The main aim of the present letter is to show that the said dichotomy can be reverted, making the odd degree vertices `transport friendly' at high energies. To this aim it is enough to choose μ=π, in other words, to consider the coupling described by the matrix U := -R = [ 0 -1 0 … 0; 0 0 -1 … 0; ⋮ 0 0 ⋱ ⋮; 0 … ⋱ ⋱ -1; -1 0 … 0 0 ], which in component form reduces to -ψ_j+1 - ψ_j + iℓ(-ψ'_j+1 + ψ'_j) = 0, j ∈ℤ (mod N). We will first analyze the spectral and scattering properties of a single vertex. After that we will investigate a regular honeycomb lattice on which the band-to-gap ratio tends to zero as k→∞ in case of the R coupling <cit.>; we will show that for the coupling (<ref>) the opposite is true. Finally, we will look at the spectrum of deformed periodic honeycomb lattices with the -R coupling in the spirit of <cit.>; we will see that while the band dominance is preserved, geometric deformation may lead to more than one asymptotic behavior type of the gaps. § STAR GRAPHS Consider a star graph with N semi-infinite edges meeting at a single vertex and the vertex condition (<ref>). By general principles <cit.>, the negative spectrum of such a system is discrete. It can be found easily: using the Ansatz ψ_j = c_j e^-κx, κ > 0, the requirement (<ref>) yields a system of equations for the coefficients c_j which turns out to be solvable iff (-1-iκℓ)^N + (-1)^N-1(-1+iκℓ)^N = 0. This condition has solutions only if N ≥ 3, and the eigenvalues of our Hamiltonian are -κ^2, where κ = (1/ℓ)tan((2m-1)π/2N) with m running through 1, …, (N-1)/2 for N odd, and κ = (1/ℓ)tan(mπ/N) with m running through 1, …, (N-2)/2 for N even; in particular, for N=3,4 there is a single negative eigenvalue equal to -(1/3)ℓ^-2 and -ℓ^-2, respectively. As for the continuous spectrum, the on-shell S-matrix at momentum k is by <cit.> equal to S(k) = ((kℓ-1)I+(kℓ+1)U)((kℓ+1)I+(kℓ-1)U)^-1. As indicated in the introduction, its behavior as k→∞, and likewise as k→ 0, is determined by the spectrum of the matrix U; it is obvious from (<ref>) that the limits are trivial as long as -1 and 1, respectively, do not belong to σ(U). In our current situation with U given by (<ref>), its spectrum consists of the eigenvalues -ω_j = -e^2πij/N, j = 0, …, N-1. Hence -1 ∈σ(U) holds always; a direct computation using the fact that both I and U are circulant matrices gives lim_k →∞ S_ij(k) = (N-2)/N δ_ij - (2/N)(1-δ_ij). In a similar way, we get lim_k → 0 S_ij(k) = (2-N)/N δ_ij + (-1)^i-j (2/N)(1-δ_ij) for an even N, otherwise lim_k → 0 S(k) = -I. § HEXAGONAL LATTICE Consider next a periodic hexagonal lattice with edges of length l sketched in Fig. <ref>. Since the latter fixes the length scale, without loss of generality we can put ℓ=1 in (<ref>); the coordinate direction is chosen to be increasing from the left to the right. As usual in the case of periodic graphs <cit.>, one can use the Bloch-Floquet decomposition and reduce the problem to finding the spectrum on the period cell. To this aim, we use the Ansatz ψ_j(x) = C_j^+ e^ikx + C_j^- e^-ikx, x ∈ [0, l/2], φ_j(x) = D_j^+ e^ikx + D_j^- e^-ikx, x ∈ [-l/2,0], with j = 1, 2, 3.
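As an aside, the single-vertex results of the preceding section admit a quick numerical cross-check. The following sketch is our own verification code (with ℓ = 1 and hypothetical helper names); it locates the negative star-graph eigenvalues from the secular equation by a sign scan with bisection, and evaluates S(k) at large k.

import numpy as np

ell = 1.0

def secular(kappa, N):
    # (-1 - i*kappa*ell)^N + (-1)^(N-1) * (-1 + i*kappa*ell)^N; for kappa > 0 this is
    # purely real when N is odd and purely imaginary when N is even
    z = (-1 - 1j * kappa * ell) ** N + (-1) ** (N - 1) * (-1 + 1j * kappa * ell) ** N
    return z.real if N % 2 else z.imag

def negative_eigenvalues(N, kmax=10.0, steps=20000):
    # scan for sign changes of the secular function and refine each root by bisection
    ks = np.linspace(1e-6, kmax, steps)
    vals = np.array([secular(k, N) for k in ks])
    energies = []
    for i in np.where(np.diff(np.sign(vals)) != 0)[0]:
        a, b = ks[i], ks[i + 1]
        for _ in range(60):
            m = 0.5 * (a + b)
            if secular(a, N) * secular(m, N) <= 0:
                b = m
            else:
                a = m
        energies.append(-(0.5 * (a + b)) ** 2)
    return energies

print(negative_eigenvalues(3))   # [-0.333...], i.e. -(1/3) ell^-2
print(negative_eigenvalues(4))   # [-1.0],      i.e. -ell^-2

# large-k limit of the on-shell S-matrix for N = 5
N = 5
U = -np.roll(np.eye(N), 1, axis=1)          # the coupling matrix -R
k = 1e6
S = ((k * ell - 1) * np.eye(N) + (k * ell + 1) * U) @ np.linalg.inv(
    (k * ell + 1) * np.eye(N) + (k * ell - 1) * U)
print(np.round(S.real, 4))                  # (N-2)/N = 0.6 on the diagonal, -2/N = -0.4 off it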
At the cell center, functions ψ_1 and φ_1 have to be matched smoothly which yields D^+_1 = C^+_1, D^-_1 = C^-_1, while imposing the quasiperiodic conditions at the border of the fundamental domain we get D^+_2 = C^+_2^ikl^-θ_2, D^-_2 = C^-_2^-ikl^-θ_2, D^+_3 = C^+_3^ikl^-θ_1, D^-_3 = C^-_3^-ikl^-θ_1, where θ_1 and θ_2 are the quasimomentum components running both through the interval (-π,π] as the Brillouin zone is the square (-π,π]^2. At the two vertices in the period cell condition (<ref>) must be valid giving -ψ_2(0) - ψ_1(l/2) - i(ψ^'_2(0) + ψ^'_1(l/2)) = 0, -ψ_3(0) - ψ_2(0) + i(-ψ^'_3(0) +ψ^'_2(0)) = 0, - ψ_1(l/2) -ψ_3(0) + i(ψ^'_1(l/2)+ψ^'_3(0)) = 0, -φ_2(0) - φ_1(-l/2) + i(φ^'_2(0) + φ^'_1(-l/2)) = 0, -φ_3(0) - φ_2(0) + i(φ^'_3(0) -φ^'_2(0)) = 0, - φ_1(-l/2) -φ_3(0) - i(φ^'_1(-l/2)+φ^'_3(0)) = 0. Combining the condition (<ref>)–(<ref>) we arrive at a system of linear equations for the coefficients C^±_j, which is solvable only if its determinant vanishes; excluding numerical prefactors we obtain the following spectral condition: sinkl (cos2kl (3k^2 +1)^2 + 3k^4 + 6k^2 -1 -4d_θk^2(k^2-1)) = 0 with d_θ := cos(θ_1 - θ_2) + cosθ_1 + cosθ_2; d_θ∈ [-3/2,3]. Its solutions are of two kinds; we have either sinkl = 0 giving rise to flat bands at the energies k^2=(mπ/l)^2, m∈, or those satisfying the relation cos2kl =1-6k^2-3k^4 + 4d_θk^2(k^2-1)/(3k^2 +1)^2 for some θ∈(-π,π]^2. This applies to the positive part of the spectrum; in the negative part the flat bands are absent and the counterpart to (<ref>) is obtained simply by substitution k = iκ, κ > 0, giving cosh2κl =1+6κ^2-3κ^4 + 4d_θκ^2(κ^2+1)/(3κ^2 -1)^2. Both conditions (<ref>) and (<ref>) determining the absolutely continuous spectral bands allows for a graphical solution as sketched in Fig. <ref>, where the positive horizontal half-line corresponds to the momentum variable k, the negative one to κ. It is not difficult to derive the band properties. We begin with the negative part of the spectrum: * bands are determined by the intersection of cosh2κl with the region bordered by the curves g_+(κ) = 1+18κ^2 + 9κ^4/(3κ^2-1)^2 and g_-(κ) = 1 +3κ^2/1-3κ^2, * negative spectrum is never empty, infσ(H) < -1/3, * for l > 2√(3) the negative spectral bands are strictly negative and there are two of them, one below and one above the energy -1/3, * for 2√(3)≥ l > √(3) the second negative band extends to zero, * for l ≤√(3) there is only one negative band, placed below the energy -1/3, * for large l the negative bands become exponentially narrow, centered around the single vertex bound state energy (<ref>), in this case -1/3. They are of the size ≈2/√(3)^-1/√(3)l up to an 𝒪(^-2/√(3)l) error, with the distance ≈2/√(3)^-2/√(3)l + 𝒪(^-4/√(3)l) separating them, both at the momentum scale, * the first band decreases in the energy scale as l → 0, being between energies (-2/√(3)1/l,-1/3) up to an 𝒪(l) error. On the other hand, the positive spectral bands are determined by the intersection of cos2kl with the region bordered by the curves h_+(k) = 1-18k^2 + 9k^4/(3k^2+1)^2 and h_-(k) = 1 -3k^2/1+3k^2 and one can easily find their properties: * the number of gaps in the positive spectrum is infinite; they are centered around the points k = mπ/2l. If m is even, the corresponding gap has the asymptotic width ΔE = 8/√(3)1/l + 𝒪(m^-2) at the energy scale, while for m odd it is ΔE = 4/√(3)1/l + 𝒪(m^-2) as m →∞, * the first positive band starts at zero if √(3)≤ l < 2√(3), otherwise there is a gap between it and the second (or the only) negative band. 
* at higher energies the spectrum is dominated by spectral bands. Because lim_k →∞h_+(k) = 1 and lim_k →∞h_-(k) = -1, the probability of belonging to the positive spectrum in the spirit of <cit.> can be trivially expressed analytically and equals to P_σ(H) := lim_E →∞1/Eσ(H) ∩ [0,E] = 1. For comparison we show in Fig. <ref> the graphical solution of the spectral problem for hexagonal lattice with the R coupling, correcting at the same time an error in <cit.> concerning the lower boundary of the shaded region; it does not affect the conclusions about the numbers and distribution of bands and gaps, as well as the dominance of the latter, but it changes the coefficient values in the asymptotic expressions. The upper boundary is g_+(κ) = κ^4+18κ^2 + 9/(κ^2-3)^2 and h_+(k) = k^4-18k^2 + 9/(k^2+3)^2 in the negative and positive part respectively, while for the lower one the expressions given in <cit.> have to be replaced by g_-(κ) = κ^2+3/κ^2-3 and h_-(k) = k^2+3/k^2-3, respectively. Consequently, * the two negative bands centered around -3 have for large l the widths ≈ 2√(3)e^-√(3)l up to an 𝒪(e^-2√(3)l) error, with the distance ≈ 2√(3)e^-2√(3)l + 𝒪(e^-4√(3)l) separating them, * the lowest band decreases in the energy scale as l → 0, being contained in (-2√(3)/l,-√(3)/l), up to an 𝒪(l) error, * the first positive band starts at zero if l ≥2/√(3), otherwise there is a gap between it and the second negative band, * as for the asymptotic behavior of the positive bands, the two around the points k^2 = (mπ/l)^2 have the width 2√(3)/l + 𝒪(m^-2), and the gap between them is ≈4√(3)/l + 𝒪(m^-2) as m →∞. § GENERAL HEXAGONAL LATTICE Next we consider the same infinite hexagonal lattice, but now scaled independently in each of the three directions, as pictured in Fig <ref>; the Hamiltonian (=Laplacian) on the edges and the vertex coupling -R remain unchanged. To solve the spectral problem we employ an Ansatz analogous to (<ref>) used in the above particular case, ψ_1(x) = C_1^+^ikx + C_1^-^-ikx, x ∈ [0, a/2], ψ_2(x) = C_2^+^ikx + C_2^-^-ikx, x ∈ [0, b/2], ψ_3(x) = C_3^+^ikx + C_3^-^-ikx, x ∈ [0, c/2], φ_1(x) = D_1^+^ikx + D_1^-^-ikx, x ∈ [-a/2,0], φ_2(x) = D_2^+^ikx + D_2^-^-ikx, x ∈ [-b/2,0], φ_3(x) = D_3^+^ikx + D_3^-^-ikx, x ∈ [-c/2,0]. The smooth matching of the functions along the edge of length a leads again to the condition (<ref>), while the Bloch-Floquet conditions now give D^+_2 = C^+_2^ikb^-θ_2, D^-_2 = C^-_2^-ikb^-θ_2, D^+_3 = C^+_3^ikc^-θ_1, D^-_3 = C^-_3^-ikc^-θ_1. The matching relations coming from condition (<ref>) have a structure analogous to (<ref>), but with the coordinates adjusted to the present edge lengths. The resulting linear system of equations for the coefficients C^±_j is solvable provided (3k^4+1)sinaksinbksinck + 2k^2(k^2-1)(sinbkcosθ_2+sinckcosθ_1+sinakcos(θ_2-θ_1)) - 2k^2(k^2+1)(cosakcosbksinck+cosbkcoscksinak+cosckcosaksinbk) = 0. Using simple algebraic manipulations, one can check that by choosing a = b = c = l the condition (<ref>) reduces to (<ref>). Before discussing this model, let us recall the paper <cit.> in which such a general hexagonal lattice was investigated in the situation when the vertex coupling is of a δ-type. Many of the proofs there can be used directly, modulo minor modifications, and we just summarize the conclusions for our present model. 
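Both the reduction to the equilateral case and the band dominance are easy to verify numerically. A minimal sketch (our own code; the function names are ours) evaluates the two spectral conditions at random parameters and estimates the band fraction from the envelope curves h_±:

import numpy as np

def F_general(k, a, b, c, th1, th2):
    # left-hand side of the spectral condition for the lattice with edge lengths a, b, c
    return ((3 * k**4 + 1) * np.sin(a * k) * np.sin(b * k) * np.sin(c * k)
            + 2 * k**2 * (k**2 - 1) * (np.sin(b * k) * np.cos(th2)
                                       + np.sin(c * k) * np.cos(th1)
                                       + np.sin(a * k) * np.cos(th2 - th1))
            - 2 * k**2 * (k**2 + 1) * (np.cos(a * k) * np.cos(b * k) * np.sin(c * k)
                                       + np.cos(b * k) * np.cos(c * k) * np.sin(a * k)
                                       + np.cos(c * k) * np.cos(a * k) * np.sin(b * k)))

def E_equilateral(k, l, th1, th2):
    # left-hand side of the equilateral condition, with d_theta as defined there
    d = np.cos(th1 - th2) + np.cos(th1) + np.cos(th2)
    return np.sin(k * l) * (np.cos(2 * k * l) * (3 * k**2 + 1)**2
                            + 3 * k**4 + 6 * k**2 - 1 - 4 * d * k**2 * (k**2 - 1))

rng = np.random.default_rng(1)
for _ in range(5):
    k = rng.uniform(0.3, 4.0)
    l = rng.uniform(0.5, 2.0)
    th1, th2 = rng.uniform(-np.pi, np.pi, size=2)
    print(F_general(k, l, l, l, th1, th2) / E_equilateral(k, l, th1, th2))
# the ratio is the same for every sample, i.e. the a = b = c = l case reproduces the
# equilateral condition up to a numerical prefactor

# band dominance for the equilateral lattice: k > 0 lies in a band iff cos(2kl) falls
# between the envelope curves h_-(k) and h_+(k)
l = 1.0
ks = np.linspace(0.01, 100.0, 200000)
hp = (1 - 18 * ks**2 + 9 * ks**4) / (3 * ks**2 + 1)**2
hm = (1 - 3 * ks**2) / (1 + 3 * ks**2)
cos2kl = np.cos(2 * ks * l)
in_band = (cos2kl >= np.minimum(hm, hp)) & (cos2kl <= np.maximum(hm, hp))
print(in_band.mean())   # the gaps occupy only a few per cent of [0, 100], in line with P_sigma(H) = 1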
As for the flat bands, similarly to Proposition 2.1, 2.2 and 3.1 of <cit.>, the point spectrum is non-empty only if the edge lengths are rationally related, as in that case there is an infinite number of k's for which sinak = sinbk = sinck = 0 holds; their specific values, i.e. the flat band's momenta, obviously depend on the ratios involved. For incommensurate lengths, the spectrum is purely absolutely continuous. Let us turn to the (non-flat) spectral bands. The condition (<ref>) can be rewritten as (3k^4+1)+2k^2(k^2-1)(cosθ_2/sinaksinck+cosθ_1/sinaksinbk+cos(θ_2-θ_1)/sinbksinck) - 2k^2(k^2+1)(akbk+bkck+ckak) = 0, provided we exclude the `Dirichlet points', which is a common nickname for the values of k such that sinlk = 0 for any of l ∈{a,b,c}. The conclusions we can draw for this condition are less specific than in the regular case; we focus on three particular energy regions. §.§ The high-energy asymptotics For large momentum values, (<ref>) can be written as 3+ 2(cosθ_2/sinaksinck+cosθ_1/sinaksinbk+cos(θ_2-θ_1)/sinbksinck) -2(akbk+bkck+ckak) = 𝒪(k^-2), or alternatively - (ak+bk+ck)^2+1/sin^2ak +1/sin^2bk +1/sin^2ck + 2(cosθ_2/sinaksinck+cosθ_1/sinaksinbk+cos(θ_2-θ_1)/sinbksinck) = 𝒪(k^-2). The left-hand side of this condition coincides with the one derived in <cit.> for the Kirchhoff coupling, where the continuous spectrum covers the whole positive half-line. In our case, the right-hand side does not vanish and gaps are generally present, however, in the language of probability (<ref>) we have * P_σ(H) = 1 for any choice of the lengths a,b,c. Let us now return to the form (<ref>) of the spectral condition and consider the momentum value k = mπ/a, m ∈ℕ assuming that all the hexagon lengths are incommensurate. Then the spectral condition reduces to + 2k^2(k^2-1)(sinbkcosθ_2+sinckcosθ_1) -(-1)^m 2k^2(k^2+1)(sinbkcosck+sinckcosbk) = 0, which can be rewritten as k^4[sinbk(cosθ_2-(-1)^mcosck)+sinck(cosθ_1-(-1)^mcosbk)] - k^2[sinbk(cosθ_2+(-1)^mcosck)+sinck(cosθ_1+(-1)^mcosbk)] = 0. It is obvious that the chosen points k cannot belong to the spectrum, as there are no values of θ_1 and θ_2 which would annulate the terms proportional to k^4 and to k^2 simultaneously. The spectrum is a closed set, hence one has to investigate a neighborhood of such a point. To this aim, we expand the condition (<ref>) for the momentum mπ/a+δ to the second order, 0 = C+δB+δ^2A +𝒪(δ^3), where C= -2(-1)^m(1+a^2/m^2π^2)(sinbmπ/acoscmπ/a+sincmπ/acosbmπ/a) +2(1-a^2/m^2π^2)(sinbmπ/acosθ_2+sincmπ/acosθ_1), B= a(-1)^m(3+a^4/m^4π^4)sinbmπ/asincmπ/a +2(1-a^2/m^2π^2)(a(-1)^mcos(θ_1-θ_2)+bcosbmπ/acosθ_2+ccoscmπ/acosθ_1) +4a^3/m^3π^3(sinbmπ/acosθ_2+sincmπ/acosθ_1) +4(-1)^ma^3/m^3π^3(sinbmπ/acoscmπ/a+sincmπ/acosbmπ/a) -2(-1)^m(1+a^2/m^2π^2)(a+b+c)cosbmπ/acoscmπ/a +2(-1)^m(b+c)sinbmπ/asincmπ/a, A= ac(-1)^m(3+a^4/m^4π^4)sinbmπ/acoscmπ/a +ab(-1)^m(3+a^4/m^4π^4)cosbmπ/asincmπ/a -4a(-1)^ma^5/m^5π^5sinbmπ/asincmπ/a +4a^3/m^3π^3(a(-1)^mcos(θ_1-θ_2)+bcosbmπ/acosθ_2+ccoscmπ/acosθ_1) -6a^4/m^4π^4(sinbmπ/acosθ_2+sincmπ/acosθ_1) -(1-a^2/m^2π^2)(b^2sinbmπ/acosθ_2+c^2sincmπ/acosθ_1) -6(-1)^ma^4/m^4π^4(sinbmπ/acoscmπ/a+sincmπ/acosbmπ/a) +4(-1)^ma^3/m^3π^3[(a+b+c)cosbmπ/acoscmπ/a-(b+c)sincmπ/asinbmπ/a] +(-1)^m(1+a^2/m^2π^2)[(a^2+b^2+c^2+2ab+2bc)sinbmπ/acoscmπ/a +(a^2+b^2+c^2+2bc+2ac)cosbmπ/asincmπ/a]. In the leading order, we then have δ≈ -C/B. 
The 𝒪(m^0) = 𝒪(1) terms in the numerator C can be canceled out with the appropriate choice of θ_1 and θ_2 in accordance with the high-energy limit presented earlier, specifically cosθ_2 = (-1)^mcoscmπ/a and cosθ_1 = (-1)^mcosbmπ/a. At the same time, the 𝒪(m^-2) terms present there remain and the expression becomes C = -4(-1)^ma^2/m^2π^2(sinbmπ/acoscmπ/a+sincmπ/acosbmπ/a), while the 𝒪(1) terms in the denominator B are not influenced by such a choice, and, after some easy algebraic manipulations, we find B = (-1)^m(5a+2b+2c)sincmπ/asinbmπ/a + 𝒪(m^-2). In the described situation, there are therefore gaps around points k = mπ/a of the halfwidth δ =1/m^2π^24a^2(sinbmπ/acoscmπ/a+sincmπ/acosbmπ/a)/(5a+2b+2c)sincmπ/asinbmπ/a+𝒪(m^-4), which behave as m^-2 at the momentum scale as m →∞. Since the spectral condition (<ref>) is symmetric with respect to exchanges of any two lengths a, b, c, the same can be said also about points k = mπ/b and k = mπ/c (with an appropriate permutation of the lengths in the expression of δ), all of which, together with k = mπ/a, appear in the spectrum periodically with respect to momentum. Of course, the situation would not be exactly the same when some or all of the lengths are commensurate, but it is similar. For definiteness, let us assume one commensurate pair only, e.g., b and m such that sinmπ = sinbmπ/a = 0. Due to the high energy limit, cosθ_1 must be chosen in the same way as before, being equal to (-1)^mcosbmπ/a = (-1)^m(b/a+1), while cosθ_2 is yet free of such constraints. Then C = -4(-1)^m(b/a+1)a^2/m^2π^2sincmπ/a, B= 2(-1)^mb/a(a+b)cosθ_2-2(-1)^m(-1)^mb/a(a+b)coscmπ/a+ 𝒪(m^-2), and we would get the same behavior, δ≈ m^-2, unless cosθ_2 coincides with (-1)^mcoscmπ/a (as in the high-energy limit in the incommensurate situation). In that case, we must include in our calculation also the A term, A =(-1)^m(b/a+1)(3ab+a^2+b^2+2bc+2ac)sincmπ/a+𝒪(m^-2) and to express δ as a pair of solutions of a quadratic equation. We are interested in the asymptotic behavior only, which allows us to use the approximation δ = -B/2A±√(B^2/4A^2-C/A) ⟹ δ≈ -B/2A±√(-C/A)(1-B^2/8AC) if B^2 ≪ AC. Since C = 𝒪(m^-2), B = 𝒪(m^-2) and A = 𝒪(1), we are left with δ = ±√(-C/A) +𝒪(m^-2) = ±1/mπ2a/√(3ab+a^2+b^2+2bc+2ac) +𝒪(m^-2) independently of the value of sincmπ/a, assuming it is non-zero. Should it also be zero, then we have an example of a flat band - if sinmπ = sinbmπ/a = sincmπ/a = 0, the whole spectral condition (<ref>) vanishes independently of θ_1 and θ_2. The argument regarding the permutation of lengths still holds. Let us now briefly mention two situations not included in the analysis above: the commensurability ratio in which sinmπ = cosbmπ/a = 0, and the case in which sinmπ = cosbmπ/a = coscmπ/a = 0. In the first one, we end up with C = -4(-1)^m(b/a+1)-1/2a^2/m^2π^2coscmπ/a, B= (-1)^m(b/a+1)-1/2(5a+2b+2c)sincmπ/a + 𝒪(m^-2), and δ =1/m^2π^24a^2/(5a+2b+2c)cmπ/a+𝒪(m^-4), which can be viewed as a limit case of incommensurate lengths. The second one once again leads to a point mπ/a being a part of the spectrum (not a flat band though), because then the choice cosθ_1 = cosθ_2 = 0 solves the spectral condition (<ref>). To summarize the discussion: * There is an infinite number of spectral gaps for any possible combination of dilated hexagon lengths. 
Their widths are generally of the order 𝒪(m^-2) at the momentum scale as m→∞ for incommensurate lengths, and they may be of order 𝒪(m^-1) around some points if the lengths are commensurate; recall that all the gaps behave like that if the hexagon is equilateral. §.§ The low-energy limit For k → 0, we expand condition (<ref>) around the point k=0, getting k^3(-abc + 2(a+b+c) + 2(acos(θ_2-θ_1) + bcosθ_2 + ccosθ_1)) = 𝒪(k^5). To decide whether there is an open gap around k=0 we have to find extrema of the left-hand side of (<ref>). Finding the maximum is easy; since the edge lengths are positive, it is sufficient to choose θ_1 = θ_2 = 0. The minimum requires a little more care. We reformulate Lemma 3.3 of <cit.> denoting for a moment acos(θ_2-θ_1) + bcosθ_2 + ccosθ_1 =: f(θ_1, θ_2). If 1/a+1/b+1/c≤ 2max{1/a,1/b,1/c}, then min_θ_1, θ_2 f(θ_1, θ_2) = -a-b-c+2min{a,b,c} and a spectral band in the vicinity of k = 0 exists provided 4min{a,b,c} < abc < 4(a+b+c). On the other hand, if 1/a+1/b+1/c≥ 2max{1/a,1/b,1/c}, then min_θ_1, θ_2 f(θ_1, θ_2) = -abc/2(1/a^2+1/b^2+1/c^2) and the band condition reads 2(a+b+c) - abc(1/a^2+1/b^2+1/c^2) < abc < 4(a+b+c). If these conditions are, and each particular situation, satisfied, the positive spectrum extends to zero. §.§ Negative spectrum As in the regular case, the respective spectral condition is obtained directly from (<ref>) through replacing real k with k = iκ. Referring to Theorem 2.6 of <cit.>, we know that there are at most two negative spectral bands, as at each of the two vertices in the elementary cell the matrix -R has exactly one eigenvalue in the upper complex plane. Moreover, the above expansion of the spectral condition in the leading order is the same as for the positive spectrum. The higher negative spectral band then extends to zero only if it is true for the lowest positive one, otherwise the spectrum has a gap around k = 0. §.§ Data availability statement Data are available in the article. §.§ Conflict of interest The authors have no conflict of interest. § ACKNOWLEDGEMENTS The research was supported by the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 873071. 99 BB13 R. Band, G. Berkolaiko, Universality of the momentum band density of periodic networks, Phys. Rev. Lett. 113 (2013), 13040. BE22 M. Baradaran, P. Exner: Kagome network with vertex coupling of a preferred orientation, J. Math. Phys 63 (2022), 083502 BET22 M. Baradaran, P. Exner, M. Tater: Spectrum of periodic chain graphs with time-reversal non-invariant vertex coupling, Ann. Phys. 443 (2022), 168992 BK13 G. Berkolaiko, P. Kuchment: Introduction to Quantum Graphs, Mathematical Surveys and Monographs, vol. 186. AMS, Providence, T.I., 2013; ISBN 9780821892114. ET18 P. Exner, M. Tater: Quantum graphs with vertices of a preferred orientation. Phys. Lett. A382 (2018), 283–287. ET21 P. Exner, M. Tater: Quantum graphs: self-adjoint, and yet exhibiting a nontrivial 𝒫𝒯-symmetry, Phys. Lett. A416 (2021), 127669 ET15 P. Exner, O. Turek: Spectrum of a dilated honeycomb network, Integral Equations and Operator Theory 81 (2015), 535–557. KN22 A. Kostenko, N. Nicolussi: Laplacians on Infinite Graphs, Mem. EMS, Berlin 2022. KS99 V. Kostrykin, R. Schrader: Kirchhoff's rule for quantum wires, II. The inverse problem with possible applications to quantum computers, Fortschr. Phys. 48 (2000), 703–716. Ku24 P. Kurasov: Spectral Geometry of Graphs, Birkhäuser, Berlin 2024. RB69 F.S. 
Rofe-Beketov: Self-adjoint extensions of differential operators in a space of vector-valued functions, Teor. Funkcii, Funkcional. Anal. Prilozh. 8 (1969), 3–24 (in Russian). SK15 P. Středa, J. Kučera: Orbital momentum and topological phase transformation. Phys. Rev. B92 (2015), 235152. SV23 P. Středa, K. Výborný: Anomalous Hall conductivity and quantum friction, Phys. Rev. B107 (2023), 014425. We80 J. Weidmann: Linear Operators in Hilbert Space, Springer, Heidelberg 1980.
http://arxiv.org/abs/2409.03305v1
20240905072106
Derangements in non-Frobenius groups
[ "Daniele Garzoni" ]
math.GR
[ "math.GR" ]
Daniele Garzoni, Department of Mathematics, University of Southern California, Los Angeles, CA 90089-2532, USA garzoni@usc.edu § ABSTRACT We prove that if G is a transitive permutation group of sufficiently large degree n, then either G is primitive and Frobenius, or the proportion of derangements in G is larger than 1/(2n^1/2). This is sharp, generalizes substantially bounds of Cameron–Cohen and Guralnick–Wan, and settles a conjecture of Guralnick–Tiep in large degree. We also give an application to coverings of varieties over finite fields. Derangements in non-Frobenius groups Daniele Garzoni ==================================== § INTRODUCTION Given a finite transitive permutation group G on a set Ω of size n≥ 2, an element g∈ G is called a derangement if it acts without fixed points on Ω. Chebotarev density theorem establishes a direct link between derangements and rational points of varieties over finite fields. In particular, derangements arise naturally in arithmetic geometry and were apparent already in the work of Frobenius in the nineteenth century. Derangements have other applications, for example to problems of random generation (see <cit.>) and to the structure of Brauer groups (see <cit.>). Denote by δ(G,Ω), or simply δ(G), the proportion of derangements of G on Ω. A lemma going back to Jordan asserts that δ(G)>0, and giving lower bounds for δ(G) has been a central problem in group theory in the past three decades; see <Ref>, <Ref> and <cit.> for context related to this problem. Answering a question of Lenstra motivated by the number field sieve (see <cit.>), Cameron–Cohen <cit.> showed that δ(G)≥ 1/n; more precisely, if n≥ 3 then one of the following holds: (i) G is a Frobenius group of order n(n-1) and δ(G)=1/n. (ii) δ(G)>1/n. The proof of this result is very short and elementary (see also <cit.>, which further simplifies the proof). See below for the definition of a Frobenius group. Later, Guralnick–Wan <cit.> showed that if n≥ 7 then one of the following holds: (i) G is a Frobenius group of order n(n-1)/a and δ(G)=a/n, where a∈{1,2}. (ii) δ(G)>2/n. The proof of this result is much harder than that of <cit.> – it requires, besides some nontrivial elementary arguments, the Classification of Finite Simple Groups. The groups attaining the bounds δ(G)=1/n and 2/n are very special: they are primitive Frobenius groups. We recall that a transitive permutation group G is called Frobenius if some nontrivial element fixes a point, and only the identity fixes at least two points. A classical result of Frobenius, based on character theory, states that G contains a regular normal subgroup, which is furthermore nilpotent by a result of Thompson. If G is Frobenius, then |G|=n(n-1)/a for a divisor a of n-1, and δ(G)=a/n. There are infinitely many examples attaining this bound; for instance, for every prime power n and every divisor a of n-1, consider G=_n ⋊ G_0 where G_0≤_n^× has order (n-1)/a. In particular, we find infinitely many examples with δ(G)=1/n,2/n, 3/n, 4/n… Given the exceptionality of Frobenius groups, it is natural to ask whether a considerably stronger lower bound holds in the other cases. The purpose of this paper is to answer this question in large degree, by showing that a bound of approximately 1/(2n^1/2) holds. What is more, it is sufficient to exclude groups that are both primitive and Frobenius in order to get this improvement. Let g(n):=n^1/2+1/2n∼1/2n^1/2. Let G be a finite transitive permutation group of sufficiently large degree n. 
Then one of the following holds: * G is primitive and Frobenius. * δ(G)≥ g(n). There are primitive non-Frobenius groups attaining equality in (2); take G=_n⋊ G_0, where n is a square, G_0=_1(n)⋊σ and σ is the n^1/2-th power map (see <Ref>). <Ref> generalizes substantially the aforementioned bounds of <cit.>. What is more, in <cit.>, Guralnick–Tiep conjectured that δ(G)≥ 1/n^1/2 if G is primitive affine and non-Frobenius. <Ref>, in particular, confirms (up to a necessary minor amendment) this conjecture in large degree. We stress at once that the case of primitive affine groups is the crux of this paper; see <Ref> and <Ref> for details. We state a consequence of <Ref> that highlights the aforementioned connection between derangements and arithmetic (see <cit.>). Let π X → Y be a separable morphism of sufficiently large degree n between integral normal quasi-projective varieties of dimension d over a finite field _ℓ. Assume that the Galois closure of π is geometrically integral, and that the monodromy group of π is not primitive and Frobenius. Then, |π(X(_ℓ))| ≤(1- g(n)) ℓ^d + O_π(ℓ^d-1/2). If π is defined by a polynomial f(x)∈_ℓ(Y)[x], then the condition that the monodromy group is primitive and Frobenius can be phrased field theoretically as follows: _ℓ(X)/_ℓ(Y) is minimal, and the Galois closure of _ℓ(X)/_ℓ(Y) is not _ℓ(X), and is generated by any two roots of f. Therefore the bound in <Ref> applies whenever this (very restrictive) condition does not hold. If the Galois closure of π (namely, the normalization of X in the Galois closure of _ℓ(X)/_ℓ(Y)) is not geometrically integral, then one must more generally count derangements in a coset (indeed this is one of the motivations of <cit.>, which addresses the case of curves). With regard to <Ref>, it seems worth noting that one gets almost the same bound (approximately 1/(60n^1/2)) just by excluding primitive Frobenius subgroups of _1(n), as opposed to all primitive Frobenius groups. Let f(n):= n^1/2+1/60n∼1/60n^1/2. Let G be a finite transitive permutation group of sufficiently large degree n. Then one of the following holds: * G is a primitive Frobenius subgroup of _1(n). * δ(G)≥ f(n). There are primitive (Frobenius) groups that do not lie in (1) and that attain equality in (2), of the form G=_n^2⋊ G_0, where _2(5)⊴ G_0; see <Ref>. §.§ About the proof We prove an elementary reduction of <Ref> to primitive groups, see <Ref>. Luczak–Pyber <cit.> and Fulman–Guralnick <cit.> proved the so-called Boston–Shalev conjecture, asserting that if G is a finite simple transitive permutation group of degree n, then δ(G)≥ϵ for an absolute constant ϵ>0. See also <cit.> for a conjugacy-class version. A consequence of this result is the following (see <cit.>): (Luczak–Pyber, Fulman–Guralnick) Let G be a primitive non-affine permutation group of degree n. Then δ(G)≥ϵ/log(n) for an absolute constant ϵ>0. Of course, if n is sufficiently large then /log(n) > g(n), and so in view of <Ref> and <Ref>, it remains to prove <Ref> in the case where G is primitive affine, so G=V⋊ G_0 where V is elementary abelian and G_0≤(V) is irreducible. In this case, there is an amusing phenomenon – G has many elements fixing no point of V=Ω if and only if G_0 has many elements fixing a point of V{0} (see <Ref>). In particular, denoting by α(G_0,V), or simply α(G_0), the proportion of elements of G_0 having eigenvalue 1 on V, our task is to lower bound α(G_0). This is a well-studied problem, especially for quasisimple groups; see for example <cit.> and the references therein. 
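This correspondence is easy to see in a toy case. The following brute-force sketch is our own illustration (not part of the argument below); it computes α(G_0) and the derangement proportion of the affine group V⋊G_0 for G_0 = GL_2(3) acting on V = F_3^2.

import itertools
import numpy as np

p = 3
V = [np.array(v) for v in itertools.product(range(p), repeat=2)]    # vectors of F_p^2

def det2(g):
    return (g[0, 0] * g[1, 1] - g[0, 1] * g[1, 0]) % p

mats = [np.array(m, dtype=int).reshape(2, 2) for m in itertools.product(range(p), repeat=4)]
G0 = [g for g in mats if det2(g) != 0]                               # G_0 = GL_2(p)

def fixes(g, x, v):
    return np.array_equal((g.dot(x) + v) % p, x)

zero = np.zeros(2, dtype=int)
alpha = sum(any(fixes(g, x, zero) for x in V if x.any()) for g in G0) / len(G0)

derangements = sum(1 for g in G0 for v in V
                   if not any(fixes(g, x, v) for x in V))
delta = derangements / (len(G0) * len(V))

print(len(G0), alpha, delta)    # 48, 0.4375 (= 21/48), 0.2963 (= 8/27); delta lies
                                # between (1 - 1/p)*alpha and alpha, so a large supply of
                                # eigenvalue-1 elements upstairs keeps delta large downstairs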
Much work has been devoted to the study of representations where every element has eigenvalue 1, which is to say, α(G_0)=1. The following theorem about irreducible linear groups implies <Ref> for primitive affine groups. We work over any finite field, not necessarily of prime order. Let h(n):=n^1/2+2/2(n-1)∼1/2n^1/2. Let d be a positive integer and q be a prime power, with qd sufficiently large. Let V≅_q^d and G≤_d(q) be irreducible, and r be maximal so that G≤_d/r(q^r). Then one of the following holds: * d=r and G acts semiregularly on V{0}. * G acts semiregularly on V{0}, and α(G)≥ 1/(60(q^d/2-1)) and δ(V⋊ G)≥ f(q^d). * α(G)≥ h(q^d) and δ(V⋊ G) ≥ g(q^d). (In (2) and (3), δ(V⋊ G)=δ(V⋊ G,V) refers to the affine action on V.) We already remarked that the equalities in the bounds for δ(V⋊ G) can be attained. The same is true for the bounds for α(G), see <Ref>. In most cases (e.g., for d/r≥ 3) we will prove the bound for α(G), and the required bound for δ(V⋊ G) will follow immediately. In order to prove <Ref>, we will reduce the problem to the case of primitive linear groups and exploit the powerful structure theory of these groups, see <Ref>. The reduction is elementary (<Ref>), so assume G is primitive. At this point, after some further reductions, we will focus on the generalized Fitting subgroup L (whose index in G is relatively small), and we will count elements with eigenvalue 1 in L. Assuming for simplicity that L acts absolutely irreducibly, we have L=G_1∘⋯∘ G_t, where G_i is either central or extraspecial or a power of a quasisimple group, and V=V_1⊗⋯⊗ V_t. This will essentially reduce the problem to lower bounding α(H) when H≤_1(q) (<Ref>), or H≤_2(q) (<Ref>), or H extraspecial or quasisimple (<Ref>). A nice feature of the proof is that we will be able to prove <Ref> only by working with large extraspecial or quasisimple groups (and in many cases, bounds much stronger than we need hold). Note, however, that in the central product L=G_1∘⋯∘ G_t, there may certainly be factors that have small order, and to which our bounds do not apply. In order to handle this issue, we will group together all the factors of small order (thereby paying a constant); what we gain in the factors of large order will be enough to compensate. We wonder whether, in the notation of <Ref>, the bound α(G)≫ 1/q^r holds. There are many examples where a matching upper bound holds, and the lower bound holds if d/r≤ 2 (by <Ref>) and for many quasisimple groups in defining characteristic, see <Ref> and also <cit.>. This problem could be considered as an analogue of the Boston–Shalev conjecture for affine groups over fields of bounded size. §.§ Notation We write f=O(g) or f≪ g if there exists a positive absolute constant C such that f≤ Cg. §.§ Acknowledgements I thank Bob Guralnick for suggesting the proof of <Ref> and for interesting discussions, and Michael Larsen for an explanation on the Lang–Weyl estimate for Suzuki and Ree groups. § REDUCTION TO PRIMITIVE GROUPS Assume that <Ref> holds for primitive groups. Then, δ(G)>g(n) for every imprimitive permutation group G of sufficiently large degree n. In fact, the proof will show that if G is imprimitive then δ(G)>1/(1.5n^1/2). Let Ω= Ω_1∪⋯∪Ω_t be a maximal system of imprimitivity, with |Ω_i|=m≥ 2 and t≥ 2, let Δ={Ω_1, …, Ω_t} and ρ G →(Δ)≅ S_t. Then, G^ρ is primitive. By <cit.>, we have δ(G,Ω)≥δ(G^ρ,Δ)≥ 1/t. If t is bounded this is ≫ 1 and we are done, so suppose that t is large. Case 1: G^ρ is not Frobenius. Then, by <Ref> δ(G,Ω)≥δ(G^ρ,Δ) ≥ g(t). 
Since n=mt≥ 2t, we have g(t) > g(n) and we are done. Case 2: G^ρ is Frobenius. We have |G^ρ|= t(t-1)/a for a divisor a of t-1, and δ(G^ρ, Δ) = a/t. Assume first t/a < 1.5n^1/2. Then a/t > 1/(1.5n^1/2) > g(n) and we are done. Assume then t/a ≥ 1.5n^1/2, i.e., t≥ 2.25ma^2. For each i=1, …, t, let H_i=Stab_G(Ω_i), so H_i acts transitively on Ω_i. Since G^ρ is Frobenius, a nontrivial element of G^ρ fixes at most one point. In particular, δ(G,Ω) ≥∑_i=1^t ( δ(H_i, Ω_i)/|G:H_i| - 1/|G^ρ|) = δ(H_1, Ω_1)- a/t-1. Now fix 0< < 1-1/1.1 ≤ 1-1/(1.1a) (e.g., =0.09) and take t large so that (t-1)/t> 1-. By <cit.>, we have δ(H_1, Ω_1)≥ 1/m, and the assumption t≥ 2.2ma^2 implies 1/m - a/t-1 > 1/m - a/(1-)t≥1/m( 1 - 1/(1-)2.2a) > 1/2m > 1.4/2n^1/2 > g(n), which concludes the proof. § AFFINE GROUPS: PRELIMINARIES Let G be a finite group and let V be an _qG-module of dimension d<∞. Denote by π the permutation character of G acting on V, and define η(G)=η(G,V) as the inverse of the harmonic mean of π, i.e., η(G)=1/|G|∑_g∈ G1/π(g). δ(V⋊ G)=1-η(G). Let gv∈ V⋊ G. We have u^gv= ug+v, so u^gv=u if and only if v=u-ug. It follows that gv is a derangement if and only if v∉ [g,V]. Let c(g):=(_V(g)), so π(g)=q^c(g). Since [g,V] is a subspace of V of dimension d-c(g), and counting derangements by summing over all g∈ G and all v∉[g,V], we have δ(V⋊ G) = 1/|G|∑_g∈ Gq^d - q^d-c(g)/q^d = 1- η(G) as wanted. Recall that α(G) denotes the proportion of elements of G with eigenvalue 1 on V. (1-1/q)α(G) ≤δ(V⋊ G)≤α(G). Writing α=α(G) and η=η(G), note that 1/|G|(1-α)|G|≤η≤1/|G|( α|G|/q+(1-α)|G|) = 1-(1-1/q)α. Now just apply Lemma <ref>. In particular, as already noticed in the introduction, in most cases in order to prove <Ref> it will be sufficient to bound α(G). § PROOF OF <REF> MODULO PRELIMINARY RESULTS Here we reduce the proof of <Ref> to the following three results, that we will prove in <Ref>. <Ref> holds if d=r, i.e., G≤_1(q^d). <Ref> holds if G is primitive and d/r=2, i.e., G≤_2(q^d/2). There exists an absolute constant C>0 such that if d≥ 3, G≤_d(q) is an absolutely irreducible quasisimple or extraspecial group of order at least C, and Z≤_q^×, then α(ZG)> 2|O|log(q)q^-d/2, where O=(G) if G is quasisimple and O=_2r(s) if G is extraspecial of order s^2r+1. The factor 2 in the inequality in <Ref> will be useful in the proof of <Ref> for technical reasons. In <Ref>, we do not assume that G is primitive because it will be convenient to quote the result in that form. In every case, let us show that the imprimitive case follows from the primitive one. If <Ref> holds when G is a primitive linear group, then it holds when G is an imprimitive linear group. Let G≤_m(q)≀ S_t preserve the decomposition V=V_1⊕⋯⊕ V_t, with t≥ 2, permuting transitively the factors, and assume the stabilizer H of V_1 acts primitively on V_1. Note α(G,V)≥α(H,V_1)/t. If H does not induce a semiregular group on V_1{0}, then α(H,V_1)≥ h(q^m) > 1/(2q^m/2) and so α(G,V)> 1/(2tq^m/2)≫ 1/q^d/3. Assume then that H induces a semiregular group on V_1{0}; then s:=|H^V_1|≤ q^m-1 and α(H,V_1)=1/s, and so α(G,V)≥ 1/(t(q^m-1)). If m≤ d/3 this is ≫ q^d/3, so assume m=d/2. For i=1,2, let N_i be the subgroup of H acting trivially on V_i. We have |G|=2|N_1|s. The elements of H with eigenvalue 1 are those of N_1∪ N_2, so α(G,V)≥2|N_1|-1/2|N_1|s≥1/2s≥1/2(q^d/2-1). Moreover, δ(V⋊ G,V)≥ 1/(4(q^d/2-1))>f(q^d) by <Ref>, so the case where G is semiregular on V{0} is done. Assume then G is not semiregular on V{0}. 
If |N_1|≥ 2, then we can improve the second inequality in (<ref>) and get α(G,V)≥ 3/(4(q^d/2-1)) > h(q^d). What is more, the nontrivial elements of N_1∪ N_2 fix q^d/2 vectors, so an easy modification of the proof of <Ref> gives δ(V⋊ G,V)≥ 3/(4(q^d/2-1))·(1-1/q^d/2)> g(q^d) and we are done. Assume finally N_1=1, so |G|=2s. If G contains at least three elements with eigenvalue 1, then α(G,V)≥3/|G| = 3/2s≥3/2(q^d/2-1), and so by <Ref> δ(V⋊ G,V)≥ 3/(4(q^d/2-1)) and we are done. Assume then G contains a unique nontrivial element x with eigenvalue 1; then |x|=2, and α(G,V) = 2/|G| ≥ 1/(q^d/2-1). We have x∈ G H, so write x=(x_1,x_2)τ where 1≠τ∈ S_2 and x_i∈_d/2(q). Since |x|=2, we have x_1x_2=1, from which x fixes q^d/2 vectors and so as above δ(V⋊ G,V)≥ 1/(q^d/2-1)·(1-1/q^d/2)>g(q^d) and we are done. We now prove <Ref> assuming <Ref>. Let G≤_d(q) be irreducible. In view of <Ref>, we may assume G is primitive. From now, for convenience we change notation and replace d/r by d and q^r by q. In particular, G≤_dr(q^1/r) is primitive and d is minimal so that G≤_d(q). Let H:=G∩_d(q). By Clifford's theorem, each characteristic subgroup L of H acts homogeneously. Let V_1 be an _q L-component, whose dimension we denote by a. By Schur's lemma, __q L(V_1) is a field extension of _q, say of degree b. By <cit.>, __d(q)(L)≅_d/b(q^b), and since G normalizes L, we get G≤_d/b(q^b). By the minimality of d, we deduce b=1, which is to say, V_1 is absolutely irreducible. Choose now L=F^*(H). We have L=(H) R, where R= G_1 ∘⋯∘ G_t and the following holds. There exists 0≤ r ≤ t such that for 1≤ i≤ r, G_i is an extraspecial p_i-group of order p_i^2r_i+1, of exponent p_i if p_i is odd; and possibly replacing G_i by ZG_i where Z≤_q^× has order 4, we have H/_H(G_i)≤ p_i^2r_i._2r_i(p_i). For r+1≤ i≤ t, G_i is a central product of ℓ_i copies of a quasisimple group S_i, and H/_H(G_i) ≤(G_i)=Aut(S_i)≀ S_ℓ_i. Recall now that (L)=_H(L). We have H/_H(L)=H/⋂_i _H(G_i) ≤∏_i=1^t H/_H(G_i) and therefore |H|/|L|≤∏_i=1^r |O_i| where for i≤ r, O_i := _2r_i(p_i), and for i≥ r+1, O_i:= Out(S_i)≀ S_ℓ_i. Assume first that L acts (absolutely) irreducibly on V. If d≤ 2, we have G≤_1(q) or _2(q) and we conclude by <Ref>. Assume then d≥ 3. We will prove that α(L,V)> 2log(q)|H:L|q^-d/2, which implies α(G, V) > 2/q^d/2 since |G:L|≤log(q)|H:L|, and so δ(V⋊ G,V)> 1/q^d/2 by <Ref>. This will conclude the proof. Since L is absolutely irreducible and L=(H)R, R is also absolutely irreducible. Then, by <cit.> we deduce that W=W_1⊗⋯⊗ W_t, where W_i is an absolutely irreducible _qG_i-module of dimension d_i. In particular, for 1 ≤ i ≤ r, W_i is faithful and d_i=p_i^r_i. For r+1≤ i≤ t, W_i = M_i_1⊗⋯⊗ M_iℓ_i where M_i_j≅ M_i is a faithful absolutely irreducible _qS_i-module of dimension m_i (again by <cit.>). (In other words, W_i is a tensor product of ℓ_i copies of M_i, but it will sometimes be useful to make use of the indices i_j.) We may take it that if S_i≅ S_j with i≠ j then M_i≇M_j. Let now C>0 satisfy the following conditions: ♢ C satisfies the conclusion of <Ref>. ♢ If q is a power of s, s(s^2-1)≥ C, and either ℓ≥ 3 or ℓ≥ 1 and s<q, then 2log(q)(2slog(s))^ℓℓ! < q^2^ℓ-1. . ♢ If q(q^2-1)≥ C and a≥ 2 then 2log(q)(24qlog^2(q))^a/2(1+δ· qlog(q)) < q^2^a-1, where δ=0 if a=2, and δ=1 if a≥ 3. Let us also record the following inequality, which holds for d≥ 3, ℓ≥ 1, and q a prime power: ℓ! q^dℓ/2≤ q^d^ℓ/2. Now that we fixed C, let g(C) be so that for every ℓ≥ g(C) and every m≥ 2, 2C^Cℓℓ!q^mℓlog(q) < q^m^ℓ/2 (It is easily seen that g(C) exists.) 
For later use, note that if G is a finite group of order at most C, then (crudely) |(G)|≤ C^C. Let now f(C) be equal to 1, plus the product of all the following quantities: For every extraspecial group of order s^2f+1≤ C, s^2f+1|_2f(s)|; For every quasisimple group L of order at most C and for every ℓ≤ g(C), (|(L)|^ℓℓ!)^|L|. (For a later use, note that |L| is an upper bound for the number of equivalence classes of absolutely irreducible _q L-representations.) Now let ♢ Ω_1 be the set of indices 1≤ i ≤ t such that either i≤ r and |G_i|≤ C, or i≥ r+1 and |S_i|≤ C and ℓ_i ≤ g(C). ♢ Ω_2 be the set of indices r+1≤ i ≤ t such that |S_i|≤ C and ℓ_i > g(C). ♢ Ω_3 be the set of indices r+1≤ i ≤ t not belonging to ∪_j≤ 2Ω_j and such that m_i=(M_i)=2, ℓ_i≤ 2 and S_i≅_2(q). ♢ Ω_4 be the set of indices r+1≤ i ≤ t not belonging to ∪_j≤ 2Ω_j and such that m_i=(M_i)=2, and ℓ_i ≥ 3 or S_i≅_2(q_i) with q_i<q. ♢ Ω_5 be the set of remaining indices. For j=1, …, 5, denote d(α_j):=∏_i∈Ω_id_i. Since n is sufficiently large, we may assume that ∪_i>1Ω_i ≠∅. Now choose any j∈∪_i>1Ω_i. If j≤ r replace G_j by (H)G_j; if j≥ r+1 replace S_j_1 by (H)S_j_1. From now on we will not specify this in the notation. In particular, L=G_1∘⋯∘ G_t. We make use of the following easy observation: If g_i∈ G_i has eigenvalue 1 on W_i, then the image of (g_1, …,g_t) in L has eigenvalue 1 on V. We deduce that α(L, V)≥∏_1≤ i ≤ 5α_i where α_i = ∏_j∈Ω_iα(G_j, W_j). Observe that by definition of f(C), α_1/∏_i∈Ω_1|O_i|≥1/∏_i∈Ω_1|G_i||O_i| > 1/f(C). Moreover, by <cit.> and Lemma <ref>, for i∈Ω_2, α(S_i,M_i)≥δ(M_i⋊ S_i,M_i)≥ 1/q^m_i, therefore by <Ref>, noting that d_i=m_i^t_i, we get that if Ω_2≠∅ then α_2/∏_i∈Ω_2|O_i|≥∏_i∈Ω_21/C^Cℓ_iℓ_i! q^m_iℓ_i > 2log(q) ∏_i∈Ω_21/q^d_i/2≥2log(q)/q^d(α_2)/2. (Note that the assumption that Ω_2≠∅ is only needed to allow the factor 2log(q). The same will hold for Ω_4 and Ω_5, below.) By <Ref>, by the definition C and by (<ref>), we have that if Ω_5≠∅ then α_5/∏_i∈Ω_5|O_i| > 2log(q) ∏_1≤ i≤ r i∈Ω_51/q^d_i/2∏_r+1≤ i≤ t i∈Ω_51/ℓ_i! q^m_iℓ_i/2≥2log(q)/q^d(α_5)/2. Note now that for i∈Ω_3∪Ω_4, S_i=_2(q_i) or (H)_2(q_i), where q is a power of q_i and M_i is either the natural module or the dual. (And there is at most one i_j for which S_i_j≠_2(q_i).) We have α(S_i,M_i)≥ 1/q_i (see <Ref>). Using |O_i|≤ 2log(q_i) and (<ref>) (applied with s=q_i) we deduce that if Ω_4≠∅ then α_4/∏_i∈Ω_4|O_i|≥∏_i∈Ω_41/(2log(q_i)q_i)^ℓ_iℓ_i! > 2log(q)/q^d(α_4)/2 Now, let us address Ω_3. If a:=log(d(α_3)) = ∑_i∈Ω_3ℓ_i is even, we choose an arbitrary matching of the factors S_i_j, i∈Ω_3, 1≤ j≤ℓ_i. If a is odd, we do the same by leaving out an arbitrary factor. If S_ν and S_η are matched, we readily see that α(S_ν∘ S_η,M_ν⊗ M_η)≥ 1/(3q). (Indeed, choose an element of S_ν having eigenvalues in _q^× – for q≥ 7 the proportion of choices is at least (q-3)/(2(q-1))≥ 1/3 – and choose an element of S_η having eigenvalue λ^-1, where λ is an eigenvalue of the first element – the proportion of choices is at least 1/q, see <Ref>.) Using that |O_i|≤ 2log(q), we get α_3/∏_i∈Ω_3|O_i|≥1/qlog(q)1/(24qlog^2(q))^a/2 where the first factor 1/(qlog(q)) appears only if a is odd. Is is now easy to deduce the desired conclusion (<ref>), using the assumption d≥ 3. Indeed, if a≥ 2 then by <Ref>, the right-hand side of (<ref>) is at least 2log(q)/q^d(α_3)/2, and for a=1 it is equal to 1/(qlog(q)). 
In particular, if a=0 or a≥ 2 then we get from <Ref> α(L,V)/|H:L|> 2log(q)/f(C)q^(d(α_2)+d(α_3)+ d(α_4)+ d(α_5))/2 which for sufficiently large qd is at least 2log(q)q^-d/2, giving (<ref>) as desired. If a=1, then the assumption d≥ 3 implies that Ω_1∪Ω_2∪Ω_4∪Ω_5≠∅, and one concludes similarly, using that d(α_i)=1 or ≥ 3 for i=2,4,5. Assume now L is not irreducible. Letting V' be a component, we have α(L,V')=α(L,V) and |V'|≤ |V|^1/2, so if d':=(V')≥ 3 then the conclusion follows immediately from <Ref> (and in fact, we can simply use |V'|≤ |V|). If d'=2, then d≥ 4 and by <Ref>, α(L,V')≫ 1/q. Since |H:L|≪ 1 we have α(L,V)≫ 1/q ≥ 2log(q) |H:L|q^-d/2 for qd large. This concludes the proof of <Ref> assuming <Ref>. § _1(Q) In this section we prove <Ref> in the case where d=r (that is, <Ref>). The bulk of the proof boils down to elementary number theory. We will work with any subgroup of _1(q^d) (with no irreducibility assumption); for convenience, we replace q^d by q in the notation, so G≤_1(q) is not semiregular on V{0}. Let G≤_1(q) be such that G is not semiregular on V{0}. If q is sufficiently large then α(G,V)≥ h(q) and δ(V⋊ G,V)≥ g(q). Let us now fix some notation. Write q=p^f, where G≤_1(q)⋊Gal(_q/_p) and G projects onto Gal(_q/_p) (so in this section p need not be prime). Since G is not semiregular on V{0}, we have f>1. Denote H:=G∩_1(q) and t:=|H|, and write G=H,z, with z=τ x, τ is the p-th power map and x∈_1(q). Let x:=xH∈_1(q)/H and m:=|x|, so m | (q-1)/(p-1) and m| (q-1)/t. We denote by A the set of nontrivial elements of G fixing a nonzero vector, and by (a,b) the greatest common divisor of the positive integers a and b. Fix a coset z^ℓ H, with ℓ a proper divisor of f. We have that A∩ z^ℓ H≠∅ if and only if x^(p^ℓ-1)/(p-1) is a (p^ℓ-1)-th power. Moreover, if this is the case, |A∩ z^ℓ H|=(q-1/p^ℓ-1, t). Write y:=x^(p^ℓ-1)/(p-1), so z^ℓ=τ^ℓ y. For 0≠ v∈_q and h∈ H we have vz^ℓ h = v^p^ℓyh, and this is equal to v if and only if yh = v^1-p^ℓ. In particular, there exist h and v≠ 0 such that vz^ℓ h=v if and only if y is a (p^ℓ-1)-th power modulo H. Assume now that yh=v^1-p^ℓ. Then A ∩ z^ℓ H is the set of elements obtained by multiplying z^ℓ h by a (p^ℓ-1)-th power of _1(q) contained in H. The number of these elements is ((q-1)/(p^ℓ-1),t), and this concludes the proof. We introduce further notation. For a prime number r and a positive integer a, we let γ_r(a) be the integer i such that r^i is the r-part of a. Moreover, we let γ_r(a):=γ_r((p^a-1)/(p-1)). Finally, we write m=(q-1)/(tC) for a positive integer C. We will often be concerned with several primes r_1, …, r_b. In this case, for convenience we will write γ^i(a)=γ_r_i(a), and similarly γ^i(a)=γ_r_i(a). Let r_1, …, r_b be the distinct prime divisors of (f,p-1) (possibly b=0), and let ℓ<f be a divisor of f. Then, A ∩ z^ℓ H≠∅ if and only if, for every i=1, …, b, one of the following holds: (i) γ^i(m)≤γ^i(ℓ). (ii) γ^i(p^ℓ-1) ≤γ^i(C) + γ^i(ℓ), i.e., γ^i(p-1) ≤γ^i(C). By <Ref>, A∩ z^ℓ H≠∅ if and only if x^(p^ℓ-1)/(p-1) is a (p^ℓ-1)-th power. Note that x^(p^ℓ - 1)/(p-1) is a (p^ℓ -1)-th power if and only if m/(m,(p^ℓ-1)/(p-1))|(p^f-1)/t/((p^f-1)/t, p^ℓ-1). Using that Cm=(p^f-1)/t, we see that this is equivalent to (Cm, p^ℓ-1) | C· (m,(p^ℓ-1)/(p-1)). This is equivalent to saying that, for every prime number r, min{γ_r(C) + γ_r(m),γ_r(p^ℓ-1)}≤γ_r(C) + min{γ_r(m), γ_r(ℓ)}. We need to show that (<ref>) holds for every r if and only, for every i=1,…, b, (i) or (ii) in the statement holds. 
We first claim that (<ref>) holds for r=r_i if and only if (i) or (ii) holds for i. This is a straightforward check; let us show, for instance, that (i) or (ii) in the statement for i implies (<ref>) for r=r_i. Assume (i). Then the RHS of (<ref>) for r=r_i is γ^i(C) + γ^i(m). But this term appears in the minimum of the LHS, so (<ref>) holds. Assume, now, (ii), and assume that (i) does not hold (otherwise we are in the previous case). Then γ^i(m)> γ^i(ℓ), so by (ii) the LHS is γ^i(p^ℓ-1); moreover the RHS is γ^i(C) + γ^i(ℓ), hence (<ref>) holds by (ii). The converse implication for r=r_i is proved similarly: Assume that (<ref>) holds for r=r_i, and assume that (i) does not hold for i; then deduce easily that (ii) must hold. In order to conclude the proof, it is sufficient to show that (<ref>) always holds for every prime r not dividing (f,p-1). In order to check this, we may assume γ_r(m)≥ 1, otherwise (<ref>) holds easily. So assume that γ_r(m)≥ 1 and that r does not divide (f,p-1). If r does not divide p-1, then γ_r(p^ℓ-1) = γ_r(ℓ). Now, if γ_r(m)≤γ_r(ℓ), then by the same argument as above (<ref>) holds. If, instead, γ_r(m)≥γ_r(ℓ), then the RHS is γ_r(C) + γ_r(ℓ) = γ_r(C) + γ_r(p^ℓ-1), and γ_r(p^ℓ-1) appears in the minimum in the LHS, hence (<ref>) holds. Therefore, in order to conclude the proof, it is sufficient to show that, if γ_r(m)≥ 1 and r does not divide (f,p-1), then r does not divide p-1. Assume the contrary. We have r| m| (p^f-1)/(p-1), hence r| (p^f-1)/(p-1) - (p-1) = 2+p^2+⋯ + p^f-1. But r divides also p^2-1; hence r divides 2+p^2+⋯ + p^f-1- (p^2-1) = 3+p^3+⋯ +p^f-1. Going on in this way, we see that r divides f, hence it divides (f,p-1), which is a contradiction. This concludes the proof. The previous lemma reduces the problem to elementary number theory. We need a few more lemmas. Let r be a prime, and assume γ_r(p-1)≥ 1 and r ∤ j. Then γ_r(p^j-1)=γ_r(p-1). Note that (p^j-1)/(p-1) = 1+p+⋯ + p^j-1≡ j ≢0 r. Let r be a prime and assume i≥ 1 and (r,i)≠ (2,1). If γ_r(p-1)=i, then γ_r(p^r-1)=i+1. Assume p-1=ar^i, with (a,r)=1, so p≡ ar^i +1 r^i+2. Then p^r ≡ (ar^i+1)^r ≡ a^rr^ir + rar^i + 1 r^i+2. If (r,i)≠ (2,1), then this is ≡ ar^i+1 + 1r^i+2, whence the statement holds. Note that for (r,i)= (2,1) <Ref> does not hold. Indeed, if γ_2(p-1)=1, then γ_2(p^2-1)≥ 3. Let r be an odd prime, let i≥ 1 and assume γ_r(p-1)≥ 1. Then γ_r((p^r^i-1)/(p-1))=i. In particular, γ_r((q-1)/(p-1))=γ_r(f). By <Ref>, γ_r(p^r-1) = γ_r(p-1)+1, hence the first assertion of the statement holds for i=1; then use induction. The last assertion (“In particular...”) follows from this and Lemma <ref>. The case r=2 is different. Let i≥ 1 and assume γ_2(p+1) = j, with j≥ 1. Then γ_2((p^2^i-1)/(p-1))=i+j-1. In particular, γ_2((q-1)/(p-1))=γ_2(f)+j-1. By induction on i. The case i=1 is the hypothesis. For i≥ 2 write p^2^i - 1 = (p^2^i-1 +1)(p^2^i-1 -1). We have p^2^i-1+1 ≡ 2 4 (since 2^i-1 is even), hence γ_2(p^2^i-1)= γ_2(p^2^i-1-1) + 1, whence the statement follows by induction and <Ref>. We are now ready to prove <Ref>. We need to show that if G is not semiregular on V{0}, then α(G)≥ g(q) and δ(V⋊ G)≥ h(q), where g and h are defined in (<ref>) and (<ref>). Assume first there exists ℓ| f, ℓ < f/2, such that A∩ z^ℓ H≠∅. By <Ref>, α(G)> ((q-1)/(p^ℓ-1),t)/tf≥1/(p^ℓ-1,t)f > 1/fp^ℓ. (In the second inequality we used that (a,bc) ≤ (a,b)(a,c).) For ℓ≤ f/3 we have fp^ℓ≪ q^1/3, hence this case is done. Assume now that f is even and A⊆ z^f/2H. We will in fact prove the desired bound for every q. 
We start with the bound to α(G); the bound to δ(V⋊ G) will follow easily at the end. Write (t,p^f/2-1) = (p^f/2-1)/B. Then m|p^f-1/t| B(p^f/2+1). Let r_1, …, r_b denote the distinct prime divisors of (f,p-1) (possibly b=0). By Lemma <ref>, for every i=1,…, b, one of the following holds: * γ^i(m) ≤γ^i(f/2). * γ^i(p-1) ≤γ^i(C). Let s be a prime divisor or f. Note that, for every i such that s≠ r_i, by Lemma <ref> we have γ^i(f/s)=γ^i(f). In particular, since m|(p^f-1)/(p-1), certainly γ^i(m) ≤γ^i(f) = γ^i(f/s). If s is odd then by assumption A∩ z^f/sH=∅. We deduce from Lemma <ref> that (a) For any odd prime s dividing f, we have that s|(p-1). Let now i∈{1, …, s} be such that r_i is odd (if it exists). By the same reasoning as above (with s=r_i), since A∩ z^f/r_iH=∅, by Lemma <ref> we must have γ^i(m) > γ^i(f/r_i) and γ^i(p-1) > γ^i(C). Moreover, by Lemma <ref>, γ^i(f/r_i) = γ^i(f)-1. Recall, now, that (1) or (2) holds. But (2) cannot hold, because it would contradict the final inequality of the previous paragraph; so (1) holds. We have, therefore, γ^i(f/r_i) < γ^i(m) ≤γ^i(f/2) = γ^i(f/r_i) + 1 = γ^i(f)=γ^i(f) (the last equality follows from <Ref>), so it must be γ^i(m) = γ^i(f)=γ^i(f). We summarize this: (b) For any i such that r_i is odd, γ^i(m) =γ^i(f)= γ^i(f). Then, we claim that (c) Either p is odd, or f≡ 2 4. Indeed, assume, by contradiction, that p is even and f≡ 0 4. Then, for every i, r_i is odd and γ^i(f/2) = γ^i(f/4)=γ^i(f), hence A∩ z^f/2H≠∅ if and only if A∩ z^f/4H≠∅, which contradicts our assumption. This proves (c). Now, by <Ref> we have α(G) = (t,p^f/2+1)/tf + 1/tf≥1/f(t,p^f/2-1) +1/tf= B/f(p^f/2-1) +1/tf. By (a) and (b), we have f/2^γ_2(f)| m, so by <Ref> tf | 2^γ_2(f)(p^f-1). By (<ref>), f/2^γ_2(f)| m| B(p^f/2+1), and for i such that r_i is odd, r_i ∤ p^f/2+1, so that f| 2^γ_2(f)m | 2^γ_2(f)B. . If γ_2(f)=1, then by (<ref>), (<ref>) and (<ref>) we get α(G)≥B/f(p^f/2-1) +1/tf≥1/2(p^f/2-1) + 1/2(p^f-1)= h(q), as wanted. From now, therefore, we assume N:=γ_2(f) ≥ 2. By (c), we have p odd. If t is odd, then γ_2(B) = γ_2(p^f/2-1). By Lemma <ref>, γ_2(p^f/2-1)≥ N+j-1 ≥ N, where j:=γ_2(p+1)≥ 1. Hence γ_2(B)≥γ_2(f), so by (<ref>) f| B and by (<ref>) α(G) > B/f(p^f/2-1)≥1/p^f/2-1 and we are done. Assume now that N≥ 2 and t is even. If γ_2(t)=γ_2(p^f-1), then m|p^f-1/t is odd, so γ_2(m)=0≤γ_2(f/4), therefore A∩ z^f/4H≠∅, contradiction. Assume finally that N≥ 2, t is even and γ_2(t)<γ_2(p^f-1). In particular, γ_2(t) ≤max{γ_2(p^f/2-1), γ_2(p^f/2+1)}. Also, clearly min{γ_2(p^f/2-1), γ_2(p^f/2+1)}=1. It is easy, then, to deduce α(G) = (t,p^f/2+1)/tf +1/tf= 2/f(t,p^f/2-1) +1/tf= 2B/f(p^f/2-1) +1/tf. Now, if r_i is odd, by the validity of (1) or (2) we deduce that either γ^i(m)≤γ^i(f/4) or γ^i(p-1)≤γ^i(C). Since A∩ z^f/4H=∅, it follows from <Ref> that (2) does not hold, and so γ_2(m)≤γ_2(f/2), and also γ_2(m)> γ_2(f/4). It follows γ_2(m) = γ_2(f/2) = N+j-2≥ N-1, so by <Ref> f| 2m, and by (<ref>) ft| 2(p^f-1). Since by (<ref>) m| B(p^f/2+1), we deduce γ_2(B) ≥ N+ j - 3 ≥ N-2, and so by (<ref>) and (<ref>) we get α(G) = 2B/f(p^f/2-1) + 1/tf≥1/2(p^f/2-1) + 1/2(p^f-1) = h(q). The lower bound to α(G) is proved. For the lower bound to δ(V⋊ G), note that every element of A fixes exactly p^f/2 vectors, and so by <Ref> and some reorganization we get δ(V⋊ G) = q^1/2-1/q(α(G) + 1/|G|q^1/2). From inspection of this proof, we have either |G|≤ 2(q-1), or α(G)≥ 1/(q^1/2-1), in which case we are done. Therefore we may assume |G|≤ 2(q-1), and using α(G)≥ h(q), we get δ(V⋊ G)≥ g(q) by some reorganization. 
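The counting above can be cross-checked by brute force in the smallest even-f case q = 9. A minimal sketch (our own illustration, with F_9 realised as F_3[w]/(w^2+1) and G generated by GL_1(9) together with the cube map, i.e. the q^{1/2}-th power map; the same group reappears in the example below):

import itertools
from math import gcd

p, f = 3, 2                                   # q = p^f = 9
elements = list(itertools.product(range(p), repeat=2))      # pairs (a, b) meaning a + b*w

def mul(x, y):
    a, b = x
    c, d = y
    return ((a * c - b * d) % p, (a * d + b * c) % p)        # uses w^2 = -1

def frob(x, j):
    for _ in range(j):                                       # x -> x^p, applied j times
        x = mul(mul(x, x), x)
    return x

units = [x for x in elements if x != (0, 0)]
group = [(c, j) for c in units for j in range(f)]            # maps x -> c * x^(p^j)

def fixes_nonzero(c, j):
    return any(mul(c, frob(x, j)) == x for x in units)

alpha = sum(fixes_nonzero(c, j) for (c, j) in group) / len(group)
t, q = len(units), p ** f
predicted = gcd(t, p ** (f // 2) + 1) / (t * f) + 1 / (t * f)    # the coset count above
h = (q ** 0.5 + 2) / (2 * (q - 1))
print(alpha, predicted, h)                                   # all three equal 5/16 = 0.3125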
Let q=p^f with f even and G=_1(q)⋊σ, where σ is the q^1/2-th power map. By <Ref>, the proportion of elements in G_1(q) fixing a nonzero vector is (q^1/2+1,t)/2t = 1/2(q^1/2-1) (with t=q-1). Counting the identity, we have α(G) = 1/2(q^1/2-1) + 1/2(q-1) = h(q) and so equality can be attained in <Ref>(3). What is more, by the same calculation as at the end of the previous proof, we get δ(V⋊ G)=g(q), and so equality can be attained in <Ref>(2). § _2(Q) In this section we prove <Ref>. For convenience, we replace q^d/2 by q in the notation. The cases to consider are when G normalizes _2(5), Q_8, or _2(q_0) with q a power of q_0≥ 4. We consider each case in turn. In the first two cases, in some sense we will argue by reducing the problem to _1(q), and so will use several times <Ref>, which we proved in <Ref>. §.§ _2(5) Assume q≡± 1 10 and let L:=_2(5), L≤ G ≤__2(q)(L)=LZ⋊τ, where Z=_q^×, q=p^f, p prime, and τ is the p-th power map. Recall that f(n) was defined in (<ref>). For q sufficiently large we have α(G) ≥ 1/(60(q-1)) and δ(V⋊ G) ≥ f(q^2). If moreover G is not semiregular on V{0}, then α(G)≥ 31/(60(q-1)) and δ(V⋊ G) > 31/(60(q-1))· (1-1/q). In this proof, whenever we write “semiregular”, we mean “semiregular on V{0}”. Set A:=G∩ (Z τ), and note that |G:A|≤ 60. Assume first that G is semiregular. Then A is semiregular and so |A|≤ q-1, from which α(G, V)≥α(A,V)/60≥1/60(q-1). Note also that the set of derangements of G is V{0}, and so δ(V⋊ G,V)=q^2-1/|G|q^2≥q^2-1/60(q-1)q^2 = f(q^2), as wanted. Assume now that G is not semiregular; we will show that α(G,V)≥ 31/(60(q-1)), and we will deduce the bound to δ(V⋊ G,V) at the end. If A is not semiregular, by <Ref> we have α(A,V)≥ h(q) > 1/2q^1/2, and so α(G, V)≥α(A,V)/60 > 1/120q^1/2 and we are done. Assume then A is semiregular, so |A|≤ q-1. Assume first that H:=G∩_2(q) is semiregular, and let 1≠ g=xσ∈ G fix a nonzero vector, where x∈_2(q) and σ is a power of τ. Since H is semiregular, |g|=|σ|, and so by <cit.> there exists y∈_2(q) such that g^y=zσ, where z∈ Z. Replacing G by G^y, we are reduced to the case where A is not semiregular, and so this case is done. Assume finally that A is semiregular and H is not semiregular. Note that in L there are 30 elements of order 4, 20 elements of order 3, 20 elements of order 6, 24 elements of order 5, 24 elements of order 10, and 2 central elements. Let H=LZ' where Z'≤ Z. If 3∤ q, then a nontrivial element of H with eigenvalue 1 is of the form gz, where g∈ L, 1≠ z∈ Z', and g has eigenvalue z^-1. It is easy to see that α(G,V)≥ 31/(60(q-1)) (attained when q≡ 1 mod 4, q≡ -1 mod 3, q≡ -1 mod 5, and Z'=Z). Assume finally that 3| q, so element of order 3 have eigenvalue 1. Since q≡± 1 5, we also have q≡ 1 8. In particular, we may assume that 4∤ |Z'|, for otherwise we have α(G,V)≥ 51/(60(q-1)). We can also assume that |G|=60(q-1) and |A|=q-1, otherwise α(G,V)≥ 21/(30(q-1)). Now, take g∈ L of order 4 such that τ normalizes g; up to conjugation we may take it that J:=A,g acts diagonally on V. Let W be the space generated by the first basis vector, and let J be the group induced by J on W; since 4∤ |Z'|, note that |J| = |J|. If J is not semiregular on W{0} then by <Ref> α(J,W)=α(J, W)>1/(2q^1/2) and so α(G,V)≥α(J,W)/30>1/(60q^1/2) and we are done. Therefore we assume that J is semiregular on W{0}, from which |J|≤ q-1 and so |G|≤ 30(q-1), contradicting our assumption. This proves the lower bound to α(G,V). 
For the lower bound to δ(V⋊ G,V), note that every bound to α(G,V) in the proof is given by counting elements of H=G∩_2(q) (or otherwise α(G,V)≫ 1/q^1/2). In particular, every such nontrivial element fixes q vectors, and so the bound follows easily from <Ref>. Assume q≡ -160 and let G=ZL where Z=_q^×. Then G acts semiregularly on V{0} and so α(G)=1/(60(q-1)) and δ(V⋊ G)=(q+1)/(60q^2)=f(q^2). In particular, the bounds in <Ref>(2) (and <Ref>(2)) are sharp. §.§ Q_8 Let q be odd and Q_8≤ G≤__2(q)(Q_8)=KZ⋊τ, where K is a double cover of S_4, Z=_q^×, q=p^f, p prime, and τ is the p-th power map. We set Ŝ_̂4̂:=_2(3) and we denote by S_4 the other double cover of S_4, so K∈{Ŝ_̂4̂, S_4}. For q sufficiently large we have α(G) ≥ 1/(24(q-1)). If moreover G is not semiregular on V{0}, then α(G)≥ 13/(24(q-1)) and δ(V⋊ G)≥ 13/(24(q-1))· (1-1/q). As in the proof of <Ref>, we set A:=G∩ Zτ and when we write “semiregular” we mean “semiregular on V{0}”. Since |G:A|≤ 24, if G is semiregular we have α(G,V)≥α(A,V)/24≥1/24(q-1). Assume then G is not semiregular; we want to show α(G,V)≥ 13/(24(q-1)), and we will deduce the bound to δ(V⋊ G, V) at the end of the proof. Setting H:=G∩_2(q), exactly as in the proof of <Ref>, we reduce to the case where A is semiregular and H is not semiregular. We assume (as we may) that if p≡± 3 8 then K=Ŝ_̂4̂. Write G=N,g_1z_1,g_2z_2σ, where N:=G∩ K, g_1,g_2∈ K, z_1,z_2∈ Z, σ is a power of τ, and H=N,g_1z_1. If |N|=16, or 48, or 24 with K= Ŝ_̂4̂, define β:=|N|. If |N|=8, define β:=24. Assume finally |N|=24 with K=S_4. If q≡ -1 8, or q≡ 1 8 and |z_1|_2=8 and g_1∉N, define β:=48; in the other cases, define β:=24. We will now show that either α(G,V)≫ 1/q^1/2 (in which case the proof is complete) or |G|≤β(q-1)/2. Note that this is immediate if β=48, so for now we may assume β<48, and in particular |N|<48. Let J:=-1, g_1z_1,g_2z_2σ. If we show that |J|≤ q-1, then |G|≤ |N|(q-1)/2, which is enough in all cases since β≥ |N|. We will accomplish this stronger bound in some cases. If we may take g_1=g_2=1 then J≤ A has order at most q-1 as desired, so assume this is not the case. Since g_1 and g_2 normalize N, if |N|=16 then we may take g_1=g_2=1 and so these cases are excluded. If |N|=24, then we may take g_1 or g_2 is equal to 1, and the other, call it g, is of 2-power order. We may assume g∉N as explained in this paragraph. Since we are assuming β<48, we have either K=Ŝ_̂4̂ or q≡ 1 8. In particular, g is diagonalizable over _q, and we may take it diagonal, of order 2 if K=Ŝ_̂4̂, and of order 8 if K=S_4. Assume first g=g_2. Let W be the space generated by the first basis vector and let J be the group induced by J on W; note |J|=|J|. If J is semiregular then |J|≤ q-1 as desired. If J is not semiregular, then by <Ref> we have α(J,V)> 1/(2q^1/2), and so α(G,V)≥α(J,V)/24> 1/(48q^1/2) and we are done. Assume now g=g_1 and K=S_4. Then p≠ 3 and since β<48, we have q≡ 1 8 and |z_1|_2≠ 8=|g|. This easily implies that |J|=|J|, and we reduce to |J|≤ q-1 in the same way as above. Assume now g=g_1 and K=Ŝ_̂4̂, so |g|=2. Assume g centralizes the space W generated by the first basis vector. Set J_1:=-1,z_1, z_2σ, and note that |J|=|J_1|, and so either |J|≤ q-1, as desired, or J_1 is not semiregular on W. But note that each element of J_1 fixing a nonzero vector of W gives rise to an element of J fixing a nonzero vector of V, and so α(J,V)> 1/(2q^1/2) and we are done. Assume finally |N|=8. If g_1∈ N or g_2∈ N, then |G|≤ 12(q-1) as desired. 
If g_1 and g_2 are not in N, then we may take |g_1|=3 and g_2 of 2-power order. It follows that K=Ŝ_̂4̂ or q≡ 1 8. (Indeed, if K=S_4 and q≡ -1 8, we have q=p^f with f odd, in which case by choosing g_2∈_2(p), we have that g_2z_2' is a power of g_2z_2σ for some z_2'∈ Z, which is impossible.) In particular, we can pass to a subgroup B of index 3; the same argument as in the previous paragraph then gives |B| ≤ 4(q-1) and so |G|≤ 12(q-1), as desired. Therefore we may assume |G|≤β(q-1)/2 in all cases, as claimed. Recall we are assuming H is not semiregular; let 1≠ y∈ H with eigenvalue 1. Assume y=nxz, where n∈ N, x∈ K, z∈ Z, and xz a power of g_1z_1 not belonging to N{1}. Let c:=nx, so y=cz, and note that |c|=2,3,4,6 or 8. If |c|=2 then x=1 and y=n. If |c|= 6 or 8 then we replace y by y^2 and assume |c|=3 or 4. Now note that, in all cases with |c|=2,3 or 4, if c'∈ Nx is such that |c'|=|c|, then c'z has eigenvalue 1. Moreover, if |c|=3 and xz≠ 1, then c^2 has eigenvalue 1, and the number of elements of order 3 in Nxz is equal to the number of elements of order 3 in N(xz)^2. There is one case where we need to improve slightly the counting. If K=S_4, |N|=24, |z_1|_2=8 and g_1∉ N, then we can take |g_1|=8, and up to replacing z_1 by -z_1, a power g_1z_1 of the form g_1z_1' has eigenvalue 1. Furthermore, for every _2(q)-conjugate a of g_1 in Ng_1 (there are 6 such conjugates), az'_1 has eigenvalue 1, and finally we find other 6 nontrivial elements with eigenvalue 1 in N(g_1z'_1)^2 = Nz_1'^2. We counted 12 elements, which is also the number of elements of order 4 in K N. Let now C be a coset of N in K normalizing N and let t∈{3,4}. If N=Q_8, C⊆ P N where P∈Syl_3(K), and t=3, we let F_C(t)=8 (namely, the number of elements of P of order 3). If p≠ 3, |N|=48 or |N|=24, and t=3, we let F_C(t)=16 (namely, twice the number of elements of K of order 3). In all other cases, we let F_C(t) be the number of elements that are either in C and have order t, or are in N(N) and have order 2. We finally let f_C(t):=(F_C(t)+1)/β. It follows from the above discussion that there exists a coset C of N in K normalizing N and t∈{3,4}, such that if |N|=24, K=S_4, |z_1|_2=8 and |g_1|∉N, then C=K N and t=4; if |N|=24 and q≡ -1 8 (so p≠ 3), then t=3; and such that moreover f(t)>1/β and α(G,V)≥F_C(t)/|G| + 1/|G|≥2f_C(t)/(q-1). (In the second inequality we used |G|≤β(q-1)/2. ) It is straightforward to check that if f(t)>1/β then f(t)≥ 13/48 (attained for example by K=N=Ŝ_̂4̂, q≡ 3 8 and q≡ 2 3, and t=3 or 4). This gives α(G,V)≥ 13/(24(q-1)) as required. The lower bound to δ(V⋊ G,V) follows as at the end of the proof of <Ref>. §.§ _2(q_0) We begin by recording a simple estimate for the proportion of elements in a coset of _2(q) having a given eigenvalue in _q. Let g∈_2(q) and let λ∈_q^×. The proportion of elements of _2(q)g having eigenvalue λ is at least 1/q. If λ^2≠(g), the proportion of (semisimple) elements of _2(q)g having eigenvalue λ is 1/(q-1). If λ^2= (g), the proportion of λ-potent elements is 1/q. We turn then to the general case. Let q=q_0^r with q_0≥ 4, r≥ 1, and _2(q_0) ≤ G ≤__2(q)(_2(q_0))=_2(q_0)Z⋊τ, where Z=_q^×, q=p^f, p prime, and τ is the p-th power map. This is the only case where the bound for δ(V⋊ G) will require somewhat more work than the bound for α(G). We have α(G)> 3/(4q) and δ(V⋊ G)> 9/(16q). Let H:=G∩ (_2(q_0)⋊τ), so |G:H| ≤ (q-1)/(q_0-1). Let W=_q_0^2; we now count the elements of H fixing a vector in W{0}. Assume that H projects onto (_q/_r), let N:=H∩_2(q_0), and consider any coset Ng with g∈ H. 
Assume that g projects onto (_q/_r^i), and set s(i):=|_q_0∩_r^i|. By Shintani descent (see <cit.>) and <Ref>, the proportion of elements of Ng that fix a nonzero vector of W is at least 1/s(i)≥ 1/q_0. Therefore, α(H,W)≥ 1/q_0. Since |G:H|≤ (q-1)/(q_0-1), we have α(G,V) ≥ (q_0-1)/(q_0(q-1)) ≥ 3/(4(q-1)) (since q_0≥ 4), as desired. Next we prove the lower bound to δ(V⋊ G,V). Assume q=r^m. Note that in a coset Ng where g∈ H projects onto (_q/_r^i), each element fixing a nonzero vector fixes at least r^i vectors. Since in such coset we produced at least |N|/s(i) elements fixing a nonzero vector, we deduce η(G,V)≤1/|G|(∑_i| mφ(m/i)|N|/s(i) r^i+ |G|-∑_i| mφ(m/i)|N|/s(i)), where φ is Euler's totient function, and so by <Ref> δ(V⋊ G,V)≥|N|/|G|∑_i| mφ(m/i)(r^i-1)/s(i)r^i≥q_0-1/(q-1)m∑_i| mφ(m/i)(r^i-1)/s(i)r^i, where in the last inequality we used |G|≤ (q-1)m|N|/(q_0-1). Using ∑_i| mφ(m/i)=m, (r^i-1)/r^i≥ (r-1)/r, and s(i)≤ q_0, we see that the right-hand side of (<ref>) is at least (q_0-1)(r-1)/(q_0(q-1)r). In particular, for r>2 (and q_0≥ 4) this is at least 9/(16(q-1)), which is enough. Assume then r=2. Using that s(1)=2, that for i>1 we have (2^i-1)/2^i≥ 3/4 and s(i)≤ q_0, we get 1/m∑_i| mφ(m/i)(2^i-1)/s(i)2^i ≥φ(m)/4m + 3(m-φ(m))/4mq_0 = 3/4q_0+ φ(m)/m(1/4 - 3/4q_0) >3/4q_0. In particular, we get from (<ref>) that δ(V⋊ G,V)> 3(q_0-1)/(4q_0(q-1)) ≥ 9/(16(q-1)) also in this case. Let G≤_2(q^d/2) be such that G≤_d(q) is irreducible and primitive. Let H:=G∩_2(q^d/2) and L:=F^*(H). By Clifford's theorem, V is homogeneous as an _q^d/2L-module. If a component is one-dimensional, we have L≤(H) and _H(L)=H. But (L)=_H(L), from which H=L consists of scalars, and since |G:H|≤ d/2, G cannot be irreducible, false. Therefore, L is irreducible on V, and so it is absolutely irreducible. If the layer E of L is nontrivial, then E=_2(5) or E=_2(q_0) with q_0≥ 4. Since G normalizes E, these cases have been handled in <Ref>. If E=1, then L=ZQ_8 with Z≤_q^d/2^×. Then in fact G normalizes Q_8, and so has been handled in <Ref>. § EXTRASPECIAL GROUPS <Ref> for extraspecial groups is immediate. Let G≤_d(q) be extraspecial of order r^2s+1 and absolutely irreducible. Then α(G)>|_2s(r)|/q^o(d) as |G|→∞. We have d=r^s, and we simply use that the identity has eigenvalue 1; then α(G)/|_2s(r)|>1/|G||_2s(r)| > 1/r^2s+1r^4s^2 > 1/2^o(d)≥1/q^o(d) as d→∞. The assumption that G is sufficiently large is convenient here. For example, if G=2_-^11 and q=3 then |_10^-(2)|/(α(G)|V|^1/2) is larger than 2 millions. (Note that the nontrivial elements of 2_±^2s+1 with eigenvalue 1 are precisely the noncentral involutions, whose number is seen to be 4^s± 2^s-2.) This says that counting elements of G with eigenvalue 1 is far from enough to handle all subgroups of N:=__d(3)(G)=G._10^-(2). The argument for subgroups of N can be amended with some more care. However, this issue would add (seemingly technical) complications to the arguments of <Ref>, where we were able to essentially ignore all extraspecial and quasisimple groups of small order and reduce the problem to the generalized Fitting subgroup. § QUASISIMPLE GROUPS In this section, d≥ 3, V≅_q^d, G is a quasisimple group with S:=G/(G), and G≤_d(q) is absolutely irreducible. §.§ Alternating groups and groups of Lie type in non-defining characteristic We first isolate the case of the fully deleted permutation module for alternating groups. Let G=A_m with m≥ 5, and let V denote the fully deleted permutation module. Then, α(G,V)≥ 1- 2(1+log(m))/m. Let g∈ G. 
It is easy to see that if g has at least three cycles, then g fixes a nonzero vector of V. Let us then count these elements. The proportion of elements of G with at most two cycles is 2m odd/m + 2m even(∑_1≤ i<m/21/i(m-i) + 2/m^2), where - denotes the indicator function. Note that ∑_1≤ i<m/21/i(m-i) + 2/m^2 = 1/m∑_1≤ i<m/2(1/i + 1/m-i) + 2/m^2 = 1/m∑_1≤ i≤ m-11/i ≤1+log(m)/m, where in the last inequality we used a standard upper bound for the partial sum of the harmonic series. The lemma follows then from this estimate and (<ref>). Assume in the following lemma that either S=A_m with m≥ 5, or S is a group of Lie type of (untwisted) Lie rank r, defined over _s, with (q,s)=1. In these cases, as we shall recall in the proof, we have that d→∞ as |S|→∞. We have α(G,V) > |(S)|/q^o(d) as |S|→∞. Assume first S=A_m. By <Ref>, we may assume that V is not the fully deleted permutation module. Then <cit.> gives d ≥ (m^2-5m+2)/2. Therefore, for m sufficiently large, counting only the identity in G, we get α(G,V)≥1/m! > |(S)|/2^o(d) as wanted. Assume now S is of Lie type; note that |G|≤ s^O(r^2) and |(S)|≪ rlog(s). By <cit.>, we see that d≥ s^cr for some positive absolute constant c, so α(G,V)≥1/|G|≥|(S)|/2^o(d) also in this case. §.§ Groups of Lie type in defining characteristic Assume now S is a group of Lie type of untwisted rank r defined over _s, and assume (q,r)≠ 1. Assume q is a power of the prime p. If |S| is sufficiently large, then G=X_σ /Z, where X is a simple linear algebraic group of simply connected type over K:=_q, σ is a Steinberg endomorphism of X, and Z is a central subgroup of X_σ:={x∈ X | x^σ=x} (see e.g. <cit.>). The irreducible representations of X_σ over K are restriction of representations of X, and are described by the theory of highest weights. We refer to <cit.> or <cit.> for the basic aspects of this theory (which, for our purposes, are sufficient). Denote by u(-) be the proportion of p-elements. u(X_σ)=u(G)/|Z|. Since (|Z|,p)=1, each p-element of G lifts to a unique p-element of X_σ. Next, we recall a result of Steinberg (see <cit.>). We define the level s_0 of a Steinberg endomorphism σ of X as the absolute value of the eigenvalues of σ on Γ⊗_𝐙𝐂, where Γ is the character group; see <cit.>. If X_σ is a Suzuki or Ree group then s_0 is not an integer and s_0^2 is an integer. In all other cases, s_0 is an integer. This also gives an interpretation of the sentence “S is defined over _s”, appearing in the first paragraph of this section – this holds if and only if either s_0=s, or X_σ is a Suzuki or Ree group and s=s_0^2. (Steinberg) Assume that X has rank r, and let σ be a Steinberg endomorphism of X of level s_0. Then, the proportion of regular unipotent elements of X_σ is 1/s_0^r. Let λ_1, …, λ_r be the fundamental dominant weights of X, corresponding to the simple roots α_1, …, α_r. A character of the form ∑_i=1^r c_iλ_i, with c_i∈𝐙, is called p-restricted if 0≤ c_i≤ p-1 for every i. An irreducible KX-representation V=V(λ) is called p-restricted if λ is p-restricted (here λ denotes the highest dominant weight of V). If X_σ is a Suzuki or Ree group, then the irreducible KX_σ-modules are the V(λ), where λ = ∑_i=1^r c_i λ_i, 0≤ c_i < s_0^2, and λ is supported on short roots, namely, c_i=0 if α_i is long. Moreover, these s_0^r representations are pairwise non-equivalent (see <cit.>). In this case we will also say that V is supported on short roots. We now record a lemma concerning field of definition of p-restricted representations. 
Let σ be a Steinberg endomorphism of X of level s_0, and let d be the order of the permutation induced by σ on the Dynkin diagram. Let V be an irreducible p-restricted KX_σ-representation, supported on short roots if X_σ is Suzuki or Ree, and assume the minimal field of definition for V is _p^f. Then p^f = s_0 or p^f = s_0^d. Write s_0=p^e. Assume first X_σ is not Suzuki or Ree. By <cit.> (note that “q” in this reference corresponds to “s” here), we have f | ed, so we only need to show that f≥ e. Write V=V(λ) where λ= ∑_i c_iλ_i is the highest dominant weight of V, so 0≤ c_i≤ p-1 for every i. As X_σ-modules, we have V(λ) ≅ V(λ)^(f)≅ V(p^fλ). We have p^fλ = ∑_i p^fc_iλ_i, and since V(λ) and V(p^fλ) are X_σ-equivalent, by <cit.>, there must exist c_j with c_jp^f ≥ p^e; so p^e ≤ c_jp^f ≤ (p-1)p^f, from which f≥ e, as desired. If X_σ is Suzuki or Ree, then <cit.> and the same calculation as above gives f=2e. The following is well-known. Assume that V has weight zero. Then every element of X has eigenvalue 1. Let T be a maximal torus of V, so by assumption _V(T)≠ 0. Now let g∈ X, and write g=su, with s semisimple, u unipotent, and su=us. Up to conjugation, we may assume s∈ T. Since u is a unipotent element normalizing W:=_V(s), we have 0≠_W(u)≤_V(g), which concludes the proof. I am grateful to Bob Guralnick for suggesting the proof of the following lemma. See <cit.> for similar results for algebraic groups and for semisimple elements, which however are not sufficient here. Let σ be a Steinberg endomorphism of X of level s_0, and let V be an irreducible KX-representation. Assume that there exists a weight χ such that χσ∈𝐙χ. Then, α(X_σ, V)≥ 1/s_0 + O_X,χ(s_0^-3/2). Note that {(x-1)=0} is a subvariety of X of codimension at most 1. If it is equal to X then we are clearly done; assume then this is not the case. We will show that there exists a σ-stable irreducible component Y of {(x-1)=0} of codimension 1 in X. The Lang–Weyl estimates (which hold for every σ; see <cit.> and the references therein for the case of Suzuki or Ree groups) will then imply |Y_σ|/|X_σ| = s_0^-1 + O(s_0^-3/2), where the implied constant depends on Y. We will see that Y depends on X and χ, so the proof will be concluded. Let T be a σ-stable maximal torus, such that χ T →_m, and let S:=(χ). By <Ref>, χ≠ 0 and so S is a closed subgroup of T of codimension 1. Note that, since χσ∈𝐙χ, S is σ-stable. Denote by ℛ the set of regular semisimple elements of X, which is σ-stable. It is known that ℛ is open (and dense) in X. Assume first that U:=S^∘∩ℛ≠∅, so U is open and dense in S^∘. Now consider the morphism f X× S^∘→ X given by f(g,s) = s^g. Then Im(f) is σ-stable, and consists of elements having eigenvalue 1. Next, note that there is a dense subset D of Im(f) consisting of elements whose fiber has dimension (T)=(S)+1. (For example, we can take D=ℛ∩Im(f), and we use that T has finite index in _X(T).) In particular, (Im(f))=(X)-1. Setting Y:=Im(f), and noting that Y depends on X and χ, proves the lemma in this case. Assume finally that S^∘∩ℛ=∅. Then L:=_X(S^∘) is a reductive group which is not a torus. Then, there exists a regular unipotent element u∈ L_0:= [L^∘,L^∘]. Consider then the morphism f X× S^∘→ X given by f(g,s) = (su)^g. As in the previous case, we see that Im(f) has codimension 1 in X. (In this case, we can choose D⊆Im(f) as the set of elements conjugate to su where s∈ S^∘ is such that _X(s)=L; we recall that L has finite index in _X(L), see <cit.>.) 
In particular, letting Y be the closure of the set of elements conjugate to su' where s∈ S^∘ and u'∈ L_0 unipotent, we have that Y⊇Im(f) has codimension 1, is σ-stable, and depends only on X and χ, so we are done. §.§ Natural module for classical groups In this subsection V is the natural module for a classical group G. For orthogonal groups we assume n even, since for n odd every element of _n(q) has eigenvalue 1. For G=_n(s), _n(s), _2n(s), ^±_2n(s), Neumann–Praeger <cit.> gave estimates for the proportion of elements of G having a given eigenvalue, and showed that its limit as n→∞ with s fixed exists (and, for example, in non-orthogonal groups it is 1/s+O(1/s^2)). See also <cit.> for recent related results. We prove an easy result that holds in each coset of [G,G] in G. Our result and proof are similar to <cit.>, and in particular are an application of the inequality | ⋃_g∈ G H^g | ≤|G|/|_G(H):H| for H≤ G. In <cit.>, in fact, the authors are interested in semisimple elements with eigenvalue 1, and nontrivial estimates for the proportion of semisimple elements of G are required. Here we do not need to focus on semisimple elements, which makes the analysis easier and in some cases allows to handle some more value of s (for s≥ 3 the bounds below are meaningful). We give exact bounds (rather than asymptotic), as the proof offers them naturally and they might be useful. * If _n(s) ≤ G≤_n(s) with n≥ 2 then 1 - 1/s-1/s-1≤α(G,V) ≤1/s-1. * If G=_2n(s) with n≥ 2 then 1 - 1/s-1/s≤α(G,V) ≤1/s-1. * If _n(s)≤ G≤_n(s) with n≥ 3 then 1 - s/s^2-1/s+1≤α(G,V) ≤s/s^2-1. * If G=Ω^±_2n(s) with n≥ 4 and s odd then s(1 - 2s/s^2-1)/s^2-1≤α(G,V) ≤2s/s^2-1. All proofs are similar; we first prove the upper bound, and deduce the lower bound. The upper bounds in (1)–(3) are contained in <cit.>. (This is an easy calculation using (<ref>), applied to the centralizer H of a 1-space, singular or nondegenerate; in (1) and (3) the bounds are stated only for _n(s) and _n(s) but the same proof works for any G as in the statement.) For the upper bound in (4), note that if an element g of Ω^±_2n(s), s odd, has eigenvalue 1, then it fixes either a singular vector, or it centralizes a nondegenerate 2-space of minus type. (In order to see this, assume that vg=v with v nonsingular. Letting W= v^⊥, since (W) is odd and (g)=1, there exists 0≠ w∈ W with wg=w. If w is singular we are done; otherwise v,w is nondegenerate. If it is of minus type we are done again, and otherwise it contains a singular vector, done for the third and final time.) The contributions of the two subgroups in (<ref>) are, respectively, 1/(s-1) and 1/(2(s+1)); bounding 1/(2(s+1)) < 1/(s+1) and summing the two contributions we get the desired inequality. Now let us prove the lower bounds. In (1), for n=2 we have α(G,V)≥ 1/s, so assume n≥ 3. We count elements that centralize a vector v, stabilize a complement and act without eigenvalue 1 there. Each such element acts without eigenvalue 1 on exactly one hyperplane (namely [V,g]), hence there are no repetitions. Using the upper bound that we just proved, the statement follows from an easy calculation. The bounds in (2), (3) and (4) are proved in the same way. In (2), we count elements that are regular unipotent on a nondegenerate 2-space and act without eigenvalue 1 on the perpendicular complement. (The proportion of regular unipotent elements of _2(s) is 1/s.) In (3) we count elements that fix a nondegenerate vector and act without eigenvalue 1 on the perpendicular complement. 
In (4) we count elements acting trivially on a nondegenerate 2-space (of any type) and without eigenvalue 1 on the orthogonal complement. The previous result is empty for linear and symplectic groups when s=2, and does not address orthogonal groups in even characteristic. (This latter exclusion depends on the fact that the centralizer H of a nonsingular vector is a maximal subgroup, and so (<ref>) is useless.) For these cases, we record estimates that can be extracted from <cit.>. * For G=_n(2) and _2n(2) we have α(G,V)> 0.5. * For G=Ω^±_2n(s), s even, n≥ 4, we have α(G,V)> 1/s - 1/s^2 - 1/s^16. (1) Let us start from G=_n(2). By <cit.> we get that |1-α(G,V)-G(2,1)|≤ c(2) · 2^-6, where G(2,1) and c(2) are defined in that paper. We have c(2)<3.5 by <cit.>, and so the error term is at most 0.06. Moreover, G(2,1) is estimated in <cit.>. Looking at the proof, we see that log(G(2,1)) = -1 - ∑_k≥ 2(∑_j≥ 2; j| k1/j) 2^-k. In order to upper bound log(G(2,1)), we lower bound the sum considering only the term k=2 and get log(G(2,1))< -1 - 1/8, so G(2,1) < e^-1-1/8 <0.33 and therefore 1-α(G,V) < 0.39. The case of _2n(2), as well as (2), are very similar, and we skip the details. §.§ <Ref> in defining characteristic We can now prove <Ref> for quasisimple groups G≤_d(q) in defining characteristic, which is the remaining case. We will do this in two lemmas: <Ref>. In <Ref>, we will prove a slightly stronger bound in most cases. Recall that d≥ 3. We have α(G, V)≫ q^-d/3, and moreover either α(G, V)≫ q|(S)|log(q) q^-d/3, or G and d appear in <Ref>. Case 1: q≥ s. Recall that by Steinberg's twisted tensor product theorem, if V is not p-restricted then d≥ d_0^2 where d_0 is the dimension of some p-restricted module. Assume first the rank r is at least 100. We have q|(S)|log(q)≤ q^4. By <Ref>, the proportion of unipotent elements in G is at least 1/s_0^r ≥ 1/q^r, where s_0=s, or G is Suzuki or Ree and s_0=s^1/2. Since unipotent elements have eigenvalue 1, we are done if d/3-4≥ r, which is to say, d≥ 3r+12. Assume then this is not the case; by <cit.>, V is (a Frobenius twist or the dual of) the natural module. Note that α(G,V) is unchanged by taking duals or Frobenius twists, so we may assume V is the natural module. Then from <Ref>, α(G,V)≫ 1/s ≫ 1/q and we are done. Assume then r< 100, so we may assume q is large and therefore q|(S)|log(q) ≤ q^1.1. In particular, counting regular unipotent elements we are done if d/3-1.1≥ r, which is to say, d ≥ 3r+3.3. For the cases in <Ref>, we are done provided d ≥ 3r. Assume first that G is exceptional. If G= G_2(s) and d≥ 10, we are done by <Ref>. Otherwise, by <cit.> and (<ref>), if G= G_2(s) then d=6 or 7. If d=7 then s is odd and the representation has weight zero (see <cit.>, or just note that G_2(s) embeds into _7(s)), so we are done by <Ref>. Assume then s is even, so d=6. The case s=q appears in <Ref>, and <Ref> holds. In the case s<q, by <Ref> we have α(G,V) ≫ 1/s ≫ 1/q^2-1.1 and we are done. Assume now G is exceptional and not G_2(s). If G is not ^3D_4(s), ^2B_2(s), then <Ref> follows from <cit.>. If G=^2B_2(s) and d=4, note that q≥ s=s_0^2 and so 1/s_0^2 > q^-4/3, which proves the first bound. Moreover this case appears in <Ref>. If G=^2B_2(s) and d>4 then d≥ 16 and we are done. If G= ^3D_4(s), then by <cit.> either V≅ V^τ_0 and d≥ 24, so <Ref> holds, or q≥ s^3 and d≥ 8, which is also enough. Assume now G is classical, and denote by n the dimension of the natural module. We start from the case X_σ=A_r(s) or ^2A_r(s), so n=r+1. 
Note that in this case <Ref> is equivalent to d>3n. Assume V is (a Frobenius twist or the dual of) the natural module, the alternating square, or the symmetric square. Note that for ^2A_r(s) we have q≥ s^2, unless r=3 and V is the alternating square, in which case q≥ s. We now claim that α(G,V)≫_r 1/s, and one readily checks that this is enough. (Note that some cases appear <Ref>, namely the ones in the third, fourth, sixth and seventh lines, and the case d=3 for A_1(s).) The case of the natural module follows from <Ref>. For the symmetric square, we simply count elements with eigenvalue 1 on the natural module. For the alternating square, we count elements acting as diag(λ, λ^-1) on a 2-space (nondegenerate for unitary groups) and fixing no 1-space on a complement, and the claim is proved. (For A_r(s) we could also apply <Ref>.) Assume then V is not among these, so d≥ n^2/2 by <cit.>. Then, <Ref> holds provided n>6. Assume then n≤ 6. Since we are excluding natural module, alternating square and symmetric square, we see by <cit.> and (<ref>) that in all the remaining cases either d>3n, or n=2 and d=4,5,6, or n=3 and d=7,8. For n=3 and d=7,8, the highest dominant weight is stable under the graph automorphism, so by <Ref> α(G,V)≫ 1/s and this is enough. For n=2 and d=5, every element of G has eigenvalue 1 and we are done. The cases n=2 and d=4,6 appear in <Ref> and we simply count unipotent elements. The other cases are similar, and easier; let us handle for example X_σ = C_r(q), so n=2r. If d ≥ n^2/2 then <Ref> holds for n≥ 6, so we may assume n=4 or d<n^2/2. If n=4 then <Ref> holds if d≥ 10, and so by <cit.> we have d=4 or 5. If d<n^2/2, then by <cit.> we are reduced to a finite list of possibilities for V, and we can apply <Ref>. The cases C_r(s), r=2,3 and d=2r appear in <Ref>; the case C_2(s)≅ B_2(s) and d=5 does not appear as every element has eigenvalue 1. Case 2: q<s=:p^e. Assume first G is untwisted. Let _p^f be the minimal field of definition for V, so p^f≤ q. By <cit.>, V⊗ K is a tensor product of m≥ e/f Frobenius twists of a module M. By assuming that m is maximal, we have that the minimal field of definition for M is _s, and in particular |V|≥ p^df= p^(M)^mf≥ p^e·(M)^2/2 = s^(M)^2/2. Moreover, α(G,V)≥α(G,M). Then, if M does not appear in <Ref> and (M)≥ 3, by Case 1 we readily get α(G,V)≥ q|(S)|log(q)q^-d/3 = q|(S)|log(q)|V|^-1/3. Assume then that (M)=2 or that M appears in <Ref>; then by the same calculation we are done unless X_σ=A_1(s) and d=4. This case appears in <Ref> (note that the value d=4 was also found in Case 1). The case where G is twisted is similar, using again <cit.>. (i) In <Ref>, we did not describe explicitly the representations, and we did not list the precise conditions on q for which the examples are indeed exceptions to the stronger bound α(G,V)≫ q|(S)|log(q)q^-d/3 in <Ref>. This can be done straightforwardly, and is not relevant for us. Note that we reported the case where q is uniquely determined (e.g. in the first row, we have q=s, or the stronger bound holds), and in the other cases, usually the stronger bound holds unless q assumes a couple of values (namely s and s^2, or s^2 and s^4). (ii) In the fifth row, for A_1(s), we have either q≥ s, or q≥ s^1/2 and V=W⊗ W^(s^1/2) where W is the natural module. What is more, in the seventh row, V is the alternating square, which can be realized over _s also for ^2A_3(s). Assume V appears in <Ref> and let G≤ T ≤__d(q)(G). If G is sufficiently large then α(T,V) >2 log(q)|(S)| q^-d/2. 
Since |(S)|≪log(s)≪log(q), it is enough to show that α(T, V) ≥ 1/q^d/2-ε for some ϵ >0. Moreover, since N≤ (ZG).(S), with Z=_q^×, we have |T:T∩ ZG|≪log(q) and so we may also assume G≤ T≤ ZG. Assume first q<s, so by <Ref>(ii), q=s':=s^1/2, G=A_1(s), d=4 and V=W⊗ W^(s') where W is the natural module. Write T=GZ', where Z'≤ Z, and work in every coset z'G with z'∈ Z'; counting elements of G that have eigenvalue λ with λ^s'+1=z^-1, we deduce that α(T,V)≫ 1/s' ≫ 1/q^4/2-. Assume then q≥ s. Let u=2 if G=^2A_r(s) with d≠ 6, and let u=1 otherwise. Then, q is a power of s^u. We have T≤ ZG, and let H:=G∩_d(s^u), so |T/H| ≤ (q-1)/(s^u-1) and α(G,V)≫α(H,V)s^u/q. In particular, whenever we show α(H,V)≫ 1/s^u we are done. For C_r(s) and r=2,3, we see that α(H,V)≫ 1/s. This follows from the fact that for every λ∈_s, the proportion of elements of _2r(s) (r=2,3) with eigenvalue λ is ≫ 1/s. For A_r(s) and V the natural module, we have α(H,V)≫ 1/s from <Ref>. For ^2A_r(s) and V the natural module, setting K:=H∩_d(s), we deduce from <Ref> that α(K,V)≫ 1/s and so α(H,V)≫ 1/s^2=1/s^u. For A_3(s) or ^2A_3(s) and d=6, the module is the alternating square; we count elements of H acting as diag(λ,λ^-1), with λ∈_s, on a 2-space (nondegenerate for ^2A_3(q)) and fixing no 1-space on a complement, and get α(H,V)≫ 1/s. For A_1(s), if an element has eigenvalue 1 on the natural module then it has eigenvalue 1 on V, α(H,V)≫ 1/s. For A_2(s) and d=6 the module is the symmetric square and we obtain the same bound. The remaining cases are G=G_2(s) or ^2B_2(s). Write H=GZ' where Z'≤_s^×. Fix a coset zG with z∈ Z'. Assuming |z|>6 in the case G=G_2(s), there exists a regular semisimple class C in G with eigenvalue z^-1. Then |C|/|G|≍ 1/s_0^2 and elements of zC have eigenvalue 1, so going through all such cosets we get α(H,V)≫ 1/s_0^2 ≫ 1/s^d/2-ϵ, which is enough. This follows immediately from <Ref>. §.§ Summarizing the proof of <Ref> The proofs of <Ref>, are now complete. For the reader's convenience, we summarize them here. <Ref> reduces <Ref> to primitive groups G. If G is non-affine we apply <Ref>, so assume G is affine. It is clear that <Ref> implies <Ref> in the affine case, so it suffices to prove <Ref>. In <Ref>, we showed that <Ref> follows from <Ref>. We proved <Ref> in <Ref>, <Ref> in <Ref>, and we summarized the proof of <Ref> right at the end of <Ref>. alpha
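As a further illustrative check (again not part of the formal argument), the natural-module estimate recorded earlier for _n(s)≤ G≤_n(s) — which we read as (1-1/(s-1))/(s-1) ≤α(G,V) ≤ 1/(s-1) — can be confirmed by exhaustive enumeration in the smallest case n=2, s=3. The sketch below is ours and assumes nothing beyond matrix arithmetic over F_3.

# Exhaustive check of the proportion of elements with eigenvalue 1 on the
# natural module, for GL_2(3) and SL_2(3); both proportions should lie in
# [(1 - 1/(s-1))/(s-1), 1/(s-1)] = [0.25, 0.5] for s = 3.

from itertools import product

s = 3

def det(m):
    (a, b), (c, d) = m
    return (a * d - b * c) % s

def has_eigenvalue_one(m):
    # 1 is an eigenvalue of m iff det(m - I) = 0 over F_s
    (a, b), (c, d) = m
    return ((a - 1) * (d - 1) - b * c) % s == 0

matrices = [((a, b), (c, d)) for a, b, c, d in product(range(s), repeat=4)]
GL = [m for m in matrices if det(m) != 0]
SL = [m for m in GL if det(m) == 1]

for name, G in (("GL_2(3)", GL), ("SL_2(3)", SL)):
    alpha = sum(map(has_eigenvalue_one, G)) / len(G)
    print(name, len(G), alpha)
# GL_2(3): 48 elements, alpha = 21/48 = 0.4375; SL_2(3): 24 elements,
# alpha = 9/24 = 0.375; both values lie inside [0.25, 0.5].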
http://arxiv.org/abs/2409.02495v1
2024-09-04
CoAst: Validation-Free Contribution Assessment for Federated Learning based on Cross-Round Valuation
Hao Wu, Likun Zhang, Shucheng Li, Fengyuan Xu, Sheng Zhong
cs.LG (primary); cs.AI
0000-0002-0980-9805 National Key Lab for Novel Software Technology, Nanjing University Nanjing Jiangsu China hao.wu@nju.edu.cn 0000-0001-9309-4541 University of Chinese Academy of Sciences Beijing China zhanglikun@iie.ac.cn 0000-0002-5414-6203 National Key Lab for Novel Software Technology, Nanjing University Nanjing Jiangsu China shuchengli@smail.nju.edu.cn Corresponding author. 0000-0003-3388-7544 National Key Lab for Novel Software Technology, Nanjing University Nanjing Jiangsu China fengyuan.xu@nju.edu.cn 0000-0002-6581-8730 National Key Lab for Novel Software Technology, Nanjing University Nanjing Jiangsu China zhongsheng@nju.edu.cn § ABSTRACT In the federated learning (FL) process, since the data held by each participant is different, it is necessary to figure out which participant has a higher contribution to the model performance. Effective contribution assessment can help motivate data owners to participate in the FL training. Research works in this field can be divided into two directions based on whether a validation dataset is required. Validation-based methods need to use representative validation data to measure the model accuracy, which is difficult to obtain in practical FL scenarios. Existing validation-free methods assess the contribution based on the parameters and gradients of local models and the global model in a single training round, which is easily compromised by the stochasticity of model training. In this work, we propose CoAst, a practical method to assess the FL participants' contribution without access to any validation data. The core idea of CoAst involves two aspects: one is to only count the most important part of model parameters through a weights quantization, and the other is a cross-round valuation based on the similarity between the current local parameters and the global parameter updates in several subsequent communication rounds. Extensive experiments show that CoAst has comparable assessment reliability to existing validation-based methods and outperforms existing validation-free methods. <ccs2012> <concept> <concept_id>10002951.10003227.10003251</concept_id> <concept_desc>Information systems Multimedia information systems</concept_desc> <concept_significance>500</concept_significance> </concept> <concept> <concept_id>10010147.10010178.10010224.10010225</concept_id> <concept_desc>Computing methodologies Computer vision tasks</concept_desc> <concept_significance>500</concept_significance> </concept> </ccs2012> [500]Information systems Multimedia information systems [500]Computing methodologies Computer vision tasks : Validation-Free Contribution Assessment for Federated Learning based on Cross-Round Valuation Sheng Zhong September 9, 2024 =============================================================================================== § INTRODUCTION With the development of deep learning (DL), the concept that "Data is the new oil" has gained more and more consensus among people <cit.>. The emerging remarkable capabilities demonstrated by large language models <cit.> further bring attention to the enormous value of collaborating on a large amount of data. To train DL models on data owned by different parties, collaborative learning techniques, represented by federated learning (FL) <cit.>, have been extensively studied. However, due to disparities in the data held by different participants, including variations in data quality and quantity, each participant's contribution to the performance of the FL model differs a lot. 
How to accurately evaluate the contribution of each participant is crucial for the fair distribution of rewards. This process benefits the promotion of data quality and creates incentives for data sharing  <cit.>. There has been a line of research in the community on the contribution assessment of participants in FL, which can be mainly divided into two categories, i.e., validation-based methods <cit.> and validation-free methods <cit.>. The effectiveness of validation-based methods heavily relies on a representative validation dataset, which is used to evaluate the model performance. However, in real-world FL scenarios, obtaining the representative validation dataset that covers the distribution of all clients' data can be challenging. To overcome the limitations imposed by the validation dataset, validation-free methods are proposed to assess the contribution based on the statistical characteristics of model parameters. They estimate the parameters' correlation, information gain, and mutual information among local models (produced by clients) and the global model (aggregated by the server) to answer whose contribution is higher. However, the existing validation-free efforts, detailed in the next section, only consider the models' parameters (or gradient) in a single training round. We refer to a training round as the process of the client updating and uploading the local model and getting the global model from the server after aggregation. Due to the difficulties, such as the stochastic nature of gradient descent and the uninterpretable nature of the DL model, the accuracy of these assessment methods can sometimes be seriously compromised. Performing validation-free assessment faces two challenges: 1) The training process of DL models does not update parameters in a linear way, so only comparing the parameters of the local model in each round with the parameters of the final model will ignore the clients' contribution reflected in the iterative process of parameter updating. 2) Due to the parameter redundancy of DL models, not all parameters in the local model reflect the client's contribution to the performance improvement of the global model. Unimportant local parameters may comprise the assessment's effectiveness. In this work, we propose , a validation-free FL contribution assessment method, performing client valuation in a cross-round way. The overview of is depicted in Figure <ref>. The core idea of the cross-round valuation is to evaluate a local model's contribution in a certain round (e.g., t) leveraging the parameter updates of the global model in several subsequent rounds (e.g., rounds {t+1, …, t+k}). To cope with the interference of unimportant parameters, shares a similar idea with the ternary weights quantization <cit.>. spotlights the most important parameters by neglecting the parameters with small updates and keeping only the sign of the parameters with a large update. It is worth noting that our assessment goal is to accurately and effectively measure each client's contribution to the FL model, rather than improve the FL model performance. As a result, neither affects the local training procedure nor changes the global models. Experimental results show that the assessment accuracy of is comparable to the validation-based methods and outperforms the existing validation-free methods. We summarize the contribution as follows. 
* We design a novel validation-free contribution assessment method, , which can answer whose contribution is greater in a practical FL scenario without a validation dataset. * We propose a cross-round assessment mechanism to consider the effect of the intermediate local models on the final trained model, and we utilize the ternary weight quantization technique to capture the parameters that contribute the most. * Experimental results demonstrate that outperforms the SOTA validation-free methods in assessment effectiveness and achieves comparable performance to validation-based methods. The rest of the paper is organized as follows. In Section <ref>, we review the related works. In Section <ref>, we formalize the targeted scenario and problem. Section <ref> presents the detailed design of our proposed . Then, we describe the experimental settings in Section <ref> and provide the evaluation results in Section <ref>. The limitation of this work is discussed in Section <ref>. Finally, we conclude this paper in Section <ref>. § RELATED WORKS Validation-based methods. The validation-based methods use a validation dataset to assess a client's contribution by evaluating its impact on the performance of the aggregated model. The leave-one-out is the most natural way to assess the value. It assesses the data value of one contributor by calculating the model performance change when the contributor is removed from the set of contributors. However, the leave-one-out is unfair to multiple similar and mutually substitutable contributors. Ruoxi et al. <cit.> use the Shapley value to assess data value in the FL scenario. They compute the marginal increase of the average accuracy of the model due to the addition of one data contributor. Guan et al. <cit.> extend the application scenarios of Shapley value-based solutions to FL scenarios. Zhenan et al. <cit.> apply the Shapley value-based solutions to vertical federated learning and improve the efficiency through approximation. Zhaoxuan et al. <cit.> allow for efficient data valuation without long-term model training. They assess the contribution through a domain-aware generalization bound, which is derived from the neural tangent kernel (NTK) theory. There are also research efforts to improve the system performance <cit.> and to analyze the fairness <cit.>. When the server uses this line of work to assess the clients' contribution, it requires a representative dataset which covers the distribution among all clients' data. However, in real-world FL scenarios, obtaining such a representative validation dataset is infeasible. Validation-free methods. The validation-free methods use statistics of training data or the correlation among local and global parameters to value the clients. These works usually have some specific assumptions on the distribution of gradients, model parameters, or local training data. Therefore, they may face performance degradation in the real world when their assumptions are not satisfied. Xinyi et al. <cit.> propose a volume measure on the replication robustness, which assesses the contribution based on the diversity of training data. However, work <cit.> shows that this idea not only suffers from exploding volumes in high-dimensional inputs but also entirely ignores the useful information in the validation dataset. Rachael et al. <cit.> measure the data value based on the information gains of the model parameters. They hold that the contributors with the highest value can reduce the uncertainty of the model parameters. 
However, when the distribution of data is complex, the accuracy of the information gains is biased. Xinyi et al. <cit.> use the gradient similarity to measure the data value of the contributors' combination by comparing the data of one combination of the contributors with the gradient similarity of the global FL model trained by all contributors. However, due to the randomness of the stochastic gradient descent and gradient pruning, the value assessed in some rounds may not accurately reflect the true value of the data, or even the value of high-value data is negative. Hongtao et al. <cit.> propose a test data-free data value evaluation based on the pairwise correlation among the models based on the statistical characteristics of the models. They assume that the parameters trained by different contributors share the same distribution, which may not be satisfied when the data is imbalanced and non-independent, and identically distributed. § PROBLEM OVERVIEW §.§ Targeted Scenario We assume all participants, including clients and the server, are honest and follow the agreed-on training protocol of FL. Due to differences in training data quality and quantity among clients, each client's contribution to overall model performance varies. The server can access the model parameters uploaded by each client, but it lacks a representative validation dataset. Each client delegates the server to evaluate the contribution of each client without offering validation data. Without loss of generality, we assume that each client is involved in all training rounds. The training data of each client is prepared before the FL training, and they will not add new data during the training process. §.§ Problem Formalization Consider an FL training procedure with one server and N clients. The training procedure consists of M training rounds. Recall that, in one training round, a client uploads its local parameters to the server once and receives the corresponding aggregated parameters (a.k.a. global parameters). After M training rounds, the contribution of all clients is determined. We denote the ground-truth contribution of all clients as P={p_1, p_2, …, p_N}, where p_i is the contribution of client i. The ranking of all clients' contribution is denoted as R = {r_1, r_2, …, r_N}, and r_i = |{ j | p_j ≥ p_i }|, where |·| returns the number of elements in a set. Our goal is to design a function L, which can measure how much each client improves the performance of the global model. That is, P̂ = L(Θ, {θ_i^t}_ i∈ [1,N],t∈ [1,M]), where Θ denotes the parameters of the global model ϕ_Θ in the last round, and θ_i^t denotes the parameters of the local model trained by client i in round t. Then the predicted R̂ can be calculated based on the P̂ through Equation <ref>. The objective of function L is to minimize the distance of the predicted R̂ and R, i.e., min d(R̂, R), where d(·, ·) is the distance measurement function. Due to the multi-round property of FL algorithm, the contribution score of each client in round t (denoted as P̂^t) can be naturally represented as: P̂_i^t = L(Θ^t, {θ_i^t}), where Θ^t is the parameters of the global model aggregated in round t. So, we can reformulate Equation <ref> as: P̂ = {∑_t=0^M P̂^t_i | i∈ [1,N]}. § DESIGN §.§ Design Overview We propose two key techniques to address the challenges introduced in the introduction section. First, we design a cross-round valuation mechanism. 
Due to stochastic gradient descent, gradient pruning, and parameter regularization, the parameter updates in a certain round (e.g., round t) may not accurately reflect the true value of the client, and even high-value clients may be assigned negative contributions. Fortunately, the FL training process minimizes the optimization objective and improves the model's accuracy, which means that the global parameter updates have a positive contribution in most training rounds. It allows us to value the client i in round t with global parameters of subsequent several rounds. Second, we borrow ideas from efforts on model compression and quantization to filter out those unimportant parameters. Binary weight quantization <cit.> is proposed for the efficiency of computation and storage and demonstrates that retaining only the sign of parameters during model updates can still reach acceptable accuracy. Then, the ternary weight quantization <cit.> sets unimportant parameter updates to zero on top of binary weight quantization and reaches comparable accuracy to full precision training. It shows that by setting a threshold, parameter updates that contribute minimally to the model performance can be effectively filtered out. Thus, we apply the idea of ternary weight quantization to the global model aggregation and then value the client's contribution with the remaining important parameters. We demonstrate the workflow of in Figure <ref>. The contribution assessment is transparent to clients, and there is nothing to change during the local training procedure. On the server, when receiving the local parameters trained in round t, first prunes the parameters by their importance through the ternary weight quantization, then performs the aggregation on the pruned models. After aggregation, we use cross-round valuation to measure the contributions of local models. In the next part, we will detail these two key designs. §.§ Parameter Pruning Without loss of generality, we use the training process of round t as an example to detail the algorithm design. N clients first locally train the local models of round t based on the global parameters of round t-1, denoted as Θ^t-1. Then, all clients upload local parameters of round t to the server, which are denoted as {θ_i^t}_i∈{1,…,N}. In the following procedure, clients do nothing but wait for the aggregated global parameters of round t, denoted as Θ^t, to continue their local training in the next round. Once all local parameters {θ_i^t}_i∈{1,…,N} are received, calculates the local updates of round t, denoted as {Δ_i^t }_i∈{1,…,N}, where Δ_i^t = θ_i^t - Θ^t-1 . Then performs parameter pruning according to their importance, similar to the idea of model quantization such as binarization and ternarization. Considering the structural and functional differences between layers of DL models, quantifies the parameter importance in a layer-wise manner. Here we denote the parameter updates in each layer as Δ_i := [δ_1, δ_2, …, δ_l] where l refers to the number of layers. Take the j-th layer as an example. first calculates the r-th percentile of |δ_j|, denoted as δ_j^r, where r is a hyperparameter controlling the pruning rate. Then, prunes the parameters by clipping the parameter updates. For each element of δ_j, denoted as u, it is clipped as: ũ = 1 if u > δ_j^r -1 if u < - δ_j^r 0 otherwise. It means we regard the elements of δ_j whose absolute values are greater than δ_j^r as important and only keep their signs, while the remaining elements are pruned to 0. 
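The following minimal sketch shows one way to realize this layer-wise clipping. It is illustrative only, not the implementation of Algorithm <ref>: the helper name prune_update is ours, PyTorch is assumed, and we read the threshold δ_j^r as the value above which roughly the top r% of |δ_j| entries survive, consistent with the top-r% selection discussed later in the ablation study.

# Illustrative sketch of the layer-wise ternary clipping (not the exact
# procedure of Algorithm <ref>); PyTorch assumed.  For each layer we keep
# only the sign of roughly the top r% of update entries by absolute value.

import torch

def prune_update(delta_layers, r=10):
    """delta_layers: per-layer tensors of the local update theta_i^t - Theta^(t-1)."""
    pruned = []
    for delta in delta_layers:
        # threshold chosen so that about the top r% of |delta| entries survive
        thr = torch.quantile(delta.abs().flatten(), 1.0 - r / 100.0)
        pruned.append(torch.where(delta.abs() > thr,
                                  torch.sign(delta),
                                  torch.zeros_like(delta)))
    return pruned

if __name__ == "__main__":
    torch.manual_seed(0)
    fake_update = [torch.randn(64, 3, 7, 7), torch.randn(64)]   # toy conv + bias
    ternary = prune_update(fake_update, r=10)
    print([t.unique().tolist() for t in ternary])   # subsets of {-1.0, 0.0, 1.0}

The clipped updates Δ̃_i^t produced this way are what enter the normalization and aggregation steps described next.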
Given the clipped parameter updates Δ̃_i^t, the pruned local parameters θ̃_i^t can be calculated as θ̃^t_i = Θ^t-1 + α·Δ̃_i^t, where α is a hyperparameter for normalization. We report the whole parameter pruning procedure in Algorithm <ref>. §.§ Cross-round Valuation In each round, the contribution of the model is valued by the global model aggregated of the next k rounds, and it also values the local models of the last k rounds. Recall that the parameters of the global model are the average of the pruned local parameters (Equation <ref>), which are calculated by adding the sign of the parameter update. Therefore, the global parameters Θ^t is calculated by Θ^t = 1/N·∑_i=1^N θ̃_i^t = Θ^t-1 + α/N·∑_i=1^N Δ̃_i^t . Therefore, in round t, the parameter update of the global model denoted as U^t := Θ^t - Θ^t-1, is proportional to the sum of the pruned local updates, i.e., U^t ∝∑_i=1^N Δ̃_i^t . Recall that the value of element ũ∈Δ_i' belongs to {-1, 0, 1}. That is, the U^t (Equation <ref>) is the normalized voting result, which indicates, for each element b ∈Θ^t-1, how many clients believe that its value should be increased by α/N, and how many clients believe that it should be decreased by α/N. After k rounds following the t-th round, we obtain the parameters Θ^t+k , which can be calculated by Θ^t+k = Θ^t + ∑_e=t+1^t+k U^e . In the following part, we denote ∑_e=t+1^t+k U^e as U^(t,k) for simplification. Since the global parameters are obtained by averaging all local parameters, we can regard the parameters of any client as a correction to the sum of parameters of the other N-1 clients. Without loss of generality, for any client i, we reformulate Equation <ref> to Θ^t+k = 1/N·∑_j∈{1,…,N}∖{i}θ_j^t + 1/N·θ_i^t + U^(t,k) . We assume that the model's performance is improved after the k rounds. That is, among all clients, if a client's local model of round t, i.e., θ_i^t, during the parameter aggregation process is more similar to the subsequent global parameter updates, i.e., U^(t,k), then the contribution of this client in this training round is greater. We demonstrate this idea in Figure <ref>. Clients will recalculate the local models based on the last global model, so the choice of k should not be too large. Otherwise, the update direction represented by U^(t,k) may not capture the details of stochastic gradient updates and thus cannot indicate the contribution in one training round. Therefore, the contribution of client i can be measured by the similarity between θ_i^t and U^(t,k). Here, we use Signed Cosine similarity to measure the similarity because the θ_i^t and U^(t,k) are local parameters and global updates, respectively. Note that Signed Cosine similarity is sensitive to the sign information of vectors and can better reflect the directional relationship between vectors. Due to the parameter pruning, the model updates can be considered as the sign of each parameter's update (Equation <ref>). That is, in our design, the sign of these updates is more important than their magnitude. Note that θ and U^(t,k) share the same shape, and we assume that they can be indexed through h. calculates the client's contribution in round t by p̂_i^t = ∑_h=1^|Θ|𝗌𝗀𝗇(θ_i^t[h]) ·𝗌𝗀𝗇(U^(t,k)[h]) , where 𝗌𝗀𝗇(·) is the function to indicate the sign of the value. § IMPLEMENTATION §.§ Dataset Settings We evaluate the 's performance on three datasets, i.e., CIFAR-10 <cit.>, CIFAR-100 <cit.>, and STL-10 <cit.>. We randomly partition the training dataset among each participant. 
We assume that by randomly and evenly partitioning these three datasets according to the number of clients, several datasets with the same data valuation can be obtained. Therefore, if a group of clients uses these partitioned datasets for training, the contribution of these clients is the same. We have set up four scenarios to mimic the contribution differences caused by data quality and quantity differences. N in the following settings denotes the number of clients. §.§.§ Setting 1: Different quantity Assuming that randomly partitioned datasets share a similar distribution, the more samples in the training dataset, the higher the contribution to the model accuracy. In this scenario, we prepare datasets with different contributions by randomly assigning different numbers of samples to each client. Let the number of clients be N and the size of the training dataset be |X_train|. Then, the size of the dataset for the i-th client is D_i = 1 - 0.5 ·i/N|X_train|. §.§.§ Setting 2: Adding noise In this scenario, we prepare the training datasets of different qualities for clients by adding Gaussian noise of different intensities. We first randomly and evenly partition the dataset according to the number of clients. Then we perform the Gaussian noise to the dataset of client i with a mean of μ_i and standard deviation of σ_i, where i∈{1, …, N}. We set the mean and variance of Gaussian noise decrease linearly, i.e., μ_i = 0.01 · i, σ_i = 0.625 ·i/N. We report some data samples in Figure <ref>. §.§.§ Setting 3: Adjusting resolution In this scenario, we mimic the data of different quality by adjusting the resolution of the training data through Gaussian blur. Different degrees of Gaussian blur can be achieved by setting kernels of different sizes and variances. We first randomly and evenly partition the dataset according to the number of clients. Then, we use different degrees of Gaussian blur to preprocess the training data. Let the sequence of kernel sizes and standard deviation be s_i and σ_i, where i∈{1, …, N}. We select a linearly decreasing sequence of kernel sizes and standard deviation, i.e., s_i = 2· i + 1, σ_i = 0.4 · i + 1. We report some data samples in Figure <ref>. §.§.§ Setting 4: Masking In this scenario, we prepare the dataset with different quality by adding a mask on training data. The content covered by the mask is set to 0. We first randomly and evenly partition the dataset according to the number of clients. Then, for each client, we randomly mask a part of the image. The area of the mask covers r_i% of the image for client i, and its position is randomly generated. The r_i is a random number between l_i and u_i, where l_i = 0.5 ·i/N, u_i = 0.75 ·i/N . We report some data samples in Figure <ref>. §.§ Implementation Details We perform experiments with Pytorch on a server with two A100 (80G) GPU cards. We use three model architectures, i.e., TinyResNet, ConvNet <cit.>, and ResNet-4. The TinyResNet consists of a convolution layer, whose weight shape is 3×7×7×64, and a ResBlock <cit.>, whose kernel size is 3×3 and output channel is 64. The ResNet-4 consists of 4 ResBlocks, whose kernel size is 3×3 and output channel is 64, 128, 128, and 128. In our setting, we use 1 central server and 5 clients, (i.e., N=5). We initialize the learning rate to 0.01 and gradually decrease it as the training progresses. We use the FedAvg method to aggregate the local models. §.§ Baseline Methods We use three baseline methods. 
The validation-based method <cit.>, denoted as baseline 1, measures the accuracy of the aggregated models w/ or w/o one local model with the validation dataset to calculate the contribution. Although baseline 1 is computationally time-consuming, it has the highest accuracy among Shapley value-based solutions by exhaustively considering all possible cases. We implement two SOTA validation-free methods, i.e., Fed-PCA <cit.> and CGSV <cit.>, as the baseline 2 and baseline 3. §.§ Metrics To measure the accuracy of the contribution assessment, we use the Spearman correlation coefficient <cit.> as the distance measurement function d in Equation <ref>. The Spearman correlation coefficient is good at measuring the degree to which the ranks of the two variables are associated with each other. And it also is used in related works <cit.>. Formally, R and R̂ are two sequences of length n, which denote the ground-truth order and predicted order of the clients' contribution, i.e., R = [o_1, o_2, … , o_n], R̂ = [ô_1, ô_2, … , ô_n]. We calculate the Spearman correlation coefficient (ρ) through the: ρ = 1 - 6/n(n^2-1)∑_i=1^n ‖ o_i - ô_i ‖_2. The ρ ranges from -1 to 1, where -1 indicates a perfectly negative correlation, i.e., sequences R and R̂ are in reverse order, 0 indicates no correlation, i.e., random guessing, and 1 indicates a perfectly positive correlation. § EVALUATION We measure the performance of our in four configurations with three datasets and three model architectures. Configuration 1 is CIFAR-10 and TinyResNet; Configuration 2 is CIFAR-100 and TinyResNet; Configuration 3 is STL-10 and ResNet-4; Configuration 4 is CIFAR-100 and ConvNet. §.§ Overall Performance In the 's experiments, for hyperparameters in Algorithm <ref>, we set r to 10, α to 0.02, and N to 5. We set the k in Equation <ref> to 2. We report the overall performance in Table <ref>. SV, PCA, and CGSV represent baseline 1, baseline 2, and baseline 3, respectively. The average accuracy of baseline 1 and are 0.855 and 0.9, which means that our is comparable to that of baseline 1, which is a validation-based method. even outperforms baseline 1 by 0.22, 0.02, and 0.12 in setting 1, setting 2, and setting 4. Our 's performance outperforms the SOTA validation-free methods, i.e., Fed-PCA (baseline 2) and CGSV (baseline 3), in almost all cases. On average, our outperforms Fed-PCA by 0.94 and CGSV by 0.52 in all cases. The poor performance of Fed-PCA is because the model architecture used in our experiment is too complex to perform precise probability analysis. Experimental results demonstrate the 's effectiveness and robustness in different cases. §.§ Ablation Study §.§.§ k in cross-round valuation We explore how k affects the performance of . We perform the experiment with different k values, which means that different numbers of global updates are used to assess the local models' contribution. We report the results in Table <ref>. In different experimental settings, with k=5 reaches the best performance, and the performance of with k=2 is comparable to that of with k=5. The small and large k values, i.e., k=1 and k=10, have relatively poor performance. This is because a small k value may still let the contribution assessment be affected by stochasticity, while a large value of k may make lack attention to the stochastic gradient descent process. We recommend setting k to 2 or 5 when using our in practical use. 
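For concreteness, the per-round score of Equation <ref> and the role of k can be sketched as follows; this is an illustration under our own naming, with PyTorch assumed, rather than the reference implementation.

# Hedged sketch of the cross-round score
#   p_i^t = sum_h sgn(theta_i^t[h]) * sgn(U^(t,k)[h]),  U^(t,k) = Theta^(t+k) - Theta^t,
# i.e. the local parameters of round t are compared against the direction of the
# global update accumulated over the next k rounds (the ablation above suggests
# k = 2 or 5).  PyTorch assumed; names are ours.

import torch

def cross_round_score(local_params_t, global_params_t, global_params_t_plus_k):
    """All arguments are lists of per-layer tensors with identical shapes."""
    score = 0.0
    for theta, g_t, g_tk in zip(local_params_t, global_params_t,
                                global_params_t_plus_k):
        u = g_tk - g_t                      # U^(t,k): summed global updates
        score += (torch.sign(theta) * torch.sign(u)).sum().item()
    return score

# A client's total contribution is accumulated over rounds,
#   P_hat_i = sum_t cross_round_score(theta_i^t, Theta^t, Theta^(t+k)),
# and the induced ranking is what the Spearman-based metric compares with the
# ground-truth ordering of the clients.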
§.§.§ Parameter pruning Recall that the parameter pruning consists of a parameter update clipping procedure (Line <ref> in Algorithm <ref>) and a top r% update selection procedure (Line <ref> in Algorithm <ref>). To measure the effect of parameter pruning, we perform the following contrast experiments with different settings. * Ours. r=10, w/ update clipping. * Exp1. r=20, w/ update clipping. * Exp2. r=10, w/o update clipping. * Exp3. r=100, w/o update clipping. (No parameter pruning.) We report the results of these contrast experiments in Table <ref>. By comparing the results of ours and Exp1, we can conclude that increasing the proportion of parameter selection leads to a slight decrease in performance, which is likely due to the noise introduced by the additionally selected parameters. Our method's average performance is 0.40625 and 0.25625 higher than Exp2 and Exp3, respectively, indicating that the design of parameter pruning effectively ensures the accuracy of contribution assessment. §.§.§ Update clipping strategy In our method, we use a hyperparameter α (Equation <ref>) to normalize the local parameter update in each round. Here we explore how the value of α affects the contribution assessment effectiveness. We perform three experiments with CIFAR-10 and TinyResNet. We set M to 100, N to 5, and r to 10. In the first two experiments, we set the value of α to 0.01 and 0.02, respectively. In the third experiment, we use an adaptive clipping strategy, where we set α as the average value of the selected r% parameters (i.e., 10) of each layer. We report the results of these three experiments in Table <ref>. By comparing the results of the first two experiments, it can be seen that the choice of this hyperparameter has little impact on the assessment performance. However, in the third experiment, the assessment performance decreases. This is because clipping the parameter updates to different values interferes with the assessment process. In our design, we aim to quantize all parameter updates and assess the contribution based on the direction of the local and global parameter updates. §.§.§ Client number Here we explore how the number of clients, i.e., N, affects the stability of our method. We experiment with CIFAR-100 and TinyResNet and set k to 10 and r to 10. We report the experimental results in Table <ref>. When the number of clients increased from 5 to 10, the performance change of Fed-PCA averaged 0.4775, the performance change of CGSV averaged 0.38, and the accuracy change of our method averaged 0.07. When the number of clients is 5, our method outperforms Fed-PCA and CGSV by an average of 1.025 and 0.5225, respectively. When the number of clients is 10, our method outperforms Fed-PCA and CGSV by an average of 0.775 and 0.795, respectively. The experimental results fully demonstrate the stability of our method with respect to changes in the number of clients. § DISCUSSION AND LIMITATION Performance of model pruning. Recall that in the parameter aggregation procedure, our method prunes local parameters according to their importance. Although weight quantization methods are often used in practical FL scenarios to reduce network bandwidth, they tend to incur performance degradation of the model as well as slower convergence. Thus, the convergence time of our method is slightly longer than that of the typical method, i.e., training without applying any quantization. We report the model performance trained through the typical method and ours in Table <ref>.
The model trained through the typical method converges after 100 rounds (i.e., M=100), while the model trained with our method requires 200 rounds (i.e., M=200) to converge. As can be seen, the performance of the model trained through our method exceeds that of the model trained through the typical method at the 100th round in only one case. When the models converge, the performance of the model trained through our method exceeds that of the model trained in the typical scenario in two cases. However, the experiments in Section <ref> show that weight quantization is important to the accuracy of the assessment, since quantization mitigates the negative impact of redundant model parameters on the assessment. In future work, we will draw on research ideas in model quantization to improve the model performance in accuracy and convergence. Application value of contribution assessment. Concerns regarding data quality assessment and contribution evaluation have become ubiquitous among nations, communities, and individuals alike. Some countries have developed timely, advanced standards and sophisticated indicators to guide norms for contribution and data quality assessment <cit.>. The technologies presented in this work could provide pivotal support for the application of these standards within industrial contexts. § CONCLUSION This work proposes a validation-free contribution assessment method for FL scenarios. Compared with existing efforts, it greatly improves the contribution assessment performance under different dataset settings by introducing two key designs: parameter pruning and cross-round valuation. Comprehensive evaluations showed that our method outperforms existing methods on different dataset settings and models. We believe that this work will inspire the community to improve the FL paradigm with an inherent contribution assessment. This work was supported in part by the National Key R&D Program of China under Grant 2022YFF0604503, in part by NSFC under Grants 62272224, 62341201, 62302207, 62272215 and 61872176, in part by the Leading Edge Technology Program of Jiangsu Natural Science Foundation under Grant BK20202001, and in part by the Science Foundation for Youths of Jiangsu Province under Grant BK20220772.
http://arxiv.org/abs/2409.03153v1
20240905010122
Can we enhance prosocial behavior? Using post-ride feedback to improve micromobility interactions
[ "Sidney T. Scott-Sharoni", "Shashank Mehrotra", "Kevin Salubre", "Miao Song", "Teruhisa Misu", "Kumar Akash" ]
cs.HC
[ "cs.HC", "cs.RO" ]
Can we enhance prosocial behavior? Using post-ride feedback to improve micromobility interactions. Sidney T. Scott-Sharoni (sidney.scott.sharoni@gatech.edu), Georgia Institute of Technology, School of Psychology, Atlanta, Georgia, USA 30332; Shashank Mehrotra (shashank_mehrotra@honda-ri.com), Honda Research Institute USA, Inc., 70 Rio Robles, San Jose, California, USA 95134; Kevin Salubre (kevin_salubre@honda-ri.com), Honda Research Institute USA, Inc., 70 Rio Robles, San Jose, California, USA 95134; Miao Song (miao_song@honda-ri.com), Honda Research Institute USA, Inc., 2420 Oak Valley Drive, Ann Arbor, Michigan, USA 48103; Teruhisa Misu (tmisu@honda-ri.com), Honda Research Institute USA, Inc., 70 Rio Robles, San Jose, California, USA 95134; Kumar Akash (kakash@honda-ri.com), Honda Research Institute USA, Inc., 70 Rio Robles, San Jose, California, USA 95134. § ABSTRACT Micromobility devices, such as e-scooters and delivery robots, hold promise as eco-friendly and cost-effective alternatives for future urban transportation. However, their lack of societal acceptance remains a challenge. Therefore, we must consider ways to promote prosocial behavior in micromobility interactions. We investigate how post-ride feedback can encourage the prosocial behavior of e-scooter riders while interacting with sidewalk users, including pedestrians and delivery robots. Using a web-based platform, we measure the prosocial behavior of e-scooter riders. Results found that post-ride feedback can successfully promote prosocial behavior, and objective measures indicated better gap behavior, lower speeds at interaction, and longer stopping time around other sidewalk actors. The findings of this study demonstrate the efficacy of post-ride feedback and provide a step toward designing methodologies to improve the prosocial behavior of mobility users. CCS Concepts: Human-centered computing → User studies. Teaser figure: Using post-ride feedback to improve prosocial interactions in mobility. Top left: Ego scooter yielding to automated delivery robots while maintaining a safe boundary. Top right: Ego scooter restricting the path of pedestrians. Bottom right: Score calculation from distribution of prosociality metrics. Bottom left: Proposed prosocial behavior ride report based on user behavior during the online study trial. § INTRODUCTION An increasing number of cities are beginning to shift their mobility solutions away from cars toward more environmentally friendly mobility means <cit.>. One of the most promising solutions has been micromobility, which provides flexible, sustainable, cost-effective, and on-demand transport alternatives <cit.>. Micromobility has the potential to shift travel away from private cars significantly and to help with many of the transportation-related issues that cities around the globe are now dealing with.
With the significant rise in popularity of these micromobility modes, especially with the shared mobility model <cit.>, there has been an increased pushback in their social acceptance. Shared electric scooters were recently banned in Paris with 90 percent support <cit.>. Furthermore, recent research has found that users feel apprehensive about micromobility means of transportation <cit.>. Additionally, the rise in delivery robots, which are likely to increase traffic on sidewalks <cit.>, may present challenges for pedestrian safety <cit.>. Advances in driving automation technologies promise to improve sidewalk interaction safety <cit.>. Perception and planning algorithms currently being developed for cars can potentially improve the safety of sidewalk mobility modes, such as e-scooters, delivery robots, etc. However, sidewalk micromobility modes may still be human-controlled in the near future. Therefore, promoting prosocial behavior amongst micromobility users becomes critical to ensure social acceptance of these technologies. Penner et al. define prosocial behavior as “a broad category of acts that are defined by some significant segment of society and/or one’s social group as generally beneficial to other people” <cit.>. Recent work has explored prosocial behavior in driving-related research, like allowing another vehicle to merge <cit.>. However, this has not been explored in a micromobility context on sidewalks where on-road traffic rules do not necessarily apply. In this paper, we investigate prosocial behavior in micromobility, objectively measure it, and facilitate the promotion of prosocial behavior using post-ride feedback. This allows mobility users to meaningfully self-reflect and improve prosocial behavior in future interactions. Existing research has shown that individualized feedback can improve drivers' engagement and eco-driving performances <cit.>. Extending this idea, we propose that post-ride feedback based on metrics can promote prosocial behaviors in sidewalk interactions. The contributions of this paper are as follows: * We demonstrate the feasibility of using post-ride feedback to improve prosocial behavior in the context of micromobility users. The feedback includes objective measures to systematically report the sidewalk interaction behaviors. * We evaluate how prosocial behavior is influenced by environmental interactions (infrastructure or type of actors). This is further complemented by an analysis of the participants' subjective responses. § RELATED WORK The advancement of novel automated vehicle (AV) technology and newer forms of shared automated vehicles (SAV) hold promise in providing benefits to people in several aspects, including daily commutes <cit.> <cit.>, vehicle emissions <cit.>, transportation affordances <cit.>, and general road safety <cit.>. However, the mass adoption of these technologies is unlikely to occur for at least another ten years, according to industry experts <cit.>. In the meantime, it remains imperative for researchers to make progress on ensuring seamless interactions in these mixed autonomous environments, where emerging technology does not cause disruptions. One such transportation method is micromobilities, which are smaller, slower, and often more individual means of transportation <cit.>. 
Micromobility users engage in closer interactions with pedestrians than cars and may adopt different roles (i.e., pedestrian, scooter, vehicle) depending on what is most beneficial to them at each interaction with other pedestrians, sometimes at the expense of other pedestrians <cit.>. This may cause discomfort to others due to potential safety risks and space-sharing conflict <cit.> <cit.> <cit.>. <cit.> defined space-sharing conflict as “an observable situation from which it can be reasonably inferred that two or more road users intend to occupy the same region of space at the same time in the near future.” We define interactions as behavioral differences between road users during a space-sharing conflict. While these definitions specify “road users,” we extend these concepts to the sidewalk as similar space-sharing conflicts can occur between micromobility users and pedestrians. The next section discusses the characterization of prosocial behavior in mobility. §.§ Prosocial Behavior in Mobility Past studies on prosocial behavior typically involve games in which participants make either real or pretend decisions about how much money to give themselves and another player <cit.>. While prosocial behavior has been explored in other contexts, we focus on prosocial behavior in transportation and mobility. <cit.> found that prosocial driving behavior was influenced by factors like self-esteem, efficacy, social and moral norms, and situational contexts. Social norms, such as peers telling each other to drive slowly, increased prosocial driving behaviors. <cit.> found that drivers' prosocial behavior is correlated positively with positive attitudes, normative perceptions, and perceived control. Another study concluded that prosocial driving is intentional and that research should investigate methods of increasing prosocial behaviors rather than reducing risky behaviors <cit.>. <cit.> created a self-report measure of prosocial and aggressive driving behaviors and found that their prosocial driving factor correlated with fewer traffic violations. They defined prosocial driving as “a pattern of safe driving behaviors that potentially protect the well-being of passengers, other drivers, and pedestrians, and that promotes effective cooperation with others in the driving environment.” This definition serves as the basis of our research interest in examining prosocial behavior in micromobility. To the best of our knowledge, no prior study has explored prosocial behavior in a micromobility context. §.§ Behavior Change through Feedback Research on how feedback impacts human behavior has existed for several decades, beginning steadily around the 1950s. The most fundamental research to this effect is rooted in the work by Thorndike's law of effect <cit.>, where positive feedback increases a desired behavior while negative responses can reduce it. People typically perform a desired behavior better when they are provided with knowledge of their successes and failures <cit.> <cit.>. Grounded in this established field, recent advances have demonstrated how AI-based technologies can engage young learners by emoting effectively through prosocial prompting <cit.>. Past literature has established how prosocial behavior can be influenced through feedback. Through the utilization of the law of effect, positive emotional rewards can lead to prosocial behavior, which can further reinforce positive emotions <cit.>. 
Behren et al. <cit.> found that immediate feedback in a group setting resulted in participants volunteering and performing cooperative behaviors in cooperative game settings. Pillitlai et al. <cit.> found that participants engage in prosocial behavior in group-based settings where they receive immediate feedback on cooperative or competitive behaviors, which could be attributed to the compulsion to adhere to normative behaviors. These studies show the importance of feedback in eliciting prosocial behavior. In mobility, information about the environment can improve future driver behavior around other road actors <cit.> <cit.>. Studies on feedback design in the mobility context try to improve driving behavior by reducing hard braking events, reducing distraction, and improving takeover performance <cit.>. However, existing research has not bridged the gap between post-ride feedback, micromobility, and prosocial behavior. Understanding the nuances of prosociality and reinforcing good interactions can help facilitate seamless, comfortable, and efficient interactions between sidewalk users. This research aims to bridge these gaps through an online experimental study. § USER STUDY To evaluate the prosocial behaviors of micromobility users, we use an online web-based driving simulator platform where the participants interact with other road users while actively controlling a micromobility device. This section details the online study platform used for the evaluation, a description of the user study design and participant group, the experimental procedure, quantitative metrics for evaluating prosocial behavior, and the design of the post-ride feedback. §.§ Apparatus - Online Web Study Platform The study was conducted online using a web-based driving simulator that allowed users active longitudinal and lateral control of the e-scooter. The study trials were built in the game development platform Unity (v. 3.4.2) <cit.>, using a collection of publicly purchasable 3D environmental assets amalgamated into a cohesive map of a suburban city environment. Each participant completed five trials (1 practice and 4 study trials) on the same city map but never on the same strip of sidewalk. The online format allowed the participants to complete the study through their personal desktop/laptop computers, where the study ran on a web browser. Participants controlled a player, a person riding on an electric scooter, through the virtual city using their keyboard keys. The rider (hereafter called the ego) appeared from a third-person camera angle (see Figure <ref>). The scooter had a maximum speed of 10 MPH (16.1 KPH) in accordance with laws from the District of Columbia <cit.>. Participants were also advised to drive on the right side of the sidewalk when possible. To see the complete list of instructions, access the full study at <https://researchhost.github.io/prosocialfeedback_review/>. §.§ Participants A 2×2×2 mixed design was used for this study. The between-subject factor was whether participants received feedback or not. The two within-subject factors were the other actor (OA) type (2 levels: Pedestrian and Robot) and the traveling path of the OA (2 levels: Oncoming and Crossing). A total of 304 participants completed the study via Prolific, an online study platform with more attentive responses than other online data collection tools <cit.>. The criteria for participation included (1) residing in the United States, (2) possessing a valid U.S.
driver's license, and (3) having normal or corrected-to-normal vision. Prolific users earned $4 for completing the study, which took them between 25 and 30 minutes, provided they did not fail the attention checks. The compensation was in line with the requirements of the Prolific study platform <cit.>. Several participants reported lag issues due to the limitations of their computer devices. To account for those challenges, only participants who experienced the study at a median frame rate of 24 frames per second or above <cit.> were considered. The study was approved by the Bioethics Committee in Honda R&D (approval code: 99HM-065H). §.§ Experimental Instructions and Study Trials At the beginning of the study, participants were provided with a vignette and instructions. The vignette informed the participants about the context and their task. Participants were instructed to ride the e-scooter on the sidewalk in a straight path to the opposite end of the road within three minutes. A straight path was chosen to mitigate the challenges of remembering or seeking directions. Additionally, participants could only cross streets by riding within the crosswalk road lines, to control behavior variability. A 180-second countdown timer appeared at the top of the display. At 30 seconds, the timer would flash red, which would continue up to -90 seconds before the trial automatically ended. The time pressure was induced to mimic real-life situations in which people are required to make a trade-off between acting prosocially and their own needs or priorities. Participants were also reminded to avoid collisions. None of the instructions suggested to the participants that they needed to behave prosocially or only follow behaviors based on legal restrictions. All participants experienced a practice trial that lasted either until they reached the opposite end of the road or until the timer expired. The practice drive did not include any other actors or events. The trial included boundary walls to ensure participants did not deviate from the path; in the practice trial, the boundary walls were invisible from afar but appeared as a blue grid at close distance. For all other trials, the boundary walls were invisible. We informed participants about the purpose of the walls, which ensured that they followed the path intended for study completion. §.§.§ Study trial descriptions The study included four main trials. The study trials occurred in a suburban downtown area with light pedestrian and delivery robot traffic. No cars drove through the environment, to avoid confounding participants' yielding behavior. Each study trial included four events. A different street or direction of travel in the map was used for each trial. Events were defined as interactions with a space-sharing conflict based on the definition by Markkula et al. <cit.>. Half of the events in each study trial occurred at an intersection. The other events occurred due to an obstacle or blocked path. All obstacles were on the right side of the sidewalk, directly in the participant's path. Participants had to maneuver around the obstacle while two other actors (OAs) moved toward the ego. For two events, the OAs were a dyad of pedestrians in direct path conflict with the ego, while the other two events included a dyad of delivery robots as OAs (see Figure <ref>). Note that the OAs in all trials were only pedestrians or delivery robots, and the OAs moved in a dyad of their own type for the events. All OAs moved at the same speed of 5.36 KPH (3.33 MPH).
While pedestrians and delivery robots, in actuality, may not move at the same speed, we wanted to control for a potential confounding effect. To prevent ordering effects, the maps were presented in varying sequences based on a balanced Latin-square design <cit.>. In total, four different map orders were shown to the participants. §.§ Procedure The procedures for the two studies were nearly identical and were only differentiated by whether the participants received feedback or not (Figure <ref>). Participants began the study by virtually signing the informed consent after selecting it on Prolific. Participants then responded to an attestation, which asked them to provide honest and thoughtful responses. The attestation aimed to set a cognitive mindset for the participants and prevent gamification <cit.>. Next, participants read and responded to the instructions and the vignette as described in the above section. A practice trial commenced to familiarize participants with the layout and controls. Participants then completed each study trial. Attention checks were included before the first trial and after the second trial. Comprehension checks appeared after the third trial and, in the second study, after the fourth trial. Post-study questionnaires recorded individual differences and ride satisfaction post-trial. The analyses of the survey responses are beyond the scope of the current article. The entire experiment for either study lasted approximately 30 minutes. §.§ Quantifying Prosocial Behavior in Mobility In the previous sections, we motivated the case for studying prosocial behavior in mobility. It is established that the increase in different types of traffic participants and micromobility modes would impact societal acceptance. It is, therefore, imperative to define performance measures for these behaviors. This section defines the different performance measures in detail. Minimum Gap with OA Past research has established how sufficient gaps between traffic participants can significantly impact perceptual decision-making when negotiating traffic situations with other actors (OA) <cit.>. Studies that have tried to understand the subjective perception of traffic participants also reported a preference for a safe gap when interacting with multiple road users <cit.>. With the importance of gap acceptance established, it is vital to consider the minimum gap the ego maintains with other actors. Let x^ego_t and x^OA_t be the positions of the ego and the OA at time t, respectively. We define the minimum gap as d_min = min_t dist(x^ego_t, x^OA_t), where dist(x_1, x_2) = ‖x_1 - x_2‖ is the Euclidean distance between any positions x_1 and x_2. Speed at minimum gap It has been found previously that traffic participants' approach speed influences gap acceptance <cit.>. The ego speed perceived by other actors can greatly influence prosocial behavior and lower the participants' interaction risk <cit.>. We define the speed at minimum gap as v_min = v^ego_t_min, where v^ego_t is the speed of the ego at time t and t_min = argmin_t dist(x^ego_t, x^OA_t) is the time at which the minimum gap occurs. Ego stopped time for other actors The metric considers the amount of time the ego spent waiting for the OAs at an event. We defined stopping as the ego's speed being less than 0.3 MPH. Ego trial time The metric considers the total time a participant took to complete a trial. Although this metric does not quantify prosocial behavior, it captures the cost incurred by the participant to perform prosocial actions.
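A minimal sketch of how these behavioral metrics could be computed from logged trajectories is shown below. The array layout, sampling interval, and the 0.3 MPH stopping threshold follow the definitions above, while the function and variable names are illustrative assumptions rather than the study's analysis code.

```python
import numpy as np

def behavior_metrics(ego_xy, oa_xy, ego_speed, dt, stop_thresh_mph=0.3):
    """Compute d_min, v_min, and ego stopped time for one event and one OA.

    ego_xy, oa_xy : (T, 2) arrays of positions sampled every dt seconds.
    ego_speed     : (T,) array of ego speeds in MPH.
    """
    gaps = np.linalg.norm(ego_xy - oa_xy, axis=1)   # Euclidean gap per step
    t_min = int(np.argmin(gaps))                    # index of the minimum gap
    d_min = float(gaps[t_min])                      # minimum gap
    v_min = float(ego_speed[t_min])                 # ego speed at minimum gap
    stopped_time = float(np.sum(ego_speed < stop_thresh_mph) * dt)
    return d_min, v_min, stopped_time
```

With a dyad of OAs, the same computation can be run per OA and the more conservative (smaller-gap) value retained.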
Participants have to spend more trial time if they yield to OAs or take longer paths to keep more distance from them. § POST-RIDE FEEDBACK DESIGN Our proposed post-ride feedback summarizes and informs participants about their prosocial behaviors. The post-ride feedback allows them to meaningfully self-reflect, thereby changing and improving their behavior in future interactions. We proposed two quantitative score-based metrics: Yield Behavior and Safe Boundaries; these metrics are easily comprehended and can be implemented in mobility modes that are equipped with object-detection sensors for collision avoidance systems. Yield behavior is defined by the degree of disruption and interference placed on others during a space-sharing conflict interaction. We calculated it based on the time OAs on the sidewalk took to cross a predefined event zone during an interaction. For example, if a participant blocked either OA's path, the OA would stop, thereby increasing their time to cross. The longer the participant blocked the path, the higher the time and, therefore, the lower the yield behavior score. Higher scores on the post-ride feedback denoted more prosociality. Participants were provided with a general definition of the yield behavior score but not its mathematical form (see Figure <ref>). The safe boundaries metric is defined as the ratio of the speed of travel to the distal boundary at the minimum gap during a space-sharing conflict. Specifically, we define the safe boundary metric as b_safe = v_min/d_min. We included the word “safe” in the post-ride feedback design in an attempt to prevent participants from assuming that the score was solely influenced by lateral distance. The post-ride feedback, presented as the “Ride Report”, denoted the scores numerically and visually with highlighted stars (see Figure <ref>). The five stars were included so the participants could comprehend their performance better than from numbers alone. The numerical metric values were mapped to scores from 0.0 to 5.0 based on the data from the no-feedback group participants. Note that the no-feedback group participants' data collection was done before that of the feedback group. Given an event in a route, the metric values from all the participants in the no-feedback group were first filtered for outliers based on the 1.5 IQR rule <cit.>. Scores from 0.0 to 5.0 were linearly mapped from the maximum to the minimum metric values, respectively. A sample mapping for the first event in Route 4 is shown in Figure <ref>; the minimum safe boundary metric of 0.01 was mapped to a 5.0 safe boundary score, the maximum safe boundary metric of 12.33 was mapped to a 0.0 safe boundary score, and the intermediate values were linearly mapped. Note that lower values of the yield behavior time and safe boundary metric meant more prosocial behavior and were mapped to higher prosocial scores. After each trial, each user earned four scores across the four events for each behavior. The final score displayed on the Ride Report was the average value of these four scores rounded to one decimal place. Furthermore, the number of stars was based on the rounded integer values of the scores. The overall procedure to calculate the scores is shown in Figure <ref>. To indicate whether the behavior was desirable, the Ride Report included three different possibilities for emotive responses (sad, happy, neutral) to the scoring (see Figure <ref>).
The emotive responses differed based on the score that the participant received. Scores above 4.0 elicited a green, happy face. Scores from 2.0 to 3.9 received a yellow, neutral face. Scores below 2.0 received a red, sad face. § FINDINGS We present the findings from the data analysis to evaluate the efficacy of the post-ride feedback. Firstly, we explore how the post-ride feedback influenced the scores for subsequent rides. Secondly, we detail how post-ride feedback and event-related factors in trials impacted the behavioral metrics. Finally, we report the participants' reflections on the post-ride feedback. §.§ Change in Prosocial Behavior Scores across Trials To compare against the scores of the feedback group participants, we calculated the no-feedback group participants' scores. Given that the no-feedback group data was collected without the post-ride feedback scores, these scores were calculated post hoc and then compared with those of the feedback group participants. We conducted four two-sample t-tests, one for each trial, to compare the two groups. To control for an inflated familywise error rate, we used a Bonferroni correction and only considered p-values as significant if they were less than 0.0125. The findings from the t-tests are reported in Table <ref>. There was a significant difference in the yield behavior scores for Trials 3 and 4 between the two groups, with higher scores for the feedback group participants. Safe boundary scores were significantly different for Trials 2, 3, and 4. Given that the participants in the feedback group only saw the report after Trial 1, no significant differences were observed between the two groups for Trial 1. This shows that the two groups had similar yielding behavior in the beginning, which changed due to the post-ride feedback after two trials. Figure <ref> shows an increasing trend in the yield behavior scores for the participants who received the post-ride feedback. Figure <ref> shows that safe boundary scores continuously increased for participants who received the post-ride feedback versus those who did not receive the feedback. §.§ Evaluation of the Behavioral Metrics We established that the post-ride feedback influenced participant behavior over the trials. To observe the performance, the behavioral metrics (defined in Section <ref>) were considered. In this section, we report how the behavioral metrics were influenced by the post-ride feedback and the event-related factors that the participants observed in the user study. We used linear mixed models (LMMs) in R (version 4.1) with the lme4 package to analyze the relationship between the independent variables (Feedback condition, OA type, and Traveling path of OA) and the dependent variables (minimum gap, speed at minimum gap, ego stopped time for OAs, unsafe boundary, and ego trial time) <cit.>. Participants and trial orders were treated as random effects. The best-fit model was selected based on the lowest AIC, considering interaction effects in every possible combination, using the compare_performance function of the performance package in R <cit.>. Findings from the mixed effects models are shown in Tables <ref> and <ref>. The model equation for each dependent variable (DV) is given by DV∼ Feedback type + OA type + Traveling path of OA + Traveling path of OA : OA type + Feedback type : OA type + Feedback type : Traveling path of OA + (1|Participant) + (1|Trial) + ϵ .
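Before turning to the per-metric results, the following minimal sketch illustrates the score construction described in the feedback design section: a raw metric value (e.g., b_safe = v_min/d_min) is compared against the no-feedback group's values for the same event, filtered with the 1.5 IQR rule, and linearly mapped from the filtered maximum to minimum onto the 0.0-5.0 Ride Report scale. It is an illustrative reconstruction under those stated rules, not the study's scoring code.

```python
import numpy as np

def metric_to_score(value, reference_values):
    """Map a raw metric value to a 0.0-5.0 Ride Report score.

    `reference_values` are the no-feedback group's metric values for the same
    event; lower metric values are more prosocial, so the filtered minimum
    maps to 5.0 and the filtered maximum maps to 0.0.
    """
    ref = np.asarray(reference_values, dtype=float)
    q1, q3 = np.percentile(ref, [25, 75])
    iqr = q3 - q1
    kept = ref[(ref >= q1 - 1.5 * iqr) & (ref <= q3 + 1.5 * iqr)]  # 1.5 IQR rule
    lo, hi = kept.min(), kept.max()
    score = 5.0 * (hi - value) / (hi - lo)        # linear max-to-min mapping
    return float(np.clip(round(score, 1), 0.0, 5.0))
```

The per-trial score shown on the Ride Report would then be the average of the four per-event scores, rounded to one decimal place as described above.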
§.§.§ Behavior at gap Based on the model results, main effects indicating a higher minimum gap were observed for the feedback group (β=0.513), at the crosswalk (β=2.714), and while interacting with Robots (β=0.596). Main effects were also observed for speed at the minimum gap for the feedback group (β=-0.813), at the crosswalk (β=5.427), and when the OA was a robot (β=-0.322). For the minimum gap, interaction effects were observed between feedback received and traveling at the crosswalk (β=1.100) and between traveling at the crosswalk and interacting with Robots (β=0.984). Interestingly, an interaction effect was also observed for the speed at the minimum gap between participants who received the feedback and traveling at the crosswalk (β=0.513). Figure <ref> shows that the minimum gap with OAs increased for participants who received the post-ride feedback. The trends are supported by the results from the model, which indicate the efficacy of the feedback on how participants maintained distance from other agents. The trend is prominent when the OAs are at the crosswalk. Similarly, there is a higher minimum gap with OAs for participants who received feedback, and these trends are confirmed by the modeling results, which indicate that feedback influenced the minimum gap. Considering the travel path of the OAs in Figure <ref>, we see that the participants who received the post-ride feedback kept a higher minimum gap when encountering OAs at crosswalks. The result is expected, as crosswalks typically have larger spaces to maneuver around than block-level interactions. For participants who received feedback, the minimum gap at the crosswalk is observed to increase over successive trials and then hold, as shown in Section <ref>. No trends are observed for oncoming OAs. This could be attributed to a limitation in the space to negotiate, and the measured minimum gap may not fully reflect the gap perceived by the participants. For speed at minimum gap, the model results also aligned with the trends observed in Figure <ref>. The trends confirm that the speed at the minimum gap decreased for the participants who received the post-ride feedback. Figure <ref> shows different speeds at the minimum gap for different travel paths of the OAs. Participants who received post-ride feedback kept a lower speed at the minimum gap when encountering oncoming OAs. However, speeds remained higher when the travel path of the OAs was the crosswalk. This was further confirmed by the interaction effect observed between receiving feedback and traveling at the crosswalk. Additionally, there is a decrease in speed around OAs, as shown in Figure <ref>. We observe slower speeds around the robots as compared to the pedestrians. Irrespective of the individual interactions, the findings from the analysis confirm that the post-ride feedback does result in lower speeds around OAs and lower speeds with oncoming OAs, which can be indicative of better yielding behavior whenever the opportunity to yield presents itself to the participants.
For oncoming OAs, feedback group participants stopped longer than the no-feedback group. This effect can be attributed to post-ride feedback causing participants to proactively stop for OAs, which aligns with the findings in Section <ref>. For OAs, the ego stopped time is slightly higher for Robots, which could be attributed to participants cautiously assessing the trajectory of the delivery robots. In contrast, they would better understand the expected behaviors of the pedestrians. Stopping behavior is similar for pedestrians and robots. The trends can be seen in Figure <ref>. §.§.§ Ego trial time From Figure <ref>, we see that the ego trial completion time increased for the participants who received post-ride feedback. From the estimates in Table <ref> and Figure <ref>, it is worth noting that post-ride feedback led to an increase in trial completion time. While trial completion may not be representative of prosocial behavior, it is indicative of costs associated with performing prosocial behaviors. Note that the participant pool of prolific workers typically optimizes time efficiency in task completion <cit.>. While participants were neither incentivized for timely completion nor penalized for delay in completion, participants did forego time voluntarily. We also expect this phenomenon in real-world situations, where people may prioritize prosociality in dyadic interactions over trip completion when aware of their behaviors. Given that people do not incur significant loss due to acting prosocially, we believe that such feedback can increase prosocial behavior in people. §.§ Impact of the post-ride feedback on the trials It was evident that changes in scores occurred throughout the trials, which is also supported by the subjective responses from the participants. Based on a frequency analysis, most participants (77/105) reported actively trying to change their behavior to increase their scores. However, a small number of participants (18/105) reported no influence on their behavior, which could indicate that the post-ride feedback did not necessarily influence the way they rode. The rest of the participants' responses (10/105) were not conclusive. Despite this, an increase in yielding behaviors and safe boundaries were the most cited responses on how participants changed their scores, highlighting the impact of the post-ride feedback of the ride report. As participant P14 noted, “I tried to keep my previous rating in mind for the next trial each time. I did my best to improve as much as possible. When I saw an obstacle, I tried slowing to around 5 MPH before approaching it. This gave me more time to react if necessary.”. Another participant P29 remarked, “I simply tried to follow the rules of the road more carefully on subsequent ride tests. For example, I tried to yield to pedestrians more often to get a better ride score. I would stop in the middle of a crosswalk to prevent getting too close to other robots or pedestrians on the other side of the sidewalk path.” Sentiments about the scores Subjective opinions of the scores were categorized into positive and negative sentiments. Most participants (42/105) had a positive opinion toward the feedback. However, a significant population (39/105) had mixed feelings about the scores they received. Participants expressed skepticism about the scores, with opinions citing the scores' inaccuracy, harshness, and unfairness. Additionally, participants believed the scores they received did not match their expectations based on their rides. 
Overall, it was evident that a lack of transparency in the scoring system caused skepticism toward the report, which may have resulted in some participants not caring about the feedback. As Participant P92 explained, “I felt that the scores I received were a little critical as I did not collide with any pedestrians and yet still made it to the end of the street in time. I prioritized reaching the end of the street within the time limit as I came very close to not making it in time in a few trials, with only a few seconds to spare. I would have liked to be more considerate while traveling on the sidewalk, but the time limit was short enough to prevent me from doing things like coming to a complete stop at times.” Additionally, participant P11 stated, “I wasn't sure if they were random or actually based on my performance. I wanted to complete each ride as quickly as possible without bumping into others, so I tried to avoid others rather than yield to them narrowly. I didn't care about the effect on my score.”. The participants' opinions provided important perspectives on post-ride feedback. They expressed interest in understanding the cause of the performance scores. The feedback provides detailed insights on how the post-ride feedback can potentially help participants reflect on their reports and could potentially aid in further improving the efficacy of the post-ride feedback. § DISCUSSION This study assessed the feasibility of post-ride feedback improving the prosocial behavior of mobility users. The feedback utilized objective measures and reported how users interacted with other road users. A total of 208 participants' data was considered in the online study, where 105 participants received the post-ride feedback, and 103 participants received no feedback. The study evaluated how prosocial behavior is influenced by mobility interactions. §.§ Influence of feedback on prosocial behavior The analysis found that participants who received post-ride feedback maintained larger gaps and increased stop time while interacting with OAs. They also lowered their speed at oncoming traffic paths. Overall, participants also increased the overall trial time. The improvement in prosocial behavior using informative feedback suggests that feedback can help improve mobility behaviors, which is supported by previous findings <cit.>. Eckoldt et al. <cit.> suggested that designing for prosocial driving behaviors enables better communication and smoother interactions with other road users. Another finding from our study demonstrated that informative feedback resulted in the participants lowering their speed when the available space was low (oncoming traffic) and increasing the gap when the available space was higher (crosswalk). This aligns with the work done by Harris et al. <cit.>, where they considered maintaining sufficient space, lower speeds close to other road actors, and stopping for others when required as suitable proxy behaviors for assessing prosocial behavior in mobility environments. §.§ Participant opinions on the Feedback report Participants expressed engagement with the post-ride feedback and proactively looked to engage in prosocial behavior based on the reports. Most of the participants engaged with the report, indicating post-ride feedback efficacy. While most participants expressed positive opinions on the post-ride feedback, there were concerns about a lack of transparency on the scoring system and a lack of recommendations on improving their behavior. 
In the free-form response texts, several participants provided suggestions to enhance the efficacy of the post-ride feedback by (1) clarifying the scoring scheme to enhance transparency and (2) including supplemental information to help identify ways to improve pro-social behavior. These findings provide invaluable insights into this feasibility study and an opportunity to improve the future study with a user-oriented post-ride feedback system. The study format presented the opportunity for gamification; however, gamification is a strong tool for increasing desired behaviors <cit.>, and similar methods have been used to reduce unsafe driving behaviors <cit.>. The negative responses from the informative feedback caused participants to amend and decrease non-prosocial behaviors such as short gap distances and blocking the path of OAs. To amend the prosocial scores after receiving feedback, participants likely required increased attention and effort, as suggested by the increase in trial time. The findings from the study are supported by Feedback Intervention Theory (FIT), which can guide attention to behavior to meet the standards. This can explain the increased trial time in individuals who received post-trial feedback. Regardless of whether the primary motivator was an internal desire to behave prosocially or to receive a higher score, the feedback motivated participants to behave differently. However, understanding the true motivations behind prosocial behavior may be challenging. §.§ Limitations and future work While this research evaluates the efficacy of objective measures for prosocial behavior and the feasibility of post-ride feedback, there are some limitations of this research. The online study provides a low-fidelity platform for evaluating behaviors in a gaming-like experience. We plan to conduct follow-up VR-based evaluations to mimic close-to-real-world behaviors in the near future. In-person study could further benefit from the contextualization of the perceived rewards in time-demanding situations and whether participants would engage in prosocial behavior if they experienced the temporal demands of commuting in a real-world setting. Additionally, the evaluation performance can be confounded by prior experience participating in simulator environments like studies or playing video games, which could be accounted for in future studies. § CONCLUSION In this work, we proposed a post-ride feedback-based methodology to promote prosocial behavior in micromobility users. An online Unity-based interactive user study was designed where the ego rider interacted with pedestrians and robots, and the performance was evaluated through objective measures that helped quantify prosocial behavior. Participants who received the post-ride feedback provided insights toward the future improvement of the feedback design. The results found that the post-ride feedback successfully improved prosocial behavior. The findings from this study provide a step toward designing effective post-ride feedback that facilitates safer interactions and societal acceptance of new mobility technologies. We would like to thank Hiu Chun lo, Research Engineer at Honda Research Institute, who was instrumental in providing support for the design of the user study platform. ACM-Reference-Format
http://arxiv.org/abs/2409.02438v1
20240904042949
Non-target Divergence Hypothesis: Toward Understanding Domain Gaps in Cross-Modal Knowledge Distillation
[ "Yilong Chen", "Zongyi Xu", "Xiaoshui Huang", "Shanshan Zhao", "Xinqi Jiang", "Xinyu Gao", "Xinbo Gao" ]
cs.CV
[ "cs.CV" ]
Non-target Divergence Hypothesis: Toward Understanding Domain Gaps in Cross-Modal Knowledge Distillation. Yilong Chen, Zongyi Xu, Xiaoshui Huang, Shanshan Zhao, Xinqi Jiang, Xinyu Gao, and Xinbo Gao, Fellow, IEEE. § ABSTRACT Compared to single-modal knowledge distillation, cross-modal knowledge distillation faces more severe challenges due to domain gaps between modalities. Although various solutions have been proposed to overcome these challenges, there is still limited research on how domain gaps affect cross-modal knowledge distillation. This paper provides an in-depth analysis and evaluation of this issue. We first introduce the Non-Target Divergence Hypothesis (NTDH) to reveal the impact of domain gaps on cross-modal knowledge distillation. Our key finding is that domain gaps between modalities lead to distribution differences in non-target classes, and the smaller these differences, the better the performance of cross-modal knowledge distillation. Subsequently, based on Vapnik-Chervonenkis (VC) theory, we derive the upper and lower bounds of the approximation error for cross-modal knowledge distillation, thereby theoretically validating the NTDH. Finally, experiments on five cross-modal datasets further confirm the validity, generalizability, and applicability of the NTDH. Keywords: Cross-Modal Knowledge Distillation, Domain Gaps, Multimodal Fusion. § INTRODUCTION In recent years, cross-modal knowledge distillation (KD) has expanded the traditional KD approach to encompass multimodal learning, achieving notable success in various applications <cit.>. However, when there are considerable domain gaps in cross-modal KD, even a more accurate teacher model may not effectively instruct the student model. To overcome these challenges, many researchers have sought to enhance the effectiveness of cross-modal KD by designing efficient fusion strategies <cit.> or developing novel loss functions <cit.>. However, most of these methods focus on complex technical designs with less emphasis on exploring their theoretical foundations, which is the focus of this paper. Xue et al. <cit.> are the first to theoretically focus on KD under modality differences, proposing the Modality Focus Hypothesis (MFH). This hypothesis posits that the performance of cross-modal KD hinges on the modality-general decisive features preserved in the teacher network. These features indicate the degree of alignment between different modalities, with greater alignment leading to improved KD outcomes. Fig. <ref>(a) shows an example of a multimodal dataset with both audio and images, where the image data includes not only the scene of guitar music but also background information, leading to incomplete modality alignment; according to the MFH, improved feature alignment is expected to enhance cross-modal KD effectiveness. However, the above hypothesis has two shortcomings: (1) It cannot explain why cross-modal KD might still fail even when there is modality alignment, as shown in situations like those in Fig. <ref>(b)-(d). (2) It lacks a mathematical definition of modality-general key features, making it difficult to identify these features in practical applications.
To address these deficiencies, we first pre-register or align the modalities involved in our study to eliminate the effects caused by unregistered modalities, thereby focusing our research on domain gaps. We adopt this approach because we believe that misalignment is merely an external manifestation, while the actual domain gaps between modalities are the core distinction between cross-modal and single-modal KD. Secondly, we clearly define the key factors affecting cross-modal KD. Specifically, inspired by <cit.>, we divide classification predictions into two levels: (1) Target class prediction distribution: a binary prediction distribution for the target category and all non-target categories; (2) Non-target class prediction distribution: a multi-category prediction distribution for each non-target category. We find that for cross-modal KD, the distribution divergence in non-target categories is the decisive factor. The smaller the divergences, the better the effect of cross-modal KD, which we refer to as the Non-target Divergence Hypothesis. Since the distance of non-target distributions can be easily measured using existing distance functions, they can be explicitly defined and calculated. For example, in multimodal point cloud semantic segmentation, both point clouds and images classify road areas. These predictions can be divided into road class prediction distribution and non-road class prediction distribution. If the distribution difference of the non-road class predictions is large, the effectiveness of cross-modal KD is significantly affected and may even fail, as shown in Fig. <ref>. The purpose of this work is to prove the validity of the hypothesis from a theoretical analysis perspective. Our major contributions are the following: 1) We propose the Non-target Divergence Hypothesis (NTDH), which posits that the divergence of non-target distributions between modalities is a key factor determining the effectiveness of cross-modal KD. 2) We theoretically prove the upper and lower bounds of the cross-modal KD approximation error, thereby validating the rationality of NTDH. 3) We design a weight regulation method and a masking method to experimentally verify the hypothesis. Experimental results on five multimodal datasets support the proposed NTDH. § RELATED WORK §.§ Unimodal KD KD is a general technique for transferring knowledge from a teacher network to a student network, widely applied in various vision tasks <cit.>. Although progress has been made in improving distillation techniques and exploring new application areas, research on the mechanisms underlying KD remains limited <cit.>. For example, Cho et al. <cit.> and Mirzadeh et al. <cit.> investigate KD in the context of model compression, focusing on the performance of student and teacher networks when their model sizes differ. They point out that a mismatch in capacity between the student and teacher networks can lead to KD failure. Ren et al. <cit.> analyze KD in vision transformers and find that the inductive bias of the teacher model is more important than its accuracy in improving the performance of the student model. However, the aforementioned methods mainly focus on the factors affecting unimodal KD, while research on cross-modal KD remains limited. §.§ Crossmodal KD With the widespread use of the internet and the increasing application of multimodal sensors, there is a surge of interest in multimodal learning <cit.>. In line with this trend, KD extends to cross-modal KD, which commonly applies in three scenarios. 
First, Knowledge Expansion: Cross-modal KD compensates for insufficient data in a single modality. For instance, Ahmad et al. <cit.> use multimodal MRI data to overcome clinical data scarcity, while Li et al. <cit.> transfer RGB-trained models to infrared data, addressing its data limitations. Similar approaches include using Galvanic Skin Response signals to offset Electroencephalogram data collection challenges <cit.>, supplementing Synthetic Aperture Radar images with RGB data <cit.>, and transferring English-trained models to other languages <cit.>. Second, Multimodal Knowledge Fusion: Cross-modal KD fuses complementary information from different modalities into a unimodal network, requiring multimodal data only during training and simplifying deployment <cit.>, such as using Lidar for RGB images <cit.>, RGB for point clouds <cit.>, facial images for voice data <cit.>, and RGB to enhance thermal imaging <cit.>. Large language models also aid in cross-modal retrieval <cit.>. Third, Additional Constraints: In some cases, significant differences in network structures for different modalities make feature layer fusion highly challenging. The loss function of cross-modal knowledge distillation then acts as a regularization term, enforcing consistency constraints on outputs to enhance the performance of individual modalities. For instance, Sarkar et al. <cit.> use KL divergences to align audio and video outputs, while another method, MCKD <cit.>, applies multimodal contrastive KD for video-text retrieval. Although these methods show potential in cross-modal KD, most widely used approaches still rely on single-modal techniques, raising questions about their effectiveness and limitations. This paper, therefore, analyzes key factors influencing cross-modal KD to enhance its application. §.§ Domain Gaps in Cross-modal KD In cross-modal KD, domain gaps critically impact performance. To reduce these gaps, modality alignment methods are often employed to unify different modalities from various domains into a common <cit.> or intermediate modality <cit.>. However, this approach may result in the loss of modality-specific characteristics. To address this, some methods incorporate decoupling strategies to preserve these characteristics, such as using independent detection heads <cit.> for KD or employing feature partitioning to effectively transfer knowledge from the teacher modality while preserving unique features of the student modality <cit.>. Additionally, other approaches focus on minimizing domain gaps by filtering out features with significant discrepancies. For example, Zhuang et al. <cit.> and Wang et al. <cit.> focus on evaluating and filtering significant domain discrepancies, while Huo et al. <cit.> selectively filter out misaligned samples to avoid modality imbalance. Although these methods have made some progress, they primarily address multimodal feature fusion and are not directly applicable to dual-branch networks in cross-modal KD. To address this issue, this paper proposes a mask-based approach to mitigate the impact of domain shifts on cross-modal KD. § THE PROPOSED HYPOTHESIS In this section, we first define the symbols and present the necessary assumptions for subsequent proofs. Next, we introduce the Non-target Divergence Hypothesis and provide a detailed explanation of this hypothesis based on experimental results on Scikit-learn dataset <cit.>. Finally, we prove the hypothesis using Vapnik-Chervonenkis (VC) theory <cit.>. 
§.§ Symbol Definitions And Conditional Assumptions We provide a comprehensive overview of the fundamentals of KD and introduce the symbols used in this study. Although our focus is primarily on C-class classification problems, the concepts discussed are also applicable to regression tasks. To maintain the generality of the framework, we consider a two-modal setting, where the data is represented as x^a and x^b, corresponding to the data from modalities `A' and `B', respectively, as shown in Eq. (<ref>). {(x_i^a,y_i)}_i=1^n∼ P^n(x^a,y), x_i^a∈ℝ^d_a, y_i∈Δ^c, {(x_i^b,y_i)}_i=1^n∼ P^n(x^b,y), x_i^b∈ℝ^d_b, y_i∈Δ^c, where (x_i^a,y_i) and (x_i^b,y_i) respectively represent the feature-label pairs of modality `A' and modality `B', and Δ^c represents the set of c-dimensional probability vectors. Suppose that our goal is to train a student network that takes x^b as input. In the case of cross-modal KD, the teacher network takes the training data x^a as input and minimizes the training objective as follows: f_t=min_f∈ℱ_t 1/n∑_i=1^nℒ(y_i,σ (f(x_i^a)))+Ω(f), where ℱ_t is a class of functions from ℝ^d_a to ℝ^c, the function σ : ℝ^c→Δ^c is the softmax operation σ(z)_k=e^z_k/∑_j=1^ce^z_j for all 1≤ k≤ c, the function ℒ:Δ^c×Δ^c→ℝ is the Kullback-Leibler (KL) divergence <cit.>, and Ω :ℝ→ℝ is an increasing function which serves as a regularizer. After training the teacher model f_t using the data x_i^a from modality `A', our goal is to transfer the knowledge acquired by the teacher network to the student network operating in modality `B'. Therefore, in addition to minimizing the KL loss between the student output and the one-hot label, it is also required to minimize the KL loss between the teacher and student outputs. The objective of optimizing the student network is as follows: f_s = min_f∈ℱ_s 1/n∑_i=1^n[(1-λ) ·ℒ(y_i,σ (f(x_i^b))) + λ·ℒ(s_i,σ (f(x_i^b)))], where s_i=σ (f_t(x_i^a)/T)∈Δ^c represents the soft predictions obtained from f_t, which was trained on modality `A'. The temperature parameter T (T>0) controls the level of softening or smoothing of the class-probability predictions from f_t, and the imitation parameter λ∈ [0,1] determines the balance between imitating the soft predictions s_i and predicting the true hard labels y_i. Given that the primary focus of this paper is the impact of domain gaps in cross-modal KD, we list the following assumptions to control variables and exclude the interference of model capacity and modality strength. These assumptions form the foundational conditions for the discussion in this paper and the basis for the theoretical reasoning in Sec. <ref>. * Assumption 1: ℱ_s and ℱ_t have the same model capacity, meaning they have the same ability to fit or learn complexity or accommodate information. * Assumption 2: x_i^a and x_i^b have the same modality strength, meaning that when the same model is trained using x_i^a and x_i^b as data separately, the difference in model prediction accuracy is not significant. §.§ Non-target Divergence Hypothesis Under the assumptions in Sec. <ref> and based on VC theory <cit.>, the upper and lower bounds of the cross-modal KD approximation error have been derived (see Sec. <ref> for the omitted proof). According to this conclusion, it can be inferred that the divergence in non-target class prediction distributions between the teacher and student networks is a key factor affecting the effectiveness of cross-modal KD.
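For concreteness, the student-side objective in Eq. (<ref>) can be sketched in a few lines of NumPy. The helper names, array shapes, and hyperparameter values below are illustrative assumptions rather than details taken from the original implementation.

```python
import numpy as np

def softmax(z, T=1.0):
    # temperature-scaled softmax along the class axis
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)        # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl(p, q, eps=1e-12):
    # row-wise KL(p || q) for probability vectors
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)

def student_objective(student_logits, teacher_logits, y_onehot, T=4.0, lam=0.5):
    # (1 - lambda) * L(y, sigma(f_s)) + lambda * L(s, sigma(f_s)),
    # with s = sigma(f_t / T) the teacher's softened predictions on modality A
    p_s = softmax(student_logits)                # student predictions on modality B
    s = softmax(teacher_logits, T=T)             # soft targets from the teacher
    return np.mean((1.0 - lam) * kl(y_onehot, p_s) + lam * kl(s, p_s))

# toy check with hypothetical logits: n = 2 samples, C = 4 classes
rng = np.random.default_rng(0)
f_t = rng.normal(size=(2, 4))                    # teacher logits on x^a
f_s = rng.normal(size=(2, 4))                    # student logits on x^b
y = np.eye(4)[[1, 3]]                            # one-hot labels
print(student_objective(f_s, f_t, y))
```

With these objectives in place, we return to the divergence between teacher and student predictions on the non-target classes.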
This divergence stems from inherent domain discrepancies between modalities, which hinder effective guidance from the teacher to the student. This leads to the non-target divergence hypothesis. Non-target Divergence Hypothesis (NTDH): For cross-modal KD, the performance of KD is determined by the distribution divergence of non-target classes. The smaller this divergence, the better the student network performs. This hypothesis posits that cross-modal KD benefits from the consistency in the distribution of non-target classes. Furthermore, it explains the observation that, in certain circumstances, the performance of the teacher network is not directly correlated with the performance of the student network. To intuitively understand our hypothesis, we use the Scikit-learn toolkit to simulate six sets of multimodal data, where γ∈[ 0,1 ] represents the degree of domain discrepancy between modalities, with higher values indicating greater domain differences, as shown in Fig. <ref>. To satisfy Assumptions 1 and 2, both the teacher and student networks use the same architecture and maintain consistency in data intensity (see Sec. <ref> for details). We conduct experiments on these six datasets and record the Jensen-Shannon Divergence between the teacher and student networks in both target and non-target distributions during stable training, denoted as TCJSD and NCJSD, respectively: TCJSD =1/2· p_k^talog( p_k^ta/m_k)+1/2· p_k^sblog( p_k^sb/m_k), NCJSD = ∑_i=1,i k^C( 1/2·p̂_i^talog( p̂_i^ta/m_i) + 1/2·p̂_i^sblog( p̂_i^sb/m_i) ), where m_i=1/2·( p̂_i^ta+p̂_i^sb), m_k=1/2·( p_k^ta+p_k^sb), [ p_k^ta = σ (f_t(x_k^a)) = exp (f_t(x_k^a))/∑_j=1^Cexp (f_t(x_j^a)),; p_k^sb = σ (f_s(x_k^b)) = exp (f_s(x_k^b))/∑_j=1^Cexp (f_s(x_j^b)),; p̂_i^ta = σ (f_t(x̂_i^a)) = exp (f_t(x̂_i^a))/∑_j=i, j k^Cexp (f_t(x_j^a)),; p̂_i^sb = σ (f_s(x̂_i^b)) = exp (f_s(x̂_i^b))/∑_j=i, j k^Cexp (f_s(x_j^b)). ] The experimental results indicate that as the domain discrepancy between modalities increases (i.e., as γ increases from 0 to 1), the accuracy of the teacher network remains stable, while the performance of the student network declines significantly. Concurrently, there is a significant increase in the distance of the non-target distribution, whereas the distance of the target distribution remains almost unchanged, as illustrated in Fig. <ref>. These findings suggest that in cross-modal KD, the greater the distribution divergence of non-target classes, the lower the effectiveness of KD. In contrast, a smaller discrepancy leads to better distillation outcomes. This is a qualitative observation; further validation of these conclusions will be provided through theoretical derivations in Sec. <ref>. §.§ Prove the Non-target Divergence Hypothesis Recall our three actors: the student function f_s∈ℱ_s (trained on x_i^b ), the teacher function f_t∈ℱ_t (trained on x_i^a or x_i^b), and the real target function of interest to both the student and the teacher, f∈ℱ . For simplicity, consider pure distillation, where the imitation parameter is set to λ =1. According to VC theory <cit.>, the classification error of the classifier, f_s^b can be expressed as: R(f_s^b)-R(f)≤ O( | ℱ_s^b|_C/√(n))+ε_sb, where the O(· ) and ε_sb terms are the estimation and approximation error, respectively. The former refers to the performance gap between a model on training data and its theoretical best performance. The latter refers to the difference between a model output and the true target function. 
It reflects whether the model representational capacity is sufficiently powerful to accurately approximate the true target function. If the hypothesis space of a model cannot capture the complexity of the target function, the approximation error will be large. Here, R represents the error, | ·|_C denotes a measure of the capacity of some function class, and n represents the number of data points. Let f_t^a∈ℱ_t^a and f_t^b∈ℱ_t^b be the teacher functions trained on x_i^a and x_i^b, respectively; then: R(f_t^a)-R(f)≤ O( | ℱ_t^a|_C/n)+ε_ta, R(f_t^b)-R(f)≤ O( | ℱ_t^b|_C/n)+ε_tb. Then, we can transfer the knowledge of the teacher separately from data `A' or `B' to the student. Let f_t^a serve as the teacher function in cross-modal KD, and f_t^b in unimodal KD. Then: Cross-modal KD: R(f_s^b)-R(f_t^a)≤ O( | ℱ_s^b|_C/n^α)+ε_m, where ε_m is the approximation error of the student function class ℱ_s^b with respect to f_t^a∈ℱ_t^a, and 1/2≤α≤ 1. Unimodal KD: R(f_s^b)-R(f_t^b)≤ O( | ℱ_s^b|_C/n^α)+ε_l, where ε_l is the approximation error of the student function class ℱ_s^b with respect to f_t^b∈ℱ_t^b. Then, if we employ cross-modal KD, combining Eqs. (<ref>) and (<ref>), we can obtain an alternative expression for the student learning the real function f, as follows: Alternative expression through cross-modal KD: R(f_s^b)-R(f)=R(f_s^b)-R(f_t^a)+R(f_t^a)-R(f) ≤ O( | ℱ_s^b|_C/n^α) +ε_m+O( | ℱ_t^a|_C/n)+ε_ta. Similarly, by employing unimodal KD and combining Eqs. (<ref>) and (<ref>), we can obtain an alternative expression for the student's learning of the real function f, as follows: Alternative expression through unimodal KD: R(f_s^b)-R(f)=R(f_s^b)-R(f_t^b)+R(f_t^b)-R(f) ≤ O( | ℱ_s^b|_C/n^α) +ε_l+O( | ℱ_t^b|_C/n)+ε_tb. Combining Eqs. (<ref>) and (<ref>), it is necessary to satisfy Eq. (<ref>); otherwise, cross-modal KD would not outperform unimodal KD: O( | ℱ_t^a|_C/n)+ε_m+ε_ta≤ O( | ℱ_t^b|_C/n)+ε_l+ε_tb, where the O(·) terms are estimation errors and the ε terms are approximation errors. Observing Eq. (<ref>), we can see that it consists of two components: estimation and approximation error. Next, we will analyze these two parts separately. Regarding the estimation error, it is primarily determined by model capacity and data pattern strength. According to the assumptions in Sec. <ref>, when the model capacity and data strength are the same, O(| ℱ_t^a|_C/n) and O(| ℱ_t^b|_C/n) are equivalent. Therefore, we can eliminate O(| ℱ_t^a|_C/n) and O(| ℱ_t^b|_C/n) from both sides of Eq. (<ref>) without altering its outcome. Regarding the approximation error term, it reflects the difference between the output of the neural network or the KD model and the target. In a neural network model, the model has only one objective: to minimize the difference between the network's predictions and the one-hot encoded target. Here, ε_ta and ε_tb represent the approximation errors when training the neural network model under modality a and modality b, respectively. Under the conditions of Assumption 2, when the intensities of the data modalities are the same, the errors between the ground truth and the predictions trained under modality a and under modality b are the same. Therefore, we can further eliminate ε_ta and ε_tb on both sides of Eq. (<ref>). In the KD model, the model has two objectives: first, to minimize the discrepancy between the student output and the one-hot target; second, to reduce the difference between teacher and student outputs. ε_l represents the unimodal KD model, while ε_m represents the multimodal KD model.
For the error generated by the first objective, regardless of whether the prediction results come from the unimodal or multimodal model, they are compared with the ground truth. When both Assumption 1 and Assumption 2 are satisfied, the approximation errors of the models is the same and can therefore be offset against each other. For the error generated by the second objective, due to the different targets of the unimodal and multimodal models (the unimodal model aims to approximate the prediction results under modality a, while the multimodal model aims to approximate the prediction results under modality b), they cannot be offset against each other. For the unimodal model, if the training time is sufficient and Assumption 1 is met, the error will approach zero because the input modalities of the teacher and student networks are the same. In contrast, due to the modality differences in the multimodal model, the error is larger and cannot be ignored. Based on the above analysis, we simplify Eq. (<ref>) by canceling O(| F_t^a|_C/n) with O(| F_t^b|_C/n) and ε_ta with ε_tb on both sides, ignoring ε_l, and retaining ε_m, ultimately reducing Eq. (<ref>) to: ε_m<0. Since ε_m uses the KL divergence for optimization in cross-modal KD, and according to Gibbs' inequality, the KL divergence is nonnegative, ε_m must also satisfy the following: ε_m=KL(F_t^a||F_s^b)≥ 0. By comparing Eq. (<ref>) with Eq. (<ref>), we arrive at a contradictory conclusion, indicating that the condition for cross-modal KD to outperform unimodal KD cannot hold. In other words, when both Assumption 1 and Assumption 2 are satisfied, the performance of cross-modal KD cannot exceed that of unimodal KD. Thus, we obtain the lower bound for the approximation error in cross-modal KD, which corresponds to unimodal KD. In this case, unimodal KD can be regarded as a special case of cross-modal KD, where there are no modality differences. Next, we will further derive the upper bound of the approximation error ε_m in cross-modal KD. According to <cit.>, KL divergence can be decomposed into a part related to the target class distribution and a part related to the non-target class distribution, denoted by KL(p^ta||p^sb) and KL(p̂^ta||p̂^sb) respectively. The decomposed result is represented as: ε_m=KL(F_t^a||F_s^b)=p_k^talog(p_k^ta/p_k^sb)+∑_i=1,i k^Cp_i^talog(p_i^ta/p_i^sb) =p_k^talog(p_k^ta/p_k^sb)+p_\ k^ta∑_i=1,i k^Cp̂_i^ta(log(p̂_i^ta/p̂_i^sb)+log(p_\ k^ta/p_\ k^sb)) =p_k^talog(p_k^ta/p_k^sb)+p_\ k^talog(p_\ k^ta/p_\ k^sb)_KL(p^ta||p^sb)+p_\ k^ta∑_i=1,i k^Cp̂_i^talog(p̂_i^ta/p̂_i^sb)_KL(p̂^ta||p̂^sb) =KL(p^ta||p^sb)+p_\ k^ta· KL(p̂^ta||p̂^sb) , where [ p_i^ta = σ (f_t(x_i^a)) = exp (f_t(x_i^a))/∑_j=1^Cexp (f_t(x_j^a)),; p_i^sb = σ (f_s(x_i^b)) = exp (f_s(x_i^b))/∑_j=1^Cexp (f_s(x_j^b)),; p_\ k^ta = σ (f_\ k(x_i^a)) = ∑_i=1, i k^Cexp (f_t(x_i^a))/∑_j=1^Cexp (f_t(x_j^a)).; ] Based on Eq. (<ref>), the upper bound of the approximation error in cross-modal KD can be obtained as follows: ε_m=KL(p^ta||p^sb)+p_\ k^ta· KL(p̂^ta||p̂^sb) ≤KL(p^ta||p^sb)_TCKL+KL(p̂^ta||p̂^sb)_NCKL. According to Eq. (<ref>), the upper bound of the approximation error in cross-modal KD is composed of two parts: one part is determined by the distribution error of the target classes, referred to as TCKL; the other part is determined by the distribution error of the non-target classes, referred to as NCKL. Compared to TCKL, NCKL involves the probability distributions of more classes, thereby increasing its complexity and uncertainty. 
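The decomposition above can be checked numerically. The probability vectors in the following sketch are hypothetical and serve only to illustrate the identity KL = TCKL + p_\ k^ta·NCKL and the resulting upper bound; none of the values come from the experiments reported later.

```python
import numpy as np

def decompose_kl(p_t, p_s, k, eps=1e-12):
    # split KL(p_t || p_s) into the target-class (binary) part TCKL and the
    # non-target part NCKL over the renormalized non-target distributions
    bt = np.array([p_t[k], 1.0 - p_t[k]])
    bs = np.array([p_s[k], 1.0 - p_s[k]])
    tckl = np.sum(bt * np.log((bt + eps) / (bs + eps)))
    mask = np.ones_like(p_t, dtype=bool)
    mask[k] = False
    qt = p_t[mask] / (1.0 - p_t[k])              # \hat{p}^{ta}
    qs = p_s[mask] / (1.0 - p_s[k])              # \hat{p}^{sb}
    nckl = np.sum(qt * np.log((qt + eps) / (qs + eps)))
    return tckl, nckl

# hypothetical teacher/student predictions over C = 5 classes
rng = np.random.default_rng(1)
p_teacher = rng.dirichlet(np.ones(5))
p_student = rng.dirichlet(np.ones(5))
k = int(np.argmax(p_teacher))                    # treat the teacher's top class as the target
tckl, nckl = decompose_kl(p_teacher, p_student, k)
full_kl = np.sum(p_teacher * np.log(p_teacher / p_student))
# identity KL = TCKL + (1 - p_t[k]) * NCKL, and the bound KL <= TCKL + NCKL
print(full_kl, tckl + (1.0 - p_teacher[k]) * nckl, tckl + nckl)
```

In such a toy run the identity holds to floating-point precision; the gap between the upper bound and the exact value is p_k^ta·NCKL, so the bound is tighter when the teacher's target-class confidence is lower.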
It can be proven that when the number of non-target classes exceeds two, NCKL becomes significantly larger than TCKL. To demonstrate this, consider the expressions for TCKL and NCKL: TCKL=p_k^talog(p_k^ta/p_k^sb), NCKL=∑_i=1,i≠ k^Cp̂_i^talog(p̂_i^ta/p̂_i^sb). The ratio of NCKL to TCKL can be derived from Eqs. (<ref>) and (<ref>), as shown in NCKL/TCKL=∑_i=1,i≠ k^Cp̂_i^talog(p̂_i^ta/p̂_i^sb)/p_k^talog(p_k^ta/p_k^sb). To estimate NCKL/TCKL, we perform a Taylor series expansion and approximate by neglecting higher-order infinitesimal terms. Let p̂_i^sb=p̂_i^ta+ε_i and p_k^sb=p_k^ta+ε_t, where ε_i and ε_t are small quantities. After the Taylor series expansion, the result of log( p̂_i^ta/p̂_i^sb) is as follows: log( p̂_i^ta/p̂_i^sb)≈log( 1-ε_i/p̂_i^ta)≈ -ε_i/p̂_i^ta-1/2( ε_i/p̂_i^ta)^2. Therefore, NCKL and TCKL can be approximated by: NCKL ≈∑_i≠ kp̂_i^ta( -ε_i/p̂_i^ta-1/2( ε_i/p̂_i^ta)^2) = -∑_i≠ kε_i-1/2∑_i≠ kε_i^2/p̂_i^ta, TCKL≈ p_k^talog( p_k^ta/p_k^ta+ε_t)≈ -p_k^taε_t/p_k^ta=-ε_t. From Eqs. (<ref>) and (<ref>), NCKL/TCKL can be approximated by: NCKL/TCKL≈( -∑_i≠ kε_i-1/2∑_i≠ kε_i^2/p̂_i^ta)/( -ε_t)≥ (C-1)·ε_i/ε_t. Based on Assumptions 1 and 2, the prediction errors of the teacher and student networks on the target class are comparable, so ε_t is much smaller than ε_i. When C-1 is sufficiently large, NCKL/TCKL will be much greater than 1. In summary, we have proven that in cross-modal KD, when the number of classes C is greater than 2, NCKL is significantly larger than TCKL. Therefore, the distribution differences among the non-target classes are the key factors that influence the effectiveness of cross-modal KD. When the distribution differences among non-target classes decrease, the upper bound of the approximation error will correspondingly reduce, thereby enhancing the effectiveness of cross-modal KD. § EXPERIMENTS In this section, we validate the hypothesis through experiments conducted on five multimodal datasets. First, we introduce the datasets and network models used in the experiments. Next, we elaborate on the experimental design and methodology. Finally, we assess the validity of the hypothesis in the context of traditional KD <cit.>. Additionally, we demonstrate the broad applicability and practical value of NTDH. §.§ Datasets The five multimodal datasets encompass three types: a simulated dataset (Scikit-learn), a synthetic dataset (MNIST/MNIST-M), and three real-world datasets (RAVDESS, SemanticKITTI, and NYU Depth V2). To ensure that the experimental datasets adhere to the assumptions outlined in Section <ref> and to mitigate the effects of data modality strength and model capacity on the results, we perform preprocessing. The adjustments to model capacity and modality strength are summarized in Table <ref>. §.§.§ Scikit-learn The Scikit-learn dataset is generated using the Scikit-learn toolkit, which allows us to precisely control the differences and strengths between modalities, ensuring that the data fully meets Assumptions 1 and 2. Specifically, we use the make_classification function from the toolkit to generate multi-class data, where the labels are determined by the generated feature vector x, whose length is d. Next, let x^a∈ℝ^d, x^b∈ℝ^d, and y constitute the multimodal data (x^a,x^b,y). Each modality is formed by a portion of the decisive features of x combined with noise z∈ℝ^d_z. Specifically, x^b=[ x_d_0:d_b,z ]∈ℝ^d, x^a=[ x_d_0:d_t,x_d_b:d_a,z ]∈ℝ^d, and d_a+d_t=2·d_b.
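A minimal sketch of this two-modality construction is given below. The dimension values and variable names are illustrative assumptions; the ratio γ used in the comments is the quantity introduced in the next sentence.

```python
import numpy as np
from sklearn.datasets import make_classification

# illustrative sizes: d decisive features, d_b per modality, overlap d_t
d, d_b, d_t = 20, 10, 5
d_a = 2 * d_b - d_t                  # so that d_a + d_t = 2 * d_b
n, C = 1000, 4

# x carries the label-determining (decisive) features
x, y = make_classification(n_samples=n, n_features=d, n_informative=d,
                           n_redundant=0, n_classes=C, n_clusters_per_class=1,
                           random_state=0)

rng = np.random.default_rng(0)
def pad_noise(block):
    # pad each modality with Gaussian noise z so that it has length d
    z = rng.normal(size=(n, d - block.shape[1]))
    return np.concatenate([block, z], axis=1)

x_b = pad_noise(x[:, 0:d_b])                                            # modality B
x_a = pad_noise(np.concatenate([x[:, 0:d_t], x[:, d_b:d_a]], axis=1))   # modality A
# both modalities use d_b decisive features, only d_t of which are shared,
# so gamma = (d_a - d_b) / d_b controls the domain gap between A and B
print(x_a.shape, x_b.shape, (d_a - d_b) / d_b)
```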
We define a ratio γ =d_a-d_b/d_b ∈ [0,1], which characterizes the ratio of domain difference between modalities. As γ increases, the domain difference between data modalities also increases. In the experiment, x^b∈ℝ^d is fixed, and γ is set to γ =[0,0.25,0.5,0.75,1], constructing data with varying modality domain differences, as shown in Fig. <ref>(a). §.§.§ MNIST/MNIST-M MNIST <cit.> is a widely used dataset for handwritten digit recognition, containing grayscale images of digits from 0 to 9, each paired with a corresponding label. Each image has a resolution of 28x28 pixels, with the dataset comprising 60,000 training images and 10,000 test images. MNIST-M <cit.> is derived from the MNIST digits by randomly mixing colored patches from the BSDS500 <cit.>, creating a different modality of the same handwritten digits. Since the accuracy performance of MNIST and MNIST-M varies under the same model, to satisfy Assumption 2, which requires that the data strengths of both modalities be the same, we add noise of varying intensities into MNIST and MNIST-M, as shown in Fig. <ref>(b). §.§.§ RAVDESS The RAVDESS <cit.> is a dataset used for multimodal emotion recognition, containing data in both audio and video modalities. The dataset consists of speech and song recordings by 24 actors (12 male and 12 female), covering emotional categories such as neutral, happy, angry, fearful, disgusted, surprised, and sad. RAVDESS includes a total of 1,440 files and is widely used in research on emotion recognition, multi-modal learning, and human-computer interaction systems. In our study, we use the video modality as the input for the student network, while the audio modality serves as the input for the teacher network. §.§.§ SemanticKITTI SemanticKITTI <cit.> is a large-scale dataset based on the KITTI odometry benchmark, providing 43,000 scans with point-wise semantic annotations, of which 21,000 scans (sequences 00-10) are available for training and validation. The dataset includes 19 semantic categories and is used for the evaluation of point cloud semantic segmentation benchmarks. To enable different modalities to be processed by the same network, we follow <cit.> and project the original 3D point cloud data into the camera coordinate system, resulting in 2D point cloud features, as shown on the left side of Fig. <ref>(c). Additionally, in point cloud semantic segmentation, the image modality lacks dense semantic labels, leading to significantly higher segmentation accuracy for the point cloud modality compared to the image modality. To equalize the data strength, we obtain dense semantic labels for the image modality following the method described in <cit.>, as shown on the right side of Fig. <ref>(c). §.§.§ NYU Depth V2 The NYU Depth V2 dataset, introduced by Silberman et al. in 2012, is collected using Microsoft Kinect's RGB and depth cameras from commercial and residential buildings across three cities in the United States. The dataset consists of 1,449 densely annotated pairs of RGB and depth images, covering 464 distinct indoor scenes across 26 scene categories. In this study, the RGB data serve as the input for the teacher network, while the depth images are used as input for the student network, and experiments are conducted in the context of the semantic scene completion task. §.§ Network Model We fine-tune the existing network models to serve as the teacher and student models for our experiments. 
Specifically, for the Scikit-learn, both the teacher and student networks use a 2-layer fully connected network; for the MNIST/MNIST-M, both employ a 3-layer fully connected network; for the RAVDESS, due to the differing shapes of audio and image data, both the teacher and student networks utilize a 2D convolutional network with the same number of layers, and the kernel sizes are adjusted to accommodate the different data modalities. In the SemanticKITTI, the overall network architecture proposed by <cit.> is adopted, with the intermediate fusion layer removed; similarly, in the NYU Depth V2, the network architecture proposed by <cit.> is used, and the fusion layer is also removed. §.§ Experiment Plan To verify the validity of our hypothesis, we design a detailed experimental plan. We employ a divide-and-conquer approach by splitting the hypothesis into two parts and validating each separately. For the first part, we use a weight adjustment method to verify that the distribution differences among non-target classes are the primary factors affecting the effectiveness of KD. This is done by adjusting the weight coefficients in the loss function that account for the distribution differences between target and non-target classes. For the second part, we apply a masking method to remove features or samples that have a significant impact on the distribution differences among non-target classes, observing whether this can improve the performance of cross-modal KD. The experimental plan is illustrated in Fig. <ref>, with the detailed steps as follows: §.§.§ The First Part We observe whether the distribution differences among non-target classes are the primary factors affecting KD by altering the weights in the loss function that optimize the distribution differences between target and non-target classes. We refer to this method as the weight adjustment method. Specifically, we first decompose the KL divergence-based KD loss function into TCKL and NCKL according to Eqs. (<ref>) and (<ref>). TCKL aims to reduce the distribution gap between the teacher and student for target classes, while NCKL focuses on reducing the distribution gap for non-target classes. Next, we assign weight coefficients α and β to TCKL and NCKL, respectively. Finally, by adjusting these weight coefficients, we either increase or decrease the influence of the distribution divergence of non-target classes. §.§.§ The Second Part In the first part of the study, we confirm that the distribution differences among non-target classes are a key factor affecting the effectiveness of cross-modal KD. To further explore the relationship between non-target distribution differences and KD performance, we propose a mask-based method. This method ranks the features or samples based on the non-target distribution differences. After ranking, we only use the top-ranked features or samples during the distillation process to reduce the non-target distribution differences. If this method improves the performance of KD, it validates our hypothesis. For smaller datasets, such as the Scikit-learn dataset, MNIST/MNIST-M, and RAVDESS, we apply the feature mask, while for larger datasets, such as SemanticKITTI and NYU Depth V2, we use the sample mask. The specific algorithm is as follows: Feature Mask: The main steps of the feature mask method are shown in Algorithm 1. The inputs are x^a∈ℝ^n×d_1, x^b∈ℝ^n×d_2, and y∈ℝ^n, representing n pairs of features from modalities a and b, as well as the corresponding labels for these n targets. 
The output is a saliency vector p∈ℝ^d_1, which represents the difference in non-target class distribution between the teacher and student networks, where the i-th entry p_i∈ [0,1] reflects the saliency of the corresponding feature dimension. A larger saliency value indicates a greater difference in non-target class distribution for that feature channel. Algorithm 1 is designed based on a backtracking approach starting from the output layer. Specifically, in Step 4, we jointly train two unimodal networks, f_a and f_b, which take unimodal data x_a and x_b as inputs, respectively. The first term in Eq. (<ref>) aims to align the feature spaces learned by the two networks, while the remaining terms ensure that the learned features accurately predict the labels. In Step 5, we utilize the idea of feature importance ranking <cit.> to trace the saliency of input features with respect to non-target class distribution differences. For the i-th dimension in x_a, we randomly permute the values along that dimension and obtain the permuted result x̃_a in Step 8. Next, in Step 10, we quantify the difference in non-target class distribution for each input feature channel by calculating the NCJSD, where a larger distance indicates a more significant difference between the teacher and student networks. Therefore, we can quantify the magnitude of the non-target class distribution difference in each input feature channel using the saliency vector p. We repeat the permutation process multiple times and average the distance values to improve stability. Finally, in Step 13, p is normalized to [0,1]. Once the feature channels are ranked, we can reduce the non-target class distribution differences during distillation by applying a mask to the top γ% of the feature channels. Sample Mask: The main steps of the feature masking method are illustrated in Algorithm 2. The inputs to Algorithm 2 are σ (f_t(x_i^a)) and σ (f_s(x_i^b)), representing the predicted probability distributions of the teacher and student networks, respectively. The output is a saliency vector P∈ℝ^n, which indicates the difference in non-target class distribution between the teacher and student networks, where the i-th entry p_i∈ [0,1] reflects the saliency of the corresponding sample. A larger saliency value indicates that the sample exhibits a greater difference in non-target class distribution. Here, n represents the number of samples, such as the number of points in each batch for the SemanticKITTI or the number of pixels in each batch for the NYU Depth V2. In Step 6, we calculate the NCJSD for all samples in each batch. In Step 8, we sort these distances, where larger distances indicate more significant differences in non-target class distribution between the teacher and student networks. Therefore, we can quantify the magnitude of the non-target class distribution difference for each sample using the saliency vector p. Finally, in Step 12, p is normalized to [0,1]. Once the sorting of all samples in the batch is completed, we can reduce the distribution differences between the non-target classes during distillation by applying a mask to the top γ% of the samples. §.§ The Results of the First Part §.§.§ Scikit-learn The results of the Scikit-learn dataset are shown in Fig. <ref>. The horizontal axis represents the modality difference γ, where γ = 0 indicates no difference and γ = 1 indicates the maximum difference. The experiment sets six levels of modality divergence. 
By adjusting the weights α and β, we study the impact of distribution divergence between target and non-target classes on the effectiveness of KD. In Fig. <ref>(a), we fix α and increase β to give more weight to the target class under the six different modality differences. Correspondingly, in Fig. <ref>(b), we fix the β and increase the α to give more weight to the distribution divergence of non-target classes. By comparing the experimental results, it can be observed that when γ =1, indicating a large modality difference, increasing α significantly improves the effectiveness of KD, whereas increasing β significantly reduces the effectiveness. This indicates that, in cases of large modality differences, the distribution divergence of non-target classes play a crucial role in the effectiveness of KD. §.§.§ Other dataset On the MNIST/MNIST-M datasets, we employ the feature mask, fixing the β parameter while gradually increasing the α to amplify the weight of the distribution divergence of non-target classes. We then fix the α and increase the β to enhance the weight of target class distribution differences. The results show that when the weight of non-target class distribution divergence increases, accuracy significantly decreases, while increasing the weight of target class distribution divergence results in almost no noticeable change. This indicates that the distribution divergence of non-target classes is a crucial factor influencing the effectiveness of cross-modal KD. To further validate this phenomenon, similar experiments are conducted on the RAVDESS, SemanticKITTI, and NYU Depth V2 datasets, and consistent results are obtained, as shown in Figs. <ref>(b)-(d), respectively. These consistent results across different datasets further confirm our hypothesis. §.§ The Results of the Second Part §.§.§ Scikit-learn By applying Algorithm 1 to deactivate the top γ of features with the highest NCJSD, we reduce the non-target distribution divergence between the teacher and student networks, which is referred to as the True Mask. For control purposes, we establish two additional groups: one that deactivates the top γ of features with the lowest NCJSD, thereby increasing the non-target distribution divergence between the teacher and student networks, is known as the False Mask; the other randomly deactivates γ of features, referred to as the Random Mask. Fig. <ref> shows the results for γ = 0.5 and γ = 0 on the Scikit-learn dataset. The 0-th layer features of the teacher network (i.e., the input data) are selected as the masking object, with nine different masking ratios ranging from 10% to 90%. For the True Mask, a higher removal ratio indicates that more features with greater non-target distribution divergence are removed from the teacher network. Conversely, for the False Mask, the situation is reversed. The results show that for the True Mask, the performance of the student network initially improves with increasing γ%. This improvement suggests that features with significant non-target distribution divergence between the teacher and student networks are being progressively discarded, thereby enhancing the performance of the student network. However, as the deactivation process starts to affect features with smaller non-target distribution divergence, performance begins to decline. In contrast, the False Mask performs the worst, while the Random Mask results fall between the two. This observation is consistent with our expectations. 
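For reference, the three masking schemes compared above (True, False, and Random Mask) can be sketched as follows, given a saliency vector p produced by Algorithm 1. The function and variable names are illustrative, not those of the original implementation.

```python
import numpy as np

def build_feature_mask(p, ratio, scheme="true", rng=None):
    # 'true'  : deactivate the top `ratio` fraction of channels with the LARGEST
    #           non-target divergence (NCJSD) saliency,
    # 'false' : deactivate the fraction with the SMALLEST saliency,
    # 'random': deactivate a random fraction of channels.
    rng = rng if rng is not None else np.random.default_rng(0)
    d = p.shape[0]
    k = int(round(ratio * d))
    mask = np.ones(d)
    if scheme == "true":
        drop = np.argsort(p)[-k:]      # largest saliency values
    elif scheme == "false":
        drop = np.argsort(p)[:k]       # smallest saliency values
    else:
        drop = rng.choice(d, size=k, replace=False)
    mask[drop] = 0.0
    return mask

# toy usage with a hypothetical saliency vector for 10 feature channels
p = np.linspace(0.0, 1.0, 10)
for scheme in ("true", "false", "random"):
    print(scheme, build_feature_mask(p, ratio=0.3, scheme=scheme))
```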
§.§.§ MNIST/MNIST-M and RAVDESS Similarly, we also conduct experiments on the MNIST/MNIST-M and RAVDESS datasets using Algorithm 1, and reach similar conclusions, as shown in Figs. <ref>(a) and (b). Additionally, we use heatmaps to visualize the masked areas on the MNIST/MNIST-M datasets, as shown in Fig. <ref>. For the False Mask, the highlighted regions are mainly concentrated in the non-target areas, with fewer in the target areas; for the True Mask, the situation is the opposite, with the highlighted regions primarily focused on the target areas and fewer in the non-target areas. From these results, it is observed that the target areas are mainly concentrated in the foreground, while the non-target areas are concentrated in the background, which aligns with our previous analysis. §.§.§ SemanticKITTI and NYU Depth V2 We use Algorithm 2 on the SemanticKITTI and NYU Depth V2 datasets, as shown in Figs. <ref>(a) and (b). The results indicate that the effectiveness of KD initially improves and then declines as samples with large non-target class distribution divergence are removed. Initially, removing these samples enhances KD because their significant distribution differences hinder the process. However, as the removal proportion γ% increases, samples with smaller distribution divergences are also eliminated, leading to a decrease in KD effectiveness in later stages. Furthermore, we visualize the samples with significant distribution divergence in non-target classes on the SemanticKITTI dataset, as shown in Fig. <ref>. The highlighted areas, primarily concentrated along object edges, indicate that distribution divergence is greatest in these regions. The reason for this is that in segmentation tasks, the edges are typically the most challenging to distinguish, leading to the most pronounced divergence in non-target class distributions between different modalities. §.§ Generalization Beyond traditional KD <cit.>, various improved algorithms have been proposed. To evaluate the generalization ability of NTDH, we apply the feature masking method to several of these improved KD algorithms, including FitNet <cit.>, Contrastive Representation Distillation (CRD) <cit.>, Relational Knowledge Distillation (RKD) <cit.>, Probabilistic Knowledge Transfer for deep representation learning (PKT) <cit.> , Similarity-Preserving KD (SP) <cit.> and Decoupled Knowledge Distillation (DKD) <cit.>. Table <ref> presents the experimental results on the MNIST/MNIST-M dataset. The results show that none of the methods improve student performance, indicating that KD methods designed for unimodal scenarios are not suitable for cross-modal applications. However, when we use Algorithm 1 to remove portions with significant non-target class distribution divergence, especially at smaller removal proportions (e.g., 25% and 50%), all algorithms show significant performance improvements. In contrast, random and false masking lead to substantial performance declines, highlighting the critical role of non-target class differences in cross-modal KD. Table <ref> presents the experimental results on the RAVDESS dataset. The results show that cross-modal KD fails to improve student performance, highlighting the limitations of these methods in cross-modal applications. However, using Algorithm 1 to remove portions with significant modal differences, particularly at lower removal proportions (e.g., 25% and 50%), leads to performance improvements in most algorithms. 
In contrast, control experiments with Random and False masking result in significant performance declines, underscoring the decisive role of non-target class distribution divergence in cross-modal KD. §.§ Applications This section further explores the practical application of NTDH. According to this hypothesis, in cross-modal KD, if the non-target class distribution difference between Teacher (a) and the student is smaller than that between Teacher (b) and the student, then we expect the student guided by Teacher (a) to outperform the student guided by Teacher (b). To reduce the non-target class distribution difference between the teacher and student networks, the method in Algorithm 2 can be applied. To validate the effectiveness of this method, we apply it to existing cross-modal KD algorithms, such as 2DPASS <cit.> and PMF <cit.>, and conduct tests on the SemanticKITTI dataset. Table <ref> shows the experimental results, indicating that appropriately masking samples with significant non-target distribution divergence (e.g., 25% and 50%) can improve performance. For 2DPASS, the maximum improvement reached 2.0%. By comparing the change curves of NCJSD and TCJSD distances before and after masking, as shown in Fig. <ref>(a), it is evident that the non-target distribution divergence significantly decreases, while the target distribution divergence changes only slightly. This confirms that the performance improvement is due to the reduction of non-target distribution differences. In contrast, the improvement in PMF is smaller because the Perception-Aware Loss in the original paper has already removed samples with large modality differences, leading to insignificant changes in NCJSD and TCJSD distances before and after masking, as shown in Fig. <ref>(b). § CONCLUSION In this work, we delve into cross-modal KD and introduce the NTDH method, which analyzes the impact of domain gaps in multimodal data on KD performance, highlighting the importance of non-target class distribution divergence. Through theoretical analysis and carefully designed experiments, we validate the rationale, generalization capability, and potential applications of NTDH. We aim for NTDH to provide valuable insights for the practical use of cross-modal KD and to foster interest in understanding domain discrepancies in multimodal learning. However, as this paper focuses on the effects of domain discrepancies on cross-modal KD, the exploration of theoretical applications is limited. IEEEtran
The Interaction of Moving QQ̄ and QQq in the Thermal Plasma
Xuan Liu, Sheng Lin, Xun Chen
arXiv:2409.02791v1 [hep-ph]
School of Nuclear Science and Technology, University of South China, Hengyang 421001, China School of Nuclear Science and Technology, University of South China, Hengyang 421001, China chenxun@usc.edu.cn School of Nuclear Science and Technology, University of South China, Hengyang 421001, China Key Laboratory of Quark and Lepton Physics (MOE), Central China Normal University, Wuhan 430079,China § ABSTRACT The strength of the force between heavy quarks is studied for heavy quarkonium (QQ) and doubly heavy baryons (QQq) at finite temperature and rapidity using the gauge/gravity duality in this paper. The strength of the interaction is defined as an effective running coupling α from lattice. By considering the QQ and QQq moving through the thermal quark-gluon plasma, we find that the interaction of heavy quarks for QQ is small and remains almost constant when L<0.2 fm. The strength of the interaction for QQq is always less than that for QQ and approaches a constant when L<0.4 fm. The α and the maximum of effective running coupling α̃ for both QQ and QQq decrease with increasing temperature or rapidity. Through comparison, we find that QQq is more sensitive to changes in temperature and rapidity, and the interaction force between quarks is always less than that of QQ, indicating that QQ is more stable than QQq in the presence of temperature and rapidity. The Interaction of Moving 𝐐𝐐̅ and QQq in the Thermal Plasma Xun Chen September 9, 2024 =========================================================== § INTRODUCTION The running coupling constant serves as a crucial metric for measuring the strength of the strong interaction between quarks, and the force between quarks is essential for our understanding of QCD and the QCD phase transition. When quarks are confined within hadrons, the force between them increases with the separation distance until the string breaks, resulting in the creation of new quark-antiquark pairs, thereby preventing the isolation of free quarks. QCD predicts that deconfinement occurs under extreme conditions, where color screening happens at long distances between quarks, leading to the emergence of relatively free quarks <cit.>. A system composed of such deconfined particles is referred to as a Quark-Gluon Plasma (QGP) <cit.>. Moreover, the property of asymptotic freedom indicates that quarks behave as free particles at very high energy scales. These phenomena can be attempted to be explained through the running of the coupling constant. Additionally, the running coupling constant plays a key role in many important physical processes such as jet quenching, heavy quarkonium production, and Higgs particle production <cit.>. Lattice QCD defined an effective running coupling <cit.>, α_Q Q̅(r)=1/C_F r^2 ∂ E(r)/∂ r, to study the force between the static quark and antiquark. C_F=4 / 3 is the Casimir operator in the fundamental representation of SU(3) and E is the static energy of QQ. Creating QGP through heavy-ion collisions allows for the simulation of the extreme conditions of high temperature and rapid expansion present in the early universe <cit.>. This helps us to understand the evolution of the early universe and the properties of QCD. Heavy quarkonium is an important probe for studying conditions of extreme high temperature and rapid expansion. Moreover, in the recent LHCb experiment at CERN, researchers have discovered a new particle known as Ξ_cc^++ <cit.>. 
It is composed of two heavy quarks and one light quark, and its discovery has greatly increased interest in the study of doubly heavy baryon. The running coupling constant has been extensively studied in the Refs. <cit.>, as discussed through lattice calculations at finite temperatures in the Refs. <cit.>. It is well known that the QGP has been rapidly expanding since its inception. Therefore, rapidity is an unavoidable factor to consider when discussing the effective running coupling of particles. This paper attempts to reveal more information about the QGP by contrasting heavy quarkonium and doubly heavy baryon at finite temperatures and rapidities. This can aid in our understanding of particle transport in the QGP and the plasma's effect on particle properties. Moreover, the interaction forces between quarks in the QGP can also reflect the state of the QGP to a certain extent, such as determining whether it is a strongly coupled QGP (sQGP) or a weakly coupled QGP (wQGP) <cit.>. Lattice gauge theory remains the fundamental tool for studying non-perturbative phenomena in QCD, yet its application to doubly heavy baryon has been relatively limited <cit.>. Gauge/gravity duality offers a new avenue for probing strongly coupled gauge theories. Originally, Maldacena <cit.> proposed the gauge/gravity duality for conformal field theories, but it was subsequently extended to encompass theories akin to QCD, thereby establishing to some extent a linkage between string theory and heavy-ion collisions <cit.>. Research on moving heavy quarkonium can be found in lattice <cit.>, effective field theory <cit.>, perturbative QCD <cit.>, S matrix <cit.>, and holographic QCD <cit.>. Moreover, the multi-quark potential obtained through effective string model is in good agreement with lattice results <cit.>. In this paper, we primarily investigate the effective running coupling of heavy quarkonium and doubly heavy baryon at finite temperature and rapidity. The rest of the paper is organized as follows: in the Sec. <ref>, we determine the holographic parameters by fitting their lattice potentials, and present the state diagram of particles in the T-η plane. In the Sec. <ref>, we discuss the effective running coupling of heavy quarkonium and doubly heavy baryon, including the effective running coupling with distance, and with temperature and rapidity, as well as the impact of rapidity on the temperature dependence of the effective coupling constant. Additionally, we examine the effects of temperature and rapidity on their screening distances. In the Sec. <ref>, we provide a summary of this paper. § PRELIMINARIES The effective string holographic models have been proposed by Andreev recently. These models can not only describe the potential of heavy quarkonium <cit.>, but also describe the potential of exotic hadrons <cit.>. The purpose of this study is to reveal the properties of QQ and QQq based on the effective string model by investigating the effective running coupling and the screening distance of their motion in a thermal medium. First, we present the metric ds^2 =w(r) (-f(r)dt^2+dx^2+1/f(r)dr^2 )+e^-𝐬r^2g_ab^(5)dω^adω^b, where w(r) =e^𝐬r^2R^2/r^2 , f(r) =1-r^4/r_h^4. The metric signifies a deformation of the Euclidean AdS_5×S_5 space, controlled by a single parameter 𝐬 and with a radius R = 1. In this work, 𝐬 is determined to be 0.41 GeV^2 by fitting the lattice potential of heavy quarkonium. Therefore, the metric is composed of an AdS_5 space and a five-dimensional compact space X with coordinates ω^a. 
The function f(r) is the blackening factor, which decreases within the interval [0, r_h], where r_h represents the black hole horizon (brane). Additionally, when hadrons are confined, an imaginary wall exists at r_w on the r-axis <cit.>. The Hawking temperature T associated with the black hole is given by: T=1/4π | df/dr |_r=r_h=1/π r_h. In this paper, we consider a particle moving at a rapidity η along the x_3 direction in a thermal medium at temperature T. It is known that the particle is also subject to drag force in the medium, but this is not the main focus of our discussion <cit.>. We can assume that the particle is at rest, while the thermal medium moves relative to it at rapidity η, which can be considered as a "thermal wind" blowing past the particle in the x_3 direction <cit.>. Through a Lorentz transformation, we can provide a new background metric: ds^2=w(r) (-g_1(r)dt^2 -2sinh(η)cosh(η)(1-g_1(r)/g_2(r))dx_3dt +g_3(r)dx_3^2+dx_1^2+dx_2^2+g_2(r)/g_1(r)dr^2)+e^-𝐬r^2g_ab^(5)dω^adω^b, where g_1(r) =f(r)cosh^2(η)-sinh^2(η), g_2(r) =g_1(r)/f(r), g_3(r) =cosh^2(η)-f(r)sinh^2(η). The Nambu-Goto action of a string is S_NG = - g ∫ dξ^0 dξ^1√(- g_ab), where the g_ab is an induced metric (ξ^0, ξ^1) are worldsheet coordinates, and g is related to the string tension. Based on the AdS/CFT correspondence, we know that the vertex corresponds to a five brane <cit.>, the baryon vertex action is S_vert = τ_5∫ d^6ξ√(γ^(6)), where τ_5 is the brane tension and ξ^i are the world-volume coordinates. Since the brane is wrapped on the compact space X, it appears point-like in AdS_5. We choose a static gauge where ξ^0 = t and ξ^a = θ^a, with θ^a being the coordinates on X. Consequently, the action is: S_vert=τ_v∫ dte^-2𝐬r^2/r√(g_1(r)), where τ_v is a dimensionless parameter defined by τ_v=τ_5Rvol(X) and vol(X) is a volume of X. Finally, we consider the light quarks at the endpoints of the string as a tachyon field, which is coupled to the worldsheet boundary through S_q=∫ dτ𝐞T, where T(x, r) is a scalar field that describes open string tachyon, τ is a coordinate on the boundary, and 𝐞 is the boundary metric <cit.>. We consider only the case where T(x, r)=T_0 and the worldsheet boundary is a line in the t direction, in which case the action can be written as: S_q=m∫ dte^𝐬r^2/2/r√(g_1(r)), This action represents a particle of mass T_0 at rest, with a medium at temperature T moving past it at a rapidity η. §.§ The potential and effective running coupling of heavy quarkonium In this paper, we investigate the scenario where the direction of motion is perpendicular to the heavy quark pair, with the heavy quark pair located at x_1 and the rapidity η along the x_3 direction. For brevity, x will be used to denote x_1 in the following. Then we choose the static gauge ξ^0=t, ξ^1=x, and the action of QQ can be written as: S_QQ =g_QQ∫_0^t∫_-L/2^L/2 w(r)√(g_1(r)+g_2(r)(∂_xr)^2) dx =g_QQt∫_-L/2^L/2 w(r)√(g_1(r)+g_2(r)(∂_xr)^2) dx, where the g_QQ = 0.176. And the boundary condition of r(x) is r(± L/2)=0, r(0)=r_0, (∂_xr|_r=r_0)^2=0. By substituting into the Euler-Lagrange equation, we can obtain: ∂_xr=√(w(r)^2g_1(r)^2-g_1(r)w(r_0)^2g_1(r_0)/w(r_0)^2g_1(r_0)g_2(r)). The distance between heavy quark and anti-quark is: L=2∫_0^r_0∂_rx dr, where ∂_rx=∂ x/∂ r=1/∂_xr. By E=S/t and after normalizing, we can obtain the potential of QQ, E_QQ=2g_QQ(∫_0^r_0(w(r)√(g_1(r)(∂_rx)^2+g_2(r))-1/r^2)dr-1/r_0). We consider the pattern of QQ̅ string breaking as QQ⟶Qq+Qq. 
It is easy to know that E_break= E_Qq+E_Qq=2g_QQ(∫_0^r_q(w(r)√(g_2(r))-1/r^2)dr-1/r_q+ne^sr_q^2/2/r_q√(g_1(r_q))). At T=0, η=0, we obtain E_break=1.057 GeV which means that string breaking occurs for the QQ̅ at E_QQ=1.057 GeV, resulting in a breaking distance of L_break=1.23 fm. We present a comparison between the potential obtained by the model and the lattice potential in Fig. <ref>. Therefore, the effective running coupling of QQ from lattice QCD is given by <cit.> α_QQ=3L^2/4dE_QQ/dL. Then we present its basic behavior in Fig. <ref>. The effective coupling constant of QQ is very small and increases only slightly at small scales when L < 0.2 fm, which can explain the phenomenon of asymptotic freedom exhibited by quarks at short distances. At larger scales, the obvious increase of the effective coupling constant with distance also corresponds to our understanding of the strong interaction force. We discuss in detail its effective running coupling properties in moving thermal media in the subsequent sections. §.§ The potential and effective running coupling of doubly heavy baryon The double heavy baryon consists of two heavy quarks and one light quark, which introduces a baryon vertex. When considering a string configuration for the ground state of QQq, it is natural due to symmetry to place the light quark between the two heavy quarks <cit.>. The QQq potential depends only on the distance L between the heavy quarks. In the Ref. <cit.>, the QQq potential is also fitted to a function similar to the Cornell potential: E_QQq=σ_effL-A_eff/L+C_eff, where the first term represents the confinement potential, and the second term comes from one-gluon exchange <cit.>. Due to the octet nature of gluons, the factor 4/3 is included in A_eff. As the distance between the heavy quark pair increases, its string configuration will change, as shown in Fig. <ref>. And the three configurations from left to right are called: small L, intermediate L, and large L. In the small L configuration, the action is composed of three strings, the baryon vertex, and the light quark. After r_v=r_q, it transitions to the intermediate L configuration, where the position of the light quark coincides with the baryon vertex, and the action is constituted by two strings, the baryon vertex, and the light quark. When the string at the baryon vertex transitions from a "convex" to a "concave" shape, it becomes the large L configuration. At this configuration, the composition of the action is consistent with the intermediate L. Therefore, the total action at small L is: S=∑_i=1^3 S_NG^(i)+S_vert+S_q. Then we choose the static gauge where ξ^0=t, ξ^1=r, and the boundary conditions of x(r) is: x(0)=± L/2, x(r_v)=x(r_q)=0, (∂_rx)^2=^2(α), r=r_v (∂_rx)^2=0, r∈ (r_v,r_q]. And we obtain the total action as S=g_QQqt (2∫_0^r_v w(r)√(g_1(r)(∂_rx)^2+g_2(r)) dr+ ∫_r_v^r_q w(r)√(g_2(r))dr +3ke^-2𝐬r^2/r√(g_1(r))+ne^𝐬r^2/2/r√(g_1(r))), where k=τ_v/3g_QQq, n=m/g_QQq. By substituting the first term of the action into the Euler-Lagrange equation, we obtain ∂_rx=√(w(r_v)^2g_1(r_v)^2g_2(r)/w(r)^2g_1(r)^2(g_1(r_v)+g_2(r_v)tan^2(α))-g_1(r)w(r_v)^2g_1(r_v)^2). Furthermore, there must be a balance of forces at the light quark and the baryon vertex, with the light quark site satisfying f_q+e_3'=0, and the vertex site satisfying f_v+e_3+e_1+e_2=0. 
These forces are obtained by taking the variation of their action: e_1 =g_QQqw(r_v)(-√(g_1(r_v)f(r_v)/f(r_v)+tan^2α), -√(g_1(r_v)/f(r_v)^2^2(α)+f(r_v))), e_2 =g_QQqw(r_v)(√(g_1(r_v)f(r_v)/f(r_v)+tan^2α), -√(g_1(r_v)/f(r_v)^2^2(α)+f(r_v))), e_3 =g_QQqw(r_v)(0, √(g_2(r_v))), e_3' =g_QQqw(r_q)(0, -√(g_2(r_q))), f_q =(0, -g_QQqn∂_r_q(e^𝐬r_q^2/2/r_q√(g_1(r_q)))), f_v =(0, -3g_QQqk∂_r_v(e^-2𝐬r_v^2/r_v√(g_1(r_v)))), it is evident that when T and η are fixed, the force at the light quark site depends only on r_q, while the force at the vertex involves only two unknowns, r_v and α. Therefore, we can determine the value of r_q and the function of α in terms of r_v, and from these, we can derive the potential of small L. The potential, with the divergent terms eliminated through normalization, is represented as: E_small=g_QQq (2∫_0^r_v(w(r)√(g_1(r)(∂_rx)^2+g_2(r))-1/r^2)dr-2/r_v+∫_r_v^r_q w(r)√(g_2(r))dr +3ke^-2𝐬r_v^2/r_v√(g_1(r_v))+ne^𝐬r_q^2/2/r_q√(g_1(r_q)))+c. At intermediate L, the total action is given by S=∑_i=1^2 S_NG^(i)+S_vert+S_q, and the boundary conditions of x(r) become x(0)=± L/2, x(r_v)=0, (∂_rx|_r=r_v)^2=^2(α). And only the forces at the vertices need to be considered: f_v+f_q+e_1+e_2=0, Since r_q = r_v, we only need to replace r_q with r_v in the force. Similarly, we can obtain its potential at intermediate L as: E_intermediate=g_QQq (2∫_0^r_v(w(r)√(g_1(r)(∂_rx)^2+g_2(r))-1/r^2)dr-2/r_v +3ke^-2𝐬r_v^2/r_v√(g_1(r_v))+ne^𝐬r_v^2/2/r_v√(g_1(r_v)))+c. The distance between heavy quark pairs for small L and intermediate L is calculated using the following function L=2∫_0^r_v∂_rx dr. As Fig. <ref> shows the large L is special because the string has two additional smooth inflection points. For the string configuration at large L, we choose a new metric ξ^0=t, ξ^1=x, and now the boundary condition of r(x) is r(± L/2)=0, r(0)=r_v, (∂_xr)^2=tan^2(α), r=r_v (∂_xr)^2=0, r=r_0. For configurations at large L, the form of the force balance equation at the vertex remains consistent with that at intermediate L but introduces an additional unknown r_0, which can be determined through the properties of the first integral ℋ|_r=r_0 =w(r_0)√(g_1(r_0)), ℋ|_r=r_v =w(r_v)g_1(r_v)/√(g_1(r_v)+g_2(r_v)tan^2(α)), w(r_0)√(g_1(r_0)) =w(r_v)g_1(r_v)/√(g_1(r_v)+g_2(r_v)tan^2(α)). Furthermore we obtain ∂_xr=√(w(r)^2g_1(r)^2-g_1(r)w(r_0)^2g_1(r_0)/w(r_0)^2g_1(r_0)g_2(r)). For convenience in the calculation, we present the potential after equivalent transformation and normalization as follows E_large=g_QQq (2∫_0^r_v(w(r)√(g_1(r)(∂_rx)^2+g_2(r))-1/r^2)dr+2∫_r_v^r_0 w(r)√(g_1(r)(∂_rx)^2+g_2(r))dr -2/r_v +3ke^-2𝐬r_v^2/r√(g_1(r_v))+ne^𝐬r_v^2/2/r_v√(g_1(r_v)))+c. The distance at large L should be L=2(∫_0^r_v∂_rx dr+∫_r_v^r_0∂_rx dr). Clearly, the complete potential is pieced together from three potential functions, but we do not need to focus on which configuration it belongs to. Therefore, in the following text, the potential of QQq will be collectively referred to as E_QQq. The parameters determined through fitting the lattice points for the QQq potential are: 𝐬=0.41 GeV^2, k=-0.321, n=2.941, g_QQq=0.082, c=0.73 GeV. The potential of QQq is shown in Fig. <ref>. The last point for small L is (r_v, L, E) = (1.1600, 0.3250, 1.0365), the starting point for intermediate L is (r_v, L, E) = (1.1602, 0.3251, 1.0366), and its last point is (r_v, L, E) = (1.4132, 0.7174, 1.2282). The starting point for large L is (r_v, L, E) = (1.4150, 0.7233, 1.2310), with no sudden change in the data. Based on the model of heavy quarkonium in Refs. 
<cit.>, similarly fitted to a Cornell-type potential for QQq, we can also determine the effective running coupling, α_QQq = 3L^2/4·dE_QQq/dL, to study the force between heavy quarks. This coupling is the inter-two-quark potential in baryons, which effectively includes the light-quark effects <cit.>. Moreover, it can be proven that the physical significance of Eq. (<ref>) and Eq. (<ref>) is the same, as both represent the effective coupling strength between heavy quarks. Thus, the effective running coupling of QQq is a function of the separation distance between the two heavy quarks, as shown in Fig. <ref>. It can be seen that at small scales, QQq also exhibits asymptotic freedom behavior, and its range (L< 0.4 fm) is broader compared to QQ. Moreover, the effective running coupling of QQq is always smaller than that of QQ and is almost half of it. This is very close to the relationship between A_QQq and A_QQ fitted on the lattice <cit.>, which is due to the presence of the light quark reducing the interquark force <cit.>. § NUMERICAL RESULTS AND DISCUSSION Before we proceed with the discussion, we need to establish the critical points (critical temperatures at different rapidities). At low temperatures and rapidities, QQ is confined, and an imaginary wall exists at r_w. The potential of the confined QQ increases with distance, but when it rises to a certain value, string breaking occurs, exciting a light quark–antiquark pair from the vacuum, with quarks always remaining confined within a hadron. At high temperatures and rapidities, QQ becomes deconfined, and the imaginary wall disappears. In this regime, there is a maximum quark separation distance. When this distance is exceeded, the QQ no longer forms a U-shaped string configuration but instead consists of two straight string segments extending from the boundary to the horizon <cit.>. At this distance, the potential also reaches its maximum, indicating that the QQ is screened; this distance is known as the screening length. Considering string breaking or screening as a characteristic to distinguish between confinement and deconfinement, we can obtain the critical point of QQ. The critical points for QQq can be determined similarly; however, QQq possesses an additional property. At high temperature and/or rapidity, QQq can dissociate at a certain distance. However, at extremely high temperature and/or rapidity, the QQq configuration cannot exist, as judged from the force balance. Based on the previous discussion, we can draw the state diagrams of QQ and QQq on the (T, η) plane as in Fig. <ref>. The detailed discussions about the state diagram have already been completed in our previous work <cit.>. §.§ Discussion about heavy quarkonium Next, we discuss the temperature dependence of the effective running coupling of QQ. From Fig. <ref>, it can be seen that the critical temperature is T_c=0.1345 GeV when η=0. We calculate the effective running coupling in the range of T/T_c ∈ [0, 3], as shown in Fig. <ref>. When T/T_c ∈ [0, 1], QQ is in a confined state and the effect of temperature on the effective running coupling is relatively small. Therefore, our discussion of the effective running coupling primarily concentrates on the deconfined state when T/T_c > 1. Additionally, string breaking can happen when T/T_c < 1. The detailed discussion of string breaking at finite temperature and rapidity can be found in our previous works <cit.>.
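The curves discussed in this section can be reproduced, at least qualitatively, from the integrals in Sec. <ref>. The sketch below uses the parameter values quoted in the text (𝐬 = 0.41 GeV^2, g_QQ = 0.176) but simplified numerics: the integrable endpoint singularities are handled crudely by standard quadrature, dE/dL is taken by finite differences, and unit conversion to fm is omitted, so it should be read as an illustration rather than the authors' implementation.

```python
import numpy as np
from scipy.integrate import quad

s, gQQ = 0.41, 0.176          # model parameters quoted in the text (s in GeV^2)

def alpha_QQ(T, eta, r0_grid):
    # crude sketch of alpha_QQ(L) = (3/4) L^2 dE/dL at temperature T and rapidity eta;
    # natural units, so L comes out in GeV^-1 here
    rh = 1.0 / (np.pi * T)                       # horizon radius from T = 1/(pi r_h)
    w  = lambda r: np.exp(s * r**2) / r**2
    f  = lambda r: 1.0 - r**4 / rh**4
    g1 = lambda r: f(r) * np.cosh(eta)**2 - np.sinh(eta)**2
    g2 = lambda r: g1(r) / f(r)

    def L_and_E(r0):
        w0g0 = w(r0)**2 * g1(r0)
        # dx/dr of the U-shaped string from the first integral
        drx = lambda r: np.sqrt(w0g0 * g2(r) / (w(r)**2 * g1(r)**2 - g1(r) * w0g0))
        L = 2.0 * quad(drx, 1e-6, r0, limit=200)[0]
        # regularized energy integrand; the endpoint singularity is integrable
        dE = lambda r: w(r) * np.sqrt(g1(r) * drx(r)**2 + g2(r)) - 1.0 / r**2
        E = 2.0 * gQQ * (quad(dE, 1e-6, r0, limit=200)[0] - 1.0 / r0)
        return L, E

    pts = np.array([L_and_E(r0) for r0 in r0_grid])
    L, E = pts[:, 0], pts[:, 1]
    alpha = 0.75 * L[1:-1]**2 * (E[2:] - E[:-2]) / (L[2:] - L[:-2])
    return L[1:-1], alpha

# example: turning points r0 well below the horizon at T = 0.15 GeV, eta = 0
L, a = alpha_QQ(T=0.15, eta=0.0, r0_grid=np.linspace(0.4, 1.6, 25))
print(np.round(L, 3), np.round(a, 4))
```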
From the left panel of Fig. <ref>, it can be observed that the higher the temperature, the smaller the effective running coupling, while at small scales the dependence of the effective running coupling on temperature is minor. This is understandable due to the asymptotic freedom of quarks at small scales. Additionally, the effect of temperature on the effective running coupling primarily manifests itself in the maximum effective coupling constant α̃_QQ (T, η): as the temperature increases, the maximum effective coupling constant decreases and the maximum coupling distance L_α̃_QQ shrinks. The impact of rapidity on the effective running coupling is similar to that of temperature, as shown in the right panel of Fig. <ref>. We choose a lower temperature, T=0.1412 GeV (when η=0, T=1.05T_c), to observe the full influence of rapidity on the effective running coupling. It can be seen that the greater the rapidity, the smaller the effective running coupling and the smaller the maximum effective coupling constant. Furthermore, it is noteworthy that the screening distance L_screen for QQ is greater than the maximum coupling distance L_α̃_QQ. Here we define Δ L= L_screen-L_α̃_QQ. The maximum effective coupling constant as a function of T/T_c (η=0) and η/η_c (T=0.1 GeV) is shown in Fig. <ref>. From the left graph, it is apparent that for T/T_c < 1.5 the maximum effective coupling constant decreases rapidly as the temperature increases, and it becomes relatively flat for T/T_c > 3. The rapidity dependence of the maximum effective coupling constant is qualitatively similar to that of temperature; however, as a function of rapidity it exhibits a more gradual decline and a broader falling region (η/η_c <2). We present the screening distance and the maximum coupling distance in Fig. <ref>. The screening distance decreases with increasing temperature or rapidity, and its slope also decreases with increasing temperature or rapidity. The screening distance as a function of T/T_c behaves similarly to that observed in Ref. <cit.>. Δ L increases with temperature and then becomes rather flat, while as a function of η/η_c it displays a declining trend for η/η_c >3.5. Next, we select three sets of data from the critical points in Fig. <ref> for comparison: (η, T_c)=(0, 0.1345), (0.6, 0.123), (1.2, 0.0985). From Fig. <ref> and Fig. <ref>, it can be seen that although both show the maximum effective coupling constant or the screening distance as a function of T/T_c, there are slight differences at different rapidities. Specifically, the larger the rapidity, the smaller the maximum effective coupling constant and the screening distance. This suggests that the larger the rapidity, the stronger the dependence of the maximum effective coupling constant and the screening distance on temperature. §.§ Discussion about doubly heavy baryon As before, we first present the temperature dependence of the effective running coupling for QQq, as shown in the left panel of Fig. <ref>, where η=0, T_c=0.1245 GeV. The rapidity dependence of the effective running coupling is shown in the right panel of Fig. <ref>, where T=0.1245 GeV. The temperature and rapidity values reach up to the maximum values for which QQq can exist, T ∈ [0, 0.1435] (η=0) and η∈ [0, 0.751] (T=0.1245 GeV). As shown in the two figures, the temperature and rapidity dependence of the effective running coupling for QQq is essentially consistent with that of QQ.
The higher the temperature and rapidity, the lower the effective running coupling curve. Furthermore, the distance at which QQq reaches its maximum coupling is almost identical to the screening distance. We focus on the maximum effective coupling constant of QQq, which is displayed together with that of QQ as a function of T/T_c and η/η_c in Fig. <ref>, while the screening distances for QQq and QQ are shown in Fig. <ref>. The graphs indicate that the maximum coupling constant of QQq decreases quickly with increasing temperature or rapidity, and it consistently remains much lower than that of QQ, showing a greater sensitivity to both temperature and rapidity. However, QQq is subject to a limit from the maximum (T, η), and its curve ends as the maximum effective coupling constant begins to level off. Similarly, the screening distance for QQq is invariably shorter than that of QQ, but it diminishes more quickly. Additionally, since QQq is always within the descending region, its decrease with rapidity is consistently less pronounced than that with temperature. § SUMMARY In this work, we first determined the parameters of our model by fitting the lattice potentials for QQ and QQq. Subsequently, we calculated the strength of the interaction, defined as the effective running coupling on the lattice. We discovered that, as a function of the distance between the quarks, the effective coupling constant of QQ is always noticeably larger than that of QQq, and its rate of increase with distance is more pronounced for QQ than for QQq. Furthermore, the effective coupling constant for QQ is extremely low and flat within the range L < 0.2 fm, whereas for QQq this occurs approximately within L < 0.4 fm. We interpret this as a manifestation of asymptotic freedom in the behavior of the effective coupling constants. Additionally, both effective coupling constants, as functions of distance, decrease with increasing temperature or rapidity, which is particularly evident at long distances, especially in the maximum effective coupling constant α̃. The effective coupling constants at small scales remain almost unchanged at finite temperature or rapidity. Based on the holographic model, we have conducted a detailed analysis of the effective running coupling at finite temperature and rapidity in this paper. From the above discussion, we can infer that the QQq system is less stable than the QQ̅ system in the presence of temperature and rapidity. Furthermore, the study of the effective running coupling of the QQq system at finite magnetic field and under rotation can be conducted in future work. § ACKNOWLEDGMENTS This work is supported by the NSFC under Grant No. 12405154 and by the Open Fund for Key Laboratories of the Ministry of Education under Grant No. QLPL2024P01. § REFERENCES
http://arxiv.org/abs/2409.02191v1
20240903180131
$Z^\prime$-mediated dark matter freeze-in at stronger coupling
[ "Giorgio Arcadi", "David Cabo-Almeida", "Oleg Lebedev" ]
hep-ph
[ "hep-ph" ]
September 9, 2024 a,b]Giorgio Arcadi, a,b,c,d]David Cabo-Almeida, e]Oleg Lebedev [a]Dipartimento di Scienze Matematiche e Informatiche, Scienze Fisiche e Scienze della Terra, Universita degli Studi di Messina, Viale Ferdinando Stagno d'Alcontres 31, I-98166 Messina, Italy [b]INFN Sezione di Catania, Via Santa Sofia 64, I-95123 Catania, Italy [c]Departament de Física Quàntica i Astrofísica, Universitat de Barcelona, Martí i Franquès 1, E08028 Barcelona, Spain [d]Institut de Ciències del Cosmos (ICCUB), Universitat de Barcelona, Martí i Franquès 1, E08028 Barcelona, Spain [e]Department of Physics and Helsinki Institute of Physics, Gustaf Hällströmin katu 2a, FI-00014 Helsinki, Finland giorgio.arcadi@unime.it david.cabo@ct.infn.it oleg.lebedev@helsinki.fi We study freeze-in production of fermionic dark matter mediated by a Z^' gauge boson. In particular, we explore the regime of Boltzmann-suppressed production, when the Standard Model (SM) thermal bath temperature never exceeds the dark matter mass. The corresponding gauge coupling is then required to be significant, up to order one. As a result, this class of freeze-in models can be probed by the current and future direct dark matter detection experiments. Z^'-mediated dark matter freeze-in           at stronger coupling [ September 9, 2024 ====================================================================== § INTRODUCTION Cosmological dark matter remains one of the most mysterious entities in modern physics. The most common approach to this problem is to postulate the existence of a stable particle which has no quantum numbers with respect to the Standard Model gauge symmetries and whose gravitational effects we observe. Its further properties depend on the coupling to the observed particles as well as itself. If such couplings are very small, the dark matter field never reaches thermal equilibrium and its abundance accumulates over time due to its slow production by the SM thermal bath <cit.>. This mechanism is known as “freeze-in” <cit.>. The freeze-in mechanism is predictive only if other production channels are negligible. Generally, there are further non-thermal DM production mechanisms which are operative in the Early Universe <cit.>. Most importantly, all particles are produced by gravity itself during and after inflation. For example, particle production immediately after inflation, during the inflaton oscillation epoch, can be very efficient due to the presence of the Planck-suppressed operators that couple the inflaton ϕ to DM such as <cit.> 1 M_ Pl^2 ϕ^4 s^2  , 1 M_ Pl ϕ^2 χ̅χ , where s is scalar DM and χ is fermion DM. These operators are generated by both classical and quantum gravitational effects. The resulting abundance of DM exceeds the observed value by many orders of magnitude unless the corresponding Wilson coefficients happen to be very small or DM is extremely light <cit.>. One way to address this problem is to allow for a long inflaton-dominated expansion period before reheating. The DM quanta produced immediately after inflation are relativistic and their energy density gets diluted by expansion in a non-relativistic background. At the reheating stage, the resulting DM abundance is then small <cit.>. This implies that the reheating temperature T_R is relatively low, being only limited from below, T_R > 4MeV, by observations <cit.>. It opens the possibility that the SM bath temperature has never been higher than the DM mass and the freeze-in production is Boltzmann suppressed <cit.>. 
The SM-DM coupling required for producing the correct DM relic abundance can then be as large as order one, which facilitates direct searches for dark matter via DM-nucleon scattering and in collider experiments. The purpose of our current work is to explore this idea in the context of a Z^' extension of the Standard Model <cit.>. The role of dark matter is played by a Dirac fermion which couples to a Z^' and whose production by the SM thermal bath is mediated by the Z^'. We delineate parameter space leading to the correct DM relic abundance, while satisfying the current direct detection and collider constraints. We find that the Z^'-mediated freeze-in can be probed by direct DM detection experiments as long as the Z^' has significant “vector” couplings to the SM fermions. §.§ The model We study an extension of the Standard Model with a massive Z^' boson and a Dirac fermion χ, which has no SM quantum numbers and plays the role of dark matter. The relevant couplings are ℒ⊃-m_χχ̅χ-1/2 M^2_Z^' Z_μ^' Z^'μ+χ̅γ^μ(V_χ-A_χγ_5) χ Z_μ^'+∑_f f̅γ^μ(V_f-A_f γ_5) f Z_μ^' , Here f represents the SM fermions; V_f,χ and A_f,χ are the vectorial and axial couplings. These are related to the Z^' gauge coupling g_Z^' and the charges of the left- and right-handed fermions X_f_L,X_f_R as V_f=g_Z^'(X_f_L+X_f_R)/2 and A_f=g_Z^'(X_f_L-X_f_R)/2. We assume that the Z^' is the only portal between the SM and dark matter, focussing on the TeV scale M_Z^'. We also take its mixing with the SM gauge bosons to be negligible for our purposes. The U(1) extensions of the SM are subject to the anomaly cancellation constraint, which generally necessitates the presence of additional fermions. Such fermions, however, can be vector-like with respect to the SM gauge charges and hence have large masses making their impact on the DM phenomenology insignificant. A similar approach was taken in related work <cit.> (see also <cit.>). In what follows, we study freeze-in production of dark matter χ mediated by the Z^'. Our main assumption is that the SM thermal bath temperature T is always below m_χ. This means, in particular, that the reheating temperature T_R is sufficiently low, allowing for dilution of the gravitationally produced relics, as explained above. To compute the DM production, we resort to the instant reheating approximation, e.g. take T to increase abruptly from zero to T_R and then decrease as usual, in accordance with the SM entropy conservation. This gives a good estimate of the DM abundance in cosmological models with a flat SM temperature profile before reheating <cit.>. Using appropriate rescaling and replacing T_R with the maximal SM sector temperature, we can extend our results to a broader class of models. Earlier work on the Z^'-mediated freeze-in, which assumes a high reheating temperature, can be found in <cit.>. § DARK MATTER RELIC ABUNDANCE In this section, we study freeze-in dark matter production in the Boltzmann-suppressed regime T≪ m_χ. Dark matter is produced via annihilation of the SM thermal bath particles, e.g. f̅ f → Z^'→χ̅χ, and its density n= n_χ + n_χ̅ is controlled by the Boltzmann equation, ṅ + 3Hn =2 Γ ( SM→χ̅χ) - 2 Γ ( χ̅χ→ SM ) , where the factor of 2 accounts for production of 2 DM states, χ and χ̅, and Γ is the reaction rate per unit volume. 
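For orientation, the relation between the chiral charges and the vector and axial couplings can be encoded in a few lines of Python; this is only a bookkeeping sketch, and the charge values used in the example are illustrative rather than fixed by the model.

def vector_axial(g_zp, x_left, x_right):
    # V_f = g_Z' (X_fL + X_fR) / 2,  A_f = g_Z' (X_fL - X_fR) / 2
    return 0.5 * g_zp * (x_left + x_right), 0.5 * g_zp * (x_left - x_right)

# Illustrative (hypothetical) charge assignments:
print(vector_axial(0.1, 1.0, 1.0))    # X_L = X_R   -> purely vector coupling
print(vector_axial(0.1, 1.0, -1.0))   # X_L = -X_R  -> purely axial coupling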
Since dark matter is non-relativistic at T ≪ m_χ, the a→ b reaction rate is given by Γ_a→ b = ∫( ∏_i∈ ad^3 p_i (2 π)^3 2E_i f(p_i)) ( ∏_j∈ bd^3 p_j (2 π)^3 2E_j) | M_a→ b|^2   (2π)^4 δ^4(p_a-p_b) , where p_i and p_j are the initial and final state momenta, respectively, and f(p) is the momentum distribution function. M_a→ b is the QFT a → b transition amplitude, in which both the initial and final state phase space symmetry factors are absorbed. For the 2→ 2 reactions, energy conservation combined with the Boltzmann statistics f(p)=e^-E/T implies f (p_1) f(p_2) = f(p_3) f (p_4) . Therefore, the DM production rate via thermal SM states can be written as the thermal DM annihilation rate into SM quanta, Γ ( SM→χ̅χ) = Γ ( χ̅χ|_ therm→ SM) . The latter can be computed following Gelmini and Gondolo <cit.>, Γ (χ̅χ|_ therm→ SM ) = ⟨σ(χ̅χ|_ therm→ SM ) v_r ⟩ n_χ n_χ̅ = 2^2 (2π)^6 ∫σ v_r e^-E_1/T e^-E_2/T d^3 p_1 d^3 p_2 = 2^3π^2 T (2π)^6 ∫_4m_χ^2^∞ d s σ ( s- 4 m_χ^2) √( s) K_1 (√( s)/T) , where σ is the χ̅χ→ SM cross section; v_r is the relative velocity of the colliding quanta with energies E_1,E_2 and momenta p_1,p_2; n_χ is the χ number density; ⟨ ... ⟩ denotes a thermal average; s is the Mandelstam variable, and K_1(x) is the modified Bessel function of the first kind. We have explicitly factored out the spin d.o.f. factor 2^2 specific to fermion annihilation (n_χ= 2 e^-E/T). Let us focus for now on a heavy Z^' regime, M_Z' > 2m_χ. It is instructive to consider in detail the limit M_Z' ≫ m_χ, m_f, where f represents the SM fermions. This allows us to obtain simple analytical approximations in the pure freeze-in limit. In our numerical studies however, we do not resort to this approximation and use the exact (tree-level) results together with the backreaction term in (<ref>). In what follows, we consider the vector and axial Z^' cases separately. §.§ Vector coupling In most of the parameter space, the main production/annihilation channel is χ̅χ↔f̅ f , where f is an SM fermion and contributions of all the fermions lighter than χ should be summed. The DM annihilation via a vector Z^' is allowed already at the s-wave level and the corresponding cross section for m_χ≫ m_f is σ (χ̅χ→f̅ f) ≃V_f^2 V_χ^2 m_χ^2 2π M_Z'^4 √(s s-4m_χ^2) . The reaction rate is found from Eq. <ref>. In the regime m_χ≫ T, we may use the asymptotic form K_1 (√( s)/T) ≃√(π 2)T^1/2 s^1/4 e^-√(s) /T. When computing the integral over s, one can use the following approximation: since the integrand peaks sharply close to s= 4m_χ^2, one may replace s→ 4m_χ^2 in “slow” functions of s, while keeping the factors √( s-4m_χ^2) and e^-√(s) /T as they are. The resulting integral reduces to the Gamma function such that ∫_4m_χ^2^∞ d s √( s-4m_χ^2) e^-√(s) /T≃√(π) 2 (4 m_χ T)^3/2 e^-2m_χ /T . The resulting DM production rate via SM fermion annihilation according to (<ref>) is Γ (f̅ f →χ̅χ) = V_f^2 V_χ^2 m_χ^5 T^3 2π^4 M_Z'^4 e^-2m_χ /T . In the pure freeze-in regime, the Boltzmann equation (<ref>) reduces to ṅ + 3nH =2 Γ (f̅ f →χ̅χ). It can be solved analytically for simple reheating scenarios. For definiteness, we resort to the instant reheating approximation, that is, we assume that the SM sector temperature increases abruptly from zero to T_R. After that it is determined by entropy conservation, as usual. Integrating the Boltzmann equation from T=T_R to T=0, one finds that n∝ T^3 at late times and, due to the exponential suppression of the reaction rate, the production is dominated by the initial moments when T ≃ T_R. 
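The Boltzmann-suppressed approximation for the production rate can be compared with a direct numerical evaluation of the thermal integral above. The following Python sketch does this for the vector case in the heavy-Z', massless-fermion limit and for a single Dirac fermion species; the benchmark values of the couplings, masses and temperature are illustrative assumptions.

import numpy as np
from scipy.integrate import quad
from scipy.special import kn

# Illustrative benchmark (all in GeV); m_chi / T = 20 as in the regime discussed here.
Vf, Vchi = 0.1, 0.1
m_chi, M_zp, T = 1000.0, 5000.0, 50.0

def integrand(x):
    # sigma(s) (s - 4 m_chi^2) sqrt(s) K_1(sqrt(s)/T) ds, rewritten in x = sqrt(s);
    # with the threshold cross section this equals C * s * sqrt(s - 4 m_chi^2) * K_1 * 2x.
    s = x * x
    C = Vf**2 * Vchi**2 * m_chi**2 / (2.0 * np.pi * M_zp**4)
    return C * s * np.sqrt(s - 4.0 * m_chi**2) * kn(1, x / T) * 2.0 * x

x0 = 2.0 * m_chi
raw, _ = quad(integrand, x0, x0 + 40.0 * T, epsabs=0.0, epsrel=1e-6)
gamma_exact = 8.0 * np.pi**2 * T / (2.0 * np.pi)**6 * raw
gamma_approx = (Vf**2 * Vchi**2 * m_chi**5 * T**3
                / (2.0 * np.pi**4 * M_zp**4) * np.exp(-2.0 * m_chi / T))
print(f"thermal integral     : {gamma_exact:.3e} GeV^4")
print(f"Boltzmann-suppressed : {gamma_approx:.3e} GeV^4")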
Using s_ SM= 2π^2 45 g_* T^3 ,  H = √(g_* π^2 90 ) T^2 M_ Pl , where g_* is the effective number of the SM d.o.f. and M_ Pl= 2.4 × 10^18 GeV, we find Y ≡n s_ SM≃∑_f 45 √(90) 4V_f^2 V_χ^2 π^7 g_*^3/2 m_χ^4 M_ Pl M_Z'^4 T_R e^-2m_χ /T_R , where the sum runs over all SM Dirac fermions f with the color multiplicity properly included.[The neutrinos contribute 1/2 of the Dirac fermion contribution. For V_f = V_χ =λ and m_χ > m_t, the sum over all the SM fermions amounts to a factor of 22.5.] Here we have assumed that g_* stays approximately constant in the regime of interest. For the universal couplings V_f = V_χ =λ, the observed Y=4.4 × 10^-10 GeV/m_χ requires λ≃ 10^-6 M_Z' T_R^1/4 m_χ^5/4 e^m_χ/(2T_R) for g_* ≃ 107. Since m_χ≫ T_R, the exponential factor can be large making λ as large as O (1). Nevertheless, the produced dark matter density is low and it does not thermalize, thereby justifying the assumption of the freeze-in regime. We note that the size of the coupling is determined primarily by m_χ /T_R and rather insensitive to a specific charge assignment, thus making the universal charge limit justified. The above result is obtained in the instant reheating approximation. As shown in <cit.>, this gives a good estimate of the required coupling within a larger class of models. If the SM temperature is constant before reheating, Eq. <ref> stands as it is, up to a small (percent level) shift in T_R. More generally, DM production peaks at the maximal temperature T_ max such that one replaces T_R → T_ max in the above formulas and, to account for entropy production between T=T_ max and T=T_R, rescales the coupling with the factor (T_R/T_ max)^κ, where κ is model dependent. The common feature of freeze-in models at stronger coupling is that the DM abundance exhibits the factor e^-2m_ DM /T_R times some power of the dark matter coupling <cit.>. As in the other models, we find that an order one coupling requires m_ DM∼ 20 T_R for typical parameter values. §.§ Axial coupling Consider now DM production via f̅ f →χ̅χ mediated by an axial Z^' <cit.>. To compute the reaction rate, we consider non-relativistic DM annihilation. In contrast to the previous case, in the m_f → 0 limit, the process is p-wave and hence less efficient. Neglecting the fermion masses, one finds σ (χ̅χ→f̅ f) ≃A_f^2 A_χ^2 12 π M_Z'^4 √(s( s-4m_χ^2)) . This approximation applies for heavy DM, m_χ≫ m_t, or in the regime where the top-quark channel is not available, m_χ < m_t. The reaction rate calculation proceeds as before, except the different velocity dependence leads to the integral ∫_4m_χ^2^∞ d s (s-4m_χ^2)^3/2 e^-√(s) /T≃3√(π) 4 (4 m_χ T)^5/2 e^-2m_χ /T instead of (<ref>). The result is Γ (f̅ f →χ̅χ) = A_f^2 A_χ^2 m_χ^4 T^4 2π^4 M_Z'^4 e^-2m_χ /T . Note that the rate is now proportional to T^4, unlike that in the vector case. The DM abundance is given by Y ≃∑_f 45 √(90) 4A_f^2 A_χ^2 π^7 g_*^3/2 m_χ^3 M_ Pl M_Z'^4 e^-2m_χ /T_R . For A_f = A_χ =λ and g_* ≃ 107, the required coupling is λ≃ 10^-6 M_Z' m_χ e^m_χ/(2T_R) . §.§ Additional channels For heavier DM, m_χ∼ M_Z', the decay and t-channel processes Z^'→χ̅χ ,  Z^' Z^'→χ̅χ become important. The Z^' number density receives Boltzmann suppression similar to e^-2m_χ /T, hence processes with the Z^' on-shell cannot be neglected. The Z^' gauge boson maintains thermal equilibrium with the SM bath due to its coupling to light fermions. The reaction f̅ f ↔ Z^' is very efficient for the coupling range of interest and faster than the rate of expansion of the Universe (see e.g. 
analogous calculations in <cit.>), thereby implying thermal equilibrium. The decay channel Z^'→χ̅χ dominates around M_Z'∼ 2m_χ, while the efficiency of the t-channel mode Z^' Z^'→χ̅χ depends on the nature of the Z^' couplings. In case of the axial couplings, the fermion annihilation mode is suppressed by m_f or DM velocity. On the other hand, the processes involving the longitudinal component of a Z^' at high energy are enhanced by E/M_Z'. Hence, one expects Z^' Z^'→χ̅χ to dominate for large m_χ. Fig. <ref> (left) shows the relative reaction rate contributions of the different production channels for the axial coupling case, produced with micrOMEGAs <cit.>. As expected, we observe that the t-channel annihilation becomes dominant for heavy dark matter. In the vector coupling case, on the other hand, the SM fermion annihilation mode is efficient and we find that it remains the main production channel at large m_χ. §.§ Dark matter thermalization The above considerations help us understand the qualitative behaviour of λ (m_χ, T_R) at low DM densities, when the term Γ ( χ̅χ→ SM ) in Eq. <ref> can be neglected. However, in our numerical analysis we use the micrOMEGAs tool <cit.> which takes into account all the channels as well as the DM annihilation effects. The latter lead to a qualitative change in the relic abundance calculations at larger coupling. As one increases λ, the DM density grows thus enhancing DM annihilation and eventually leading to its thermalization. This manifests itself in the freeze-in lines λ (m_χ, T_R) merging with the freeze-out curve λ (m_χ), as we show in the next section. Such thermalization has been studied in detail in <cit.> and entails a smooth FIMP-WIMP transition <cit.>. Fig. <ref> (right) illustrates the effect of the coupling increase on the eventual DM abundance. While at small λ, Y grows with the coupling, this ceases to be the case above a certain critical coupling, which is around λ∼ 0.02 in this example. We observe that the abundance decreases over time due to DM annihilation. At yet larger couplings, Y(T) follows its equilibrium value Y_ eq(T) for an extended period of time, before freeze-out. This signals DM thermalization. We note that, in order to compute Γ ( χ̅χ→ SM ), one needs to know the momentum distribution of dark matter. This reaction becomes important at large enough coupling and, thus, it is safe to assume that the SM-DM system is in kinetic equilibrium at that stage. Indeed, the elastic scattering reaction SM+χ→ SM+χ is more efficient than the annihilation one due to the higher density of the SM states. Hence, kinetic equilibrium sets in at smaller couplings. This is also the assumption adopted in micrOMEGAs . § CONSTRAINTS AND PARAMETER SPACE ANALYSIS In this section, we delineate parameter space of the model producing the correct DM relic abundance and discuss various constraints on the “universal” coupling λ. These include bounds on the Z^' interactions from collider experiments as well as constraints on dark matter from direct and indirect searches. §.§ LEP constraints A heavy Z^' can be integrated out leading to a set of 4-fermion interactions. The strictest constraints on the contact interactions are imposed by the 4-lepton operators ℓℓℓℓ, while the lepton-quark operators ℓℓ q q <cit.> are less important. The most significant observables are the muon decay constant, which is affected by the muon-electron operators, as well as the LEP lepton measurements <cit.>. 
The relevant constraints for the flavor-universal case can be read off from Fig. 3 (right) of <cit.>. The vector coupling (κ_L=κ_R) is constrained by λ M_Z' < 0.12 TeV at 95 % CL. The constraint on the axial coupling (κ_L=-κ_R) is somewhat weaker, λ M_Z' < 0.19 TeV at 95 % CL. The corresponding excluded regions are shown in Fig. <ref> in light orange, marked “LEP”. §.§ LHC constraints The LHC experiments are conducting direct searches for a new gauge boson Z^' in the channel pp →ℓ^+ ℓ^- X, where X represents the beam fragment jets. The constraints are obtained on the quantity σ (pp → Z^') BR(Z^'→ℓ^+ ℓ^- ), employing the narrow width approximation. We use the current data from ATLAS <cit.> based on LHC Run 2 at √(s)=13TeV with a total integrated luminosity of L = 139 fb^-1. The theory prediction for σ (pp → Z^') is computed using MadGraph <cit.>. As the first step, we reproduce the ATLAS limits on the Z_ψ^' model <cit.>. Then, we derive the bounds on the universal coupling λ in our model. The results are presented in Fig. <ref>: the light blue area marked Z^'→ℓℓ is excluded by the LHC. The typical bound on the Z^' mass for the SM-like couplings is around 5 TeV <cit.>. The constraint fades away very fast as M_Z' increases and above 6-7 TeV the LHC has no sensitivity to the extra gauge boson, at least at the perturbative level. This is unlike the other bounds (LEP, direct DM detection) which scale as a function of λ/M_Z'. §.§ Direct DM detection Direct DM detection experiments traditionally set the strongest constraints on dark matter models (see, e.g. a recent discussion in <cit.>). These are based on non-observation of any significant interaction of dark matter with nucleons. In our case, such an interaction is mediated by the Z^'. The DM-nucleon scattering occurs at low energy, hence the Z^' can be integrated out resulting in the effective Lagrangian ℒ_eff ⊃ 1/M_Z^'^2[V^χ(2 V^u+V^d) χ̅γ^μχ p̅γ_μ p. .+ A^χ(Δ_u^p A^u+(Δ_d^p+Δ_s^p) A^d) χ̅γ^μγ^5 χ p̅γ_μγ^5 p], where p represents the proton, and a similar expression applies to the neutron n. Δ_i^p,n stands for the spin content of quark i in the proton/neutron. According to <cit.>, Δ_u^p=0.842, Δ_d^p=-0.427, Δ_s^p=-0.085 , and the corresponding neutron quantities are obtained by the isospin symmetry. The resulting cross section for spin-independent scattering of DM on a nucleus with charge Z and atomic weight A is σ_χ N^SI=μ_χ N^2 ( V^χ)^2 /π M_Z^'^4[V^u(1+Z/A)+V^d(2-Z/A)]^2 , where μ_χ N is the reduced mass of the DM-nucleon system. This cross section must lie below the LZ 2022 and XENON1T bounds <cit.>, which imposes a significant constraint on the model parameters. For the case of the universal coupling λ, the direct detection constraint scales as λ^4/M_Z'^4, while σ_χ N^SI remains independent of the DM mass as long as m_χ≫ 1GeV. The axial couplings lead to the spin-dependent DM-nucleon scattering, which is only weakly constrained and the resulting bounds are unimportant for our analysis. The LHC searches impose the leading constraints in this case, at least for the M_Z' range of interest. In Fig. <ref>, we show the direct detection constraints on the universal coupling λ with the help of the micrOMEGAs tool. The grey regions marked by “LZ+XENON1T” are excluded.[ The LZ collaboration has updated its 2022 direct detection bound. According to the preliminary (unpublished) estimate, the bound has improved by a factor of 5 - 10, depending on the DM mass. This translates into a stronger constraint on λ by a factor of 1.5 - 1.8.] 
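To illustrate how the freeze-in requirement feeds into direct detection, one may combine the coupling estimate of the vector case with the cross section above. The Python sketch below does this for universal couplings V^u=V^d=V^χ=λ; the benchmark masses and reheating temperature are illustrative assumptions, and 1 GeV^-2 ≈ 3.894×10^-28 cm^2 is used for the unit conversion.

import numpy as np

# Illustrative benchmark (GeV units); lambda is the universal vector coupling
# required by freeze-in, lambda ~ 1e-6 M_Z' T_R^(1/4) / m_chi^(5/4) * exp(m_chi/(2 T_R)).
m_chi, M_zp, T_R = 1000.0, 5000.0, 50.0
m_N = 0.939                      # nucleon mass
GEV2_TO_CM2 = 3.894e-28          # 1 GeV^-2 in cm^2

lam = 1e-6 * M_zp * T_R**0.25 / m_chi**1.25 * np.exp(m_chi / (2.0 * T_R))

mu = m_chi * m_N / (m_chi + m_N)         # DM-nucleon reduced mass
# [V^u (1 + Z/A) + V^d (2 - Z/A)]^2 reduces to 9 lambda^2 for universal couplings
sigma_SI = mu**2 * lam**2 * 9.0 * lam**2 / (np.pi * M_zp**4)

print(f"freeze-in coupling   lambda   ~ {lam:.3f}")
print(f"spin-independent     sigma_SI ~ {sigma_SI * GEV2_TO_CM2:.2e} cm^2")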
We observe that for a heavy Z^', these constraints supersede the other bounds, while for a lighter Z^', the LHC searches set the strictest bounds. The figures also display the “neutrino fog” which represents the neutrino background for the direct DM detection experiments <cit.>. The standard techniques would not be able to distinguish the DM scattering from that of neutrinos, if λ falls below the neutrino fog curve. While the corresponding parameter space could be explored with innovative techniques, DM detection is challenging for such low couplings. Nevertheless, we see that large regions of the freeze-in parameter space, especially for a heavy Z^', lie above the neutrino background and thus can be probed by the current and future direct DM detection experiments such as XENONnT <cit.> and DARWIN <cit.>.[This property is shared by a class of freeze-in models with a light mediator <cit.> as well as the Higgs-portal-type WIMP models with a first order phase transition <cit.>. ] §.§ Indirect DM detection Indirect dark matter detection is based on possible signatures of its annihilation in regions with significant DM density, e.g. the Galactic center. The strictest bounds come from the Fermi-LAT <cit.> observation of 30 dSphs for 14.3 years. Interpreting these constraints in our model using micrOMEGAs, we find that they are superseded by the bounds discussed above and thus are insignificant. §.§ Discussion and summary The correct DM relic abundance is reproduced along the colored lines in Fig. <ref>. Each of them has a different reheating temperature T_R, which we also identify with the maximal SM bath temperature. We observe that, at sufficiently large coupling, all the freeze-in lines merge with the thermal black curve. This signals DM thermalization such that the relic abundance is controlled by the usual freeze-out. The corresponding parameter space is, however, ruled out experimentally, leaving only a very narrow resonance region m_χ≃ M_Z'/2. In addition to the constraints discussed above, we impose the perturbativity bound which excludes the red hatched area. In this region, the Z^' resonance becomes too broad such that Γ_Z' > 0.5 M_Z' and it ceases to be a “particle” in the conventional sense. While for a light Z^' this bound is superseded by other constraints, in the case of a heavy Z^', it becomes important, especially for the axial-type couplings. The axial coupling case is constrained mostly by collider observables, whereas the direct detection bounds are very weak. This also implies that an axial Z^'-induced freeze-in can hardly be probed experimentally. On the other hand, the vector case is more interesting and significant parts of the freeze-in parameter space can be probed by the direct detection experiments. For M_Z'=1TeV, the LHC constraints dominate, yet they leave a modestly sized region above the ν-fog line, which can therefore be probed by direct DM detection. The size of this area grows as M_Z' increases and at M_Z'=7TeV, the LHC bounds fade away. We observe that, as long as the Z^' couplings have a significant vector component, the direct DM detection prospects are good in the entire range of the DM mass considered, 10-10^4 GeV. The required gauge coupling is in the range 10^-3-10^-1 for a Z^' mass between 1 and 10 TeV. § CONCLUSIONS We have analyzed Z^'-mediated fermion dark matter production in the freeze-in regime at stronger coupling, when the SM bath temperature is below the DM mass. 
Models with a low reheating temperature are motivated by the problem of gravitational particle production which mars the usual freeze-in mechanism. In particular, Planck–suppressed operators coupling dark matter to the inflaton are very efficient in particle production immediately after inflation, which leads to a large initial abundance of dark matter. This problem can be solved by allowing for an extended period of inflaton-dominated expansion thereby diluting the initial DM abundance. As a result, the reheating temperature in this framework is relatively low, depending on further details. Using both analytic estimates and more sophisticated numerical tools, we find that the correct DM relic abundance can be produced for a broad range of the DM mass m_χ, assuming a TeV-scale Z^'. The main factor is the reheating temperature T_R and, for m_χ / T_R ∼ 20, the Z^' couplings can be as large as O(1). We distinguish the vector and axial coupling cases, which exhibit different phenomenology. While the DM production rates are similar in both cases, the constraints on the parameter space differ substantially. In particular, the direct DM detection results set strict bounds on the vector coupling, whereas the axial coupling remains essentially unconstrained. In our freeze-in framework, this implies that the vector case can be probed by the current and future direct detection experiments, which will reach the sensitivity at the level of the “neutrino fog”. The axial case, on the other hand, is very difficult to test due to the suppression of the DM-nucleus cross section. Further bounds on the parameter space are imposed by the direct Z^' searches at the LHC, LEP measurements of the lepton production and constraints from the muon decay. As the Z^' coupling increases, the DM production grows more efficient and the inverse reaction becomes significant. At some critical value of the coupling (depending on T_R), both reactions equilibrate and dark matter thermalizes. We observe this explicitly as the freeze-in relic abundance curves merge with the standard freeze-out line. In the vector coupling case, the corresponding region of the parameter space is ruled out by the direct DM detection. However, at lower couplings, the Boltzmann-suppressed freeze-in is still operative and produces the correct relic abundance while evading such constraints. For a range of the vector couplings, typically of order 10^-3-10^-1, the DM-nucleon cross section lies above the neutrino fog and thus can be tested in the near future. Needless to say, this does not require the Z^' to be purely “vectorial”, it only sets a lower bound on the vector component of the Z^' coupling. Our main conclusion is that, as long as the vector coupling of the Z^' is substantial, the freeze-in dark matter can be probed further (and possibly discovered) by direct DM detection experiments such as XENONnT and DARWIN. Acknowledgments The authors thank Francesco Costa for fruitful discussions. D.C.A. acknowledges funding from the Spanish MCIN/AEI/10.13039/501100011033 through grant PID2022-136224NB-C21. § GENERAL CROSS SECTIONS AND DECAY WIDTHS In this appendix, we provide some analytical formulas for the general case of the vector and axial couplings being present simultaneously. The DM annihilation cross section into a Standard Model Dirac fermion f is given by σ_χ̅χ→f̅ f = 1/12 π s[(s-M_Z^'^2)^2+M_Z^'^2 Γ_Z'^2]√(1-4 m_f^2 / s/1-4 m_χ^2 / s)[A_ f ^ 2 V_χ^2(s-4 m_f^2)(2 m_χ^2+s). 
+A_f^2A_χ^2(4 m_χ^2[m_f^2(7-6 s/M_Z^'^2+3 s^2/M_Z^'^4)-s] +s(s-4 m_f^2)) +V_f^2(2 m_f^2+s)(A_χ^2(s-4 m_χ^2)+V_χ^2(2 m_χ^2+s))] , where s is the Mandelstam variable. This result agrees with the corresponding cross section presented in <cit.>. The Z^' decay width is Γ_Z'=Γ_Z^'→χ̅χ+∑_fΓ_Z^'→f̅ f , with Γ_Z^'→χ̅χ=√(M_Z'^2-4 m_χ^2) (A_χ^2 (M_Z'^2-4 m_χ^2)+V_χ^2 (2 m_χ^2+M_Z'^2))/12 π M_Z'^2 , Γ_Z^'→f̅ f=√(M_Z'^2-4 m_f^2) (A_f^2 (M_Z'^2-4 m_f^2)+V_f^2 (2 m_f^2+M_Z'^2))/12 π M_Z'^2 . In the universal coupling case with a heavy Z^', the sum over the SM fermions amounts to a factor of 22.5, which includes 3×6 quark contributions, 3 charged lepton contributions and 3×1/2 neutrino terms. The branching ratio for the Z^'→ e^+ e^- decay approaches 1/23.5 in this limit. The t-channel Z^' Z^'→χ̅χ analytical results can be found in <cit.>, which we have also reproduced. Dodelson:1993je S. Dodelson and L. M. Widrow, Phys. Rev. Lett. 72, 17 (1994). Hall:2009bx L. J. Hall, K. Jedamzik, J. March-Russell and S. M. West, JHEP 03, 080 (2010). Lebedev:2022cic O. Lebedev, JCAP 02, 032 (2023). Lebedev:2022ljz O. Lebedev and J. H. Yoon, JCAP 07, no.07, 001 (2022). Koutroulis:2023fgp F. Koutroulis, O. Lebedev and S. Pokorski, JHEP 04, 027 (2024). Hannestad:2004px S. Hannestad, Phys. Rev. D 70 (2004), 043506. Cosme:2023xpa C. Cosme, F. Costa and O. Lebedev, Phys. Rev. D 109, no.7, 075038 (2024). Langacker:2008yv P. Langacker, Rev. Mod. Phys. 81 (2009), 1199-1228. Arcadi:2013qia G. Arcadi, Y. Mambrini, M. H. G. Tytgat and B. Zaldivar, JHEP 03 (2014), 134. Lebedev:2014bba O. Lebedev and Y. Mambrini, Phys. Lett. B 734 (2014), 350-353. Arcadi:2014lta G. Arcadi, Y. Mambrini and F. Richard, JCAP 03 (2015), 018. Cosme:2024ndc C. Cosme, F. Costa and O. Lebedev, JCAP 06 (2024), 031. Cosme:2021baj C. Cosme, M. Dutra, S. Godfrey and T. R. Gray, JHEP 09 (2021), 056. Gondolo:1990dk P. Gondolo and G. Gelmini, Nucl. Phys. B 360 (1991), 145-179. Koivunen:2024vhr N. Koivunen, O. Lebedev and M. Raidal, [arXiv:2403.15533 [hep-ph]]. Arcadi:2024wwg G. Arcadi, F. Costa, A. Goudelis and O. Lebedev, JHEP 07 (2024), 044. Belanger:2018ccd G. Bélanger, F. Boudjema, A. Goudelis, A. Pukhov and B. Zaldivar, Comput. Phys. Commun. 231, 173-186 (2018). Alguero:2023zol G. Alguero, G. Belanger, F. Boudjema, S. Chakraborti, A. Goudelis, S. Kraml, A. Mjallal and A. Pukhov, Comput. Phys. Commun. 299, 109133 (2024). Silva-Malpartida:2024emu J. Silva-Malpartida, N. Bernal, J. Jones-Pérez and R. A. Lineros, [arXiv:2408.08950 [hep-ph]]. Cheung:2001wx K. m. Cheung, Phys. Lett. B 517 (2001), 167-176. ALEPH:2006bhb J. Alcaraz et al. [ALEPH, DELPHI, L3, OPAL and LEP Electroweak Working Group], [arXiv:hep-ex/0612034 [hep-ex]]. ALEPH:2013dgf S. Schael et al. [ALEPH, DELPHI, L3, OPAL and LEP Electroweak], Phys. Rept. 532 (2013), 119-244. Falkowski:2015krw A. Falkowski and K. Mimouni, JHEP 02 (2016), 086. ATLAS:2019erb G. Aad et al. [ATLAS], Phys. Lett. B 796 (2019), 68-87. Alwall:2014hca J. Alwall, R. Frederix, S. Frixione, V. Hirschi, F. Maltoni, O. Mattelaer, H. S. Shao, T. Stelzer, P. Torrielli and M. Zaro, JHEP 07 (2014), 079. Bandyopadhyay:2018cwu T. Bandyopadhyay, G. Bhattacharyya, D. Das and A. Raychaudhuri, Phys. Rev. D 98 (2018) no.3, 035027. Arcadi:2024ukq G. Arcadi, D. Cabo-Almeida, M. Dutra, P. Ghosh, M. Lindner, Y. Mambrini, J. P. Neto, M. Pierre, S. Profumo and F. S. Queiroz, [arXiv:2403.15860 [hep-ph]]. HERMES:2006jyl A. Airapetian et al. [HERMES], Phys. Rev. D 75 (2007), 012007. LZ:2022lsv J. Aalbers et al. [LZ], Phys. Rev. Lett.
131 (2023) no.4, 041002 XENON:2018voc E. Aprile et al. [XENON], Phys. Rev. Lett. 121, no.11, 111302 (2018). Billard:2021uyg J. Billard, M. Boulay, S. Cebrián, L. Covi, G. Fiorillo, A. Green, J. Kopp, B. Majorovits, K. Palladino and F. Petricca, et al. Rept. Prog. Phys. 85, no.5, 056201 (2022). XENON:2020kmp E. Aprile et al. [XENON], JCAP 11, 031 (2020). DARWIN:2016hyl J. Aalbers et al. [DARWIN], JCAP 11, 017 (2016). Hambye:2018dpi T. Hambye, M. H. G. Tytgat, J. Vandecasteele and L. Vanderheyden, Phys. Rev. D 98, no.7, 075017 (2018). Boddy:2024vgt K. K. Boddy, K. Freese, G. Montefalcone and B. Shams Es Haghi, [arXiv:2405.06226 [hep-ph]]. Wong:2023qon X. R. Wong and K. P. Xie, Phys. Rev. D 108 (2023) no.5, 055035 McDaniel:2023bju A. McDaniel, M. Ajello, C. M. Karwin, M. Di Mauro, A. Drlica-Wagner and M. A. Sánchez-Conde, Phys. Rev. D 109 (2024) no.6, 063024 Berlin:2014tja A. Berlin, D. Hooper and S. D. McDermott, Phys. Rev. D 89 (2014) no.11, 115022 Klasen:2016qux M. Klasen, F. Lyonnet and F. S. Queiroz, Eur. Phys. J. C 77 (2017) no.5, 348
http://arxiv.org/abs/2409.03590v1
20240905144635
Proof of the refined Dubrovin conjecture for the Lagrangian Grassmanian $LG(2,4)$
[ "Fangze Sheng" ]
math.AG
[ "math.AG", "math.CA" ]
Refined Dubrovin conjecture for the Lagrangian Grassmanian LG(2,4)]Proof of the refined Dubrovin conjecture for the Lagrangian Grassmanian LG(2,4) School of Mathematical Sciences, University of Science and Technology of China, Hefei 230026, P. R. China fzsheng2000@mail.ustc.edu.cn § ABSTRACT The Dubrovin conjecture predicts a relationship between the monodromy data of the Frobenius manifold associated to the quantum cohomology of a smooth projective variety and the bounded derived category of the same variety. A refinement of this conjecture was given by Cotti, Dubrovin and Guzzetti, which is equivalent to the Gamma conjecture II proposed by Galkin, Golyshev and Iritani. The Gamma conjecture II for quadrics was proved by Hu and Ke. The Lagrangian Grassmanian LG(2,4) is isomorphic to the quadric in ℙ^4. In this paper, we give a new proof of the refined Dubrovin conjecture for the Lagrangian Grassmanian LG(2,4) by explicit computation. [ Fangze Sheng September 9, 2024 ===================== § INTRODUCTION The monodromy data of a semisimple Dubrovin–Frobenius manifold M <cit.> consist of four constant matrices (μ, R, S, C). Here, μ and R are given by monodromy data of the ODE system associated to the extended Dubrovin connection on M ×ℂ^* at z=0, the matrix S, called the Stokes matrix, describes asymptotic behaviours of solutions to the same system as |z| →∞, and the matrix C, called the central connection matrix, relates two canonically-constructed analytic solutions. Dubrovin proved that one can reconstruct the Dubrovin–Frobenius structure of a semisimple Dubrovin–Frobenius manifold from its monodromy data (μ, R, S, C) <cit.>. Let X be a D-dimensional smooth projective variety with vanishing odd cohomology, and X_0,k,β the moduli space of stable maps of degree β∈ H_2(X;ℤ)/ torsion with target X from curves of genus 0 with k distinct marked points. Choose a homogeneous basis ϕ_1 = 1,ϕ_2,…,ϕ_n of the cohomology ring H^*(X;ℂ) such that ϕ_α∈ H^2q_α(X;ℂ), α=1,…,n, 0=q_1<q_2 ≤…≤ q_n-1 < q_n = D. The Gromov–Witten (GW) invariants of X of genus 0 and degree β are the integrals <cit.> ∫_[X_0,k,β]^ virt ev_1^*(ϕ_α_1) ⋯ ev_k^*(ϕ_α_k), α_1,…,α_k=1,…,n. Here, ev_a, a=1,…,k, are the evaluation maps, and [X_0,k,β]^ virt is the virtual fundamental class. The genus 0 primary free energy F_0( v;Q) for the GW invariants of X is defined by F_0( v;Q) = ∑_k≥0∑_α_1,…,α_k ≥0∑_βQ^β v^α_1⋯ v^α_k/k!∫_[X_0,k,β]^ virt ev_1^*(ϕ_α_1) ⋯ ev_k^*(ϕ_α_k), where (v^1,…,v^n) are indeterminates, and Q^β=Q_1^m_1⋯ Q_r^m_r ( for β=m_1β_1+…+m_rβ_r) is an element of the Novikov ring ℂ [[Q_1,…,Q_r]]. Here, (β_1,…,β_r) is a basis of H_2(X;ℤ)/ torsion. When X is Fano, it is known that F_0( v;Q) is a power series of v^1, Q_1 e^v^2, …, Q_r e^v^r+1, v^r+2, …, v^n and one can take Q_1=⋯=Q_r=1 (cf. <cit.>). If F_0( v):=F_0( v; Q)|_Q_1=⋯=Q_r=1 is convergent, then an analytic Dubrovin–Frobenius structure can be defined on its convergence domain. The quantum cohomology of X is the commutative ring given by flat vector fields on this Dubrovin–Frobenius manifold, and its restriction to H^2(X;ℂ) is called the small quantum cohomology of X. The quantum cohomology of X carries a canonical calibration <cit.> obtained from genus 0 GW invariants, which can also be fixed by using an analytic criterion <cit.> (cf. <cit.>). In his 1998 ICM talk <cit.>, Dubrovin proposed a conjecture on the relationship between monodromy data of the quantum cohomology of a smooth projective variety and the bounded derived category of the same variety. 
We refer the readers to Section <ref> for a precise statement of the conjecture (see Conjecture <ref>) and for a refinement proposed by Cotti, Dubrovin and Guzzetti <cit.> (see Conjecture <ref>). Note that part 3 of the refined Dubrovin conjecture <ref> is equivalent <cit.> to the Gamma conjecture II proposed by Galkin, Golyshev and Iritani <cit.>. Recently, some new significance of the Dubrovin conjecture (and its refined version) is given in <cit.>. In the past two decades, the Dubrovin conjecture has been proved for some cases. Dubrovin proposed a method to directly compute the monodromy data in <cit.>, where the Stokes matrix for the quantum cohomology of ℙ^2 was computed. In <cit.>, Dubrovin computed the central connection matrix for ℙ^2, and proved the Dubrovin conjecture for this case. The Stokes matrix for an arbitrary complex projective space was computed by Guzzetti in <cit.>, and part 2 of Conjecture <ref> was proved (part 1 of Conjecture <ref> for this case was known). The same results were also obtained by Tanabé in <cit.>. Recently, the refined Dubrovin conjecture for an arbitrary Grassmanian was proved by Cotti, Dubrovin and Guzzetti <cit.> (cf. <cit.>). In <cit.>, Cotti computed the monodromy data and then proved the refined conjecture for Hirzebruch surfaces. Part 2 of Conjecture <ref> has been verified for other cases, including cubic surfaces by Ueda <cit.>, minimal Fano threefolds by Golyshev <cit.>, a class of orbifold projective lines by Iwaki and Takahashi <cit.>, weighted projective spaces and certain smooth Fano hypersurfaces by Cruz Morales and van der Put <cit.>. The Gamma conjecture II has been proved for arbitrary Grassmanians in <cit.> (cf. <cit.>), for quadrics of arbitrary dimensions by Hu and Ke <cit.> and for certain toric Fano manifolds by Fang and Zhou <cit.>. Let V be the vector space ℂ^2n, equipped with a symplectic form. A subspace Σ of V is called isotropic if the restriction of the symplectic form to Σ vanishes. The maximal possible dimension of an isotropic subspace is n, and in this case Σ is called a Lagrangian subspace. The Lagrangian Grassmanian LG(n,2n) is the projective complex manifold which parameterizes Lagrangian subspaces Σ in V. A description of the small quantum cohomology of LG(n,2n) was formulated by Kresch and Tamvakis by quantum Schubert calculus <cit.> (cf. <cit.>). The existence of a full exceptional collection in the bounded derived category of coherent sheaves on a Lagrangian Grassmanian was proved for LG(3,6) by Samokhin <cit.>, for LG(4,8) by Polishchuk and Samokhin <cit.>, and for LG(n,2n), n ≥ 1 by Fonarev <cit.>. The main result of this paper is the following theorem. The refined Dubrovin conjecture holds for the Lagrangian Grassmanian LG(2,4). Since LG(2,4) is isomorphic to the quadric in ℙ^4, Theorem <ref> can also be deduced from the corollary of Hu-Ke <cit.>. In this paper, we will give an explicit proof by computing the monodromy data for the quantum cohomology of LG(2,4). The rest of the paper is organized as follows. In Section <ref>, we review the monodromy data of semisimple Dubrovin–Frobenius manifolds. In Section <ref>, we recall some basic notions of derived category, and state the Dubrovin conjecture and a refined version. In Section <ref>, we briefly review some results of the small quantum cohomology of LG(2,4) and then give our proof of Theorem <ref>. 
§ REVIEW ON MONODROMY DATA OF SEMISIMPLE DUBROVIN–FROBENIUS MANIFOLDS A Frobenius algebra is a triple (A, e, ⟨ , ⟩), where A is a commutative and associative algebra over ℂ with unity e, and ⟨ , ⟩: A× A→ℂ is a symmetric, non-degenerate and bilinear product satisfying ⟨ x· y , z⟩ = ⟨ x, y· z⟩, ∀ x,y,z ∈ A. A Dubrovin–Frobenius structure of charge D <cit.> on a complex manifold M is a family of Frobenius algebras (T_p M, e_p, ⟨ , ⟩)_p∈ M, depending holomorphically on p and satisfying the following three axioms: 1 The metric ⟨ , ⟩ is flat; moreover, denote by ∇ the Levi–Civita connection of ⟨ , ⟩, then it is required that ∇ e = 0. 2 Define a 3-tensor field c by c(X,Y,Z):=⟨ X· Y,Z⟩, for X,Y,Z being holomorphic vector fields on M. Then the 4-tensor field ∇ c is required to be symmetric. 3 There exists a holomorphic vector field E on M satisfying [E, X· Y]-[E,X]· Y - X· [E,Y] = X· Y, E ⟨ X,Y ⟩ - ⟨ [E,X],Y⟩ - ⟨ X, [E,Y]⟩ = (2-D) ⟨ X,Y⟩. A complex manifold endowed with a Dubrovin–Frobenius structure of charge D is called a Dubrovin–Frobenius manifold of charge D, with ⟨ , ⟩ being called the invariant flat metric and E the Euler vector field. We denote the Dubrovin–Frobenius manifold by (M, ·, e, ⟨ , ⟩, E). A point p on a Dubrovin–Frobenius manifold M is called a semisimple point, if T_pM is semisimple. A Dubrovin–Frobenius manifold is called semisimple, if its generic points are semisimple points. For an n-dimensional Dubrovin–Frobenius manifold (M, ·, e, ⟨ , ⟩, E), an flat affine connection ∇̃, called extended Dubrovin connection, can be defined <cit.> on M×ℂ^* through ∇̃_X Y :=∇_X Y + z X · Y, ∇̃_∂_z Y := ∂ Y∂ z + E · Y- 1z(2-D/2 id - ∇ E)Y, ∇̃_∂_z∂_z :=0, ∇̃_X∂_z :=0. Here, X,Y are arbitrary holomorphic vector fields on M ×ℂ^* with zero component along ∂/∂ z. Choose flat coordinates v=(v^1,…,v^n) such that E=∑_β=1^n ((1-D/2-μ_β) v^β + r^β) ∂/∂ v^β. The differential system for a flat section y=(y^1,…,y^n)^T reads (see <cit.>) ∂ y/∂ v^α = z C_α y, α=1,…,n, d y/d z = (𝒰+μ/z) y, where C_α is the matrix of multiplication by ∂/∂ v^α, 𝒰 is the matrix of multiplication by the Euler vector field and μ = diag(μ_1,…,μ_n). Fix v. Then there exists a fundamental matrix solution to (<ref>) of the form <cit.> 𝒴_0(z) = Φ(z) z^μ z^R, where R is a constant matrix, Φ(z) is an analytic matrix-valued function on ℂ satisfying Φ(0) = I, Φ(-z)^TηΦ(z)= η. The matrices μ,R are the monodromy data at z=0 of the Dubrovin–Frobenius manifold <cit.>. Now we restrict our consideration to a semisimple point p∈ M. Denote the restriction of coordinate vector fields (∂/∂ v^1,…,∂/∂ v^n) to this point by (e_1,…,e_n). Let u_1,…, u_n be eigenvalues to 𝒰 and assume they are pairwise distinct. Let f_1,…,f_n be the normalized eigenvectors such that f_i · f_j = δ_ij f_j and ⟨ f_i , f_j ⟩ = δ_ij, i,j=1,…,n. Let Ψ=(ψ_iα) be the transition matrix from the orthonormal basis (f_1,…,f_n) to the basis (e_1,…,e_n), i.e., e_α=∑_i=1^n ψ_iα f_i , α=1,…,n. For any fundamental matrix solution 𝒴(z) to (<ref>), let Y(z)=Ψ𝒴(z). Then Y=Y(z) satisfies the equation dY/dz = (U+V/z) Y , where U = Ψ𝒰Ψ^-1 = diag(u_1,…,u_n) and V=ΨμΨ^-1. Note that z=∞ is an irregular singularity of (<ref>) of Poincaré rank 1. Then <cit.> (cf. <cit.>) equation (<ref>) has a unique formal solution Y_ formal(z) of the form Y_ formal(z) = G (z) e^z U, where G (z) is a formal power series of z^-1 with G(∞)=I and satisfies the orthogonality condition G (-z)^T G (z) = I. A line ℓ through the origin in the complex z-plane is called admissible if Re z(u_i- u_j)|_z ∈ℓ∖ 0≠ 0, ∀ i≠ j. 
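The admissibility condition can be tested numerically in an obvious way. The following Python sketch restates the definition; as sample input it uses the canonical coordinates of the quantum cohomology of LG(2,4) at q=1 and the admissible line at angle π/4, both of which appear later in the text.

import numpy as np
from itertools import combinations

def is_admissible(phi, u, tol=1e-12):
    # The line {rho e^{i phi} : rho real} is admissible iff
    # Re[e^{i phi} (u_i - u_j)] != 0 for all i != j.
    z = np.exp(1j * phi)
    return all(abs((z * (ui - uj)).real) > tol for ui, uj in combinations(u, 2))

# Sample: canonical coordinates of QH(LG(2,4)) at q = 1 (computed later in the text)
eps = np.exp(2j * np.pi / 3)
u = [0.0, 3 * 2**(2 / 3), 3 * 2**(2 / 3) * eps**2, 3 * 2**(2 / 3) * eps]

print(is_admissible(np.pi / 4, u))   # True: the admissible line chosen in the text
print(is_admissible(np.pi / 2, u))   # False: this line contains the Stokes ray R_12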
Fix any admissible line ℓ with an orientation. Denote by ϕ∈[0,2π) the angle from the positive real axis and to the positive direction of ℓ. Construct two sectors Λ_ left: ϕ-ε< arg z< ϕ + π+ ε, Λ_ right: ϕ-π-ε< arg z< ϕ + ε, where ε is a sufficiently small positive number. From <cit.>(cf. <cit.>) there exist unique fundamental matrix solutions Y_L/R(z) to the ODE system (<ref>), analytic in Π_ left/ right, such that Y_L/R(z) ∼ Y_ formal(z) as |z|→∞ within the sectors. We will consider the behaviour of the matrix solutions on the narrow sectors Λ_+: ϕ-ε < arg z < ϕ+ε, and Λ_-: ϕ-π-ε < arg z < ϕ-π+ε. In fact, the sectors Λ_ left/ right where the asymptotic behaviour (<ref>) holds can be extended. We introduce Stokes rays, which are oriented rays R_ij in ℂ defined by R_ij:={-i(u_i-u_j)ρρ≥ 0}. By definition, an admissible line ℓ must contain no Stokes rays. The sectors Λ_ left/ right can be extended up to the first nearest Stokes ray (see <cit.>). We denote the extended sectors by Π_ left/ right and the corresponding extended narrow sectors by Π_+ and Π_-. Within the narrow sector Π_+, the two analytic fundamental matrix solutions in (<ref>) is related by a constant matrix S called the Stokes matrix with respect to the admissible line ℓ (cf. <cit.>): Y_L(z) =Y_R(z) S, z∈Π_+. Similarly, according to <cit.> (cf. <cit.>), in the narrow sector Π_-, we have Y_L(z) =Y_R(z) S^T, z∈Π_-. Within the narrow sector Π_+, the fundamental matrix solutions Y_0(z):=Ψ𝒴(z) and Y_R(z) to the ODE system (<ref>) also be related by a constant matrix C called the central connection matrix with respect to the admissible line ℓ: Y_R(z) = Y_0(z) C. When M is semisimple, the quadruple (μ, R, S, C) is called the monodromy data of M <cit.>. S and C satisfy the following restrictions: C S^T S^-1 C^-1 = e^2π i μ e^2π i R, S = C^-1 e^-π i R e^-π i μη^-1 (C^T)^-1. Following Dubrovin <cit.>, we consider an action of the braid group ℬ_n on the set of monodromy data (S, C). Denote by β_1,2,…, β_n-1,n the generators of ℬ_n, satisfying the relations β_i,i+1β_i+1,i+2β_i,i+1=β_i+1,i+2β_i,i+1β_i+1,i+2, β_i,i+1β_j,j+1=β_j,j+1β_i,i+1, if |i-j|>1. Let ℓ be an admissible line and (S, C) be the monodromy data with respect to ℓ. Choose the order of canonical coordinates (u_1,…,u_n) such that S is an upper triangular matrix. The action of the elementary braid β_i,i+1∈ℬ_N, 1≤ i ≤ n-1 has the form <cit.> β_i,i+1(S):= K^β_i,i+1(S)  S  K^β_i,i+1(S), β_i,i+1(C):= C  (K^β_i,i+1(S))^-1, where (K^β_i,i+1(S))_kk=1, k=1,…, N k≠ i,i+1, (K^β_i,i+1(S))_i+1,i+1=-s_i,i+1, (K^β_i,i+1(S))_i,i+1=(K^β_i,i+1(S))_i+1,i=1, and all other entries of the matrix K^β_i,i+1 are equal to zero. An important class of Dubrovin–Frobenius manifolds come from quantum cohomology. To be precise, let X be a D-dimensional smooth projective variety with vanishing odd cohomology, and X_g,k,β the moduli space of stable maps of degree β∈ H_2(X;ℤ)/ torsion with target X from curves of genus g with k distinct marked points. Here, g,k≥0. We denote the Poincaré pairing on H^*(X;ℂ) by ⟨ , ⟩. Choose a homogeneous basis ϕ_1 = 1,ϕ_2,…,ϕ_n of the cohomology ring H^*(X;ℂ) such that ϕ_α∈ H^2q_α(X;ℂ), α=1,…,n, 0=q_1<q_2 ≤…≤ q_n-1 < q_n = d. The integrals ∫_[X_g,k,β]^ virt c_1(ℒ_1)^i_1 ev_1^*(ϕ_α_1) ⋯ c_1(ℒ_k)^i_k ev_k^*(ϕ_α_k), α_1,…,α_k=1,…,l, i_1,…,i_k≥0, are called Gromov–Witten (GW) invariants of X of genus g and degree β <cit.>. Here, ev_a, a=1,…,k are the evaluation maps, ℒ_a is the ath tautological line bundle on X_g,k,β, and [X_g,k,β]^ virt is the virtual fundamental class. 
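For concreteness, the action of an elementary braid on the monodromy data recalled above can be transcribed directly into code. The following Python sketch (with 0-based indices) builds the matrix K^β_i,i+1(S) and applies (S,C)↦(KSK, CK^-1); the 2×2 example at the end only illustrates the resulting sign flip of the single Stokes entry.

import numpy as np

def braid_K(S, i):
    # K^{beta_{i,i+1}}(S): identity entries away from rows/columns i, i+1,
    # K_{i,i} = 0, K_{i+1,i+1} = -s_{i,i+1}, K_{i,i+1} = K_{i+1,i} = 1 (0-based i).
    n = S.shape[0]
    K = np.eye(n, dtype=S.dtype)
    K[i, i] = 0.0
    K[i + 1, i + 1] = -S[i, i + 1]
    K[i, i + 1] = 1.0
    K[i + 1, i] = 1.0
    return K

def braid_action(S, C, i):
    # beta_{i,i+1}(S) = K S K,  beta_{i,i+1}(C) = C K^{-1}
    K = braid_K(S, i)
    return K @ S @ K, C @ np.linalg.inv(K)

S = np.array([[1.0, 3.0], [0.0, 1.0]])   # toy upper-triangular Stokes matrix
C = np.eye(2)
S_new, C_new = braid_action(S, C, 0)
print(S_new)                             # [[1, -3], [0, 1]]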
The genus g primary free energy F_g( v;Q) for the GW invariants of X is defined by F_g( v;Q) = ∑_k≥0∑_α_1,…,α_k ≥0∑_βQ^β v^α_1⋯ v^α_k/k!∫_[X_g,k,β]^ virt ev_1^*(ϕ_α_1) ⋯ ev_k^*(ϕ_α_k), where (v^1,…,v^n) are indeterminates, and Q^β=Q_1^m_1⋯ Q_r^m_r ( for β=m_1β_1+…+m_rβ_r) is an element of the Novikov ring ℂ [[Q_1,…,Q_r]]. Here, (β_1,…,β_r) is a basis of H_2(X;ℤ)/ torsion. Now we restrict our consideration to the case that X is Fano. Then for any fixed k≥0 and α_1,…,α_k ∈{1,…,n}, the sum ∑_β in (<ref>) is a finite sum. Moreover, by the divisor equation <cit.> ∫_[X_g,k+1,β]^virev_1^*(ϕ_α_1)⋯ev_k^*(ϕ_α_k) ev_k+1^*(ϕ)=(∫_βϕ) ∫_[X_g,k,β]^virev_1^*(ϕ_α_1)⋯ev_k^*(ϕ_α_k), we know that F_0( v;Q) is a power series of v^1, Q_1 e^v^2, …, Q_r e^v^r+1, v^r+2, …, v^n. Take Q_1=⋯=Q_r=1. Denote F_0( v):=F_0( v; Q)|_Q_1=⋯=Q_r=1. Assume that the power series F_0( v) has a convergence domain Ω. Then F_0( v) leads to an analytic Dubrovin–Frobenius structure of charge D on Ω <cit.>, with the invariant flat metric ⟨ , ⟩, the unity vector field e=∂/∂ v^1 and the Euler vector field E given by E=c_1(X)+∑_α=1^n(1-1/2degϕ_α)v^αϕ_α, where c_1(X) is the first Chern class of X. We call the commutative ring given by flat vector fields on the Dubrovin–Frobenius manifold the quantum cohomology of X, denoted by QH(X), and the restriction of this ring to Ω∩ H^2(X;ℂ) (i.e. points with coordinates v^i=0 unless i=2,…, r+1) the small quantum cohomology of X. Note that the constant matrix R in (<ref>) can be chosen to correspond to c_1(X) ∪: H^*(X;ℂ)→ H^*(X;ℂ) <cit.>. For the quantum cohomology of a smooth projective variety X, a canonical calibration <cit.> obtained from genus 0 GW invariants can be chosen. Fix a point whose coordinates are v=(v^1,…,v^n) on the quantum cohomology of a smooth projective variety X. Then a calibration at this point is given by 𝒴_ top(z):=Φ_ top(z) z^μ z^R, Φ_ top(z)_λ^γ: =δ_λ^γ+∑_k,l≥ 0∑_β∑_α_1,…,α_kh_λ,k,l,β,α^γ/k! v^α_1… v^α_k z^l+1. Here h_λ,k,l,β,α^γ:=∑_ϵη^ϵγ∫_[X_0,k+2,β]^virtc_1(ℒ_1)^l ev_1^*(ϕ_λ) ev_2^*(ϕ_ϵ)∏_j=1^k ev^*_j+2(ϕ_α_j). In <cit.>, Galkin, Golyshev and Iritani gave an analytic criterion to obtain the above calibration when X is Fano. Let X be a Fano manifold and fix a point v on the small quantum cohomology of X (i.e. a point with coordinates v^i=0 unless i=2,…, r+1). Then among all solutions of the form Φ(z)z^μ z^R to (<ref>) around z=0, the topological-enumerative solution (<ref>) is the unique one for which the product H(z)=z^-μΦ(z)z^μ is holomorphic at z=0 and satisfies H(0)=exp((∑_i=2^r+1 v^i ϕ_i)∪(-)).
The Gamma classes of X are the characteristic classes (cf. <cit.>) Γ_X^±:=∏_j=1^DΓ(1±δ_j), understood as cohomology classes via the expansion logΓ(1+x)=-γ x+∑_k≥2(-1)^kζ(k)x^k/k, where γ is the Euler–Mascheroni constant and ζ is the Riemann zeta function. Consider an arbitrary vector bundle V on X, and let τ_1,…,τ_r be the Chern roots of V. The Chern character of V is the characteristic class Ch(V)∈ H^*(X) defined by Ch(V):=∑_j=1^r e^2π i τ_j. The following conjecture was proposed by Dubrovin in his 1998 ICM talk. Assume that X is Fano and that the power series F_0(v) has a convergence domain Ω. Then 1. the quantum cohomology of X is semisimple if and only if 𝒟^b(X) admits a full exceptional collection. If the quantum cohomology of X is semisimple, then there exists a full exceptional collection (E_1,…, E_n) of 𝒟^b(X) such that: 2. the Stokes matrix S is equal to the inverse of the Euler matrix (χ(E_j,E_k))_1≤ j,k ≤ n; 3. the central connection matrix has the form C=C'C” where the columns of C” are coordinates of Ch(E_i), and C': H^*(X;ℂ)→ H^*(X;ℂ) is an operator commuting with c_1(X)∪: H^*(X;ℂ)→ H^*(X;ℂ). In <cit.>, Dubrovin suggested that the matrix C' should be given by using the Gamma class of X. In <cit.>, Cotti, Dubrovin and Guzzetti proposed a refinement of Conjecture <ref>. Assume that X is a Fano variety of dimension D with odd-vanishing cohomology and that the power series F_0(v) has a convergence domain Ω. Then 1. the quantum cohomology of X is semisimple if and only if 𝒟^b(X) admits a full exceptional collection. If the quantum cohomology of X is semisimple, then there exists a full exceptional collection (E_1,…, E_n) of 𝒟^b(X) such that: 2. the Stokes matrix S is equal to the inverse of the Euler matrix (χ(E_j,E_k))_1≤ j,k ≤ n; 3. the central connection matrix C, connecting the solution Y_R (see (<ref>)) with the topological-enumerative solution Y_ top=Ψ·𝒴_ top of Proposition <ref>, is equal to the matrix whose columns are given by the coordinates of i^D̅/(2π)^D/2Γ^-_X∪exp(-π i c_1(X))∪ Ch(E_k), 1≤ k ≤ n. Here D̅ = D mod 2. In <cit.>, Cotti, Dubrovin and Guzzetti proved that part 3 of Conjecture <ref> is equivalent to the Gamma conjecture II proposed by Galkin, Golyshev and Iritani <cit.>. On several occasions, the convergence assumption in Conjectures <ref> and <ref> is known to hold. In <cit.>, Dubrovin suggested that the convergence can be proved when the quantum cohomology of X is semisimple. Cotti then proved that if the small quantum cohomology of X is semisimple, then F_0 has a convergence domain <cit.>. § PROOF OF THE REFINED DUBROVIN CONJECTURE FOR LG(2,4) §.§ Small quantum cohomology of LG(2,4) It is known that the Lagrangian Grassmanian LG(2,4) is isomorphic to the 3-dimensional quadric. The cohomology ring H^*(LG(2,4);ℂ) has the following ring presentation (cf. <cit.>) H^*(LG(2,4);ℂ) ≅ℂ[x_1,x_2]/⟨ x_1^2 - 2 x_2, x_2^2 ⟩. From the general theory of Schubert calculus <cit.>, it is known that H^*(LG(2,4);ℂ) is a complex linear space of dimension 4 and a basis is given by the Schubert classes: σ_0:=1, σ_1, σ_2, σ_2,1. We denote them by e_1:=σ_0, e_2:=σ_1, e_3:=σ_2, e_4:=σ_2,1, and denote by v^i the coordinate with respect to e_i. Then the coordinates in the small quantum cohomology are v=(0,v^2,0,0). The corresponding Schubert polynomials are 𝔖_0 = 1 , 𝔖_1 = x_1 , 𝔖_2= 1/2x_1^2 , 𝔖_2,1=1/2x_1^3, which are images of the Schubert classes under the isomorphism (<ref>). The first Chern class of LG(2,4) is c_1=3σ_1. The matrix of the multiplication c_1∪ under the basis (<ref>) is R = ( [ 0 0 0 0; 3 0 0 0; 0 6 0 0; 0 0 3 0; ]).
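The presentation of the classical cohomology ring and the matrix R can be checked symbolically. In the following Python sketch (using sympy) we use that the relation x_1^2=2x_2 eliminates x_2, after which x_2^2=0 becomes x_1^4=0, so that the ring is ℂ[x_1]/(x_1^4) with the Schubert classes represented by the Schubert polynomials listed above.

import sympy as sp

x = sp.symbols('x1')

# x1^2 = 2 x2 eliminates x2, and x2^2 = 0 then reads x1^4 = 0, so the classical
# ring is C[x1]/(x1^4); the Schubert classes correspond to the Schubert polynomials.
basis = [sp.Integer(1), x, x**2 / 2, x**3 / 2]          # sigma_0, sigma_1, sigma_2, sigma_{2,1}
lead = [sp.Integer(1), sp.Integer(1), sp.Rational(1, 2), sp.Rational(1, 2)]

c1 = 3 * x                                              # c_1 = 3 sigma_1
R = sp.zeros(4, 4)
for j, b in enumerate(basis):
    prod = sp.rem(sp.expand(c1 * b), x**4, x)           # multiply, then reduce mod x1^4
    for i in range(4):
        R[i, j] = sp.expand(prod).coeff(x, i) / lead[i] # coordinates in the Schubert basis

print(R)   # expected: Matrix([[0,0,0,0],[3,0,0,0],[0,6,0,0],[0,0,3,0]])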
The matrix of Poincaré pairing η(α,β):=∫_LG(2,4)α∧β is η = ( [ 0 0 0 1; 0 0 1 0; 0 1 0 0; 1 0 0 0; ]). The small quantum cohomology ring qH^*(LG(2,4);ℂ) has the following ring presentation <cit.>: qH^*(LG(2,4);ℂ) ≅ℂ[x_1,x_2,q]/⟨ x_1^2 - 2 x_2, -q x_1 + x_2^2 ⟩, where q=e^v^2. Images of the Schubert classes under the isomorphism (<ref>) are <cit.> 𝔖_0^q = 1 , 𝔖_1^q = x_1 , 𝔖_2^q= 1/2x_1^2 , 𝔖_2,1^q=1/2x_1^3-q. The multiplication table of the small quantum cohomology of LG(2,4) is shown in <ref>. It is known that q=1 (i.e. v^i=0, 1≤ i ≤ 4) is a semisimple point of the small quantum cohomology of LG(2,4) <cit.>. Then according to a result of Cotti <cit.> we know that F_0( v) (see (<ref>)) has a convergence domain. §.§ Deformed flat connection At the point q=1, equation (<ref>) becomes dydz=(𝒰+μz)y, where 𝒰 = ( [ 0 0 3 0; 3 0 0 3; 0 6 0 0; 0 0 3 0; ]) and μ = ( [ -3/2 0 0 0; 0 -1/2 0 0; 0 0 1/2 0; 0 0 0 3/2; ]). The system (<ref>) can be reduced to an equivalent scalar ODE D^4φ-108 z^3 Dφ-162 z^3φ=0, where D = z d/dz. Let y=(y_1,…,y_4)^T and y_4(z)=z^3/2φ(z). From the system (<ref>), y_1,y_2,y_3 are uniquely determined by φ through the following equations: y_1(z) =z^2 φ^(3)(z)+φ'(z)+3 z φ”(z)-54 z^2 φ(z)/54 √(z), y_2(z) =1/18(z^3/2φ”(z)+√(z)φ'(z)), y_3(z) =1/3 z^3/2φ'(z), where φ satisfies the scalar ODE (<ref>). The indicial equation of (<ref>) at z=0 reads r^4=0. A solution of (<ref>) is (cf. <cit.>) φ̃(z)= ∑^∞_d=0(2d)!(d!)^5z^3d=1 + 2 z^3 + 3/4z^6 + 5 /54z^9 + 35 /6912z^12 + 7 /48000z^15+O(z^18). By Proposition <ref>, the topological-enumerative solution of (<ref>) is 𝒴_top(z) = Φ_top(z) z^μz^R, where μ = ( [ -3/2 0 0 0; 0 -1/2 0 0; 0 0 1/2 0; 0 0 0 3/2; ]), R=( [ 0 0 0 0; 3 0 0 0; 0 6 0 0; 0 0 3 0; ]), and Φ_top(z) = I + ( [ 0 0 1 0; 0 0 0 1; 0 0 0 0; 0 0 0 0; ]) z+ ( [ 0 -2 0 0; 0 0 0 0; 0 0 0 2; 0 0 0 0; ]) z^2+( [ 2 0 0 1; 0 -2 0 0; 0 0 -2 0; 0 0 0 2; ]) z^3 +( [ 0 0 -3/2 0; 4 0 0 3/2; 0 0 0 0; 0 0 -4 0; ])z^4 +( [ 0 3/2 0 0; 0 0 -7/2 0; 8 0 0 3/2; 0 8 0 0; ])z^5+( [ 13/4 0 0 1/2; 0 33/4 0 0; 0 0 -17/4 0; 0 0 0 3/4; ])z^6 +( [ 0 0 -19/12 0; -5/2 0 0 5/12; 0 25/2 0 0; 0 0 -5/2 0; ])z^7+O(z^8). §.§ Computation of the Stokes matrix and the central connection matrix It suffices to compute the monodromy data at q=1 due to the simisimplicity. Let ϵ=e^2/3iπ. The canonical coordinates at q=1 are u_1=0, u_2=3· 2^2/3, u_3=3· 2^2/3ϵ^2, u_4=3· 2^2/3ϵ. The Stokes rays (see (<ref>)) are R_12 ={iρ: ρ≥0}, R_13={-ρ e^i π/6: ρ≥0}, R_14={ρ e^-i π/6: ρ≥0}, R_23 ={-ρ e^i π/3: ρ≥0}, R_24={ρ e^-i π/3: ρ≥0}, R_34 ={ρ: ρ≥0}. We fix the admissible line ℓ: ℓ={ρ e^i π/4: ρ∈ℝ}. Let Π_left={z:π/6 < argz < 4π/3}, Π_right={z:-5π/6< argz < π/3}. The narrow sectors are then given by Π_+={z:π/6< argz <π/3}, Π_-={z:-π/6< argz <π/3}. §.§.§ Solutions to the scalar ODE (<ref>) The following functions are solutions of the ODE (<ref>): * the function φ_1(z)=∫_Λ_1Γ (s)^4/Γ(s+1/2)2^-2 s e^π i s z^-3 s ds, -π/2< z<π/2, where Λ_1 is any line in the complex plane from the point κ - i ∞ to κ + i ∞ for any κ>0; * the function φ_2(z)=∫_Λ_2Γ (s)^4 Γ(1/2-s) 2^-2 s z^-3 s ds, -5π/6< z<5π/6, where Λ_2 is any line in the complex plane from the point κ - i ∞ to κ + i ∞ for any 0<κ<1/2. This Lemma can be obtained via a direct verification (cf. <cit.>). Now we consider asymptotic behaviours of φ_1 and φ_2 as |z| →∞ and several identities. The following asymptotic behaviours hold as |z| →∞ -z^3/2/2√(2)π^2φ_1(z) = 1/√(6) e^u_4 z(1+O(1z)), -π/2< z<π/2, -z^3/2/√(2)π^3φ_2(z) = -i/√(2) e^u_1 z(1+O(1z)), -π/6< z<π/2. 
The proof of this Lemma is an elementary exercise by using Laplace method (cf. <cit.>). Let ϵ=e^2/3iπ. The functions φ_1, φ_2 satisfy the following identity φ_2(zϵ^-1)= 2πφ_1(z)-φ_2(z), -π/6< z<π/2. This Lemma can be obtained via a direct verification (cf. <cit.>). Any solution φ to (<ref>) satisfies the following identity φ(zϵ^4)-4φ(zϵ^3)+6φ(zϵ^2)-4φ(zϵ)+φ(z)=0. (cf. <cit.>) Note that if φ(z) is a solution to (<ref>) then φ(zϵ) also is. Then the map z↦ zϵ defines a linear map on the solution space of (<ref>) , where the solutions have the form φ(z)=∑_n≥ 0z^3n(a_n+b_nlog z+c_n(log z)^2+d_n(log z)^3 ), where a_0, b_0, c_0, d_0 are arbitrary constants, and a_n, b_n, c_n, d_n, n≥ 1 can be obtained recursively. Choosing a basis of solutions with coefficients (a_0,b_0,c_0,d_0)=(1,0,0,0), (0,1,0,0) and (0,0,1,0), (0,0,0,1), the matrix of the operator (Aφ)(z):=φ(zϵ) is triangular whose diagonal entires are all 1. By Cayley–Hamilton theorem we then deduce that (A- I)^5=0, namely A^4-4A^3+6A^2-4A+I=0. §.§.§ Asymptotic behaviour of fundamental matrix solutions Let Ψ be the transition matrix from (f_1,f_2,f_3,f_4) to (e_1,e_2,e_3,e_4) at q=1, i.e. (see (<ref>)) e_α=∑_j=1^4 ψ_j α f_j , α = 1,2,3,4. We can compute directly f_1 = i/√(2)e_1-i/√(2)e_4, f_2 = 1/√(6)e_1+1/√(2)√(3)e_2+√(2)/√(3)e_3+1/√(6)e_4, f_3 = 1/√(6)e_1+1/√(2)√(3)e^2 i π/3e_2+√(2)/√(3) e^-2 i π/3 e_3+1/√(6)e_4, f_4 = 1/√(6)e_1+1/√(2)√(3)e^-2 i π/3e_2+√(2)/√(3)e^2 i π/3e_3+1/√(6)e_4. Then Ψ = ( [ -i/√(2) 0 0 i/√(2); 1/√(6) √(2)/√(3) 1/√(2)√(3) 1/√(6); 1/√(6) -√(2)/√(3)e^i π/3 1/√(2)√(3)e^2i π/3 1/√(6); 1/√(6) √(2)/√(3)e^2i π/3 -1/√(2)√(3)e^i π/3 1/√(6); ]). Denote U= Ψ𝒰Ψ^-1 = ( [ 0 0 0 0; 0 3· 2^2/3 0 0; 0 0 3· 2^2/3ϵ^2 0; 0 0 0 3· 2^2/3ϵ; ]), and V= ΨμΨ^-1 = ( [ 0 √(3)/2i √(3)/2i √(3)/2i; -√(3)/2i 0 -√(3)/6i √(3)/6i; -√(3)/2i √(3)/6i 0 -√(3)/6i; -√(3)/2i -√(3)/6i √(3)/6i 0; ]). For a fundamental matrix solution 𝒴(z) to (<ref>), let Y(z) = Ψ𝒴(z). Then Y=Y(z) satisfies the equation dYdz=(U+Vz)Y. Note that there exists unique fundamental matrix solutions Y_L/R to (<ref>), analytic in Π_left/right, respectively, such that Y_L/R(z)∼ G(z) e^zU as |z| →∞ within the sectors, where G (z) is a formal power series of z^-1 with G(∞)=I (see (<ref>)). Then the corresponding solutions to (<ref>) have the following asymptotic behaviour as |z| →∞ 𝒴_L/R(z) = Ψ^-1Y_L/R(z) =( [ i/√(2)e^u_1 z 1/√(6)e^u_2 z 1/√(6)e^u_3 z 1/√(6)e^u_4 z; 0 1/√(2)√(3)e^u_2 z -e^-1/3 i π/√(2)√(3)e^u_3 z -e^i π/3/√(2)√(3)e^u_4 z; 0 √(2)/√(3)e^u_2 z -i √(2) e^-1/6 i π/√(3)e^u_3 z -√(2) e^-1/3 i π/√(3)e^u_4 z; -i/√(2)e^u_1 z 1/√(6)e^u_2 z 1/√(6)e^u_3 z 1/√(6)e^u_4 z; ]) (1+O(1z)). §.§.§ Fundamental matrix solutions 𝒴_L/R and the Stokes matrix Let S'=(s'_jk) be the Stokes matrix relating Y_L and Y_R in the narrow sector Π_+, i.e. Y_L(z)=Y_R(z) S', z ∈Π_+. Then we have y^L_4k(z)=∑_j=1^4 y^R_4j(z) s'_jk, for π6< arg z<π3, Note that y^L/R_4k have the following asymptotic behaviour as |z| →∞ y^L/R_4k(z)=c_k e^u_k z(1+O(1z)) , for z ∈Π_left/right, where c_1=-i/√(2) , c_2=1/√(6) , c_3=1/√(6) , c_4=1/√(6). By comparing asymptotic behaviours , we can determine the form of S'. For example, by definition, we have y^L_44(z)=∑_j=1^4 y^R_4j(z) s'_j4, for π6< arg z<π3. Since e^u_4 z is dominated by e^u_1 z,e^u_2 z,e^u_3 z on the sector 0< arg z<2π/3, i.e. |e^u_4 z|<|e^u_k z|, k=1,2,3, functions y^R_41, y^R_42, y^R_43 will not appear on the right-hand side. Then we have y^L_44(z)=y^R_44(z), for π6< arg z<π3, which means s'_14=0, s'_24=0, s'_34=0, s'_44=1. 
By similar dominance arguments, we have s'_11=1, s'_22=1, s'_33=1, s'_23=0, s'_21=0, s'_31=0 . Then S' must have the form ( [ 1 * * 0; 0 1 0 0; 0 * 1 0; * * 1; ]). To explicitly compute S', we have to construct entries of fundamental matrix solutions 𝒴_L/R=(y^L/R_jk)_1≤ j,k ≤ 4 on each sector Π_left/right that have the same asymptotic behaviour as (<ref>). Note that we only have to construct y^L/R_4k, 1≤ k ≤ 4 due to Proposition <ref>. From (<ref>) and the existence of y^L_44 and y^R_44, φ_1(z) can be analytically continued to the sector -5π/6< arg z<4π/3. Since the asymptotic behaviours of y^L_44, y^R_44 as |z| →∞ are same, we have -z^3/2/2√(2)π^2φ_1(z)= 1/√(6) e^u_4 z(1+O(1z)), for -5π/6< arg z<4π/3. Note that the continued φ_1(z) may not have the integral representation (<ref>) in the above region. Since φ_1(zϵ^2) is also a solution of (<ref>), we have -z^3/2/2√(2)π^2φ_1(zϵ^2) = linear combination of y^R_4k, 1≤ k ≤ 4, for -5π/6< arg z<0. Rotating z by ϵ^2 in (<ref>), we get -z^3/2/2√(2)π^2φ_1(zϵ^2) = 1/√(6) e^u_2 z(1+O(1z)), for -13π/6< arg z<0. From (<ref>) and the fact e^u_2 z is dominated by e^u_1 z,e^u_3 z,e^u_4 z on the sector -4π/3< arg z<-2π/3, we have -z^3/2/2√(2)π^2φ_1(zϵ^2) = y^R_42(z), for -5π/6< arg z<0. By the existence of y^R_42 on -5π/6< arg z<π/3, -z^3/2/2√(2)π^2φ_1(zϵ^2) can be analytically continued to -13π/6< arg z<π/3 and still have the asymptotic behaviour as in (<ref>). Then we have the following proposition for φ_1(z). The following asymptotic behaviour holds as |z| →∞ -z^3/2/2√(2)π^2φ_1(z)= 1/√(6) e^u_4 z(1+O(1z)), -5π/6< z<5π/3. Therefore, we conclude that y^R_42(z) = -z^3/2/2√(2)π^2φ_1(zϵ^2), for z ∈Π_right. Now φ_2(z) can be analytically continued to the sector π/2< arg z<5π/6 through the identity (<ref>). From the fact e^u_4 z is dominated by e^u_1 z on the whole sector -π/6< arg z<5π/6 and the uniformity of asymptotic behaviour, we can determine the asymptotic behaviour of φ_2(z) as |z| →∞: -√(2)i z^3/2/π^2φ_2(z)= -i/√(2) e^u_1 z(1+O(1z)), for -π/6< arg z<5π/6. By the identity (<ref>) with the variable z rotated by ϵ and similar arguments as above, we finally arrive at the following proposition for φ_2(z). The following asymptotic behaviour holds as |z| →∞ -√(2)i z^3/2/π^2φ_2(z)= -i/√(2) e^u_1 z(1+O(1z)), -5π/6< z<5π/6. By the identity (<ref>), φ_2(z) can be further continued to -3π/2< arg z<5π/3, although it may not have the same asymptotic behaviour as (<ref>) and the integral representation as (<ref>) beyond the sector -5π/6< arg z<5π/6. Since it is a linear combination of solutions, the continued φ_2(z) is still a solution to (<ref>). Now the identity (<ref>) also holds in a larger sector. The identity (<ref>) holds for -5π/6< z<5π/3. Now we can determine some entries of fundamental matrix solutions besides y^R_42: y^L_44(z) =-z^3/2/2√(2)π^2φ_1(z), for z ∈Π_left, y^R_44(z) =-z^3/2/2√(2)π^2φ_1(z), for z ∈Π_right, y^L_42(z) =z^3/2/2√(2)π^2φ_1(zϵ^-1), for z ∈Π_left, y^R_43(z) =z^3/2/2√(2)π^2φ_1(zϵ), for z ∈Π_right, y^L_41(z) =√(2)i z^3/2/π^2φ_2(zϵ^-1), for z ∈Π_left, y^R_41(z) =-√(2)i z^3/2/π^2φ_2(z), for z ∈Π_right. The only remained entry is y^L_43. For y^L_43, let some coefficients be undetermined: -z^3/2/2√(2)π^2φ_1(zϵ^-2)=y^L_43(z)+γ_2”y^L_42(z), for π/2< arg z<4π/3, and z^3/2/2√(2)π^2φ_1(zϵ)=y^L_43(z)+γ_1”y^L_41(z)+γ_4”y^L_44(z), for π/6< arg z<π. Subtract (<ref>) and (<ref>), we get -z^3/2/2√(2)π^2φ_1(zϵ^-2)-z^3/2/2√(2)π^2φ_1(zϵ)=γ_2”y^L_42(z)-γ_1”y^L_41(z)-γ_4”y^L_44(z) , for π/2< arg z<π. 
We then replace all φ_1 by φ_2 in (<ref>) by using (<ref>) and apply (<ref>) to φ_2. By comparing cofficients, we obtain y^L_43(z)=-z^3/2/2√(2)π^2φ_1(zϵ^-2)+5y^L_42(z), for π/6< arg z<2π/3, and y^L_43(z)=z^3/2/2√(2)π^2φ_1(zϵ)-4y^L_41(z)+5y^L_44(z), for π/2< arg z<4π/3. Note that we have finished the construction of 𝒴_L/R. Comparing 𝒴_L and 𝒴_R, we get the Stokes matrix S' S' = ( [ 1 4 4 0; 0 1 0 0; 0 5 1 0; -4 -5 -11 1; ]) . We then triangularize S' by a permutation P of canonical coordinates: S = PS'P^-1= [ 1 -4 -11 -5; 0 1 4 4; 0 0 1 5; 0 0 0 1; ], where P = ( [ 0 0 0 1; 1 0 0 0; 0 0 1 0; 0 1 0 0; ]). §.§.§ Central connection matrix Let C^'=(c_jk^')_1≤ j,k ≤ 4 be the central connection matrix relating 𝒴_top(z) and 𝒴_R(z), i.e. 𝒴_R(z) =𝒴_top(z)C^' , To compute C^', we have to find the expansion of Y^R at z=0 and compare the coefficents to determine c_jk^'. For example, y_44^R(z) =-z^3/2/2√(2)π^2φ_1(z)=-z^3/2/2√(2)π^2∫_Λ_1Γ (s)^4/Γ(s+1/2)2^-2 s e^π i s z^-3 s ds = -z^3/2/2√(2)π^22π i∑_n=0^∞res_s=-n(Γ (s)^4/Γ(s+1/2)2^-2 s e^π i s z^-3 s) =z^3/2(9 i /2 √(2)π ^3/2(log z)^3+i (162 γ -54 i π ) /12 √(2)π ^3/2(log z)^2 +i (162 γ ^2-108 i γπ -15 π ^2) /12 √(2)π ^3/2log z. . +i (-12 ζ (3)+54 γ ^3-54 i γ ^2 π -15 γπ ^2+i π ^3)/12 √(2)π ^3/2)+z^9/2(9 i/√(2)π ^3/2(log z)^3. . +i (-216+324 γ -108 i π ) /12 √(2)π ^3/2(log z)^2+i (144-432 γ +324 γ ^2+144 i π -216 i γπ -30 π ^2) /12 √(2)π ^3/2log z . . +i (-24 ζ (3)+144 γ -216 γ ^2+108 γ ^3-48 i π +144 i γπ -108 i γ ^2 π +20 π ^2-30 γπ ^2+2 i π ^3)/12 √(2)π ^3/2) +O(z^15/2). From (<ref>), y_44^R (z) = z^3/2(9 c^'_1,4 (log z)^3+9 c^'_2,4 (log z)^2+3 c^'_3,4log z+c^'_4,4)+z^9/2(18 c^'_1,4 (log z)^3. . +(18 c^'_2,4-36 c^'_1,4) (log z)^2 +(24 c^'_1,4-24 c^'_2,4+6 c^'_3,4) log z+8 c^'_2,4-4 c^'_3,4+2 c^'_4,4)+O(z^15/2). Compare the coefficents we get c^'_1,4 = i/2 √(2)π ^3/2, c^'_2,4 = π +3 i γ/2 √(2)π ^3/2, c^'_3,4 = 54 i γ ^2+36 γπ -5 i π ^2/12 √(2)π ^3/2, c^'_4,4 = -12 i ζ (3)-54 i γ ^3-54 γ ^2 π +15 i γπ ^2+π ^3/12 √(2)π ^3/2. In a similar way, we compare y_41^R, y_42^R, y_43^R with right hand side of (<ref>) and we can obtain C^'. After the permutation P of canonical coordinates, we get C = C^'P^-1 = ( [ i/2 √(2)π ^3/2 i/√(2)π ^3/2; π +3 i γ/2 √(2)π ^3/2 3 i γ/√(2)π ^3/2; 54 i γ ^2+36 γπ -5 i π ^2/12 √(2)π ^3/2 i (54 γ ^2+7 π ^2)/6 √(2)π ^3/2; -12 i ζ (3)-54 i γ ^3-54 γ ^2 π +15 i γπ ^2+π ^3/12 √(2)π ^3/2 i (-4 ζ (3)+18 γ ^3+7 γπ ^2)/2 √(2)π ^3/2; ]. . [ -i/2 √(2)π ^3/2 i/2 √(2)π ^3/2; π -3 i γ/2 √(2)π ^3/2 -3 (π -i γ )/2 √(2)π ^3/2; -54 i γ ^2+36 γπ +5 i π ^2/12 √(2)π ^3/2 i (54 γ ^2+108 i γπ -53 π ^2)/12 √(2)π ^3/2; 12 i ζ (3)+(π -3 i γ ) (18 γ ^2+12 i γπ -π ^2)/12 √(2)π ^3/2 -4 i ζ (3)+18 i γ ^3-54 γ ^2 π -53 i γπ ^2+17 π ^3/4 √(2)π ^3/2; ]). §.§ Proof of the refined Dubrovin conjecture <ref> for LG(2,4) Let 𝒰 be the tautological bundle on LG(2,4), and 𝒰^*:= (𝒰,𝒪). The collection ( 𝒪, 𝒪(1), Σ^(2,1)𝒰^*, 𝒪(2) ) is a full exceptional collection <cit.> (cf. <cit.>), where Σ is the Schur functor. Since the semisimplicity of the small quantum cohomology of LG(2,4) at q=1 is known <cit.>, we have proved part 1 of Conjecture <ref> for LG(2,4). Now twist (<ref>) by ∧^2 𝒰^*: ( 𝒪⊗∧^2 𝒰^*, 𝒪(1)⊗∧^2 𝒰^*, Σ^(2,1)𝒰^*⊗∧^2 𝒰^*, 𝒪(2)⊗∧^2 𝒰^* ). Denote the objects of (<ref>) by E_k, 1≤ k ≤ 4. The following proposition can be easily verified. The twisted collection (<ref>) is a full exceptional collection. The Euler matrix S̃ = (χ(E_j,E_k))_1≤ j,k ≤ n can be computed: S̃= ( [ 1 5 16 14; 0 1 4 5; 0 0 1 4; 0 0 0 1; ]). 
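The passage from S' to S is a plain permutation conjugation, and the shape of the resulting matrix can be checked mechanically. The following snippet is an illustrative check (not from the paper): it reproduces the matrix S quoted above from S' and P, and verifies that the Euler matrix S̃ just computed is upper unitriangular.

```python
# Illustrative check: conjugating the Stokes matrix S' by the permutation P
# must give the upper-triangular matrix S quoted in the text.
import numpy as np

S_prime = np.array([[ 1,  4,   4, 0],
                    [ 0,  1,   0, 0],
                    [ 0,  5,   1, 0],
                    [-4, -5, -11, 1]])
P = np.array([[0, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 0, 1, 0],
              [0, 1, 0, 0]])
S = np.rint(P @ S_prime @ np.linalg.inv(P)).astype(int)
print(S)
# expected:
# [[ 1  -4 -11  -5]
#  [ 0   1   4   4]
#  [ 0   0   1   5]
#  [ 0   0   0   1]]

S_tilde = np.array([[1, 5, 16, 14],
                    [0, 1,  4,  5],
                    [0, 0,  1,  4],
                    [0, 0,  0,  1]])
assert np.allclose(np.tril(S_tilde, -1), 0) and np.allclose(np.diag(S_tilde), 1)
```

Matching S with the inverse of the Euler matrix S̃, up to the braid-group action and sign changes, is exactly what the proof in the next subsection carries out.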
The Gamma class Γ̂^- of LG(2,4) is: Γ̂^-=1+3 γσ_1+1/6(54 γ ^2+π ^2) σ_2+1/2(-4 ζ (3)+18 γ ^3+γπ ^2)σ_2,1. The graded Chern characters are Ch(𝒪) =1, Ch(𝒪(1)) =1+2 i πσ_1-4 π ^2 σ_2-83 i π ^3 σ_2,1, Ch(Σ^(2,1)𝒰^*) =2+6 i πσ_1-16 π ^2 σ_2-32 i π ^3 σ_2,1, Ch(𝒪(2)) =1+4 i πσ_1-16 π ^2 σ_2-643 i π ^3 σ_2,1, Ch(∧^2 𝒰^*) =1+2 i πσ_1-4 π ^2 σ_2-83 i π ^3 σ_2,1. Then we can compute the matrix C_Γ = ( [ i/2 √(2)π ^3/2 i/2 √(2)π ^3/2; π +3 i γ/2 √(2)π ^3/2 -π -3 i γ/2 √(2)π ^3/2; 54 i γ ^2+36 γπ -5 i π ^2/12 √(2)π ^3/2 i (54 γ ^2+36 i γπ -5 π ^2)/12 √(2)π ^3/2; -12 i ζ (3)-54 i γ ^3-54 γ ^2 π +15 i γπ ^2+π ^3/12 √(2)π ^3/2 -12 i ζ (3)+54 i γ ^3-54 γ ^2 π -15 i γπ ^2+π ^3/12 √(2)π ^3/2; ]. . [ i/√(2)π ^3/2 i/2 √(2)π ^3/2; -2 π +3 i γ/√(2)π ^3/2 -3 (π -i γ )/2 √(2)π ^3/2; i (54 γ ^2+72 i γπ -17 π ^2)/6 √(2)π ^3/2 i (54 γ ^2+108 i γπ -53 π ^2)/12 √(2)π ^3/2; 2 (π ^3-6 i ζ (3))+54 i γ ^3-108 γ ^2 π -51 i γπ ^2/6 √(2)π ^3/2 -4 i ζ (3)+18 i γ ^3-54 γ ^2 π -53 i γπ ^2+17 π ^3/4 √(2)π ^3/2; ]), whose columns are given by coordinates of i/(2π)^3/2Γ̂^-∪ e^-iπ c_1∪Ch(E_k), 1≤ k ≤ 4, with respect to the basis σ_0, σ_1, σ_2, σ_2,1. Conjecture <ref> holds for the Lagrangian Grassmanian LG(2,4). Note that we have proved part 1 of Conjecture <ref> for LG(2,4). Now consider the action of braid group ℬ_4, which is generated by 3 elementary braids β_12, β_23, β_34. The following sequence transforms S into S̃^-1 and C into C_Γ: * The action by β^-1_23, * Change of signs by diag(1,-1,-1,1). Therefore, we have proved part 2 and 3 of Conjecture <ref> for LG(2,4). Acknowledgements. This work was done during my master thesis project. I would like to thank Di Yang for his advice, helpful discussions and encouragements. The work was supported by CAS No. YSBR-032, by National Key R and D Program of China 2020YFA0713100, and by NSFC No. 12371254.
http://arxiv.org/abs/2409.03516v1
20240905132950
LMLT: Low-to-high Multi-Level Vision Transformer for Image Super-Resolution
[ "Jeongsoo Kim", "Jongho Nang", "Junsuk Choe" ]
cs.CV
[ "cs.CV", "cs.AI" ]
§ ABSTRACT Recent Vision Transformer (ViT)-based methods for Image Super-Resolution have demonstrated impressive performance. However, they suffer from significant complexity, resulting in high inference times and memory usage. Additionally, ViT models using Window Self-Attention (WSA) face challenges in processing regions outside their windows. To address these issues, we propose the Low-to-high Multi-Level Transformer (LMLT), which employs attention with varying feature sizes for each head. LMLT divides image features along the channel dimension, gradually reduces spatial size for lower heads, and applies self-attention to each head. This approach effectively captures both local and global information. By integrating the results from lower heads into higher heads, LMLT overcomes the window boundary issues in self-attention. 
Extensive experiments show that our model significantly reduces inference time and GPU memory usage while maintaining or even surpassing the performance of state-of-the-art ViT-based Image Super-Resolution methods. Our code is available at <https://github.com/jwgdmkj/LMLT>. § INTRODUCTION Single Image Super-Resolution (SISR) is a technique that converts low-resolution images into high-resolution ones and has been actively researched in the field of computer vision. Traditional methods, such as nearest neighbor interpolation and bilinear interpolation, were used in the past, but recent super-resolution research has seen significant performance improvements, particularly through CNN-based methods <cit.> and Vision Transformer (ViT)-based methods <cit.>. Since the introduction of SRCNN <cit.>, CNN-based image super-resolution architectures have advanced by utilizing multiple convolutional layers to understand contexts at various scales. These architectures deliver this understanding through residual and/or dense connections <cit.>. However, super-resolution using CNNs faces several issues in terms of performance and efficiency. Firstly, CNN-based models can become excessively complex and deep to improve performance, leading to increased model size and memory usage <cit.>. To mitigate this, several models share parameters between modules <cit.>, but this approach does not guarantee efficiency during inference <cit.>. SAFMN <cit.> addresses the balance between accuracy and complexity by partitioning image features using a multi-head approach <cit.> and implementing non-local feature relationships at various scales. However, it struggles to capture long-range dependencies due to limited kernel sizes. ViT-based models have shown superior performance compared to CNN-based models by effectively modeling global context interactions <cit.>. For example, SwinIR <cit.> utilized the Swin Transformer <cit.> for image super-resolution, demonstrating the effectiveness of the transformer architecture. Subsequently, hybrid models combining ViT and CNN have been proposed, achieving significant performance increases <cit.>. However, ViT models face quadratically increasing computational costs as input size grows <cit.>. To address this, Window Self-Attention (WSA) has been developed, which performs self-attention by dividing the image into windows <cit.>. Despite this, WSA suffers from quality degradation at window boundaries and lacks interaction between windows <cit.>. Additionally, conventional ViT-based models stack self-attention layers in series <cit.>, which significantly increases computational load and inference time. In this paper, we propose LMLT (Low-to-high Multi-Level Transformer) to improve efficiency during inference while maintaining performance. Similar to SAFMN, our approach uses a multi-head method <cit.> to split image features and apply pooling to each feature. Each head applies the self-attention mechanism. Unlike conventional self-attention blocks, which stack self-attention layers in series (Figure <ref>(a)), we stack them in parallel to reduce computation (Figure <ref>(b)). This means we integrate the number of heads and layers (depth) into a single mechanism. Note that we call the head with the most pooling the lower head, and the number of pooling applications decreases as we move to the upper heads. Since the window size is the same for all heads, the upper heads focus on smaller areas and effectively capture local context. 
In contrast, the lower heads focus on larger areas and learn more global information. This approach allows us to dynamically capture both local and global information. Additionally, we introduce a residual connection <cit.> to pass global information from the more pooled lower heads to the less pooled upper heads. This enables the windows of the upper heads to view a wider area, thereby resolving the cross-window communication problem. Trained on DIV2K <cit.> and Flickr2K <cit.>, our extensive experiments demonstrate that ViT-based models can effectively achieve a balance between model complexity and accuracy. Compared to other state-of-the-art results, our approach significantly reduces memory usage and inference time while enhancing performance. Specifically, our base model with 60 channels and large model with 84 channels decrease memory usage to 38% and 54%, respectively, and inference time to 22% and 19% compared to ViT-based super-resolution models like NGswin <cit.> and SwinIR-light<cit.> at scale × 4 scale. Moreover, our models achieve an average performance increase of 0.076db and 0.152db across all benchmark datasets. § RELATED WORKS CNN-Based Image Super-Resolution is one of the most popular deep learning-based methods for enhancing image resolution. Since SRCNN <cit.> introduced a method to restore high-resolution (HR) images using three end-to-end layers, advancements like VDSR <cit.> and DRCN <cit.> have leveraged deeper neural network structures. These methods introduced recursive neural network structures to produce higher quality results. ESPCN <cit.> significantly improved the speed of super-resolution by replacing bicubic-filter upsampling with sub-pixel convolution, a technique adopted in several subsequent works <cit.>. To address the limited receptive field of CNNs, some researchers incorporated attention mechanisms into super-resolution models to capture larger areas. RCAN <cit.> applied channel attention to adaptively readjust the features of each channel, while SAN <cit.> used a second-order attention mechanism to capture more long-distance spatial contextual information. CSFM <cit.> dynamically modulated channel-wise and spatial attention, allowing the model to selectively emphasize various global and local features of the image. We use the same window size, similar to CNN kernels, but vary the spatial size of each feature. This allows our model to dynamically capture both local and global information by obtaining global information from smaller spatial sizes and local information from larger spatial sizes. ViT-Based Image Super-Resolution has surpassed the performance of CNN-based models by efficiently modeling long-range dependencies and capturing global interactions between contexts <cit.>. After the success of ViT <cit.> in various fields such as classification <cit.>, object detection <cit.>, and semantic segmentation <cit.>, several models have aimed to use it for low-level vision tasks. IPT <cit.> constructed a Transformer-based large-scale pre-trained model for image processing. However, the complexity of ViT grows quadratically with input size. To mitigate this, many approaches have aimed to reduce computational load while capturing both local and global information. For example, SwinIR <cit.> used the Swin-Transformer <cit.> model for image reconstruction. Restormer <cit.> organized self-attention in the channel direction to maintain global information and achieve high performance in image denoising. 
HAT <cit.> combined self-attention, which captures representative information, with channel attention, which holds global information. To combine local and global information without adding extra complexity, we add features from lower heads, which contain global information, to upper heads, which contain local information. This enables the windows to see beyond their own area and cover a larger region. Efficient Image Super-Resolution research focuses on making super-resolution models more efficient. The CNN-based model FSRCNN <cit.> improved on SRCNN <cit.> by removing the bicubic interpolation pre-processing and increasing the scale through deconvolution, greatly speeding up computation. CARN <cit.> reused features at various stages through cascading residual blocks connected in a multi-stage manner. IMDN <cit.> progressively refined features passing through the network. However, improving performance often requires stacking many convolution layers, leading to increased computational load and memory usage. In contrast, the ViT-based model ELAN <cit.> aimed to enhance spatial adaptability by using various window sizes in self-attention. HNCT <cit.> integrated CNN and Transformer structures to extract local features with global dependencies. NGswin <cit.> addressed the cross-window communication problem of the original Swin Transformer <cit.> by applying Ngram <cit.>. Despite these advances, the considerable computational load of overly deep stacked self-attention mechanisms still constrains the efficiency of ViT-based super-resolution models. To address computational load, we connect self-attention layers in parallel, integrating multi-head and depth (number of layers) to lighten the computation. This, along with reducing the spatial size of features, makes our model more efficient. Additionally, efforts to lighten networks through methods such as knowledge distillation <cit.>, model quantization <cit.>, or pruning <cit.> have been made. Some approaches differentiate between classical and lightweight image super-resolution models by using the same architecture but varying hyperparameters, such as the number of network blocks or feature channels <cit.>. § PROPOSED METHOD Overall Architecture (Figure <ref>(a)). First, we use a 3 × 3 convolution to extract shallow-level features from the image. Next, we stack multiple LHS Blocks (Low-to-High Self-attention Blocks) to extract deep-level features. In each LHS Block, the features go through Layer Normalization (LN) <cit.>, our proposed LMLT (Low-to-high Multi-Level Transformer), LN again, and the CCM <cit.>. Residual connections are also employed. Finally, we use a 3 × 3 convolution filter and a pixel-shuffle layer <cit.> to reconstruct high-quality images. For more details on CCM <cit.>, refer to Appendix <ref>. Low-to-high Multi-Level Transformer (Figure <ref>(b)). LMLT operates within the LHS Block. After features pass through the first LN <cit.>, we divide them into H heads using a Multi-Head approach <cit.> and pool each split feature to a specified size. Specifically, the feature for the uppermost head is not pooled, and as we move to lower heads, the pooling becomes stronger, with the height and width halved for each subsequent head. Each feature then undergoes a self-attention mechanism. The output of the self-attention is interpolated to the size of the upper head's feature and added element-wise (called a low-to-high connection). The upper head then undergoes the self-attention process again, continuing up to the topmost head. 
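To make the head-wise pooling and the low-to-high connection just described concrete, here is a minimal PyTorch sketch of the flow. It is an illustration only, not the authors' implementation: the window self-attention is collapsed into a plain per-window softmax attention without learned projections, average pooling and nearest-neighbour interpolation are assumed, LePE and the final activation are omitted, and all module names and sizes are assumptions.

```python
# Minimal sketch of the low-to-high multi-level attention (illustration only;
# window size, head count, pooling type and module names are assumptions; LePE omitted).
import torch
import torch.nn as nn
import torch.nn.functional as F

def window_attention(x, win):
    """Plain softmax attention inside non-overlapping win x win windows.
    x: (B, C, H, W) with H and W divisible by win."""
    B, C, H, W = x.shape
    xw = x.unfold(2, win, win).unfold(3, win, win)            # B,C,H/win,W/win,win,win
    xw = xw.permute(0, 2, 3, 4, 5, 1).reshape(-1, win * win, C)
    attn = torch.softmax(xw @ xw.transpose(1, 2) / C**0.5, dim=-1)
    out = attn @ xw                                           # (B*nWin, win*win, C)
    out = out.reshape(B, H // win, W // win, win, win, C).permute(0, 5, 1, 3, 2, 4)
    return out.reshape(B, C, H, W)

class LMLTSketch(nn.Module):
    def __init__(self, dim=60, heads=4, win=8):
        super().__init__()
        self.heads, self.win = heads, win
        self.aggregate = nn.Conv2d(dim, dim, 1)               # 1x1 merge after concat

    def forward(self, x):                                     # x: (B, dim, H, W)
        chunks = x.chunk(self.heads, dim=1)                   # split along channels
        outs, carry = [], None
        # lowest head = most pooled; iterate from lowest to highest
        for i in reversed(range(self.heads)):
            f = chunks[i]
            if i > 0:                                         # halve H, W i times
                f = F.adaptive_avg_pool2d(f, (f.shape[-2] >> i, f.shape[-1] >> i))
            if carry is not None:                             # low-to-high connection
                f = f + F.interpolate(carry, size=f.shape[-2:], mode='nearest')
            carry = window_attention(f, self.win)
            outs.append(F.interpolate(carry, size=x.shape[-2:], mode='nearest'))
        y = self.aggregate(torch.cat(outs[::-1], dim=1))      # restore head order, merge
        return y * x                                          # modulate the input feature
```

For instance, LMLTSketch(dim=60, heads=4, win=8) maps a (1, 60, 64, 64) tensor to a tensor of the same shape; spatial sizes are assumed divisible by win times 2 to the power of (heads - 1).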
Finally, the self-attention outputs of all heads are restored to their original size, concatenated, and merged through a 1 × 1 convolution before being multiplied with the original feature. Attention Layer (Figure <ref>(c)). In each attention layer, the feature is divided into 𝑁 non-overlapping windows. The dot product of the query and key is calculated, followed by the dot product with the value. LePE <cit.> is used as the Positional Encoding and added to the value. The output is upscaled by a factor of 2 and sequentially passed to the upper head until it reaches the topmost head. How the Proposed Method Works? Our proposed LMLT effectively captures both local and global regions. As seen in Figure <ref>, even if the window size is the same, the viewing area changes with different spatial sizes of the feature. Specifically, when self-attention is applied to smaller features, global information can be obtained. As the spatial size increases, the red window in the larger feature can utilize information beyond its own limits for self-attention calculation because it has already acquired information from other regions in the previous stage. This combination of lower heads capturing global context and upper heads capturing local context secures cross-window communication. Figure <ref> visualizes the type of information each head captures. From Figure <ref>(a) to <ref>(d), the features extracted from each head when 𝐻 is assumed to be 4 are visualized by averaging them along the channel dimension. The first head (<ref>(a)) captures relatively local patterns, while the fourth head (<ref>(d)) captures global patterns. In Figure <ref>(e), these local and global patterns are combined to provide a comprehensive representation. By merging this with the original feature (<ref>(f)), it emphasizes the parts that are important for super-resolution. Computational Complexity. We improve the model's efficiency by connecting self-attention layers in parallel and reducing spatial size. In the proposed model, given a feature 𝐹∈ℝ^𝐻×𝑊×𝐷 and a fixed window size of 𝑀×𝑀, the number of windows in our LMLT is reduced by one-fourth as we move to lower heads, by halving the spatial size of the feature map. Additionally, since each head replaces depth, the channel count is also reduced to 𝐷/ℎ𝑒𝑎𝑑. Therefore, the total computation for each head is given by Equation <ref>. Here, 𝑖 refers to the (𝑖-1)-th head. Ω(LMLT) = 4 [ (hw/4^i)(D/ℎ𝑒𝑎𝑑)^2 ] + 2 [ M^2 (hw/4^i)(D/ℎ𝑒𝑎𝑑) ]. On the other hand, in WSA <cit.>, the self-attention layers are stacked in series, and the spatial size and channel of the feature do not decrease, so the number of windows remains 𝐻𝑊/𝑀^2, and the channel count stays at 𝐷. Therefore, the total computation amount is shown in Equation <ref>. Ω(WSA) = 4hw D^2 + 2M^2 hw D. Therefore, in our proposed model, if the number of heads is greater than 1, both the number of windows and channels decrease compared to WSA <cit.>, resulting in reduced computational load. The more heads there are, the greater the reduction in computational load. Figure: Features from each head ((a) to (d)), aggregated feature (e), and feature multiplied with the original feature (f). § EXPERIMENTS Datasets. Following previous studies <cit.>, we use DIV2K <cit.>, consisting of 800 images, and Flickr2K <cit.>, consisting of 2,650 images, as training datasets. 
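To put numbers on the window-count argument in the Computational Complexity paragraph above, the following short script evaluates Ω(LMLT) and Ω(WSA) as written in the two equations, summing the per-head cost over the heads. The concrete sizes (a 1280×720 feature map, D = 60 channels, M = 8, four heads) and the head indexing starting at zero are assumptions chosen only for illustration.

```python
# Illustration of the complexity formulas quoted above (sizes and the head
# indexing convention are assumptions made for this example).
def omega_lmlt(h, w, D, M, heads):
    total = 0
    for i in range(heads):          # head i works on an (h/2^i) x (w/2^i) feature
        hw_i = (h * w) / 4**i
        d = D / heads               # channels per head
        total += 4 * hw_i * d**2 + 2 * (M**2) * hw_i * d
    return total

def omega_wsa(h, w, D, M):
    return 4 * h * w * D**2 + 2 * (M**2) * h * w * D

h, w, D, M, heads = 720, 1280, 60, 8, 4
print(f"Omega(LMLT) = {omega_lmlt(h, w, D, M, heads):.3e}")
print(f"Omega(WSA)  = {omega_wsa(h, w, D, M):.3e}")
print(f"ratio       = {omega_lmlt(h, w, D, M, heads) / omega_wsa(h, w, D, M):.3f}")
```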
For testing, we use the Set5 <cit.>, Set14 <cit.>, BSD100 <cit.>, Urban100 <cit.>, and Manga109 <cit.> datasets. Implementation Details. We categorize our model into four types: a Tiny model with 36 channels, a Small model with 36 channels and 12 blocks, a Base model with 60 channels, and a Large model with 84 channels. First, the low-resolution (LR) images used as training inputs are cropped into 64 × 64 patches. Rotation and horizontal flip augmentations are applied to this training data. The number of blocks, heads, and growth ratio are set to 8 (except for the Small model), 4, and 2, respectively. We use the Adam Optimizer <cit.> with β_1 = 0.9 and β_2 = 0.99, running for 500,000 iterations. The initial learning rate is set to 1×10^-3 and is reduced to at least 1×10^-5 using the cosine annealing scheme <cit.>. To accelerate the speed of our experiments, we set to and to for the 36-channel model. To account for potential variability in the results due to this setting, we conduct three separate experiments with the LMLT-Tiny model and reported the average of these results. All other experiments are conducted only once. Evaluation Metrics. The quality of the recovered high-resolution images is evaluated using Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM) <cit.>. These metrics are calculated on the Y channel of the YCbCr color space. To test the efficiency of our model, we follow the method of SAFMN <cit.>, measuring GPU memory consumption (#GPU Mem) and inference time (#AVG Time) for scaling a total of 50 images across various models. #GPU Mem, obtained through PyTorch's , represents the maximum memory consumption during inference, and #AVG Time is the average time per image for inferring a total of 50 LR images at ×2, ×3, and ×4 scales. The results for ×2, ×3, and ×4 scaling are based on upscaling random images of sizes 640×360, 427×240, and 320×180, respectively. §.§ Comparisons with State-of-the-Art Methods Image Reconstruction Comparisons. To evaluate the performance of the proposed model, we compare our models with other state-of-the-art efficient and lightweight SR models at different scaling factors. PSNR, SSIM <cit.>, the number of parameters, and FLOPs are used as the main performance evaluation metrics. Note that FLOPs refer to the computational amount required to create an image with a resolution of 1280×720. We first compare the LMLT-Base with IMDN <cit.>, LatticeNet <cit.>, RFDN-L <cit.>, SRPN-Lite <cit.>, HNCT <cit.>, FMEN <cit.>, and NGswin <cit.>. Table <ref> shows that our LMLT-Base achieves the best or second-best performance on most benchmark datasets. Notably, we observe a significant performance increase on the Manga109 dataset, while our model uses up to 30% fewer parameters compared to the next highest performing model, NGswin <cit.>, while the PSNR increases by 0.27dB, 0.29dB, and 0.29dB, respectively, at all scales. Next, we compare the LMLT-Large model with other SR models. The comparison group includes ESRT <cit.>, SwinIR-light <cit.>, ELAN <cit.>, and SRFormer-light <cit.>. As shown in Table <ref>, our large model ranks first or second in performance on most datasets. Among the five test datasets, LMLT-Large shows the best performance for all scales on the Manga109 <cit.> dataset compared to others, and also the best performance on the Set14 <cit.> dataset. 
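Since PSNR is reported on the Y channel of the YCbCr colour space, a small utility of the following kind reproduces that convention. The exact colour conversion and border handling used by the authors are not spelled out here, so the ITU-R BT.601 conversion and the scale-sized border crop below are assumptions (both are common in super-resolution evaluation).

```python
# PSNR on the Y channel (the BT.601 conversion and the border crop are
# assumptions about the evaluation setup, stated for illustration only).
import numpy as np

def rgb_to_y(img):
    """img: float array in [0, 1], shape (H, W, 3) -> luma channel in [16, 235]."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return 65.481 * r + 128.553 * g + 24.966 * b + 16.0

def psnr_y(sr, hr, crop=4):
    """PSNR between a super-resolved image and its ground truth, computed on Y
    after cropping a border equal to the scale factor (e.g. crop=4 for x4)."""
    y1, y2 = rgb_to_y(sr), rgb_to_y(hr)
    if crop > 0:
        y1, y2 = y1[crop:-crop, crop:-crop], y2[crop:-crop, crop:-crop]
    mse = np.mean((y1 - y2) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)
```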
Specifically, compared to SRFormer-light <cit.>, which showed the highest performance on Urban100 <cit.> among the comparison group, our model shows performance gains of 0.13dB, 0.24dB, and 0.15dB at each scale on the Manga109 <cit.> dataset. In addition to this, we demonstrate that our model has a significant advantage in inference time and GPU memory occupancy in the next paragraph. The comparison results of LMLT-Tiny and LMLT-Small with other state-of-the-art models can be found in Appendix <ref>. Memory and Running Time Comparisons. To test the efficiency of the proposed model, we compare the performance of our LMLT model against other ViT-based state-of-the-art super-resolution models at different scales. We evaluate LMLT-Base against NGswin <cit.> and HNCT <cit.>, and LMLT-Large against SwinIR-light <cit.> and SRFormer-light <cit.>. The results are shown in Table <ref>. We observe that LMLT-Base and LMLT-Large are quite efficient in terms of inference speed and memory usage compared to other ViT-based SR models. Specifically, compared to NGswin <cit.>, our LMLT-Base maintains similar performance while reducing memory usage by 61%, 62%, and 61% for ×2, ×3, and ×4 scales, respectively, and decreasing inference time by an average of 78%, 76%, and 78%. Similarly, when comparing SwinIR <cit.> and our LMLT-Large, despite maintaining similar performance, memory usage decreases by 44%, 43%, and 46% for each scale, respectively, and inference time decreases by an average of 87%, 80%, and 81%. This demonstrates both the efficiency and effectiveness of the proposed model.
Time consumption from LN to LHSB (Ours) and WSA (SwinIR <cit.>). A RTX 3090 GPU is used.
method                 × 2           × 3          × 4
LMLT-Tiny (Ours)       35.28 𝑚𝑠      23.21 𝑚𝑠     18.36 𝑚𝑠
LMLT-Base (Ours)       49.44 𝑚𝑠      25.51 𝑚𝑠     21.18 𝑚𝑠
LMLT-Large (Ours)      68.97 𝑚𝑠      32.72 𝑚𝑠     22.66 𝑚𝑠
SwinIR-Light <cit.>    1084.57 𝑚𝑠    336.00 𝑚𝑠    185.23 𝑚𝑠
Table <ref> shows the time consumption for the modules in the proposed method and SwinIR-light <cit.>, specifically detailing the time from the first Layer Normalization (LN) <cit.> to the LHSB and WSA. Our LMLT significantly reduces the time required for the self-attention mechanism. In particular, LMLT-Large achieves time reductions of 94% at the ×2 scale, 90% at the ×3 scale, and 88% at the ×4 scale compared to SwinIR. Given the similar performance between SwinIR and our method, this represents a significant increase in efficiency. Note that the inference time for a total of 50 random images is measured. The time measurements are conducted using 's record and , and due to hardware access latency during log output, the time of the modules might be longer than the times reported in Table <ref>. Tables comparing the memory usage and inference speed of our LMLT with other models can be found in Appendix <ref>. Qualitative Comparisons. Figure <ref> illustrates the differences on the Manga109 <cit.> dataset between our model and other models. As shown, our LMLT successfully reconstructs areas with continuous stripes better than other models. Additionally, we include more comparison images, and further compare our proposed model LMLT-Tiny with CARN <cit.>, EDSR <cit.>, PAN <cit.>, ShuffleMixer <cit.> and SAFMN <cit.> on the Urban100 <cit.> dataset at ×4 scale. Detailed results can be seen in Appendix <ref>. §.§ Ablation Study Effects of Low-to-high Connection. We examine the effects of the low-to-high element-wise sum (low-to-high connection) and downsizing elements of our proposed model. 
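A sketch of how per-image inference time and peak GPU memory can be measured with CUDA events and PyTorch's peak-memory counters is given below. It is an illustration of the measurement protocol described above, not the authors' script, and the model entry point and input shapes are assumptions.

```python
# Sketch of a measurement harness in the spirit of the protocol above
# (illustration; the authors' exact script and model entry points are assumed).
import torch

@torch.no_grad()
def measure(model, lr_images, device="cuda"):
    model = model.to(device).eval()
    torch.cuda.reset_peak_memory_stats(device)
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    times = []
    for lr in lr_images:                       # e.g. 50 random LR tensors (1, 3, 180, 320)
        lr = lr.to(device)
        start.record()
        _ = model(lr)
        end.record()
        torch.cuda.synchronize()
        times.append(start.elapsed_time(end))  # milliseconds
    peak_mb = torch.cuda.max_memory_allocated(device) / 2**20
    return sum(times) / len(times), peak_mb
```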
As shown in Table <ref>, the low-to-high connection yields significant results. Specifically, on the Urban100 <cit.> dataset, PSNR increases by 0.04 dB to 0.05 dB across all scales, and SSIM <cit.> increases by nearly 0.0011 at the ×4 scale, demonstrating the benefits of including the low-to-high connection. Appendix <ref> visualizes the differences in features on Urban100 <cit.> with and without the low-to-high connection, showing that it significantly reduces the boundary lines between windows. Additionally, experiments adding the proposed low-to-high connection to SAFMN <cit.> for ×2, ×3, and ×4 scales are also provided in Appendix <ref>. Effects of Multi-Scale Heads. Table <ref> validates the effectiveness of using multiple scales by experimenting with cases where pooling is not applied to any head. Specifically, we compare our proposed LMLT with cases where pooling and the low-to-high connection are not applied, as well as cases where merging is also not performed. The results show that performance is lower in all cases compared to the proposed model. Appendix <ref> demonstrates that when pooling is not applied, the lack of information connection between windows hinders the proper capture of informative features, even though spatial information is retained. Importance of LHSB, CCM, and MLP. We analyze the impact of LHSB and CCM <cit.> and their interplay. Following the approach in SAFMN <cit.>, we examine performance by individually removing LHSB and CCM <cit.>. Results are shown in the `Module' row of Table <ref>. Removing LHSB reduces the number of parameters by nearly 10%, decreases memory usage to nearly 74%, and drops PSNR by 0.59 dB on the Urban100 <cit.> dataset. Conversely, removing CCM reduces the number of parameters by nearly 90% and PSNR by 1.83 dB. Adding an MLP after the self-attention module, as done in traditional Transformers <cit.>, reduces parameters by about 69% and PSNR by approximately 0.85 dB. To maintain the same number of layers as in the previous two experiments (8 layers), we conducted another experiment with only 4 blocks, resulting in a 49% reduction in parameters and only a 0.55 dB drop in PSNR, indicating the least performance loss. This suggests that the combination of LHSB and CCM effectively extracts features. Additionally, incorporating FMBConv <cit.> reduces parameters by nearly 58% and PSNR by 0.28 dB, while memory usage remains similar. Importance of Aggregation and Activation. We analyze the impact of aggregating features from each head using a 1×1 convolution or applying activation before multiplying with the original input. Results are shown in the `Act / Aggr' row of Table <ref>. Without aggregation, PSNR decreases by 0.10 dB on the Urban100 <cit.> dataset. If features are directly output without applying activation and without multiplying with the original input, PSNR decreases by 0.12 dB. Omitting both steps leads to an even greater decrease of 0.22 dB, indicating that including both aggregation and activation is more efficient. Conversely, multiplying features directly to the original feature without the activation function improves performance by 0.1 dB. Detailed experimental results are discussed in Appendix <ref>. Importance of Positional Encoding. Lastly, we examine the role of Positional Encoding (PE) in performance improvement. Results are shown in the `PE' row of Table <ref>. 
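For context, the positional encoding ablated here can be as simple as a learnable bias added to the window tokens before self-attention. The sketch below illustrates that idea only; it is a hypothetical form and is not claimed to match the exact encoding used in LMLT, which is also contrasted with relative position encoding (RPE) below.

import torch
import torch.nn as nn

class LearnablePE(nn.Module):
    # Hypothetical absolute positional encoding for window tokens of shape (B, N, C), N = window ** 2.
    def __init__(self, num_tokens, channels):
        super().__init__()
        self.pos = nn.Parameter(torch.zeros(1, num_tokens, channels))
        nn.init.trunc_normal_(self.pos, std=0.02)

    def forward(self, tokens):
        return tokens + self.pos     # broadcast over the batch dimension

# usage: tokens = LearnablePE(8 * 8, 36)(tokens) before computing window self-attention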
Removing PE results in decreased performance across all benchmark datasets, notably with a PSNR drop of 0.06 dB and an SSIM decrease of 0.0006 on the Urban100 <cit.> dataset. Using RPE <cit.> results in a maximum PSNR increase of 0.03 dB on the Set14 <cit.> dataset, but has little effect on other datasets. Additionally, parameters and GPU memory increase by 5K and 45M, respectively. § CONCLUSION In this paper, we introduced the Low-to-high Multi-Level Transformer (LMLT) for efficient image super-resolution. By combining multi-head and depth reduction, our model addresses the excessive computational load and memory usage of traditional ViT models. In addition to this, LMLT applies self-attention to features at various scales, aggregating lower head outputs to inform higher heads, thus solving the cross-window communication issue. Our extensive experiments demonstrate that LMLT achieves a favorable balance between model complexity and performance, significantly reducing memory usage and inference time while maintaining or improving image reconstruction quality. This makes the proposed LMLT a highly efficient solution for image super resolution tasks, suitable for deployment on resource-constrained devices. plainnat § IMPACT OF NUMBER OF BLOCKS, CHANNELS, HEADS AND DEPTHS In this section, we analyze how the performance of our proposed model changes based on the number of blocks, heads, channels and depths. Impact of Number of Blocks. First, We evaluate the performance by varying the number of blocks to 4, 6, 8, 10, and 12. Experiments are conducted on ×2 scale and the performance is evaluated using benchmark datasets, and analyzed in terms of the number of parameters, FLOPs, GPU memory usage, and average inference time. As shown in Table <ref>, the increase in the number of parameters, FLOPs and inference time tends to be proportional to the number of blocks, and performance also gradually improves. For the Manga109 <cit.> dataset, as the number of blocks increases from 4 to 12 in increments of 2, PSNR increases by 0.27 db, 0.16 db, 0.10 db, and 0.10 db, respectively. Interestingly, despite the increase in the number of blocks from 4 to 12, the GPU memory usage remains almost unchanged. While the number of parameters nearly triples, the GPU memory usage remains stable, 323.5M to 324.5M. We observe the overall increase in PSNR with the increase in the number of blocks and designate the model with 8 blocks as LMLT-Tiny and the model with 12 blocks as LMLT-Small. Impact of Number of Channels. Next, we evaluate how performance changes with the number of channels. Similar to the performance evaluation based on the number of blocks, this experiment evaluates performance using benchmark datasets, the number of parameters, FLOPs, GPU memory usage, and average inference time as performance metrics. As shown in Table <ref>, LMLT's performance increases with channels, along with parameters and FLOPs. However, unlike the variations in the number of blocks, increasing the number of channels results in a more significant increase in the number of parameters, FLOPs, and memory usage. Inference time, however, increases proportionally with the number of channels. For instance, with 36 channels, the average inference time is 57.16𝑚𝑠, and when doubled, it requires approximately 108.74𝑚𝑠, nearly twice the time. As the number of channels increases from 24 to 84 in increments of 12, the PSNR on the Urban100 <cit.> dataset increases by 0.42 db, 0.29 db, 0.19 db, 0.13 db, and 0.10 db, respectively. 
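The GPU memory and average inference time reported in these tables follow the measurement protocol of the Implementation Details section. A hedged sketch of one way to collect such numbers in PyTorch is shown below; the memory and synchronization calls are standard PyTorch APIs, but the warm-up, averaging, and logging choices of the original experiments may differ.

import time
import torch

@torch.no_grad()
def measure(model, inputs, device='cuda'):
    # inputs: list of LR tensors (1 x 3 x H x W); returns peak memory in bytes and mean time in seconds.
    model = model.to(device).eval()
    torch.cuda.reset_peak_memory_stats(device)
    times = []
    for x in inputs:
        x = x.to(device)
        torch.cuda.synchronize(device)
        start = time.perf_counter()
        model(x)
        torch.cuda.synchronize(device)     # wait for all kernels to finish before stopping the clock
        times.append(time.perf_counter() - start)
    return torch.cuda.max_memory_allocated(device), sum(times) / len(times)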
Based on the overall performance increase, we designate the model with 60 channels as the Base model and the model with 84 channels as the Large model. In this context, the Small model has an inference time about 3ms longer than the Base model, but it has fewer parameters, lower memory usage, and fewer FLOPs, thus justifying its designation. Impact of Number of Heads. In this paragraph, we compare the performance differences based on the number of heads. In our model, as the number of heads decreases, the channel and the number of downsizing operations for each head decrease. For example, in our baseline with 4 heads and 36 channels, the lowest head has a total of 9 channels and is pooled 3 times. However, if there are 2 heads, the lowest head has 18 channels and is pooled once. Additionally, the maximum pooling times and the number of heads are related to the number of windows and the amount of self-attention computation. According to equation <ref>, as the number of heads decreases, the computation increases. As a result, as the number of heads decreases, the number of parameters, FLOPs, and GPU memory usage increase. As shown in Table <ref>, the performance with 4 heads and 3 heads is similar across all scales and test datasets. However, when the number of heads is reduced to 1, the performance drops significantly. This difference is particularly noticeable in the Urban100 <cit.> dataset, where at scale ×2, the performance with 4 heads is 32.04 db, whereas with 1 head, it drops to 31.93 db, a decrease of 0.11 db. Additionally, when the scale is ×3 and ×4, the PSNR decreases by 0.05 dB and 0.04 dB, respectively. This indicates that even if the spatial size of all features is maintained with a single head, even though the number of parameters, FLOPs, and channels per head increase, the inability to capture information from other windows can lead to a decline in performance. Impact of Number of Depths. Additionally, we examine how the performance changes when we add more attention modules to our model. The proposed LMLT connects self-attention layers in parallel, where self-attention is calculated in lower heads and then connected to the upper layers. However, a different approach, like other models <cit.>, could be to calculate self-attention multiple times(i.e., in series) before sending it to the upper heads, thus mixing serial and parallel connections. However, our experimental results indicate that calculating self-attention multiple times in a single head before sending it to the upper heads is not an effective choice. As shown in Table <ref>, increasing the self-attention calculations from once to three times increases the inference time by 87.7% on the Urban100 <cit.> dataset at × 4 scale, but the PSNR only improves by 0.02 dB. Similar trends are observed in other datasets and scales, showing minimal differences in PSNR and SSIM performance. This demonstrates that having one head composed of serial self-attention layers and connecting heads in parallel does not yield good performance relative to inference time. § EFFECTS OF LOW-TO-HIGH CONNECTION AND POOLING Difference between with and without Low-to-high connection In the Table <ref>, we confirm performance differences when the low-to-high connection is not applied to LMLT. Inspired by this, we also apply low-to-high connections between heads in SAFMN <cit.> and verify the experimental results. Table <ref> shows that adding low-to-high connection to the upper head in SAFMN <cit.> does not yield significant performance differences. 
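The head/pooling bookkeeping described in the 'Impact of Number of Heads' paragraph (for example, 4 heads with 36 channels giving 9 channels and three poolings for the lowest head, versus 2 heads giving 18 channels and one pooling) can be stated as a small helper. The function below is purely illustrative and assumes the lowest head is pooled heads-1 times, with each higher head pooled one time fewer.

def head_layout(channels, heads):
    # Returns (channels per head, pooling count per head), lowest head first.
    per_head = channels // heads
    pools = [heads - 1 - i for i in range(heads)]   # top head keeps the original resolution
    return per_head, pools

print(head_layout(36, 4))   # (9, [3, 2, 1, 0])
print(head_layout(36, 2))   # (18, [1, 0])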
Moreover, at the ×4 scale, the SSIM for the Urban100 <cit.> and Set5 <cit.> datasets decreases by 0.0010 and 0.0011, respectively, indicating a reduction in performance. We then visualize the features of LMLT-Tiny to understand the effect of the low-to-high connection. Each column of Figure <ref> illustrates the original image, the aggregated feature visualization of LMLT-Tiny combining all heads <ref>(a), and the aggregated feature visualization of LMLT-Tiny without low-to-high connection <ref>(b). In  <ref>(b), the images show pronounced boundaries in areas such as stairs, buildings, and the sky. In contrast,  <ref>(a) shows these boundaries as less pronounced. This demonstrates that the low-to-high connection can address the border communication issues inherent in WSA. Difference between with and without Pooling We analyze the impact of pooling on the performance of super-resolution. As observed in Table <ref>, even though pooling preserves spatial information, the overall performance decreases when it is not applied. We investigate the reason behind this through feature visualization. Figure <ref> visualizes the features when no pooling is applied to any head in LMLT. The leftmost image is the original Urban100 <cit.> image. Figure <ref>(a) shows the aggregated features of all heads in LMLT-Tiny. Column Figure <ref>(b) visualizes the features without pooling, and Figure <ref>(c) visualizes the features without both pooling and merging, all at the ×4 scale. In  <ref>(b) and  <ref>(c), grid patterns are evident across the images, indicating that the disadvantages of being limited to local windows outweigh the benefits of maintaining the original spatial size. § IMPACT OF ACTIVATION FUNCTION In Table <ref>, we discuss that not applying the activation function GeLU <cit.> might improve performance. Therefore, we experiment with LMLT and LMLT without GeLU <cit.> across various scales and channels to confirm the results. Table <ref> shows the results for our LMLT and the model without the activation function across different scales and channels. As shown, with 36 channels, there is minimal performance difference across all scales, with the largest being a 0.04 higher PSNR on the Set5 <cit.> ×4 scale when GeLU <cit.> is removed. However, when expanded to 60 channels, our LMLT performs better on most benchmark datasets for both ×3 and ×4 scales. Specifically, on the ×4 scale of the Urban100 <cit.> dataset, PSNR and SSIM are higher by 0.05 dB and 0.0013, respectively. This demonstrates that adding GeLU <cit.> after aggregating features is more beneficial for performance improvement. § LAM AND ERF COMPARISONS To verify whether the proposed LMLT exhibits a wider receptive field, we utilize local attribution map (LAM) <cit.> and effective receptive field (ERF) <cit.>. Specifically, we use LAM to show that our proposed LMLT-Large has a wider receptive field compared to SwinIR-Light <cit.> and SRFormer-Light <cit.>. Detailed visualizations are presented in Figure <ref>. Additionally, SwinIR-NG <cit.> is included for comparison, and we visualize the ERF. The detailed results are shown in Figure <ref>. Through these two analyses, we demonstrate that our proposed model exhibits a wider receptive field than existing ViT-based SR models. 
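The effective receptive field (ERF) maps referenced above are typically obtained by back-propagating from the centre of the output to the input and plotting the magnitude of the resulting input gradient. The generic sketch below illustrates that procedure; it is not the exact script used to produce Figure <ref>, and the input size and normalization are arbitrary choices.

import torch

def effective_receptive_field(model, in_shape=(1, 3, 128, 128)):
    # Gradient of the centre output location w.r.t. the input, averaged over colour channels.
    x = torch.randn(in_shape, requires_grad=True)
    y = model(x)
    h, w = y.shape[-2] // 2, y.shape[-1] // 2
    y[..., h, w].sum().backward()            # seed only the centre location
    erf = x.grad.abs().mean(dim=1)[0]        # H x W saliency map
    return erf / (erf.max() + 1e-12)         # normalised for visualisation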
Additionally, we compare the LAM of our proposed model, LMLT-Tiny (Figure <ref>(c)), with a version of the model that does not include pooling for each head (Figure <ref>(a)), and a version where the low-to-high connection is removed (Figure <ref>(b)), demonstrating that our proposed model effectively references a broader region. The results show that, even when the spatial size of the model is maintained without pooling, it fails to process information from a wider area. Moreover, the low-to-high connection proves to be effective in enabling the model to capture information from a larger region. § CCM : CONVOLUTIONAL CHANNEL MIXER

Figure: CCM (Convolutional Channel Mixer) proposed in SAFMN <cit.>.

CCM instead of MLP. Since the feed-forward network (FFN) in the original transformer <cit.> is a fully connected layer, we assume that using it in ViT might disrupt the spatial information of the features. Therefore, we apply the convolutional channel mixer (CCM) <cit.> instead, an FFN based on FMBConv <cit.>, to preserve spatial information. CCM is a module that mixes the convolution channels. Specifically, the features pass through two convolution layers. The first layer has a 3 × 3 kernel and expands the channels. Then, GELU <cit.> is applied for non-linear mapping. Finally, a convolution layer with a 1 × 1 kernel restores the channels to their original state. In our method, the features pass through Layer Normalization <cit.>, LMLT, and another Layer Normalization before being input to CCM <cit.>. The detailed structure can be seen in Figure <ref>. § COMPARISONS ON LMLT WITH OTHER METHODS Image Reconstruction Comparisons. Here, we first compare the LMLT-Tiny and LMLT-Small with CARN-m, CARN <cit.>, EDSR-baseline <cit.>, PAN <cit.>, LAPAR-A <cit.>, ECBSR-M16C64 <cit.>, SMSR <cit.>, Shuffle-Mixer <cit.>, and SAFMN <cit.>. Table <ref> shows that our LMLT significantly reduces the number of parameters and computation overheads while achieving substantial performance gains on various datasets. LMLT-Small performs well on most datasets, and LMLT-Tiny also ranks second or third best on the BSD100 <cit.> and Manga109 <cit.> datasets, except for the Manga109 ×4 SSIM <cit.>. In particular, its number of parameters and FLOPs are the second smallest after SAFMN <cit.>. Memory and Running Time Comparisons. In this paragraph, we present the memory usage and average inference time of our proposed LMLT compared to other super-resolution methods. Similar to the experimental setup in Table <ref>, #GPU Mem represents the maximum memory usage during inference, measured using PyTorch's . #AVG Time indicates the average time taken to upscale a total of 50 random images by ×2, ×3, and ×4 scales. The experiments are conducted three times, and the average inference time is reported. Each random image has a size of 640×360 for the ×2 scale, 427×240 for the ×3 scale, and 320×180 for the ×4 scale. As shown in Table <ref>, our proposed LMLT-Tiny uses less memory at all scales than all models except SAFMN <cit.>. Although LMLT-Small requires more inference time than other models, its GPU usage is almost the same as that of LMLT-Tiny, and its performance is significantly superior, as demonstrated in Table <ref>. Qualitative Comparisons. In this paragraph, we examine the qualitative comparisons of the LMLT-Tiny model and other models on the Urban100 <cit.> ×4 scale. The comparison includes CARN <cit.>, EDSR <cit.>, PAN <cit.>, ShuffleMixer <cit.>, and SAFMN <cit.>. The results can be seen in Figure <ref>.
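As a concrete reference for the CCM block described in the section above, the following is a minimal PyTorch sketch of the two-layer design (a 3×3 convolution that expands the channels, GELU, then a 1×1 convolution that restores them). The expansion ratio is treated as a free parameter here, and other details may differ from the SAFMN implementation.

import torch.nn as nn

class CCM(nn.Module):
    # Convolutional Channel Mixer: 3x3 conv expands channels, GELU, then 1x1 conv restores them.
    def __init__(self, channels, expansion=2.0):
        super().__init__()
        hidden = int(channels * expansion)
        self.body = nn.Sequential(
            nn.Conv2d(channels, hidden, kernel_size=3, padding=1),
            nn.GELU(),
            nn.Conv2d(hidden, channels, kernel_size=1),
        )

    def forward(self, x):
        return self.body(x)

Keeping both layers convolutional is what preserves the spatial layout that a token-wise MLP would ignore.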
As mentioned in section <ref>, we observe that our model reconstructs images with continuous stripes better than other models. Additionally, we compare our proposed models LMLT-Base and LMLT-Large with IMDN <cit.>, NGswin <cit.>, SwinIR-light <cit.>, and SwinIR-NG <cit.> on the Manga109 <cit.> dataset at the ×4 scale. As explained earlier in section <ref>, our model shows strength in areas with continuous lines compared to other models. Figure <ref> illustrates the differences between our LMLT-Base, LMLT-Large, and other state-of-the-art models.
http://arxiv.org/abs/2409.02261v1
20240903193823
Action-Based ADHD Diagnosis in Video
[ "Yichun Li", "Yuxing Yang", "Syed Mohsen Naqvi" ]
cs.CV
[ "cs.CV", "cs.AI" ]
Action-Based ADHD Diagnosis in Video Yichun Li^1, Yuxing Yang^1, Rajesh Nair^2, Syed Mohsen Naqvi^1 1Intelligent Sensing and Communications Research Group, Newcastle University, UK. 2Cumbria, Northumberland, Tyne and Wear (CNTW), NHS Foundation Trust, UK September 9, 2024 ============================================================================================================================================================================================================================================== § ABSTRACT Attention Deficit Hyperactivity Disorder (ADHD) causes significant impairment in various domains. Early diagnosis of ADHD and treatment could significantly improve the quality of life and functioning. Recently, machine learning methods have improved the accuracy and efficiency of the ADHD diagnosis process. However, the cost of the equipment and trained staff required by the existing methods are generally huge. Therefore, we introduce the video-based frame-level action recognition network to ADHD diagnosis for the first time. We also record a real multi-modal ADHD dataset and extract three action classes from the video modality for ADHD diagnosis. The whole process data have been reported to CNTW-NHS Foundation Trust, which would be reviewed by medical consultants/professionals and will be made public in due course. § INTRODUCTION Attention deficit hyperactivity disorder (ADHD) is a worldwide prevalent neurodevelopmental disorder. While the adult population has a high rate of undiagnosed and has reached 3% of the population <cit.>. ADHD patients exhibit inattention, impulsivity, and hyperactivity symptoms, with detrimental effects on brain development <cit.>. In recent years, machine learning methods and deep learning algorithms have been used in ADHD diagnosis and classification <cit.>. Most of the research is based on Magnetic Resonance Imaging (MRI), Electroencephalography (EEG), and natural language processing which achieves high accuracy <cit.>, but also with a high cost of equipment and operational staff. Hence, we propose a new low-cost ADHD diagnosis approach on a machine learning-based ADHD action detection network in this work. We use video because it is easy to capture the action performance of the participants, and it can greatly reduce the cost of diagnosis. The main contributions of our work are listed as follows: 1) an attention test is designed for multi-modal ADHD real data recording. 2) an ADHD diagnosis system based on 3D-CNN action recognition is implemented, and video data is evaluated with different network structures; 3) classification criteria is also proposed to provide diagnosis results with time-action ADHD characteristics. § PARTICIPANTS AND PROCEDURE We recorded a multi-modal ADHD dataset which includes 7 ADHD subjects diagnosed by the NHS medical consultant under the DSM-V criteria and 10 neurotypical controls. The gender distribution for 7 subjects is 3 males and 4 females, provided by the CNTW-NHS Foundation Trust. The control group consists of 9 males and 1 female. All participants are adults aged between 18 and 50. For the control group, adults who did not have neurological problems and ADHD diagnosis history were the volunteers from Newcastle University. An attention and responsiveness test is provided for all participants. 
We prepare four continuous dialogue tasks: 1) a brief conversation between the participants and the interviewer, approximately 10-20 minutes long; 2) performing Cambridge Neuropsychological Test Automated Battery (CANTAB) tasks. This task takes about 40-50 minutes; 3) beep reaction task. This task takes 6 minutes; 4) watching videos, including a math video labelled `boring' and a rally video labelled `exciting'. This task takes 10 minutes. The video signals are recorded by 3 GoPro cameras which contain a front-faced camera 1 to record facial information and two side cameras 2&3 to record the information of the left and right torsos and limbs with a resolution of 3840× 2160. The block diagram of the proposed ADHD diagnosis system is shown in Fig. 1. The system contains four main parts: data processing, action recognition, stationary ratio calculation, and ADHD diagnosis. Existing action recognition datasets are not focused on typical ADHD symptoms, e.g., fidgeting of the limbs and the body when the subjects and controls are in a sitting position during the data recording. Specifically, the training dataset used in the proposed action recognition module mainly focuses on continuous actions (duration over five seconds) in the sitting position. The ADHD diagnosis result is summarized and classified by estimating the distribution of action labels of the action recognition part with a novel evaluation matrix named stationary ratio (SR). Since the raw frame size from recorded videos is too large to feed into the diagnosis system, the input frame is reduced from 3840× 2160 to 320× 180. The landmark of the participant's waist is the center of the processed frame in the sitting position. The video sequences are also down-sampled from 32FPS to 16FPS to reduce the computational cost. Then, after the frame segmentation and patch extraction step, the patches with the size 180× 180 containing the samples' torso and limb information are used for training the network. We propose a novel measurement named Stationary Ratio (SR) as the evaluation criterion for action classification of ADHD symptoms detection. It focuses on the percentage of periods that the test subject is at the still position. The SR is defined as: SR=α _1 /(α _1+α _2+α _3) where α _1 denotes the number of the samples of predicted still position, α _2 is the number of samples of small ranges (less than 30^∘) of limb fidgets, and α _3 is the number of the samples of large rotations (more than 30^∘) of torso movements. As aforementioned, we use Camera 2 and Camera 3 for left and right viewpoints, respectively. Therefore, we use the average SR measurement of the left and right viewpoint as SR_Avg. § EXPERIMENTS §.§ Datasets and Data Processing The action recognition experiments use the three-class action recognition dataset, i.e., still-position, which contains 88 video clips, limb-fidgets with 110 clips, and torso movements with 101 clips. Each of the clips is between 10-15 seconds. The training, validation, and testing data split is 6/2/2, respectively. The diagnosis dataset for the whole system consists of 34 videos, including 7 subjects and 10 controls of the whole process videos from the left and right sides, and the length of each video is 60-90 minutes. Actions are labeled per three frames in the training, testing, and diagnosis steps. §.§ Experiment Set up and Comparisons We choose a 3D-CNN structure (C3D) as the main core network <cit.>. There are 8 convolution layers that have 3×3×3 kernels with 1 stride. 
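As an illustration of the kind of 3D convolutional pipeline described here, the snippet below sketches a C3D-style block and one cross-entropy training step for the three action classes (still position, limb fidget, torso movement). It is a schematic rather than the authors' exact architecture, and the optimizer choice is an assumption; the full network has eight such convolution layers and the fully connected head described in the next paragraph.

import torch
import torch.nn as nn

class C3DBlock(nn.Module):
    # One 3D convolution stage with a 3x3x3 kernel and stride 1, as in C3D-style networks.
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv3d(in_ch, out_ch, kernel_size=3, stride=1, padding=1)
        self.act = nn.ReLU(inplace=True)
        self.pool = nn.MaxPool3d(kernel_size=(1, 2, 2))     # keep the temporal length, halve H and W

    def forward(self, x):                                    # x: (batch, channels, frames, H, W)
        return self.pool(self.act(self.conv(x)))

model = nn.Sequential(C3DBlock(3, 16), C3DBlock(16, 32), nn.AdaptiveAvgPool3d(1),
                      nn.Flatten(), nn.Linear(32, 3))
criterion = nn.CrossEntropyLoss()                            # cross entropy between outputs and true labels
optimizer = torch.optim.SGD(model.parameters(), lr=1e-9)     # learning rate reported in the paper
clips = torch.randn(2, 3, 16, 112, 112)                      # dummy clips: (batch, C, frames, H, W)
labels = torch.tensor([0, 2])                                # 0 = still, 1 = limb fidget, 2 = torso movement
loss = criterion(model(clips), labels)                       # one schematic optimisation step
loss.backward()
optimizer.step()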
Different from the original C3D structure, we add a fully connected layer to fit the size of the input data. The probabilities of each action are obtained with three fully connected layers with 8192 units and a Softmax activation. The training loss ℒ_c minimizes the cross entropy between the classification outputs and the true labels: ℒ_c = -∑_i=1^n P_l(i) log P_o(i), where i indexes the n action labels, and P_l and P_o are the distribution of the true labels and the distribution of the classification output, respectively. The training epochs for the action classification are 80, and the learning rate is 1× 10^-9. All the experiments are run on a workstation with four Nvidia GTX 1080 GPUs and 16 GB of RAM. §.§ Action Recognition and ADHD Diagnosis Results In these experiments, the SR performance of 7 subjects and 10 controls is evaluated with SR_L, SR_R, and SR_Avg. The results are shown in Table 1. From Table 1, the average SR_Avg over all 17 participants is 0.71. In particular, the average SR_Avg for the 7 subjects and the 10 controls is 0.50 and 0.86, respectively. Therefore, 0.71 is adopted as the threshold for the ADHD diagnosis. In the group of subjects, it is highlighted that only Subject 13 has an abnormal SR_Avg of 0.75. We have sent requests to the clinicians of the CNTW-NHS Foundation Trust to query and double-check the diagnosis details of this ADHD subject. Further analysis is left as future work; meanwhile, this can be considered a failure case, should the clinicians confirm the diagnosis of Subject 13. Based on the threshold value, i.e., 0.71, we further calculate the precision, sensitivity, accuracy, and the Area Under the Curve (AUC) of two traditional neural networks, R2Plus1D and R3D <cit.>, and of our proposed 3D-CNN framework in Table 2. From Table 2, the proposed model shows better performance than R3D and R2Plus1D. This is because the proposed method concentrates on features from both the spatial and the temporal dimensions, thereby capturing the action information encoded in multiple adjacent frames, which plays an important role in recognizing ADHD-typical human actions <cit.>. Therefore, the proposed method shows high sensitivity in recognizing small-range limb fidgets and improves the ADHD diagnosis results. §.§ Time-Action Based Analysis According to DSM-V, some symptoms of hyperactivity-impulsivity are observable in ADHD adults, such as difficulty in sitting still, fidgeting legs, tapping with a pen, etc. <cit.>. However, such behaviour is hard to record manually during the traditional diagnostic process. Through our system, the actions of each participant are fully captured and visualized. Fig. 2 shows the timeline bar chart from the classification results of the ADHD subject and control groups. From Fig. 2, the proportion of gray parts (keeping still or almost stationary) in the ADHD subject group is clearly lower than that in the control group, which is consistent with clinical observations. §.§ Comparison with State-of-the-Art Table 3 shows the performance of state-of-the-art ADHD diagnosis systems on different datasets containing EEG and trajectory signals collected by wearable sensors. From Table 3, the proposed method outperforms the state-of-the-art ADHD diagnosis methods. Compared to these machine learning methods for ADHD diagnosis, our proposed action-based framework can more intuitively expose ADHD-related action patterns. Therefore, the generalization and applicability are improved.
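For reproducibility, the stationary-ratio computation and the threshold rule used in these experiments can be summarized in a few lines. The function below restates the SR definition and the 0.71 decision threshold adopted above; the variable and function names are ours.

def stationary_ratio(labels):
    # labels: per-sample predicted actions, 0 = still, 1 = small limb fidget, 2 = large torso movement.
    a1 = sum(1 for a in labels if a == 0)
    a2 = sum(1 for a in labels if a == 1)
    a3 = sum(1 for a in labels if a == 2)
    return a1 / (a1 + a2 + a3)

def diagnose(labels_left, labels_right, threshold=0.71):
    # SR_Avg over the two side cameras; values below the threshold are flagged as ADHD-like.
    sr_avg = 0.5 * (stationary_ratio(labels_left) + stationary_ratio(labels_right))
    return sr_avg, sr_avg < threshold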
§ CONCLUSIONS This paper proposed an ADHD diagnosis system based on the action recognition framework. Meanwhile, a novel measure was proposed to evaluate the action recognition results. The experimental results showed that our system outperformed the state-of-the-art methods regarding precision, accuracy, and AUC. Moreover, the proposed method is less expensive and suitable for a broad range of initial ADHD diagnoses compared with the existing neuroscience diagnostic methods. In our future work, we will extend the dataset to further cover real-world patient distribution and consider recording more multi-modal data, e.g., EEG and fMRI, to perform fusion and evaluate related results. unsrt
http://arxiv.org/abs/2409.03708v1
20240905171423
RAG based Question-Answering for Contextual Response Prediction System
[ "Sriram Veturi", "Saurabh Vaichal", "Nafis Irtiza Tripto", "Reshma Lal Jagadheesh", "Nian Yan" ]
cs.CL
[ "cs.CL", "cs.IR" ]
Both first authors contributed equally to this research. sriram_veturi@homedepot.com [1] saurabh_s_vaichal@homedepot.com The Home Depot Atlanta Georgia USA Work completed during internship at The Home Depot The Pennsylvania State University USA nit5154@psu.edu Work completed during employment at the Home Depot The Home Depot Atlanta Georgia USA reshma_lal_jagadheesh@homedepot.com The Home Depot Atlanta Georgia USA nian_yan@homedepot.com § ABSTRACT Large Language Models (LLMs) have shown versatility in various Natural Language Processing (NLP) tasks, including their potential as effective question-answering systems. However, to provide precise and relevant information in response to specific customer queries in industry settings, LLMs require access to a comprehensive knowledge base to avoid hallucinations. Retrieval Augmented Generation (RAG) emerges as a promising technique to address this challenge. Yet, developing an accurate question-answering framework for real-world applications using RAG entails several challenges: 1) data availability issues, 2) evaluating the quality of generated content, and 3) the costly nature of human evaluation. In this paper, we introduce an end-to-end framework that employs LLMs with RAG capabilities for industry use cases. Given a customer query, the proposed system retrieves relevant knowledge documents and leverages them, along with previous chat history, to generate response suggestions for customer service agents in the contact centers of a major retail company. Through comprehensive automated and human evaluations, we show that this solution outperforms the current BERT-based algorithms in accuracy and relevance. Our findings suggest that RAG-based LLMs can be an excellent support to human customer service representatives by lightening their workload. [500]Computing methodologies Machine learning RAG based Question-Answering for Contextual Response Prediction System Nian Yan 5th September 2024 ====================================================================== § INTRODUCTION With the advent of ChatGPT and similar tools in mainstream media, Large Language Models (LLMs) have emerged as the standard solution for addressing a wide range of language understanding tasks. However, they can generate incorrect or biased information <cit.>, as their responses are based on patterns learned from data that may not always contain necessary knowledge in a close domain. To address this issue, Retrieval Augmented Generation (RAG) <cit.> is commonly used to ground LLMs in factual information. The RAG architecture processes user input by first retrieving a set of documents similar to the query, which the language model then uses to generate a final prediction. While RAG-based architectures have been successful in various open-domain question answering (Q/A) tasks <cit.>, limited research has explored their scaling dynamics in real conversational scenarios. Therefore, our research is one of the pioneering efforts in exploring the feasibility of an RAG-based approach for developing a knowledge-grounded response prediction system specifically tailored for the contact center of a major retail company. LLMs have recently been widely adopted across various industries, particularly in contact centers, to enhance chatbot development and agent-facing automation <cit.>. A prime example is the Response Prediction System (RPS), an agent-assist solution that generates contextually relevant responses, enabling agents to efficiently address customer queries with a single click. 
This boosts productivity, improves customer experience, and streamlines communication processes. In industry settings, the focus is on generating accurate, contextually appropriate responses with minimal latency. Therefore, RAG-based responses, grounded in company policies, deliver swift and accurate resolutions to customer issues. Figure <ref> demonstrates a possible example of RPS in real settings, where the agent can directly utilize the generated response with a single click. However, implementing RAG for industry-specific use cases to assist human agents in generating valid responses involves several architectural decisions that can affect performance and viability. The retrieval style can be integrated into both encoder-decoder <cit.>) and decoder-only models <cit.>, with various embedding and prompting techniques influencing the final LLM output. In contact centers, where the risk of hallucinations is high and can critically impact business performance, ReAct (Reason+Act) <cit.> prompts can help mitigate issues. Therefore, our research focuses on developing an optimal RAG based knowledge-grounded RPS for a major retail company's contact center. To ensure response accuracy, we also conduct thorough evaluations with human evaluators and automated measures, comparing RAG-based responses to human ground truth and the existing BERT-based system (Figure <ref> shows an overview of traditional customer care scenario with existing and proposed system). In short, we answer the following research questions. * RQ1: What are the effects of different embedding techniques, retrieval strategies, and prompting methods on RAG performance? * RQ2: Do RAG-based responses provide greater assistance to human agents compared to the existing BERT-based system? * RQ3: Can the ReAct (Reason+Act) prompting improve factual accuracy and reduce hallucinations in LLM in real-time settings? Our findings demonstrate an overall improvement over the existing system by suggesting more accurate and relevant responses, highlighting the potential of RAG LLM as an excellent choice for customer care automation. § RELATED WORK RAG architecture: RAG has emerged as a promising solution by incorporating knowledge from external databases to overcome the hallucination, outdated knowledge, transparency issues for LLMs <cit.>. Traditional RAG, popularized after the adoption of ChatGPT, follows a simple process of indexing, retrieval, and generation <cit.> . Despite advancements in Advanced and Modular RAG, Traditional RAG remains popular in the industry due to its ease of development, integration, and quicker speed to market <cit.>. The core components of Traditional RAG include Retriever, Generator, and Augmentation Method, with research focusing on improving semantic representation <cit.>, query alignment <cit.>, and integration with LLMs <cit.>, which motivate our RQ1 to find the optimum setup in this specific use case of RPS in the contact center. RAG LLM for question answering: Several open-domain question-answering (Q/A) tasks have been completed by RAG-based architectures efficiently <cit.>. With the advent of LLMs in recent periods, multiple studies also focus on utilizing LLMs for customer assistance, specifically in recommendations <cit.> and dialogue generation <cit.>. Recent work by <cit.> proposes augmenting LLMs with user-specific context from search engine interaction histories to personalize outputs, leveraging entity-centric knowledge stores derived from users' web activities. 
Similarly, <cit.> introduces a customer service question-answering approach integrating RAG with a knowledge graph (KG) constructed from historical issue data. Therefore, our study is motivated by these prior researches to integrate RAG as a retrieval tool and utilize LLM to generate responses to answer customer queries. § METHODOLOGY To implement an end-to-end RAG framework with LLM, first, it is essential to create a comprehensive dataset comprising relevant question-answer pairs along with corresponding knowledge documents. Next, design choices for specific components of the RAG and LLM architecture must be finalized. Finally, the model should be thoroughly evaluated and refined before being deployed into production. §.§ Phase I: Data Preparation An ideal golden dataset for evaluating RAG architecture (Figure <ref>) should include: * Domain-specific questions (previous queries) with their corresponding grounded responses. * Relevant knowledge base (KB) articles (company documents) containing the policies that determine answers to specific queries. * Out-of-domain questions to ensure the LLM can handle generic queries without hallucinating and can guide customers to provide relevant queries. To create a robust test set (Table <ref> for details), we utilize LLM to generate both relevant question-answer pairs from the company's KB articles (refer to Section A in the Appendix for prompts). Additionally, we supplement these relevant pairs with samples from previous queries & responses in the contact center with out-of-domain questions by sampling from open-source datasets such as MS-MARCO <cit.>. §.§ Phase II: RAG The main components of the RAG architecture are the Retriever and the Generator LLM. We evaluate various strategies for each component and finalize our choices for production. Our findings are validated through experiments with several open-domain question-answer datasets, including MARCO <cit.>, SQuAD <cit.>, and TriviaQA <cit.> (details in Appendix). They present a comparable level of question-answering challenges where answers can be derived from the retrieved knowledge base. Embedding Strategy: The best embedding strategy ensures high performance of the retriever and affects downstream tasks like response generation. We compare the Universal Sentence Encoder (USE) embeddings <cit.>, Google's Vertex AI embedding model text-embedding-gecko@001 <cit.>, and SBERT-all-mpnet-base-v2 <cit.> from the sentence-transformers collection. Retrieval strategies: By retrieving relevant passages from a large corpus of KB articles, the model gains crucial contextual information, enhancing response accuracy and coherence. We specifically consider ScaNN <cit.> for its efficiency in handling large-scale datasets and KNN HNSW <cit.> for its efficient memory usage as retrieval strategies in our study. Additionally, we tested different retrieval thresholds to ensure incorrect documents are not retrieved and passed to the LLM for response generation. LLM for generation: Once the best embedding strategy, retrieval technique, and retrieval thresholds are identified, we test different prompting techniques to ensure that LLMs generate grounded factual responses. We utilize PaLM2 foundation models (text-bison, text-unicorn) <cit.> for text generation across all tasks, as they offer a clear path to production in terms of enterprise licenses and security requirements with Google’s models, compared to other available LLMs at the time of our research. 
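Putting the pieces of this phase together, the sketch below shows the retrieve-then-generate flow as described: embed the query, score it against the pre-embedded KB chunks, skip grounding when the best cosine similarity falls below the 0.7 threshold, and prompt the generator with the retrieved articles plus chat history. The embedding and generation calls are passed in as plain functions because the concrete Vertex AI and ScaNN client code is not reproduced here; all names are illustrative.

import numpy as np

def answer_query(query, history, kb_texts, kb_vecs, embed, generate, threshold=0.7, top_k=3):
    # embed: str -> 1-D numpy vector; generate: str -> str (LLM call); kb_vecs: (N, d), unit-normalised.
    q = embed(query)
    q = q / np.linalg.norm(q)
    sims = kb_vecs @ q                                   # cosine similarity against every KB chunk
    top = np.argsort(-sims)[:top_k]
    if sims[top[0]] < threshold:                         # trivial or out-of-domain query: no grounding
        context = ""
    else:
        context = "\n\n".join(kb_texts[i] for i in top)
    prompt = ("Answer the customer using only the articles below.\n"
              f"Articles:\n{context}\n\nChat history:\n{history}\n\nCustomer: {query}\nAgent:")
    return generate(prompt)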
The best model from Phase 2, incorporating the optimal embedding, retrieval, and prompting techniques, is packaged with relevant KB articles and deployed on a cloud Virtual Machine. For real-time usage, an endpoint is created that takes a customer query and the conversation context as input, generating response suggestions as output. § RESULTS AND FINDINGS First, we optimize RAG setup for our use cases, then evaluate LLM responses using automated metrics and human evaluations. Finally, we assess if prompting or ReAct strategies can improve real-world performance to an acceptable level. §.§ Retrieval evaluation Best setting for RAG: We assessed retriever efficiency using the "Recall at K" (R@k) metric, where K represents the top 1, 3, 5, or 10 documents retrieved, measuring how well the retriever retrieves relevant documents. The Vertex AI - textembedding-gecko@001 (768) embedding, paired with ScaNN retrieval, yielded the best outcomes. Overall, ScaNN generally outperformed KNN HNSW in most cases due to its efficient handling of large-scale datasets and superior retrieval accuracy through quantization and re-ranking techniques <cit.>, so we include only the ScaNN results in Table <ref>. Similarly, Vertex AI embeddings surpassed Sentence BERT and USE due to its superior ability to capture complex semantic relationships tailored for large-scale industry applications. Retrieval Threshold: For out-of-domain or trivial customer queries like "Hello" or "Bye," document retrieval is unnecessary, as shown by 98.59% of retrieved articles having a cosine similarity score below 0.7. In contrast, 88.96% of articles retrieved for relevant company data questions scored above 0.7 (Figure <ref>). This suggests that setting the retrieval threshold at 0.7 effectively determines when retrieval is needed, thereby enhancing response generation efficiency. §.§ Response Prediction System Evaluation To develop an effective Response Generation System (RPS), we conducted a comprehensive evaluation comparing RAG LLM-based responses with a current BERT-based algorithm. Using 1,000 real contact center chat transcripts (PII and PCI compliant), comprising over 5,000 messages, we analyzed customer queries, human agent responses, RAG LLM suggestions, BERT-based suggestions, and retrieved knowledge base documents to assess quality, consistency, and factuality through automated measures and human evaluations. §.§.§ Automated evaluations We utilize the following evaluation techniques, with Table <ref> illustrating our RAG LLM-based technique's performance against the current BERT-based system. Accuracy, Hallucination and Missing rate evaluation In a question-answer system, a response to each query can generate one of three types of responses: accurate (correctly answers the question), hallucinate (incorrect answer), or missing (no answer generated). Therefore, our approach, inspired by <cit.> which provides 98% agreement with human judgments, utilizes an LLM-based method. We employ ChatGPT-3.5-turbo as our evaluator LLM. We prompted the LLM with a query, generated response, and original human response, categorizing the LLM's responses as "correct" for factual and semantic alignment, "incorrect" for mismatches, and "unsure" for semantic challenges. Evaluation includes Accuracy (correct responses), Hallucination Rate (incorrect responses), and Missing Rate (unsure responses) metrics as the proportion of corresponding responses. 
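The three rates defined above reduce to simple label counting once the evaluator LLM has tagged each response. A minimal sketch of that bookkeeping follows; it assumes the verdicts are exactly the strings 'correct', 'incorrect', and 'unsure'.

from collections import Counter

def judge_rates(labels):
    # labels: evaluator-LLM verdicts, one per generated response.
    counts = Counter(labels)
    n = len(labels)
    return {
        "accuracy": counts["correct"] / n,               # factually and semantically aligned
        "hallucination_rate": counts["incorrect"] / n,
        "missing_rate": counts["unsure"] / n,
    }

print(judge_rates(["correct", "correct", "incorrect", "unsure"]))   # {'accuracy': 0.5, ...}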
Overall, RAG LLM improves accuracy by reducing hallucinations and missing rates compared to BERT responses. AlignScore: To ensure response alignment with KB articles, we use AlignScore <cit.> to measure information consistency. Evaluating RAG LLM and BERT-based models on utterances with relevant KB article retrieval by RAG, RAG LLM shows a statistically significant 5.6% improvement via Student's t-test. This enhancement derives from integrating retrieved documents as prompts for LLM responses, whereas BERT relies on query-answer pairs in its training dataset. Semantic similarity: To ensure usability by human agents, coherence between generated and original human responses is crucial. We measure semantic similarity using LongFormer embeddings <cit.>, calculating cosine similarity between generated and original human responses for both models. RAG LLM exhibits an average 20% higher similarity, a statistically significant improvement. Human touch: Customer service are typically preferred to be handled by humans <cit.>, emphasizing the importance of generating human-like responses. We use the AI text detector GPTZero <cit.>, with a 99.05% true positive rate for human responses in our dataset, to evaluate response naturalness. Assessing AI percentage (utterances identified as AI-generated), the BERT-based system, which selects responses from human-generated options, sounds more human. §.§.§ Human evaluations Our method aims to support rather than replace humans through a human-in-the-loop approach. We thoroughly evaluate the quality of RAG LLM and BERT responses using human annotators. Each response is assessed against several criteria, and the average score is computed from all annotators' evaluations. Evaluation metrics were grouped into three main categories: Human Preference Score: Following the classical approach of which version humans prefer most <cit.>, we evaluated which model's responses—"BERT" or "RAG"—were preferred by human evaluators. Quantitative Metrics: Similar to <cit.>, we evaluated factual accuracy (based on human judgment of 'correct,' 'incorrect,' or 'unsure'). Accuracy, Hallucination, and Missing rates were calculated as the number of correct, incorrect, and unsure responses divided by the total number of responses evaluated, respectively. Qualitative Metrics: * Contextual Relevance: Assessed whether the predicted responses were appropriate and in line with the context of the conversation. * Completeness: Checked if the predicted responses were fully-formed and could be used as complete answers by the agents in specific parts of the conversation. * Specificity: Determined whether the predicted responses were tailored to the specific conversation or were too general. Human annotators scored these metrics on a scale of 0 (lowest) to 2 (highest). The results, as detailed in the Table <ref>, Responses generated by the RAG model demonstrated a 45% improvement in factual accuracy and a 27% decrease in the rate of hallucinations compared to the existing model. Moreover, the human evaluators favored responses from the RAG model over the current production model 75% of the times. The Response Prediction System was deployed using Flask, a standard micro web framework, and Gunicorn, chosen for its performance, flexibility, and simplicity in production system configuration. The API receives customer queries as input and provides answers as output. 
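A minimal sketch of the serving layer just described is given below: a Flask app exposing a single endpoint that accepts the customer query and conversation context and returns a suggested response. The route and field names are illustrative, the placeholder function stands in for the RAG pipeline of Section 3, and the production deployment sits behind Gunicorn as noted above.

from flask import Flask, request, jsonify

app = Flask(__name__)

def predict_response(query, context):
    # Placeholder for the retrieval + LLM generation pipeline described earlier.
    return "Thank you for reaching out! Let me check that for you."

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json(force=True)
    suggestion = predict_response(payload.get("query", ""), payload.get("context", ""))
    return jsonify({"response": suggestion})

# Served locally with `flask run`, or in production via e.g. `gunicorn app:app`.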
The API was thoroughly load tested using Locust, an open-source performance/load testing tool for HTTP and other protocols, to ensure it meets real-time latency requirements in a production setting. Finally, the API was integrated with the Agent Workspace UI to deliver predictions to Contact Center agents, assisting customers in real-time. §.§ Evaluation for ReAct and prompting techniques Experiments with ReAct To answer our third RQ, we utilized ReAct Tools to determine when to activate the information retrieval component within the RAG framework, while maintaining the same retrieval, embeddings, and generation strategies. We evaluated two scenarios: "RAG with ReAct" and "RAG without ReAct," with K = 3. As shown in Table <ref>. While ReAct improved the accuracy by 7% and reduced hallucination by 13.5%, it resulted in slower performance <ref>, making it inconvenient in real-time conversation. Prompting Techniques Experiments We evaluated Chain of Verification (CoVe) <cit.> and Chain of Thought Prompting (CoTP) <cit.> to improve factual accuracy and reduce hallucinations. However, both techniques are time-consuming, requiring multiple LLM calls per query, and did not show significant improvements for the Company data. CoVe was 43% less accurate and CoTP was 3% less accurate (see Table <ref>). Therefore, we decided against using these prompting techniques. § CONCLUSION In this study, we demonstrate the practical challenges of implementing a RAG-based Response Prediction System in an industry setting. We evaluated various retrieval and embedding strategies combined with different prompting techniques to identify the best combinations for different use cases. Our evaluations show that retrieving relevant knowledge base articles and generating responses from LLMs can be more contextually relevant and accurate than BERT responses, which choose from the most relevant query-answer pairs. We also highlight that ReAct and advanced prompting techniques may not be practical for industry settings due to latency issues. Overall, our approach indicates that implementing RAG-based LLM response generation for contact centers is feasible and can effectively aid humans, reducing their workload. In the future, we plan to advance our work in three directions. Firstly, we aim to evaluate other LLMs. Secondly, we will test if query rewriting and reformulation can improve retrieval performance. Lastly, we intend to explore advanced RAG approaches to integrate knowledge bases from various sources. § LIMITATIONS Despite ongoing significant research, LLMs remain unpredictable. Though this paper showcases work on grounding the generated responses, LLMs to a certain degree are still capable of generating inaccurate information based on their learnt parametric memory. This work also does not focus on other LLM issues such as context length constraints, prompt injections and quality of Knowledge base data. Addressing other open challenge such as biases along with their ethical consideration are also not considered in the scope of this paper. The paper also does not address or evaluate RAG for multilingual data sources. § ETHICS STATEMENT The data set used for training and validation of LLMs in this paper do not have any unbalanced views or opinions of individuals that might bias the generated response. The LLMs are still capable of generating inaccurate responses based their parametric memory even when relevant contextual information might be provided. 
Filtering toxic responses and prompt injections are also not considered in this evaluation. ACM-Reference-Format § EVALUATION OF RAG APPROACH FOR OPEN SOURCE DATASETS To evaluate the effectiveness of the RAG-based approach, we also conducted a sample study using three open-domain datasets following a similar methodology. §.§ Dataset Statistics We consider three popular open source question answering datasets focusing on a varitey of topics. A random subset of MS MARCO<cit.>, SQuAD <cit.>, TriviaQA <cit.> were considered for the evaluation. Table <ref> shows the brief overview of the considered datasets. §.§ Retrieval and Embedding Performance We observed a specific trend in Recall values in lower k values (1, 3) versus higher k values (5, 10) for SQuAD and TRIVIA. For SQuAD, Vertex AI textembedding-gecko@001 (768) embedding with ScaNN retrieval performed the best at lower k but at higher k, SBERT-all-mpnet-base-v2 (768) with ScaNN performed better. For TRIVIA, SBERTall-mpnet-base-v2 (768) embedding with HNSW KNN retrieval performed the best at lower k but at higher k, SBERT-all-mpnet-base-v2 (768) with ScaNN performed better. For MSMARCO, Vertex AI -textembedding-gecko@001 (768) embedding with ScaNN retrieval combination was a clear winner. Refer Table <ref> for more details. §.§ Evaluation of Generated Responses Table <ref> shows the accuracy, hallucination, and missing rate for the open sources datasets (through automated evaluation as described in Subsection <ref>. As observed in the Retrieval evaluation section, we notice a similar relationship in the accuracy and hallucination as document size increases. From Table <ref>, we observe that the token length of the TriviaQA documents is much larger than MS-MARCO and SQuAD. Similarlt, we observed lower accuracy and higher hallucination rates with TriviaQA when compared to MS-MARCO and SQuAD. Table <ref> and <ref> show performance of Chain of Thought Prompting and Chain of Verification performance on open-source datasets. Accuracy and hallucination rate improvement vary based on the open source dataset. § PROMPT EXAMPLES We utilize LLMs for various tasks in our methodology, including question-answer pair generation from knowledge base articles, response generation, factual accuracy evaluation, and advanced CoTP & CoV prompts. Therefore, we include the specific prompts used for these different tasks. Prompt for answer generation You are a reading comprehension and answer generation expert. Please answer the question from the document provided. If the document is not related to the question, simply reply: "Sorry, I cannot answer this question". Following are the guidelines you need to follow for generating the responses: 1) They should always be professional, positive, friendly, and empathetic. 2) They should not contain words that have a negative connotation (Example: "unfortunately"). 3) They should always be truthful and honest. 4) They should always be STRICTLY less than 30 words. If the generated response if greater than 30 words, rephrase and make it less than 30 words. document: <retrieved_document>, question: <question>, output: Prompt for Hallucination Judgement: You need to check whether the prediction of a question-answering systems to a question is correct. You should make the judgement based on a list of ground truth answers provided to you. You response should be "correct" if the prediction is correct or "incorrect" if the prediction is wrong. 
Your response should be "unsure" where there is a valid ground truth and prediction is "Sorry, I don't know." or if you are not confident if the prediction is correct. Below are the different cases possible: 1) Examples where you should return "correct". Question: What is the customer registration process? Ground Truth: The customer registration process is a way for customers to create an account with them. This allows them to track their purchases, receive personalized offers, and more. The process is simple and can be completed in a few minutes. Prediction: The customer registration process is a process that allows customers to register their information with them. This process allows customers to receive benefits such as discounts, special offers, and personalized shopping experiences. Correctness: correct Question: What happens if my refund is pending? Ground Truth: Sorry, I don't know. Prediction: Sorry, I don't know. Correctness: correct 2) Examples where you should return "incorrect". Question: What do I need to do to get the military discount? Ground Truth: You need to have a smartphone and be registered for the discount. If you don't have a smartphone, you can use discount code RC5. If you are in the pilot 425 stores area, you can key in your phone number. Prediction: The military discount is available to active duty military members, veterans, and their families. The discount is 10 percent off eligible purchases. Correctness: incorrect Question: How do I apply for the consumer card? Ground Truth: Sorry, I don't know. Prediction: You can apply for the consumer card in-store, online or by mail. Correctness: incorrect 3) Examples where you should return "unsure". Question: What is the Return Policy? Ground Truth: The Return Policy is available on the website. You can find it by searching for "Return Policy" or by clicking on the link in the article. Prediction: Sorry, I don't know. Correctness: unsure Provide correctness for the below question, ground truth and prediction: Question: <question> Ground Truth: <ground truth> Prediction: <prediction> Correctness: Prompt for Chain of Prompting: Prompt for quote extraction: You are a reading comprehension and quote extraction expert. Please extract, word-for-word, any quotes relevant to the question. If there are no quotes in this document that seem relevant to the provided question, please say "I can’t find any relevant quotes". For document: <document>, question: <question>, output: Prompt for Generating Baseline Response and Plan Verification (Chain of Verification): Below is a question: <question> Below is the document from which the answer should be generated: <document> You are an subject matter expert working at Contact Centers. Your expertise includes quote extraction, answer generation, and asking verification questions to improve the overall factual accuracy of the answers you provide. Your first goal is to extract, word-for-word, any quotes relevant to the question that could be used to answer the question. If there are no quotes in this document that seem relevant to the provided question, simply return: "I can't find any relevant quotes". Your second goal is to use *solely* the quotes extracted from the first goal and generate a concise and accurate answer (using the below listed guideline) by rephrasing the quotes to answer the question. If the quotes could not be used to answer the question, simply return: "Sorry, I cannot answer this question". 1) They should always be professional, positive, friendly, and empathetic. 
2) They should not contain words that have a negative connotation (Example: "unfortunately"). 3) They should always be truthful and honest. 4) They should always be STRICTLY less than 30 words. If the generated response if greater than 30 words, rephrase and make it less than 30 words. Your third goal is to generate a list of potential areas that might require verification based on the content of the document to increase factual accuracy of the answer. Your response should be in the below format: “` Quotes: <Your Extracted Quotes> Answer: <Your Answer> Potential Areas for Verification: 1) Your Specific point or segment from your answer. 2) Your Another point or segment from your answer. N) Your Nth point or segment from your answer. “` Prompt for Executing Verification Questions and Generating Verified Response (Chain of Verification): Below is a question: <question> Below is the answer: <answer> Below is the document from which the answer was generated: <document> Based on the potential areas for verification: <areas of verification> You are an subject matter expert working at Contact Centers. Your expertise includes improvising answers to questions about the company to increase factual correctness using the factual accuracy verification questions provided to you. Your goal is to check each verification point against the document, provide feedback on any inconsistencies, and then generate a final verified (using the below listed guidelines), concise and accurate answer in strictly less than 30 words that addresses the factual inconsistencies. 1) They should always be professional, positive, friendly, and empathetic. 2) They should not contain words that have a negative connotation (Example: "unfortunately"). 3) They should always be truthful and honest. 4) They should always be STRICTLY less than 30 words. If the generated response if greater than 30 words, rephrase and make it less than 30 words. Your response should be in the below format: “` Feedback: 1) Your Verification for point 1. 2) Your Verification for point 2. N) Your Verification for point N. Final Verified Response: [Your Revised Response] “` Prompt for generating answer from a document and question (open source datasets). You are a question answering bot. Your job is to generate answer to the question using the provided articles. The answers should be derived only from the articles. If the answer is not present in the articles, return the text - NOANSWERFOUND. The answer should be less than 10 words and in a sentence format. Example where answer could not be found in the articles: Question: Which county is Smyrna city in? Document: Georgia is a southeastern U.S. state whose terrain spans coastal beaches, farmland and mountains. Capital city Atlanta is home of the Georgia Aquarium and the Martin Luther King Jr. National Historic Site, dedicated to the African-American leader’s life and times. Return Text: NOANSWERFOUND Example where answer could be found in the articles: Question: Which county is Smyrna city in? Document: Smyrna is a city in Cobb County, Georgia, United States. Cobb County is a county in the U.S. state of Georgia, located in the Atlanta metropolitan area in the north central portion of the state. Return Text: Cobb County of the state of Georgia Provide answer to the below Question/Query using the below Document. Question: <question> Document: <document> Return Text:
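As a concrete illustration (ours, not part of the paper or its released code), the hallucination-judgement prompt above could be wired into the automated accuracy / hallucination / missing-rate tally described in the evaluation sections roughly as follows. The call_llm helper is a placeholder for whatever judge-model endpoint is used, and treating "missing" as predictions that decline to answer is one plausible reading of that metric; the paper's exact bookkeeping may differ.

```python
from dataclasses import dataclass

# Placeholder template: the full hallucination-judgement prompt from the appendix
# goes here, ending with the three per-example fields.
JUDGEEMENT_FIELDS = "Question: {question}\nGround Truth: {ground_truth}\nPrediction: {prediction}\nCorrectness:"
JUDGEMENT_PROMPT = "...judgement prompt text from the appendix...\n" + JUDGEEMENT_FIELDS

NO_ANSWER = "Sorry, I don't know."

@dataclass
class Example:
    question: str
    ground_truth: str
    prediction: str

def call_llm(prompt: str) -> str:
    """Hypothetical judge-model call; expected to return 'correct', 'incorrect', or 'unsure'."""
    raise NotImplementedError

def evaluate(examples):
    counts = {"correct": 0, "incorrect": 0, "unsure": 0}
    missing = 0
    for ex in examples:
        if ex.prediction.strip() == NO_ANSWER:
            missing += 1  # prediction declined to answer
        verdict = call_llm(JUDGEMENT_PROMPT.format(
            question=ex.question,
            ground_truth=ex.ground_truth,
            prediction=ex.prediction,
        )).strip().lower()
        counts[verdict if verdict in counts else "unsure"] += 1
    n = len(examples)
    return {
        "accuracy": counts["correct"] / n,
        "hallucination_rate": counts["incorrect"] / n,
        "missing_rate": missing / n,
    }
```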
http://arxiv.org/abs/2409.02876v1
20240904170017
A refined random matrix model for function field L-functions
[ "Will Sawin" ]
math.NT
[ "math.NT", "math.PR" ]
§ ABSTRACT We propose a refinement of the random matrix model for a certain family of L-functions over 𝔽_q[u], using techniques that we hope will eventually apply to an arbitrary family of L-functions. This consists of a probability distribution on power series in q^-s which combines properties of the characteristic polynomials of Haar-random unitary matrices and random Euler products over 𝔽_q[u]. The support of our distribution is contained in the intersection of the supports of the two original distributions. The expectations of low-degree polynomials in the coefficients of our series approximate the expectations of the same polynomials in the coefficients of random Euler products, while the expectations of high-degree polynomials approximate the expectations of the same polynomials in the coefficients of the characteristic polynomials of random matrices. Furthermore, the expectations of absolute powers of our series approximate the <cit.>-<cit.> prediction for the moments of our family of L-functions. A refined random matrix model for function field L-functions Will Sawin September 9, 2024 ============================================================ § INTRODUCTION We begin by defining two probability distributions: one describing uniformly random multiplicative functions and their associated Euler products, and the other uniform random matrices and their characteristic polynomials. We next construct a probability distribution as a hybrid of both, which describes non-uniform random matrices. We then state our results about this hybrid distribution, and explain how it can be used to model the behavior of a certain family of Dirichlet L-functions. Let 𝔽_q[u]^+ be the set of monic polynomials in one variable over a finite field 𝔽_q. We say a polynomial in 𝔽_q[u]^+ is prime if it is irreducible. Take for each prime 𝔭∈𝔽_q[u]^+ an independent random variable ξ(𝔭) uniformly distributed on the unit circle in ℂ and form the random Euler product L_ξ(s) = ∏_𝔭∈𝔽_q[u]^+ prime 1/(1 - ξ(𝔭) q^{-s deg 𝔭}). We can extend ξ uniquely to a function ξ: 𝔽_q[u]^+ →ℂ that is completely multiplicative in the sense that ξ(1)=1, ξ(fg)=ξ(f)ξ(g) for all f,g∈𝔽_q[u]^+. In other words, ξ is a Steinhaus random multiplicative function. Then we can equally well express L_ξ(s) as a sum ∑_f∈𝔽_q[u]^+ ξ(f) q^{-s deg f}. We have log L_ξ(s) = ∑_𝔭∈𝔽_q[u]^+ prime log 1/(1 - ξ(𝔭) q^{-s deg 𝔭}) = ∑_𝔭∈𝔽_q[u]^+ prime ∑_m=1^∞ ξ(𝔭)^m q^{-s m deg 𝔭}/m. Let X_n,ξ be the coefficient of q^{-ns} in log L_ξ(s), i.e. X_n,ξ = ∑_d | n ∑_𝔭∈𝔽_q[u]^+ prime, deg 𝔭 = d (d/n) ξ(𝔭)^{n/d}. Fix k a natural number. Assume q>2. Let F(x_1,…, x_k) be the probability density function of the tuple of random variables X_1,ξ,…, X_k,ξ. (We assume q>2 since it is not hard to check that this probability density function does not exist for q=2 as long as k≥ 2.) Let 𝒞 be the set of power series in q^-s with constant coefficient 1. The power series L_ξ(s) lies in 𝒞. We endow 𝒞 with a topology by viewing it as a product of copies of ℂ and taking the product topology, and consider the Borel σ-algebra. Let μ_ep be the measure on 𝒞 given by the distribution of the random variable L_ξ. For N a natural number and M in the unitary group U(N), let L_M(s) = det( I - q^{1/2 - s} M). We have L_M ∈ 𝒞. (In fact, L_M is a polynomial in q^-s and not just a power series.) Let μ_rm be the measure on 𝒞 given by the distribution of the random variable L_M for M Haar-random in U(N). In other words, μ_rm is the pushforward of the Haar measure μ_Haar from U(N) to 𝒞.
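As an illustration only (ours, not part of the paper), the two distributions just defined are easy to sample for small q, N and k: the coefficients X_{1,ξ},…,X_{k,ξ} of log L_ξ come from independent uniform angles attached to the prime polynomials of each degree, and L_M comes from a Haar-random unitary matrix. The helper names and the QR-based Haar sampler below are our own choices; only NumPy is assumed.

```python
import numpy as np

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def mobius(n):
    # Moebius function by trial division (adequate for the small degrees used here).
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            result = -result
        p += 1
    return -result if n > 1 else result

def num_prime_polys(q, d):
    # E_d: number of monic irreducible polynomials of degree d over F_q,
    # via Gauss's necklace formula E_d = (1/d) * sum_{e | d} mu(d/e) q^e.
    return sum(mobius(d // e) * q**e for e in divisors(d)) // d

def sample_X(q, k, rng):
    # One draw of (X_{1,xi}, ..., X_{k,xi}): X_n = sum_{d | n} (d/n) * sum over
    # primes p of degree d of xi(p)^{n/d}, with xi(p) uniform on the unit circle.
    theta = {d: 2 * np.pi * rng.random(num_prime_polys(q, d)) for d in range(1, k + 1)}
    X = np.zeros(k, dtype=complex)
    for n in range(1, k + 1):
        for d in divisors(n):
            X[n - 1] += (d / n) * np.sum(np.exp(1j * (n // d) * theta[d]))
    return X

def haar_unitary(N, rng):
    # Haar-random unitary via QR of a complex Ginibre matrix, with the standard
    # phase correction so the distribution is exactly Haar.
    Z = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
    Q, R = np.linalg.qr(Z)
    d = np.diagonal(R)
    return Q * (d / np.abs(d))

def L_M_coefficients(q, N, rng):
    # Coefficients c_0 = 1, c_1, ..., c_N of det(I - q^{1/2-s} M) as a polynomial
    # in t = q^{-s}: the coefficient of t^j is (-1)^j e_j(q^{1/2} * eigenvalues),
    # which is exactly entry j of np.poly applied to the scaled eigenvalues.
    M = haar_unitary(N, rng)
    return np.poly(np.sqrt(q) * np.linalg.eigvals(M))

rng = np.random.default_rng(0)
print(sample_X(q=7, k=3, rng=rng))           # one draw of (X_1, X_2, X_3) for q = 7
print(L_M_coefficients(q=7, N=5, rng=rng))   # coefficients of one draw of L_M
```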
The probability distributions μ_ep and μ_rm can both be used as models for properties of random Dirichlet L-functions over 𝔽_q[u]. We now describe a distribution that combines properties of both and thus, we hope, serves as a better model than either. Fix β∈(1/4, 1/2) and N a natural number and let k = ⌊ N^β⌋. Consider the non-uniform measure on U(N) μ_weighted = γ · F( -q^{1/2} tr(M), …, -q^{k/2} tr(M^k)/k ) / ∏_j=1^k ( e^{-|tr(M^j)|^2/j} · j/(q^j π) ) · μ_Haar where γ is the unique constant that makes μ_weighted a probability measure. We let μ_ch be the distribution of L_M for M a matrix in U(N) distributed according to μ_weighted, i.e. the pushforward of μ_weighted from U(N) to 𝒞. We think of μ_ch as a chimera in the sense of a strange hybrid of the more familiar creatures μ_ep and μ_rm. The measure μ_ch exists to serve as a model of a function field analogue of the Riemann zeta function, and hence combines the Euler product and random matrix perspectives on the zeta function. (We will meet the exact function field analogue which μ_ch models shortly in <ref>.) We prove three fundamental results about μ_ch, describing the support of the measure and its integrals against a general class of test functions. These results explain which properties μ_ch shares with each of the simpler measures μ_ep and μ_rm. Combining these, we will in Theorem <ref> evaluate the integral against a specific test function that models the moments of the zeta function, and the resulting formula will look similar to predictions from <cit.> for the moments of the zeta function. Note that all our measures depend implicitly on the parameters q,N (except that μ_ep is independent of N), so it should not be surprising when error terms in our estimates for them depend on q or N. All implicit constants in big O notation will be independent of q,N except where noted. For N larger than some absolute constant, the support of μ_ch is contained in the intersection of the supports of μ_ep and μ_rm. Natural test functions to use on 𝒞 are polynomials in the coefficients of the power series and their complex conjugates, i.e. we consider elements of the polynomial ring ℂ[c_1,c_2,…, c̄_1, c̄_2,…] as functions on an element 1+∑_d=1^∞ c_d q^{-ds} of 𝒞. We define the (weighted) degree of a polynomial in ℂ[c_1,c_2,…, c̄_1, c̄_2,…] by letting c_d and c̄_d have degree d for all d. We define the L^2 norm of such a polynomial using the random matrix measure, as ‖ϕ‖_2^2 = ∫_𝒞 |ϕ|^2 μ_rm = ∫_U(N) |ϕ(L_M)|^2 μ_Haar. Assume that q>5. Let ϕ∈ℂ[c_1,c_2,…, c̄_1, c̄_2,…] have degree ≤ k. For N sufficiently large in terms of β, we have ∫_𝒞 ϕ μ_ch = ∫_𝒞 ϕ μ_ep + O( e^{-(1/2 - o_N(1)) N^{1-β} log(N^{1-β})} ‖ϕ‖_2 ). Assume that q>11. Let ϕ∈ℂ[c_1,c_2,…, c̄_1, c̄_2,…]. Assume that for all polynomials ψ∈ℂ[c_1,c_2,…, c̄_1, c̄_2,…] of degree ≤ k we have ∫_U(N) ϕ(L_M) \overline{ψ(L_M)} μ_Haar = 0. Then for N sufficiently large in terms of β, we have ∫_𝒞 ϕ μ_ch = O_q( N^{-β(q-2)/4} ‖ϕ‖_2 ). In interpreting <ref>, it is helpful to consider the heuristic that the typical size of ϕ on the support of μ_rm is approximately ‖ϕ‖_2, and therefore, we should expect a trivial bound for the integral ∫_𝒞 ϕ μ_ch to be of size roughly ‖ϕ‖_2. From this point of view we can view the factors of e^{-(1/2 - o_N(1)) N^{1-β} log(N^{1-β})} in <ref> and N^{-β(q-2)/4} in <ref> as the amount of savings over the trivial bound, although obtaining an error term of size O(‖ϕ‖_2) is not completely trivial. §.§ A family of L-functions and their moments We next explain how μ_ch can be used as a model for a certain family of L-functions.
We first consider a family of characters (discussed in more detail in <cit.>). We say a Dirichlet character ν( 𝔽_q[x]/x^N+2)^×→ℂ^× is “primitive" if ν is nontrivial on elements congruent to 1 mod x^N+1, and “even" if ν is trivial on 𝔽_q^×. For a Dirichlet character ν, define a function χ on monic polynomials in 𝔽_q[u] by, for f monic of degree d, χ(f) = ν ( f(x^-1) x^d ) . It is easy to see that χ depends only on the N+2 leading terms of f. Let S_N+1,q be the set of characters χ arising from primitive even Dirichlet characters ν in this way. Because there are q^N+1 even Dirichlet characters of which q^N are imprimitive, S_N+1,q has cardinality q^N+1- q^N. For χ∈ S_N+1,q, form the associated L-function L(s,χ) = ∑_ f ∈𝔽_q[u]^+ χ(f) f^-s. The family of L-functions we consider consists of, for χ∈ S_N+1,q and t∈ [ 0,2π/log q ], the function L(s+it,χ). A random L-function of this family is obtained by choosing χ and t independently uniformly at random. Let us see why these L-functions form a reasonable model for the statistics of the Riemann zeta function. L(s+it,χ) is the Dirichlet series with coefficients f ↦χ(f) f^-it. Viewed as characters of the idele class group of 𝔽_q(u), these comprise all the unitary characters ramified only at ∞ with conductor exponent N+2 at ∞. They are thus comparable to the characters n → n^it of ℕ, which are the unitary characters of the idele class group of ℚ ramified only at ∞. Said in a more elementary fashion, n^it for t≤ T may be accurately approximated given the leading ≈log T digits of n as well as the total number of digits, and all multiplicative functions with this approximation property have the form n^it, while χ(f) f^-it may be computed exactly given the leading N+2 coefficients of f as well as the degree of f, and all multiplicative functions with this property (or even those that may be approximated given this information) are of the form χ(f) f^-it for some χ∈ S_N' +1,q for some N' ≤ N. Thus the statistics of L(s+it, χ) for random χ∈ S_N+1,q, t∈ [0, 2π/log q] are comparable to the statistics of ζ(s+it) for random t∈ [T,2T], i.e. the local statistics of the Riemann zeta function on the critical line. The distribution of the coefficients f ↦χ(f) f^-it converges in the large N limit to the distribution of a random multiplicative function ξ. (Without the average over t, they would converge to random multiplicative function subject to the restriction ξ(u)=1.) On the other hand, by work of Katz <cit.>, in the large q limit the distribution of the L-functions L(s+it,χ) converges to the distribution μ_rm, as long as N ≥ 3. (Technically, we must express our power series in the variable q^1/2-s instead of q^-s for this convergence to make sense, as otherwise μ_rm depends on q.) More precisely, <cit.> proves equidistribution of conjugacy classes in PU(N) whose characteristic polynomials correspond to L(s,χ) against the Haar measure of PU(N), and the additional averaging over t is equivalent to averaging over the fibers of U(N)→ PU(N). Because of this N→∞ and q→∞ limiting behavior, the distribution of the family of L-functions L(s+it,χ) for finite q,N is expected to have some similarity with μ_ep and some with μ_rm. Thus μ_ch, which interpolates between μ_ep and μ_rm, is a plausible model for the distribution of L(s+it,χ). To test this model we must compare to facts known or expected to hold for the family of L-functions. 
We begin that investigation in this paper by comparing to the Conrey-Farmer-Keating-Rubinstein-Snaith predictions <cit.>, adapted to function fields by Andrade and Keating <cit.>, for moments of L-functions. The moment of L-functions we consider is 1/(q^N (q-1)) · (log q/2π) ∑_χ∈ S_N+1,q ∫_0^{2π/log q} ∏_j=1^{𝗋} L(s_j + it, χ) ∏_j=𝗋+1^{𝗋+𝗌} \overline{L(s_j + it, χ)} dt, i.e. the average over this family of the product of special values of L(s+it,χ) with special values of \overline{L(s+it,χ)}. The recipe of <cit.> predicts a main term for this moment of MT_N^{𝗋,𝗌}(s_1,…, s_{𝗋+𝗌}) = ∏_j=𝗋+1^{𝗋+𝗌} q^{-(1/2-s_j) N} ∑_{S ⊆{1,…,𝗋+𝗌}, |S|=𝗌} ∏_{j∈S} q^{(1/2-s_j) N} ∑_{f_1,…, f_{𝗋+𝗌}∈𝔽_q[u]^+, ∏_{j∈S} f_j = ∏_{j∉S} f_j} ∏_{j∈S} |f_j|^{-1+s_j} ∏_{j∉S} |f_j|^{-s_j}. In other words, in its most optimistic form the prediction is 1/(q^N (q-1)) · (log q/2π) ∑_χ∈ S_N+1,q ∫_0^{2π/log q} ∏_j=1^{𝗋} L(s_j+it,χ) ∏_j=𝗋+1^{𝗋+𝗌} \overline{L(s_j+it,χ)} dt = MT_N^{𝗋,𝗌}(s_1,…, s_{𝗋+𝗌}) + O_{q,𝗋,𝗌,ϵ}( (q^N)^{-(1/2-ϵ)} ) and less optimistically one makes the same prediction with a larger error term. Assume that q>11. Let 𝗋 and 𝗌 be nonnegative integers and s_1,…, s_{𝗋+𝗌} be complex numbers with real part 1/2. Then ∫_𝒞 ( ∏_j=1^{𝗋} L(s_j) ∏_j=𝗋+1^{𝗋+𝗌} \overline{L(s_j)} ) μ_ch = MT_N^{𝗋,𝗌}(s_1,…, s_{𝗋+𝗌}) + O_{q,𝗋,𝗌}( N^{(𝗋+𝗌)^2/2 - β(q-2)/4} ). For the integral on the left hand side of (<ref>), L should be understood as the variable of integration, i.e. L(s_j) is the function that takes a power series to its value at s_j, defined on the subset of 𝒞 of power series in q^-s with radius of convergence > q^{-1/2}. The integral is well-defined since μ_ch is supported on the even smaller subset consisting of polynomials in q^-s of degree N. If we believe the CFKRS prediction for the moments, then <ref> implies that μ_ch has the same moments as the family of Dirichlet characters, up to a certain error, and thus gives evidence that μ_ch is a good model for the L-functions of the Dirichlet characters S_N+1,q (with additional averaging in the imaginary axis). Alternately, <ref> could be seen as giving a probabilistic explanation of the CFKRS prediction. In interpreting <ref>, it is helpful to note that the main term MT_N^{𝗋,𝗌}(s_1,…, s_{𝗋+𝗌}) is, in the special case s_1=…= s_{𝗋+𝗌}, a polynomial in N of degree 𝗋𝗌. So the error term in (<ref>) is smaller than the main term by a factor of N^{β(q-2)/4 - (𝗋^2+𝗌^2)/2}. In particular, it is actually smaller if q > 2 + (2𝗋^2 + 2𝗌^2)/β and the number of coefficients of the polynomial that are visible in this estimate (in the sense that their contribution to the main term is greater than the size of the error term) is min( ⌈β(q-2)/4 - (𝗋^2+𝗌^2)/2⌉, 𝗋𝗌+1 ). Thus for q sufficiently large depending on 𝗋,𝗌, all coefficients of the polynomial are visible in this sense. We conjecture that an even stronger statement holds: Let 𝗋 and 𝗌 be nonnegative integers, q>2 a prime power, s_1,…, s_{𝗋+𝗌} complex numbers with real part 1/2, and A a real number. Then ∫_𝒞 ( ∏_j=1^{𝗋} L(s_j) ∏_j=𝗋+1^{𝗋+𝗌} \overline{L(s_j)} ) μ_ch = MT_N^{𝗋,𝗌}(s_1,…, s_{𝗋+𝗌}) + O_{q,𝗋,𝗌,A}( N^{-A} ). If <ref> is true then the measure μ_ch correctly predicts every coefficient of the polynomial CFKRS main term. While this paper considers a particular family of L-functions, we hope that similar methods can be applied to essentially any family of L-functions, at least in the function field context. One just needs to consider a random Euler product whose local factors match the distribution of the local factors of the family of L-functions (e.g.
for the family of quadratic Dirichlet characters with prime modulus, take the Dirichlet series of random ± 1-valued completely multiplicative functions) and, if necessary, depending on the symmetry type, replace U(N) with O(N), SO(N), or Sp(N). Passing from random Euler products with continuous distributions to discrete distributions introduces some difficulties, but most likely not insurmountable ones. §.§ Prior work The oldest probabilistic model for the Riemann zeta function is the random Euler product ∏_p1/1- ξ(p) p^-σ where ξ(p) are independent and identically distributed on the unit circle. The distribution of this random Euler product was proven by Bohr and Jessen <cit.> to give the limiting distribution of ζ(σ+it) for fixed σ >1/2, and Bagchi <cit.> proved a generalization giving the distribution of ζ(s+it) as a holomorphic function on a fixed domain to the right of the critical line. On the critical line, Selberg's central limit theorem shows that logζ(1/2+it)/loglog T has a Gaussian limiting distribution for t ∈ [T,2T] as T→∞. The division by loglog T means that this result is not sensitive to the exact size of zeta, and it similarly gives no information about the zeroes. Probabilistic models for zeta and L-functions that give precise descriptions of the behavior on the critical line must, for now, be conjectural. A crucial starting point is the work of Montgomery <cit.>, who conjectured that the statistics of k-tuples of zeroes of the zeta function, in the limit over larger and larger intervals in the critical line, match the statistics of k-tuples of eigenvalues of a Haar-random matrix, in the limit of larger and larger random matrices, for each k, and provided evidence fo this. Katz and Sarnak <cit.> observed that zeta and L-functions in the function field context arise from characteristic polynomials of unitary matrices of fixed size (depending on the conductor of the L-function) and that in the large q limit these unitary matrices become Haar-random for several natural families of L-functions, so in fact all statistics match statistics of random matrices in the large q limit. Using this, they made conjectures about the distribution of the low-lying zeroes of L-functions. Keating and Snaith <cit.> used a random matrix model to study values of zeta and L-functions on the critical line, and not just their zeroes. In particular, they calculated the moments of the characteristic polynomial of a Haar-random unitary matrix at a point on the unit circle, in terms of the size of the matrix. To obtain a conjectural expression for the moments of the zeta function at a random point on the critical line, one has to substitute log T for the size of the matrix in this formula and then multiply by an arithmetic factor that expresses the contribution of small primes. Thus, if one models the values of the zeta function on a random strip of the critical line by the characteristic polynomial of a random unitary matrix on a strip of the unit circle, one obtains predictions for the moments that are conjecturally correct to within a multiplicative factor. The situation was improved by Gonek, Hughes, and Keating <cit.>, using an Euler-Hadamard product, which expresses the zeta function locally as a product of one factor which roughly consists of the Euler factors at small primes and another factor which roughly consists of the contributions of nearby zeroes to the Hadamard product. (This depends on an auxiliary parameter – the more primes one includes, the fewer zeroes are needed, and vice versa). 
This thereby suggests a model where the first factor is modeled by the Euler product over small primes of a uniformly random multiplicative function and the second factor is modeled by the contributions of zeroes near a given point to the characteristic polynomial of a random unitary matrix, with the two factors treated as independent. (They were later proved to be asymptotically independent, conditional on the Riemann hypothesis, by Heap <cit.>.) For the moments, the <cit.> model recovers the same prediction as <cit.> of the product of a random matrix factor and an arithmetic factor. A related but distinct approach to the moments of zeta and L-functions is the work of Conrey, Farmer, Keating, Rubinstein, and Snaith <cit.>. This work did not directly predict the moments using a characteristic polynomial. Instead the authors found a particular formula for the (shifted) moments of the characteristic polynomial of a random unitary matrix and conjectured a formally similar formula for the (shifted) moments of the Riemann zeta function or another L-function, roughly speaking by inserting suitable arithmetic factors at an intermediate step in the calculation instead of at the end. However, the intermediate stages of their recipe lack a clear number-theoretic or probabilistic interpretation. In particular, it is not even obvious that their predictions for expectations of powers of absolute values of the zeta function are positive – this has to be checked separately. It is not clear that there exists any random holomorphic function whose moments are given by the <cit.> predictions for moments of zeta, though conjecturally a random shift of the Riemann zeta function would be an example. Another approach to predicting moments of L-functions is by multiple Dirichlet series, initiated in the work of Diaconu, Goldfeld, and Hoffstein <cit.>. The highest-order terms in these predictions, made around the same time, agree with <cit.>, but the multiple Dirichlet series can be used to predict additional lower-order terms for certain families of L-functions, as in the work of Diaconu and Twiss <cit.>. Again these predictions are not probabilistic in nature, instead based on assuming that meromorphic functions defined by certain complicated multivariable sums have the greatest amount of analytic continuation allowed by their symmetry properties. The predictions of <cit.> for moments of zeta on the critical line are polynomials in log T. The leading term of these moments agrees with the leading term originally predicted in <cit.> and probabilistically modeled by <cit.>. Thus, the prediction of <cit.> agrees with what is now believed to be correct to within a factor of 1 + O(1/log T). Gonek, Hughes, and Keating <cit.> raised the question of whether their model could be extended to predict all the terms of the <cit.> polynomial. The model of <cit.> has been extended to the function field setting by Bui and Florea <cit.>, and then applied to further families of L-functions by Andrade and Shamesaldeen <cit.> and Yiasemides <cit.>. Again, in this setting the probabilistic model correctly predicts the leading term of the asymptotic that is conjectured by other methods and known in several cases, but fails to predict the lower-order terms. In this case, the predictions are polynomials in the degree of the conductor (i.e. in N) instead of log T. Theorem <ref> shows that, for q sufficiently large, a probabilistic model based on matrices that are random but not Haar-random improves on this by a power of N. 
The idea of integrating against Haar measure times a weight function to calculate the average of a polynomial function on the coefficients function field L-functions appeared earlier in work of Meisner <cit.>, but this work was not probabilistic in nature: the weight function, unlike a probability density function, is not positive (and not even real) and the average must be normalized by a factor depending on the highest weight of the irreducible representations used to express the polynomial (see <ref>). The idea of restricting the support of Haar measure to obtain more accurate predictions was used in the case of elliptic curve L-functions by Dueñez, Huynh, Keating, Miller, and Snaith <cit.>. Their modification of Haar measure was designed to account for the influence of formulas for the critical value of the L-function that force that value, suitably normalized to be an integer and in particular prevent it from being very close to zero but nonzero. They accordingly considered a measure on random matrices where the critical special value of the characteristic polynomial is prevented from taking small nonzero values. Our adjustment of the probability measure, on the other hand, is designed to account for the influences of small primes, it also involves changing the density and not just the support. It would be interesting to check the compatibility of our paper with some of these prior works in more detail. First, it should be possible to define an “Euler-Hadamard product" for L-functions in the support of μ_ch. One could then ask how close the distribution of the Euler factors and Hadamard factors is to a product of independent multiplicative function and random matrix distributions at a point (perhaps using an optimal transport distance for probability distributions). If these distributions are close, then not only would μ_ch and <cit.> give similar predictions of the moments, but they would give these predictions for similar reasons. It would also be enlightening to prove an analogue of <ref> for ratios of L-functions rather than products, using the work of Conrey, Farmer, and Zirnbauer <cit.> to obtain a classical prediction to compare with. More ambitiously, if an analogue of our construction was made for the family of quadratic Dirichlet characters, and an analogue of <ref> was proven with an error term of size O ( q^ - δ N ) for δ > 1/4, then one could look for a probabilistic explanation of the secondary terms in moments of quadratic Dirichlet L-functions predicted by multiple Dirichlet series. (If the error term were larger than this, it would dominate the predicted secondary terms, so including them would be meaningless.) It seems unlikely that they could appear for a direct analogue of μ_ch, since these secondary terms ultimately arise from the ability to apply a Poisson summation formula in the modulus of the Dirichlet character and recover a similar sum, and the model of Dirichlet characters based on random multiplicative functions used to construct μ_ch wouldn't reflect this Poisson symmetry, but one could very optimistically hope for a natural modification of μ_ch that predicts these terms. §.§ Motivation, variants, and the number field case The operation of multiplying the measure of one probability distribution by the density of another, or, equivalently, multiplying the density of two probability distributions may at first seem strange. However, it has a natural interpretation. 
Given two different probability distributions μ_1, μ_2 on ℝ^n with continuous probability density functions, we can consider the distribution of a pair of random variables X_1, X_2 independently distributed according to μ_1 and μ_2, and then condition on the event that the distance between X_1 and X_2 is at most δ. In the limit as δ→ 0, this conditional distribution will converge to the distribution of two identical random variables, each distributed according to a measure with probability density proportional to the product of the densities of μ_1 and μ_2. Thus, multiplying the probability densities arising from random matrices and random Euler products can be seen as, first, generating pairs of random matrices and random Euler products and, second, throwing away those pairs where the characteristic polynomial of the random matrix is not close to the Euler product. This is a plausible way to generate random functions that arise both as characteristic polynomials of matrices and as Euler products (as the Dirichlet L-functions L(s,χ) do). However, it cannot be quite right as a model for Dirichlet L-functions, giving the wrong answers in the q→∞ and N→∞ limits. This can be seen most clearly if we let both q and N head to ∞, so the distributions of L_ξ and L_M both converge to the exponentials of random power series with independent complex Gaussian coefficients. Multiplying the probability densities corresponds to squaring the Gaussian probability density function, producing a Gaussian with half the variance. However, to obtain a distribution that interpolates between μ_ep and μ_rm, we would like a distribution that converges to the original Gaussian in the q,N →∞ limit. We fix this by dividing by the same Gaussian. From this heuristic, the right choice of k is not clear. It seems likely that the measure μ_ch does not depend much on the parameter k. The specific value of k chosen makes the analysis as easy as possible, but similar results should be true in a broader range of k. In fact, if for each value of k we let F_k(x_1,…, x_k) be the probability density function of X_1,ξ,…, X_k,ξ and define μ_weighted to be proportional to lim_k→∞ ( F_k( -q^{1/2} tr(M), …, -q^{k/2} tr(M^k)/k ) / ∏_j=1^k ( e^{-|tr(M^j)|^2/j} · j/(q^j π) ) ) μ_Haar then the same results should be true. This definition is more canonical as it lacks the parameter k, and fits naturally with an infinite-dimensional version of the heuristic for multiplying two probability density functions. However, proving the same results for this measure introduces additional analytical difficulties, starting with proving that the limit as k goes to ∞ exists, which we do not pursue here. An alternate approach to constructing a measure satisfying <ref> and <ref> is to first check that the pairing ⟨ϕ, ψ⟩ = ∫_𝒞 ϕ \overline{ψ} μ_rm is nondegenerate on polynomials in ℂ[c_1,c_2,…, c̄_1, c̄_2,…] of degree ≤ k, and using this, verify that there exists a unique ψ∈ℂ[c_1,c_2,…, c̄_1, c̄_2,…] of degree ≤ k such that ∫_U(N) ϕ \overline{ψ} μ_rm = ∫_𝒞 ϕ μ_ep for all ϕ∈ℂ[c_1,c_2,…, c̄_1, c̄_2,…] of degree ≤ k, note that ψ is real-valued, and then consider the signed measure ψ μ_rm, for which <ref> and <ref> hold with vanishing error term. The main difficulty with this approach is that the signed measure may not actually be a measure, as the function ψ may be negative on the support of μ_rm. It is easy to check that ψ is nonnegative as long as q is sufficiently large with respect to N, but for q fixed, ψ is negative even for small values of N.
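Returning for a moment to the conditioning heuristic at the start of this subsection: it is easy to see numerically in one dimension. The following sketch (ours; a toy example with two Gaussian densities standing in for the infinite-dimensional distributions discussed here) draws independent pairs, keeps the pairs that are within δ of each other, and compares the histogram of the kept values with the normalized product of the two densities.

```python
import numpy as np

rng = np.random.default_rng(1)
n, delta = 2_000_000, 0.02

# Independent draws from two toy densities f1 = N(0, 1) and f2 = N(1, 0.5^2).
x1 = rng.normal(0.0, 1.0, n)
x2 = rng.normal(1.0, 0.5, n)

kept = x1[np.abs(x1 - x2) < delta]      # condition on the pair being delta-close

# Normalized product density f1 * f2 on a grid.
grid = np.linspace(-3.0, 4.0, 701)
f1 = np.exp(-grid**2 / 2.0) / np.sqrt(2.0 * np.pi)
f2 = np.exp(-(grid - 1.0)**2 / 0.5) / np.sqrt(0.5 * np.pi)
prod = f1 * f2
prod /= prod.sum() * (grid[1] - grid[0])

hist, edges = np.histogram(kept, bins=70, range=(-3.0, 4.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
# As delta -> 0 (and n -> infinity) the two curves agree; the sup-norm difference
# printed here should already be small for this choice of delta.
print(np.max(np.abs(hist - np.interp(centers, grid, prod))))
```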
If we view ψμ_rm as an approximation of the true distribution of L(s,χ), the problem is clear: since L(s,χ) is supported on power series with first coefficient c_1 satisfying c_1≤ q, while μ_rm is supported on power series with c_1≤ q^1/2 N, as long as q< N^2, the true measure vanishes on a large region where μ_rm is supported, and so ψ is approximating a function which is zero on that region. Since polynomials cannot be zero on a region without being identically zero, polynomial approximations of functions zero on a region will tend to oscillate between positive and negative values on that region. Thus the signed measure ψμ_rm is rarely a measure. However, this argument suggests that, given that <ref> shows that μ_ch has more reasonable support, it may be possible to multiply μ_ch by a low-degree polynomial to improve the error term in <ref> without compromising positivity. Whether the strategy of this paper can be applied in the number field context is not yet clear. The fundamental difficulty seems to be that a number field L-function contains much more information than a function field L-function, so it is harder for the supports of distributions arising from random Euler products and random matrix models to intersect. Let us make this more precise. Consider the problem of defining a random holomorphic function in a variable s whose properties approximate the behavior of ζ(s+it) for t random near a given value T. The basic steps are to define an analogue of the random matrix model, define a random Euler product model, and combine them. Since the function n^it behaves like a random multiplicative function with absolute value 1 and values at the primes independently uniformly distributed on the unit circle, we can again use the Dirichlet series of random completely multiplicative functions as our Euler products. Whatever our random matrix model looks like, the holomorphic functions it produces will probably have functional equations (since the characteristic polynomials of unitary and Hermitian matrices each satisfy a functional equation, and we are trying to approximate the zeta function, which satisfies a functional equation). One natural functional equation to choose is f(1-s) =ϵ (T/2π)^1/2-s f(s) since this is consistent with a holomorphic function having zeroes on the critical line distributed with frequency (1/2π) log (T/2π) – in other words, the frequency of zeroes of zeta near T. However, it is easy to see that there are no multiplicative functions whose Dirichlet series satisfy that functional equation, as it forces the coefficients of n^-s to vanish for n> T/2π. So the intersection of the support of the distribution of any random matrix characteristic polynomials having one of these functional equations with the support of the distribution of Dirichlet series of random multiplicative functions will simply vanish, and any attempt to multiply the densities of these distributions will produce a zero distribution. A similar conclusion can be drawn if we keep the usual functional equation of the Riemann zeta function. In that case, it follows from Hamburger's theorem <cit.> that the intersection of the supports will contain only the actual shifts of the original Riemann zeta function, and so searching for a distribution supported on the intersection, or multiplying the densities, will simply produce the distribution of random shifts of the zeta function. Of course, it is pointless to model these shifts using the shifts themselves. 
Observing this problem immediately suggests the rough form of the solution. Rather than looking for a distribution supported on the intersection of the supports of the distributions, we should look for a distribution supported on points which are close to the support of both distributions. In other words, in the heuristic for the product of two probability densities as a δ→ 0 limit, we should avoid taking the limit and instead fix a value of δ. Of course, the nature of this depends on exactly how we define the distance between two holomorphic functions. A natural choice is to integrate the square of the absolute value of the difference between their logarithms against some measure on a subset of the complex plane where they are both defined, but we have a great deal of choice on the measures. In fact, rather than conditioning the joint distribution of the characteristic polynomial of a random matrix and random Euler product on the event that the two holomorphic functions are close, it seems better to weight the joint probability distribution by the exponential of minus the square of the distance, or another quadratic form in the two functions, before normalizing by a constant to have the total mass one. This weighting sends Gaussians to Gaussians, and should be normalized so that inputting Gaussian approximations to the two distributions outputs a joint distribution whose marginals are the original Gaussians, coupled so that with high probability the two holomorphic functions take similar values at points near 0 to the right of the critical line. (We cannot compare them on the critical line itself since the random Euler products admit a natural boundary there). But it is not clear if there is a single natural coupling to work with. In physics, one can consider the eigenvalues of random matrices as being a statistical mechanics model of particles, either on the line or the unit circle, that repel each other and thus have a lesser probability of being close together than independent random points. Specifically, the probability density function should be the exponential of a negative constant times the energy of the system, so the terms in the Weyl integration formula involving the difference of two eigenvalues correspond to a contribution to the energy depending on the distance between two points. We can view this type of exponentially-weighted joint distribution as a statistical mechanics model of points on the critical line together with values ξ(p) on the unit circle for each prime, where, in addition to interacting with each other, the points interact with ξ(p). For the zeta function itself, the interaction is infinitely strong, to the extent that the primes determine the zeroes and the zeroes determine the primes. By choosing an interaction whose strength is not too large and not too small, we may be able to construct a model of the Riemann zeta function whose properties are amenable to computation. Regardless, this approach produces a joint distribution of two holomorphic functions, one the characteristic polynomial of a matrix and the other an Euler product, but to model the Riemann zeta function we only want one. The simplest approach is to throw out the Euler product, since its natural boundary on the critical line makes it inappropriate for modeling the zeta function on the line, but it may be possible to combine them in a subtler way. The exact random matrix model to use is of course a question. 
A good choice might be to take the characteristic polynomial of a random unitary matrix and plug in e^log T/2π/log N(1/2-s ). This produces a holomorphic function on the whole complex plane with zeroes on the critical line with the correct zero density. Since it is periodic in the imaginary axis, it can't be a good model for the large-scale behavior of the Riemann zeta function, but as long as N is somewhat larger than log T it may be a good model for the local behavior. (The models of <cit.> require setting N very close to log T/2π, but coupling with the Euler product will damp oscillations with frequencies less than that of the leading term 2^-s, allowing us to take larger values of N without getting obviously wrong predictions, and taking larger values of N seems necessary to accurately approximate the contribution of the 2^-s term.) However, we could also consider the eigenvalues of random Hermitian matrices of fixed size, or point processes on the whole critical line. (The determinantal point process associated to the sine kernel, which is the large N limit of random matrices, is not useful for this, as its “characteristic polynomial" is not a well-defined holomorphic function, basically because the distribution of the characteristic polynomial of a random matrix, normalized to keep the frequency of zeroes constant, doesn't converge in the N →∞ limit, but another point process might work.) One can optimistically hope that there is some reasonably natural way of making the sequence of choices discussed above for which an analogue of <ref> or, ideally, <ref> can be proven. Proving this should be analytically more difficult than <ref>, since the two distributions we are trying to combine are further from each other and further from the Gaussian model and thus showing that the combination has the desired properties of each one should be more difficult, so proving the strongest possible form of <ref> might be a stepping stone to handling the number field case. §.§ Geometric and probabilistic approaches to L-functions The probabilistic model μ_ch is compatible with the geometric and representation-theoretic approach to the moments of L-functions suggested by the same author in <cit.>. Specifically, from the geometric perspective the most natural test functions to integrate against are the characters of irreducible representations of U(N), which may be expressed as polynomials in the coefficients of the characteristic polynomial L_M using the fundamental theorem of symmetric polynomials, or, more explicitly, the second Jacobi-Trudi identity for Schur polynomials <cit.>. Conversely, any polynomial in the coefficients of L_M can be expressed as a linear combination of characters of irreducible representations. We will check in <ref> that the polynomials of degree ≤ k are exactly the linear combinations of characters whose highest weights, expressed as an N-tuple of integers, have absolute value summing to a number ≤ k. Thus, by orthogonality of characters, irreducible representations whose highest weights have absolute value sums >k are orthogonal to all polynomials of degree ≤ k. Hence <ref> applies to the characters of irreducible representations with small highest weight, showing that the averages of these functions over μ_ch match their averages over μ_ep, while <ref> applies to the characters of irreducible representations with large highest weight, showing that the averages of these functions over μ_ch cancel. 
By Weil's Riemann hypothesis, every L-function L(s+it,χ) can be expressed as L_M for M∈ U(N) unique up to conjugacy, so we can interpret characters of irreducible representations of U(N) as functions of L(s+it,χ). For irreducible representations of small highest weight, it is not hard to prove that the averages of their characters over L(s+it,χ) match the averages of the same characters over μ_ep. <cit.> showed that the CFKRS predictions for moments of L-functions could be explained by cancellation in the averages of characters of irreducible representations with large highest weight over the family of L-functions, which could in turn be explained by (hypothetical) vanishing of certain cohomology groups whose traces of Frobenius compute this average. <ref> shows that the cancellation of averages of characters of irreducible representations with large highest weight could also be explained by the probabilistic model μ_ch. So this cancellation could have both probabilistic and geometric explanations. (However, note that the amount of cancellation that one can prove in the probabilistic model is different from the amount one can prove under geometric hypotheses – at least currently, it is larger for some representations and smaller for others. Thus it is not possible to say the geometric hypothesis implies the probabilistic model, or vice versa.) Note that the definition of "small highest weight" used in the two contexts is not identical (the definition here is stricter). This is because the average of the character of an irreducible representation over μ_ep decreases with the highest weight of the representation, at least for representations relevant to calculating moments of fixed degree. Thus, as long as the highest weight is not too small, it is possible for the average against another measure both to cancel and to approximate the average against μ_ep, simply because the average against μ_ep is itself small. So whether we state that these averages cancel or approximate μ_ep is a matter of convenience, and how we sort representations into those two buckets can vary with the context. The only restriction is that, as the error term in our desired estimates shrinks, fewer representations are flexible in this way. For the case of L-functions of quadratic Dirichlet characters, analysis analogous to <cit.> was conducted by Bergström, Diaconu, Petersen, and Westerland <cit.>. In the quadratic Dirichlet character setting, the L-function is naturally a characteristic polynomial of a conjugacy class in USp(2N), so one considers characters of irreducible representations of USp(2N). They derive the CFKRS predictions, or, equivalently in this setting, the highest-order term of the multiple Dirichlet series predictions, from the assumption of cancellation in averages of characters of irreducible representations of USp(2N) of large highest weight. They prove a homological stability result which is a topological enhancement of the fact that averages of characters of irreducible representations of small highest weight over L(s+it,χ) (where χ is now a quadratic Dirichlet character) match the averages of the same characters against a suitable analogue of μ_ep. Since the stable cohomology vanishes in low degrees for representations of large highest weight, the vanishing of the cohomology groups whose traces of Frobenius compute the average, and hence cancellation in the average, follows (for q sufficiently large) from a certain uniform homological stability statement, later proven by Miller, Patzt, Petersen, and Randal-Williams <cit.>.
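As a numerical aside before the proof sketch (ours, not part of the paper): the arguments below repeatedly compare the traces tr(M), tr(M^2), … of a Haar-random M ∈ U(N) with independent complex Gaussians, via the result of Johansson and Lambert cited there. The classical moment computation behind this comparison, E|tr(M^j)|^2 = min(j, N) for Haar-random M ∈ U(N) (Diaconis–Shahshahani), is easy to check by simulation.

```python
import numpy as np

def haar_unitary(N, rng):
    # Haar-random unitary via QR of a complex Ginibre matrix with phase correction.
    Z = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
    Q, R = np.linalg.qr(Z)
    d = np.diagonal(R)
    return Q * (d / np.abs(d))

rng = np.random.default_rng(2)
N, trials, jmax = 6, 20000, 8
second_moments = np.zeros(jmax)
for _ in range(trials):
    eig = np.linalg.eigvals(haar_unitary(N, rng))
    for j in range(1, jmax + 1):
        # tr(M^j) is the sum of the j-th powers of the eigenvalues.
        second_moments[j - 1] += abs(np.sum(eig**j)) ** 2 / trials

# Should be close to min(j, N) = 1, 2, 3, 4, 5, 6, 6, 6 for N = 6.
print(np.round(second_moments, 2))
```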
§.§ Proof sketch We now sketch the proofs of the main theorems. Recall that in the definition of μ_weighted we take the Haar measure and multiply by the probability density function of X_1,ξ,…, X_k,ξ divided by a Gaussian probability density function. A key observation is that, if we instead took a suitable Gaussian measure and multiplied by the probability density function of X_1,ξ,…, X_k,ξ divided by a Gaussian probability density function, the density of the Gaussian would cancel and we would obtain the distribution of X_1,ξ,…, X_k,ξ. For this modified measure, the expectation of a low-degree polynomial matches its expectation against μ_ep simply because X_1,ξ,…, X_k,ξ are the coefficients of the random power series log L_ξ distributed according to μ_ep. (The low degree assumption is necessary here because high-degree polynomials may involve coefficients of the power series beyond the first k and thus can't be expressed as functions of X_1,ξ,…, X_k,ξ.) So proving <ref> is a matter of proving that the expectation of low-degree polynomials is not changed much by the fact that we used the Haar measure instead of the Gaussian measure to construct μ_weighted and μ_ch. It thus crucially requires a bound for the difference, in some sense, between the Haar measure and the Gaussian measure. We rely on the work of Johansson and Lambert <cit.>, who proved a bound for the total variation distance between these distributions. Multiplying a measure by a continuous function can increase the total variation distance proportionally to the sup-norm of the function, so applying this result in our setting requires bounding the sup-norm of the multiplier F( -q^{1/2} tr(M), …, -q^{k/2} tr(M^k)/k ) / ∏_j=1^k ( e^{-|tr(M^j)|^2/j} · j/(q^j π) ). This requires pointwise bounds for the probability density function F(x_1,…,x_k) which decrease rapidly as x_1,…, x_k grow. To obtain pointwise bounds for F(x_1,…,x_k), we first bound the integrals of F(x_1,…,x_k) against a complex exponential function of x_1,…,x_k. Taking the Fourier transform, i.e. integrating against an imaginary exponential function of x_1,…,x_k would be sufficient if we only wanted a bound for F(x_1,…,x_k) which is uniform in x_1,…, x_k, while integrating against a real exponential function would be sufficient if we wanted a bound for the integral of F(x_1,…, x_k) over a large region which decreases rapidly as the region becomes more distant from 0, but since we are interested in bounds that are both pointwise and rapidly decreasing we require exponentials of complex-valued functions. The advantage of studying these exponential integrals is that the definition of F as the probability density function of a sum of independent random variables immediately gives a factorization of the exponential integral as a product of simpler integrals, in this case over the unit circle. Thus, a large part of our proof involves proving elementary bounds for these exponential integrals over the unit circle, and then multiplying them together to obtain bounds for integrals of F. For <ref>, on the other hand, the statement becomes trivial if we replace the multiplier (<ref>) in the definition of μ_weighted and μ_ch by any polynomial of degree ≤ k in the coefficients of a power series. Thus proving <ref> is a matter of finding a suitable approximation of F( x_1,…, x_k ) / ∏_j=1^k ( e^{-j |x_j|^2/q^j} · j/(q^j π) ) by a low-degree polynomial in x_1,…, x_k, x̄_1,…, x̄_k and bounding the error of this approximation.
We choose an approximation in the L^2 sense, with the L^2 norms calculated against the Gaussian measure. (We again use the results of <cit.> to compare the Gaussian measure to the Haar measure). The optimal L^2 approximation against the Gaussian measure can be obtained using the orthogonal polynomials for the Gaussian measure, the Hermite polynomials: Since they form an orthogonal basis, any L^2 function can be written as a linear combination of them, and then one truncates the linear combination by taking only the low-degree polynomial terms, leaving the coefficients of the high-degree polynomials as an error. Bounding the error of this approximation is equivalent to bounding the coefficients of Hermite polynomials of high degree in the Hermite polynomial expansion of (<ref>). These coefficients are naturally expressed as contour integrals of exponential integrals of F(x_1,…,x_k) and we can again bound them by bounding the exponential integrals. Finally, <ref> is obtained by expressing ∏_i=1^ L(s_i ) ∏_i=+1^ + L ( s_i ) as a polynomial in the coefficients of L and their complex conjugates, breaking that polynomial into low-degree terms and high-degree terms (using irreducible representations, as in <ref>) and applying <ref> to the low-degree terms and <ref> to the high-degree terms. §.§ Acknowledgments The author was supported by NSF grant DMS-2101491 and a Sloan Research Fellowship while working on this project. I would like to thank Amol Aggorwal, David Farmer, Jon Keating, and Andrei Okounkov for helpful conversations. § RANDOM EULER PRODUCTS The variables, X_1,ξ,…, X_n,ξ are valued in ℂ, but it will be convenient for us to treat them as valued in ℝ^2 by viewing complex numbers as real vectors in the usual way, taking their real and imaginary parts as coordinates. This is because we will mainly be interested in their dot products with other vectors in ℝ^2, which can be expressed in terms of complex numbers but less directly. To that end, we give a formula for X_n,ξ as a vector in ℝ^2. For each prime polynomial 𝔭, let θ_𝔭 be the argument of ξ(𝔭), so that (<ref>) gives X_n,ξ = ∑_d | n∑_𝔭∈𝔽_q[u]^+ 𝔭 =d primed/ne^ i n/dθ_𝔭 . Now for θ∈ℝ let (θ) = [ cosθ; sinθ ] be e^ iθ∈ℂ viewed as a vector in ℝ^2, so that we have X_n,ξ = ∑_d | n∑_𝔭∈𝔽_q[u]^+ 𝔭 =d primed/n( n/dθ_𝔭 ) . Our first goal will be to upper bound F(x_1,…, x_k). Two basic tools to do this are the Fourier transform of F(x_1,…,x_k), i.e. the characteristic function of X_1,ξ,…, X_k,ξ, represented by the expectation 𝔼 [ e^ i ∑_n=1^k X_n,ξ· w_n ] for vectors w_1,…, w_n ∈ℝ^2, and the Laplace transform of F(x_1,…,x_k), i.e. the moment generating function of X_1,ξ,…, X_k,ξ, represented by the expectation 𝔼 [ e^∑_n=1^k X_n,ξ· v_n ] for vectors v_1,…, v_n ∈ℝ^2. We will in fact need a hybrid of these, also referred to as the Laplace transform, expressed as 𝔼 [ e^∑_n=1^k X_n,ξ· v_n + i ∑_n=1^k X_n,ξ· w_n ]. We could equivalently express X_n,ξ· v_n + i ( X_n,ξ· w_n) as z_n,1 X_n,ξ + z_n,2X_n,ξ for a certain pair of complex numbers z_n,1, z_n,2 depending real-linearly on v_n and w_n, but this would be unwieldy for the calculations we want to do, which focus on the size of these expectations, as we want to separate out the parameters v_n which affect the size of the exponential from the parameters w_n which affect only its argument. Thus it is better for our purposes to work with dot products in ℝ^2. Let E_d be the number of prime polynomials of degree d in 𝔽_q[u]^+. Let v_1,…, v_k and w_1,…, w_k be vectors in ℝ^2. 
Then 𝔼 [ e^∑_n=1^k X_n,ξ· v_n + i ∑_n=1^k X_n,ξ· w_n ] = ∏_d=1^k ( ∫_0^2π e^∑_m=1^⌊k/d⌋ ((m θ) · v_md + i ( (m θ)· w_md ))/m dθ/2π)^E_d . From (<ref>) we have ∑_n=1^k X_n,ξ· v_n= ∑_n=1^k ∑_d | n∑_𝔭∈𝔽_q[u]^+ 𝔭 =d primed/n( n/dθ_𝔭 ) · v_n which writing m= n/d and switching the order of summation is ∑_d=1^k ∑_𝔭∈𝔽_q[u]^+ 𝔭 =d prime∑_m=1^⌊k/d⌋( m θ_𝔭 ) · v_md/m so e^∑_n=1^k X_n,ξ· v_n = ∏_d=1^k ∏_𝔭∈𝔽_q[u]^+ 𝔭 =d prime e^∑_m=1^⌊k/d⌋(m θ_𝔭) · v_md /m. An analogous identity holds with w_n. Since θ_𝔭 are independent for different 𝔭 and uniformly distributed in [0,2π], we have 𝔼 [ e^∑_n=1^k X_n,ξ· v_n + i ∑_n=1^k X_n,ξ· w_n ] =𝔼[ ∏_d=1^k ∏_𝔭∈𝔽_q[u]^+ 𝔭 =d prime e^∑_m=1^⌊k/d⌋ ((m θ_𝔭) · v_md + i ( (m θ_𝔭)· w_md ))/m ] = ∏_d=1^k ∏_𝔭∈𝔽_q[u]^+ 𝔭 =d prime𝔼[ e^∑_m=1^⌊k/d⌋ ((m θ_𝔭) · v_md + i ( (m θ_𝔭) · w_md ))/m ] = ∏_d=1^k ∏_𝔭∈𝔽_q[u]^+ 𝔭 =d prime( ∫_0^2π e^∑_m=1^⌊k/d⌋ ((m θ) · v_md + i ( (m θ)· w_md ))/m dθ/2𝔭) . In view of <ref>, we will begin by estimating ∫_0^2π e^∑_m=1^⌊k/d⌋ ((m θ) · v_md + i ( (m θ)· w_md ))/mdθ/2π, starting with the case where w_n=0 for all n before handling the general case. This will require different techniques to provide useful estimates with v_1,w_1 in different ranges. §.§ Real exponential integrals This subsection is devoted to estimating ∫_0^2π e^∑_m=1^∞(m θ) · v_m/m dθ/2π. We have expanded the finite sum to an infinite sum because our estimates need to be uniform in the length of the sum and bounds uniform in the length of the sum are equivalent to bounds in the infinite sum case but the infinite sum statements are slightly more elegant and general. A simple guess for the average of this sum, based on a second-order Taylor expansion, is e^∑_m=1^∞v_m^2/4m^2. Our goal will be to prove a bound of roughly this shape, though our final bound will be worse in some ranges and better in others. Our rough strategy to estimate ∫_0^2π e^∑_m=1^∞(m θ) · v_m/m dθ/2π is to use an argument controlling the error of a Taylor series when the variables are small and the trivial bound e^∑_m=1^∞(m θ) · v_m/m ≤ e^∑_m=1^∞v_m/m when the variables are large. The argument needs to be more complex since we have infinitely many variables, some of which may be small and sum of which may be large. We first handle the case where v_m=0 for all m>1 in Lemma <ref>, where we obtain a savings over the simple guess e^v_1^2/4 using a finite Taylor series in the small range, the trivial bound in the large range, and a different power series argument in an intermediate range. This savings will be very convenient throughout the argument as by shrinking it slightly we can absorb unwanted terms from other estimates. In <ref> we make a more complicated, multivariable Taylor series estimate. This is expressed in terms of a ratio of integrals to allow us to preserve the savings. Finally in <ref> we combine these estimates and use a version of the trivial bound that allows us to ignore an individual v_m if it is too large. There exists δ_1>0 such that for all v_1∈ℝ^2 we have log∫_0^π e^(θ) · v_1dθ/2π≤v_1^2/4 - δ_1 min ( v_1^4, v_1^2) . It is equivalent to show that the function v_1^2/4 - log∫_0^2π e^(θ) · v_1d θ/2π/min ( v_1^4, v_1^2) has a lower bound δ_1>0 for all v_1∈ℝ^2∖{0}. We first check that (<ref>) is positive on all of ℝ^2 ∖{0}. To do this, we use the power series log∫_0^2π e^(θ) · v_1d θ/2π = log∑_d=0^∞v_1^2d/ (d!)^2 2^2d . and note that (<ref>) is strictly less than v_1^2/4 = log e^v_1^2/4 = log∑_d=0^∞v_1^2d/ (d!) 2^2d. It follows immediately that (<ref>) is positive. 
Next, using the first couple terms of the Taylor series for logarithm, we compute the Taylor series of (<ref>) as v_1^2/4 - v_1^4/64 + … and conclude that (<ref>) is equal to v_1^2/4 - v_1^4/64 + O ( v_1^6/6) for v_1 small. Plugging this into (<ref>), we see that (<ref>) converges to 1/64 as v_1 goes to ∞. Finally, e^(θ) · v_1≤ e^v_1 so that log∫_0^2π e^(θ) · v_1d θ/2π≤v_1 and thus for v_1≥ 1, (<ref>) is at least v_1^2/4 - v_1/v_1^2 = 1/4 - 1/v_1 and thus converges to 1/4 as v_1 goes to ∞. Thus (<ref>), a continuous function on ℝ^2 ∖{0}, is positive everywhere and bounded away from 0 in both a neighborhood of 0 and a neighborhood of ∞. By compactness of ℝ^2 ∪{∞}, it follows that (<ref>) has a lower bound δ_1>0. To apply Taylor's theorem in the case of infinitely many variables, we will need some trick to relate the power series of a function in many variables to the power series of a function in fewer variables. This may be accomplished using the following lemma. Let g_1 and g_2 be power series in one or more variables, with constant coefficients 1, such that g_1 has nonnegative coefficients and the coefficient of each monomial in g_1 is greater than or equal to the coefficient of the corresponding monomial in g_2. Then -log(2-g_1) has nonnegative coefficients and the coefficient of each monomial in -log(2-g_1) is greater than or equal to the coefficient of the corresponding monomial in log g_2. We have log (g_2) = log (1 + (g_2-1)) = (g_2-1) - (g_2-1)^2/2 + (g_2-1)^3/3 - (g_2-1)^4/4 + …. Writing each term in g_2-1 as a sum of coefficients times monomials, and bounding each coefficient by the corresponding coefficient in g_1-1, we see that the coefficient of any monomial in this expression is at most the coefficient of the same monomial in (g_1-1) + (g_1-1)^2/2 + (g_1-1)^3/3 + (g_1-1)^4/4 + … = - log ( 1- (g_1-1)) =-log(2-g_1). Let × denote multiplication of complex numbers viewed as a multiplication operation for vectors in ℝ^2. Let (v_m)_m=1^∞ be a sequence of vectors in ℝ^2. For any n let u_n = ∑_m=n^∞v_m/m. Then as long as u_1< 1/2 we have log∫_0^2π e^∑_m=1^∞(m θ) · v_m/m dθ/2π - log∫_0^π e^(θ) · v_1dθ/2π = ∑_m=2^∞v_m^2/4m^2 + (v_1 × v_1) · v_2/16 + O( u_1 u_2 u_3 + u_1^3 u_2) . We use [x_1^n_1 x_2^n_2] to denote extracting the coefficient of x_1^n_1 x_2^n_2 in a power series in x_1,x_2. We begin with the observation e^ x_1 (θ) · v_1 + x_2 ∑_m=2^∞(m θ) · v_m/m [x_1^n_1 x_2^n_2] ≤v_1^n_1u_2^n_2 / (n_1! n_2!) = e^v_1 x_1+ u_2 x_2 [x_1^n_1 x_2^n_2] which implies by linearity ∫_0^2π e^ x_1 (θ) · v_1 + x_2 ∑_m=2^∞(m θ) · v_m/m dθ/2π [x_1^n_1 x_2^n_2] ≤ e^v_1 x_1+ u_2 x_2 [x_1^n_1 x_2^n_2] . From Lemma <ref> we obtain log∫_0^2π e^ x_1 (θ) · v_1 + x_2 ∑_m=2^∞(m θ) · v_m/m dθ/2π [x_1^n_1 x_2^n_2] ≤ - log (2 - e^v_1 x_1+ u_2 x_2 ) [x_1^n_1 x_2^n_2]. We have (evaluating a power series at x_1=1,x_2=1) log∫_0^2π e^∑_m=1^∞(m θ) · v_m/m dθ/2π = ∑_n_1=0^∞∑_n_2=0^∞log∫_0^2π e^ x_1 (θ) · v_1 + x_2 ∑_m=2^∞(m θ) · v_m/m dθ/2π [x_1^n_1 x_2^n_2] and (evaluating a power series at x_1=1,x_2=0) log∫_0^π e^(θ) · v_1dθ/2π = ∑_n_1=0^∞log∫_0^2π e^ x_1 (θ) · v_1 + x_2 ∑_m=2^∞(m θ) · v_m/m dθ/2π [x_1^n_1 x_2^0] so that log∫_0^2π e^∑_m=1^∞(m θ) · v_m/m dθ/2π - log∫_0^π e^(θ) · v_1dθ/2π = ∑_n_1=0^∞∑_n_2=1^∞ log∫_0^2π e^ x_1 (θ) · v_1 + x_2 ∑_m=2^∞(m θ) · v_m/m dθ/2π [x_1^n_1 x_2^n_2] . We split the sum in (<ref>) up into terms with n_1 + n_2 ≤ 3, which we evaluate, and the terms with n_1 + n_2 ≥ 4, which we bound. 
For any b_1 ∈ [0, v_1], we observe that (b_1,u_2) lies in the compact set { (a_1,a_2)∈ℝ^2 | a_1≥ 0, a_2 ≥ 0, a_1+a_2 ≤1/2} where the function ∂/∂ a_2log ( 2- e^ a_1 +a_2) is smooth. Thus by Taylor's theorem we have ∑_n_1, n_2 ≥ 0 n_1+n_2 ≥ 3∂/∂ a_2log (2 - e^a_1 + a_2 ) [a_1^n_1 a_2^n_2] v_1^n_1 b_2^n_2 = O( ( v_1 +b_2 )^3) = O( (v_1 + u_2)^3) = O(u_1^3) since the left-hand side of (<ref>) is the error in the second-order Taylor approximation to the function ∂/∂ a_2log (2 - e^a_1 + a_2 ) at the point v_1, b_2 and the constant in Taylor's theorem is uniform by compactness. By (<ref>) and (<ref>) we have ∑_n_1≥ 0, n_2 > 0 n_1+n_2 ≥ 4log∫_0^2π e^ x_1 (θ) · v_1 + x_2 ∑_m=2^∞(m θ) · v_m/m dθ/2π [x_1^n_1 x_2^n_2] ≤∑_n_1≥ 0, n_2 > 0 n_1+n_2 ≥ 4log∫_0^2π e^ x_1 (θ) · v_1 + x_2 ∑_m=2^∞(m θ) · v_m/m dθ/2π [x_1^n_1 x_2^n_2] ≤ - ∑_n_1≥ 0, n_2 > 0 n_1+n_2 ≥ 4log (2 - e^v_1 x_1+ u_2 x_2 ) [x_1^n_1 x_2^n_2]= - ∑_n_1≥ 0, n_2 > 0 n_1+n_2 ≥ 4log (2 - e^a_1 + a_2 ) [a_1^n_1 a_2^n_2] v_1^n_1 u_2^n_2 = - ∑_n_1≥ 0, n_2 > 0 n_1+n_2 ≥ 4log (2 - e^a_1 + a_2 ) [a_1^n_1 a_2^n_2] ∫_0^u_2 n_2 v_1^n_1 b_2^n_2-1 db_2 = - ∫_0^u_2 n_2 ∑_n_1≥ 0, n_2 > 0 n_1+n_2 ≥ 4log (2 - e^a_1 + a_2 ) [a_1^n_1 a_2^n_2] v_1^n_1 b_2^n_2-1 db_2 = - ∫_0^u_2∑_n_1, n_2 ≥ 0 n_1+n_2 ≥ 3∂/∂ a_2log (2 - e^a_1 + a_2 ) [a_1^n_1 a_2^n_2] v_1^n_1 b_2^n_2 db_2 = ∫_0^u_2 O( u_1^3) db_1 = O( u_1^3 u_2) . We evaluate the terms with n_1 + n_2 ≤ 3 by Taylor expanding each term and integrating to obtain ∫_0^2π e^ x_1 (θ) · v_1 + x_2 ∑_m=2^∞(m θ) · v_m/m dθ/2π = 1 + v_1^2/4 x_1^2 + ∑_m=2^∞v_m^2/4m^2 x_2^2 + (v_1 × v_1) · v_2/16 x_1^2 x_2 + ∑_m=2^∞ (v_1 × v_m) · v_m+1/ 8 m (m+1) x_1 x_2^2+ ∑_m_1,m_2=2^∞ (v_m_1× v_m_2) · v_m_1+m_2/ 16 m_1 m_2(m_1+m_2) x_2^3 + … Taking logarithms of both sides, we obtain log∫_0^2π e^ x_1 (θ) · v_1 + x_2 ∑_m=2^∞(m θ) · v_m/m dθ/2π = v_1^2/4 x_1^2 + ∑_m=2^∞v_m^2/4m^2 x_2^2 + (v_1 × v_1) · v_2/16 x_1^2 x_2 + ∑_m=2^∞ (v_1 × v_m) · v_m+1/ 8 m (m+1)x_1 x_2^2 + ∑_m_1,m_2=2^∞ (v_m_1× v_m_2) · v_m_1+m_2/ 16 m_1 m_2(m_1+m_2) x_2^3 + … and ignoring the terms with exponent of x_2 zero and then substituting x_1 ,x_2=1, we obtain ∑_n_1≥ 0, n_2 > 0 n_1+n_2 ≤ 3log∫_0^2π e^ x_1 (θ) · v_1 + x_2 ∑_m=2^∞(m θ) · v_m/m dθ/2π [x_1^n_1 x_2^n_2] = ∑_m=2^∞v_m^2/4m^2 + (v_1 × v_1) · v_2/16 + ∑_m=2^∞ (v_1 × v_m) · v_m+1/ 4 m (m+1) + ∑_m_1,m_2=2^∞ (v_m_1× v_m_2) · v_m_1+m_2/ 8 m_1 m_2(m_1+m_2). This gives (<ref>) once we check that ∑_m=2^∞ (v_1 × v_m) · v_m+1/ 4 m (m+1) + ∑_m_1,m_2=2^∞ (v_m_1× v_m_2) · v_m_1+m_2/ 8 m_1 m_2(m_1+m_2) = O( u_1 u_2 u_3) which is clear since ∑_m=2^∞ (v_1 × v_m) · v_m+1/ 4 m (m+1) + ∑_m_1,m_2=2^∞ (v_m_1× v_m_2) · v_m_1+m_2/ 8 m_1 m_2(m_1+m_2) ≤∑_m=2^∞v_1v_mv_m+1/ 4 m (m+1) + ∑_m_1,m_2=2^∞v_m_1v_m_2v_m_1+m_2/ 8 m_1 m_2(m_1+m_2) ≤∑_m_1=1^∞∑_m_2=2^∞v_m_1v_m_2v_m_1+m_2/ 4 m_1 m_2(m_1+m_2) ≤∑_m_1=1^∞∑_m_2=2^∞∑_m_3=3^∞v_m_1v_m_2v_m_3/4m_1m_2m_3 = u_1 u_2 u_3/4 . There exists δ_2>0 and C_1≥ 1 such that, for (v_m)_m=1^∞ a sequence of vectors in ℝ^2 with ∑_m=1^∞v_m^2<∞, we have log∫_0^2π e^∑_m=1^∞(m θ) · v_m/m dθ/2π≤v_1^2/4 - δ_2 min ( v_1^4, v_1^2) + ∑_m=2^∞min ( C_1 v_m^2, v_m/m ). A key fact we will use multiple times is that replacing v_m by 0 decreases the left hand side of (<ref>) by at most v_m/m, since it shrinks the integrand e^∑_m=1^∞(m θ) · v_m/m at each point by a factor of at most e^v_m/m and thus, because the integrand is positive, shrinks the integral by a factor of at most e^v_m/m. For m≥ 2, if v_m≥1/mC_1 then min ( C_1 v_m^2, v_m/m ) = v_m/m. 
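For a concrete, purely illustrative instance of the expansion just established, take v_1 = (a,0), v_2 = (b,0) and all other v_m = 0, so that the cross term (v_1 × v_1) · v_2/16 becomes a^2 b/16 and the logarithm of the integral should be approximately a^2/4 + b^2/16 + a^2 b/16 for small a and b. The sketch below assumes NumPy; the grid size and the three sample points are arbitrary.

```python
import numpy as np

# Illustration with v_1 = (a, 0), v_2 = (b, 0), all other v_m = 0:
#   log int_0^{2pi} exp(a cos t + (b/2) cos 2t) dt/(2pi) ~ a^2/4 + b^2/16 + a^2 b/16.
t = np.linspace(0.0, 2*np.pi, 1 << 16, endpoint=False)
for a, b in [(0.1, 0.1), (0.05, 0.2), (0.2, 0.05)]:
    lhs = np.log(np.mean(np.exp(a*np.cos(t) + 0.5*b*np.cos(2*t))))
    rhs = a**2/4 + b**2/16 + a**2*b/16
    print(a, b, lhs - rhs)           # the differences are of fourth order in (a, b)
```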
If we replace v_m by 0, the left side of (<ref>) decreases by at most v_m/m while the right side decreases by exactly v_m/m so the bound after making the change implies the bound before. Repeating this for all m, we may assume v_m< 1/m C_1 for all m≥ 2. Take δ_1 as in <ref> and then set δ_2 =δ_1/2 so that by <ref> we have log∫_0^π e^(θ) · v_1dθ/2π≤v_1^2/4 - 2δ_2 min ( v_1^4, v_1^2) so if we let Disc (v_1,v_2,…) = log∫_0^2π e^∑_m=1^∞(m θ) · v_m/m dθ/2π - log∫_0^π e^(θ) · v_1dθ/2π then it suffices to check for C_1 sufficiently large that Disc (v_1,v_2,…) ≤δ_2 min ( v_1^4, v_1^2) + ∑_m=2^∞min ( C_1 v_m^2, v_m/m ). We have Disc (v_1,v_2,…) ≤∑_m=2^∞v_m/m ≤1/C_1 (ζ(2)-1) by the key fact and (<ref>). Thus we may assume that δ_2 min ( v_1^4, v_1^2) ≤1/C_1 (ζ(2)-1). because otherwise (<ref>) holds automatically. Combining (<ref>) and (<ref>) gives ∑_m=1^∞v_m/m = v_1 + ∑_m=2^∞v_m/m≤max( ( δ_2^-11/C_1 (ζ(2)-1))^1/4, ( δ_2^-11/C (ζ(2)-1))^1/4) + 1/C_1 (ζ(2)-1) and choosing C_1 sufficiently large, the right hand side of (<ref>) is <1/2, and thus we may apply <ref>, obtaining Disc (v_1,v_2,…) = ∑_m=2^∞v_m^2/4m^2 + (v_1 × v_1) · v_2/16 + O( u_1 u_2 u_3 + u_1^3 u_2). We now simplify (<ref>) by bounding the terms appearing on the right hand side. To do this, we we use the facts clear from the definitions that u_1 = v_1 + u_2 and u_3≤ u_2 as well as the assumptions (<ref>) and (<ref>) that imply u_2 and v_1, respectively, are bounded by constants. These facts imply (v_1 × v_1) · v_2/16≪v_1^2 v_2≤v_1^2 u_2 O(u_1u_2 u_3) ≪ u_1 u_2 u_3 ≤ u_1 u_2^2 = (v_1+ u_2) u_2^2 ≪ u_2^2 O(u_1^3 u_2) ≪ u_1^3 u_2 = (v_1 + u_2)^3 u_2 = v_1^3 u_2 +3 v_1^2 u_2^2 + 3 v_1u_2^3 + u_2^4 ≪v_1^2 u_2 + u_2^2 giving Disc (v_1,v_2,…) = ∑_m=2^∞v_m^2/4m^2 + O ( v_1^2 u_2 + u_2^2) . Applying the completing-the-square-bound v_1^2 u_2 ≤ϵv_1^4 + 1/4ϵ u_2^2 for some sufficiently small ϵ, we obtain Disc (v_1,v_2,…) ≤∑_m=2^∞v_m^2/4m^2 + δ_2 v_1^4+ O(u_2^2) . Applying the Cauchy-Schwarz bound u_2^2 ≤ (ζ(2)-1) ∑_m=2^∞v_m^2 we obtain Disc (v_1,v_2,…) ≤δ_2 v_1^4 + O ( ∑_m=2^∞v_m^2) . Finally (<ref>) implies (<ref>) because (<ref>) gives v_1≤ 1 (for C_1 sufficiently large) so δ_2 v_1^4 = δ_2 min ( v_1^4, v_1^2) and (<ref>) gives ∑_m=2^∞min ( C_1v_m^2, v_m/m )= ∑_m=2^∞ C_1 v_m^2 which dominates any expression of the form O ( ∑_m=2^∞v_m^2) as long as C_1 is sufficiently large. §.§ Complex exponential integrals This subsection is devoted to bounding the integral ∫_0^2π e^∑_m=1^∞(m θ) · v_m/m + i∑_m=1^∞(m θ) · w_m/m. We have three different estimates that roughly handle three different ranges for w_1. When w_1 is small we will apply Lemma <ref> which is proven using a Taylor series argument. When w_1 is large we will apply Lemma <ref> which is proven using a stationary phase argument. When w_1 is intermediate we will apply Lemma <ref> which is proven using a more elementary argument involving the range of values attained by the function ∑_m=1^∞(m θ) · w_m/m. We then multiply the bounds together to obtain bounds for the expectation 𝔼 [ e^∑_n=1^k X_n,ξ· v_n + i ∑_n=1^k X_n,ξ· w_n ], with the final bounds relevant to the remainder of the argument contained in Corollary <ref>. Fix δ_3< 1/64. Let (v_m)_m=1^∞ and (w_m)_m=1^∞ be sequences of vectors in ℝ^2. If v_1 + w_1 + ∑_m=2^∞√(v_m^2 + w_m^2)/m < 1/2 then log∫_0^2π e^∑_m=1^∞(m θ) · v_m/m + i∑_m=1^∞(m θ) · w_m/m dθ/2π ≤ v_1^2/4 - w_1^2/4 - δ_3 v_1^4 + O_δ_3 ( v_1^6 + w_1^3 + ∑_m=2^∞ (v_m^2+ w_m^2 )). Let W = ∑_m=2^∞√(v_m^2 + w_m^2)/m. 
For any θ, e^λ(m θ) · v_1 + iλ^2 (θ) · w_1+ λ^3 ∑_m=1^∞(m θ) · v_m/m + iλ^3 ∑_m=1^∞(m θ) · w_m/m is a power series in λ whose coefficient of λ^n is bounded by the coefficient of λ^n in e^λv_1 + λ^2 w_1 + λ^3 W. Hence the coefficient of λ^n in ∫_0^2π e^λ(m θ) · v_1 + iλ^2 (θ) · w_1+ λ^3 ∑_m=1^∞(m θ) · v_m/m + iλ^3 ∑_m=1^∞(m θ) · w_m/m dθ/2π is also bounded by the coefficient of λ^n in (<ref>). By Lemma <ref>, the coefficient of λ^n in log e^λ(m θ) · v_1 + iλ^2 (θ) · w_1+ λ^3 ∑_m=1^∞(m θ) · v_m/m + iλ^3 ∑_m=1^∞(m θ) · w_m/m dθ/2π is bounded by the coefficient of λ^n in the power series - log (2 - e^λv_1+ λ^2 w_1 + λ^3 W). We now observe that Taylor's theorem applied to the function - log ( 2 -e^ x +y^2+z^3 ) implies that the sum of all terms of degree ≥ 6 appearing in the power series for - log ( 2 -e^ x +y^2+z^3 ) is O ( x^6+y^6+z^6) uniformly for x,y,z≥ 0 such that x+y^2+z^3 ≤ 1/2. Indeed such x,y,z lie in a compact region where the function is smooth so all derivatives are bounded. Plugging in x=λv_1, y = λw_1^1/2, z = λ W^1/3 and then setting λ=1, we obtain the sum of the coefficients of λ^n in (<ref>) for n from 6 to ∞. Hence the sum of the coefficient of λ^n in (<ref>) for n from 6 to ∞ is O ( v_1^6 + w_1^3 + W^2) = O ( v_1^6 + w_1^3 + ∑_m=2^∞ (v_m^2+ w_m^2 )). On the other hand, the coefficients of λ, λ^2, λ^3,λ^4, λ^5 in (<ref>) are, respectively, 0 v_1^2/4 i v_1 · w_1/2 v_1^4/64 - w_1^2/4 (v_1 × v_1) · v_2/ 16+ i (v_1 × v_1) · w_2/16 + i v_1^2 v_1 · w_1/16 . Taking logarithms, the coefficients of λ, λ^2, λ^3, λ^4, λ^5 in (<ref>) are identical except the coefficient of λ^4 is - v_1^4/64 - w_1^2/4 . Plugging λ=1 into (<ref>) and taking the real part, this gives log∫_0^2π e^∑_m=1^∞(m θ) · v_m/m + i∑_m=1^∞(m θ) · w_m/m dθ/2π = v_1^2/4 - w_1^2/4 - v_1^4/64 + (v_1 × v_1) · v_2/16 +O (v_1^6 + w_1^3 + ∑_m=2^∞ (v_m^2+ w_m^2 )). which is exactly the desired bound (<ref>) except for the term (v_1 × v_1) · v_2/16, which can be controlled by observing that (v_1 × v_1) · v_2/16≤v_1^2 v_2/16≤( 1/64-δ_3) v_1^4 + 1/1024/1/64 - δ_3 v_2^2=( 1/64-δ_3) v_1^4 + O_δ_3( v_2^2) . Let (v_m)_m=1^∞ and (w_m)_m=1^∞ be sequences of vectors in ℝ^2. If ∑_m=1^∞v_m/m<∞ and ∑_m=1^∞w_m^2 < ∞ then ∫_0^2π e^∑_m=1^∞(m θ) · v_m/m + i∑_m=1^∞(m θ) · w_m/m dθ/2π/∫_0^2π e^∑_m=1^∞(m θ) · v_m/m dθ/2π ≤ 1- e^ -2 ∑_m=1^∞v_m/mmin( ∑_m=1^∞w_m^2/π^2 m^2, 1/ 2∑_m=1^∞w_m^2). Let ϕ be the argument of ∫_0^2π e^∑_m=1^∞(m θ) · v_m/m + i∑_m=1^∞(m θ) · w_m/m dθ/2π. We have ∫_0^2π e^∑_m=1^∞(m θ) · v_m/m + i∑_m=1^∞(m θ) · w_m/m dθ/2π = ∫_0^2π e^ -i ϕ + ∑_m=1^∞(m θ) · v_m/m + i∑_m=1^∞(m θ) · w_m/m dθ/2π = ∫_0^2π e^∑_m=1^∞(mθ) · v_m/mcos( -ϕ + ∑_m=1^∞(mθ) · w_m /m ) . Using the bound e^∑_m=1^∞(m θ) · v_m/m∈ [e^ - ∑_m=1^∞v_m/ m, e^∑_m=1^∞v_m /m ] valid for all θ, we obtain in particular ∫_0^2π e^∑_m=1^∞(m θ) · v_m/m dθ/2π≤ e^∑_m=1^∞v_m /m so that it suffices to prove ∫_0^2π e^∑_m=1^∞(mθ) · v_m/mcos ( -ϕ + ∑_m=1^∞(mθ) · w_m /m ) ≤∫_0^2π e^∑_m=1^∞(m θ) · v_m/m dθ/2π - e^ - ∑_m=1^∞v_m/mmin( ∑_m=1^∞w_m^2/π^2 m^2, 1/ 2∑_m=1^∞w_m^2) We split into two cases depending on whether e^i ∑_m=1^∞(m θ) · w_m/m = - e^i ϕ for some θ or not. First, suppose e^i ∑_m=1^∞(m θ_0) · w_m/m = - e^i ϕ for some θ_0. Let x =∑_m=1^∞(m θ_0) · w_m/m and let I be the longest interval around θ_0 on which ∑_m=1^∞(m θ) · w_m/m ∈ [x-π/2, x+π/2]. 
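In the simplest case of the small-w_1 estimate established above, with only v_1 = (r,0) and w_1 = (s,0) nonzero, the prediction is that the logarithm of the absolute value of the integral is approximately r^2/4 - s^2/4, up to the stated higher-order corrections. A brief numerical sketch (NumPy assumed; the sample points are arbitrary):

```python
import numpy as np

# Only v_1 = (r, 0) and w_1 = (s, 0) nonzero:
#   log | int_0^{2pi} exp((r + i s) cos t) dt/(2pi) |  ~  r^2/4 - s^2/4.
t = np.linspace(0.0, 2*np.pi, 1 << 16, endpoint=False)
for r, s in [(0.1, 0.1), (0.05, 0.2), (0.2, 0.3)]:
    val = np.mean(np.exp((r + 1j*s) * np.cos(t)))
    print(r, s, np.log(abs(val)) - (r**2 - s**2)/4)   # small, of higher order in r and s
```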
Then on the boundary of I, we have ∑_m=1^∞(m θ) · w_m/m = x ±π/2 while at θ_0 ∈ I it takes the value x so π≤∫_I d/dθ∑_m=1^∞(m θ) · w_m/m d θ = ∫_I ∑_m=1^∞(m θ+ π/2) · w_m d θ ≤√(I∫_0^2π∑_m=1^∞(m θ+ π/2) · w_m ^2 d θ) = √(Iπ∑_m=1^∞w_m^2) so I≥π/∑_m=1^∞w_m^2 but for each θ∈ I we have cos ( -ϕ + ∑_m=1^∞(mθ) · w_m /m )≤ 0 so ∫_0^2π e^∑_m=1^∞(mθ) · v_m/mcos ( -ϕ + ∑_m=1^∞(mθ) · w_m /m ) ≤∫_0^2π e^∑_m=1^∞(m θ) · v_m/m dθ/2π - ∫_Ie^∑_m=1^∞(m θ) · v_m/m dθ/2π ≤∫_0^2π e^∑_m=1^∞(m θ) · v_m/m dθ/2π - I/2π e^ - ∑_m=1^∞v_m/m ≤∫_0^2π e^∑_m=1^∞(m θ) · v_m/m dθ/2π - 1 /2 ∑_m=1^∞w_m^2 e^ - ∑_m=1^∞v_m/m giving (<ref>). Next suppose that e^∑_m=1^∞(m θ) · w_m/m≠ - e^i ϕ for any θ, or in other words ∑_m=1^∞(m θ) · w_m/m ≠ϕ + π n for any odd integer n. After shifting ϕ by an even integer multiple of 2π, we may assume ∑_m=1^∞(m θ) · w_m/m ∈ (ϕ-π, ϕ+π) for all θ. The simple trigonometric inequality cosθ≤ 1 - 2 θ^2/π^2 for θ∈ (-π,π) gives ∫_0^2π e^∑_m=1^∞(mθ) · v_m/mcos ( -ϕ + ∑_m=1^∞(mθ) · w_m /m ) ≤∫_0^2 π e^∑_m=1^∞(m θ) · v_m/m(1 - 2/π^2( ϕ - ∑_m=1^∞(m θ) · w_m/m )^2 ) d θ/2π = ∫_0^2π e^∑_m=1^∞(m θ) · v_m/mdθ/2π - 2/π^2∫_0^2π e^∑_m=1^∞(m θ) · v_m/m( ϕ - ∑_m=1^∞(m θ) · w_m/m )^2 dθ/2π ≤∫_0^2π e^∑_m=1^∞(m θ) · v_m/mdθ/2π - e^ -∑_m=1^∞v_m/m∫_0^2π( ϕ - ∑_m=1^∞(m θ) · w_m/m )^2 dθ/2π = ∫_0^2π e^∑_m=1^∞(m θ) · v_m/mdθ/2π - e^ -∑_m=1^∞v_m/m1/π^2( ∑_m=1^∞w_m^2/m^2 + ϕ^2) ≤∫_0^2π e^∑_m=1^∞(m θ) · v_m/mdθ/2π - e^ -∑_m=1^∞v_m/m1/π^2∑_m=1^∞w_m^2/m^2 giving (<ref>). Let (v_m)_m=1^∞ and (w_m)_m=1^∞ be sequences of vectors in ℝ^2. If ∑_m=1^∞v_m^2 + ∑_m=1^∞w_m^2 < ∞ then ∫_0^2π e^∑_m=1^∞(m θ) · v_m/m + i∑_m=1^∞(m θ) · w_m/m dθ/2π/sup_θ∈ [0,2π] e^∑_m=1^∞(m θ) · v_m/m ≤ 1/√(w_1)( 4/π + 1 + √(∑_m=1^∞v_m^2 + ∑_m=2^∞w_m^2)/√(π)w_1^1/4). By shifting θ we may assume w_1 = (w_1,0). Let ℱ( θ) = e^∑_m=1^∞(m θ) · v_m/m + i∑_m=2^∞(m θ) · w_m/m so that e^∑_m=1^∞(m θ) · v_m/m + i∑_m=1^∞(m θ) · w_m/m = e^ i w_1cosθℱ(θ) and thus the integral to bound in (<ref>) is ∫_0^2π e^ i w_1cosθℱ(θ) dθ/2π. We now informally explain the strategy of proof: handle this integral by the method of stationary phase. Ignoring ℱ, one standard form of the stationary phase method is to change variables from θ to cosθ, apply integration by parts, and then reverse the change of variables, giving an integral where the derivative w_1sinθ appears in the denominator. Before doing this, we remove from the integral and handle separately the region where sinθ is so small that this gives a worse bound. Since our desired bound (<ref>) shrinks as w_1 grows but grows in the other variables, we should think of w_1 as large and the other variables as small. In other words, we think of ℱ(θ) as varying more slowly than e^ i w_1cosθ. Because of this, we do not need to modify our change of variables strategy to account for F (which would give a better bound but with a considerably more complicated formula accounting for multiple potential critical points of ∑_m=1^∞(mθ) · w_m/m). We first handle the integral from 0 to π. For each δ with 0< δ < π/2 we have | ∫_0^π e^ i w_1cosθℱ(θ) dθ/2π - ∫_δ^π-δ e^ i w_1cosθℱ(θ) dθ/2π| ≤2 δ/2πℱ_∞ and the change of variables c = cosθ followed by integration by parts gives ∫_δ^π-δ e^ i w_1cosθℱ(θ) dθ/2π = ∫_- cosδ^cosδ e^ i w_1 c ℱ(arccos c ) 1/ 2 π√(1-c^2) dc = - ∫_- cosδ^cosδ e^ i w_1c/ i w_1d/dc ( ℱ(arccos c) 1/ 2 π√(1-c^2)) dc + e^ i w_1c/ i w_1ℱ(arccos c) 1/ 2 π√(1-c^2) ]_-cosδ^cosδ We have e^ i w_1c/ i w_1ℱ(arccos c) 1/ 2 π√(1-c^2) ]_-cosδ^cosδ≤2/ 2 πw_1 sinδℱ_∞ since we may bound the value at -cosδ and cosδ separately. 
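In the pure-oscillation special case of the lemma just stated, with all v_m = 0 and all w_m = 0 for m ≥ 2, the integral is the Bessel function J_0 evaluated at the length of w_1, and the asserted reciprocal-square-root decay is visible numerically. The sketch below assumes SciPy; the range of y is an arbitrary sample.

```python
import numpy as np
from scipy.special import j0

# Pure oscillation: int_0^{2pi} exp(i y cos t) dt/(2pi) = J_0(y).  The quantity
# sqrt(y) * |J_0(y)| stays below sqrt(2/pi) ~ 0.80, well inside the constant
# 4/pi + 1 ~ 2.27 appearing in the statement of the lemma.
y = np.linspace(1.0, 500.0, 20000)
print(np.max(np.sqrt(y) * np.abs(j0(y))))
```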
Let 𝒢(θ) = dlogℱ(θ)/dθ = ∑_m=1^∞(mθ + π/2) · v_m + i∑_m=2^∞(m θ + π/2)· w_m. We have d/dc ( ℱ( arccos c) 1/ 2 π√(1-c^2)) = ℱ(arccos c) 1/ 2 π√(1-c^2)( c/1-c^2 -𝒢(arccos c) /√(1-c^2)) Respectively applying (<ref>) and reversing the change of variables c= cosθ, then applying Cauchy-Schwarz, and finally using the integral ∫_δ^π -δ1/sin^2 θ dθ = 2 cosδ/sinδ gives ∫_- cosδ^cosδ e^ i w_1c/ i w_1d/dc ( ℱ( arccos c) 1/ 2 π√(1-c^2)) dc = ∫_δ^π-δ e^ i w_1cosθ/ i w_1ℱ (θ) ( cosθ/sin^2 θ + 𝒢(θ) /sinθ) dθ/2π ≤ ℱ_∞/w_1∫_δ^π - δcosθ/sin^2 θdθ/2π + ℱ_∞/w_1√(∫_δ^π -δ1/sin^2 θdθ/2π·∫_δ^π-δ𝒢(θ)^2 dθ/2π) ≤ cosδℱ_∞/w_1πsinδ + ℱ_∞/w_1√(cosδ/πsinδ∫_δ^π-δ|𝒢(θ) |^2 dθ/2π). Plugging (<ref>) and (<ref>) into (<ref>), and then applying (<ref>), we obtain | ∫_0^π e^ i w_1cosθℱ(θ) dθ/2π| ≤δℱ_∞/π + ℱ_∞/πw_1sinδ + cosδ/w_1πsinδ + ℱ_∞/w_1√(cosδ/πsinδ∫_δ^π-δ𝒢(θ)^2 dθ/2π) . A symmetrical argument gives | ∫_π^2π e^ i w_1cosθℱ(θ) dθ/2π| ≤δℱ_∞/π + ℱ_∞/πw_1sinδ + cosδℱ_∞/w_1πsinδ + 1/w_1√(cosδ/πsinδ∫_π + δ^2π-δ𝒢(θ)^2 dθ/2π) . We have ∫_δ^π-δ𝒢(θ)^2dθ/2π + ∫_π+ δ^2π-δ𝒢(θ)^2dθ/2π≤∫_0^2π𝒢(θ)^2 dθ/2π =1/2∑_m=1^∞v_m^2 + 1/2∑_m=2^∞w_m^2 with the last equality using the definition (<ref>) of 𝒢. Combining (<ref>), (<ref>), and (<ref>) gives | ∫_0^2π e^ i w_1cosθℱ(θ) dθ/2π| /ℱ_∞ ≤2 δ/π + 2/πw_1sinδ + 2 cosδ/w_1πsinδ + 1/w_1√(cosδ/πsinδ ( ∑_m=1^∞v_m^2+ ∑_m=2^∞w_m^2)) ≤2 δ/π + 1/w_1δ + 2 /w_1πδ + 1/w_1√( 1/πδ( ∑_m=1^∞v_m^2+ ∑_m=2^∞w_m^2)). Taking δ= 1/√(w_1) we obtain (<ref>). | We have ∑_m=1^∞v_m/m≤ζ(2)/4 + ∑_m=1^∞min ( v_m^2, v_m/m) . For each m we have v_m/m ≤v_m^2 + 1/4m^2 by completing the square and thus v_m/m ≤min ( v_m^2, v_m/m) + 1/4m^2. Summing over m gives the statement. There exists δ_3>0, constants C_1, C_2>0, and a function 𝒮 [0,∞] → [0,1] such that the following hold: Let (v_m)_m=1^∞ and (w_m)_m=1^∞ be sequences of vectors in ℝ^2. If ∑_m=1^∞ m v_m^2+ ∑_m=1^∞w_m^2 < ∞ then ∫_0^2π e^∑_m=1^∞(m θ) · v_m/m + i∑_m=1^∞(m θ) · w_m/m dθ/2π ≤ e^v_1^2/4 - δ_3 min ( v_1^4, v_1^2) + ∑_m=2^∞min (C_1 v_m^2, v_m/m ) ( 1 + C_2 ∑_m=2^∞w_m^2 + C_2 ∑_m=2^∞v_m^2 ) 𝒮( w_1 ) . Furthermore, we have 𝒮(y) = O ( 1/√(y)), we have 𝒮(y) ≤ e^ - y^2/4 + O( y^3), and 𝒮(y) is bounded away from 1 for y in each fixed closed interval not containing 0. Here the function 𝒮 describes how much savings is obtained in our estimate from cancellation induced by w_1. The advantage of writing the bound in this way is we can treat 𝒮(w_1) as a single quantity for calculations that are uniform in w_1 but also easily break up into different ranges. From this point on, we always take 𝒮 to be a function as in <ref>. Take C_1 as in <ref>. Fix δ_3, C_2 to be chosen later. We choose δ_3 sufficiently small and C_2 sufficiently large. Neither depends on the other. We will always write a fixed closed interval not containing zero as [C_3,C_4]. (There is also no relation between C_3,C_4 and the other variables.) We let S(v_1,…, w_1,…) = ∫_0^2π e^∑_m=1^∞(m θ) · v_m/m + i∑_m=1^∞(m θ) · w_m/m dθ/2π/ e^v_1^2/4 - δ_3 min ( v_1^4, v_1^2) + ∑_m=2^∞min (C_1 v_m^2, v_m/m ) ( 1 + C_2 ∑_m=2^∞w_m^2 + C_2 ∑_m=2^∞v_m^2 ) . and define 𝒮(y) = inf_ (v_m)_m=1^∞, (w_m)_m=1^∞∈ (ℝ^2)^ℕ w_1 =y S(v_1,…, w_1,…) so that (<ref>) holds by definition and the upper bounds on 𝒮(y) can be checked by checking corresponding upper bounds on S(v_1,…, w_1,…). 
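As a concrete illustration, not used in the argument, of the savings encoded by 𝒮, consider the one-frequency case with only v_1 = (r,0) and w_1 = (y,0) nonzero. The ratio of the twisted integral to the untwisted one can then be computed by direct quadrature and compared with the intermediate-range estimate proved earlier, which is the bound that the compact-interval property of 𝒮 will ultimately be reduced to. The sketch assumes NumPy; the grids over r and y are arbitrary.

```python
import numpy as np

# One-frequency check of the intermediate-range bound:
#   |int exp((r + i y) cos t) dt/(2pi)| / (int exp(r cos t) dt/(2pi))
#     <= 1 - exp(-2r) * min(y^2/pi^2, 1/(2 y^2)).
t = np.linspace(0.0, 2*np.pi, 1 << 13, endpoint=False)
slack = np.inf
for r in np.linspace(0.0, 3.0, 25):
    base = np.mean(np.exp(r * np.cos(t)))
    for y in np.linspace(0.25, 10.0, 79):
        ratio = abs(np.mean(np.exp((r + 1j*y) * np.cos(t)))) / base
        bound = 1 - np.exp(-2*r) * min(y**2/np.pi**2, 1/(2*y**2))
        slack = min(slack, bound - ratio)
print("smallest slack on this grid:", slack)   # positive, so the bound holds at every grid point
```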
That is, for (<ref>) it suffices to have S(v_1,…, w_1,…) = O ( 1/√(w_1)), for (<ref>) it suffices to have S(v_1,…, w_1,…) ≤ e^ - w_1^2/4 + O(w_1^3) and for 𝒮(y) to be bounded away from 1 for y in an interval [C_3,C_4] not containing 0 it suffices to have S(v_1,…, w_1,…) bounded away from 1 for w_1∈ [C_3,C_4] First note that we can always bound the integral ∫_0^2π e^∑_m=1^∞(m θ) · v_m/m + i∑_m=1^∞(m θ) · w_m/m dθ/2π by its untwisted form ∫_0^2π e^∑_m=1^∞(m θ) · v_m/m dθ/2π which is bounded by Proposition <ref> as e^v_1^2/4 - δ_2 min ( v_1^4, v_1^2) + ∑_m=2^∞min (C_1 v_m^2, v_m/m ) ) , which can be bounded by e^v_1^2/4 - δ_3 min ( v_1^4, v_1^2) + ∑_m=2^∞min (C_1 v_m^2, v_m/m ) ) 1/ 1+ δ_3 v_1^4 since log (1+ δ_3 v_1)^4 ≤ (δ_2 -δ_3) min ( v_1^4, v_1^2) for δ_3 sufficiently small with respect to δ_2. It follows that S(v_1,…, w_1,…) ≤1/ (1 + δ_3 v_1^4) ( 1 + C_2 ∑_m=2^∞w_m^2 + C_2 ∑_m=2^∞v_m^2 ) . This in particular implies that S(v_1,…, w_1,…) ≤ 1 and thus 𝒮(y) ≤ 0. Since 𝒮(y) is clearly nonnegative we see that 𝒮 is indeed a function from [0,∞] to [0,1]. Furthermore, as long as δ_3 v_1^4 + C_2 ∑_m=2^∞ w_m^2 + C_2 ∑_m=2^∞v_m^2 ≥w_1^2/4 we have S(v_1,…, w_1,…) ≤1/ 1+ w_1^2/4. Since 1/ 1+ w_1^2/4 is O (1/√(w_1)), is equal to e^ - w_1^2/4+ O(w_1^3), and is bounded away from 1 for C_3 ≤w_1≤ C_4, for the remainder of the argument we may assume that δ_3 v_1^4 + C_2 ∑_m=2^∞ w_m^2 + C_2 ∑_m=2^∞v_m^2 < w_1^2/4 which notably implies ∑_m=1^∞ w_m^2 + ∑_m=1^∞v_m^2 < O ( w_1^2) + O(1) since δ_3 v_1^4 ≥ C_2 v_1^2 + O(1) and w_1^2= O(w_1^2). First we check (<ref>). Since S(v_1,…,w_1,…) ≤ 1, it suffices to check (<ref>) for y sufficiently small. Note that (<ref>) implies that, as long as w_1 is sufficiently small, v_1 is as small as desired, and the same holds for ∑_m=2^∞v_m^2 + ∑_m=2^∞w_m^2. By Cauchy-Schwarz v_1+ w_1+ ∑_m=2^∞√(v_m^2 + w_m^2)/m ≤v_1+w_1 + √( (ζ(2)-1) ∑_m=2^∞( v_m^2 + w_m^2)) which we can take to be as small as desired, in particular ensuring the assumption of <ref> is satisfied, and we have log∫_0^2π e^∑_m=1^∞(m θ) · v_m/m + i∑_m=1^∞(m θ) · w_m/mdθ/2π ≤v_1^2/4 - w_1^2/4 - δ_3 v_1^4 + O_δ_3 ( v_1^6 + w_1^3 + ∑_m=2^∞ (v_m^2+ w_m^2 )) and by (<ref>) we have v_1^6 = O ( w_1^3) so that term can be ignored. Thus (<ref>) is less than or equal to e^v_1^2/4- δ_3 v_1^4/ e^v_1^2/4 - δ_3 min ( v_1^4, v_1^2)e^ O( ∑_m=2^∞ (v_m^2+ w_m^2 ))/1 + C_2 ∑_m=2^∞w_m^2 + C_2 ∑_m=2^∞v_m^2 e^ - w_1^2/4 + O ( w_1^3 ) . We have dropped the term ∑_m=2^∞min (C_1 v_m^2, v_m/m ) from the denominator as it is always ≥ 1 but is unneeded. We clearly have e^v_1^2/4- δ_3 v_1^4/ e^v_1^2/4 - δ_3 min ( v_1^4, v_1^2)≤ 1 and we have e^ O( ∑_m=2^∞ (v_m^2+ w_m^2 ))/1 + C_2 ∑_m=2^∞w_m^2 + C_2 ∑_m=2^∞v_m^2≤ 1 as long as w_1 is sufficiently small and C_2 is sufficiently large since we can bound e^x by 1 + cx for any c>1 as long as x is sufficiently small. Thus (<ref>) is less then or equal to e^ - w_1^2/4 + O ( w_1^3 ) for w_1 sufficiently small which gives 𝒮(y) ≤ e^ - y^2/4 + O ( y^3) for y sufficiently small, and thus for all y, verifying (<ref>). Next we check (<ref>). Before applying <ref>, we observe that sup_θ∈ [0,2π] e^∑_m=1^∞(m θ) · v_m/m ≤ e^∑_m=1^∞v_m/m≤ e^ζ(2)/4 + ∑_m=1^∞min( v_m^2, v_m/m) ≪ e^v_1^2/4 - δ_3 min ( v_1^4, v_1^2) + ∑_m=2^∞min (C_1 v_m^2, v_m/m ) since δ_3 ≤1/4 and C_1≥ 1. 
Next observe that (using (<ref>) to handle the case w_1 small in the first inequality) 4/π + 1 + √(∑_m=1^∞v_m^2 + ∑_m=2^∞w_m^2)/√(π)w_1^1/4≪ 1 + √(∑_m=1^∞v_m^2 + ∑_m=2^∞w_m^2) ≤5/4 + ∑_m=1^∞v_m^2 + ∑_m=2^∞w_m^2 ≪ 1 + δ_3 v_1^4 + C_2 ∑_m=2^∞w_m^2 + C_2 ∑_m=2^∞v_m^2 . Putting these bounds together with <ref>, we obtain ∫_0^2π e^∑_m=1^∞(m θ) · v_m/m + i∑_m=1^∞(m θ) · w_m/m dθ/2π ≤sup_θ∈ [0,2π] e^∑_m=1^∞(m θ) · v_m/m /√(w_1)( 4/π + 1 + √(∑_m=1^∞v_m^2 + ∑_m=2^∞w_m^2)/√(π)w_1^1/4) ≪ e^v_1^2/4 - δ_3 min ( v_1^4, v_1^2) + ∑_m=2^∞min (C_1 v_m^2, v_m/m )1/√(w_1) ( 1 + δ_3 v_1^4 + C_2 ∑_m=2^∞w_m^2 + C_2 ∑_m=2^∞v_m^2 ) ≤ e^v_1^2/4 - δ_3 min ( v_1^4, v_1^2) + ∑_m=2^∞min (C_1 v_m^2, v_m/m )1/√(w_1) (1 + δ_3 v_1^4) ( 1 + C_2 ∑_m=2^∞w_m^2 + C_2 ∑_m=2^∞v_m^2 ) which verifies (<ref>). Next we consider w_1 in an interval I= [C_3,C_4] not containing 0. Applying <ref> and then <ref> we have S(v_1,…, w_1,…) ≤∫_0^2π e^∑_m=1^∞(m θ) · v_m/m + i∑_m=1^∞(m θ) · w_m/m dθ/2π/∫_0^2π e^∑_m=1^∞(m θ) · v_m/m dθ/2π ≤ 1- e^ -2 ∑_m=1^∞v_m/mmin( ∑_m=1^∞w_m^2/π^2 m^2, 1/ 2∑_m=1^∞w_m^2). Since ∑_m=1^∞v_m/m ≤√(ζ(2) ∑_m=1^∞v_m^2)≪w_1≤ C_4=O(1) and ∑_m=1^∞w_m^2 ≪w_1^2 ≤ C_4^2 =O(1) and ∑_m=1^∞w_m^2/π^2 m^2≥w_1^2/π^2m^2≥ C_3^2/π^2 m^2, we see that (<ref>) is at most 1-ϵ for some ϵ>0. Hence S(v_1,…, w_1,…) ≤ 1-ϵ for w_1∈ [C_3, C_4], as desired. Let (v_m)_m=1^∞ and (w_m)_m=1^∞ be sequences of vectors in ℝ^2. If ∑_m=1^∞ m v_m<∞ and ∑_m=1^∞ w_m^2 < ∞ then ∫_0^2π e^∑_m=1^∞(m θ) · v_m/m + i∑_m=1^∞(m θ) · w_m/m dθ/2π ≤ e^v_1^2/4 - δ_3 min ( v_1^4, v_1^2) 𝒮( w_1) ∏_m=2^∞( e^min ( C_1 v_m^2, v_m/m ) (1 + C_2 w_m^2 )( 1+ C_2 v_m^2) ). This follows from taking <ref> and separating terms, using the trivial bound 1 + C_2 ∑_m=2^∞w_m^2 + C_2 ∑_m=q^∞v_m^2 ≤∏_m=2^∞ (1 + C_2 w_m^2 ) (1+ C_2 v_m^2) . Write A_n for ∑_d| n, d<n E_d and B_n for ∑_d| n, d<n E_d d/n. Let v_1,…, v_k and w_1,…, w_k be vectors in ℝ^2. Then 𝔼 [ e^∑_n=1^k X_n,ξ· v_n + i ∑_n=1^k X_n,ξ· w_n ] ≤ ∏_n=1^k ( e ^ E_n v_n^2/4 + min (C_1 A_n v_n^2, B_n v_n ) - δ_3 E_n min ( v_n^4, v_n^2) (1 + C_2 v_n^2)^ A_n (1+ C_2 w_n^2)^ A_n𝒮(w_n ) ^E_n). Taking <ref>, using <ref> to bound each factor, and then rearranging terms, we obtain 𝔼 [ e^∑_n=1^k X_n,ξ· v_n + i ∑_n=1^k X_n,ξ· w_n ] ≤∏_n=1^k ( ( e^v_n^2/4 - δ_3 min ( v_n^4, v_n^2) 𝒮( w_n ) )^ E_n∏_ d| n, d<n( e^min ( C_1 v_n^2, v_nd/n ) (1 + C_2 w_n^2 )(1+ C_2 v_n^2) )^ E_d) . Using ∑_ d| n, d<n E_d min ( C_1 v_n^2, v_nd/n ) ≤min (C_1 ∑_ d| n, d<n E_d v_n^2, ∑_d| n E_d v_n d/n ) we obtain (<ref>). The following facts about A_n, B_n, and E_n will be useful in the remainder of the argument. We have the following identities and inequalities for A_n, B_n, E_n: B_n + E_n = q^n/n. A_n = O ( q^n/2/n) . B_n = O ( q^n/2/n). E_n = q^n/n + O( q^n/2/n) . E_n> 2A_n+4 as long as q>5 . There exists a positive constant c such that E_n/2- 2 A_n - q/2-1> c(A_n + E_n) as long as q>11 and n>1. For (<ref>) by definition we have B_n+ E_n = ∑_d| n E_d d/n and ∑_d| n E_d d= q^n by either counting elements of the finite field 𝔽_q^n or a zeta function argument. This in particular implies E_n ≤q^n/n, which we will use repeatedly in the remaining proofs. For (<ref>), we observe that the largest possible d satisfying d| n and d<n is n/2 and every other solution is at most n/3, so A_n=∑_d| n, d<n E_d ≤ E_n/2 + ∑_d ≤ n/3 E_n/3≤ q^n/2/ (n/2) + ∑_d≤ n/3 q^d = O(q^n/2/n) + O(q^n/3)= O(q^n/2/n). (<ref>) follows from (<ref>) upon observing B_n≤ A_n. (<ref>) follows from (<ref>) and (<ref>). 
To obtain (<ref>) and (<ref>) we redo the above proofs with explicit constants to prove the inequalities for all q^n sufficiently large and then use exact formulas to handle the finitely many remaining possible values of (q,n). The proof of (<ref>) gives A_n=∑_d| n, d<n E_d ≤ E_n/2 + ∑_d ≤ n/3 E_n/3≤ q^n/2/ (n/2) + ∑_d≤ n/3 q^d≤ 2 q^n/2/n + 1/1-q^-1 q^n/3 . For X> 0 we have 2X^1/6> 1/1-7^-1log_7 X so we have 2q^n/6 > 1/1-7^-1log_7 q^n ≥1/1-q^-1log_q q^n = 1/1-q^-1 n so A_n ≤ 4 q^n/2/n. Then we have B_n = A_n ≤ 4 q^n/2/n and E_n= q^n/n-B_n ≥q^n/n - 4 q^n/2/n . Thus (<ref>) is satisfied as long as q^n/n - 4q^n/2/n > 4 q^n/2/n +4 i.e. as long as 1 > 8 q^-n/2 +4 n q^-n which by (<ref>) follows from 1> 8 q^-n/2 + 8 q^-5n/6 which holds for q^n> 95.2. Because q>7 this holds for all n>2. But for n=1 we have E_n=q and A_n=0 so (<ref>) becomes q>4 which is satisfied for all q>5 and for n=2 we have E_n = q^2-q/2 and A_n=q so (<ref>) becomes q^2-5q/2 >4 which is satisfied for all q>5. For (<ref>) it follows from (<ref>), (<ref>), and (<ref>) that E_n/2- 2 A_n - q/2-1 = q^n/(2n) + O(q^n/2/n) +O(q) and A_n+ E_n =q^n/n + O(q^n/2/2) so (<ref>) is satisfied for q^n sufficiently large. Thus it suffices to prove E_n/2- 2 A_n - q/2-1> 0 as then we can choose c small enough to ensure (<ref>) is satisfied for the finitely many remaining values of q,n. Then by (<ref>), (<ref>), and (<ref>) it suffices to prove q^n/2n - 2 q^n/2/n - 8 q^n/2/n - q^1+ n/6/n- 2 q^n/6/n>0 or equivalently 1/2 > 10 q^-n/2 + q^ 1 - 5n/6 + 2 q^- 5n/6. which since n ≥ 2 follows from 1/2 > 10 q^-n/2 + q^ -n/3 + 2 q^- 5n/6 which holds for q^n>697.4. Because q>11 this holds for all n>2. For n=2 we have E_n = q^2-q/2 and A_n=q so (<ref>) becomes q^2-11q/4-1 >0 which is satisfied for all q>11. For any n>0 and v∈ℝ^2, we have e ^ - B_n v^2/4 + min (C_1 A_n v^2, B_n v ) - δ_3 E_n min ( v^4, v^2) (1 + C_2 v^2)^ A_n ≤ e^O ( min ( q^n/2v^2 /n, 1/n)) . We note first that min (C_1 A_n v^2, B_n v ) = O ( A_n v^2) and also log (1 + C_2 v^2)^ A_n = O ( A_n v^2) so the left-hand side of (<ref>) is always e^ O ( A_n v^2) = e^ O ( q^n/2v^2/n) by (<ref>). We next check that the left-hand side of (<ref>) is ≤ e^ O(1/n). First in the range v≤ 1, it suffices to check that e^ O ( A_n v^2) - δ_3 E_n v^4 ≤ e^ O(1/n) but the exponent is ≤ O ( A_n^2/δ_3 E_n) and therefore is ≤ O(1/n) by (<ref>) and (<ref>). For v≥ 1, it suffices to check that e^ O ( A_n v^2) - δ_3 E_n v^4≤ e^ O(1/n) which is automatic as long as O(A_n) - δ_3 E_n ≤ 0 which happens for all but finitely many n, again by (<ref>) and (<ref>). For these finitely many n, it suffices to check that (<ref>) goes to 0 as n goes to ∞, which is clear as e^min ( C_1 A_n v^2, v) is merely exponential in a linear function of v while (1 + C_2 v^2)^ A_n is polynomial and these are both dominated by e^ -δ_3 E_n min ( v^4, v^2) which is exponential in a quadratic function. Let v_1,…, v_k and w_1,…, w_k be vectors in ℝ^2. Then 𝔼 [ e^∑_n=1^k X_n,ξ· v_n + i ∑_n=1^k X_n,ξ· w_n ] ≤∏_n=1^k ( e ^q^n/nv_n^2/4 + O ( min ( q^n/2v_n^2 /n, 1/n))(1+ C_2 w_n^2)^ A_n𝒮(w_n ) ^E_n). This follows from plugging E_n = q^n/n-B_n from (<ref>) into <ref> and then plugging in <ref>. §.§ Pointwise bounds Recall that F(x_1,…, x_k) is the probability density function of X_1,ξ,…, X_n,ξ. Assume q>5. Let x_1,…, x_k be vectors in ℝ^2. Then F(x_1,…, x_k)/∏_n=1^k ( e^ - nx_n^2/q^nn/q^n π)≤ O( e^ O ( ∑_n=1^k min ( n q^-3n/2x_n^2, n^-1 ) ) ). Let v_n = 2 n x_n /q^n for all n from 1 to k. 
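As an aside, the elementary facts about E_n, A_n and B_n that are used repeatedly from this point on are easy to confirm computationally for small parameters, using the standard Möbius-inversion (necklace) formula for the number of monic irreducible polynomials of degree n over 𝔽_q. The self-contained sketch below uses only the Python standard library; the choices q ∈ {7, 13} and n ≤ 12 are arbitrary samples.

```python
from fractions import Fraction

def mu(n):
    # Moebius function by trial division (adequate for the tiny n used here)
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            result = -result
        p += 1
    return -result if n > 1 else result

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def E(q, n):
    # number of monic irreducible polynomials of degree n over F_q
    return sum(mu(d) * q**(n // d) for d in divisors(n)) // n

for q in (7, 13):
    for n in range(1, 13):
        A = sum(E(q, d) for d in divisors(n) if d < n)
        B = sum(Fraction(d, n) * E(q, d) for d in divisors(n) if d < n)
        assert B + E(q, n) == Fraction(q**n, n)            # B_n + E_n = q^n/n
        assert E(q, n) > 2 * A + 4                         # stated for q > 5
        if q > 11 and n > 1:
            assert 2 * E(q, n) - 8 * A - 2 * q - 4 > 0     # E_n/2 - 2 A_n - q/2 - 1 > 0
print("checked q in {7, 13}, n = 1 .. 12")
```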
By the Fourier inversion formula we have (2π)^2k e^∑_n=1^k x_n · v_n F( x_1,…, x_n) = ∫_w_1,…, w_k ∈ℝ^2 e^ -i ∑_n=1^k x_n · w_n 𝔼 [ e^∑_n=1^k X_n,ξ· v_n + i ∑_n=1^k X_n,ξ· w_n ] dw_1 … dw_k . (<ref>) and <ref>, give (2π)^2k e^∑_n=1^k x_n · v_n F( x_1,…, x_n) ≤∫_w_1,…, w_k ∈ℝ^2𝔼 [ e^∑_n=1^k X_n,ξ· v_n + i ∑_n=1^k X_n,ξ· w_n ] dw_1 … dw_k ≤∏_n=1^k e^q^n/nv_n^2/4+ O ( min ( q^n/2v_n^2 /n, 1/n))∫_w_1,…, w_k ∈ℝ^2∏_n=1^k (1+ C_2 w_n^2)^ A_n𝒮( w_n )^E_n dw_1 … dw_k = ∏_n=1^k e^q^n/nv_n^2/4+ O ( min ( q^n/2v_n^2 /n, 1/n))∏_n=1^k ∫_w∈ℝ^2 (1+ C_2 w^2)^ A_n𝒮( w )^E_n dw. We first tackle the inner integral ∫_w∈ℝ^2 (1+ C_2 w^2)^ A_n𝒮( w )^E_n dw. First note that for w large, we have (1+ C_2 w^2)^ A_n = O(w^2A_n) and S( w )^E_n = O ( w^- E_n/2), so for the integral to converge, it is necessary and sufficient to have E_n/2 > A_n+2, which follows from (<ref>). Since we can absorb the integrals (<ref>) for small n into the implicit constant, we will be focused on the asymptotic evaluation of (<ref>) for large n. For w bounded we have log (1+ C_2 w^2) = 1+ C_2 w^2 + O (w^4) and log𝒮(w) ≤w^2/4 + O( w)^3 so (1+ C_2 w^2)^ A_n𝒮(w)^E_n≤ e^(C_2 A_n - E_n/4) w^2 +O ( q^n w^3/n) (using (<ref>) and (<ref>)) which for w≤ (q^n/n)^-2/5 is ≤ e^(C_2 A_n - E_n/4) w^2 e^ ( q^n/n)^-1/5 so the integral over w≤ (q^n/n)^-2/5 is bounded by e^ ( q^n/n)^-1/5 times ∫_w∈ℝ^2 e^ (C_2 A_n - E_n/4) w^2 dw = π/E_n/4 - C_2 A_n = π/q^n/4n - O ( q^n/2 /n) =4π n /q^n + O ( n/q^3n/2) (for n large), with the e^ ( q^n/n)^-1/5 factor itself contributing an error term of size O ( (q^n/n)^-6/5). The integral (<ref>) over w> (q^n/n)^-2/5 will give additional error terms. First in the range where w> (q^n/n)^-2/5 but w is bounded by a fixed small constant, we have (C_2 A_n - E_n/4) w^2= - ( q^n/4n+ O (q^n/2/n)) w^2 which is larger by a constant factor than O ( q^n w^3/n), so the integrand of (<ref>) in this range is at most e^ - c q^n w^2/n≤ e^ - c (q^n/n)^1/5 for a small constant c. Since the area of this range is O(1), this range gives an error term of size decreasing superexponentially in q^n/n. For w greater than a large fixed constant R, we have 1 + C_2 w^2 = O( w)^2and 𝒮(w) = O ( w^-1/2). This gives a bound of O(1)^A_n + E_nw^ 2A_n - E_n/2 for the integrand or O(1)^ A_n + E_n R^ 2+ 2A_n - E_n/2 for the integral (<ref>), and since both A_n + E_n and E_n -4 A_n -4 are asymptotic to q^n/n by (<ref>) and (<ref>), we can choose R large enough that the second term dominates and the error term decays superexponentially in q^n/n. For the intermediate range of w between two fixed constants, we also get superexponential decay simply by observing that (1 + C_2 w^2)^2= O(1)^A_n and 𝒮(w)^E_n≤ (1-ϵ)^E_n for some ϵ>1, while A_n = o(E_n) and E_n increases exponentially by (<ref>) and (<ref>), so the integrand has superexponential decay and the length of the integral on this range is O(1). So all these error terms are dominated by the O ( (q^n/n)^-6/5), giving ∫_w∈ℝ^2 (1+ C_2 w^2)^ A_n𝒮(w)^E_n dw = 4π n /q^n +O ( (q^n/n)^-6/5) which implies ∏_n=1^k ∫_w∈ℝ^2 (1+ C_2 w^2)^ A_n𝒮(w)^E_n dw = O ( ∏_n=1^k 4 π n/q^n) . 
Since v_n = 2 n x_n / q^n we have v_n · x_n = q^n/ 4nv_n^2 + n/q^nx_n^2 and plugging (<ref>) and (<ref>) into (<ref>) we obtain (2π)^2k e^∑_n=1^k (q^n/ 4nv_n^2 + n/q^nx_n^2 ) F( x_1,…, x_n) ≤∏_n=1^k ( e ^q^n/nv_n^2/4 + O( min ( q^n/2v_n^2 /n, 1/n)) ) O ( ∏_n=1^k 4 π n/q^n ) and solving for F(x_1,…, x_n) gives F( x_1,…, x_n) ≤ O(1) ∏_n=1^k ( e ^O( min ( q^n/2v_n^2 /n, 1/n))) ∏_n=1^k ( (2π)^-2 e^ - n/q^nx_n^24 π n/q^n) ≪ e^ O ( ∑_n=1^k min ( q^n/2v_n^2/n, 1/n))∑_n=1^k ∏_n=1^k ( e^ - n/q^nx_n^2n /π q^n). Plugging in the definition (<ref>) of v_n and dividing by ∏_n=1^k ( e^ - n/q^nx_n^2n /π q^n) gives (<ref>). Assume q>5. Let x_1,…, x_k be vectors in ℝ^2. Then F(x_1,…, x_k)/∏_n=1^k ( e^ - nx_n^2/q^nn/q^n π)≤ O( k^O(1) ) = O(N^O(1)). We have ∑_n=1^k min ( n q^-3n/2x_n^2, n^-1 ) ≤∑_n=1^k n^-1 = O(log k) and plugging (<ref>) into (<ref>), together with the trivial bound k≤ N, gives (<ref>). §.§ Hermite polynomial expansion bounds The goal of this subsection is to prove a formula, Corollary <ref>, for F(x_1,…, x_k) as the product of a Gaussian probability density function times a sum of Hermite polynomials weighted by certain coefficients h_a_1,1,…, a_k,2, together with bounds for the coefficients h_a_1,1,…, a_k,2. The bounds for the coefficients will start in <ref> with a complicated bound expressed in terms of an integral and conclude in <ref> which bounds a sum of squares of the h_a_1,1,…, a_k,2 which exactly equals the error, in L^2 norm integrated against the Gaussian measure, of a low-degree polynomial approximation for F(x_1,…,x_k)/∏_n=1^k ( e^ - nx_n^2/q^nn/q^n π) obtained using these Hermite polynomials. We begin with a brief review of Hermite polynomials. The (probabilist's) Hermite polynomials are defined as: 𝐻𝑒_n ( x) = (-1)^n e^x^2/2d^n/ dx^n e^ - x^2/2 and their key property is the orthogonality when integrated against the Gaussian measure ∫_-∞^∞𝐻𝑒_n ( x) 𝐻𝑒_m ( x) e^ - x^2/2/√(2π) dx = n! if n=m 0 if n ≠ m . After a change of variables, this implies for any σ>0 ∫_-∞^∞𝐻𝑒_n ( x/σ) 𝐻𝑒_m ( x/σ) e^ - x^2/2σ^2/√(2π)σ dx = n! if n=m 0 if n ≠ m . We have ∫_-∞^∞ e^ i xy𝐻𝑒_n ( x) e^ - x^2/2/√(2π) dx= ∫_-∞^∞ e^ i xy (-1)^n ( d^n/ dx^ne^ - x^2/2/√(2π)) dx = ∫_-∞^∞( d^n/ dx^ne^ i xy) e^ - x^2/2/√(2π) dx = y^n e^y^2/2 and a change of variables gives ∫_-∞^∞ e^ i xy𝐻𝑒_n ( x/σ) e^ - x^2/2σ^2 /√(2π)σ dx= σ^n y^n e^σ^2 y^2/2. Now we introduce the notation that will be needed for our first bound on the coefficients. Let C_5 be the implicit constant in the big O in <ref>. Let n be a positive integer. For a a positive integer and r a positive real number, write ℐ_n(a,r)= 1/π∫_-∞^∞ e^ C_5 min ( q^n/2v ^2 /n, 1/n)/ v+ i r^a+1 dv . For a=0, write ℐ_n(a,r)=1. For a_n,1,a_n,2 two nonnegative integers, let ℒ_n (a_n,1,a_n,2)= inf_r_n,1, r_n,2≥ 0 r_n,j=0 if and only if a_n,j=0 (1+ C_2 (r_n,1^2+r_n,2^2))^A_n𝒮(√( r_n,1^2+r_n,2^2 ))^E_n/ e^ - q^n/nr_n,1^2 + r_n,2^2 /4ℐ_n(a_n,1,r_n,1) ℐ_n(a_n,2,r_n,2). For w_n∈ℝ^2, write w_n,1 for its first coordinate and w_n,2 for its second coordinate. There exists a tuple (h_a_1,1,…, a_k,2)_ a_1,1,…, a_k,2∈ℤ^≥ 0 of complex numbers indexed by 2k-tuples of nonnegative integers such that 𝔼 [ e^ i ∑_n=1^k X_n,ξ· w_n ] = e^ - ∑_n=1^k q^n/nw_n^2/4∑_ a_1,1, …, a_k,2∈ℤ^≥ 0 h_a_1,1,…, a_k,2∏_n=1^k (w_n,1^a_n,1 w_n,2^a_n,2) and for each tuple a_1,1,…, a_k,2 of nonnegative integers we have h_a_1,1,…, a_k,2≤∏_n=1^k ℒ_n ( a_n,1, a_n,2 ). We will shortly see in <ref> that the h_a_1,1,…, a_k,2 are the coefficients of an expansion of F by Hermite polynomials. 
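The orthogonality relation recalled above is the key property behind the expansion that follows, and it is straightforward to confirm numerically. The sketch below assumes NumPy and uses Gauss–HermiteE quadrature, which integrates the polynomial integrands involved exactly up to rounding; the degree cut-offs are arbitrary.

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as H

# Check: int He_n(x) He_m(x) exp(-x^2/2)/sqrt(2 pi) dx = n! if n = m, else 0.
nodes, weights = H.hermegauss(40)     # exact for polynomials of degree <= 79 against exp(-x^2/2)
for n in range(6):
    for m in range(6):
        pn = H.hermeval(nodes, [0]*n + [1])    # He_n
        pm = H.hermeval(nodes, [0]*m + [1])    # He_m
        val = np.sum(weights * pn * pm) / math.sqrt(2*math.pi)
        target = math.factorial(n) if n == m else 0.0
        assert abs(val - target) < 1e-8
print("orthogonality confirmed for n, m <= 5")
```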
The remaining results in this subsection will be devoted to proving more straightforward upper bounds on the h_a_1,1,…, a_k,2, culminating in <ref> which bounds a certain sum of h_a_1,1,…, a_k,2 which will appear in the proof of <ref>. The fact that h_a_1,1,…, a_k,2 exist and the sum is absolutely convergent follows from the fact that 𝔼 [ e^ i ∑_n=1^k X_n,ξ· w_n ] / e^ - ∑_n=1^k q^n/nw_n^2/4 is an entire function which is clear as the random variables X_n,ξ are bounded so the numerator is entire while the denominator is entire and nowhere vanishing. To estimate the coefficients, we use the Cauchy integral formula. We explain the argument in detail only in the case that a_1,1,…,a_k,2 are all nonzero. The general case follows the same ideas, but is notationally more complicated. For f a function of a complex variable z, which is bounded on loci in the complex plane where the imaginary part of z is bounded, the coefficient of z^a in the Taylor expansion at 0 of f is given for any r>0 by 1/2π( ∫_ r i + ∞^r i - ∞ f(z) dz/z^a+1 + ∫_ -ri- ∞^-ri +∞ f(z) dz/z^a+1) or writing z= x+iy, by 1/2π( ∫_ -∞^∞ f(x+ ir ) dx/ (x+ir)^a+1 + ∫_ -∞^∞ f(x-ir ) dx/(x-ir)^a+1) . On the other hand, for a=0, the value is simply f(0). Thus for f a function of complex variables z_1,1, …, z_k,2 which is bounded on loci in ℂ^2k where the imaginary parts of all coordinates are bounded, the coefficient of ∏_n=1^k ( z_n,1^a_n,1 z_n,2^a_n,2) in f is given for any r_1,1,…, r_k,2>0 by 1/(2π)^2k∑_ϵ_1,1,…, ϵ_n,k∈± 1∫_ℝ^2k f( x_1,1+ i ϵ_1,1 r_1,1,…, x_k,2 + i ϵ_k,2 ) dx_1,1… d x_k,2/∏_n=1^k ( (x_n,1+ i ϵ_n,1 r_n,1)^a_n,1+1 (x_n,2+ i ϵ_n,2 r_n,2)^a_n,2+1 ) . If some of the a_n,j are 0, we can drop the corresponding variables x_n,j from the integral as well as drop the sums over ϵ_n,j and a corresponding number of factors of (2π). We apply this to the function of u_1,…, u_k∈ℂ^2, with u_n = v_n + i w_n, given by 𝔼 [ e^∑_n=1^k X_n,ξ· u_n ] / e^ - ∑_n=1^k q^n/n u_n · u_n/4 to obtain i^ -∑_n=1^k (a_n,1 + a_n,2) h_a_1,1,…, a_k,2 = 1/(2π)^2k∑_ϵ_1,1,…, ϵ_k,2∈± 1∫_v_1,…, v_n∈ℝ^2𝔼 [ e^∑_n=1^k X_n,ξ· v_n + i ∑_n=1^k X_n,ξ·w̃_n ] / e^∑_n=1^k q^n/nv_n^2 + 2 i v_n·w̃_n - w̃_n^2/4 dv_1… dv_k/∏_n=1^k (( v_n,1 + i ϵ_n,1 r_n,1)^a_n,1+1 ( v_n,2 + i ϵ_n,2 r_n,2)^a_n,2+1 ) where w̃_n = ( ϵ_n,1 r_n,1, ϵ_n,2 r_n,2). Taking absolute values and applying <ref> gives h_a_1,1,…, a_k,2 ≤1/(2π)^2k∑_ϵ_1,1,…, ϵ_k,2∈± 1∫_v_1,…, v_n∈ℝ^2𝔼 [ e^∑_n=1^k X_n,ξ· v_n + i ∑_n=1^k X_n,ξ·w̃_n ]/ e^∑_n=1^k q^n/nv_n^2 - w̃_n^2/4 dv_1… dv_k/∏_n=1^k ( v_n,1 + i r_n,1^a_n,1+1 v_n,2 + i r_n,2^a_n,2+1) ≤1/(2π)^2k∑_ϵ_1,1,…, ϵ_n,k∈± 1∫_v_1,…, v_n∈ℝ^2∏_n=1^k ( e ^q^n/nv_n^2/4 + C_5 min ( q^n/2v_n^2 /n, 1/n) (1+ C_2 w̃_n^2)^ A_n𝒮(w̃_n)^E_n) dv_1… dv_k / e^∑_n=1^k q^n/nv_n^2 - w̃_n^2/4∏_n=1^k ( v_n,1 + i r_n,1^a_n,1+1 v_n,2 + i r_n,2^a_n,2+1) = 1/(2π)^2k∑_ϵ_1,1,…, ϵ_k,2∈± 1∏_n=1^k (1+ C_2 w̃_n^2)^ A_n𝒮(w̃_n)^E_n/ e^ - q^n/nw̃_n^2/4∫_v_1,…, v_n∈ℝ^2∏_n=1^k e ^ C_5 min ( q^n/2v_n^2 /n, 1/n) dv_1… dv_k /∏_n=1^k ( v_n,1 + i r_n,1^a_n,1+1 v_n,2 + i r_n,2^a_n,2+1) . For w_n= (r_n,1, r_n,2), we have w̃_n=w_n and since w̃_n only appears in (<ref>) via its absolute value, we may simplify by replacing w̃_n by w_n and then removing the sum over ϵ_n,j, obtaining h_a_1,1,…, a_k,2≤1/π^2k∏_n=1^k (1+ C_2 w_n^2)^ A_n𝒮(w_n)^E_n/ e^ - q^n/nw_n^2/4∏_n=1^k∫_v_n∈ℝ^2 e^ C_5 min ( q^n/2v_n^2 /n, 1/n) / v_n,1 + i r_n,1^a_n,1+1 v_n,2 + i r_n,2^a_n,2+1 dv_n. If some of the a_n,j are 0, we drop the corresponding variables v_n,j from the integral in (<ref>) as well as a corresponding number of factors of π. 
We can bound e^ C_5 min ( q^n/2v_n^2 /n, 1/n) by e^ C_5 min ( q^n/2v_n,1^2 /n, 1/n) e^ C_5 min ( q^n/2v_n,2^2 /n, 1/n) so the inner integral splits as a product ∫_v_n∈ℝ^2 e^ C_5 min ( q^n/2v_n^2 /n, 1/n) / v_n,1 + i r_n,1^a_n,1+1 v_n,2 + i r_n,2^a_n,2+1 dv_n ≤∏_j=1^2∫_-∞^∞ e^ C_5 min ( q^n/2v_n,j^2 /n, 1/n) / v_n,j + i r_n,j^a_n,j+1 dv_n,j which matches π^2 ℐ_n ( a_n,1 , r_n,1) ℐ_n (a_n,2,r_n,2), giving a bound of ∏_n=1^k ℒ_n ( a_n,1, a_n,2 ) once we choose r_n,1,r_n,2 to be values where the minimum in the definition of ℒ_n is attained (or comes arbitrarily close to being attained). If some of the variables are 0, since we drop the integral and the factor of π, we obtain 1, again maching the definition of ℐ_n ( a_n,j , r_n,j ). Here we use the fact that e^ C_5 min ( q^n/2v_n,1^2 /n, 1/n) =1 if v_n,j=0 . We are now ready to state our Hermite expansion. The proof relies on <ref> below, but there is no circularity as Corollary <ref> is not used until the next section – we state it here for motivation. For (h_a_1,1,…, a_k,2)_ a_1,1,…, a_k,2∈ℤ^≥ 0 as in <ref> we have F(x_1,…, x_k) = ∏_n=1^k ( e^ - nx_n^2/q^nn/q^n π) ∑_ a_1,1, …, a_k,2∈ℤ^≥ 0 h_a_1,1,…, a_k,2∏_n=1^k He_a_n,1( x_n,1√(2n/q^n)) He_a_n,2(x_n,2√(2n/q^n) ) √(2n/q^n)^ a_n,1+ a_n,2. We take Fourier transforms of both sides. Using <ref> to compute the Fourier transform of the left-hand side and (<ref>) to compute the Fourier transform of the right-hand side, we see the Fourier transforms are equal. the The left-hand side is an L^2 function by <ref> and the right-hand side is L^2 by <ref> below and <ref>). By invertibility of the Fourier transform for L^2 functions, both sides are equal. We are now ready to estimate the inegral ℐ_n and local terms ℒ_n. The easiest, but most important, estimate is the following: We have ℒ_n(0,0)≤ 1 for all n. We set r_n,1=r_n,2=0 and all the terms are manifstly equal to 1 in this case, except for 𝒮(0), which is ≤ 1 by <ref>. (In fact one can also check 𝒮(0)=1 using <ref> but we don't need this.) For (a_n,1,a_n,2 ) ≠ (0,0) it will suffice to bound ℒ_n(a_n,1,a_n,2) to within a constant factor. To that end, we have the following bound for ℐ_n: We have ℐ_n(r,a) ≪1/ r^a √(a+1) where we adopt the convention 0^0=1. The case a=0 is 1 ≪ 1 which is clear. For a>0 convexity of the logarithm gives the lower bound v+ i r^a+1 = ( v^2 + r^2 )^a+1/2 = r^a+1(v^2/r^2 + 1)^a+1/2≥ r^ a+1( 1 + a+1/2v^2/r^2). We also have e^ C_5 min ( q^n/2v ^2 /n, 1/n)≤ e^C_5/n≪ 1. Thus ℐ_n( a,r) ≪∫_-∞^∞1 / v+ i r^a+1 dv ≤∫_-∞^∞1/ r^a+1( 1 + a+1/2v^2/r^2) dv. The change of variables x= √(a+1/2)v/r gives ∫_-∞^∞1/ r^a+1( 1 + a+1/2v^2/r^2) dv= 1/r^a√(2/a+1)∫_∞^∞1/1+x^2≪1/ r^a √(a+1). This allows us to prove the following general bound, where we have reintroduced w_n for compactness of notation. Fix integers n>0 and a_n,0, a_n,1≥ 0. Let r_n,1,r_n,2 be nonnegative real numbers such that r_n,j=0 if and only if a_n,j=0. Let w_n=(r_n,1,r_n,2). Then ℒ_n (a_n,1, a_n,2) ≪ (1+ C_2 w_n^2)^A_n𝒮( w_n )^E_n/ e^ - q^n/nw_n^2 /4 a_n,1^r_n,1 a_n,2^r_n,2√(a_n,1+1 )√(a_n,2+1). This follows from the definition of ℒ_n, after observing that a minimum is bounded by its value at any point, applying <ref> to bound ℐ_n(a_n,j, r_n,j), and substituting w_n for √(r_n,1^2+r_n,2^2). We will specialize Lemma <ref> at different values of a_n,1,a_n,2 in different ranges. 
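The content of the bound on ℐ_n just proved can be isolated as follows: substituting v = r x and bounding the factor e^{C_5 min(·)} by the constant e^{C_5}, what remains is the integral of (1+x^2)^{-(a+1)/2} over the real line, and the claim is that this is O(1/√(a+1)). A quick numerical illustration (SciPy assumed; the sample values of a are arbitrary):

```python
import numpy as np
from scipy.integrate import quad

# Core of I_n(a, r) << r^{-a} / sqrt(a+1): sqrt(a+1) * int dx / (1+x^2)^{(a+1)/2}
# stays bounded (it decreases towards sqrt(2*pi) ~ 2.51 as a grows).
for a in (1, 2, 5, 10, 50, 200):
    core, _ = quad(lambda x: (1 + x*x)**(-(a + 1)/2), -np.inf, np.inf)
    print(a, np.sqrt(a + 1) * core)
```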
First we give a lemma helpful for a_n,1+a_n,2 small: For integers n>0 and a_n,1,a_n,2≥ 0 we have ℒ_n(a_n,1, a_n,2 ) ≪ e^ O ( a_n,1 + a_n,2)( q^n/n)^a_n,1+ a_n,2/3 a_n,1^ - a_n,1/3 a_n,2^- a_n,2/31/√(a_n,1+1)√(a_n,2 +1) . We apply Lemma <ref> and set r_n,j= √(na_n,j/q^n) . We have 𝒮( w_n ) ≤ e^ - w_n^2/4 + O (w_n^3) and 1 + C_2 w_n^2 ≤ e^ C_2 w_n^2 so (1+ C_2 w_n^2)^ A_n𝒮(w_n)^E_n/ e^ - q^n/nw_n^2/4≤ e^ A_n C_2 w_n^2 + q^n/nw_n^2/4 - E_n w_n^2/4 + O ( E_n w_n^3) = e^ A_n C_2 w_n^2 + B_n w_n^2/4 + O ( E_n w_n^3) = e^ O ( q^n/2/nw_n^2 + q^n/nw_n^3 ) = e^ O ( q^n/nw_n^3 ) since if a_n,1=a_n,2=0 the exponent is 0 and otherwise w_n≥√(n/q^n)≥ q^-n/2. We have w_n^3 = (r_n,1^2 + r_n,2^2)^3/2 = n/q^n ( a_n,1^2/3 + a_n,2^2/3)^3/2 = O ( n/q^n (a_n,1 + a_n,2 ) ). Combining (<ref>) and (<ref>), we have (1+ C_2 w_n^2)^ A_n𝒮(w_n)^E_n/ e^ - q^n/nw_n^2/4 r_n,1^a_n,1 r_n,2^a_n,2 = e^ O ( a_n,1 + a_n,2)/ r_n,1^a_n,1 r_n,2^a_n,2 =e^ O ( a_n,1 + a_n,2)( q^n/n)^a_n,1+ a_n,2/3 a_n,1^ - a_n,1/3 a_n,2^- a_n,2/3 . Adding the √(a_n,1+1)√(a_n,2+1) term, we obtain the statement. Next we give a lemma for a_n,1+a_n,2 large. There exists an absolute constant C_6 such that for integers n>0 and a_n,1,a_n,2≥ 0 with a_n,1 + a_n,2 larger than C_6 q^n/n, we have ℒ_n(a_n,1, a_n,2 ) ≪ O(1)^ A_n+E_n( q^n/2n)^E_n/4- A_n+ a_n,1+ a_n,2/2 (a_n,1+a_n,2)^ A_n - E_n/4 e^ a_n,1 + a_n,2/2a_n,1^ - a_n,1/2 a_n,2^ - a_n,2/21/√(a_n,1+1)√(a_n,2 +1) . We apply Lemma <ref> and set r_n,j= = √(2n/q^n a_n,j) We have w_n = √( r_n,1^2 + r_n,2^2) = √(2n/q^n (a_n,1 + a_n,2)) which is larger than an absolute constant, so 1+ C_2 w_n^2= O ( w_n^2) = O ( 2n/q^n (a_n,1 + a_n,1) ) and 𝒮( w_n) = O ( w_n^-1/2 ) = O ( ( q^n/2n)^1/4 (a_n,1+a_n,2)^-1/4) while e^ - q^n/nw_n^2/4 = e^ - a_n,1 + a_n,2/2 and r_n,j^a_n,j = ( q^n/2n)^ - a_n,j/2 a_n,j^a_n,j/2 . Putting (<ref>), (<ref>), (<ref>), and (<ref>) all together gives (1+ C_2 w_n^2)^ A_n𝒮(w_n)^E_n/ e^ - q^n/nw_n^2/4 r_n,1^a_n,1 r_n,2^a_n,2 = O(1)^ A_n+E_n( q^n/2n)^E_n/4- A_n + a_n,1+ a_n,2/2 (a_n,1+a_n,2)^ A_n - E_n/4 e^ a_n,1 + a_n,2/2a_n,1^ - a_n,1/2 a_n,2^ - a_n,2/2 . Adding the √(a_n,1+1)√(a_n,2+1) term, we obtain the statement. Our final lemma will be used for a_n,1+a_n,2 in an intermediate range. For each C_7, there exists ϵ>0 such that for integers n>0 and a_n,1,a_n,2≥ 0 with a_n,1 + a_n,2 less than C_7 q^n/n, we have ℒ_n(a_n,1, a_n,2 ) ≪ e^( 1/2 - ϵ) (a_n,1+ a_n,2)( q^n/2n)^a_n,1 + a_n,2/2 a_n,1^ - a_n,1/2 a_n,2^ - a_n,2/21/√(a_n,1+1)√(a_n,2 +1) . For small values of n, there are finitely many possibilities and it suffices to apply <ref>, absorbing any discrepancies into the implicit constant, so we may assume n is large. We apply Lemma <ref> and set r_n,j= = √(2n/q^n a_n,j). We have w_n <C_7. There exists ϵ>0 such that 𝒮( x) < e^ - ϵ x^2 for all x ≤ C_7 (since any ϵ< 1/4 works for x sufficiently small and some ϵ works on every bounded interval away from 0). We furthermore have (1+ C_2 w_n^2)^ A_n𝒮(w_n)^E_n≤ e^ C_2 A_n w_n^2 - ϵ E_n w_n^2 = e^ C_2 A_n w_n^2 + ϵ B_n w_n^2 - ϵq^n/nw_n^2 ≤ e^ - ϵ/2q^n/nw_n^2 for n is sufficiently large. We also have r_n,j^a_n,j = ( q^n/2n)^ - a_n,j/2 a_n,j^a_n,j/2 . Using (<ref>) and (<ref>), we have (1+ C_2 w_n^2)^ A_n𝒮(w_n)^E_n/ e^ - q^n/nw_n^2/4 r_n,1^a_n,1 r_n,2^a_n,2≤ e^( 1/4 - ϵ/2) q^n/nw_n^2 ( q^n/2n)^a_n,1 + a_n,2/2 a_n,1^ - a_n,1/2 a_n,2^ - a_n,2/2 = e^( 1/2 - ϵ) (a_n,1+ a_n,2)( q^n/2n)^a_n,1 + a_n,2/2 a_n,1^ - a_n,1/2 a_n,2^ - a_n,2/2 . Adding the √(a_n,1+1)√(a_n,2+1) term, we obtain the statement. 
We are now ready to give our final bound we need on the h_a_1,1,…, a_k,2s. If q>11 then for (h_a_1,1,…, a_k,2)_ a_1,1,…, a_k,2∈ℤ^≥ 0 as in <ref> we have ∑_ a_1,1, …, a_k,2∈ℤ^≥ 0 ∑_n=1^k n (a_n,1+ a_n,2) > kh_a_1,1,…, a_k,2^2 ∏_n=1^k a_n,1! a_n,2! ( 2n/q^n)^ a_n,1+ a_n,2≪_q k ^ - q-2/2 . It suffices to prove ∑_ a_1,1, …, a_k,2∈ℤ^≥ 0( min(k, ∑_n=1^k n (a_n,1+ a_n,2) )^q/2h_a_1,1,…, a_k,2^2 ∏_n=1^k a_n,1! a_n,2! ( 2n/q^n)^ a_n,1+ a_n,2 = O_q(k) . The inequality ∑_n=1^k n (a_n,1+ a_n,2) ≤∏_n=1^k (1 + n (a_n,1+ a_n,2) ) and (<ref>) gives ∑_ a_1,1, …, a_k,2∈ℤ^≥ 0( min (k, ∑_n=1^k n (a_n,1+ a_n,2) ))^q/2h_a_1,1,…, a_k,2^2 ∏_n=1^k a_n,1! a_n,2! ( 2n/q^n)^ a_n,1+ a_n,2 ≤∑_ a_1,1, …, a_k,2∈ℤ^≥ 0∏_n=1^k ( min(k, 1+ n (a_n,1+ a_n,2) ))^q/2h_a_1,1,…, a_k,2^2 ∏_n=1^k a_n,1! a_n,2! ( 2n/q^n)^ a_n,1+ a_n,2 ≪∑_ a_1,1, …, a_k,2∈ℤ^≥ 0∏_n=1^k ( ( min(k, 1+ n (a_n,1+ a_n,2)) )^q/2ℒ_n ( a_n,1, a_n,2) a_n,1! a_n,2! ( 2n/q^n)^ a_n,1+ a_n,2) = ∏_n=1^k ( ∑_a_n,1, a_n,2 =0^∞( min(k, 1+ n (a_n,1+ a_n,2) ))^q/2ℒ_n ( a_n,1, a_n,2) a_n,1! a_n,2! ( 2n/q^n)^ a_n,1+ a_n,2) so it suffices to show ∑_a_n,1, a_n,2 =0^∞( min(k, 1+ n (a_n,1+ a_n,2)) )^q/2ℒ_n ( a_n,1, a_n,2) a_n,1! a_n,2! ( 2n/q^n)^ a_n,1+ a_n,2 = O(k) if n=1 1 + O_q(1/n^2) if n>1 . Removing the a_n,1,a_n,2=0 term handled by <ref>, this is equivalent to ∑_a_n,1, a_n,2∈ℤ^≥ 0 (a_n,1,a_n,2)≠ (0,0) ( min (k, n (a_n,1+ a_n,2)) )^q-2/2ℒ_n ( a_n,1, a_n,2) a_n,1! a_n,2! ( 2n/q^n)^ a_n,1+ a_n,2 = O(k) if n=1 O_q(1/n^2) if n>1 . Stirling's formula gives a_n,j! ≪ a_n,j^a_n,j e^ - a_n,j√(a_n,j+1). We split a_n,1 + a_n,2 into three ranges. We fix constants C_8 sufficiently small and C_9 sufficiently large. For a_n,1+ a_n,2≤ C_8 q^n/n^7 we apply <ref> to bound ℒ_n ( a_n,1, a_n,2). For a_n,1+ a_n,2≥ C_9 q^n/n we apply <ref>. For a_n,1,a_n,2∈ (C_8 q^n/n^7, C_9 q^n/n) we apply <ref>. Applying <ref> and (<ref>) to the terms in (<ref>) with a_n,1+ a_n,2≥ C_9 q^n/n, we obtain ∑_a_n,1, a_n,2∈ℤ^≥ 0 a_n,1+ a_n,2≥ C_9 q^n/n( min(k, n (a_n,1+ a_n,2)) )^q/2ℒ_n ( a_n,1, a_n,2) a_n,1! a_n,2! ( 2n/q^n)^ a_n,1+ a_n,2 ≪∑_a_n,1, a_n,2∈ℤ^≥ 0 a_n,1+ a_n,2≥ C_9 q^n/n( min(k, n (a_n,1+ a_n,2)) )^q/2 O(1)^ A_n+E_n( q^n/2n)^E_n/2- 2A_n (a_n,1+a_n,2)^ 2A_n - E_n/21/√(a_n,1+1)√(a_n,2 +1) ≪∑_a ∈ℤ^≥ 0 a ≥ C_9 q^n/n( min(k, n a) )^q/2 O(1)^ A_n+E_n( q^n/2n)^E_n/2- 2A_n a^ 2A_n - E_n/2. We now handle the cases n=1 and n>1 separately. For n=1, we have E_n=q and A_n=0 and the terms depending on q and n may be absorbed into the implicit constant. This gives ∑_a ∈ℤ^≥ 0 a ≥ C q ( min (k, a) )^q/2 a^ - q/2 . The terms where a ≤ k contribute at most ∑_ a=1^k 1 =k and the remaining terms contribute k^q/2∑_a=k+1^∞ a^- q/2 =O(k ), so this indeed gives O(k). For n>1 and at all subsequent points in the argument, we will ignore the min(k , · ). This gives ∑_a ∈ℤ^≥ 0 a ≥ C_9 q^n/n n^q/2 O(1)^ A_n+E_n( q^n/2n)^E_n/2- 2A_n a^ 2A_n - E_n/2 + q/2 ≪ n^q/2 O(1)^ A_n+E_n( q^n/2n)^E_n/2- 2A_n( C_9 q^n/n)^ 2 A_n - E_n/2+ q/2 +1 = n^q/2 O(1)^ A_n+E_n( C_9 )^ 2 A_n - E_n/2+ q/2+1 ( q^n/n)^q/2+1. Since q> 11, by (<ref>), the exponent E_n/2- 2 A_n - q/2-1 is greater than a constant multiple of A_n + E_n so choosing C_9 sufficiently large the ( C_9 )^ 2 A_n - E_n/2+ q/2 term dominates O(1)^A_n+E_n by a factor that is doubly exponential in n. Since ( q^n/n)^q/2 and n^q-2/2 are at most singly exponential in n, they are easily dominated and the product is O(1/n^2). So indeed (<ref>) is O(1/n^2) for n>1 and O( k) for n=1. 
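The n = 1 bookkeeping above, namely that the terms with a ≤ k contribute about k and the tail contributes another O(k), can be seen directly by computation. The sketch below takes q = 13 as a sample value, sums over all a ≥ 1 (which only enlarges the sum), and truncates the tail where it is negligible; NumPy is assumed.

```python
import numpy as np

# sum_a (min(k, a))^{q/2} a^{-q/2} grows only linearly in k for q = 13:
# the printed ratio settles near 1 + 1/(q/2 - 1) ~ 1.18.
q = 13
for k in (10, 100, 1000):
    a = np.arange(1.0, 200.0 * k + 1.0)
    total = np.sum(np.minimum(k, a)**(q/2) * a**(-q/2))
    print(k, total / k)
```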
Applying <ref> and (<ref>) to the terms in (<ref>) with a_n,1+ a_n,2∈ (C_8 q^n/n^7, C_9 q^n/n), we obtain ∑_a_n,1, a_n,2∈ℤ^≥ 0 a_n,1+ a_n,2∈ ( C_8 q^n/n^7, C_9 q^n/n ) ( n (a_n,1+ a_n,2) )^q/2ℒ_n ( a_n,1, a_n,2) a_n,1! a_n,2! ( 2n/q^n)^ a_n,1+ a_n,2 ≪∑_a_n,1, a_n,2∈ℤ^≥ 0 a_n,1+ a_n,2∈ ( C_8 q^n/n^7, C_9 q^n/n ) ( n (a_n,1+ a_n,2) )^q/2 e^ - 2ϵ (a_n,1+ a_n,2)1/√(a_n,1+1)√(a_n,2 +1) ≤∑_a_n,1, a_n,2∈ℤ^≥ 0 a_n,1+ a_n,2∈ ( C_8 q^n/n^7, C_9 q^n/n ) ( n C_9 q^n/n)^q/2 e^-2 ϵ C_8 q^n/n^7 ≪(q^n/2)^2 ( n C_9 q^n/n)^q/2 e^-2 ϵ C_8 q^n/n^7 and the term e^-2 ϵ C_8 q^n/n^7 decreases doubly exponential in n while the remaining terms increase singly-exponentially so the product decreases doubly-exponentially and in particular is O(1/n^2). Applying <ref> and (<ref>) to the terms in (<ref>) with a_n,1+ a_n,2≤ C_8 q^n/n^7, we obtain ∑_a_n,1, a_n,2∈ℤ^≥ 0 a_n,1+ a_n,2∈ (0, C_8 q^n/n^7] ( n (a_n,1+ a_n,2) )^q/2ℒ_n ( a_n,1, a_n,2) a_n,1! a_n,2! ( 2n/q^n)^ a_n,1+ a_n,2 ≪∑_a_n,1, a_n,2∈ℤ^≥ 0 a_n,1+ a_n,2∈ (0, C_8 q^n/n^7] e^ O ( a_n,1 + a_n,2)( q^n/n)^- a_n,1+ a_n,2/3 a_n,1^a_n,1/3 a_n,2^a_n,2/31/√(a_n,1+1)√(a_n,2 +1) ≤∑_a_n,1, a_n,2∈ℤ^≥ 0 a_n,1+ a_n,2∈ (0, C_8 q^n/n^7] e^ O ( a_n,1 + a_n,2)( q^n/n)^- a_n,1+ a_n,2/3 (C_8 q^n/n^7) ^a_n,1/3 (C_8 q^n/n^7) ^a_n,2/3 = ∑_a_n,1, a_n,2∈ℤ^≥ 0 a_n,1+ a_n,2∈ (0, C_8 q^n/n] ( C_8^ 1/3 e^O(1) n^-2 )^ a_n,1 + a_n,2≤∑_a_n,1, a_n,2∈ℤ^≥ 0 a_n,1+ a_n,2 >0 ( C_8^ 1/3 e^O(1) n^-2 )^ a_n,1 + a_n,2 . We have ∑_a_n,1, a_n,2∈ℤ^≥ 0 a_n,1+ a_n,2 >0 x^ a_n,1 + a_n,2 = 2x-x^2/ (1-x)^2 = O(x) for x sufficiently small so, taking C_8 sufficiently small, this is O ( C_8^1/3 e^O(1) n^-2 ) = O(n^-2), as desired. § THE CHIMERA In this section, we prove <ref>, <ref>, and <ref>. The proof of <ref> is direct and independent of the prior results. To prove <ref> and <ref>, we combine estimates from the previous section on the function F with estimates from the literature on the measure μ_rm. It suffices to check first that the support of μ_ch is contained in the support of μ_rm and second that the support of μ_ch is contained in the support of μ_ep. Since μ_rm is the pushforward of the Haar measure on U(N), and the support of Haar measure is all of U(N), the support of μ_rm is (the closure of) the image of U(N). Since μ_ch is also the pushforward of a measure on U(N), its support is also contained in (the closure of) the image of U(N) and hence in the support of μ_rm. For a point L^* ∈ to be contained in the support of μ_ep, each neighborhood of L^* must contain a random Euler product L_ξ with positive probability. Since the topology is the product topology, a basis for the neighborhood consists of the sets of power series in q^-s whose first n coefficients are all within ϵ of the first n coefficients of L^*. Whether L_ξ lies in this neighborhood only depends on ξ(𝔭) for 𝔭 of degree ≤ n. The set of functions from 𝔭 of degree ≤ n to the circle a finite-dimensional manifold, the uniform measure on this manifold is supported everywhere, and the map from this to the first n coefficients of L_ξ, so it suffices for L^* to be in the image of this manifold. In other words, it suffices to check there is a single function ξ from primes to the unit circle such that first n coefficients of L_ξ agree with the first n coefficients of L^*. 
If L^* lies in the support of μ_ch then there is certainly a function ξ where the first k coefficients of L_ξ agree with L^* since L^*= ( I - q^1/2 - s M) for some M with F( - q^1/2(M),…, -q^k/2 (M^k)/k)>0 where F is the probability density function of the first k coefficients of the logarithm of a random L_ξ. Since F( - q^1/2(M),…, -q^k/2 (M^k)/k)>0, the density is nonzero, so there exists ξ such that the first k coefficients of log L_ξ match - q^1/2(M),…, -q^k/2 (M^k)/k. We now check by induction on n that for each n ≥ k there exists a function ξ_n such that the first n coefficients of log L_ξ_n match - q^1/2(M),…, -q^n/2 (M^n)/n. For n=k this is what we just checked. For n>k, we choose ξ_n so that ξ_n(f) =ξ_n-1(f) for all f of degree <n, so that the first n-1 coefficients of L_ξ_n match the first n-1 coefficients of L_ξ_n-1. The nth coefficient of log L_ξ is then ∑_𝔭∈𝔽_q[u]^+ irreducible 𝔭=nξ_n(𝔭) + ∑_𝔭∈𝔽_q[u]^+ irreducible 𝔭| n 𝔭≠ n ξ_n(𝔭)/ (n/𝔭) so it suffices to choose ξ_n so that ∑_𝔭∈𝔽_q[u]^+ irreducible 𝔭=nξ_n(𝔭) = -q^n/2 (M^n)/n - ∑_𝔭∈𝔽_q[u]^+ irreducible 𝔭| n 𝔭≠ n ξ_n(𝔭)/ (n/𝔭) We can choose ∑_𝔭∈𝔽_q[u]^+ irreducible 𝔭=nξ_n(𝔭) to be any complex number of absolute value ≤ E_n so it suffices to check the right hand side has absolute value ≤ E_n. We have ∑_𝔭∈𝔽_q[u]^+ irreducible 𝔭| n 𝔭≠ n ξ_n(𝔭)/ (n/𝔭) = O( q^n/2) and q^n/2 (M^n)/n≤ q^n/2 N/n. Since E_n is greater than a constant times q^n/n, the right hand side is ≤ E_n as long as q^n/2 > O(N + n) which happens as long as n/ log N is sufficiently large. Since n>k, this happens if k/log N is sufficiently large, and since k = ⌊ N^β⌋≥⌊ N^1/4⌋, this happens for N sufficiently large. For this section, a convenient coordinate system for consists of the variables b_n defined so that b_n(L) is √(n/q^n) times the coefficient of q^-ns in log L, so that L = e^∑_n=1^∞√(q^n/n) b_n(L) q^-ns. Thus b_n(L_ξ ) = √(n/q^n)X_n,ξ and because the definition (<ref>) of L_M gives L_M (s)= ( I - q^1/2 - s M) = e^-∑_n=1^∞ q^n/2 q^-ns(M^n)/n , we have b_n(L_M) = - ( M^n)/√(n) . Recall the measures μ_ep and μ_rm on . We define another measure μ_g on as the unique measure where the b_n are independent complex standard normal random variables and the constant coefficient is 1. The utility of μ_g for our purposes is that it serves as an approximation for both μ_ep and μ_rm. In particular, define a projection map ηℂ[[q^-s ]] →ℂ^k that sends L to b_1(L),…, b_k(L). The next two results give strong estimates, in different forms, comparing μ_rm and μ_g. <cit.> For any β<1/2, setting k = ⌊ N^β⌋, as long as N is sufficiently large in terms of β, the total variation distance between the pushforward measures η_* μ_rm^N and η_* μ_g is ≤ e^- (1- o_N(1)) N^1-βlog (N^1-β) where o_N(1) goes to 0 as N goes to ∞ for any fixed β. This is a restatement of <cit.>, with a simplified but weaker bound. We explain how our notation compares. It is immediate from the definitions that η_* μ_g is a product of k independent standard complex Gaussian distributions. Taking the real and imaginary parts, and multiplying by -√(2), we obtain a product of 2k independent standard real Gaussians, exactly what is called 𝐆 in <cit.>. (Their m is our k). Similarly, η_* μ_rm is the distribution of b_1(M),…, b_k(M) = -(M^1)/√(1),…, - (M^k)/√(k). Taking real and imaginary parts and multiplying by - √(2), this is exactly the distribution called 𝐗 in <cit.>. The total variation bound follows from <cit.>, noting that the factor 1.4 · 10^-13 n^3β-3/2 is ≤ 1 and can be ignored.
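The closeness of η_* μ_rm^N and η_* μ_g can also be seen in simulation. For Haar-random M in U(N) and n ≤ N, the classical moment computation for traces of powers of random unitary matrices gives that the expected squared modulus of the trace of M^n equals n, so b_n(L_M), which is minus the trace of M^n divided by √n, has expected squared modulus 1, matching a standard complex Gaussian with density e^{-|b|^2}/π. The Monte Carlo sketch below assumes SciPy's unitary_group; the dimension, the exponents, and the number of trials are arbitrary sample choices.

```python
import numpy as np
from scipy.stats import unitary_group

# Monte Carlo illustration: for Haar-random M in U(N) and n <= N, the expected
# squared modulus of trace(M^n) is n, so b_n = -trace(M^n)/sqrt(n) has
# E|b_n|^2 = 1, as for a standard complex Gaussian with density exp(-|b|^2)/pi.
N, trials = 12, 2000
acc = {1: 0.0, 2: 0.0, 5: 0.0}
for _ in range(trials):
    M = unitary_group.rvs(N)
    for n in acc:
        acc[n] += abs(np.trace(np.linalg.matrix_power(M, n)))**2 / n
for n in acc:
    print(n, acc[n] / trials)   # each close to 1, with fluctuations of size ~ 1/sqrt(trials)
```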
<cit.> Let ϕ∈ℂ[ c_0, c_1,…, c_0,c_1,…] have degree ≤ N. Then ∫_ϕμ_g = ∫_ϕμ_rm^N. We use the total variation distance between measures to control differences between integrals against the measures in a couple different ways: For probability measures μ_1,μ_2 with total variation distance δ and a function f, we have ∫ f μ_1 - ∫ f μ_2≤ 2 δsupf ∫ f μ_1 - ∫ f μ_2≤√(δ)( √(∫f^2 μ_1) + √(∫f^2 μ_2)) By the definition of total variation distance, we can write μ_1 - μ_1' =μ_2 -μ_2' where μ_1' ≤μ_1 and μ_2'≤μ_2 are measures with total mass δ. We obtain (<ref>) by noting the integral of f against any measure is bounded by the sup-norm of f times the total mass of that measure. We obtain (<ref>) by applying Cauchy-Schwarz to f and the density functions μ_1'/μ_1 and μ_2'/μ_2, which are 1-bounded and integrate to δ and thus have L^2 norm at most √(δ). We repeatedly use the following lemma to compare integrals against different measures: Let G be a measurable function of complex variables b_1,…, b_k. We have ∫_U(N) G( - (M),…, - (M^k)/√(k) ) μ_Haar = ∫_ G( b_1(L),…, b_k(L)) μ_rm = ∫_ℂ^k G(b_1,…,b_k) η_* μ_rm. and ∫_ G(b_1(L),…, b_k(L)) μ_ep = ∫_ℂ^k G(b_1,…,b_k) η_* μ_ep = ∫_ℂ^k G(b_1,…,b_k) F( q^1/2 b_1 ,…, q^k/2 b_k/ √(k) )/∏_j=1^k ( e^ - |b_j |^2 j / q^j π ) η_* μ_g . We prove (<ref>) first. Both equalities follow from the fact that the integral of a function against the pushforward of a measure is the integral of the pullback against the measure. For the first equality, we also use that μ_rm is the pushforward of μ_Haar along M ↦ L_M by definition, and use (<ref>) to compute the pullback of G along M ↦ L_M. We now prove (<ref>). The first equality is similar to (<ref>). For the second equality, we use that the probability density function of η_* μ_g, in the variables b_1,…, b_k, is ∏_j=1^k ( e^ - b_j ^2 1 /π), so we have ∫_ℂ^k G(b_1,…, b_k) F( q^1/2 b_1 ,…, q^k/2 b_k/ √(k) )/∏_j=1^k ( e^ - |b_j |^2 j / q^j π ) η_* μ_g = ∫_ℂ^k G(b_1,…, b_k) F( q^1/2 b_1 ,…, -q^k/2 b_k/ √(k) )/∏_j=1^k ( j / q^j ) db_1 … db_k. Because F is the probability density function of the tuple of random variables X_1,ξ,…, X_k,ξ, and by (<ref>) we have b_n(L_ξ) = √(n/q^n)X_n,ξ, it follows that F( q^1/2 b_1 ,…, -q^k/2 b_k/ √(k) )/∏_j=1^k ( j / q^j ) is the probability density function of the tuple of random variables b_1(L_ξ),…, b_k(L_ξ), i.e. of the measure η_* μ_ep, giving ∫_ℂ^k G(b_1,…, b_k) F( q^1/2 b_1 ,…, -q^k/2 b_k/ √(k) )/∏_j=1^k ( j / q^j ) db_1 … db_k= ∫_ℂ^k G(b_1,…,b_k) η_* μ_ep. The first step to proving the main theorems is to evaluate γ: If q>5 then for N sufficiently large in terms of β, ∫_U(N) F( -q^1/2(M),…, -q^k/2 (M^k)/k)/∏_j=1^k ( e^ - (M^j)^2 / j j q^j / π) μ_Haar = 1 + O ( e^ -(1- o_N(1)) N^1-βlog (N^1-β) ) . In other words, the γ in the definition (<ref>) of μ_weighted is 1 + O ( e^ -(1- o_N(1)) N^1-βlog (N^1-β) ) for N sufficiently large. (<ref>) gives ∫_U(N) F( - q^1/2(M),…, -q^k/2 (M^k)/k)/∏_j=1^k ( e^ - (M^j)^2 / j j / q^j π) μ_Haar= ∫_ℂ^k F( q^1/2 b_1 ,…, q^k/2 b_k / √(k) )/∏_j=1^k ( e^ - |b_j| ^2 j / q^j π) η _* μ_rm. Similarly, (<ref>) gives ∫_ℂ^k F( q^1/2 b_1 ,…, q^k/2 b_k / √(k) )/∏_j=1^k ( e^ - |b_j| ^2 j / q^j π) η _* μ_g = ∫_ℂ^k 1 μ_ep = 1 . Next we will prove ∫_ℂ^k F( q^1/2 b_1 ,…, q^k/2 b_k / √(k) )/∏_j=1^k ( e^ - |b_j| ^2 j / q^j π) η _*μ_rm - F( q^1/2 b_1 ,…, q^k/2 b_k / √(k) )/∏_j=1^k ( e^ - |b_j| ^2 j / q^j π) η _* μ_g = O ( e^ -(1- o_N(1)) N^1-βlog (N^1-β)) . To check (<ref>), we apply (<ref>). We use <ref> to bound the total variation distance by e^- (1- o_N(1)) N^1-βlog (N^1-β). 
We apply <ref> to bound the sup-norm by O(N^O(1)), and note that the O(N^O(1)) can be absorbed into the o_N(1) in the exponent. Combining (<ref>), (<ref>), and (<ref>), we obtain (<ref>). The main part of proving <ref> is the following: Assume that q>5. Let ϕ∈ℂ[ c_1,c_2,…, c_1, c_2,…] have degree ≤ k. For N sufficiently large in terms of β, we have ∫_ F( q^1/2 b_1 ,…, q^k/2 b_k / √(k) )/∏_j=1^k ( e^ - |b_j| ^2 j / q^j π) ϕμ_g = ∫_ϕμ_ep and ∫_ϕμ_ep = ∫_ F( q^1/2 b_1 ,…, q^k/2 b_k / √(k) )/∏_j=1^k ( e^ - |b_j| ^2 j / q^j π) ϕμ_rm + O (e^ -(1/2 - o_N(1)) N^1-βlog (N^1-β)ϕ_2 ) . First note that, since ϕ has degree ≤ k, we can express ϕ as a polynomial only in terms of c_1,c_2,…, c_k, c_1 , c_2,…, c_k. Since c_1,…, c_k can be expressed as polynomials in b_1,…,b_k, it follows that ϕ can be expressed as a polynomial in b_1,…,b_k, b_1,…, b_k, which we will refer to as ϕ̃. Then we have ∫_ϕμ_ep = ∫_ℂ^k F( q^1/2 b_1 ,…, q^k/2 b_k / √(k) )/∏_j=1^k ( e^ - |b_j| ^2 j / q^j π) ϕ̃η _* μ_g by (<ref>), ∫_ F( q^1/2 b_1 ,…, q^k/2 b_k / √(k) )/∏_j=1^k ( e^ - |b_j| ^2 j / q^j π) ϕμ_g = ∫_ℂ^k F( q^1/2 b_1 ,…, q^k/2 b_k / √(k) )/∏_j=1^k ( e^ - |b_j| ^2 j / q^j π) ϕ̃η _* μ_g by compatibility of integration with pushforward of measures, and ∫_ F( q^1/2 b_1 ,…, q^k/2 b_k / √(k) )/∏_j=1^k ( e^ - |b_j| ^2 j / q^j π) ϕμ_rm =∫_ F( q^1/2 b_1 ,…, q^k/2 b_k / √(k) )/∏_j=1^k ( e^ - |b_j| ^2 j / q^j π) ϕ̃η _* μ_rm by (<ref>). Combining (<ref>) and (<ref>), we obtain (<ref>). From (<ref>) and (<ref>), we see that to prove (<ref>), it suffices to prove that ∫_ℂ^k F( q^1/2 b_1 ,…, q^k/2 b_k / √(k) )/∏_j=1^k ( e^ - |b_j| ^2 j / q^j π) ϕ̃η _* μ_g = ∫_ F( q^1/2 b_1 ,…, q^k/2 b_k / √(k) )/∏_j=1^k ( e^ - |b_j| ^2 j / q^j π) ϕ̃η _* μ_rm + O (e^- (1/2 - o_N(1)) N^1-βlog (N^1-β)ϕ_2 ) . We will do this by applying (<ref>) to η _* μ_rm and η _* μ_g. We have by <ref> and <ref> ∫_ℂ^k F( q^1/2 b_1 ,…, q^k/2 b_k / √(k) )/∏_j=1^k ( e^ - |b_j| ^2 j / q^j π) ^2 |ϕ̃|^2 η _* μ_g ≪ N^O(1) ∫_ℂ^k |ϕ̃|^2 η _* μ_g = N^O(1)∫_ϕ^2 μ_g= N^O(1)∫_ϕ^2 μ_rm = N^O(1)ϕ and an identical argument, except skipping the <ref> step, gives ∫_ℂ^k F( q^1/2 b_1 ,…, q^k/2 b_k / √(k) )/∏_j=1^k ( e^ - |b_j| ^2 j / q^j π) ^2 |ϕ̃|^2 η _* μ_rm≪ N^O(1)ϕ . Plugging (<ref>), (<ref>), and <ref> into <ref>, we obtain ∫_ℂ^k F( q^1/2 b_1 ,…, q^k/2 b_k / √(k) )/∏_j=1^k ( e^ - |b_j| ^2 j / q^j π) ϕ̃η _* μ_g = ∫_ F( q^1/2 b_1 ,…, q^k/2 b_k / √(k) )/∏_j=1^k ( e^ - |b_j| ^2 j / q^j π) ϕ̃η _* μ_rm + O (N^O(1) e^ -(1/2 - o_N(1)) N^1-βlog (N^1-β)ϕ_2 ) which, absorbing the N^O(1) into the o_N(1), gives (<ref>). For the next two proofs, we observe that for ϕ∈ℂ[c_0,c_1,…, c_0,c_1,…] we have ∫_ϕμ_ch = ∫_U(N)ϕ(L_M) μ_weighted = γ∫_U(N) ϕ(L_M) F( -q^1/2(M),…, -q^k/2 (M^k)/k)/∏_j=1^k ( e^ - (M^j)^2 / j j q^j / π) μ_Haar by the definition (<ref>) of μ_weighted. We have ∫_U(N) ϕ(L_M) F( -q^1/2(M),…, -q^k/2 (M^k)/k)/∏_j=1^k ( e^ - (M^j)^2 / j j q^j / π) μ_Haar = ∫_ϕ F( -q^1/2 b_1 ,…, -q^k/2 b_k / √(k) )/∏_j=1^k ( e^ - b_j^2 j / q^j π) μ_rm = ∫_ϕμ_ep + O (e^ -(1/2 - o_N(1)) N^1-βlog (N^1-β)ϕ_2 ) by (<ref>) and <ref>. Combining (<ref>) and (<ref>) gives exactly the main term and error term of (<ref>) except with an extra factor of γ. Using <ref> to estimate γ, we see that multiplying by γ introduces an additional error term of size ∫_ϕμ_ep O( e^- (1 - o_N(1) )N^1-βlog (N^1-β) ). 
But ∫_ϕμ_ep≤∫_ϕμ_ep = ∫_ϕ F( -q^1/2 b_1 ,…, -q^k/2 b_k / √(k) )/∏_j=1^k ( e^ - b_j^2 j / q^j π) μ_g≤ O ( k^O(1) ) ∫_ϕμ_g≤ O ( k^O(1) ) √(∫_ϕ^2 μ_g)= O ( k^O(1) ) √(∫_ϕ^2 μ_rm)= O( k^O(1)) ϕ_2 by the trivial bound, (<ref>), <ref>, Cauchy-Schwarz, <ref>, and definition, so this error term can be absorbed into the O (e^ -(1/2 - o_N(1)) N^1-βlog (N^1-β)ϕ_2 ) error term, giving (<ref>). Fix ϕ∈ℂ[ c_0, c_1,…, c_0,c_1,…] such that for all ψ∈ℂ[ c_0, c_1,…, c_0,c_1,…] of degree ≤ k we have ∫_U(N)ϕ(L_M) ψ(L_M)μ_Haar =0 . Then for any real-valued ψ∈ℂ[ c_0, c_1,…, c_0,c_1,…] of degree ≤ k, we have ∫_U(N)ϕ(L_M) F( -q^1/2(M),…, -q^k/2 (M^k)/k)/∏_j=1^k ( e^ - (M^j)^2 / j j q^j / π) μ_Haar = ∫_U(N)ϕ(L_M) ( F( -q^1/2(M),…, -q^k/2 (M^k)/k)/∏_j=1^k ( e^ - (M^j)^2 / j j q^j / π) - ψ (L_M) ) μ_Haar + ∫_U (N)ϕ(L_M) ψ(L_M) μ_Haar = ∫_U(N)ϕ(L_M) ( F( -q^1/2(M),…, -q^k/2 (M^k)/k)/∏_j=1^k ( e^ - (M^j)^2 / j j q^j / π) - ψ (L_M) ) μ_Haar +0 ≤ ϕ_2 √(∫_U(N)( F( -q^1/2(M),…, -q^k/2 (M^k)/k)/∏_j=1^k ( e^ - (M^j)^2 / j j q^j / π) - ψ (L_M) )^2 μ_Haar). We furthermore have ∫_U(N)( F( -q^1/2(M),…, -q^k/2 (M^k)/k)/∏_j=1^k ( e^ - (M^j)^2 / j j q^j / π) - ψ (L_M) )^2 μ_Haar = ∫_( F( -q^1/2 b_1 ,…, -q^k/2 b_k/√(k) )/∏_j=1^k ( e^ - b_j^2 j q^j / π) - ψ)^2 μ_rm = ∫_( F( -q^1/2b_1,…, -q^k/2b_k /√(k))/∏_j=1^k ( e^ - b_k^2 j q^j / π) )^2 μ_rm- 2 ∫_ F( -q^1/2 b_1 ,…, -q^k/2 b_k/√(k) )/∏_j=1^k ( e^ -b_j^2 j q^j / π) ψμ_rm+ ∫_ψ^2 μ_rm. We now compare each of these integrals against μ_rm to corresponding integrals against μ_g. First, we have ∫_ψ^2 μ_rm = ∫_ψ^2 μ_g by <ref> since ψ^2 has degree 2k ≤ N. Second, using (<ref>) and then (<ref>) (inputting <ref> and <ref>), we obtain ∫_( F( -q^1/2b_1,…, -q^k/2b_k /√(k))/∏_j=1^k ( e^ - b_k^2 j q^j / π) )^2 μ_rm = ∫_ℂ^k( F( -q^1/2b_1,…, -q^k/2b_k /√(k))/∏_j=1^k ( e^ - b_k^2 j q^j / π) )^2 η _* μ_rm = ∫_ℂ^k( F( -q^1/2b_1,…, -q^k/2b_k /√(k))/∏_j=1^k ( e^ - b_k^2 j q^j / π) )^2 ν_* μ_g +O ( N^O(1) e^ -(1- o_N(1)) N^1-βlog(N^1-β )) = ∫_( F( -q^1/2b_1,…, -q^k/2b_k /√(k))/∏_j=1^k ( e^ - b_k^2 j q^j / π) )^2 μ_g+O ( e^ -(1- o_N(1)) N^1-βlog(N^1-β )) . Third, <ref> gives ∫_ F( -q^1/2 b_1 ,…, -q^k/2 b_k/√(k) )/∏_j=1^k ( e^ -b_j^2 j q^j / π) ψμ_rm = ∫_ F( -q^1/2 b_1 ,…, -q^k/2 b_k/√(k) )/∏_j=1^k ( e^ -b_j^2 j q^j / π) ψμ_g+ O (e^ - (1/2 - o_N(1)) N^1-βlog (N^1-β)ψ_2 ) . Combining (<ref>), (<ref>), and (<ref>), we obtain ∫_( F( -q^1/2 b_1 ,…, -q^k/2 b_k/√(k) )/∏_j=1^k ( e^ - b_j^2 j q^j / π) - ψ)^2 μ_rm = ∫_( F( -q^1/2 b_1 ,…, -q^k/2 b_k/√(k) )/∏_j=1^k ( e^ - b_j^2 j q^j / π) - ψ)^2 μ_g + O ( e^ -(1- o_N(1)) N^1-βlog(N^1-β )) + O (e^ - (1/2 - o_N(1)) N^1-βlog (N^1-β)ψ_2 ). To minimize (<ref>), we should choose ψ to be a good approximation in L^2 to F( -q^1/2 b_1 ,…, -q^k/2 b_k/√(k) )/∏_j=1^k ( e^ - b_j^2 j q^j / π). To do this we follow Corollary <ref> and set ψ = ∑_ a_1,1, …, a_k,2∈ℤ^≥ 0 ∑_n=1^k n (a_n,1 + a_n,2) ≤ k h_a_1,1,…, a_k,2∏_n=1^k He_a_n,1( x_n,1√(2n/q^n)) He_a_n,2(x_n,2√(2n/q^n) ) √(2n/q^n)^ a_n,1+ a_n,2 so that F( -q^1/2 b_1 ,…, -q^k/2 b_k/√(k) )/∏_j=1^k ( e^ - b_j^2 j q^j / π) - ψ = ∑_ a_1,1, …, a_k,2∈ℤ^≥ 0 ∑_n=1^k n (a_n,1 + a_n,2) > k h_a_1,1,…, a_k,2∏_n=1^k He_a_n,1( x_n,1√(2n/q^n)) He_a_n,2(x_n,2√(2n/q^n) ) √(2n/q^n)^ a_n,1+ a_n,2 and then ∫_( F( -q^1/2 b_1 ,…, -q^k/2 b_k/√(k) )/∏_j=1^k ( e^ - b_j^2 j q^j / π) - ψ)^2 μ_g = ∫_( ∑_ a_1,1, …, a_k,2∈ℤ^≥ 0 ∑_n=1^k n (a_n,1 + a_n,2) > k h_a_1,1,…, a_k,2∏_n=1^k He_a_n,1( x_n,1√(2n/q^n)) He_a_n,2(x_n,2√(2n/q^n) ) √(2n/q^n)^ a_n,1+ a_n,2)^2 μ_g = ∑_ a_1,1, …, a_k,2∈ℤ^≥ 0 ∑_n=1^k n (a_n,1 + a_n,2) > k h_a_1,1,…, a_k,2^2 ∏_n=1^k a_n,1! a_n,2! 
( 2n/q^n)^ a_n,1+ a_n,2≪ k^ - q-2/2≪ N^ - βq-2/2 by (<ref>) and <ref>. We also have ψ_2^2= ∫_ψ^2 μ_rm = ∫_ψ^2 μ_g ≤∫_( F( -q^1/2 b_1 ,…, -q^k/2 b_k/√(k) )/∏_j=1^k ( e^ - b_j^2 j q^j / π) )^2 μ_g≤∫_ N^O(1) μ_g = N^O(1). Plugging these into (<ref>), we obtain ∫_( F( -q^1/2 b_1 ,…, -q^k/2 b_k/√(k) )/∏_j=1^k ( e^ - b_j^2 j q^j / π) - ψ)^2 μ_rm = O ( N^ - βq-2/2) + O ( e^ -(1- o_N(1)) N^1-βlog(N^1-β )) + O (e^ - (1/2 - o_N(1)) N^1-βlog (N^1-β)N^O(1) ) = O ( N^ - βq-2/2) as the polynomial error term dominates the exponential ones. Plugging this into (<ref>) and using (<ref>) and the fact from <ref> that γ=O(1), this gives (<ref>). § REPRESENTATIONS OF THE UNITARY GROUP AND MOMENTS This section is devoted to the proof of <ref>. We begin in <ref> by describing the relationship between irreducible representations and polynomials in ℂ[c_0,c_1,…, c_0,c_1,…], and between highest weights of representations and the degrees of polynomials. Using this, in <ref> we will prove <ref>, a variant of <ref> with the same error term but a main term expressed very differently in terms of a sum over irreducible representations. The proof of <ref> is very general and should apply with minimal modification beyond moments to other statistics of L-functions such as zero densities and ratios. The next steps are to give in <ref> an explicit expression, avoiding the language of representation theory, for the main term in <ref>, and to compare the main term of <ref> with the main term of <ref>, leading to a proof of <ref> at the end of this section. It would be possible to avoid the language of representation theory entirely, only writing down explicit polynomials and using the Weyl integration formula, but doing this would make our calculations less motivated. §.§ Preliminaries on representations and polynomials Irreducible representations of the unitary group U(N) are classified by their highest weight, a nonincreasing N-tuple of integers. An irreducible representation of GL_N(ℂ) has highest weight ω_1,…,ω_N if and only if it contains a vector on which upper-triangular unipotent matrices act trivially and on which the diagonal matrix with diagonal entries λ_1,…,λ_N acts by multiplication by ∏_ℓ=1^N λ_ℓ^ω_ℓ. An irreducible representation of U(N) has highest weight ω_1,…,ω_N if and only if it extends to a representation of GL_N(ℂ) with highest weight ω_1,…,ω_N. We denote the highest weight of V by (V). We say the norm of the highest weight ω_1,…,ω_N is ∑_ℓ=1^N ω_ℓ. The norm of (V) is denoted by (V). In this subsection, we will check that polynomials of degree ≤ k in ℂ[c_0,c_1,…, c_0,c_1,…] may be expressed as linear combinations of the characters of irreducible representations with highest weight of norm ≤ k and, conversely, characters of such irreducible representations may be expressed as low-degree polynomials. It follows that characters of irreducible representations with highest weight of norm >k are orthogonal to all low-degree polynomials, an important criterion for applying Theorem <ref>. Let V_1 and V_2 be irreducible representations of U(N). Then V_1⊗ V_2 is a sum of irreducible representations of U(N) with highest weights of norms ≤(V_1) + (V_2). If V_1 ⊗ V_2 contains an irreducible summand with highest weight ω_1,…, ω_N then it contains an eigenvector of the diagonal torus with weights ω_1,…, ω_N. Since V_1 and V_2 split as sums of eigenspaces of the diagonal torus, this is only possible if V_1 and V_2 each contain an eigenvector whose weights sum to ω_1,…, ω_N. 
By linearity of the norm, it suffices to prove that the weights of eigenvectors of V_j have norm at most (V_j). If this were not so, since the set of weights is S_N-invariant, there would have to be a vector whose weight had greater norm with weights in decreasing order, which could not be a sum of the highest weights and negative roots, contradicting the fact that the representation is generated by the highest weight under the negative roots. For M ∈ U(N) the definition (<ref>) of L_M implies that L_M (s) = ∑_d=0^n (-1)^n (M, ∧^d ) q^d (1/2-s ) and L_M(s)= ∑_d=0^n (-1)^n (M, ∧^d ^∨ ) q^d( 1/2 -s). For ϕ∈ℂ[c_0,c_1,…, c_0,c_1,…], there exists a finite set of irreducible representations V_o and coefficients κ_o such that for all M∈ U(N) we have ϕ(L_M) = ∑_o κ_o (M, V_o) and if ϕ has degree ≤ d then we can assume that (V_o)≤ d for all o. Since all polynomials are linear combinations of monomials, it suffices to prove this for monomials. We first check for the coefficients of L_M and their complex conjugates. The coefficient of q^-ds is (-q^1/2)^d (M, ∧^d ) and its complex conjugate is (-q^1/2)^d (M, ∧^d ^∨ ) by (<ref>) and (<ref>). The highest weight of ∧^d has d ones followed by N-d zeroes, while the highest weight of ∧^d ^∨ has N-d zeroes followed by d negative ones, and both of these have norm d. Any monomial is a product of c_ds and c_ds, hence equal to a constant multiple of a product of traces. The product of traces is the trace of the tensor product, which is the sum of the traces on the irreducible summands of the tensor product. By <ref>, these all have weights with norms bounded by the degree of the monomial. Let ϕ∈ℂ[c_0,c_1,…, c_0,c_1,…] be a polynomial and V an irreducible representation of U(N) such that for all M∈ U(N), ϕ(L_M) = (M, V) . If (V)>k then for all polynomials ψ∈ℂ[c_0,c_1,…, c_0,c_1,…] of degree ≤ k we have ∫_U(N)ϕ(L_M) ψ(L_M)μ_Haar=0. We apply <ref> to ψ to obtain ∫_U(N)ϕ(L_M) ψ(L_M)μ_Haar= ∫_U(N)(M,V) ∑_o κ_o (M, V_o) μ_Haar=∑_o κ_o ∫_U(N)(M,V) (M, V_o) μ_Haar = 0 by orthogonality of characters, since V cannot be among the V_o as (V)> k ≥(V_o) for all o. Note that the conclusion of <ref> is the assumption (<ref>) of <ref>. To obtain a supply of polynomials to which we can apply <ref>, it suffices to find polynomials ϕ satisfying the hypothesis ϕ(L_M) = (M, V) of <ref>. We can do this using the Jacobi-Trudi identity for Schur polynomials. We always take ∧^d = ∧^d ^∨=0 if d∉[0,N]. For a power series L in q^-s and arbitrary integer d, let c_d(L) be the coefficient of q^-ds in L, so that c_d=0 for d<0. Let V be a representation of U(N) with highest weight ω_1,…,ω_N. Let a,b ∈ℤ^≥ 0 be integers satisfying a ≥ω_1 and ω_n ≥ -b. Then * (M,V ⊗^b) is the determinant of the a+b × a+b matrix whose ijth entry is (M, ∧^#{ℓ|ω_ℓ≥ i-b } +j-i ). * Let d_ij = #{ℓ|ω_ℓ < i-b } + i - j for i≤ b and d_ij = #{ℓ|ω_ℓ≥ i-b } +j-i for i > b. Then (M,V ) is the determinant of the a+b × a+b matrix whose ijth entry is (M, ∧^ d_ij) for i>b and (M, ∧^d_ij^∨ ) for j>b. * (M,V ) is the determinant of the a+b × a+b matrix whose ijth entry is (-q^-1/2)^d_ij c_d_ij (L_M) for i>b and (-q^-1/2)^d_ijc_d_ij (L_M) for i ≤ b. * The determinant of the a+b × a+b matrix whose ijth entry is (-q^-1/2)^d_ij c_d_ij for i>b and (-q^-1/2)^d_ij c_d_ij for i ≤ b is a polynomial of degree at most the norm of ω_1,…, ω_n. 
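Before the proof, the following small numerical check (not part of the argument) may help orient the reader: for one random choice of unitary eigenvalues and one small highest weight, taken with b = 0, it verifies that the Schur polynomial computed from the bialternant formula agrees with the determinant of elementary symmetric polynomials, i.e. of traces of exterior powers, appearing in part (1). The partition and matrix size below are arbitrary, and NumPy is assumed.

```python
import numpy as np
from itertools import combinations

def elementary(xs, d):
    """e_d(xs), the trace of the d-th exterior power for a matrix with eigenvalues xs."""
    n = len(xs)
    if d < 0 or d > n:
        return 0.0
    if d == 0:
        return 1.0
    return sum(np.prod([xs[i] for i in c]) for c in combinations(range(n), d))

def schur_bialternant(xs, lam):
    """Schur polynomial s_lam(xs) via det(x_i^{lam_j + n - j}) / det(x_i^{n - j})."""
    n = len(xs)
    lam = list(lam) + [0] * (n - len(lam))
    num = np.array([[x ** (lam[j] + n - 1 - j) for j in range(n)] for x in xs])
    den = np.array([[x ** (n - 1 - j) for j in range(n)] for x in xs])
    return np.linalg.det(num) / np.linalg.det(den)

def schur_jacobi_trudi(xs, lam):
    """The same Schur polynomial via det(e_{lam'_i - i + j}), as in part (1) with b = 0."""
    conj = [sum(1 for p in lam if p >= i) for i in range(1, lam[0] + 1)]
    m = len(conj)
    M = np.array([[elementary(xs, conj[i] - i + j) for j in range(m)] for i in range(m)],
                 dtype=complex)
    return np.linalg.det(M)

rng = np.random.default_rng(1)
xs = np.exp(2j * np.pi * rng.random(4))   # eigenvalues of a 4 x 4 unitary matrix
lam = (3, 1, 1)                           # a small partition / highest weight
print(schur_bialternant(xs, lam))         # the two numbers should agree
print(schur_jacobi_trudi(xs, lam))
```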
For part (1), if we let _1,…, _N be the eigenvalues of M, then (M,V ⊗^b) is the Schur polynomial in _1,…, _N associated to the partition (ω_1+b,…, ω_N+b) and (M, ∧^#{ℓ|ω_ℓ≥ i-b } +j-i ) is the #{ℓ|ω_ℓ≥ i-b } +j-ith elementary symmetric polynomial in _1,…, _N. The claim is then a statement of the Jacobi-Trudi identity for Schur polynomials <cit.>. For part (2), we have (M,V) = (M,V, ⊗^b) (M)^-b. We take the formula of part (1) and multiply the first b rows by (M)^-1 to multiply the determinant by (M)^-b. This fixes the ij entry for i>b and changes the ij entry for i ≤ b to (M, ∧^#{ℓ|ω_ℓ≥ i-b } +j-i ) (M)^-1 =(M, ∧^#{ℓ|ω_ℓ≥ i-b } +j-i ⊗^-1) =(M, ∧^ N- (#{ℓ|ω_ℓ≥ i-b } +j-i )^∨ ) = (M, ∧^#{ℓ|ω_ℓ < i-b } +i-j)^∨ ) = (M, ∧^d_ij^∨ ). Part (3) follows from part (2) when we observe that (M, ∧^ d_ij)= (-q^-1/2)^d_ij c_d_ij (L_M) because of (<ref>) and (M, ∧^ d_ij^∨ )= (-q^-1/2)^d_ij c_d_ij (L_M) because of (<ref>). For part (4), let d(i) = #{ℓ|ω_ℓ < i-b } if i ≤ b #{ℓ|ω_ℓ≥ i-b } if i>b and let v(i) = 2b -i if i ≤ b i if i>b . Then if i,j> b we have v(j)-v(i)=j-i, if j≤ b < i we have v(j)-v(i) = 2b-j -i ≥ j-i, if i,j ≤ b we have v(j)-v(i) = i-j, and if i ≤ b < j we have v(j)-v(i)= j + i-2b ≥ i-j so in all cases we have d(i) + v(j)-v(i) ≥ d_ij . The ij-entry is a polynomial of degree d_ij. This implies the determinant has degree ≤∑_i=1^a+b d(i) since when we calculate the degree of each term in the Leibniz expansion, the v(j) and v(i) cancel. Finally ∑_i=1^a+b d(i) is the norm of ω_1,…, ω_N since if ω_ℓ≥ 0 then ω_ℓ contributes to d(b+1),…, d(b+ ω_ℓ) and thus contributes ω_ℓ to the sum while if ω_ℓ≤ 0 then ω_ℓ contributes to d(b+1+ω_ℓ),…, d(b) and thus contributes -ω_ℓ to the sum. §.§ Handling the error term Fix ,, and N. There exist coefficients κ_o (depending on s_1,…, s_+) and irreducible representations V_o such that for each M∈ U(N) we have ∏_i=1^ L_M(s_i ) ∏_i=+1^ + L _M( s_i ) = ∑_o κ_o (M, V_o). By (<ref>) and (<ref>), L_M(s) and L_M(s) may be expressed as complex-linear combinations of characters of U(N). Multiplying these expressions, it follows that ∏_i=1^ L_M(s_i ) ∏_i=+1^ + L _M( s_i ) is a complex-linear combinations of characters of U(N). Using <ref>(3), we associate to each V_o a polynomial ψ_o∈ℂ[c_0, c_1,…, c_0,c_1,…] such that ψ_o (L_M) = (M, V_o). This gives ∏_i=1^ L(s_i ) ∏_i=+1^ + L ( s_i ) = ∑_o κ_o ψ_o(L) as long as L= L_M for some M∈ U(N). We define ϕ_lf = ∑_ o , (V_o)≤ kκ_o ψ_o and ϕ_hf = ∑_ o , (V_o)> kκ_oψ_o. Note that all implicit constants in big O notation used in this section will be allowed to depend on ,,q but not on N (since the final goal is to prove <ref>, an estimate whose error term depends on ,,q but not on N). Both ϕ_hf_2 and ϕ_lf_2 are O( N ^(+)^2 /2). We have ϕ_hf + ϕ_lf^2 = ∫_U(N)ϕ_hf(L_M) + ϕ_lf(L_M) ^2 μ_Haar = ∫_U(N)∏_i=1^ L_M(s_i ) ∏_i=+1^ + L _M( s_i ) ^2 μ_Haar = ∏_i=1^+( ∫_U(N)L_M(s_i)^2(+)μ_Haar)^1/(+) = ∫_U(N)L_M(1/2)^2(+)μ_Haar = O( N ^ (+)^2 ) by Hölder's inequality, the invariance under translation by diagonal matrices of Haar measure on U(N), and the classical calculation of the moments of the characteristic polynomial of random unitary matrices. By <ref> and <ref>(4) we have ∫_U(N)ϕ_hf(L_M) ϕ_lf(L_M)μ_Haar=0, i.e. ϕ_lf and ϕ_hf are orthogonal, so ϕ_hf_2^2+ ϕ_lf_2^2 = ϕ_hf + ϕ_lf^2 and thus the indvidual norms are bounded as well. Assume that q>11. Let and be nonnegative integers and s_1,…, s_ + be complex numbers with real part 1/2. We have ∫_( ∏_i=1^ L(s_i ) ∏_i=+1^ + L ( s_i ) ) μ_ch = ∫_ℂ[[q^-s ]]ϕ_lfμ_ep + O ( N^(+)^2/2 -βq-2/4). 
From (<ref>) and the definitions of ϕ_lf and ϕ_hf we have ∫_( ∏_i=1^ L(s_i ) ∏_i=+1^ + L ( s_i ) ) μ_ch = ∫_∑_o κ_o ψ_o μ_ch = ∫_ℂ[[q^-s ]] (ϕ_lf +ϕ_hf ) μ_ch =∫_ℂ[[q^-s ]]ϕ_lfμ_ch + ∫_ℂ[[q^-s ]]ϕ_hfμ_ch. To ∫_ℂ[[q^-s ]]ϕ_lfμ_ch we apply <ref>, using <ref>(4) to check the hypothesis, and to ∫_ℂ[[q^-s ]]ϕ_hfμ_ch we apply <ref>, using <ref> to check the hypothesis (<ref>). From these results and (<ref>) we obtain ∫_( ∏_i=1^ L(s_i ) ∏_i=+1^ + L ( s_i ) ) μ_ch = ∫_ℂ[[q^-s ]]ϕ_lfμ_ep + O (e^ (1/2 - o_N(1)) N^1-βlog (N^1-β)ϕ_lf_2 )+ O ( N^- βq-2/4ϕ_hf_2 ) . Since e^ (1/2 - o_N(1)) N^1-βlog (N^1-β) is bounded by N^- βq-2/4, <ref> together with (<ref>) gives (<ref>). §.§ Comparing the main terms To prove <ref>, it remains to compare ∫_ℂ[[q^-s ]]ϕ_lfμ_ep to MT^,_N(s_1,…, s_+). To begin, we will calculate ϕ_lf more precisely, which requires making the representations V_o and coefficients κ_o appearing in (<ref>) explicit. We also make use of the change of variables α_i = s_i- 1/2, so in particular q^1/2-s_i= q^-α_i and α_i is imaginary so α_i=-α_i. Our calculation will culminate in the formula (<ref>) which expresses ∫_ℂ[[q^-s ]]ϕ_lfμ_ep using a sum over polynomials ψ_𝐞 against coefficients κ_𝐞 indexed by certain tuples of integers 𝐞. To motivate the definitions of κ_𝐞 and ψ_𝐞 the proof will proceed in steps. After proving (<ref>), we will equate a certain longer sum to MT^,_N. The difference between this longer sum and the original introduces a secondary error term which we will also bound. To prove (<ref>), we use the method of Bump and Gamburd <cit.>, i.e. we apply the Cauchy identity for Schur functions to express the desired moment as a sum of products of pairs of Schur functions. One Schur function in each pair will beocme κ_𝐞 and the other will become ψ_𝐞. This method was originally used to calculate expectations of products of the characteristic polynomial of a unitary matrix against Haar measure, but here we apply it (together with other tools) to calculate the expectation againt a non-uniform measure. If _1(M),…, _N(M) are the eigenvalues of M then L_M(s_i) = ( I - q^ -α_i M ) = ∏_ℓ=1^N ( 1- q^-α_i _ℓ(M)) while L_M(s_i) = ( I - q^ -α_i M ) =(-1)^N q^ N α_i ( M)^-1( I - q^α_i M^-1 ) = (-1)^N q^ N α_i ( M)^-1( I - q^-α_i M ) = (-1)^N q^ N α_i ( M)^-1∏_ℓ=1^N ( 1 -q^-α_i _N(M)) so ∏_i=1^ L_M(s_i ) ∏_i=+1^ + L _M( s_i ) = (-1)^N ( M)^-∏_i=+1^+ q^ N α_i ∏_i=1^+∏_ℓ=1^N ( 1 -q^-α_i _N(M)) . The Cauchy identity for Schur functions gives ∏_i=1^+∏_ℓ=1^N ( 1 -q^-α_i _N(M)) = ∑_ρ s_ρ ( q^-α_1,…, q^-α_+ ) s_ρ' ( _1(M),…, _N(M)) where ρ denotes a partition, s_ρ the Schur function associated to that partition, ρ' the dual partition, and s_ρ' the corresponding Schur function. The +-variable Schur function s_ρ vanishes unless ρ has at most + parts and the N-variable Schur function s_ρ' vanishes unless all parts of ρ have size at most ≤ N. Partitions satisfying both of these can be equivalently expressed as tuples e_1,…, e_+ of integers satisfying N ≥ e_1 ≥…≥ e_+≥ 0, giving ∏_i=1^+∏_ℓ=1^N ( 1 -q^α_i_N(M)) =∑_ e_1,… e_+∈ℤ N ≥ e_1 ≥…≥ e_+≥ 0 s_(e_1,…, e_+) ( q^-α_1,…, q^-α_+ ) s_ (e_1,…, e_+)' ( _1(M),…, _N(M) ) and thus ∏_i=1^ L_M(s_i ) ∏_i=+1^ + L _M( s_i ) = ∑_ e_1,… e_+∈ℤ N ≥ e_1 ≥…≥ e_+≥ 0 ( (-1)^N ∏_i=+1^+ q^ Nα_i s_(e_1,…, e_+) ( q^-α_1,…, q^-α_+ ) ) ( ( M)^- s_ (e_1,…, e_+)' ( _1(M),…, _N(M) )) . 
Now the significance of this expression is that s_ (e_1,…, e_+)' ( _1(M),…, _N(M) ) is the trace of M acting on the irreducible representation of U(N) with highest weight (e_1,…, e_+)' so that ( M)^- s_ (e_1,…, e_+)' ( _1(M),…, _N(M) ) is the trace of M acting on the irreducible representation of U(N) with highest weight obtained from (e_1,…, e_+)' by subtracting from each entry. We refer to this representation as V_𝐞. These representations V_𝐞 have distinct highest weights, and thus are not isomorphic, for distinct V_𝐞. Thus the expression ∏_i=1^ L_M(s_i ) ∏_i=+1^ + L _M( s_i ) = ∑_ e_1,… e_+∈ℤ N ≥ e_1 ≥…≥ e_+≥ 0 ( (-1)^N ∏_i=+1^+ q^ Nα_i s_(e_1,…, e_+) ( q^-α_1,…, q^-α_+) ) (M, V_𝐞) is a precise form of (<ref>). We now let ψ_𝐞 be the determinant of the +× + matrix whose ijth entry is (-q^-1/2)^ e_i + j-i c_ e_i+j-i for i> and (-q^-1/2)^ N-e_i + i - j c_ N-e_i +i -j for i ≤. * ψ_𝐞 is the polynomial associated to V_𝐞 by <ref>(3) with a= and b=. * The norm of the highest weight of V_𝐞 is ∑_i=1^ (N-e_i) + ∑_i=+1^+ e_i. (e_1,…, e_+)' is the vector consisting of e_+ copies of +, e_i-e_i+1 copies of i for all i from +-1 to 1, and N-e_1 copies of 0. Subtracting from each entry gives the vector ω_1,…, ω_N with e_+ copies of , e_i-e_i+1 copies of i- for all i from +-1 to 1, and N-e_1 copies of -. It follows that #{ℓ|ω_ℓ < i-} is N- e_i and #{ℓ|ω_ℓ≥ i-} is e_i. With notation as in <ref>(2), this implies d_ij= e_i +j-i for i> and d_ij= N-e_i+i-j for i≤, which plugged into <ref>(3) gives ψ_𝐞. The norm of ω_1,…,ω_N is e_+r + ∑_i=1^+-1 (e_i -e_i+1 ) i - + (N-e_1) = ∑_i=1^+ e_i (i- - i--1) + N = ∑_i=1^ (N-e_i) + ∑_i=+1^+ e_i by a telescoping sum. We also let κ_𝐞 = (-1)^N ∑_σ∈ S_+(σ) ∏_i=1^+ q^ - (e_i ++- i) α_σ(i) /∏_1 ≤ i_1 < i_2 ≤+ (q^-α_i_1 - q^ - α_i_2 ) ∏_i=+1^+ q^ Nα_i . That κ_𝐞 = (-1)^N s_(e_1,…, e_+) ( q^-α_1,…, q^-α_+) ∏_i=+1^+q^ Nα_i follows from the definition of Schur polynomials (or, defining them in terms of irreducible representations, from the Weyl character formula). (<ref>), <ref>, and (<ref>) imply that ϕ_lf = ∑_ e_1,… e_+∈ℤ N ≥ e_1 ≥…≥ e_+≥ 0 ∑_i=1^ (N - e_i) + ∑_i=+1^+ e_i ≤ k κ_𝐞ψ_𝐞 and therefore ∫_ϕ_lfμ_ep = ∑_ e_1,… e_+∈ℤ N ≥ e_1 ≥…≥ e_+≥ 0 ∑_i=1^ (N - e_i) + ∑_i=+1^+ e_i ≤ k κ_𝐞∫_ψ_𝐞μ_ep. However, we have defined both κ_𝐞 and ϕ_𝐞 to make sense for an arbitrary tuple of integers 𝐞, not the ones where the Schur polynomials are defined. We will use this flexibility to extend the range of summation, which will allow us to compare this sum to MT_N^,, which is similarly formed by extending a sum beyond the obviously appropriate range. Observe first that (<ref>) implies that ∫_ϕ_lfμ_ep = ∑_ e_1,… e_+∈ℤ N ≥ e_1≥… e_ e_+1≥… e_+≥ 0 ∑_i=1^ (N - e_i) + ∑_i=+1^+ e_i ≤ k κ_𝐞∫_ψ_𝐞μ_ep since the sole condition appearing in (<ref>) but not (<ref>) is e_≥ e_+1, but this is implied by the other conditions of (<ref>) since e_ - e_+1 = N - (N-e_) - e_+1≥ N -k ≥ 0. Our next two goals will be to check that ∑_ e_1,… e_+∈ℤ N ≥ e_1≥… e_ e_+1≥… e_+≥ 0 κ_𝐞∫_ψ_𝐞μ_ep = MT_N^, and ∑_ e_1,… e_+∈ℤ N ≥ e_1≥… e_ e_+1≥… e_+≥ 0 ∑_i=1^ (N - e_i) + ∑_i=+1^+ e_i > k κ_𝐞∫_ψ_𝐞μ_ep = O (k^ O(1) q^ - k/4 ). For both of these, we begin by evaluating ∫_ψ_𝐞μ_ep explicitly in terms of polynomials in 𝔽_q[u]. This evaluation in Lemma <ref> will let us prove a bound for ∫_ψ_𝐞μ_ep in Lemma <ref>. This bound will be used to prove (<ref>) and then both the evaluation and the bound will be used to prove (<ref>). 
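The Cauchy identity invoked earlier in this subsection can likewise be checked numerically in the smallest nontrivial case. The sketch below (again not part of the argument) verifies the dual Cauchy identity ∏_{i,j} (1 + x_i y_j) = ∑_λ s_λ(x) s_{λ'}(y) for two x- and two y-variables, where the sum runs exactly over the six partitions fitting in a 2 × 2 box; in the application above the signs are absorbed into the variables.

```python
import numpy as np

def schur2(lam, x1, x2):
    """s_(l1,l2)(x1, x2) from the two-variable bialternant formula."""
    l1, l2 = lam
    return (x1 ** (l1 + 1) * x2 ** l2 - x2 ** (l1 + 1) * x1 ** l2) / (x1 - x2)

def conjugate(lam):
    """Conjugate partition, padded to two parts (enough for partitions in a 2x2 box)."""
    c = [sum(1 for p in lam if p >= i) for i in range(1, lam[0] + 1)]
    return tuple((c + [0, 0])[:2])

rng = np.random.default_rng(0)
x1, x2, y1, y2 = rng.standard_normal(4)
lhs = (1 + x1 * y1) * (1 + x1 * y2) * (1 + x2 * y1) * (1 + x2 * y2)
box = [(0, 0), (1, 0), (1, 1), (2, 0), (2, 1), (2, 2)]
rhs = sum(schur2(lam, x1, x2) * schur2(conjugate(lam), y1, y2) for lam in box)
print(lhs, rhs)   # should agree up to floating-point error
```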
For L= ∑_d=0^∞ c_d q^-d s, we have an identity of formal Laurent series in q^-α_1,… ,q^-α_+ ∑_e_1,…, e_+∈ℤψ_𝐞∏_i=1^ (-q^α_i)^-e_i∏_i=+1^+ (-q^-α_i)^e_i = ∏_1 ≤ i_1< i_2 ≤+ (q^α_i_1 - q^α_i_2) ∏_i=1^( (-q^α_i)^1-i-NL (q^- 1/2 - α_i)) ∏_i=+1^+ ((-q^-α_i)^i-1L ( q^- 1/2 - α_i ) ). Consider the +×+ matrix whose ijth entry is ∑_e_i ∈ℤ (-q^-1/2)^ e_i + j-i c_ e_i+j-i(L) (- q^-α_i)^e_i = (-q^-α_i)^i-j L ( q^- 1/2 - α_i ) for i> and ∑_e_i ∈ℤ (-q^-1/2)^ N-e_i + i - j c_ N-e_i +i -j (-q^α_i)^- e_i =(-q^α_i)^j-i-NL (q^- 1/2 - α_i) for i ≤. By additivity of determinants in each row, the determinant of this matrix is (<ref>). On the other hand, removing a factor of (-q^-α_i)^i-1L ( q^- 1/2 - α_i ) from the ith row for i> and (-q^α_i)^1-i-NL (q^- 1/2 - α_i) from the i'th row for i≤, we obtain the matrix whose ij-entry is ( -q^α_i)^j-1 for all i,j, which is a Vandermonde matrix and thus has determinant ∏_1 ≤ i_1< i_2 ≤+ ( - q^α_i_2 - (- q^α_i-1)) = ∏_1 ≤ i_1< i_2 ≤+ (q^α_i_1 - q^α_i_2) so by the compatibility of determinants with scalar multiplication of rows, the determinant is also (<ref>). We have an identity of formal Laurent series in q^-α_1,… ,q^-α_+ ∫_∏_i=1^L (q^- 1/2 - α_i) ∏_i=+1^+ L ( q^- 1/2 - α_i ) μ_ep = ∑_f_1,…, f_+∈𝔽_q[u]^+ ∏_i=1^ f_i =∏_i=+1^+ f_i ∏_i=1^f_i^-1/2+α_i∏_i=+1^+f_i^-1/2-α_i where the integration is applied separately to each term in the formal Laurent series. Expanding out we have ∏_i=1^L_ξ (q^- 1/2 + α_i) ∏_i=+1^+ L_ξ ( q^- 1/2 - α_i ) = ∑_f_1,…, f_+∈𝔽_q[u]^+ ∏_i=1^(ξ(f_i)f_i^-1/2+α_i )∏_i=+1^+( ξ(f_i) f_i^-1/2-α_i) and orthogonality of characters on a product of circle groups gives 𝔼[ ∏_i=1^ξ(f_i)∏_i=+1^+ξ(f_i)] = 1 if ∏_i=1^ f_i =∏_i=+1^+ f_i 0 otherwise. Taking the expectation of (<ref>) over ξ and then plugging in (<ref>) gives (<ref>). For each e_1,…, e_+∈ℤ, ∫_ψ_𝐞μ_ep is (-1)^∑_i=1^+ e_i + +2 +N times the coefficient of q^∑_i=1^(i-1+N-e_i) α_i - ∑_i=+1^+ (d_i-i+1)α_i in ∏_1 ≤ i_1< i_2 ≤+ (q^α_i_1 - q^α_i_2) ∑_f_1,…, f_+∈𝔽_q[u]^+ ∏_i=1^ f_i =∏_i=+1^+ f_i ∏_i=1^f_i^-1/2+α_i∏_i=+1^+f_i^-1/2-α_i. Integrating Lemma <ref> against μ_ep and then plugging in (<ref>) gives ∑_e_1,…, e_+∈ℤ∫_ψ_𝐞μ_ep∏_i=1^ (-q^α_i)^-e_i∏_i=+1^+ (-q^-α_i)^e_i = ∏_1 ≤ i_1< i_2 ≤+ (q^α_i_1 - q^α_i_2) ∏_i=1^ (-q^α_i)^1-i-N∏_i=+1^+ (-q^-α_i)^i-1∑_f_1,…, f_+∈𝔽_q[u]^+ ∏_i=1^ f_i =∏_i=+1^+ f_i ∏_i=1^f_i^-1/2+α_i∏_i=+1^+f_i^-1/2-α_i . Moving some factors to the left-hand side, we obtain ∑_e_1,…, e_+∈ℤ∫_ψ_𝐞μ_ep∏_i=1^ (-q^α_i)^i-1+N-e_i∏_i=+1^+ (-q^-α_i)^e_i-i+1 = ∏_1 ≤ i_1< i_2 ≤+ (q^α_i_1 - q^α_i_2) ∑_f_1,…, f_+∈𝔽_q[u]^+ ∏_i=1^ f_i =∏_i=+1^+ f_i ∏_i=1^f_i^-1/2+α_i∏_i=+1^+f_i^-1/2-α_i . Extracting the coefficient of a single term and grouping together all the powers of (-1), we obtain the statement. We have ∫_ψ_𝐞μ_ep = O ( (O(1) + ∑_i=1^ (N-e_i) + ∑_i=+1^+ e_i)^O(1) q^- max ( ∑_i=1^ (N-e_i), ∑_i=+1^+ e_i /2)). Furthermore, ∫_ψ_𝐞μ_ep =0 unless ∑_i=1^(i-1+N-e_i) - ∑_i=+1^+ (e_i-i+1) = +2. We will apply bounds in <cit.> for the coefficients of the series M_S, defined in <cit.> as M_S (α_1,…,α_+) = ∏_1 ≤ i_1< i_2 ≤+ (q^α_i_1 - q^α_i_2) ∑_ f_1,…, f_+∈𝔽_q[u]^+ ∏_i ∈ S f_i / ∏_i∉ S f_i ∈ u^ℤ∏_i ∈ Sf_i ^ -1/2 + α_i∏_i∉ Sf_i ^-1/2 - α_i . Taking S = {1,…,}, this definition specializes to M_{1,…,}(α_1,…,α_+) = ∏_1 ≤ i_1< i_2 ≤+ (q^α_i_1 - q^α_i_2) ∑_ f_1,…, f_+∈𝔽_q[u]^+ ∏_i =1^ f_i / ∏_i=+1^+ f_i ∈ u^ℤ∏_i=1^f_i^-1/2+α_i∏_i=+1^+f_i^-1/2-α_i . (<ref>) and (<ref>) agree except that the condition ∏_i =1^ f_i / ∏_i=+1^+ f_i ∈ u^ℤ in (<ref>) is laxer than the condition ∏_i =1^ f_i = ∏_i=+1^+ f_i in (<ref>). 
If f_1,…, f_+ satisfy ∏_i =1^ f_i / ∏_i=+1^+ f_i = u^n for n∈ℤ, then ∑_i=1^ f_i = ∑_i=+1^+ f_i +n, so ∏_i=1^f_i^-1/2+α_i∏_i=+1^+f_i^-1/2-α_i is a monomial in q^α_1,…, q^α_+ of total degree n. Since the Vandermonde ∏_1 ≤ i_1< i_2 ≤+ (q^α_i_1 - q^α_i_2) has total degree +2 in q^α_1,…, q^α_+, this implies that (f_1,…, f_+) contributes to terms in the formal Laurent series (<ref>) with total degree n + +2. Thus, restricting to f_1,…, f_+ satisfying ∏_i =1^ f_i = ∏_i=+1^+ f_i, i.e., restricting to the case n=0, is equivalent to restricting the series (<ref>) to terms of total degree +2. In particular, since ∫_ψ_𝐞μ_ep is by <ref> ± the coefficient of q^∑_i=1^(i-1+N-e_i)α_i - ∑_i=+1^+ (d_i- i+ 1)α_i in (<ref>), it follows that ∫_ψ_𝐞μ_ep is either ± the coefficient of q^∑_i=1^(i-1+N-e_i)α_i - ∑_i=+1^+ (e_i-i+1 )α_i in (<ref>) or equal to zero. Hence any upper bound on the coefficients of (<ref>) also gives an upper bound on ∫_ψ_𝐞μ_ep, which we will shortly use to establish the first part of the statement. Furthermore, the case where ∫_ψ_𝐞μ_ep =0 occurs when the total degree ∑_i=1^(i-1+N-e_i) - ∑_i=+1^+ (e_i-i+1) of q^∑_i=1^(i-1+N-e_i)α_i - ∑_i=+1^+ (e_i-i+1 )α_i is not equal to +2, giving the second part of the statement. We apply the upper bound <cit.>, which is stated as an upper bound on the coefficient of ∏_i=1^+ q^α_i d_i, so we must substitute in ( N-e_1 ,…, N- e_ + -1 , - e_+1+, …, -e_+ + + -1) for (d_1,…,d_+) in the bounds of <cit.>. (Also, d_1,…, d_+ are required to satisfy the inequalities of <cit.>, but <cit.> guarantees the coefficient vanishes if the inequality is not satisfied, so the bound holds also in that case.) The bound of <cit.>, is a product of two factors, the first of which is O ( ( O(1) + ∑_i ∈ S d_i - ∑_i∉ S d_i)^O(1) ) = O ( ( O(1) + ∑_i =1^ d_i - ∑_i=+1^+ d_i)^O(1) ) and substituting gives O ((O(1) + ∑_i=1^ (N-e_i) + ∑_i=+1^+ e_i)^O(1)) since the i-1 terms may be absorbed into the O(1). The second factor is the minimum of four different bounds, of which we will only need the middle two, which are min( q ^ -∑_i∈ S d_i +S2/2, q^∑_i∉ S d_i + S2 - +2/2) = min( q ^- -∑_i=1^ d_i + 2/2, q^∑_i=+1^+ d_i + 2 - +2/2) and substituting gives min( q ^ -∑_i=1^ (N-e_i )/2, q^ - ∑_i=+1^+ e_i /2) =q^ - max ( ∑_i=1^ (N-e_i), ∑_i=+1^+ e_i )/2 since the - ∑_i=1^ (i-1) term cancels 2 in the exponent of the first q and ∑_i=+1^+ (i-1) cancels 2 - +2 in the exponent of the second q. We have ∑_ e_1,… e_+∈ℤ N ≥ e_1≥… e_ e_+1≥… e_+≥ 0 ∑_i=1^ (N - e_i) + ∑_i=+1^+ e_i > k κ_𝐞∫_ψ_𝐞μ_ep = O ( N^ O(1) q^ - k/4 ). We have κ_𝐞 = ∑_σ∈ S_+(σ) ∏_i=1^+ q^ - (e_i + + - i) α_σ(i) /∏_1 ≤ i_1 < i_2 ≤+ (q^-α_i_1 - q^ - α_i_2 ) ≤∏_ 1≤ i_1 < i_2 ≤ + e_i_1 -i_1 - e_i_2 + i_2/∏_ 1≤ i_1 < i_2 ≤ +i_1- i_2 since the value of the Weyl character formula for the trace of the unitary representation at a unitary matrix with eigenvalues q^ -α_1,… ,q^-α_+ is bounded by the dimension of that representation which is given by the Weyl dimension formula. (Even if e_1 ≥ e_2 ≥… e_+ is not satisfied, the Weyl character formula still gives a formula for the either plus or minus the trace of some irreducible representation, or the zero representation, and the Weyl dimension formula gives plus or minus the dimension of that representation so the absolute value of the dimension formula still bounds the absolute value of the character formula.) 
If N ≥ e_1≥…≥ e_ and e_+1≥… e_+≥ 0 then each factor e_i_1 -i_1 - e_i_2 + i_2 is certainly bounded by ∑_i=1^ (N-e_i) + ∑_i=+1^+ e_i + O(N) so κ_𝐞≤(O (N) + ∑_i=1^ (N-e_i) + ∑_i=+1^+ e_i)^O(1) which together with <ref> implies κ_𝐞∫_ψ_𝐞μ_ep = O ( (O (N) + ∑_i=1^ (N-e_i) + ∑_i=+1^+ e_i)^O(1) q^- max ( ∑_i=1^ (N-e_i), ∑_i=+1^+ e_i )/2) = O ( (O (N) + ∑_i=1^ (N-e_i) + ∑_i=+1^+ e_i)^O(1) q^- ∑_i=1^ (N-e_i)-∑_i=+1^+ e_i /4) . The number of tuples e_1,…, e_+ satisfying N ≥ e_1≥… e_ and e_+1≥… e_+≥ 0 and ∑_i=1^ (N-e_i) + ∑_i=+1^+ e_i = d is O ( d^O(1) ), so in total we have ∑_ e_1,… e_+∈ℤ N ≥ e_1≥… e_ e_+1≥… e_+≥ 0 ∑_i=1^ (N - e_i) + ∑_i=+1^+ e_i > k κ_𝐞∫_ψ_𝐞μ_ep= ∑_d=k+1^∞ O ( d^O(1) ) O ( (O(N)+ d)^ O(1) q^ - d/4 ) ≪ N^O(1)∑_d=k+1^∞ d^O(1) q^- d/4 = O ( N^O(1) k^O(1) q^ - k/4 ) = O ( N^ O(1) q^ - k/4 ). Recall that our desired main term has the form (after substituting 1/2+α_i for s_i) MT_N^, (1/2+α_1,…, 1/2+α_+ )= ∏_i=+1^+ q^α_i N∑_ S ⊆{1,…,+} S=∏_i ∈ S q^ -α_i N∑_ f_1,…, f_+∈𝔽_q[u]^+ ∏_i ∈ S f_i = ∏_i∉ S f_i∏_i∈ Sf_i ^ -1/2- α_i ∏_i ∉ Sf_i ^ -1/2-α_i . This is interpreted by continuing each term ∑_ f_1,…, f_+∈𝔽_q[u]^+ ∏_i ∈ S f_i = ∏_i∉ S f_i∏_i∈ Sf_i ^ -1/2-α_i ∏_i ∉ Sf_i ^ -1/2+α_i meromorphically from its domain of absolute convergence and then summing the meromorphic functions. Thus, in this segment of the proof only, we will allow the α_i to be arbitrary complex numbers instead of imaginary numbers. We first evaluate the summand associated to S ={1,…, }, before using this to evaluate the summand associated to arbitrary S, and finally evaluate the full main term. We have an equality of holomorphic functions on {α_1,…, α_+∈ℂ|Re(α_i)< 1/4 for i ≤, Re(α_i)> - 1/4 for i > } given by ∏_1 ≤ i_1 < i_2 ≤+ (q^-α_i_1 - q^ - α_i_2 ) ∏_i=1^ q^ - α_iN ∑_ f_1,…, f_+∈𝔽_q[u]^+ ∏_i=1^ f_i = ∏_i=+1^+ f_i∏_i=1^f_i ^ -1/2+α_i ∏_i=+1^+f_i ^ -1/2-α_i = ∑_ e_1,… e_+∈ℤ N ≥ e_1≥… e_ e_+1≥… e_+≥ 0 ∑_σ∈ S_× S_(σ) ∏_i=1^+ q^ - (e_i ++- i) α_σ(i) ∫_ψ_𝐞μ_ep where (<ref>) is interpreted by holomorphic continuation from its domain of absolute convergence and (<ref>) is absolutely convergent, and where S_× S_ is embedded in S_ + as the subgroup preserving the partition into {1,…, } and {+1,…, +}. We first check absolute convergence of (<ref>). By <ref> we have ∫_ψ_𝐞μ_ep = O ( (O(1) + ∑_i=1^ (N-d_i) + ∑_i=+1^+ d_i)^O(1) q^- ∑_i=1^ (N-d_i) - ∑_i=+1^+ d_i /4 ). For i ≤ we have q^ - (e_i ++- i) α_σ(i) = q^ - (e_i ++- i) Re (α_σ(i) ) = q^ (N-e_i) Re (α_σ(i) ) q^ - (N + +-i) Re (α_σ(i) )≪_N, α q^ (N-e_i) Re (α_σ(i) ) ≤ q^ (N-e_i) max_1≤ j ≤Re (α_j ) where ≪_N,α denotes an implicit constant that may depend on N and α_1,…, α_+ but does not depend on e_1,…, e_+ (which is all that is needed for absolute convergence). Similarly, for i> we have q^ - (e_i ++- i) α_σ(i) = q^ - (e_i ++- i) Re (α_σ(i) ) = q^ -e_i Re (α_σ(i) ) q^ - (+-i) Re (α_σ(i) ) ≪_α q^ -e_i Re (α_σ(i) ) ≤ q^ -e_i min_+1 ≤ j ≤+Re (α_j ) so ∏_i=1^+ q^ - (e_i ++- i) α_σ(i) ≪_N,α q^∑_i=1^ (N-e_i) max_1≤ j ≤Re (α_j ) - ∑_i=+1^+ e_i min_+1 ≤ j ≤+Re (α_j ) and thus ∏_i=1^+ q^ - (e_i ++- i) α_σ(i) ∫_ψ_𝐞μ_ep ≪_N,α q^ -∑_i=1^ (N-e_i) ( 1/4 - max_1≤ j ≤Re (α_j ) ) - ∑_i=+1^+ e_i ( 1/4+ min_+1 ≤ j ≤+Re (α_j ) ) O ( (O(1) + ∑_i=1^ (N-e_i) + ∑_i=+1^+ e_i)^O(1)) When we sum over e_1,…, e_+, the assumptions on Re(α_i) imply that the exponential term in q is exponentially decreasing and thus dominates the polynomial term and leads to absolute convergence of (<ref>). Both (<ref>) and (<ref>) may be expressed as formal Laurent series in q^-α_1, …, q^-α_+. 
It now suffices to check that, for each d_1,…, d_+∈ℤ, the coefficient of q^∑_i=1^+ d_i α_i in (<ref>) equals the coefficient of q^∑_i=1^+ d_i α_i in (<ref>). Then both sides will be equal on the locus where both are absolutely convergent, hence equal everywhere by analytic continuation. This in particular implies (<ref>) is holomorphic on the same region as (<ref>). To check the equality of coefficients, we first observe that ∑_σ∈ S_× S_(σ) ∏_i=1^+ q^ - (e_i ++- i) α_σ(i) is antisymmetric in the variables α_1,…,α_ in the sense that swapping two variables is equivalent to multiplying the sum by -1, and similarly antisymmetric in the variables α_+1,…, α_+. Antisymmetry is stable under linear combinations, so (<ref>) is antisymmetric in α_1,…,α_ and in α_+1,…, α_+. Similarly, ∑_ f_1,…, f_+∈𝔽_q[u]^+ ∏_i=1^ f_i = ∏_i=+1^+ f_i∏_i=1^f_i ^ -1/2-α_i ∏_i=+1^+f_i ^ -1/2+α_i is symmetric in α_1,…,α_ and in α_+1,…, α_+ and the Vandermonde is antisymmetric, so their product (<ref>) is antisymmetric. It follows that the coefficient of q^∑_i=1^+ d_i α_i in either (<ref>) or (<ref>) is antisymmetric in d_1,…, d_ and in d_+1,…, d_+. In particular, the coefficient vanishes if we have d_i=d_j for i< j ≤ or +1< i<j, as in that case swapping d_i and d_j both preserves the value of the coefficient and negates it. Furthermore, to check that the coefficients of q^∑_i=1^+ d_i α_i are equal for all d_1,…, d_+, it suffices to check for only tuples such that d_1< … < d_ and d_+1< … < d_+: If the d_1,…, d_ are distinct, we can swap them until they are in increasing order, multiplying both coefficients by the same power of -1, and similarly with the d_+1,…, d_+, but if they are not distinct, both coefficients vanish and are trivially equal. So it remains to check for d_1<… < d_ and d_+1< … <d_+ that the coefficients of q^∑_i=1^+ d_i α_i in (<ref>) and (<ref>) are equal. First, we observe that the only σ that contributes to this coefficient in (<ref>) is the identity, since we have e_1 ≥…≥ e_ and e_+1≥… e_+ so that -(e_1++-1 )<-( e_2 + +-2) < … <-( e_+) and -(e_+1 + -1)< … <- e_+ and any permutation other than the identity would change the order and thus not take these exponents to d_1,… , d_+. So the only relevant term is the one with σ = id and e_i = i - - -d_i for all i. If N ≥ 1-- -d_1 and d_+≤ 0 so that N ≥ e_1 and e_+≥ 0 then this term has coefficient ∫_ψ_𝐞μ_ep which by <ref> is (-1)^∑_i=1^+ e_i + +2 +N =(-1)^∑_i=1^+ d_i+N times the coefficient of q^∑_i=1^(+ -1+N +d_i ) α_i - ∑_i=+1^+ (- d_i - - +1)α_i in (<ref>). Even if N < 1-- -d_1 or d_+ > 0 then the same conclusion holds as then no term in (<ref>) contributes so the coefficient is zero but it is compared to the coefficient in (<ref>) of a monomial whose exponent in q^α_1 is <- N-- +1 or whose exponent of q^α_+ is > 0 and no monomial of this form appears so this is also trivially zero. On the other hand, (<ref>) is equal to the product of (<ref>) with ∏_1 ≤ i_1 < i_2 ≤+ (q^-α_i_1 - q^ - α_i_2 ) /∏_1 ≤ i_1 < i_2 ≤+ (q^α_i_1 - q^α_i_2 )∏_i=1^ q^ - α_iN = ∏_1 ≤ i_1 < i_2 ≤+( - q^-α_i_1 q^- α_i_2) ∏_i=1^ q^ - α_iN so the coefficient of q^∑_i=1^+ d_i α_i in (<ref>) is (-1)^+2 times the coefficient of q^∑_i=1^+α_i (d_i ++-1) + ∑_i=1^α_i N in (<ref>). Since ∑_ i=1^+α_i (d_i ++-1) + ∑_i=1^α_i N = ∑_i=1^(+ -1+N +d_i ) α_i - ∑_i=+1^+ (- d_i - - +1)α_i the two coefficients agree up to a factor of (-1)^∑_i=1^+ d_i + +2 +N. 
However, by <ref>, that coefficient vanishes unless +2 = ∑_i=1^(+ -1+N+ d_i ) - ∑_i=+1^+ (-d_i - - +1)= (+) (+-1) + N + ∑_i=1^ d_i - ∑_i=+1^+ d_i in which case ∑_i=1^+ d_i + +2 +N is even, so either the two coefficients are equal or they are negatives of each other but zero and hence equal anyways. For S ⊆{1,…, +} of size , we have an equality of holomorphic functions on {α_1,…, α_+∈ℂ|Re(α_i)< 1/4 for i ∈ S, Re(α_i)> - 1/4 for i ∉ S } given by ∏_1 ≤ i_1 < i_2 ≤+ (q^-α_i_1 - q^ - α_i_2 ) ∏_i∈ S q^ - α_iN ∑_ f_1,…, f_+∈𝔽_q[u]^+ ∏_i∈ S f_i = ∏_i∉ S f_i∏_i∈ Sf_i ^ -1/2+α_i ∏_i∉ Sf_i ^ -1/2-α_i = ∑_ e_1,… e_+∈ℤ N ≥ e_1≥… e_ e_+1≥… e_+≥ 0 ∑_σ∈ S_+ σ^-1 (S) = {1,…, }(σ) ∏_i=1^+ q^ - (e_i ++- i) α_σ(i) ∫_ψ_𝐞μ_ep. We let τ be any fixed permutation in S_+ with τ^-1 (S) ={1,…, } and perform the change of variables replacing α_i with α_τ(i) in the statement of <ref>, obtaining ∏_1 ≤ i_1 < i_2 ≤+ (q^-α_τ(i_1) - q^ - α_τ(i_2) ) ∏_i=1^ q^ -α_τ(i) N ∑_ f_1,…, f_+∈𝔽_q[u]^+ ∏_i=1^ f_i = ∏_i=+1^+ f_i∏_i=1^f_i ^ -1/2+ α_τ(i)∏_i=+1^+f_i ^ -1/2-α_τ(i) = ∑_ e_1,… e_+∈ℤ N ≥ e_1≥… e_ e_+1≥… e_+≥ 0 ∑_σ∈ S_× S_(σ) ∏_i=1^+ q^ - (e_i ++- i) α_τ( σ(i) )∫_ψ_𝐞μ_ep. We have ∏_i=1^ q^ -α_τ(i) N = ∏_i∈ S q^ - α_iN and ∑_ f_1,…, f_+∈𝔽_q[u]^+ ∏_i=1^ f_i = ∏_i=+1^+ f_i∏_i=1^f_i ^ -1/2-+ α_τ(i)∏_i=+1^+f_i ^ -1/2-α_τ(i) = ∑_ f_1,…, f_+∈𝔽_q[u]^+ ∏_i∈ S f_i = ∏_i∉ S f_i∏_i∈ Sf_i ^ -1/2+α_i ∏_i∉ Sf_i ^ -1/2-α_i using the change of variables f_i ↦ f_τ(i), and we have ∏_1 ≤ i_1 < i_2 ≤+ (q^-α_τ(i_1) - q^ - α_τ(i_2) ) = (τ) ∏_1 ≤ i_1 < i_2 ≤+ (q^-α_i_1 - q^ - α_i_2 ) so (<ref>) is equal to (τ) times (<ref>). Similarly, the change of variables σ→τ^-1σ gives ∑_σ∈ S_× S_(σ) ∏_i=1^+ q^ - (e_i ++- i) α_τ( σ(i) ) = ∑_τ^-1σ∈ S_× S_(τ^-1σ) ∏_i=1^+ q^ - (d_i ++- i) α_σ(i) = (τ) ∑_σ∈ S_+ σ^-1 (S) = {1,…, }(σ) ∏_i=1^+ q^ - (e_i ++- i) α_σ(i) so (<ref>) is equal to (τ) times (<ref>). Thus (<ref>) and (<ref>) are equal to each other. ∑_ e_1,… e_+∈ℤ N ≥ e_1≥… e_ e_+1≥… e_+≥ 0 κ_𝐞∫_ψ_𝐞μ_ep = MT_N^, . Since the sum ∑_ e_1,… e_+∈ℤ N ≥ e_1≥… e_ e_+1≥… e_+≥ 0 κ_𝐞∫_ψ_𝐞μ_ep is uniformly convergent as a function of α_1,…, α_+ on the imaginary axis, both sides are continuous functions of α_1,…, α_+. So it suffices to prove this identity after restricting to a dense subset, and therefore suffices to prove it after multiplying by the Vandermonde ∏_1 ≤ i_1 < i_2 ≤+ (q^-α_i_1 - q^ - α_i_2 ). When we do this, the right-hand side becomes, by definition of MT, ∏_1 ≤ i_1 < i_2 ≤+ (q^-α_i_1 - q^ - α_i_2)∏_i=+ 1^+ q^α_i N∑_ S ⊆{1,…,+} S=∏_i ∈ S q^ -α_i N∑_ f_1,…, f_+∈𝔽_q[u]^+ ∏_i ∈ S f_i = ∏_i∉ S f_i∏_i∈ Sf_i ^ -1/2+α_i ∏_i ∉ Sf_i ^ -1/2-α_i which is the sum over S of (<ref>) multiplied by ∏_i=+1^+ q^α_i N. The left-hand side becomes, by definition of κ, ∑_ e_1,… e_+∈ℤ N ≥ e_1≥… e_ e_+1≥… e_+≥ 0 (-1)^N ∑_σ∈ S_+(σ) ∏_i=1^+ q^ - (e_i ++- i) α_σ(i) ∏_i=+1^+ q^ Nα_i ∫_ψ_𝐞μ_ep = ∏_i=+1^+ q^ Nα_i ∑_ S ⊆{1,…,+} S=∑_ e_1,… e_+∈ℤ N ≥ e_1≥… e_ e_+1≥… e_+≥ 0 (-1)^N ∑_σ∈ S_+ σ^-1 (S) = {1,…,}(σ) ∏_i=1^+ q^ - (e_i ++- i) α_σ(i) ∫_ψ_𝐞μ_ep which is the sum over S of (<ref>) multiplied by ∏_i=+1^+ q^α_i N, since each permutation σ sends {1,…,} to exactly one set S of cardinality . (The rearrangement of the sum is justified by absolute convergence.) The claim then follows from <ref>. This follows from combining <ref> and (<ref>) with <ref> and <ref>, noting the error term O ( N^ O(1) q^ - k/4 ) of <ref> is easily absorbed into the error term O (N^ (+)^2/2 - βq-2/4) of <ref> since k = ⌊ N^β⌋ so the exponential term q^ - k/4 dominates any polynomial in N. alpha
http://arxiv.org/abs/2409.02240v1
20240903191233
Computational Methods to Investigate Intrinsically Disordered Proteins and their Complexes
[ "Zi Hao Liu", "Maria Tsanai", "Oufan Zhang", "Julie Forman-Kay", "Teresa Head-Gordon" ]
physics.bio-ph
[ "physics.bio-ph", "q-bio.BM" ]
§ ABSTRACT In 1999 Wright and Dyson highlighted the fact that large sections of the proteome of all organisms are comprised of protein sequences that lack globular folded structures under physiological conditions. Since then the biophysics community has made significant strides in unraveling the intricate structural and dynamic characteristics of intrinsically disordered proteins (IDPs) and intrinsically disordered regions (IDRs). Unlike crystallographic beamlines and their role in streamlining acquisition of structures for folded proteins, an integrated experimental and computational approach aimed at IDPs/IDRs has emerged. In this Perspective we aim to provide a robust overview of current computational tools for IDPs and IDRs, and most recently their complexes and phase separated states, including statistical models, physics-based approaches, and machine learning methods that permit structural ensemble generation and validation against many solution experimental data types. § INTRODUCTION Despite the widely accepted protein structure-function paradigm central to folded proteins, it is increasingly appreciated that all proteomes also encode intrinsically disordered proteins and regions (IDPs/IDRs), which do not adopt a well-defined 3D structure but instead form fluctuating and heterogeneous structural ensembles.<cit.> The so-called "Dark Proteome" is made up of IDPs/IDRs that comprise over 60% of proteins in eukaryotes<cit.>, and this abundance together with growing experimental evidence challenges the assumption that protein function and protein interactions require stable folded structures.<cit.> Proteins with intrinsic disorder confer certain advantages over folded protein states. Plasticity of disordered protein states facilitates conformational rearrangements and extended conformations that allow them to interact simultaneously with multiple spatially separated partners, changing shape to fold upon binding or in creating dynamic complexes.<cit.> Disordered regions in protein complexes may control the degree of motion between domains, permit overlapping binding motifs, and enable transient binding of different binding partners, facilitating roles as signal integrators and thus explaining their prevalence in signaling pathways.<cit.> Disorder is also highly over-represented in disease-associated proteins, and IDPs have been shown to be involved in a variety of fundamental processes including transcription, translation and cell cycle regulation that, when altered, lead to cancer and neurological disorders<cit.>. Recent evidence suggests that IDPs/IDRs are enriched in biomolecular condensates.<cit.> Biomolecular condensates arise from phase separation, percolation and other related transitions <cit.> to induce a biomacromolecule-rich phase and a dilute phase depleted of such biomacromolecules <cit.>, a phenomenon well-established by polymer physics<cit.>. 
IDPs/IDRs have been suggested to promote phase separation and other related transitions due to their structural plasticity, low-complexity sequence domains, and multivalency.<cit.> Furthermore, functional dynamic complexes of IDPs/IDRs and biomolecular condensates are known to be sensitive to post-translational modifications (PTMs).<cit.> Regulatory PTMs<cit.> often target residues within IDPs/IDRs, as they are more accessible and flexible than folded protein elements<cit.>. Modification of IDPs/IDRs by PTMs is well known to modulate many cellular processes, including dynamic complexes <cit.> and the assembly/disassembly, localization, and material properties of biomolecular condensates <cit.>. One of the primary goals in understanding disordered proteins and their roles in biology is to create, validate, and analyze IDP/IDR structural ensembles to imbue insight into the conformational substates that give rise to IDP/IDR function (Figure <ref>). Folded proteins have well-defined experimental approaches, mostly using X-ray crystallography, electron crystallography and microscopy, and recent computational approaches such as AlphaFold2<cit.> can determine their structure with high accuracy. IDPs/IDRs bring new challenges to both experiment and computer simulation and modeling, requiring an integrative biology approach of experimental and computational methods that work together to characterize their diverse and dynamic structural ensembles. In this perspective, we review the computational tools for structural ensemble creation and validation for isolated IDPs/IDRs, including those that operate by generating and evaluating structural ensembles that are consistent with the collective experimental restraints derived from Nuclear Magnetic Resonance (NMR), Small Angle X-ray Scattering (SAXS), and other available solution experimental measurements. But to fully address the biological activity of IDPs/IDRs, we must further develop computational methods to characterize dynamic complexes and biological condensates, and to account for changes due to PTMs. Here we provide a current snapshot of available computational methods, workflows, and software that are available for characterizing IDPs and IDRs and their complexes, while also identifying current gaps for future progress in their characterization. § ENSEMBLE GENERATION Generating a conformational ensemble can be categorized into knowledge-based approaches, physical models, and machine learning (ML)/artificial intelligence (AI) methods. Knowledge-based generation protocols are a popular choice as ensembles can be calculated in a few minutes on a workstation or laptop<cit.>. Although computationally more expensive, all-atom (AA) and coarse-grained (CG) physical models combined with molecular dynamics (MD) simulations are also widely used, becoming increasingly more accurate when using many-body force fields, better algorithms for sampling, and rapid quarterly developments on silicon hardware as well as parallel application-specific integrated circuits (ASICs) like the Anton 3 supercomputer.<cit.> Finally, ML/AI tools have exploded onto the scene with improved generative models being utilized to predict structural ensembles of disordered proteins. §.§ Knowledge-Based Methods Knowledge-based ensemble generation approaches have three things in common: 1) Building protein chains using fragments of residues to retain the local information of a modeled peptide. 
2) Exploiting a database of high-resolution non-redundant PDB structures as an empirical force-field. 3) Being exceptionally computationally efficient while generating free conformers without unphysical steric clashes. Since the early 2000s, there have been many software packages released for modeling disordered proteins using statistical or static methods. TraDES, introduced as FOLDTRAJ, models IDP structures <cit.> using a 3-residue fragment chain growth protocol, derived from a curated database of non-redundant protein structures from the RCSB PDB<cit.>. Flexible-meccano creates statistical ensembles by randomly sampling amino acid-specific backbone torsion (ϕ/ψ) angles from high-resolution X-ray crystallographic protein structures, and attempts to validate them with experimental NMR as well as SAXS data<cit.>. More recently, the FastFloppyTail method allows for the prediction of disordered ensembles by exploiting the Rosetta AbInitio protein prediction model <cit.>, for not only isolated IDPs but also for IDRs at the termini of folded structures. The latest software platform for modeling and analyzing IDPs/IDRs, IDPConformerGenerator, can also use NMR and SAXS experimental data to bias conformer generation, but unlike the other methods allows for the generation of ensembles in different contexts such as transmembrane systems, dynamic complexes, and applications in biological condensates <cit.> (see Figure <ref>). IDPConformerGenerator also includes the ability to generate side chain ensembles using the Monte Carlo Side Chain Ensemble (MCSCE) method<cit.> including with PTMs<cit.> for IDPs/IDRs and their complexes. §.§ Physics-Based Methods MD has been extensively used to study the conformations, dynamics, and properties of biological molecules, and can reproduce and/or interpret thermodynamic and spectroscopic data, and can also provide accurate predictions for processes inaccessible to experiment<cit.>. Because most AA force fields used in MD were developed to represent folded proteins, they can lead to inaccuracies when studying disordered proteins and regions. Thus, intensive work has been dedicated to refining well-established "fixed charge" force fields (FFs), resulting in notable improvements. The D. E. Shaw group employed six fixed charge FFs with explicit solvent to study the structural properties of both folded and disordered proteins <cit.>. By modifying the torsion parameters and the protein-water interactions, the a99SB-disp FF<cit.> was shown to provide accurate secondary structure propensities for a plethora of disordered proteins, while accurately simulating folded proteins as well. However, recent studies have reported that the a99SB-disp model is too soluble for studying the condensation of some disordered proteins <cit.>. Jephtah et al. utilized different versions of the CHARMM <cit.> and AMBER <cit.> FFs with different water models <cit.> to study five peptides that exhibit a polyproline II helix (PPII) structure <cit.>. Interestingly, most models managed to capture, to varying extents, the ensemble averages of the radius of gyration and the PPII secondary structure, but the models differed substantially in the under- or over-sampling of different secondary structures. 
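A practical way to quantify the secondary-structure differences highlighted by such force-field comparisons is to tabulate per-residue fractional secondary structure across an ensemble. The sketch below is one minimal way to do this, assuming the mdtraj package is available and the ensemble is stored as a multi-model PDB; the file name is hypothetical, and any trajectory format mdtraj reads would work equally well.

```python
import numpy as np
import mdtraj as md

# Hypothetical input: a multi-model PDB holding one conformer per MODEL record,
# produced by any of the ensemble generators discussed above.
traj = md.load("ensemble_modelA.pdb")
dssp = md.compute_dssp(traj, simplified=True)   # (n_conformers, n_residues): 'H', 'E' or 'C'

frac_helix = (dssp == "H").mean(axis=0)         # per-residue helical fraction
frac_sheet = (dssp == "E").mean(axis=0)         # per-residue extended/beta fraction
frac_coil = (dssp == "C").mean(axis=0)

for i, (h, e, c) in enumerate(zip(frac_helix, frac_sheet, frac_coil), start=1):
    print(f"res {i:3d}  helix {h:.2f}  sheet {e:.2f}  coil {c:.2f}")
```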
Recently, Liu and co-workers considered a new direction – the connection of FFs to configurational entropy – and how that might qualitatively change the nature of our understanding of FF development that equally well encompasses globular proteins, IDPs/IDRs, and disorder-to-order transitions.<cit.> Using the advanced polarizable AMOEBA FF model, these many-body FFs generate the largest statistical fluctuations consistent with the radius of gyration (Rg) and universal Lindemann values (correlated with protein melting temperature) for folded states. These larger fluctuations of folded states were shown to translate to their much greater ability to simultaneously describe IDPs and IDRs such as the Hst-5 peptide, the stronger temperature dependence in the disorder-to-order transition for (AAQAA)_3, and for maintaining a folded core for the TSR4 domain while simultaneously exhibiting regions of disorder.<cit.> This supports the development and use of many-body FFs to described folded proteins and to create IDP/IDR ensembles, by naturally getting the energy-entropy balance right for all biomolecular systems. CG models reduce the resolution of an all-atom model in order to simulate larger systems and for longer timescales, which often is necessary when considering disordered proteins and dynamic complexes and condensates. Heesink et al. used a CG model that represents each amino-acid with a single interaction site or "bead" to investigate the structural compactness of α-synuclein <cit.>. The SIRAH model represents only the protein backbone with three beads and can be applied to both folded and disordered proteins <cit.>. Joseph et al. developed the Mpipi model, a residue-level CG model parameterized through a combination of atomistic simulations and bioinformatics data <cit.>. By focusing on pi-pi and cation-pi interactions, they successfully reproduced the experimental phase behavior of a set of IDPs and of a poly-arginine/poly-lysine/RNA system. However, due to the absence of explicit protein-solvent interactions, it leads to a poor representation of protein solubility upon temperature modifications. Tesei et al. developed CALVADOS, a CG model trained using Bayesian learning of experimental data, including SAXS and Paramagnetic Resonance Enhancement (PRE) NMR data <cit.>. CALVADOS is able to capture the phase separation of the low complexity domains of FUS, Ddx4, hnRNPA1 and LAF proteins, and has been used to calculate the ensembles of the IDRs within the human proteome <cit.>. Finally, CG models with a higher resolution, such as the Martini model, have been also used to study monomeric IDPs/IDRs and their ensembles <cit.>. Although originally developed to model folded proteins, the model accurately reproduces the phase separation of disordered regions, the partitioning of molecules and the simulation of chemical reactions within the ensembles <cit.>. Thomasen et al. demonstrated that the Martini CG model tends to underestimate the global dimensions of disordered proteins <cit.>; by augmenting the protein-water interactions, they successfully reproduced SAXS and PRE data for a set of disordered and multidomain proteins. Multiscale approaches that consider a combination of AA/CG resolutions have also been proposed.<cit.> §.§ Machine-learning Methods With the advent of AlphaFold2 <cit.>, RoseTTAFold <cit.>, and ESMFold <cit.>, it is clear that different ML models can produce high quality predicted structures of globular proteins. 
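Before moving further into the machine-learning approaches, the sketch below illustrates, in deliberately simplified form, the ingredients shared by the residue-level coarse-grained models described above: one bead per residue, a short-range attraction scaled by a per-residue "stickiness", and Debye-Hückel screened electrostatics. The functional form and every numerical value here are illustrative placeholders rather than the published Mpipi or CALVADOS parameterizations, which use more careful pair potentials and calibrated per-residue parameters.

```python
import numpy as np

KT = 2.48      # approx. k_B T in kJ/mol near room temperature
L_B = 0.71     # approx. Bjerrum length of water in nm
KAPPA = 1.0    # inverse Debye screening length in 1/nm (depends on ionic strength)

def pair_energy(r, sigma, lam, q1, q2, eps=0.8):
    """Illustrative bead-bead energy (kJ/mol) at separation r (nm)."""
    lj = 4.0 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)
    short_range = lam * lj                                    # "stickiness"-scaled attraction
    electrostatic = KT * L_B * q1 * q2 * np.exp(-KAPPA * r) / r
    return short_range + electrostatic

def chain_energy(coords, sigmas, lams, charges):
    """Sum the pair energy over all non-bonded bead pairs (|i - j| > 1)."""
    total, n = 0.0, len(coords)
    for i in range(n):
        for j in range(i + 2, n):
            r = np.linalg.norm(coords[i] - coords[j])
            total += pair_energy(r,
                                 0.5 * (sigmas[i] + sigmas[j]),
                                 0.5 * (lams[i] + lams[j]),
                                 charges[i], charges[j])
    return total
```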
These methods use multiple sequence alignments (MSA) and clustering to generate different conformers of folded proteins, but caution is warranted for IDPs for which MSAs are not fully applicable.<cit.> In a recent study, Alderson and co-workers suggested that when high confident predictions are made for IDRs from AlphaFold, these conformations likely reflect those in conditionally folded states.<cit.> However, many IDRs from AlphaFold2 seem to be built as ribbons in a way that only satisfies the steric clash restraints rather than any secondary structure, or global shape parameters and are designated as low-confidence predictions. The large influx of recent ML methods to predict ensembles of disordered proteins can use pre-existing protein structure prediction algorithms or by training on CG or AA MD data. Denoising diffusion models can be used to generate coarse-grained IDP/IDR ensembles, as in the work of Taneja and Lasker <cit.>. Phanto-IDP, a generative model trained on MD trajectory data that uses a modified graph variational auto-encoder, can generate 50,000 conformations of a 71-residue IDP within a minute <cit.>. Similarly, Janson and coworkers have trained a Generative Adversarial Network (GAN) based on CG and AA simulations of IDPs (idpGAN) <cit.>, in which the model learns the probability distribution of the conformations in the simulations to draw new samples based on sequences of IDPs. Lotthammer and coworkers have trained a Bidirectional Recurrent Neural Network with Long Short-Term Memory cells (BRNN-LSTM) on IDP ensembles generated with the Mpipi CG force field, to predict properties of disordered proteins (ALBATROSS).<cit.> There have also been methods to generate ensembles of protein structures using AI-augmented MD simulations with the option to use AlphaFold in the post-processing stage in a method called "Re-weighted Autoencoded Variational Bayes for Enhanced Sampling" (AlphaFold-RAVE) <cit.>. § ENSEMBLE VALIDATION Although there are many creative solutions to generate conformer ensembles of IDPs, the ensembles must be validated against experimental data to further improve upon their utility. Extracting structural and dynamic information from IDPs and IDRs can be done using a variety of solution-based experimental procedures, although these observables tend to be highly averaged. Hence computational tools go hand-in-hand with experiments, providing structural detail while being validated against back-calculation for each experimental data type. Additionally, ensemble generation pipelines can make use of back-calculated data to enforce experimental restraints during an integrative modeling process.<cit.> §.§ Experimental Observables NMR spectroscopy is a powerful technique used to provide structural insights and dynamic properties of disordered proteins at close-to physiological conditions <cit.>. A key NMR observable is the chemical shift that provides structural information that is sensitive to functional groups and their environment. Hence methods for predicting secondary structure propensities from chemical shifts have been developed, such as SSP <cit.>, δ2D<cit.>, and CheSPI<cit.> used to inform local fractional secondary structure of an ensemble. Back-calculators that predict atomic chemical shift values of protein structures include ML feature-based approaches such as SPARTA+ <cit.>, ShiftX2 <cit.>, and UCBShift <cit.>. 
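A minimal sketch of the idea behind such chemical-shift-based propensity estimates is given below: subtract random-coil reference shifts from observed Cα and Cβ shifts and smooth the resulting secondary shifts along the sequence. The reference values shown are placeholders for illustration only; a real analysis should take residue-specific (and temperature- and pH-corrected) random-coil values from a published table, as the SSP and δ2D methods do.

```python
import numpy as np

# Illustrative random-coil reference shifts in ppm, (CA, CB); the values are placeholders.
RANDOM_COIL = {"ALA": (52.5, 19.0), "GLY": (45.1, None), "SER": (58.3, 63.8)}

def secondary_shifts(residues, ca_obs, cb_obs):
    """Delta(CA) - Delta(CB): > 0 leans helical, < 0 leans extended."""
    out = []
    for res, ca, cb in zip(residues, ca_obs, cb_obs):
        ca_rc, cb_rc = RANDOM_COIL[res]
        d_ca = ca - ca_rc
        d_cb = (cb - cb_rc) if (cb is not None and cb_rc is not None) else 0.0
        out.append(d_ca - d_cb)
    return np.array(out)

def smooth(x, window=5):
    """Running mean over a short window, as propensity estimators typically do."""
    return np.convolve(x, np.ones(window) / window, mode="same")
```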
J-couplings provide essential information about the connectivity and bonding between nuclei; 3-bond J-couplings, ^3J, report on torsion angle distributions and can reveal information about the timescales of molecular motions in IDPs <cit.>. In the case of PTM-stabilized or folding-upon-binding IDPs, changes in ^3J-coupling patterns may indicate the presence of secondary structure elements or structural motifs. The back-calculation of J-coupling data is relatively straightforward using the Karplus equation <cit.> and the protein backbone torsion angles. An application of the Karplus equation to back-calculate ^3J-coupling constants can be found in the `jc` module in SPyCi-PDB <cit.> as well as in optimization algorithms such as X-EISD <cit.> described below. PRE and nuclear Overhauser effect (NOE) experiments provide distance information between pairs of residues, such as information about transient interactions and conformational changes <cit.>, and the presence of persistent local structures or long-ranged contacts that stabilize IDPs/IDRs and protein complexes and condensates<cit.>. Due to the conformational heterogeneity of IDPs/IDRs, both measured PRE and NOE data are averaged values across the conformational landscape. Furthermore, there are different interpretations of how to use PREs and NOEs, either in the form of distance restraints or as dynamical observables, which is a consideration for the back-calculation approach. When PRE and NOE values are interpreted as inter-atomic distances, the back-calculation is straightforward using Euclidean distance formulas as seen in SPyCi-PDB<cit.>. When interpreted as dynamic quantities, the NOE back-calculation is a time correlation function that can more accurately represent the experimental observable, as shown by Ball and co-workers.<cit.> Ideally, the back-calculation of PREs would be in the form of intensity ratios or rates, as they are native to the experimental protocol and thus subject to less error due to different interpretations of the conversion from T_1/T_2 relaxation rates to distances using the Solomon-Bloembergen equation <cit.>. An example of back-calculating PRE ratios can be seen in DEER-PREdict <cit.>, where intensity ratios are estimated instead of distances. NMR relaxation experiments can describe the dynamics of disordered proteins. The transverse relaxation time (T_2) and the associated R_2 relaxation rates (i.e. R_2 = 1/T_2) can be obtained from NMR experiments, in which shorter T_2 values are associated with increasing protein dynamics. The S^2 order parameter provides a measure of the amplitude of motion on a fast picosecond-nanosecond timescale, ranging from 0 (completely disordered) to 1 (folded). S^2 values can be derived from relaxation data (T_1, T_2, NOE measurements) and are useful for identifying regions of a protein that are more flexible/disordered. As seen in the work from Naullage et al., R_2 and S^2 values were used in addition to other experimental restraints to elucidate a structural ensemble for the unfolded state of the drkN SH3 domain <cit.>. Backbone ^15N R_2 values in ENSEMBLE<cit.> are calculated as the number of heavy atoms within 8 Å.
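As a rough illustration of this kind of contact-count proxy, the sketch below counts heavy atoms within 8 Å of each backbone amide nitrogen and averages the profile over conformers; the choice of the amide nitrogen as reference atom and the handling of self-contacts are assumptions made here for illustration and may differ from the actual ENSEMBLE implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def contact_count_profile(heavy_xyz, amide_n_xyz, cutoff=8.0):
    """Heavy atoms within `cutoff` (Å) of each backbone amide N for one conformer."""
    tree = cKDTree(heavy_xyz)
    # note: the reference atom itself is counted; a real implementation might exclude it
    return np.array([len(tree.query_ball_point(p, cutoff)) for p in amide_n_xyz])

def ensemble_r2_proxy(conformers, cutoff=8.0):
    """Average contact-count profile over a list of (heavy_xyz, amide_n_xyz) tuples."""
    profiles = [contact_count_profile(h, n, cutoff) for h, n in conformers]
    return np.mean(profiles, axis=0)

# toy example: 20 conformers of a 30-residue chain with ~8 heavy atoms per residue
rng = np.random.default_rng(2)
conformers = []
for _ in range(20):
    heavy = rng.normal(scale=15.0, size=(240, 3))   # hypothetical heavy-atom coordinates (Å)
    amide_n = heavy[::8]                            # pretend every 8th heavy atom is a backbone N
    conformers.append((heavy, amide_n))
print(ensemble_r2_proxy(conformers)[:5])
```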
The R_2 restraint has also been used within the work of Marsh et al., where the authors model three nonhomologous IDPs, I-2, spinophilin, and DARPP-32.<cit.> Double Electron-Electron Resonance (DEER), also known as pulsed electron-electron double resonance (ELDOR), is an EPR technique that uses site-directed spin labeling <cit.> to explore flexible regions of proteins and measure distances in the range of 18-60 Å <cit.> between two spin-labeled sites. Owing to this effective distance range, DEER can help characterize the range of distances sampled by different regions within an IDP. Furthermore, DEER can be used to study conformational changes that occur when IDPs undergo folding upon binding to their interaction partners and provide insights into the structural transition from a disordered to an ordered state <cit.>. Using a rotamer library approach, the Python software package called DEER-PREdict effectively back-calculates electron-electron distance distributions from conformational ensembles <cit.>. Additionally, a plugin for the PyMOL molecular graphics system <cit.> can estimate distances between spin labels on proteins in an easy-to-use graphical user interface format <cit.>. Single-molecule Förster resonance energy transfer (smFRET) is a popular fluorescence technique used to study the conformational dynamics of disordered protein systems.<cit.> It is commonly referred to as a "spectroscopic ruler", with reported uncertainties of the FRET efficiency ≤ 0.06, corresponding to an interdye distance precision of ≤ 2 Å and accuracy of ≤ 5 Å <cit.>. Because of high uncertainties for distances between inter-residue donor and acceptor fluorophores, it is better to back-calculate the FRET efficiencies ⟨E⟩ instead. Prediction of FRET efficiencies can be done using the recently developed Python package FRETpredict <cit.>, in which the same rotamer library approach used to predict DEER and PRE values (using DEER-PREdict) is used to obtain either a static, dynamic, or average FRET efficiency of a conformer ensemble. Calculations of FRET efficiencies can also be done through MD simulations<cit.>; AvTraj is an open-source program to post-predict FRET efficiencies from MD trajectories <cit.> of conformer ensemble models. Photoinduced electron transfer (PET) coupled with fluorescence correlation spectroscopy (FCS), known as PET-FCS, identifies the contact (≤ 10 Å) formation rate between a fluorophore and a quencher (an aromatic residue or another dye) <cit.>. Furthermore, FCS does not measure the efficiency of energy transfer as reported for FRET; rather, it measures the fluorescence intensity over time as a reporter for protein diffusion and concentration.<cit.> Due to the time dependency of PET-FCS, however, back-calculation techniques are linked to MD simulation data. For example, by exploiting the ns-μs timescale of protein chain dynamics, PET-FCS is a valuable experimental protocol to study disordered protein kinetics, as presented in the example of the disordered N-terminal TAD domain of the tumor suppressor protein p53 <cit.>. Finally, for the measurement of global structural dimensions, small-angle X-ray scattering (SAXS) can determine the radius of gyration, R_g, while the hydrodynamic radius (R_h) can be measured by using NMR pulsed field gradient diffusion or by Size Exclusion Chromatography (SEC). SAXS is commonly reported for isolated IDPs or IDPs/IDRs in complexes under a variety of in vitro experimental conditions <cit.>.
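Of these global measures, the radius of gyration is the simplest to back-calculate directly from coordinates; below is a minimal numpy sketch of the mass-weighted R_g averaged over an ensemble, where the coordinate and mass arrays are assumed inputs rather than the output of any particular package.

```python
import numpy as np

def radius_of_gyration(xyz, masses):
    """Mass-weighted radius of gyration (Å) for one conformer.

    xyz    : (n_atoms, 3) coordinates in Å
    masses : (n_atoms,) atomic masses
    """
    com = np.average(xyz, axis=0, weights=masses)          # center of mass
    sq_dist = np.sum((xyz - com) ** 2, axis=1)
    return np.sqrt(np.average(sq_dist, weights=masses))

def ensemble_rg(conformers, masses):
    """Mean and standard deviation of R_g over an (n_conf, n_atoms, 3) array."""
    rgs = np.array([radius_of_gyration(c, masses) for c in conformers])
    return rgs.mean(), rgs.std()

# toy example: 50 conformers of a 200-atom chain with uniform masses
rng = np.random.default_rng(3)
conformers = rng.normal(scale=12.0, size=(50, 200, 3))
masses = np.full(200, 12.0)
print("mean Rg and spread:", ensemble_rg(conformers, masses))
```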
For disordered protein systems, SAXS data can also unveil disorder-to-order transitions, and assist in ensemble modeling by restraining global dimensions <cit.>. SAXS measurements can also be used to derive the hydrodynamic radius (R_h), whose value reflects the protein's size in a solvent, accounting for its shape and the surrounding medium's viscosity. Although back-calculations can be performed for R_g with relative ease from structural ensembles, predicting SAXS intensity profiles requires increased biophysical rigor, as presented in CRYSOL <cit.>, AquaSAXS <cit.>, Fast-SAXS-pro <cit.>, and FoXS <cit.>. Back-calculations of R_h values, however, have been more ambiguous due to the various solvent effects that change the distribution of R_h values for a given protein. Popular back-calculation methods for R_h include: i) HYDROPRO <cit.>, which implements a bead modeling algorithm that expands the volume of atomic spheres based on their covalent radii, ii) HullRad <cit.>, which uses the convex hull method to predict hydrodynamic properties of proteins, and iii) the Kirkwood-Riseman equation <cit.>, using the center of mass of each residue instead of the Cα. §.§ Statistical subsetting/reweighting methods Given an "initial" IDP/IDR ensemble generated using one of the different ensemble generation protocols, it must be modified by imposing that the IDP/IDR structural ensemble agrees with available experimental data.<cit.> This integrative biology step has often relied on reweighting methods such as maximum-parsimony (MaxPar) or maximum-entropy (MaxEnt) approaches to create better validated IDP/IDR structural ensembles. MaxEnt is a probabilistic method based on the principle of maximizing the degree of disorder while also satisfying a set of experimental restraints, giving conformations higher weights when they are more consistent with an experimental observable. MaxPar approaches instead focus on finding the simplest IDP/IDR ensemble by minimizing the number of conformations in the ensemble to those that satisfy the experimental constraints. Usually, MaxPar approaches are used when there is more confidence in the available experimental data, where the goal is to simplify the ensemble to a minimum set of conformations <cit.>. For a more in-depth comparison of MaxEnt and MaxPar approaches to ensemble reweighting, please refer to the review of Bonomi et al. <cit.>. Examples of software packages and methods that fall under the MaxPar umbrella include: the 'ensemble optimization method' (EOM) <cit.>, the 'selection tool for ensemble representations of intrinsically disordered states' (ASTEROIDS) <cit.>, and the 'sparse ensemble selection' (SES) method <cit.>. MaxEnt approaches for ensemble reweighting include the 'ensemble-refinement of SAXS' (EROS) <cit.>, 'convex optimization for ensemble reweighting' (COPER) <cit.>, and ENSEMBLE <cit.>. More recent developments of the MaxEnt approach have opted to include Bayesian inference using back-calculated and experimental data. Since this statistical framework allows different sources of data to be combined, it allows experimental and back-calculator errors to be accounted for independently. The use of Bayesian statistics can be found in algorithms that re-weight ensembles directly from MD simulations, such as the 'Bayesian energy landscape tilting' (BELT) method <cit.>, 'Bayesian Maximum Entropy' (BME) <cit.>, and 'Bayesian inference of conformational populations' (BICePs) <cit.>.
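To make the MaxEnt idea concrete, the following is a schematic sketch, not the actual BELT/BME/BICePs implementation: frame weights are chosen to balance agreement with ensemble-averaged observables against the relative entropy to uniform weights, with the regularization parameter theta and the toy data being illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def maxent_reweight(calc, exp, sigma, theta=1.0):
    """Maximum-entropy-style reweighting of an ensemble (schematic).

    calc  : (n_frames, n_obs) back-calculated observables per conformer
    exp   : (n_obs,) experimental averages
    sigma : (n_obs,) combined experimental/back-calculation uncertainties
    theta : regularization balancing fit quality against entropy
    Returns normalized frame weights.
    """
    n = calc.shape[0]

    def weights(x):
        w = np.exp(x - x.max())                    # softmax keeps weights positive
        return w / w.sum()

    def objective(x):
        w = weights(x)
        avg = w @ calc                             # reweighted ensemble averages
        chi2 = np.sum(((avg - exp) / sigma) ** 2)
        s_rel = np.sum(w * np.log(n * w + 1e-30))  # relative entropy to uniform weights
        return 0.5 * chi2 + theta * s_rel

    res = minimize(objective, np.zeros(n), method="L-BFGS-B")
    return weights(res.x)

# toy usage: 200 conformers, 3 observables
rng = np.random.default_rng(0)
calc = rng.normal(size=(200, 3))
exp = np.array([0.2, -0.1, 0.05])
w = maxent_reweight(calc, exp, sigma=np.full(3, 0.1), theta=1.0)
print(w.sum(), w @ calc)
```

Increasing theta pulls the weights back towards uniform (trusting the prior ensemble more), while decreasing it enforces closer agreement with the experimental averages; tuning this balance is exactly the role the error models play in the Bayesian variants listed above.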
The Bayesian Extended Experimental Inferential Structure Determination (X-EISD) method for IDP ensemble selection evaluates and optimizes candidate ensembles by accounting for different sources of uncertainties, both back-calculation and experimental, for smFRET, SAXS, and many NMR experimental data types<cit.>. §.§ ML Ensemble Generation and Validation Creating IDP ensembles that agree with experimental data such as NMR, SAXS, and smFRET has typically involved operations on static structural pools, i.e. reweighting the different sub-populations of conformations to agree with experiment.<cit.> But if important conformational states are absent, there is little that can be solved with subsetting and reweighting approaches for IDP/IDR generation and validation. Recently, Zhang and co-workers bypassed standard IDP ensemble reweighting approaches and instead directly evolved the conformations of the underlying structural pool to be consistent with experiment, using the generative recurrent-reinforcement ML model termed DynamICE (Dynamic IDP Creator with Experimental Restraints).<cit.> DynamICE learns the probability of the next residue torsions X_i+1=[ϕ_i+1,ψ_i+1, χ1_i+1, χ2_i+1,...] from the previous residue in the sequence X_i to generate new IDP conformations. As importantly, DynamICE is coupled with X-EISD in a reinforcement learning step that biases the probability distributions of torsions to take advantage of experimental data types. DynamICE has used J-couplings, NOE, and PRE data to bias the generation of ensembles that better agree with experimental data for α-synuclein, Aβ, Hst5, and the unfolded state of SH3<cit.>. § SOFTWARE AND DATA REPOSITORIES FOR IDPS/IDRS Table <ref> presents a summary of current ensemble modeling (M), experimental validation (V), and statistical reweighting/filtering (R) approaches that are available for both users and developers. We also highlight software that has the ability to model protein-protein interactions. Emerging developments are being made to exploit the intra- and intermolecular contact data in the RCSB PDB to model dynamic complexes of disordered proteins de novo, and current docking methods exist for some disordered protein systems. An example of a docking method specifically designed for disordered proteins involved in complexes with folded proteins is IDP-LZerD <cit.>. Coupled with software, curated databases play an immensely important role in IDP/IDR generation and validation. The Biological Magnetic Resonance Data Bank (BMRB), an international open data repository for biomolecular NMR data<cit.>, and the small-angle scattering biological data bank (SASBDB) <cit.> have both seen increases in IDP/IDR data depositions. Examples of curated sequence and structural IDP ensemble coordinates include the protein ensemble database (PED) <cit.>, DisProt <cit.>, MobiDB <cit.>, the database of disordered protein prediction D2P2 <cit.>, the DescribeProt database <cit.>, and the Eukaryotic Linear Motif Resource (ELM) <cit.>. Table: Computational tools for studying ensembles of disordered proteins, categorized by ensemble modeling (M), experimental validation (V), and statistical reweighting/filtering (R). *Asterisk-labeled tools have the ability to model protein-protein interactions. Tools are sorted in chronological order from the latest published software at the time of writing (August 2024).
Year | Type | Name | Accessibility | Short Description
2024 | M | PTM-SC | github.com/THGLab/ptm_sc | Packing of AA side chains with PTMs.
2024 | V | FRETpredict | github.com/KULL-Centre/FRETpredict | Calculate FRET efficiency of ensembles and MD trajectories.
2024 | M | Phanto-IDP | github.com/HFChenLab/PhantoIDP | Generative ML model to reconstruct IDP ensembles.
2023 | M | IDPConfGen* | github.com/julie-forman-kay-lab/IDPConformerGenerator | Platform to generate AA ensembles of IDPs, IDRs, and dynamic complexes.
2023 | V | SPyCi-PDB | github.com/julie-forman-kay-lab/SPyCi-PDB | Platform to back-calculate different types of experimental data from IDP ensembles.
2023 | M | DynamICE | github.com/THGLab/DynamICE | Generative ML model to generate new IDP conformers biased towards experimental data.
2023 | M | idpGAN | github.com/feiglab/idpgan | ML ensemble generator for CG models of IDPs.
2021 | M | DIPEND | github.com/PPKE-Bioinf/DIPEND | Pipeline to generate IDP ensembles using existing software.
2021 | V | DEER-PREdict | github.com/KULL-Centre/DEERpredict | Calculates DEER and PRE predictions from MD ensembles.
2020 | M | IDP-LZerD* | github.com/kiharalab/idp_lzerd | Models bound conformation of IDP to an ordered protein.
2020 | R | X-EISD | github.com/THGLab/X-EISDv2 | Bayesian statistical reweighting method for IDP ensembles using maximum log likelihood.
2020 | M | FastFloppyTail | github.com/jferrie3/AbInitioVO-and-FastFloppyTail | Ensemble generation of IDPs and terminal IDRs using Rosetta foundations.
2020 | V | UCBShift | github.com/THGLab/CSpred | ML chemical shift predictor.
2020 | R | BME | github.com/KULL-Centre/BME | Bayesian statistical reweighting method for IDP ensembles using maximum entropy.
2018 | V | HullRad | http://52.14.70.9/ | Calculating hydrodynamic properties of a macromolecule.
2017 | VR | ATSAS V3 | embl-hamburg.de/biosaxs/crysol.html | Collection of software.
2017 | MVR | NMRbox | nmrbox.nmrhub.org/ | Collection of software.
2013 | MVR | ENSEMBLE | pound.med.utoronto.ca/~JFKlab/ | Modeling ensembles of IDPs using experimental data.
2012 | M | Flexible-meccano | ibs.fr/en/communication-outreach/scientific-output/software/flexible-meccano-en | Modeling conformer ensembles of IDPs using backbone Phi-Psi torsion angles.
2012 | V | AvTraj | github.com/Fluorescence-Tools/avtraj | Calculate FRET observables from MD trajectories.
2009 | R | ASTEROIDS | closed source - contact authors | Stochastic search of conformers that agree with experimental data.
§ CONCLUSION AND FUTURE DIRECTIONS For folded proteins, crystal structures provide concrete, predictive, and yet conceptually straightforward models that can make powerful connections between structure and protein function. X-ray data has informed the deep learning model AlphaFold2 such that the folded protein structure problem is in many (but not all) cases essentially solved. IDPs and IDRs require a broader framework to achieve comparable insights, requiring significantly more than the one dominant experimental tool and computational analysis approach. Instead, a rich network of interactions must occur between the experimental data and computational simulation codes, often facilitated by Bayesian analysis and statistical methods that straddle the two approaches, e.g., by constraining the simulations and/or to provide confidence levels when comparing qualitatively different structural ensembles for the free IDP and disordered complexes and condensates. Given the diverse range of contexts in which disordered proteins manifest their function, the development of a comprehensive software platform capable of aiding IDP/IDR researchers holds immense promise.
Crafting scientific software with a focus on best practices, modularity, and user-friendliness will not only enhance its longevity, ease of maintenance, and overall efficiency, but also contribute to its utility as a plug-and-play software pipeline for the diversity of IDP/IDR studies. An inspirational example from the NMR community is the creation of NMRBox<cit.>, which provides software tools, documentation, and tutorials, as well as cloud-based virtual machines for executing the software. As emphasized earlier, the modeling of multiple disordered chains within a complex or condensate is the next frontier. However, many of these powerful computational tools and models have been developed for the characterization, analysis, and modeling of single-chain IDPs/IDRs. More focus is needed for software that can handle disordered proteins that are part of dynamic complexes. Most often, modeling of dynamic complexes and condensates is done using CG MD simulations, and, given a template, modeling of dynamic complexes could be done within the Local Disordered Region Sampling (LDRS) module within IDPConformerGenerator <cit.>. Although AlphaFold multimer <cit.> seems promising for multi-chain complexes of folded proteins, its core MSA protocol cannot be readily exploited for disordered proteins. The developers of HADDOCK <cit.>, a biophysically and biochemically driven protein-protein docking software, are working on HADDOCK v3 to dock IDP/IDR conformer ensembles to other disordered or folded templates to generate ensembles of complexes. It may also be possible to model condensates this way by expanding the docking between protein chains. Centralized data repositories and resources for IDPs/IDRs are important for several reasons. They help establish community standards for IDP/IDR data quality, and hence control the quality of the resulting IDP ensemble generation and validation. Because ML methods require robust training datasets, there is a great need to expand both experimental and MD trajectory data deposition for disordered proteins and proteins with disordered regions. For wet-lab scientists, we would like to issue a call to action for authors of future publications to deposit experimental data clearly, in order to develop better in silico back-calculators and ultimately ensure better ensemble generation, validation, and reweighting protocols. For computational scientists, we would like to ask the community for a standardized method for evaluating ML/AI-based tools for ensemble prediction and generation. Due to the rapid rate of in silico method development, it is important to have a rigorous benchmark for new validation and ensemble generation protocols. § ACKNOWLEDGEMENTS We thank the National Institutes of Health (2R01GM127627-05) for support of this work. J.D.F.-K. also acknowledges support from the Natural Sciences and Engineering Research Council of Canada (NSERC, 2016-06718) and from the Canada Research Chairs Program. § AUTHOR CONTRIBUTIONS STATEMENT Z.H.L., M.T. and T.H.G. wrote the paper. All authors discussed the perspective topics, references, and made comments and edits to the manuscript. § COMPETING INTERESTS STATEMENT The authors declare no competing interests.
http://arxiv.org/abs/2409.02420v1
20240904035200
Two-pole structure of the $h_1(1415)$ axial-vector meson: resolving mass discrepancy
[ "Samson Clymton", "Hyun-Chul Kim" ]
hep-ph
[ "hep-ph", "hep-ex" ]
INHA-NTG-03/2024 [E-mail: ]sclymton@inha.edu Department of Physics, Inha University, Incheon 22212, Republic of Korea [E-mail: ]hchkim@inha.ac.kr Department of Physics, Inha University, Incheon 22212, Republic of Korea School of Physics, Korea Institute for Advanced Study (KIAS), Seoul 02455, Republic of Korea § ABSTRACT We investigate isoscalar axial-vector mesons using a coupled-channel formalism. The kernel amplitudes are constructed from meson-exchange diagrams in the t- and u-channels, which are derived from effective Lagrangians based on hidden local symmetry. We incorporate six channels: πρ, ηω, KK̅^*, ηϕ, η'ω, and η'ϕ, and solve the off-shell coupled integral equations. We first discuss the dynamical generation of the h_1(1170). The pole diagram for h_1(1595) has a certain effect on the generation of h_1(1170). We observe two poles at (1387-i6) MeV and (1452-i51) MeV, which exhibit a two-pole structure of the h_1(1415) meson. This two-pole structure may resolve the discrepancy in the experimental data on the mass of h_1(1415). The results show that the lower pole couples strongly to the KK̅^* channel, while the higher pole couples predominantly to the ηϕ channel. This provides insights into the nature of h_1 mesons and explains possible discrepancies in the mass of h_1(1415). Two-pole structure of the h_1(1415) axial-vector meson: resolving mass discrepancy Hyun-Chul Kim September 9, 2024 ==================================================================================== § INTRODUCTION Recently, the BESIII Collaboration reported new data on the excited h_1(1415) meson with quantum numbers I^G(J^PC)=0^-(1^+-), based on a partial-wave analysis of J/ψ→γη'η'. They found its mass to be m_h_1(1415)=(1384± 6_-0^+9) MeV and width Γ = (66± 10_-10^+12) MeV <cit.>. These new results are consistent with the earliest measurements of h_1(1415)<cit.>, which reported m_h_1(1415)=(1380± 20) MeV and Γ=(80± 30) MeV from the K^-p→ K_S^0 K̅πΛ process. However, they differ from several other experimental findings. The Crystal Barrel experiment<cit.> measured respectively its mass and width as (1440± 60) MeV and (170± 80) MeV from pp̅→ K_LK_Sπ^0π^0. An earlier BESIII experiment <cit.> found m=(1412± 12) MeV and Γ=(84±52) MeV from χ_1,2,J→ϕ KK̅π. In 2018, BESIII <cit.> reported m=(1423.2± 9.4) MeV and Γ=(90.3±27.3) MeV from J/ψ→η'KK̅π, with interference effects yielding m=(1441.7±4.9) MeV and Γ=(111.5±12.8) MeV. Notably, the KK̅^* threshold energy (E_th^KK̅^*≈ 1390 MeV) lies between the h_1(1415) masses reported in Refs.<cit.> and those in Refs.<cit.>. It is crucial to understand the origin of these discrepancies. While h_1(1415) decays primarily into KK̅^̅*̅, other channels near its mass likely contribute as well. The lowest h_1(1170) state decays into πρ, suggesting this channel may also affect h_1(1415) production. The Particle Data Group (PDG) classifies the h_1 mesons as isoscalar qq̅ states, i.e. c_1(uu̅+dd̅) + c_2 ss̅ like η, η', ω, and ϕ <cit.>. This indicates that the h_1 mesons are considered to be orbitally excited states as the isoscalar pseudoscalar or vector mesons with the same quark content. Similarly, the a_1(1260) isovector axial-vector meson is also regarded as the isovector qq̅ state. However, a series of studies suggests that the a_1(1260) meson may be a possible molecular state <cit.>. Very recently, we have investigated the b_1 isovector axial-vector mesons, demonstrating that they can be dynamically generated by considering the four different channels, i.e. 
πω, ηρ, πϕ, and KK̅^* channels. Interestingly, it was shown that the b_1(1235) arises from the b_1(1306) and b_1(1356) <cit.>, indicating that the b_1(1235) has a two pole structure. As will be shown in the current work, the h_1(1415) meson originates from the two poles: h_1(1387) and h_1(1452). In fact, various hadrons exhibit the two-pole structures. For example, the K_1(1270) may possibly be regarded as the meson with the two-pole structure <cit.>. Albaladejo et al. <cit.> showed that the D^*(2400) arises as a two-pole structure, based on light pseudoscalar and D meson interactions in the coupled-channel formalism (see a recent review <cit.> for detailed discussion). The two-pole structure is also found in the baryonic sector: Λ(1405) is now well established as the hyperon with the two-pole structure <cit.>. In the present work, we will show that the discrepancy in the experimental data on the mass of h_1(1415) is rooted in its two-pole structure. To this end, we formulate the off-shell coupled-channel formalism <cit.>, introducing six different channels, i.e. πρ, ηω, KK̅^*, ηϕ, η'ω, and η'ϕ of which the threshold energies lie below 2 GeV. We first construct the kernel amplitude corresponding to each channel, using the meson-exchange diagrams. We compute the coupled Blankenbecler-Sugar (BbS) equations, which are obtained from the three-dimensional (3D) reduction of the Bethe-Salpeter equations <cit.>. We have already investigated how the a_1(1260) and b_1(1235) axial-vector mesons, D_s0^*(2317) and B_s0^* mesons, and the hidden charm pentaquark states are generated dynamically within the same framework <cit.>. We will demonstrate that the h_1(1170) and h_1(1415) are dynamically generated even without introducing the corresponding pole diagrams. Remarkably, the two poles emerge in the second Riemann sheet, which are related to the h_1(1415) meson, of which one lies just below the KK̅^* threshold, and the other is found to have a larger mass than its threshold energy. The outline of the current work is sketched as follows: In Section <ref>, we first explain how the kernel amplitude for each channel can be constructed by using the Feynman diagrams based on the effective Lagrangian. Then, we perform the partial-wave expansion for the coupled BbS integral equations, so that we examine the relevant partial waves with proper quantum numbers corresponding to the h_1 mesons. In Section <ref>, we discuss the results for the h_1 mesons. We examine the role of each channel in producing them dynamically. In particular, we focus on the two-pole structure of the h_1(1415) meson. The last section summarizes the present work. § GENERAL FORMALISM The scattering amplitude is defined as 𝒮_fi = δ_fi - i (2π)^4 δ(P_f - P_i) 𝒯_fi, where P_i and P_f stand for the total four momenta of the initial and final states, respectively. The transition amplitude 𝒯_fi for a two-body reaction can be derived from the Bethe-Salpeter integral equations 𝒯_fi (p',p;s) = 𝒱_fi(p',p;s) + 1/(2π)^4∑_k ∫ d^4q 𝒱_fk(p',q;s)𝒢_k(q;s) 𝒯_ki(q,p;s), where p and p' denote the relative four-momentum of the initial and final states, respectively. q is the momentum transfer for the intermediate states in the center of mass (CM) frame. s represents the square of the total energy, which is just one of the Mandelstam variables, s=P_i^2=P_f^2. The coupled integral equations given in Eq. (<ref>) can be illustrated as in Fig. <ref>. The summation Σ represents the inclusion of various coupled channels. 
To avoid the complexity due to the four-dimensional integral equations, we make a 3D reduction. Among several methods for the 3D reduction, we employ the BbS scheme <cit.>, which expresses the two-body propagator in the form of 𝒢_k(q) = δ(q_0-E_k1(q)-E_k2(q)/2) π/E_k1(q)E_k2(q)E_k(q)/s-E_k^2(q). Here, E_k represents the total on-mass-shell energy of the intermediate state, E_k = E_k1+E_k2, and q designates the three-momentum transfer of the intermediate state. Utilizing Eq. (<ref>), we obtain the following coupled BbS integral equations 𝒯_fi (p',p) = 𝒱_fi (p',p)+1/(2π)^3∑_k∫d^3 q/2E_k1(q)E_k2(q)𝒱_fk (p',q)E_k(q)/s-E_k^2(q) +iε𝒯_ki(q,p), where p and p' are the relative three-momenta of the initial and final states in the CM frame, respectively. In this manner, the T matrix can be generated, the entire Hilbert space being considered with the off-shell components. Before we solve the coupled BbS integral equations, we need to construct the kernel amplitudes 𝒱. We compute 𝒱_fi by using the effective Lagrangians for the meson-meson interactions. Since the vector mesons are involved, we consider hidden local symmetry, where the vector meson is considered as a dynamic gauge boson <cit.>. In Ref. <cit.>, the effective interactions among vector, vector, and pseudoscalar mesons were derived, based on the SU(3) hidden local symmetry. The effective Lagrangians are then expressed as ℒ_PPV = -ig_PPV√(2) Tr([P,∂_μ P] V^μ),ℒ_VVV = ig_VVV√(2) Tr((∂_μ V_ν - ∂_ν V_μ)V^μ V^ν),ℒ_PVV =-g_PVV/m_V√(2) ε^ μναβTr(∂_μ V_ν∂_α V_β P), where P and V represent respectively the pseudoscalar and vector matrices in flavor space P = [ 1/√(2)π^0+1/√(6)η_8+1/√(2)η_1 π^+ K^+; π^- -1/√(2)π^0+1/√(6)η_8 +1/√(2)η_1 K^0; K^- K̅^0 -2/√(6)η_8+1/√(2)η_1 ], V = [ 1/√(2)ρ^0_μ+1/√(2)ω_μ ρ_μ^+ K_μ^*+; ρ_μ^- -1/√(2)ρ_μ^0+1/√(2)ω_μ K_μ^*0; K_μ^*- K̅^*0_μ ϕ_μ ]. We assume the ideal mixing of the isoscalar vector meson singlet and octet. For η and η', we define them in terms of the pseudoscalar octet η_8 and singlet η_1: η = η_8 cosθ_P - η_1sinθ_P, η' = η_8 sinθ_P + η_1cosθ_P, with mixing angle θ_P = -17^∘ taken from Ref. <cit.>. Note that the universal coupling constant is given as g=g_PPV=g_VVV due to the hidden local symmetry. As done in the previous works <cit.>, we consider the five different isoscalar axial-vector channels coupled to the πρ channel, which are relevant to the h_1 mesons, i.e., the ηω, KK̅^*, ηϕ, η'ω, and η'ϕ channels. Thus, the kernel matrix is now expressed as 𝒱 = [ 𝒱_πρ→πρ 𝒱_ηω→πρ 𝒱_KK̅^*→πρ 𝒱_ηϕ→πρ 𝒱_η'ω→πρ 𝒱_η'ϕ→πρ; 𝒱_πρ→ηω 𝒱_ηω→ηω 𝒱_KK̅^*→ηω 𝒱_ηϕ→ηω 𝒱_η'ω→ηω 𝒱_η'ϕ→ηω; 𝒱_πρ→ KK̅^* 𝒱_ηω→ KK̅^* 𝒱_KK̅^*→ KK̅^* 𝒱_ηϕ→ KK̅^* 𝒱_η'ω→ KK̅^* 𝒱_η'ϕ→ KK̅^*; 𝒱_πρ→ηϕ 𝒱_ηω→ηϕ 𝒱_KK̅^*→ηϕ 𝒱_ηϕ→ηϕ 𝒱_η'ω→ηϕ 𝒱_η'ϕ→ηϕ; 𝒱_πρ→η'ω 𝒱_ηω→η'ω 𝒱_KK̅^*→η'ω 𝒱_ηϕ→η'ω 𝒱_η'ω→η'ω 𝒱_η'ϕ→η'ω; 𝒱_πρ→η'ϕ 𝒱_ηω→η'ϕ 𝒱_KK̅^*→η'ϕ 𝒱_ηϕ→η'ϕ 𝒱_η'ω→η'ϕ 𝒱_η'ϕ→η'ϕ; ] Each matrix element 𝒱_fi is obtained from the corresponding tree-level Feynman diagram illustrated in Fig. <ref>. Note that we do not have any pole diagrams in the s-channel, which indicates that the h_1(1170) and h_1(1415) mesons will be dynamically generated. However, the h_1(1595) meson requires the s-channel pole digram, which we will discuss later its physical implications. The Feynman amplitudes for the tree-level diagrams are evaluated from the effective Lagrangians given in Eq.(<ref>). Imposing flavor SU(3) symmetry for the coupling constants, we find that the following kernel amplitudes become null: 𝒱_πρ→ηϕ, 𝒱_πρ→η' ϕ, 𝒱_ηω→ηϕ, 𝒱_ηω→η'ϕ, 𝒱_ηϕ→η'ω and 𝒱_η'ω→η'ϕ. 
The amplitudes of the meson exchange diagrams shown in Fig. <ref> from left to right are written as 𝒜_P^u(p',p) = -IS g^2_PPV F^2(p,p') (2p_2-p_3)·ϵ^* 𝒫(p_1-p_4) (2p_4-p_1)·ϵ, 𝒜_V^u(p',p) = -IS g^2_ PVV/m_V^2 F^2(p,p') ε_μναβ p_3^μϵ^ *ν(p_3-p_2)^α 𝒫^βδ(p_1-p_4) ε_γσηδp_1^γϵ^σ(p_1-p_4)^η, 𝒜_V^t(p',p) = -IS g^2_PPV F^2(p,p') (p_2+p_4)^μ𝒫_μν (p_1-p_3) ×[(2p_1-p_3)·ϵ^*ϵ^ν+(2p_3-p_1) ·ϵϵ^*ν-ϵ·ϵ^*(p_1+p_3)^ν], where the IS factor is related to the corresponding SU(3) Clebsch-Gordan coefficient and isospin factor. In Table <ref>, we list the values of the IS factors for all relevant processes. For the coupling constants, we use the values for the coupling constants: g_PPV^2/4π=0.72 and g_PVV^2/4π=1.88 from the previous works <cit.>. The propagators for the spin-0 and spin-1 mesons are expressed by 𝒫(p) = 1/p^2-m^2, 𝒫_μν(p) = 1/p^2-m^2(-g_μν+p_μ p_ν/m^2), where m denotes the mass corresponding to the exchange meson. As done in the previous works <cit.>, we have turned off the energy-dependence in the denominator of the propagator. Since hadrons have finite sizes, we introduce a form factor at each vertex. We employ the following parametrization <cit.>: F(p,p') = (nΛ^2-m^2/nΛ^2+p^2+p'^2)^n, where n is the power of the form factor. The form given in Eq. (<ref>) has a notable advantage: the value of Λ remains constant regardless of change in n. As n approaches infinity, Eq. (<ref>) converges to a Gaussian form. In most cases, we use n=1. However, we need to use n=2 for vector-meson exchange, because they have stronger momentum dependence. While the cut-off masses Λ in Eq. (<ref>) are experimentally unknown for the current hadronic processes, we minimize associated uncertainties using the following strategy as done in the previous works <cit.>: we choose Λ by adding approximately (500-700) MeV to the corresponding masses of the exchange meson. Consequently, we define the cutoff mass as Λ=Λ_0+m. We set Λ_0 to be 600 MeV for most cases, as listed in Table. <ref>. However, to fit the total cross section for πρ scattering, we need to use larger values of Λ_0 for some of the vector-meson exchange diagrams, as shown in Table <ref>. To compute the coupled integral equations, we utilize a partial wave decomposition, transforming the equation into a one-dimensional integral equation as follows 𝒯^J(fi)_λ'λ (p',p) = 𝒱^J(fi)_λ'λ (p',p) + 1/(2π)^3∑_k,λ_k∫q^2dq/2E_k1E_k2𝒱^J(fk)_λ'λ_k(p', q)E_k/ s-E_k^2𝒯^J(ki)_λ_kλ (q,p). Here, λ', λ and λ_k denote the relative helicity of the final, initial and intermediate two-body states, respectively. Their corresponding momenta are represented by p', p and q, respectively. The partial-wave component is given by 𝒱^J(fi)_λ'λ(p',p) = 2π∫ d( cosθ) d^J_λ',λ(θ) 𝒱^fi_λ'λ(p',p,θ), where θ stands for the scattering angle, and d^J_λ', λ are the Wigner d-matrices. To obtain the transition amplitudes numerically, it is crucial to deal with the singularities in the BbS propagator. The one-dimensional integral, free from energy singularities, takes the form 𝒯^fi_λ'λ (p',p) = 𝒱^fi_λ'λ (p',p) + 1/(2π)^3∑_k,λ_k[∫_0^∞dqqE_k/E_k1E_k2ℱ(q) -ℱ(q̃_k)/s-E_k^2+ 1/2√(s)(ln|√(s)-E_k^thr/√(s) +E_k^thr|-iπ)ℱ (q̃_k)], where q̃_k represents the momentum when E_k1+E_k2=√(s) and ℱ(q) is defined as ℱ(q)=1/2q 𝒱^fk_λ'λ_k(p', q)𝒯^ki_λ_kλ(q, p). Regularization is applied only when the total energy √(s) surpasses the threshold energy of the k-th channel, denoted as E_k^thr. 
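As an aside, the structure of such a partial-wave equation can be illustrated with a toy single-channel solver: discretizing T = V + ∫ dq V G T on a Gauss-Legendre grid turns it into a linear system solved by matrix inversion. The kernel, the schematic propagator, and the below-threshold energy in the sketch below are illustrative assumptions; they do not reproduce the BbS propagator, the helicity structure, or the singularity regularization described above.

```python
import numpy as np

def solve_t_matrix(v_func, g_func, q_max=4.0, n_grid=64):
    """Solve T(p',p) = V(p',p) + sum_q w_q V(p',q) G(q) T(q,p) by matrix inversion.

    v_func : toy real single-channel kernel V(p', q)
    g_func : toy two-body propagator G(q) evaluated on the grid
    """
    # Gauss-Legendre nodes/weights mapped from [-1, 1] to [0, q_max]
    x, w = np.polynomial.legendre.leggauss(n_grid)
    q = 0.5 * q_max * (x + 1.0)
    wq = 0.5 * q_max * w

    V = np.array([[v_func(qi, qj) for qj in q] for qi in q])
    G = np.diag(wq * g_func(q))
    T = np.linalg.solve(np.eye(n_grid) - V @ G, V)   # (1 - V G) T = V
    return q, T

# toy ingredients: a Yukawa-like kernel and a schematic two-body propagator,
# evaluated below threshold so that no on-shell singularity appears on the grid
sqrt_s, mass = 1.2, 0.7
v = lambda p, q: -1.0 / ((p - q) ** 2 + 0.5 ** 2)
g = lambda q: q ** 2 / (sqrt_s - 2.0 * np.sqrt(mass ** 2 + q ** 2))
q_grid, T = solve_t_matrix(v, g)
print(T[0, 0])
```

Above threshold, the same linear-system approach applies once the pole of the propagator has been handled, which is precisely what the subtraction and logarithmic term in the regularized equation above accomplish.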
Notably, the transition amplitudes derived from these equations can be analytically continued to the complex energy plane directly, as the energy singularities have been eliminated. Moreover, Eq. <ref> enables the use of the matrix inversion method to compute the transition amplitudes. The partial-wave transition amplitudes are then obtained by transforming the helicity basis into the LSJ basis. For the specific case of pseudoscalar and vector meson scattering, this transformation is represented as 𝒯^JS_L'L = √((2L+1)(2L'+1))/2J+1∑_λ'λ(L'0Sλ'|Jλ') (1λ' 00|1λ')(L0Sλ|Jλ) (1λ 00|Sλ)𝒯^J_λ', λ. Here, (j_1m_1j_2m_2|JM) denotes the Clebsch-Gordan coefficient. So far, we have focused on using only the t and u channel exchange diagrams. However, it is necessary to include the pole diagram for the h_1(1595) meson in the s channel. It is essential to improve the results in comparison with the experimental data on πρ scattering. A similar approach was demonstrated in the case of ππ scattering <cit.>, where the explicit inclusion of the f_0(1400) pole was necessary to reproduce the experimental phase shift of the scalar-isoscalar channel in the energy region below 2 GeV. The presence of f_0(1400) influenced both the high-energy region and the structure below 1 GeV. Similarly, we incorporate the s-channel pole diagram for h_1(1595) in πρ scattering, as illustrated in Fig. <ref>. The interaction vertices are determined by the following effective Lagrangian ℒ_h_1πρ = g_h_1πρ/m_h_1^0(∂_μρ_ν-∂_νρ_μ) π∂^μ h_1^ν. The bare mass and coupling of h_1(1595) are set to be m_h_1(1595)^(0)=1320 MeV and g_h_1(1595)πρ^2/4π=0.54. Additionally, we employ the form factor in the s channel F_s(p) = (Λ^2+(m_h_1^0)^2/Λ^2 +p^2)^4, to ensure that the contribution of the pole diagram in the high momentum region is negligible. Consequently, the pole diagram has a small impact on the dynamical generation of the h_1 meson. We set the cutoff mass, denoted as Λ, to be 1920 MeV, following the previously mentioned rule. As a result, the dressed mass and width of h_1(1595) evaluated in the complex energy plane match the PDG average value <cit.>. We found the u-channel contribution to the transition amplitude to be negligible and omitted it for simplicity. The t-channel diagram is not allowed due to G parity and isospin symmetry. The necessity of including the pole diagram suggests that the h_1(1595) may be predominantly a genuine qq̅ state. § RESULTS AND DISCUSSIONS §.§ h_1 resonances The ground state of the h_1 axial-vector meson was first observed in the 3π mass spectra of the charge exchange reaction π^- p →π^+π^-π^0 n <cit.>. Its existence was later confirmed by only one other experiment using the same reaction <cit.>. According to the PDG, there are two excited states of h_1 meson: h_1(1415) and h_1(1595) <cit.>. The latter was observed in the ηω mass distribution <cit.>, while the former exhibits an intriguing structure similar to the renowned Λ (1405). Initially, referred to as h_1(1380), subsequent experimental studies determined its mass to be approximately 1.42-1.44 GeV, leading to its renaming as h_1(1415). However, a very recent experiment again suggests that the mass of h_1(1415) is roughly 1.38 GeV. Currently, there is no explanation for the conflicting mass measurements of h_1(1415). Interestingly, h_1(1380) is located just below the KK̅^* threshold, whereas the renamed h_1(1415) is found to be above KK̅^* threshold. This implies that h_1(1380) (h_1(1415)) may have a two-pole structure. 
In this work, we will examine each h_1 meson within the coupled-channel framework. Specifically, we will demonstrate that the conflicting mass measurements of h_1(1415) can be explained by a two-pole structure. In Ref. <cit.>, two axial-vector resonances, a_1 and h_1, were identified in the charge exchange reaction. Previously, we analyzed experimental data on the former one, and in this current study, we extend our analysis to the latter one, keeping values of the parameters the same as the earlier investigation <cit.>. The assumption of mixing between η and η' alters the isospin value of η exchange in the KK̅^* elastic channel. However, the mixing has a negligible effect on the previous results for the a_1 meson, given the fact that η exchange barely contributes to KK̅^* elastic scattering. To describe the experimental data in the isoscalar channel, we relate the total cross section to the transition amplitude 𝒯 using a constant factor C, expressed as σ≡ -C Im[𝒯_πρ(M_πρ) ], where C has a different value from that in the isovector channel due to the absorption of the isospin factor into C. The pole diagram for h_1(1595) comes into essential play in enhancing the width of h_1(1170), resulting in good agreement with the experimental data. The total cross section for πρ scattering in the isoscalar channel clearly reveals the resonance of the h_1(1170) meson, of which the mass and width are determined to be m_h_1(1170) = (1.19±0.06) GeV and Γ = (0.32± 0.05) GeV, respectively <cit.>. The present results describe the experimental data well. We also compare them with those in Ref. <cit.> that utilized a single πρ channel including explicitly the h_1(1170) pole diagram. As shown in Fig. <ref>, the single channel with the explicit pole of the h_1(1170) is insufficient to describe the experimental data, specifically the h_1(1170) peak structure. In contrast, we find that the h_1(1595) pole diagram has an important contribution to explain the h_1(1170) resonance. Moreover, various coupled channels play essential roles in dynamically generating the h_1 (1170) resonance. The dynamical generation of h_1 (1170) has also been observed in Ref. <cit.>. So, we conclude that the h_1(1170) does not solely originate from the qq̅ state but contains a substantial component of the molecular state. Furthermore, we predict a small peak structure near the KK̅^* threshold, which is barely seen in the experimental data. To identify dynamically generated resonances in the current approach, we examine the T amplitude in the second Riemann sheet. Since the T amplitude generated by Eq. (<ref>) is a meromorphic function in the complex energy plane, we can directly identify the h_1 resonances in the second Riemann sheet. In Fig. <ref>, the contour plot of the modulus of T clearly exhibits the existence of four poles. The first pole below the ηω threshold is positioned at (1080-i118) MeV, so its mass and width are respectively given as 1080 MeV and 236 MeV. The value of the width given in the PDG is (375± 35) MeV <cit.>, which was taken from the fit by using the Bowler model. Compared with it, the current result is smaller than the empirical data. In Ref. <cit.>, the corresponding pole for h_1(1170) was found at √(s)=(919-i17) MeV, which deviates significantly from the experimental data. The second and third poles are located at (1387-i6) MeV and (1452-i51) MeV, respectively. These two poles are related to h_1(1415), once called as h_1(1380). 
Since the second pole lies just below the KK̅^* threshold, it may be considered as the KK̅^* molecular state in the isoscalar channel. Interestingly, its width is very small. The third pole is located at about 60 MeV above the KK̅^* threshold, and its width is 102 MeV. While the average value from the PDG is given as (78± 11) MeV, the experimental data on its width ranges from (66± 10_-10^+12) MeV <cit.> to (170± 80) MeV <cit.>. Thus, the width of h_1(1415) should be measured more precisely. If h_1(1415) has a two-pole structure, its width may be determined more precisely. Finally, the fourth pole is observed at (1603-i158) MeV that corresponds to the h_1(1595) meson, placed above the ηϕ threshold. Note that we have included the pole diagram for h_1(1595) in the s channel with the bare mass m_h_1(1595)^(0)=1320 MeV. It is dressed to be 1603 MeV, as the pole diagram for h_1(1595) is “renormalized” by the coupled integral equations. When considering only the πρ elastic channel, the dressed mass is smaller than the bare one. However, the pole diagram gains additional mass after being coupled to all channels. A similar effect is observed in ππ elastic scattering, where the scalar-isoscalar meson f_0(1400) has a bare mass of 1300 MeV with the scalar coupling, and after coupling to other channels, the physical mass becomes 1400 MeV <cit.>. While a recent study shows that the h_1(1595) is a ground state of a light tetraquark state <cit.>, the current result suggests that the structure of the h_1(1595) has a large component of the qq̅, being dressed by various channels. The coupling strengths of the h_1 resonances can be derived from the residues of the 𝒯 matrix, defined as ℛ_a,b := lim_s→ s_R(s-s_R) 𝒯_a,b/4π=g_ag_b. It is impossible to determine the signs of the coupling strengths, so we choose the coupling strengths to the πρ channel to be positive. Then, we can determine the relative signs for other coupling strengths. Table <ref> lists the results for the coupling strengths to all possible channels. The first resonance, h_1(1170), couples most strongly to the πρ channel, which is in line with the experimental observations of the h_1(1170)→πρ decay <cit.>. The next strongest coupling comes from the ηϕ channel. Considering that the ηϕ threshold energy is around 1570 MeV, it is remarkable that the ηϕ channel is the second most dominant one for h_1(1170). This indicates that the h_1(1170) contains a significant ss̅ component, resembling the case of dynamically generated a_1(1260) and b_1(1235) <cit.>. The ηϕ channel couples most strongly to the second and third resonances, located at (1387-i6) MeV and (1452-i51) MeV, which are considered to be h_1(1415). As depicted in Fig. <ref>, one can clearly observe the strongest peak structure below the KK̅^* threshold in the squared modulus of the ηϕ→ηϕ transition amplitude multiplied by q_πρ, q_πρ|T|^2, where q_πρ is the magnitude of the momentum of the πρ system. Interestingly, we also have nonvanishing πρ→ηϕ transition amplitudes, as shown in Fig. <ref>. Although we impose the flavor SU(3) symmetry, which results in the absence of the πρ→ηϕ kernel amplitude, the πρ→ηϕ transition amplitude is generated through the KK̅^* intermediate states, revealing a resonance structure below the KK̅^* threshold. As expected, the second resonance has a strong coupling strength to the KK̅^* channel (see also the dash-dotted curve in Fig. <ref>). On the other hand, the coupling strengths to the S-wave ηω, KK̅^*, η'ω, and η'ϕ channels are of the same order. 
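As a numerical aside, a residue of the type defined above can be extracted from any meromorphic amplitude by a small contour integral around s_R; the toy amplitude below, a single pole plus a smooth background, is an illustrative assumption rather than the actual coupled-channel 𝒯 matrix.

```python
import numpy as np

def residue_over_4pi(T, s_pole, radius=1e-3, n_points=400):
    """Estimate lim_{s -> s_R} (s - s_R) T(s) / (4 pi) via a contour integral."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)
    s = s_pole + radius * np.exp(1j * theta)
    ds = 1j * radius * np.exp(1j * theta) * (2.0 * np.pi / n_points)
    return np.sum(T(s) * ds) / (2j * np.pi) / (4.0 * np.pi)

# toy amplitude: pole at s_R with residue 4*pi*g_a*g_b plus a smooth background
s_R = (1.452 - 0.051j) ** 2          # illustrative pole position in GeV^2
g_a_g_b = 0.3 - 0.1j                 # illustrative product of couplings
T_toy = lambda s: 4.0 * np.pi * g_a_g_b / (s - s_R) + 0.2 * s
print(residue_over_4pi(T_toy, s_R))  # recovers approximately (0.3 - 0.1j)
```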
It is worth noting that the D-wave πρ channel has a sizable coupling strength. As for the h_1(1595), many channels are strongly coupled to it. As explained earlier, the mass of h_1(1595) would have been less than the bare mass had we considered the πρ channel only. As shown in Table <ref>, the πρ, ηω, KK̅^*, and η'ω channels contribute to the generation of the h_1(1595) meson. §.§ Two-pole structure of the h_1(1415) Table <ref> presents the mass of the h_1(1415) as measured by various experiments. The LASS Collaboration initially detected a signal of h_1(1415) in the KK̅π system using the K^-p→ K_S^0K̅πΛ reaction <cit.>. The data exhibited an enhancement near 1.4 GeV, diminishing rapidly as the energy increased. Using the Breit-Wigner parameterization, the mass and width were determined to be M=(1380± 20) MeV and Γ=(80± 30) MeV, respectively. Subsequent experiments, however, reported different results. The Crystal Barrel Collaboration <cit.> performed proton annihilation to produce the KK̅π system and discovered the h_1 meson above the KK̅^* threshold, with a mass of M=(1440± 60) MeV and a width of Γ=(170± 80) MeV. Two experimental results from the BESIII Collaboration, obtained from different charmonium state decays, confirmed these findings with a slightly lower mass. It is noteworthy that interference influences the determination of the mass and width of the h_1(1415) <cit.>, and these results align with those of the Crystal Barrel experiment. In a recent experiment conducted by the BESIII Collaboration <cit.>, the mass and width of the h_1(1415), measured in the γη' invariant mass, were found to be consistent with those obtained by the LASS Collaboration but contradictory to the two previous BESIII results. This discrepancy strongly suggests that the h_1(1415) does not originate from a single pole, indicating a more complex structure. Evidently, this structure cannot be explained by the quark model alone. Consequently, based on the results of this study, we propose a two-pole structure to account for the discrepancy in the measurements of the h_1(1415) meson mass. The current work reveals two poles around 1.4 GeV, one at (1387-i6) MeV and the other at (1452-i51) MeV, as shown in Table <ref>. Figure <ref> demonstrates how these two poles appear in the complex energy plane. The lower pole is located close to the KK̅^* threshold, while the higher pole is situated 80 MeV above it. Due to its narrowness, detecting the lower pole experimentally may pose a challenge. Examining the coupling strengths in Table <ref>, we observe that both poles couple strongly to open-strange and hidden-strange channels. However, the higher pole couples to the ηϕ channel far more strongly than to other channels. This suggests that the higher pole might be an ηϕ molecular state, while the lower pole, being located very close to the KK̅^* threshold, could be a KK̅^* molecular state. Notably, the g_KK̅^* for the higher pole has a larger imaginary part than its real part, causing destructive interference. This results in the disappearance of the higher pole in KK̅^* elastic scattering, a feature also observed in the case of the higher pole of the b_1 meson in a previous study <cit.>. This characteristic is the essential clue to explaining the absence of the higher pole in the LASS Collaboration experiment. Based on this analysis, we propose an explanation for the conflicting mass measurements of h_1(1415). As shown in Fig.
<ref>, the crucial aspect is that the higher pole vanishes in KK̅^* elastic scattering, leaving only the KK̅^* threshold enhancement. The LASS Collaboration investigated the h_1(1415) in the K^-p→ K_S^0K̅πΛ process, which can effectively be represented as KK̅^* elastic scattering. They observed a threshold enhancement near the KK̅^* threshold, aligning with the current results. In contrast, the Crystal Barrel <cit.> and BESIII experiments <cit.> measured the KK̅π state as a final state, which can be considered as the KK̅^* state. Consequently, both collaborations measured the h_1(1415) above the KK̅^* threshold. The most recent measurement from the BESIII Collaboration comes from the J/ψ→γη'η' decay. In conclusion, we propose that the h_1(1415) can be constructed from two different resonances: the h_1(1380) and h_1(1440) mesons, with the higher pole vanishing in KK̅^* elastic scattering. This two-pole structure accounts for the seemingly conflicting experimental results. § SUMMARY AND CONCLUSIONS In this work, we investigated the isoscalar axial-vector h_1 mesons using a coupled-channel formalism. We first constructed kernel amplitudes from meson-exchange diagrams in the t- and u-channels, derived from effective Lagrangians based on hidden local symmetry. In addition, we introduced the pole diagram for h_1(1595) to generate the h_1(1595) resonance. We found that the h_1(1595) pole diagram also has a significant effect on the generation of h_1(1170). Six channels were incorporated: πρ, ηω, KK̅^*, ηϕ, η'ω, and η'ϕ. We solved the off-shell coupled integral equations and discussed the dynamical generation of the h_1(1170). The present analysis revealed two poles at (1387-i6) MeV and (1452-i51) MeV, exhibiting a two-pole structure of the h_1(1415) meson. This two-pole structure may resolve the discrepancy in the experimental data on the mass of h_1(1415). The results showed that the lower pole couples strongly to the KK̅^* channel, while the higher pole couples predominantly to the ηϕ channel. This provides an essential clue to understanding the nature of h_1 mesons and explains possible discrepancies in the mass of h_1(1415). The two-pole structure of h_1(1415) can account for the seemingly conflicting experimental results, with the lower pole corresponding to h_1(1380) and the higher pole to h_1(1440). Notably, the higher pole vanishes in KK̅^* elastic scattering, which explains why some experiments observe only the lower pole. The authors are grateful to Terry Mart for valuable discussions. Part of the work was done at the Department of Physics, University of Indonesia. HCK wants to express his gratitude to Tetsuo Hyodo, Makoto Oka, and Qian Wang for valuable discussions. The present work is supported by Inha University Research Grant in 2024 (No.73014-1).
http://arxiv.org/abs/2409.02827v1
20240904154936
Muon $g$$-$$2$: blinding for data-driven hadronic vacuum polarization
[ "Alexander Keshavarzi", "Daisuke Nomura", "Thomas Teubner", "Aidan Wright" ]
hep-ph
[ "hep-ph", "hep-ex" ]
LTH 1383 Department of Physics and Astronomy, The University of Manchester, Manchester M13 9PL, U.K. Department of Radiological Sciences, International University of Health and Welfare, Tochigi 324-8501, Japan Department of Mathematical Sciences, University of Liverpool, Liverpool L69 3BX, U.K. Department of Mathematical Sciences, University of Liverpool, Liverpool L69 3BX, U.K. § ABSTRACT The KNT(W) data-driven determinations of the hadronic vacuum polarization (HVP) are crucial inputs to previous and future Standard Model (SM) predictions of the muon's anomalous magnetic moment, a_μ. With the muon g-2's new physics case uncertain due to disagreeing HVP evaluations, new SM predictions and experimental measurements of a_μ expected soon, and a complete revamp of the KNTW analysis framework underway, this letter motivates and describes a blinding scheme for data-driven HVP determinations that has been implemented for future KNTW analyses. Muon g-2: blinding for data-driven hadronic vacuum polarization Aidan Wright September 9, 2024 =============================================================== Introduction - The anomalous magnetic moment of the muon, a_μ, and its potential for discovering new physics stand at a crossroads. The accuracy and precision of the Standard Model (SM) prediction, a_μ^ SM <cit.>, relies on resolving significant tensions in evaluations of the hadronic vacuum polarization (HVP) contributions, a_μ^ HVP. Data-driven evaluations of the HVP using e^+e^-→ hadrons cross section data as input <cit.> result in a value for a_μ^ SM that is ∼5σ below the most recent experimental measurement from the Muon g-2 Experiment at Fermilab, a_μ^ exp <cit.>. With an unprecedented 200 parts-per-billion (ppb) precision <cit.>, confirmation of previous measurements <cit.>, and final results (expected in 2025) projected to improve the experimental precision by another factor of two, the measurements of a_μ appear to be on solid ground.[Alternative future measurements of a_μ are also planned at J-PARC <cit.> and PSI <cit.>.] However, high-precision lattice QCD calculations (incorporating QED corrections) <cit.> and the most recent experimental measurement of the dominant e^+e^-→π^+π^- cross section from the CMD-3 experiment <cit.> result in independent, but consistent values for a_μ^ HVP that are >4σ larger than previous data-driven evaluations. They therefore generate values for a_μ^ SM that are consistent with a_μ^ exp and support a no-new-physics scenario in the muon g-2, whilst leaving an unexplained discrepancy with the vast catalogue of previously measured hadronic cross section data. The KNT <cit.> (now KNTW) data-driven determinations of a_μ^ HVP are crucial inputs to previous and future community-approved predictions for a_μ^ SM from the Muon g-2 Theory Initiative <cit.>. With multiple, independent lattice QCD evaluations of a_μ^ HVP becoming significantly competitive only in recent years, it was one of only a few data-driven HVP evaluations <cit.> which exclusively formed the value for a_μ^ HVP used in the SM prediction that exhibits the ∼ 5σ discrepancy with a_μ^ exp <cit.>. Future SM predictions are expected to incorporate both lattice QCD and updated data-driven evaluations, with KNTW being a key input to the latter. An alternative approach to determine a_μ^ HVP by experimentally measuring the spacelike vacuum polarization is under preparation at the MUonE Experiment <cit.>. 
The KNTW procedure for evaluating the total hadronic cross section and a_μ^ HVP (plus other precision observables which depend on hadronic effects) is undergoing a major overhaul and modernization of the analysis framework. The aim of this revamp is to make use of sophisticated analysis tools, perform new evaluations of various contributions, incorporate handles in the analysis structure that result in flexible and robust ways to test various systematic effects, improve determinations of corresponding systematic uncertainties and ultimately produce a new state-of-the-art in the determination of these quantities. These changes will be described in detail in the next full KNTW update. Such future data-driven evaluations of a_μ^ HVP depend largely on new experimentally measured hadronic cross section data, particularly for the π^+π^- final state. These require increased precision and a more robust understanding of higher-order radiative corrections, which are currently being studied in detail within the STRONG-2020 program <cit.> and The RadioMonteCarlow 2 Effort <cit.> (see also e.g. <cit.>). Whilst a discussion of these improvements is outside the scope of this letter, such future results have been announced from the BaBar <cit.>, Belle II <cit.>, BESIII <cit.>, CMD-3 <cit.>, KLOE <cit.> and SND <cit.> experiments within the next few years. These new measurements could either fundamentally adjust the previous data-driven evaluations of a_μ^ HVP to bring them more in line with e.g. the recent CMD-3 π^+π^- measurement or make the current tensions even worse if new measurements confirm lower cross section values with increased precision. Importantly, and as will be discussed in the next section, analysis choices in how to use these data can produce significantly different results. With this being the case, the future of a_μ^ HVP and a_μ^ SM being so uncertain, and the crossroads in the current tensions ultimately suggesting either a discovery of new physics or a multi-method confirmation of the SM, analysis blinding for data-driven determinations of the HVP is now paramount. This is compounded by the fact that all other critical inputs: results from the Muon g-2 Experiment at Fermilab, lattice QCD calculations, and future e^+e^-→ hadrons cross section measurements, are blind analyses. As such, this letter motivates and describes the first blinding scheme for data-driven HVP determinations that has been implemented for future KNTW analyses. Blinding motivation - In a given data-driven analysis, e^+e^-→ hadrons cross section data from different experiments are combined in a statistically robust procedure. Many measurements can exist for each distinct hadronic final state (channel), each with different features: energy range, measurement technique, luminosity normalization, energy binning in √(s), treatment of radiative corrections, prescription for provided experimental uncertainties, etc. The combination procedure then largely derives from four stages: (1) ensuring different data are consistent before combining, (2) defining a new energy binning onto which the input data will be combined, (3) combining the data in a fit procedure that is weighted in some form by the experimental uncertainties, and (4) applying additional systematic uncertainties arising from the combination procedure to the combined cross section. Derived quantities such as a_μ^ HVP are calculated from the combined data. 
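As a schematic illustration of stages (2) and (3), the sketch below rebins two toy measurements onto a common energy grid and combines them with inverse-variance weights; real combinations use correlated uncertainties, iterative fitting, and further systematic treatments, so this uncorrelated-uncertainty caricature is illustrative only and does not represent the KNTW procedure.

```python
import numpy as np

def rebin(sqrt_s, sigma, err, target_edges):
    """Average one measured cross section onto target bins (uncorrelated errors only)."""
    centers, values, errors = [], [], []
    for lo, hi in zip(target_edges[:-1], target_edges[1:]):
        mask = (sqrt_s >= lo) & (sqrt_s < hi)
        if not mask.any():
            continue
        w = 1.0 / err[mask] ** 2
        centers.append(0.5 * (lo + hi))
        values.append(np.sum(w * sigma[mask]) / np.sum(w))
        errors.append(np.sqrt(1.0 / np.sum(w)))
    return np.array(centers), np.array(values), np.array(errors)

def combine(meas_a, meas_b):
    """Inverse-variance combination of two rebinned measurements on the same grid."""
    (c, va, ea), (_, vb, eb) = meas_a, meas_b   # assumes both cover every target bin
    w_a, w_b = 1.0 / ea ** 2, 1.0 / eb ** 2
    val = (w_a * va + w_b * vb) / (w_a + w_b)
    err = np.sqrt(1.0 / (w_a + w_b))
    return c, val, err

# toy data: two experiments measuring the same underlying cross section with different binning
rng = np.random.default_rng(4)
edges = np.linspace(0.6, 1.0, 21)                    # common target binning in sqrt(s) [GeV]
s1 = np.linspace(0.6, 1.0, 80); x1 = 1200 * np.exp(-((s1 - 0.775) / 0.07) ** 2)
s2 = np.linspace(0.6, 1.0, 50); x2 = 1200 * np.exp(-((s2 - 0.775) / 0.07) ** 2)
m1 = rebin(s1, x1 + rng.normal(0, 10, s1.size), np.full(s1.size, 10.0), edges)
m2 = rebin(s2, x2 + rng.normal(0, 15, s2.size), np.full(s2.size, 15.0), edges)
print(combine(m1, m2)[1][:5])
```

Even in this simplified setting, the choice of target bin width and of how the two inputs are weighted visibly changes the combined values near the resonance peak, which is the effect discussed in the following.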
Statistical comparisons can then be performed between the data combination and the input datasets for the cross section values and any calculated observables. All four stages depend on analysis choices. In (1), for example, combining the data for the extraction of a_μ^ HVP necessitates the cross section data to be undressed of vacuum polarization effects and to include final state radiation, requiring the analyzer to define a procedure to ensure corrections for either or both are applied to any data that are not in that form. In stage (2), analyzer freedom to choose the target bin centers in √(s), the bin width, and overall number of bins can lead to significantly different results, particularly in important hadronic resonance regions. For (3), the degree to which an analysis chooses to weight the fit/combination procedure by the experimental uncertainties, particularly regarding weighting with correlated uncertainties, can vastly change the influence of different data sets. Correspondingly, the resulting systematic uncertainties in (4) can depend on the choices in the earlier stages. Together, the choices in these stages can lead to different results for the mean values and uncertainties for both the combined cross section and any derived quantity like a_μ^ HVP. The combination procedure used in the previous KNT analyses are described in <cit.> (and earlier in <cit.>). In general, however, none of the data-driven evaluations of the HVP (e.g. <cit.>) can avoid making or already having made these choices and correspondingly different results are observed between different evaluations. Such differences were explored in detail in <cit.>. A prominent example is stage (3) for the dominant π^+π^- channel, where it has been observed that choosing to maximally weight the fitted cross section by the correlated uncertainties favors the three high-precision, highly-correlated, lower-valued measurements from the KLOE experiment <cit.> leading to an overall lower value of a_μ^π^+π^-, whilst choosing minimal weighting of correlated uncertainties favors the single high-precision, narrowly-binned, but higher-valued measurement from the BaBar experiment <cit.>. This effect is the main reason for the two global, data-driven HVP analyses that featured in <cit.>: KNT19 <cit.> and DHMZ19 <cit.>, yielding mean values for a_μ^π^+π^- that differ at the level of the final uncertainty (see Section 2.3.5 in <cit.>). The future presents the potential for even larger differences. Already the CMD-3 π^+π^- data <cit.> is several standard deviations higher than all other π^+π^- data and all new data-driven evaluations will have to include these data in their combinations. As mentioned previously, new higher-precision data could lead to even more exaggerated tensions, particularly for π^+π^- where they could favor either the CMD-3 result or the previous data. There is also potential for some future data-driven analyses to resurrect the use of hadronic τ-decay data <cit.> in their combinations and evaluations of a_μ^ HVP. These historically exhibit higher results for a_μ^ HVP that are more in-line with the CMD-3 data and lattice QCD evaluations but have been avoided in recent years due to an incomplete understanding of the necessary isospin breaking corrections <cit.>.[Efforts are ongoing to calculate these isospin breaking corrections using lattice QCD (see e.g. <cit.>) which could make the hadronic τ-decay data competitive within the next few years.] In general, all the above has the potential for analysis bias. 
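The sensitivity to correlated-uncertainty weighting noted above can already be seen in a two-point toy model: combining two measurements of a single quantity with and without a fully correlated normalisation term in the covariance moves the combined mean well outside the naive average. The numbers below are invented and say nothing about any particular experiment; they only illustrate why the choice made in stage (3) matters.

```python
import numpy as np

def gls_mean(x, cov):
    """Generalised-least-squares combination of measurements x of one quantity."""
    w = np.linalg.solve(cov, np.ones_like(x))
    return float(w @ x / w.sum())

x = np.array([100.0, 110.0])            # two fictitious measurements
stat = np.diag([1.0, 1.0])              # uncorrelated statistical covariance
corr = 0.05**2 * np.outer(x, x)         # fully correlated 5% normalisation part

print(gls_mean(x, stat))                # 105.0 : plain inverse-variance average
print(gls_mean(x, stat + corr))         # ~93.3 : correlated weighting pulls the mean down
```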
In the face of significant tensions from current and possibly future data, this applies to any previously decided analysis choices and any new ones, as either may consciously or unconsciously bias a data-driven analysis towards one result or another. Given that the new physics case in a_μ rests upon resolving the tensions in a_μ^ HVP which are currently reflected in the e^+e^-→ hadrons data, the future of its data-driven determinations requires unbiased results achieved through fully blind analyses that have re-evaluated all choices and any corresponding systematic uncertainties. Blinding procedure - Blinding an analysis of already measured data presents individual challenges. Retrospectively blinding publicly available experimental data and corresponding results is impossible as many experimental analyses make comparisons with previous data and calculate quantities like a_μ^ HVP. The aim is therefore to implement a robust procedure that meaningfully blinds the outputs of any data combinations without introducing new biases to or interfering with the data combination methodology. Blinding by adjusting the input data that enter a combination is consequently avoided.[The ability to scrutinize input datasets when not combining or comparing to a combination is retained.] Instead, offsets are applied to the combined cross section to blind all visual and other outputs of the analysis, namely plots of the combined data and any values derived from them (e.g. a_μ^ HVP). Importantly, no combined data are saved without blinding offsets applied. Defining the experimentally measured hadronic cross section as σ_ had(s) ≡ e^+e^-→ hadrons, the offsets adjust the cross section for a given hadronic channel i as σ^ blind_ had,i(s) = a_i b_i (s+s_0,i)^c_i σ_ had,i(s) . A different set of offsets for each channel i ensures that a change in one channel cannot be disentangled by knowledge of a change in another. The subscript i is suppressed in the following for simplicity of notation. The offsets a, b, c, and s_0 apply an amalgamation of the following: change in overall sign (a), an energy-independent multiplicative scale factor to conceal changes in size (b), power adjustments to the energy-dependence (c) and additive adjustments to the energy-dependence (s_0). In a given channel, the data used to calculate derived values and those entering plots are blinded with different offsets. This provides additional relative blinding between plots of the data and any derived quantities. To compare a resulting data combination with its input datasets for any derived quantity or plot, the input datasets are adjusted by the same blinding scheme only for that comparison. This allows for analysis of the combination under blinding (see e.g. Fig. <ref>). The unknown energy-dependent adjustments, which are different for derived quantities and plots, ensure that no comparison of a combination with input datasets can result in full knowledge of the influence of that dataset on a combination or consequent extraction of the blinding offsets. No input datasets are saved with blinding offsets applied. At the level of the individual channels, the generic dispersion integral used to extract the blinded value of the HVP contribution to an observable O_ HVP then has the form O^ blind_ HVP = ∫^s_ high_s_ low ds f(s) σ^ blind_ had(s) = a b ∫^s_ high_s_ low ds (s+s_0)^c f(s) σ_ had(s) , where f(s) is an energy-dependent kernel function. 
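As a minimal numerical sketch of the two relations above, the snippet below applies one set of offsets to a tabulated cross section and evaluates a blinded integral with a toy kernel. The flat cross section, the 1/s² kernel and the offset values are placeholders chosen for illustration; they are not drawn from the analysis' allowed ranges and do not correspond to any real data.

```python
import numpy as np

def blind_cross_section(s, sigma_had, a, b, c, s0):
    """Apply the blinding offsets described above: a * b * (s + s0)**c * sigma(s)."""
    return a * b * (s + s0) ** c * sigma_had

def blinded_integral(s, sigma_blind, kernel):
    """Trapezoidal estimate of the blinded dispersion integral  int ds f(s) sigma_blind(s)."""
    y = kernel(s) * sigma_blind
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(s)))

# Placeholder inputs only: a flat 40 nb cross section on 0.1 < s < 4 GeV^2,
# a schematic 1/s^2 kernel, and offset values invented for this illustration.
s = np.linspace(0.1, 4.0, 400)
sigma_had = np.full_like(s, 40.0)
kernel = lambda u: 1.0 / u ** 2

a, b, c, s0 = +1, 1.3, 0.05, 0.4
sigma_blind = blind_cross_section(s, sigma_had, a, b, c, s0)
print(blinded_integral(s, sigma_blind, kernel))
```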
Observables calculated as part of the KNTW analyses other than a_μ^ HVP include (but are not restricted to) the HVP contributions to the electron and tau g-2, the hadronic contributions to the running electromagnetic coupling, and the HVP contributions to the hyperfine splitting of muonium (see <cit.>). Whilst the σ_ had(s) is the same for each O_ HVP, the kernel functions f(s) are different and induce a different energy-dependent weighting. The blinding scheme effectively adjusts this energy-dependence by a different, unknown amount for each integrated O_ HVP. This not only adds another layer of blinding between each observable, but also has the benefit of removing possible bias towards a_μ^ HVP being the primary output of the analysis, putting more emphasis on the combined data as the primary product and figure of merit. A description and the allowed values for each offset are given in Table <ref>. The offset a is only applied to the data entering Eq. (<ref>). The allowed values for b have been chosen to not be too extreme and importantly avoid b=1, which would provide no overall scale factor. The values for c have been chosen to moderately adjust the energy-dependence without unreasonably distorting the shape of the hadronic cross section and to avoid c = 0, which would provide no adjustment. The additive s_0 provides an additional energy-dependent shift without which it would be known that no energy-dependent blinding due to c would be present at exactly s = 1 GeV^2. The offsets are set by a person (blinder) external to KNTW. The blinder is provided a software package which, when executed, asks the blinder to choose (and make a private note of) five blinding seeds known only to them. Four are for the offsets listed in Table <ref>. The fifth (r_5) results in an unknown integer in the range 1-100 that will offset the integer ID number of each hadronic channel, of which there are less than 100 in total. This ID number offset is concatenated to the beginning (for integrals) or end (for plots) of the seed for each of the other four offsets, resulting in distinct blinding for each channel and different blinding offsets for the data entering integrals and plots. Each seed can be any signed, 32-bit integer and initializes a random number generator for each offset that yields a value from the allowed ranges provided in Table <ref>. The package then automatically generates compiled and obfuscated software routines containing the offsets which the blinder provides to KNTW. In this way, neither the blinding offsets nor random seeds can be disentangled or reverse-engineered by accident or without the intention of removing the blinding entirely. To unblind the analysis, i.e. to remove the blinding offsets, requires correct input of the offsets on execution of the software. The blinding scheme has been devised to have two layers to allow for additional comparisons of results and systematic cross checks without fully unblinding. The unblinding will therefore happen in two stages. First, when the analyses and data combinations of all individual hadronic channels are complete, a relative unblinding will be performed where the blinder inputs only r_5 to the KNTW software. This removes the ID number offset for the individual channels, leaving a common blinding for all channels. In this relatively unblind stage, the final results are still concealed, but cross checks of the analysis can be performed by comparing the results from different hadronic channels under a common set of offsets. 
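A rough sketch of the seeding logic described above is given below: one random number generator per offset, seeded by the blinder's choices after concatenating the offset channel ID at the front (for integrals) or at the back (for plots). The offset ranges in the sketch are placeholders, since the actual allowed values of Table <ref> are deliberately not reproduced here, and the compilation and obfuscation performed by the real package is not modelled.

```python
import numpy as np

# Placeholder ranges only -- the genuine allowed values are those listed in the
# table referenced above and are deliberately not reproduced in this illustration.
def draw_offsets(seeds):
    """Map one signed 32-bit seed per offset to a set (a, b, c, s0)."""
    rng = {k: np.random.default_rng(v) for k, v in seeds.items()}
    return {
        "a":  int(rng["a"].choice([-1, +1])),      # overall sign (integrals only)
        "b":  float(rng["b"].uniform(0.5, 2.0)),   # scale factor; b = 1 is excluded in practice
        "c":  float(rng["c"].uniform(-0.1, 0.1)),  # power of (s + s0); c = 0 is excluded in practice
        "s0": float(rng["s0"].uniform(0.1, 1.0)),  # additive energy shift
    }

def channel_seeds(base_seeds, channel_id, id_offset):
    """Concatenate the offset channel ID to the front (integrals) or back (plots)."""
    tag = str(channel_id + id_offset)
    integrals = {k: int(tag + str(v)) for k, v in base_seeds.items()}
    plots = {k: int(str(v) + tag) for k, v in base_seeds.items()}
    return integrals, plots

# Usage with arbitrary seeds for a channel with ID 7 and an ID offset of 37:
base = {"a": 12345, "b": 678, "c": 910, "s0": 11}
ints, plts = channel_seeds(base, channel_id=7, id_offset=37)
print(draw_offsets(ints))   # offsets applied to integrated quantities
print(draw_offsets(plts))   # different offsets applied to plotted data
```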
Once these final cross checks are complete, the KNTW analysis will be frozen, and no further changes will be made. Only then will the blinder be asked to input all offsets and fully unblind the analysis. Once an analysis is unblind, a new analysis (i.e. to incorporate significant changes or new hadronic cross section data) will require a blinder to repeat the process and introduce new blinding offsets to the new KNTW analysis. Example implementation - As an example, consider the blinder choosing the seeds r_1 = 11111, r_2= 2222, r_3 = 333, r_4 = 44 and r_5 = 5. The seed r_5 = 5 results in a value of 62 (from the range 1-100) from the random number generator. For a hadronic channel with ID number n, the ID number offset is then (n+62). In the most recent KNT analysis (denoted KNT19) <cit.>, the π^+π^- channel ID was n=10. In this example, the ID number offset for this channel would then be n+62 = 72 and the blinding seeds r̃ for the data entering the integrals and plots of the KNT19 π^+π^- channel would be: These in turn result in the following randomly generated offset values: Inputting these values into Eq. (<ref>) for the KNT19 π^+π^- data combination results in Fig. <ref>. Note that Fig. <ref> is displaying the plotted cross section and so is subject to the offset values for “Plots". In this case, the value for multiplicative scale factor, b=1.702, is clearly visible, whilst the energy-dependent shifts are not discernible. Fig. <ref> is comparing integrated values and has therefore been blinded by the “Integrals" offsets. In this case, a=+1 so the integrated values are positive and the value b=0.617 is also apparent. Here, the energy-dependent changes from c and s_0 are noticeable in the comparisons between the input data sets and the resulting combination, which are subtly different before and after blinding. Under such a blinding scheme, the impact from changes to the analysis or the introduction of new datasets would not be fully evident until after unblinding. There are >50 hadronic channels in the KNTW analysis each with an ID number n that, when offset by an unknown integer from the seed r_5, results in a different set of offsets for each channel that cannot be disentangled. Continuing with the same example, the integrated values for a_μ^ HVP from the dominant hadronic channels in the KNT19 analysis are shown in Fig. <ref>. Here, the distinct blinding for each channel is clearly evident, with some even having negative values due to the random assignment of a=-1. As intended, extracting knowledge of the blinding from comparing results for different channels is impossible. Conclusions and outlook - Current tensions in different evaluations of a_μ^ HVP indicate either a discovery of new physics or a multi-method confirmation of the SM when comparing the resulting contrasting values for a_μ^ SM with a_μ^ exp. Under particular scrutiny are the data-driven evaluations of a_μ^ HVP <cit.>, which combine measured e^+e^-→ hadrons cross section data to input into dispersion integrals, allowing for the extraction of a_μ^ HVP and other observables sensitive to hadronic vacuum polarization effects. The most severe tension in the data-driven determinations is due to a measurement of the dominant e^+e^-→π^+π^- cross section by the CMD-3 experiment <cit.> that has a cross section several standard deviations larger than all other previous data. Used in isolation to calculate a_μ^ HVP, it results in a value for a_μ^ SM that is consistent with a_μ^ exp. 
Analysis choices by different groups of how to combine these data lead to different results and in previous cases have been shown to differ at the level of the uncertainty on the combined cross section <cit.>. The impact on future data-driven determinations of a_μ^ HVP from including the CMD-3 data and new, more precise experimental data (which could either resolve the current data tensions or make them worse with increased precision) will be influenced by past or future analysis choices on how to combine them. The new KNTW analysis framework is undergoing a complete overhaul and modernization aimed at providing a new state-of-the-art in data-driven evaluations of a_μ^ HVP. Given the high-stakes due to the current tensions and the crossroads in the search for new physics in the muon g-2, future data-driven determinations of a_μ^ HVP must attempt to avoid analysis bias wherever possible. Given that results from the Muon g-2 Experiment at Fermilab, lattice QCD calculations, and future e^+e^-→ hadrons cross section measurements are all blind analyses, implementing analysis blinding in data-driven determinations of a_μ^ HVP is paramount. This is crucial before including new data whose impact on the resulting a_μ^ HVP will be influenced by unavoidable analysis choices. The first blinding scheme for data-driven evaluations of the HVP has been proposed here and is in place as part of the new, ongoing KNTW analysis. With the final results from the Muon g-2 Experiment at Fermilab and an updated value of a_μ^ SM from the Muon g-2 Theory Initiative expected soon, it is a pivotal time for the study of the muon's anomalous magnetic moment. Future KNTW analyses will improve upon previous determinations, fully re-evaluate the HVP and contribute to the studies of lepton anomalies for many years to come. To ensure their integrity, unblinding can happen only when an analysis is complete and at an appropriate time with respect to the release of new or updated hadronic cross section data. Acknowledgements - Special thanks are extended to Mark Lancaster for his input and for agreeing to be the external blinder for the new KNTW analysis. We would like to thank the Muon g-2 Theory Initiative and, in particular, the DHMZ group (Michel Davier, Andreas Hoecker, Bogdan Malaescu, Zhiqing Zhang), Antoine Gérardin and Christoph Lehner for numerous useful discussions. AK is supported by The Royal Society (URF\R1\231503), STFC (Consolidated Grant ST/S000925/) and the European Union’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 858199 (INTENSE). DN is supported by the Japan Society for the Promotion of Science under Grant Number 20K03960. TT is supported by STFC (Consolidated Grants ST/T000988/ and currently ST/X000699/). AW is supported by a PGR studentship jointly funded by STFC and the Leverhulme Trust under LIP-2021-01. apsrev4-1.bst
http://arxiv.org/abs/2409.02630v1
20240904115018
Discrete-modulated continuous-variable quantum key distribution secure against general attacks
[ "Ignatius William Primaatmaja", "Wen Yu Kon", "Charles Lim" ]
quant-ph
[ "quant-ph" ]
Department of Electrical & Computer Engineering, National University of Singapore, Singapore Squareroot8 Technologies Pte Ltd, Singapore Department of Electrical & Computer Engineering, National University of Singapore, Singapore Global Technology Applied Research, JPMorgan Chase & Co, Singapore Department of Electrical & Computer Engineering, National University of Singapore, Singapore Global Technology Applied Research, JPMorgan Chase & Co, Singapore § ABSTRACT In recent years, discrete-modulated continuous-variable quantum key distribution (DM-CV-QKD) has gained traction due to its practical advantages: cost-effectiveness, simple state preparation, and compatibility with existing communication technologies. This work presents a security analysis of DM-CV-QKD against general sequential attacks, including finite-size effects. Remarkably, our proof considers attacks that are neither independent nor identical, and makes no assumptions about the Hilbert space dimension of the receiver. To analyse the security, we leverage the recent generalised entropy accumulation theorem and the numerical methods based on quasi-relative entropy. We also develop a novel dimension reduction technique which is compatible with the entropy accumulation framework. While our analysis reveals significant finite-size corrections to the key rate, the protocol might still offer advantages in specific scenarios due to its practical merits. Our work also offers some insights on how future security proofs can improve the security bounds derived in this work. Discrete-modulated continuous-variable quantum key distribution secure against general attacks Charles Lim September 9, 2024 ================================================================================================ § INTRODUCTION Quantum key distribution (QKD) is amongst the most mature quantum technologies, with some companies pushing for its commercial applications. Since its inception in 1984 <cit.>, QKD has undergone a rapid development in both theoretical and experimental aspects. Novel QKD protocols have been proposed and demonstrated, typically with an emphasis on improving the secret key rate and/or the range of the protocol. However, for QKD to be widely deployed, it is also crucial to take into account the simplicity of the setup as well as the cost of the hardware. Broadly speaking, based on how classical information is being encoded into quantum information, QKD protocols can be distinguished into two major categories: discrete-variable (DV) and continuous-variable (CV). DV-QKD protocols encode discrete classical information into optical modes (e.g., polarisation, time-bins, etc) of a light source, typically a laser. To decode the classical information, photon counting techniques are used, usually using threshold single-photon detectors. DV-QKD typically has good secret key rates and range (e.g., <cit.>), but the single-photon detectors required are less mature, more expensive and sometimes require cooling. On the other hand, CV-QKD protocols typically encode continuous classical information into the optical phase space <cit.> (i.e., the X and P quadratures). To decode the information, coherent detection techniques, such as homodyne and heterodyne detection, are typically employed.
Remarkably, these detectors are also used in the classical communication infrastructure, thus homodyne and heterodyne detectors are commercially more mature than single-photon detectors. Furthermore, they are also typically more cost-effective than the single-photon detectors counterparts and they can operate at room-temperature. Moreover, these detectors are readily integrated on chips <cit.>. Therefore, the use of coherent detection techniques provide an edge for practical applications, especially over metropolitan distances and last-mile key exchanges <cit.>. However, the requirement that classical information has to be continuous (often Gaussian distributed <cit.>) affects the practicality of CV-QKD protocols. Real optical modulators have finite precision which means that continuous modulation is only a theoretical idealisation, which does not correspond to the practical implementation. Such theory-experiment gap can open up security loopholes for QKD and it may be exploited by a sufficiently advanced adversary. To ease this requirement, two solutions has been proposed in the literature. Firstly, one can encode discrete information into optical modes, like in DV-QKD, and use coherent detection techniques to perform photon counting <cit.>. This method has the additional benefit of removing the requirement of sharing a global phase reference between the two communicating parties at the price of shorter distances. Another alternative is to simply encode discrete classical information into a discrete constellation in the phase space. This approach is known as discrete-modulated CV-QKD (DM-CV-QKD). While this solution requires the two honest parties to share a global phase reference, protocols with this feature typically have longer ranges and higher secret key rate. In this work, we shall focus our attention to the latter. For practical implementations, the quantum communication phase in QKD is performed round-by-round. Attacks on QKD can be classified based on what the adversary can do in the quantum channel in each round <cit.>. For the most general attack (also known as coherent attacks), the adversary is allowed to execute any strategy that is allowed by quantum mechanics (e.g., adjusting her attack based on a measurement result in the preceding rounds, entangle multiple states together before sending them to the receiver, etc). Analysing the security of a QKD protocol against general attack may require multi-round analyses, would be intractable to parameterise as QKD protocols typically involve a large number of rounds. Nevertheless, techniques have been developed for protocols that exhibit certain structures (e.g., permutation symmetry <cit.>, Markov condition in sequential protocols <cit.>, non-signalling condition in sequential protocols <cit.>, etc). In these special cases, the techniques reduce the problem to single-round analyses. Interestingly, many of these techniques use the conditional von Neumann entropy of the secret bit, given the adversary's quantum side information and the classical announcements for a single round, as the quantity of interest. This is the same quantity of interest for security under collective attacks, where an adversary acts independently and identically in each round, allowing the security of the protocol to be analyzed by only looking at a single round of the protocol. This causes many of the works that are tailored to analysing the security against collective attacks to be relevant to the security analysis against general attacks. 
One of these techniques is the generalised entropy accumulation theorem (GEAT) <cit.>. It is a technique that is applicable to any QKD protocols that has the so-called no-signalling structure. Informally, the no-signalling structure requires that classical and quantum information leaked in a given round does not leak additional information about the secret bits generated in preceding rounds. Fortunately, many QKD protocols exhibit this feature which makes the GEAT a versatile tool to prove the security of a QKD protocol. Another important feature of the GEAT is that it naturally incorporates the finite-size effects into the security analysis. As any practical QKD protocol is executed within a finite duration, finite-size effects such as statistical fluctuations, have to be accounted for carefully. DM-CV-QKD protocols have enjoyed a significant progress in their security analysis in recent years. First, the security of the protocol is analysed under restricted attacks <cit.>. Then, Ghorai et al <cit.> has proven its security in the asymptotic limit (i.e., when there are infinite number of rounds). The security proof relies on the extremality of Gaussian states, a technique that is commonly used in Gaussian modulated CV-QKD. However, for DM-CV-QKD, it turns out that the bound is too pessimistic, which leads to sub-optimal secret key rate. The asymptotic bound was then improved by Lin et al <cit.> under the so-called photon number cutoff assumption. This assumption effectively assumes that the adversary sends a state that lives in a finite-dimensional Hilbert space (corresponding to the low photon number subspace of the Fock space). While this assumption may be a good approximation which greatly simplifies the computation, it is hardly justifiable. However, the work of Lin et al <cit.> showed that DM-CV-QKD could potentially allow the exchange of secret keys at relatively high key rate and over reasonably long distances. Following their initial work in Ref. <cit.>, Lin-Lütkenhaus also analysed the security of the DM-CV-QKD protocol under the trusted detector noise scenario <cit.>. This refers to the scenario where the limitations of the detectors, such as imperfect quantum efficiency and electrical noise, are characterised and known to the users. In the normal circumstances, these imperfections are attributed to the adversary. Ref. <cit.> showed that by characterising these imperfections, the performance of the protocol can be greatly improved. Another important milestone is presented in Ref. <cit.>, which removes the need for photon number cutoff assumption. The work of Ref. <cit.> offered the so-called dimension reduction technique, which effectively gives a correction term to the computation of the conditional von Neumann entropy with the photon number cutoff. Remarkably, Ref. <cit.> showed that the correction term is relatively small and DM-CV-QKD can still offer practical advantages even after accounting for this correction. Given the potential of DM-CV-QKD shown from the studies of the protocol's security in the asymptotic regime, in recent years, researchers have tried to study the finite-size security of the protocol. Notably, Kanitschar et al <cit.> studied the finite-size security of the protocol under the assumption that the adversary performs collective attacks. On the other hand, Bauml et al <cit.> studied the finite-size security of the protocol against general attacks but under the photon number cutoff assumption. 
Remarkably, while these works focused on protocols with quadrature phase-shift keying (QPSK) encoding, DM-CV-QKD protocols with different constellations have also been studied. The two-state protocol has been analysed by Ref. <cit.> in the asymptotic limit and by Ref. <cit.> in the finite-key regime. On the other hand, the three-state protocol was analysed in Ref. <cit.>. Recently, the variant of the protocol with large constellation size has also been analysed <cit.>. At the time of writing, the security of DM-CV-QKD protocols against general attacks have not been performed without the photon number cutoff assumptions (except for the two-state case <cit.>). Unfortunately, it is not straightforward to remove the collective attack assumption using Ref. <cit.>'s framework since techniques that relies on permutation symmetry has a correction term that depends on the Hilbert space dimension, which is infinite for DM-CV-QKD. Moreover, entropy accumulation frameworks require the parameter estimation to be based on frequency distributions while Ref. <cit.> estimates moments. On the other hand, it is also difficult to remove the photon number cutoff assumption in Ref. <cit.> as the dimension reduction technique requires the estimate of the mean photon number of the incoming states, which is again incompatible with the entropy accumulation framework. In this work, we successfully remove these two assumptions simultaneously by employing the entropy accumulation framework (in particular, the GEAT version <cit.>) and developing a novel dimension reduction technique that only requires estimation of probability distributions. However, our work reveals that when both the collective attack and photon number cutoff assumptions are removed, DM-CV-QKD protocols suffer from a significant finite-size penalty. The paper is organised as follows. We first introduce some preliminary notions in Section <ref>, such as the notation that we use in this paper, how security is defined for QKD and the GEAT framework which will be used in this paper. In Section <ref>, we define the DM-CV-QKD protocol that we will analyse. We will present the security analysis of the protocol in Section <ref>. Following which, we present a simulation of the protocol's performance in Section <ref>. Lastly, we will discuss the results that we obtain, some potential future works, and we conclude our paper in Section <ref>. § PRELIMINARIES §.§ Notations and definitions §.§.§ Basic notations We will first introduce the notations that we use in this paper. The capital letters A, B, and E are typically associated to the parties Alice, Bob and Eve, respectively. The other capital letters from the Latin alphabet typically refer to random variables, unless explicitly stated. In particular, we denote N as the total number of rounds in the QKD protocol. Many of the random variables in this work are indexed in the subscripts to indicate the round associated to the particular random variable. For example, we use the letter Z to denote Bob's measurement outcome. Then, Z_j denotes Bob's measurement outcome in the j-th round. On the other hand, when we index a random variable in the superscript, this refers to a sequence of random variables up to that particular round. For example, the random variable Z^j refers to the sequence Z_1, Z_2 , ..., Z_j. When referring to an entire sequence, we will write the particular letter in boldface. For example, when writing Z, we refer to the sequence Z_1, Z_2 ,..., Z_N. 
When referring to a key of length ℓ, we will also use the boldface letter K = K_1, K_2, ... , K_ℓ where K_j refers to the j-th bit of the key. For any positive integer j, we write [j] to denote the set {1, 2, ..., j}. The calligraphic capital letters (e.g., , , , etc) are typically reserved for quantum channels. There are certain exceptions for this rule, which we will explicitly state when we first introduce the notation in the relevant section. Notable exceptions for this rule is which denotes the range of the random variable C, and which we generally reserve to denote Hilbert spaces. We use ℕ, ℤ and ℝ to denote the sets of natural numbers, integers and real numbers, respectively. We write 𝖫() to denote the set of linear operators acting on the Hilbert space , 𝖡() denotes the set of bounded operators acting on the Hilbert space , 𝖣() denotes the set of sub-normalised quantum states living in the Hilbert space . §.§.§ Distance measures In QKD, we often consider how close a quantum state to another quantum state. This is made precise using distance measures. In this paper, there are two distance measures that we use. Firstly, we use Δ to denote the trace distance, which is defined as Δ(ρ, σ) = 1/2ρ - σ_1, for some ρ, σ∈𝖣() and some Hilbert space . For some operator M, the trace norm is defined as M_1 = [√(M^† M)], where M^† denotes the adjoint of M. The trace distance is used in defining the security of QKD. The second distance measure that we use is the purified distance, denoted by Δ_P. It is defined as Δ_P(ρ,σ) = √(1 - F(ρ, σ)^2), for some ρ, σ∈𝖣() and some Hilbert space . The generalised fidelity F(ρ,σ) is defined as F(ρ,σ) = √(ρ)√(σ)_1 + √((1 - [ρ])(1-[σ])) In this work, the purified distance is used to define the conditional smooth min-entropy. §.§.§ Entropic quantities The quantum communication phase of a QKD protocol can be thought of as a weak random source – it generates random strings that may not be uniformly distributed nor independent of the adversary's side information. To characterise the quality of a weak random source, it is customary to consider its “entropy”. There are many entropic quantities in the literature, each appropriate to a particular information processing task. We shall introduce some of these entropic quantities that we will use in this work. We recall that the relevant scenario for QKD is one where there is a classical register held by the honest parties and a quantum register held by the adversary. In this section, we shall denote the classical register by A and the quantum register by B. Let ρ_AB∈𝖣(_A ⊗_B) be a classical-quantum state. We can write ρ_AB = ∑_a p(a) |a⟩⟨a|⊗ρ_B^a, for some probability distribution {p(a)}_a. We will define the entropic quantities with respect to this state. The first entropic quantity that we consider is the conditional min-entropy, which is relevant for the task of privacy amplification. This quantity is a measure of the probability of guessing the value of the register A, given the quantum side information stored in register B. The conditional min-entropy is defined as H_min(A|B)_ρ = - log_2 max_{Π_a}_a∑_a p(a) [ρ_B^a Π_a]. Here, the maximisation is taken over all possible measurement operators {Π_a}_a acting on _B. However, when allowing the output of a QKD protocol to slightly deviate from an ideal key, we can consider smoothing on the conditional min-entropy. 
In this case, given the classical-quantum state ρ_AB, we do not actually evaluate the conditional min-entropy on the state ρ_AB, but on another state that is close to ρ_AB. Now, let ∈ (0,1). is the so-called smoothing parameter, which measures how close the state in which the conditional min-entropy is evaluated to the state under consideration, i.e., ρ_AB. In this work, the distance measure that we use for the smoothing is the purified distance. We define the -ball around the state ρ_AB, which we denote as _(ρ_AB) ℬ_(ρ_AB) = {σ_AB: σ_AB∈𝖣(_A ⊗_B), Δ_P(ρ_AB, σ_AB) ≤}. Given the smoothing parameter and state ρ_AB, the conditional smooth min-entropy is defined as H_min^(A|B)_ρ = max_σ_AB∈_(ρ_AB) H_min(A|B)_σ. In general, there exists a state that is -close to ρ_AB (in purified distance) but has significantly higher conditional min-entropy than ρ_AB. This means that in general, the conditional smooth min-entropy can be significantly higher than its non-smoothed counterpart. Therefore, the conditional smooth min-entropy gives a more optimal characterisation of randomness of a weak random source The next entropic quantity that we consider is the conditional von Neumann entropy, defined as H(A|B)_ρ = H(AB)_ρ - H(B)_ρ where H denotes the von Neumann entropy, defined as H(AB)_ρ = - [ρ_ABlog_2(ρ_AB)] H(B)_ρ = - [ρ_B log_2(ρ_B)]. The conditional von Neumann entropy is a relevant quantity when using the generalised entropy accumulation theorem. Finally, we also introduce the quantum relative entropy. Given a quantum state ρ and positive semi-definite operator σ, the quantum relative entropy is given by D(ρ||σ) = [ρ (log_2(ρ) - log_2(σ))] if supp(ρ) ⊆supp(σ) +∞ otherwise, where for an operator M ∈𝖫(), supp is defined as supp(M) = {|ψ⟩∈: M|ψ⟩≠ 0} Given a classical-quantum state ρ_AB, the conditional von Neumann entropy is related to the quantum relative entropy H(A|B)_ρ = D(ρ_AB||_A ⊗ρ_B). §.§ Security definition The goal of a QKD protocol is to allow two distant parties, whom we call Alice and Bob, to share a pair of identical keys that is random from the point of view of the adversary, whom we call Eve. In the literature, this is often formalised in terms of the “real-versus-ideal-world” paradigm. Under this paradigm, we consider two protocols – the real QKD protocol, which outputs the state ρ_K_A K_B L E (where K_A stores Alice's key, K_B stores Bob's key, L stores Eve's classical side information and E stores Eve's quantum side information), and a hypothetical ideal protocol, which outputs a pair of perfect keys whenever the protocol is not aborted – we denote the event in which the protocol is not aborted by Ω. In particular, this perfect pair of keys are always identical, uniformly distributed and independent from the adversary's side information E and L. In practice, when proving the security of a QKD protocol, we usually consider the following criteria * Correctness: For a fixed ∈ (0,1), a QKD protocol is said to be -correct if it satisfies [K_A ≠K_B ∧Ω] ≤. * Completeness: For a fixed ∈ (0,1), a QKD protocol protocol is said to be -complete if [Ω]_honest≥ 1 - . The subscript “honest” specifies that the probability is evaluated based on the honest implementation. This refers to the scenario where the implementation might be noisy, but the adversary does not introduce more noise than expected. 
* Secrecy: For a fixed ∈ (0,1) and ℓ∈ℕ, a QKD protocol is said to be -secret (with respect to Bob's key) if [Ω] ·Δ(ρ_K_B L E|Ω, τ_ℓ⊗ρ_L E|Ω)≤, where ℓ denotes the output length of the QKD protocol and τ_ℓ = 2^-ℓ∑_k ∈{0,1}^ℓ|k⟩⟨k|. Specifically for secrecy, the conditional smooth min-entropy is central in the security analysis of QKD, due to the so-called leftover hash lemma. Let _PA be the quantum channel that describes the privacy amplification step of a QKD protocol. We write ρ_K_B L E|Ω = _PA[ρ_ZLE|Ω]. The leftover hash lemma states that if two-universal hashing is used, then we have <cit.> Δ(ρ_K_B L E|Ω, τ_ℓ⊗ρ_L E|Ω) ≤ 2^-1/2( H_min^(Z|L,E)_ρ_ZLE|Ω - ℓ + 2) + 2 . In other words, the distance between the state describing the output of the real QKD protocol and the output of the ideal protocol is related to the conditional smooth min-entropy of the raw key Z (i.e., the string that is obtained before the privacy amplification step). This reduces the problem of proving the secrecy of a QKD protocol to deriving a lower bound on the conditional smooth min-entropy of the raw key. In turn, this can be solved using GEAT. §.§ Generalised entropy accumulation Generalised entropy accumulation theorem (GEAT) <cit.> is a technique for lower bounding the conditional smooth min-entropy H_min^(Z|L, E_N) of a string Z = Z_1 ... Z_N given the classical side information L = L_1 ... L_N and quantum side-information E_N that are generated from a certain class of N-step sequential processes. GEAT formalises this class of processes in terms of the so-called GEAT channels, which we define below [The definition we use in this work is slightly less general than the definition of GEAT channels used in Refs. <cit.>. In particular, we do not consider any memory registers since we focus on the device-dependent framework in this paper]. [GEAT channels] The quantum channels {_j}_j=1^N where _j: E_j-1 L^j-1→ Z_j C_j E_j L^j (registers Z_j, L_j and C_j are classical registers) are called GEAT channels if each _j satisfies the following two conditions * Non-signalling condition: There exists a quantum channel _j: E_j-1 L^j-1→ E_j L^j such that _Z_j C_j∘_j = _j * For all j ∈ [N], C_j has a common alphabet and is a deterministic function of Z_j and L_j. In the context of QKD, the non-signalling condition requires that in any given round, Eve is capable to update her side-information without Bob's private information that has been generated in the preceding rounds. This is indeed the case for many QKD protocols, including the DM-CV-QKD protocol that we will analyse. Roughly speaking, the GEAT states that H_min^(Z|L, E) ≥ N h - O(√(N)), where the constant h can be obtained by analysing a single round of the protocol, rather than analysing the entire protocol directly. In fact, the constant h is related to the so-called min-tradeoff function which we define below. [Min-tradeoff functions] Let {_j}_j=1^N be a sequence of GEAT channels and let _ be the set of probability distributions over . An affine function f: _→ℝ is called a min-tradeoff function for the GEAT channels {_j}_j=1^N if it satisfies f(q) ≤inf_σ∈Σ(q) H(Z_j|E_j L^jẼ_j-1)_σ, where Σ(q) = {σ_Z_j C_j L^j E_j Ẽ_j-1 = (_j ⊗𝕀_Ẽ_j-1)[ω_L^j-1E_j-1Ẽ_j-1]: σ_C_j = ∑_c ∈ q(c) |c⟩⟨c|_C_j}. The constant h is the value of the min-tradeoff function evaluated at the worst probability distribution that is accepted in the parameter estimation step of the protocol. 
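Since a min-tradeoff function is ultimately a lower bound on a single-round conditional von Neumann entropy, it may help to see that quantity evaluated explicitly. The NumPy sketch below computes H(Z|E) = H(ZE) - H(E) for a toy cq-state, directly from the definitions given earlier; it is purely illustrative and is unrelated to the protocol's infinite-dimensional states.

```python
import numpy as np

def von_neumann_entropy(rho):
    """-Tr[rho log2 rho] for a (sub-normalised) positive semi-definite matrix."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

def conditional_entropy_cq(p, rho_E):
    """H(Z|E) = H(ZE) - H(E) for the cq-state sum_z p(z) |z><z| (x) rho_E^z."""
    h_ZE = sum(von_neumann_entropy(pz * rz) for pz, rz in zip(p, rho_E))
    rho_E_marg = sum(pz * rz for pz, rz in zip(p, rho_E))
    return h_ZE - von_neumann_entropy(rho_E_marg)

# Toy example: a uniform bit Z, with Eve holding |0> when Z = 0 and |+> when Z = 1.
ket0 = np.array([[1.0], [0.0]])
ketp = np.array([[1.0], [1.0]]) / np.sqrt(2.0)
rho0, rhop = ket0 @ ket0.T, ketp @ ketp.T
print(conditional_entropy_cq([0.5, 0.5], [rho0, rhop]))   # approximately 0.40 bits
```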
To calculate the second order terms O(√(N)), it is useful to define some properties of the min-tradeoff functions [f] = max_q ∈_ f[q], [f] = min_q ∈_ f[q], _Σ[f] = min_q: Σ(q) ≠∅ f[q], [f] = max_q: Σ(q) ≠∅∑_c q(c) f(ê_c)^2 - (∑_c q(c) f(ê_c))^2. Having introduced these properties of the min-tradeoff function, we can now present the lower bound on the conditional smooth min-entropy Let ∈ (0,1), β∈ (0, 1/2). Let ρ_E_0 be an initial state and ρ_ZCL E_N = _N ∘ ... ∘_1[ρ_E_0] be the final state after applying the GEAT channel N times. Let f be a min-tradeoff function and Ω⊆^N be an event such that h = min_c∈Ω f(_c). Then, H_min^(Z|L, E_N)_ρ_ZL E_N |Ω ≥ N h - β/1 - βln 2/2 V^2- N (β/1 - β)^2 K_β - log_2(1 - √(1 - ^2)) - (1+β) log_2 [Ω]/β, where V = log_2(2d_Z^2 + 1) + √(2 + [f]) K_β = (1 - β)^3/6(1 - 2 β)^3 ln 2 2^β/1 - β(2 log_2 d_Z + [f] - _Σ[f]) ×ln^3 (2^2 log_2 d_Z + [f] - _Σ[f] + e^2 ). To construct the min-tradeoff function, it is customary to decompose the GEAT channel into two parts: the testing channel and the generation channel _j = (1-γ) _j^gen + γ_j^test, where the C_j register of the output of _j^gen is always |⊥⟩⟨⊥|. On the other hand, _j^test never outputs |⊥⟩⟨⊥| on C_j. Then, consider another affine function g: _∖{⊥}→ℝ that satisfies the following property g(q) ≤ inf_σ H(Z_j|E_j, L^j Ẽ_j-1)__j[σ] s.t. _j^test[σ]_C_j = ∑_c ≠⊥ q(c) |c⟩⟨c|. Intuitively, g is a lower bound on the conditional von Neumann entropy subject to the probability distribution of the test rounds. From g, we can construct the min-tradeoff function from the function g by using the following construction f(ê_c) = [g] + g(ê_c) - [g]/γ, c ∈∖{⊥}, f(ê_⊥) = [g]. This construction will lead to the following properties of the min-tradeoff function [f] = [g], [f] = (1 - 1/γ)[g] + 1/γ[g], _Σ[f] ≥[g], [f] ≤([g] - [g])^2/γ. The explicit construction of the function g for DM-CV-QKD will be presented in Section <ref>. § PROTOCOL We shall now give the description for the protocol that we consider. For the sake of concreteness, we consider a variant of the protocol with quadrature-phase-shift-keying (QPSK) encoding, heterodyne detection and reversed reconciliation for error-correction. However, our security analysis can be easily adapted to the case where another constellation is used for the encoding of the quantum states as we leverage on numerical methods that did not exploit any symmetry of the encoding scheme. The dimension reduction technique can also be modified to the protocol variant which uses homodyne detection rather than heterodyne detection. We fix N ∈ℕ, α > 0, ^key > 0, 0 ≤ <, γ∈ (0, 1) and the parameter estimation scores f_PE and its tolerated range _. * State preparation: For every round j ∈ [N], Alice uniformly chooses X_j ∈{0,1,2,3}. Next, she prepares the coherent state |ψ_X_j⟩ = |α e^i (π X_j/2 + π/4)⟩ for some fixed α > 0 that is specified by the protocol. She sends the state to Bob via an untrusted quantum channel. * Measurement: Bob randomly assigns T_j ∈{0, 1} with probability {1-γ, γ}, respectively. Upon receiving the optical signal from the untrusted quantum channel, Bob performs a heterodyne detection to obtain an outcome Y_j ∈ℂ. From the measurement outcome, he writes Y_j = √(W_j) e^i θ_j for some θ_j ∈ [0, 2π) and W_j = Y_j^2. If T_j = 0, he applies the following discretisation map Y_j → Z_j (see Fig. 
<ref>b): Z_j = ∅ W_j < ^key, 0 W_j ≥^key∧θ_j ∈ [0 , π/2), 1 W_j ≥^key∧θ_j ∈ [π/2 , π), 2 W_j ≥^key∧θ_j ∈ [π , 3π/2), 3 W_j ≥^key∧θ_j ∈ [3π/2 , 2π), If T_j = 1, he applies the following discretisation map instead (see Fig. <ref>a): Z_j = (∅, 0) W_j < ∧θ_j ∈ [0 , π/2), (∅, 1) W_j < ∧θ_j ∈ [π/2 , π), (∅, 2) W_j < ∧θ_j ∈ [π , 3π/2), (∅, 3) W_j < ∧θ_j ∈ [3π/2 , 2π), 0 ≤ W_j ≤∧θ_j ∈ [0 , π/2), 1 ≤ W_j ≤∧θ_j ∈ [π/2 , π), 2 ≤ W_j ≤∧θ_j ∈ [π , 3π/2), 3 ≤ W_j ≤∧θ_j ∈ [3π/2 , 2π), ⊤ W_j > Repeat step 1 to 2 for N times. * Parameter estimation: For each j ∈ [N], Bob announces T_j. If T_j = 0, Bob announces the rounds j in which Z_j = ∅. For each T_j = 1, Alice announces X_j. Bob computes C_j = f_PE(T_j, X_j, Z_j) for some deterministic function f_PE where C_j = ⊥ if and only if T_j = 0 and C_j = ∅ if and only if T_j = 1 and Z_j = ⊤. If _C∈_, they continue to the next step, else they abort. * Classical post-processing: Bob sends Alice a syndrome of his data. They apply error correction and error verification. If the error verification passes, then they apply privacy amplification on their data to obtain the final secret key. § SECURITY ANALYSIS In this section, we shall present the security analysis of DM-CV-QKD protocols. As defined in Section <ref>, to prove the security of a QKD protocol, we need to show that it satisfies the completeness, correctness and secrecy criteria. §.§ Correctness The proof of the correctness of the protocol is pretty straightforward. The correctness of the protocol is guaranteed by the error verification step, which is performed after the error correction step. In the error correction step, Bob sends the syndrome of his key so that Alice can produce a guess Ẑ of his key, Z. The purpose of the error verification step is to verify that the error correction step has successfully enables Alice to correctly guess Bob's key. If the error correction step fails (i.e., when Ẑ≠Z), we want the error verification step to detect this failure with high probability. This can be done using the property of two-universal hashing. In the error verification step, Bob randomly picks a hash function f_EV from the two-universal family and calculate the hash of his key, f_EV(Z). He would then announce the hash function that he chose and the corresponding hash value to Alice using the public but authenticated classical communication channel. Alice would then compute the hash value of her guess of Bob's key, f_EV(Ẑ). If f_EV(Ẑ) ≠ f_EV(Z), then they abort the protocol. The correctness of the protocol is due to the property of the two-universal hash function. Let ℓ_EV be the length of the hash that Bob announced. Then, by the property of two-universal hash functions, we have [f_EV(Ẑ) ≠ f_EV(Z) | Ẑ≠Z] ≤1/2^ℓ_EV Thus, by taking ℓ_EV≥log_2(1/), and noting that Ẑ≠Z implies K_A≠K_B, then [K_A≠K_BΩ] ≤[Ẑ≠Z][Ω | Ẑ≠Z] ≤, which proves that the protocol is -correct. §.§ Completeness To prove completeness, we recall that the protocol can be aborted in either the parameter estimation step or in the error verification step. Therefore, we can write [abort] ≤[abort in PE] + [abort in EV], where [abort in PE] is the probability that the protocol is aborted in the parameter estimation step and [abort in EV] is the probability that the protocol is aborted in the error verification step. The latter can be upper bounded also [abort in EV] ≤[EC fail], where [EC fail] is the probability that the error correction algorithm fails to make Alice's key identical to Bob's key. 
Therefore, we can choose the completeness parameter to be equal to = ^PE + ^EC, where ^PE≥[abort in PE] and ^EC≥[EC fail]. While there is an information-theoretic bound on [EC fail] based on the error rate in the honest implementation, the error correction algorithm that saturates this bound is difficult to implement in practice <cit.>. On the other hand, it is typically difficult to prove an upper bound on the failure probability of practical error correction algorithms and one often relies on heuristics. Therefore, in this work, we shall focus on bounding the probability of aborting the protocol in the parameter estimation step. To do that, we need to assume that the honest implementation produces a probability distribution p on the score C. Then, for each c ∈, we fix ζ_c, ζ'_c > 0 such that the protocol is accepted if and only if p(c) - ζ'_c ≤(c) ≤ p(c) + ζ_c for all c ∈. Now, we assume that for each c ∈, there exists ^PE,c > 0 such that [ p(c) - ζ'_c ≤freq(c) ≤ p(c) + ζ_c] ≥ 1 - ^PE,c. We can construct such ^PE,c by taking the complement of the above event, and simplify using the union bound, ^PE, c≥[(c) ≥ p(c) - ζ'_c] + [(c) ≤ p(c) + ζ_c]. In other words, we want to find an upper bound on the sum of the probabilities of the two extreme events. To obtain such upper bound, we can consider the following bound on the binomial distribution, which was derived in Ref. <cit.> Let n ∈ℕ, p ∈ (0,1) and let X be a random variable distributed according to X ∼Binomial(n,p). Then, for any integer k such that 0 ≤ k < n, we have F(n,p,k) ≤[X ≤ k] ≤ F(n,p,k+1), where D(q,p) = q ln( q/p) + (1-q) ln( 1-q/1-p), Φ(a) = 1/√(2π)∫_-∞^a d x e^-x^2/2, F(n,p,k) = Φ(sign(k/n - p ) √(2n D( k/n, p))). To apply the above lemma, for each c ∈ and for each i ∈ [N], we set the random variable V_i^(c) as V_i^(c) = 1 if C_i = c 0 otherwise. Then, V_tot^(c) = ∑_i = 1^N V_i^(c) is a binomial random variable which allows us to apply the lemma. For [(c) ≥ p(c) - ζ'_c], we can write [(c) ≥ p(c) - ζ'_c] ≤[V_tot^(c)≥N (p(c) - ζ'_c)] = 1 - [V_tot^(c)≤N (p(c) - ζ'_c) - 1] ≤ 1 - F(N, p(c), N (p(c) - ζ'_c) - 1). Similarly, for [(c) ≤ p(c) + ζ_c] we write [(c) ≤ p(c) + ζ_c] ≤[V_tot^(c)≤N (p(c) + ζ_c)] ≤ F(N, p(c), N (p(c) + ζ_c) + 1). Therefore, we can choose ^PE,c as ^PE,c = 1 - F(N, p(c), N(p(c) - ζ'_c) - 1) + F(N, p(c), N(p(c) + ζ_c) + 1). Finally, one can vary the choice of ζ'_c, ζ_c > 0 to maximise the secret key rate subject to ∑_c ∈^PE, c≤^PE. Thus, any choice of ζ'_c and ζ_c that satisfies the above constraint will automatically satisfy the completeness criterion for the parameter estimation. §.§ Secrecy §.§.§ Leftover hash lemma To prove the secrecy of the DM-CV-QKD protocol, we shall leverage on the leftover hash lemma. Before we do that, for a fixed ∈ (0,1), we consider the following two distinct cases * [Ω] ≤ In this case, we can choose the secrecy parameter = since the trace distance term Δ(ρ_K_B L E | Ω, τ_ℓ⊗ρ_L E|Ω) in the definition of secrecy is upper bounded by one. * [Ω] > In this case, we can use the leftover hash lemma to bound Δ(ρ_K_B L E | Ω, τ_ℓ⊗ρ_L E|Ω) in terms of the conditional smooth min-entropy H_min^(Z|L, E)_ρ_ZLE|Ω, which in turn can be lower bounded using GEAT while substituting [Ω] with in Theorem <ref>. §.§.§ GEAT channels For our purpose, we consider the following GEAT channel _j: E_j-1 L^j-1→ Z_j E_j L^j C_j for the j-th round, where the register L_j stores the classical information accessible to Eve in the j-th round. * Alice randomly generates X_j and then prepares the state |ψ_X_j⟩_Q_j. 
* Eve applies her attack, given by the CPTP map _j: E_j-1 Q_j → E_j B_j, where B_j is the quantum signal received by Bob in the j-th round. * Bob randomly generates T_j ∈{0,1} to decide whether a given round is a generation or test round. Bob measures the incoming signal using a heterodyne detection to obtain the outcome Y_j. He then performs the appropriate discretisation on Y_j based on T_j to obtain Z_j (refer to Section <ref>). * If Z_j = ∅ and T_j = 0, Bob sets V_j = 1, otherwise he sets V_j = 0. Next, Bob announces T_j and V_j. If T_j = 1, Alice announces X_j. If T_j = 1, set L^test_j = X_j. If T_j = 0, set L^test_j = ⊥. Finally, set L_j = (V_j, T_j, L^test_j). Note that L_j is accessible to Eve. We can compute C_j from Z_j and L_j. Finally, we trace out X_j. The above channel satisfies the properties of GEAT channels in the sense that it satisfies the requirements that the register C_j is a deterministic function of Z_j and L_j and also the no-signalling condition. To see that this is indeed the case, note that since Alice and Bob do not have an internal memory register (since we are working in device dependent framework), we can consider a channel _j where Eve can simply simulate their actions (i.e., Alice's state preparation and Bob's measurement) and trace out their private registers in the end. By construction, such channel will always satisfy _j = _Z_j C_j∘_j. Thus, we can use these GEAT channels to apply the generalised EAT. §.§.§ Constructing min-tradeoff functions via dimension reduction Next, we construct the min-tradeoff function. As discussed in Section <ref>, to construct the min-tradeoff function, it is customary to find a function g that satisfies the property stated in Eq. (<ref>). We realise that function g(q) is essentially a lower bound on the conditional entropy subjected to the statistics from the test rounds. It is not easy to construct such function directly as the dimension of Bob's Hilbert space is infinite. Instead, it is more convenient to consider a virtual scenario where after Eve performs her attack (at the end of Step 2 of the GEAT channel), there is another completely positive, trace-non-increasing (CPTNI) map _j that truncates Bob's state into the space spanned by the Fock states {|0⟩, ..., |⟩}. More precisely, the map _j is given by _j[ρ_B_j E_j] = (N_0 ⊗_E_j) ρ_B_j E_j (N_0 ⊗_E_j), where N_0 = ∑_n = 0^|n⟩⟨n| is the projector to the subspace spanned by {|0⟩, ..., |⟩}. We want to find a function g(q) that has the following form: g(q) = g̃(q) - g_^(ν_c)(q_∅), where the function g̃(q) has similar properties as the function g(q), except that it is evaluated on the output state of the truncated GEAT channel, where we applied the CPTNI map _j at the end of Step 2 (instead of the actual GEAT channel). In other words, the function g̃: _∖{⊥}→ℝ is an affine function that gives the lower bound on the conditional von Neumann entropy evaluated on the truncated state. The function g_^(ν_c)(q_∅) gives the appropriate correction term and it only depends on q_∅, which is the probability that C_j = ∅. Importantly, unlike the function g(q), the function g̃(q) can be computed using a finite-dimensional optimisation. More precisely, to construct the function g(q), we can use the following dimension reduction theorem, which we prove in Appendix <ref>. For a fixed ∈ℕ, we denote κ = Γ( + 2, 0) / Γ(+2, ), where Γ denotes the upper incomplete gamma function, and we denote N_0 = ∑_n = 0^|n⟩⟨n|. 
We let ρ_ABE∈𝖣(_A ⊗_B ⊗_E) be a normalised quantum state and ρ̃_ABE = (_A ⊗ N_0 ⊗_E) ρ_ABE (_A ⊗ N_0 ⊗_E) be the truncated version of the state. For each c ∈∖{⊥}, we let Π_c ∈𝖫(_A ⊗_B) be the projector that corresponds to the score c and Π̃_c = (_A ⊗ N_0) Π_c (_A ⊗ N_0) be its truncated version. We let ν_c ∈ (0, 2/3κ), ν_L ∈ (0, (5+√(5))/10κ], and ν_U ∈ (0, (5-√(5))/10κ] be given. Finally, we let ρ̃_ZLE be the state resulting from applying Bob's measurement on the truncated state ρ̃_ABE, storing the output on the classical register Z (with dimension d_Z), making the appropriate announcement L, and then tracing out Alice's subsystem. Then, we have g(q) = g̃(q) - g_^(ν_c)(q_∅), where g̃(q) is the linearisation of the dual g̃'(q) g̃'(q) ≤inf_ρ̃ H(Z|L,E)_ρ̃_ZLE s.t. ρ̃_AB≽ 0 [ρ̃_AB] ≤ 1, [ρ̃_AB] ≥ 1 - κ q_∅, Δ(ρ̃_A, _Q |Ψ⟩⟨Ψ|_AQ) ≤1/2κ q_∅ [ρ̃_ABΠ̃_c] ≤ q_c - ξ_U(q_∅) ∀ c ∈∖{⊥}, [ρ̃_ABΠ̃_c] ≥ q_c - ξ_L(q_∅) ∀ c ∈∖{⊥}, and where we define the following correction terms g^(ν_c)_(q_∅) = m_ (q_∅ - ν_c) + c_, ξ_L(ν) = κν + 2√(κν(1-κν)) if ν< 5+ √(5)/10κ 1+√(5)/2 if ν≥5+ √(5)/10κ ξ_U(ν) = κν - 2 √(κν(1-κν)) if ν < 5 - √(5)/10κ 1 - √(5)/2 if ν≥5 - √(5)/10κ with w_c = κν_c δ_c = 1/2√(2)[√(w_c (2 - w_c) + w_c√(w_c (4 - 3 w_c))) + √(w_c (2 - w_c) - w_c √(w_c (4 - 3 w_c)))] m_ = [3 w_c + √(w_c (4 - 3 w_c))/√((2 - w_c) + √(w_c (4 - 3 w_c))) - 3w_c - √(w_c (4 - 3w_c))/√((2 - w_c) - √(w_c (4 - 3w_c)))] c_ = δ_c log_2 d_Z + (1 + δ_c) h_2 (δ_c/1 + δ_c). and linearised terms ξ_U(q_∅) ≥ξ̂_U^(ν_U)(q_∅) = (1 - (1- 2 κν_U)/√(κν_U (1 - κν_U))) κ q_∅ - √(κν_U/1-κν_U), ξ_L(q_∅) ≤ξ̂_L^(ν_L)(q_∅) = (1 + (1- 2 κν_L)/√(κν_L (1 - κν_L))) κ q_∅ + √(κν_L/1-κν_L), replacing their respective terms in the dual. Refer to Appendix <ref>. Theorem <ref> allows us to efficiently construct the function g(q) since constructing g̃(q) involves an optimisation over finite dimensional quantum states. This can be done using numerical techniques <cit.>. In particular, the above optimisation can be solved using SDP and since the constraints are linear in q, the dual solution of the SDP will readily provide us with an affine function. We shall show the explicit SDP and its corresponding dual solution in the next two subsections. This completes the min-tradeoff construction. §.§.§ SDP method to bound the conditional entropy The minimisation of the conditional von Neumann entropy in Eq. (<ref>) involves non-linear objective function which can be difficult to solve using readily available numerical solvers. Thankfully, one can lower bound the conditional von Neumann entropy as SDP using the method proposed in Ref. <cit.> which is based on the bound in Ref. <cit.>. Before we use the bound, we first simplify the minimisation by only keeping the term corresponding to the generation rounds and the case in which the outcome is not discarded H(Z|L,E)_ρ̃ ≥ (1-γ) [Z ≠∅| T = 0] H(Z|T = 0, Z ≠∅, E)_ρ̃ =: H(Z|E)_ρ^*, where ρ^*_ZE = _T[Π_ρ̃_ZTEΠ_], with Π_ = ∑_z ≠∅|z⟩⟨z|_Z ⊗|0⟩⟨0|_T ⊗_E, is the sub-normalised state corresponding to the case T = 0 and Z ≠∅. We then convert the conditional von Neumann entropy into quantum relative entropy. Let ρ^*_ZE be a finite dimensional (sub-normalised) cq-state and ρ^*_E = _Z[ρ̃^*_ZE]. Then, H(Z|E)_ρ^* = -D(ρ^*_ZE|| _Z ⊗ρ^*_E) Then, Ref. <cit.> formulated an upper bound on the quantum relative entropy (which gives a lower bound on the conditional von Neumann entropy) based on Gauss-Radau quadrature. Let m ≥ 2 be an integer and let (w_i, t_i) be weights and nodes of the Gauss-Radau quadrature with fixed node at t = 1. 
Let Λ_i ∈𝖡(_ZE) be an arbitrary bounded operator in the Hilbert space _ZE for all i ∈ [m]. The conditional von Neumann entropy is lower bounded by inf_{Λ_i}_i, ρ̃∑_i = 1^m w_i/t_i ln 2 [ρ^*_Z E(_ZE + Λ_i + Λ_i^† + (1-t_i) Λ_i^†Λ_i) + t_i (_Z ⊗ρ^*_E) Λ_i Λ_i^†] Since _Z is finite dimensional, the above bound can be simplified by writing Λ_i = ∑_z,z'|z⟩⟨z'|⊗Λ_i^(z,z'). Furthermore, we write ρ̃_ZTE = ∑_z, t[T = t] |z⟩⟨z|_Z ⊗|t⟩⟨t|_T ⊗_AB[ρ̃_ABE (_A ⊗ P_z|t⊗_E)] where P_z|t is Bob's POVM element. From our definition of the state ρ^*_ZE, this results in H(Z|E)_ρ^*≥ (1-γ) inf_ρ̃, {Λ_i,z}_i,z∑_i = 1^m w_i/t_i ln 2[ρ̃_ABE (P_PS⊗_E + ∑_z ≠∅ (_A ⊗ P_z|0) ⊗ (Λ_i,z + Λ_i,z^† + (1-t_i) Λ_i,z^†Λ_i,z) + t_i P_PS⊗Λ_i,zΛ_i,z^†)], where P_PS = ∑_z ≠∅_A ⊗ P_z|0 Since the optimisation is over both ρ̃ and Λ, the problem is still non-linear. However, as shown in Ref. <cit.>, the problem can be linearised by defining the map Ξ[M] = _E[ρ̃_ABE· (_AB⊗ M^T)] for any operator M, which has the following properties Ξ[M^†] = Ξ[M]^† [ρ̃_ABE· (K_AB⊗ M^T)] = [K_AB·Ξ(M)] for any operator K_AB. We then define the matrices σ = Ξ[] ω_i, z = Ξ[Λ_i,z] η_i, z = Ξ[Λ_i,z^ †Λ_i,z] θ_i,z = Ξ [Λ_i,zΛ_i,z^†] and consider block moment matrices. The resulting lower bound of the conditional entropy is given by inf_σ, {ω_i,z, η_i,z, θ_i,z}_i,z, ζ_1, ζ_2 (1-γ) ∑_i = 1^m w_i/t_i ln 2[σ P_PS + ∑_z ≠∅ (_A ⊗ P_z|0) (ω_i,z + ω_i,z^† + (1-t_i) η_i,z) + t_i θ_i,z P_PS] s.t. [σ] ≤ 1 [σ] ≥ 1 - κ q_∅ [ζ_1 + ζ_2] ≤κ q_∅ [σΠ̃_c] ≤ q_c - ξ_U(q_∅) ∀ c ∈∖{⊥}, [σΠ̃_c] ≥ q_c - ξ_L(q_∅) ∀ c ∈∖{⊥}, [ σ ω_i, z; ω_i, z^† η_i, z ]≽ 0 ∀ i, z [ σ ω_i, z^†; ω_i, z θ_i, z ]≽ 0 ∀ i, z [ ζ_1 _B[σ] - _Q [|Ψ⟩⟨Ψ|]; _B[σ] - _Q[|Ψ⟩⟨Ψ|] ζ_2 ]≽ 0 §.§.§ Dual solution We take the optimisation (<ref>) and associate to each constraint that depends on q with a dual variable λ λ_norm: 1 - [σ] ≤κ q_∅ λ_dist : [ζ_1 + ζ_2] ≤κ q_∅ λ_c^U: [σΠ̃_c] ≤ q_c - ξ_U(q_∅) λ^L_c: [σΠ̃_c] ≥ q_c - ξ_L(q_∅) Then, the dual solution of the SDP (<ref>) will have the form g̃'(q) ≥φ - λ_normκ q_∅ - λ_distκ q_∅ + ∑_c ≠⊥, ∅(- λ_c^U(q_c - ξ_U(q_∅)) + λ_c^L(q_c - ξ_L(q_∅)) ) - λ_∅^U (q_∅ - ξ_U(q_∅)) + λ_∅^L ( q_∅ - ξ_L(q_∅)), where φ contains all the dual variables that are associated to the non-statistical constraints and the forms are chosen such that all dual λ≥0. The dual solution is linearised by replacing the terms δ, ξ_L and ξ_U with linear terms ξ_U(q_∅) ≥ m_U q_∅ + c_U ξ_L(q_∅) ≤ m_L q_∅ + c_L, as presented in Eq. (<ref>) and Eq. (<ref>) respectively. Then, we have g̃(q) ≥φ' + ∑_c ≠⊥, ∅ (-λ^U_c + λ_c^L) q_c + q_∅( -κλ_norm - κλ_dist - λ^U_∅ + λ^L_∅ + ∑_c ≠⊥ (λ_c^U m_U - λ_c^L m_L)), with φ' = φ + ∑_c ≠⊥ (λ_c^U c_U - λ_c^L c_L). Finally, considering the g_ term, we have g(q) ≥Φ + ∑_c ≠⊥λ'_c q_c where Φ = φ' - c_ + m_ν_c = φ + ∑_c ≠⊥ (λ_c^U c_U + λ_c^L c_L) - c_ + m_ν_c , and λ'_c = λ_c^L - λ_c^U if c ≠⊥, ∅, -λ_∅^U + λ_∅^L - κλ_norm - κλ_dist + ∑_c ≠⊥ (-λ_c^L m_L + λ_c^U m_U) - m_ if c = ∅. Denote λ'_max := max_cλ'_c and λ'_min := min_c λ'_c. We have the following min-tradeoff function f(ê_c) = Φ - (1-γ)/γλ'_max + λ'_c/γ, c ∈∖{⊥}, f(ê_⊥) = Φ + λ'_max. The min-tradeoff function f has the following properties [f] = Φ + λ'_max _Σ[f] ≥Φ + λ'_min [f] ≤(λ'_max - λ'_min)^2/γ Having constructed the min-tradeoff function f and calculate the its corresponding , _Σ, and , we can then apply the GEAT to bound the conditional smooth min-entropy H_min^(Z|L, E)_ρ_ZLE|Ω. Combined with the argument in Sec <ref>, this completes the proof for the secrecy of the protocol. 
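To make the last step concrete, the following Python snippet (an illustrative sketch, not the code used for the paper's numerics) assembles the min-tradeoff function f and the three quantities appearing in its displayed properties, namely its maximum, the lower bound on its minimum over Σ, and the upper bound on its variance, from a given constant Φ, the dual coefficients λ'_c and the testing probability γ. The numerical values in the example call are made up.

import numpy as np

def min_tradeoff_from_dual(Phi, lam_prime, gamma):
    """Assemble the min-tradeoff function f and the bounds on its maximum,
    its minimum over Sigma, and its variance from the linearised dual solution
    (see the displayed equations above).

    Phi       : constant term of the affine lower bound on g(q)
    lam_prime : dict mapping each score c (excluding the 'perp' symbol) to lambda'_c
    gamma     : probability of a test round
    """
    lam_max = max(lam_prime.values())
    lam_min = min(lam_prime.values())

    # f evaluated on the point distributions e_c (test scores) and e_perp
    f = {c: Phi - (1.0 - gamma) / gamma * lam_max + lam_c / gamma
         for c, lam_c in lam_prime.items()}
    f["perp"] = Phi + lam_max

    bounds = {
        "Max": Phi + lam_max,                           # maximum of f
        "MinSigma_lower": Phi + lam_min,                # lower bound on the minimum over Sigma
        "Var_upper": (lam_max - lam_min) ** 2 / gamma,  # upper bound on the variance of f
    }
    return f, bounds

# Example call with made-up dual values and a 1% testing probability
f, bounds = min_tradeoff_from_dual(
    Phi=0.8,
    lam_prime={"top": 0.05, "0": 0.05, "1": -0.10, "2": -0.10, "3": -0.10, "empty": -1.2},
    gamma=0.01,
)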
§ SIMULATION To study the performance of the protocol which is secure against coherent attacks, we simulated the key generation rate of the protocol with different number of signals, N, sent. For simplicity, an ideal error correcting code is utilised, which is assumed to correct errors by communicating information at the Shannon limit, with information loss NH(X_j|Z_j)_ρ^* and without failure, ^EC=0. We consider a protocol run with Bob performing ideal heterodyne detection with zero excess noise, χ=0, and unit detection efficiency. More details of the model can be found in Appendix <ref>. Based on preliminary optimisation, we choose the discretisation τ_min^key=0.6, τ_min=1.5 and τ_max=√(20). Furthermore, we fix the completeness parameter for parameter estimation as ^PE= 1e-10, correctness parameter as = 1e-15, secrecy parameter as = 1e-6, and n_max= 12. The correctness parameter results in l_EV= 50 bits error verification string. To obtain the dual g̃(q), i.e. values for φ, and dual parameters λ, one can solve the SDP in Eqn. (<ref>) with any set of trial parameters q_dual. The dual parameters and φ obtained can compute a valid dual function g̃^(q_dual)(q) of the form in Eqn. (<ref>). Importantly, this dual function is a lower bound of the dual function with trial parameters matching the actual parameters, g̃^(q_dual)(q)≤g̃^(q)(q), by definition of the dual. In general, we are free to choose the trial parameters q_dual to optimise the key rate (note that q_dual=q may not be the optimal choice for EAT). However, to simplify, we would consider only the set of parameters q_dual that can be generated from a run of the protocol with the same model as described in the previous paragraph, except that the excess noise can be χ_dual≠χ. The key rate, ℓ/N, where ℓ is the length of the key, is then optimised over the amplitude α, the EAT parameter β, the linearisation parameters ν_c, ν_L, and ν_U, the trial parameter χ_dual, and the various components of the completeness parameter, ^PE,c. Fig. <ref> shows the optimised key rate for N= 1e14 to 1e16, and the asymptotic key rate. There is a significant penalty from finite size effect, with the key rate for finite block size being much lower than the asymptotic key rate. § DISCUSSION AND CONCLUSION The results show a significant gap between the finite size key rate and asymptotic key rates, even for large N= 1e16. This may be the consequence of the linearisation terms. In general, the probability of exceeding τ_max is small, indicating a low probability of having large energy. As such, the choice of linearisation parameters ν_U and ν_L would be correspondingly small as well to have a small gap between the actual value (e.g. ξ_L) and linearised value (e.g. ξ̂_L^(ν_L)). Since this term is present in the denominator of the gradient, it leads to a large gradient and causes the penalty from GEAT to be significant. As the finite-size penalty in both EAT and GEAT are very sensitive to the gradient of the min-tradeoff function, we expect that the huge finite-size penalty is a feature of the DM-CV-QKD protocol which estimates the dimension reduction correction using the probability of exceeding τ_max. Therefore, to improve the finite-size penalty, we may require a new technique that is less sensitive to the gradient of the min-tradeoff function. Alternatively, we may require another method to estimate the dimension reduction corrections. 
Another potential room for improvement is to find corrections to the statistical constraints that have gentler gradients at q_∅≈ 0. Recently, Ref. <cit.> proposed the Rényi entropy version of the GEAT. Interestingly, this version of entropy accumulation theorem does not require the usual analysis via affine min-tradeoff functions, which removes the necessity of linearising the correction terms. This might potentially reduce the minimum block size N. The asymptotic key rate also appears to be significantly worse than in Refs. <cit.>, where the loss tolerance is over 40 (>200 km with perfect detector) compared to around 4 here. This is likely attributable to both the number of nodes m (for the Gauss-Radau quadrature approximation) selected in SDP optimisation, and the dimension reduction penalty (we note that the asymptotic key rate evaluated in Ref. <cit.> does not account for the dimension reduction penalty). The low value of m=4 chosen here to reduce the computation time may not be tight in the estimation of the conditional entropy and thus lead to loss of key rate. Besides the penalty term g_corr^(ν_c)(q_∅), the bounds in the SDP may not be tight either. As such, both would result in a lower key rate by directly reducing the conditional von Neumann entropy and increasing the set of feasible states in the SDP minimisation. Comparing our work with that of Ref. <cit.>, the protocol that is being considered in both Ref. <cit.> and ours is very similar, with the exception that we allow post-selection in the key generation rounds. Indeed, the discretisation that we use in the test rounds is almost identical to the one considered in Ref. <cit.>. Since the asymptotic key rates obtained in Ref. <cit.> are close to the one obtained when considering the dimension reduction penalty <cit.>, we suspect that either a very large number of nodes m is required for a tight bound on the conditional von Neumann entropy, or the protocol variant that monitors probability distribution instead of moments will incur higher dimension reduction penalty. This can be investigated by computing the conditional von Neumann entropy using the methods in Refs. <cit.>. Alternative methods of bounding the relative entropy can also be explored, such as that in Ref. <cit.>. To summarise, we presented a complete security analysis of a variant of the DM-CV-QKD protocol using four coherent-states and heterodyne detection. Unlike previous works, our security analysis accounts for both finite-size effects as well as dimension reduction. To achieve this, we apply the newly proposed GEAT and we modify the dimension reduction theorem presented in Ref. <cit.> to suit the technical requirements of GEAT. Additionally, we also applied the Gauss-Radau quadrature approximation to the conditional von Neumann entropy to construct the min-tradeoff function, which we need to apply GEAT. Our method is versatile as it can be easily extended to other variants of DM-CV-QKD protocols, and it is also applicable to the scenario where the detector imperfection is characterised and trusted (similar to the one considered in Ref. <cit.>). Unfortunately, our result suggests that DM-CV-QKD protocols may suffer from severe finite-size effects when analysed using EAT or GEAT due to the large gradient of the min-tradeoff function. While our bounds can potentially be improved, we leave this investigation for future works. § NOTE ADDED While preparing this manuscript, Ref. 
<cit.> independently published an improved finite-size analysis of DM-CV-QKD based on the GEAT. In that work, the authors claimed that the four-state DM-CV-QKD protocol (similar to the one analysed in this work) only requires the minimum block size of the order N ∼ 10^8, which is much smaller than what we have shown in this work. However, the result of Ref. <cit.> also assumes the photon number cutoff without properly accounting for the dimension reduction penalties – which we have shown to be non-negligible. Indeed, the authors remarked that they have not successfully obtained a positive key rate while accounting for the dimension reduction penalties. Contrary to Ref. <cit.>, this work presented a full security proof, which includes the analysis on the dimension reduction penalty. Our analysis is also applicable to the DM-CV-QKD protocol that employs post-selection, while Ref. <cit.> focused on the class of protocols without post-selection. Wen Yu Kon and Charles Lim, in their present role at JPMorgan Chase & Co contributed to this work for information purposes. This paper is not a product of the Research Department of J.P. Morgan, Singapore, or its affiliates. Neither J.P. Morgan, Singapore nor any of its affiliates make any explicit or implied representation or warranty and none of them accept any liability in connection with this paper, including, but limited to, the completeness, accuracy, reliability of information contained herein and the potential legal, compliance, tax or accounting effects thereof. This document is not intended as investment research or investment advice, or a recommendation, offer or solicitation for the purchase or sale of any security, financial instrument, financial product or service, or to be used in any way for evaluating the merits of participating in any transaction. § PROOF OF DIMENSION REDUCTION THEOREM The goal of dimension reduction theorem is to reduce the original problem of constructing a min-tradeoff function which involves optimisation over infinite-dimensional quantum states into a simpler problem which only involves optimisation over finite-dimensional quantum states. The idea is similar to a similar dimension reduction theorem presented in Ref. <cit.>: we perform a truncation on Bob's quantum state and calculate the correction terms attributed to this truncation. Before calculating these correction terms, we first present the original, untruncated optimisation problem. §.§ The original optimisation problem To derive a lower bound on the conditional von Neumann entropy, it is easier to consider a virtual entanglement-based scenario that is equivalent to the actual protocol from the adversary's point-of-view. More precisely, in the virtual scenario, the adversary will have the same classical and quantum-side information and the honest parties will obtain the same classical registers at the end of the procedure. The virtual entanglement-based protocol that we consider is as follows: * Alice prepares the state |Ψ⟩_AQ = 1/2∑_x = 0^3|x⟩_A ⊗|ψ_x⟩_Q inside her secure lab. * Alice measures the register A in the computational basis and stores the outcome in the classical register X. * Alice sends the quantum register Q to Bob via the untrusted quantum channel. We assigns the output of the quantum channel that Bob receives to the quantum register B. * Upon receiving the quantum register B from the untrusted quantum channel, Bob randomly assigns T ∈{0,1} with probability {1-γ, γ}. 
He then applies heterodyne measurement on the quantum state and, depending on T, he applies the appropriate discretisation to obtain the discretised outcome Z. It is easy to see that the above procedure is equivalent to the quantum communication phase of the DM-CV-QKD protocol that we consider in this work. Step 1–2 is equivalent to uniformly choosing a classical information x and encoding it to the quantum state |ψ_x⟩. On the other hand, step 3–4 is identical to the one performed in the actual protocol. Therefore, the above virtual scenario is indeed equivalent to the actual protocol that we are analysing. The advantage of considering the entanglement-based scenario is that it allows us to delay Step 2 (Alice's measurement) since local measurements on different subsystems commute. With this delayed measurement, we can consider the quantum state ρ_AB, shared between Alice and Bob, just before their respective measurement. In general, the Hilbert space _B, associated with the register B is infinite-dimensional as we do not restrict what Eve can do in the quantum channel. On the other hand, as the quantum register A is generated and stored in Alice's secure lab, we have the constraint on Alice's marginal state: ρ_A = _B[ρ_AB] = _Q[|Ψ⟩⟨Ψ|_AQ]. On top of this constraint, we have the constraint on the normalisation of the quantum state ρ_AB [ρ_AB] = 1, the positive-semidefiniteness of the state ρ_AB≽ 0, and the constraints due to the statistics that are being monitored in the test rounds of the protocol [ρ_ABΠ_c] = q_c, ∀ c ∈∖{⊥}. where Π_c is the POVM element associated to the score C = c. Putting all these things together, we have the following optimisation problem g(q) = inf_ρ_AB H(Z|L,E)_[ρ_AB] s.t. ρ_AB≽ 0, [ρ_AB] = 1, _B[ρ_AB] = _Q[|Ψ⟩⟨Ψ|_AQ] [ρ_ABΠ_c] = q_c, ∀ c ∈∖{⊥}. Here, is the quantum channel that describes Bob's measurement (which converts the register B to Z), the announcement of register L (which depends on T and Z), and lastly, tracing out Alice's register A. Without loss of generality, we assume that the quantum side-information E that is available to Eve is the purification of the state ρ_AB. As mentioned earlier, the above optimisation involves an optimisation over infinite-dimensional quantum state ρ_AB since _B is infinite-dimensional. Our strategy is to truncate the Hilbert space dimension by projecting ρ_AB into a finite-dimensional subspace of the full Hilbert space, and calculate the correction terms due to this truncation. The rest of Appendix <ref> is dedicated to these calculations. §.§ Bounding the trace distance between truncated and original state The most important ingredient to obtain the correction terms due to the Hilbert truncation is an upper bound on the trace distance between the truncated state and the original state. We let ρ_ABE be the state shared between Alice, Bob and Eve. Without loss of generality, we can assume ρ_ABE to be a pure state: there exists a normalised vector |ψ⟩_ABE such that ρ_ABE = |ψ⟩⟨ψ|_ABE. Just like in the main text, we fix ∈ℕ and denote N_0 = ∑_n = 0^|n⟩⟨n| as the projection to the “low energy subspace” . Let N_1 = - N_0. Our goal is to find some small δ such that Δ(ρ_ABE, (_A ⊗ N_0 ⊗_E) ρ_ABE (_A ⊗ N_0 ⊗_E)) ≤δ, where Δ denotes the trace distance. For a fix , let V_1 = ∫_β^2 > ^2 β|β⟩⟨β|/π be the POVM element corresponding to the “large heterodyne outcome”. From Appendix A Eq. (A3) of Ref. <cit.>, the following operator inequality holds. N_1 ≼Γ( + 2, 0)/Γ( + 2,) V_1. Here Γ denotes the upper incomplete Gamma function. 
Γ(n, x) = ∫_x^∞ t t^n-1 e^-t. Consequently, for any state σ, we have [σ N_1] ≤Γ(+2, 0)/Γ(+2,)[σ V_1] In passing, we note that there is a slight difference in how we define N_0 and N_1 as compared to the corresponding quantities in Ref. <cit.>. In Ref. <cit.>, the cut-off photon number is defined as n_c = + 1. Now, the trace distance can be upper bounded in terms of the weight of the state in the “high energy space” using the gentle measurement lemma (Lemma 9 of Ref. <cit.>) Δ(ρ_ABE, (_A ⊗ N_0 ⊗_E) ρ_ABE (_A ⊗ N_0 ⊗_E)) ≤√(2 [ρ_ABE (_A ⊗ N_1 ⊗_E)]) ≤√(2Γ(+2, 0)/Γ( +2,)[ρ_B V_1]) = √(2Γ(+2, 0)/Γ(+2,)[W > ]) where the last term can be monitored in the test rounds as (C = ∅). Let us introduce the shorthand [W > ] = ν, and κ = Γ( + 2, 0)/Γ( + 2,). We can write δ = √(2κν). The gentle measurement lemma given in Ref. <cit.> applies to any state and measurement. However, for our case, we can tighten the bound as we are dealing with a special case of gentle measurement. First, notice that |ψ⟩_ABE can be further decomposed as |ψ⟩_ABE = √(1-w)|ψ_0⟩ + √(w)|ψ_1⟩, where w = ⟨ψ| (⊗ N_1 ⊗) |ψ⟩ is the weight of the and (⊗ N_0 ⊗) |ψ⟩⟨ψ|_ABE (⊗ N_0 ⊗) = (1-w) |ψ_0⟩⟨ψ_0|, (⊗ N_1 ⊗) |ψ⟩⟨ψ|_ABE (⊗ N_1 ⊗) = w |ψ_1⟩⟨ψ_1|. Since N_0 and N_1 are orthogonal, |ψ_0⟩ and |ψ_1⟩ are also orthogonal. They are also normalised. Then, the trace distance between the truncated state and the original state is given by Δ(|ψ⟩⟨ψ|_ABE, (1-w) |ψ_0⟩⟨ψ_0|) = 1/2[√((|ψ⟩⟨ψ| - (1-w) |ψ_0⟩⟨ψ_0|)^†(|ψ⟩⟨ψ| - (1-w) |ψ_0⟩⟨ψ_0|))] To compute the trace distance, we first compute M = |ψ⟩⟨ψ| - (1-w) |ψ_0⟩⟨ψ_0| = [ 0 √(w(1-w)); √(w(1-w)) w ], then M^† M is given by M^† M = [ w(1-w) w√(w(1-w)); w√(w(1-w)) w^2 + w(1-w) ] The eigenvalues of M^† M are given by eig_±[M^† M] = w(2-w) ± w√(w(4-3w))/2 Therefore, the trace distance is given by Δ(|ψ⟩⟨ψ|_ABE, (1-w) |ψ_0⟩⟨ψ_0|) = [√(M^† M)] = ∑_j = +,-√(eig_j[M^† M]) = 1/2√(2)[√(w(2-w) + w√(w(4-3w))) + √(w(2-w) - w√(w(4-3w)))] =: δ(w) The behaviour of the trace distance δ against the weight for the high energy subspace w is given in Fig. <ref>. However, we cannot access the value of w directly since we only have an upper bound w ≤κν. Since our bound is not monotonously non-decreasing, we need to convert it to a bound that have this monotonicity property. First, we take the first derivative δ/ w = (1-w)/2√(2)w√(4-3w)[3w + √(w(4-3w))/√((2-w) + √(w(4-3w))) - 3w - √(w(4-3w))/√((2-w) - √(w(4-3w)))] The pre-factor is non-negative for 0 ≤ w ≤ 1. We want the term in the square bracket to be non-negative. From Mathematica, this condition is satisfied if w ≤ 2/3. Thus, the maximum trace distance is given by δ(2/3) = 1/√(3). Therefore, as a function of ν, we have δ = 1/2√(2)[√(κν (2-κν) + κν√(κν(4-3κν))) + √(κν(2-κν) - κν√(κν(4-3κν)))] if κν≤ 2/3 1/√(3) otherwise. To construct the min-tradeoff function, we want to linearise the upper bound on the trace distance. First, we check that the function δ is concave in w, so that we can simply take a tangent line. We take the second derivative ^2 δ/ w^2 = - (√(w) + √(4-3w))√((2-w) - √(w(4-3w))) - (√(w) - √(4-3w))√((2-w) + √(w(4-3w)))/2√(2)(1-w) [w(4-3w)]^3/2 Since √(w) < √(4-3w) for any 0 ≤ w ≤ 1, both the numerator and denominator (excluding the negative sign outside) are positive. Thus, we have ^2 δ / w^2 < 0 which means that δ is concave in the interval 0 ≤ w ≤ 1. Since w = κν is linear in ν, the trace distance δ is also concave in ν. Therefore, we can simply take a tangent line at any ν such that κν≤ 2/3 to derive a linear upper bound on δ. 
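These properties are easy to verify numerically. The snippet below (an illustrative sketch, not part of the paper's numerics) evaluates δ as a function of the weight w = κν using the closed form above and checks that a tangent taken at a point with w ≤ 2/3 upper-bounds δ on the whole interval [0, 1], and that the maximum value is δ(2/3) = 1/√3.

import numpy as np

def delta(w):
    """Trace distance between the original state and its truncation as a
    function of the weight w of the high-energy component (closed form above)."""
    w = np.asarray(w, dtype=float)
    s = np.sqrt(w * (4.0 - 3.0 * w))
    a = np.clip(w * (2.0 - w) + w * s, 0.0, None)   # clip guards against tiny rounding errors
    b = np.clip(w * (2.0 - w) - w * s, 0.0, None)
    return (np.sqrt(a) + np.sqrt(b)) / (2.0 * np.sqrt(2.0))

# delta is concave on [0, 1], so a tangent taken at any w0 = kappa*nu_0 <= 2/3
# upper-bounds it everywhere; its maximum value is delta(2/3) = 1/sqrt(3).
w = np.linspace(1e-6, 1.0, 2001)
w0 = 0.05                                            # arbitrary tangent point with w0 <= 2/3
h = 1e-6
slope = (delta(w0 + h) - delta(w0 - h)) / (2.0 * h)  # finite-difference tangent slope
tangent = delta(w0) + slope * (w - w0)
assert np.all(tangent >= delta(w) - 1e-9)            # concavity: tangent is an upper bound
assert abs(delta(2.0 / 3.0) - 1.0 / np.sqrt(3.0)) < 1e-12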
Suppose we take the tangent at ν = ν_0 with ν_0 ∈ (0, 2/3κ), we use the shorthand w_0 = κν_0 δ≤(1-w_0)/2√(2)√(4-3w_0)[3w_0 + √(w_0(4-3w_0))/√((2-w_0) + √(w_0(4-3w_0))) - 3w_0 - √(w_0(4-3w_0))/√((2-w_0) - √(w_0(4-3w_0)))] (ν - ν_0)/ν_0 + 1/2√(2)[√(w_0(2-w_0) + w_0√(w_0(4-3w_0))) + √(w_0(2-w_0) - w_0√(w_0(4-3w_0)))]. Therefore, δ≤ m_0 (ν - ν_0) + c_0, with m_0 =(1-w_0)/2√(2)ν_0 √(4-3w_0)[3w_0 + √(w_0(4-3w_0))/√((2-w_0) + √(w_0(4-3w_0))) - 3w_0 - √(w_0(4-3w_0))/√((2-w_0) - √(w_0(4-3w_0)))], c_0 = 1/2√(2)[√(w_0(2-w_0) + w_0√(w_0(4-3w_0))) + √(w_0(2-w_0) - w_0√(w_0(4-3w_0)))] . §.§ Continuity of conditional von Neumann entropy Now that we have an upper bound on the trace distance between the truncated and the original state, we can use the continuity bound for the von Neumann entropy to prove that the truncation does not significantly reduce the entropy. We use the following Lemma 1 from Ref. <cit.> Let _Z and _E be two Hilbert spaces, where dim(_Z) = d_Z while _E can be infinite dimensional. Let ρ, σ∈𝖣(_Z ⊗_E) be subnormalised states with [ρ] ≥[σ]. If Δ(ρ, σ) ≤δ, then H(Z|E)_ρ≥ H(Z|E)_σ - δlog_2 d_Z - (1 + δ) h_2(δ/1+δ), where h_2(x) = -x log_2(x) - (1-x) log_2(1-x) is the binary entropy function. Now, let : ABE → ZE be the quantum channel that corresponds to Bob's measurement (and tracing out Alice). In this context, the state ρ in the above lemma corresponds to ρ = [ρ_ABE] and σ = [(_A ⊗ N_0 ⊗_E) ρ_ABE (_A ⊗ N_0 ⊗_E)]. Because of the monotonicity property of trace distance, we can also re-use the upper bound on the trace distance that we have derived in the previous subsection. The correction term for the conditional entropy is given by v(δ) = δlog_2 d_Z + (1 + δ) h_2 (δ/1 + δ) Again, for the purpose of constructing min-tradeoff functions, we want to obtain a bound that is affine in ν. We adopt a similar strategy of taking an upper bound by taking a tangent line at ν = ν_c where ν_c ∈ (0, 2/3κ). We need to calculate the gradient of v against ν. To that end, we calculate the gradient of v against δ v/δ = log_2 d_Z + log_2 (1 + δ/δ). On the other hand, from Eq. (<ref>), the gradient of δ against ν is given by δ/ν = δ/ w w/ν = (1-w)/2√(2)w√(4-3w)[3w + √(w(4-3w))/√((2-w) + √(w(4-3w))) - 3w - √(w(4-3w))/√((2-w) - √(w(4-3w)))] ·κ, where w = κν depends implicitly on ν. We introduce the shorthand w_c = κν_c δ_c = 1/2√(2)[√(w_c (2 - w_c) + w_c√(w_c (4 - 3 w_c))) + √(w_c (2 - w_c) - w_c √(w_c (4 - 3 w_c)))] Then, m_ = v/ν|_ν = ν_c = ( v/δ·δ/ν) |_ν = ν_c = (1-w_c) κ/2√(2)w_c √(4 - 3w_c)[log_2 d_Z + log_2 (1 + δ_c/δ_c)] ×[3 w_c + √(w_c (4 - 3 w_c))/√((2 - w_c) + √(w_c (4 - 3 w_c))) - 3w_c - √(w_c (4 - 3w_c))/√((2 - w_c) - √(w_c (4 - 3w_c)))] and c_ = v(δ_c) = δ_c log_2 d_Z + (1 + δ_c) h_2 (δ_c/1 + δ_c). This gives a linear correction term to the conditional entropy g^(ν_c)_(ν) = m_(ν - ν_c) + c_. §.§ Correction to the constraints The projection to the low energy subspace does not only affect the conditional entropy but also the constraints. We consider the entanglement based picture before Bob's measurement and denote the entangled state between Alice and Bob by ρ_AB and σ_AB = (⊗ N_0) ρ_AB (⊗ N_0). §.§.§ Normalisation constraint The first constraint is the normalisation, this modification is straightforward, we have 1 - κν≤[σ_AB] ≤ 1. §.§.§ Constraint on Alice's marginal state Next, we have the constraint on Alice's marginal states. Originally, we have the constraint ρ_A = _Q [|Ψ⟩⟨Ψ|_AQ]. However, due to the cut-off, we instead have to constrain the state σ_A=(1-w)_BE[|ψ_0⟩⟨ψ_0|_ABE]. 
Let us expand the original state ρ_A, ρ_A = _BE[|ψ⟩⟨ψ|_ABE] = w_BE[|ψ_1⟩⟨ψ_1|_ABE] + √(w(1-w))_BE[|ψ_0⟩⟨ψ_1|_ABE] + √(w(1-w))_BE[|ψ_1⟩⟨ψ_0|_ABE] + σ_A By expanding |ψ⟩_ABE=∑_ac_a|a⟩_A⊗|ϕ_a⟩_BE, the cross-terms can be shown to be _BE[|ψ_0⟩⟨ψ_1|_ABE] = _BE[(⊗ N_0 ⊗)|ψ⟩⟨ψ|_ABE(⊗ N_1 ⊗)] = ∑_aa'c_a(c_a')^*[(N_0 ⊗)|ϕ_a⟩⟨ϕ_a|_BE(N_1 ⊗)] = 0, where the final equality notes that the projectors N_0 and N_1 are orthogonal. As such, we have the following constraint, Δ(σ_A,_Q |Ψ⟩⟨Ψ|_AQ) = Δ(σ_A,ρ_A) = 1/2w_BE[|ψ_1⟩⟨ψ_1|_ABE]_1 ≤1/2w, where the second line invokes the definition of trace distance, and the third line utilise the fact that the trace norm of a quantum state is bounded by 1. The trace distance constraint can be re-cast as SDP constraints (Example 1.20 of Ref. <cit.>) by defining the matrix M M = [ ζ_1 σ_A - _Q |Ψ⟩⟨Ψ|; σ_A - _Q |Ψ⟩⟨Ψ| ζ_2 ] then imposing that [M] ≤κν and M ≽ 0. §.§.§ Statistical constraints Finally, the most challenging correction is to the statistical constraints. To do this, we can write the original state ρ as ρ = [ ρ_0 ρ_c; ρ_c^† ρ_h ]. Similarly, we can write the measurement operator as Π = [ Π_0 Λ; Λ^† Π_h ]. Let ρ_0 and Π_0 be linear operators in the low-energy subspace _low. We shall truncate the subspace outside of _low. The constraint involves terms of the form [ρΠ] = [ρ_0 Π_0] + [ρ_h Π_h] + [ρ_c Λ^†] + [ρ_c^†Λ] = [ρ_0 Π_0] + [ρ_h Π_h] + 2 [ρ_c Λ^†]. The LHS is simply the original constraint which is related to the statistics that we see in the experiment. The second inequality is due to the fact that [X] = [X^†] and the cyclic permutation symmetry of trace. Bounding the [ρ_h Π_h] is easy: 0 ≤[ρ_h Π_h] ≤κν, since 0 ≼Π_h ≼. Next, we need to bound [ρ_c Λ^†]. To do that we need the following lemmas Let A ≽ 0, B ≽ 0 and [ A X; X^† B ]≽ 0 be positive semidefinite. Then, we have B - X^† A^-1 X ≽ 0 Let A ≽ 0, B ≽ 0 and [ A X; X^† B ]≽ 0 be positive semidefinite. Then, there exists some matrix K such that X = A^1/2 K B^1/2 and K^† K ≼. Based on Lemma <ref>, we have B ≽ X^† A^-1 X. Hence, by multiplying both sides with B^-1/2 to the left and to the right, we have ≽ B^-1/2 X^† A^-1 X B^-1/2 = ( A^-1/2 X B^-1/2)^†( A^-1/2 X B^-1/2) We define K := A^-1/2 X B^-1/2. It is straightforward to see that this implies the two claims. Let A and B be matrices. Then, [A^† B]^2 ≤[A^† A] ·[B^† B] Let ρ and Π be given by Eqs. (<ref>) and (<ref>), respectively. Then, we have -√([ρ_0 Π_0] [ρ_h Π_h])≤[ρ_c Λ^†] ≤√([ρ_0 Π_0] [ρ_h Π_h]) Based on Corollary <ref>, we can write ρ_c = ρ_0^1/2 K_1 ρ_h^1/2 Λ = Π_0^1/2 K_2 Π_h^1/2, for some K_1, K_2 such that K_1^† K_1 ≼ and K_2^† K_2 ≼. Then, we can write [ρ_c Λ^†] = [(ρ_0^1/2 K_1 ρ_h^1/2)·(Π_h^1/2 K_2^†Π_0^1/2) ] = [ (Π_0^1/2ρ_0^1/2) ·(K_1 ρ_h^1/2Π_h^1/2 K_2^†) ] ≤√([ Π_0 ρ_0 ] ·[(K_1 ρ_h^1/2Π_h^1/2 K_2^†) ·(K_2 Π_h^1/2ρ_h^1/2 K_1^†)]) ≤√([ Π_0 ρ_0 ] ·[ρ_hΠ_h]) where in the first line, we substitute ρ_c and Λ^† with their identities which involved K_1 and K_2. In the second line, we use the cyclic permutation symmetry of trace and group the terms accordingly. In the third line, we apply Lemma <ref>. In the fourth line, we use the fact that K_1^† K_1 ≼, K_2^† K_2 ≼ and the monotonicity of trace. This proves the upper bound claim. The lower bound claim can be obtained similarly by modifying the third line where we consider the negative square root of Lemma <ref> instead. Let ρ and Π be given by Eqs. (<ref>) and (<ref>), respectively. Let p = [ρΠ] and w ≥[ρ_h]. 
Then, we have p - ξ_L ≤[ρ_0 Π_0] ≤ p - ξ_U where ξ_L = w + 2√(w(1-w)) if w < 5+ √(5)/10 1+√(5)/2 if w ≥5+ √(5)/10 ξ_U = w - 2 √(w(1-w)) if w < 5 - √(5)/10 1 - √(5)/2 if w ≥5 - √(5)/10 First, we write [ρ_0 Π_0] = [ρΠ] - [ρ_h Π_h] - 2 [ρ_c Λ^†] Thus, the correction term ξ is given by ξ = [ρ_h Π_h] + 2 [ρ_c Λ^†] We assume that [ρ_h] = ω≤ w. We remark that ω is not observable in practice and we can only bound it with w which is a function of ν. To prove the lower bound, this is easy because we simply have to maximise the correction term ξ≤[ρ_h Π_h] + 2 √([ρ_h Π_h]·[ρ_0 Π_0])≤ω + 2 √(ω(1-ω)) Differentiating the upper bound with respect to ω, we see that there is a critical ω at ω_c = (5 + √(5))/10 ≈ 0.724. Therefore, we have ξ≤ξ_L := w + 2√(w(1-w)) if w < ω_c 1 + √(5)/2 if w ≥ω_c The proof for the upper bound is slightly more complicated. We now have to minimise the correction term ξ. ξ≥[ρ_h Π_h] - 2 √([ρ_h Π_h] ·[ρ_0 Π_0]) = √([ρ_h Π_h])·(√([ρ_h Π_h]) - 2 √([ρ_0 Π_0])) The minimisation over Π_0 is easy: we want ρ_0 and Π_0 to be perfectly overlapped and hence [ρ_0 Π_0] ≥ 1 - ω. We then parameterise [ρ_h Π_h] = ωλ for some λ∈ [0,1]. Thus, we have ξ≥√(ωλ)·(√(ωλ) - 2 √(1 - ω)) := F(ω, λ) Differentiating with respect to λ, we have ∂ F/∂λ = ω(1- √(1-ω/ωλ)) For ω < 1/2, we have ∂ F/∂λ < 0. On the other hand, for ω≥ 1/2, we have ∂ F/∂λ = 0 when λ = (1-ω)/ω. We consider the case where ω < 1/2 first. In this case, it is optimal to take λ = 1 since the first derivative with respect to λ is negative. In the second case, we shall take λ = (1-ω)/ω. Therefore, our correction term is given by ξ≥ G(ω) :=ω - 2 √(ω(1-ω)) if ω < 1/2 ω - 1 if ω≥ 1/2 Note that G(ω) ≤ 0 for all ω∈ [0,1], hence p - G(ω) ≥ p. Taking the first derivative of G with respect to ω: d G/dω = 2ω + √(ω(1-ω))-1/√(ω(1-ω)) for ω < 1/2 1 for ω≥ 1/2. Thus, it is to see that the optimal ω must be within the [0, 1/2) interval. We solve for dG/dω = 0 in this interval and obtain the optimal ω_* = (5-√(5))/10 ≈ 0.276. Also, we have dG/dω < 0 for ω < ω_* and dG/dω > 0 for ω > ω_*, which implies the minimum ξ is given by ξ≥ξ_+ := w - 2√(w(1-w)) if w < ω_* 1-√(5)/2 if w ≥ω_*. This concludes the proof of the upper bound, and hence the claim. Corollary <ref> gives an upper and lower bound on the statistical constraints in terms of the actual probability p associated to the original state and the upper bound on the weight of the high photon number components w. However, these bounds are not linear in w (hence, not linear in ν) whereas for the purpose of constructing min-tradeoff functions, it is convenient to obtain correction terms that are linear in ν. This ensures that the dual solution obtained from our numerical methods can be readily used as min-tradeoff functions [Alternatively, we can keep the expressions for ξ_L and ξ_U as they are and linearise the min-tradeoff functions later.] Fortunately, the correction terms ξ_L (and ξ_U) can be easily linearised since they are concave (and convex, respectively). By taking a tangent line at any point on the curve, we will obtain an upper bound on ξ_L (and lower bound on ξ_U), which relaxes the constraint in the right direction. We denote these bounds by ξ̂_L^(ν_L) and ξ̂_U^(ν_U), respectively. Here ν_L and ν_U denotes the point in which the tangent line is taken. Let ν_L ∈ (0,(5+√(5))/10κ]. Then, ξ̂_L^(ν_L)(ν) is given by ξ̂_L^(ν_L)(ν) = (1 + (1- 2 κν_L)/√(κν_L (1 - κν_L))) κν + √(κν_L/1-κν_L) On the other hand, let ν_U ∈ (0, (5-√(5))/10κ]. 
Then, ξ̂_U^(ν_U)(ν) is given by ξ̂_U^(ν_U)(ν) = (1 - (1- 2 κν_U)/√(κν_U (1 - κν_U))) κν - √(κν_U/1-κν_U) §.§ Proof of dimension reduction theorem Putting everything together, we can define the function g̃'(q) g̃'(q) = inf_σ_AB H(Z|L,E)_[σ_AB] s.t. σ_AB≽ 0, [σ_AB] ≤ 1, [σ_AB] ≥ 1 - κ q_∅, Δ(σ_A, _Q[|Ψ⟩⟨Ψ|_AQ]) ≤κ q_∅/2, [σ_ABΠ_c] ≤ q_c - ξ_U(q_∅), [σ_ABΠ_c] ≥ q_c + ξ_L(q_∅) Here, we use the notation σ_AB = (_A ⊗ N_0) ρ_AB (_A ⊗ N_0). We can define the function g̃(q) as the linearised version of the above function which can be obtained by replacing ξ_L and ξ_U with ξ̂_L and ξ̂_U, respectively. (Refer to Eqs. (<ref>) and (<ref>) for those expressions). Then, using Lemma <ref>, we have g(q) = inf_ρ_AB∈_q H(Z|L,E)_[ρ_AB] ≥inf_σ_AB: Δ(ρ_AB, σ_AB) ≤δ(q_∅), ρ_AB∈_q H(Z|L,E)_[σ_AB]- δ(q_∅) log_2 d_Z - (1+δ(q_∅)) h_2 (δ(q_∅)/1 + δ(q_∅))_-v(δ) ≥g̃(q) - g_^(ν_c)(q_∅). Here, we denote _q as a shorthand for the feasible set defined in Eq. (<ref>). The second line is due to Lemma <ref>. The last line can be deduced by identifying that the constraints Δ(ρ_AB, σ_AB) ≤δ(q_∅) and ρ_AB∈_q are equivalent to the one written in Eq. (<ref>). Additionally, we notice that the correction terms v(δ(q_∅)) is upper bounded by g_^(ν_c)(q_∅), as derived previously. This concludes the proof of Theorem <ref>. § NUMERICAL ANALYSIS §.§ Channel Model For numerical simulation, we consider a channel with channel loss η along with excess noise χ. After the prepared state |ψ_X_j⟩ = |α e^i (π X_j/2 + π/4)⟩ passes through the channel, the probability that it would result in a heterodyne detection in some phase space region ℛ with Y_j^2∈[τ_1^2,τ_2^2] and θ∈[zπ/2,(z+1)π/2] is [Y_j∈ℛ]=1/π∫_τ_1^τ_2dβ∫_zπ/2^(z+1)π/2dθβ/1+ηζ/2 e^-β^2-ηα^2 + 2αβ√(η)cos(θ-(2X_j+1)π/4)/1+ηζ/2. We can compute the integral analytically for Y_j from 0 to infinity when considering regions extending to ∞, [θ∈[zπ/2,(z+1)π/2]]=1/2π∫_zπ/2^(z+1)π/2dθ e^-φ^2{1+√(π)φ f(θ)e^φ^2f(θ)^2erf[φ f(θ)+ 1]}, where φ=√(ηα^2/1+ηζ/2) and f(θ)=cos(θ-(2X_j+1)π/4). §.§ Min-tradeoff Function As discussed in Sec. <ref>, one could compute a lower bound on the conditional von Neumann entropy by solving the SDP in Eqn. (<ref>). We first define the score C (excluding ⊥) determined by Alice's state preparation choice and the measurement outcome discretisation for T_j=1, C_j=⊤ Z_j=⊤ 0 ∩_k∈[0,3]{X_j=k Z_j=k (mod 4)} 1 ∩_k∈[0,3]{X_j=k Z_j=k+1 (mod 4)} 2 ∩_k∈[0,3]{X_j=k Z_j=k+2 (mod 4)} 3 ∩_k∈[0,3]{X_j=k Z_j=k+3 (mod 4)} (∅,0) ∩_k∈[0,3]{X_j=k Z_j=(∅,k) (mod 4)} (∅,1) ∩_k∈[0,3]{X_j=k Z_j=(∅,k+1 (mod 4))} (∅,2) ∩_k∈[0,3]{X_j=k Z_j=(∅,k+2 (mod 4))} (∅,3) ∩_k∈[0,3]{X_j=k Z_j=(∅,k+3 (mod 4))} Based on the protocol, we can model the projectors of the measurement operators, Π̃_c, when they are projected into the photon number subspace for n≤ n_max. The POVM Π_c projects the state in phase space to a region ℛ_c^a (conditioned on A=a), Π_c=∑_a|a⟩⟨a|⊗1/π∫_ℛ^a_cdY_j |Y_j⟩⟨Y_j|. The projected POVM is simply Π̃_c= ∑_a|a⟩⟨a|⊗∑_n,n'≤ n_max1/π∫_ℛ^a_cdY_j ⟨n|Y_j⟩⟨Y_j|n'⟩|n⟩⟨n'| = ∑_a|a⟩⟨a|⊗∑_n,n'≤ n_max1/π∫_ℛ^a_cdY_j e^-Y_j^2Y_j^n(Y_j^*)^n'/√(n!n'!)|n⟩⟨n'|, where the second line notes ⟨n|Y_j⟩=e^-Y_j^2/2Y_j^n/√(n!). We can compute the overlap _Q[|Ψ⟩⟨Ψ|_AQ]=1/4∑_X_jX_j'e^-α^2(1-i^X_j-X_j')|X_j⟩⟨X_j'|. Having these terms allows us to solve the SDP numerically with mosek <cit.> in Matlab and obtain the dual solution. 
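For illustration, the following Python sketch (not the Matlab/MOSEK implementation used for the actual computation, and with region boundaries chosen only as an example) evaluates the Fock-basis block of such a projected POVM element by writing Y_j = r e^(iθ) and factorising the phase-space integral over a wedge-shaped annulus into a radial and an angular part.

import numpy as np
from math import factorial, pi
from scipy.integrate import quad

def truncated_povm_block(n_max, r_lo, r_hi, th_lo, th_hi):
    """Matrix elements (1/pi) * Int_R dY exp(-|Y|^2) Y^n (Y*)^m / sqrt(n! m!)
    for n, m <= n_max, where R = {r_lo <= |Y| <= r_hi, th_lo <= arg(Y) <= th_hi}
    (cf. the expression for the projected POVM above). With Y = r e^{i theta},
    the integral factorises into a radial part and an angular part."""
    M = np.zeros((n_max + 1, n_max + 1), dtype=complex)
    for n in range(n_max + 1):
        for m in range(n_max + 1):
            radial, _ = quad(lambda r, p=n + m: r ** (p + 1) * np.exp(-r * r),
                             r_lo, r_hi)
            k = n - m
            if k == 0:
                angular = th_hi - th_lo
            else:
                angular = (np.exp(1j * k * th_hi) - np.exp(1j * k * th_lo)) / (1j * k)
            M[n, m] = radial * angular / (pi * np.sqrt(factorial(n) * factorial(m)))
    return M

# Example: one quadrant in phase space with the amplitude post-selection threshold 0.6
# quoted in the simulation section; the upper radial limit 12 stands in for infinity.
block = truncated_povm_block(n_max=12, r_lo=0.6, r_hi=12.0, th_lo=0.0, th_hi=pi / 2)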
Solving the SDP with any normalised trial parameter q_dual provides a dual g̃^(q_dual)(q_dual)=φ(λ_q_dual)+λ_q_dual·h(q_dual), where the superscript in g̃^(q_dual) indicates the trial parameter at which the dual was obtained, and h is some fixed function independent of the dual variables. We note here that λ_q_dual refers to the optimal dual variable when the SDP is solved with q_dual, i.e. λ_q_dual=argmax_λφ(λ)+λ·h(q_dual). Importantly, for any q, we can show that g̃^(q_dual)(q) lower bounds g̃^(q)(q): g̃^(q)(q)= max_λφ(λ)+λ·h(q) ≥ φ(λ_q_dual)+λ_q_dual·h(q) = g̃^(q_dual)(q). As such, we can always select one value of q_dual and use g̃^(q_dual)(q) to construct the min-tradeoff function with the same linearisation method via Eq. (<ref>) and conversion via Eq. (<ref>). Since this function lower bounds the conditional von Neumann entropy for all q, the result is a valid min-tradeoff function. In the numerical optimisation, the choice of q_dual is optimised. For simplicity, we optimise only over trial parameters q_dual generated from the same probability distribution as the channel model, with the same channel loss η and amplitude α. The only parameter that may differ from the actual channel is the trial excess noise χ_dual, which we optimise over.
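A minimal sketch of this optimisation step is given below. It assumes a hypothetical helper key_rate_for_trial (not from the paper) that would wrap the full pipeline, solving the SDP at the trial statistics generated with excess noise χ_dual, building the min-tradeoff function from its dual, and evaluating the finite-size key rate; here it is replaced by a dummy stand-in so that the sketch runs. Since every trial value of χ_dual yields a valid lower bound on the key rate, a coarse grid search suffices.

import numpy as np

def key_rate_for_trial(chi_dual):
    """Placeholder for the full pipeline described above (SDP solve at the trial
    statistics, min-tradeoff construction from its dual, GEAT key-rate bound).
    Any smooth stand-in will do for illustrating the search; this one is made up."""
    return -(chi_dual - 0.01) ** 2

# Every trial chi_dual gives a valid key-rate lower bound, so the best value found
# on a coarse grid is itself a valid (if possibly sub-optimal) choice.
chi_grid = np.linspace(0.0, 0.05, 26)
rates = np.array([key_rate_for_trial(c) for c in chi_grid])
chi_dual_best = chi_grid[int(np.argmax(rates))]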
http://arxiv.org/abs/2409.03522v1
20240905133409
Euclid preparation. Simulations and nonlinearities beyond ΛCDM. 1. Numerical methods and validation
[ "Euclid Collaboration", "J. Adamek", "B. Fiorini", "M. Baldi", "G. Brando", "M. -A. Breton", "F. Hassani", "K. Koyama", "A. M. C. Le Brun", "G. Rácz", "H. -A. Winther", "A. Casalino", "C. Hernández-Aguayo", "B. Li", "D. Potter", "E. Altamura", "C. Carbone", "C. Giocoli", "D. F. Mota", "A. Pourtsidou", "Z. Sakr", "F. Vernizzi", "A. Amara", "S. Andreon", "N. Auricchio", "C. Baccigalupi", "S. Bardelli", "P. Battaglia", "D. Bonino", "E. Branchini", "M. Brescia", "J. Brinchmann", "A. Caillat", "S. Camera", "V. Capobianco", "V. F. Cardone", "J. Carretero", "S. Casas", "F. J. Castander", "M. Castellano", "G. Castignani", "S. Cavuoti", "A. Cimatti", "C. Colodro-Conde", "G. Congedo", "C. J. Conselice", "L. Conversi", "Y. Copin", "F. Courbin", "H. M. Courtois", "A. Da Silva", "H. Degaudenzi", "G. De Lucia", "M. Douspis", "F. Dubath", "X. Dupac", "S. Dusini", "M. Farina", "S. Farrens", "S. Ferriol", "P. Fosalba", "M. Frailis", "E. Franceschi", "M. Fumana", "S. Galeotta", "B. Gillis", "P. Gómez-Alvarez", "A. Grazian", "F. Grupp", "L. Guzzo", "S. V. H. Haugan", "W. Holmes", "F. Hormuth", "A. Hornstrup", "S. Ilić", "K. Jahnke", "M. Jhabvala", "B. Joachimi", "E. Keihänen", "S. Kermiche", "A. Kiessling", "M. Kilbinger", "B. Kubik", "M. Kümmel", "M. Kunz", "H. Kurki-Suonio", "S. Ligori", "P. B. Lilje", "V. Lindholm", "I. Lloro", "G. Mainetti", "E. Maiorano", "O. Mansutti", "O. Marggraf", "K. Markovic", "M. Martinelli", "N. Martinet", "F. Marulli", "R. Massey", "E. Medinaceli", "S. Mei", "M. Melchior", "Y. Mellier", "M. Meneghetti", "E. Merlin", "G. Meylan", "M. Moresco", "L. Moscardini", "C. Neissner", "S. -M. Niemi", "C. Padilla", "S. Paltani", "F. Pasian", "K. Pedersen", "W. J. Percival", "V. Pettorino", "S. Pires", "G. Polenta", "M. Poncet", "L. A. Popa", "L. Pozzetti", "F. Raison", "A. Renzi", "J. Rhodes", "G. Riccio", "E. Romelli", "M. Roncarelli", "R. Saglia", "A. G. Sánchez", "D. Sapone", "B. Sartoris", "M. Schirmer", "T. Schrabback", "A. Secroun", "G. Seidel", "S. Serrano", "C. Sirignano", "G. Sirri", "L. Stanco", "J. Steinwagner", "P. Tallada-Crespí", "D. Tavagnacco", "I. Tereno", "R. Toledo-Moreo", "F. Torradeflot", "I. Tutusaus", "E. A. Valentijn", "L. Valenziano", "T. Vassallo", "G. Verdoes Kleijn", "A. Veropalumbo", "Y. Wang", "J. Weller", "G. Zamorani", "E. Zucca", "A. Biviano", "C. Burigana", "M. Calabrese", "D. Di Ferdinando", "J. A. Escartin Vigo", "G. Fabbian", "F. Finelli", "J. Gracia-Carpio", "S. Matthew", "N. Mauri", "A. Pezzotta", "M. Pöntinen", "V. Scottez", "M. Tenti", "M. Viel", "M. Wiesmann", "Y. Akrami", "V. Allevato", "S. Anselmi", "M. Archidiacono", "F. Atrio-Barandela", "A. Balaguera-Antolinez", "M. Ballardini", "A. Blanchard", "L. Blot", "H. Böhringer", "S. Borgani", "S. Bruton", "R. Cabanac", "A. Calabro", "B. Camacho Quevedo", "G. Cañas-Herrera", "A. Cappi", "F. Caro", "C. S. Carvalho", "T. Castro", "K. C. Chambers", "S. Contarini", "A. R. Cooray", "G. Desprez", "A. Díaz-Sánchez", "J. J. Diaz", "S. Di Domizio", "H. Dole", "S. Escoffier", "A. G. Ferrari", "P. G. Ferreira", "I. Ferrero", "A. Finoguenov", "F. Fornari", "L. Gabarra", "K. Ganga", "J. García-Bellido", "T. Gasparetto", "V. Gautard", "E. Gaztanaga", "F. Giacomini", "F. Gianotti", "G. Gozaliasl", "C. M. Gutierrez", "A. Hall", "H. Hildebrandt", "J. Hjorth", "A. Jimenez Muñoz", "S. Joudaki", "J. J. E. Kajava", "V. Kansal", "D. Karagiannis", "C. C. Kirkpatrick", "S. Kruk", "J. Le Graet", "L. Legrand", "J. Lesgourgues", "T. I. Liaudat", "A. Loureiro", "G. Maggio", "M. Magliocchetti", "F. Mannucci", "R. Maoli", "C. J. 
A. P. Martins", "L. Maurin", "R. B. Metcalf", "M. Migliaccio", "M. Miluzio", "P. Monaco", "A. Montoro", "A. Mora", "C. Moretti", "G. Morgante", "S. Nadathur", "L. Patrizii", "V. Popa", "P. Reimberg", "I. Risso", "P. -F. Rocci", "M. Sahlén", "E. Sarpa", "A. Schneider", "M. Sereno", "A. Silvestri", "A. Spurio Mancini", "K. Tanidis", "C. Tao", "N. Tessore", "G. Testera", "R. Teyssier", "S. Toft", "S. Tosi", "A. Troja", "M. Tucci", "C. Valieri", "J. Valiviita", "D. Vergani", "G. Verza", "P. Vielzeuf", "N. A. Walton" ]
astro-ph.CO
[ "astro-ph.CO" ]
Simulations and nonlinearities beyond ΛCDM. 1. Numerical methods and validation Simulations and nonlinearities beyond ΛCDM – numerical methods and validation Department of Astrophysics, University of Zurich, Winterthurerstrasse 190, 8057 Zurich, Switzerland Institute of Cosmology and Gravitation, University of Portsmouth, Portsmouth PO1 3FX, UK Dipartimento di Fisica e Astronomia, Università di Bologna, Via Gobetti 93/2, 40129 Bologna, Italy INAF-Osservatorio di Astrofisica e Scienza dello Spazio di Bologna, Via Piero Gobetti 93/3, 40129 Bologna, Italy INFN-Sezione di Bologna, Viale Berti Pichat 6/2, 40127 Bologna, Italy Max Planck Institute for Gravitational Physics (Albert Einstein Institute), Am Muhlenberg 1, D-14476 Potsdam-Golm, Germany Institute of Space Sciences (ICE, CSIC), Campus UAB, Carrer de Can Magrans, s/n, 08193 Barcelona, Spain Institut de Ciencies de l'Espai (IEEC-CSIC), Campus UAB, Carrer de Can Magrans, s/n Cerdanyola del Vallés, 08193 Barcelona, Spain Laboratoire Univers et Théorie, Observatoire de Paris, Université PSL, Université Paris Cité, CNRS, 92190 Meudon, France Institute of Theoretical Astrophysics, University of Oslo, P.O. Box 1029 Blindern, 0315 Oslo, Norway Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Pasadena, CA, 91109, USA Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Str. 1, 85748 Garching, Germany Department of Physics, Institute for Computational Cosmology, Durham University, South Road, DH1 3LE, UK Jodrell Bank Centre for Astrophysics, Department of Physics and Astronomy, University of Manchester, Oxford Road, Manchester M13 9PL, UK INAF-IASF Milano, Via Alfonso Corti 12, 20133 Milano, Italy Istituto Nazionale di Fisica Nucleare, Sezione di Bologna, Via Irnerio 46, 40126 Bologna, Italy Institute for Astronomy, University of Edinburgh, Royal Observatory, Blackford Hill, Edinburgh EH9 3HJ, UK Higgs Centre for Theoretical Physics, School of Physics and Astronomy, The University of Edinburgh, Edinburgh EH9 3FD, UK Institut für Theoretische Physik, University of Heidelberg, Philosophenweg 16, 69120 Heidelberg, Germany Institut de Recherche en Astrophysique et Planétologie (IRAP), Université de Toulouse, CNRS, UPS, CNES, 14 Av. Edouard Belin, 31400 Toulouse, France Université St Joseph; Faculty of Sciences, Beirut, Lebanon Institut de Physique Théorique, CEA, CNRS, Université Paris-Saclay 91191 Gif-sur-Yvette Cedex, France School of Mathematics and Physics, University of Surrey, Guildford, Surrey, GU2 7XH, UK INAF-Osservatorio Astronomico di Brera, Via Brera 28, 20122 Milano, Italy IFPU, Institute for Fundamental Physics of the Universe, via Beirut 2, 34151 Trieste, Italy INAF-Osservatorio Astronomico di Trieste, Via G. B. Tiepolo 11, 34143 Trieste, Italy INFN, Sezione di Trieste, Via Valerio 2, 34127 Trieste TS, Italy SISSA, International School for Advanced Studies, Via Bonomea 265, 34136 Trieste TS, Italy INAF-Osservatorio Astrofisico di Torino, Via Osservatorio 20, 10025 Pino Torinese (TO), Italy Dipartimento di Fisica, Università di Genova, Via Dodecaneso 33, 16146, Genova, Italy INFN-Sezione di Genova, Via Dodecaneso 33, 16146, Genova, Italy Department of Physics "E. 
Pancini", University Federico II, Via Cinthia 6, 80126, Napoli, Italy INAF-Osservatorio Astronomico di Capodimonte, Via Moiariello 16, 80131 Napoli, Italy INFN section of Naples, Via Cinthia 6, 80126, Napoli, Italy Instituto de Astrofísica e Ciências do Espaço, Universidade do Porto, CAUP, Rua das Estrelas, PT4150-762 Porto, Portugal Faculdade de Ciências da Universidade do Porto, Rua do Campo de Alegre, 4150-007 Porto, Portugal Aix-Marseille Université, CNRS, CNES, LAM, Marseille, France Dipartimento di Fisica, Università degli Studi di Torino, Via P. Giuria 1, 10125 Torino, Italy INFN-Sezione di Torino, Via P. Giuria 1, 10125 Torino, Italy INAF-Osservatorio Astronomico di Roma, Via Frascati 33, 00078 Monteporzio Catone, Italy INFN-Sezione di Roma, Piazzale Aldo Moro, 2 - c/o Dipartimento di Fisica, Edificio G. Marconi, 00185 Roma, Italy Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas (CIEMAT), Avenida Complutense 40, 28040 Madrid, Spain Port d'Informació Científica, Campus UAB, C. Albareda s/n, 08193 Bellaterra (Barcelona), Spain Institute for Theoretical Particle Physics and Cosmology (TTK), RWTH Aachen University, 52056 Aachen, Germany Institut d'Estudis Espacials de Catalunya (IEEC), Edifici RDIT, Campus UPC, 08860 Castelldefels, Barcelona, Spain Dipartimento di Fisica e Astronomia "Augusto Righi" - Alma Mater Studiorum Università di Bologna, Viale Berti Pichat 6/2, 40127 Bologna, Italy Instituto de Astrofísica de Canarias, Calle Vía Láctea s/n, 38204, San Cristóbal de La Laguna, Tenerife, Spain European Space Agency/ESRIN, Largo Galileo Galilei 1, 00044 Frascati, Roma, Italy ESAC/ESA, Camino Bajo del Castillo, s/n., Urb. Villafranca del Castillo, 28692 Villanueva de la Cañada, Madrid, Spain Université Claude Bernard Lyon 1, CNRS/IN2P3, IP2I Lyon, UMR 5822, Villeurbanne, F-69100, France Institute of Physics, Laboratory of Astrophysics, Ecole Polytechnique Fédérale de Lausanne (EPFL), Observatoire de Sauverny, 1290 Versoix, Switzerland Institut de Ciències del Cosmos (ICCUB), Universitat de Barcelona (IEEC-UB), Martí i Franquès 1, 08028 Barcelona, Spain Institució Catalana de Recerca i Estudis Avançats (ICREA), Passeig de Lluís Companys 23, 08010 Barcelona, Spain UCB Lyon 1, CNRS/IN2P3, IUF, IP2I Lyon, 4 rue Enrico Fermi, 69622 Villeurbanne, France Departamento de Física, Faculdade de Ciências, Universidade de Lisboa, Edifício C8, Campo Grande, PT1749-016 Lisboa, Portugal Instituto de Astrofísica e Ciências do Espaço, Faculdade de Ciências, Universidade de Lisboa, Campo Grande, 1749-016 Lisboa, Portugal Department of Astronomy, University of Geneva, ch. d'Ecogia 16, 1290 Versoix, Switzerland Université Paris-Saclay, CNRS, Institut d'astrophysique spatiale, 91405, Orsay, France INFN-Padova, Via Marzolo 8, 35131 Padova, Italy INAF-Istituto di Astrofisica e Planetologia Spaziali, via del Fosso del Cavaliere, 100, 00100 Roma, Italy Université Paris-Saclay, Université Paris Cité, CEA, CNRS, AIM, 91191, Gif-sur-Yvette, France FRACTAL S.L.N.E., calle Tulipán 2, Portal 13 1A, 28231, Las Rozas de Madrid, Spain INAF-Osservatorio Astronomico di Padova, Via dell'Osservatorio 5, 35122 Padova, Italy Max Planck Institute for Extraterrestrial Physics, Giessenbachstr. 1, 85748 Garching, Germany Universitäts-Sternwarte München, Fakultät für Physik, Ludwig-Maximilians-Universität München, Scheinerstrasse 1, 81679 München, Germany Dipartimento di Fisica "Aldo Pontremoli", Università degli Studi di Milano, Via Celoria 16, 20133 Milano, Italy Felix Hormuth Engineering, Goethestr. 
17, 69181 Leimen, Germany Technical University of Denmark, Elektrovej 327, 2800 Kgs. Lyngby, Denmark Cosmic Dawn Center (DAWN), Denmark Université Paris-Saclay, CNRS/IN2P3, IJCLab, 91405 Orsay, France Max-Planck-Institut für Astronomie, Königstuhl 17, 69117 Heidelberg, Germany NASA Goddard Space Flight Center, Greenbelt, MD 20771, USA Department of Physics and Astronomy, University College London, Gower Street, London WC1E 6BT, UK Department of Physics and Helsinki Institute of Physics, Gustaf Hällströmin katu 2, 00014 University of Helsinki, Finland Aix-Marseille Université, CNRS/IN2P3, CPPM, Marseille, France Université de Genève, Département de Physique Théorique and Centre for Astroparticle Physics, 24 quai Ernest-Ansermet, CH-1211 Genève 4, Switzerland Department of Physics, P.O. Box 64, 00014 University of Helsinki, Finland Helsinki Institute of Physics, Gustaf Hällströmin katu 2, University of Helsinki, Helsinki, Finland NOVA optical infrared instrumentation group at ASTRON, Oude Hoogeveensedijk 4, 7991PD, Dwingeloo, The Netherlands Centre de Calcul de l'IN2P3/CNRS, 21 avenue Pierre de Coubertin 69627 Villeurbanne Cedex, France Universität Bonn, Argelander-Institut für Astronomie, Auf dem Hügel 71, 53121 Bonn, Germany Dipartimento di Fisica e Astronomia "Augusto Righi" - Alma Mater Studiorum Università di Bologna, via Piero Gobetti 93/2, 40129 Bologna, Italy Université Paris Cité, CNRS, Astroparticule et Cosmologie, 75013 Paris, France University of Applied Sciences and Arts of Northwestern Switzerland, School of Engineering, 5210 Windisch, Switzerland Institut d'Astrophysique de Paris, 98bis Boulevard Arago, 75014, Paris, France Institut d'Astrophysique de Paris, UMR 7095, CNRS, and Sorbonne Université, 98 bis boulevard Arago, 75014 Paris, France Institut de Física d'Altes Energies (IFAE), The Barcelona Institute of Science and Technology, Campus UAB, 08193 Bellaterra (Barcelona), Spain European Space Agency/ESTEC, Keplerlaan 1, 2201 AZ Noordwijk, The Netherlands DARK, Niels Bohr Institute, University of Copenhagen, Jagtvej 155, 2200 Copenhagen, Denmark Waterloo Centre for Astrophysics, University of Waterloo, Waterloo, Ontario N2L 3G1, Canada Department of Physics and Astronomy, University of Waterloo, Waterloo, Ontario N2L 3G1, Canada Perimeter Institute for Theoretical Physics, Waterloo, Ontario N2L 2Y5, Canada Space Science Data Center, Italian Space Agency, via del Politecnico snc, 00133 Roma, Italy Centre National d'Etudes Spatiales – Centre spatial de Toulouse, 18 avenue Edouard Belin, 31401 Toulouse Cedex 9, France Institute of Space Science, Str. Atomistilor, nr. 409 Măgurele, Ilfov, 077125, Romania Dipartimento di Fisica e Astronomia "G. Galilei", Università di Padova, Via Marzolo 8, 35131 Padova, Italy Departamento de Física, FCFM, Universidad de Chile, Blanco Encalada 2008, Santiago, Chile Universität Innsbruck, Institut für Astro- und Teilchenphysik, Technikerstr. 
25/8, 6020 Innsbruck, Austria Satlantis, University Science Park, Sede Bld 48940, Leioa-Bilbao, Spain Instituto de Astrofísica e Ciências do Espaço, Faculdade de Ciências, Universidade de Lisboa, Tapada da Ajuda, 1349-018 Lisboa, Portugal Universidad Politécnica de Cartagena, Departamento de Electrónica y Tecnología de Computadoras, Plaza del Hospital 1, 30202 Cartagena, Spain Kapteyn Astronomical Institute, University of Groningen, PO Box 800, 9700 AV Groningen, The Netherlands INFN-Bologna, Via Irnerio 46, 40126 Bologna, Italy Dipartimento di Fisica, Università degli studi di Genova, and INFN-Sezione di Genova, via Dodecaneso 33, 16146, Genova, Italy Infrared Processing and Analysis Center, California Institute of Technology, Pasadena, CA 91125, USA INAF, Istituto di Radioastronomia, Via Piero Gobetti 101, 40129 Bologna, Italy Astronomical Observatory of the Autonomous Region of the Aosta Valley (OAVdA), Loc. Lignan 39, I-11020, Nus (Aosta Valley), Italy Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA, UK School of Physics and Astronomy, Cardiff University, The Parade, Cardiff, CF24 3AA, UK Junia, EPA department, 41 Bd Vauban, 59800 Lille, France ICSC - Centro Nazionale di Ricerca in High Performance Computing, Big Data e Quantum Computing, Via Magnanelli 2, Bologna, Italy Instituto de Física Teórica UAM-CSIC, Campus de Cantoblanco, 28049 Madrid, Spain CERCA/ISO, Department of Physics, Case Western Reserve University, 10900 Euclid Avenue, Cleveland, OH 44106, USA INFN-Sezione di Milano, Via Celoria 16, 20133 Milano, Italy Departamento de Física Fundamental. Universidad de Salamanca. Plaza de la Merced s/n. 37008 Salamanca, Spain Departamento de Astrofísica, Universidad de La Laguna, 38206, La Laguna, Tenerife, Spain Dipartimento di Fisica e Scienze della Terra, Università degli Studi di Ferrara, Via Giuseppe Saragat 1, 44122 Ferrara, Italy Istituto Nazionale di Fisica Nucleare, Sezione di Ferrara, Via Giuseppe Saragat 1, 44122 Ferrara, Italy Center for Data-Driven Discovery, Kavli IPMU (WPI), UTIAS, The University of Tokyo, Kashiwa, Chiba 277-8583, Japan Ludwig-Maximilians-University, Schellingstrasse 4, 80799 Munich, Germany Max-Planck-Institut für Physik, Boltzmannstr. 8, 85748 Garching, Germany Dipartimento di Fisica - Sezione di Astronomia, Università di Trieste, Via Tiepolo 11, 34131 Trieste, Italy Minnesota Institute for Astrophysics, University of Minnesota, 116 Church St SE, Minneapolis, MN 55455, USA Institute Lorentz, Leiden University, Niels Bohrweg 2, 2333 CA Leiden, The Netherlands Université Côte d'Azur, Observatoire de la Côte d'Azur, CNRS, Laboratoire Lagrange, Bd de l'Observatoire, CS 34229, 06304 Nice cedex 4, France Institute for Astronomy, University of Hawaii, 2680 Woodlawn Drive, Honolulu, HI 96822, USA Department of Physics & Astronomy, University of California Irvine, Irvine CA 92697, USA Department of Astronomy & Physics and Institute for Computational Astrophysics, Saint Mary's University, 923 Robie Street, Halifax, Nova Scotia, B3H 3C3, Canada Departamento Física Aplicada, Universidad Politécnica de Cartagena, Campus Muralla del Mar, 30202 Cartagena, Murcia, Spain Instituto de Astrofísica de Canarias (IAC); Departamento de Astrofísica, Universidad de La Laguna (ULL), 38200, La Laguna, Tenerife, Spain Department of Physics, Oxford University, Keble Road, Oxford OX1 3RH, UK CEA Saclay, DFR/IRFU, Service d'Astrophysique, Bat. 
709, 91191 Gif-sur-Yvette, France Department of Computer Science, Aalto University, PO Box 15400, Espoo, FI-00 076, Finland Instituto de Astrofísica de Canarias, c/ Via Lactea s/n, La Laguna E-38200, Spain. Departamento de Astrofísica de la Universidad de La Laguna, Avda. Francisco Sanchez, La Laguna, E-38200, Spain Ruhr University Bochum, Faculty of Physics and Astronomy, Astronomical Institute (AIRUB), German Centre for Cosmological Lensing (GCCL), 44780 Bochum, Germany Univ. Grenoble Alpes, CNRS, Grenoble INP, LPSC-IN2P3, 53, Avenue des Martyrs, 38000, Grenoble, France Department of Physics and Astronomy, Vesilinnantie 5, 20014 University of Turku, Finland Serco for European Space Agency (ESA), Camino bajo del Castillo, s/n, Urbanizacion Villafranca del Castillo, Villanueva de la Cañada, 28692 Madrid, Spain ARC Centre of Excellence for Dark Matter Particle Physics, Melbourne, Australia Centre for Astrophysics & Supercomputing, Swinburne University of Technology, Hawthorn, Victoria 3122, Australia School of Physics and Astronomy, Queen Mary University of London, Mile End Road, London E1 4NS, UK Department of Physics and Astronomy, University of the Western Cape, Bellville, Cape Town, 7535, South Africa ICTP South American Institute for Fundamental Research, Instituto de Física Teórica, Universidade Estadual Paulista, São Paulo, Brazil IRFU, CEA, Université Paris-Saclay 91191 Gif-sur-Yvette Cedex, France Oskar Klein Centre for Cosmoparticle Physics, Department of Physics, Stockholm University, Stockholm, SE-106 91, Sweden Astrophysics Group, Blackett Laboratory, Imperial College London, London SW7 2AZ, UK INAF-Osservatorio Astrofisico di Arcetri, Largo E. Fermi 5, 50125, Firenze, Italy Dipartimento di Fisica, Sapienza Università di Roma, Piazzale Aldo Moro 2, 00185 Roma, Italy Centro de Astrofísica da Universidade do Porto, Rua das Estrelas, 4150-762 Porto, Portugal Dipartimento di Fisica, Università di Roma Tor Vergata, Via della Ricerca Scientifica 1, Roma, Italy INFN, Sezione di Roma 2, Via della Ricerca Scientifica 1, Roma, Italy HE Space for European Space Agency (ESA), Camino bajo del Castillo, s/n, Urbanizacion Villafranca del Castillo, Villanueva de la Cañada, 28692 Madrid, Spain Aurora Technology for European Space Agency (ESA), Camino bajo del Castillo, s/n, Urbanizacion Villafranca del Castillo, Villanueva de la Cañada, 28692 Madrid, Spain INAF-Osservatorio Astronomico di Brera, Via Brera 28, 20122 Milano, Italy, and INFN-Sezione di Genova, Via Dodecaneso 33, 16146, Genova, Italy Theoretical astrophysics, Department of Physics and Astronomy, Uppsala University, Box 515, 751 20 Uppsala, Sweden Department of Physics, Royal Holloway, University of London, TW20 0EX, UK Mullard Space Science Laboratory, University College London, Holmbury St Mary, Dorking, Surrey RH5 6NT, UK Department of Astrophysical Sciences, Peyton Hall, Princeton University, Princeton, NJ 08544, USA Cosmic Dawn Center (DAWN) Niels Bohr Institute, University of Copenhagen, Jagtvej 128, 2200 Copenhagen, Denmark Center for Cosmology and Particle Physics, Department of Physics, New York University, New York, NY 10003, USA Center for Computational Astrophysics, Flatiron Institute, 162 5th Avenue, 10010, New York, NY, USA Euclid Collaboration To constrain models beyond ΛCDM, the development of the analysis pipeline requires simulations that capture the nonlinear phenomenology of such models. 
We present an overview of numerical methods and N-body simulation codes developed to study the nonlinear regime of structure formation in alternative dark energy and modified gravity theories. We review a variety of numerical techniques and approximations employed in cosmological N-body simulations to model the complex phenomenology of scenarios beyond ΛCDM. This includes discussions on solving nonlinear field equations, accounting for fifth forces, and implementing screening mechanisms. Furthermore, we conduct a code comparison exercise to assess the reliability and convergence of different simulation codes across a range of models. Our analysis demonstrates a high degree of agreement among the outputs of different simulation codes, providing confidence in current numerical methods for modelling cosmic structure formation beyond ΛCDM. We highlight recent advances made in simulating the nonlinear scales of structure formation, which are essential for leveraging the full scientific potential of the forthcoming observational data from the mission. preparation Euclid Collaboration: J. Adamek0000-0002-0723-6740julian.adamek@uzh.ch<ref> B. Fiorini0000-0002-0092-4321<ref> M. Baldi0000-0003-4145-1943<ref>,<ref>,<ref> G. Brando0000-0003-0805-1905<ref> M.-A. Breton<ref>,<ref>,<ref> F. Hassani0000-0003-2640-4460<ref> K. Koyama0000-0001-6727-6915<ref> A. M. C. Le Brun0000-0002-0936-4594<ref> G. Rácz0000-0003-3906-5699<ref> H.-A. Winther0000-0002-6325-2710<ref> A. Casalino0000-0001-6709-5292<ref> C. Hernández-Aguayo0000-0001-9921-8832<ref> B. Li0000-0002-1098-9188<ref> D. Potter0000-0002-0757-5195<ref> E. Altamura0000-0001-6973-1897<ref> C. Carbone0000-0003-0125-3563<ref> C. Giocoli0000-0002-9590-7961<ref>,<ref> D. F. Mota0000-0003-3141-142X<ref> A. Pourtsidou0000-0001-9110-5550<ref>,<ref> Z. Sakr0000-0002-4823-3757<ref>,<ref>,<ref> F. Vernizzi0000-0003-3426-2802<ref> A. Amara<ref> S. Andreon0000-0002-2041-8784<ref> N. Auricchio0000-0003-4444-8651<ref> C. Baccigalupi0000-0002-8211-1630<ref>,<ref>,<ref>,<ref> S. Bardelli0000-0002-8900-0298<ref> P. Battaglia0000-0002-7337-5909<ref> D. Bonino0000-0002-3336-9977<ref> E. Branchini0000-0002-0808-6908<ref>,<ref>,<ref> M. Brescia0000-0001-9506-5680<ref>,<ref>,<ref> J. Brinchmann0000-0003-4359-8797<ref>,<ref> A. Caillat<ref> S. Camera0000-0003-3399-3574<ref>,<ref>,<ref> V. Capobianco0000-0002-3309-7692<ref> V. F. Cardone<ref>,<ref> J. Carretero0000-0002-3130-0204<ref>,<ref> S. Casas0000-0002-4751-5138<ref> F. J. Castander0000-0001-7316-4573<ref>,<ref> M. Castellano0000-0001-9875-8263<ref> G. Castignani0000-0001-6831-0687<ref> S. Cavuoti0000-0002-3787-4196<ref>,<ref> A. Cimatti<ref> C. Colodro-Conde<ref> G. Congedo0000-0003-2508-0046<ref> C. J. Conselice0000-0003-1949-7638<ref> L. Conversi0000-0002-6710-8476<ref>,<ref> Y. Copin0000-0002-5317-7518<ref> F. Courbin0000-0003-0758-6510<ref>,<ref>,<ref> H. M. Courtois0000-0003-0509-1776<ref> A. Da Silva0000-0002-6385-1609<ref>,<ref> H. Degaudenzi0000-0002-5887-6799<ref> G. De Lucia0000-0002-6220-9104<ref> M. Douspis0000-0003-4203-3954<ref> F. Dubath0000-0002-6533-2810<ref> X. Dupac<ref> S. Dusini0000-0002-1128-0664<ref> M. Farina0000-0002-3089-7846<ref> S. Farrens0000-0002-9594-9387<ref> S. Ferriol<ref> P. Fosalba0000-0002-1510-5214<ref>,<ref> M. Frailis0000-0002-7400-2135<ref> E. Franceschi0000-0002-0585-6591<ref> M. Fumana0000-0001-6787-5950<ref> S. Galeotta0000-0002-3748-5115<ref> B. Gillis0000-0002-4478-1270<ref> P. Gómez-Alvarez0000-0002-8594-5358<ref>,<ref> A. Grazian0000-0002-5688-0663<ref> F. 
Grupp<ref>,<ref> L. Guzzo0000-0001-8264-5192<ref>,<ref> S. V. H. Haugan0000-0001-9648-7260<ref> W. Holmes<ref> F. Hormuth<ref> A. Hornstrup0000-0002-3363-0936<ref>,<ref> S. Ilić0000-0003-4285-9086<ref>,<ref> K. Jahnke0000-0003-3804-2137<ref> M. Jhabvala<ref> B. Joachimi0000-0001-7494-1303<ref> E. Keihänen0000-0003-1804-7715<ref> S. Kermiche0000-0002-0302-5735<ref> A. Kiessling0000-0002-2590-1273<ref> M. Kilbinger0000-0001-9513-7138<ref> B. Kubik0009-0006-5823-4880<ref> M. Kümmel0000-0003-2791-2117<ref> M. Kunz0000-0002-3052-7394<ref> H. Kurki-Suonio0000-0002-4618-3063<ref>,<ref> S. Ligori0000-0003-4172-4606<ref> P. B. Lilje0000-0003-4324-7794<ref> V. Lindholm0000-0003-2317-5471<ref>,<ref> I. Lloro<ref> G. Mainetti0000-0003-2384-2377<ref> E. Maiorano0000-0003-2593-4355<ref> O. Mansutti0000-0001-5758-4658<ref> O. Marggraf0000-0001-7242-3852<ref> K. Markovic0000-0001-6764-073X<ref> M. Martinelli0000-0002-6943-7732<ref>,<ref> N. Martinet0000-0003-2786-7790<ref> F. Marulli0000-0002-8850-0303<ref>,<ref>,<ref> R. Massey0000-0002-6085-3780<ref> E. Medinaceli0000-0002-4040-7783<ref> S. Mei0000-0002-2849-559X<ref> M. Melchior<ref> Y. Mellier<ref>,<ref> M. Meneghetti0000-0003-1225-7084<ref>,<ref> E. Merlin0000-0001-6870-8900<ref> G. Meylan<ref> M. Moresco0000-0002-7616-7136<ref>,<ref> L. Moscardini0000-0002-3473-6716<ref>,<ref>,<ref> C. Neissner0000-0001-8524-4968<ref>,<ref> S.-M. Niemi<ref> C. Padilla0000-0001-7951-0166<ref> S. Paltani0000-0002-8108-9179<ref> F. Pasian0000-0002-4869-3227<ref> K. Pedersen<ref> W. J. Percival0000-0002-0644-5727<ref>,<ref>,<ref> V. Pettorino<ref> S. Pires0000-0002-0249-2104<ref> G. Polenta0000-0003-4067-9196<ref> M. Poncet<ref> L. A. Popa<ref> L. Pozzetti0000-0001-7085-0412<ref> F. Raison0000-0002-7819-6918<ref> A. Renzi0000-0001-9856-1970<ref>,<ref> J. Rhodes0000-0002-4485-8549<ref> G. Riccio<ref> E. Romelli0000-0003-3069-9222<ref> M. Roncarelli0000-0001-9587-7822<ref> R. Saglia0000-0003-0378-7032<ref>,<ref> A. G. Sánchez0000-0003-1198-831X<ref> D. Sapone0000-0001-7089-4503<ref> B. Sartoris0000-0003-1337-5269<ref>,<ref> M. Schirmer0000-0003-2568-9994<ref> T. Schrabback0000-0002-6987-7834<ref> A. Secroun0000-0003-0505-3710<ref> G. Seidel0000-0003-2907-353X<ref> S. Serrano0000-0002-0211-2861<ref>,<ref>,<ref> C. Sirignano0000-0002-0995-7146<ref>,<ref> G. Sirri0000-0003-2626-2853<ref> L. Stanco0000-0002-9706-5104<ref> J. Steinwagner0000-0001-7443-1047<ref> P. Tallada-Crespí0000-0002-1336-8328<ref>,<ref> D. Tavagnacco0000-0001-7475-9894<ref> I. Tereno<ref>,<ref> R. Toledo-Moreo0000-0002-2997-4859<ref> F. Torradeflot0000-0003-1160-1517<ref>,<ref> I. Tutusaus0000-0002-3199-0399<ref> E. A. Valentijn<ref> L. Valenziano0000-0002-1170-0104<ref>,<ref> T. Vassallo0000-0001-6512-6358<ref>,<ref> G. Verdoes Kleijn0000-0001-5803-2580<ref> A. Veropalumbo0000-0003-2387-1194<ref>,<ref>,<ref> Y. Wang0000-0002-4749-2984<ref> J. Weller0000-0002-8282-2010<ref>,<ref> G. Zamorani0000-0002-2318-301X<ref> E. Zucca0000-0002-5845-8132<ref> A. Biviano0000-0002-0857-0732<ref>,<ref> C. Burigana0000-0002-3005-5796<ref>,<ref> M. Calabrese0000-0002-2637-2422<ref>,<ref> D. Di Ferdinando<ref> J. A. Escartin Vigo<ref> G. Fabbian0000-0002-3255-4695<ref>,<ref> F. Finelli0000-0002-6694-3269<ref>,<ref> J. Gracia-Carpio<ref> S. Matthew0000-0001-8448-1697<ref> N. Mauri0000-0001-8196-1548<ref>,<ref> A. Pezzotta0000-0003-0726-2268<ref> M. Pöntinen0000-0001-5442-2530<ref> V. Scottez<ref>,<ref> M. Tenti0000-0002-4254-5901<ref> M. Viel0000-0002-2642-5707<ref>,<ref>,<ref>,<ref>,<ref> M. 
Wiesmann0009-0000-8199-5860<ref> Y. Akrami0000-0002-2407-7956<ref>,<ref> V. Allevato0000-0001-7232-5152<ref> S. Anselmi0000-0002-3579-9583<ref>,<ref>,<ref> M. Archidiacono0000-0003-4952-9012<ref>,<ref> F. Atrio-Barandela0000-0002-2130-2513<ref> A. Balaguera-Antolinez0000-0001-5028-3035<ref>,<ref> M. Ballardini0000-0003-4481-3559<ref>,<ref>,<ref> A. Blanchard0000-0001-8555-9003<ref> L. Blot0000-0002-9622-7167<ref>,<ref> H. Böhringer0000-0001-8241-4204<ref>,<ref>,<ref> S. Borgani0000-0001-6151-6439<ref>,<ref>,<ref>,<ref> S. Bruton0000-0002-6503-5218<ref> R. Cabanac0000-0001-6679-2600<ref> A. Calabro0000-0003-2536-1614<ref> B. Camacho Quevedo0000-0002-8789-4232<ref>,<ref> G. Cañas-Herrera0000-0003-2796-2149<ref>,<ref> A. Cappi<ref>,<ref> F. Caro<ref> C. S. Carvalho<ref> T. Castro0000-0002-6292-3228<ref>,<ref>,<ref>,<ref> K. C. Chambers0000-0001-6965-7789<ref> S. Contarini0000-0002-9843-723X<ref> A. R. Cooray0000-0002-3892-0190<ref> G. Desprez0000-0001-8325-1742<ref> A. Díaz-Sánchez0000-0003-0748-4768<ref> J. J. Diaz<ref> S. Di Domizio0000-0003-2863-5895<ref>,<ref> H. Dole0000-0002-9767-3839<ref> S. Escoffier0000-0002-2847-7498<ref> A. G. Ferrari0009-0005-5266-4110<ref>,<ref> P. G. Ferreira0000-0002-3021-2851<ref> I. Ferrero0000-0002-1295-1132<ref> A. Finoguenov0000-0002-4606-5403<ref> F. Fornari0000-0003-2979-6738<ref> L. Gabarra0000-0002-8486-8856<ref> K. Ganga0000-0001-8159-8208<ref> J. García-Bellido0000-0002-9370-8360<ref> T. Gasparetto0000-0002-7913-4866<ref> V. Gautard<ref> E. Gaztanaga0000-0001-9632-0815<ref>,<ref>,<ref> F. Giacomini0000-0002-3129-2814<ref> F. Gianotti0000-0003-4666-119X<ref> G. Gozaliasl0000-0002-0236-919X<ref> C. M. Gutierrez0000-0001-7854-783X<ref> A. Hall0000-0002-3139-8651<ref> H. Hildebrandt0000-0002-9814-3338<ref> J. Hjorth0000-0002-4571-2306<ref> A. Jimenez Muñoz0009-0004-5252-185X<ref> S. Joudaki0000-0001-8820-673X<ref> J. J. E. Kajava0000-0002-3010-8333<ref>,<ref> V. Kansal0000-0002-4008-6078<ref>,<ref> D. Karagiannis0000-0002-4927-0816<ref>,<ref> C. C. Kirkpatrick<ref> S. Kruk0000-0001-8010-8879<ref> J. Le Graet0000-0001-6523-7971<ref> L. Legrand0000-0003-0610-5252<ref> J. Lesgourgues0000-0001-7627-353X<ref> T. I. Liaudat0000-0002-9104-314X<ref> A. Loureiro0000-0002-4371-0876<ref>,<ref> G. Maggio0000-0003-4020-4836<ref> M. Magliocchetti0000-0001-9158-4838<ref> F. Mannucci0000-0002-4803-2381<ref> R. Maoli0000-0002-6065-3025<ref>,<ref> C. J. A. P. Martins0000-0002-4886-9261<ref>,<ref> L. Maurin0000-0002-8406-0857<ref> R. B. Metcalf0000-0003-3167-2574<ref>,<ref> M. Migliaccio<ref>,<ref> M. Miluzio<ref>,<ref> P. Monaco0000-0003-2083-7564<ref>,<ref>,<ref>,<ref> A. Montoro0000-0003-4730-8590<ref>,<ref> A. Mora0000-0002-1922-8529<ref> C. Moretti0000-0003-3314-8936<ref>,<ref>,<ref>,<ref>,<ref> G. Morgante<ref> S. Nadathur0000-0001-9070-3102<ref> L. Patrizii<ref> V. Popa0000-0002-9118-8330<ref> P. Reimberg0000-0003-3410-0280<ref> I. Risso0000-0003-2525-7761<ref> P.-F. Rocci<ref> M. Sahlén0000-0003-0973-4804<ref> E. Sarpa0000-0002-1256-655X<ref>,<ref>,<ref> A. Schneider0000-0001-7055-8104<ref> M. Sereno0000-0003-0302-0325<ref>,<ref> A. Silvestri0000-0001-6904-5061<ref> A. Spurio Mancini0000-0001-5698-0990<ref>,<ref> K. Tanidis<ref> C. Tao0000-0001-7961-8177<ref> N. Tessore0000-0002-9696-7931<ref> G. Testera<ref> R. Teyssier0000-0001-7689-0933<ref> S. Toft0000-0003-3631-7176<ref>,<ref> S. Tosi0000-0002-7275-9193<ref>,<ref> A. Troja0000-0003-0239-4595<ref>,<ref> M. Tucci<ref> C. Valieri<ref> J. Valiviita0000-0001-6225-3693<ref>,<ref> D. 
Vergani0000-0003-0898-2216<ref> G. Verza0000-0002-1886-8348<ref>,<ref> P. Vielzeuf0000-0003-2035-9339<ref> N. A. Walton0000-0003-3983-8778<ref>

Received XXX; accepted ZZZ
§ INTRODUCTION

Significant progress in cosmological observations is expected in the upcoming years, in particular from the Euclid survey <cit.>, the Vera Rubin Observatory's Legacy Survey of Space and Time <cit.>, the Roman Space Telescope <cit.>, and the Dark Energy Spectroscopic Instrument <cit.>. These surveys will offer precision observations to high redshifts, allowing us to study the evolution of the Universe with unprecedented accuracy and potentially uncover the nature of dark matter and dark energy (DE). Gaining a deeper understanding of the nature of DE and addressing the long-standing question of whether the cosmological constant (Λ) is responsible for the late-time accelerated expansion of the Universe is indeed one of the primary goals of the Euclid survey <cit.>.
The space telescope was launched on July 1, 2023, and is going to observe billions of galaxies out to redshift z ≈ 2, covering more than a third of the sky in optical and near-infrared wavelengths. will deliver precise measurements of the shapes and redshifts of galaxies <cit.>, from which we will measure weak gravitational lensing <cit.> and galaxy clustering <cit.>. These primary probes can be used to rigorously investigate different cosmological scenarios, in particular those related to DE that go beyond the Λ-Cold-Dark-Matter (ΛCDM) concordance model. Although the ΛCDM model is generally very successful in matching observations, the true identities of CDM and the cosmological constant Λ remain unknown. Additionally, some tensions have persisted in recent years, most notably the Hubble tension <cit.> where local measurements of the Hubble parameter today, H_0, appear to disagree with those inferred from high-redshift observations by around 5 σ. Further examples are the S_8 tension <cit.> and some anomalies found in measurements of the cosmic microwave background <cit.>. The presence of these tensions may hint at a breakdown of the ΛCDM model and further motivates the exploration of alternative scenarios. Over the past few years, cosmologists have explored different possibilities to account for the late-time accelerating expansion of the Universe <cit.> either by introducing a new field, referred to as the DE field or by proposing a modified theory of gravity (MG). A wide range of MG or DE models is equivalent to adding a new light scalar degree of freedom to the theory of General Relativity (GR). In these theories, the scalar degree of freedom exhibits time evolution, sometimes accompanied by spatial fluctuations within the cosmic horizon. Even in the absence of such fluctuations, the background evolution may be different from ΛCDM, leading to modifications in structure formation. Significant spatial fluctuations in these models may arise due to various factors, including a low characteristic speed of sound in the theory <cit.>, or as a result of the non-minimal coupling of the scalar field to matter or gravity <cit.>. MG and DE theories featuring a coupling of the scalar field to matter can further affect perturbations at sub-horizon scales by mediating a fifth force. If the coupling is universal and includes baryons, a screening mechanism is essential to evade the precise constraints of local experiments <cit.>. Screening mechanisms are typically achieved through nonlinear phenomena in such theories. If, on the other hand, the coupling to matter is non-universal and is confined entirely to the dark sector, local experiments have no constraining power, and cosmological observations provide the main constraints. Given the diversity of possible DE or MG scenarios, a large information gain is expected from nonlinear scales in the cosmological large-scale structure. These scales must be studied using N-body simulations that capture the essential aspects of the DE or MG models under consideration. This usually means that at least one additional equation needs to be solved for the extra degree of freedom. In many cases, this leads to a difficult nonlinear problem that could require special techniques or approximations that need to be developed. This makes N-body simulations for models of DE or MG a challenging task. In this paper, we first review the main features of the different classes of DE and MG models that have been proposed over the past years <cit.>. 
For each of them, we then discuss the numerical methods implemented within a selection of existing N-body codes (summarised in Table <ref>). Focusing on MG models with a universal coupling, we then compare the results of different N-body implementations for two well-studied theories, namely the Hu–Sawicki f(R) gravity <cit.> and the `normal branch' of the Dvali–Gabadadze–Porrati braneworld model <cit.>. We choose simulation parameters following the code comparison paper by <cit.> [W15 hereafter], allowing us to validate a number of new codes against existing results. This article is part of a series that collectively explores simulations and nonlinearities beyond the ΛCDM model: * Numerical methods and validation (this work). * Results from non-standard simulations (Rácz et al. in prep.). * Cosmological constraints on non-standard cosmologies from simulated Euclid probes (D'Amico et al. in prep.). * Constraints on f(R) models from the photometric primary probes (Koyama et al. in prep.). For further details, see our companion papers. The purpose of this first article in the series is to serve as a reference for models beyond ΛCDM and their existing implementations in various codes. This paper is structured as follows. In Sect. <ref> we give a broad overview of different numerical approaches to treat the additional physics of models beyond ΛCDM. In Sect. <ref> we discuss a number of different codes that implement those approaches and carry out a validation exercise, comparing several recently developed codes with the existing state of the art. We conclude in Sect. <ref>. In an Appendix, we discuss some performance considerations. § METHODS §.§ Non-standard background evolution A wide range of models beyond the simplest cosmological constant scenario are based on an additional scalar degree of freedom – e.g. a classical scalar field ϕ – that evolves dynamically in the expanding Universe and whose background energy density ρ_ϕ provides the source for the observed DE abundance. To induce cosmic acceleration and to match existing constraints on the background expansion history, the equation-of-state parameter w of such an additional field must be sufficiently negative at recent epochs but is poorly constrained at earlier times, which allows for models where w also evolves dynamically as long as it converges to values close to w ≈ -1 in the late Universe. For these models, the DE component modifies the background expansion history of the Universe, which is encoded by the general expression of the Hubble function, H^2(a)/H^2_0 = m a^-3 + r a^-4 + k a^-2 + DEe^ -3∫ _1^a1+w(a')/a'da' , where the equation-of-state parameter of DE can be obtained by solving the background field equations – including the evolution of the additional scalar degree of freedom ϕ – or can be parameterised. A common parameterisation suggested by Chevallier, Polarski & Linder <cit.> is based on the desired evolution of w at low redshifts, w(a) = w_0 + w_a (1 - a) . Alternatively, one can set the desired relative abundance of DE at late (DE = 1-m) and early (Ω _ EDE) epochs as in the Early Dark Energy <cit.> parameterisation, w(a) = w_0/1 + bln (1/a) , b = 3w_0/ln1 - Ω _ EDE/Ω _ EDE + ln1 - m/m . The modified expansion history expressed by Eq. 
(<ref>) will indirectly affect the evolution of matter density fluctuations and modify the formation process of collapsed structures by changing the Hubble friction term in the equation for linear matter perturbations, which in Newtonian gauge and in Fourier space for sub-horizon scales reads: δ̈_m + 2Hδ̇_m = 4π G ( mm + DEDE), where m and DE are the density contrasts of matter and DE perturbations, respectively, G is Newton's constant, and a dot represents a derivative with respect to cosmic time. Besides the richer background dynamics that is endowed by an evolving field, whenever DE is promoted from a cosmological constant to a dynamical degree of freedom, the model also acquires an additional layer of complexity: the presence and evolution of DE fluctuations around the mean-field configuration. This corresponds to the situation where δ_DE in Eq. (<ref>) is non-negligible, whereas in ΛCDM it would vanish identically at all scales. Like any other density perturbations, inhomogeneities in the DE would then contribute to the peculiar gravitational potential that governs the evolution of matter perturbations and thus the formation of cosmic structures as shown by Eq. (<ref>). However, in many of the simplest scalar-field scenarios, such perturbations are negligible at sub-horizon scales because the speed of sound c_s of the scalar field is naturally close to the speed of light. Ignoring them for the purpose of numerical simulations, the only modification of N-body algorithms required to simulate these DE models is given by an appropriate calculation of the cosmic expansion rate. The most common approach amounts to tabulating the specific expansion rate of the universe for the model to be simulated according to Eq. (<ref>) and replacing the standard analytical calculation of the Hubble function within the N-body algorithm with an interpolated value from the tabulated solution that is provided to the code as an input. This approach is implemented by most of the simulation codes employed within the Euclid Collaboration to perform cosmological simulations in homogeneous DE models beyond ΛCDM. §.§ Linearised DE perturbations Although a wide range of DE models are characterised by negligible DE fluctuations as discussed above, some specific scenarios may not fulfil such a condition at all scales and/or at all times, either because they feature a lower value of the DE speed of sound, allowing DE perturbations to grow on scales above the associated Jeans length that then falls inside the cosmological horizon, or because additional interactions – besides gravity – can induce the growth of such perturbations. The former case corresponds to the class of clustering DE models, while the latter is known as coupled DE. §.§.§ Clustering DE The clustering DE models are characterised by two time-dependent variables: the speed of sound c_s and the equation of state parameter w. In these theories, the DE component clusters on scales larger than the associated sound-horizon/Jeans scale, λ_s = H/c_s and DE perturbations decay quickly below the sound-horizon scale. For a sufficiently small speed of sound, we may even expect nonlinear DE structures to form. At a fundamental level, clustering DE models are analogous to the k-essence type of theories, so that the action reads <cit.> S = ∫d^4x √(-g)[ c^4 R/ 16π G + P(X, ϕ) + ℒ_m], where P is a general function of the kinetic term X ≡ -1/2∇_μϕ∇^μϕ and the scalar field ϕ, and ℒ_m is the matter Lagrangian. 
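Whatever the treatment of its perturbations, the background-level ingredient shared by all of the scenarios above is the modified expansion history. As a minimal illustration (our own sketch, with placeholder parameter values rather than the settings of any particular code), the snippet below evaluates E(a) = H(a)/H_0 for a CPL equation of state and integrates the linear growth equation in the homogeneous-DE limit (δ_DE = 0); in practice, an N-body code interpolates E(a) from such a tabulation at every time step.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative flat cosmology; Om, w0, wa are placeholder values.
Om, w0, wa = 0.319, -0.9, 0.1
Ode = 1.0 - Om

def e_of_a(a):
    """E(a) = H(a)/H0 for a smooth CPL dark-energy fluid, w(a) = w0 + wa (1 - a)."""
    de = Ode * a**(-3.0 * (1.0 + w0 + wa)) * np.exp(-3.0 * wa * (1.0 - a))
    return np.sqrt(Om * a**-3 + de)

def dlnE_da(a, h=1.0e-5):
    # numerical derivative of ln E(a); an exact expression works just as well
    return (np.log(e_of_a(a + h)) - np.log(e_of_a(a - h))) / (2.0 * h)

def growth_rhs(a, y):
    """y = (D, dD/da) for the linear growth equation with delta_DE = 0."""
    D, Dp = y
    Dpp = -(3.0 / a + dlnE_da(a)) * Dp + 1.5 * Om * D / (a**5 * e_of_a(a)**2)
    return [Dp, Dpp]

a_ini = 0.02                                    # deep in matter domination
sol = solve_ivp(growth_rhs, (a_ini, 1.0), [a_ini, 1.0],
                t_eval=np.linspace(a_ini, 1.0, 200), rtol=1.0e-8)
D = sol.y[0] / sol.y[0][-1]                     # growth factor, normalised to D(a=1) = 1
```

For a parameterisation without a closed-form integral, such as the early DE form above, the exponent of the DE term is obtained by numerical quadrature before the table is built.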
For a given P(X, ϕ), the speed of sound and the equation of state are given by <cit.> w=P/P-2X P_,X , c_s^2 = P_,X/2XP_,XX+P_,X , where the subscript “,X” denotes the partial derivative with respect to X. We therefore need to specify the function P(X, ϕ) to derive the equations of motion for the k-essence scalar field. However, since there are many possible choices, we can instead employ the effective field theory (EFT) approach to model the dynamics of k-essence DE. The EFT framework, although not a fundamental theory, offers several advantages <cit.>, such as being a description of a wide range of theories within some scales. The EFT is a perturbative approach based on the assumption that the scalar field perturbations remain small over the scales of interest. It is worth noting that the regime of nonlinear matter clustering is accessible to the EFT framework as long as the scalar field perturbations remain small. The k-essence theories or clustering DE models are implemented in several N-body and Einstein–Boltzmann codes. In <cit.> and <cit.>, these theories are implemented using the fluid picture. In <cit.>, the EFT equations are implemented and can be controlled using the EFT parameter α_K≡ 3(1+w)c_s^-2 within the code. On the other hand, in <cit.>, which is an N-body code based on <cit.>, nonlinear equations for clustering DE are implemented as an independent component, and the k-essence field for small c_s can form nonlinear structures. In some N-body codes, for example in <cit.>, clustering DE is implemented through a linear solution from an Einstein–Boltzmann solver. This is a good assumption for large speeds of sound, but for small ones, this method does not allow for the response of DE to the nonlinear matter structures. §.§.§ Coupled quintessence Moving to the case of coupled DE models, the interaction can be formulated at a fundamental level by introducing a direct coupling between the scalar field and the spatial curvature R in the so-called Jordan frame <cit.>, so that the action reads S = ∫d^4x √(-g)[c^4 f(ϕ, R)/16π G-1/2Z(ϕ)∇^μϕ∇_μϕ - V(ϕ) + ℒ_m] , where f(ϕ, R) is a function that couples the scalar field to the curvature, Z(ϕ) is a function that allows for non-standard kinetic terms, V(ϕ ) is the scalar field self-interaction potential, and the matter Lagrangian contains at least one cold species characterised by some rest mass m_0. Alternatively, the interaction can be formulated by including source terms in the covariant conservation equations of the interacting species in the so-called Einstein frame, ∇^μ T^(c)_μν = -β_(c)(ϕ)/M_ Pl T^(c)∇_νϕ , ∇^μ T^(b)_μν = -β_(b)(ϕ)/M_ PlT^(b)∇_νϕ , ∇^μ T^(ϕ)_μν = 1/M_ Pl[β_(c)(ϕ)T^(c) + β_(b)(ϕ)T^(b)]∇_νϕ , where T_μν^(Y) is the stress-energy tensor of a given species Y, T^(Y) is its trace, β_(Y)(ϕ) is the coupling function of species Y, the labels c, b, ϕ refer to the dark matter, baryon, and scalar field species, respectively, and M_ Pl≡ (ħ c)^1/2 (8π G)^-1/2 is the reduced Planck mass. While in the former case the interaction will be universal (i.e. involving all matter species with the same strength), which goes under the name of Extended Quintessence, the latter approach allows for non-universal couplings that may selectively involve individual species, for example by separately choosing the coupling functions for baryons and dark matter. 
In the case of a universal coupling (that is, if β_(b) = β_(c)), the two approaches can be related to one another through a Weyl transformation of the metric <cit.>, and are therefore equivalent. On the other hand, the possibility to leave the baryonic component of the Universe only minimally coupled evades Solar System constraints <cit.> on the deviations from standard gravity thereby avoiding the need for screening mechanisms. This is the case of Coupled Quintessence models <cit.>, where the direct coupling between the scalar field and massive (non-baryonic) particles can support stable perturbations of the DE field at sub-horizon scales <cit.>. In general, such perturbations may even become nonlinear in the presence of a sufficiently strong coupling <cit.>. Nonetheless, a large class of widely studied coupled DE models is known to feature scalar perturbations of the order of the standard Newtonian gravitational potential <cit.>, thereby remaining in the linear regime at all times and scales of cosmological interest. This allows us to linearise the corresponding field equations and derive modified equations of motions for massive particles, including the contribution of the additional force arising from the direct coupling with the scalar field <cit.>. In fact, a general feature of coupled DE models is the existence of a `fifth force' mediated by the scalar field. The new force can be expressed as an additional acceleration experienced by a massive coupled particle, which in comoving coordinates will be given by a⃗_Y,5th = -β_(Y)(ϕ )∇⃗δϕ , where Y identifies a coupled matter species, and δϕ is the scalar field fluctuation. This extra acceleration term is added to the standard Newtonian acceleration acting on all massive particles, a⃗_N= -Hv⃗ -∇⃗Φ_N , where v⃗ is the peculiar particle velocity in comoving coordinates, and Φ_N is the peculiar Newtonian potential obeying the standard Poisson equation ∇ ^2Φ_N = 4π G a^2 ∑_Yρ_Yδ_Y , where the sum runs over all clustering species in the Universe. Therefore, solving for the dynamical evolution of massive coupled particles requires solving for the scalar field perturbation δϕ entering in Eq. (<ref>), which in the most general case follows a nonlinear elliptic equation, ∇ ^2δϕ = F(δϕ ) + ∑_Y 8π Ga^2 β_(Y)(ϕ)δ_Y , with F a function of the scalar field fluctuation δϕ, and where the sum runs over all the coupled matter species with their respective couplings β _(Y)(ϕ ). For the particular case of a coupled DE model with a non-universal interaction <cit.> involving only dark matter and leaving baryons uncoupled (i.e. β_(b) = 0) the function F(δϕ ) in Eq. (<ref>) is negligible compared to the term associated with matter density perturbations <cit.> and can be safely discarded. As a result, the scalar-field equation reduces to ∇ ^2δϕ≈ 8π Ga^2 β_(c)(ϕ)δ_c = 2β_(c)(ϕ)∇ ^2Φ_c , where Φ_c is the Newtonian potential generated by the distribution of the coupled dark matter particles, that is ∇ ^2Φ_c = 4π G a^2 ρ_cδ_c . Therefore, the solution for the scalar field perturbations will be directly proportional to the potential Φ_c according to the relation δϕ≈ 2β_(c)Φ_c . The acceleration equation for a coupled particle can then be rewritten as a⃗_c = -Hv⃗_c - ∇⃗Φ_b - ∇⃗(1 + 2β_(c)^2(ϕ))Φ_c , assuming here for simplicity that other clustering species (such as massive neutrinos) give a negligible contribution to the total Newtonian potential such that Φ_N = Φ_c + Φ_b. 
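On a mesh, the force assignment implied by this acceleration equation simply boosts the dark-matter contribution by the factor 1 + 2β_(c)^2 for coupled particles while leaving baryons with the standard Newtonian force. The sketch below is our own illustration (grid layout, variable names, and the value of the coupling are assumptions), and it presumes that Φ_c and Φ_b have already been obtained from their separate Poisson equations.

```python
import numpy as np

def gradient_periodic(phi, dx):
    """Central-difference gradient of a potential on a periodic mesh."""
    return np.stack([(np.roll(phi, -1, axis=i) - np.roll(phi, 1, axis=i)) / (2.0 * dx)
                     for i in range(3)])

def mesh_accelerations(phi_c, phi_b, beta_c, dx):
    """
    Coupled Quintessence with uncoupled baryons:
      coupled (dark matter) particles feel  -grad[phi_b + (1 + 2 beta_c^2) phi_c],
      uncoupled (baryon) particles feel     -grad[phi_b + phi_c].
    The Hubble-drag term -H v is applied separately by the time integrator.
    """
    grad_c = gradient_periodic(phi_c, dx)
    grad_b = gradient_periodic(phi_b, dx)
    acc_coupled = -(grad_b + (1.0 + 2.0 * beta_c**2) * grad_c)
    acc_uncoupled = -(grad_b + grad_c)
    return acc_coupled, acc_uncoupled

# toy 64^3 example with synthetic potentials
rng = np.random.default_rng(1)
phi_c, phi_b = rng.normal(size=(64, 64, 64)), rng.normal(size=(64, 64, 64))
acc_c, acc_b = mesh_accelerations(phi_c, phi_b, beta_c=0.1, dx=1.0)
```

Interpolation of these mesh forces to the particle positions then proceeds exactly as in a standard particle-mesh code.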
This modified acceleration equation introduces a further modification to be implemented in N-body simulation codes for Coupled Quintessence cosmologies besides the specific expansion history of each particular model. This often requires substantial modifications in the gravity solvers of conventional N-body codes, as the algorithms need to evolve coupled and uncoupled massive particles (typically dark matter and baryons, respectively) with different equations and should therefore treat these components separately. Even under the approximation of a purely collisionless treatment (i.e., ignoring the hydrodynamical and astrophysical processes that affect standard baryonic matter leading to the formation of stars and galaxies) that is often employed for large-volume simulations targeted at galaxy surveys such as Euclid, both coupled and uncoupled matter species must be included in the simulation to provide a consistent representation of the dynamics at all scales: as baryons and dark matter evolve differently, assuming that all matter is dark would lead to an overestimation of the effects of the coupling and, thus, biased results. This approach is implemented in the code <cit.> which is employed for Coupled DE simulations performed within the Euclid Collaboration. Distinguishing between coupled and uncoupled particle types in simulations of Coupled Quintessence is also crucial for proper treatment of two other effects that characterise these cosmological models beyond the fifth force described by Eq. (<ref>). The first is the mass variation of coupled particles due to the exchange of rest-frame energy with the DE scalar field, which arises as a direct consequence of the modified continuity equations (<ref>) and of the assumption of particle number conservation. More specifically, the mass of coupled particles evolves as a result of the evolution of the background scalar field according to m_Y(a) = m_0exp(- ∫_ϕ _0^ϕ (a)β_(Y) (ϕ )dϕ/M_ Pl) . Such a mass variation, which involves only particle species with a non-vanishing coupling to the scalar field, must be taken into account in N-body algorithms by changing the mass of individual simulation particles at every time step. This is normally done by tabulating the mass as a function of scale factor a by numerically integrating Eq. (<ref>) along with the background dynamics of the scalar field ϕ, and interpolating from that table as the simulation progresses. The second effect is an additional force (on top of the fifth force) acting on coupled particles as a consequence of momentum conservation due to the particles' mass variation described by Eq. (<ref>), which takes the form of a velocity-dependent extra acceleration behaving either as a friction or as a drag, depending on the relative signs of the coupling function β (ϕ ) and of the scalar field velocity ϕ̇ <cit.>, a⃗_Y,v = -β _(Y)(ϕ)/M_ Plϕ̇v⃗ . Such a velocity-dependent acceleration is responsible for a very rich phenomenology characterising Coupled Quintessence models, especially on highly nonlinear scales <cit.>, and must be included in N-body simulations as well for a fully consistent treatment of these scenarios. This is done by adding the extra acceleration described in Eq. (<ref>) to the total acceleration (i.e. Newtonian plus fifth force) of all coupled particles in each time step, a⃗_Y = a⃗_N + a⃗_Y,5th + a⃗_Y,v . The relevant quantities β (ϕ) and ϕ̇ can again be interpolated from a table obtained by integrating the background dynamics of the system. 
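Both of these additional ingredients depend only on background quantities, so they enter the particle update through simple table look-ups. The sketch below is purely illustrative (the synthetic background table, function names, and code units are our own assumptions, not the implementation of the code used by the Collaboration).

```python
import numpy as np
from scipy.interpolate import interp1d

# Toy background tables; in practice the columns (a, phi, dphi/dt, beta) come
# from integrating the scalar-field background equations of the chosen model.
a_bg = np.linspace(0.02, 1.0, 500)
phi_bg = 1.0 + 0.1 * np.log(a_bg)              # made-up field evolution
dphidt_bg = 0.05 * a_bg                        # made-up field velocity
beta_bg = 0.08 * np.ones_like(a_bg)            # constant coupling beta(phi)

phi_of_a = interp1d(a_bg, phi_bg)
dphidt_of_a = interp1d(a_bg, dphidt_bg)
beta_of_a = interp1d(a_bg, beta_bg)
M_PL = 1.0                                     # reduced Planck mass in code units

def coupled_mass(a, m0, a_ini=0.02):
    """Mass of a coupled particle, m(a) = m0 exp(-int beta dphi / M_Pl)."""
    a_grid = np.linspace(a_ini, a, 256)
    dphi_da = np.gradient(phi_of_a(a_grid), a_grid)
    return m0 * np.exp(-np.trapz(beta_of_a(a_grid) * dphi_da, a_grid) / M_PL)

def kick(vel, acc_newton, acc_fifth, a, dt):
    """Velocity update: Newtonian force + fifth force + velocity-dependent term."""
    acc_vel = -(beta_of_a(a) / M_PL) * dphidt_of_a(a) * vel
    return vel + (acc_newton + acc_fifth + acc_vel) * dt

print(coupled_mass(1.0, m0=1.0))               # mass today relative to its initial value
```

The same velocity-proportional structure accommodates other drag-like interactions, with only the prefactor of the velocity changing.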
This is the approach implemented in the code that has been used to run Coupled Quintessence simulations within the Euclid Collaboration. §.§.§ Momentum exchange and dark scattering A further example of interacting DE cosmologies characterised by scalar-field perturbations that always remain linear is given by models of pure momentum exchange <cit.> between the DE field and massive particles like dark matter or baryons. A limiting case is given by the Dark Scattering scenario <cit.> where the momentum transfer between the two components is modelled as the elastic scattering of massive particles moving through a homogeneous DE fluid with equation of state w. This results in an extra force acting on the moving massive particles which is proportional to their comoving velocity, similar to the velocity-dependent force described by Eq. (<ref>) for Coupled Quintessence models. However, the origin of this force is completely different in this case, as it does not originate from the mass variation of particles but rather from the momentum transfer with the DE field. As a result, the scattering acceleration can be expressed as a⃗_s = -(1+w)3H^2Ω_DE/8π Gξv⃗ , where the parameter ξ is defined as ξ≡σ/m , with σ denoting the scattering cross section and m the typical mass of the scattering particle species. This type of interaction can be implemented in N-body algorithms <cit.> in a very similar way as the velocity-dependent acceleration in Coupled Quintessence scenarios, as the factors entering Eq. (<ref>) are all either constants or background quantities that can be interpolated at every timestep from tabulated data. This is the approach implemented in the code that has been used to run the and simulations <cit.>. Although Dark Scattering represents a limiting case of the more general class of pure momentum-exchange models between matter and DE <cit.>, for which further modifications to the standard particle dynamics are expected besides the drag force of Eq. (<ref>), recent works <cit.> have shown that such additional modifications are generally subleading with respect to the drag force so that their effect on structure formation can be neglected. This ensures that the current implementation of Dark Scattering within the simulations used in the Euclid Collaboration can be considered representative of the general class of momentum-exchange cosmologies. §.§ Nonlinear scalar field perturbations In models where a scalar field couples to matter universally, or at least to baryons in a relevant way, some mechanism to suppress the coupling is required to satisfy the stringent local tests of gravity. This is commonly referred to as `screening'. Screening mechanisms are achieved by nonlinearity in the scalar-field equation coupled to matter. The equation determining the evolution of the scalar field is typically a wave equation of the form □ϕ = S(ϕ, ∇_μϕ, ∇_μ∇_νϕ, m) . Here, □≡∇_μ∇^μ represents the d'Alembertian operator and S is a nonlinear function that depends on the matter density, the scalar field, and its derivatives. Various methods have been developed to solve this nonlinear scalar field equation in N-body simulations, where the nonlinear density m is modelled by collisionless particles . Several approximations are often used to solve these nonlinear equations. The most common one is the quasi-static approximation. The scalar field can be split into a background part, ϕ̅, and a perturbation, δϕ, as ϕ = ϕ̅ + δϕ. 
The quasi-static approximation amounts to ignoring the time dependence of the scalar field perturbation, i.e. assuming ϕ̇≃ϕ̇̅̇. The partial differential equation (PDE) of the field perturbation, which in its original form may have been of the hyperbolic or parabolic type, is therefore cast into an elliptic form so that the scalar field solution at any given time depends solely on the matter configuration at that time. This is a good approximation whenever the speed of sound of the scalar field is small <cit.>, which is the case for the MG models considered here. Non-quasistatic cosmological simulations have been conducted for several MG models using different techniques, such as the explicit leap-frog method and the implicit Newton–Gauss–Seidel method <cit.>. The scalar-field solution is required for the computation of the total gravitational potential Φ that acts on the matter particles, ∇^2 Φ = 4 π G a^2 (δm + δeff (ϕ) ) , where the effective density depends on the scalar field. There are two common ways of solving for this total gravitational force. The first option is to solve first for ϕ and then use this solution to compute the source term in Eq. (<ref>) and solve for Φ using a standard Poisson solver to get the total force ∇Φ. The other option is to apply the total force ∇Φ_N + ∇ϕ to the particles. Under the quasi-static approximation, the scalar-field equation assumes the same form as the usual Poisson equation, ∇^2 ϕ = S(ϕ, ∇_i ϕ, ∇_i ∇_j ϕ, δm) . The main difference is that the scalar-field equation is generally nonlinear. This nonlinear behaviour implies that conventional techniques, such as using Fourier analysis, cannot be used to solve the equation. Numerous approaches have been developed to address this challenge, and we refer the reader to for details. For computational methods that aim to accurately solve a nonlinear equation on refined grids, the approach typically involves discretising the equation in a suitable way and employing an iterative algorithm, such as the Newton–Raphson method, to successively refine solutions based on an initial guess. To speed up convergence, many of these methods incorporate so-called `multigrid' acceleration techniques which we quickly review here. §.§.§ Nonlinear multigrid algorithm A generic way to solve nonlinear elliptic PDEs is to couple the multigrid algorithm to the Newton–Raphson method, u^ new = u^ old - ℒ(u^ old)/∂ℒ/∂ u^ old , where u is the discretised field, ℒ is the differential operator (which is a Laplacian for Newtonian gravity) and the superscripts refer to the new or old estimate of the solution in one Newton–Raphson iteration. The Newton–Raphson method produces linear equations for the correction terms, which are solved by the Full-Approximation-Storage Multigrid algorithm <cit.>. For a review of these methods applied to MG simulations, see e.g. <cit.>, <cit.> and . A simple sketch of the algorithm goes as follows. One starts with a guess for the solution on a grid; this could be anything from a constant value across the grid to using the solution from the previous timestep in the simulation. One then loops a few times over all cells in the grid, updating the solution using Eq. (<ref>). This solution is then restricted to a grid with half the resolution, the solution is updated again, and this process is repeated recursively up to the coarsest grid (one with only 2^3 cells). 
The solution is then interpolated to the finer grid, updated once more, and this is done recursively until one reaches the finest grid we started with. One such cycle is called a V-cycle, and one repeats such V-cycles until convergence is achieved. The advantage of having this stack of coarser grids is that it helps to accelerate the convergence of the largest modes in the solution. §.§.§ Screening with nonlinearity in potentials In models where screening is achieved by nonlinearity in a potential or coupling function, the equation for the scalar field becomes ∇^2 ϕ = 4 π G a^2 β(ϕ) δm + V(ϕ) , where β(ϕ) and V(ϕ) are nonlinear functions of ϕ. A typical example is f(R) gravity. In this class of models, the value of the scalar field changes by orders of magnitude. To enhance numerical stability, a common technique involves redefining the scalar field in terms of a new variable. The redefinition to choose depends on the specific model under consideration. It is typically chosen to prevent the occurrence of unphysical values of the scalar field during Newton–Raphson iterations. For example, for f(R) models the scalar field ϕ = f_,R will be driven towards zero in high-density regions, but at the same time f_,R cannot cross zero, as the potential becomes singular in this scenario. To avoid this issue, a commonly used field redefinition is u ≡ln[f_,R/f̅_,R(a)] <cit.>. However, this transformation introduces additional nonlinearity and in some models, such as Hu–Sawicki f(R) models, this transformation is not necessary and might even lead to considerable performance losses in a simulation. In some cases, this can be avoided. For example, <cit.> noticed that for Hu–Sawicki f(R) gravity with n = 1 <cit.>, when making the change of variable u = √(-f_,R), the field equation could be recast as a depressed cubic equation, u^3 + pu + q = 0 , which possesses analytical solutions <cit.>. Although the Gauss-Seidel smoothing procedure is still needed (because p depends on the values of the field u in neighbouring cells), this removes the Newton–Raphson part and expensive exponential/logarithmic operations from the method of <cit.>, therefore leading to significant performance gains. <cit.> also generalised this improved relaxation approach to the cases of n=0 (which strictly speaking is not a variation of the Hu–Sawicki model) and n=2. For other models like the symmetron, which has a Higgs-like potential, the scalar field is free to cross zero, and no field redefinition is needed (apart from a simple rescaling). §.§.§ Screening with nonlinearity in kinetic terms In another class of models, nonlinearity emerges within the kinetic term. For example, in models with the Vainshtein screening mechanism <cit.>, the equation exhibits nonlinearity in the second derivatives of the scalar field, ∇^2 ϕ = 4 π G a^2β(a) δm + g(∇_i ϕ, ∇_i ∇_j ϕ) , where β(a) is a time-dependent coupling function. The simplest example here is the DGP model which was first simulated by <cit.>. In such cases, the operator-splitting trick <cit.> can be employed. This approach can simplify the equations, avoiding potential issues associated with imaginary square roots, and improving code performance. This trick is particularly useful for the DGP braneworld models and other Vainshtein screening models, such as Cubic and Quartic Galileons . §.§.§ Approximate treatments of screening Some models allow linearisation of the nonlinear equation using some approximation. 
One approach (<cit.>; <cit.>; see also the appendix of <cit.>) is to introduce a screening factor for the matter density perturbation, ∇^2 ϕ = (c^2 m^2 a^2/ħ^2) ϕ + 4π G a^2 δ_m ϵ_screen(Φ_N, |∇Φ_N|, ∇^2 Φ_N) , where the screening function ϵ_screen depends on the Newtonian potential Φ_N. This type of parameterised modified gravity is referred to as `type 1' in Table <ref>. One specific method, developed in <cit.>, starts from linearising the Klein–Gordon equation. In this formalism, one solves the Poisson equation in Fourier space, - k^2 Φ = 4π G_eff(a,k) a^2 δ_m , where the function G_eff(a,k) approximates the effective Newton's constant introduced by the screening effect of the scalar field on small scales. This function is given by G_eff(a,k) = G + Δ G_eff(a,k) = G + [G_eff^lin(a) - G] M(a,k) , with G_eff^lin(a) being the asymptotic linear effective Newton's constant that depends only on time, and M(a,k) a function that approximately captures the nonlinear corrections introduced by the scalar field on small scales. This function allows Eq. (<ref>) to transition from G_eff(a,k) → G_eff^lin(a) on large scales to G_eff(a,k) → G on small scales. This type of parameterised modified gravity is referred to as `type 2' in Table <ref>. A procedure to fix M(a,k) is described in <cit.>, which has the advantage of avoiding additional parameters to tune the screening efficiency. One can also choose to parameterise the nonlinear contribution using an effective Newton's constant at both small and large scales. If the modifications of gravity are encoded in a scale-dependent function Δ G_eff(a,k) as in Eq. (<ref>), then we can propose a similar equation in real space, G̃_eff(a, r) = G + ΔG̃_eff(a, r) , where ΔG̃_eff(a,r) is the Fourier transform of Δ G_eff(a,k). In practice, an additional approximation is made, namely ΔG̃_eff(a,r) ≈ Δ G_eff(a, k → 1/r). This approach allows the encoding of nonlinear contributions over the whole range of scales modelled by N-body algorithms through real-space equations, for instance the Tree Particle-Mesh (TreePM) method implemented in codes like . Provided the parameterisation is effective, this is expected to increase the accuracy of the estimation of the nonlinear effects. Several parameterisations have been proposed for this kind of approach, either with additional tuning parameters, such as in <cit.>, or based on local small-scale environmental properties that avoid the need for any extra parameters, such as in <cit.>.

§ ADDITIONAL CODE VALIDATION

A code comparison for simulations that solve nonlinear scalar field perturbations is presented in W15. Since then, various new codes have been developed. In this section, we show a comparison of the predictions for the power spectrum and the halo mass function, using the simulations of W15 as a reference and starting from the same initial conditions. These were generated using second-order Lagrangian perturbation theory in a ΛCDM cosmology with Ω_m = 0.269, Ω_Λ = 0.731, h = 0.704, n_s = 0.966, and σ_8 = 0.801. The simulations have N_p = 512^3 particles of mass M_p ≃ 8.756 × 10^9 h^-1 M_⊙ in a box of size B = 250 h^-1 Mpc and start at redshift z = 49. As in W15, we compare simulations for f(R) and nDGP models. In these models <cit.>, the background expansion history is closely approximated by that of ΛCDM. Furthermore, the effect of modified gravity can be ignored at z = 49, so it is justified to start from ΛCDM initial conditions.
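For orientation, the kind of power-spectrum measurement entering this comparison can be sketched in a few lines; the snippet below is our own illustration and not the measurement pipeline used in this series. Normalisation conventions differ between pipelines, but they cancel in the code-to-code ratios and MG-to-ΛCDM boosts discussed below.

```python
import numpy as np

def measure_pk(delta, box_size, n_bins=30):
    """Spherically averaged power spectrum of a periodic density-contrast grid."""
    n = delta.shape[0]
    delta_k = np.fft.rfftn(delta) / n**3
    kf = 2.0 * np.pi / box_size                        # fundamental mode
    kx = np.fft.fftfreq(n, d=1.0 / n) * kf
    kz = np.fft.rfftfreq(n, d=1.0 / n) * kf
    kmag = np.sqrt(kx[:, None, None]**2 + kx[None, :, None]**2 + kz[None, None, :]**2)
    power = np.abs(delta_k)**2 * box_size**3
    bins = np.linspace(kf, kmag.max(), n_bins + 1)
    idx = np.digitize(kmag.ravel(), bins)
    k_mean = np.array([kmag.ravel()[idx == i].mean() for i in range(1, n_bins + 1)])
    pk = np.array([power.ravel()[idx == i].mean() for i in range(1, n_bins + 1)])
    return k_mean, pk

# toy usage: white-noise field on a 64^3 grid in a (250 Mpc/h)^3 box
delta = np.random.default_rng(0).normal(size=(64, 64, 64))
k, pk = measure_pk(delta, box_size=250.0)
```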
The measurements of the power spectrum and mass function are performed by the pipeline developed in the second article of this series, Rácz et al. (in prep.). Based on two models only, our comparison does not encompass the full diversity of numerical methods discussed in the previous section. In many cases some validation of the various implementations can be found in the corresponding references. §.§ Summary of codes used in the validation Table <ref> shows an overview of the simulation codes considered in this section and provides a quick reference of their capabilities and limitations. For each of them, a short summary is presented here. In the Appendix we comment on the trade-off between accuracy and computational cost of the implementations. §.§.§ First presented in <cit.>, this code is based on the moving-mesh N-body and hydrodynamical simulation code <cit.>, which uses a TreePM algorithm to calculate gravitational forces. The additional modified gravity force (fifth force) is calculated with a relaxation solver <cit.> that is accelerated by the multigrid method and uses adaptive mesh refinement (AMR). It currently also supports simulations for the nDGP model <cit.>, as well as massive neutrinos implemented using the δ f method <cit.>. To solve the modified gravity equations, the density field is projected onto the AMR grid, constructed in such a way that each cell on the highest refinement level contains at most one particle (except if a pre-set maximum refinement level is reached; the cell size at this level is of the order of the smoothing length of the standard gravity solver). Once the field equation is solved to obtain the scalar field configuration, the modified gravity force can be computed from its gradient using finite differencing. Since computes the forces on all particles simultaneously, and the modified gravity field equations are generally highly nonlinear (with a poor convergence rate of the relaxation algorithm), this is computationally expensive compared to 's Newtonian gravity solver. However, the maximum acceleration of the modified gravity force is smaller than that of Newtonian gravity, mainly because the latter occurs in regions with high density where screening occurs. This allows the modified gravity solver to run using larger time steps <cit.>, resulting in significantly reduced computational cost. Together with 's efficient MPI parallelisation and lean memory footprint, these have made it possible to run the large number of f(R) simulations used in various recent works, such as the - <cit.> simulation suite of 200 f(R) and nDGP models. This has allowed accurate emulators of various physical quantities or observables to be constructed. Another highlight of is its capability to run realistic galaxy formation simulations in a cosmological box <cit.>, thanks to the use of the Illustris-TNG subgrid physics model <cit.>. More recently, it has been used for larger-box hydrodynamical simulations with a realistic recalibrated Illustris-TNG model <cit.>, enabling the study of galaxy clusters in modified gravity. The simulations used in this paper were run using a residual criterion of ϵ = 10^-2 and a maximum refinement level () of 10 for the nDGP simulations and 18 for the f(R) models with a gravitational softening of 0.01 h^-1 Mpc. §.§.§ <cit.> is a modified version of the TreePM N-body code <cit.> implementing an AMR solver for the scalar degree of freedom f_,R characterising the widely-studied Hu–Sawicki f(R) gravity model. 
In , the same tree structure that is employed to solve for standard Newtonian gravity is also used as an adaptive grid to solve for the scalar field configuration through an iterative Newton–Gauss–Seidel (NGS) relaxation scheme (see Sect. <ref>) with the Full-Approximation-Storage Multigrid method (see Sect. <ref>) and with the field redefinition u ≡ln[f_,R/f̅_,R(a)] (see Sect. <ref>). also allows MG simulations to be run with massive neutrinos <cit.> using the neutrino particle method <cit.>. For the simulations presented here, the relative tree opening criterion was used with an acceleration relative error threshold of 0.0025, and a uniform grid with 512^3 cells was employed to compute long-range Newtonian forces. Concerning the MG field solver, a residual tolerance of ϵ = 10^-2 was set for the V-cycle iteration and a maximum refinement level of 18 was used for the AMR grid, corresponding to a spatial resolution of 1 h^-1 kpc at the finest grid level, compared to a gravitational softening of 18 h^-1 kpc, following the setup adopted in . §.§.§ <cit.> is a generic modified gravity simulation code based on the publicly-available N-body and hydrodynamical simulation code <cit.>. Originally developed for f(R) gravity, this code takes advantage of the adaptive mesh refinement of to achieve the high resolution needed to solve the scalar field and hence the fifth force in high-density regions. The nonlinear f(R) field equation is solved with the standard Gauss-Seidel approach as first applied by <cit.>, but it was later replaced by the more efficient algorithm of <cit.>. The code has since been extended for simulations for the generalised chameleon <cit.>, symmetron and dilaton <cit.>, nDGP <cit.>, cubic Galileon <cit.>, quartic Galileon <cit.>, vector Galileon <cit.> and nonlocal gravity <cit.>. For the simulations used in this paper, we have used a domain grid (the uniform mesh that covers the whole simulation domain) with 2^9=512 cells per dimension, and the cells are hierarchically refined if they contain 8 or more effective[Since the code uses a cloud-in-cell mass assignment scheme, it is tricky to count particles in cells because a particle can contribute to the densities of 8 nearby cells. Here “effective” is used to mean that the density in a given cell, multiplied by the volume of the cell, is equivalent to the total mass of a specified number of particles.] particles. The highest refined levels have effectively 2^16 cells, leading to a force resolution of about 0.0075 h^-1 Mpc. §.§.§ ISIS The code <cit.>, like the code above, is based on <cit.>. It contains a scalar field solver that can be used to simulate generic MG models with nonlinear equation of motion and has been used to simulate models such as f(R) gravity, the symmetron model, nDGP and disformal coupled models <cit.>. It also allows for hydrodynamical simulations that have been used to study the interplay between baryonic physics and modified gravity <cit.>. Furthermore, the code has the capability to go beyond the quasistatic limit and study the full time dependence of the scalar field <cit.>. The scalar field solver used in the code is a Gauss–Seidel relaxation method with multigrid acceleration, very similar to the one in described above. For the simulations presented in this paper, we have used a domain grid (the uniform mesh that covers the whole simulation domain) with 2^9 = 512 cells per dimension, and the cells are hierarchically refined if they contain 8 or more effective particles. 
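To make the relaxation machinery shared by the AMR codes above more concrete, the toy sketch below performs red-black Newton–Gauss–Seidel sweeps, following the update u_new = u_old − L(u_old)/(∂L/∂u_old), for a simple nonlinear scalar-field-like equation ∇²u = m²(e^u − 1) + δ on a fixed periodic grid. The equation, grid size, and parameter values are our own choices for illustration; a production solver embeds such sweeps in multigrid V-cycles on adaptively refined meshes, as described earlier.

```python
import numpy as np

def newton_gauss_seidel(u, delta, h, m2, n_sweeps=20):
    """Red-black Newton-Gauss-Seidel sweeps for L(u) = lap(u) - m2*(exp(u)-1) - delta = 0."""
    color = np.indices(u.shape).sum(axis=0) % 2        # checkerboard ordering
    for _ in range(n_sweeps):
        for c in (0, 1):
            lap = (sum(np.roll(u, s, axis=ax) for ax in range(3) for s in (-1, 1))
                   - 6.0 * u) / h**2
            L = lap - m2 * (np.exp(u) - 1.0) - delta
            dL_du = -6.0 / h**2 - m2 * np.exp(u)       # diagonal of the Jacobian
            mask = color == c
            u[mask] -= (L / dL_du)[mask]               # Newton-Raphson update quoted above
    return u

# toy setup: 32^3 periodic grid with a single overdense cell
n, h, m2 = 32, 1.0 / 32, 10.0
rho = np.ones((n, n, n))
rho[n // 2, n // 2, n // 2] = 200.0
delta = rho / rho.mean() - 1.0                         # zero-mean density contrast
u = newton_gauss_seidel(np.zeros_like(delta), delta, h, m2)
```

Plain sweeps of this kind damp short-wavelength errors efficiently but converge slowly on large scales, which is precisely what the coarser levels of the multigrid hierarchy are there to accelerate.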
§.§.§ [https://github.com/mianbreton/pyscogithub https://github.com/mianbreton/pysco] is a particle-mesh (PM) code written in and accelerated with which currently supports Newtonian and f(R) gravity <cit.>. While multiple flavours of solvers based on Fast Fourier Transforms (FFT) are available, for the present paper, we use a multigrid solver to propose something different from other codes in this comparison project (other codes that also use multigrid are AMR-based). uses a triangular-shaped cloud mass assignment scheme and solves the linear Poisson equation using multigrid V-cycles with a tolerance threshold of the residual of 10^-3, and two F-cycles <cit.> to solve the additional field in f(R) gravity with the nonlinear multigrid method described in Sect. <ref> and Eq. (<ref>). Our convergence threshold is very conservative since we do not intend to conduct a convergence study in this paper (a less conservative threshold could still give reasonable results at much lower computational cost). Furthermore, to resolve the small scales we use a coarse grid with 2048^3 cells, resulting in roughly 500 time steps to complete the simulations. §.§.§ simulations were performed using the Fourier-Multigrid Library .[https://github.com/HAWinther/FMLgithub https://github.com/HAWinther/FML] These simulations are based on the COmoving Lagrangian Acceleration (COLA) method <cit.>, which combines Lagrangian perturbation theory with the PM method to reduce the number of time steps that are required to recover clustering on large scales. The N-body solver in the library contains implementations of various DE and MG models like the DGP model, the symmetron model, f(R) gravity and the Jordan–Brans–Dicke model <cit.>. For the PM part, we used N_mesh^1/3 = 5 N_p^1/3, i.e. a mesh discretisation five times smaller than the mean particle separation, and the total of 150 time-steps linearly spaced along the scale factor to achieve a good agreement of the mass function with AMR simulations. We also ran low-resolution simulations with N_mesh^1/3 = 3 N_p^1/3 with 100 time steps, and checked that the nonlinear enhancement of the power spectrum from these low-resolution simulations agrees well with the one from the high-resolution ones. Screening is included using approximate treatments described in Sect. <ref>. For f(R) gravity models, the code has one parameter to tune the strength of chameleon screening called . For the model with f̅_,R = 10^-6, we used the default value while we used for f̅_,R = 10^-5. For nDGP, we used Gaussian smoothing with a smoothing radius of 1 h^-1 Mpc to compute the density field for screening and did not use an option to enforce the linear solution at small wavenumbers k. We also ran simulations based on the screening approximation using G_eff(a, k) given by Eq. (<ref>) for nDGP, and in these simulations, we have used the same settings as in the other screening approximation implementation. The biggest advantage of this screening implementation is that it does not require any additional tuning parameter related to screening, that is, the function G_eff(a,k) is completely defined by the theoretical model one wants to simulate. §.§.§ is an extension of the TreePM code . It introduces modifications at large scales in the PM part with an effective Newton's constant according to Eq. (<ref>), while the force at small scales is modified in the tree part with Eq. (<ref>). In this respect, implements an approximate solver for the extra force induced by different possible MG theories. 
On the other hand, differently from other approximate methods, the dynamics of matter particles under the effect of the (dominant) standard gravity force is treated with the full TreePM solver of of which it retains the accuracy in modeling the nonlinear density field. For nDGP, the functional form of G̃_eff(a,r) is described in <cit.>, while for f(R) it is introduced as in <cit.>. In the latter case, no additional parameters are needed to describe the screening. For the first nDGP case instead, in addition to the theoretically defined parameters N_0=1, B = 1/[3β(a)], b=2 and a=3, we consider the screening scale k_th = 0.4 h Mpc^-1 defined by the parameterisation r_V≅2/3(3 Ω_m H^2 r_c^2/β^2)^1/3 r_th . where r_ V is the Vainshtein radius of the model for a spherically symmetric density perturbation <cit.>. §.§ Comparison of the power spectrum Before discussing the effect of extra degrees of freedom, we compare the different codes within a ΛCDM cosmology. For reference and only for the case of ΛCDM, we include results from the code that was also used for the Flagship simulations in <cit.>. Figure <ref> shows the matter power spectra at two different redshifts, z=1 (left panels) and z=0.667 (right panels). The lower panels show the relative difference to the simulation carried out with that we use as a reference throughout. Since and have both been developed from the code, the agreement between these two codes is better than 1%. These AMR codes tend to underestimate the power on small scales when compared to tree-based codes, mainly due to the mesh refinement criterion used in our simulations. Fine-tuning some precision parameters, we expect that a better agreement can be achieved, cf. <cit.>. As we can see, the codes that perform closely to our benchmark at small scales are the ones that also share the same tree-PM gravity solver, and . , which uses a tree structure and the fast multipole method to compute gravitational forces, is in excellent agreement with up to k ∼ 1 h Mpc^-1, yielding slightly more power at higher k. Also shown in the figure are the results from , a pure PM code that uses a fixed grid to solve the Poisson equation. The fixed resolution of the PM grid explains why consistently suffers from low force resolution at k ≳ 1 h Mpc^-1. All our simulations use the exact same initial particle data, such that our comparisons should not be contaminated by cosmic variance. However, for simulations, we also reconstructed the initial density field from the initial particle distribution to estimate the displacement fields required for the 2LPT calculations in those simulations. While incorporates a full N-body solver, its small-scale accuracy is constrained by resolution limitations that arise from the absence of AMR. Moreover, within the scope of this code comparison project, employs a multigrid algorithm, resulting in reduced small-scale clustering compared to FFT (on a regular grid), thereby explaining the observed deficiency in power at wavenumbers k ≳ 1 h Mpc^-1. For modified gravity models, we compare the ratio between the power spectrum computed for that model and the one found in the ΛCDM reference cosmology. Taking such ratios largely cancels out the differences seen in ΛCDM between the codes, making the differences due to the treatments of the extra scalar field more apparent. Following , we consider two MG models that exhibit two different types of growth dependence and screening mechanisms. 
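Before turning to the two models in detail, note that the boost statistic itself is straightforward to form once the spectra have been measured by the pipeline of Rácz et al. (in prep.): it is a bin-by-bin ratio of spectra from runs that share the same initial conditions, so that cosmic variance and much of the resolution systematics cancel. A minimal sketch (function and variable names are ours, not those of the measurement pipeline):

import numpy as np

def boost(pk_mg, pk_lcdm):
    # B(k) = P_MG(k) / P_LCDM(k), evaluated bin by bin; both spectra must be
    # measured in the same k bins from runs with identical initial conditions.
    return np.asarray(pk_mg) / np.asarray(pk_lcdm)

def relative_difference(boost_code, boost_reference):
    # Quantity shown in the lower panels of the comparison figures.
    return boost_code / boost_reference - 1.0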
The first one is the so-called Hu–Sawicki f(R) which exhibits a scale-dependent growth factor at linear order, and a screening mechanism realised by nonlinearities in the potential, see Sect. <ref>. The strength of the modification of gravity is characterised by the background value of the scalar degree of freedom, and we study the cases f̅_,R = 10^-5 and f̅_,R = 10^-6, labelled `F5' and `F6', respectively, the latter being closer to GR. The second one is nDGP, a braneworld theory defined in a five-dimensional spacetime. The growth factor in this theory is scale independent, and screening is realised through nonlinearities in the kinetic terms, as discussed in Sect. <ref>. Here, the strength of the modification of gravity is characterised by the value of the cross-over scale, and we study the cases r_c = 1.2 H_0^-1 and r_c = 5.6 H_0^-1, labelled `N1.2' and `N5.6', respectively, the latter being closer to GR. Figure <ref> shows the relative change (with respect to ΛCDM) of the matter power spectrum due to modifications of gravity, sometimes called `boost', for the two f(R) scenarios at redshifts z=1 (left panels) and z=0.667 (right panels). Since f(R) already exhibits a scale-dependent growth factor at linear order, we can see that on scales of k≈ 0.1 h Mpc^-1 this effect is already present, and gets enhanced at large wavenumbers due to nonlinearities of the density field. As we can see the codes , , , , and all roughly agree to better than one percent down to scales of k≈ 10 h Mpc^-1, as expected. These minor discrepancies can be caused by the refinement criterion used in the latter code or by slight variations in the redshift of the particle snapshot output. The present results are largely in agreement with a similar analysis performed in . In the same plots, we can also see how approximation schemes to introduce screening from nonlinearities in the potential perform in contrast to the exact solutions. The results that use these methods are showcased by the examples of and , where each uses a different scheme to approximate the effects of the dynamics of the scalar field in dense environments. As expected, the two codes do not exhibit the same level of agreement down to scales of k≈ 10 h Mpc^-1 as their counterparts using full solvers. However, in both cases, we can see that the deviations from the reference results are limited to 2% even in the most extreme regime of departure from GR. Through closer inspection, we can see small differences in the agreement between the approximate schemes and full solvers at different values of k. These deviations are caused by different implementations of the approximate MG solvers in the two codes. In fact, while is an approximate method that uses a PM algorithm to solve the Poisson equation on a fixed grid with the use of the screening approximation to linearise the Poisson equation, is a new implementation that exploits the TreePM structure of the baseline code to solve for the small-scale particle dynamics by incorporating the MG effects (including the screening) through a scale-dependent Newton's constant in both real and Fourier space. Figure <ref> shows results for the matter power spectra in the two nDGP scenarios we consider, characterised by the two values of the cross-over scale, r_c = 1.2 H_0^-1 (N1.2) and r_c = 5.6 H_0^-1 (N5.6). As before, we compare simulation results at two different redshifts, z=1 (left panels) and z=0.667 (right panels). 
Since the linear growth function has a scale-independent enhancement compared to ΛCDM, we see a constant amplification of power at small values of k. The Vainshtein mechanism suppresses the deviation from ΛCDM on small scales, leading to a diminishing boost at large k. The agreement between different codes at k < 1 h Mpc^-1 is better than 1% for all codes, including the approximate simulations with and . At k > 1 h Mpc^-1, the AMR-based codes, and , show larger deviations at the level of 2%, where the suppression of the deviation from ΛCDM is slightly underestimated compared with . This could be caused by the difference in the ΛCDM power spectrum shown in Fig. <ref>, even though this difference in the baseline code is largely cancelled out in the boost. On the other hand, despite their approximations in the MG solvers, and agree with at the level of 1% even on these scales. §.§ Comparison of the halo mass function To gain further insight into the nonlinear dynamics of the different models and their respective implementations we also compare the cumulative halo mass functions measured in our simulations. For this purpose, halo catalogues are obtained with the halo finder by running a pipeline described in the second paper of this series (Rácz et al. in prep.). To establish the baseline for the comparison, in the spirit of the previous section, Fig. <ref> shows a comparison of the cumulative halo mass function for the different codes in a ΛCDM cosmology. Since and are based on the same TreePM gravity solver, their agreement is excellent. They also agree very well with results from which, as we like to remind the reader, are only available for the case of ΛCDM and are shown for reference here. On the other hand, and are based on the AMR method. They agree with each other, but these simulations underestimate the abundance of low-mass halos below 10^13 h^-1 M_⊙. uses a fixed-grid PM method, thus it also underestimates the abundance of low-mass halos. With relatively high resolution in these simulations (N_mesh^1/3 = 5 N_p^1/3), agrees well with and . We note that exhibits a similar behaviour to , and , albeit with a lower amplitude. This discrepancy can be attributed to the fact that has the least small-scale clustering (see Fig. <ref>), resulting in a smaller number of halos within the simulation. The chameleon screening in f(R) depends on the halo mass such that high-mass halos are typically screened. The critical mass above which screening is effective is determined by the parameter f̅_R. As we can see from Fig. <ref>, the ratio of the halo mass function between f(R) and ΛCDM is enhanced for low-mass halos but approaches unity at the high-mass end where all halos are effectively screened. The agreement between full simulations ( and , , ) is around 4% for all halo masses. uses an approximation for screening based on the thin-shell condition for a spherically symmetric object. Although this captures an overall effect of screening, it fails for low-mass halos in F6 and high-mass halos in F5, leading to larger deviations. In the case of the Vainshtein mechanism, there is no halo-mass dependence on the screening. Despite screening being effective inside dark matter halos, these halos still feel enhanced gravitational attraction. This increases the merger rate and ultimately leads to larger enhancements of the halo mass function for halos of larger masses. In nDGP, shown in Fig. 
<ref>, all codes agree within 4%, with a sub-per cent agreement seen in the intermediate mass range 10^12 h^-1 M_⊙ < M < 10^13 h^-1 M_⊙. shows a large deviation for M ≲ 5 × 10^11 h^-1 M_⊙. For these masses, there are less than 50 dark matter particles assigned to each halo, which indicates that the results could be affected by the refinement criteria of the simulations. The deviations are larger for the most massive halos, but the number of these halos is low and the halo mass function therefore becomes very noisy in this regime. § CONCLUSIONS In this work, we have presented a comprehensive review of numerical methods for cosmological N-body simulations in scenarios extending beyond the standard ΛCDM model. Our exploration spanned a variety of alternative DE and MG theories, highlighting the critical role of N-body simulations in connecting theoretical models with observational data. Through the detailed examination of numerical solvers and approximations tailored to these extended theories, we have showcased the state of the art of modelling the nonlinear scales of cosmic structure formation under a wide range of cosmological scenarios. Our code comparison exercise, based on the simulations from and extended by incorporating new codes and approximation techniques, has demonstrated a fair consensus among different numerical implementations. This validation is particularly important for the mission, as the forthcoming observational data will require precise nonlinear modelling to constrain cosmological parameters effectively. This article is part of a series that explores simulations and nonlinearities in models beyond ΛCDM. The simulation codes that we have considered in this article are used to generate simulation products in the companion paper by Rácz et al. (in preparation) and are crucial for generating the simulations needed to create the nonlinear modelling in the companion paper by Koyama et al. (in preparation), which forecasts constraints for f(R) gravity from photometric primary probes of the mission. The validation checks performed in this paper are therefore critical for being able to trust these results. The main outcomes of this work can be summarised as follows: ⋆ N-body simulation codes have been developed for a wide range of extended cosmological scenarios, ranging from simple DE models to more complex interacting scalar field models and MG theories, to non-standard dark matter and initial conditions; the availability of such codes will be a crucial asset for the future developments of large galaxy surveys such as ; we have provided a concise yet comprehensive overview of several such codes, their main features, implementation methods, assumptions and approximations. ⋆ As a matter of fact, among these simulation codes the ones involving algorithms for the solution of nonlinear differential equations of some additional degrees of freedom, as e.g. for the case of MG theories, are the most challenging in terms of implementation and numerical convergence; we have therefore performed a thorough validation effort of these methods through a code comparison study, extending the approach adopted in to more recent and diverse algorithms. ⋆ As a result of our validation effort, we found agreement in the power spectrum boost at ≲ 1% up to k ≲ 1 h Mpc^-1 and at ≲ 3% up to k ≲ 10 h Mpc^-1 among all the codes implementing full field solvers (), while approximate methods () display slightly larger deviations not exceeding 3% up to k ≲ 7 h Mpc^-1. 
⋆ The halo mass function shows larger deviations among the codes, also due to larger differences in the outcomes for the baseline ΛCDM simulations; nonetheless, all codes agree within less than 5% on the relative change of the halo mass function except for the very largest and the smallest mass ranges where poor statistics and insufficient resolution, respectively, may impact the results. ⋆ We also compared the computational requirements of the different codes by measuring the CPU time needed to complete a reference MG simulation starting from identical initial conditions; we found that while full field solvers generally imply a substantial increase – up to a factor of ten – of the total CPU time relative to a ΛCDM simulation, approximate solvers are not significantly more demanding for MG simulations compared to standard ΛCDM runs. Looking forward, the continued evolution of simulation techniques will be paramount in leveraging the full potential of upcoming large-scale structure surveys such as Euclid. N-body simulations therefore continue to set a solid foundation for the next generation of cosmological inquiries. By persistently pushing the boundaries of computational astrophysics, we are poised to uncover the underlying physics driving the accelerated expansion of the Universe, thereby opening new windows onto the fundamental nature of DE, dark matter, and gravity itself. The lead authors thank A. Schneider and R.E. Smith for their diligent work as internal referees and P. Schneider for carefully proofreading the manuscript. The work of JA is supported by the Swiss National Science Foundation. During part of this work, AMCLB was supported by a fellowship of PSL University hosted by the Paris Observatory. This project was provided with computer and storage resources by GENCI at CINES thanks to the grant 2023-A0150402287 on the supercomputer Adastra's GENOA partition. GR’s research was supported by an appointment to the NASA Postdoctoral Program administered by Oak Ridge Associated Universities under contract with NASA. GR was supported by JPL, which is run under contract by the California Institute of Technology for NASA (80NM0018D0004). aa § COMPUTATIONAL COST One of the obstacles in producing accurate predictions in the nonlinear regime of structure formation is presented by the computational cost of running large N-body simulations. This issue becomes even more pronounced in the case of MG simulations, due to the additional computational cost associated with the solution of the Klein–Gordon equation for the scalar field describing the additional degree of freedom in these theories. To provide some insights into the trade-offs between accuracy and time-to-solution, we attempt here a comparative analysis of the computational cost of the simulations run for this paper. Given that these simulations were run on different machines and with different parallelisation settings, we cannot conduct a precise assessment of their computational cost. Instead, we limit this analysis to an order-of-magnitude comparison of the simulations run with the various codes and models discussed in this paper. For this exercise, we use the information on the wall-clock time, T_ real, and the number of cores, N_ cores, as recorded in the log files of the simulation runs. This information was available for all simulations except for the ones run with . We estimate the computational cost of the simulations, C, as the product of the wall-clock time and the number of cores: C ≡ N_ cores T_ real . 
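As a concrete illustration of this bookkeeping, the cost and the modified-gravity overhead factor follow directly from the log-file information; the wall-clock times and core counts below are invented placeholders rather than the measured values shown in the figure:

# Hypothetical log-derived inputs: wall-clock hours and core counts per run.
runs = {
    "LCDM": {"t_wall_hr": 8.0, "n_cores": 128},
    "F5":   {"t_wall_hr": 75.0, "n_cores": 128},
}

def cpu_hours(run):
    # C = N_cores * T_real, expressed in CPUh.
    return run["n_cores"] * run["t_wall_hr"]

overhead = cpu_hours(runs["F5"]) / cpu_hours(runs["LCDM"])
print(f"C(LCDM) = {cpu_hours(runs['LCDM']):.0f} CPUh, "
      f"C(F5) = {cpu_hours(runs['F5']):.0f} CPUh, overhead = {overhead:.1f}x")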
The estimates of the computational cost for the simulations are compared in Fig. <ref>. We can see that the cost of ΛCDM simulations of Tree-PM and AMR codes is C ∼ 10^3 CPUh, while for and the cost is about one order of magnitude lower at C ∼ 10^2 CPUh. For f(R) gravity models instead, the computational cost increases significantly (approximately by a factor of ten) for the codes that solve the full Klein–Gordon equation of the scalar field, namely , , , and , while the overhead is smaller for the codes that adopt screening approximations, namely and . Finally, in nDGP gravity, only has a significant overhead compared to ΛCDM, while and the approximate codes, and , have just a small overhead. The performance of multigrid codes depends on the convergence criterion chosen. For , we chose an extremely conservative approach (see Sect. <ref>) with a very low tolerance threshold, resulting in more V-cycles and almost double the CPU time needed to solve the linear Poisson equation compared to a more standard setup. Regarding the nonlinear solver, we use two F-cycles instead of a single one (which should in principle be enough, but it is not the goal of the present paper to provide a convergence study), therefore roughly doubling the CPU time needed for the f(R) gravity models. We stress that a thorough assessment of the efficiency of the codes is beyond the scope of this paper and would have required a much more methodical effort including (but not limited to) * running the simulations in a controlled environment, * conducting convergence tests for the various hyper-parameters, * testing the scaling performance of each code. In fact, when focusing only on predictions of the amplification factors, it is possible to achieve a similar level of accuracy with lower force, mass or time resolution, since resolution effects mostly cancel out when taking ratios of quantities affected by the same inaccuracies <cit.>. This has been shown to be the case for simulations in <cit.>, where the use of a lower resolution allowed accurate predictions of power spectrum boosts in nDGP gravity with a theoretical gain of ∼ 300 with respect to the computational cost of the COLA simulations presented here. Such large speed-ups have paved the way for creating emulators for the nonlinear amplification of the power spectrum in models beyond ΛCDM in a cost-effective way, i.e. without the need for supercomputers. This has already been done for some of the models we consider in this paper <cit.>. For instance, <cit.> found that an emulator for the nDGP model can be constructed with as little as a few thousand CPUh worth of computational time. Likewise, <cit.> who presented a generic pipeline for using COLA to create such emulators, used f(R) gravity as an example and found similar numbers for the required computational time. <cit.> described a simulation setup that can also be used for emulating the full power spectrum (up to a reasonable high wavenumber k∼ 1 h Mpc^-1), requiring around ∼ 100 CPUh per simulation on a modern CPU. Emulators have also been constructed for beyond-ΛCDM models using high-resolution direct simulations in the same way as has been done for ΛCDM. This approach generally gives more accurate emulators than those created with approximate methods, but this comes at a much higher cost. For example, both <cit.> and <cit.> have each presented a high-fidelity emulator for the f(R) model considered in this paper, but at a higher cost of about 3.5-4 million CPUh.
http://arxiv.org/abs/2409.03574v1
20240905142909
Latitudinal dependence of variations in the frequencies of solar oscillations above the acoustic cut-off
[ "Laura Jade Millson", "Anne-Marie Broomhall", "Tishtrya Mehta" ]
astro-ph.SR
[ "astro-ph.SR", "physics.space-ph" ]
L. J. Millson (laura.millson@warwick.ac.uk), A-M. Broomhall (a-m.broomhall@warwick.ac.uk), and T. Mehta
Centre for Fusion, Space and Astrophysics (CFSA), Department of Physics, University of Warwick, Coventry, CV4 7AL, UK
Running head: Millson et al., Pseudo-mode frequencies across solar latitude
§ ABSTRACT At high frequencies beyond the acoustic cut-off, a peak-like structure is visible in the solar power spectrum. Known as the pseudo-modes, their frequencies have been shown to vary in anti-phase with solar magnetic activity. In this work, we determined temporal variations in these frequencies across the solar disc, with the aim of identifying any potential latitudinal dependence of pseudo-mode frequency shifts. We utilised nearly 22 years of spatially resolved GONG data for all azimuthal orders, m, for harmonic degrees 0 ≤ l ≤ 200, and determined shifts using the resampled periodogram method. Periodogram realisations were created from overlapping, successive 216d-long segments in time, and cropped to 5600-6800μHz. Cross-correlation functions were then repeatedly generated between these realisations to identify any variation in frequency and the uncertainty. We categorised each mode by its latitudinal sensitivity and used this categorisation to produce average frequency shifts for different latitude bands (15^∘ and 5^∘ in size) which were compared to magnetic proxies, the F_10.7 index and GONG synoptic maps. Morphological differences in the pseudo-mode shifts between different latitudes were found, which were most pronounced during the rise to solar maximum where shifts reach their minimum values. At all latitudes, shift behaviour was strongly in anti-correlation with the activity proxy. Additionally, periodicities shorter than the 11-year cycle were observed. Wavelet analysis was used to identify a periodicity of four years at all latitudes. § INTRODUCTION The Sun exhibits periodic magnetic activity of varying lengths and amplitudes. One such phenomenon, the 22-year solar cycle, features two 11-year sunspot cycles where the polarity is reversed in successive cycles. This periodic variation is reflected in solar proxies and through helioseismic studies. Acoustic p-mode frequencies have been found to vary in-phase with the 11-year solar cycle (frequencies shifted to higher values as magnetic activity increased) (; ; ; ; ; see and and references therein for a full review), and the p-mode heights in anti-phase (; ; ; ). The amplitude of frequency shifts has also been shown to be greater for higher frequency modes (; ; ). As higher frequency modes have higher upper turning points in the Sun, frequency variations with the magnetic activity cycle are thought to result from changes to the acoustic properties of the uppermost layer of the solar interior, just beneath the photosphere (; ; ; ; ). Shorter-term modulation of the solar magnetic activity cycle has also been identified in numerous solar activity proxies and through seismology. Known as quasi-biennial oscillations (QBO), these oscillations have periods on timescales of 0.6-4 years (; ; ). The seismic QBO has been observed in p-mode frequency shifts with an ≈2-year periodicity (; ; ), and is modulated by the 11-year solar cycle (it has a greater amplitude at solar maximum, but is still visible when activity is at a minimum) <cit.>.
There is also evidence to suggest that the frequency dependence of this seismic QBO is weaker than the dependence of the 11-year solar cycle (; ), implying perturbations generating the seismic QBO are deeper in the solar interior compared to perturbations of the uppermost layer for the 11-year cycle. Throughout the solar cycle, sunspots drift from mid-latitudes (±30^∘) to lower latitudes (±5-10^∘). The mapping of sunspot locations over magnetic cycles results in the well-known butterfly diagram. This pattern is also visible when mapping the magnitude of p-mode frequency shifts over solar latitude and time (; ). To investigate whether this latitudinal progression of the solar cycle is observed in the acoustic p modes, <cit.> searched for temporal variations in p-mode frequencies as a function of latitude throughout Solar Cycle 23 and the rising phase to Cycle 24 maximum (June 1995 - July 2013) utilising intermediate (20 ≤ l ≤ 147) and high (180 ≤ l ≤ 1000) harmonic degree modes. The authors identified that whilst frequency shifts for all latitudes above ±15^∘ increased with the rise in magnetic activity, there was a delayed onset (1-2 years) in the upturn of frequency shift values for the lowest latitude band ±0^∘-15^∘. Frequency shifts in the ±0^∘-15^∘ band were found to have the fastest rise time (to solar maximum), and a delay in the decrease of the frequency shift amplitude after solar maximum (compared to the other latitudes). This delay resulted in an overlap of successive cycles at the frequency shift minimum. Above ±15^∘, the authors identified a double peak structure at solar maximum (with peaks in 2000 and 2002), which was interpreted as a manifestation of the QBO. In this work, we aim to conduct the first study on the latitudinal behaviour of frequency shifts of the pseudo-modes. In a solar power spectrum, we observe a mode-like pattern which extends beyond the acoustic cut-off frequency, ν_ac (; ; ; ). This structure, known as the pseudo-modes, is a result of waves travelling directly towards the observer, and indirectly upon refraction in the solar interior or at the back of the star, which interfere 'geometrically' and are observed projected onto spherical harmonics (; ; ; ). Pseudo-modes are an interesting tool for studies on solar magnetic activity. Firstly, for the Sun, pseudo-mode frequency variations are strongly anti-correlated with solar magnetic activity. The frequencies of the pseudo-modes have been shown to vary in opposition with the 11-year magnetic cycle between solar minimum and maximum (; ), and as a function of time <cit.>. In <cit.>, the authors determined pseudo-mode frequency shifts between time segments of 100 days (overlapping by 50 days) over nearly 20 years of GONG data averaged over all azimuthal orders. The pseudo-mode frequencies were found to vary significantly in anti-phase with the solar cycle for harmonic degrees 4 ≤ l ≤ 200. Secondly, the amplitude of the shift of the pseudo-mode frequencies is larger than that of the p-mode frequencies. <cit.> showed pseudo-mode frequency shift amplitude to vary by ≈1.5μHz throughout Solar Cycles 23 and 24. Currently, we only have a global view of the behaviour of pseudo-mode frequency shifts, where shifts are known to be anti-correlated with the solar magnetic activity cycle. 
However, prior analysis of the latitudinal dependence of p-mode frequency shifts has shown differences in the progression of the solar cycle below ±15^∘ between shifts and sunspots, providing insight on magnetic field structures and solar dynamo mechanisms <cit.>. Latitudinal analysis of pseudo-mode frequency shifts may, therefore, better constrain the origin and nature of variations in pseudo-mode frequencies, and the temporal interplay between shifts and magnetic activity. In this study, we utilised 22 years of GONG data to determine the behaviour of the solar pseudo-mode frequency shifts over time as a function of latitude across the solar disc. We present our methods in Section <ref>, the results in Section <ref>, and our conclusions in Section <ref>. § METHODOLOGY §.§ Data preparation In this work, we searched for temporal variations in the frequency shifts of the pseudo-modes at different latitudes across the solar disc. For this, we utilised data from the Global Oscillations Network Group (GONG, , <https://gong.nso.edu>). GONG consists of a network of six sites around the Earth which continuously monitor the Sun. Network merged timeseries (mrvmt) are produced by merging individual site timeseries, and concatenating 36 days of observations (known as a GONG month) <cit.>. We downloaded 223 of these timeseries, covering the time between 16th June 2001 to 7th June 2023. We started our analysis from 16th June 2001 onwards, as this is when the cameras were upgraded from 256 x 256 pixels to 1024 x 1024 pixels (with 60s cadence). The 36-day timeseries is spatially resolved for harmonic degrees 0 ≤ l ≤ 200. For each l, there are 2l+1 azimuthal orders, m, such that for l=200, there are 401 azimuthal orders ranging from -200 ≤ m ≤ 200. §.§ Temporal frequency shifts To search for temporal changes in frequency shifts of the pseudo-modes, and the uncertainty, we made use of <cit.>'s resampled periodogram approach. For each azimuthal order for every harmonic degree over 0 ≤ l ≤ 200, we concatenate six 36-day timeseries to create segments 216 days in length. Each successive segment overlaps the prior by half (108 days). We considered the first 216-day long segment (16th June 2001-17th January 2002) the reference segment, and so all frequency shifts are relative to this. A Lomb-Scargle (L-S) periodogram was then generated for both the reference segment and the comparison segment. Both periodograms were smoothed using a boxcar window with a width of 10μHz, following the approach of <cit.>. Each periodogram was then cropped to the range of frequencies in which we expected to observe the signature of the pseudo-modes. For this analysis, the range 5600-6800μHz was used. The frequency 5600μHz was chosen as a lower limit to ensure no p modes were included in the analysis (ν_ac,⊙≈5100μHz, ). The upper limit, 6800μHz, was selected so as many pseudo-mode peaks (6-8 peaks separated by ≈Δν) would be included in the analysis as possible, but that the upper limit was below both the frequency at which frequency shifts have been shown to switch to be in-phase with magnetic activity (≈6800μHz) <cit.> and the Nyquist frequency (≈8333μHz). An example of a L-S periodogram for the solar pseudo-mode region for mode l=150, m=0 is shown in Figure <ref>(a). To generate the periodogram realisations, we then took the square root of the periodograms and multiplied by a zero mean normal distribution, N(0, 1). This was repeated for both a 'real' and 'imaginary' instance. 
The modulus of the complex value was then determined, and this absolute value was squared. This generated new periodogram realisations for each segment, which retained the statistical properties (χ^2 with 2dof) of the original periodograms. A cross-correlation function (CCF) was then computed between the new periodogram realisations, and the CCF was cropped to ±30μHz. This value was selected to avoid any contribution from side peaks, the influence of which we would expect to observe at one-half of the large frequency separation, but to comfortably include the expected range of pseudo-mode frequency shifts determined previously (<2μHz, ). A Lorentzian function was then fitted to the CCF, and the centre of the Lorentzian was recorded as the frequency shift for this iteration. This process was then repeated 100 times. An example of the CCF is shown in Figure <ref>(b), with the fitted Lorentzian shown by the red line. The frequency shift between the two time segments was determined as the mean of the 100 lags, and the shift uncertainty was the standard deviation. This was then repeated for each successive, overlapping time segment over the 22 years of GONG data. This results in 70 frequency shifts being calculated between 2001-2023. For later sections (where frequency shifts are compared to GONG magnetogram synoptic maps which start from September 2006 onwards), the magnetic activity proxy is interpolated to 51 frequency shifts between 2006-2023. §.§ Latitudinal sensitivity of modes To identify the temporal pseudo-mode frequencies as a function of latitude, we utilised the ratio described in <cit.>. For each mode, we calculate θ as θ = arccos( m/√(l(l+1))) . A circle at angle, θ, from the equator passes through l number of nodes. Mode sensitivity is greatest at all latitudes below this, and so we use this angle to approximate the upper latitude at which each mode is sensitive. Modes where |m|=l, known as sectoral modes, are most sensitive to regions near the solar equator, and modes where m=0, known as zonal modes, are more sensitive to polar regions. We used this upper latitude to categorise the modes into our defined latitude bands. We then compute a weighted mean to produce the frequency shift and uncertainty for each latitude range. Pseudo-mode frequency shifts are determined between 0^∘≤θ < 75^∘, as this covers the full latitudinal range over which sunspots drift throughout the solar cycle. From here on, when referring to latitudinal ranges, they should be considered as symmetric about the equator. §.§ Magnetic activity proxies To compare variation in the temporal pseudo-mode frequency shifts with changes in solar magnetic activity, we employ two proxies. §.§.§ F_10.7 index The F_10.7 index is a global measure of solar magnetic activity <cit.>. It quantifies the solar radio flux at a wavelength of 10.7cm, where this emission is integrated over one hour to obtain each measurement (in solar flux units). The F_10.7 radio emission originates high in the chromosphere and low in the corona, and has been shown to be well correlated with sunspot number and p-mode frequency shifts <cit.>. The F_10.7 index has also been recorded since the mid-20th century, and so data is available for all years that we determined our pseudo-mode frequency shifts measurements for. Throughout this work, the F_10.7 index was averaged over 216-day segments (overlapping by 108 days) for ease of comparison to the pseudo-mode frequency shift values determined. 
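The two ingredients described above, the resampled-periodogram shift estimate and the latitudinal classification of the modes, are summarised in the following illustrative sketch (Python with NumPy/SciPy). It is not the exact production pipeline: variable names, the initial guesses of the Lorentzian fit, and the cross-correlation normalisation are choices made here, and the two input periodograms are assumed to have already been smoothed with the 10μHz boxcar and cropped to 5600-6800μHz.

import numpy as np
from scipy.optimize import curve_fit

def upper_latitude(l, m):
    # Upper latitude (degrees) of mode sensitivity, theta = arccos(|m| / sqrt(l(l+1))),
    # taking |m| so that +m and -m are treated alike.
    return np.degrees(np.arccos(abs(m) / np.sqrt(l * (l + 1))))

def lorentzian(x, amp, x0, gamma, offset):
    return amp * gamma**2 / ((x - x0)**2 + gamma**2) + offset

def resampled_shift(freqs, p_ref, p_seg, n_iter=100, max_lag=30.0, seed=None):
    # Frequency shift (and uncertainty) between two smoothed, cropped
    # periodograms, following the resampled-periodogram approach.
    rng = np.random.default_rng(seed)
    df = freqs[1] - freqs[0]
    nlag = int(max_lag / df)
    lags = np.arange(-nlag, nlag + 1) * df
    centres = []
    for _ in range(n_iter):
        # chi^2 (2 d.o.f.) realisations that preserve the underlying spectra
        real_ref, real_seg = [
            (np.sqrt(p) * rng.standard_normal(p.size))**2 +
            (np.sqrt(p) * rng.standard_normal(p.size))**2
            for p in (p_ref, p_seg)
        ]
        a = real_ref - real_ref.mean()
        b = real_seg - real_seg.mean()
        ccf = np.correlate(b, a, mode="full")   # check lag sign convention
        mid = ccf.size // 2
        ccf = ccf[mid - nlag: mid + nlag + 1]
        p0 = [ccf.max() - ccf.min(), lags[np.argmax(ccf)], 5.0, ccf.min()]
        popt, _ = curve_fit(lorentzian, lags, ccf, p0=p0, maxfev=10000)
        centres.append(popt[1])                 # Lorentzian centre = shift draw
    return np.mean(centres), np.std(centres)

Weighted averages of these per-mode shifts over all (l, m) whose upper latitude falls within a given band then yield the banded shifts analysed in the following sections.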
§.§.§ Magnetic flux density proxy To compare variations in the frequencies of the pseudo-modes with the latitudinal progression of the solar cycle, we utilise GONG magnetogram synoptic maps <cit.>. Minute by minute observations taken from the six global GONG sites are condensed to create full surface photospheric magnetic flux density maps for each Carrington rotation. Utilising these synoptic maps, the absolute magnetic flux density was determined over selected latitudinal bands. The categorisation of the modes into each latitude band was determined utilising their upper latitude, which therefore means each mode is sensitive to all latitudes below this (i.e. the mode sensitivity is not confined only within the limits of the latitude band). Therefore, for the magnetic activity proxy, we also consider the full range of values below an upper limit i.e. for frequency shifts calculated for latitude band 30^∘ ≤ θ < 45^∘, we compare against all magnetic flux density data between -45^∘ and 45^∘. For each latitudinal band, we calculated the average, absolute magnitude of the magnetic flux density for each synoptic map (one per Carrington rotation). The values are interpolated to the same time frame as the pseudo-mode frequency shifts, and then linearly scaled and shifted for comparison in all figures. The synoptic maps are available from September 2006 onwards. § RESULTS §.§ Frequency shifts as a function of time In our work, we aimed to identify frequency shifts of the solar pseudo-modes as a function of time and latitude across the solar disc using GONG data. To start with, we considered the scenario where a weighted average was taken for all azimuthal orders across all modes (0 ≤ l ≤ 200), and the temporal pseudo-mode frequency shifts over the solar disc as a whole were determined. The results of this are shown by the blue data points in Figure <ref>, where the grey line shows the F_10.7 index. The error bars are too small to be seen, as the shifts are the weighted average of 40 401 modes. An anti-phase relationship is clearly observed between the pseudo-mode frequency shifts and the F_10.7 index throughout the 22 years of GONG data. The Spearman's rank correlation coefficient (ρ) identified a very strong negative relationship of -0.93 (p < 10^-31). The frequency shifts were found to vary throughout the timeseries by 1.2μHz, an amplitude similar to that observed previously <cit.>. In our analysis, we also observe a double peak structure at the frequency shift maxima (corresponding to cycle minima). The peak is most defined at the solar minimum after Cycle 23, but less so for the minimum after Cycle 24 (which was the weakest solar cycle in 100 years). The structure was also observed by <cit.>, who note the double peak is reminiscent of the double maximum observed in other activity proxies at maxima and has been associated with quasi-biennial oscillations. Overall, our method is able to replicate well the behaviour of the pseudo-mode frequency shifts throughout multiple solar cycles. We also investigated whether there is any relationship between the GONG duty cycle and the pseudo-mode frequency shifts determined. However, no significant correlation was found (ρ=0.03, p=0.8), implying that the magnitude of the pseudo-mode frequency shifts are not impacted by the duty cycle. Pseudo-mode frequency shift variations (μHz) across Solar Cycles 23, 24, and the full timeseries. 
The minimum values of pseudo-mode frequency shifts and their uncertainties for each cycle correspond to the maximum of the solar magnetic cycles. The maximum frequency shift for Solar Cycle 23 corresponds to the solar minimum between Cycle 23 and 24, and the maximum shift for Cycle 24 corresponds to the solar minimum between Cycle 24 and 25. *As Solar Cycle 23 (≈1996-2008) was not fully covered by our analysis, the minimum frequency shift values may be an underestimate.

Latitude band     | Solar Cycle 23                         | Solar Cycle 24
                  | Minimum*     Maximum      Variation    | Minimum      Maximum      Variation
0^∘ ≤ θ < 15^∘    | -0.19 ±0.02  1.04 ±0.03   1.23 ±0.04   | -0.08 ±0.03  0.68 ±0.03   0.76 ±0.04
15^∘ ≤ θ < 30^∘   | -0.18 ±0.01  1.07 ±0.03   1.25 ±0.03   | -0.02 ±0.02  0.74 ±0.03   0.76 ±0.04
30^∘ ≤ θ < 45^∘   | -0.16 ±0.01  0.98 ±0.01   1.14 ±0.01   |  0.06 ±0.01  0.62 ±0.01   0.56 ±0.01
45^∘ ≤ θ < 60^∘   | -0.16 ±0.01  0.89 ±0.01   1.05 ±0.01   |  0.06 ±0.01  0.62 ±0.01   0.56 ±0.01
60^∘ ≤ θ < 75^∘   | -0.21 ±0.01  0.95 ±0.01   1.16 ±0.01   |  0.07 ±0.01  0.74 ±0.01   0.67 ±0.01

§.§ Frequency shifts as a function of latitude
The variation in the solar pseudo-mode frequency shifts over time as a function of latitude is shown in Figure <ref>, and the number of modes used to calculate the frequency shifts for each latitude band is displayed in Table <ref>. The results for all five latitude bands (of size 15^∘ covering 0^∘ ≤ θ < 75^∘) are shown in the top panel of Figure <ref>. As expected, for all latitude bands, the pseudo-mode frequency shifts are clearly in anti-phase with the solar magnetic cycle. Frequency shift values are shown to increase whilst magnetic activity declines up until around 2009; fall whilst magnetic activity increases until solar maximum around 2014; and then increase again as solar minimum is approached around 2020. In all five latitude bands, a double peak structure is observable as frequency shifts reach their maximum values. It is particularly noticeable around the 2009 solar minimum. This double peak structure has previously been observed at solar minimum in pseudo-mode frequency shifts <cit.>, and at solar maximum in temporal p-mode frequency shifts (; ). The morphology of the shifts also appears to fluctuate regularly throughout the time series with a shorter periodicity than the 11-year cycle. We elaborate further on this in Section <ref>. We also observe differences in the morphology of the shifts between the latitude bands. This is best shown in the middle panel of Figure <ref>, where the lowest latitude band 0^∘ ≤ θ < 15^∘ (black points) and the highest 60^∘ ≤ θ < 75^∘ (red points) are shown. From visual inspection, shifts at lower latitudes appear to have greater amplitudes of fluctuation throughout the timeseries, compared to the higher latitudes. In addition, the amplitude of the shifts for lower latitudes is greater over the full 22-year timeseries, with lower latitude shifts reaching a larger maximum value and a sharper minimum (1.23 ±0.04 μHz for lower, 1.16 ±0.01 μHz for higher) (Table <ref>). The behaviour of the mid-latitude bands is displayed in the last panel of Figure <ref>, where the pseudo-mode frequency shifts for 15^∘ ≤ θ < 30^∘ are shown by the purple data points, 30^∘ ≤ θ < 45^∘ by the blue points, and 45^∘ ≤ θ < 60^∘ by the green. Overall, the morphology of the shifts is quite similar across these latitudes. The main difference is the amplitude of the frequency shift, where shifts at 15^∘ ≤ θ < 30^∘ reach higher values at solar minimum and lower values at solar maximum with an overall amplitude of 1.25 ±0.03 μHz (Table <ref>).
The total shift variation is smaller at 30^∘ ≤ θ < 45^∘ (1.14 ±0.01 μHz), and smaller still over 45^∘ ≤ θ < 60^∘ (1.05 ±0.01 μHz). §.§ Comparison to magnetic flux density In order to compare our latitudinal pseudo-mode frequency shifts with trends in solar magnetic activity over the solar disc, we utilised GONG magnetogram synoptic maps. These maps allowed us to determine absolute, average values for magnetic flux density over various latitude ranges. The results of this comparison are shown in Figure <ref>, where the grey line represents this magnetic activity proxy. Only magnetic flux density data below ±60^∘ is used as data above this latitude is noisier due to line-of-sight projection effects. The latitudinal drift of sunspots with the progression of the 11-year solar cycle is reflected in the magnetic activity proxy. Initially, sunspots appear at mid-latitudes (around ±30^∘). They then drift closer to the equator, and each cycle concludes with most sunspots around ±10^∘ (which may overlap with the start of the next cycle). This latitudinal drift is clearly visible in Figure <ref> where for latitudes above ±15^∘, the magnetic activity proxy rises sharply in value towards solar maximum. For latitudes below ±15^∘, only a single peak in magnetic activity is observed which appears nearer the end of the solar maximum, reflecting the progression of sunspots to lower latitudes. Visually, the morphology of the pseudo-mode frequency shifts at all latitudes are clearly in anti-phase with the behaviour of the magnetic activity proxy. The main difference in frequency shift morphology is between latitudes above and below ±15^∘. Below ±15^∘ (Figure <ref>a), the pseudo-mode frequency shifts display a more steady, continuous decline in value to a sharper minimum on the approach to Cycle 24 maximum (from 2009 to 2014). Around 2014, the shift value reflects the single peak behaviour of the magnetic activity proxy, reaching a minimum value at solar maximum. However, above ±15^∘ (Figure <ref>b, c, and d), pseudo-mode frequency shifts fall in value more suddenly between 2009-2011, and then largely plateau throughout solar maximum until around 2015. This plateau in frequency shift value reflects the continued high values of the magnetic activity proxy between 2011-2015. The pseudo-mode frequency shift values are also shown to differ between the minimum between Solar Cycles 23 and 24, and the minimum between Cycles 24 and 25, where the value of the frequency shift is greater for the former. Typically with magnetic activity proxies, the value at cycle minimum is similar between consecutive minima (i.e. for sunspot number, there cannot be less than 0 sunspots). However, as this is not the case with pseudo-mode frequency shifts (which are sensitive to both the strong and weak components of the magnetic field), the difference of the pseudo-mode frequency shifts between solar minima may provide insight into the nature of the weak magnetic field between solar cycles. The strength of the correlation between the pseudo-mode frequency shifts and the magnetic activity proxy are shown in the correlation matrix in Figure <ref> (see the left hand panel). As expected, at all latitudes, frequency shifts display a significant, strong negative correlation coefficient with magnetic activity proxy. 
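A compact sketch of how the banded activity proxy and such a correlation matrix can be assembled is given below. It is illustrative only: the synoptic maps are assumed to be 2-D arrays whose rows are uniform in sine latitude (so a plain mean over the selected rows is already area weighted), and the function names are ours.

import numpy as np
from scipy.stats import spearmanr

def mean_abs_flux(synoptic_map, lat_upper_deg):
    # Average |B| over all latitudes below lat_upper_deg for one Carrington map.
    nrow = synoptic_map.shape[0]
    sinlat = np.linspace(-1.0, 1.0, nrow)               # approximate row centres
    rows = np.abs(np.degrees(np.arcsin(sinlat))) < lat_upper_deg
    return np.nanmean(np.abs(synoptic_map[rows, :]))

def correlation_matrix(shift_bands, proxy_bands):
    # Spearman rank correlations between banded shifts and the banded proxy,
    # both interpolated onto the same 108-day sampling.
    rho = np.zeros((len(shift_bands), len(proxy_bands)))
    for i, s in enumerate(shift_bands):
        for j, p in enumerate(proxy_bands):
            r, _ = spearmanr(s, p, nan_policy="omit")
            rho[i, j] = r
    return rho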
To ascertain how frequency shifts for each latitude band progress during the magnetic cycle compared to shifts from other latitudes, we correlate both the pseudo-mode frequency shifts with frequency shifts observed in other latitude bands, and the magnetic activity proxy with the magnetic activity observed at other latitudes (see the right hand panel of Figure <ref>). While we observe changes to frequency shift morphology visually, this correlation better quantifies any agreement between each 15^∘ band. When correlating the pseudo-mode frequency shifts with the shifts observed at other latitudes, a lower correlation for frequency shifts in the 0^∘≤θ < 15^∘ band compared to all other latitude bands was found (ρ=0.91-0.94). In contrast, a higher correlation between the 15^∘≤θ < 30^∘, 30^∘≤θ < 45^∘, and 45^∘≤θ < 60^∘ latitude bands (ρ=0.98-0.99) showed that frequency shifts progressed more similarly with time at all latitudes above ±15^∘. A similar behaviour was found for the magnetic activity proxy, with a weaker correlation below ±15^∘ (ρ=0.70-0.86), and a stronger one between all latitude bands above ±15^∘ (ρ=0.95-0.99). To summarise, both the behaviour of pseudo-mode frequency shifts below ±15^∘, and that of the magnetic activity proxy below ±15^∘, appear to differ compared to all latitude bands above ±15^∘, while above ±15^∘ all latitude bands show a high degree of agreement. Furthermore, pseudo-mode frequency shifts of higher latitude bands are more strongly correlated with the magnetic activity proxy at the equivalent band (i.e. for 45^∘≤θ < 60^∘, ρ=-0.87), compared to lower latitudes (for 0^∘≤θ < 15^∘, ρ=-0.72). The correlation between frequency shifts from the lowest latitude band and the highest latitude magnetic activity proxy (ρ=-0.78) is also stronger than the correlation between the highest latitude shifts and lowest latitude magnetic activity proxy (ρ=-0.58). We suggest this is the result of modes still being somewhat sensitive to the magnetic field present at higher latitudes. This occurs because the value of θ defined by Equation <ref> is not strictly the highest latitude that oscillations are sensitive to. Instead, a mode's sensitivity to latitudes greater than θ is greatly reduced compared to latitudes below θ, but is, importantly, not zero. In fact, when the magnetic activity proxy is correlated with itself, the correlation is weaker at 0^∘≤θ < 15^∘, reaching ρ=0.70. This is to be expected (we know the magnetic activity cycle progresses differently across latitude). However, when the frequency shifts are correlated with themselves, whilst shifts replicate the behaviour of the magnetic activity proxy in that the correlation is weaker below ±15^∘, the frequency shifts show a higher degree of agreement between each latitude band than the magnetic activity proxy does. Again, this highlights how mode sensitivity is not confined only within the latitude band.
§.§ Frequency shift trends for the lower latitudes
As sunspots exist predominantly across low to mid-latitudes on the solar disc, we extended our analysis to focus on changes to pseudo-mode frequency shifts as a function of latitude for smaller latitude bands of 5^∘ covering 0^∘ ≤ θ < 30^∘. The results of this analysis are shown in Figure <ref>, with the magnetic activity proxy (grey line) for comparison. The error bars are largest for frequency shifts where fewer modes are included in the weighted average (Table <ref>). Again, as expected, all pseudo-mode frequency shifts are anti-correlated with the solar magnetic cycle.
As with the results shown in Section <ref>, the morphology of the frequency shifts show a shorter-term (shorter than the 11-year cycle) periodicity. Furthermore, from visual inspection of the frequency shifts, the shape of the temporal variations appears to differ over latitude. A sudden drop in frequency shifts is more apparent (around 2009-2011) in the 20^∘≤θ < 25^∘ and 25^∘≤θ < 30^∘ bands. There is then a plateau in shift values throughout solar maximum. In contrast, the frequency shifts in the lowest latitude bands fall more steadily. Particularly for 0^∘≤θ < 5^∘, the shifts appear to be continually declining at an almost constant rate from 2009 to 2014. The maximum and minimum value of the frequency shifts also differs. At Solar Cycle 24 maximum, the shifts in 0^∘≤θ < 5^∘ reach a much deeper minimum value of -0.23 ±0.08 μHz, where the minimum for 25^∘≤θ < 30^∘ is 0.07 ±0.03 μHz. In contrast, at the solar minimum between Cycle 23 and 24, the maximum frequency shift reaches 1.09 ±0.03 μHz for the 25^∘≤θ < 30^∘ latitude band, but only 0.78 ±0.08 μHz for the 0^∘≤θ < 5^∘ band. Again, a Spearman's correlation was performed to quantify the strength of the anti-correlation between the pseudo-mode frequency shifts and the magnetic activity proxy for the 5^∘ latitude bands. The correlation matrix is shown in Figure <ref>. As expected, correlations between pseudo-mode frequency shifts and the magnetic activity proxy are all significantly and strongly negatively correlated. We also note that the correlation between pseudo-mode frequency shifts at 0^∘≤θ < 5^∘ and the magnetic activity proxy at 25^∘≤θ < 30^∘ are more strongly correlated (ρ=-0.73), compared to the correlation between frequency shifts at 25^∘≤θ < 30^∘ and the magnetic activity proxy at 0^∘≤θ < 5^∘ (ρ=-0.42). Again, this may be a reflection of the fact that modes categorised into lower latitude bands still have some sensitivity to higher latitudes. Furthermore, when correlating the pseudo-mode frequency shifts with itself, and the magnetic activity proxy with itself (see the right hand panel of Figure <ref>), we again observe the weakest correlation at lower latitudes for both, highlighting how the temporal progression of the frequency shifts (and magnetic activity) between lower latitudes bands are less similar to those between higher latitude bands. The morphology of pseudo-mode frequency shifts appear to reflect the trends in the magnetic activity proxy, caused by the progression of magnetic activity to lower latitudes. §.§ Double peak structure and shorter-term quasi-biennial periodicity For all pseudo-mode frequency shifts produced in this work, there appears to be some shorter-term periodicity within the shifts (shorter than the 11-year solar cycle). To further investigate this we utilised wavelet analysis. The analysis is shown in Figure <ref>. The periodicity of the 11-year solar cycle was first removed from the pseudo-mode frequency shifts by smoothing with a Savitzky-Golay filter. A filter with a window length of 21 and a degree 3 polynomial was used as this captured the periodicity of the 11-year solar cycle well, without including any shorter-term periodicity. This 11-year cycle was then removed from the original data. The resultant detrended pseudo-mode frequency shifts are shown in the top panel of Figure <ref>. 
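This detrending step can be reproduced with SciPy's Savitzky-Golay filter; the sketch below uses the filter settings quoted above, while the function name and return convention are ours:

import numpy as np
from scipy.signal import savgol_filter

def detrend_shifts(shifts, window_length=21, polyorder=3):
    # Estimate the 11-year envelope and subtract it, leaving the shorter-term
    # variability that is analysed with wavelets in the next step.
    shifts = np.asarray(shifts)
    trend = savgol_filter(shifts, window_length, polyorder)
    return shifts - trend, trend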
The wavelet analysis was performed using a Morlet wavelet for five latitude bands (of size 15^∘ covering 0^∘≤θ < 75^∘), and the results are shown in the continuous wavelet transform (CWT) heatmap plot for each latitude band. The white contours on the heatmaps are the 98% confidence contours, and are used to calculate the error on the periodicity at maximum power (by using the minimum and maximum periods within the 98% confidence contour at the periodicity at maximum power). A global wavelet transform (GWT) is also shown for each latitude band, which displays the sum over time of the power of the periodicities identified in the CWT spectrum. The red line on this plot signifies the 98% significance level. For each latitude, we identified a significant periodicity. At the lowest latitude (0^∘≤θ < 15^∘) the period at maximum power is located at 1417^+134_-138 days. For all other latitudes above ±15^∘, the period at maximum power is the same, at 1450^+142_-168 days for 15^∘≤θ < 30^∘, 1450^+139_-150 days for 30^∘≤θ < 45^∘, 1450^+167_-196 days for 45^∘≤θ < 60^∘, and 1450^+183_-228 days for 60^∘≤θ < 75^∘. This is just under four years for each latitude, and may be a manifestation of the QBO. In addition, all heatmaps display a secondary white contoured area between 2002-2010. This region corresponds to a periodicity in the range of 288-850 days (0.8-2.3 years) depending on the latitude band, and, for each band, it is present throughout the decline of Cycle 23 to solar minimum (corresponding to the increase in pseudo-mode frequency shift value). However, the power is not significant above the 98% significance level on the GWT (shown by the red line). This does not reappear for the decline of Solar Cycle 24 (but may be due to the reduced strength of the latter cycle). In addition, the location of maximum power of this four-year periodicity varies depending on latitude. At mid-latitudes (30^∘ ≤ θ < 45^∘) maximum power exists during August 2010 (which equates to the rising limb of magnetic activity towards Cycle 24 maximum). However, the power of this four-year periodicity for the lower latitudes (0^∘ ≤ θ < 15^∘) does not reach its maximum until June 2014 (where solar maximum is well established). Like the pseudo-mode frequency shifts at lower latitudes, other solar activity proxies have too been shown to reach maximum QBO power at solar maximum. § CONCLUSION In this work, we aimed to identify trends in temporal pseudo-mode frequency shifts as a function of latitude across the solar disc. While previous work on the latitudinal dependence of p-mode frequency shifts has highlighted the potential to better constrain solar dynamo models and magnetic field structures <cit.>, no analysis, to the best of our knowledge, of the latitudinal behaviour of pseudo-mode frequency shifts currently exists. Here, we utilised GONG data and the resampled periodogram method to calculate temporal pseudo-mode (5600-6800μHz) frequency shifts for all azimuthal orders, m, of harmonic degrees 0 ≤ l ≤ 200. We then categorised these pseudo-mode frequency shifts into latitude bands, classifying each mode using a ratio to define an upper latitude where mode sensitivity is greatest at latitudes below. Our method was validated by reproducing the strong anti-correlation (ρ = -0.93, p < 10^-31) which had previously been found <cit.> between the weighted average of pseudo-mode frequency shifts (for all m over 0 ≤ l ≤ 200) and the F_10.7 index as a function of time. 
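For reference, an equivalent Morlet transform of the detrended shifts can be obtained with the PyWavelets package, as sketched below. This is an illustrative substitute for the wavelet code actually used here (available at the GitHub link in the acknowledgements), it does not include the red-noise significance test behind the 98% contours, and the scale range is an arbitrary choice.

import numpy as np
import pywt

def morlet_power(detrended, dt_days=108.0):
    # Continuous Morlet wavelet transform; dt_days is the cadence of the
    # overlapping 216-day segments.
    scales = np.arange(2, 64)
    coef, freqs = pywt.cwt(detrended, scales, "morl", sampling_period=dt_days)
    power = np.abs(coef)**2
    periods_days = 1.0 / freqs
    gwt = power.sum(axis=1)          # global wavelet spectrum (sum over time)
    return periods_days, power, gwt

The period of maximum power in the global wavelet spectrum can then be compared with the roughly four-year signal reported above.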
We analysed the morphology of the pseudo-mode frequency shifts for latitude bands of 15^∘ covering 0^∘≤θ<75^∘. A strong anti-correlation with the magnetic flux density determined from GONG synoptic maps was identified at all latitudes (0^∘≤θ<60^∘). Fluctuations in the shifts were also observed at all latitudes, with periodicities shorter than the 11-year magnetic cycle. The morphology of the shifts differed between the lowest and highest latitude bands, a difference best observed on the rise to Solar Cycle 24 maximum (between 2009 and 2014). The lowest latitude band (0^∘≤θ<15^∘) had a more gradual, consistent decline in the shift values, whereas the higher latitudes (above ±15^∘) had a faster drop to minimum, and then a plateau in shift values. This behaviour was reflected in the magnetic activity proxy, with a single peak at solar maximum for latitudes below ±15^∘ but a double peak for latitudes above ±15^∘. Due to the differences between shift morphology above and below ±15^∘, we focused our analysis on lower latitudes, using latitude bands of 5^∘ covering 0^∘ ≤ θ < 30^∘. The pseudo-mode frequency shifts for 0^∘≤θ < 5^∘ reached a sharp minimum value at the maximum of the single peak in the magnetic activity proxy, whereas shifts at 25^∘≤θ < 30^∘ had a faster decline in value and then remained low throughout the double peak in the magnetic activity proxy. The nature of the temporal pseudo-mode frequency shifts determined here, whereby shifts below ±15^∘ differ greatly from those above ±15^∘ (while shifts across each latitude band above ±15^∘ are in good agreement), is replicated in the magnetic activity proxy. We therefore expect the morphological differences in pseudo-mode frequency shifts above and below ±15^∘ to be a result of the delay in magnetic regions reaching the lower latitudes in each cycle. <cit.> modelled the anti-phase variation between pseudo-mode frequencies and solar activity by varying the height of an acoustic potential, which, in turn, impacts the reflection experienced by acoustic oscillations in the solar atmosphere. Assuming such a variation in acoustic potential height is caused by the presence of a magnetic field in the photosphere, it would make sense that the pseudo-mode frequency shifts display the same latitudinal sensitivity as the magnetic activity itself. Furthermore, a periodicity shorter than the 11-year solar cycle was identified in the pseudo-mode frequency shifts. To characterise this, we utilised a wavelet analysis for latitude bands of size 15^∘ covering 0^∘ ≤ θ < 75^∘. For each latitude, a significant (at a 98% confidence level) periodicity of just under four years was identified. Periodicities shorter than the 11-year solar cycle are well-documented within other solar activity proxies (including the acoustic p modes), and these have been shown to have a range of periods between 0.6 and 4 years <cit.>. The behaviour of pseudo-mode frequency shifts has previously been shown to reflect trends (by moving in anti-phase) in solar magnetic activity. We aimed to extend this to search for a latitudinal dependence of pseudo-mode frequency shifts, to ascertain whether there is any difference in pseudo-mode shifts and amplitudes across the solar disc through multiple magnetic activity cycles. In doing so, we also identified shorter, four-year-long periodicities. 
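The four-year periodicity quoted above can be recovered with a standard continuous wavelet transform; a minimal sketch using PyWavelets (with a synthetic detrended series, an assumed 36-day cadence, and without the red-noise significance test against which the 98% contours are drawn) is:

```python
import numpy as np
import pywt

# Detrended shift series (placeholder); dt is the sampling interval in days.
rng = np.random.default_rng(3)
dt = 36.0
t = np.arange(0, 25 * 365, dt)
signal = 0.1 * np.sin(2 * np.pi * t / 1450.0) + 0.03 * rng.standard_normal(t.size)

# Continuous wavelet transform with a Morlet mother wavelet.
scales = np.arange(2, 120)
coeffs, freqs = pywt.cwt(signal, scales, "morl", sampling_period=dt)
power = np.abs(coeffs) ** 2          # CWT power, shape (n_scales, n_times)
periods = 1.0 / freqs                # in days

# Global wavelet transform: total power of each period over the whole series.
gwt = power.sum(axis=1)
print(f"Period of maximum global power: {periods[np.argmax(gwt)]:.0f} days")
```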
Our work emphasises the use and contribution of high frequencies beyond the acoustic cut-off to our understanding of the latitudinal progression of the 11-year solar cycle, and the variable nature of shorter activity cycles. We gratefully acknowledge support from the UK Science and Technology Facilities Council (STFC) grant ST/W507908/1. A-M. Broomhall acknowledges support from STFC grant ST/X000915/1, and A-M. Broomhall and T. Mehta acknowledge support from STFC grant ST/T000252/1. This study also acknowledges use of Python packages <cit.>, <cit.>, <cit.>, and the Python library. This work also made use of Astropy, a community-developed core Python package and an ecosystem of tools and resources for astronomy <cit.>. Wavelet analysis software was provided by T. Mehta and is available at <https://github.com/TishtryaMehta/QBO_evolution>. This work utilises data from the National Solar Observatory Integrated Synoptic Program, which is operated by the Association of Universities for Research in Astronomy, under a cooperative agreement with the National Science Foundation and with additional financial support from the National Oceanic and Atmospheric Administration, the National Aeronautics and Space Administration, and the United States Air Force. The GONG network of instruments is hosted by the Big Bear Solar Observatory, High Altitude Observatory, Learmonth Solar Observatory, Udaipur Solar Observatory, Instituto de Astrofísica de Canarias, and Cerro Tololo Interamerican Observatory. L. J. Millson conducted the main data analysis and wrote the manuscript and code used. A-M. Broomhall conceptualised the research aims and reviewed the manuscript. T. Mehta provided the wavelet analysis code used in Section <ref>. This research was supported by the UK Science and Technology Facilities Council (STFC). All data used in this study is publicly available. F_10.7 index data is available online at <ftp://ftp.seismo.nrcan.gc.ca/spaceweather/solar flux/>. Network merged timeseries (mrvmt) and synoptic maps from the Global Oscillations Network Group (GONG) are available at <https://gong.nso.edu>.
http://arxiv.org/abs/2409.02807v1
20240904152506
Primordial regular black holes as all the dark matter (II): non-tr-symmetric and loop quantum gravity-inspired metrics
[ "Marco Calzà", "Davide Pedrotti", "Sunny Vagnozzi" ]
gr-qc
[ "gr-qc", "astro-ph.CO", "hep-ph", "hep-th" ]
marco.calza@unitn.it Department of Physics, University of Trento, Via Sommarive 14, 38123 Povo (TN), Italy Trento Institute for Fundamental Physics and Applications (TIFPA)-INFN, Via Sommarive 14, 38123 Povo (TN), Italy M.C. and D.P. contributed equally to this work davide.pedrotti-1@unitn.it Department of Physics, University of Trento, Via Sommarive 14, 38123 Povo (TN), Italy Trento Institute for Fundamental Physics and Applications (TIFPA)-INFN, Via Sommarive 14, 38123 Povo (TN), Italy sunny.vagnozzi@unitn.it Department of Physics, University of Trento, Via Sommarive 14, 38123 Povo (TN), Italy Trento Institute for Fundamental Physics and Applications (TIFPA)-INFN, Via Sommarive 14, 38123 Povo (TN), Italy § ABSTRACT It is a common belief that a theory of quantum gravity should ultimately cure curvature singularities which are inevitable within General Relativity, and plague for instance the Schwarzschild and Kerr metrics, usually considered as prototypes for primordial black holes (PBHs) as dark matter (DM) candidates. We continue our study, initiated in a companion paper, of non-singular objects as PBHs, considering three regular non-tr-symmetric metrics, all of which are one-parameter extensions of the Schwarzschild space-time: the Simpson-Visser, Peltola-Kunstatter, and D'Ambrosio-Rovelli space-times, with the latter two motivated by loop quantum gravity. We study evaporation constraints on PBHs described by these regular metrics, deriving upper limits on f_pbh, the fraction of DM in the form of PBHs. Compared to their Schwarzschild counterparts, these limits are weaker, and result in a larger asteroid mass window where all the DM can be in the form of PBHs. Our work demonstrates as a proof-of-principle that quantum gravity-inspired space-times can simultaneously play an important role in the resolution of singularities and in the DM problem. Primordial regular black holes as all the dark matter (II): non-tr-symmetric and loop quantum gravity-inspired metrics Sunny Vagnozzi September 9, 2024 ======================================================================================================================= § INTRODUCTION Once regarded as objects of pure mathematical interest, over the past decade black holes (BHs) have gone on to become some of the most fascinating objects in the Universe <cit.>. At the time of writing, observational effects associated to astrophysical BHs are detected on a regular basis <cit.>, allowing us to use these extreme regions of space-time as unique laboratories for testing fundamental physics in the strong-field regime <cit.>. On the more theoretical end of the spectrum, a widespread hope is that BHs may hold the key towards the unification of quantum mechanics and gravity, although a somewhat more humble goal could be that of using BH observations to test candidate theories of quantum gravity (QG). On the more phenomenological side, the possible role of BHs in accounting for the dark matter (DM) which makes up ≃ 25% of the energy budget of the Universe <cit.> is now widely acknowledged. In our work, these two aspects – DM and candidate theories of QG – will naturally meet, with BHs being the common denominator, and astrophysical observations the playing ground. The collapse of large density perturbations upon horizon re-entry in the early Universe can lead to the formation of hypothetical relics known as primordial BHs (PBHs), whose role as potential DM candidates has long been recognized <cit.> (for recent reviews on the topic, see Refs. <cit.>). 
A wide range of observations (mainly of astrophysical nature) severely limit the ability of PBHs to account for the entire DM component: in practice, this is potentially possible (although this possibility is not completely free of debates) only in the so-called “asteroid mass window”, i.e. 10^14 kg≲ M_pbh≲ 10^20 kg, with lighter and heavier PBHs being tightly constrained by observational signatures of their evaporation and microlensing respectively <cit.>. However, it is important to note that virtually all constraints on PBHs, including those determining the existence and extension of the asteroid mass window, are subject to the underlying assumption about these object being either Schwarzschild or Kerr BHs <cit.>. This assumption is perfectly reasonable from the phenomenological and observational point of view, but at the same time may be cause of some apprehensiveness on the more theoretical side. In fact, these metrics feature pathological curvature singularities, whose existence is virtually inevitable in General Relativity (GR), and is at the essence of the well-known singularity problem <cit.>. Given that significant efforts are being devoted to the study of so-called regular space-times, free of curvature singularities, a relevant question is therefore what happens if PBHs are regular. This is a question we started to systematically address in a companion paper focused on tr (time-radius)-symmetric metrics, i.e. where the product of the coefficients of the dt^2 and dr^2 terms in the line element in four-dimensional Boyer–Lindquist coordinates is equal to -1 <cit.>: these metrics include, for instance, the well-known Bardeen <cit.> and Hayward regular BHs <cit.>. As we show in our companion paper, the phenomenology of the resulting primordial regular BHs (PRBHs) can be very rich, and can result in the asteroid mass window opening by up to an extra decade in mass <cit.>. The choice of studying tr-symmetric metrics was adopted to make the equations simpler to handle, but is certainly not exhaustive. Indeed, as we shall soon see, such a choice does not cover a number of well-known and well-motivated metrics, potentially including space-times rooted into candidate theories of QG. In this sense, it is worth recalling that the metrics considered in our companion paper <cit.> are purely phenomenological in nature. It is therefore our goal in the present work to extend our earlier study of PRBHs to non-tr-symmetric metrics, some of which carry very strong theoretical motivation and can arise within candidate theories of QG. To be concrete, in what follows we will consider three regular, static spherically symmetric space-times, characterized by an additional regularizing parameter ℓ, and recovering the Schwarzschild space-time in the ℓ→ 0 limit. All three space-times enjoy quite different properties compared to the phenomenological ones considered in our companion paper <cit.>. The first metric we consider is the so-called Simpson-Visser metric: this is arguably one of the best known black-bounce space-times, and interpolates between the Schwarzschild metric, regular BHs, and traversable wormholes. The other two metrics are instead deeply rooted within Loop Quantum Gravity (LQG) <cit.>: arguably one of the leading QG approaches, LQG is a fully non-perturbative and manifestly background-independent approach towards a consistent theory of QG, wherein space-time is fundamentally discrete (see e.g. Refs. <cit.> for various follow-up studies and applications). 
More specifically, the two regular LQG-motivated metrics we analyze as candidates for DM in the form of PRBHs are the Peltola-Kunstatter <cit.> and D'Ambrosio-Rovelli space-times <cit.>. As a cautionary note, we remark that ours is to be intended as a pilot study in this direction, and that much more follow-up work is needed before primordial regular BHs as DM candidates are characterized to the same extent as their Schwarzschild counterparts. [As far as we are aware, only four earlier works considered primordial regular BHs as DM <cit.>, but mostly focused on aspects other than the ones studied here (with the exception of Refs. <cit.>, which however studied a different LQG-inspired metric, albeit reaching conclusions qualitatively similar to ours).] The rest of this paper is then organized as follows. We briefly introduce the regular space-times studied in our work in Sec. <ref>. Theoretical aspects of the Hawking evaporation process are presented in the next two sections, with Sec. <ref> devoted to the derivation of the greybody factors, Sec. <ref> to the calculation of the resulting photon spectra, and Sec. <ref> to the derivation of constraints on the fraction of DM which may be in the form of primordial regular BHs. The resulting limits are discussed in Sec. <ref>. Finally, in Sec. <ref> we draw concluding remarks. Technical issues regarding the asymptotic solutions of the radial Teukolsky equation which may be of interest to some readers are discussed in Appendix <ref>. Unless otherwise specified, we adopt units where G=c=1. We recall once again that a related study focusing on tr-symmetric, phenomenological metrics is presented in our companion paper <cit.>. If time allows our recommendation is that the interested reader consult our companion paper <cit.> prior to reading the present work. § REGULAR BLACK HOLES As is well known, GR predicts the almost unavoidable existence of essential space-time singularities, where curvature invariants diverge. Nevertheless, it is a commonly held belief that these unwanted features are merely a reflection of our ignorance of a more fundamental theory of QG, which would ultimately cure these singularities. Various regular BH (RBH) metrics, free of singularities in the entire space-time, have in fact been studied in recent years, both from a more phenomenological standpoint <cit.> as well as from a first-principles theoretical basis <cit.>. [We note that finiteness of curvature invariants does not necessarily imply geodesic completeness, and viceversa, and issue which plagues a number of well-known regular BHs, including the Bardeen and Hayward ones <cit.>. We further remark that the stability of several regular BH solutions is currently being debated <cit.>.] These RBHs are usually controlled by an extra regularizing parameter (which we denote by ℓ), and typically (but not necessarily) recover the Schwarzschild metric as ℓ→ 0. In what follows, similarly to our companion paper <cit.> we will entertain the possibility that DM may be in the form of primordial RBHs. The line element of the space-times we consider can all be written in the following general form: ds^2=-𝔣(r̃) dt^2 + 1/𝔣(r̃) [ 1-𝔤_ℓ(r̃) ] dr̃ ^2+r̃^2dΩ^2 , where dΩ^2=dθ^2+sin^2(θ)dϕ^2 is the metric on the 2-sphere and r̃ is manifestly the areal radius. On the other the function 𝔤_ℓ, which depends on the regularizing parameter ℓ, goes to 𝔥_ℓ(r̃) → 0 for both ℓ→ 0 and r̃→∞. 
Such a space-time possesses horizon-like structures located at radial coordinates r̃ such that: 𝔣(r̃) [ 1-𝔤_ℓ(r̃) ] =0 , i.e. at r̃=r̃_H,r̃_0 such that 𝔣(r̃_H)=0 and/or 𝔤_ℓ(r̃_0)=1. If r̃_0>r̃_H, the value r̃_0 determines the location of a wormhole (WH) throat, whereas no event on the manifold is associated to the location of r̃_H. Note that, in general r̃_0 depends on the regularizing parameter, i.e. r̃_0 = r̃_0(ℓ). On the other hand, when r̃_H>r̃_0, the value r̃_H characterizes the event horizon of a BH (in this case the WH throat is located within the BH event horizon and is therefore causally disconnected from the relevant BH exterior space-time). Typically, in regions where these space-times are regular, the above geometry describes a bounce into a future incarnation of the universe <cit.>. Due to this peculiar characteristic, geometries of this type are sometimes referred to as black-bounce space-times. For these types of metric, it generally proves advantageous to perform a change of variable for what concerns the radial coordinate, going from an extrinsic description to an intrinsic one through the coordinate transformation r̃=√(r^2+ℓ^2). The metric in Eq. (<ref>) can then be expressed in the following form: ds^2 = -f(r)dt^2+g(r)^-1dr^2+h(r)dΩ^2 , where h(r)=r^2+ℓ^2. When expressed in the above form, the Petrov-D nature of this class of metrics is manifest. In addition, we additionally require asymptotic flatness, in other words that 𝔣(r̃) → 1 for r̃→∞, from which it follows that: f(r) 1 , g(r) 1 , h(r) r^2 . Finally, we note that the metrics in question are non-tr-symmetric, since in general f(r) ≠ g(r) and h(r) ≠ r^2. The tr-symmetric case is treated in our companion paper <cit.>, whereas we have chosen to deal with the non-tr-symmetric case in a separate work both because it introduces non-trivial complications on the mathematical side, and at the same time allows us to treat metrics which are strongly motivated from first-principles theoretical considerations (unlike those considered in our companion paper, which are introduced on purely phenomenological grounds), such as LQG. A key quantity characterizing the RBHs we are considering is their temperature T, since this directly controls the strength of the radiation emitted from Hawking evaporation. Assuming that the temperature is the usual Gibbons-Hawking one, which in turn tacitly implies that we are assuming the standard Boltzmann-Gibbs distribution (see our companion paper for a slightly more detailed discussion on this point <cit.>), the temperature is given by the following: T=f'(r)/4π√(-g(r)f(r))|_r_H , where the prime indicates a derivative with respect to r, and r_H denotes the location of the event horizon. In Fig. <ref> we show the evolution of the temperatures, normalized to the temperature of Schwarzschild BHs T_Sch=1/8π M, of the three RBHs we will discuss shortly, as a function of the regularizing parameter ℓ normalized by the event horizon radius r_H. As we see, for all three metrics the temperature is a monotonically decreasing function of the regularizing parameter. One may therefore qualitatively expect that the intensity of the Hawking evaporation radiation should decrease relative to that of Schwarzschild BHs of the same mass: this expectation in fact turns out to be correct, as we will explicitly show later, with important consequences for f_pbh limits. 
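This qualitative behaviour can be checked numerically. The sketch below assumes the standard surface-gravity expression T = √(f'(r_H)g'(r_H))/4π for metrics of the form (<ref>), which reduces to f'(r_H)/4π when f = g, and uses the Simpson-Visser functions introduced in the next subsection; M = 1 and the grid of ℓ/r_H values are arbitrary illustrative choices.

```python
import numpy as np
from scipy.optimize import brentq

M = 1.0  # arbitrary mass unit

def f_SV(r, ell):
    # Simpson-Visser: f(r) = g(r) = 1 - 2M / sqrt(r^2 + ell^2)
    return 1.0 - 2.0 * M / np.sqrt(r**2 + ell**2)

def hawking_T(f, ell, eps=1e-6):
    # Event horizon from f(r_H) = 0, then T = f'(r_H)/(4*pi) for f = g
    # (standard surface-gravity expression, assumed here).
    rH = brentq(lambda r: f(r, ell), 1e-8, 10.0 * M)
    fprime = (f(rH + eps, ell) - f(rH - eps, ell)) / (2.0 * eps)
    return rH, fprime / (4.0 * np.pi)

T_sch = 1.0 / (8.0 * np.pi * M)
for x in [0.0, 0.3, 0.6, 0.9]:
    # invert ell/r_H = x using the SV horizon radius r_H = sqrt(4M^2 - ell^2)
    ell = 2.0 * M * x / np.sqrt(1.0 + x**2)
    rH, T = hawking_T(f_SV, ell)
    print(f"ell/r_H = {ell/rH if rH else 0:.2f}  ->  T/T_Sch = {T/T_sch:.3f}")
```

The printed ratios decrease monotonically with ℓ/r_H, in line with the trend described above.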
§.§ Simpson-Visser space-time The Simpson-Visser (SV) metric is a one-parameter extension of the Schwarzschild space-time and easily one of the best known black-bounce metrics. In the words of Simpson and Visser, in some sense this metric “represents the minimal violence to the standard Schwarzschild solution” needed to enforce regularity <cit.>. The line element analytically interpolates between black holes and traversable wormhole according to the value of the regularizing parameter. In the notation of Eq. (<ref>), the SV space-time is characterized by the following functions: [For all the space-times we will consider, the parameter M appearing in the function f(r) always corresponds to the mass of the space-time (either the Komar, ADM, Misner-Sharp-Hernandez, or Brown-York mass).] 𝔣(r̃)=1-2M/r̃ , 𝔤_ℓ (r̃) = ℓ^2/r̃^2 , whereas, in the notation of Eq. (<ref>), the line element of the SV space-time is given by the following: ds^2 = - ( 1-2M/√(r^2+ ℓ^2) ) dt^2 + dr^2 /1-2M/√(r^2+ ℓ^2) +(r^2+ℓ^2)dΩ^2 . The SV space-time encompasses a rich phenomenology, as it interpolates between the Schwarzschild BH (ℓ=0), a regular BH with a one-way space-like throat (0<ℓ/M<2), a one-way WH with an extremal null throat (ℓ/M=2), and a traversable WH with a two-way time-like throat (ℓ/M>2). This metric has been the subject of several follow-up studies (see e.g. Refs. <cit.>) and, while originally introduced on phenomenological grounds, can potentially originate as a solution of GR coupled to non-linear electrodynamics in the presence of a minimally coupled phantom scalar field <cit.>. §.§ Peltola-Kunstatter space-time The Peltola-Kunstatter (PK) space-time is a LQG-motivated metric obtained upon applying effective polymerization techniques to 4D Schwarzschild BHs. Although there are indications that LQG may be capable of resolving the singularities which plague GR, the inherent difficulty in solving the complete system has led to the development of semi-classical polymer quantization techniques, which provide an unitarily inequivalent alternative to Schrödinger quantization while maintaining the key aspect of space-time discreteness. The PK space-time is obtained polymerizing only area but not the conformal mode, and results in a space-time whose singularity is replaced by a complete and regular bounce, where the space-time reaches a minimum radius before expanding into a Kantowski-Sachs metric <cit.>. In the notation of Eq. (<ref>), the SV space-time is characterized by the following functions: 𝔣(r̃)=√(1-ℓ^2/r̃^2)- 2M/r̃ , 𝔤_ℓ (r̃) = ℓ^2/r̃^2 whereas, in the notation of Eq. (<ref>), the line element of the PK space-time is given by the following: ds^2 = - ( r-2M/√(r^2+ℓ^2) ) dt^2 + dr^2/r-2M/√(r^2+ℓ^2) + (r^2+ ℓ^2)dΩ^2 . In what follows, we shall take the PK space-time as an example of regular metric motivated by first-principles quantum gravity considerations, unlike the other phenomenological metrics considered earlier. §.§ D'Ambrosio-Rovelli space-time The D'Ambrosio-Rovelli (DR) space-time was originally developed with motivations other than singularity avoidance, and is in fact also somewhat motivated by LQG considerations. This space-time represents a natural extension of the Schwarzschild space-time which crosses the r=0 singularity smoothly into the interior of a white hole, and one can see it as the ħ→ 0 limit of an effective QG metric <cit.>. It has been argued that this black hole-to-white hole tunneling mechanism can shed light on possible solutions to the information paradox. 
Of interest to us is the fact that the DR metric is regular, as a result of the curvature of the effective metric being bound a the Planck scale. The ansatz for the effective metric written by D'Ambrosio and Rovelli is similar to that of the SV metric, but differs in terms of the 𝔤 function – specifically, the two relevant functions are given by the following: 𝔣(r̃)=1-2M/r̃ , 𝔤_ℓ(r̃)=ℓ/r̃ , whereas, in the notation of Eq. (<ref>), the line element of the DR space-time is given by the following: ds^2 = - ( 1-2 m/√(r^2+ ℓ^2) ) dt^2 + dr^2 /1-2 m/√(r^2+ ℓ^2) ( 1+ℓ/√(r^2+ ℓ^2) ) +(r^2+ ℓ^2) dΩ^2 . Much like the PK space-time, we will take the DR space-time as another well-motivated example of QG-inspired metric. We note that the assumption of primordial DR BHs inevitably leads to the existence of long-lived primordial DR white holes from quantum transitions near the would-be singularity. This can potentially lead to an interesting phenomenology whose exploration, however, is well beyond the scope of this work. § METHODOLOGY §.§ Greybody factors We now discuss the computation of the greybody factors (GBFs), functions of energy and angular momentum which characterize the shape of the emitted Hawking radiation (and in particular its deviation from a blackbody) and therefore play a key role in determining the resulting evaporation constraints <cit.>. It is worth noting that, in the notation of Eq. (<ref>), both the SV and PK metrics share the fact that f(r)=g(r), which makes the calculations significantly easier. This is not the case, however, for the DR space-time. Therefore, in what follows, we consider the more general case where f(r) ≠ g(r), which also differs from the simpler tr-symmetric case considered in our companion paper <cit.>. We adopt the Newman-Penrose (NP) formalism <cit.>, denoting by Υ_s a general perturbation of spin s (defined by the appropriate NP scalars) and dropping the l and m indices to lighten the notation. Then, the Teukolsky equation for the evolution of massless perturbations of given spin upon a background metric characterized by functions f(r), g(r), h(r) as in Eq. (<ref>) reduces to the following master equation <cit.>: - h/g∂^2_t Υ_s + s √(f/g)( hg'/g - h' ) ∂_t Υ_s + f h ∂_r^2 Υ_s + ( hf'/2 + (s+1/2) fhg'/g+(s+1)fh' ) ∂_r Υ_s + ( 1/sinθ∂_θ (sinθ ∂_θ )+1/sin^2θ∂_ϕ^2 . . + 2is θ/sinθ∂_ϕ - s^2 ^2θ-s ) Υ_s ( s hfg”/g +3s-2s^2/4(2fh”+f'h') + s/2 (hf'g'/g -fhg'^2/g^2) . . +2s^2-s/4fh'^2/h + 2 s^2 +5s/4f g' h'/g ) Υ_s = 0 , which is separable with the following ansatz: Υ_s= ∑_l,m e^-i ω t e^i m ϕ S^l_s(θ) R_s(r) , with ω, l, and m being the perturbation frequency, angular node number, and azimuthal node number respectively, whereas S^l_s(θ) are related to the so-called spin-weighted spherical harmonics S^s_l,m(θ, ϕ) through S^s_l,m(θ, ϕ)=∑ S^l_s(θ) e^imϕ. We now define the functions A_s, B_s, and C_s as follows: A_s= √(f/g)1/(g h)^s , B_s=√(f g ) (g h)^sh , C_s = s f h g”/g + s/2 ( h f' g' /g - f h g'^2/g^2 ) + s(3-2s)/4( 2 f h” + f' h' ) +s(2s-1)/4f h'^2/h . With these definitions, the decoupled radial Teukolsky equation reduces to the following general form <cit.>: A_s(B_s R'_s)'+ [ h/gω^2+iω s√(f/g) ( h'-h g'/g ) + C_s ] R_s=0 . We further define the tortoise coordinate r_⋆ as follows: dr_⋆/dr=1/√(f(r)g(r)) , noting that, being our space-times asymptotically flat, r_⋆→ r for large r. In order to compute the GBFs for the metrics in question, we set purely ingoing boundary conditions. 
In addition, we need to know the asymptotic behaviour of R_s as at infinity and close to the horizon. These asymptotic behaviours are given as follows: R_s ∼ R^in_s e^-iω r_⋆/r+ R^out_s e^iω r_⋆/r^2s+1 (r→∞) R_s ∼ R^hor_s A_s e^-i ω r_⋆ (r → r_H) , as proven in more detail in Appendix <ref> (the case of the DR metric is not trivial). To compute the GBFs, we make use of the shooting method, widely used earlier in similar contexts (see e.g. Refs. <cit.>), including in our companion paper <cit.>. We begin by defining the rescaled coordinate x: x≡r-r_H/r_H , where r_H is the largest real root of the equation f(r)=0. In order to further simplify our notation, in what follows we work in units of horizon radius, setting r_H=1, so that r=x+1. The decoupled radial Teukolsky equation, Eq. (<ref>), then takes the following form: 𝒜_sR̈_s+ℬ_sṘ_s+𝒞_sR_s=0 , where the functions 𝒜, ℬ, and 𝒞 are defined as follows: 𝒜_s= f^2h , ℬ_s = ( ( s + 1/2) f^2 h ġ/g + h f /2 +(1+s) f^2 ḣ ) , 𝒞_s = f/4( s(2s-1)f ḣ^2 /h + 2s(3-2s)f ḧ- 2s f h ġ^2 /g^2. . +1/g( s(5+2s)f ġḣ + 2h ( 2 ω^2 + 2 s f g̈. . . . . . - s ġ( 2 i ω√(f/g+ḟ)) ) ) + s h ( (3-2s) ḟ + 4 i ω√(f/g) ) ) . while the dot denotes a derivative with respect to the rescaled coordinate x. For completeness, we note that f and g as as function of x for the metrics considered here are given by the following: h_SV(x)=h_PK(x)=h_DR(x)=(x+1)^2+ℓ^2 g_SV(x)=f_SV(x)= 1 - √(1+ℓ^2)/√(ℓ^2+ (x+1)^2) g_PK(x)=f_PK(x)=x/√(ℓ^2+ (x+1)^2) g_DR(x)=1 - √(1+ℓ^2)/√(ℓ^2+ (x+1)^2) f_DR(x)=g_DR(x)( 1+ ℓ/√(ℓ^2+ (x+1)^2))^-1 . We express the solution to Eq. (<ref>) as a Taylor expansion as follows <cit.>: R_s(x)=x^-s-iω/τ∑_n=0^∞ a_n x^n . Here, τ is a function of the field's spin and regularizing parameter ℓ, and varies with the metric being considered. For further details, we refer the reader to the Appendix of our companion paper <cit.>, where the issue is discussed in more depth. We then determine the coefficients a_n iteratively, by repeatedly substituting Eq. (<ref>) into Eq. (<ref>). Once we have the near-horizon solution, we treat it as a boundary condition from which we numerically integrate outwards, where the solution takes the following form: R(x) R^in_s e^-i ω x/x + R^out_s e^i ω x/x^2s+1 , with the GBFs then given by the following: Γ^s_l m(ω)=δ_s |_s R^l m_in(ω)|^-2 , where δ_s is given by: δ_s = ατie^iπ s(2 ω)^2s-1Γ(1-s-2iω/τ)/Γ(s-2 i ω/τ) . with α and τ depending on the metric considered. More specifically, for what concerns α, we find that within the SV and PK metrics for which f=g the following holds: α_SV=α_PK=1+ℓ^2 , whereas for the DR metric we find the following: α_DR=1+ℓ(ℓ+√(1+ℓ^2)) . Finally, for the three metrics τ is given by the following: τ_SV=1/1+ℓ^2 , τ_PK=1/√(1+ℓ^2) , τ_DR= √(1+ ℓ(ℓ-√(1+ℓ^2)))/1+ℓ^2 . As can be seen in Eqs. (<ref>,<ref>), deviations from α=1 are associated to the non-tr-symmetric nature of the metrics. As a general consideration, we note that the computation of GBFs for the metrics under consideration is more involved compared to those considered in our companion paper <cit.>. §.§ Evaporation spectra As in our companion paper <cit.>, we only consider the primary photon spectrum resulting from Hawking evaporation, while also checking that within the mass region of interest the secondary spectrum will provide a negligible contribution. 
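Once the GBFs Γ^s_l,m(ω) are tabulated, assembling this primary spectrum amounts to folding them with the appropriate thermal occupation factor, as in the emission-rate expression given next. A minimal sketch (with a crude placeholder GBF standing in for the shooting-method output, two photon polarisation states, and node numbers up to l=4 as in the text) is:

```python
import numpy as np

def primary_photon_spectrum(energies, T, gbf, l_max=4, n_dof=2):
    """d^2N/(dt dE) for photons (s = 1), summing (2l+1)-degenerate modes.

    `gbf(l, omega)` is assumed to return the greybody factor Gamma_l(omega),
    e.g. from the shooting method sketched above; `T` is the Hawking
    temperature in the same (natural) units as `energies`.
    """
    spec = np.zeros_like(energies)
    for l in range(1, l_max + 1):                 # photons have l >= 1
        gamma_l = np.array([gbf(l, w) for w in energies])
        # Bose-Einstein thermal factor for each mode, times the m-degeneracy.
        spec += (2 * l + 1) * gamma_l / np.expm1(energies / T)
    return n_dof * spec / (2.0 * np.pi)

# Crude placeholder GBF with the right qualitative behaviour (NOT the
# shooting-method result): a smoothed step near omega ~ l / r_H.
toy_gbf = lambda l, w: 1.0 / (1.0 + np.exp(-8.0 * (w - 0.3 * l)))
E = np.linspace(0.01, 2.0, 200)                   # energies in units of 1/r_H (illustrative)
print(primary_photon_spectrum(E, T=0.05, gbf=toy_gbf)[:5])
```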
For a particle species i of spin s (characterized by n_i degrees of freedom), the rate of emission (particles per unit time per unit energy) through Hawking radiation is given by the following <cit.>: d^2N_i/dtdE_i=1/2π∑_l,mn_iΓ^s_l,m(ω)/ e^ω/T± 1 , with ω=E_i being the mode frequency, whereas the positive (negative) sign in the denominator corresponds to fermions (bosons). To compute the (photon) GBFs Γ^s_l,m(ω) we adopt the methodology discussed earlier, going up to node number l=4, but verifying that including higher l modes leads to negligible corrections. Examples of the resulting evaporation spectra are provided in Figs. <ref>, <ref>, and <ref> for a representative PBH of mass M_pbh=10^13 kg, located somewhat halfway in the mass range of interest (although the features we discuss shortly do not depend on the chosen mass). For the Simpson-Visser and D'Ambrosio-Rovelli BHs, increasing ℓ leads to the intensity of the spectra decreasing at all energies. This feature confirms the intuition raised when we discussed the temperatures of these BHs (see Fig. <ref>), which we observed to decrease with increasing ℓ. However, we remark that a decrease in T alone is not sufficient to draw this conclusion, since the GBFs also enter in Eq. (<ref>). This effect can be noticed in the case of the Peltola-Kunstatter space-time, whose temperature evolution as a function of ℓ is nearly identical to that of the Simpson-Visser space-time (see Fig. <ref>). However, as ℓ is increased, the evaporation spectra of Peltola-Kunstatter PBHs (Fig. <ref>) decreases in intensity only at energies approximately above the peak (located roughly between 5 MeV and 10 MeV), while conversely increasing in intensity for lower energies, albeit only slightly compared to the decrease at higher energies. Despite sharing essentially the same temperature, the difference in the evaporation spectra for Simpson-Visser versus Peltola-Kunstatter PBHs is therefore attributable to the GBFs, which display a different dependence on energy. Already at a qualitative level, inspecting the spectra just discussed, we can expect that the upper limits on f_pbh obtained assuming Schwarzschild PBHs should loosen (thereby opening the asteroid mass window) when considering the PRBHs in question, at the very least for the Simpson-Visser and D'Ambrosio-Rovelli metrics – as we shall see shortly, the expectation is in fact confirmed for all three space-times. §.§ Evaporation constraints In the mass range 10^10 kg≲ M_pbh≲ 10^15 kg, the dominant constraints on the PBH abundance come from measurements of the extragalactic photon background <cit.>, and more precisely of the diffuse extragalactic γ-ray background (EGRB) in the energy range 100 keV≲ E_γ≲ 5 GeV, which can be directly compared against theoretical expectations for their Hawking evaporation spectra. Of particular interest to us is the fact that the lower edge of the asteroid mass window, where PBHs could make up the entire DM, is set precisely by evaporation constraints. In what follows, we set evaporation constraints on the fraction of DM in the form of PBHs f_pbh(M) ≡Ω_pbh/Ω_dm, assuming that PBHs are described by the three metrics discussed so far. We work under the same set of approximations adopted in our companion paper <cit.>. Namely, we assume that PBHs are isotropically distributed on sufficiently large scales and cluster in the galactic halo in the same way as other forms as DM, we only compute the primary photon spectrum, and finally we assume a monochromatic mass distribution (see e.g. 
Refs. <cit.> for studies on the effects of an extended mass distribution). We refer the reader to Sec. IIIC of our companion paper <cit.> for a more detailed discussion of why these, which are clearly all approximations, are appropriate for the scope of our work (while allowing for a more direct comparison to earlier works, including our companion paper). We therefore focus on a population of PRBHs which all share the same mass M_pbh. Following Ref. <cit.>, the number of emitted photons in the logarithmic energy bin Δ E_γ≃ E_γ is approximated as Ṅ_γ(E_γ) ≃ E_γ(dṄ_γ/dE_γ). The rate of emitted photons with present-day energy E_γ 0 per unit time per unit area per unit solid angle is then obtained by integrating over the entire cosmological time, accounting for the redshift scaling of the photon energy and density, and is given by the following: I(E_γ 0)= A_I∫^z_⋆_0dz/H(z)d^2N_γ/dt dE_γ(M_pbh,(1+z)E_γ 0) , where the normalization factor A_I is given by: A_I=c/4πn_pbh(t_0)E_γ 0 . In Eqs. (<ref>,<ref>), d^2N_γ/dtdE_γ is computed via Eq. (<ref>), whereas z_⋆ is the redshift of recombination, H(z) is the expansion rate, and n_pbh(t_0) is the present-day PBH number density, itself related to the parameter of interest f_pbh via the following: f_pbh(M_pbh) ≡Ω_pbh/Ω_dm = n_pbh(t_0)M_pbh/ρ_crit,0Ω_dm , with ρ_crit,0=3H_0^2/8π G being the present-day critical density, H_0 the Hubble constant, and Ω_dm the present-day DM density parameter. In what follows, in order to specify H(z), ρ_crit, and Ω_dm, we adopt the same spatially flat ΛCDM cosmological model used by the seminal Ref. <cit.>. This allows us to have a reliable reference against which we can cross-check our limits on f_pbh in the Schwarzschild case (ℓ→ 0), although we stress that the choice of underlying cosmology does not play a significant role in determining our constraints. With the cosmological model specified, the only unknown parameter in Eqs. (<ref>,<ref>) is the present-day PBH number density n_pbh(t_0), or equivalently, through Eq. (<ref>), the PBH fraction f_pbh. For each value of M_pbh, we set upper limits on the only free parameter f_pbh using EGRB measurements, and more specifically from the HEAO-1 X-ray telescope in the 3-500 keV range <cit.>, the COMPTEL imaging Compton γ-ray telescope in the 0.8-30 MeV range <cit.>, and the EGRET γ-ray telescope <cit.>. To do so, for given values of M_pbh and ℓ/r_H, the maximum allowed value of f_pbh is determined by the requirement that the theoretical prediction for the photon flux given in Eq. (<ref>) does not overshoot any of the ERGB measurements by more than 1σ (see e.g. Fig. 6 in our companion paper <cit.>, and note that different datapoints are first overshot when changing M_pbh). [This method was first discussed in the seminal Ref. <cit.>, and later adopted by most of the works studying EGRB constraints on PBHs. While a more robust statistical analysis is of course possible, such an approach is sufficiently accurate for the purposes of our work and allows for a more direct comparison to earlier results. See Sec. IIIC of our companion paper <cit.> for more detailed comments on this point, as well as on the potential use of other datasets, including local galactic measurements of the galactic γ-ray background <cit.>, positron flux <cit.>, 0.511 MeV annihilation radiation <cit.>, and more recent measurements of the EGRB from Fermi-LAT <cit.>.] 
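The redshift integral and the 1σ criterion lend themselves to a compact numerical sketch. Everything below is schematic: the cosmological parameters, the toy per-BH spectrum, the mock EGRB data points, and the unit conversions absorbed into a single constant are all illustrative assumptions.

```python
import numpy as np
from scipy.integrate import quad

# --- illustrative flat-LCDM background (parameter values are assumptions) ---
H0 = 70.0 * 1.02e-12        # km/s/Mpc -> 1/yr (rough conversion, schematic)
Om, OL = 0.3, 0.7
H = lambda z: H0 * np.sqrt(Om * (1 + z) ** 3 + OL)
z_star = 1100.0

def flux(E0, d2N_dtdE, n_pbh):
    """I(E0) = (c/4pi) n_pbh E0 * int_0^{z*} dz/H(z) d2N/dtdE((1+z)E0)."""
    c_over_4pi = 1.0                       # schematic; absorb constants and units here
    integrand = lambda z: d2N_dtdE((1 + z) * E0) / H(z)
    val, _ = quad(integrand, 0.0, z_star, limit=200)
    return c_over_4pi * n_pbh * E0 * val

# Toy per-PBH emission spectrum (stand-in for the Hawking spectra above).
d2N = lambda E: np.exp(-E / 5.0) / (1.0 + (E / 5.0) ** -3)

# Mock EGRB data: energies, measured intensities and 1-sigma errors (placeholders).
E_data = np.array([1.0, 10.0])
I_obs = np.array([3.0e-3, 4.0e-4])
I_err = np.array([5.0e-4, 8.0e-5])

# Since I is linear in n_pbh (hence in f_pbh), the largest allowed f_pbh is the
# tightest ratio (I_obs + 1 sigma) / I_model(f_pbh = 1) over all data points.
I_model_f1 = np.array([flux(E, d2N, n_pbh=1.0) for E in E_data])
f_max = np.min((I_obs + I_err) / I_model_f1)
print(f"illustrative upper limit: f_pbh <= {f_max:.2e}")
```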
For each of the three metrics, we use this method to set upper limits on f_pbh as a function of M_pbh for fixed, representative values of ℓ/r_H=0.3, 0.6, and 0.9. Finally, we note that while the origin of the EGRB is not fully understood <cit.>, our approach is conservative in this sense given that we remain agnostic as to the level of astrophysical (non-PBH) contribution to the EGRB. § RESULTS The resulting upper limits on f_pbh as a function of PRBH mass M_pbh, for different values of ℓ, are shown in Figs. <ref>,  <ref>, and <ref> for the Simpson-Visser, Peltola-Kunstatter, and D'Ambrosio-Rovelli space-times respectively. In each figure, shown as blue solid curves are the corresponding constraints in the Schwarzschild PBH case (ℓ→ 0), which we have verified to recover the results of Ref. <cit.>. We stress that for a given value of ℓ, the value of M_pbh where the overclosure limit f_pbh<1 is saturated sets the lower edge of the modified asteroid mass window (potentially enlarged or contracted). For all three cases, we see that increasing the regularizing parameter ℓ results (at a given value of M_pbh) in weaker limits on f_pbh. This confirms the expectation raised at the end of Sec. <ref> upon inspection of the resulting evaporation spectra, all of which decrease in intensity relative to the Schwarzschild case (except for the slight increase in the Peltola-Kunstatter BH case for energies below the peak, which we recall reflects the different behaviour of the GBFs). We see that, for ℓ/r_H=0.9, the upper limits on f_pbh weaken by up to an order of magnitude at a given M_pbh relative to the Schwarzschild limit for the Simpson-Visser and D'Ambrosio-Rovelli BHs, whereas for the Peltola-Kunstatter BH the extent to which the f_pbh limits weaken is more limited (again unsurprisingly, given the enhanced intensity of the evaporation spectrum at low energies). The aforementioned shifts result in the asteroid mass window being enlarged for all three metrics considered, because the lower edge of the window (lying roughly at M_pbh≃ 10^14 kg in the Schwarzschild case) moves towards lower masses. In general, we observe that the asteroid mass window further opens up by about half a decade in mass (an increase which is less dramatic than what we observed for the phenomenological metrics in our companion paper <cit.>). As a result, there is a wider available range of parameter space where PBHs of the type we are considering could make up all the DM. We note that our constraints assume that all PRBHs in the Universe carry the same value of “hair parameter” ℓ/r_H (in the language of Ref. <cit.>, we are treating it as an “universal hair”). Whether or not this is a reasonable assumption requires a deeper investigation of the theoretical underpinning of the adopted metrics (in particular the LQG-inspired one) which, in the spirit of the present work being a pilot study, we defer to follow-up work. Finally, in our companion paper <cit.> we extensively commented on a few caveats concerning the extension of the asteroid mass window and, more generally, on other existing constraints on f_pbh, which bear repeating here, albeit in a more condensed form (we refer the reader to Sec. IV of our companion paper for a significantly more detailed discussion). While evaporation constraints set the lower edge of the asteroid mass window, the upper edge thereof is instead set by lensing constraints. 
Our claim that the asteroid mass window is enlarged because the lower edge moves towards even lower values is therefore contingent upon the upper edge remaining the standard Schwarzschild one, even within the adopted metrics. We do, in fact, expect this to be the case since, at fixed M_pbh, lensing constraints depend only on the mass of the lensing object. We can therefore assert that the asteroid mass window is indeed enlarged when considering the three PBH metrics introduced here. Finally, a variety of other constraints on f_pbh exist, including dynamical, accretion, and CMB constraints (see e.g. Ref. <cit.> for a recent summary): however, with the exception of a few debated constraints <cit.>, we expect these to be relevant within significantly different mass ranges (unless accretion dynamics are significantly different around the RBHs under consideration), although we reserve a detailed study to follow-up work. § CONCLUSIONS It is a commonly held belief that the singularities which plague General Relativity, and represent one of the most important open problems in theoretical physics, will eventually be solved once the long sought after theory of quantum gravity is unveiled. While a consensus theory of quantum gravity remains elusive, progress on the singularity problem can still be made by considering ansätze for singularity-free space-times, either introduced phenomenologically or somewhat motivated from candidate quantum gravity frameworks (such as LQG). These regular black holes, if produced early on in the Universe from the collapse of large density perturbations (thus being primordial regular BHs), could also have a role to play in the dark matter problem. Our work is a pilot study which goes precisely in this direction, examining what are the consequences of PBHs being described by non-singular metrics. In fact, it bears reminding that the usual constraints on the fraction of DM in the form of PBHs, f_pbh, are derived under the assumption of PBHs being described by the Schwarzschild metric, which is well-known to be plagued by the r=0 singularity. In the present work, we have explored three so-called non-tr-symmetric metrics as candidates for describing PBHs: the Simpson-Visser black-bounce, Peltola-Kunstatter, and D'Ambrosio-Rovelli black-to-white-hole-bounce space-times, with the latter two enjoying strong theoretical motivation from LQG (we note that the mathematically simpler tr-symmetric case is covered in our companion paper <cit.>, and includes well-known phenomenological regular BHs such as the Bardeen and Hayward BHs). After discussing the impact of the regularizing parameter ℓ (with the Schwarzschild BH corresponding to the ℓ→ 0 limit) on the resulting evaporation spectra, we find that as ℓ increases, at a fixed PBH mass M_pbh the corresponding upper limit on f_pbh weakens for all three metrics considered. This results in the lower edge of the asteroid mass window shifting down by up to approximately half a decade in M_pbh parameter space (down from M_pbh≃ 10^14 kg to M_pbh≃ 3 × 10^13 kg). As a result, there is a larger range of available parameter space where the PBHs in question could make up the entire DM in the Universe, which could be targeted by proposed probes of the asteroid mass window <cit.>. Our work (alongside our companion paper <cit.>) demonstrates, as a proof-of-principle, that the intersection of the DM and singularity problems is a fertile terrain worthy of further studies. 
The most important avenue for immediate follow-up work would be to systematically revisit, within the metrics under consideration, all other non-evaporation constraints which have been extensively discussed for Schwarzschild PBHs (including lensing, accretion, and dynamical constraints). Moreover, since the Peltola-Kunstatter and D'Ambrosio-Rovelli space-times are rooted within an underlying quantum gravity theoretical framework, a first-principles study of their formation mechanism (which is otherwise not possible for metrics introduced at a phenomenological level, such as the Bardeen and Hayward BHs) and whether this leads to additional interesting complementary signatures is definitely worth pursuing. For instance, quantum transitions near the would-be singularity of the D'Ambrosio-Rovelli space-time should lead to the existence of long-lived (primordial) white holes, which in turn could potentially lead to a wide range of exotic signatures one could hope to search for. In continuing our exciting program at the interface of the DM and singularity problems, it is our intention to return to these and related issues in future follow-up work. We acknowledge support from the Istituto Nazionale di Fisica Nucleare (INFN) through the Commissione Scientifica Nazionale 4 (CSN4) Iniziativa Specifica “Quantum Fields in Gravity, Cosmology and Black Holes” (FLAG). M.C. and S.V. acknowledge support from the University of Trento and the Provincia Autonoma di Trento (PAT, Autonomous Province of Trento) through the UniTrento Internal Call for Research 2023 grant “Searching for Dark Energy off the beaten track” (DARKTRACK, grant agreement no. E63C22000500003). This publication is based upon work from the COST Action CA21136 “Addressing observational tensions in cosmology with systematics and fundamental physics” (CosmoVerse), supported by COST (European Cooperation in Science and Technology). § ASYMPTOTIC SOLUTIONS TO THE RADIAL TEUKOLSKY EQUATION We recall that, in order to compute the GBFs for the regular BHs studied in this work, we need to know the asymptotic behaviour of the function R_s, introduced in the ansatz of Eq. (<ref>) and solution to the radial Teukolsky equation given by Eq. (<ref>), both at infinity and close to the horizon. In the main text we reported these limits as being given by Eqs. (<ref>,<ref>). We now set out to prove this more formally. The radial Teukolsky equation, Eq. (<ref>), simplifies considerably if we make the following substitution <cit.>: dr_⋆/dr=1/√(f(r)g(r)) , Y_s=√(B_s/√(fg))R_s . This allows one to get rid of first derivatives of Y_s, with the radial Teukolsky equation as a function of the latter now taking the following form: Y_s,_⋆⋆+ [ ω^2 + iω s√(f/g) ( h' - hg'/g ) g/h + C_s g/h -√(β),_⋆⋆/√(β) ] Y_s=0 , where β≡ (hg^2)h=B_s/√(fg), and ,_⋆ denotes differentiation with respect to the tortoise coordinate r_⋆. We now consider the r→ +∞ and r→ r_H limits separately. §.§ Asymptotic behaviour at infinity In this limit, we trivially see that Eq. (<ref>) reduces to the following: Y_s,_⋆⋆ + ( ω^2 + 2iω s/r ) Y_s=0 , whose asymptotic solutions are given by: Y_s ∼ r^± se^∓ iω r_⋆ . which implies that R_s scales as follows: R_s ∼e^-iω r_⋆/r and R_s ∼e^iω r_⋆/r^(2s+1) , confirming the asymptotic scaling quoted in Eq. (<ref>). §.§ Asymptotic behaviour near the horizon In the vicinity of the event horizon, Eq. (<ref>) reduces to the following: Y_s,_⋆⋆ + [ ( ω - i s √(f/g)g'/4 ) ^2 + g's/4 ( f' - fg'/g ) ] Y_s=0 , where we kept only terms scaling as ∼ f^0, g^0. 
It is easy to see that in the cases of the Simpson-Visser and Peltola-Kunstatter space-times, for which f=g, the second term in square brackets vanishes. With some effort, one can show that this is true for the D'Ambrosio-Rovelli metric as well. This then leaves us with the following equation for Y_s: Y_s,_⋆⋆ + ( ω - i s √(f/g)g'/4 ) ^2Y_s = 0 , whose asymptotic solutions are given by: Y_s ∼exp [ ± i ( ω - isg'/2√(f/g) ) r_⋆ ] . By definition, the tortoise coordinate r_⋆ is determined by the following integral: r_⋆ = ∫dr/√(f(r)g(r))Kln(r - r_H) , where r_H is the radial coordinate of the event horizon and K is a coefficient to be determined. Combining Eqs. (<ref>,<ref>) we reach the following expression for the asymptotic scaling of Y_s: Y_s ∼ e^± i ω r_⋆exp [ ±sg'/2√(f/g)Kln(r - r_H) ] . It is straightforward to check that in the case of Schwarzschild BH, Eq. (<ref>) reduces to the following: r_⋆ = ∫dr/1 - 2M/r 2Mln(r - 2M) , from which we recover the asymptotic behaviour found in Ref. <cit.>: Y_s ∼ e^± i ω r_⋆Δ^± s/2 R_s ∼ e^iω r_⋆ or R_s ∼Δ^-se^-iω r_⋆ , since it is easy to show that the following holds asymptotically: ln(r -r_H) ∼ln(Δ) , where Δ≡ r^2-2Mr=r^2g(r). Note that the R_s ∼ e^iω r_⋆ solution in Eq. (<ref>) is then discarded on the basis of the purely ingoing boundary conditions we fix at the horizon when setting up the scattering problem, thereby confirming the asymptotic scaling quoted in Eq. (<ref>). The above results hinged upon the r_⋆∝ln(r-r_H) scaling in Eq. (<ref>). We now check that this scaling does indeed hold for the three space-times we consider in our work. §.§.§ Simpson-Visser space-time We recall that the functions f(r) and g(r) are given by the following: f(r)=g(r)=1-2M/√(r^2 + ℓ^2) , implying that the tortoise coordinate is given by the following: r_⋆ = ∫dr/1 - 2M/√(r^2 + ℓ^2)4M^2/√(4M^2-ℓ^2)ln ( r - r_H^SV ) , where r_H^SV = √(4M^2 - ℓ^2). For this metric g'(r)=2Mr/(r^2 + ℓ^2)^3/2, from which one easily finds that Eq. (<ref>) can be expressed as follows: Y_s ∼ e^± i ω r_⋆e^±s/2ln(r - r_H)∼ e^± i ω r_⋆ [ h(r)g(r) ] ^±s/2 , where in the last step, in light of the discussion in Ref. <cit.>, we generalized Δ to the following: Δ≡ r^2g(r) ⟶D≡ ( h(r)g(r) ) . Finally, returning to R_s, it is easy to show that the asymptotic behaviour of the latter is given by: R_s ∼ e^iω r_⋆, and R_s ∼ A_s e^-iω r_⋆ where A_s is defined according to Eq. (<ref>), and confirming the asymptotic scaling quoted in Eq. (<ref>). §.§.§ Peltola-Kunstatter For the Peltola-Kunstatter space-time the functions f(r) and g(r) are given by the following: f(r)=g(r)=r-2M/√(r^2 - ℓ^2) , so the tortoise coordinate is given by the following: r_⋆ = ∫dr/1 - 2M/√(r^2+ℓ^2)√(4M^2 + ℓ^2)ln ( r - r_H^PK ) , with r_H^PK = 2M. For this metric we have g'(r) = ℓ^2+2mr/(r^2 + ℓ^2)^3/2, from which one can easily show that Eq. (<ref>) can be expressed in the same way as Eq.  (<ref>), from which it follows that g'√(f/g) cancels with the term proportional to K in Eq. (<ref>), and therefore that the asymptotic scaling of R_s is the same as in Eq. (<ref>), thereby confirming the asymptotic scaling quoted in Eq. (<ref>). §.§.§ D'Ambrosio-Rovelli For the D'Ambrosio'Rovelli space-time the functions f(r) and g(r) are given by the following: f(r)=1-2M/√(r^2 + ℓ^2) and g(r)=√(r^2 + ℓ^2) - 2M/√(r^2 + ℓ^2) + ℓ , so the tortoise coordinate is given by the following: r_⋆ = ∫√(r^2 + ℓ^2 + ℓ√(r^2 + ℓ^2))/√(r^2 + ℓ^2) - 2M 2√(2)M^3/2/√(2M - ℓ)ln ( r - r_H^DR ) , with r_H^DR = √(4M^2 - ℓ^2). From Eq. 
(<ref>) it follows that K = 2√(2)M^3/2/√(2M-ℓ), which cancels once again with the g'√(f/g) term in Eq. (<ref>). Analogously to the previous cases, one can therefore conclude that the asymptotic solutions of the radial Teukolsky equation for R_s in the near-horizon region are indeed given by Eq. (<ref>), thereby confirming the asymptotic scaling quoted in Eq. (<ref>).
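As a consistency check on the logarithmic divergences used throughout this appendix, the coefficient in front of ln(r - r_H) can be read off from 1/f'(r_H) when f = g; a minimal symbolic sketch for the Simpson-Visser case, recovering K_SV = 4M²/√(4M²-ℓ²), is given below (this is a verification under the stated metric functions, not part of the original derivation).

```python
import sympy as sp

r, M, ell = sp.symbols("r M ell", positive=True)

# Simpson-Visser: f = g = 1 - 2M/sqrt(r^2 + ell^2), horizon at r_H = sqrt(4M^2 - ell^2)
f = 1 - 2 * M / sp.sqrt(r**2 + ell**2)
rH = sp.sqrt(4 * M**2 - ell**2)

# Near the horizon 1/f ~ K/(r - r_H), so K = 1/f'(r_H) and r_* ~ K * ln(r - r_H).
K = sp.simplify(1 / sp.diff(f, r).subs(r, rH))
print(sp.simplify(K - 4 * M**2 / sp.sqrt(4 * M**2 - ell**2)))   # -> 0
```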
http://arxiv.org/abs/2409.02521v1
20240904082945
Fundamental properties of linear factor models
[ "Damir Filipovic", "Paul Schneider" ]
q-fin.ST
[ "q-fin.ST", "stat.AP", "62P20" ]
§ ABSTRACT We study conditional linear factor models in the context of asset pricing panels. Our analysis focuses on conditional means and covariances to characterize the cross-sectional and inter-temporal properties of returns and factors as well as their interrelationships. We also review the conditions outlined in <cit.> and show how the conditional mean-variance efficient portfolio of an unbalanced panel can be spanned by low-dimensional factor portfolios, even without assuming invertibility of the conditional covariance matrices. Our analysis provides a comprehensive foundation for the specification and estimation of conditional linear factor models. Keywords: asset pricing, factor models, characteristics, covariances, mean-variance efficient portfolio, stochastic discount factor, covariance estimation JEL classification: G11, G12, C38 § INTRODUCTION Since the capital asset pricing model <cit.> and the arbitrage pricing theory of <cit.>, <cit.>, <cit.>, <cit.>, and <cit.>, academia and industry alike have shown a strong desire to compress vast asset pricing panels into low-dimensional linear factor representations. Accordingly, an extensive econometric literature has simultaneously developed that exploits statistical arbitrage relations and asymptotic results in the time series and cross section to develop estimators for factor loadings and return covariances <cit.>. Recent literature devises models for conditional means of panels <cit.>, or conditional covariances <cit.>. To date, however, there is no comprehensive collection of properties that any conditional linear factor model should satisfy, how they relate to first and second conditional moments, and how these properties change when the factors are tradable. In this short note, we fill the gap in this literature and discuss exhaustively and from first principles the fundamental properties obeyed by (un)conditional linear factor models. Our analysis focuses on conditional means and covariances to characterize the cross-sectional and inter-temporal properties of returns and factors as well as their interrelationships. The focus is on a concise, rigorous discussion with minimal assumptions and the identification of pitfalls. For example, we do not assume invertibility of the covariance matrices, which is a strong assumption given the large cross-sectional dimensions in modern unbalanced asset pricing panels. In particular, we investigate under which conditions * factors and residuals are conditionally uncorrelated, * tradable factors (i.e. factor portfolios) span the conditional mean-variance efficient portfolio, * a generative risk factor model can be represented by tradable factors that load on the same coefficient matrix and fulfill the first two properties, and how these three questions are interrelated.[Question <ref> is also studied in <cit.>, but under more restrictive assumptions. We further extend their paper by relating <ref> to aspects of <ref> and <ref>.] For example, we find that a linear factor model with tradable factors and conditionally uncorrelated residuals, as in <ref>, cannot possibly have an invertible residual covariance matrix. We also review the conditions outlined in <cit.> and show how the conditional mean-variance efficient portfolio of an unbalanced panel can be spanned by low-dimensional factor portfolios, relating to <ref>. 
We find that property <ref> holds almost universally, which is remarkable and suggests a general structure for consistent estimators of conditional means and covariance matrices for unbalanced asset pricing panels. Our analysis is relevant in particular also for the broader context of models that are linear in nonlinear functions of factors, as they are commonly used in financial machine learning based upon neural nets, or kernel-based methods. It is complemented by extensive examples and provides a comprehensive foundation for the specification and estimation of conditional linear factor models. The structure of this short note is as follows. In Section <ref> we introduce the formal setup for conditional linear factor models and give an overview of our main results. In Section <ref> we answer question <ref> and derive related results based on the covariances. We also give an example that serves as counterexample in some of our proofs. In Section <ref> we discuss risk premia and show that characteristics are covariances under certain conditions. In Section <ref> we address question <ref>. Using the counterexample, we also show some pitfalls with false implications. In Section <ref> we address question <ref>. In Section <ref> we conclude. The appendix collects all the proofs. § CONDITIONAL LINEAR FACTOR MODELS We study conditional linear factor models of the form x_t+1 = Φ_t f_t+1 + ϵ_t+1, where x_t+1 denotes the vector of excess returns of n_t assets over period [t,t+1], for a n_t× m matrix Φ_t of asset characteristics that are measurable with respect to the information set (i.e. observable) at t, loading on a vector of m common risk factors f_t+1, and residuals ϵ_t+1.[Formally, all random variables are modeled on a probability space (Ω,,) along with a sequence of information sets (σ-algebras) _t⊆_t+1⊆⋯⊆. We write _t[·] and _t[·] for the conditional mean and covariance given _t. We assume a regular conditional probability _t such that _t[1_A]=_t[A] for any event A∈. Conditional expectations given _t hence amount to expectations under _t, which we always assume to exist and be finite. Equality between random variables means _t-almost sure equality. Assuming a trivial information set, _t={∅,Ω}, this setup includes also unconditional moments.] The above panel specification inherently combines two notions, one within generative statistical models, and one within the study of optimal portfolios. The first interprets the right-hand side of (<ref>) as a data-generating process, where abstract risk factors f_t+1 and idiosyncratic risk ϵ_t+1 generate returns x_t+1, given Φ_t. The second assumes that the factors are tradable portfolios of the form f_t+1 = W_t^⊤ x_t+1, for a n_t× m matrix W_t of weights that are measurable with respect to the information set at t. Such tradable factors should serve as an accurate low-rank representation of the full cross section x_t+1, with implied residuals ϵ_t+1 = x_t+1 - Φ_t f_t+1, given Φ_t. In this portfolio context an eminent question concerns the conditional Sharpe ratios that can be attained from trading in the full cross section x_t+1 or only in the factor portfolios f_t+1, respectively.[Our setup is the same as in <cit.> and in fact more general, as they assume that matrices have full rank. In view of Footnote <ref>, our analysis even applies for m≥ n_t.] Our analysis of the introductory questions <ref>–<ref> is based exclusively on the first two conditional moments. 
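To fix ideas, the sketch below simulates a single cross-section of the model above and forms tradable factors with the OLS weights W_t^⊤ = Φ_t^+; only the first two conditional moments of the returns, factors and residuals will matter in what follows, and all dimensions and data-generating parameters here are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(0)
n_t, m = 50, 3                                   # assets and factors (illustrative)

# Characteristics observable at t, latent factors and residuals over (t, t+1].
Phi_t = rng.standard_normal((n_t, m))
f_latent = rng.standard_normal(m)
eps = 0.1 * rng.standard_normal(n_t)
x = Phi_t @ f_latent + eps                       # excess returns x_{t+1} = Phi_t f_{t+1} + eps_{t+1}

# Tradable factors: portfolios f_{t+1} = W_t' x_{t+1} with OLS weights W_t' = Phi_t^+.
W_t_T = np.linalg.pinv(Phi_t)
f_tradable = W_t_T @ x
resid = x - Phi_t @ f_tradable                   # implied residuals

# Cross-sectionally, the implied residuals are orthogonal to the characteristics.
print(np.allclose(Phi_t.T @ resid, 0.0))         # True
```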
We therefore denote by μ_t_t[ x_t+1] and Σ_t_t[ x_t+1] the conditional mean and covariance matrix of the excess returns, and analogously we write μ_ f, t_t[ f_t+1], Σ_ f, t_t[ f_t+1] and Σ_ϵ,t_t[ϵ_t+1] for those of the factors and residuals, respectively. We do not assume that these matrices have full rank. Instead of the regular matrix inverses we use the pseudoinverse, denoted by A^+ for any matrix A.[Also called Moore–Penrose generalized inverse. Let A= V D W^⊤ be the singular value decomposition of a matrix A, with orthogonal matrices V and W. Then the pseudoinverse of A is given by A^+ = W D^+ V^⊤, where D^+ is the transpose of D in which the positive singular values are replaced by their reciprocals, see <cit.> or <cit.>. Consequently, A^+ A and A A^+ are the orthogonal projections on the images of A^⊤ and A, respectively. If A^⊤ A is invertible then A^+ = ( A^⊤ A)^-1 A^⊤.] To simplify the exposition, we will omit the qualifier “conditional” from probabilities, expectations, covariances, and correlations in the following. We will also refer to excess returns simply as “returns”. Figure <ref> gives an overview of some of our main findings. § FACTOR AND RESIDUAL CORRELATION Our first result gives necessary and sufficient conditions that factors and residuals are uncorrelated, which answers the introductory question <ref>, and relates this property to the covariance matrix of the returns. We refer to Φ_t f_t+1 as the factor-spanned component of the returns whose covariance is given by _t[Φ_t f_t+1]=Φ_t Σ_ f,tΦ_t^⊤. The factor-spanned component and the residuals are uncorrelated, _t[Φ_t f_t+1, ϵ_t+1] = 0, if and only if the covariance matrix of the returns is given by the sum Σ_t = Φ_t Σ_ f,tΦ_t^⊤ + Σ_ϵ,t. Moreover, if (Φ_t)=m then any of (<ref>) or (<ref>) is equivalent to _t[ f_t+1, ϵ_t+1] = 0. We obtain stronger results under the assumption that the factors are tradable. Assume that (<ref>) holds. Then any of (<ref>) or (<ref>) is equivalent to Σ_t W_t Φ_t^⊤ = Φ_t W_t^⊤Σ_t W_t Φ_t^⊤ = Φ_t W_t^⊤Σ_t. Moreover, if W_t^⊤∩Φ_t={ 0}, then any of (<ref>), (<ref>) or (<ref>) is equivalent to (<ref>). Factor-spanned components that are uncorrelated with residuals imply a zero matrix-product relation between covariance matrices, but not conversely, as the following proposition shows. Assume that (<ref>) holds. Then any of (<ref>), (<ref>) or (<ref>) implies the zero matrix-product relation Φ_t W_t^⊤Σ_ϵ,t = 0. However, the converse is not true, as (<ref>) does not imply any of (<ref>), (<ref>) or (<ref>). As an important corollary of Proposition <ref>, we furthermore obtain an impossibility result. Assume that (<ref>) holds and the factor model (<ref>) is non-degenerate, Φ_t W_t^⊤≠ 0. Then the residual covariance matrix cannot have full rank, Σ_ϵ,t<n_t, under any of the equivalent conditions (<ref>), (<ref>) or (<ref>). Corollary <ref> implies in particular that there exists no linear factor model (<ref>) with tradable factors (<ref>), which are portfolios in x _t+1, and uncorrelated idiosyncratic risk components ϵ_t+1 with nonsingular covariance matrix Σ_ϵ,t. Corollary <ref> thus has ramifications for the literature on shrinkage of covariance matrices with factor structure <cit.>, where oftentimes the spectrum of the residual covariance is lifted in order to obtain invertibility. Tradable factors with weight matrix inducing a projection imply cross-sectional orthogonality, which is stronger than (<ref>), in the following sense. 
Assume that (<ref>) holds for a weight matrix such that the matrix product Φ_t W_t^⊤ =(Φ_t W_t^⊤)^2 is a (generally non-orthogonal) projection. Then the rows of Φ_t W_t^⊤ and the residuals are cross-sectionally orthogonal, Φ_t W_t^⊤ϵ_t+1= 0, and therefore (<ref>) holds. If in addition the matrix product Φ_t W_t^⊤ = W_tΦ_t^⊤ is self-adjoint, and thus an orthogonal projection, then the factor-spanned component and the residuals are cross-sectionally orthogonal, (Φ_t f_t+1)^⊤ϵ_t+1= 0. Even in this case, uncorrelatedness (<ref>) generally does not follow. We are able to further qualify the above results under more specific assumptions on the weight matrix, as the following lemma shows. The weight matrix W_t satisfies W_t^⊤Φ_t W_t^⊤ = W_t^⊤ if and only if (<ref>) and (<ref>) hold. In this case, we also have (Φ_t W_t^⊤)= W_t^⊤. Property (<ref>) holds in particular for weight matrices of the form W_t^⊤ = R_t ( S_tΦ_t R_t)^+ S_t, for some m× m-matrix R_t and n_t× n_t-matrix S_t. If in addition we have ( Φ_t R_t)∩ S_t ={ 0}, then (Φ_t W_t^⊤) = (Φ_t R_t). Weight matrices of the form (<ref>) include the following two examples. For R_t= I_m and S_t= I_n_t in (<ref>), we obtain the OLS factors f^ OLS_t+1:=Φ_t^+ x_t+1. Here, Φ_t W_t^⊤=Φ_tΦ_t^+ is the orthogonal projection in ^n_t onto the image of Φ_t. The OLS factors f^ OLS_t+1 minimize the cross-sectional least-squares problem x_t+1 - Φ_t f_t+1_2 over all factors f_t+1 in ^m.[ If (Φ_t)=m then the OLS problem has a unique solution. In general, all factors of the form f_t+1= f^ OLS_t+1 + ( I_m - Φ_t^+Φ_t) w_t+1, for any w_t+1∈^m, are OLS solutions. Hence f^ OLS_t+1 is the unique OLS solution f_t+1 with minimal norm f_t+1_2.] For R_t= I_m and S_t in (<ref>) such that S_t^⊤ S_t= Σ_ϵ,t^+, we obtain the GLS factors f^ GLS_t+1:=( S_tΦ_t)^+ S_t x_t+1. The GLS factors f^ GLS_t+1 minimize the squared Mahalanobis length ( x_t+1 - Φ_t f_t+1)^⊤Σ_ϵ,t^+( x_t+1 - Φ_t f_t+1) over all factors f_t+1 in ^m.[Subject to similar aspects as discussed in Footnote <ref>.] More generally, we see that (<ref>) allows for rotated GLS factors by an appropriate choice of R_t. The following example illustrates the dichotomy between cross-sectional orthogonality as defined in Lemma <ref>, and orthogonality in the time series, as it shows OLS factors that are correlated with residuals. It serves as counterexample in the proofs of Proposition <ref> and Lemmas <ref> and <ref>, and prepares and complements Proposition <ref> below. [label=exspanning2] Let n_t=3, m=2, and let ξ_1, ξ_2, ξ_3 be uncorrelated random variables with variances _t[ξ_i]=a_i>0 and means b_i=_t[ξ_i]∈, for i=1,2,3. Assume returns and their characteristics are given by [ x_t+1,1; x_t+1,2; x_t+1,3 ][ ξ_1; ξ_2; ρ/a_1ξ_1 + ρ/a_2ξ_2 + ξ_3 ] , Φ_t [ 1 0; 1 0; 0 1 ], for some ρ∈. This gives the following mean and covariance matrix of x_t+1, μ_t = [ b_1; b_2; ρ/a_1 b_1 + ρ/a_2 b_2 + b_3 ], Σ_t =[ a_1 0 ρ; 0 a_2 ρ; ρ ρ ρ/a_1 + ρ/a_2 + a_3 ]. Matrix Φ_t has full column rank m=2 and its pseudoinverse equals Φ_t^+ = [ 1/2 1/2 0; 0 0 1 ]. We set the portfolio weight matrix W_t^⊤Φ_t^+, so that we obtain the OLS factors (cf. Example <ref>), f_t+1 = f^ OLS_t+1 = Φ_t^+ x_t+1 = [ 1/2 x_t+1,1 + 1/2 x_t+1,2; x_t+1,3 ], and Φ_t W_t^⊤ =Φ_t Φ_t ^+ =[ 1/2 1/2 0; 1/2 1/2 0; 0 0 1 ] is the orthogonal projection onto the image of Φ_t. 
As shown in (the first part of) Lemma <ref>, the factor-spanned component and residuals, given by Φ_t f_t+1= [ 1/2 x_t+1,1 + 1/2 x_t+1,2; 1/2 x_t+1,1 + 1/2 x_t+1,2; x_t+1,3 ] and ϵ_t+1 = [ 1/2 x_t+1,1 -1/2 x_t+1,2; 1/2 x_t+1,2 -1/2 x_t+1,1; 0 ], are cross-sectionally orthogonal, (<ref>), and the zero matrix-product condition (<ref>) holds. However, their covariance, _t[ Φ_t f_t+1,ϵ_t+1] = [ 1/2(a_1-a_2) 1/2(a_2-a_1) 0; 1/2(a_1-a_2) 1/2(a_2-a_1) 0; 0 0 0 ] , is nonzero if a_1≠ a_2. § RISK PREMIA The preceding properties of linear factor models have not yet directly used the asset risk premia μ_t so far. In this section, we investigate the role of the mean in particular with respect to the time series properties of linear factor models. The following elementary lemma gives an algebraic condition in terms of μ_t such that residual risk is unpriced for tradable factors. Assume that (<ref>) holds. Then residual risk is unpriced, _t[ϵ_t+1]= 0, if and only if μ_t =Φ_t W_t^⊤μ_t. The following proposition further extends our knowledge of the properties of linear factor models, by showing that characteristics are covariances under the assumption of uncorrelated factors and residuals and unpriced residual risk. This property in turn concerns optimality of the characteristics matrix Φ_t in the time series. It extends the discussion of OLS factors in <cit.>, and GLS factors in <cit.> with respect to their times series, cross-sectional, and asset pricing properties <cit.>. Assume that (<ref>) and (<ref>) hold. Then the characteristics Φ_t are also covariances for the factors, in the sense that Φ _t∈β _t _t x_t+1 - β_t f_t+1_2^2 where the minimum is taken over all n_t× m-matrices β_t that are measurable with respect to the information set at t. The minimizer in (<ref>) is unique if and only if the Gram matrix of the factors, Σ_ f,t + μ_ f,tμ_ f,t^⊤, has full rank. Sufficient conditions for (<ref>) to hold for tradable factors are given in Proposition <ref> and Lemma <ref>. § SPANNING FACTORS We henceforth assume absence of arbitrage, in the weak sense that[An arbitrage is a strategy w_t that yields a nonnegative return, w_t^⊤ x_t+1≥ 0, and such that _t[ w_t^⊤ x_t+1> 0]>0. Condition (<ref>) is necessary but not sufficient for the absence of such arbitrage in the strict sense, which is outside the scope of this short note. Indeed, the orthogonal decomposition μ_t = μ_t,0 + μ_t,1 according to Σ_t⊕Σ_t shows that μ_t,0^⊤ x_t+1 = μ_t,0_2^2 is risk-free and strictly positive, and thus μ_t,0 is an arbitrage, unless μ_t,0= 0. ] μ_t ∈Σ_t . This is equivalent to the existence of the mean-variance efficient (MVE) portfolio maximizing the objective w_t^⊤μ_t - 1/2 w_t^⊤Σ_t w_t over w_t∈^n_t.[The MVE portfolio is unique up to scaling by the risk aversion parameter, which cancels out in the Sharpe ratio and which we therefore set equal to one.] Its weights are given by w_t = Σ_t^+μ_t, and it attains the maximum squared Sharpe ratio, SR_t^2=μ_t^⊤Σ_t^+ μ_t. Absence of arbitrage (<ref>) is also equivalent to the existence of the minimum-variance stochastic discount factor (SDF), which solves the problem M_t+1∈ L^2_ _tminimize _t[ M_t+1^2 ] subject to _t[ M_t+1 x _t+1 ]= 0 and _t[ M_t+1 ]=1, and is given in terms of the MVE portfolio by M_t+1= 1 - μ_t^⊤Σ_t^+ ( x_t+1 - μ_t). We henceforth assume that factors are tradable, and consider m factor portfolios f_t+1 with returns given by (<ref>). We denote by μ_ f,t_t[ f_t+1] = W_t^⊤μ_t their mean and, as above, by Σ_ f,t = W_t^⊤Σ_t W_t their covariance. 
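Before turning to the factor MVE portfolio below, here is a short numerical aside on the full-panel objects just introduced: the MVE weights w_t = Σ_t^+ μ_t, the maximum squared Sharpe ratio SR_t^2 = μ_t^⊤Σ_t^+μ_t, and the minimum-variance SDF. The moments used are made-up toy values (with μ_t chosen in the image of Σ_t so that the no-arbitrage condition holds); the sketch is illustrative only and not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy conditional moments for n assets (illustrative values only).
n   = 6
A   = rng.normal(size=(n, n))
Sig = A @ A.T                            # Sigma_t (positive semi-definite)
mu  = Sig @ rng.normal(size=n)           # mu_t in the image of Sigma_t (no-arbitrage condition)

Sig_pinv = np.linalg.pinv(Sig)

# MVE weights and maximum squared Sharpe ratio via the pseudoinverse.
w   = Sig_pinv @ mu                      # w_t = Sigma_t^+ mu_t
SR2 = mu @ Sig_pinv @ mu                 # SR_t^2 = mu_t' Sigma_t^+ mu_t

# Minimum-variance SDF evaluated on one return draw x_{t+1} = mu + A z, which has covariance Sigma_t.
x = mu + A @ rng.normal(size=n)
M = 1.0 - mu @ Sig_pinv @ (x - mu)

print("max squared Sharpe ratio:", round(float(SR2), 4))
print("Sharpe of w_t           :", round(float(w @ mu / np.sqrt(w @ Sig @ w)), 4),
      "= sqrt(SR2):", round(float(np.sqrt(SR2)), 4))
print("SDF draw M_{t+1}        :", round(float(M), 4))
```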
Then the MVE portfolio spanned by f_t+1 with weights given by Σ_ f,t^+ μ_ f,t is equally well defined and yields the squared Sharpe ratio SR_ f,t^2=μ_ f,t^⊤Σ_ f,t^+μ_ f,t.[In fact, we have μ_ f,t∈Σ_ f,t, which follows from (<ref>) and using that A = A A^⊤ for any matrix A.] There exists also a minimum-variance SDF that prices only factors. In general, we have SR_ f,t^2≤ SR_t^2. The following proposition gives sufficient and necessary conditions for equality and answers the introductory question <ref>. It generalizes <cit.> to the case where the matrices Σ_t, Σ_ f,t and Φ_t do not have full rank.[<cit.> derive the proof of their Lemma 1 from <cit.>, which assumes that matrices are invertible.] The following are equivalent: * the factor MVE portfolio attains the maximum Sharpe ratio, SR_ f,t^2= SR_t^2; * asset risk premia μ_t are given as covariances with the factors, _t[ x_t+1, f_t+1]=Σ_t W_t, as μ_t ∈( Σ_t W_t); * factors f _t+1 span the full MVE portfolio, μ_ f,t^⊤Σ_ f,t^+ f_t+1 = μ_t^⊤Σ_t^+ x_t+1; * factors f _t+1 span the minimum-variance SDF, M_t+1= 1 - μ_ f,t^⊤Σ_ f,t^+ ( f_t+1 - μ_ f,t). For a large class of tradable factors, and under the assumption that factors and residuals are uncorrelated, the spanning condition (<ref>) holds if and only if the residual risk is unpriced. This is the content of the following proposition. Assume that any of (<ref>), (<ref>) or (<ref>) holds. Then any of (<ref>) or (<ref>) implies the spanning condition (<ref>). Conversely, if the weight matrix W_t satisfies (<ref>), then the spanning condition (<ref>) implies any of (<ref>) or (<ref>). From Proposition <ref> we deduce that any of (<ref>), (<ref>) or (<ref>) and any of (<ref>) or (<ref>) together imply the spanning condition (<ref>). However, the converse is not true, as the following continuation of Example <ref> shows.[Proposition 2 in <cit.> is similar to our Proposition <ref>, but differs in some important points. First, their proposition assumes that the weight matrix is of the form W_t =Φ_t R_t, for some invertible m× m matrix R_t. However, this excludes GLS factors (Example <ref>) unless they are OLS factors (Example <ref>) and Φ_t={ 0} holds, in which case R_t=(Φ_t^⊤Φ_t)^-1. Second, they replace the spanning condition (<ref>) by the stronger assumption that Φ_t⊆(Σ_t W_t) for the converse implication in their proposition. Third, they replace our decomposition (<ref>) by the weaker property that Σ_t =Φ_tΨ_tΦ_t^⊤ + U_tΩ_t U_t^⊤ for some conformable matrices Ψ_t, Ω_t, and a n_t× (n_t-m) matrix U_t for which U_t^⊤Φ_t= 0. The latter holds automatically in our case, see (<ref>). Hence their proposition makes no inference on the correlation between factors and residuals as we do.] [continues=exspanning2] Assume additionally b_1=b_2 and ρ≠ 0. Then (<ref>) follows by inspection. Moreover, some algebra shows that Σ_t W_t [ 0; b_1/ρ ]=Σ_t (Φ_t^+)^⊤[ 0; b_1/ρ ] = μ_t, if b_3 = (1-ρ)b_1/a_1 + (1-ρ)b_1/a_2 + b_1/ρa_3, which shows (<ref>). Hence (<ref>), (<ref>) and the spanning condition (<ref>) hold, while (<ref>), (<ref>) and (<ref>) do not. The following lemma puts Example <ref> into a larger perspective and shows what goes wrong with the failed implication.[In fact, in Example <ref> we have b_t= W_t c_t for c_t= [ 0; b_1/ρ ].] Assume that the weight matrix W_t satisfies either (<ref>) or Φ_t⊆ W_t. Then any of (<ref>) or (<ref>) and the spanning condition (<ref>) imply that there exists a vector b_t∈^n_t such that the vector equality Σ_t W_tΦ_t^⊤ b_t=Φ_t W_t^⊤Σ_t W_tΦ_t^⊤ b_t holds. 
However, not the matrix equality (<ref>), in general. § GENERATIVE MODELS HAVE SPANNING FACTORS In the following we answer the introductory question <ref>. Concretely, we establish sufficient conditions on a data-generating linear factor model (<ref>), which we write here as x_t+1 =Φ_t g_t+1 + z_t+1, such that GLS-type factors are spanning. Here g_t+1∈^m denote some abstract (generally non-tradable) systematic risk factors and z_t+1 the idiosyncratic risk components of the returns x_t+1. We denote their mean and covariance matrices by μ_ g,t_t[ g_t+1], Σ_ g,t_t[ g_t+1], and Σ_ z,t_t[ z_t+1]. We first prove two technical lemmas. Assume that systematic risk factors g_t+1 and idiosyncratic risk z_t+1 are uncorrelated, _t[ g_t+1, z_t+1]= 0, covariance matrix Σ_ g,t is invertible, and idiosyncratic risk is unpriced, _t[ z_t+1]= 0. Then the implied return covariance matrix decomposes as Σ_t = Φ_t Σ_ g,tΦ_t^⊤ + Σ_ z,t, and absence of arbitrage (<ref>) holds. The following auxiliary lemma gives a matrix extension in the case where the idiosyncratic covariance matrix does not have full rank. Let U_t be a n_t× n_t-matrix such that U_t^⊤ U_t= Σ_ z,t^+. Then Σ_ z,t= U_t^⊤ and Σ_ z,t= U_t. Moreover, there exists an invertible n_t× n_t-matrix S_t extending U_t in the sense that S_t = U_t on U_t^⊤ and S_t^⊤= U_t^⊤ on U_t, and S_t: U_t→ U_t^⊤ is a bijection. In particular, S_t= U_t if Σ_ z,t is invertible, and a possible choice is S_t = I_n_t in the degenerate case where Σ_ z,t= 0. We can now prove our announced result. Under the assumptions of Lemmas <ref> and <ref>, the GLS-type factors f_t+1( S_tΦ_t)^+ S_t x_t+1 satisfy spanning condition (<ref>). The implied residuals ϵ_t+1 x_t+1 - Φ_t f_t+1 have zero mean (<ref>) and are uncorrelated with the factors (<ref>). The implied matrix Φ_t W_t^⊤ = Φ_t ( S_tΦ_t)^+ S_t is a (generally non-orthogonal) projection onto Φ_t, and the residuals are equal to the projection of idiosyncratic risk, ϵ_t+1 = ( I_n_t -Φ_t W_t^⊤) z_t+1. Mean and covariance matrices of f_t+1 and ϵ_t+1 are given by the expressions μ_ f,t = ( S_tΦ_t)^+ S_tΦ_tμ_ g,t, Σ_ f,t = ( S_tΦ_t)^+ S_tΦ_t Σ_ g,tΦ_t^⊤ S_t^⊤ (Φ_t^⊤ S_t^⊤)^+ + Q_t , Σ_ϵ,t = Σ_ z,t - Φ_t W_t^⊤Σ_ z,t W_tΦ_t^⊤ = Σ_ z,t -Φ_t Q_t Φ_t^⊤, where Q_t ( S_tΦ_t)^+ S_t Σ_ z,t S_t^⊤ (Φ_t^⊤ S_t^⊤)^+ = ( S_tΦ_t)^+ U_t U_t^+ (Φ_t^⊤ S_t^⊤)^+. Note that S_t S_t^⊤≠Σ_ϵ,t in general in the above statement. Hence, the factors f_t+1 in Proposition <ref> are only of “GLS-type”, rather than the GLS factors as described in Example <ref>. Nevertheless, Proposition <ref> remarkably promises that any return panel linearly generated by abstract risk factors, which are not necessarily tradable, also has a linear representation in terms of tradable factors f_t+1 that load on the same characteristics Φ_t. However, the same statement also reveals that the properties of these tradable factors crucially rely on Φ _t, as well as on Σ _g,t and Σ _z,t in their population formulation. Any empirical application thus necessitates consistent estimators of Φ _t and Σ _t, or, alternatively Σ_ g,t and Σ_ z,t. § CONCLUSION This short note provides a collection of elementary and fundamental properties of conditional linear factor models for unbalanced panels, in particular in the context of asset pricing. Our results are derived for a finite cross section, they are comprehensive, and based upon elementary linear algebra. We exhaustively describe the relation between the covariance matrix of returns, the covariance matrix of the residuals, risk premia, and Sharpe ratios. 
Our results range from the question when a covariance matrix of a panel generated by a linear factor structure can be decomposed into a factor, and a residual part, to the question when the maximum Sharpe ratio attained by the full return panel can also be attained a linear factor portfolio. If the factors are themselves portfolios of the returns, a number of additional powerful properties and qualifications arise that are useful in a portfolio context. We show that any return panel whose data-generating process is linear in abstract, non-tradable factors, has a linear factor representation in terms of tradable factors that load on the same characteristics. The same result also shows that consistent estimators for factor, residual, and return covariances are indispensable for econometric factor analysis. Our analysis provides a simple framework together with simple guidelines for the estimation and specification of linear factor models, comprising also modern formulations based upon neural nets or kernels. Future work could tackle the connection of the results in this short note, to the asymptotic notions introduced in <cit.> and <cit.>. § PROOFS The appendix collects all proofs. §.§ Proof of Lemma <ref> Equivalence of (<ref>) and (<ref>) follows from the elementary decomposition Σ_t = Φ_t Σ_ f,tΦ_t^⊤ + _t[Φ_t f_t+1, ϵ_t+1] + _t[ϵ_t+1,Φ_t f_t+1] + Σ_ϵ,t. Assuming (<ref>), equivalence of (<ref>) and (<ref>) follows because _t[Φ_t f_t+1, ϵ_t+1] = Φ_t _t[ f_t+1, ϵ_t+1]. §.§ Proof of Proposition <ref> Given (<ref>), it follows that _t[Φ_t f_t+1,ϵ_t+1] = _t[Φ_t f_t+1, x_t+1 - Φ_t f_t+1] = Φ_t W_t^⊤Σ_t - Φ_t W_t^⊤Σ_t W_t Φ_t^⊤. Hence (<ref>) is equivalent to (<ref>). Assuming (<ref>), equivalence of (<ref>) and (<ref>) follows because _t[Φ_t f_t+1, ϵ_t+1] = Φ_t _t[ f_t+1, ϵ_t+1] and _t[ f_t+1, ϵ_t+1]= W_t^⊤_t[ x_t+1, ϵ_t+1]. §.§ Proof of Proposition <ref> Given (<ref>), we have Σ_ϵ,t = _t[ x_t+1 - Φ_t f_t+1] = Σ_t - Σ_t W_t Φ_t^⊤- Φ_t W_t^⊤Σ_t + Φ_t W_t^⊤Σ_t W_t Φ_t^⊤ . Hence (<ref>) implies Σ_ϵ,t =Σ_t - Σ_t W_t Φ_t^⊤ and therefore Φ_t W_t^⊤Σ_ϵ,t =Φ_t W_t^⊤Σ_t - Φ_t W_t^⊤Σ_t W_t Φ_t^⊤ = 0, which yields (<ref>). That the converse implication is not true is proved by means of the counterexample given in Example <ref>. §.§ Proof of Corollary <ref> This follows from (<ref>). §.§ Proof of Lemma <ref> Given (<ref>), the residuals ϵ_t+1=( I_n_t-Φ_t W_t^⊤) x_t+1 satisfy Φ_t W_t^⊤ϵ_t+1= 0, and therefore (<ref>). The second statement follows by elementary linear algebra. The last statement follows by means of the counterexample given in Example <ref>. §.§ Proof of Lemma <ref> Property (<ref>) clearly implies (<ref>). Now let v∈ W_t^⊤∩Φ_t. This means that v= W_t^⊤ c for some c∈^n_t such that Φ_t W_t^⊤ c= 0. Given (<ref>), this implies that v= W_t^⊤ c= W_t^⊤Φ_t W_t^⊤ c= 0, which proves (<ref>). This also implies that (Φ_t W_t^⊤)= W_t^⊤.[By elementary linear algebra, this again implies that the adjoint W_tΦ_t^⊤ is a projection with ( W_tΦ_t^⊤)= W_t^⊤. On the other hand, we only have (Φ_t W_t^⊤)⊆Φ_t, without equality in general.] Conversely, (<ref>) reads Φ_t W_t^⊤Φ_t W_t^⊤=Φ_t W_t^⊤, and given (<ref>) this again implies (<ref>), which proves the first part of the lemma. For the second part of the lemma, given (<ref>), matrix algebra shows that W_t^⊤Φ_t W_t^⊤ = R_t ( S_tΦ_t R_t)^+ S_t Φ_t R_t ( S_tΦ_t R_t)^+ S_t= R_t ( S_tΦ_t R_t)^+ S_t= W_t^⊤, where we used that A^+ A A^+= A^+ for any matrix A, which implies (<ref>). 
For the last statement, we first note that (Φ_t W_t^⊤) ⊆(Φ_t R_t) and consider the identity S_tΦ_t W_t^⊤ (Φ_t R_t) = S_tΦ_t R_t ( S_tΦ_t R_t)^+ S_t Φ_t R_t = S_t (Φ_t R_t). Given (<ref>), we infer that Φ_t W_t^⊤ (Φ_t R_t)=Φ_t R_t, and therefore (Φ_t W_t^⊤) = (Φ_t R_t). §.§ Proof of Lemma <ref> This follows from the identity x_t+1=Φ_t W_t^⊤ x_t+1 + ϵ_t+1. §.§ Proof of Proposition <ref> The first claim follows from the L_ _t^2-orthogonality of the factors and residuals, _t[ f_t+1ϵ_t+1^⊤]= 0, which holds due to (<ref>) and (<ref>). The second claim about the uniqueness of the minimizer follows elementary. §.§ Proof of Proposition <ref> <ref>⇔<ref>: Given (<ref>), we can write μ_t =Σ_t^1/2ν_t, and thus μ_ f,t = W_t^⊤Σ_t^1/2ν_t, for some ν_t∈^n_t. We obtain SR_t^2 = ν_t^⊤ν_t and, using the matrix identity A^⊤ ( A A^⊤)^+ = A^+ for any matrix A, SR_ f,t^2 =ν_t^⊤ ( W_t^⊤Σ_t^1/2)^⊤(( W_t^⊤Σ_t^1/2) ( W_t^⊤Σ_t^1/2)^⊤)^+ ( W_t^⊤Σ_t^1/2) ν_t = ν_t^⊤ ( W_t^⊤Σ_t^1/2)^+( W_t^⊤Σ_t^1/2) ν_t = ν_t^⊤ P_Σ_t^1/2 W_t ν_t , where P_Σ_t^1/2 W_t denotes the orthogonal projection onto the image of Σ_t^1/2 W_t. It follows that (<ref>) holds if and only if ν_t = P_Σ_t^1/2 W_t ν_t lies in the image of Σ_t^1/2 W_t, which again is equivalent to (<ref>). <ref>⇒<ref>: Given (<ref>), and because (Σ_t^1/2 W_t)⊆(Σ_t W_t), there exists a vector b_t∈( W_t^⊤Σ_t^1/2) such that μ_t=Σ_t W_t b_t and therefore Σ_ f,t^+μ_ f,t=( W_t^⊤Σ_t W_t)^+ W_t^⊤Σ_t W_t b_t = b_t. On the other hand, we have x_t+1=Σ_tΣ_t^+ x_t+1 with probability one. We obtain μ_ f,t^⊤Σ_ f,t^+ f_t+1 = b_t^⊤ W_t^⊤ x_t+1 = b_t^⊤ W_t^⊤Σ_tΣ_t^+ x_t+1=μ_t^⊤Σ_t^+ x_t+1, with probability one, which is (<ref>). <ref>⇒<ref>: This direction is trivial. <ref>⇔<ref>: This follows from (<ref>) and the aforementioned expression of M_t in terms of the full MVE portfolio. §.§ Proof of Proposition <ref> Using (<ref>) and (<ref>), property (<ref>) implies μ_t = Φ_t W_t^⊤μ_t =Φ_t W_t^⊤Σ_t b_t = Σ_t W_tΦ_t^⊤ b_t= Σ_t W_t c_t, for c_tΦ_t^⊤ b_t, for some b_t∈^n_t, which proves (<ref>)⇒(<ref>). Conversely, using (<ref>) and (<ref>), the spanning condition (<ref>) implies Φ_t W_t^⊤μ_t =Φ_t W_t^⊤Σ_t W_t c_t = Σ_t W_tΦ_t^⊤ W_t c_t =Σ_t W_t c_t=μ_t, for some c_t∈^m, which proves (<ref>)⇒(<ref>). §.§ Proof of Lemma <ref> Under either assumption, we have that (<ref>) implies that there exists a vector b_t∈^n_t such that μ_t =Σ_t W_tΦ_t^⊤ b_t. The claimed vector equality now follows from (<ref>). The last statement follows by means of the counterexample given in Example <ref>. §.§ Proof of Lemma <ref> The decomposition (<ref>) follows from Lemma <ref>. This clearly implies that Σ_t = (Φ_t Σ_ g,tΦ_t^⊤) + Σ_ z,t, and as μ_t=Φ_tμ_ g,t∈Φ_t =(Φ_t Σ_ g,tΦ_t^⊤) by assumption, absence of arbitrage (<ref>) follows. §.§ Proof of Lemma <ref> We have Σ_ z,t= ( U_t^+ ( U_t^+)^⊤) = U_t^+ = U_t^⊤, and Σ_ z,t= U_t, where we used that ( A^⊤ A)^+ = A^+ ( A^+)^⊤ for any matrix A. This proves the first statement. We now construct S_t. By elementary linear algebra, we have ^n_t= U_t⊕ U_t^⊤ = U_t^⊤⊕ U_t, and U_t: U_t^⊤→ U_t is a bijection, and U_t= U_t^⊤ k≥ 0. Let {ξ_1,…,ξ_k} and {ζ_1,…,ζ_k} be any orthonormal bases of U_t and U_t^⊤, respectively. Setting S_t U_t on U_t^⊤ and S_tξ_iζ_i on U_t gives the desired extension. §.§ Proof of Proposition <ref> Lemma <ref> implies that S_t Σ_ z,t S_t^⊤ = U_t ( U_t^⊤ U_t)^+ U_t^⊤= U_t U_t^+ is the orthogonal projection on U_t, where we used that ( A^⊤ A)^+ A^⊤ = A^+ for any matrix A. We obtain ( S_t Σ_ z,t S_t^⊤) S_t = U_t U_t^+ U_t = U_t. 
Given that ( U_tΦ)⊆ ( S_tΦ), by definition of S_t, we thus have proved that S_tΣ_ z,t S_t^⊤ = U_t U_t^+ maps ( S_tΦ_t) into itself. As S_t is invertible, condition (<ref>) for R_t= I_m holds, and Lemma <ref> implies property (<ref>). This again implies (<ref>) and (<ref>) and, by Lemma <ref>, thus (<ref>). By the same token and given (<ref>), condition (<ref>) is equivalent to Σ_ z,t W_tΦ_t^⊤ = Φ_t W_t^⊤Σ_ z,t W_tΦ_t^⊤, which again is equivalent to S_tΣ_ z,t W_tΦ_t^⊤ = S_tΦ_t W_t^⊤Σ_ z,t W_tΦ_t^⊤. Plugging in (<ref>) for Φ_t W_t^⊤, the right hand side of (<ref>) can be written as S_tΦ_t ( S_tΦ_t)^+ S_t Σ_ z,t S_t^⊤ (Φ_t^⊤ S_t^⊤)^+ Φ_t^⊤. Given (<ref>), and as ((Φ_t^⊤ S_t^⊤)^+ Φ_t^⊤)⊆ ((Φ_t^⊤ S_t^⊤)^+)= ( S_tΦ_t) and S_tΦ_t ( S_tΦ_t)^+ is the orthogonal projection on ( S_tΦ_t), we infer that the right hand side of (<ref>) is equal to S_t Σ_ z,t S_t^⊤ (Φ_t^⊤ S_t^⊤)^+ Φ_t^⊤. This again is equal to the left hand side of (<ref>), which implies (<ref>). Proposition <ref> and Lemma <ref> prove (<ref>). Proposition <ref> now implies that the spanning condition (<ref>) holds for f_t+1. Expression (<ref>) and the first summand in expression (<ref>) follow directly from (<ref>). For the second summand in (<ref>), we have the identity ( S_tΦ_t)^+ S_t Σ_ z,t S_t^⊤ (Φ_t^⊤ S_t^⊤)^+ = ( S_tΦ_t)^+ U_t U_t^+ (Φ_t^⊤ S_t^⊤)^+, using (<ref>). For (<ref>), as in the proof of Proposition <ref>, we first derive that (<ref>) implies Σ_ϵ,t =Σ_t - Φ_t W_t^⊤Σ_t W_t Φ_t^⊤ =Σ_ z,t -Φ_t W_t^⊤Σ_ z,t W_t Φ_t^⊤, where we used (<ref>). By similar calculations as above, we derive Φ_t W_t^⊤Σ_ z,t W_t Φ_t^⊤ = Φ_t ( S_tΦ_t)^+ U_t U_t^+ (Φ_t^⊤ S_t^⊤)^+ Φ_t^⊤, which completes the proof. apalike
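To complement the proofs, the spanning result of Section <ref> can also be checked numerically: for returns generated by abstract factors with uncorrelated, unpriced idiosyncratic risk, the GLS-type tradable factors f_t+1 = (S_tΦ_t)^+ S_t x_t+1 attain the full-panel maximum squared Sharpe ratio, and the factor-spanned component is uncorrelated with the implied residuals. The sketch below is a toy verification with arbitrary simulated parameters and an invertible Σ_z,t (so S_t can simply be taken as the inverse Cholesky factor); it is not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 10, 3                                   # toy cross-section and factor dimensions

Phi   = rng.normal(size=(n, m))                # characteristics
A     = rng.normal(size=(m, m)); Sig_g = A @ A.T + 0.1 * np.eye(m)   # Cov of abstract factors g
B     = rng.normal(size=(n, n)); Sig_z = B @ B.T + 0.1 * np.eye(n)   # Cov of idiosyncratic risk z
mu_g  = rng.normal(size=m)

# Implied return moments under x = Phi g + z with E[z] = 0 and Cov(g, z) = 0.
mu  = Phi @ mu_g
Sig = Phi @ Sig_g @ Phi.T + Sig_z

# GLS-type factor weights W^T = (S Phi)^+ S with S^T S = Sig_z^{-1} (Sig_z invertible here).
L   = np.linalg.cholesky(Sig_z)
S   = np.linalg.inv(L)                         # S^T S = Sig_z^{-1}
W_T = np.linalg.pinv(S @ Phi) @ S

mu_f  = W_T @ mu
Sig_f = W_T @ Sig @ W_T.T

SR2_full   = mu   @ np.linalg.pinv(Sig)   @ mu
SR2_factor = mu_f @ np.linalg.pinv(Sig_f) @ mu_f

# Population covariance between the factor-spanned component and the implied residuals.
cov_span_resid = Phi @ W_T @ Sig @ (np.eye(n) - W_T.T @ Phi.T)

print("SR^2 full panel      :", round(float(SR2_full), 6))
print("SR^2 factor MVE      :", round(float(SR2_factor), 6))
print("max |Cov(Phi f, eps)|:", float(np.abs(cov_span_resid).max()))
```

Up to floating-point error the two squared Sharpe ratios agree and the covariance term vanishes, as asserted by the proposition.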
http://arxiv.org/abs/2409.02920v1
20240904175952
RoboTwin: Dual-Arm Robot Benchmark with Generative Digital Twins (early version)
[ "Yao Mu", "Tianxing Chen", "Shijia Peng", "Zanxin Chen", "Zeyu Gao", "Yude Zou", "Lunkai Lin", "Zhiqiang Xie", "Ping Luo" ]
cs.RO
[ "cs.RO", "cs.AI", "cs.CL" ]
Yao Mu, Tianxing Chen, et al. The University of Hong Kong AgileX Robotics Shanghai AI Laboratory Shenzhen University Institute of Automation, Chinese Academy of Sciences <https://robotwin-benchmark.github.io/early-version> RoboTwin: Dual-Arm Robot Benchmark with Generative Digital Twins (early version) Yao Mu ^*†1, 3 Tianxing Chen ^*1, 3, 4 Shijia Peng ^*2, 4 Zanxin Chen ^*2, 4 Zeyu Gao 5 Yude Zou 4 Lunkai Lin 2 Zhiqiang Xie 2 Ping Luo ^†1 Received ????; accepted ???? =============================================================================================================================================== * Equal Contributions. †Corresponding authors: Ping Luo (pluo.lhi@gmail.com) and Yao Mu (muyao@connect.hku.hk). Tianxing Chen completed this work while he was an intern at the University of Hong Kong. § ABSTRACT Effective collaboration of dual-arm robots and their tool use capabilities are increasingly important areas in the advancement of robotics. These skills play a significant role in expanding robots' ability to operate in diverse real-world environments. However, progress is impeded by the scarcity of specialized training data. This paper introduces RoboTwin, a novel benchmark dataset combining real-world teleoperated data with synthetic data from digital twins, designed for dual-arm robotic scenarios. Using the COBOT Magic platform, we have collected diverse data on tool usage and human-robot interaction. We present a innovative approach to creating digital twins using AI-generated content, transforming 2D images into detailed 3D models. Furthermore, we utilize large language models to generate expert-level training data and task-specific pose sequences oriented toward functionality. Our key contributions are: 1) the RoboTwin benchmark dataset, 2) an efficient real-to-simulation pipeline, and 3) the use of language models for automatic expert-level data generation. These advancements are designed to address the shortage of robotic training data, potentially accelerating the development of more capable and versatile robotic systems for a wide range of real-world applications. § INTRODUCTION In the fast-evolving robotics field, the integration of dual-arm coordination and advanced tool use is crucial for developing sophisticated autonomous systems. These capabilities are essential for enabling robots to function effectively in diverse real-world settings such as manufacturing plants, healthcare centers, and homes. By using tools, robots can significantly expand their operational scope, adapting to a variety of tasks and challenges with greater flexibility. However, the advancement in these areas is substantially hindered by the lack of specialized, high-quality training data. These activities, which often require tailored solutions, are difficult to standardize and are typically not well-represented in conventional datasets. Addressing this critical gap, we introduce "RoboTwin", a comprehensive benchmark that includes both real-world teleoperated data and corresponding synthetic data generated by a digital twin. Specifically designed for scenarios involving dual-arm robotic tool use and human-robot interactions. RoboTwin features high-quality annotations and diversity of examples to ensure robust training and evaluation. To collect real-world data, we employ the open-source COBOT Magic platform developed by AgileX Robotics. This platform is outfitted with four AgileX Arms and four Intel Realsense D-435 RGBD cameras, mounted on a robust Tracer chassis. 
The data encompasses a variety of typical tasks, including tool usage and human-robot interaction. Transforming from the collection of real-world data to its virtual replication, the challenge is to create accurate and cost-effective digital twins. Traditional methodologies often rely on expensive, high-fidelity sensors, limiting their widespread adoption. To circumvent these limitations, we have developed a novel, cost-effective approach using Artificial Intelligence Generated Content (AIGC) to construct 3D models from a single 2D RGB image. This method significantly reduces costs while providing lifelike visual representations and supporting physical simulations. Our process begins with converting a 2D image into a detailed 3D model, featuring complex geometry, surface textures, and accurate details, which are crucial for realistic visualizations and simulations. We further enhance the model by defining functional coordinate axes on the object parts, enabling the automated computation of grasp poses essential for robotic manipulations. To enhance the utility and relevance of our dataset, we have also established an innovative pipeline leveraging large language models (LLMs) for the automatic generation of expert-level training data. This methodology not only enriches the dataset with high-quality, scenario-specific examples but also integrates the versatility of LLMs in synthesizing complex interactive sequences. Integrating the reasoning capabilities of GPT4-V <cit.>, we automate the generation of task-specific pose sequences, thereby increasing the precision of task executions. Moreover, we employ GPT4-generated scripts to activate trajectory planning tools, which streamline the programming efforts and expedite the deployment of robotic systems in various environments. The core contributions of this work are: 1) The development of "RoboTwin", a benchmark that includes both real-world teleoperated data and high-fidelity synthetic data generated for corresponding scenarios. 2) The establishment of a convenient real-to-simulation pipeline that requires only a single RGB image from the real world to generate the 3D models of target objects and corresponding scenes. 3) The utilization of large language models (LLMs) combined with simulation environment information to generate code that automatically creates expert-level data. These advancements collectively aim to bridge the gap in robotic training data, significantly enhancing the potential for robots to learn and operate using tools in a manner that mimics human dexterity and interaction finesse. § RELATED WORK §.§ Datasets and Benchmarks for Robotics To enhance the collection of effective demonstrations for robotic tasks, human teleoperation has traditionally been employed. In this method, a human operator manually guides a robot through various tasks <cit.>. Recent advancements have extended this methodology by employing teams of human operators over prolonged periods to assemble substantial real-world datasets <cit.>. An alternative method involves the use of algorithmic trajectory generators within simulations <cit.>, which, while efficient, often depend on privileged information and hand-designed heuristics, making them labor-intensive for arbitrary tasks. However, current systems often fail to produce high-fidelity expert simulation data that accurately mimics data from actual machine operations. 
Although initiatives like MimicGen<cit.> and RoboCaca<cit.> strive to generate simulated expert data using limited human demonstrations, they still heavily rely on predefined scenes and interactive objects. To overcome these limitations, we introduce RoboTwin. This innovative system not only generates expert data and simulation scenes derived from real-world scenarios but also utilizes large language models (LLMs) to generate demonstration codes and expert data for similar tasks involving the same class of objects. This strategy significantly reduces the dependence on continuous human intervention, thereby streamlining the generation of reliable training data for robotic tasks. §.§ Robot Manipulation Learning Methods The adoption of human demonstrations to instruct robots in manipulation skills is a prevalent method in Robot Manipulation Learning <cit.>. Among the techniques, Behavioral Cloning stands out for learning policies offline from these demonstrations. It replicates observed actions from a curated dataset <cit.>. Conversely, Offline Reinforcement Learning enhances policy learning by optimizing actions based on a predefined reward function and exploiting large datasets <cit.>. The Action Chunking with Transformers (ACT) technique integrates a Transformer-based visuomotor policy with a conditional variational autoencoder to structure the learning of action sequences <cit.>. Recently, the Diffusion Policy method has gained prominence. It employs a conditional denoising diffusion process for visuomotor policy representation, effectively reducing the accumulative error in trajectory generation that is often observed in Transformer-based visuomotor policies <cit.>. The 3D Diffusion Policy <cit.> uses point clouds for environmental observations, enhancing spatial information utilization and managing various robotic tasks in both simulated and real environments with only a small number of demonstrations. § REAL-TO-SIM TRANSFER OF THE SCENE §.§ Generative Digital Twin System To synthesize high-fidelity data through simulation, a major challenge is the creation of accurate and cost-effective digital twins. Traditional methods often depend on costly high-precision sensors, which can hinder widespread adoption. In response, we have developed a more economical approach using Artificial Intelligence Generated Content (AIGC) to construct 3D models from simple 2D RGB images powered by Deemos’s Rodin platform[We use Deemos's 3D digital assert Generation Model (from text or image) Rodin: https://hyperhuman.deemos.com/rodinhttps://hyperhuman.deemos.com/rodin]. This technique significantly reduces the reliance on expensive sensors while achieving realistic visual effects and supporting physical simulations. Our innovative pipeline commences with generating a detailed 3D mesh and texture of the target object involved in a robot's task, created from a single real-world image. This capability ensures a high-fidelity recreation of real-world scenarios within a simulated environment. The process begins by transforming a single 2D image into a 3D model that encompasses detailed geometry, surface normals, wireframes, and textures. These features enhance the visual realism and ensure compatibility with physics engines for simulations. Once the 3D model is ready, we assign specific coordinate axes to functional parts of objects within the model. For instance, as shown in Fig. 
<ref>, for a hammer, one axis is aligned with the hammerhead—identifying the functional part—while another axis indicates the approach direction. This strategic alignment is crucial for automating the calculation of grasp poses, which are essential for robotic manipulation and tool usage. Grasp poses are computed perpendicular to the surface normal of the functional part along the designated approach direction axis, facilitating correct and efficient tool use with minimal manual intervention. §.§ Expert Data Generation We leverage the reasoning capabilities of GPT4-V <cit.> to write code that calculates the relationships between key poses and the functional coordinate axes of objects. GPT4-V analyzes task requirements and generates a sequence of poses that align with these requirements, ensuring precise task execution. We also generate code via GPT4 <cit.> to invoke trajectory planning tools based on the computed poses. This automation substantially decreases the time and labor associated with manual programming, facilitating the swift deployment of robotic systems across diverse applications. It also offers a scalable approach for generating high-quality data essential for robotic learning. § BENCHMARK To further research and development in this area, as shown in Fig. <ref>, we introduce a comprehensive benchmark specifically designed to assess dual-arm robots in a variety of scenarios. This benchmark encompasses a diverse set of tasks, each presenting unique challenges that are critical for assessing the dexterity, coordination, and operational efficiency of robotic arms in a simulated environment. The tasks range from simple object manipulation to complex, coordinated actions requiring synchronized movements of both arms. Appendix <ref> outlines the specific tasks and their descriptions, providing a clear framework for comparative analysis and further development of advanced robotic capabilities. For each task, we provide a robust API that supports the generation of expert data across infinitely variable scenarios, such as different object placements and environmental conditions. This feature allows researchers to extensively test and refine the adaptability and precision of robotic systems under controlled yet varied conditions. Additionally, an offline dataset is available for each task, offering pre-generated expert data to facilitate offline training and benchmarking of algorithms. This benchmark aims to bridge the gap between theoretical robotic control models and their practical implementation, ensuring that the robotic systems can perform reliably in dynamic, real-world environments. § REAL-WORLD DATASET For the acquisition of real-world data, we employed the open-source Cobot Magic [Platform Introduction: https://global.agilex.ai/products/cobot-magichttps://global.agilex.ai/products/cobot-magic] platform from AgileX Robotics, which is equipped with four AgileX Arms and four Intel Realsense D-435 RGBD cameras and is built on the Tracer chassis. These cameras are strategically positioned: one on the high part of the stand for an expansive field of view, two on the wrists of the robot's arms, and one on the low part of the stand which is optional for use. The front, left, and right cameras capture data simultaneously at a frequency of 30Hz, as depicted in Figure <ref>. 
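The functional-axis convention described above — one axis aligned with the functional part of the tool and another giving the approach direction, from which grasp poses are computed — can be sketched as a small geometric routine. The function below is only an illustrative reconstruction of that idea; the axis conventions, function name, and homogeneous 4×4 pose format are assumptions and do not reproduce the actual RoboTwin or Rodin implementation.

```python
import numpy as np

def grasp_pose_from_axes(functional_axis, approach_axis, grasp_point):
    """Illustrative sketch: build a 4x4 grasp pose from two annotated object axes.

    functional_axis : 3-vector along the object's functional part (e.g. the hammer head)
    approach_axis   : 3-vector giving the direction the gripper approaches from
    grasp_point     : 3-vector, point on the object where the gripper closes
    """
    z = -np.asarray(approach_axis, dtype=float)
    z /= np.linalg.norm(z)                      # gripper z-axis set opposite to the approach direction
    x = np.asarray(functional_axis, dtype=float)
    x -= (x @ z) * z                            # make it perpendicular to the approach axis
    x /= np.linalg.norm(x)
    y = np.cross(z, x)                          # complete a right-handed frame

    T = np.eye(4)
    T[:3, 0], T[:3, 1], T[:3, 2] = x, y, z      # rotation columns
    T[:3, 3] = grasp_point                      # translation
    return T

# Toy example: a hammer lying along +x, approached from above (+z).
pose = grasp_pose_from_axes(functional_axis=[1, 0, 0],
                            approach_axis=[0, 0, 1],
                            grasp_point=[0.05, 0.0, 0.02])
print(np.round(pose, 3))
```

In a full pipeline the resulting pose would be handed to a trajectory planner; here it is only printed.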
The data collection and alignment are facilitated by tools provided by the ARIO Data Alliance, available at our GitHub repository[Tool for data alignment: https://github.com/ario-dataset/ario-toolshttps://github.com/ario-dataset/ario-tools]. Each captured frame consists of three images from the cameras, each providing an RGB and depth image at a resolution of 640×480 pixels. Additionally, the data includes the poses of the robotic arms' joints and end-effectors for both master and slave configurations, encompassing both left and right arms. All data storage and formatting adhere to the unified standards established by the ARIO Data Alliance. Our dataset task design features two major highlights: a focus on human-robot interaction and tool usage. As shown in Appendix <ref>, we have designed 17 tasks, 9 of which emphasize tool usage, 5 involve interpersonal interactions and 6 tasks are dual-arm. We collected 30 trajectories for each task. During trajectory collection, we broke down the tasks into multiple stages and conducted slower data collection for key sub-trajectories that required precise operations, enhancing the detail of the trajectories for better model learning. § EXPERIMENT Our experimental aim is not to delve into the design choices of different strategy networks but to explore the correctness and effectiveness of our Benchmark expert data. Our experiments are intended to verify: a) the rationality of the COBOT Magic platform settings, and b) the effectiveness of the automatically generated expert data. We utilized the 3D Diffusion Policy (DP3) <cit.> to test six tasks within the benchmark, with each task being tested using strategies trained from 10 sets, 20 sets, and 50 sets of expert data, respectively, to obtain the success rates. For the success criteria of each task, please refer to the appendix. The experimental results, as summarized in Table <ref>, demonstrate the performance of the 3D Diffusion Policy (DP3) across six tasks, each trained with varying quantities of expert demonstration data (10, 20, and 50 sets). Notably, for the "Block Hammer Beat" task, the success rate improved from 24% with 10 demonstrations to 80% with 50 demonstrations. The "Empty Cup Place" task saw success rates soar from 10% to 96% with increased demonstrations. The "Dual-Bottles Pick" task success rates climbed from 10% to 74% as demonstrations increased. The "Block Sweep" task success improved steadily from 28% to 86%, while the "Apple Cabinet Storage" task showed more modest gains, from 30% to 64%. The "Block Handover" task achieved the most significant improvement, reaching 98% success with 50 demonstrations, up from 50%. These results suggest a strong correlation between the number of expert demonstrations and task success, highlighting the effectiveness of the automatically generated expert data in enhancing task performance on the COBOT Magic platform. The data further underscores the importance of ample training examples in the development of robust strategies for complex tasks. § CONCLUSION In this study, we introduce RoboTwin, a benchmark integrating real-world and synthetic data to evaluate dual-arm robots, addressing the significant shortage of specialized training data in robotics. Our dataset, developed using the AgileX Robotics platform and enhanced through generative digital twins powered by Deemos’s Rodin platform, effectively accelerates the training of robotic systems, enabling performance improvements across diverse tasks. 
Our results demonstrate the potential of this hybrid data approach to refine robotic dexterity and efficiency, providing a scalable tool that could revolutionize robotic research and applications.
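To make the description of the captured data concrete — three RGB-D camera views at 640×480 per frame plus joint and end-effector poses for the master and slave arms on both sides, aligned at 30 Hz — here is a minimal container sketch. The field names, joint count, and types are illustrative assumptions and do not reproduce the ARIO Data Alliance format.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class ArmState:
    joint_positions: np.ndarray        # per-joint angles for one arm (6 joints assumed here)
    end_effector_pose: np.ndarray      # 4x4 homogeneous pose of the gripper

@dataclass
class CameraFrame:
    rgb: np.ndarray                    # (480, 640, 3) uint8 image
    depth: np.ndarray                  # (480, 640) depth map

@dataclass
class AlignedFrame:
    timestamp: float
    cameras: dict = field(default_factory=dict)      # e.g. {"front": CameraFrame, "left_wrist": ..., "right_wrist": ...}
    master_arms: dict = field(default_factory=dict)  # {"left": ArmState, "right": ArmState}
    slave_arms: dict = field(default_factory=dict)   # {"left": ArmState, "right": ArmState}

# A dummy frame with the stated 640x480 resolution, sampled as if at 30 Hz.
frame = AlignedFrame(
    timestamp=0.0,
    cameras={"front": CameraFrame(rgb=np.zeros((480, 640, 3), np.uint8),
                                  depth=np.zeros((480, 640), np.float32))},
    master_arms={"left": ArmState(np.zeros(6), np.eye(4))},
    slave_arms={"left": ArmState(np.zeros(6), np.eye(4))},
)
print(frame.cameras["front"].rgb.shape, "frame spacing:", 1 / 30, "s")
```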
http://arxiv.org/abs/2409.02161v1
20240903180000
An Introduction to the Weak Gravity Conjecture
[ "Tom Rudelius" ]
hep-th
[ "hep-th" ]
An Introduction to the Weak Gravity Conjecture Tom Rudelius Department of Mathematical Sciences, Durham University, Durham, DH1 3LE, UK The Weak Gravity Conjecture holds that gravity must be the weakest force. This is true of the familiar forces in our own universe—electromagnetism, for instance, is many orders of magnitude stronger than gravity. But the bold claim of the Weak Gravity Conjecture is that this statement is true, not only for electromagnetism in our universe, but for any similar force in any consistent universe governed by quantum mechanics. In this brief introduction, aimed at advanced undergraduates or beginning graduate students, we elaborate on the precise definition of the Weak Gravity Conjecture, the evidence for it, and some of the remarkable implications of it. We explain how the Weak Gravity Conjecture may play a role in bridging the gulf between the formal mathematics of string theory and the real-world observations of particle physics and cosmology. September 9, 2024 Published in Contemporary Physics. § INVITATION: THE WEAK GRAVITY CONJECTURE AND THE ELECTRON In a first course on Newtonian mechanics, we learn that the gravitational force between a pair of objects of mass m_1 and m_2 is given by F⃗_ grav = - G m_1 m_2/r^2r̂ , where G = 6.67 × 10^-11 N·m^2/kg^2 is Newton's gravitational constant. Not long afterward, in a first course on electromagnetism, we learn that the electromagnetic force between a pair of objects of charge q_1 and q_2 is given by F⃗_ EM = k q_1 q_2/r^2r̂ , where k = 8.99 × 10^9 N·m^2/C^2 is Coulomb's constant. Let us consider now a situation in which a pair of electrons are separated by a large distance r. Both electrons have a mass of m_e = 9.11 × 10^-31 kg and a charge of q_e = -1.60 × 10^-19 C, which means that the gravitational and electromagnetic forces felt by one electron due to its interaction with the other electron are given respectively by F⃗_ grav = - G m_e^2/r^2r̂ = - 5.54 × 10^-71 N · m^2/r^2r̂ F⃗_ EM = k q_e^2/r^2r̂ = 2.30 × 10^-28 N · m^2/r^2r̂ . In particular, the electromagnetic repulsion between the electrons is much stronger than their gravitational attraction, by around 43 orders of magnitude: in the absence of other forces, a pair of distantly separated electrons will accelerate in opposite directions. Thus, as far as the electron is concerned, gravity is the weakest force. The idea behind the Weak Gravity Conjecture (WGC) <cit.> is to promote this observed fact of nature to a universal principle: any repulsive force, in any mathematically consistent universe,[The precise meaning of “mathematically consistent” will be explained in further detail below.] must be stronger than gravity, in the sense that the charge of a particle is larger than its mass, suitably normalized by the constants k and G. In what follows, we will see that this simple statement has profound implications for particle physics, cosmology, mathematics, and more. But before we get there, let us zoom out a bit and put the Weak Gravity Conjecture into its proper context: the landscape of string theory. § STRING THEORY Albert Einstein is famous for his work in two of the most important paradigms in modern physics: quantum mechanics and general relativity.
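Before moving on, the electron comparison of the previous section can be reproduced with a few lines of arithmetic; the short script below simply plugs the quoted constants into Newton's and Coulomb's laws and confirms the roughly 43-orders-of-magnitude gap, which is independent of the separation r.

```python
G   = 6.67e-11     # N m^2 / kg^2, Newton's constant
k   = 8.99e9       # N m^2 / C^2, Coulomb's constant
m_e = 9.11e-31     # kg, electron mass
q_e = 1.60e-19     # C, magnitude of the electron charge

grav_coeff = G * m_e**2    # |F_grav| * r^2
em_coeff   = k * q_e**2    # |F_EM|   * r^2

print(f"G m_e^2 = {grav_coeff:.3e} N m^2")              # ~5.5e-71
print(f"k q_e^2 = {em_coeff:.3e} N m^2")                # ~2.3e-28
print(f"F_EM / F_grav = {em_coeff / grav_coeff:.3e}")   # ~4e42: gravity weaker by ~43 orders of magnitude
```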
Quantum mechanics is what earned Einstein (and many other physicists) a Nobel Prize. It is a theory of very small objects—protons, electrons, atoms, and so on—governed by equations like the Schrödinger equation. General relativity, on the other hand, is Einstein's theory of gravity. Thus it describes the physics of very heavy objects: galaxies, stars, planets, and the universe taken as a whole. The confirmation of general relativity was what turned Einstein from a successful physicist into a household name. But what happens if you are faced with objects that are both very small and also very heavy (or, more precisely, very energetic)? For such objects, one must combine quantum mechanics and general relativity into a quantum theory of gravity, or, as it's usually called, a theory of quantum gravity. It turns out that this is not so easy to do: the laws of quantum mechanics and the laws of general relativity don't play nicely for sufficiently dense objects, indicating that an entirely new paradigm is needed. What exactly does “sufficiently dense” mean, you might wonder? How often does this actually come up, in practice? The answer is: only in extreme circumstances, which you're unlikely to experience in the real world. Quantum gravity is important for understanding the singularity at the center of a black hole, or the cosmic singularity at the beginning of our universe known as the “big bang.” It is important for understanding the period right after the big bang, known as cosmological inflation (see Section <ref> for more on this topic). And it is important for understanding fundamental issues facing modern physics, such as the cosmological constant problem. More broadly, quantum gravity is important because, in a sense, it is the last missing piece of the puzzle of fundamental physics. The first step in the puzzle was the combination of Newtonian mechanics with electromagnetism to give special relativity. Special relativity combines with gravity to give general relativity. Meanwhile, electromagnetism and thermodynamics together give rise to quantum mechanics, which combines with special relativity into relativistic quantum field theory. The last step, then, is to combine quantum field theory with general relativity into a theory of quantum gravity, thereby yielding a complete description of the fundamental laws of nature. Quantum field theory and general relativity are, by now, among the most well-tested ideas in all of human thought. Quantum gravity, on the other hand, is far more mysterious. However, we aren't completely in the dark when it comes to quantum gravity, and this is where string theory comes in, because... String theory is (by far) the best understood theory of quantum gravity. Indeed, some would go so far as to say that string theory is the only mathematically consistent theory of quantum gravity. In this article, we will not take a stand on either side of this debate; for our purposes, the most important thing about string theory is that it is a mathematically consistent theory of quantum gravity, which means that we can use it as a toy model to understand what quantum gravity is like. String theory is called string theory because it is a theory of strings. In particle physics, we are accustomed to thinking of the fundamental constituents—quarks, leptons, gauge bosons, etc.—as being point-like in space. In string theory, however, the fundamental constituents are extended in one dimension, so we call them strings. They have a length, which we call the string length. 
Strings can be open (with disjoint endpoints) or their endpoints can join together to form a closed string (i.e., a loop). There are a number of different types of string theory, of which Type I, Type IIA, Type IIB, and heterotic are among the most famous. All of these types of string theory exist naturally in 10 dimensions (1 dimension of time, 9 of space). At first, these different types of string theory appeared to be different theories of quantum gravity, but in the mid 1990s, people realized that these seemingly distinct theories are actually just different limits of a single theory, called M-theory <cit.>. M-theory exists naturally in 11 dimensions, and different limits of M-theory reduce to different types of string theory similarly to how one limit of general relativity reduces to Newtonian gravity, while another limit reduces to special relativity. The different types of string theory are thus said to be related by dualities, i.e., they represent different descriptions of the same theory. The collection of all of these dualities is called the string duality web. Of course, the world we observe is 4-dimensional (including time), not 10- or 11-dimensional. Indeed, this is quite clear from the 1/r^2 falloff of the gravitational force we saw in (<ref>): this scaling behavior would be replaced by 1/r^d-2 in d dimensions, i.e., 1/r^8 in 10d and 1/r^9 in 11d, so the observed gravitational force immediately tells us that we live in 4d. In string theory, the way to deal with this problem is to curl up, or “compactify,” the remaining 6 spatial dimensions into a microscopic, compact manifold, leaving just 4 large dimensions of spacetime. Provided this compact space is small enough, the gravitational effects of the compact dimensions will be negligible, resulting in the correct 1/r^2 scaling of gravity. Above, we agreed to remain agnostic about the question of whether or not string theory is the unique theory of quantum gravity. In 10 dimensions, however, the duality web gives us a hint in this direction: what naively seems like a collection of distinct theories is actually a single theory with a collection of different, dual descriptions. Relatedly, string theory in 10 dimensions has no free, dimensionless parameters: the only parameter of string theory is the string length, which has dimensions of length, and there are no other parameters with dimensions of length with which to form a dimensionless ratio. If dimensionless parameters did exist, they could be varied to move from one theory to another. Thus the absence of such parameters indicates the uniqueness of the theory. This absence of dimensionless parameters in string theory stands in stark contract with the world we observe: the standard models of particle physics and cosmology have many free parameters (e.g. Newton's Gravitational constant, the Planck constant, the cosmological constant, the fine structure constant, the mass of the electron, etc.), which can be used to form many dimensionless ratios. How does one go from string theory in 10d, with no free parameters, to the 4d standard models of particle physics and cosmology that we know and love, which have many? The answer is that, upon compactifying on a 6-manifold, the resulting 4d parameters are related to the details of the compactification, such as the topology, size, and shape of the 6-manifold in question, and the generalized fluxes threading certain submanifolds of this manifold. 
There are millions of eligible 6-manifolds and an exponentially large number of ways to thread fluxes through the submanifolds of one of these manifolds, leading to an exponentially large number of possible solutions to the equations of string theory, and thus an exponentially large number of possibilities for the parameters of the resulting 4d theory. Not every 6-manifold will work, and not every configuration of fluxes will lead to a consistent, stable universe in 4d. We don't know for sure how many solutions there are, but naive estimates include numbers as big as 10^500 and 10^272,000<cit.>! These estimates should be taken with an enormous grain of salt, but suffice it to say that the evidence points to a vast collection of 4-dimensional solutions to string theory. Each of these solutions represents a different effective field theory (EFT), governed by a different choice for the laws of nature in the 4-dimensional universe.[An effective field theory is a framework for describing a system using quantum field theory. The adjective “effective” emphasizes that the field theory is an approximation to a more fundamental theory of quantum gravity. An EFT is reliable at low energies but breaks down at high energies, where it must be replaced by the more fundamental theory.] The set of all EFTs that are consistent with string theory form what is known as the string landscape. Whether you love the landscape or hate the landscape depends on your perspective. Certainly, though, if your goal is to test string theory experimentally, the landscape is a headache. Simply put, the more possibilities you have, the harder it is to make a unique prediction. From this perspective, there is some good news: although the string landscape may be very large, it is likely only a small part of an even larger swampland of EFTs <cit.>. EFTs in the swampland have the property that, while they may seem consistent to a low-energy effective field theorist, they are ultimately incompatible with string theory. At present, our ability to map the string landscape and the swampland is in its early stages. However, to the best of our understanding, the cartoon picture looks a little bit like Figure <ref>. The landscape consists of a chain of islands inside a much larger ocean of swampland. In a sense, the Landscape is very large, there are possibly 10^500 or 10^272,000 islands in this island chain. On the other hand, there is another sense in which the Landscape is very small. It likely represents only a measure zero subset of the space of all EFTs. The goal, then, is to try to delineate the boundary between the landscape and the swampland, to determine what are the essential, universal features of the EFTs in the string landscape which distinguish them from the inconsistent EFTs in the swampland. This task is especially important for string theorists who are concerned with experimental tests of string theory, because if you can show that some model of cosmology or particle physics involves an EFT that lies in the swampland, then experimental verification of that model would rule out string theory. However, even from a purely theoretical perspective, the question of what are the universal features of string theory is a crucial one as we seek to better understand the mysterious paradigm that is quantum gravity. Thus, the task of charting the string landscape is fundamentally a mathematical one, but it often proceeds with an eye towards the real world of particle physics and cosmology. 
And this, finally, is where the WGC comes into the story. The WGC is one example of what is known as a “swampland conjecture.” It is a statement which is observed to be true in a vast collection of EFTs in the string landscape, and which is conjectured to be true in all EFTs in the landscape. As with any conjecture in mathematics or physics, there are three key questions that we must ask of the WGC: 1) What is the precise statement of the conjecture? 2) Is it true? 3) If it is true, what does it imply? We will now look at each of these questions in turn. § THE WEAK GRAVITY CONJECTURE, PRECISELY In Section <ref>, we introduced the WGC as a statement about the relative size of the repulsive electromagnetic force and the attractive gravitational force between a pair of electrons. This is a useful way to think of the WGC in most cases, but there is a slightly more precise and more general way to define it, with reference to black hole physics. Recall that a black hole is a region of spacetime in which gravity is so strong that even light cannot escape. In classical general relativity, a black hole is defined by three numbers: its mass M, angular momentum J, and charge Q. Here and below, we assume that the angular momentum vanishes, J=0, leaving us with a charged, Reissner-Nordström black hole of mass M and charge Q. Such charged black holes can be sorted into three types: * Subextremal: Q< M. * Extremal: Q=M. * Superextremal: Q> M. Here and below, we are working in so-called “natural units,” in which various fundamental constants such as Newton's gravitational constant and the speed of light are set equal to 1: G = c = 4 πε_0 = 1. These three different types of black holes have very different behavior. The case of a subextremal black hole is the most commonplace: The black hole has a time-like singularity hidden behind a light-like event horizon, which is the point of no return. Anything that passes through the event horizon–including light–is trapped inside it. There is a second type of horizon, known as a Cauchy horizon, inside the black hole. The extremal black hole is similar to the subextremal black hole, except that here the inner (Cauchy) horizon and the outer (event) horizon coalesce into a single horizon. The superextremal black hole is the most unusual: for Q > M, the event horizon disappears entirely, and we are left with a naked singularity. The (weak) cosmic censorship hypothesis in general relativity holds that such naked singularities cannot arise from generic initial conditions: any singularity must be hidden from a distant observer by an event horizon. This ensures that no light ray emanating from the singularity can reach a distant observer: the only way to get information about the black hole singularity is to fall past the event horizon. If this hypothesis is correct, it means that a superextremal black hole cannot form through ordinary dynamical processes; in contrast, a subextremal black hole can certainly be created via the gravitational collapse of a shell of charged matter. Finally, note that this discussion of charged black holes applies not only to electromagnetism in our universe, but to any gauge force associated with the Lie group U(1). (For more on gauge forces and Lie groups, see the appendix below.) Electromagnetism is the most famous and most relevant example of such a U(1) gauge theory, but a different EFT in the landscape might have a different U(1) gauge group. 
For instance, some models of dark matter include a “dark photon,” which is the gauge boson associated with a different U(1) gauge force, analogous to the ordinary photon for ordinary electromagnetism. The above discussion of subextremal, extremal, and superextremal black holes applies to black holes charged under the dark U(1) just as it does for black holes charged under electromagnetism. With this background, we are finally ready to give the precise definition of the WGC: [The Weak Gravity Conjecture] Given some EFT in the landscape with a U(1) gauge force, there exists a “superextremal” particle, i.e., a particle that satisfies[For the sake of simplicity, we assume without loss of generality that the charge of a given particle or black hole is positive: q > 0, Q>0. Any particle of negative charge -q, such as an electron, has an antiparticle of positive charge q, such as the positron.] q/m≥Q/M|_ ext , where Q/M|_ ext is the charge-to-mass ratio of a large, extremal black hole. Let us break down this definition piece by piece. To begin, note that the WGC is a swampland conjecture, which means it deals with EFTs in the landscape. It is not hard to write down a Lagrangian for an EFT with a single subextremal particle that violates (<ref>), but the claim of the WGC is that this theory is not consistent with string theory. Second, note that the term “superextremal” in the definition of the WGC is really a shorthand for “superextremal or extremal.” There are important examples of theories in the string landscape in which the WGC bound (<ref>) is exactly saturated, and there are no strictly superextremal particles. Third, note that the WGC does not require every particle in the theory to be superextremal: the claim is merely that there exists at least one. For example, in the case of electromagnetism in our universe, neutrinos are massive (m ≠ 0) but uncharged (q = 0), so they violate the WGC bound (<ref>). However, the WGC is still satisfied because the electron satisfies this bound. Fourth, in simple cases, the ratio Q/M|_ ext appearing on the right-hand side of the WGC bound is a constant. By an appropriate choice of units, this constant can be set to 1, and the WGC bound (<ref>) is given simply by q ≥ m. However, in theories with massless scalar fields, the extremality bound can be modified by changing the values of these scalar fields. In this case, the ratio Q/M|_ ext is given by some order-one number γ in natural units, but γ=γ(ϕ^i) may depend on the massless fields ϕ^i. Finally, the definition of the WGC specifies that we are dealing with a “large” black hole because the charge-to-mass ratio Q/M of small black holes can be modified slightly relative to the value at infinity. The value in (<ref>) should be understood as the limiting value as the size of the black hole goes to infinity, i.e., lim_M →∞ Q/M|_ ext. How does this definition of the WGC compare to the definition given in Section <ref> in terms of the relative strength of the electromagnetic repulsion and gravitational attraction of a pair of widely separated electrons? In the absence of massless scalar fields, these turn out to be exactly the same: the condition that a particle is superextremal is precisely the condition that two copies of the particle will repel each other at long distances. However, this precise correspondence breaks down in the presence of massless scalar fields. 
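To get a feel for how dramatically the electron satisfies this bound in our universe, one can plug in standard values of the fundamental constants. The short numerical check below is an illustrative order-of-magnitude estimate (not part of the original discussion), using textbook SI values; in units where an extremal black hole has Q/M = 1, a particle's dimensionless charge-to-mass ratio is z = q / (sqrt(4*pi*eps0*G)*m), and z^2 is exactly the force ratio described above.

```python
# Order-of-magnitude check for electromagnetism in our universe (illustrative
# only, textbook SI values).  In units where an extremal black hole has Q/M = 1,
# a particle's charge-to-mass ratio is z = q / (sqrt(4*pi*eps0*G) * m), and z**2
# is the ratio of Coulomb repulsion to gravitational attraction between two
# copies of the particle.
import math

G    = 6.674e-11     # m^3 kg^-1 s^-2
eps0 = 8.854e-12     # F m^-1
e    = 1.602e-19     # C
m_e  = 9.109e-31     # kg

z_electron = e / (math.sqrt(4.0 * math.pi * eps0 * G) * m_e)
print(f"z(electron)      ~ {z_electron:.1e}")    # ~ 2e21, vastly superextremal
print(f"F_EM / F_gravity ~ {z_electron**2:.1e}") # ~ 4e42
```

The electron exceeds the extremality ratio by roughly twenty-one orders of magnitude, which is why, in our universe, gravity really is by far the weakest force acting on elementary particles.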
Massless scalar fields mediate a long-range attractive force,[A massive scalar field like the Higgs boson also mediates a force between particles, but this force is not a “long-range” force because its strength decays exponentially at long distances r as F ∼exp(- α m_H r), where m_H is the Higgs mass, α is a constant, and we are working in natural units, with ħ = c =1.] and as a result two superextremal particles may attract each other <cit.>; conversely, two subextremal particles may repel each other. In practice, we expect that massless scalar fields will exist only in supersymmetric theories. In non-supersymmetric theories, quantum corrections will generate a nonzero mass for a scalar even if its mass is set to zero classically, but in supersymmetric theories, these quantum corrections cancel between bosons and fermions, and the mass of the scalar field is protected. As a result, for practical applications to the real world (where supersymmetry is broken), the distinction between self-repulsiveness and superextremality is irrelevant. However, most of the evidence for the WGC in string theory comes from supersymmetric EFTs, which feature massless scalar fields; in these theories, the distinction is important to keep in mind. §.§ The WGC with multiple photons So far, we have focused on theories with a single U(1) gauge force, with gauge field A_μ = (Φ, A_1, A_2, A_3). What happens, though, if the theory has multiple such forces? In this case, we expect that the WGC should apply to every U(1) in the theory, individually. In a theory with two U(1)s, then, we expect one particle to satisfy the bound (<ref>) with respect to the first U(1) and a second particle to satisfy it with respect to the second U(1). However, we can also make a basis change on the U(1) gauge fields: ([ (A_μ^(1))'; ( A_μ^(2))' ]) = ([ cosθ sinθ; - sinθ cosθ ]) ·([ A_μ^(1); A_μ^(2) ]) . Denoting the charge of a particle under the ith U(1) as q_i, this induces a corresponding transformation on the charges of the particle (the same rotation, so that the coupling ∑_i q_i A_μ^(i) is left unchanged): ([ q_1'; q_2' ]) = ([ cosθ sinθ; - sinθ cosθ ]) ·([ q_1; q_2 ]) , This basis choice is unphysical: it is merely a relabeling of the degrees of freedom of the theory. As a result, the WGC must be satisfied not only for the original gauge fields A_μ^(1), A_μ^(2), but also for the rotated gauge fields (A_μ^(1))', (A_μ^(2))'. It turns out that there is a simple geometric way to rephrase the above definition of the WGC for a theory with multiple gauge fields <cit.>. To begin, given some particle species, we consider the vector of charges q_i of that particle under each of the U(1) gauge fields in the theory. In a theory with n U(1)s, this vector will have n components, one for each U(1). Next, we define the charge-to-mass vector z_i of the particle as z_i = q_i/m . With this, the WGC is equivalent to the statement that the convex hull of the charge-to-mass vectors of each of the charged particles must contain the black hole region, where subextremal and extremal black holes live. In the simplest case where there are no massless scalar fields, the black hole region is simply the unit ball (in natural units), so the WGC holds that the convex hull of the particle charge-to-mass vectors must contain the unit ball. 
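The convex hull condition is easy to check numerically. The snippet below is an illustrative sketch only: the charge-to-mass vectors are invented for the example, and we assume no massless scalar fields, so that the black hole region is the unit disk. Note in particular that a spectrum in which every particle is individually superextremal can still fail the condition.

```python
# Toy check of the convex hull condition for two U(1)s (no massless scalars,
# so the black hole region is the unit disk).  The unit disk lies inside the
# convex hull of the charge-to-mass vectors z_i iff every facet of the hull
# is at distance >= 1 from the origin.
import numpy as np
from scipy.spatial import ConvexHull

def satisfies_convex_hull_wgc(z_vectors):
    hull = ConvexHull(np.asarray(z_vectors, dtype=float))
    # hull.equations rows are [n_x, n_y, b] with unit outward normal n and
    # n.x + b <= 0 inside the hull; the facet's distance to the origin is -b.
    return bool(np.all(-hull.equations[:, -1] >= 1.0))

# Four species, each individually superextremal (|z| = 2 > 1): the hull contains the disk.
print(satisfies_convex_hull_wgc([(2, 0), (0, 2), (-2, 0), (0, -2)]))               # True

# Four species, still individually superextremal (|z| = 1.1 > 1), but the facets
# pass at distance 1.1/sqrt(2) < 1 from the origin: the convex hull condition fails.
print(satisfies_convex_hull_wgc([(1.1, 0), (0, 1.1), (-1.1, 0), (0, -1.1)]))       # False
```

The second example is the geometric reason why the multi-U(1) statement is genuinely stronger than demanding a superextremal particle for each U(1) separately.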
It is not hard to see that this reduces to the bound (<ref>) in the case of a single U(1): The 1-dimensional unit ball is simply an interval of length 2 centered at the origin, and the statement that a 1-dimensional charge-to-mass vector z = q/m lies outside this ball is simply the familiar statement that q > m. A 2-dimensional case is shown in Figure <ref>. In a theory with massless scalar fields, the black hole region may be modified, taking a different size or shape. However, this black hole region will always contain the unit ball. This means that the WGC can only become stronger by the addition of massless scalar fields to the theory, never weaker. §.§ The WGC for strings and branes So far, we have focused on the ordinary WGC, which bounds the mass of a particle in terms of its charge. But the WGC admits a natural generalization to higher-dimensional objects charged under higher-dimensional gauge forces. In electromagnetism, the gauge field A_μ = (Φ, A_1, A_2, A_3) transforms under a gauge transformation as A_μ→ A_μ' = A_μ + ∂_μλ(x) , where λ is a scalar function. Mathematically speaking, A_μ is an example of a 1-form gauge field. More generally, we may consider a p-form gauge field B_μ_1, ..., μ_p, which has p indices. This transforms under a gauge transformation as B_μ_1 ,...,μ_p→ B_μ_1 ,...,μ_p' = B_μ_1 ,...,μ_p + ∂_[μ_1Λ_μ_2 ,...,μ_p](x) , where Λ_μ_2,...,μ_p is a (p-1)-form. The objects charged under an ordinary gauge field are pointlike (i.e., 0-dimensional) particles. In contrast, the objects charged under a p-form gauge field are (p-1)-dimensional objects, often referred to as (p-1)-branes. So, a 0-brane is a particle, a 1-brane is a string, and so on. Whereas a particle is characterized by its mass, a higher-dimensional brane is characterized by its tension, which has units of mass per unit length. The ordinary WGC places a lower bound on the charge-to-mass ratio of a particle charged under a 1-form gauge field. This admits a natural generalization to the case of a p-form gauge field, requiring a (p-1)-brane of tension T, charge q, which satisfies q/T≥Q/T|_ ext∼ 1 , where now Q/T |_ ext represents the charge-to-tension ratio of an extremal black brane charged under the p-form of interest <cit.>. This definition of the (p-1)-brane WGC generally makes sense for p = 1, ..., d-3, where d is the dimension of spacetime. However, it may be possible to extend the definition to the p=0 case as well. Here, the 0-form gauge field in question is a periodic scalar field ϕ, i.e., a scalar field ϕ subject to the identification ϕ≡ϕ + 2 π f for some constant f. The charged objects are (-1)-branes, which may seem strange, but can be understood as follows: recall that a particle is 0-dimensional, which means that it is localized in space, but still it travels through time, i.e., it has a 1-dimensional worldline. In contrast, a (-1)-brane is localized in both space and time. It is known as an instanton, because it is associated with just a single instant in time. In place of a mass or tension, an instanton has a quantity known as an action, S, which is dimensionless. The instanton version of the (<ref>) is a bit nebulous, because there is no such thing as a “black instanton,” so Q/S|_ ext is not a well-defined quantity. 
The typical response to this is simply to allow the instanton version of the WGC (also referred to as the axion WGC) to be a nebulous statement, which requires q/S ≥ c , where c ∼ O(1) is some order-one number in natural units, and the minimum charge of the instanton is given by the inverse of the periodicity parameter, q = 1/f. The question of what the precise value of c should be in this expression is still open <cit.>. Answering this question is important because, as we will see below, the instanton version of the WGC is arguably the most important version for cosmological purposes. § EVIDENCE FOR THE WEAK GRAVITY CONJECTURE The WGC is a conjecture, not a theorem; it has yet to be proven in full generality. However, several lines of evidence point to its validity. In this section, we will discuss a few of them. §.§ Heuristic motivation: black hole evaporation Let us begin with the original argument given in favor of the WGC <cit.>. We will see that this argument has a big hole; it is better understood as a heuristic argument than a serious attempt at proof. Nonetheless, this argument will help us think about the WGC from a more physical perspective, and ultimately it may point us in the right direction to prove the WGC. The argument runs as follows: consider a large, extremal Reissner-Nordström black hole of mass M and charge Q=M. If the WGC is satisfied, then there exists a particle of mass m and charge q ≥ m, and the black hole can decay through a process known as Hawking evaporation by emitting one of these particles,[More precisely, this evaporation process should be called Schwinger pair production, as it involves the emission of a charged particle.] as shown in Figure <ref> (left). The resulting black hole will then have mass M' ≈ M - m and charge Q' = Q - q and will be subextremal, M' ≥ Q'. This decay process can then repeat itself ad nauseam, until the black hole has evaporated completely. Indeed, this process is roughly what would happen to a hypothetical extremal black hole in our own universe: it would emit electrons until it had shed all of its charge, and eventually it would decay to zero size by emitting uncharged Hawking radiation. But what if the WGC is violated? Then, as shown in Figure <ref> (right), the extremal black hole in question could only shed its charge by emitting a subextremal particle, with m > q. After this, the resulting black hole would be superextremal, with M' = M - m < Q' = M- q. As discussed above, this superextremal black hole would introduce a naked singularity to the spacetime, and thus this decay process would violate the cosmic censorship hypothesis. Thus, if we assume that there are no superextremal black holes in the theory, the extremal black hole is kinematically stable: any decay process is forbidden, and the extremal black hole just sits there indefinitely. Hence, if stable extremal black holes are forbidden, we conclude that the WGC must be satisfied. The problem with this argument is that no one has yet come up with a fully convincing reason as to why stable extremal black holes present a problem for quantum gravity. Indeed, there exist supersymmetric theories in which the WGC is saturated, i.e., q=m: in these cases, an extremal black hole can decay only at threshold, and the lifetime of such a decay process is infinite, so extremal black holes are marginally stable. 
To finish off the argument and prove the WGC, then, we would need to come up with an argument against stable, extremal, non-supersymmetric black holes that leaves open the possibility of stable, extremal, supersymmetric black holes. §.§ Top-down evidence: Examples in string theory At this point, the strongest evidence for the WGC arguably comes from many examples in string theory, and the lack of a counterexample. In this subsection, we detail one illustrative example in which the WGC is satisfied: heterotic string theory. There are actually two different types of heterotic string theory, which are distinguished by their gauge groups: one has gauge group SO(32), while the other has E_8 × E_8. Everything we have to say about heterotic string theory will apply to both types, but for concreteness we will focus on the SO(32) case. Further details on these groups can be found in the appendix. Both types of heterotic string theory exist naturally in ten dimensions, but as usual, we may compactify six of these dimensions on a 6-manifold to get a 4d EFT. For simplicity, let us focus on the simplest 6-manifold: a 6-dimensional torus T^6, which may be thought of as the direct product of six circles, generalizing the construction of the “doughnut” T^2 as a product of two circles. In the process of compactifying to 4d, we may break the SO(32) gauge group down to its maximal abelian subgroup, U(1)^16. This is a collection of 16 U(1)'s, i.e., 16 different types of electromagnetism, each with their own photon. For concreteness, let us focus here on one particular U(1) gauge force (any other U(1) will be more or less equivalent). For any positive integer n, the theory has a particle of charge q = g n and mass m^2 = g^2 (n^2 - 1) , where g is the coupling constant of the U(1) gauge force (e.g., g ≈ 0.3 for electromagnetism in our universe). This spectrum of particles charged under this U(1) is shown in Figure <ref>. The lightest charged particle is massless, so clearly it satisfies the WGC bound, as q/m →∞. However, as you can see from the figure, there is actually a whole tower of superextremal particles of increasing charge and increasing mass, which asymptote to the extremality bound Q=M. Indeed, for very large charge, the states in this tower represent the extremal black holes themselves, which have M ≫ 1 in natural units. This example is illustrative of a far more general phenomenon in string theory: typically, the WGC is satisfied not only by a single particle, or even by a finite number of particles, but rather by an infinite tower of superextremal objects of increasing charge and increasing mass. At small charge, these objects can be thought of as particles, like the electron for electromagnetism. At very large charge, they can be thought of as extremal black holes, whose charge-to-mass ratio is greater than or equal to the charge-to-mass ratio of an infinitely large extremal black hole, i.e., Q/M|_ext^finite M≥lim_M →∞Q/M|_ ext . This phenomenon is sometimes elevated to a conjecture, called the tower Weak Gravity Conjecture <cit.>, which holds that every U(1) must have such a tower of superextremal states. Clearly, this conjecture implies the ordinary WGC. Here, we have focused on one simple example, where SO(32) heterotic string theory is compactified on T^6. However, the WGC (and the stronger tower WGC) have been verified in a wide array of string compactifications on more complicated 6-manifolds <cit.>. 
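Using the charge and mass formulas quoted above, the tower structure can be made explicit with a few lines of arithmetic. The small sketch below (illustrative only; the coupling g cancels out of the ratio) prints the charge-to-mass ratio of the first few states and the amount by which each exceeds the extremality bound.

```python
# The heterotic tower quoted above: q_n = g*n, m_n = g*sqrt(n^2 - 1) for n >= 1.
# Every state has q/m >= 1, and the excess over extremality falls off roughly as
# 1/(2 n^2), so the large-n members of the tower approach the extremal black
# hole bound from above.
import numpy as np

n = np.arange(2, 8)                 # n = 1 is massless (q/m -> infinity)
z = n / np.sqrt(n**2 - 1)           # charge-to-mass ratio in extremal units
excess = z - 1

for ni, zi, ei in zip(n, z, excess):
    print(f"n = {ni}:  q/m = {zi:.4f}   q/m - 1 = {ei:.1e}   (~ 1/(2 n^2) = {1/(2*ni**2):.1e})")
```

The positive excess at large n is precisely the statement that the finite-charge black holes in this tower sit slightly above the asymptotic extremality line, a point we return to immediately below.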
Notably, <cit.> proved that the tower WGC holds for any U(1) gauge group that arises from perturbative physics in string theory, leaving just the non-perturbative cases yet to prove in full generality. §.§ Bottom-up evidence: EFT arguments The vast array of string compactifications offers compelling evidence for the WGC, but ultimately it is difficult to imagine a proof of the WGC coming from such means: at present, we have a relatively good handle on certain classes of supersymmetric EFTs in the string landscape; we know very little about any non-supersymmetric EFTs in the landscape, let alone 10^500 or 10^272,000 of them! As a result, if we are going to prove the WGC, it is likely going to come from a “bottom-up” perspective, as some sort of low-energy consistency condition on black hole physics. While no one has yet come up with a completely convincing proof, there seems to be a general consensus on the form such a proof would take. Recall from our discussion above that the charge-to-mass ratio of a finite-sized black hole can, in general, depend on the size of the black hole <cit.>: Q/M|_ ext^finite M = 1 + ε(M) , where |ε(M)| ≪ 1 is a function of the mass M of the extremal black hole satisfying lim_M →∞ε(M) = 0. If ε(M) is negative, then an extremal black hole does not satisfy the WGC bound (<ref>). In order to satisfy the WGC in this case, we would need additional, superextremal particles of small charge. This possibility is shown in Figure <ref>. However, if ε(M) is positive (Figure <ref>), then an extremal black hole of mass M satisfies the bound (<ref>), and the WGC is satisfied! At least, the letter of the law is satisfied, though perhaps not the spirit of it: after all, the original motivation of the WGC was to ensure that black holes can decay by emitting superextremal particles, so it isn't very satisfying if the only superextremal states in the theory are themselves black holes. Nonetheless, a proof of ε(M) ≥ 0 for some M would suffice to prove the WGC in its mildest form, and as a result many works have sought to prove that consistency of black hole thermodynamics/EFT requires ε(M) ≥ 0 <cit.>. A proof of ε≥ 0 would further lend support to the tower WGC introduced in Section <ref>, which requires the existence of both superextremal particles at small charge as well as extremal black holes with ε(M) ≥ 0 at large charge. Indeed, comparing (<ref>) with (<ref>) for n ≫ 1 (or Figure <ref> with Figure <ref>), we see that the spectrum of black holes in heterotic string theory on T^6 has (in the lowest order approximation) the correct sign ε > 0. Thus, a general proof of ε≥ 0 would go a long way towards confirming the tower WGC, and conversely an example of a theory in the landscape with ε(M) < 0 would be a big surprise, forcing us to rethink our understanding of the WGC and the string landscape. But let us suppose that we are interested in the spirit of the WGC, not merely the letter of the law. Is there a reason to believe that the WGC should be satisfied by a light particle, not merely a finite-sized black hole? Several lines of reasoning suggest the answer to this question is yes; for lack of space, we will focus on one in particular: the connection between the WGC and the cosmic censorship hypothesis (defined above in Section <ref>). Already, the heuristic argument of Section <ref> suggests that such a connection should exist. However, the arguments of <cit.> make this connection far more precise. 
In <cit.>, Horowitz, Santos, Way, and Crisford found examples of spacetimes that violate the cosmic censorship hypothesis in theories with U(1) gauge fields. However, following up on a suggestion of Vafa, Crisford, Horowitz, and Santos subsequently showed that these violations disappear upon adding a superextremal scalar field to theory <cit.>. In other words, cosmic censorship is restored precisely when the WGC is satisfied! In subsequent work <cit.>, Horowitz and Santos showed that this precise connection between the WGC and the cosmic censorship hypothesis persists in theories with massless scalar fields (where the WGC bound (<ref>) is modified by the massless scalars) and in theories with multiple U(1) gauge fields (where the WGC becomes equivalent to the convex hull condition of Section <ref>). This remarkable connection has a simple explanation: if we assume that the cosmic censorship hypothesis is true for EFTs in the landscape, then any EFT that violates the hypothesis must lie in the swampland. In the scenario of <cit.>, EFTs that violate the WGC also violate cosmic censorship. So, if cosmic censorship is true in the landscape, then the WGC must be true as well. Of course, this argument for the WGC still requires us to prove that the cosmic censorship hypothesis is true in the landscape, which could be viewed as a bit of a lateral move: we have exchanged one conjecture for another. Nonetheless, in light of old arguments for cosmic censorship from numerical studies of black hole mergers and recent arguments for cosmic censorship in wide swaths of the landscape <cit.>, this censorship connection seems like an important piece of evidence in favor of the WGC. § IMPLICATIONS OF THE WEAK GRAVITY CONJECTURE The implications of the WGC spread out in many different directions. In the case of a string compactification, the charge and mass of a charged particle may be related to the size and shape of the compactification manifold; as a result, the WGC translates to a geometric statement about 6-manifolds, which has been verified in many examples. Using the AdS/CFT correspondence, which relates a quantum gravity theory in d+1 dimensions to a non-gravitational quantum theory in d dimensions, the WGC translates into a non-gravitational statement, which similarly can be verified in examples. Ultimately, though, we would like not only to understand string theory at a formal, mathematical level, but also to connect it to observable physics. This is the primary driving force behind the recent wave of excitement over the WGC: its implications for particle physics and cosmology. A number of such implications have been discussed in the literature, of varying degrees of rigor, precision, and significance. In this section, I will focus on one of the first and (in my humble, biased opinion) most significant applications of the WGC to cosmology: axion models of cosmological inflation. Cosmological inflation, or simply inflation, is a postulated period of exponential growth in the early universe. According to inflation, the universe in its very earliest moments was undergoing a period of explosive, exponential growth, i.e., inflating like a balloon. In a tiny fraction of a second, inflation occurred, then stopped, and the universe continued to expand but at a much slower rate (see Figure <ref>). Microscopically, inflation can be envisioned as a ball rolling down a hill with friction. 
In this analogy, the role of the ball is played by a scalar field ϕ known as the inflaton, which to good approximation is homogeneous in space and depends only on time, ϕ = ϕ(t). The hill is an inflationary potential V(ϕ) > 0, and friction is an effect caused by the expansion of the universe, known as Hubble friction. The equation of motion governing the evolution of the inflaton is given by ϕ̈ + 3 H ϕ̇ + ∂_ϕ V = 0 , where one dot denotes a derivative with respect to time, two dots denote two derivatives with respect to time, ∂_ϕ V is the derivative of V with respect to ϕ, and H^2 = ρ/3 = 1/3( 1/2ϕ̇^2 + V(ϕ) ) . Here, ρ is the energy density, which is split into a kinetic term ϕ̇^2/2 and a potential term V, similar to a ball rolling down a hill. Inflation, i.e., exponential growth of the universe, occurs when the Hubble parameter H is approximately constant in time; this requires the potential V(ϕ) to dominate over the kinetic term ϕ̇^2/2, which in turn implies an upper bound on the first and second derivatives of the potential: |∂_ϕ V|/V ≪ 1 ,    |∂_ϕ∂_ϕ V|/V ≪ 1 . So, successful inflation requires a potential which satisfies these conditions for a suitably large range of the field space as the inflaton rolls down its hill; eventually, the conditions are violated, the inflaton starts rolling quickly, and inflation ends: the universe continues to expand, but not at an exponential rate. Any potential which takes this form, typically involving some sort of plateau followed by some sort of cliff, represents a distinct model of inflation. At a broad, paradigmatic level, the experimental evidence for inflation is substantial. However, no individual model of inflation is problem-free; such models usually require a fine-tuning of the initial conditions of the inflaton field, a fine-tuning of the potential V(ϕ), or both. One popular model of inflation, which attempts to get around both of these issues, is known as natural inflation <cit.>. This model involves a periodic scalar field, also known as an axion, with periodicity ϕ≡ϕ + 2 π f. Its potential is generated by instantons and takes the sinusoidal form V(ϕ) = V_0 e^-S( 1 - cos (ϕ/f) ) + O(e^-2S) , where the O(e^-2S) terms come from the next harmonic, cos(2ϕ/f). Here, inflation occurs when the inflaton rolls slowly near the maximum of the potential at ϕ≈π f + 2 π n f, and it ends once the field starts rolling quickly, eventually settling into a minimum at ϕ≈ 2 π n f, as shown in Figure <ref>. To agree with experimental observations, natural inflation requires f ≳ 10 in Planck units. Meanwhile, exponential suppression of the higher harmonics requires S ≫ 1, which together implies f S ≫ 1. But, as we saw above in (<ref>), this violates the instanton version of the WGC: natural inflation is in tension with the WGC! In principle, there are ways to get around this bound, using more complicated models of natural inflation than the simple one considered here, which satisfy both the WGC and the requirements on f and S. But then, the question becomes: does string theory allow these more complicated models? Or might we satisfy the WGC at the expense of violating some other consistency condition on the string landscape? So far, the evidence suggests the latter: all attempts to produce an axion-instanton system with f S > 1 in string theory have failed <cit.>. Even invoking multiple axions <cit.> doesn't seem to work, as it violates the convex hull condition discussed in Section <ref> <cit.>. If natural inflation is consistent with string theory, it has yet to be found. 
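The tension can be made concrete with a short numerical sketch. The code below is an illustration in Planck units (the overall normalization of V is dropped, since it cancels in the slow-roll parameters): it evaluates the two slow-roll quantities for the potential above at a point near its maximum. Slow roll requires f of order one or larger, whereas the instanton WGC with q = 1/f and S ≫ 1 pushes in the opposite direction, toward f S ≲ 1 and hence f ≪ 1.

```python
# Slow-roll parameters for V(phi) proportional to 1 - cos(phi/f), in Planck units.
# Near the hilltop phi ~ pi*f one finds |eta| ~ 1/(2 f^2), so slow roll needs f
# of order one or larger, while the instanton WGC (q = 1/f, q/S >~ 1) combined
# with S >> 1 requires f << 1: this is the tension described in the text.
import numpy as np

def slow_roll(phi, f):
    V   = 1.0 - np.cos(phi / f)
    dV  = np.sin(phi / f) / f
    d2V = np.cos(phi / f) / f**2
    eps = 0.5 * (dV / V) ** 2        # first slow-roll parameter
    eta = d2V / V                    # second slow-roll parameter
    return eps, eta

for f in (0.1, 1.0, 10.0):
    phi = 0.9 * np.pi * f            # a point near the top of the potential
    eps, eta = slow_roll(phi, f)
    print(f"f = {f:5.1f}:  epsilon = {eps:.3e}   |eta| = {abs(eta):.3e}")
# Only f of order one or larger gives epsilon, |eta| << 1; but then S >> 1 forces f*S >> 1.
```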
§ CONCLUSIONS In this brief introduction to the WGC, we have covered a lot of ground. We have explained the necessity of quantum gravity and the importance of string theory, we have introduced the string landscape and the swampland, and we have explored how the WGC fits into this picture as a candidate consistency criterion for distinguishing the landscape from the swampland. From here, we examined the definition of the WGC in detail, including its relation to black hole physics and its generalization to theories with higher-form gauge fields and multiple gauge fields. We next looked briefly at the vast body of literature on evidence for the WGC and implications of the WGC. We highlighted evidence for the WGC from string theory, an intriguing connection between the WGC and the cosmic censorship hypothesis, and the importance of the WGC for natural inflation, among other things. It is remarkable that the simple observation that a pair of electrons will repel (rather than attract) each other could touch on so many different topics at the cutting edge of modern high-energy physics, and in this article we have really only seen the tip of the iceberg. Given further space, we could explore more evidence for the WGC, the attempted proofs of the WGC, and its connections with other areas of particle physics, cosmology, mathematics, and more. In this article, we touched on the tower WGC, but there are other proposed variants of the WGC that are plausibly true and potentially important <cit.>. For a more thorough review of the WGC, see <cit.>. And yet, even this apparent iceberg turns out not to be an iceberg at all, but rather the tip of an even larger iceberg called the swampland program<cit.>, which aims to identify universal features of EFTs in the landscape and delineate it from the swampland. The WGC is one candidate for such a universal feature, but there are many others, each of which could easily fill a Contemporary Physics article in its own right. As a whole, the swampland program has yet to achieve its ultimate goal of bridging the gap between string theory and experiment, but it has already changed the way that string theory is studied, and most likely the best is yet to come. § ACKNOWLEDGEMENTS We are grateful to Muldrow Etheredge, Matthew Reece, and Robert Rudelius for their comments on a draft. The work of TR was supported in part by STFC through grant ST/T000708/1. § GAUGE THEORY AND LIE GROUPS An effective field theory, or EFT, is a mathematical description of a physical system. The dynamics of such a system can often be described in terms of a Lagrangianℒ, which is a mathematical function of the degrees of freedom of the system, represented by different fields. The simplest example of such a field is a scalar field, often denoted by the Greek letter ϕ, which assigns a number to every point in spacetime. A familiar example of a scalar field is temperature: to a given place on earth 𝓍 at a given time t, we may assign a temperature T=T(t, 𝓍). A more fundamental example of a scalar field is the celebrated Higgs boson.[Indeed, all fundamental scalar fields are bosons rather than fermions.] Symmetries play a key role in the study of EFTs. Some symmetries are discrete, like the reflection symmetry that flips the two sides of an isosceles triangle. Other symmetries are continuous, such as the rotations that act on a circle. The set of symmetry transformations forms a mathematical object known as a group. 
In EFTs, symmetries are realized by transformations of the fields that keep the Lagrangian invariant. For example, the Lagrangian of a non-interacting, complex-valued scalar field φ of mass m in four dimensions takes the form ℒ = 1/2( φ̇^†φ̇ - (∇⃗φ^†) · (∇⃗φ) - m^2 φ^†φ ) , where ∇⃗ is the gradient, a dot denotes a derivative with respect to time, and φ^† denotes the complex conjugate of φ. One can check that this Lagrangian is invariant under the symmetry transformation φ→ e^i αφ ,   α∈ [0, 2π) . That is, ℒ→ℒ under this transformation of the field φ. Here, the parameter α is circle-valued, which tells us that the symmetry group is the Lie group U(1). The fact that φ transforms nontrivially tells us that it is charged under the symmetry. A gauge symmetry is a special type of symmetry in an EFT. One important consequence of a gauge symmetry is the existence of a gauge field, which mediates a force between two charged particles. In the case of electromagnetism, the gauge field is the usual electromagnetic 4-potential A_μ = (Φ, A_1, A_2, A_3). The particle associated with this gauge field is the photon, which mediates a force (the electromagnetic force) between two charged particles, e.g., two electrons. Thus, in EFT, gauge forces are associated with symmetry groups. The symmetry group for electromagnetism is U(1), the group of rotations of a circle. U(1) is one example of a Lie group, which is a group that is also a smooth manifold. Another example of a Lie group is SO(3), the group of rotations of a 2-sphere. Yet another is SU(N), the set of N × N unitary matrices of determinant 1. Given any such Lie group G, we can construct a gauge theory, i.e., an EFT with gauge group G. For example, the standard model of particle physics has a gauge symmetry group given by the direct product of three Lie groups, SU(3)×SU(2)×U(1): the SU(3) factor is associated with a gauge force called the strong force, the SU(2) is associated with a gauge force called the weak force, and the U(1) is associated with electromagnetism. Some Lie groups come in infinite families; for instance, SO(N) for N ≥ 2 is the set of N × N orthogonal matrices of determinant 1. There are some Lie groups, however, which do not fall into such infinite families; examples include the exceptional Lie groups E_6, E_7, and E_8. We encountered the last of these in our discussion of heterotic string theory in Section <ref> above. 
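For readers who like to see such statements verified mechanically, the following minimal symbolic check (an illustration, not part of the original text) confirms that bilinears of the form φ^†φ, and hence each term of the Lagrangian above, are unchanged by a constant U(1) phase rotation.

```python
# Minimal symbolic check (illustration only) that the U(1) transformation
# phi -> exp(i*alpha)*phi, with constant real alpha, leaves the Lagrangian
# above unchanged.  A constant phase commutes with derivatives, so every term
# is a bilinear (something)^dagger (something), and it suffices to check that
# such bilinears are invariant.
import sympy as sp

alpha = sp.Symbol('alpha', real=True)
z = sp.Symbol('z', complex=True)   # stands for phi, its time derivative, or its gradient

z_rotated = sp.exp(sp.I * alpha) * z
print(sp.simplify(sp.conjugate(z_rotated) * z_rotated - sp.conjugate(z) * z))  # prints 0
```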
http://arxiv.org/abs/2409.02726v1
20240904135950
Molecular spin-probe sensing of H-mediated changes in Co nanomagnets
[ "A. Fétida", "O. Bengone", "C. Goyhenex", "F. Scheurer", "R. Robles", "N. Lorente", "L. Limot" ]
cond-mat.mtrl-sci
[ "cond-mat.mtrl-sci", "cond-mat.mes-hall" ]
Université de Strasbourg, CNRS, IPCMS, UMR 7504, F-67000 Strasbourg, France Centro de Física de Materiales CFM/MPC (CSIC-UPV/EHU), Paseo Manuel de Lardizabal 5, 20018 Donostia-San Sebastián, Spain Centro de Física de Materiales CFM/MPC (CSIC-UPV/EHU), Paseo Manuel de Lardizabal 5, 20018 Donostia-San Sebastián, Spain Donostia International Physics Center (DIPC), 20018 Donostia-San Sebastián, Spain limot@ipcms.unistra.fr Université de Strasbourg, CNRS, IPCMS, UMR 7504, F-67000 Strasbourg, France § ABSTRACT The influence of hydrogen on magnetization is of significant interest to spintronics. Understanding and controlling this phenomenon at the atomic scale, particularly in nanoscale systems, is crucial. In this study, we utilized scanning tunneling microscopy (STM) combined with a nickelocene molecule to sense the spin of a hydrogen-loaded nanoscale Co island grown on Cu(111). Magnetic exchange maps obtained from the molecular tip revealed the presence of a hydrogen superstructure and a 90^∘ rotation of the magnetization compared to the pristine island. Ab initio calculations corroborate these observations, indicating that hydrogen hybridization with Co atoms on the island surface drives the spin reorientation of the island. This reorientation is further reinforced by hydrogen that penetrates the island and locates at the Co/Cu interface. However, the subsurface sensitivity of the magnetic exchange maps indicates that this effect is limited. Our study provides valuable microscopic insights into the chemical control of magnetism at the nanoscale. Molecular spin-probe sensing of H-mediated changes in Co nanomagnets L. Limot September 9, 2024 ==================================================================== INTRODUCTION Magnetic anisotropy determines the orientation of spins in metallic thin films and multilayers, thereby constituting a critical aspect of spintronic device functionality. Among various approaches for controlling magnetic anisotropy, the incorporation of mobile ionic species in thin films through voltage-gated transport stands out as promising. Hydrogen, in particular, has proven superior to other ionic species, offering non-destructive and rapid toggling of the magnetic anisotropy of multilayered heterostructures <cit.>. The rationale for hydrogen loading is built upon the observation that even subtle interactions, such as charge transfer between hydrogen and a metal atom, can trigger changes in the magnetic anisotropy and effective magnetic moment of the atom <cit.>, while also affecting the exchange coupling among atoms <cit.>. The magnetization orientation in ultrathin films can be modified by the adsorption and penetration of hydrogen <cit.>. This effect is observed not only in collinear magnetic films, but also in noncollinear ones <cit.>, where hydrogen has the ability to stabilize skyrmion states <cit.>. The position and concentration of hydrogen in and on the host metal are important factors in this process <cit.>. While recent advancements have enabled microscopic-level imaging of hydrogen <cit.>, these measurements have not yet been associated with magnetism. Single hydrogen molecules are commonly studied using STM. Hydrogen molecules assemble into coverage-dependent lattices on metal surfaces <cit.> and have a characteristic vibrational structure facilitating their identification <cit.>. They can also be manipulated by the STM tip <cit.>, adding chemical functionality to the microscope <cit.>. 
Dissociative adsorption of hydrogen molecules has also been reported on magnetic surfaces, where hydrogen atoms then form superlattices <cit.>. However, conducting spin-sensitive STM measurements on hydrogen-exposed magnets remains a challenging endeavor <cit.>, as it necessitates a magnetic tip, which, similar to the surface, is susceptible to hydrogen contamination <cit.>. Rigorous monitoring of the tip apex status is essential, preferably with an external magnetic field. RESULTS AND DISCUSSION To overcome this challenge, we passivated the apex of a copper-coated W tip with a magnetic nickelocene molecule [Ni(C_5H_5)_2, denoted as Nc, see Fig. <ref>f], and focused on a model system comprising a nanoscale Co magnet that was exposed to hydrogen. Owing to the direct exchange coupling between the Nc-terminated tip and the Co surface of the magnet, we employed the inelastic tunnel current to monitor the orientation of the surface magnetization with atomic-scale sensitivity <cit.>. Magnetic exchange maps obtained with the molecular tip reveal that hydrogen is predominantly located on the magnet's surface, forming a hydrogen superstructure, while the subsurface sensitivity of the magnetic probe tip indicates weaker hydrogen penetration into the magnet. This results in a 90^∘ rotation of the magnetization compared to the hydrogen-free state. These observations, corroborated by ab initio calculations, represent an advancement in our comprehension of how hydrogen alters the magnetism of a nanoscale system at the atomic level. The sample consisted of Co islands grown on Cu(111), which are two atomic layers high and exhibit a triangular-like structure (Fig.<ref>a) <cit.>. The apparent height is 325± 10 pm, and typical lateral sizes range from 10 to 30 nm. After island growth, Nc is deposited onto the sample, which is then exposed to hydrogen (see Materials and Methods for details). To assess H-adsorption onto Co islands, a Nc-tip is prepared by transferring a single Nc from the surface to a mono-atomically sharp tip. Imaging can then be conducted via two operating modes. The first mode involves tunneling with parameters 50 mV and 50 pA, for which the exchange interaction with the Co surface is absent (Fig. S1, for details, see the Supporting Materials) <cit.>. In this mode, which we define as a high-bias mode, the Nc-tip functions as a standard STM tip. The second mode, which we define as a low-bias mode, involves tunneling with parameters 1 mV and 100 pA, where the Nc-tip is 50 pm away from the magnetic surface for the exchange interaction to be active. This mode carries magnetic information. Switching between high- and low-bias allows us to visualize the electronic and magnetic properties, respectively. Hydrogen superstructure on a Co bilayer island To determine the hydrogen coverage on a nanoscale island, we employed the Nc-tip in the high-bias mode. Figure <ref>a shows a typical image of an island after hydrogen exposure (see Materials and Methods), which appears similar to a pristine island. To visualize the presence of hydrogen atoms, it is necessary to zoom in on the island. A close-up view of a Co island is presented in Fig.<ref>b, along with an overlaid simulation of the Co network of the pristine island. Hydrogen atoms are positioned on the hollow sites of the Co surface and alternate between two distinct hollow sites, resulting in a 2 pm difference in their apparent height (Fig.<ref>c). 
The hydrogen coverage is 0.5 monolayers (ML), suggesting a hydrogen arrangement in a 2H-(2×2) superstructure (Fig. <ref>d). The unit cell of the superstructure contains two Co atoms: one at the center of a hexagon without neighboring H atoms [noted Co(I), yellow in Fig. <ref>b], and a second, which has two neighboring H atoms [noted Co(II), blue]. The apparent height of Co(II) is 4 pm greater than that of Co(I). To corroborate these observations, DFT calculations were carried out, yielding good agreement with the experimental images (Fig. <ref>e). The strongest corrugation in the computed image corresponds to hydrogen in an hcp site [noted H(I), green], while the weaker corrugation corresponds to hydrogen in an fcc site [noted H(II), orange]. This assignment, however, also needs to be validated by tunneling spectroscopy. The 1H-(2×2) superstructure, where one H atom is present in a (2×2) Co unit cell corresponding to a 0.25 ML coverage, is in fact indistinguishable from its 2H counterpart in the computed images (Fig. S2). Figure <ref>f presents a typical dI/dV spectrum of a pristine Co island, revealing a prominent peak at -0.33 eV, arising from minority d_z^2 states hybridized with s-p states <cit.>. The sharp feature at zero bias is due to the inelastic tunnel current of the Nc-tip. After H exposure, the peak shifts out of our energy window, i.e., below -0.6 eV. This behavior is consistent with the computed LDOS for both Co(I) and Co(II) in the 2H-(2×2) superstructure (Fig. S2). It contrasts with the LDOS of a 1H-(2×2) superstructure, which instead maintains a spin-polarized d-structure comparable to that of the pristine surface (Fig. S2). Hence, the islands in this study exhibit a 2H-(2×2) superstructure after hydrogen exposure. Magnetization in the presence of hydrogen To investigate the magnetism of the 2H-(2×2) superstructure, the surface is imaged using the low-bias mode of the Nc-tip. The low-bias image of Fig. <ref>a differs from the one previously recorded at higher bias. The Co(I) and Co(II) atoms show a stronger contrast due to their distinct corrugation (Fig. <ref>b), with a measured difference of 8 pm between them. The two hydrogen atoms are also visible, but their contrast is reversed compared to the high-bias image, H(I) now exhibiting a height 20 pm larger than that of H(II). To determine the orientation of the island magnetization, we recorded a series of d^2I/dV^2 spectra by approaching the tip above each Co atom. The zero tip displacement (δ z=0) is set to a conductance of G=0.01 G_0 above a Co(I) atom, where G_0=2e^2/h is the quantum of conductance. Conductance versus tip-displacement traces are presented in Fig. S3. As shown in Fig. <ref>c for Co(I) and in Fig. <ref>d for Co(II), at a tip displacement of δ z=0 the spectra exhibit a dip and a peak at biases of -4.0 and +4.0 mV. These features are typical of Nc-tips above non-magnetic copper <cit.>, and indicate the absence of exchange interaction. As the tip approaches the surface, the peak (dip) for both Co atoms shifts upward (downward) in energy, indicating exchange coupling between the Nc-tip and Co. This is also visible in the 2D intensity plots acquired above both cobalt atoms (Fig. S4). The spectra can only be reproduced using a dynamical scattering model with the magnetization of the islands lying in-plane (solid red lines in Figs. <ref>c and <ref>d) <cit.>. Peaks and dips at opposite voltage polarities exhibit different amplitudes (refer to Fig. <ref>e, solid red line). 
This disparity arises from the selection rule for spin excitation and their intensity ratio provides insight into the magnitude and sign of the spin polarization of the tunnel junction <cit.>. The spin polarization (P) found from our simulated spectra is constant with tip displacement with P=-0.12 for both Co(I) and Co(II) (Fig. S5). The spectra differ from those of the pristine island. This becomes evident upon removing hydrogen by locally heating the island with the STM tip <cit.>. Following hydrogen removal, the Co network is restored (Fig. <ref>e), and Co atoms display a characteristic spin-split d^2I/dV^2 spectrum, indicating out-of-plane magnetization and nearly-zero spin polarization as in a pristine Co bilayer island  <cit.>. The 2H-(2×2) superstructure can therefore be associated to a rotation of the island magnetization, from out-of-plane to in-plane. Figure <ref>a presents the distance-dependent behavior of the exchange energy derived from the spectra above Co(I) and Co(II). For δ z>-40 pm, the exchange energy exhibits an exponential variation exp(-δ z/λ) with a decay length of λ=46±10 pm for both atoms, similar to a Co atom of the pristine island <cit.>. Notably, the exchange energy above Co(II) is nearly twice as strong as the exchange energy measured above Co(I). When the tip is approached further (δ z<-40 pm), the exchange energy becomes less sensitive to tip displacement, showing a smaller decay length. We assign this behavior to an Nc-hydrogen repulsion that is more pronounced on Co(II) than on Co(I) (Fig. S3). The exchange coupling also varies across the surface. To visualize these variations, we acquire a voxel image consisting of a dataset in the form of d^2I/dV^2 (x,y,V), while maintaining a fixed tip-sample distance. We then determine the exchange energy at each lateral tip position by fitting the spectra. Figure <ref>b shows the resulting magnetic exchange map, while Fig. <ref>c displays d^2I/dV^2 spectra at the Co and H atoms. The exchange map is similar to the low-bias image of Fig. <ref>a since the corrugation in both images is governed by the shift of the inelastic threshold energy. If the low-bias image allows quickly assessing the lateral dependence of the exchange coupling, it is done at the expense of a quantitative estimate of the exchange energy. The exchange map reveals a corrugation of the exchange energy over a 1.5 meV range, with clear differences among Co and H atoms. However, these differences are not reflected in their magnetic moments. The DFT computed magnetic moments are 1.9 μ_B and 1.6 μ_B for Co(I) and Co(II), respectively, while the hydrogen atoms exhibit weak magnetism, with H(I) at -0.02 μ_B and H(II) at -0.01 μ_B. Consistent with findings on pristine Co surfaces <cit.>, the magnetic exchange map is reproduced by the DFT-computed spin density. Figures <ref>d-e present the spin density computations at two distinct distances from the surface, based on a 2H-(2×2) superstructure on a Co bilayer. Remarkably, this agreement extends to H-covered Co islands of varying Co island thicknesses (Fig. S6). The maps indicate that Co(II) exhibits a stronger spin density compared to Co(I) at all distances from the surface, aligning with the experimentally observed stronger exchange energy. The observed differences in the Co atoms, also visible in their computed LDOS (Fig. S2), are attributed to varying hybridization with the hydrogen atoms. 
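As an illustration of how a decay length like λ = 46 pm is extracted from exchange-energy versus tip-displacement data of the kind described above, the short sketch below fits an exponential to synthetic values. The prefactor and the noise level are invented purely for the example; they are not the measured data.

```python
# Illustration of extracting a decay length from exchange energy vs tip
# displacement, as described in the text.  The "data" here are synthetic: the
# 46 pm decay length mirrors the quoted value, while the prefactor and the 5%
# noise are invented for the example.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

def J(dz, J0, lam):
    return J0 * np.exp(-dz / lam)            # exchange energy vs tip displacement (pm)

dz = np.linspace(-40.0, 0.0, 21)             # range where the exponential law holds
data = J(dz, 0.1, 46.0) * (1 + 0.05 * rng.standard_normal(dz.size))  # meV, with noise

(J0_fit, lam_fit), cov = curve_fit(J, dz, data, p0=(0.05, 30.0))
print(f"fitted J0 = {J0_fit:.2f} meV, decay length = {lam_fit:.0f} pm")  # close to 46 pm
```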
The computed spin density of H(II) is also stronger than that of H(I), confirming the contrast reversal observed experimentally for these hydrogen atoms relative to the high-bias images. Computed magnetic anisotropy energy To gain deeper insight into our findings, we have performed a computational study of hydrogen adsorption and its impact on the island's magnetization orientation. The key quantity to calculate is the magnetic anisotropy energy (MAE), which is the sum of two contributions. The first contribution, known as the magnetocrystalline anisotropy energy (MCA), stems from the spin-orbit coupling and dictates the preferred alignment direction of the magnetic moments, as set by the crystalline structure and symmetry. The second component, termed shape anisotropy, is driven by magnetostatic dipole-dipole interactions and manifests as a tendency for the magnetic moments to align in a specific direction dictated by the object's shape. Here, a negative MAE corresponds to out-of-plane magnetization (denoted as ⊥), while a positive value indicates in-plane magnetization (denoted as ∥). Calculation details are given in Materials and Methods. To keep the calculations within practical time limits, we use the approximation of infinite bilayers of Co on a Cu(111) substrate. This approach is reasonable given that the triangular islands seen in experiments are larger than 10 nm on each side. The shape anisotropy of such a triangle is then almost the same as that of an infinite film of the same height. In the first step of this theoretical study, we consider only hydrogen adsorption on top of the Co bilayer. We vary the hydrogen coverage from 0 to 1 ML and calculate the MAE for each coverage. Since these calculations are conducted using a 2×2-(111) cell configuration, these coverages correspond to the addition of 0 to 4 hydrogen atoms per unit cell on the (111) surface. The H atoms are positioned in hollow fcc sites. For the 0.5 ML coverage, we also consider a superstructure alternating fcc and hcp positions, which is more stable, in agreement with the STM experiments. The results are presented in Fig. <ref>a. For pristine Co (0 ML of H), the MCA is negative and exceeds the shape anisotropy in magnitude, resulting in an out-of-plane magnetization (MAE<0). The main effect of adding hydrogen is to bring the MAE close to zero. This is driven by the MCA changing from -1.5 to -0.5 meV/cell, while the shape anisotropy remains constant at 0.6 meV/cell. At a hydrogen coverage of 0.5 ML, the two contributions compensate, resulting in an MAE that is nearly zero. Above 0.5 ML, the further growth of the MCA is slower but sufficient to favor in-plane magnetization. Thus, the MCA evolution with hydrogen coverage is the driving force behind the rotation of the magnetization toward the in-plane configuration. A deeper understanding of the physical origin can be gained through a Co site-resolved analysis, which involves plotting the MCA for each Co site (Fig. <ref>b). As the H coverage on the surface increases, the MCA increases at the Co sites on the surface (numbered 1 to 4 in Fig. <ref>d). The MCA becomes large and positive above 0.5 ML, favoring an in-plane orientation of the magnetization. In contrast, the Co sites of the first layer (noted 5 to 8 in Fig. <ref>b), which have no hydrogen atoms as nearest neighbors, exhibit the opposite behavior when the H coverage is >0.5 ML, thus favoring an out-of-plane magnetization. Summing the MCA over all sites nevertheless yields a negative value, as shown in Fig. <ref>a. 
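The sign of the shape-anisotropy contribution invoked here can be checked directly from the dipole-dipole summation spelled out in Materials and Methods. The sketch below evaluates that sum for a single ferromagnetic monolayer on a triangular lattice, a simplification of the actual Co bilayer; distances are in units of the in-plane spacing (a value of about 2.5 Å is assumed only to convert the 150 Å cutoff), and energies are per site in reduced units. It confirms that the dipolar energy is lower for in-plane than for out-of-plane moments, i.e., that shape anisotropy favors in-plane magnetization.

```python
# Sketch of the dipole-dipole (shape-anisotropy) sum of Materials and Methods,
# for a single ferromagnetic monolayer on a triangular lattice.  Distances are
# in units of the in-plane spacing a (a ~ 2.5 Angstrom assumed only to convert
# the 150 Angstrom cutoff); per-site energies are in units of mu_0 m^2/(8 pi a^3).
import numpy as np

a_angstrom, cutoff_angstrom = 2.5, 150.0
rmax = cutoff_angstrom / a_angstrom

a1, a2 = np.array([1.0, 0.0]), np.array([0.5, np.sqrt(3) / 2])   # triangular lattice vectors
nmax = int(rmax) + 2
ij = np.array([(i, j) for i in range(-nmax, nmax + 1)
                      for j in range(-nmax, nmax + 1) if (i, j) != (0, 0)])
r = np.hstack([ij[:, :1] * a1 + ij[:, 1:] * a2, np.zeros((len(ij), 1))])  # 3D positions
d = np.linalg.norm(r, axis=1)
r, d = r[d <= rmax], d[d <= rmax]

def e_dd(m_hat):
    """Per-site dipolar sum with all moments along the unit vector m_hat."""
    m = np.asarray(m_hat, dtype=float)
    return np.sum((m @ m - 3.0 * (r @ m) ** 2 / d**2) / d**3)

E_out, E_in = e_dd([0.0, 0.0, 1.0]), e_dd([1.0, 0.0, 0.0])
print(f"E_dd(out-of-plane) = {E_out:+.2f},  E_dd(in-plane) = {E_in:+.2f}")
print("dipolar energy favors", "in-plane" if E_in < E_out else "out-of-plane", "magnetization")
```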
The different MCA for surface and first-layer Co atoms is attributed to their hybridization with H atoms that changes the relative importance of different d orbitals of Co (Fig. S7), as also evidenced in Pd/Co/Pd thin films <cit.>. Determining hydrogen loading in the experiment The above considerations regarding the MCA per Co site lead us to consider the possibility of H insertion as an enhancing factor of the island magnetization re-orientation. We start from a coverage of 0.5 ML where the hydrogen is present only on the surface in a 2H-(2×2) superstructure, and progressively insert H atoms in octahedral sites within the island. The insertion of 1 or 2 H atoms (corresponding respectively to a coverage of 0.75 and 1 ML) can be carried out at different locations among the 4 available sites in the 2×2 cell. We retain the configurations displaying in-plane anisotropy as in the experiment, and energetically most stable. We find that up to 1 ML coverage, it is more favorable to load the Co/Cu interface. The two corresponding values of MAE have been reported on Fig. <ref>c as green circles. Increasing the concentration above 1 ML, involves inserting H at octahedral sites in both Co/Co and Co/Cu interfaces, leading to a limited number of configurations with in-plane magnetization. For concentrations >1.5 ML (with 0.5 ML located at the surface), the MCA energy becomes positive for all possible configurations. Having identified potential configurations, it becomes feasible to estimate the hydrogen loading in the Co bilayer by computing the corresponding spin density maps and comparing them to the exchange maps. Figure <ref>a shows a computed spin density map with an extra 0.25 ML of hydrogen at the octahedral sites of the Co/Cu interface. There is no significant difference compared to the experimental map. However, increasing the hydrogen to 0.50 ML at the Co/Cu interface (Fig. <ref>b) and further hydrogen loading (Fig. <ref>c) result in spin density patterns that diverge from the experimental observations (Fig. <ref>b). Based on these findings, if hydrogen is present in the island, the total coverage must be between 0.5 ML and 0.75 ML, with 0.5 ML located on the surface. CONCLUSIONS In summary, our study demonstrates that exposing a nanoscale magnet to sufficient hydrogen can induce a 90^∘ rotation of its magnetization. Although our DFT computations are conducted on infinite Co layers, they successfully reproduce the interplay between hydrogen loading and magnetism observed in finite-size Co magnets, typically few tens of nm wide in our measurements. This rotation is primarily driven by hydrogen adsorption on the magnet's surface and is further reinforced by the presence of hydrogen at the Co/Cu interface. While hydrogen adsorption also occurs at step edges in the experiment, our findings suggest that this effect can be neglected to first order. MATERIALS AND METHODS Experimental details We employed a customized Omicron ultrahigh vacuum STM with a pressure maintained below 10^-10 mbar. Inelastic tunneling spectroscopy was performed at 2.4 K, while all other measurements were carried out at 4.4 K. The Cu(111) substrate was cleaned through multiple cycles of Ar^+ sputtering and annealing at a temperature of 520^∘C. The W tip used in this investigation was first cleaned by Ar^+ sputtering. Cobalt islands were grown on Cu(111) by evaporating Co onto the surface at room temperature. Cobalt was evaporated from a rod that had undergone thorough out-gassing, with an evaporation rate of 0.3 ML min^-1. 
After a deposition of 1 ML, the cobalt sample was transferred to the pre-cooled STM. Nickelocene was then deposited by exposing the sample, maintained below 100 K, to a molecular flux of 2.5×10^-2 ML min^-1 for a few seconds. For the Nc-tip preparation, we first indented the tungsten tip into the copper surface, resulting in the creation of a copper-covered, mono-atomically sharp tip apex. Subsequently, we positioned the tip above an Nc molecule adsorbed on either a cobalt or copper step edge. To attach Nc to the tip, the tunneling parameters were set at -1 mV and 50 pA, and the tip was carefully approached towards the molecule by a minimum of 200 pm. Notably, molecular attachment was also possible with Nc molecules adsorbed on the Co islands, employing tunneling parameters of 50 mV and 50 pA and approaching the tip by 350 pm. Details concerning Nc-tip characterization can be found elsewhere <cit.>. The dI/dV spectra were recorded using a lock-in amplifier operating at a frequency of 6.2 kHz and a modulation of 5 mV rms. The d^2I/dV^2 spectra were recorded using a lower modulation of 500 μV rms. Following the deposition of nickelocene, we initiated the hydrogenation of the sample. Hydrogen molecules constitute the predominant residual gas in the UHV chamber. In our UHV environment, where the pressure is <5×10^-11 mbar, hydrogen contaminates the Co islands over time. Achieving the desired hydrogen superstructure on the islands requires, however, an “efficient” hydrogenation. This consists of increasing the temperature of the cryostat to more than 17 K for less than 1 hour in order to prompt the release of H_2 from the gold-plated copper walls of the STM cryostat <cit.>.

Computational details

DFT calculations were performed using the VASP code <cit.>. The PBE <cit.> form of the GGA was used as the exchange and correlation functional. Core electrons were treated following the PAW method <cit.>. Two supercell structures were used, both representing epitaxially extended Co bilayers on a (111)-oriented Cu substrate. The first one is a 2×2-(111) supercell including 2 layers of Co (4 Co atoms per layer) and 9 layers of underlying Cu (4 Cu atoms per layer). It enables surface hydrogen coverages between 0.25 and 1 ML to be achieved by the addition of 1 to 4 hydrogen atoms in hollow sites. Possible insertion in the most stable inner octahedral sites was also considered, with 4 octahedral sites available at each of the Co/Co and Co/Cu interfaces, i.e. 8 in total, or 12 including the hollow surface adsorption sites. The second geometry used is the simple 1×1-(111) supercell including 2 layers of Co (1 Co atom per layer) and 9 layers of underlying Cu (1 Cu atom per layer). In this case, the addition of one hydrogen atom at the surface, or at the Co/Co and Co/Cu interfaces in an interstitial position, corresponds to full hydrogen layers. A vacuum layer of 10 Å was always added in the direction perpendicular to the surface in order to minimize interactions introduced by the periodic boundary conditions. In all cases an energy cutoff of 500 eV was used. For the 2×2 cell a 12×12×1 k-point sampling was applied, while a denser 30×30×1 k-point sampling was used for the smaller 1×1 cell. Considering pristine Co and full H layers (adsorbed or inserted), we checked that using one cell geometry or the other, with its associated k-point set, led to the same magnetocrystalline anisotropy energies with an error of less than 20 μeV per atom.
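For reference, the snippet below simply collects the numerical parameters quoted in this subsection (energy cutoff, k-point meshes, convergence thresholds, vacuum thickness, available H sites) in a plain Python dictionary. It is a bookkeeping sketch only; the key names are ours, and it does not generate actual VASP input files.

# Bookkeeping sketch of the computational parameters quoted above (key names are ours).
common = {
    "functional": "PBE (GGA), PAW core treatment",
    "energy_cutoff_eV": 500,
    "vacuum_A": 10.0,
    "force_threshold_eV_per_A": 0.01,
    "energy_convergence_eV": 1e-7,
    "lateral_lattice_constant_A": 3.635,   # Cu value, epitaxial constraint
}
supercells = {
    "2x2-(111)": {"Co_layers": 2, "Cu_layers": 9, "atoms_per_layer": 4,
                  "kpoints": (12, 12, 1),
                  "surface_hollow_sites": 4, "octahedral_sites_per_interface": 4,
                  "surface_H_coverages_ML": [0.25, 0.50, 0.75, 1.00]},
    "1x1-(111)": {"Co_layers": 2, "Cu_layers": 9, "atoms_per_layer": 1,
                  "kpoints": (30, 30, 1),
                  "surface_H_coverages_ML": [1.00]},   # one H atom = full layer
}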
Calculations are performed in two main steps following the magnetic force theorem <cit.>, one involving spin-polarized calculations in the collinear scheme and the other including the spin-orbit coupling. In the collinear case, the supercell is relaxed along the z direction while the lateral lattice constant is fixed to that of Cu (a=3.635 Å, obtained after optimization of the copper substrate alone), consistent with epitaxial growth. The positions of all atoms except for those in the two bottom layers were relaxed (along the z direction) until all forces were smaller than 0.01 eV/Å and the total energy converged within an accuracy of 1·10^-7 eV. At this stage, the output charge densities are used to deduce the spin densities, whose maps are plotted with the VESTA software <cit.>, and STM images are simulated following Tersoff-Hamann theory <cit.> using the STMpw code <cit.>. The same charge densities are also used to perform calculations including the spin-orbit interaction as implemented in VASP <cit.>. The magnetocrystalline anisotropy energy (MCA) was determined by rotating the spins along different crystallographic directions. In our case, the spin-orbit coupling was taken into account non-self-consistently for the spin orientations corresponding respectively to in-plane and out-of-plane magnetizations. Site- and orbital-resolved energies are provided within the spin-orbit calculations in VASP; more precisely, we obtain E_soc for each ion, which represents the accumulated energy contribution inside the augmentation sphere centered at that ion's position. In order to determine the total magnetic anisotropy, we add to the MCA the so-called shape anisotropy, which results from magnetostatic dipole-dipole interactions and therefore depends on the geometry of the system under study. In the case of the infinite bilayers of the present work, this contribution E_dd was evaluated numerically using the following summation, up to an in-plane cut-off radius of 150 Å <cit.>:

E_dd = (μ_0/8π) ∑_{i ≠ j} (1/|r_ij|^3) [ m_i · m_j − 3 (r_ij · m_i)(r_ij · m_j)/|r_ij|^2 ]

The total MAE, between out-of-plane (⊥) and in-plane (∥) magnetization, is therefore given by the difference of energies:

MAE = MCA + ΔE_dd = (E^DFT_tot,⊥ − E^DFT_tot,∥) + (E_dd,⊥ − E_dd,∥)

Using this latter equation leads to positive (negative) values of the MAE for in-plane (out-of-plane) orientation.

ACKNOWLEDGEMENTS

The Strasbourg authors acknowledge support from the EU's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant 847471, from the International Center for Frontier Research in Chemistry (Strasbourg), from project ANR-23-CE09-0036 funded by the ANR, and from the High Performance Computing Center of the University of Strasbourg. Part of the computing resources were funded by the Equipex Equip@Meso project (Programme Investissements d'Avenir) and the CPER Alsacalcul/Big Data. R.R. and N.L. acknowledge financial support from projects RTI2018-097895-B-C44 and PID2021-127917NB-I00 funded by MCIN/AEI/10.13039/501100011033, from project QUAN-000021-01 funded by the Gipuzkoa Provincial Council, and from project IT-1527-22 funded by the Basque Government. Funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union. Neither the European Union nor the granting authority can be held responsible for them.
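As a side note to the Materials and Methods, the shape-anisotropy summation for E_dd given above is simple enough to evaluate directly. The Python sketch below sums the dipolar energy per Co atom of an infinite bilayer with all moments out-of-plane versus in-plane, using the 150 Å in-plane cutoff quoted above; the assumed (111)-like geometry (2.57 Å in-plane spacing, 2.0 Å interlayer distance) and the 1.7 μ_B moment per Co atom are rough illustrative values, not taken from the paper. With these numbers the difference comes out on the order of 0.1 meV per Co atom, i.e. comparable to the 0.6 meV per 2×2 cell (8 Co atoms) quoted for the shape anisotropy.

import numpy as np

A2D, DZ, M_CO = 2.57, 2.0, 1.7   # in-plane spacing (A), interlayer distance (A), moment (mu_B); assumptions
RCUT = 150.0                      # in-plane cutoff radius (A), as quoted above
K = 5.37e-5                       # (mu0/4pi) * mu_B^2 / A^3, expressed in eV

a1 = np.array([A2D, 0.0, 0.0])
a2 = np.array([A2D / 2.0, A2D * np.sqrt(3.0) / 2.0, 0.0])
nmax = int(RCUT / (A2D * np.sqrt(3.0) / 2.0)) + 2
i, j = np.meshgrid(np.arange(-nmax, nmax + 1), np.arange(-nmax, nmax + 1))
inplane = i.reshape(-1, 1) * a1 + j.reshape(-1, 1) * a2
shift = (a1 + a2) / 3.0 + np.array([0.0, 0.0, DZ])   # hollow-site stacking of the second layer
sites = np.vstack([inplane, inplane + shift])

def e_dd_per_atom(mhat):
    """Dipolar energy per Co atom (eV) with all moments along the unit vector mhat."""
    e = 0.0
    for ref in (np.zeros(3), shift):                  # one reference atom in each layer
        r = sites - ref
        d = np.linalg.norm(r, axis=1)
        keep = (d > 1e-6) & (np.linalg.norm(r[:, :2], axis=1) <= RCUT)
        rhat = r[keep] / d[keep, None]
        proj = rhat @ mhat
        e += 0.5 * K * M_CO**2 * np.sum((1.0 - 3.0 * proj**2) / d[keep]**3)
    return e / 2.0                                    # average over the two layers

delta = e_dd_per_atom(np.array([0.0, 0.0, 1.0])) - e_dd_per_atom(np.array([1.0, 0.0, 0.0]))
print(f"E_dd(out-of-plane) - E_dd(in-plane) = {delta * 1e3:.3f} meV per Co atom")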
REFERENCES

[1] A. J. Tan et al., Magneto-ionic control of magnetism using a solid-state proton pump, Nat. Mater. 18, 35–41 (2019).
[2] K.-Y. Lee et al., Fast magneto-ionic switching of interface anisotropy using yttria-stabilized zirconia gate oxide, Nano Lett. 20, 3435–3441 (2020).
[3] A. E. Kossak et al., Voltage control of magnetic order in RKKY coupled multilayers, Sci. Adv. 9, eadd0548 (2023).
[4] Q. Dubout et al., Controlling the spin of Co atoms on Pt(111) by hydrogen adsorption, Phys. Rev. Lett. 114, 106807 (2015).
[5] A. A. Khajetoorians et al., Tuning emergent magnetism in a Hund's impurity, Nat. Nanotechnol. 10, 958–964 (2015).
[6] P. Jacobson et al., Quantum engineering of spin and anisotropy in magnetic molecular junctions, Nat. Commun. 6, 8536 (2015).
[7] H. González-Herrero et al., Atomic-scale control of graphene magnetism by using hydrogen atoms, Science 352, 437–441 (2016).
[8] P. Jacobson et al., Potential energy-driven spin manipulation via a controllable hydrogen ligand, Sci. Adv. 3, e1602060 (2017).
[9] M. Steinbrecher et al., Quantifying the interplay between fine structure and geometry of an individual molecule on a surface, Phys. Rev. B 103, 155405 (2021).
[10] B. Hjörvarsson et al., Reversible tuning of the magnetic exchange coupling in Fe/V (001) superlattices using hydrogen, Phys. Rev. Lett. 79, 901–904 (1997).
[11] V. Leiner et al., Magnetic superlattices with variable interlayer exchange coupling: A new approach for the investigation of low-dimensional magnetism, Phys. Rev. Lett. 91, 037202 (2003).
[12] D. Sander et al., Reversible H-induced switching of the magnetic easy axis in Ni/Cu(001) thin films, Phys. Rev. Lett. 93, 247203 (2004).
[13] K. Munbodh et al., Effects of hydrogen/deuterium absorption on the magnetic properties of Co/Pd multilayers, Phys. Rev. B 83, 094432 (2011).
[14] B. Santos et al., Hydrogen-induced reversible spin-reorientation transition and magnetic stripe domain phase in bilayer Co on Ru(0001), Phys. Rev. B 85, 134409 (2012).
[15] G. Chen et al., Observation of hydrogen-induced Dzyaloshinskii-Moriya interaction and reversible switching of magnetic chirality, Phys. Rev. X 11, 021015 (2021).
[16] W.-H. Wang et al., Chirality-induced noncollinear magnetization and asymmetric domain-wall propagation in hydrogenated CoPd thin films, ACS Appl. Mater. Interfaces 14, 20151–20158 (2022).
[17] P.-J. Hsu et al., Inducing skyrmions in ultrathin Fe films by hydrogen exposure, Nat. Commun. 9, 1571 (2018).
[18] G. Chen et al., Reversible writing/deleting of magnetic skyrmions through hydrogen adsorption/desorption, Nat. Commun. 13, 1350 (2022).
[19] K. Klyukin, G. Beach, and B. Yildiz, Hydrogen tunes magnetic anisotropy by affecting local hybridization at the interface of a ferromagnet with nonmagnetic metals, Phys. Rev. Mater. 4, 104416 (2020).
[20] S. de Graaf et al., Resolving hydrogen atoms at metal-metal hydride interfaces, Sci. Adv. 6, eaay4312 (2020).
[21] F. D. Natterer, F. Patthey, and H. Brune, Distinction of nuclear spin states with the scanning tunneling microscope, Phys. Rev. Lett. 111, 175303 (2013).
[22] S. Li et al., Rotational and vibrational excitations of a hydrogen molecule trapped within a nanocavity of tunable dimension, Phys. Rev. Lett. 111, 146102 (2013).
[23] C. Lotze et al., Driving a macroscopic oscillator with the stochastic motion of a hydrogen molecule, Science 338, 779–782 (2012).
[24] P. Merino et al., A single hydrogen molecule as an intensity chopper in an electrically driven plasmonic nanocavity, Nano Lett. 19, 235–241 (2019).
[25] L. Wang, Y. Xia, and W. Ho, Atomic-scale quantum sensing based on the ultrafast coherence of an H_2 molecule in an STM cavity, Science 376, 401–405 (2022).
[26] C. Weiss et al., Imaging Pauli repulsion in scanning tunneling microscopy, Phys. Rev. Lett. 105, 086103 (2010).
[27] M. Sicot et al., STM-induced desorption of hydrogen from Co nanoislands, Phys. Rev. B 77, 035417 (2008).
[28] J. Park et al., Surface magnetism of cobalt nanoislands controlled by atomic hydrogen, Nano Lett. 17, 292–298 (2017).
[29] W. A. Hofer et al., Role of hydrogen in giant spin polarization observed on magnetic nanostructures, Phys. Rev. Lett. 100, 026806 (2008).
[30] B. Verlhac et al., Atomic-scale spin sensing with a single molecule at the apex of a scanning tunneling microscope, Science 366, 623–627 (2019).
[31] A. Fétida et al., Single-spin sensing: A molecule-on-tip approach, ACS Nano 18, 13829–13835 (2024).
[32] L. Diekhöner et al., Surface states of cobalt nanoislands on Cu(111), Phys. Rev. Lett. 90, 236801 (2003).
[33] O. Pietzsch et al., Spin-polarized scanning tunneling spectroscopy of nanoscale cobalt islands on Cu(111), Phys. Rev. Lett. 92, 057202 (2004).
[34] M. V. Rastei et al., Size-dependent surface states of strained cobalt nanoislands on Cu(111), Phys. Rev. Lett. 99, 246102 (2007).
[35] See supplementary information.
[36] M. Ternes, Spin excitations and correlations in scanning tunneling spectroscopy, New J. Phys. 17, 063016 (2015).
[37] M. Ormaza et al., Efficient spin-flip excitation of a nickelocene molecule, Nano Lett. 17, 1877–1882 (2017).
[38] S. Loth et al., Controlling the state of quantum spins with electric currents, Nat. Phys. 6, 340–344 (2010).
[39] F. D. Natterer, F. Patthey, and H. Brune, Quantifying residual hydrogen adsorption in low-temperature STMs, Surf. Sci. 615, 80–87 (2013).
[40] G. Kresse and J. Furthmüller, Efficiency of ab-initio total energy calculations for metals and semiconductors using a plane-wave basis set, Comput. Mater. Sci. 6, 15–50 (1996).
[41] J. P. Perdew, K. Burke, and M. Ernzerhof, Generalized gradient approximation made simple, Phys. Rev. Lett. 77, 3865 (1996).
[42] G. Kresse and D. Joubert, From ultrasoft pseudopotentials to the projector augmented-wave method, Phys. Rev. B 59, 1758 (1999).
[43] S. Steiner et al., Calculation of the magnetic anisotropy with projected-augmented-wave methodology and the case study of disordered Fe_1-xCo_x alloys, Phys. Rev. B 93, 224425 (2016).
[44] K. Momma and F. Izumi, VESTA 3 for three-dimensional visualization of crystal, volumetric and morphology data, J. Appl. Crystallogr. 44, 1272–1276 (2011).
[45] J. Tersoff and D. R. Hamann, Theory of the scanning tunneling microscope, Phys. Rev. B 31, 805–813 (1985).
[46] N. Lorente and R. Robles, STMpw v1.0b2, Zenodo (2019), https://doi.org/10.5281/zenodo.3581159.
[47] R. Hammerling et al., Magnetic anisotropy of thin films of Co on Cu(111), Phys. Rev. B 66, 052402 (2002).
http://arxiv.org/abs/2409.02592v1
20240904101653
Exploring Citation Diversity in Scholarly Literature: An Entropy-Based Approach
[ "Suchismita Banerjee", "Abhik Ghosh", "Banasri Basu" ]
physics.soc-ph
[ "physics.soc-ph", "cs.DL" ]
Email: suchib.1993@gmail.com; abhik.ghosh@isical.ac.in; sribbasu1@gmail.com

Exploring Citation Diversity in Scholarly Literature: An Entropy-Based Approach
Suchismita Banerjee, Abhik Ghosh, and Banasri Basu
July 2024
===============================================================================

§ ABSTRACT

This study explores global citation diversity, examining its various patterns across countries and academic disciplines. We analyzed citation distributions in top institutes worldwide, revealing that the higher citation end of the distribution follows a power-law (Pareto) pattern and that the Pareto scaling exponent changes with the number of institutes considered. A novel entropy-based citation inequality measure has been introduced, enhancing the precision of our analysis. Our findings show that countries with small and large economies often group similarly based on citation diversity, with the groupings shifting as the number of institutes considered changes. Moreover, we analyzed citation diversity among award-winning scientists across six scientific disciplines, finding significant variations. We also explored the evolution of citation diversity over the past century across multiple fields. A gender-based study in various disciplines highlights citation inequalities among male and female scientists. Our innovative citation diversity measure stands out as a vital tool for evaluating citation inequality, providing insights beyond what traditional citation counts can offer. This thorough analysis deepens our understanding of global scientific contributions and promotes a more equitable view of academic accomplishments.

Keywords: Citation inequality; Diversity measure; General class of Entropy; Scholarly literature; Award winners.

§ INTRODUCTION

Citations are the currency of academia, reflecting the impact and influence of research publications. Ideally, a fair citation landscape would see recognition distributed proportionally to the quality and contribution of research. However, a growing concern in recent years is the phenomenon of citation inequality. This refers to the uneven distribution of citations, where a small number of papers garner a disproportionate share of citations while the vast majority receive significantly fewer <cit.>. Understanding the dynamics of citation inequality is crucial for several reasons. Firstly, it raises concerns about the fairness and effectiveness of the current evaluation system in academia. Overemphasis on citations can lead to a `publish or perish' mentality, potentially favoring sensational or trendy research over groundbreaking discoveries that take longer to gain recognition. Secondly, citation inequality can hinder the dissemination and application of valuable research, particularly from less established scholars or emerging fields. Measuring citation inequality is a challenging task that has garnered significant attention in the research community. Interestingly, different approaches have been employed to quantify it, drawing parallels to methods used in assessing economic inequality <cit.>. The Hirsch index, or h-index, is also widely recognized as a tool for measuring citation inequality, as highlighted by multiple studies <cit.>. However, in studies of citation inequality, the generalised entropy index, used in the area of income inequality <cit.>, has not yet been adopted as a useful measure. Motivated by all these works, here, for the first time, an entropy-based diversity measure is used to study citation inequality.
This research article delves into the complexities of citation inequality through an innovative approach utilizing entropy, a concept from information theory, to quantify citation diversity in scholarly literature. The underlying idea is that an entropy measure allows us to capture the evenness of the citation distribution across a set of publications (or institutions); higher entropy values indicate higher diversity and lower inequality of the citation distribution, and vice versa. To be more specific, high entropy signifies a more even distribution of citations, while low entropy indicates a concentrated distribution with a few highly cited papers. In general, diversity means how spread out individuals are within a group with respect to a certain trait that can be easily categorized or measured numerically. It may be mentioned here that assessing diversity within a population is a crucial issue across various applied sciences, such as ecology, biology, economics, sociology, physics, and management sciences; see, e.g., <cit.>. In the literature, we mostly find the use of the classical Shannon entropy as the entropy-based diversity measure in different contexts <cit.>. However, in this study, we use a new measure for diversity quantification, based on the concept of logarithmic norm entropy <cit.>, to analyze citation diversity across a range of scenarios.

Firstly, we analyze the citation diversity of top-ranked institutions in different countries around the world, which allows us to explore potential geographical variations in citation diversity. Then, we extend our investigation to analyze the diversity of citations received by the publications of top award-winning scientists (Nobel prize winners, Abel prize winners and Turing award winners). This analysis encompasses various disciplines, ensuring a holistic understanding of citation patterns in research publications across different academic fields. We also explore the time evolution of the citation diversity of the award-winning scientists by analysing the diversity of recent and century-old award winners across various disciplines. Finally, we disaggregate our findings by gender, enabling a nuanced exploration of potential gender-based disparities in citation practices across academic disciplines. This multifaceted entropy-based approach to citation diversity offers a detailed picture, capturing the subtleties and variations of citation distributions within the scientific landscape.

This research, which facilitates an understanding of citation diversity in scholarly literature, is presented in a clear and logical structure. In Section II, we outline the data sources for citation information and meticulously describe the data employed in the study. We provide details regarding the selection criteria for the award-winning scientists, the specific disciplines included, and the identification process for top institutes across different countries. Section III serves as the foundation of our analysis; we provide a step-by-step analytical framework developing the concept of the general class of logarithmic norm entropy and its use as a measure of citation diversity. Section IV presents the findings of our investigation, where we delve into the analysis of citation diversity across various scenarios and discuss their significance.
Finally, a concise summary of our key findings is provided in Section V, describing the main takeaways from the analysis of citation diversity across disciplines, institutions, and publication references. Furthermore, we discuss the broader implications of our research and potential avenues for future inquiry.

§ DATA DESCRIPTION

The `Ranking Web of Universities' (also commonly known as Webometrics) <cit.> is a comprehensive academic ranking system established in 2004, which has appeared twice per year since 2006. This public resource, developed by the Cybermetrics Lab, encompasses over 31,000 higher education institutions (HEIs) across more than 200 countries. Webometrics employs a mix of webometric (all missions) and bibliometric (research mission) indicators to assess university performance, promoting open access to scholarly knowledge. <cit.> provides the detailed citation data of the top institutes/universities across the world through `Transparent Ranking: Top Universities by Citations in Top Google Scholar profiles' <cit.>. We used the January 2024 edition of this data (retrieved on April 1, 2024) for our citation analysis. The detailed data on the citation counts of top institutes can be found in <cit.>. Moreover, our study also leverages data from `Scopus' <cit.>, an extensive bibliographic database of peer-reviewed literature. This resource provides publication and citation information for award-winning authors across various disciplines. This unified data source allows for robust comparisons and minimizes potential biases arising from using disparate data sources. To ensure consistency, this data source is utilized for all analyses within this study. We obtained total citation data for 21 scientists in each discipline from `Scopus' <cit.> on May 23, 2024. Additionally, we collected publication and citation data for individual scientists, including 30 Nobel laureates in physics, chemistry, and physiology/medicine (split evenly between recent and century-old awardees), 15 recent Abel prize winners in mathematics, 15 recent Turing award winners in computer science, and 15 recent Nobel laureates in economics, from the same source on May 23, 2024.

§ METHODOLOGY

§.§ General class of logarithmic norm entropy (LNE) and diversity measure (D)

It has long been known that potential families of entropies can be used as generalised diversity measures <cit.>. Recently, the concept of logarithmic norm entropy (LNE) has been introduced in <cit.> as a new measure for quantifying diversity, justified by its better statistical efficiency and robustness properties compared to other existing classes of entropy-based diversity measures. Building upon the established concepts of Shannon entropy <cit.> and Renyi entropy <cit.>, the LNE offers a scale-invariant generalization of the latter <cit.>. In this study, we leverage the LNE to quantify citation diversity in scholarly literature. This unique approach allows us to assess the robustness of our findings and gain a more complete understanding of citation diversity patterns. The generalized classes of Shannon entropy and Renyi entropy are defined, respectively, as:

H_β^(S)(p) = − (∑_i p_i^β log p_i) / (∑_i p_i^β),

H_(α,β)^(R)(p) = [1/(1 − α)] log[ (∑_i p_i^{α + β − 1}) / (∑_i p_i^β) ],

where p_i is the estimated normalized probability or frequency of occurrence of the i-th group/class among the M possible mutually exclusive groups, for each i = 1, …, M, and α, β > 0 are two parameters (constants). Clearly, at β = 1, Eq. (<ref>) and Eq.
(<ref>) coincide with the classical Shannon entropy and Renyi entropy, respectively. In this study, we consider the novel scale-invariant generalization of the Renyi entropy, namely the LNE measure defined as <cit.>

H_(α,β)^(LN)(p) = [αβ/(β − α)] log[ (∑_i p_i^α)^{1/α} / (∑_i p_i^β)^{1/β} ],

where α, β are two positive constants (tuning parameters) leading to different entropy measures. At β = 1 or α = 1, the LNE reduces to the Renyi entropy family, and it is generally symmetric in the choice of (α, β). We readily note the limiting interrelations between these entropies as:

lim_{α→1} H_(α,1)^(R)(p) = lim_{α→1} H_(α,1)^(LN)(p) = H_1^(S)(p) = − ∑_i p_i log p_i.

Imagine a finite set of M categories representing diverse perspectives, denoted by S = {s_1, ..., s_M}, such as citations across top institutes, an author's own publications, gender, and discipline-wise variations, where the (discrete) probability distribution of citing a particular perspective in these categories is represented as p = (p_1, ..., p_M), with p_i signifying the probability of a publication belonging to category s_i for each i = 1, ..., M. Formally, we define the LNE-based diversity measure (in percentage) for such a probability distribution p as

D = [H_(α,β)^(LN)(p) / log M] × 100%.

Additionally, it is noteworthy that the maximum value of the diversity measure equals 100% for all members of the LNE family, regardless of the values of the tuning parameters (α, β). It always lies between 0 and 100 (both inclusive), with higher values indicating greater diversity (and hence lower inequality) and vice versa. The citation diversity measure (D), based on the LNE, will be computed for each country and disciplinary group of prize-winning scientists, as well as for individual award winners, replacing p with its estimate p̂ derived from empirical data. We will also compute these metrics separately for male and female scientists within the award-winning cohort.

§.§ Asymptotic standard error and confidence interval

Since we are estimating the LNE-based diversity measures from empirical data, we must additionally quantify the possible extent of statistical error associated with our estimates in order to draw more effective conclusions. As proved in <cit.>, such estimates of the diversity measure (D) will be √n-consistent and asymptotically normal, with asymptotic variance σ_(α,β)^2(p) / [n (log M)^2], where

σ_(α,β)^2(p) = [α^2 β^2/(β − α)^2] [ W_{2α−1}/W_α^2 + W_{2β−1}/W_β^2 − 2 W_{α+β−1}/(W_α W_β) ],

with the notation W_c = W_c(p) = ∑_{i=1}^M p_i^c for any c > 0. Note that σ_(α,β)^2(p) is symmetric in the choice of (α, β), as intuitively expected from the similar behavior of the LNE measure itself. Since σ_(α,β)^2(p) varies continuously in the citation distribution (p), we can reliably estimate it from our empirical data by replacing p with its estimate p̂. Finally, taking the square root of the estimated asymptotic variance, we get the (asymptotic) standard error (say s) of the estimated diversity measure (D), with lower values indicating more reliable diversity estimates. By utilizing the standard errors (s) of the estimated diversity (D) in all our cases, we have additionally computed and plotted the 95% confidence intervals for the diversity measures, given by (D − 1.96s, D + 1.96s). This formula is obtained from the standard theory of statistical inference by utilizing the asymptotic normality of the diversity estimate.
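The estimator described above is straightforward to implement. The following Python sketch computes the LNE-based diversity D (in %), its asymptotic standard error, and the corresponding 95% confidence interval from a vector of category counts, using the tuning parameters α = 2.0 and β = 0.5 employed throughout this paper's analysis; the function name, the toy citation vector, and the identification of n with the total citation count are our own choices for illustration.

import numpy as np

def lne_diversity(counts, alpha=2.0, beta=0.5):
    """LNE-based diversity D (in %) and its asymptotic standard error.

    counts : citation (or publication) counts over the M categories.
    """
    counts = np.asarray(counts, dtype=float)
    n = counts.sum()                    # sample size; here taken as the total count (our interpretation)
    p = counts / n                      # estimated probabilities p_i
    M = p.size
    w = lambda c: np.sum(p ** c)        # W_c(p) = sum_i p_i^c

    # Logarithmic norm entropy H_(alpha,beta)^(LN)(p), normalized by log M
    h = (alpha * beta / (beta - alpha)) * np.log(w(alpha) ** (1 / alpha) / w(beta) ** (1 / beta))
    d = 100.0 * h / np.log(M)

    # Asymptotic variance sigma^2_(alpha,beta)(p) / (n (log M)^2), rescaled to percent
    sig2 = (alpha**2 * beta**2 / (beta - alpha)**2) * (
        w(2 * alpha - 1) / w(alpha)**2
        + w(2 * beta - 1) / w(beta)**2
        - 2 * w(alpha + beta - 1) / (w(alpha) * w(beta)))
    se = 100.0 * np.sqrt(sig2 / n) / np.log(M)
    return d, se

# Toy example: citation counts of a country's top 10 institutes (made-up numbers).
counts = [27000, 15000, 9000, 7000, 5000, 4000, 3000, 2500, 2000, 1500]
d, se = lne_diversity(counts)
print(f"D = {d:.2f}% with 95% CI ({d - 1.96 * se:.2f}%, {d + 1.96 * se:.2f}%)")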
Note that the length of the confidence interval is directly proportional to the standard error, and hence indicates the reliability of the estimated diversity; the shorter the confidence interval, the more reliable our estimates are. Moreover, such confidence intervals also help us to statistically compare the diversity measures of two contexts (e.g., countries, subjects, or scientists); the two diversity values can be inferred to be significantly different at the 5% level if the associated 95% confidence intervals do not overlap. This gives us a simple visual way to identify contexts having statistically similar or dissimilar diversities by just comparing the plots of their confidence intervals, as presented in the following sections. Throughout our entire analysis, we have used α = 2.0 and β = 0.5 in the definition of the LNE-based diversity to achieve a lower (asymptotic) standard error, and hence a shorter confidence interval, based on the illustrations provided in <cit.>.

§ RESULTS AND DISCUSSIONS

§.§ Citation analysis among top institutes across different countries

There is an increasing interest in worldwide university rankings through various metrics <cit.>. The rich and exhaustive data of university rankings motivate analysis from different angles and various perspectives <cit.>. In the present investigation of citation diversity, we consider the ranking of the universities/institutes (N_I) according to their total number of citations (N_cit). This helps us in examining the distribution pattern of total citations, across all disciplines, of the worldwide top institutions of a country, as well as the citation diversity measure (D) among the top institutions of each country. This study helps in delineating the geographical variation <cit.> of research activities.

§.§.§ Distribution pattern of citation counts of worldwide top institutes

Initially, we examine the distribution pattern of the total citation counts N_cit corresponding to a large number of institutes/universities (N_I) of various countries around the world. According to the Webometrics data (see Appendix for the data) considered for the analysis, the rank 1 institute is Harvard University in the USA, with a citation count of 27589889, while the Institute of Technology and Business of the Czech Republic corresponds to rank 5661, with a citation count of 1004. The data thus span a wide range of variation in N_cit. The distribution pattern (Fig. <ref>) provides valuable insights into how the citation data are spread out or clustered around the world's leading universities and institutes. Fig. <ref> is the bar plot of the rank-size distribution, based on the ranks of the institutes N_I and the corresponding sizes of the citation counts N_cit, with an inset displaying the same plot in log-log scale for a clearer view of the trend. Fig. <ref> depicts the frequency distribution curve of total citation counts across different institutions in log-log scale. The distribution exhibits power-law behavior at the higher citation end. The robustness of the fitted power law is checked by a goodness-of-fit test yielding a satisfactory Kolmogorov-Smirnov distance (KS) and p-value for the fit. To proceed further, we consider the citation data of the top 10, 20, and 50 institutes or universities from each country across the globe. These data are found to be spread over 72, 55 and 25 countries, respectively, for the top 10, 20 and 50 institutions. It is fascinating to note from Fig. <ref> that the power-law behaviour holds for all these 3 separate cases also.
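A minimal way to reproduce this kind of tail fit from the raw N_cit values is sketched below: a maximum-likelihood (Hill-type) estimate of the exponent above a chosen lower cutoff, together with the Kolmogorov-Smirnov distance between the empirical and fitted tail distributions. The cutoff choice and the synthetic data are ours, and the quoted p-values would additionally require a bootstrap in the Clauset-Shalizi-Newman spirit, which is omitted here.

import numpy as np

def fit_powerlaw_tail(citations, xmin):
    """MLE exponent alpha and KS distance for the tail x >= xmin (continuous approximation)."""
    x = np.sort(np.asarray(citations, dtype=float))
    tail = x[x >= xmin]
    n = tail.size
    alpha = 1.0 + n / np.sum(np.log(tail / xmin))      # Hill / MLE estimator
    emp_cdf = np.arange(1, n + 1) / n                  # empirical CDF on the tail
    fit_cdf = 1.0 - (tail / xmin) ** (1.0 - alpha)     # fitted Pareto CDF
    ks = np.max(np.abs(emp_cdf - fit_cdf))
    return alpha, ks

# Synthetic stand-in for institute citation counts (log-normal bulk + Pareto tail).
rng = np.random.default_rng(1)
data = np.concatenate([rng.lognormal(10, 1, 5000),
                       (rng.pareto(1.8, 600) + 1.0) * 2e5])
alpha, ks = fit_powerlaw_tail(data, xmin=2e5)
print(f"alpha = {alpha:.2f}, KS distance = {ks:.3f}")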
Each plot in the figure is accompanied by its respective KS distance (KS) and p-value for the fit, as well as the corresponding power-law exponent (α). However, there is a variation in the value of the exponent with the change in the number of institutions considered for the analysis. The consistent power-law pattern at the higher end of the citation counts indicates a predictable relationship between the rank of an institution and its citation count across different scales. Some recent studies have also demonstrated this power-law trend in citation analyses <cit.>.

§.§.§ Citation diversity in top institutes across the globe

This part of the discussion zooms in on the diversity in the distribution of the citations of the top institutes or universities within each country. We employ the calculated D values, as previously explained in Section III, derived from the total citation count across all disciplines for each country's top 10, 20, and 50 institutions. This approach allows us to classify the countries based on these diversity values (D) and also into 3 subgroups within each group based on high, medium and low citation counts (N_c). Tables <ref>, <ref> and <ref> provide a detailed breakdown of these groupings, respectively, for the top 10, 20 and 50 institutions across various nations; the associated confidence intervals of the diversity measures are presented in Figs. <ref>, <ref> and <ref>, respectively, along with box-plot visualizations of the raw total citation data in each case. Given the wide range of N_c counts per country, we use a logarithmic scale for the y-axis in our box-plots (Figs. <ref>, <ref>, <ref>) to effectively capture and represent the distribution of citation counts within different countries. In general, we have noted that some countries, despite having a high N_c count, do not necessarily have high D values. Conversely, there are countries with lower N_c counts that exhibit very high D values. Therefore, a combined analysis offers insights into both the overall diversity and the spread of citations among top institutes/universities across various countries.

Results for top 10 institutions

In this analysis of the top 10 institutes across various countries, we examine 72 countries and divide them into six distinct groups (Group A - Group F) based on their decreasing diversity values (D), with each group being closely homogeneous in terms of its values of D. Each of these groups is again divided into 3 subgroups based on high, medium, and low citation counts N_c for each country (see Table <ref>). For example, Group A countries with very high D can be sub-grouped into the A1, A2 and A3 groups of countries with high, medium and low N_c counts, respectively. Notably, Fig. <ref> highlights the remarkably small confidence interval for each country's citation diversity, signifying a high degree of certainty in our diversity estimates. It is evident from the N_c count data of Table <ref> that, within Group A, the USA stands out as the most highly cited country when examining its top 10 institutions. However, our analysis reveals a different leader in Group A, with Turkey emerging as the country with the highest citation diversity (lowest citation inequality). In Group B, while Switzerland emerges as the most frequently cited country, our study shows that Austria exhibits the highest citation diversity within the group. This indicates that although Switzerland may dominate in terms of citation volume, Austria's citations are more evenly distributed.
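Before continuing with the group-by-group observations, we note that this classification step can be scripted directly on top of the lne_diversity function given in Section III. The sketch below computes D and the total citation count per country from its top-10 institute counts and assigns a group label; apart from the A/B boundaries of 93% and 89% mentioned later in the text, the group cut-offs and the input numbers are placeholders.

# Toy input: top-10 institute citation counts per country (made-up numbers).
by_country = {
    "CountryA": [27000, 21000, 18000, 15000, 14000, 13000, 12000, 11000, 10000, 9000],
    "CountryB": [90000, 8000, 6000, 4000, 3000, 2500, 2000, 1800, 1500, 1200],
}
# Assumed D boundaries (in %) for groups A..E, in decreasing order of diversity;
# only the 93% (A) and 89% (B) cut-offs are quoted in the text, the rest are placeholders.
bounds = [93, 89, 85, 80, 70]

def group_label(d):
    for label, b in zip("ABCDE", bounds):
        if d >= b:
            return label
    return "F"

for country, counts in by_country.items():
    d, _ = lne_diversity(counts)
    print(f"{country}: total citations = {sum(counts):>8d}, D = {d:5.2f}% -> Group {group_label(d)}")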
Conversely, Finland, despite being a part of the highly cited subgroup B1, registers the lowest citation diversity in Group B, suggesting a more concentrated citation pattern. Meanwhile, in Group D, Belgium stands out as the most frequently cited country, yet Norway surpasses it in citation diversity, indicating a wider spread of citations across top Norwegian institutes/universities. In all other groups and subgroups, similar kinds of results can be inferred. It is apparent that relying solely on total citation values or average citation counts fails to convey the full picture of citation analysis; the citation diversity measures are also required for a complete picture.

[1] Excluding Oman due to its significantly lower value compared to the other countries.

Results for top 20 institutes

By broadening our analysis to include the top 20 institutes, we have been able to study 55 countries across the globe, as per the availability of data. In this case, we observe significant changes, compared to the top 10 institutions, in the diversity measure (D) and the average citation count (N_c) for each country. The countries are again grouped as per their values of D and N_c, as in the case of the top 10 institutes (Table <ref> and Fig. <ref>). This expanded view makes the distinctions between countries based on these metrics more apparent. For instance, Israel is initially ranked in Group C, with high D and the maximum N_c within this group, when considering the top 10 institutes. However, when the scope is broadened to include the top 20 institutes, its performance metrics decline, moving it to Group E with a significantly lower D value. Similarly, the Netherlands is categorized in Group C with lower D and N_c values when examining the top 20 institutes, but rises to Group A with much higher D and N_c values when focusing on the top 10 institutes. In contrast, Morocco is positioned in Group A with much higher D and N_c values when considering the top 10 institutes, but drops out of the rankings entirely when the scope is expanded to the top 20 institutes.

Results for top 50 institutes

When we focus on the citation data for the worldwide top 50 institutes, we get data for only 25 countries, whose diversity values are calculated from their total citations (Table <ref> and Fig. <ref>). In this analysis, when focusing on the top 50 institutions, Taiwan and Thailand fall into Group E, characterized by lower D values. However, when considering only the top 20 institutions in these countries, they move to Group B, which has comparatively higher D values. Conversely, Spain is placed in Group A with high D and N_c values when considering the top 20 institutions, but it shifts to Group B with lower D and N_c values when the top 50 institutions are considered. In Group C, Canada is grouped with South Korea, Japan, and others, sharing similar D values but with significantly different N_c values when considering the top 50 institutions. However, when focusing on the top 20 institutions, Canada moves to Group A, while South Korea and Japan are in Group B, with different D values. In conclusion, our novel citation diversity metric demonstrates its superiority over simple total citation counts in capturing a nation's research landscape. We observed significant variations in diversity depending on the number of top institutes considered.
For instance, Israel's diversity dropped from 86.68% (top 10) to 70.07% (top 20), even falling out of the top 50 list altogether, indicating that the country has fewer than 50 renowned institutes. This highlights the crucial influence of a country's concentration of high-performing institutes on its overall diversity score. Moreover, we observe that while the UK exhibits very high total citations across its top 10, 20, and 50 institutes, it is only classified in Group A, characterized by a diversity range of 93% to 99%, when considering its top 10 and 20 institutes. However, when the top 50 institutes are taken into account, despite the high citation counts, the diversity value decreases, placing the UK in Group B, with a diversity range of 89% to 93%. This indicates that although the UK maintains a strong citation performance, its citation diversity varies significantly with the number of institutes considered. When focusing on a smaller number of top institutes, the UK demonstrates a broader citation diversity, suggesting a wide-reaching influence of its most prominent research institutions. Conversely, India and the USA displayed remarkable consistency in their diversity across all three institute tiers, suggesting a more balanced distribution of citations. However, expanding the scope to include more institutes reveals a drop in diversity, implying a more concentrated citation pattern. This highlights the importance of considering both citation count and diversity to fully understand the impact and reach of a country's research output across academic institutions around the globe. This analysis underscores the importance of considering the citation distribution within a country, which total citation counts alone cannot effectively capture. Total citation counts often mask the underlying distribution of citations, potentially misleading interpretations. By employing our novel metric, we gain a clearer picture of how citations are distributed across a country's top research institutions. Our approach provides a more nuanced understanding of a nation's research landscape by revealing the distribution of citations amongst its leading institutions.

§.§ Citation diversity in various scientific disciplines

Our citation diversity analysis in the previous section was performed at the institutional level, irrespective of individual scientists or any specific scientific discipline. We now shift our focus to study the citation diversity in the publication data of various scientific disciplines. We specifically explore the citation data of 126 internationally acclaimed elite researchers in six important disciplines: physics, chemistry, mathematics, computer science, economics and physiology/medicine, 21 from each discipline. Additionally, to see whether or not the citation pattern in various scientific disciplines has changed over recent times, we also explored the citation data of a total of 63 Nobel prize winners in physics, chemistry and physiology/medicine.

§.§.§ Award winning scientists in recent times in different scientific disciplines

To develop a thorough understanding of citation diversity in different scientific disciplines, we implement our methodology from three distinct viewpoints. Table <ref> showcases the calculated D values and N_c counts per scientist for every discipline, and Fig. <ref> depicts these diversity measures, along with their 95% confidence intervals, and the distributions of the individual counts through their box-plots.
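Our reading of these three distinct viewpoints is that the same diversity estimator is applied to three different vectors over the 21 scientists of a discipline: their total citations, their total publications, and their per-paper citation rates. A small sketch under that assumption, reusing lne_diversity from Section III:

import numpy as np

# Toy data for one discipline: 21 award winners (made-up numbers).
rng = np.random.default_rng(7)
papers = rng.integers(80, 600, size=21)                 # total publications per scientist
citations = papers * rng.integers(20, 300, size=21)     # total citations per scientist

views = {
    "total citations": citations,
    "total publications": papers,
    "per-paper citations": citations / papers,          # SE only indicative here (n is not a true count)
}
for name, vec in views.items():
    d, se = lne_diversity(vec)
    print(f"{name:>20s}: D = {d:.2f}% +/- {1.96 * se:.2f}")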
It is noted that the N_c count per scientist in mathematics is the lowest, whereas its D value is not so low. On the other hand, the N_c count per scientist in computer science is the highest, but its D value is the lowest. So we can say that the citation inequality of award winners in computer science is very high compared to the other disciplines. Additionally, we observe that the diversity of papers in computer science is relatively low compared to other subjects. However, the difference in paper diversity among the various other subjects is not as pronounced as the difference in citation diversity between computer science and the rest.

§.§.§ Award winning scientists in old times in three principal disciplines

We now extend our analysis to examine the citation diversity in the publications of century-old Nobel-winning scientists in physics, chemistry and physiology/medicine. We employ the three aforementioned viewpoints to calculate diversity percentage values for each discipline across historical periods (Table <ref>). Fig. <ref> illustrates their citation diversity and count distributions (box-plots). It is evident that, in the earlier period, the diversity values for physics were the lowest from both the total citation and per-paper citation perspectives. However, from the total publication perspective, physics had the highest diversity value. This suggests that citation inequality in physics was significant in previous times, whether considering the total citations or the per-paper citations of award winners in this discipline. Conversely, the number of papers in physics was more evenly distributed compared to the other two subjects in historical times, as indicated by the higher diversity value for total publications. Additionally, the average citation count for chemistry was the lowest from both the total citation and per-paper citation perspectives, while from the total publication perspective, physics had the minimum average count. This again demonstrates that, to obtain an exhaustive understanding of citation analysis, it is essential to look at the citation diversity values along with the citation counts of the publication data.

§.§.§ Comparing citation diversity between recent and old times award winners in three principal disciplines

We now compare the citation diversity values across the three principal disciplines between recent and old times. Table <ref> shows the D values for both recent and past times from 3 different viewpoints. Fig. <ref> clearly illustrates a significant increase in D in recent times for all three disciplines in all cases. From this comparison, it is evident that diversity values have increased across all disciplines from past eras to recent times, indicating a decrease in citation inequality. Notably, the recent era shows very small differences in diversity values among the three disciplines, highlighting a more even distribution of the total number of citations, the total number of papers, and the per-paper citations among award winners across all disciplines. In contrast, these differences were much larger in the past. Additionally, we observe that while physics had the lowest diversity for total citations in earlier times, it now has the highest D value compared to the other two disciplines in recent times. This suggests a significant improvement in the equality of the citation distribution for physics. Again, Fig.
<ref> reveals that the confidence interval for the total citation perspective is minimal, whereas the confidence intervals for the total publication and per-paper citation are comparatively large in both the recent and past eras. Thus, we can infer that the data for the total citation is more accurate compared to the other two perspectives. Overall, this comparison reveals that the citation distribution in physics has improved markedly in recent times compared to previous times. §.§ Citation diversity in the publication of Individual prize winning scientists across different disciplines We now aspire to inspect the citation diversity in the publications of the individual prize winning scientists across various scientific disciplines. We have chosen the citation data for a total of 135 eminent scholars from the aforesaid scientific disciplines. We fixed on the Nobel prize winners in physics (30), chemistry (30), physiology/medicine (30) and economics (15), Abel prize winners in mathematics (15) and Turing award winners in computer science (15). §.§.§ The Nobel Prize winners in Physics The Nobel prize in physics has been awarded to 224 individuals between 1901 and 2023. For our investigation we have explored the citation counts of 30 Nobel laureates in physics, 15 from recent times (2019-2023) and 15 from the period (1901-1915). Using this data, we calculated the citation diversity values in the publication of these scientists following the methodology outlined above in Section III. Table <ref> provides the citation diversity (D) values for each scientist considered along with their average citation counts (N_c). Fig. <ref> illustrates the citation diversity values for each recent laureate, with Fig. <ref> specifically highlighting the citation diversity of recent laureates. Notably, the confidence intervals for each point are very small, confirming the accuracy of these values. Additionally, Fig. <ref> presents a box-plot of the citation counts of the laureates from raw citation data of their publications. The citation diversity values for the 15 recent laureates range from about 60% to 90%, with higher diversity correlating with lower average citation counts, underscoring the limitations of using average citations alone to represent a laureate's citation distribution. Instead, the citation diversity measure offers a more precise depiction of these patterns. In Fig. <ref>, we observe the citation diversity values of earlier Nobel laureates, with Fig. <ref> revealing a wide range of citation diversity from 20% to 80%. Larger confidence intervals further extend this range. Fig. <ref> shows a box-plot based on their raw citation data, revealing that earlier laureates generally had lower citation counts but higher diversity values. This highlights how citation diversity accurately reflects citation distribution over time. Overall, the increase in diversity values from earlier to more recent laureates suggests a decline in citation inequality among Nobel laureates in physics over the years. §.§.§ Nobel Prize winners in Chemistry From 1901 to 2023 the Nobel prize in chemistry has been bestowed on a total of 192 individuals. For our analysis, we have picked up the citation data of 30 Nobel prize winners in chemistry, 15 during 1901-1920 and 15 between 2018-2023. Table <ref> represents the calculated diversity values and average citation counts of all the listed Nobel prize winners in chemistry. In Fig. 
<ref>, we see the citation diversity of 15 recent Nobel laureates in chemistry, with diversity values ranging from 60% to 90%. Two distinct groups emerge: one containing four laureates with citation diversity between 60%-70%, indicating higher citation inequality, and another between 80%-90%, reflecting a more balanced citation distribution. The minimal confidence intervals confirm the reliability of these values. Fig. <ref> further shows that despite similar average citation counts, diversity values differ significantly, revealing more insightful patterns in citation distribution. Meanwhile, Fig. <ref> shows earlier laureates' citation diversity ranging from 65% to 90%, though with larger confidence intervals, indicating greater variability. §.§.§ Abel prize winners in Mathematics The Abel prize is awarded annually (2003 onwards) to one or more outstanding mathematicians and is widely considered the Nobel prize of mathematics. Here we consider the data for 15 Able prize winners during 2012-2023. Table <ref> presents the average citation counts and diversity values for each Abel prize winner considered. In Fig. <ref>, we observe the citation diversity and box-plot for the citation counts of each Abel prize winner. These citation diversity values of each scientist range from 70% to 90%, indicating moderate citation inequality. The confidence intervals for most diversity values are not high, though a few have slightly higher confidence intervals. This suggests that most calculated diversity values are reliable, with some variability due to the use of publicly available data. In this case, we observe that one of the 2015 Abel Prize winners has the highest N_c value but a lower D value. Conversely, one of the 2020 Abel Prize winners has a comparatively lower N_c but the highest D value. It is also noteworthy that, despite having similar N_c values (with the exception of three cases), their D values vary significantly, ranging from 71% to 89%. §.§.§ Turing Award winners in Computer Science The ACM A. M. Turing Award is an annual prize given by the Association for Computing Machinery (ACM) for contributions of lasting and major technical importance to computer science. Commencing in 1966, as of 2024, 77 people have been awarded the prize. We settled on the publication data of 15 highly recognized computer scientists between 2015-2023 only. Table <ref> presents the calculated diversity values along with the average citation counts for each Turing prize winner. In Fig. <ref>, we present an overview of the diversity values and citation ranges for each Turing award winner in computer science. Notably, the diversity value of one of the 2015 Turing award winners is significantly lower compared to the others, indicating a high citation inequality for that individual. Conversely, the diversity values of the other computer scientists range between 55% and 85%, with a distinct division at 70%. Below this threshold, there are 7 Turing award winners, and above it, there are also 7 winners. This suggests that the citation inequality is higher for the 7 scientists below 70% compared to those above it. In this analysis, the confidence interval is minimal, except for two cases where it is slightly larger to increase the possibility range of that value. For better understanding, we maintain the same order of the award winners on the x-axis in Fig. <ref> as in Fig. <ref>, where they are arranged in ascending order of diversity values. 
However, no meaningful pattern is observed from the total citation range in Fig. <ref>, but there is a clear pattern in the diversity values of each scientist shown in Fig. <ref>. §.§.§ Nobel prize winners in Economics The first Nobel Memorial Prize in economic sciences was awarded in 1969. As of the awarding of the 2023 prize, 55 prizes in economic sciences have been given to 93 individuals. We considered the publication data of 15 distinguished economists between 2017 and 2023. Table <ref> presents the diversity values and average citation counts for these 15 individual Nobel laureates in economics. Fig. <ref> displays the citation diversity values for each laureate, showing that two recent laureates have very low diversity, indicating high citation inequality. In contrast, the citation diversity of the other laureates ranges from 70% to 85%, implying a more balanced citation distribution across their publications. The small confidence intervals for these values reinforce the accuracy of our diversity calculations. Fig. <ref> presents a box-plot of the total citations for each laureate, maintaining the same order of laureates as in Fig. <ref> for easy comparison. Although the total citation ranges vary among the laureates, they do not provide significant insights into citation distribution. Instead, our diversity values effectively illustrate the extent of citation inequality among these economics laureates. §.§.§ The Nobel Prize winners in Physiology/Medicine The Nobel prize in physiology/medicine has been awarded 114 times to a total of 227 laureates from 1901 to 2023. Our analysis focuses on two distinct groups: 15 laureates from the period 2017 to 2023 and another 15 from the period 1902 to 1923. Table <ref> presents the citation diversity values and corresponding average citation counts for all of their publications. In Fig. <ref>, we show the diversity values and total citation ranges for 15 recent Nobel laureates in Physiology/Medicine. Fig. <ref> categorizes them into three groups: five laureates with diversity below 85%, eight between 85%-90%, and two above 90%, indicating low citation inequality overall. The minimal confidence intervals confirm the reliability of these diversity values. In Fig. <ref>, we examine diversity values and citation ranges of earlier laureates, with most diversity values falling between 65%-87%, except for a laureate from 1904 who shows significant citation inequality. The broader confidence intervals for these early laureates suggest greater uncertainty in the data, unlike the more reliable and precise values seen in recent times. In summary, this comprehensive analysis across multiple disciplines (physics, chemistry, mathematics, computer science, economics, and physiology or medicine) demonstrates the significance of citation diversity values as a more insightful metric than total or average citation counts. In physics, our analysis illustrates that diversity values have risen over time, indicating decreased citation inequality among Nobel laureates. In chemistry, despite an increase in average citation counts, diversity values have remained stable, underscoring their role in revealing citation distribution. Similar findings in mathematics reinforce the relevance of this measure. For Turing award winners in computer science, our calculated diversity measure effectively captures citation distribution, providing clearer insights into citation inequality.
In economics, high diversity values denote low citation inequality, offering a comprehensive understanding of laureates' citation patterns. In physiology or medicine, diversity values have increased in recent times, indicating a more equitable distribution of citations among publications. Since the awards in mathematics, computer science and economics began relatively late, we cannot provide a comparative analysis, over time, of the citation diversity values in these three disciplines. Our analysis advocates for the adoption of more nuanced metrics in evaluating scholarly impact, fostering a fairer assessment of academic contributions across various scientific fields. §.§ Citation diversity in the publication of prize winning male and female scientists Gender bias in paper citations plays a crucial role in making women's research less visible. Some well-documented studies <cit.>, <cit.> highlight the under-attribution of women's contributions in scientific research, evidenced by a citation gap between male and female authors. However, men and women still publish at similar annual rates and have comparable career-wise impact, with career length and dropout rates explaining many disparities <cit.>. In a unique approach to gender-based citation analysis, our objective is to examine the uniformity in the distribution of citations of the publications of recent award-winning male and female scientists across six scientific disciplines using the citation diversity measure. Table <ref> provides detailed information on the number of male and female award-winning scientists across different scientific disciplines, along with the period of our analysis. Additionally, the table includes the average citation count per publication for all male and female award winners in different disciplines and our calculated diversity values for these scientists. It is noted that among 126 recent award winners across six disciplines, there were no female scientists in mathematics and computer science during the period under consideration. Fig. <ref> reveals that although both male and female scientists exhibit high diversity values, indicating low citation inequality, male scientists generally have higher diversity values than female scientists in physics, economics, and physiology/medicine (in physics and physiology/medicine the diversity values of male and female scientists are very close). In chemistry, however, female award winners show a more even citation distribution than their male counterparts. Both male and female scientists in physics and chemistry have high diversity values (above 90%). Conversely, in economics and physiology/medicine, there is a significant difference in diversity values, with female scientists experiencing higher citation inequality. Additionally, Fig. <ref> depicts the total citation range for male and female award winners in these four disciplines. Given the greater number of male scientists, their total citation range is higher. However, when examining the average citation count per scientist, female scientists in chemistry have a higher average, supporting the diversity value findings. While the average citation count and total citation range provide some insights, the diversity values more effectively illustrate citation distribution and inequality among male and female award winners in each discipline.
§ CONCLUSION Our extensive study on citation analysis sheds light on various aspects of global citation inequality, offering a detailed understanding of citation patterns across different countries and academic disciplines. Key highlights of our work may be summarized as follows: * Distribution pattern of citation counts: We examined the distribution of citation counts in top institutes across various countries, revealing the Pareto law nature of the upper end of the distribution with a breakdown at the lower end. We also showed how the Pareto law's scaling exponent changes with the number of top institutes across the globe. * Novel citation diversity measure: A new log-normal entropy (LNE) based diversity measure has been introduced to measure citation inequality, enhancing the accuracy and depth of our analysis. Previous research has extensively explored diversity measures across various fields, also employing different metrics to assess citation distributions. However, this study marks the first instance of using an entropy-based diversity measure specifically to quantify citation distribution. We have utilized this innovative metric to effectively measure citation inequality, enhancing our understanding of the disparities in citation patterns. * Institutional citation diversity measure across the world: We calculated citation diversity measures with confidence intervals, grouping countries based on these measures in top institutes (10, 20, 50) worldwide. This revealed that many small countries share groups with large economic powers, and groupings shift with the number of institutes considered. Box-plots were utilized to study the total number of citations, suggesting the emergence of subgroups based on citation counts. * Discipline-wise citation diversity: We further calculated citation diversity along with total citation counts (box-plots) of award winning scientists in six disciplines (21 scientists from each discipline), uncovering the importance of measuring citation diversity of award winners across disciplines. Time evolution of the citation diversity across disciplines over the century was also studied in three main disciplines. * Citation diversity of publications of award winning individual scientists: Citation diversity measures were analyzed for publications by award winning scientists in six disciplines (from 2000-2023 with publicly available data of 15 scientists from each disciplines), showing significant variation across fields. This was also extended to individual award winners from 1901-1920 in three principal disciplines. Also the time evolution of author wise diversity measures in three disciplines provided insights into how citation patterns change over time. * Gender-based study in citation diversity: A gender-based analysis of citation diversity for male and female scientists in four disciplines highlighted areas of both low and high inequality, with a notable absence of female award winners in two disciplines during our considered period of analysis from 2007 to 2023. This extensive study, based on the data of the top institutes or highly acclaimed elite researchers, underscores the complexity and diversity of citation practices across the scientific landscape, offering a detailed examination that considers multiple dimensions and perspectives. Our findings suggest that the new citation diversity measure serves as a vital metric for assessing citation inequality, providing exceptional insights that citation counts alone cannot achieve. 
As a future research project, to extend the citation diversity analysis to the entire scientific community, one may incorporate data from a larger and more diverse group of scientists, beyond just the elite group of top award winners. Additionally, the investigation of the lower end of citation distributions, either in isolation or in conjunction with other distributional models that adequately fit the overall data, is another important open question for future work. This could provide deeper insights into the factors influencing lower citation counts and shed light on the dynamics governing citation inequalities. Through this, we aim to foster a clearer understanding of scientific contributions globally.
http://arxiv.org/abs/2409.02873v1
20240904165600
Bounds on detection of Bell correlations with entangled ultra-cold atoms in optical lattices under occupation defects
[ "Tanausú Hernández Yanes", "Youcef Bamaara", "Alice Sinatra", "Emilia Witkowska" ]
cond-mat.quant-gas
[ "cond-mat.quant-gas" ]
§ ABSTRACT Bell non-locality stems from quantum correlations effectively identified using inequalities. Spin chains, simulated with ultra-cold atoms in optical lattices, Rydberg atoms in tweezer arrays, trapped ions, or molecules, allow single-spin control and measurement. Therefore, they are suitable for studying fundamental aspects of these correlations and non-locality. Occupation defects, such as vacancies or multiple atoms occupying a single site due to imperfect system preparation, limit the detection of Bell correlations. We investigate their impact using a simplified toy model parameterized by the probability of a site being singly occupied. We derive the corresponding Bell inequality and identify the smallest probability that establishes a lower bound for detecting Bell correlations. We relate the bound to two physical parameters leading to defects in occupations: non-zero temperature and filling factor, focusing on entangled ultra-cold atoms in optical lattices. Finally, we numerically validate the predictions of the toy model by full many-body simulations. Bounds on detection of Bell correlations with entangled ultra-cold atoms in optical lattices under occupation defects E. Witkowska September 9, 2024 *These authors contributed equally to this work. § INTRODUCTION Quantum mechanics introduced groundbreaking concepts of non-local correlations and entanglement that challenged well-established principles of classical physics, including realism, causality and locality <cit.>. A classical picture can be recovered at the price of introducing local hidden variables, and the corresponding local-variable (LV) theories are shown to satisfy Bell inequalities <cit.>. Quantum correlations that violate Bell inequalities are inconsistent with LV theory and are, therefore, referred to as non-local <cit.>. The most robust method for entanglement certification is provided by violating Bell's inequalities, as it avoids assumptions about the physical nature and degrees of freedom to be measured or the calibration of measurements <cit.>. This is known as the device-independent scenario, which is a powerful resource in many quantum information tasks, such as self-testing <cit.>, randomness amplification and expansion <cit.>, quantum key distribution <cit.>, and quantum sensing and metrology <cit.>. Recent experimental demonstrations have employed various platforms, including entangled photons <cit.>, spins in nitrogen-vacancy centers <cit.>, superconducting circuits <cit.>, pairs of Josephson phase qubits <cit.>, and neutral and ultra-cold atoms <cit.>. Many studies have concentrated on few-particle scenarios; however, correlations also naturally emerge in quantum many-body systems <cit.>. Atom assembly in optical lattices and optical tweezer arrays <cit.>, as well as trapped ions and molecules <cit.>, offer excellent approaches for studying many-body entanglement, with capabilities for single-atom preparation, control, and detection. In the above-mentioned platforms, entanglement can be generated dynamically from the initial coherent state well approximated by the one-axis twisting (OAT) model <cit.> where spin-squeezed and GHZ states are produced.
Given the control and local measurements possible in such architectures and their scalability, the systems offer the powerful setup to generate and study fundamental aspects of Bell correlations and non-locality, enabling the exploration of quantum information concepts. Imperfections, however, e.g. in system preparation, may introduce occupation defects, limiting Bell inequalities' application and entanglement certification. In this paper, we study the role of occupation defects in detecting Bell correlations focusing on N two-level bosonic atoms in a spin-entangled state distributed among M sites of an optical lattice, such that N≤ M. In the Bell scenario, we consider the measurement of two local collective spin observables at each lattice site where 0, 1 or 2 atoms are present. For each measurement result there are three possible outcomes: ±1/2 when the site is singly occupied, and 0 whenever a defect such as an empty or doubly occupied site is present, as illustrated in Fig. <ref>. We introduce a simplified toy model where the atoms' internal and external degrees of freedom are decoupled. The dynamics of the internal spin degrees of freedom, including possible entanglement among the spins, is assumed to be given by the OAT model with N atoms. The external degrees of freedom are parameterized by the probability p of having a single occupied site allowing vacant, double, etc. occupancy sites when p<1. We derive the corresponding p-parameterized Bell inequalities based on M- and two-sites Bell correlations <cit.>. We analytically estimate the lowest (critical) value of the probability p for violation of Bell inequality. We obtain p_c≈ 1/√(2) for the M-site Bell correlations. For the two-site Bell correlations, we obtain p_c=4/5 and p_c=√(3)/2 when N=pM<M and N=M, respectively. These results are general and can be relevant for any platform where individual addressing of spins is possible  <cit.>, for example as in recent experiments using an array of trapped ions <cit.> and Rydberg atoms <cit.> demonstrating generation of entanglement in terms of scalable spin-squeezing with tens of spins. We explore the predictions of the toy model by performing full many-body numerical simulations of ultra-cold atoms in an optical lattice. We consider two distinct protocols for Bell correlations generation, which differ by the source of the occupation defects. In the first one, Bell correlations between individual atoms are generated in the Mott insulating regime via a weak inhomogeneous magnetic field or interaction anisotropy <cit.>, and the considered occupation defects are vacant sites (holes). The probability p of single occupancy sites is the filling factor f=N/M. Our numerical simulations confirm predictions of the toy model of the lower bound on the filling factor concerning the detection of the M-sites Bell correlations. However, the detection of the two-site Bell correlations in the lattice system is hindered whenever f<1 for fixed positions of holes. The movement of holes turned out to be crucial to lower the numerical bound on the filling factor, allowing for the detection of two-site Bell correlations up to f≈ p_c, as predicted by the toy model. In the second protocol, entanglement is generated in the superfluid regime using atom-atom interactions <cit.> and next is transferred to the Mott phase by increasing the lattice height. The main source of defects is imposed by non-zero temperature T of an initial sample when N=M. 
We relate analytically the probability p with the initial temperature T of the system in the limit of an isoentropic transformation that leaves the system in thermal equilibrium at each moment. In this case, we determine analytically the upper bound for the initial temperature corresponding to p_c below which M- and two-site Bell correlations can be detected. The paper is organized as follows: In Secs. <ref> and <ref>, we provide the theoretical framework for the Bell scenario and toy model. This is applied to the system composed of ultra-cold atoms in an optical lattice presented in Sec. <ref> via analytical and numerical approaches. In Secs. <ref> and <ref> we present the results for the first and second protocols, respectively, discussing the role of defects introduced by holes and non-zero initial temperature of the system. We conclude with further discussion and outlook in Sec. <ref>. § BELL SCENARIO We consider M spatially separated subsystems labelled i=1,...,M, sharing a quantum M-partite state described by the density operator ρ̂. In each subsystem, k observables labeled α=0,...,k-1, can be measured, each having d possible outcomes. A Bell experiment, as shown in Fig. <ref>-(a), involves choosing an arbitrary observable for each of the M subsystems α={α_i}_i=1^M, and recording the measurement results r={r^(i)}_i=1^M. The goal is to determine the M-partite probability distribution P_M(r|α) for the outcomes r given the settings α for all possible choices of settings α. In the LV theory <cit.>, any probability distribution P_M(r|α) can be written as P_M^( LV)(r|α)=∫ dλ q(λ)∏_i=1^MP^(i)(r^(i)|α_i,λ). The local nature of this equation resides in the fact that the probability distribution of the outcomes for any given subsystem i depends only on the setting within that same subsystem i. The correlations between different subsystems here take their origin from a dependence relation that was established in the past, when the state ρ̂ was generated. This dependence can be fully described by some variable λ, which may be random with a probability distribution q(λ), affecting simultaneously all the M subsystems. Furthermore, if we consider a local realistic model where the measurement result r^(i) is deterministically determined by the setting α_i and the variable λ in each subsystem i, we have P^(i)(r^(i)|α_i,λ)=δ_r^(i),r^(i)(λ). In the Bell experiment described above, the possibility of decomposing the measured probability distribution P_M(r|α) into the form (<ref>) constitutes the locality condition for correlations between the M subsystems under study. Conversely, if the measured probability distribution P_M(r|α) cannot be written in the form (<ref>), the correlations present in the system are non-local. This decomposition corresponds to the most general locality condition, as defined by Bell. In practice, as we will see later, this condition is often formulated as a Bell inequality. In this paper, we consider an optical lattice with M sites, each containing zero, one or two two-level atoms. Each atom with two internal states a and b is an effective spin 1/2, and we can express local collective spins in the quantized form as ŝ_x^(j) = (ŝ_+^(j)+ŝ_-^(j))/2, ŝ_y^(j) = (ŝ_+^(j)-ŝ_-^(j))/(2i), ŝ^(j)_z=(â_j^†â_j - b̂_j^†b̂_j)/2 with ŝ_+^(j) = â_j^†b̂_j, ŝ_-^(j)=(ŝ_+^(j))^†. 
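For concreteness, a minimal numpy sketch (ours, not taken from the original text) of these on-site operators, with the local Fock space truncated at two atoms per mode. It makes explicit that the local spin observables have eigenvalues ±1/2 only on the singly occupied sector, while empty or doubly occupied sites yield other values, which are the cases treated below as the additional outcome r=0.

```python
import numpy as np

cut = 3                                        # keep 0, 1 or 2 atoms per bosonic mode
a = np.diag(np.sqrt(np.arange(1, cut)), 1)     # single-mode annihilation operator
I = np.eye(cut)
A, B = np.kron(a, I), np.kron(I, a)            # modes a and b on one lattice site
na, nb = A.T @ A, B.T @ B                      # number operators (A, B are real here)

s_plus = A.T @ B                               # s_+ = a^dag b
s_y = (s_plus - s_plus.conj().T) / 2j
s_z = (na - nb) / 2

ntot = np.diag(na + nb)                        # total occupation of the site
for n in (0, 1, 2):                            # spectrum of s_y in each occupation sector
    sel = np.isclose(ntot, n)
    print(n, np.round(np.linalg.eigvalsh(s_y[np.ix_(sel, sel)]), 2))
# prints: 0 [0.], 1 [-0.5  0.5], 2 [-1.  0.  1.]; only a singly occupied site can
# return +-1/2, and any other measurement result is recorded as the outcome r = 0.
```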
Within the Bell scenario, we consider in each site j, the measurement of two local collective spin observables ŝ^(j)_α on the j-th site, with α=0,1, each giving d=3 possible results r_0^(j)=0,-1/2,+1/2 ; r_1^(j)=0,-1/2,+1/2, as illustrated in Fig.<ref>-(a), where the result r=0 is assigned to any measurement result different from ±1/2. The probability distribution P_M(r|α) to obtain the results r given the settings α can be theoretically calculated, in terms of the density operator ρ̂ describing the system's state, as P_M(r|α)= tr[ρ̂⊗_j=1^MΠ̂_α_j,r^(j)] , where Π̂_α_j,r^(j)=±1/2 projects onto the eigensubspace of ŝ^(j)_α_j with eigenvalue r^(j)=±1/2, and Π̂_α_j,r^(j)=0=1̂-∑_r=±1/2Π̂_α_j,r projects onto the subspace perpendicular to both eigensubspaces of ŝ^(j)_α_j corresponding to the eigenvalues r^(j)=+1/2 and r^(j)=-1/2. In the next Section, we derive two Bell inequalities using this framework. The first one, relying on M-site correlations, is mainly useful for highly entangled states with a small number of lattice sites (see, e.g., Fig. <ref>-(b)). The second one, relying on two-site correlations, is more applicable to spin-squeezed states with a large number of lattice sites (see, e.g., Fig. <ref>-(c)). For both cases, we introduce a simplified toy model accounting for occupation defects that result in measurements outcomes r^(j)± 1/2. § TOY MODEL In the lattice system considered one starts from a product state |x⟩^⊗ N where each atom is in a coherent superposition of two internal states, i.e. coherent spin state (CSS) along the x direction. The entanglement between spins is dynamically obtained using many-body interactions by different protocols (see e.g., Sec. <ref> and Sec. <ref>) that can be effectively described, in some limit, by the OAT model Ĥ_ OAT = ħχŜ_z^2, where χ quantifies the strength of interactions and Ŝ_σ represents the collective spin operator of N two-level atoms with σ = x,y,z. In (<ref>) we have σ =z. The OAT model generates spin-squeezed states as well as non-Gaussian entangled states, including the GHZ state <cit.>. In order to study the role of occupation defects in the detection of Bell correlations, we introduce an approximate toy model where the internal and external degrees of freedom of the atoms are decoupled ρ̂ = ρ̂_ext⊗ρ̂_SS. Here ρ̂_ SS describes the internal degrees of freedom of the atoms including entanglement among the spins. In particular, we consider the OAT evolution (<ref>) ρ̂_SS=|ψ_t⟩⟨ψ_t| with |ψ_t⟩=e^-iĤ_ OATt/ħ|x⟩^⊗ N, and ρ̂_ ext describes the external degrees of freedom, which we assume to be factorized over the different lattice sites ρ̂_extern = ⊗_j=1^Mρ̂_j with ρ̂_j=p |1⟩_j _j⟨ 1| +(1-p) ρ̂^⊥_j, where |1⟩_j is the Fock state with one spin at site j and p is the probability of having a single occupancy state. The probability of having an empty, double, etc., occupied site, as described by ρ̂^⊥_j, is given by (1-p). In practice, in ρ̂^⊥_j we only consider empty sites or doubly occupied sites. In this framework, when measuring a local spin observable, we assign the value r^(j)=0 to any measurement result different from ±1/2. §.§ M-site Bell correlations We now derive a Bell inequality whose violation allows the detection of non-local correlations. 
For this, from the measurement outcomes in the two settings α=0,1, in each subsystem we introduce the complex quantity r_+^(j) = r_0^(j)+i r_1^(j) and we consider the product c_M = ∏_j=1^M r_+^(j). In a local realistic theory, where the probability distribution P_M(r|α) has the form of (<ref>) and (<ref>), the average of c_M over many Bell experiment realizations is given by ⟨ c_M⟩=∫ dλ q(λ)∏_j=1^M r_+^(j)(λ). By introducing the functions f(λ)=1 and g(λ)=∏_j=1^M r_+^(j)(λ), and the scalar product ⟨ f,g⟩=∫ dλ q(λ)f^*(λ)g(λ), the application of the Cauchy-Schwarz inequality to the functions f(λ) and g(λ) gives |⟨ f,g⟩|^2≤⟨ f,f⟩⟨ g,g⟩. This leads to the following Bell inequality ℰ_M≡|⟨ c_M⟩|^2≤∫ dλ q(λ)∏_j=1^M|r_+^(j)(λ)|^2≤ 2^-M, and ℰ_M is the M-site Bell correlator. We choose, in each site j, the settings α=0 and α=1 corresponding to ŝ_y^(j) and ŝ_z^(j) respectively <cit.>, to form ŝ_+^(j)=ŝ_y^(j)+i ŝ_z^(j). It is worth stressing here that the above setting is optimal for the GHZ state and is less effective for other entangled states generated by the OAT dynamics. The Bell inequality (<ref>) takes the form ℰ_M=|⟨ŝ_+^(1)ŝ_+^(2)...ŝ_+^(M)⟩|^2≤ 2^-M. When considering the toy model (<ref>), the only nonzero contribution to ℰ_M (<ref>) comes from the sites occupied by a single spin. We have ℰ_M^(p≤ 1) = p^2Mℰ_M^(p=1), where p^M represents the probability that the M sites are occupied with a single atom. In the non-Gaussian regime of the OAT dynamics (<ref>), a macroscopic superposition of coherent states is created at some particular instants χτ=π/q labelled by an even integer q=2, 4, 6, …, M, with q=2 for the GHZ state <cit.>. The M-site Bell correlator corresponding to these non-Gaussian spin states, for p=1, reads ℰ_M^(p=1)≈ 1/q^2, see Eq.(17) in <cit.>. By replacing in (<ref>), one can derive a critical value of p below which the M-site Bell correlations present in the states generated at χτ=π/q cannot be detected p_ c = q^1/M/√(2), i.e. for which ℰ_M^(p=p_ c) = 2^-M. We note that in the large M limit we have p_c≈ 1/√(2) for all q. §.§ Two-sites Bell correlations In the case of the two-sites Bell correlations, we introduce the vector M⃗ and the matrix C̃, whose elements, for α,β=0,1, are respectively given by M_α = ∑_j=1^M⟨ r^(j)_α⟩ C_αβ =∑_i,j≠ i⟨ r^(i)_αr^(j)_β⟩, C̃_αβ = C_αβ-M_α M_β . It can be shown <cit.> that for any input data M⃗ and C̃ compatible with an LV theory (<ref>), any 2× 2 positive semi-definite matrix A and any 2× 1 vector h⃗, the following Bell inequality holds L(A,h) = tr(AC̃)+h⃗·M⃗ + E_ max≥ 0, where the classical limit is E_ max=max_r⃗[r⃗ ^TAr⃗-h⃗·r⃗], with r⃗=(r_0,r_1)^T being the vector of all possible pairs of outcomes corresponding to the two settings α=0 and α=1 for a single subsystem. The positive semi-definite matrix A and the vector h⃗ that minimize L(A,h) can be found using a data-driven method as in Ref. <cit.>. We find that the Bell inequality  [Alternatively, the Bell inequality (<ref>) can be represented using the C matrix, namely L=C_00+C_11-C_01-C_10 -(M_0-M_1)^2-M_0-M_1 + M≥ 0.] L=C̃_00+C̃_11-C̃_01-C̃_10-M_0-M_1 + M≥ 0, first established in Ref. <cit.> for spin-squeezed states in the case of only two possible measurement outcomes, is optimal also in our case with the three possible outcomes (<ref>) and under occupation defects.
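As an illustration of how this inequality can be evaluated from recorded data, the sketch below (ours; the array shapes and the randomly generated outcomes are purely illustrative) estimates M_α, C̃_αβ and L from per-site results in the two settings, and shows that data produced by a local strategy respect L ≥ 0.

```python
import numpy as np

rng = np.random.default_rng(1)
M, shots = 20, 4000
outcomes = np.array([-0.5, 0.0, 0.5])          # the three possible local results

def witness(r0, r1):
    # L = C~_00 + C~_11 - C~_01 - C~_10 - M_0 - M_1 + M from outcome arrays (shots, M)
    M0, M1 = r0.sum(1).mean(), r1.sum(1).mean()
    C = lambda x, y: (x.sum(1) * y.sum(1) - (x * y).sum(1)).mean()   # sum_{i != j} <x_i y_j>
    Ct = lambda x, y, mx, my: C(x, y) - mx * my
    return (Ct(r0, r0, M0, M0) + Ct(r1, r1, M1, M1)
            - Ct(r0, r1, M0, M1) - Ct(r1, r0, M1, M0) - M0 - M1 + M)

# local-variable data: outcomes assigned site by site, independently of the settings
r0 = rng.choice(outcomes, size=(shots, M))
r1 = rng.choice(outcomes, size=(shots, M))
print(witness(r0, r1))   # non-negative (here close to M); suitable entangled states push L < 0
```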
For the case of two-sites Bell correlations, we choose α=0 and α=1 corresponding to the measurement, at each site j, of the spin components <cit.> ŝ^(j)_0 = ŝ^(j)_n⃗cos(θ)+ŝ^(j)_m⃗sin(θ), ŝ^(j)_1 = ŝ^(j)_n⃗cos(θ)-ŝ^(j)_m⃗sin(θ), where the unit vector n⃗ is in the spin direction and m⃗, perpendicular to the spin direction, is in the best squeezing direction. The choice of the two settings (<ref>)-(<ref>) is dictated by the geometry of the spin-squeezed states. Non-zero expectation values come from the averages calculated in the plane spanned by the n⃗ and m⃗ vectors. The averages (<ref>) and the correlations (<ref>) for α,β∈{0,1}, that form the Bell inequality (<ref>), take respectively the form M_α = ∑_j=1^M∑_r=±1/2r tr[ρ̂ Π̂_α_j,r] C̃_αβ =∑_i,j i^M(∑_r,s=±1/2rs tr[ρ̂ Π̂_α_i,r⊗Π̂_β_j,s])-M_αM_β. To account for the occupation defects, we evaluate M_α and C̃_αβ to determine the form of the Bell inequality using the toy model density matrix (<ref>) under the two scenarios investigated in the physical systems presented in Secs. <ref> and <ref> when N≤ M and N=M, respectively. §.§.§ Non-unit filling The first scenario, relevant to the system described in Sec. <ref>, assumes that N=pM≤ M. Under this condition, p<1 indicates a non-unit filling of the lattice whose sites are empty or singly occupied. The density matrix ρ̂^⊥_j in (<ref>), in this scenario, becomes ρ̂^⊥_j=|0⟩_j _j⟨ 0|. The averages (<ref>) and the correlations (<ref>) for α,β∈{0,1} are in this case M_α = ⟨Ŝ_n⃗⟩_ SScos(θ), C̃_αβ =N-p/N-1{ (ΔŜ_n⃗)^2_ SScos^2(θ)-(-1)^δ_αβ(ΔŜ_m⃗)^2_ SSsin^2(θ) . . -N/4cos[ 2 θ (1-δ_αβ) ] } +1-p/N-1⟨Ŝ_n⃗⟩^2cos^2(θ). The Bell inequality (<ref>) is given, for this scenario, by L^ (V)_θ/M =p(N-p)/N-1sin^2(θ)(4(ΔŜ_m⃗)^2_ SS/N-1) -2p⟨Ŝ_n⃗⟩_ SS/Ncos(θ)+1≥0. The minimization of L^ (V)_θ with respect to θ gives L^ (V)/M =1-p(N-1)/N-p⟨Ŝ_n⃗⟩^2_ SS/N[N-4(ΔŜ_m⃗)^2_ SS] -p(N-p)/N-1[1-4(ΔŜ_m⃗)^2_ SS/N], for an optimal θ such that cosθ_ opt=N-1/N-p⟨Ŝ_n⃗⟩_ SS/N-4(ΔŜ_m⃗)^2_ SS[Similar to the previous case, this solution should verify |cos(θ_ opt)|≤1, which introduces the restriction p≤ N-(N-1)|⟨Ŝ_n⃗⟩/[N-4(ΔŜ_m⃗)^2]|. Here, L^ (V) can be written in terms of the squeezing parameter ξ^2=N (ΔŜ_m⃗)^2_ SS/⟨Ŝ_n⃗⟩_ SS^2 and the normalized mean spin v=⟨Ŝ_n⃗⟩_ SS/(N/2) as L^ (V)/M= 1-p(N-1)/N-pv^2/4/1-ξ^2 v^2-p(N-p)/N-1(1-ξ^2v^2). ]. The equation (<ref>) is represented in Fig. <ref> as a function of time of the OAT dynamics for a given N and for different values of the occupation probability p. This reveals that for a given N and at each moment of time τ, there is a critical value of p below which, the two-site Bell correlations present in the system cannot be detected. This critical probability p=p_ c is the value for which, the following 3^ rd order equation holds p^3-2N(1-4(ΔŜ_m⃗)^2_ SS/N)p^2 +[N^2 + N-1 ] p +[4N (ΔŜ_m⃗)^2_ SS+(N-1)^2⟨Ŝ_n⃗⟩^2_ SS/N[N-4(ΔŜ_m⃗)^2_ SS]]p -N(N-1)=0. In the limit of large atom number N, the minimal value of (<ref>), over χτ, converges to the p-dependent constant L^ (V)_ min/M≈ 1-5/4p, and the corresponding critical value of the occupation probability tends to p_ c=4/5. §.§.§ Doubly occupied sites In this scenario, relevant to the system described in Sec. <ref>, we assume that N=M. Under this condition, p<1 indicates a redistribution of the N atoms among the M lattice sites resulting in the emergence of both empty and doubly occupied sites. The filling factor is one, f=1. Thus, the density matrix ρ̂^⊥_j in (<ref>) can be written as ρ̂^⊥_j=|0⟩_j _j⟨ 0|+|2⟩_j _j⟨ 2|/2. 
The calculation of the averages (<ref>) and the correlations (<ref>) for α,β∈{0,1} gives M_α = p ⟨Ŝ_n⃗⟩_ SScos(θ), C̃_αβ =p^2 (ΔŜ_n⃗)^2_ SScos^2(θ)-(-1)^δ_αβp^2 (ΔŜ_m⃗)^2_ SSsin^2(θ) -p^2N/4cos[2θ(1-δ_αβ)], where the subscript SS refers to an expectation calculated in the ρ̂_ SS state (<ref>), and the Bell inequality (<ref>) reads L_θ^ (VI)/M =p^2sin^2(θ)(4(ΔŜ_m⃗)^2_ SS/N-1) -2p⟨Ŝ_n⃗⟩_ SS/Ncos(θ)+1≥0. By minimizing L^ (VI)_θ with respect to θ we obtain L^ (VI)/M=1-⟨Ŝ_n⃗⟩^2_ SS/N[N-4(ΔŜ_m⃗)^2_ SS]-p^2[1-4(ΔŜ_m⃗)^2_ SS/N], for an optimal θ such that cosθ_ opt=1/p⟨Ŝ_n⃗⟩_ SS/N-4(ΔŜ_m⃗)^2_ SS[This solution should verify |cos(θ_ opt)|≤1 which introduces the restriction p≥ |⟨Ŝ_n⃗⟩/[N-4(ΔŜ_m⃗)^2]|.],[An alternative formula of L^ (VI) can be obtained by introducing the spin squeezing parameter ξ^2 and the normalized mean spin v as L^ (VI)/M= 1-v^2/4/1-ξ^2 v^2 - p^2(1-ξ^2v^2). ]. The equation (<ref>) is represented in Fig. <ref>-(a) as a function of time τ of the OAT dynamics for a given N and for different values of p. We note that for a given N and at each moment of time, there is a critical value of p below which, the two-site Bell correlations present in the system cannot be detected, from (<ref>) we obtain p_ c = √(N[N-4(ΔŜ_m⃗)^2_ SS]-⟨Ŝ_n⃗⟩^2_ SS)/N-4(ΔŜ_m⃗)^2_ SS. In the limit of large atom number N, the minimal value of (<ref>), over χ t, approaches a p-dependent constant value L^ (VI)_ min/M≈3/4-p^2, and the corresponding critical probability p_ c approaches p_ c=√(3)/2. A lower value of p can be obtained with a Bell inequality including onsite and two-site correlations up to the fourth-order. In the limit of large N, this leads to a critical occupation probability of p_ c=1/2 <cit.>. § APPLICATION WITH ULTRA-COLD ATOMS IN OPTICAL LATTICES We test the predictions of our toy model using a system consisting of N ultra-cold bosonic atoms confined in an optical lattice. We focus on rubidium-87 atoms occupying two internal states, labelled a and b, and loaded into an optical lattice potential, akin to recent experiments employing quantum gas microscopes <cit.>. The optical lattice comprises M lattice sites and is considered to be in one dimension. The system, in the lowest energy band, is conveniently considered in the Wannier functions basis <cit.>. In the tight-binding limit, where the lattice potential exceeds the recoil energy E_R=ħ^2k^2/(2m), the system can be conveniently described by the Bose-Hubbard model, ℋ̂_ BH = - t ∑_i, j=i± 1(â_i^†â_j + b̂_i^†b̂_j) + U_aa/2∑_i=1^M n̂^a_i (n̂^a_i -1) + U_bb/2∑_i=1^M n̂^b_i (n̂^b_i -1) + U_ab∑_i=1^M n̂^a_in̂^b_i , where t and U_σσ' are the tunneling and interaction parameters. â_i (b̂_i) is the annihilation operator of an atom in internal state a (b) in the i-th site of the lattice, and n̂^a_i=â_i^†â_i, n̂^b_i=b̂_i^†b̂_i are the corresponding number operators. We explore two protocols for the dynamical generation of Bell correlations within this system as detailed in Sections <ref> and <ref>, both resulting in Mott entangled states where atoms exhibit spin entanglement and are evenly distributed across the lattice. In both cases, the initial state ρ̂_a with all atoms in the internal state |a⟩ is turned into a coherent superposition of |a⟩ and |b⟩ by applying a π/2-pulse, which is equivalent to ρ̂_ ini = e^-i Ŝ_y π/2 ρ̂_a e^i Ŝ_y π/2 . The two protocols differ in their specific mechanisms for inducing entanglement in the system and hence in the relevant source of occupation defects. 
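For reference, a minimal QuTiP sketch (ours; the lattice size, local cutoff and parameter values are placeholders) of how the two-component Bose-Hubbard Hamiltonian (<ref>) can be assembled for a tiny system before exact diagonalization.

```python
from qutip import destroy, qeye, tensor

M_sites, nmax = 2, 2                      # two sites, at most 2 atoms per mode per site
t = 1.0
U = 24.4 * t                              # U_aa ~ U_bb ~ U_ab = U, the value quoted later in the text

def embed(op, site, mode):
    # single-mode operator acting at (site, mode) in the full tensor product; mode 0 = a, 1 = b
    ops = [qeye(nmax + 1)] * (2 * M_sites)
    ops[2 * site + mode] = op
    return tensor(ops)

a = [embed(destroy(nmax + 1), j, 0) for j in range(M_sites)]
b = [embed(destroy(nmax + 1), j, 1) for j in range(M_sites)]

H = 0
for j in range(M_sites - 1):              # nearest-neighbour hopping, open chain
    hop = a[j].dag() * a[j + 1] + b[j].dag() * b[j + 1]
    H += -t * (hop + hop.dag())
for j in range(M_sites):
    na, nb = a[j].dag() * a[j], b[j].dag() * b[j]
    H += 0.5 * U * na * (na - 1) + 0.5 * U * nb * (nb - 1) + U * na * nb

print(H.eigenenergies(eigvals=4))         # low-lying spectrum of the two-component model
```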
In the first protocol, Bell correlations are generated directly in the strongly interacting Mott regime U_σσ'≫ t where individual spins interact via spin-exchange interactions between neighbouring spins (<ref>). These interactions allow for the generation of entanglement within the system either through weak anisotropy or by coupling with an inhomogeneous field. The mechanism for generating entanglement was explained and demonstrated in prior works such as <cit.>. In the second protocol, Bell correlations are induced by atom-atom interactions in the superfluid regime t≫ U_σσ' with all-to-all individual spin connections. Subsequently, these correlations are transferred to the Mott phase by an adiabatic increase of the optical lattice depth, a process described in detail in previous studies <cit.>. The final state of both protocols allows measuring spin components at a specific lattice site. In the Bell scenario, the outcomes of local measurements across all lattice sites can be collected, and used for the detection of Bell correlations using inequalities (<ref>) and (<ref>). In the ideal realization of the Mott phase protocol, when ρ̂_a is the ground state where each lattice site hosts precisely one atom, two distinct measurement outcomes are possible. However, imperfections introduce non-unit filling throughout the lattice, leading to additional measurement outcomes, such as when a site is either empty or double occupied. These failures arise from imperfect preparation of initial state ρ̂_a or to non-zero temperature. In the following sections, we consider the role of imperfections and we establish a connection between the factor p of the toy model (<ref>) and the filling factor f=N/M or the initial temperature T for these two protocols. § ENTANGLEMENT GENERATION IN THE MOTT PHASE In the strongly interacting limit U_σ, σ'≫ t and in the ground state manifold with at most one atom per lattice site, the system (<ref>) is approximately described by the t–J model Ĥ_t-J=- t ∑_i, j=i± 1P̂_0 (â_i^†â_j + b̂_i^†b̂_j)P̂_0 + Ĥ_ XXZ , where P̂_0 is a projector operator over the manifold of at most single occupancy states, and where Ĥ_ XXZ= - J ∑_j=1^M-1( ŝ_x^(j)ŝ_x^(j+1) + ŝ_y^(j)ŝ_y^(j+1) + Δŝ_z^(j)ŝ_z^(j+1) - 1/4), is the Heisenberg XXZ model with the spin-exchange interaction parameter J=4 t^2/U_ab and the anisotropy parameter Δ= U_ab/U_aa + U_ab/U_bb - 1 <cit.>. When Δ=1 the Hamiltonian takes the form of the isotropic Heisenberg XXX model and it is a natural case for rubidium-87 where U_aa≈ U_bb≈ U_ab. The anisotropy parameter Δ can be tuned by changing the values of interaction strengths using either Feshbach resonances or by shifting optical lattice potentials for states a and b. The collective spin operators are just a summation over the individual ones, Ŝ_σ = ∑_j=1^M ŝ^(j)_σ for σ = x,y,z, ±. The tunnelling term in (<ref>) is relevant whenever the filling factor f=N/M is not one, meaning there are holes (empty sites) in the system and N≤ M. Let us consider the case of zero temperature when the initial state ρ̂_a is in the Mott regime. In the ideal case, we have ρ̂_a = ⊗_j=1^M |a⟩⟨ a|_j, indicating that at each lattice site there is an atom in the internal state |a⟩. However, in the presence of holes, the on-site state can be |0⟩⟨ 0|_j if it is not occupied. In the given experimental realisation, the number of holes can be arbitrary as well as their positions. 
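The spin part of (<ref>) is easy to write down explicitly. Below is a short numpy sketch (ours; the chain length and the anisotropy value are illustrative) that builds Ĥ_XXZ for a small open chain and verifies that the total Ŝ_z is conserved.

```python
import numpy as np
from functools import reduce

L = 8                                         # length of the open chain
t_hop, U_ab = 1.0, 24.4                       # illustrative values, J = 4 t^2 / U_ab
J = 4 * t_hop**2 / U_ab
Delta = 1.05                                  # weak anisotropy, Delta != 1 (method A)

sx = np.array([[0, 0.5], [0.5, 0]], dtype=complex)
sy = np.array([[0, -0.5j], [0.5j, 0]])
sz = np.diag([0.5, -0.5]).astype(complex)
I2 = np.eye(2, dtype=complex)

def embed(ops_by_site):
    # tensor product over the chain; ops_by_site maps site index -> 2x2 operator
    return reduce(np.kron, [ops_by_site.get(j, I2) for j in range(L)])

H = np.zeros((2**L, 2**L), dtype=complex)
for j in range(L - 1):                        # open boundary conditions, as in the text
    H -= J * (embed({j: sx, j + 1: sx}) + embed({j: sy, j + 1: sy})
              + Delta * embed({j: sz, j + 1: sz}) - 0.25 * np.eye(2**L))

Sz = sum(embed({j: sz}) for j in range(L))    # total S_z
print(np.allclose(H @ Sz, Sz @ H))            # True: the magnetization sectors decouple
```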
In our simulations, we evolve single realizations in which each site is initially either empty or singly occupied, and then average over many realizations to evaluate expectation values. Specifically, we generate a random number x_j∈ (0,1] for each lattice site, obtaining |Ψ_x⟩_j = θ̅(f-x_j) |a⟩_j + ( 1-θ̅(f-x_j) ) |0⟩_j, where θ̅(x) is the Heaviside step function. The density matrix ρ̂_a describing N_r realizations is obtained through a simple averaging, ρ̂_a= 1/N_r∑_r =1^N_r[ ⊗_j=1^M |Ψ_x⟩⟨Ψ_x |_j ], over the set of on-site random numbers r={x_j^1, x_j^2, ⋯, x_j^N_r} for all j ∈{1,...,M}. One can show that, in the continuous limit, we obtain ρ̂_a = ⊗_j=1^M ( f |a⟩⟨ a|_j + (1-f) |0⟩⟨ 0|_j ), given that all the sites are equivalent. Therefore, the filling factor f can be identified with the parameter p of the external state of the toy model (<ref>) provided that ρ̂_j^⊥=|0⟩⟨ 0|_j. Owing to the presence of holes, the average number of atoms N differs from the number of lattice sites M when f≠ 1, namely N=f M. It is worth noting here that the density matrix ρ_a in (<ref>) can also be cast in the following way ρ̂_a = f^M ⊗_j=1^M| a ⟩⟨ a |_j +f^M ( 1-f/f)^1( |0 ⟩⟨ 0 |_1 ⊗_j=1^M-1 | a ⟩⟨ a |_j + 𝒫_1) +f^M ( 1-f/f)^2( |0 ⟩⟨ 0 |_1 ⊗ |0 ⟩⟨ 0 |_2 ⊗_j=1^M-2 | a ⟩⟨ a |_j + 𝒫_2) + ⋯, where 𝒫_σ are all other configurations with σ holes. The state (<ref>) can be written using the shorter notation ρ̂_a = f^M [ ⊗_j=1^M | a ⟩⟨ a |_j + ∑_σ=1^M( 1-f/f)^σΣ_σ[ ρ̂_σ] ], where ρ̂_σ is the product state of a given configuration having σ holes and M-σ occupied sites, while Σ_σ represents summation over permutations of all possible configurations. Having the above expression, the initial spin coherent state (<ref>) can be expressed as follows: ρ̂_ ini = f^M [ ρ̂_ ini^(0) + ∑_σ=1^M( 1-f/f)^σΣ_σ[ ρ̂_ ini^(σ)] ], where the rotations act on the density matrix corresponding to specific configurations of σ holes ρ̂_ ini^(σ) = e^-i Ŝ_y π/2 ρ̂_σ e^i Ŝ_y π/2 , and ρ̂_ ini^(0)=e^-i Ŝ_y π/2 ( ⊗_j=1^M | a ⟩⟨ a |_j ) e^i Ŝ_y π/2. Therefore, any expectation value of an operator Ô can be evaluated as ⟨Ô⟩ = f^M [ Tr[ρ̂_ ini^(0)Ô] + ∑_σ=1^M ( 1-f/f)^σ⟨Ô⟩_σ], where ⟨Ô⟩_σ = Σ_σ Tr[ρ̂_ ini^(σ)Ô], and where the summation runs over all configurations of σ holes on M lattice sites. In Subsections <ref> and <ref>, we employ the above-discussed approach for the numerical evaluation of Bell correlations and the critical value of the filling factor allowing for their detection. Before that, however, let us discuss the mechanism responsible for generating entanglement within the system in the Mott phase. §.§ Effective microscopic description To generate entanglement from the initial spin coherent states (<ref>) driven by the t–J Hamiltonian (<ref>) we consider two methods. Both of them were thoroughly investigated in <cit.>. The first (A) uses a weak anisotropy Δ≠ 1 in (<ref>), such that Δ≪ 3 - 2 cos(π/M) and Δ≫ 2cos(π/M) - 1, to generate entanglement in the system. Under an ideal scenario with N=M, the system can be represented by a single spin chain whose dynamics is described by the OAT model <cit.>. The presence of holes divides the spin chain into partial chains separated by holes whose configurations are included in (<ref>). For instance, if a single hole is located somewhere in the middle of the chain (not at the borders), the system can be effectively viewed as two partial chains. The dynamics of the partial chains separate only when the positions of the holes are fixed.
The microscopic model describing the dynamics of each specific configuration involved in (<ref>) is effectively approximated by the OAT model when the positions of holes are pinned to the lattice. The model describing each of n partial chains for a given configuration of σ holes reads Ĥ^(0)_ eff, n = -χ_n^(0)Ŝ_z,n^2 , with χ_n^(0)=J (Δ-1)/(L_n-1), where L_n is the number of atoms consisting of the partial chain. The second method (B) uses a weak inhomogeneous magnetic field Ĥ_B=∑_j β_j ŝ^(j)_z, with β_j ≪ J [cos (π/N ) - 1], for the isotropic case Δ=1 to generate entanglement in the system. In this case, when σ holes are pinned into the lattice, the effective model describing each of the n partial chains is described effectively by the following Hamiltonian Ĥ_ eff, n = χ_ nŜ_z,n^2 + v_nŜ_z,n, where we omitted constant energy terms, and where χ_n = 1/L_n-1∑_q=1^L_n-1|c^(q)_n|^2/E^(q)_n, v_n = 1/L_n∑_l=l_n^l_n+L_n-1β_l, c^(q)_n = √(2)/L_n∑_l=l_n^l_n+L_n-1 p_l^(q,n) (β_l-v_n), with l_n being the location of the first spin in the partial chain and E^(q)_n = J (1-cos(π q / L_n)). The effective models for the (A) and (B) methods are derived and explained in detail in <cit.>. Their validity for describing spin-squeezing generation was also demonstrated. Therefore, one can employ the effective models based on the OAT dynamics for partial chains when evaluating expectation values as given by (<ref>) when considering fixed positions of holes (effectively t=0 in the t–J model (<ref>)). We employ them to calculate the upper bound on the filling factor required for the Bell correlations detection as demonstrated in <ref>. However, the effective models become inaccurate when the tunnelling of holes occurs. In such a case, we perform full numerical many-body calculations by the exact diagonalization method of the t–J model (<ref>) when demonstrating Bell correlations and validity of the Toy model. In the next subsections, we demonstrate the generation of Bell correlations in the system using scenarios A and B. In our numerical simulations, we consider open boundary conditions <cit.> and use the parameters as in the recent experiment of A. Rubio-Abadal et al <cit.> with ^87Rb atoms, lattice spacing d=532 nm, and inter- and intraspecies interactions U_aa∼ U_bb∼ U_ab=U <cit.> with U=24.4t. §.§ M-site Bell correlations We numerically evaluate the many-body Bell correlator ℰ_M = |⟨ŝ_1,+…ŝ_M,+⟩ |^2 as defined in (<ref>) for the initial spin coherent state given by (<ref>). In the upper panel of Fig. <ref> we show the evolution of the M-site Bell correlator ℰ_M with M=12 and for the two proposed methods, A and B. The small shift in time scale and lower magnitude of the correlator maxima in the anisotropic case result from its effective model being approximated to the first order <cit.>. The only relevant contribution comes from the part of the system state describing all sites occupied, as already discussed in Section <ref>. The tunnelling of holes cannot change the value of this correlator. In the lower panel of Fig. <ref> we check the scaling of the correlator with the filling factor f for relevant instances in time using a logarithmic scale. We see good agreement with the scaling of the toy model result in Eq. (<ref>) when identifying p with f. Likewise, we see the GHZ state result crossing the classical limit at the value obtained in Eq. (<ref>) for q=2, identified by the edge of the grey area. 
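As an independent cross-check of this scaling (our sketch, not the full many-body simulation behind the figure), one can apply the OAT phases to N spins directly, evaluate ℰ_M at the GHZ time χτ=π/2, and compare it with the classical bound and the critical single-occupancy probability of the toy model.

```python
import numpy as np
from functools import reduce

N = 10                                        # sites, one atom each (p = 1); N even
sy = np.array([[0, -0.5j], [0.5j, 0]])
sz = np.diag([0.5, -0.5])
s_plus = sy + 1j * sz                         # s_y + i s_z, the operator entering E_M

dim = 2 ** N
psi0 = np.full(dim, 2 ** (-N / 2), dtype=complex)                     # coherent state along +x
m = np.array([(N - 2 * bin(k).count("1")) / 2 for k in range(dim)])   # S_z eigenvalues
O = reduce(np.kron, [s_plus] * N)             # product of the on-site s_+ operators

q = 2                                         # chi*tau = pi/2: the GHZ state
psi = np.exp(-1j * (np.pi / q) * m**2) * psi0 # OAT phases exp(-i chi*t m^2)
EM = abs(np.vdot(psi, O @ psi)) ** 2
print(f"E_M = {EM:.4f} (toy model 1/q^2 = {1/q**2}), classical bound 2^-M = {2.0**-N:.2e}")

p_c = q ** (1 / N) / np.sqrt(2)               # critical single-occupancy probability
print(f"p_c = {p_c:.3f}: E_M is damped by p**(2*M) and crosses 2^-M at this filling")
```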
§.§ Two-sites Bell correlations In the short-time dynamics, where spin squeezing is generated, the two-site Bell correlator L given by (<ref>) is more relevant. In Fig. <ref> we demonstrate the variation of the two-sites Bell correlations generated using method A (interaction anisotropy, left column) and method B (inhomogeneous magnetic field, right column) for different filling factors and effective tunnelling rates t/J. The color bar indicates the magnitude of the effective tunnelling when holes are present in the system, relative to the relevant energy scale J. While t≠ 0 in all instances, so as to have a non-zero value of J, for the simulations labelled by an effective tunnelling t/J=0 we assume that the positions of the spins are fixed. Shaded areas indicate the upper and lower boundaries set by the results of holes fixed in place and holes moving infinitely fast, respectively. The upper bound is calculated numerically for the case of fixed positions of holes by using effective models as described in <cit.>. The lower bound is given by the toy model and the corresponding Bell inequality (<ref>), where the expectation values ⟨·⟩_ SS are replaced with the ones given by the corresponding OAT model. We also observe that the generation of Bell correlations requires a larger filling factor in the inhomogeneous magnetic field scenario, both when the movement of holes is allowed and when it is not. Nevertheless, the numerical results demonstrate the effectiveness of the proposed toy model (<ref>) in the estimation of the lower bound for the detection of two-sites Bell correlations. Note that the microscopic effective models (<ref>) and (<ref>) differ by the presence of a linear term, such that their results are bounded by different limits, as illustrated in Fig. <ref>. The critical value for the lower bound in this figure is obtained by setting (<ref>) to zero and solving the corresponding cubic equation, which has only one real root. The anisotropy scenario A, while less accurately described by the corresponding effective OAT model (<ref>) as explained in <cit.>, reaches the lower-bound results predicted by the toy model (<ref>) when tunnelling processes occur. This is much harder to achieve in the inhomogeneous magnetic field scenario B, since the linear term in (<ref>) changes with each configuration of holes and makes the dynamics largely incoherent. This is in stark contrast with the M-site Bell correlator presented in Fig. <ref>, where we find the lower bound set by the toy model (<ref>) to be equally accurate. § ENTANGLEMENT GENERATION IN THE SUPERFLUID PHASE We consider here the protocol where the quantum correlations are generated using atom-atom interactions within the superfluid phase and then stored in the Mott phase through an adiabatic increase of the lattice height <cit.>. In this protocol, initially, all atoms occupy the internal state a, and at zero temperature, they are in the ground state |ψ^(a)_0⟩ of the Bose-Hubbard Hamiltonian in the superfluid regime. Subsequently, a π/2 pulse is applied to put the atoms in a coherent spin state where the collective spin is along the x-axis.
During the dynamics, the lattice height V_0 linearly increases as V_0(τ)=V_ ini+(V_ fin-V_ ini)τ/τ_ramp, such that, at τ=τ_ramp, for an adiabatic evolution and initially zero temperature, the system reaches the Mott-squeezed state where the atoms are spatially distributed with one atom per lattice site and show entanglement in their internal degree of freedom (from a spin-squeezed state up to a GHZ state, according to the chosen ramp duration τ_ramp). This corresponds to a state of the form (<ref>) with ρ̂_ ext=⊗_j=1^M|1⟩_j _j⟨ 1|. At finite temperature T, the system exhibits initial thermal fluctuations described by the density operator ρ̂_T=1/Z∑_n e^-E_n^(0)/k_ BT|ψ_n^(a)⟩⟨ψ_n^(a)|, where k_ B is the Boltzmann constant, E_n^(0) are the eigenenergies of the initial Hamiltonian (in the superfluid phase), all atoms are in the internal mode a so |ψ_n^(a)⟩ are the corresponding eigenstates, and Z=∑_ne^-E_n^(0)/k_ BT is the normalization constant. These initial thermal fluctuations lead, at the end of the ramp, to a non-zero probability of having holes and double occupations of the lattice sites. In this section, we explore the effect of these initial thermal fluctuations on the detection of the M-site and two-sites non-local Bell correlations present in the final state. We first note that numerical observations performed in small 1D lattices reveal that when U_aa=U_bb≳ U_ab> 0, which is relevant to our purposes, the external dynamics is weakly affected by the internal dynamics up to the Mott transition. Consequently, we estimate the occupation statistics of different lattice sites in the final state by restricting the Bose-Hubbard model to a single internal state a. In this case, the spectrum of the Hamiltonian in the deep Mott phase with t→ 0 is simple: a non-degenerate ground state |ψ_0^ MI⟩, with one atom per lattice site, and gapped excited states |ψ_n^ MI⟩ showing holes and doubly occupied sites. The ground state |ψ_0^ MI⟩ is obtained, from the initial ground superfluid state |ψ_0^(a)⟩, by a unitary evolution with the time-dependent Bose-Hubbard Hamiltonian, |ψ_0^ MI⟩=Û|ψ_0^(a)⟩. §.§ M-site Bell correlations Since the excited states |ψ_n^ MI⟩ have at least one hole and one doubly occupied lattice site, they give no contribution to the M-site Bell correlator (<ref>), whose value at non-zero temperature is given by ℰ_M^(T≠ 0)=P_0^2ℰ_M^(T=0), where P_0=e^-E_0^(0)/k_ BT/Z represents the probability of occupying the ground state in (<ref>)[It is important to note that in general, the final external configuration of the system does not consist of a factorized state over lattice sites, as in (<ref>), in particular the probability P_0 is not of the form P_0=p^M.]. Fig. <ref>-(a) illustrates this relationship for N=M=4 over varying temperatures, revealing the existence of a critical temperature T_ c below which M-site Bell correlations are detectable. In the weakly interacting regime, the probability P_0 can be analytically determined using the Bogoliubov theory, where, in the initial superfluid regime, when all the atoms occupy the internal state a, the system can be approximately described by the Hamiltonian Ĥ_ Bog=E_0+∑_j≠ 0ħω_jd̂_j^†d̂_j. In equation (<ref>), E_0 is the ground state energy of the system, d̂_j is the annihilation operator of a Bogoliubov quasi-particle associated with the quasi-momentum q_j, and ħω_j is given in terms of the tunneling parameter t and the interaction parameter U=U_aa by ħω_j=4t√(sin^2(π j/N)(sin^2(π j/N)+(U/2t)(N/M))).
At non-zero temperature T, the probability P_0 of being in the ground state is given by P_0=∏_j≠0p_j(n_j=0), where, p_j(n_j=n)=e^-nħω_j/k_ BT/Z_j with Z_j=(1-e^-ħω_j/k_ BT)^-1. By replacing in (<ref>), one obtains P_0=∏_j≠ 0(1-e^-ħω_j/k_ BT). The critical temperature T_ c, below which Bell correlations in states generated at times χτ=π/q of OAT dynamics are detectable, can be determined by setting P_0 (<ref>) equal to the critical probability p_ c^M, where p_ c is given by (<ref>). Fig. <ref>-(b), illustrates this critical temperature for a GHZ state (i.e. q=2) as a function of the atom number N. §.§ Two-sites Bell correlations In large systems, studying M-site Bell correlations can be challenging both theoretically and experimentally. Therefore, for such systems, we focus on two-site Bell correlations. In this context, we assume that the system is brought to the Mott phase through a transformation that leaves the system in thermal equilibrium at each moment and conserves the entropy. Under such conditions, one can explicitly write an approximate external density matrix ρ̂_ ext of the system in the final Mott phase and thus analytically determine the probability p(T_ i) of having exactly one atom in a given lattice site as a function of the initial temperature T_ i. We first calculate the initial entropy S_ SF as a function of T_ i. We then determine the final temperature T_ f of the system in Mott using entropy conservation <cit.> S_ SF(T_ i)|_V_0=V_ ini=S_ Mott(T_ f)|_V_0=V_ fin. In the weakly interacting regime, in the superfluid phase, the entropy can be calculated using the Bogoliubov theory. Indeed, By using the Bogoliubov spectrum (<ref>), the system partition function at temperature T can be written as (non interacting bosons) Z_ Bog(T)=∏_j(1-e^-ħω_j/k_ BT)^-1. The free energy defined as F_ Bog=-k_ BTln Z_ Bog is given by F_ Bog(T)=k_ BT∑_jln(1-e^-ħω_j/k_ BT). We thus deduce the entropy of the system as S_ SF^( Bog)(T)/k_ B≡-∂_T F/k_ B =-∑_j[ln(1-e^-ħω_j/k_ BT)+ħω_j/k_ BTe^-ħω_j/k_ BT/1-e^-ħω_j/k_ BT]. We now calculate the entropy of the system in the Mott phase by considering the limit t=0 and small temperatures. In this case, the partition function, when N=M, can be estimated using a two particles-holes excitation approximation, where, we take into account only the states with at most two holes and two doubly occupied lattice sites, with U=U_ aa Z_ Mott(T) =1+M(M-1)e^-U/k_ BT +M(M-1)(M-2)(M-3)e^-2U/k_ BT. By using the free energy, in the Mott phase, one can obtain the system entropy S_ Mott(T)/k_ B =ln Z_ Mott+1/Z_ MottU/k_ BT[M(M-1)e^-U/k_ BT. .+M(M-1)(M-2)(M-3)e^-2U/k_ BT]. After determining T_ f using (<ref>), we approximate the external density matrix of the system at the end of the ramp as ρ̂_ ext ≈1/Z_ Mott(T_ f)[|ψ_0^0 h⟩⟨ψ_0^0 h| . . +e^-U/k_ BT_ f∑_j=1^M∑_k≠ j^M|ψ_jk^1 h⟩⟨ψ_jk^1 h|. .+e^-2U/k_ BT_ f∑_j=1^M∑_k≠ j^M∑_l≠ j,k^M∑_m≠ j,k,l^M|ψ_jklm^2 h⟩⟨ψ_jklm^2 h|], where |ψ_0^0 h⟩ is the state with exactly one atom per lattice site (zero holes), |ψ_jk^1 h⟩ is the state with only one hole in the j^ th site and a single double occupancy in the k^ th site and |ψ_jklm^2 h⟩ is the state with two holes in the j^ th and k^ th sites respectively and two double occupancies in the l^ th and m^ th sites respectively. This approximation enables us to analytically calculate, as a function of the initial temperature, the probability p of having a Fock state with one atom in a given lattice site p ≡ tr[|1⟩_i_i⟨1| ρ̂_ ext] =1/Z_ Mott(T_ f)[1+(M-1)(M-2)e^-U/k_ BT_ f. 
.+(M-1)(M-2)(M-3)(M-4)e^-2U/k_ BT_ f] In numerical simulations of the exact dynamics, p can be calculated as the trace of the projection of the density matrix of the system on the eigenspace of the observable n̂_i associated with the eigenvalue n_i=1. Equation (<ref>) is represented in Fig. <ref>-(a) as a function of the initial temperature, compared to exact results using both one- and two-component Bose-Hubbard Hamiltonian. This reveals that at sufficiently low temperatures in small systems, the probability p(T_ i) can be accurately approximated using (<ref>) and the entropy conservation condition (<ref>). By equating p(T_ i) with the critical probability (<ref>) from the toy model, one can determine an upper bound, dependent on the ramp duration τ_ramp, for the initial temperature T_ c above which two-sites non-local Bell correlations cannot be detected. Fig. <ref>-(b) shows the critical temperature associated to the minimal (critical) p, over the ramp duration τ_ramp, as a function of the atom number N  [For larger atoms number, beyond the range of N presented in the figure, the two-hole approximation becomes invalid.]. § SUMMARY AND CONCLUSIONS In this paper, we considered the detection of Bell correlations using two-level ultra-cold bosonic atoms loaded into optical lattices. We focus on identifying Bell violations for dynamically generated entangled states when imperfections related to non-unit filling appear. Our proposed toy model accounts for imperfections that provide non-unit filling per site in the preparation and measurement stages through the probability p of a site being single-occupied. The M-site Bell correlator shows a violation for this model in the limit of a large system, when p>p_c≈ 1/√(2). On the other hand, the two-site Bell correlator relying on collective spin measurements shows the bounds p_c=4/5 and p_c=√(3)/2 when N=pM<M and N=M, respectively. We illustrate these general results under two practical methods of entanglement generation varied by the source of imperfections: vacant sites due to initial non unit filling, and vacant and multiply occupied sites due to non-zero temperature of the initial state. In the presence of holes, we study the generation of entanglement in the Mott insulating via interaction anisotropy and inhomogeneous field demonstrating the validity of the toy model predictions for the critical value of the filling factor allowing generation of Bell correlations. For non-zero temperatures, we study the generation of entanglement in the superfluid regime produced via atom-atom interactions transferred to the Mott regime via adiabatic rising of the lattice height. We find a connection between the probability p of the toy model and the effective temperature T of the initial state that drives the system from the unit filling regime under study. We identified the critical value of the initial temperature allowing violation of Bell inequalities. Our results reveal the fundamental limits on detecting Bell correlations in the lattice system due to occupation defects. The bounds identified in specific protocols fall within the range of typical experimental realizations, suggesting the feasibility of detecting Bell correlations. § ACKNOWLEDGMENTS We gratefully acknowledge discussions with Irénée Frérot. This work was supported by the Polish National Science Center DEC-2019/35/O/ST2/01873 and DEC-2023/48/Q/ST2/00087. 
T.H.Y acknowledges support from the Polish National Agency for Academic Exchange through the Foreign Doctoral Internship Grant NAWA Preludium BIS 1 No. PPN/STA/2021/1/00080. A part of the computations was done at the Centre of Informatics Tricity Academic Supercomputer & Network.
http://arxiv.org/abs/2409.02781v1
20240904145729
Uncountable Hyperfiniteness and The Random Ratio Ergodic Theorem
[ "Nachi Avraham-Re'em", "George Peterzil" ]
math.DS
[ "math.DS", "math.LO", "37A20, 28D15, 37A40, 03E15, 22D40, 22F10" ]
§ ABSTRACT We show that the orbit equivalence relation of a free action of a locally compact group is hyperfinite (à la Connes–Feldman–Weiss) precisely when it is hypercompact. This implies an uncountable version of the Ornstein–Weiss Theorem and that every locally compact group admitting a hypercompact probability preserving free action is amenable. We also establish an uncountable version of Danilenko's Random Ratio Ergodic Theorem. From this we deduce the Hopf dichotomy for many nonsingular Bernoulli actions. § INTRODUCTION A well-studied property of countable equivalence relations is the property of being hyperfinite, which means that it is the increasing union of countably many finite equivalence relations, where the finiteness refers to the cardinality of the classes. Of particular interest are Borel actions of countable groups whose associated orbit equivalence relation (henceforth OER) is hyperfinite. One of the milestones in this theory is the Ornstein–Weiss Theorem <cit.>, by which OERs arising from nonsingular free actions of amenable groups are measure hyperfinite, i.e. hyperfinite up to a zero measure set. The celebrated Connes–Feldman–Weiss Theorem generalized this to amenable equivalence relations (not necessarily OERs) <cit.>. The usual notion of hyperfiniteness applies to countable equivalence relations only, thus the aforementioned theorems deal with countable groups. In dealing with general locally compact second countable (henceforth lcsc) groups, Connes, Feldman & Weiss used cross sections <cit.>,[The terminology in the field has evolved in an inconsistent way. What is referred to by Connes, Feldman & Weiss as transversal has evolved to what is nowadays usually called cross section, meaning a set which intersects every class countably many times, while transversal nowadays refers to a set intersecting every class exactly once.] which will be introduced below in Section <ref>. Thus, they called an uncountable OER hyperfinite if its restriction to every lacunary cocompact cross section is hyperfinite.[See Remark <ref> below regarding this definition.] In order to avoid confusion, we will call this property sectional-hyperfinite (see Definition <ref> below). Thus, the Connes–Feldman–Weiss Theorem asserts that every OER of an lcsc amenable group is measure-sectional-hyperfinite <cit.>. In this work we focus on OERs that arise from Borel free actions of lcsc groups. The classes of such an OER are typically uncountable, but using the freeness of the action there is a natural way to define compactness and hypercompactness by pushing forward the topology from the acting group to the orbits. A precise definition will be presented in Section <ref>.
With this natural concept defined, we have the following characterization which is purely in the Borel category: Let G be an lcsc group and X a free Borel G-space. Then E_G^X is sectional-hyperfinite if and only if it is hypercompact. In <cit.>, Connes, Feldman & Weiss have proved that in the presence of a measure, being sectional-hyperfinite is independent on the the choice of the cross section up to a null set. From Theorem <ref> we can show that this is true already in the Borel level for orbital equivalence relations of free actions: Let G be an lcsc group and X a free Borel G-space. If E_G^X restricted to one lacunary cocompact cross section is hyperfinite, then its restriction to every lacunary cocompact cross section is hyperfinite. When adding a measure, from the Connes–Feldman–Weiss Theorem we obtain an uncountable version of the Ornstein–Weiss Theorem: Every nonsingular free action of an lcsc amenable group is measure-hypercompact. We continue to study the asymptotic invariance of hypercompact OERs, presented in Section <ref>, and using it to get an uncountable version of a well-known fact in countable groups (see <cit.> with the Ornstein–Weiss Theorem): An lcsc group G is amenable if and only if one (hence every) of its probability preserving free G-space is measure-hypercompact. Hyperfiniteness or hypercompactness of an action provides a natural way to take ergodic averages of a function and study their asymptotic behaviour. This was done by Danilenko <cit.> in the countable case, where he established a Random Ratio Ergodic Theorem for nonsingular actions of countable amenable groups using the Ornstein–Weiss Theorem. In the next we exploit Theorem <ref> in order to establish an uncountable version of Danilenko's theorem. We will make use of a natural notion of a Random Følner sequence that will be presented in Section <ref>. Let G be an lcsc amenable group and (X,μ) a nonsingular probability G-space (not necessarily free). Then there exists a random Følner sequence (𝒮_0,𝒮_1,…) of G such that for every f∈ L^1(X,μ), lim_n→∞∫_𝒮_ndμ∘ g/dμ(x)f(g.x)dλ(g)/∫_𝒮_ndμ∘ g/dμ(x)dλ(g)=𝔼(f|Inv_G(X))(x) for almost every realization (𝒮_0,𝒮_1,…), both μ-a.e. and in L^1(X,μ). As it was shown by Hochman <cit.>, when fixing one realization of 𝒮_0⊂𝒮_1⊂… it is not always true that the limit in Theorem <ref> holds for all functions in L^1(X,μ) at once. Nevertheless, it was observed by Danilenko that one may restrict the attention to a countable collection of functions in L^1(X,μ), and obtain the following particularly useful corollary: Let G be an lcsc amenable group and (X,μ) a nonsingular probability G-space. Then for every countable collection ℒ⊂ L^1(X,μ) there exists a Følner sequence S_0⊂ S_1⊂… of G such that for all f∈ℒ, lim_n→∞∫_S_ndμ∘ g/dμ(x)f(g.x)dλ(g)/∫_S_ndμ∘ g/dμ(x)dλ(g)=𝔼(f|Inv_G(X))(x) both μ-a.e. and in L^1(X,μ). The Random Ratio Ergodic Theorem in countable groups proved itself useful in ergodic theory in recent years, particularly because it is not limited to probability preserving actions but applies also to nonsingular actions, where the pointwise ergodic theorem is generally unavailable as was shown in <cit.>. The classical Hopf method, originally used by Hopf in the probability preserving category to prove the ergodicity of the geodesic flow, was developed in recent years by Kosloff <cit.> and then by Danilenko <cit.> in the nonsingular category. 
Originally, Kosloff suggested this method for Ratio Ergodic Theorem countable groups (see <cit.>), and Danilenko, observing that the Random Ratio Ergodic Theorem is sufficient, showed that this method applies for all amenable groups. The works of Kosloff and Danilenko focus on proving ergodicity in nonsingular Bernoulli and Markov shifts in countable groups. In Section <ref> we will demonstrate the power of this method using Theorem <ref> for proving that certain nonsingular Bernoulli shifts of locally compact groups obey the Hopf dichotomy: they are either totally dissipative or ergodic. § FUNDAMENTALS Throughout this work, G stands for a locally compact second countable (lcsc) group, that is, a Polish group whose topology is locally compact. We will fix once and for all a left Haar measure λ on G. A compact filtration of G is a sequence K_0⊂ K_1⊂… of compact subset of G such that G=K_0∪ K_1∪…. Such a compact filtration is called equicompact if it has the property that every compact set C⊂ G is contained in K_n for all sufficiently large n∈ℕ. By a theorem of Struble <cit.>, every lcsc group G admits a compatible proper metric, that is, a metric on G whose topology is the given topology of G and with respect to which closed ball are compact. In particular, an increasing sequence of balls in such a metric whose radii diverge to +∞ forms an equicompact filtration of G. §.§.§ Borel G-Spaces A Borel G-space is a standard Borel space X (whose σ-algebra is fixed but remains implicit) together with a Borel map G× X→ X, (g,x)↦ g.x, such that e.x=x for every x∈ X, where e∈ G is the identity element, and gh.x=g.(h.x) for every g,h∈ G and x∈ X. A Borel G-space is called free if g.x≠ x for every g∈ G\{e} and x∈ X. §.§.§ Cross Sections The notion of cross section for a Borel G-space is classical and is known for decades, but more recently it went through some useful improvements. See the survey <cit.>. In the following presentation we mostly follow Slutsky's treatment <cit.>. Let X be a Borel G-space. A cross section (also called complete section or countable section) for X is a Borel set ℭ⊂ X that intersects every orbit in at most countably many points. A cross section ℭ is called U-lacunary, for some symmetric identity neighborhood U⊂ G, if the action map U×ℭ→ X is injective, i.e. U.w∩ U.z=∅ for all distinct w,z∈ℭ. By a theorem of Kechris <cit.>, every Borel G-space admits a lacunary cross section. A cross section ℭ of X is called K-cocompact, for some compact identity neighborhood K⊂ G, if K.ℭ=X. By a theorem of C. Conley and L. Dufloux, strengthening the aforementioned theorem of Kechris, every Borel G-space admits a cocompact cross section. See <cit.>. The following version is due to Slutsky <cit.>. Let G be an lcsc group and X a Borel G-space. For every compact symmetric identity neighborhood U⊂ G, there exists a U-lacunary U^2-cocompact cross section for X. §.§.§ Voronoi Tessellations From a cross section of a free Borel G-space one may define in a standard way a Voronoi tessellation of the space. We present this here following Slutsy <cit.>. Let X be a free Borel G-space and ℭ a lacunary cross section. Fix a compatible proper metric d on G and, as the action is free, consider the map d_o:E_G^X→ℝ_≥ 0, d_o(x,g.x)=d(e,g). For x∈ X and r≥ 0 denote the set ℭ_r(x):={ w∈ℭ∩ G.x:d_o(x,w)≤ r}. The lacunarity of ℭ shows that ℭ_r(x) is a finite set and, since ℭ is a cross section, for every x∈ X there is some r≥ 0 for which ℭ_r(x) is nonempty. 
Consequently, the Borel function r_o:X→ℝ_≥0, r_o(x)=min{ d_o(x,w):(x,w)∈(X×ℭ)∩ E_G^X}, is well-defined. Let us consider the finite set ℭ(x):=ℭ_r_o(x)(x)={w∈ℭ∩ G.x:d_o(x,w)=r_o(x)}. The Voronoi tessellation {T_w:w∈ℭ} will be designed to form a partition of X, where we aim to allocate a point x∈ X to a tile T_w if w∈ℭ(x). While for a given x there can be multiple elements in ℭ(x), there are at most finitely many such elements, so fix a Borel linear ordering ≺ on X (say via a Borel isomorphism of X with ℝ) and define the allocation Borel map τ_o:X→ℭ by letting τ_o(x) be the ≺-least element of ℭ(x). Accordingly, we define the Voronoi tessellation {T_w:w∈ℭ} by T_w:={x∈ X:τ_o(x)=w}, w∈ℭ. §.§.§ OERs, Smoothness and Idealism Suppose X is a Borel G-space. Then the orbit equivalence relation associated with X is E_G^X:={(x,g.x):x∈ X,g∈ G}⊂ X× X. Since G is lcsc it is known that E_G^X is Borel, i.e. a Borel subset of X× X, and every class in E_G^X, namely every G-orbit, is Borel (see e.g. <cit.>). The equivalence relation E_G^X is referred to as the orbit equivalence relation associated with X. It will be convenient for us to relax the notion of orbit equivalence relations as follows: Let X be a Borel G-space. An orbit equivalence relation (OER) on X is a Borel subequivalence relation of E_G^X. An OER E⊂ E_G^X is said to be positive if there is a identity neighborhood U⊂ G such that for every x∈ X there exists g∈ G with Ug.x⊂[x]_E. In this case we may call it U-positive. The following notion of smoothness is central in the theory: An equivalence relation E on a standard Borel space X is called smooth if there is a Borel function s:X→ Y, for any standard Borel space Y, such that for every x,x'∈ X, (x,x')∈ E s(x)=s(x'). A smooth equivalence relation is always Borel, as it is the inverse image of the diagonal of Y under the Borel function (x,x')↦(s(x),s(x')). Given a Borel equivalence relation E on a standard Borel space X, consider the space X/E of E-classes of points in X. We recall that a σ-ideal on a given set is a nonempty collection of subsets which is closed under taking subsets and countable unions. A Borel equivalence relation E on a standard Borel space X is called idealistic if there is a Borel assignment I:C↦ I_C, C∈ X/E, assigning to each E-class C a σ-ideal I_C on C such that C∉ I_C. Here, I is being Borel in the sense that for every Borel set A⊂ X× X, the set A_I:={x∈ X:(A∩ E)_x∈ I_[x]_E}, is Borel in X, where we denote for a general set S⊂ X× X, S_x={x'∈ X:(x,x')∈ S}. The next proposition is analogous to <cit.>: Every positive OER is idealistic. For every [x]_E∈ X/E let I_[x]_E be defined by S⊂ I_[x]_Eλ(g∈ G:g.x∈ S)=0, for whatever S⊂[x]_E. By the positivity of E it is clear that [x]_E∉ I_[x]_E and, from monotonicity and σ-additivity of λ, we also see that I_[x]_E is a σ-ideal. In order to see that I is Borel, we note that for a Borel set A⊂ X× X we have by definition A_I={ x∈ X:λ(g∈ G:g.x∈(A∩ E)_x)=0}. Since (x,g)↦1_A∩ E(x,g.x) is a Borel function, from <cit.> it follows that also x↦λ(g∈ G:g.x∈(A∩ E)_x) is a Borel function, hence A_I⊂ X is a Borel set. The following characterizations are due to Kechris (see <cit.>). For a comprehensive treatment we refer to <cit.>. Let E be an idealistic Borel equivalence relation (e.g. an OER) on a standard Borel space X. TFAE: * E is smooth. * E admits a Borel selector: a Borel function s:X→ X such that (x,s(x))∈ E and (x,x')∈ E s(x)=s(x') for all x,x'∈ X. 
* E admits a Borel transversal: a Borel subset T⊂ X that intersects every class in exactly one point. When E is smooth with a Borel transversal T, there can be found a Borel selector for E of the form s:X→ T. § HYPERCOMPACT OERS Recall that a countable equivalence relation E on a standard Borel space X is hyperfinite if it admits a finite filtration, namely a sequence E_0⊂ E_1⊂… of equivalence relations with E=E_0∪ E_1∪… such that each E_n is finite, i.e. each E_n-classes is finite. When E is uncountable, we will give a definition for hyperfiniteness following Connes, Feldman & Weiss <cit.>: Let G be an lcsc group and X a Borel G-space. We say that E_G^X is sectional-hyperfinite if E_G^X∩(ℭ×ℭ) is hyperfinite for every lacunary cocompact cross section ℭ of X. The definition of Connes, Feldman & Weiss is not restricted to lacunary cocompact cross sections but to all cross sections. However, they have proved that in nonsingular G-spaces being sectional-hyperfinite is independent on the choice of the cross section up to a null set <cit.>. Thus, for nonsingular free G-spaces, sectional-hyperfiniteness as in Definition <ref> and the Connes–Feldman–Weiss notion of hyperfiniteness are the same. We chose to restrict our attention for lacunary cross sections because it turns out to be the best framework in studying hypercompactness, as it is manifested in Theorem <ref>. We now came to define hypercompactness, which is the natural uncountable analog to the notions of hyperfiniteness. Suppose X is a Borel G-space and E⊂ E_G^X is an OER. For every x∈ X denote G_E(x):={g∈ G:(x,g.x)∈ E}. Since E is Borel this is a Borel set in G. One can routinely verify that G_E(g.x)=G_E(x)g^-1 whenever g∈ G_E(x). Thus, E is U-positive according to Definition <ref> for some identity neighborhood U⊂ G, if for every x∈ X there is g∈ G with Ug⊂ G_E(x). Let X be a free Borel G-space. * An OER E⊂ E_G^X is called compact if there is a compact identity neighborhood K⊂ G such that G_E(x)⊂ K for every x∈ X. In this case we may call it K-compact. * A compact filtration of E_G^X is an increasing sequence of compact OERs E_0⊂ E_1⊂… with E=E_0∪ E_1∪…. Such a compact filtration is called equicompact if for every x∈ X, the filtration G_E_0(x)⊂ G_E_1(x)⊂… of G is equicompact.[In the sense we defined in Section <ref>.] * E_G^X is called hypercompact if it admits an equicompact filtration. We do not know whether hypercompactness can be defined in a meaningful way for general Borel G-spaces, at least not without significant restrictions on the stabilizers. In fact, unlike the notion of hyperfiniteness which is defined plainly for any countable equivalence relation, the notion of hypercompactness is restricted to orbit equivalence relations and relies on the canonical topology of the acting group. Therefore, we do not know whether one can define a meaningful notion of hypercompactness for general equivalence relations. [Free transitive actions are hypercompact] It is an elementary fact that the action of every countable group G on itself is hyperfinite. Indeed, we may define for every n∈ℕ a partition T_n of G into finite subsets of G, each of which of size 2^n. We can do this in such a way that T_n+1 is coarser than T_n (i.e. every element of T_n is contained in an element of T_n+1) for every n∈ℕ. Now letting E_n be the equivalence relation whose classes are the elements of T_n for every n∈ℕ, it is easy to see that E_0⊂ E_1⊂… forms a finite filtration of E_G^G=G× G. 
Let us show that E_G^G=G× G is hypercompact for every lcsc group G using essentially the same argument. Fix some compatible proper metric d on G. Start by picking a countable set A:={a_1,a_2,…}⊂ G which is 1-discrete (i.e. d(a_i,a_j)≥ 1 for all distinct a_i,a_j∈ A) and 2-dense (i.e. dist(g,A)<2 for every g∈ G). Form a Voronoi tessellation {T_1,T_2,…} so that T_i consists of points which are 1-distant from a_i, for i=1,2,… (just as described in Section <ref>), from which we obtain the equivalence relation E_0 whose classes are {T_1,T_2,…}. Now for every n∈ℕ let E_n be the equivalence relation whose classes are { T_1∪… T_2^n,T_2^n+1∪…∪ T_2^n+1,T_2^n+2∪…∪ T_2^n+2+1,…}. Since A is uniformly discrete, every ball in G is covered by finitely many of the T_i's, hence E_0⊂ E_1⊂… is an equicompact filtration of E_G^G=G× G. Let X be a free Borel G-space. If E⊂ E_G^X is a positive OER, then every G-orbit contains at most countably many E-classes. Let G.x be some G-orbit. Letting G.x/E be the set of E-classes in G.x, for every C∈ G.x/E put G_E(C):={ g∈ G:g.x∈ C}. On one hand, those sets are pairwise disjoint and together form a partition of G_E(x). On the other hand, for every C∈ G.x/E, if we fix any g_C∈ G_E(C) then similarly to (<ref>) we note that G_E(C)=G_E(g_C.x)g_C. Assuming that E is U-positive, we deduce that G_E(C) contains a translation of U. Since G is second countable there are at most countably many disjoint translations of U, hence G.x/E is at most countable. Let X be a free Borel G-space. If E_G^X is hypercompact then it admits an equicompact filtration consisting of positive OERs. Fix an equicompact filtration F_0⊂ F_1⊂… of E_G^X. For x∈ X put φ_0(x)=inf{ m≥ 0:λ(G_F_m(x))>0}. Since E_G^X is clearly positive, necessarily φ_0(x)<+∞ for every x∈ X. For every m, since F_m is Borel, from <cit.> it follows that x↦λ(G_F_m(x)) is a Borel function, hence so is φ_0. Define then E_0⊂ E by (x,x')∈ E_0(x,x')∈ F_φ_1(x). It is clear that E_0⊂ X× X is a Borel set. Note that if (x,x')∈ F_m for some m∈ℕ and λ(G_F_m(x))>0, then in light of (<ref>) and the quasi-invariance of λ to multiplication from the right also λ(G_F_m(x'))>0, so it easily follows that φ_0(x)=φ_0(x'). This implies that E_o is an equivalence relation. We also note that G_E_0(x)=G_F_φ_0(x)(x), hence E_0 is compact. Proceeding by induction, for n≥ 1 put φ_n(x)=inf{ m≥φ_n-1(x)+1:λ(G_F_m(x))>0}, and define E_n by (x,x')∈ E_n(x,x')∈ F_φ_n(x). It is then routine to verify that E_0⊂ E_1⊂… forms a positive equicompact filtration of E_G^X. Let us now formulate our first main result, of which Theorem <ref> and Corollary <ref> are particular cases: For a free Borel G-space X TFAE: * E_G^X admits a compact filtration. * E_G^X is sectional-hyperfinite. * E_G^X is hypercompact. * E_G^X restricted to one lacunary cocompact cross section is hyperfinite. The hard part of Theorem <ref> is the implication (3)(4), and its proof relies on the existence of compact OER with sufficient regularity: Let X be a free Borel G-space and let U⊂ G be an arbitrary relatively compact identity neighborhood. Fix some U-lacunary K-cocompact cross section ℭ for X, for a relatively compact identity neighborhood K⊂ G (which exists, e.g. for K=U^2, by Theorem <ref>), and let {T_w:w∈ℭ} be the corresponding Voronoi tessellation. Define E_U⊂ X× X to be the equivalence relation whose equivalence classes are {T_w:w∈ℭ}. 
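For orientation, the simplest instance of this construction — an illustration we add here, not taken from the original argument — is the translation action of ℝ on itself with the integer lattice as cross section:

```latex
% Illustrative special case (ours): G = X = \mathbb{R} acting on itself by
% translations, U = (-\tfrac{1}{2},\tfrac{1}{2}), and the cross section
% \mathfrak{C} = \mathbb{Z}, which is U-lacunary and [-\tfrac{1}{2},\tfrac{1}{2}]-cocompact.
\[
  T_n=\bigl[n-\tfrac{1}{2},\,n+\tfrac{1}{2}\bigr)\quad(n\in\mathbb{Z}),\qquad
  \tau_o(x)=\text{the integer nearest to }x,
\]
\[
  G_{E_U}(x)=\{g\in\mathbb{R}:\ x+g\in T_{\tau_o(x)}\}=T_{\tau_o(x)}-x\subseteq[-1,1],
\]
% with ties at half-integers resolved by the ordering \prec. Hence this E_U is
% U-positive and [-1,1]-compact, in line with the proposition that follows.
```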
For all E_U as in Definition <ref> the following properties hold: * E_U is a smooth OER of E_G^X, and the allocation map τ_o:X→ℭ of the Voronoi tessellation serves as a Borel selector of E_U. * E_U is a U-positive and K^2-compact. * If E_0⊂ E_1⊂… is a compact filtration of E_G^X for which E_0⊂ E_U, then it is an equicompact filtration. First, E_U is an OER of E_G^X since when (x,x')∈ E_U then x,x'∈ T_w for some w∈ℭ, so x,x'∈ G.w. By the construction of the Voronoi tessellation, the allocation map τ_o:X→ℭ is a Borel selector into ℭ, and thus E_U is smooth. By the U-lacunarity of ℭ we see that E_U is U-positive. By the K-cocompactness of ℭ we see that G_E_U(x)⊂ K^2 for every x∈ X, thus E_U is K^2-compact. Let us now show the third property. Let E_U⊂ E_0⊂ E_1⊂… be a compact filtration. Abbreviate G_n(·)=G_E_n(·) and [·]_n=[·]_E_n. The key property to get equicompactness is the following: For every x∈ X there exists n∈ℕ such that G_n(x) contains an identity neighborhood in G. Proof of the claim: Let x∈ X be arbitrary. Since ℭ_r_o(x)+1(x) is a finite set containing ℭ(x), there exists ϵ=ϵ(x)>0 sufficiently small with ℭ_r_o(x)+ϵ(x)=ℭ_r_o(x)(x)=ℭ(x). Look at B_ϵ/2(e), the open ball of radius ϵ/2 around the identity e∈ G with respect to the compatible proper metric defining the cross section ℭ, and we aim to show that B_ϵ/2(e)⊂ G_n(x) for every sufficiently large n∈ℕ. Let g∈ B_ϵ/2(e) be arbitrary. First note that for whatever w∈ℭ(x) we have d_o(g.x,w)≤ d_o(x,w)+d_o(x,g.x)<r_o(x)+ϵ/2, hence by the definition of r_o we have r_o(g.x)<r_o(x)+ϵ/2. Now pick any w∈ℭ(g.x), namely r_o(g.x)=d_o(g.x,w), and we see that d_o(x,w) ≤ d_o(x,g.x)+d_o(g.x,w) <ϵ/2+r_o(g.x)<r_o(x)+ϵ. This means that w∈ℭ_r_o(x)+ϵ(x)=ℭ(x), so we deduce that ℭ(g.x)⊂ℭ(x). Now since ℭ(x) is finite and E_0⊂ E_1⊂… is a filtration of E_G^X, there can be found n=n(x)∈ℕ sufficiently large such that [ℭ(x)]_n⊂[x]_n, hence g.x∈[ℭ(g.x)]_n⊂[ℭ(x)]_n⊂[x]_n. We found that B_ϵ/2(e)⊂ G_n(x), completing the proof of the claim. ◊ We now deduce the equicompactness of E_U⊂ E_0⊂ E_1⊂…. Let x∈ X be arbitrary and let C⊂ G be some compact set. For every c∈ C, by the claim we may pick n=n(x,c)∈ℕ such that G_n(c^-1.x) contains some identity neighborhood U. Enlarging n if necessary, we may assume that (x,c.x)∈ E_n, that is c∈ G_n(x). Then by (<ref>) we have Uc⊂ G_E_U(c.x)c=G_E_U(x) hence c∈Int(G_n(x)), so we deduce that C⊂⋃_n∈ℕInt(G_n(x)). Since C is compact there exists n_o∈ℕ such that C⊂ G_n(x) for all n>n_o. (1)(2): Fix a compact filtration E_0⊂ E_1⊂… of E_G^X, and let ℭ be a U-lacunary cross section ℭ for E_G^X (we do not use cocompactness here). Put F_n:=E_n∩(ℭ×ℭ) for n∈ℕ. Clearly F_0⊂ F_1⊂… is a filtration of E_G^X∩(ℭ×ℭ), so we have left to show that F_n is a finite equivalence relation for every n∈ℕ. Indeed, since the action is free, for every x∈ X the set G_F_n(x) is U-discrete and contained in G_E_n(x), which is relatively compact. Thus G_F_n(x) is necessarily finite. (2)(3): Let U⊂ G be some identity neighborhood, pick a U^2-cocompact U-lacunary cross section ℭ for X, and let E_U⊂ E_G^X be as in Definition <ref>. By the assumption that E_G^X is sectional-hyperfinite, we obtain that E_G^X∩(ℭ×ℭ) is a hyperfinite countable equivalence relations. Let then F_0⊂ F_1⊂… be a finite filtration of E_G^X∩(ℭ×ℭ). With the Borel selector τ_o:X→ℭ of E_U, define for n∈ℕ, E_n:={(x,y)∈ X× X:(τ_o(x),τ_o(y))∈ F_n}. Note that E_n⊂ F_n⊂ E_G^X∩(ℭ×ℭ)⊂ E_G^X for every n∈ℕ. 
Also note that if (x,x')∈ E_G^X then (τ_o(x),τ_o(x'))∈ E_G^X∩(T× T) hence there exists n∈ℕ such that (τ_o(x),τ_o(x'))∈ F_n hence (x,y)∈ E_n. Thus, we found that E_0⊂ E_1⊂… is a filtration of E_G^X. Since (x,x')∈ E_Uτ_o(x)=τ_o(x') it is clear that E_U⊂ E_0. Let us verify that E_n is compact for each n∈ℕ. Indeed, let x∈ X be arbitrary and note that as F_n is finite there is a finite set A_n(x)⊂ G such that for every g∈ G there is some a∈ A_n(x) such that τ_o(g.x)=τ_o(a.x). Hence G_E_n(x) =⋃_a∈ A_n(x){ g∈ G:τ_o(g.x)=τ_o(a.x)} =⋃_a∈ A_n(x){ g∈ G:(g.x,a.x)∈ E} =⋃_a∈ A_n(x)G_E_U(a.x)=⋃_a∈ A_n(x)G_E_U(x)a^-1. Since G_E_U(x) is relatively compact we deduce that so is G_E_n(x). Finally, since E_G^X admits a compact filtration whose first element is E_U, by Lemma <ref> it is an equicompact filtration, thus E_G^X is hypercompact. (3)(4): This implication is included in the implication (1)(2). (4)(1): Let ℭ be a K-cocompact U-lacunary cross section for X such that E_G^X∩(ℭ×ℭ) is hyperfinite. Let E_U and τ_o be as in Definition <ref>. Fix a finite filtration F_0⊂ F_1⊂… of E_G^X∩(ℭ×ℭ) and put E_n:={(x,x')∈ X× X:(τ_o(x),τ_o(x'))∈ F_n} , n∈ℕ. Then, following the same reasoning in the proof of (2)(3), one can verify that E_0⊂ E_1⊂… is a compact filtration for E_G^X. § MEASURE-HYPERCOMPACT OERS Let us recall the basic setup of the nonsingular ergodic theory. A standard measure space is a measure space (X,μ) such that X is a standard Borel space and μ is a Borel σ-finite measure on X. A Borel set A⊂ X is said to be μ-null if μ(A)=0 and it is said to be μ-conull if μ(X\ A)=0. A Borel property of the points of X will be said to hold modulo μ or μ-a.e. if the set of points satisfying this property is μ-conull. A nonsingular G-space is a standard measure space (X,μ) such that X is a Borel G-space and μ is quasi-invariant to the action of G on X; that is, the measures μ∘ g^-1 and μ are mutually absolutely continuous for every g∈ G. A particular case of a nonsingular G-space is measure preserving G-space, in which we further have μ∘ g^-1=μ for every g∈ G. When μ is a probability measure, we stress this by using the terminology nonsingular probability G-space or probability preserving G-space. Let (X,μ) be a nonsingular G-space. We say that E_G^X is μ-sectional-hyperfinite or μ-hypercompact if there exists a μ-conull G-invariant set X_o⊂ X such that E_G^X_o:=E_G^X∩(X_o× X_o) is sectional hyperfinite or hypercompact, respectively. When the measure μ is clear in the context, we may call these notions by measure-sectional-hyperfinite or measure-hypercompact. As mentioned in Remark <ref>, Connes, Feldman & Weiss proved that being measure-sectional-hyperfinite with respect to one cross section implies the same for all other cross sections. The requirement in Definition <ref> that X_o would be G-invariant is necessary in order to define hypercompactness, as this notion is define exclusively for OERs, but is unnecessary to define sectional-hyperfiniteness, since hyperfiniteness is defined for general countable equivalence relations. Note however that also in measure-sectional-hyperfiniteness we may always assume that it is the case that X_o is G-invariant; indeed, otherwise we may replace X_o by the G-invariant set G.X_o, which is a Borel set,[It follows from the Arsenin–Kunugui Theorem; <cit.>, <cit.>.] and of course is also μ-conull, and E_G^G.X_o remains sectional-hyperfinite because every cross section for X_o serves also as a cross section for G.X_o. 
In light of this discussion, Corollary <ref> is nothing but a reformulation of the celebrated Connes–Feldman–Weiss Theorem using Theorem <ref>: Let G be an amenable group and (X,μ) be a nonsingular G-space. By the Connes–Feldman–Weiss Theorem there exists a μ-conull set X_o⊂ X such that E_G^X_o is sectional-hyperfinite. By the above discussion we may assume that X_o is G-invariant, thus E_G^X_0 is a sectional hypercompact OER. Then by Theorem <ref> we deduce that E_G^X_o is hypercompact, thus E_G^X is μ-hypercompact. § CONDITIONAL EXPECTATION ON COMPACT OERS Here we introduce a formula for conditional expectation on the σ-algebra of sets that are invariant to a compact OER. It will be useful both in relating hypercompactness to amenability and in establishing the Random Ratio Ergodic Theorem. Let X be a Borel G-space. For an OER E⊂ E_G^X we denote by Inv(E) the E-invariant σ-algebra whose elements are the Borel sets in X which are unions of E-classes. Thus, a Borel set A⊂ X is in Inv(E) if and only if for every x∈ X, either the entire E-class of x is in A or that the entire E-class of x is outside A. The E_G^X-invariant σ-algebra will be abbreviated by Inv_G(X):=Inv(E_G^X). Thus, a Borel set A⊂ X is in Inv_G(X) if it is a G-invariant set. From a given filtration E_0⊂ E_1⊂… of E_G^X, we obtain an approximation of Inv_G(X) by the sequence Inv(E_0)⊃Inv(E_1)⊃…, that satisfies Inv_G(X)=Inv(E_0)∩Inv(E_1)∩…. Suppose (X,μ) is a nonsingular G-space. It then has an associated Radon–Nikodym cocycle, which is a Borel function ∇:G× X→ℝ_>0, ∇:(g,x)↦∇_g(x), that satisfies the cocycle identity ∇_gh(x)=∇_g(h.x)∇_h(x), g,h∈ G, x∈ X, and has the property ∇_g(·)=dμ∘ g/dμ(·) in L^1(X,μ) for each g∈ G. We mention shortly that the fact that there can be found a version of the Radon–Nikodym cocycle that satisfies the cocycle identity pointwise is certainly non-trivial, but is true due to the Mackey Cocycle Theorem (see e.g. <cit.>). Using nothing but the Fubini Theorem, the Radon–Nikodym property of ∇ can be put generally in the following formula: † ∬_G× X∇_g(x)f_0(g.x)f_1(x)φ(g)dλ⊗μ(g,x) =∬_G× Xf_0(x)f_1(g^-1.x)φ(g)dλ⊗μ(g,x), for all Borel functions f_0,f_1:X→[0,∞) and φ:G→[0,∞). We introduce the main formula we need for conditional expectation on a compact equivalence relation. For finite OERs in the context of countable acting groups, this formula was mentioned in <cit.> (cf. the generalized Bayes' law in <cit.>). Let E⊂ E_G^X be a compact OER. With the Radon–Nikodym cocycle (<ref>), define an operator of measurable functions on X by S^E:f↦ S_f^E, S_f^E(x):=∫_G_E(x)∇_g(x)f(g.x)dλ(g). For a compact positive OER E⊂ E_G^X, the conditional expectation of every f∈ L^1(X,μ) with respect to Inv(E) has the formula 𝔼(f|Inv(E))(x)=S_f^E(x)/S_1^E(x)=∫_G_E(x)∇_g(x)f(g.x)dλ(g)/∫_G_E(x)∇_g(x)dλ(g), for μ-a.e. x∈ X (depending on f). We will start by a simple lemma that demonstrates that compact OERs are dissipative in nature (cf. <cit.>). Let E⊂ E_G^X be a compact positive OER. For every f∈ L^1(X,μ) we have S_f^E(x)<+∞ for μ-a.e. x∈ X (depending on f). Pick a compatible proper metric on G and for r>0 let B_r be the ball of radius r around the identity with respect to this metric. Recalling the identity (<ref>), for every f∈ L^1(X,μ) and every r>0 we have ∬_B_r× X∇_g(x)f(g.x)dλ⊗μ(g,x) =∬_G× Xf(x)1_B_r(g^-1)dλ⊗μ(g,x) =λ(B_r^-1)‖ f‖_L^1(X,μ)<+∞. Since this holds for every r>0, there exists a μ-conull set A_f such that ∫_B_r∇_g(x)f(g.x)dλ(g)<+∞ for every r>0 and every x∈ A_f. 
Then for every x∈ A_f, since G_E(x) is relatively compact it is contained in some B_r for some sufficiently large r>0, hence S_f^E(x)<+∞. Note that for every x∈ X, if g∈ G_E(x) then 1_E(g.x,hg.x)=1_E(x,hg.x) for all h∈ G. Let :G→ℝ_>0 be the modular function of G with respect to λ. From the cocycle property of ∇ it follows that whenever g∈ G_E(x), I S_f^E(g.x) =∫_G1_E(g.x,hg.x)∇_h(g.x)f(hg.x)dλ(h) =∇_g(x)^-1∫_G1_E(x,hg.x)∇_hg(x)f(hg.x)dλ(h) =∇_g(x)^-1(g)∫_G1_E(x,h.x)∇_h(x)f(h.x)dλ(h) =∇_g(x)^-1(g)S_f^E(x). In particular, the function X→ℝ_>0, x↦ S_f^E(x)/S_1^E(x), is E-invariant, i.e. Inv(E)-measurable. From all the above we obtain the following identity: II ∫_Xf(x)dμ(x) =∫_Xf(x)S_1^E(x)S_1^E(x)^-1dμ(x) =∬_G× X∇_g(x)f(x)1_E(x,g.x)S_1^E(x)^-1dλ⊗μ(g,x) (<ref>) =∬_G× Xf(g^-1.x)1_E(g^-1.x,x)S_1^E(g^-1.x)^-1dλ⊗μ(g,x) (<ref>) =∬_G× Xf(g^-1.x)1_E(g^-1.x,x)∇_g^-1(x)(g)S_1^E(x)^-1dλ⊗μ(g,x) =∬_G× Xf(g.x)1_E(g.x,x)∇_g(x)S_1^E(x)^-1dλ⊗μ(g,x) =∫_XS_f^E(x)S_1^E(x)^-1dμ(x). Finally, note that for every E-invariant function ψ∈ L^∞(X,μ) we have S_f·ψ^E(x)=S_f^E(x)ψ(x) for μ-a.e. x∈ X, so the identity (<ref>) when applied to f·ψ implies the identity ∫_Xf(x)ψ(x)dμ(x)=∫_XS_f^E(x)S_1^E(x)^-1ψ(x)dμ(x). Then the E-invariance of S_f^E(x)S_1^E(x)^-1 with the last identity readily imply that S_f^E(·)S_1^E(·)^-1=𝔼(f|Inv(E)) in L^1(X,μ). § ASYMPTOTIC INVARIANCE OF HYPERCOMPACT OERS The following asymptotic invariance property in hyperfinite equivalence relations was proved by Danilenko <cit.> and has been used by him for the random ratio ergodic theorem <cit.>. We will show that Danilenko's proof can be adapted to hypercompact OERs as well. Let G be an lcsc group with a left Haar measure λ. A compact set S⊂ G is said to be [K,ϵ]-invariant, for some compact set K⊂ G and ϵ>0, if λ(g∈ S:Kg⊂ S)>(1-ϵ)λ(S). Let (X,μ) be a probability preserving G-space. Suppose E_G^X is hypercompact with an equicompact filtration E_0⊂ E_1⊂…. Then for every compact set K⊂ G and ϵ>0, it holds that lim inf_n→∞μ(x∈ X:G_E_n(x) is [K,ϵ]-invariant)>1-ϵ. We first formulate a simple probability fact: Let W_nW be an L^1-convergent sequence of random variables taking values in [0,1]. Then for every 0<ϵ≤ 1/2, if 𝔼(W)>1-ϵ^2 then lim inf_n→∞ℙ(W_n>1-ϵ)>1-ϵ. For every n, since W_n is taking values in [0,1], 𝔼(W_n) =∫_{ W_n>1-ϵ}W_ndℙ+∫_{ W_n≤1-ϵ}W_ndℙ ≤ℙ(W_n>1-ϵ)+(1-ϵ)ℙ(W_n≤1-ϵ) =1-ϵℙ(W_n≤1-ϵ). Since the left hand-side converges to 𝔼(W)>1-ϵ^2 as n→+∞, we deduce that ℙ(W_n≤1-ϵ)<ϵ for every sufficiently large n. For m∈ℕ let X_m(K)={ x∈ X:K⊂ G_m(x)}. From the equicompactness, X_m(K)↗ X as m↗+∞, so pick m_o∈ℕ such that μ(X_m_o(K))>1-ϵ^2. By Lévy's martingale convergence theorem (see e.g. <cit.>), W_n:=𝔼(1_X_m_o(K)|Inv(E_n))𝔼(1_X_m_o(K)|Inv_G(X)):=W. Since 𝔼(W)=μ(X_m_o)>1-ϵ^2, from Lemma <ref> it follows that there exists n_o∈ℕ, say n_o>m_o, such that μ(W_n>1-ϵ)>1-ϵ for every n>n_o. Abbreviate G_n(·)=G_E_n(·). For every n>n_o, by the formula for conditional expectation as in Proposition <ref> in the probability preserving case, since G_m_o(x)⊂ G_n(x) and G_n(g.x)=G_n(x)g^-1 for g∈ G_n(x), we get W_n(x) =λ(g∈ G_n(x):K⊂ G_m_o(g.x))/λ(G_n(x)) ≤λ(g∈ G_n(x):K⊂ G_n(g.x))/λ(G_n(x)) =λ(g∈ G_n(x):Kg⊂ G_n(x))/λ(G_n(x)) Hence, for all n>n_o, μ(x∈ X:G_n(x) is [K,ϵ]-invariant)≥μ(W_n>1-ϵ)>1-ϵ. We can now prove Theorem <ref>. First we mention that the setting of Theorem <ref> is never void, since every lcsc group admits a probability preserving free G-space (see e.g. <cit.>). 
Recall that an lcsc group G is amenable if it admits a Følner sequence, namely a compact filtration S_0⊂ S_1⊂… of G such that for every compact set K⊂ G and every ϵ>0, there is n_o∈ℕ such that S_n is [K,ϵ]-invariant for all n>n_o. One implication is a particular case of Theorem <ref>. For the other implication, suppose that (X,μ) is a probability preserving G-space and that, up to a μ-null set, E_G^X admits an equicompact filtration E_0⊂ E_1⊂…. Fix an equicompact filtration K_0⊂ K_1⊂… of G, and a sequence ϵ_0,ϵ_1,… of positive numbers with ϵ_0+ϵ_1+…<+∞. From Proposition <ref> together with the Borel–Cantelli Lemma, one deduces that for μ-a.e. x∈ X and every m∈ℕ, G_E_n(x) is [K_m,ϵ_m]-invariant for all but finitely many n∈ℕ. It then follows that G_0(x)⊂ G_1(x)⊂… is a Følner sequence of G for μ-a.e. x∈ X, and in particular G is amenable. § TWO RANDOM RATIO ERGODIC THEOREMS The classical Random Ergodic Theorem of Kakutani regards the asymptotic behaviour of ergodic averages of the form 1/n∑_k=1^nf(z_k.x), n=1,2,…, where x is a point in a G-space, (z_1,z_2,…) is a random walk on G, and f is an integrable function. See e.g. <cit.> and the references therein. Performing a random walk on the acting group enables one the use of the classical ergodic averages along the natural Følner sequence of ℕ. Alternatively, one may propose other methods to pick compactly many group elements at random in each step, and study the asymptotic behaviour of the corresponding ergodic averages. This was done by Danilenko for countable amenable groups <cit.> using the Ornstein–Weiss Theorem about hyperfiniteness, and in the following we extend this method to uncountable amenable groups using Theorem <ref> about hypercompactness. Let (X,μ) be a nonsingular probability G-space and suppose that E_G^X is hypercompact with an equicompact filtration E_0⊂ E_1⊂…. Then for every f∈ L^1(X,μ), lim_n→∞∫_G_E_n(x)∇_g(x)f(g.x)dλ(g)/∫_G_E_n(x)∇_g(x)dλ(g)=𝔼(f|Inv_G(X))(x) both μ-a.e. and in L^1(X,μ). It is worth noting that in the setting of Theorem <ref>, while it is called an ergodic theorem, we do not say that the filtration has the Følner property. Since E_0⊂ E_1⊂… is a filtration of E_G^X modulo μ, Inv(E_0)⊃Inv(E_1)⊃…… is an approximation of Inv_G(X) modulo μ. Then by Lévy's martingale convergence theorem (see e.g. <cit.>), for every f∈ L^1(X,μ) it holds that lim_n→∞𝔼(f|Inv(E_n))=𝔼(f|Inv_G(X)) both μ-a.e. and in L^1(X,μ). By the formula for conditional expectation as in Proposition <ref> this is the limit stated in the theorem. The following theorem was proved by Danilenko <cit.> for countable amenable groups, using the Ornstein–Weiss Theorem. For the general case we will follow the same idea as Danilenko, only that we substitute the Ornstein–Weiss Theorem with Theorem <ref> and use the formulas for conditional expectations as in Proposition <ref>. It is worth mentioning that the general idea of the proof is the same, albeit considerably simpler, to the approach used by Bowen & Nevo in establishing pointwise ergodic theorems for probability preserving actions of non-amenable (countable) groups. See e.g. <cit.>. A random equicompact filtration of G is a sequence (𝒮_0,𝒮_1,…) such that: * Each 𝒮_n is a function from an abstract probability space (Ω,ℙ) into the relatively compact sets in G. * Each 𝒮_n is measurable in the sense that {(ω,g):g∈𝒮_n(ω)} is a measurable subset of Ω× S. * 𝒮_0⊂𝒮_1⊂… is ℙ-almost surely an equicompact filtration of G. 
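A toy instance may help to fix this definition (the example is ours and is not part of the original text):

```latex
% Added toy example: a random equicompact filtration of G = \mathbb{R}.
% Fix any random variable \xi \ge 0 on (\Omega,\mathbb{P}) and set
\[
  \mathcal{S}_n(\omega)=\bigl[-(n+\xi(\omega)),\,n+\xi(\omega)\bigr],\qquad n\in\mathbb{N}.
\]
% Each \mathcal{S}_n(\omega) is compact, the set \{(\omega,g):|g|\le n+\xi(\omega)\}
% is a measurable subset of \Omega\times G, and for every \omega the sets increase
% and eventually absorb any fixed compact subset of \mathbb{R}; since intervals of
% diverging length are F\o{}lner in (\mathbb{R},+), the sequence is moreover
% \mathbb{P}-almost surely a F\o{}lner sequence of G.
```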
A random Følner sequence is a random equicompact filtration (𝒮_0,𝒮_1,…) of G that forms ℙ-almost surely a Følner sequence of G. Our source for Random Følner filtrations is, of course, hypercompact orbit equivalence relations: Suppose E_G^X is hypercompact with an equicompact filtration E_0⊂ E_1⊂…. Then the sequence (𝒮_0,𝒮_1,…) that is defined on (X,μ) by 𝒮_n(x)=G_E_n(x)={g∈ G:(x,g.x)∈ E} forms a random equicompact filtration. Of course, (𝒮_0(x),𝒮_1(x),…) forms an equicompact filtration of G for every x∈ X. In order to see the measurability, note that whenever E⊂ E_G^X is an OER, and in particular a Borel subset of X× X, then the set {(x,g)∈ X× G:g∈ G_E(x)} is nothing but the inverse image of E under the Borel map X× G→ X× X, (x,g)↦(x,g.x). Having all these notions defined we can prove now Theorem <ref>: Pick any probability preserving free G-space (Y,ν) (the existence of such a G-space is well-known; see e.g. <cit.>). Since G is amenable, it follows from Theorem <ref> that (Y,ν) is hypercompact, and in fact there can be found an equicompact filtration F_0⊂ F_1⊂… of E_G^Y. Consider the diagonal nonsingular G-space (X× Y,μ⊗ν), namely with the action of G that is given by g.(x,y)=(g.x,g.y). Note that E_G^X× Y is measure-hypercompact, since it has the equicompact filtration E_0⊂ E_1⊂… given by E_n={((x,y),(g.x,g.y)):(y,g.y)∈ F_n}. Indeed, as the action of G on (Y,ν) is free, it is easy to verify that G_n(x,y):=G_E_n(x,y)=G_F_n(y), (x,y)∈ X× Y. Also, since G acts on (Y,ν) in a probability preserving way, its Radon-Nikodym cocycle can be taken to be the constant function 1, so we have ∇_g^μ⊗ν(x,y)=∇_g(x), (x,y)∈ X× Y, where ∇_g^μ⊗ν and ∇_g denote the Radon–Nikodym cocycle (<ref>) of the nonsingular G-spaces (X× Y,μ⊗ν) and (X,μ), respectively. Then for every f∈ L^1(X,μ), applying Theorem <ref> to f⊗ 1∈ L^1(X× Y,μ⊗ν), we obtain lim_n→∞∫_G_n(y)∇_g(x)f(g.x)dλ(g)/∫_G_n(y)∇_g(x)dλ(g)=𝔼(f⊗1|Inv(E_G^X× Y))=𝔼(f|Inv(E_G^X)) both μ⊗ν-a.e. and in L^1(X× Y,μ⊗ν). Define the random equicompact filtration (𝒮_0,𝒮_1,…) on the probability space (Y,ν) by 𝒮_n(y)=G_E_n(y), n=0,1,…, which is a random equicompact filtration by Lemma <ref>. Finally, in order to see that (𝒮_0,𝒮_1,…) is a random Følner sequence, note that since F_0⊂ F_1⊂… is an equicompact filtration of G, then E_0⊂ E_1⊂… is an equicompact filtration of E_G^X× Y. Then, as we have shown in the proof of Theorem <ref>, the equicompactness ensures that for ν-a.e. y∈ Y this is a Følner sequence. § HOPF DICHOTOMY IN NONSINGULAR BERNOULLI ACTIONS An important model in nonsingular ergodic theory is the nonsingular Bernoulli G-space. For a countable group G, this is the Borel G-space {0,1}^G, where the action is given by translating the coordinates, and with a product measure of the form ⊗_g∈ G(p_g,1-p_g). A classical theorem of Kakutani provides a criterion on the summability of (p_g)_g∈ G that completely determines when this action becomes nonsingular, in which case ({0,1}^G,⊗_g∈ G(p_g,1-p_g)) is called a nonsingular Bernoulli shift. This model is well-studied in ergodic theory and its related fields in recent years, and many results are known about its conservativity, ergodicity and other ergodic-theoretical properties. See the survey <cit.>. When it comes to an lcsc group G, the natural model for nonsingular Bernoulli G-spaces is the nonsingular Poisson suspension.[Nonsingular Poisson suspension which has a dissipative base is referred to as nonsingular Bernoulli G-space, following the terminology in <cit.>.] 
We will introduced the fundamentals of this model below. The analog of the Kakutani dichotomy was established by Takahashi, and the formula for the Radon–Nikodym derivative was presented by Danilenko, Kosloff & Roy <cit.>. See also the survey <cit.>. Here we will use the Random Ratio Ergodic Theorem <ref> in order to show that for a large class of nonsingular Bernoulli G-spaces the Hopf Dichotomy holds: they are either totally dissipative or ergodic. Our method follows the main line of the Hopf method in proving ergodicity, which was presented in the nonsingular category by Kosloff <cit.> and extended by Danilenko <cit.>. §.§.§ Conservativity and Dissipativity Let (X,μ) be a nonsingular probability G-space. Then (X,μ) is said to be conservative if it has the following recurrence property: for every Borel set A⊂ X with μ(A)>0 and every compact set K⊂ G, there exists g∈ G\ K such that μ(A∩ g.A)>0. It is a classical fact (for a proof of the general case see <cit.>) that conservativity is equivalent to that ∫_Gdμ∘ g/dμ(x)dλ(g)=+∞ for μ-a.e. x∈ X. Accordingly, the Hopf Decomposition or the Conservative–Dissipative Decomposition of (X,μ) is given by 𝒟:={ x∈ X:∫_Gdμ∘ g/dμ(x)dλ(g)<+∞} and 𝒞:=Y\𝒟. Thus, (X,μ) is called totally dissipative if μ(𝒟)=1 and, on the other extreme, it is called conservative if μ(𝒞)=1. §.§.§ Nonsingular Bernoulli G-spaces Let G be an lcsc group and μ be an absolutely continuous (i.e. in the Haar measure class) measure on G. Denote the corresponding Radon–Nikodym derivative by ∂:G→ℝ_>0, ∂(x)=dμ/dλ(x). Thus, (G,μ) is a nonsingular G-space in the action of G on itself by translations, and we have the Radon–Nikodym cocycle ∇_g(x):=dμ∘ g/dμ(x)=dμ/dλ(gx)dλ/dμ(x)=∂(gx)/∂(x), g,x∈ G. Here μ∘ g is the measure A↦μ(gA) (g forms an invertible transformation). Let G^∗ be the space of counting Radon measures on G. Thus an element p∈ G^∗ is a Radon measure on G that takes nonnegative integer values. The Borel σ-algebra of G^∗ is generated by the mappings N_A:G^∗→ℤ_≥ 0∪{+∞}, N_A(p)=p(A), for all Borel sets A⊂ G. Denote by μ^∗ the unique probability measure on G^∗ with respect to which the distribution of each of the random variables N_A is Poisson with mean μ(A), meaning that μ^∗(N_A=k)=e^-μ(A)μ(A)^k/k!, k∈ℤ_≥0. It is a basic fact that when G is non-discrete and μ is absolutely continuous, for every p in a μ^∗-conull set in G^∗ there are no repetitions, that is to say, every such p is of the form p=∑_p∈ Pδ_p, where P⊂ G is some discrete set that we will denote P=Supp(p) and refer to as the support of p. There is a natural action of G on G^∗, which is given by g.p(A)=p(gA), g∈ G. By <cit.> (see also <cit.>), when μ satisfies ∇_g-1∈ L^1(G,μ) for all g∈ G, or, equivalently, g.∂-∂∈ L^1(G,λ) for all g∈ G, then the action of G on (G^∗,μ^∗) is nonsingular, and the Radon–Nikodym derivatives have the formula ∇_g^∗(p):=dμ^∗∘ g/dμ^∗(p) =e^-∫_G(∇_g-1)dμ·∏_x∈Supp(p)∇_g(x) =e^-∫_G(g.∂-∂)dλ·∏_x∈Supp(p)∂(gx)/∂(x), g∈ G. §.§.§ A Hopf Dichotomy The following theorem is a general result about nonsingular Bernoulli G-spaces. For constructions of nonsingular Bernoulli G-spaces with a variety of ergodic properties we refer to <cit.>. Let G be an lcsc amenable group and μ an absolutely continuous measure on G such that ∂:=dμ/dλ satisfies 0<inf∂<sup∂<+∞ and g.∂-∂∈ L^1(G,λ), g∈ G. Then the nonsingular G-space (G^∗,μ^∗) is either totally dissipative or ergodic. The first part of the proof, namely that (G^∗,μ^∗) is either totally dissipative or conservative, is essentially the same as the proof of <cit.>. 
The main part of the proof, namely that conservativity implies ergodicity, uses Corollary <ref> of the Random Ratio Ergodic Theorem. Both parts relies on the following observation of Danilenko, Kosloff & Roy. Denote by Π=Π(G,μ) the group of all μ-preserving invertible compactly supported transformations of (G,μ). We denote by Supp(π) the (compact) support of π∈Π. Then Π acts naturally on (G^∗,μ^∗) in a measure preserving way via π.p(A)=p(π^-1(A)), π∈Π, p∈ G^∗. As it was proved in the course of the proof <cit.>, we have: The action of Π=Π(G,μ) on (G^∗,μ^∗) is ergodic. First, note that for every g∈ G we have ∫_G|g.∂/∂-1|dμ=∫_G|g.∂-∂|dλ. Then since g.∂-∂∈ L^1(G,λ) for every g∈ G, it follows from <cit.>) that (G^∗,μ^∗) is a nonsingular G-space. §.§.§ Part 1 Look at the dissipative part of (G^∗,μ^∗), that we denote 𝒟^∗:={ p∈ G^∗:∫_G∇_g^∗(p)dλ(g)<+∞}. Let us show that 𝒟^∗ is Π-invariant. Find 0<α<1 with α≤inf∂<sup∂≤α^-1. For every π∈Π consider the function υ_π:G^∗→ℝ_>0, υ_π(p)=α^#(Supp(p)∩Supp(π))=α^N_Supp(π)(p), where P is the support of p and Supp(π) is the support of π. Since Supp(π) is compact, for μ^∗-a.e. p∈ G^∗ we have N_Supp(π)(p)<+∞. By the formula (<ref>) we deduce ∇_g^∗(π.p)/∇_g^∗(p) = ∏_x∈Supp(p)∩Supp(π)∇_g(π(x))/∇_g(x)∈[υ_π(p)^4, υ_π(p)^-4]. This readily implies the Π-invariance of 𝒟^∗, and since Π acts ergodically we deduce that (G^∗,μ^∗) is either totally dissipative or conservative. §.§.§ Part 2 We use Corollary <ref> of the Random Ratio Ergodic Theorem in order to establish the following property: Assuming that (G^∗,μ^∗) is conservative, then for every π∈Π there is a function Υ_π:G^∗→ℝ_>0 such that every nonnegative function φ∈ L^1(G^∗,μ^∗) and μ^∗-a.e. p∈ G^∗, Υ_π(p)·𝔼(φ|Inv_G(G^∗))(p)≤𝔼(φ|Inv_G(G^∗))(π.p)≤𝔼(φ|Inv_G(G^∗))(p)/Υ_π(p) In fact, we will show that Υ_π(p):=υ_π(p)^16=α^16· N_Supp(π)(p) works. Let φ∈ L^1(G^∗,μ^∗) be some nonnegative function. By a standard approximation argument, in order to establish (<ref>) it is sufficient to assume that φ is uniformly continuous in the vague topology on G^∗, which induces its Borel structure. For such φ, for every π∈Π one can easily verify that for all f∈ C_0(G) (i.e. continuous and vanishes as g→∞), ∫_Gf(gπ(x))dp(x)-∫_Gf(gx)dp(x)→0 as g→∞. That is to say, d(g.π.p,g.p)→0 as g→∞, where d denotes the metric of the vague topology on G^∗. Then by the uniform continuity of φ we obtain φ(g.π.p)-φ(g.p)→0 as g→∞ uniformly in p. Using Corollary <ref>, there exists a Følner sequence S_0⊂ S_1⊂… of G, corresponding to ℒ={φ}, whose ergodic averages A_nφ(p):=∫_S_n∇_g^∗(p)φ(g.p)dλ(g)/∫_S_n∇_g^∗(p)dλ(g), p∈ G^∗, n∈ℕ, satisfy that lim_n→∞A_nφ(p)=𝔼(φ|Inv_G(G^∗))(p) for μ^∗-a.e. p∈ G^∗. From (<ref>), for μ^∗-a.e. p∈ G^∗ and every n∈ℕ we have 0.9υ_π(p)^16·∫_S_n∇_g^∗(p)φ(g.π.p)dλ(g)/∫_S_n∇_g^∗(p)dλ(g)≤ A_nφ(π.p)≤υ_π(p)^-16·∫_S_n∇_g^∗(p)φ(g.π.p)dλ(g)/∫_S_n∇_g^∗(p)dλ(g). Recall that by the conservativity of the action of G on (G^∗,μ^∗) we have that lim_n→+∞∫_S_n∇_g^∗(p)dλ(g)=∫_G∇_g^∗(p)dλ(g)=+∞, so using (<ref>) we deduce that lim_n→∞|∫_S_n∇_g^∗(p)φ(g.π.p)dλ(g)/∫_S_n∇_g^∗(p)dλ(g)-A_nφ(p)| ≤lim_n→∞∫_S_n∇_g^∗(p)|φ(g.π.p)-φ(g.p)|dλ(g)/∫_S_n∇_g^∗(p)dλ(g)=0. Finally, when taking the limit as n→+∞ in (<ref>), together with (<ref>) we obtain (<ref>) for the function Υ_π(p)=υ_π(p)^16. §.§.§ Part 3 We finally deduce that if (G^∗,μ^∗) is conservative then it is ergodic. If E⊂ G^∗ is a G-invariant set with μ^∗(E)>0, from (<ref>) for the indicator φ=1_E we obtain that 1_E(π.p)=1_E(p) for every π∈Π and μ^∗-a.e. p∈ G^∗, thus E is a Π-invariant. 
Since Π acts ergodically we deduce μ^∗(E)=1. §.§ Acknowledgments We would like to express our gratitude to Michael Björklund for many fruitful discussions regarding this work, and to Zemer Kosloff for suggesting that an uncountable version of Danilenko's Random Ratio Ergodic Theorem should be true, and for providing us with useful comments and corrections.
http://arxiv.org/abs/2409.02749v1
20240904142803
Doping-Induced Enhancement of Hydrogen Evolution at MoS2 Electrodes
[ "Sander Ø. Hanslin", "Hannes Jónsson", "Jaakko Akola" ]
cond-mat.mtrl-sci
[ "cond-mat.mtrl-sci", "cond-mat.mes-hall", "physics.chem-ph", "physics.comp-ph" ]
§ INTRODUCTION MoS2 has emerged as a promising candidate among earth-abundant compounds to replace the precious metal catalysts traditionally used for the hydrogen evolution reaction (HER) <cit.>. Its two-dimensional layered nature allows for novel engineering on the nanoscale, for example by maximizing the presence of edge sites, which display higher activity than the basal plane. Transition metal doping has been successful in improving the reaction rates <cit.>, but different and sometimes conflicting results raise interesting questions regarding the underlying activation mechanism. For example, Deng et al. <cit.> found significant reduction of the HER overpotential with Pt-doping of few-layer MoS2 samples, while Co and Ni showed signs of weak activation and deactivation, respectively. Lau et al. <cit.> found that Co-doping led to activation with respect to undoped MoS2, while Fe, Ni, and Ag were detrimental for the electrochemical current density. Wang et al. <cit.> found reduction in the HER overpotential for Fe, Co, Ni, Cu in vertically aligned MoS2, while Humphrey et al. <cit.> conversely found that Co-doped MoS2 displayed a larger overpotential on basal-oriented MoS2. In light of these rich properties of doped MoS2, it is of interest to determine which mechanisms are responsible for the experimentally observed activation or deactivation. The experimental situation is complex, and the doping, especially at high levels, may lead to significant changes in the sample morphology, or other large-scale modifications such as phase transitions, phase separation or formation of impurity particles or clusters on the MoS2 substrate. Such effects are relevant for catalyst performance, but constitute a regime which is challenging to assess theoretically due to the wide scope. In this work, we therefore limit our focus to the relatively low concentration doping regime, considering single- or few-atom impurities in the 2H phase of MoS2. We note that multi-elemental codoping is also a promising approach towards increasing MoS2 activity <cit.>; however, such synergistic effects are not explored herein. Activation with respect to HER is often attributed to improved values of the H-adsorption free energy (Δ G_H) on both the basal plane and sulfur-terminated edges. However, theoretical works have shown that the nature of hydrogen evolution via Mo-sites and S-sites differs, as in the latter case a large Heyrovsky activation energy must be overcome <cit.>. This challenges the common descriptor-based view, as metal-bound hydrogen contributes to evolution more readily despite the near-thermoneutral adsorption in both cases. Thus, improved HER via activation of sulfur-sites is an unlikely mechanism. A further implication is that calculations of activation energy are necessary to predict the HER performance at different dopant sites. Hereby, we investigate the consequences of transition/noble metal impurities (Co, Ni, Pt) on the HER kinetics through theoretical reaction modelling at the atomic scale. Our starting point is to consider the active sites of the undoped edge- and basal-oriented MoS2, which are respectively the sulfur-depleted Mo-edge (Mo_0) and S-vacancies, and their modification via doping.
Overall, the results demonstrate how the presence of impurities can lead to reduction in the overpotential, and provide insight into the role of dihydrogen intermediates. § METHODS The electronic structure calculations were performed at the level of density functional theory (DFT) within the plane-wave formulation implemented in the Vienna ab initio simulation package (VASP) <cit.>, with core electrons described by the projector-augmented wave approach <cit.>. All calculations were spin-polarized, with plane wave basis sets up to 400 eV and 3×3×1 Monkhorst-Pack sampling of the Brillouin zone. Valence electrons for transition metals include the outer s- and d-electrons. The revised Perdew-Burke-Ernzerhof exchange-correlation functional <cit.> was used, along with the D3 dispersion corrections <cit.>. Transition states for elementary reaction steps were approximated by first-order saddle points on the potential energy surface, obtained by the climbing-image nudged elastic band method <cit.>. Minima and saddle points were optimized with force convergence criteria of 0.02 eV/Å and 0.05 eV/Å, respectively. The electrolyte is represented by an explicit Eigen cation water cluster, as well as an implicit polarizable continuum model as implemented in VASPsol <cit.>. For the final kinetic evaluations, the grand-canonical reaction and activation energy is evaluated so as to account for the constant electrode potential during the electrochemical reaction. This is done by allowing the number of electrons in the system to vary, implicitly tuning the electrode potential <cit.>. The grand-canonical energy is then defined as Ω = E_n + δ n e Φ where E_n is the DFT energy with n electrons, δ n = n-n_0 is the difference in number of electrons from the neutral state, e is the elementary charge and Φ is the electrode potential, given by the effective work function. In the following, we refer the electrode potential to the standard hydrogen electrode (SHE), defining U=Φ-Φ_SHE with Φ_SHE = 4.43 V. During the grand-canonical evaluation, the saddle point structure representing the transition state is kept fixed. The basal plane is modelled as a single monolayer, represented by a periodically repeating 5×5 MoS2 unit cell (75 atoms excluding adsorbates and solvent species). The edge model consists of alternating layers terminated with 50 % and 0 % S-coverage on the S- and Mo-edges, respectively, resembling a slab of vertically aligned 2H-MoS2 sheets. Each layer is 3×4 MoS2 unit cells for reaction calculations, and 5×4 for formation energy and adsorption calculations (64 and 112 atoms per simulation cell). Single transition metal atoms are introduced as Mo-substitutional impurities to these model systems, yielding a basal plane doping concentration of 4 % in terms of Mo-substitution, and 25 % or 100 % substitution of the Mo edge. The inset in Figure <ref> shows a top-view of the two simulation cells. Formation energy calculations are performed via the cohesive energy E_c = E_tot - ∑_iE_i n_i, where the sum goes over unique atomic species i with energy E_i and of quantity n_i in the simulation cell. The free energy of hydrogen adsorption in the gas phase is used for initial characterization of different adsorption sites, and is given by Δ G = E_nH-E_(n-1)H - 1/2 E_H_2+ Δ E_zpe - TΔ S, where E_nH is the DFT energy of a state with n adsorbed H-atoms and E_H_2 is the DFT energy of the H2 molecule. Δ E_zpe denotes the difference in zero-point energy upon adsorption, and is determined by vibrational analysis. 
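As a concrete illustration of these definitions, the short sketch below assembles the grand-canonical energy Ω, the electrode potential referenced to the SHE, and the adsorption free energy Δ G from DFT outputs. All numerical inputs are hypothetical placeholders, not values from this study.

```python
# Minimal sketch of the energy bookkeeping described above.
# Energies in eV; the elementary charge is absorbed by working in eV per electron.
PHI_SHE = 4.43  # work function of the standard hydrogen electrode, eV

def grand_canonical_energy(E_n, delta_n, work_function):
    """Omega = E_n + delta_n * e * Phi, with delta_n = n - n0."""
    return E_n + delta_n * work_function

def electrode_potential(work_function):
    """U = Phi - Phi_SHE, i.e. the potential vs. the standard hydrogen electrode."""
    return work_function - PHI_SHE

def adsorption_free_energy(E_nH, E_prev, E_H2, dE_zpe, T_dS):
    """Delta G = E_nH - E_(n-1)H - 1/2 E_H2 + Delta E_zpe - T*Delta S."""
    return E_nH - E_prev - 0.5 * E_H2 + dE_zpe - T_dS

if __name__ == "__main__":
    # Placeholder numbers standing in for DFT outputs:
    omega = grand_canonical_energy(E_n=-540.12, delta_n=1.4, work_function=4.83)
    U = electrode_potential(work_function=4.83)
    # T*dS is negative for adsorption (loss of gas-phase entropy), so -T*dS adds ~+0.2 eV.
    dG_H = adsorption_free_energy(E_nH=-543.50, E_prev=-540.12, E_H2=-6.77,
                                  dE_zpe=0.05, T_dS=-0.20)
    print(f"Omega = {omega:.2f} eV, U = {U:.2f} V vs. SHE, dG_H = {dG_H:.2f} eV")
```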
The entropic contribution TΔ S is estimated by the standard entropy of the H2 molecule, effectively neglecting the entropy of the adsorbed state. The concentration of sulfur vacancies and hydrogen adatoms directly depends on the electrochemical conditions, specifically the pH, electrode potential U as well as the presence of sulfur species in solution. We assume that desulfurization occurs via electrochemical H2S production, so that the sulfur chemical potential is μ_S = μ_H_2S - 2(μ_H^+ + μ_e^-), where the chemical potential of the proton-electron pair in solution is defined via the computational hydrogen electrode approach <cit.>: μ_H^+ + μ_e^- = 1/2μ_H_2 - eU + k_BT ln a_H^+, where pH=-log_10 a_H^+. As for H2, μ_H_2S is given by the DFT energy, zero-point energy and standard entropy. Acidic conditions (pH=0) and a partial H2S pressure of p_H_2S/p^∘_H_2S = 10^-8 are assumed. These definitions of sulfur and proton-electron pair chemical potentials are used to evaluate relative energies of hydrogenated sulfur vacancies on the basal plane, as a function of electrode potential. A kinetic model is used to evaluate the relative reaction rates of the different configurations, and construct simulated polarization curves. We consider one site at a time, and separately evaluate the Volmer-Tafel and Volmer-Heyrovsky pathways. Assuming constant concentration of solvated species, the state space is given by the possible surface hydrogen configurations for the given site (0 or 1 H for the Heyrovsky pathway, 0, 1 or 2 H for the Volmer-Tafel pathway). Transitions between different microstates occur via the elementary Volmer, Heyrovsky and Tafel steps, and reverse reactions involving H2 are neglected, i.e. H2 is considered to be removed from the local system as soon as Tafel or Heyrovsky combination has occurred. A general transition rate from microstate a to b via path s is given by r^s_ab = θ_a k^s_ab, with θ_a being the occupation of state a. Within transition state theory, the associated rate constant takes the form k^s_ab = k_BT/h e^-Ω^(s)_ab / k_B T, where k_B is the Boltzmann constant, h is the Planck constant, T is temperature, and Ω^(s)_ab is the grand-canonical activation energy, i.e. the energy of the transition complex along s connecting the a and b minima, referred to that of the state a. T=300 K is used in rate calculations. To account for rearrangement of the solvent beyond those included in the simulation, a minimum limit of 0.2 eV is enforced for the Volmer and Heyrovsky forward/reverse barriers. The total transition rate from a to b is then a sum over the available pathways, r_ab = ∑_s r^s_ab. Solving for the steady state condition, where the occupation of each state is constant, the steady state current density due to electron transfer in the forward (+) and reverse (-) Volmer (v) and Heyrovsky (h) steps is given by j = -e/A∑_ab( r_ab^v(+) - r_ab^v(-) + r_ab^h(+)), with the elementary charge e and effective area per site A. j is voltage-dependent via the grand-canonical activation energies, and with this knowledge one can construct theoretical polarization curves. The area per site is here given by the number of sites (1 for basal plane, 1-4 for edges) and lateral dimensions of the chosen simulation cell. Our goal is not a direct comparison of the absolute magnitude of the current density with experiments, but nonetheless we should note that the edge content may vary considerably in experiments, depending on the sample morphology. The Mo_0 edge content of this simulation cell is ca. 
0.8 nm/nm^2, which is of comparable magnitude to experimental values <cit.>. As the current density is proportional to the site density, the exponential nature of the polarization curve somewhat mitigates the importance of the chosen area on determining the overpotential. When discussing the theoretical overpotential η in the following, we refer to the magnitude of the negative electrode potential at which j exceeds the arbitrary threshold of 10 mA/cm^2. § RESULTS AND DISCUSSION Considering first the relative free energy of the impurity formation (Figure <ref>), we find that substitution on the edges is the most favored thermodynamically for all three metals, and especially for the Mo_0 edge. Thus, it is more likely to occur on edge-locating Mo. Further, interaction with sulfur-vacancies stabilizes the basal plane impurity. Defining chemical potentials from the respective bulk metal phases, the absolute basal plane substitution energies (zero-levels in Figure <ref>) are 3.8 (Co), 4.3 (Ni), and 4.5 eV (Pt). However, conclusions regarding absolute stability warrant a more thorough thermodynamic study, considering other possible reference phases in appropriate conditions. Hence we consider the relative formation energy to evaluate the different sites for each impurity metal. In this regard, Co, Ni, and Pt all show similar behavior. Based on these energies one might expect a significant edge-substitution at thermodynamic equilibrium, if no phase separation occurs. As indicated by horizontal lines in Figure <ref>, the formation energy per impurity atom remains low in the case of complete edge substitution, suggesting that a complete filling of the Mo_0 edge is more favorable than occupying other sites. Moving forward, we assume from this that the doped Mo_0 edge is present in the alternating vertical layer model. Note that the selection of the Mo_0 and S_50 edges qualitatively corresponds to sulfur-poor conditions, and that the selectivity of the substitution in general depends on the chemical potential of sulfur, as well as the impurity element <cit.>. The differences in substitution energy between basal plane and edges are in good agreement with those of Ref. , where it was also found that doping of the Mo_0 edge was significantly more favorable than with 50 % or 100 % terminating S-coverage. The presence of impurity atoms on the Mo edge would then stabilize the sulfur-depleted configuration to some degree. §.§ Doping of vacancies on basal plane Looking further into the interaction between S-vacancies and the impurity metal, we note first that the single vacancy is stabilized by almost 2 eV if situated next to an impurity atom in comparison with the undoped basal plane. This is in good agreement with an earlier theoretical work <cit.>. In thermodynamic equilibrium the impurities will then be accompanied by S-vacancies. Considering higher levels of S-deficiency and interaction between vacancies, neighboring vacancy pairs are weakly favored over dispersed configurations by ca. 0.1 eV on the undoped MoS2 basal plane, see Figure <ref>a. In the presence of impurity atoms, however, the neighboring configuration is strongly favored by ∼1 eV for all dopants. Thus, the impurities enable neighboring vacancy configurations to a larger degree. Further, for the third vacancy a cluster-like configuration is favored for Ni- and Pt-doping, while Co-doped and undoped MoS2 favor linear vacancy distributions, see Figure <ref>b. 
The cluster triple vacancy configuration fully exposes a metal atom and is likely of interest for HER. Overall, the vacancies and impurity atoms are co-confining. On undoped MoS2, hydrogen atoms can adsorb in the single vacancy site with a near-zero change in free energy. The multiple-vacancy configurations and impurity atoms modify the adsorption, as shown in Figure <ref>. Notably, Ni and Pt do not modify adsorption onto the single vacancy (V_S) significantly, while Co leads to a more favorable adsorption. In the doped vacancy, the second adsorption is significantly less favorable, but the adsorption configuration consists of H2* adsorbed onto a Mo-atom, which suggests a possible Volmer-Tafel pathway. The importance of dihydrogen intermediates has been established for single-atom catalysts <cit.>, and similar trends may be relevant on the local doping-induced structures considered in this work. In this specific case the intermediate seems too unstable to support an efficient pathway. In the double-vacancy (V_2S), adsorption becomes stronger by doping, and these configurations are likely not of interest for hydrogen evolution. In the clustered triple vacancy configuration (V_3S), undoped MoS2 has a near-thermoneutral adsorption onto the top-site once the more favorable vacancy sites are filled. The presence of an impurity atom makes the top site less favorable. A bridge-configuration is preferred instead, but is still high in energy. Notably, a fifth hydrogen atom may adsorb (unfavorably) onto the undoped top site to form a dihydrogen complex. On the doped structures such a complex would dissociate, but even then the adsorption is too endothermic. Overall, the doping seems to bring the vacancy sites out of the thermoneutral regime for hydrogen adsorption. Once the MoS2 system is subjected to electrochemical HER conditions, the potentially stabilizing effect of hydrogen adsorption is crucial in determining which vacancy configurations are formed. Under desulfurizing conditions (pH=0 and p_H_2S/p^∘_H_2S = 10^-8), Figure <ref> shows the free energy of hydrogenated vacancy configurations in doped and undoped MoS2. Single and double vacancy formation is favored at small applied negative voltages for the doped systems, while the triple vacancy is preferred below ca. U=-0.35 to -0.40 V. In undoped MoS2, the fivefold hydrogenated triple vacancy configuration surpasses the pristine basal plane around -0.55 V. At larger negative potentials, higher vacancy concentrations would be preferred, but importantly we note from these results that due to favorable hydrogen adsorptions, the clustered triple vacancy is favored over the linear or zig-zag variants in all systems. In the two latter cases, the adsorptions are higher in energy, more similar to those on the double vacancy. Given that the system can equilibrate under these conditions, we thus expect the clustered triple vacancy to be present (and favored) even in undoped MoS2, and going forward we will consider the cluster vacancy for all systems. Tsai et al. <cit.> observed clustered vacancy configurations after electrochemical desulfurization of basal-oriented (undoped) MoS2. Further, their observed onset in reduction of the S:Mo ratio between U=-0.5 and U=-0.6 V vs. RHE coincides well with the stable region of V_3S for the undoped system in Figure <ref>. Next we consider the single, double and triple (cluster) vacancies, and obtain the grand-canonical reaction- and activation energies via reaction modelling as outlined in the Methods section. 
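Before examining the individual sites, the sketch below illustrates how the steady-state kinetic model of the Methods section turns activation energies into a polarization curve and an overpotential, for a single site with a two-state Volmer-Heyrovsky cycle. The barrier values at U = 0 and their assumed linear dependence on potential (symmetry factor beta) are illustrative placeholders; in the actual workflow the grand-canonical barriers are evaluated explicitly at each potential.

```python
import numpy as np

KB = 8.617e-5         # Boltzmann constant, eV/K
H_PLANCK = 4.136e-15  # Planck constant, eV*s
T = 300.0             # K
E_CHARGE = 1.602e-19  # C

def rate_constant(barrier_eV):
    """Transition-state-theory rate k = (kB*T/h) exp(-Omega/(kB*T)),
    with the 0.2 eV minimum barrier used to mimic solvent reorganization."""
    barrier = max(barrier_eV, 0.2)
    return (KB * T / H_PLANCK) * np.exp(-barrier / (KB * T))

def current_density(U, area_cm2=1.0e-15, volmer_f0=0.55, volmer_r0=0.65,
                    heyrovsky_f0=0.75, beta=0.5):
    """Steady-state Volmer-Heyrovsky current (A/cm^2) for one site.
    Barriers at U = 0 and the linear beta*e*U shift are illustrative assumptions."""
    kv_f = rate_constant(volmer_f0 + beta * U)   # U < 0 lowers the cathodic barriers
    kv_r = rate_constant(volmer_r0 - beta * U)
    kh_f = rate_constant(heyrovsky_f0 + beta * U)
    # Occupations from d(theta1)/dt = 0 with theta0 + theta1 = 1:
    theta1 = kv_f / (kv_f + kv_r + kh_f)
    theta0 = 1.0 - theta1
    electrons_per_s = theta0 * kv_f - theta1 * kv_r + theta1 * kh_f
    return -E_CHARGE * electrons_per_s / area_cm2

potentials = np.linspace(0.0, -0.6, 121)
j_mA = [1e3 * current_density(U) for U in potentials]   # mA/cm^2
eta = next((-U for U, j in zip(potentials, j_mA) if abs(j) > 10.0), None)
print(f"Overpotential at |j| > 10 mA/cm^2: {eta:.2f} V" if eta else "threshold not reached")
```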
For the single-vacancy, a Volmer-Heyrovsky mechanism via the first hydrogen is preferred, despite the Tafel-relevant geometry upon double adsorption. The energy diagram for this mechanism (see Figure <ref>a), shows that the Heyrovsky barrier is significantly reduced by doping. The simulated polarization curves are presented in Figure <ref>b. As expected from the energy diagram, all dopants activate the single-vacancy with respect to the undoped case, reducing the required overpotential by 0.3-0.4 V. On the double vacancy configuration, the Volmer-Heyrovsky pathway is preferred, but only Co shows any significant activity. The Heyrovsky barriers are larger than in the single vacancy due to the more favorable adsorption, leading to the current onset at a more negative potential. Interestingly, the Heyrovsky barrier becomes smaller than the Volmer barrier at larger negative potentials, and continues to scale while the Volmer barrier stagnates. This leads to the flat shape of the Co-MoS2 polarization curve, as the (more) rate-determining step is not being significantly reduced. For reference, calculations of the Tafel mechanism on these doped double-vacancy configurations showed that the barrier for Tafel combination is very large (2 eV upwards) due to repulsion from the impurity atom, rendering this mechanism irrelevant. The double vacancy thus strictly inhibits evolution in both doped and undoped cases. For the triple vacancy, different mechanisms are supported in doped and undoped systems. A Volmer-Volmer-Tafel mechanism via a dihydrogen intermediate is possible in the undoped case, while the endothermic H-adsorption of the doped systems favors a direct Volmer-Heyrovsky mechanism (see Figure <ref>a). The corresponding energy diagrams are shown in Figure <ref>b, where the required activation energy is significantly smaller in the undoped case. This leads to the low overpotential in Figure <ref>b. The Co- and Pt-systems display very similar overpotentials compared to the single vacancy case, while the Ni-doped system is limited by a large Volmer-barrier and is deactivated in this configuration. Importantly, formation of a dihydrogen complex enabled a more efficient reaction path, which is further explored in the following section on edges. The improved kinetics of the clustered triple vacancy on undoped MoS2 has some important implications. Due to the large overpotential required on the undoped single vacancy, triple vacancies would form before this point according to the analysis in Figure <ref>, indicating that the origin of basal plane activity is not only any S-vacancy, but specifically the undercoordinated Mo-atoms in threefold vacancy configurations. This is in contrast with the typical understanding that the single vacancy itself enables evolution due to its near-thermoneutral hydrogen binding. Comparing again with work by Tsai et al. <cit.>, the onset of reduced S:Mo atomic ratio around U=-0.6 V vs. RHE was accompanied by significantly increased HER activity. This aligns well with the understanding provided by our results, namely that the fully exposed Mo atom is the active site on the basal plane. This configuration is generated by the same conditions that drive the hydrogen evolution, and connection with the initial vacancy concentration and distribution in the sample may be elusive. Doping deactivates the triple vacancy (relatively for Co and Pt, and completely for Ni), which suggests that the basal plane is deactivated by doping overall. 
Doped single-vacancies are activated by a reduction of 0.3-0.4 V in the overpotential, and are present at lower voltages than in the undoped case. However, they are only stable at very small voltages, before they are replaced by doped double- and triple vacancies, both of which inhibit hydrogen evolution compared to the undoped triple vacancy. Experimental work by Humphrey et al. <cit.> found reduction in HER activity upon doping basal-oriented MoS2 with Co, in agreement with these conclusions. Rather than deactivating the single vacancy, our results suggest that this is due to deactivation of the triple vacancy. §.§ Doping of edges We have chosen to focus on the Mo_0 edge, as this is both the most kinetically active edge <cit.> as well as the most thermodynamically stable doping site. Two cases of edge doping levels are considered, 25 % and 100 %. At 25 % edge doping, the equilibrium H-coverage is reached at 1.25 monolayers where H is bound to Mo-atoms, avoiding the slightly higher energy impurity site. From there on, the next adsorption can occur onto the impurity atom or onto one of the Mo atoms, in most cases with a weakly positive free energy cost. The exception is adsorption onto the Ni-atom, which is unfavorable by ca. 0.5 eV, see Figure <ref>. At 100 % edge doping, the first adsorption is (by default) endothermic for Co and Ni, and 0 eV for Pt. However, interestingly, another hydrogen atom can be adsorbed onto the same dopant site, forming an adsorbed H2* complex. The adsorbed complex is stable for Co, and for Ni the second adsorption is nearly thermoneutral with respect to the first. On the contrary, the second adsorption is strongly unfavorable for Pt. The same trend can be seen on the impurity atom at the 25 % doping level. The affinity towards H* and H2* are thus qualitatively different for these metal impurities. We note that the trend of exothermic adsorption to form stable H2* following an initial endothermic H-adsorption is similar to that observed in single transition metal atoms supported on the MoS2 basal plane <cit.>. The H2* complex can also form on Mo-atoms, and in this case the adsorption is closer to thermoneutral also for the Ni- and Pt-systems. The formation of such H2* species is relevant when considering the possible reaction pathways for HER. Such species with near-thermoneutral binding are prime candidates for efficient intermediates. This also illustrates the importance of considering coverage effects and exploring the adsorption configuration space prior to modelling HER itself. The possibility of further H-adsorption leading to H2* complexes warrants revisiting the undoped Mo_0 edge, where the evolution was found to proceed through a Volmer-Heyrovsky pathway <cit.>. Interestingly, the incoming proton interacts favorably with the surface in the corresponding Heyrovsky transition state (TS). This attractive interaction stabilizes the transition complex, and leads to geometrically similar TS for the Volmer and Heyrovsky steps which are also close on the potential energy surface (Figure <ref>a). From the Volmer perspective, TS is stabilized by favorable H-H interaction in addition to the surface attraction. This also means that something resembling a H2* complex near the surface is part of the reaction for both the Heyrovsky and Volmer processes. Depending on whether the energetics favor adsorption of the H2* complex, H2 may release directly into solution or stay on the surface, (nominally a Heyrovsky or Volmer step). 
In the latter case, H2* may then desorb in a Tafel-like process at a later point, completing HER. In the following we refer to these mechanisms as Volmer-Heyrovsky and Volmer-Volmer-Tafel, although the Tafel process only involves desorption, not H-H combination. Figure <ref>b displays the grand canonical energy diagrams at U=-0.2 V for the undoped Mo_0 edge at a hydrogen coverage θ_H = 1.25, starting after the first Volmer step (identical for all cases). The H2* intermediate is seen to proceed with a very small Tafel barrier after adsorption. However, the direct H2-release via a Heyrovsky process is slightly preferred due to the smaller barrier, and proceeds at a lower overpotential, as shown in Figure <ref>c. Note that this distinction may be sensitive to computational specifics, e.g. choice of exchange-correlation functional and dispersion corrections. The Volmer-Tafel process through the H2* intermediate (A) is also compared to that of the alternative 2H* intermediate (B). The 2H* configuration is more stable, but also requires a large Tafel barrier to combine, leading to detrimentally worse overall kinetics. The difference between these two Tafel paths illustrates that a more thermoneutral intermediate is not always associated with better kinetics, as possible pathways and barriers must be considered. The presence of these similar transition- and intermediate complexes suggests that the potential energy surface is quite flat in this immediate neighborhood of the configuration space, encompassing configurations of H2* to H2 via water-surface complexation with comparable energies. Based on the findings above and considering the adsorption energetics, we map out the Volmer-Heyrovsky and Volmer-Volmer-Tafel pathways through the Mo- and dopant-sites. The polarization curves are presented in Figure <ref>, where we consider hydrogen coverages immediately below and above the equilibrium coverage. The H atoms partaking in evolution are colored blue in the right panels. For both mechanisms, the kinetics on the Mo-site farthest from the impurity atom (Mo^1) are rather similar, which is to be expected. Closer to the dopant atom the behavior is more different. The Volmer-Heyrovsky mechanism is deactivated on both Mo-sites, but activated on the dopant site for 25 % of Co and Pt impurities. The Ni-site is severely Volmer-limited due to the large adsorption energy. At 100 %, all systems are moderately deactivated for Volmer-Heyrovsky, compared to the pristine Mo edge. The kinetics at 25 % match the expectations set by adsorption energies, but this is not the case at 100 %, where Pt performs worse despite Δ G = 0 eV. The Mo-sites also perform worse despite near-zero adsorption energies, demonstrating that the interaction between the protonated water cluster and electronic states of the hydrogenated surface at the transition state can not be inferred from the hydrogen binding energy alone. The Volmer-Volmer-Tafel process is seen to proceed at a reduced overpotential via the dopant site for Co (25 % and 100 %), and via the neighboring Mo-site for Ni and Pt (25 %), where the H2* complex is not favored on the impurity site. While considering these different possibilities in the context of experiment, the first remark is that the Mo_0 edge is significantly more active than the basal plane, with ca. 0.25 V lower overpotential in the undoped cases. Therefore, if the edge-content in experiment is not negligible (even small amounts of edge-sites will affect the effective overpotential significantly), e.g. 
in polycrystalline samples, it is reasonable to assume that the observed doping effect is due to modification of the edge sites. Within this regime, moderate doping levels of Co, Ni and Pt will enhance the edge activity. At high doping levels, Ni- and Pt-doping leads to deactivation while Co-doping still turns out beneficial. The activation due to Pt by Deng et al. <cit.> and Co by Lau et al. <cit.> leads to shifts in the overpotential of similar magnitude as those found here for the 25 % Mo_0 edge. The deactivation due to Ni by Lau et al. is also similar to what we find for the 100 % edge substitution. Deng et al. find also weak activation by Co, yielding a slightly larger overpotential than for Pt-MoS2, while we find the opposite trend in the theoretical activation. The reason for this is not clear, but we should note that fixed wt %-doping constitutes different degrees of atomic substitution for Co/Ni and Pt. Due to the difference in atomic mass, the distribution of Pt-impurities will be more than three times as dilute. Deng et al. use a lower doping level (1.7 wt %) than Lau et al. (3.0 wt %) and see a less pronounced Ni-deactivation. We speculate that this difference is related to the degree of edge-substitution, as it is consistent with the theoretical understanding developed in this work (activation for partial edge substitution, deactivation upon complete edge substitution). The experimental work that most resembles the edge model used here is that of Wang et al. <cit.> where vertically aligned MoS2 layers were doped with 3d metals Fe through Cu. The Co-doped sample contained ca. 22 at % Co on the edge, decaying with depth, and all dopants resulted in overpotential reduction on the order of 0.1 V, in excellent agreement with our findings for the 25 % edge substitution. Rather than activation of the S-edge, our results indicate that this activation can be explained by improved kinetics on the Mo_0-edge. It should be noted that any systematic errors due to choices in the theoretical model, e.g. the implicit solvation scheme, explicit interface description, exchange-correlation functional or other computational parameters, may manifest as shifts in predicted barriers and reaction rates. We assume that these errors are similar across the studied systems, so that relative comparison is valid. Comparison with experiment suggests that the absolute rates are also of reasonable magnitude, but we must acknowledge the possibility of errors due to neglected effects, as well as partial cancellation of these. Unlike the edges, our results show that the 2H-MoS2 basal plane is deactivated by Mo-substitutional doping. This doping mechanism cannot therefore account for the very low overpotentials observed in experiments. However, we can not disregard the possibility that the general basal plane may be activated by means of other mechanisms, notable examples being transition to the 1T phase or anchored single-atom impurities. Both the basal plane S-vacancies and the Mo_0 edge involve Mo-bound hydrogen with Δ G_H≈ 0, but the much faster kinetics on the edge (and the significant improvement from single to triple vacancy) emphasizes the significance of exposing undercoordinated metal atoms, and suggests that the resulting hydrogen binding is of different nature on the respective sites. The low coordination of the edge enables formation of the metal-bound dihydrogen complex. 
Such affinity towards dihydrogen chemisorption is typically associated with transition metal complexes <cit.>, as H2 on metal surfaces tends to either dissociate or physisorb <cit.> (dihydrogen may however bind to defects such as ridges <cit.> or form as transient states during Tafel combination <cit.>). This indicates that the Mo_0 edge represents a middle ground between undercoordination, enabling chemistry resembling a coordination compound, and maintaining the good electron transport and stability of a metallic surface, as can be a problem with e.g. supported single atom catalysts. The hapticity of H2 ligands on transition metal complexes is largely governed by π-backdonation from metal d-orbitals into the antibonding σ^*_H-H orbital, and equivalent mechanisms likely determine the H2* binding energy on the doped edges. From the few dopants studied herein one can at least note that dihaptic binding on the edge is less favored for dopants with a higher number of valence d-electrons (Co, Ni and Pt being respectively 3d^7, 3d^8 and 5d^9), though more in-depth analysis would be insightful in this regard. As the Kubas interaction fundamentally defines the energy landscape of the H2*↔2H* transition, tuning this interaction may be an important tool also in optimizing electrocatalyst performance. § CONCLUSIONS The effect of transition/noble metal doping (Co, Ni and Pt) on the activity of MoS2 towards HER was studied by performing grand-canonical DFT simulations (incl. solvent description) and theoretical reaction modelling which enabled construction of theoretical polarization curves. For the undoped basal plane, the active site responsible for intrinsic activity was found to be the central Mo atom in clustered threefold S-vacancies. Despite the single- and double-vacancies allowing near-thermoneutral H-adsorption, hydrogen evolution via a H2* complex occurs through a Volmer-Volmer-Tafel pathway on the triple vacancy with a much lower overpotential (η>1.1 V vs. η=0.52 V). The clustered triple vacancy is also energetically favored by HER conditions. Impurity atoms interact favorably with S-vacancies in mutually stabilizing configurations, enabling increased vacancy generation. However, H-intermediates on the impurity atom are high in energy, and the resulting Volmer-Heyrovsky pathway requires a larger activation energy. Therefore, all dopants seemingly deactivate the basal plane. Like the undoped triple vacancy, the edges display moderate affinity towards surface-bound H2* complexes in both doped and undoped cases. This enables an efficient Volmer-Volmer-Tafel process of evolution in which the H* combination occurs directly in the second Volmer step, and also influences the Volmer-Heyrovsky process by stabilizing the transition complex via attractive surface interaction. At moderate doping levels (25 % edge substitution), evolution proceeds with a reduced overpotential (0.1-0.2 V) for all dopants. At full edge substitution, Ni and Pt are deactivated relative to the undoped Mo_0 reference (η = 0.27 V), and only the Volmer-Volmer-Tafel process on the Co site remains more active, seemingly due to the more stable dihydrogen intermediate, which may further be related to the valence d-electrons via the Kubas interaction. 
The large discrepancy in performance between sites on the basal plane and edges despite similar thermodynamics of the H* intermediate illustrates the importance of explicitly including activation energy in kinetic estimations, as the free energy descriptor does not account for this difference. In experimental context, it suggests that observations of low overpotential in doped 2H-MoS2 are unlikely to be due to Mo-substitutional doping of the basal plane, and rather due to modification of the Mo edge. In summary, edge doping can greatly increase the HER activity of MoS2, and the best results are achieved by partial (Co, Ni, Pt) or full substitution (Co) of the Mo_0 edge. The different trends on MoS2 basal planes, low-, and high-level doped edges may help explain experimental observations of differing effect of transition and noble metal impurities. Further, the role of dihydrogen intermediates was identified, contributing towards understanding the chemical picture of hydrogen evolution in these systems. The behavior of transition metal dichalcogenide edges presents an intersection between surface- and coordination chemistry which seems essential for their role as efficient catalysts. Further understanding of this regime may guide the design of active yet stable catalysts. § ACKNOWLEDGEMENTS The calculations were performed on resources provided by Sigma2 - the National Infrastructure for High Performance Computing and Data Storage in Norway, project No. NN9497K. JA acknowledges financial support from the Academy of Finland, project No. 322832 ‘‘NANOIONICS’’. HJ thanks Prof. Jens Nørskov for numerous inspiring discussions and collaborations over the past 30 years, and acknowledges financial support from the Icelandic Research Fund, project No. 207283-053. § CONFLICT OF INTEREST There are no conflicts to declare.
http://arxiv.org/abs/2409.03440v1
20240905114226
Rx Strategist: Prescription Verification using LLM Agents System
[ "Phuc Phan Van", "Dat Nguyen Minh", "An Dinh Ngoc", "Huy Phan Thanh" ]
cs.CL
[ "cs.CL" ]
Rx Strategist: Prescription Verification using LLM Agents System Phuc Phan Van1, Dat Nguyen Minh1, An Dinh Ngoc1, Huy Phan Thanh1 University Collaboration with Li Jinghong2, Dong Yicheng2 1FPT University, Ho Chi Minh Campus, Vietnam 2Japan Advanced Institute of Science and Technology, Ishikawa Campus, Japan {phucpvse170209, datnmse170570, andnse171386, huypt24}@fpt.edu.vn, and {lijinghong-n, s2320035}@jaist.ac.jp September 9, 2024 ======================================================================== § ABSTRACT To protect patient safety, modern pharmaceutical complexity demands strict prescription verification. We offer a new approach - Rx Strategist - that makes use of knowledge graphs and different search strategies to enhance the power of Large Language Models (LLMs) inside an agentic framework. This multifaceted technique allows for a multi-stage LLM pipeline and reliable information retrieval from a custom-built active ingredient database. Different facets of prescription verification, such as indication, dose, and possible drug interactions, are covered in each stage of the pipeline. We alleviate the drawbacks of monolithic LLM techniques by spreading reasoning over these stages, improving correctness and reliability while reducing memory demands. Our findings demonstrate that Rx Strategist surpasses many current LLMs, achieving performance comparable to that of a highly experienced clinical pharmacist. In the complicated world of modern medications, this combination of LLMs with organized knowledge and sophisticated search methods presents a viable avenue for reducing prescription errors and enhancing patient outcomes. Medical Systems, Large Language Model, Question Answering § INTRODUCTION Verifying prescriptions is an essential stage in the healthcare process that guarantees patient safety and the best possible results from treatments. However, studies have shown that a significant proportion of prescribed dosages are erroneous. For instance, an analysis of medication errors in two urban public hospitals in Vietnam <cit.> found that roughly 40% of doses prescribed to patients were incorrect. Moreover, the availability of healthcare professionals, particularly in regions like Vietnam, is limited, exacerbating the issue.
According to the Ministry of Health [https://moh.gov.vn/documents/174521/1760801/3.++BC-BYT-+T%E1%BB%95ng+k%E1%BA%BFt+ng%C3%A0nh.pdf/481b5482-2b3c-4487-bfd6-20dd2601cb04Ministry of Health (MOH) 2023 Report], Vietnam has just 12.5 doctors and 3.2 graduate pharmacists for every 10,000 people. This shortage of qualified personnel underscores the urgent need for advanced systems capable of automating and enhancing prescription verification without relying heavily on human resources. Leveraging artificial intelligence (AI), particularly LLMs, as an assistant for healthcare providers offers a promising solution to mitigate prescription errors. AI-powered systems can rapidly analyze vast amounts of medical information, potentially identifying inconsistencies or potential issues with dosages, drug interactions, and contraindications. However, current LLM systems face challenges in achieving reliable performance in this domain. Notably, the limited availability of real-world clinical data for AI model training raises concerns about their generalization to heterogeneous patient populations and varied clinical scenarios. Moreover, many LLMs rely on memorization rather than deep medical reasoning <cit.>, making them susceptible to hallucinations or incorrect answers when faced with unfamiliar or complex cases. To address these challenges, we propose a novel LLM agent system designed specifically for prescription verification. Our system incorporates a sequence of specialized agents including 2 main tasks: indication verification and dose verification, each equipped with a unique combination of knowledge graphs, rule-based systems, and LLM components. This modular architecture enables a comprehensive analysis of each prescribed active ingredient, taking into account patient-specific information, the indicated condition, and established medical knowledge by combining the strengths of structured knowledge sources with the adaptability of LLMs. In addition, we introduce a specialized dataset focused on drug information and a novel methodology for knowledge retrieval. This dataset, combined with our retrieval approach, aims to enhance the system's robustness and overall performance. Chain-of-thought (CoT), introduced by <cit.>, marks a pivotal advancement in improving the reasoning abilities of text-generation models. CoT enables models to generate intermediate steps in their thought processes, mirroring human problem-solving techniques. Research by <cit.> further demonstrated that certain prompts, like "Let's think step-by-step," can naturally induce CoT reasoning in LLMs. These breakthroughs have laid the groundwork for ongoing research aimed at enhancing the reasoning capabilities of LLMs. To address the inherent knowledge limitations of LLMs, Retrieval Augmented Generation (RAG) has emerged as a prominent approach. RAG integrates LLMs with information retrieval systems, enabling them to access and utilize relevant external knowledge. This is typically achieved by embedding both the query and candidate documents in a shared vector space and then identifying the documents with the highest similarity to the query. Recent advancements in RAG have focused on improving retrieval accuracy. For instance, HyDE <cit.> enhances retrieval by generating a hypothetical answer to the query and then embedding it alongside the documents, allowing for a more nuanced comparison. 
Alternatively, Take a Step Back <cit.> aims to identify the foundational knowledge documents most relevant to the query, potentially leading to more accurate and comprehensive responses. Beyond traditional document retrieval, advanced RAG systems have increasingly turned to knowledge graphs (KGs) to improve retrieval accuracy and context understanding. KGs offer a structured representation of knowledge, capturing entities, relationships, and facts in a graph format. By integrating KG retrieval into RAG pipelines, researchers have been able to achieve several key advantages: * Structured Knowledge: KGs provide a structured representation of knowledge, enabling more precise retrieval and reasoning compared to unstructured text. * Semantic Understanding: KGs capture the semantic relationships between entities, allowing RAG systems to better understand the meaning and context of queries. * Multi-hop Reasoning: KGs can facilitate multi-hop reasoning, where the system navigates multiple relationships in the graph to answer complex questions. * Explainability: KGs can provide a transparent explanation of the reasoning process by highlighting the relevant entities and relationships used to generate a response. Recent works such as <cit.> and <cit.> have demonstrated the effectiveness of incorporating KG retrieval into RAG. These systems leverage KG embeddings, graph traversal algorithms, and graph neural networks to identify relevant entities and subgraphs within the knowledge graph, providing the LLM with richer context and enabling more accurate and informative responses. Beyond optimizing individual LLMs and RAG pipelines, many works integrate multiple LLMs and equip them with callable tools <cit.> to further boost performance. These works demonstrated that a multi-agent system, where individual agents specialize in different reasoning tasks and communicate via function calls, can outperform single-agent models on complex question-answering tasks. This architecture allows for a more modular and adaptable approach, where agents can leverage each other's expertise and collaboratively generate responses. As a result, LLMs can benefit from improved accuracy, a wider range of capabilities, and better handling of complex tasks that require multiple perspectives. In summary, our main contributions are as follows: (1) we introduce a novel system flow for prescription verification, leveraging a multi-agent LLM architecture combined with knowledge graphs and rule-based systems; (2) we provide a specialized dataset focused on drug information and a novel knowledge graph-based retrieval methodology that significantly enhances the performance of our system; and (3) our system demonstrates exceptional performance across various metrics against both model and human baselines, outperforming current LLMs and achieving parity with highly experienced clinical pharmacists. § DATASET Current inference approaches for LLMs, such as CoT and ReAct <cit.>, primarily rely on the model's response capabilities. However, these methods can be prone to hallucination and illogical reasoning. To mitigate these issues, we propose leveraging relevant reference materials, including drug and indication information, to guide LLM responses. Instead of assessing specific drugs independently, we delve deeper into the active ingredients, which are the specific compounds that make up a medication and are responsible for its therapeutic effects.
For instance, Losartan is primarily used to treat hypertension and to protect the kidneys from damage due to diabetes; it also helps relax blood vessels, making it easier for the heart to pump blood and reducing the risk of strokes and heart attacks [https://www.drugs.com/losartan.htmlLosartan information]. §.§ Data Collection §.§.§ Drug Information Sources For this task, we collected highly accurate drug information from reliable sources like Drugs.com and Long Chau Pharmacy. * Drugs.com: Information on over 1700 active ingredients in AHFS DI Monographs format, including medication indication, administration, dosage, and adverse effects (if available). * Long Chau Pharmacy: Vietnamese-language information on medicinal properties specific to Vietnam. Data from both sources was stored and retrieved as HTML documents, with each document corresponding to a specific active ingredient. The data was then chunked according to headers and saved in Markdown format for human readability. For evaluation and querying purposes, the Markdown documents were further processed into a structured JSON format. §.§.§ Standardizing Indication Terminology To address inconsistencies in medical indication terminology, we use AI models like GPT 3.5 to generate ICD-10 codes from the various indication terms, creating a unified language for identifying and classifying diseases (a minimal sketch of this mapping is shown below). This standardization allows healthcare professionals to quickly and easily identify the correct ICD-10 code, streamlining tasks such as medical billing, insurance claims processing, and research data analysis. In addition, accurate disease classification facilitated by standardized terminology can lead to improved diagnosis, treatment, and ultimately, patient outcomes. §.§.§ Drug Interaction Data We collected interaction data for 27 common active components, with detailed names listed in Appendix <ref>, from the Drugs.com interaction checker. The data include interaction level and detailed descriptions of their interactions with other components. By integrating this data, we aim to enhance the model's ability to identify and evaluate potential adverse effects, ensuring safer and more effective prescription recommendations. §.§ Expert-Approved Labels To ensure safe and accurate prescription verification, we leveraged the expertise of clinical pharmacists with varying levels of experience to establish a robust and reliable evaluation process. To assess the model's performance at different levels of human expertise, we enlisted three clinical pharmacists: a junior pharmacist with one year of experience, an intermediate pharmacist with three years of experience, and a senior pharmacist with five years of experience. Each pharmacist independently evaluated prescriptions, verifying the appropriateness of both the indication and dosage for each active ingredient. If an indication was deemed incorrect, the corresponding dosage was not evaluated. Our diligent human labeling process not only provided valuable insights but also optimized the evaluation of our automated verification model. A council of three highly experienced hospital pharmacists (with over five years of experience) provided the definitive assessment used as the gold standard for accuracy to which individual pharmacists' results were compared. This comprehensive evaluation strategy ensures that our system not only aligns with but surpasses industry standards, fostering trust in its ability to enhance patient safety and optimize medication management.
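The indication-standardization step referenced above can be sketched as follows. The query_llm helper, its prompt wording, and the cached example codes are hypothetical placeholders rather than the exact GPT-3.5 prompts and outputs used in this work.

```python
import re

# Small cache of already-standardized terms (illustrative examples only).
ICD10_CACHE = {
    "high blood pressure": "I10",
    "hypertension": "I10",
    "type 2 diabetes mellitus": "E11",
}

def query_llm(prompt: str) -> str:
    """Hypothetical stand-in for a GPT-3.5 call that returns an ICD-10 code."""
    raise NotImplementedError("plug in the actual LLM client here")

def standardize_indication(term: str) -> str:
    """Map a free-text indication term to an ICD-10 code, checking the cache first."""
    key = term.strip().lower()
    if key in ICD10_CACHE:
        return ICD10_CACHE[key]
    answer = query_llm(f"Return only the ICD-10 code for the indication: {term}")
    match = re.search(r"[A-Z]\d{2}(\.\d+)?", answer)
    if match is None:
        raise ValueError(f"no ICD-10 code found for '{term}'")
    ICD10_CACHE[key] = match.group(0)
    return ICD10_CACHE[key]

print(standardize_indication("Hypertension"))  # -> "I10" from the cache
```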
§.§ Data Quality Control For practical application and seamless integration into a database, the collected data underwent a rigorous preparation and quality control process. The initial Markdown format was converted into a structured JSON format for enhanced usability and compatibility. This transformation involved several key steps: * Removal of extraneous characters: Non-essential symbols, including tabs (\t) and daggers (†), were eliminated. Newline characters (\n) were retained to preserve content separation. * Hyphen standardization: Long and short hyphens were unified to ensure consistency. * Dosage unit conversion: Regular expressions were employed to convert dosage amounts expressed in micrograms (mcg) to the standard unit of milligrams (mg). * Restructuring of JSON hierarchy: Redundant layers within the JSON structure were removed to streamline data access and manipulation. §.§ Data Statistics The dataset includes 1780 active ingredients, each containing information about compatible age groups alongside usage, as shown in Figure <ref>. §.§.§ Age groups Close assessment and filtering of the dataset reveal that 1694 (95%) of all available active ingredients can be used by adults, while only 923 (52%) of them are prescribed to pediatric patients. This imbalance is understandable, as the number of illnesses tends to increase with a person's age. §.§.§ Total information in each active ingredient In order to measure how much information is described for each active ingredient, we calculate the number of words available in the "description" portion of the data. The result shows that the number of words ranges from 0 up to 7697, with the majority of active ingredient descriptions falling under 1000 words. This indicates a large variance across our collection of active ingredients. §.§.§ Drug versatility The visualization also includes a drug versatility measurement, which equals the total number of diseases one drug can cure, according to our dataset. On average, each drug can be indicated to cure 3 diseases, and a few of them reach up to 40. § METHODOLOGY Overview. The Rx Strategist, as illustrated in Figure <ref>, is composed of three agents, an information extractor, and a checker. These components interact based on fit status, a key concept in the system, with the fit of indication (FI) and fit of dosage (FD) playing a crucial role in identifying the active ingredients and connecting the different modules. Specifically, a prescription is first processed by using optical character recognition (OCR) to extract relevant information, which is then converted into features by an LLM. ICD Finder is an LLM component that identifies ICD-10 codes for the extracted active ingredients. ICD Matcher compares the identified ICD-10 codes with those in the patient's diagnosis, filtering the active ingredients to those that are suitable for the patient's condition as FI. Dosage Retriever determines the appropriate dosage of the active ingredient based on both FI and patient information. Finally, Checker utilizes both FI and FD to assess the overall validity of the prescription, providing explanations for its determination. §.§ Indication: Finding and Mapping To initiate the prescription verification process and provide the necessary input for the ICD Finder, we first extract relevant information from the prescription image. The image of a prescription is fed into an OCR model, returning a full-text format of the whole prescription.
Subsequently, this body of text goes through a reformatting phase using GPT-4o-mini, turning it into a dictionary-like format (with information such as age group, indication, and dosage) for easy extraction in the latter phase. ICD Finder. The ICD Finder receives a list of active ingredients in dictionary format. To address the challenge of inconsistent name representation, we employ a fuzzy matching algorithm to identify potential matches between the received active ingredients and the database entries. This fuzzy matching approach enhances the system's ability to recognize active ingredients with slight variations in naming conventions, thereby improving the accuracy of subsequent steps. Once potential matches are identified, the ICD Finder retrieves the corresponding usages for each active ingredient, indicating the diseases each active ingredient is designed to treat. Subsequently, the ICD Finder maps these diseases to their respective ICD-10 codes, establishing a set of ICD-10 codes associated with the prescription's intended therapeutic purposes. In instances where information about a specific active ingredient is unavailable within our database, we leverage the capabilities of an LLM to generate potential ICD-10 codes. The LLM is trained on a vast corpus of medical literature and clinical data, enabling it to infer potential ICD-10 codes based on the active ingredient's name, chemical structure, and known therapeutic uses. However, it is important to note that LLM-generated ICD-10 codes should be treated with caution and may require additional validation before being used for clinical decision-making. ICD Matcher. Leveraging the ICD-10 codes obtained by the ICD Finder, the ICD Matcher compares them against the ICD-10 codes derived from the patient's diagnosed conditions. The comparison is conducted at the category level of the ICD-10 code, which consists of an alphabetical letter followed by two numbers (e.g., A01, B15, C23). An active ingredient is labeled as "APPROPRIATE" if all ICD-10 categories associated with its usages are present in the patient's diagnosis codes, signifying that it aligns with the patient's medical needs. Conversely, if any ICD-10 category linked to an active ingredient's usage is absent from the patient's diagnosis codes, indicating a potential mismatch between the medication and the patient's condition, the active ingredient is classified as "INAPPROPRIATE". §.§ Dosage: Retriever with Knowledge Graph Due to the structured nature of our dataset and the possibility that LLMs might overlook subtle textual details <cit.>, we have developed a specialized text-to-graph processing approach. This method focuses on task-specific information extraction, improving the robustness of our model while minimizing the need for extensive fine-tuning. We extracted dosage information for each disease within specific age groups (e.g., pediatric, adult) and administration routes (e.g., oral, intravenous) from our dataset. Given the absence of detailed patient histories in our prescription data, we established the initial dosage specified in the drug information as the recommended baseline dose. Any prescribed dosage deviating from this baseline was flagged for further scrutiny, as it might necessitate individualized adjustments based on the patient's specific circumstances and medical history. 
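The decision rules of the ICD Finder and ICD Matcher described earlier in this section can be condensed into the following sketch. The difflib-based fuzzy matching, the similarity cutoff, and the toy database entries are illustrative assumptions rather than the exact implementation.

```python
import difflib

# Toy database: active ingredient -> ICD-10 codes for its approved usages (illustrative).
INGREDIENT_USAGES = {
    "losartan": ["I10", "N08.3"],
    "metformin": ["E11"],
}

def fuzzy_lookup(name, database, cutoff=0.85):
    """Match a (possibly misspelled) ingredient name against database keys."""
    hits = difflib.get_close_matches(name.strip().lower(), database, n=1, cutoff=cutoff)
    return hits[0] if hits else None

def icd_category(code):
    """Category level of an ICD-10 code: one letter followed by two digits."""
    return code[:3].upper()

def check_indication(ingredient, patient_diagnosis_codes):
    """Return 'APPROPRIATE' if every usage category of the ingredient is covered
    by the patient's diagnosis categories, else 'INAPPROPRIATE'."""
    key = fuzzy_lookup(ingredient, INGREDIENT_USAGES)
    if key is None:
        return "UNKNOWN"  # the full system falls back to the LLM here
    usage_cats = {icd_category(c) for c in INGREDIENT_USAGES[key]}
    patient_cats = {icd_category(c) for c in patient_diagnosis_codes}
    return "APPROPRIATE" if usage_cats <= patient_cats else "INAPPROPRIATE"

print(check_indication("Losartann", ["I10.9", "N08.3"]))  # fuzzy match -> APPROPRIATE
print(check_indication("Metformin", ["I10"]))             # E11 not covered -> INAPPROPRIATE
```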
To structure this information, we constructed a knowledge graph with nodes representing drugs, diseases, and dosages, and edges denoting the relationships between them (see Appendix <ref> for details). The process of generating this knowledge graph is outlined in Algorithm <ref>. Specifically, the algorithm iterates through each active ingredient in the dataset, then for each age group associated with that ingredient, and finally for each disease that the ingredient is indicated for within that age group. For each disease, the language model is used to generate dosage nodes and relationships, which are then added to the knowledge graph. This structured representation facilitates efficient retrieval and reasoning about dosage information, enhancing the accuracy and effectiveness of our prescription verification system. Dosage Retriever. A knowledge graph-based component, the Dosage Retriever efficiently retrieves appropriate dosages for verified active ingredients. It takes as input the active ingredients validated in the indication stage, along with patient-specific details like age group, age-specific factors (e.g., weight, kidney function), and the diagnosed condition the drug aims to treat. The system then navigates the knowledge graph, which is structured around relationships between drugs, diseases, and dosages. The retrieval process involves comparing the patient's information with the knowledge graph's nodes and edges. For instance, if the patient is a child diagnosed with a specific condition, the Dosage Retriever will traverse the graph to find the active ingredient node, follow the edge to the relevant disease node, and then identify the dosage node associated with the child age group. To enhance accuracy, a language model is integrated to standardize keywords and address inconsistencies in drug information representation. This ensures precise matching between the patient's data and the knowledge graph's information. If no exact match is found, the language model can suggest the closest available dosage information. Algorithm <ref> details the step-by-step process of dosage retrieval. It first identifies relevant diseases for the active ingredient and patient age group within the knowledge graph. If the diagnosed disease isn't directly linked, the language model assists in finding the closest match. The patient's age is then categorized into a specific age range, and potential dosages are retrieved based on the active ingredient, disease, and age range. If multiple dosage options exist, the system selects the most appropriate one based on the patient's specific age and any additional guidance from the language model. If no dosage information is available, the system indicates this lack of information. § EXPERIMENTAL SETTINGS §.§ Benchmarks To evaluate the performance of our system, we curated a dataset of 20 real-world prescriptions sourced from Vietnamese hospitals, with the patients being predominantly adults. To ensure patient privacy, all personally identifiable information, including names, addresses, and hospital details, was meticulously removed from the dataset. §.§ Baselines and Experimental settings Assessing our system’s capabilities reliably and comprehensively involved comparing its performance to a diverse set of benchmarks, including state-of-the-art language models (LLMs) and human experts. This multifaceted approach allowed for a robust evaluation across various metrics and perspectives.
For the LLM baseline, we utilized both open-source models (Qwen2 72B by Alibaba Group <cit.>, and the LLama3.1 family of models from 8 billion to 405 billion parameters by Meta <cit.>) and closed-source models (GPT4o-mini by OpenAI[<https://openai.com/index/gpt-4o-mini-advancing-cost-efficient-intelligence/>] and Claude 3.5 Sonnet by Anthropic[<https://www.anthropic.com/news/claude-3-5-sonnet>]). To evaluate human performance, we collected prescription evaluations from clinical pharmacists with 1 to 5 years of experience, establishing a real-world benchmark. All LLM evaluations employed a CoT prompting method, with their specific prompts detailed in Appendix <ref>. Open-source models were initially configured with a temperature of 0, top-p of 0.7, and top-k of 50 for all models except LLama 3.1 8B. Additionally, we observed a tendency for LLama 3.1 8B to omit answers and duplicate words for certain prescription types. To address this, we adjusted the temperature for LLama 3.1 8B to 0.2 or 0.3 for prescription inference, and to 0.5 for interaction query summarization, encouraging a more exploratory approach and potentially leading to more comprehensive responses. Closed-source models were utilized with the default settings provided by their respective developers. Our system utilizes GPT4o-mini as the underlying language model due to its availability, ease of integration, and strong performance in preliminary tests. §.§ Evaluation Metrics To assess the performance of our system, we use a variety of evaluation metrics, including accuracy, precision, recall, and the F-0.5 score. For each prescription, we compare the set of active ingredients predicted by our system against the corresponding gold standard set (i.e., the ground truth active ingredients). Basic metrics such as accuracy, while reliable for most tasks, are not the best indicator of good performance for this task. More specifically, cases of inappropriate elements being flagged as appropriate are considered harmful to patients, as they create a false belief that their prescription is accurate when it is not. On the other hand, the opposite case of appropriate elements being flagged as inappropriate is less serious, as it only requires a closer assessment by professionals. Therefore, we introduce additional metrics that place greater emphasis on the harmful case. The list of metrics we use to assess performance is the following: * Accuracy: the proportion of correct predictions over all samples. * Precision: the proportion of predicted appropriate elements that are correct. * Recall: the proportion of actual appropriate elements that are correctly predicted. * F-0.5 Score: a variant of the F-β score with β=0.5, measuring the weighted harmonic mean of precision and recall. Instead of the traditional F1-score, which balances precision and recall, this variant weights precision more heavily, with the purpose of minimizing the harmful case of inappropriate elements being classified as appropriate (a short computational sketch of these metrics is given below). § RESULTS Table <ref> presents a comprehensive comparison of our system's performance against human labels, open-source models, and closed-source models. The results demonstrate that our system surpasses nearly all current LLMs, achieving a level of knowledge comparable to that of a clinical pharmacist with 5 years of experience, as shown in Figure <ref>. In particular, our system outperforms GPT4o Mini by 5.56% and Claude 3.5 Sonnet by 3.71% in terms of accuracy among the closed-source models.
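As a minimal computational sketch of the metrics defined above (assuming, per those definitions, that APPROPRIATE is treated as the positive class), the helper below computes accuracy, precision, recall, and the F-0.5 score from per-ingredient labels; the toy labels are purely illustrative.

```python
def verification_metrics(y_true, y_pred, positive="APPROPRIATE", beta=0.5):
    """Accuracy, precision, recall and F-beta (beta = 0.5 by default), treating
    APPROPRIATE as the positive class as in the definitions above."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    b2 = beta ** 2
    f_beta = ((1 + b2) * precision * recall / (b2 * precision + recall)
              if (precision + recall) else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f_beta": f_beta}

# Toy example: one harmful error (an INAPPROPRIATE ingredient predicted APPROPRIATE)
truth = ["APPROPRIATE", "APPROPRIATE", "INAPPROPRIATE", "APPROPRIATE"]
pred  = ["APPROPRIATE", "INAPPROPRIATE", "APPROPRIATE", "APPROPRIATE"]
print(verification_metrics(truth, pred))
```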
On human labels, Rx Strategist outperforms a clinical pharmacist with one year of experience by 4.63% and a clinical pharmacist with three years of experience by 3.71%, highlighting its superior capability in this domain. Interestingly, we observed a positive correlation between the precision-recall ratio and the F-0.5 score in prescription verification tasks (Figure <ref>). This suggests that models prioritizing precision (minimizing false positives) while maintaining a reasonable recall (minimizing false negatives) tend to achieve higher overall performance, as indicated by the F-0.5 score. This finding highlights the importance of balancing precision and recall in this domain, where both accurate identification of valid prescriptions and avoidance of incorrect rejections are crucial. Our system, Rx Strategist, demonstrates this balance effectively, positioning it as a promising tool for enhancing prescription verification accuracy and ultimately improving patient safety. Another noteworthy performance criterion is runtime, an important indication of how quickly our system returns its output. Table <ref> shows the difference between the time taken by Rx Strategist and by the other LLMs, with additional information such as average time per token and total generated tokens for measuring conciseness (calculated for the LLM components only). § INTERACTION CHECKING Given the complex nature of interactions between medical elements, we propose a new way of representing the relationships between medical properties by utilizing a Knowledge Graph. The triplets used for graph construction are extracted using the LLama3-8B model with prompting that targets domain-specific relationships, as detailed in Appendix <ref>. Subsequently, the corresponding triplet embeddings are generated using the general text embedding model gte-Qwen2-1.5B-instruct <cit.>. To validate the approach's performance against other experimental results and determine its effectiveness for our specific application, we evaluate the Knowledge Graph's impact on refining evaluation outcomes. The graph is combined with two models, LLama3-8B <cit.> and LLama3.1-8B <cit.>, respectively. Information on active components in prescriptions is retrieved from the graph using the cosine similarity score between the input query embedding and the precomputed triplet embeddings within the graph. The retrieved data is subsequently summarized using the same inference model to condense the information. We anticipate that this approach will enable the model to interpret the graph from its own perspective. The summarized interactions are then utilized to aid the LLM in its prescription evaluation. The results show that both LLama3-8B and LLama3.1-8B improve their accuracy substantially, achieving an increase of 8.33%. § CONCLUSION AND FUTURE WORK In conclusion, we present a novel, efficient approach to prescription verification, effectively combining a knowledge base with a well-instructed reasoning process. Our system's performance surpasses not only some senior clinical pharmacists but also state-of-the-art LLMs, demonstrating its potential for real-world implementation, particularly in resource-constrained hospital settings. Limitations and Future Work: Several avenues for future enhancement exist. The current reliance on Vietnamese-language data necessitates further investigation into multilingual capabilities to ensure broader applicability.
Additionally, refining the ICD-10 coding process, potentially through the development of a dedicated model, could mitigate the risk of hallucination and further improve accuracy. Expanding the knowledge base to incorporate diverse data sources, such as electronic health records and clinical guidelines, could enhance the system's understanding of complex medical scenarios, ultimately leading to more robust and reliable prescription verification. ieeetr §.§ LLM Prompt for Evaluate §.§.§ Base evaluation prompt You are a meticulous clinical pharmacist specializing in medication safety and appropriateness. Given a patient's profile and prescription, your task is to thoroughly evaluate the prescription's suitability. Pay close attention to the patient's age, medical conditions (especially kidney failure, liver disease, and pregnancy), and any relevant allergies. Follow these steps for each medication in the prescription: 1. Assess Patient Profile: Carefully review the patient's information to identify any potential risk factors or contraindications. 2. Verify Indication: Determine if the medication or combination of medication is APPROPRIATE, INAPPROPRIATE, or UNDERPRESCRIBED for the patient's diagnosed condition(s). 3. Verify Dosage and Administration (If Appropriate): If the medication is appropriate, confirm that the prescribed dosage and administration instructions are safe and effective for the patient. 4. Conclusion: For each medication, provide a final assessment: * If APPROPRIATE, state "APPROPRIATE". * If INAPPROPRIATE, specify which aspect (e.g., dosage, active ingredient, interaction) is problematic and provide a detailed explanation. Prescription: §.§.§ Evaluation prompt with Interaction graph You are a meticulous clinical pharmacist specializing in medication safety and appropriateness. Given the reference materials bellow. —————————————- Indication query summarization —————————————- Your task is to thoroughly evaluate the prescription's suitability in english. Pay close attention to the patient's age, medical conditions (especially kidney failure, liver disease, and pregnancy), and any relevant allergies. Follow these steps for each medication in the prescription: 1. Assess Patient Profile: Carefully review the patient's information to identify any potential risk factors or contraindications. 2. Verify Indication: Determine if the medication or combination of medication is APPROPRIATE, INAPPROPRIATE, or UNDERPRESCRIBED for the patient's diagnosed condition(s). 3. Verify Dosage and Administration (If Appropriate): If the medication is appropriate, confirm that the prescribed dosage and administration instructions are safe and effective for the patient. 4. Conclusion: For each medication, provide a final assessment: * If APPROPRIATE, state "APPROPRIATE". * If INAPPROPRIATE, specify which aspect (e.g., dosage, active ingredient, interaction) is problematic and provide a detailed explanation. Prescription: §.§ Components table §.§ Data Representation §.§.§ Raw JSON data Sample [breaklines=True] "losartan": "dosage": "Pediatric Patients": "Hypertension": "Oral": "Children >6 years of age.. ." , "Adults": "Hypertension": "Losartan Therapy": "Oralrecommends initial dosage of 50 mg..." , "Prevention of Cardiovascular Morbidity and Mortality": "Oral": "Initially, 50 mg once daily..." , "Diabetic Nephropathy": "Oral": "Initially, 50 mg once daily..." , "Heart Failure [off-label]": "Oral": "Initially, 25-50 mg once daily ..." 
, "esomeprazole": "dosage": "Pediatric Patients": "GERD": "GERD Without Erosive Esophagitis": "Oral1-11 years of age..." , "Adults": "GERD": "GERD Without Erosive Esophagitis": "Oral20 mg once daily ..." , "Duodenal Ulcer": "Helicobacter pylori Infection and Duodenal Ulcer": "Oraltherapy:..." , "NSAIA-associated Ulcers": "Prevention of Gastric Ulcers": "Oral20 or 40 mg once daily;..." , ... §.§ Prescription Sample An example of a prescription named 24th is demonstrated in Figure <ref>. §.§ Dosage Knowledge Graph Structure §.§.§ Node Examples [breaklines=True] [ "id": 1, "name": "rosuvastatin", "type": "Drug" , "id": 2, "name": "heterozygous familial hypercholesterolemia", "type": "Disease" , "id": 3, "name": "5-10 mg once daily", "type": "Dosage" , ... ] §.§.§ Relationship Examples [breaklines=True] [ "start_id": 1, "end_id": 2, "type": "TREATS", "age_group": "pediatric" , "start_id": 2, "end_id": 3, "type": "INITIAL_DOSAGE", "age_specific": "children 8 to <10 years of age", "administration": "oral" , "start_id": 2, "end_id": 4, "type": "INITIAL_DOSAGE", "age_specific": "children and adolescents 10-17 years of age", "administration": "oral" , "start_id": 1, "end_id": 5, "type": "TREATS", "age_group": "adults" , "start_id": 5, "end_id": 6, "type": "INITIAL_DOSAGE", "administration": "oral" , "start_id": 5, "end_id": 7, "type": "SPECIFIC_DOSAGE", "administration": "oral", "indication": "patients who have not achieved adequate response with the 20-mg daily dosage" , "start_id": 1, "end_id": 8, "type": "TREATS", "age_group": "adults" , ... ] §.§ Output Examples §.§.§ LLama3.1 405B The output of the LLama 3.1 405B model on the 24th prescription (one of the prescription in the evaluation dataset). For more information about input, see at Appendix <ref> I'll evaluate each medication in the prescription based on the patient's profile. Patient Profile: - Age: 45 (born in 1978) - Medical Conditions: - Chronic hepatitis B without hepatitis D coinfection - Non-insulin-dependent diabetes mellitus (without complications) - Mixed hyperlipidemia - Hypertension - No allergies mentioned Medications: 1. Tenofovir (TDF) - Tefostad T300 300mg - Assess Patient Profile: Patient has chronic hepatitis B, which Tenofovir is indicated for. - Verify Indication: APPROPRIATE for chronic hepatitis B. - Verify Dosage and Administration: The prescribed dose (300mg once daily) is standard for treating chronic hepatitis B. - Conclusion: APPROPRIATE 2. Atorvastatin - Lipotatin 20mg - Assess Patient Profile: Patient has mixed hyperlipidemia. - Verify Indication: APPROPRIATE for hyperlipidemia. - Verify Dosage and Administration: The dose (20mg once daily) is within the recommended starting dose range for adults. - Conclusion: APPROPRIATE 3. Amlodipin + losartan - Troysar AM 5mg + 50mg - Assess Patient Profile: Patient has hypertension. - Verify Indication: APPROPRIATE for hypertension management. - Verify Dosage and Administration: The combination and doses (5mg of Amlodipine and 50mg of Losartan once daily) are within recommended ranges for hypertension management. - Conclusion: APPROPRIATE 4. Metformin hydroclorid - Meglucon 1000 1000mg - Assess Patient Profile: Patient has non-insulin-dependent diabetes mellitus. - Verify Indication: APPROPRIATE for type 2 diabetes. - Verify Dosage and Administration: The dose (1000mg twice daily) is a common starting dose for adults. - Conclusion: APPROPRIATE 5. Linagliptin - TRIDJANTAB 5mg - Assess Patient Profile: Patient has non-insulin-dependent diabetes mellitus. 
- Verify Indication: APPROPRIATE for type 2 diabetes. - Verify Dosage and Administration: The dose (5mg once daily) is the recommended dose for adults. - Conclusion: APPROPRIATE Overall Assessment: All medications in the prescription are APPROPRIATE for the patient's conditions, with no contraindications based on the provided information. However, it's essential to monitor kidney function, especially with the use of Tenofovir and Metformin, and to assess the efficacy and potential side effects of all medications regularly.
http://arxiv.org/abs/2409.03160v1
20240905013429
Autonomous Drifting Based on Maximal Safety Probability Learning
[ "Hikaru Hoshino", "Jiaxing Li", "Arnav Menon", "John M. Dolan", "Yorie Nakahira" ]
cs.RO
[ "cs.RO", "cs.SY", "eess.SY" ]
The YMDB catalog: Young massive detached binaries for the determination of high-precision absolute stellar parametersFull version of Tables 2, 3 and 4 are only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/. Pablo Martín-Ravelo 1,2 Roberto Gamen 3,4 Julia I. Arias 1 André-Nicolas Chené 2 Rodolfo H. Barbá In Memoriam (1962–2021) 1 Received: 20 June 2024 / Accepted: 05 July 2024 ================================================================================================================================================================================================================================================================================================================================ empty empty § ABSTRACT This paper proposes a novel learning-based framework for autonomous driving based on the concept of maximal safety probability. Efficient learning requires rewards that are informative of desirable/undesirable states, but such rewards are challenging to design manually due to the difficulty of differentiating better states among many safe states. On the other hand, learning policies that maximize safety probability does not require laborious reward shaping but is numerically challenging because the algorithms must optimize policies based on binary rewards sparse in time. Here, we show that physics-informed reinforcement learning can efficiently learn this form of maximally safe policy. Unlike existing drift control methods, our approach does not require a specific reference trajectory or complex reward shaping, and can learn safe behaviors only from sparse binary rewards. This is enabled by the use of the physics loss that plays an analogous role to reward shaping. The effectiveness of the proposed approach is demonstrated through lane keeping in a normal cornering scenario and safe drifting in a high-speed racing scenario. § INTRODUCTION Driving in adverse conditions (e.g. high-speed racing or icy roads with low traction) is challenging for both human drivers and autonomous vehicles. The vehicle operates near its handling limits in a highly nonlinear regime and has to cope with uncertainties due to unmodeled dynamics and noises in sensing and localization <cit.>. Furthermore, it needs to have a very low response time to adapt to the rapidly changing environment <cit.>. Deterministic worst-case frameworks including robust sliding-mode control <cit.> can often be efficiently computed but require full system models and small bounded uncertainties (errors). Techniques based on set invariance, such as control barrier functions and barrier certificates <cit.>, are applied to vehicle systems with analytical models when these functions can be designed. Techniques based on probabilistic invariance <cit.> can be used to generate safety certificates using samples of complex (black-box) systems in extreme environments <cit.> and occluded environments <cit.>. Model Predictive Control (MPC) techniques, including stochastic MPC <cit.> and chance-constrained MPC <cit.>, exploit future predictions to account for uncertainties. However, as the number of possible trajectories grows exponentially to the outlook time horizon, there are often stringent tradeoffs between outlook time horizon and computation burdens. Thus, it is still challenging to ensure safety in adverse and uncertain conditions with lightweight algorithms suitable for onboard computation. 
Meanwhile, within drifting control, various model-based/model-free techniques are considered. In <cit.>, a bicycle model is used to describe the phenomenon of vehicle drifting, and an unstable “drift equilibrium” has been found. Sustained drifting has been achieved by stabilizing a drift equilibrium using various control methods, such as LQR <cit.>, robust control <cit.>, and MPC <cit.>. Hierarchical control architectures are proposed in <cit.> for general path tracking. A drawback of these previous works is that they aim to stabilize a specific equilibrium or reference trajectories, and pre-computation of such a reference is required. As a data-driven drifting control, Probability Inference for Learning Control (PILCO) has been applied <cit.>, but the method is only considered in a single-task setting of minimizing tracking error of a particular drift equilibrium. Soft actor-critic algorithm was designed to go through sharp corners in a racing circuit in simulations <cit.>. Twin-Delayed Deep Deterministic (TD3) Policy Gradient is developed in <cit.>, and tabular Q-learning is used in <cit.>. The transference of RL agents to the real world was studied by  <cit.>, using experiments of radio-controlled (RC) model cars. Drift parking task is considered in <cit.>. However, like other RL problems <cit.>, an appropriate design of reward function is required to generalize across different drift maneuvering tasks. The reward functions in these works tend to be complex, and the distance from a target drift equilibrium or a reference trajectory is typically used for reward shaping. In this paper, we propose a novel approach to control vehicle drifting based on Physics-Informed Reinforcement Learning (PIRL). Specifically, PIRL is used to learn a control policy that maximizes the safety probability such that the vehicle can safely drift or slip in adverse conditions. Major technical challenges stem from the fact that the objective function associated with this problem takes multiplicative or maximum costs over time and that the reward is binary and sparse in time. To overcome the difficulty of learning multiplicative or maximum costs, we first present a transformation that converts this problem into RL with additive costs. To efficiently learn from sparse reward, we built upon our previous work  <cit.>, which imposes physics constraints on the loss function. This term in the loss function plays an analogous role in rewarding shaping and allows safe policies to be learned only from binary rewards that are sparse in time. The proposed framework only requires safe events (safe regions) to be specified without the need for reference trajectories or laborious reward shaping. The effectiveness of the proposed approach is demonstrated through lane keeping in a normal cornering scenario and safe drifting in a high-speed racing scenario. §.§ Notation Let ℝ and ℝ_+be the set of real numbers and the set of nonnegative real numbers, respectively. Let and _+ be the set of integers and the set of non-negative integers. For a set A, A^c stands for the complement of A, and ∂ A for the boundary of A. Let ⌊ x ⌋∈ be the greatest integer less than or equal to x∈. Let 1[ℰ] be an indicator function, which takes 1 when the condition ℰ holds and otherwise 0. Let [ ℰ | X_0 = x ] represent the probability that the condition ℰ holds involving a stochastic process X = { X_t }_t∈_+ conditioned on X_0 = x. Given random variables X and Y, let 𝔼[X] be the expectation of X and 𝔼[X|Y=y] be the conditional expectation of X given Y=y. 
We use upper-case letters (e.g., Y) to denote random variables and lower-case letters (e.g., y) to denote their specific realizations. For a scalar function ϕ, ∂_x ϕ stands for the gradient of ϕ with respect to x, and ∂_x^2 ϕ for the Hessian matrix of ϕ. Let tr(M) be the trace of the matrix M. For x ∈^n and A ⊂^n, dist(x, A) := inf_y ∈ A x - y. § MAXIMAL SAFETY PROBABILITY LEARNING In this section, the problem formulation of estimating maximal safety probability is given in sec:problem_formulation, and the PIRL algorithm <cit.> is briefly reviewed in sec:PIRL. §.§ Problem Formulation We assume that the vehicle dynamics can be represented as a control system with stochastic noise of w-dimensional Brownian motion { W_t }_t ∈_+ starting from W_0 = 0. The system state X_t ∈𝕏⊂ℝ^n is assumed to be observable and evolves according to the following stochastic differential equation (SDE): dX_t = f(X_t, U_t) dt + σ(X_t, U_t) dW_t, where U_t ∈𝕌⊂ℝ^m is the control input. Throughout this paper, we assume sufficient regularity in the coefficients of the system (<ref>). That is, the functions f and σ are chosen in a way such that the SDE (<ref>) admits a unique strong solution (see, e.g., Section IV.2 of <cit.>). The size of σ(X_t, U_t) is determined from the uncertainties in the disturbance, unmodeled dynamics, and prediction errors of the environmental variables. For numerical approximations of the solutions of the SDE and optimal control problems, we consider a discretization with respect to time with a constant step size Δ t under piecewise constant control processes. For 0=t_0 < t_1 < … < t_k < …, where t_k := kΔ t, k ∈_+, by defining the discrete-time state X_k := X_t_k with an abuse of notation, the discretized system can be given as X_k+1 = F^π(X_k, Δ W_k), where Δ W_k := { W_t }_t ∈ [t_k, t_k+1), and F^π stands for the state transition map derived from (<ref>) under a Markov control policy π : [0, ∞) ×𝕏→𝕌. Safety of the system can be defined by using a safe set ⊂𝕏. For the discretized system (<ref>) and for a given control policy π, the safety probability ^π of the initial state X_0 = x for the outlook horizon τ∈ can be characterized as the probability that the state X_k stays within the safe set for k ∈𝒩_τ := {0, …, N(τ)}, where N(τ) := ⌊τ / Δ t ⌋, i.e., ^π(τ, x) := [ X_k ∈, ∀ k ∈𝒩_τ |  X_0 = x, π ]. Then, the objective of the learning is to estimate the maximal safety probability defined as Ψ^∗(τ,x) := sup_π∈𝒫Ψ^π(τ,x), where 𝒫 is the class of bounded and Borel measurable Markov control policies, and to learn the corresponding safe control policy π^∗ := _π∈𝒫Ψ^π. §.§ Physics-informed RL (PIRL) The training framework is illustrated in fig:PIRL_framework. PIRL integrates RL and Physics-Informed Neural Networks (PINNs) <cit.> for efficient estimation of maximal safety probability. This paper uses a PIRL algorithm based on Deep Q-Network (DQN) <cit.>. The overall structure follows from the standard DQN algorithm, and the optimal action-value function will be estimated by using a function approximator. For this function approximator, we use a PINN, which is a neural network trained by penalizing the discrepancy from a partial differential equation (PDE) condition that the safety probability should satisfy. 
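As a sketch of the quantity being learned, the following Monte-Carlo estimator approximates the safety probability Ψ^π(τ, x) defined above for the time-discretized dynamics. The `step`, `is_safe` and `policy` callables, the noise dimension, and the sample size are placeholders standing in for the simulated vehicle dynamics, the safe set and the control policy, so this is only an illustrative baseline rather than part of the PIRL algorithm itself.

```python
import numpy as np

def mc_safety_probability(step, is_safe, policy, x0, tau, dt, n_traj=1000, rng=None):
    """Monte-Carlo estimate of the safety probability Psi^pi(tau, x0) for the
    time-discretized dynamics X_{k+1} = F^pi(X_k, dW_k); `step(x, u, dw)`,
    `is_safe(x)` and `policy(x)` are user-supplied placeholders."""
    rng = np.random.default_rng() if rng is None else rng
    n_steps = int(np.floor(tau / dt))          # N(tau) = floor(tau / dt)
    safe_count = 0
    for _ in range(n_traj):
        x, safe = np.asarray(x0, dtype=float).copy(), True
        for k in range(n_steps + 1):           # require X_k in the safe set for k = 0, ..., N(tau)
            if not is_safe(x):
                safe = False
                break
            if k < n_steps:
                dw = rng.normal(0.0, np.sqrt(dt), size=x.shape)  # Brownian increment over dt
                x = step(x, policy(x), dw)
        safe_count += safe
    return safe_count / n_traj
```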
To define an appropriate RL problem, we consider the augmented state space 𝒮 := ×𝕏⊂^n+1 and the augmented state S_k ∈𝒮, where we denote the first element of S_k by H_k and the other elements by X_k, i.e., S_k =[ H_k, X_k^⊤]^⊤, where H_k represents the remaining time before the outlook horizon τ is reached. Then, consider the stochastic dynamics starting from the initial state s := [τ, x^⊤]^⊤∈𝒮 given as follows [With this augmentation, the variable X_k in S_k is no longer equivalent to that in (<ref>), because X_k transitions to itself when S_k ∈𝒮_abs. However, we use the same notation for simplicity. See <cit.> for precise derivation.]: for ∀ k ∈_+, S_k+1 = F̃^π(S_k, Δ W_k), S_k ∉𝒮_abs, S_k, S_k ∈𝒮_abs, with the function F̃^π given by F̃^π(S_k, Δ W_k) := [ H_k - Δ t; F^π( X_k, Δ W_k) ], and the set of absorbing states 𝒮_abs given by 𝒮_abs := { [ τ, x^⊤]^⊤∈𝒮 | τ < 0 x ∉}. Then, the following proposition states that the value function of our RL problem is equivalent to the safety probability. Consider the system (<ref>) starting from an initial state s = [τ, x^⊤]^⊤∈𝒮 and the reward function r: 𝒮→ given by r(S_k) := [ H_k ∈𝒢 ] with 𝒢 := [0, Δ t). Then, for a given control policy π, the value function v^π defined by v^π(s) := 𝔼[ ∑_k=0^N_f r( S_k )   |   S_0= s, π], where N_f := inf{ j ∈_+ | S_j ∈𝒮_abs}, takes a value in [0,1] and is equivalent to the safety probability ^π(τ, x), i.e., v^π(s) = ^π(τ, x). The proof follows from Proposition 1 of <cit.> with a minor rewrite of the reward and value functions. Thus, the problem of safety probability estimation can be solved by an episodic RL problem with the action-value function q^π(s, u), defined as the value of taking an action u ∈𝕌 in state s and thereafter following the policy π: q^π(s, u) := 𝔼[ ∑_k=0^N_f r(S_k)  |  S_0 = s, U_0 = u, π]. Furthermore, it is shown in <cit.> that the maximal safety probability can be characterized by the Hamilton-Jacobi-Bellman (HJB) equation of a stochastic optimal control problem. While the HJB equation can have discontinuous viscosity solutions, the following theorem shows that a slightly conservative but arbitrarily precise approximation can be constructed by considering a set _ϵ smaller than : Consider the system (<ref>) derived from the SDE (<ref>) and the action-value function q^π_ϵ(s,u) given by q^π_ϵ(s, u) := 𝔼[ ∑_k=0^N_f r_ϵ(S_k)  |  S_0 = s, U_0 = u, π], where the function r_ϵ is given by r_ϵ(S_k) := [ H_k ∈𝒢 ] l_ϵ( X_k ), with _ϵ := { x ∈ | dist(x, ^c) ≥ϵ} and l_ϵ (x) := max{ 1- dist(x, _ϵ)ϵ, 0}. Then, if the Assumption 1 in <cit.> holds, the optimal action-value function q^ϵ(s,u): = sup_ u ∈𝕌 q^π_ϵ(s,u) becomes a continuous viscosity solution of the following PDE in the limit of Δ t → 0: ∂_s q^∗_ϵ(s, u^∗) f̃(s, u^∗) +12tr[ σ̃(s,u^∗)σ̃(s,u^∗)^⊤∂_s^2 q^∗_ϵ(s, u^∗) ] = 0, where the function f̃ and σ̃ are given by f̃(s, u) := [ -1; f(x, u) ], σ̃(s, u) := [ 0; σ(x, u) ], and u^∗ := _u ∈𝕌 q^∗(s, u). The boundary conditions are given by q^∗_ϵ([0, x^⊤]^⊤, u^∗) = l_ϵ( x ), ∀ x ∈𝕏, q^∗_ϵ([τ, x^⊤]^⊤, u^∗) = 0, ∀τ∈,  ∀ x ∈∂. When we assume further regularity conditions on the function l_ϵ(x) (i.e., differentiability of l_ϵ(x)), the PDE (<ref>) can be understood in the classical sense (see e.g., <cit.>). This means that the PDE condition can be imposed by the technique of PINN using automatic differentiation of neural networks. 
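To illustrate how the time augmentation turns safety estimation into an additive-reward RL problem, the snippet below simulates one episode of the augmented dynamics with the absorbing set and the sparse binary reward r(S_k) = 1[H_k ∈ [0, Δt)]. Boundary conventions are simplified and all callables are placeholders, so this is a schematic of the construction rather than the authors' code.

```python
import numpy as np

def rollout_return(step, is_safe, policy, x0, tau, dt, rng):
    """One episode of the augmented system S_k = [H_k, X_k^T]^T: the horizon H is
    counted down by dt, the episode stops at the first absorbing state (H_k < 0 or
    X_k outside the safe set), and the only reward is r(S_k) = 1 when H_k lies in
    [0, dt).  The episode return is therefore the safety indicator of the trajectory."""
    h, x, ret = float(tau), np.asarray(x0, dtype=float).copy(), 0.0
    while h >= 0.0 and is_safe(x):             # loop over non-absorbing states
        if h < dt:                             # H_k in [0, dt): the sparse binary reward
            ret += 1.0
        dw = rng.normal(0.0, np.sqrt(dt), size=x.shape)
        x = step(x, policy(x), dw)
        h -= dt
    return ret
```

Averaging this return over many episodes reproduces the Monte-Carlo estimate sketched earlier, which is exactly the value function that the DQN-based agent is trained to maximize.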
Based on the above, a standard DQN algorithm is integrated with a PINN by using a modified loss function given by L = L_D + λ L_P + μ L_B, where L_D is the data loss of the original DQN, L_P the loss term imposing the PDE (<ref>), and L_B the loss term for the boundary conditions (<ref>) and (<ref>). The parameters λ and μ are the weighting coefficients for the regularization loss terms L_P and L_B, respectively. Detailed descriptions of these loss terms are given in <cit.>. § NUMERICAL EXPERIMENTS This section discusses numerical results of autonomous driving based on maximal safety probability learning. After presenting training methodology in sec:setup, we provide results of normal cornering in sec:lane_keeping and high-speed drifting in sec:drifting. Our implementation is available at <https://github.com/hoshino06/pirl_itsc2024>. §.§ Training Methodology The PIRL agent learns maximal safety probabilities by interacting with CARLA <cit.>, an open-source simulator providing high-fidelity vehicle physics simulations that are not explicitly described by control-oriented models such as a bicycle model. The input to the neural network as a function approximator Q for our action-value function is the state s ∈𝒮 defined in (<ref>), and the output is a vector of q values for all discrete actions as explained below. The state s consists of the vehicle state x and the outlook horizon τ: s = [τ, x^⊤]^⊤, and x can be further decomposed into x_vehicle for the vehicle dynamics and x_road for the vehicle position relative to the road: x = [ v_x, β, r_x_vehicle, e, ψ, 𝒯_x_road ]^⊤∈ℝ^15, where x_vehicle consists of v_x standing for the vehicle longitudinal velocity, β for the sideslip angle, defined as β := arctan(v_y/v_x) with the lateral velocity v_y, and r for the yaw rate. The variable x_road consists of e standing for the lateral error from the center line of the road, and ψ for the heading error, and 𝒯 that contains five (x, y) positions of reference points ahead of the vehicle placed on the center line of the lane. The control action a is given by a = [ δ, d ]^⊤, where δ is the steering angle, and d is the throttle. They are normalized to [-1, 1] and [0, 1], respectively. To implement our DQN-based algorithm, which admits a discrete action space, the control inputs are restricted to d ∈{ 0.6, 0.7, 0.8, 0.9, 1.0 } and δ∈{ -0.8, -0.4, 0, 0.4, 0.8}. The reason for limiting d ≥ 0.6 is to prevent slow driving with trivially safe behaviors (the vehicle is safe if it never moves). The steering is limited to |δ| ≤ 0.8, since high-speed vehicles are prone to rollover if large steering angles are applied <cit.>. For better sample efficiency and generalization to unseen regions, PIRL can exploit the structure of a control-oriented model encoded in the form of a PDE in (<ref>) and the regularization loss term L_P in (<ref>). In this paper, we used a dynamic bicycle model with a Pacejka tire model. The parameters of the model that correspond to the vehicle dynamics in CARLA are assumed to be known in this paper, but PINN can also be used for the discovery of coefficients of PDEs from data <cit.>, or the entire tire model can be obtained from online learning <cit.>. Also, the term σ(X_t, U_t) and the convection term for the reference points 𝒯 were neglected in the simulations, which need to be constructed or learned from data for more precise estimation. §.§ Normal Cornering We first show preliminary results with a normal cornering task. 
The objective is to keep the vehicle within the lane while driving on a road. The safe set ⊂𝕏 is given by = { x ∈𝕏 |  |e| ≤ E_max}, where E_max is the maximum distance from the center line of the lane, and it is assumed to be constant. We used a corner in a built-in map of the CARLA simulator (southwest part of Town2 and shown in fig:town2_south_west), and the bound of the lateral error is set to E_max = 1m. During the training, the initial vehicle speed was randomly selected within [5m/s, 15m/s], and the vehicle spawn point was randomized for each episode. Also, the outlook horizon τ was uniformly randomized between 0 and 5.0s. For the function approximator Q, we used a neural network with 3 hidden layers with 32 units per layer and the hyperbolic tangent () as the activation function. At the output layer, the sigmoid activation function was used. fig:town2_training_progress shows the training progress with the learning rate of 5×10^-4 and the regularization weight of 1 × 10^-4. The orange and blue lines show the episode reward and the q-value averaged over a moving window of 500 episodes. The solid curves represent the mean of 10 repeated experiments, and the shaded region shows their standard deviation. While there is a large variance during the transient phase of learning, it converges to similar values after about 20,000 episodes. As defined in (<ref>), the safety probability depends on the initial state and the horizon τ. fig:SafeProb_e_psi shows the dependency of the safety probability on the lateral error e and the relative heading angle ψ, when the vehicle is placed on the straight part of the road. The vehicle state and the horizon are fixed to x_vehicle = [10m/s, 0, 0]^⊤ and τ = 5.0s. The learned safety probability is as high as approximately 1 near (e, ψ)=(0,0), and decreases near the boundaries of the road (|e|=1m). Also, it decreases when the heading angle is large. Similarly, fig:town2_safety_probabilityb shows that the safety probability decreases as the velocity v_x increases and is higher in the inner side of the curve (e < 0). Overall, the above result appropriately describes how the safety probability depends on the vehicle state and position in the lane. However, the values of the safety probability differed across agents, and further investigation is needed to ensure the accuracy. Nevertheless, it has been confirmed that the learned agents safely go through the corner when they are placed near the center of the road. §.§ Safe Drifting Here we present results for safe drifting based on maximal safety probability learning. A racing circuit for CARLA developed in <cit.> is used, and the training is performed at a specific corner as shown in fig:mapC_task. The road width is about 20m and E_max can be 10.0m, but the safe set was conservatively set as E_max = 8.0m during the training. The initial vehicle speed was set to 30m/s, and the slip angle β and the yaw rate r were randomized in the range of β∈ [ -25deg, -20deg ], r ∈ [ 50deg/s, 70deg/s ]. The neural network has the same shape with that used above. For this task, the training progress is strongly influenced by the learning rate as shown in fig:mapC_training_progress. As before, the solid curves represent the mean of the averaged rewards of 8 repeated experiments, and the shaded regions show their standard deviations. At the initial phase of training, the averaged reward (empirical safety probability) gradually decreases, but after a while it starts to increase. 
The timing of this increase is faster for larger learning rates, but the final episode reward increases as the learning rate is reduced from 2e-4 to 5e-5. However, at the learning rate of 2e-5, the agents failed to learn safe policy and the reward dropped to zero except for 2 cases out of 8 experiments. Also, while the learning rate of 5e-5 performed best among the four settings above, there were few cases where the reward dropped to zero, which were excluded from the plot in fig:mapC_training_progress. After these trials, we selected an agent trained with the learning rate 5e-5 at a check point of 90,000 episodes that behaved best when tested with closed-loop simulations. fig:mapC_closed_loopa shows the vehicle trajectories simulated using the learned agent. The cross mark (×) in the figure shows the initial position of the vehicle, and 20 trajectories with different initial conditions of β and r are illustrated. It can be confirmed that the learned policy is navigating through the corner without hitting the boundary of the track for different initial conditions. As shown in fig:mapC_closed_loopb, the slip angle and yaw rate take large values, and the vehicle is safely drifting while cornering. These safe behaviors have been learned only from sparse zero and one rewards. It is interesting that such a drifting maneuver can be learned by maximizing the safety probability without providing a specific reference trajectories or laborious reward shaping. § CONCLUSIONS This paper explored autonomous driving based on the learning of maximal safety probability by using Physics-informed Reinforcement Learning (PIRL). Safe drifting was achieved only from sparse binary rewards without providing a specific reference trajectory or laborious reward shaping. Since this concept is related to forward invariance in the state space, the learned safety probability may be used to safeguard a nominal controller which does not necessarily consider the safety specifications. Future work of this paper includes verifying the accuracy of the learned safety probability, and testing the proposed framework in more diverse scenarios. § ACKNOWLEDGMENT The authors would like to thank Dvij Kalaria for providing vehicle model parameters for CARLA simulation. § APPENDIX §.§ DQN based PIRL Algorithm The PIRL algorithm is presented in Algorithm <ref>. After the initializations of the replay memory 𝒟, the function approximator Q, and its target function Q̂, the main loop starting at line 4 iterates M episodes, and the inner loop starting at line 6 iterates the time steps of each episode. Each episode starts with the initalization of the state s_0 = [h_0, x_0^⊤]^⊤ in line 5, which is sampled from the distribution P_D given by P_D(s_0) = 1/|Ω_D|, h_0 = τ_Dx_0 ∈Ω_D, 0, otherwise, where τ_D∈_+ is the time interval of the data acquired through the DQN algorithm, which can be smaller than τ. The set Ω_D⊂𝕏 is the domain of possible initial states, and |Ω_D| its its volume. At each time step k, through lines 7 to 10, a sample of the transition (s_k, a_k, r_k, s_k^' ) of the state s_k, the action a_k, the reward r_k, and the next state s_k^' is stored in the replay memory 𝒟. In lines 11 to 13, a random minibatch 𝒮_D of transitions is taken from 𝒟, and the set 𝒴_D of the target values is calculated using the target q-function Q̂, where the j-th element y_j of 𝒴_D is given by[The index j is independent of the time step k. 
Random sampling from different time steps improves the stability of learning process by reducing non-stationarity and correlation between updates <cit.>.] y_j = r_j, for terminal s_j^', r_j + max_aQ̂(s_j^', a; θ̂), otherwise. Then, the loss function L_D is given by L_D(θ; 𝒮_D, 𝒴_D) = 1|𝒮_D|∑_j (y_j - Q(s_j, a_j; θ))^2. To calculate the loss term L_P, a random minibatch 𝒮_P = { s_l } is taken at 15. Each element s_l=[h_l, x_l^⊤]^⊤ is sampled from the distribution P_P given by P_P(s_l) = 1/(τ |Ω_P|), h_l ∈ [0, τ] x_l ∈Ω_P, 0, otherwise, with Ω_P⊂ that specifies the domain where the PDE (<ref>) is imposed. In line 16, the set of greedy actions 𝒜_P = { a_l^∗} is calculated by a_l^∗ = max_a Q(s_l, a; θ). Then, the PDE loss L_P can be defined as L_P(θ; 𝒮_P, 𝒜_P) = 1|𝒮_P|∑_l W_P(s_l, a_l^∗; θ)^2 with the residual function W_P(s_l, a_l^∗; θ) given by W_P(s_l, a_l^∗; θ) := ∂_s Q(s_l,a_l^∗; θ) f̃(s_l, a_l^∗) + 12tr[ σ̃(s_l, a_l^∗)σ̃(s_l, a_l^∗)^⊤ ∂_s^2 Q(s_l,a_l^∗;θ) ], For the boundary loss L_B, as stated in line 18, a minibatch 𝒮_B = { s_m } with s_m = [h_m, x_m^⊤]^⊤ is taken by using the distribution P_B(s) given by P_B(s) = 1/(2|Ω_P|), h_m=0 x_m∈Ω_P, 1/(2τ |Ω_B|), h_m ∈ [0, τ ] x∈Ω_B, 0, otherwise. where Ω_B⊂∂ stands for the lateral boundary. The loss L_B can be defined as L_B(θ; 𝒮_B, 𝒜_B) = 1|𝒮_B|∑_m W_B(s_m, a_m^∗; θ)^2, with the set 𝒜_B = { a_m^∗} of greedy action and the residual W_B given by W_B(s_m, a_m^∗; θ) = Q(s_m, a_m^∗; θ) - l_ϵ(x_m). Finally, at line 21, the parameter θ is updated to minimize the total loss L based on a gradient descent step. The paramter θ̂ of the target function Q̂ used in (<ref>) is updated at the line 22 with a smoothing factor η∈ (0,1]. §.§ Vehicle Model A 3-DOF bicycle model is used following <cit.>. The model’s inputs are lateral force of the front wheel F_yf, longitudinal force F_xr and lateral force F_yr of the rear wheel. The governing equations for the model are given as follows: dv_xdt = 1M(F_xr -F_yfsinδ) + r v_x β, dβdt = 1M v_x(F_yr+F_yfcosδ ) - r, drdt = 1I_z(L_f F_yfcosδ - L_rF_yr ) where M is the mass of vehicle, L_f and L_ r are the distance from C.G. to front and rear axle, and I_z is the vehicle moment of inertia around the z-axis. When the vehicle enters the extreme driving condition of drifting, the tire force will become progressively saturated, and the linear tire model cannot be used to express the change of the lateral force accurately. Here we use a slightly modified version of the Fiala lateral tire force model given in <cit.>. To formulate a lane keeping problem, the vehicle dynamics model needs to be combined with the model of the vehicle position and heading angle. When they are described in a road coordinate, the road curvature can be viewed as a disturbance pushing the vehicle out of the lane. By defining the yaw rate error ψ between the vehicle and the road, its dynamics can be described as dψd t = r - v_x ρ(t) where ρ is the curvature of the road. Then, the lateral error e can be described by the following equation: d e d t = v_y cosψ + v_x sinψ, where v_y is the lateral velocity of the vehicle and given as v_y = v_x tanβ. IEEEtran
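For completeness, here is a small numerical sketch of the 3-DOF bicycle model and road-relative kinematics written out above, the control-oriented model behind the physics loss. The tire forces are taken as inputs (they would come from the Fiala/Pacejka-type tire model), and the mass, inertia and geometry values are illustrative placeholders, not the CARLA vehicle parameters.

```python
import numpy as np

def bicycle_road_derivatives(state, inputs, params, rho=0.0):
    """Right-hand side of the 3-DOF bicycle model and road-relative kinematics above.
    state = (v_x, beta, r, psi, e); inputs = (delta, F_yf, F_xr, F_yr), with the tire
    forces supplied externally (e.g. by a Fiala/Pacejka-type tire model)."""
    v_x, beta, r, psi, e = state
    delta, F_yf, F_xr, F_yr = inputs
    M, I_z, L_f, L_r = params["M"], params["I_z"], params["L_f"], params["L_r"]

    dv_x = (F_xr - F_yf * np.sin(delta)) / M + r * v_x * beta
    dbeta = (F_yr + F_yf * np.cos(delta)) / (M * v_x) - r
    dr = (L_f * F_yf * np.cos(delta) - L_r * F_yr) / I_z
    dpsi = r - v_x * rho                       # rho: road curvature
    v_y = v_x * np.tan(beta)                   # lateral velocity
    de = v_y * np.cos(psi) + v_x * np.sin(psi)
    return np.array([dv_x, dbeta, dr, dpsi, de])

# One explicit-Euler step with illustrative (not CARLA-calibrated) parameters, SI units:
params = {"M": 1500.0, "I_z": 3000.0, "L_f": 1.2, "L_r": 1.4}
x = np.array([30.0, -0.4, 1.0, 0.05, 0.2])     # v_x [m/s], beta [rad], r [rad/s], psi [rad], e [m]
u = (0.1, 4000.0, 2000.0, 3000.0)              # delta [rad], F_yf, F_xr, F_yr [N]
x_next = x + 0.01 * bicycle_road_derivatives(x, u, params, rho=1.0 / 50.0)
```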
http://arxiv.org/abs/2409.02744v1
20240904142409
Two equivalent descriptions of opetopes: in terms of zoom complexes and of partial orders
[ "Louise Leclerc" ]
math.CT
[ "math.CT", "math.CO", "06A75, 18N99" ]
http://arxiv.org/abs/2409.03117v1
20240904225713
Mathematical ideas and notions of quantum field theory
[ "Pavel Etingof" ]
math-ph
[ "math-ph", "hep-th", "math.MP", "math.QA" ]
http://arxiv.org/abs/2409.02890v1
20240904173034
Single Pion Production off Free Nucleons: Analysis of Photon, Electron, Pion and Neutrino Induced Processes
[ "M. Kabirnezhad" ]
hep-ph
[ "hep-ph" ]
APS/123-QED Imperial College London, Department of Physics,London SW7 2BZ, United Kingdom m.kabirnezhad@imperial.ac.uk § ABSTRACT In this paper, I introduce a unified model for single-pion production across photo-, electro-, and neutrino-nucleon interactions, designed to be valid over a broad kinematic range that is crucial for accelerator-based neutrino experiments. This model includes vector and axial-vector nucleon transition form factors for all excited nucleons or resonances up to 2 GeV, as well as non-resonant backgrounds, within a meson dominance framework that adheres to QCD principles and ensures unitarity. This approach guarantees accurate asymptotic behaviour at high momentum transfer (Q^2) and effectively addresses the transition region. Additionally, the model employs the Conserved Vector Current (CVC) and Partially Conserved Axial Current (PCAC) relations to provide reliable predictions at very low Q^2, tackling challenges encountered by current neutrino experiments. The unified model facilitates a comprehensive analysis by integrating all available data from electron, photon, pion, and neutrino scattering experiments. This integration enables a detailed investigation of nucleon structure within the resonance region and is particularly valuable for probing weak interactions, where neutrino-nucleon data are limited. The combined analysis allows for the simultaneous parameterisation and constraint of the model’s free parameters, while quantifying associated uncertainties, thus providing a robust and reliable framework for future neutrino measurements. Single Pion Production off Free Nucleons: Analysis of Photon, Electron, Pion and Neutrino Induced Processes M. Kabirnezhad September 9, 2024 =========================================================================================================== § INTRODUCTION Neutrino interactions that result in the production of a single pion in the final state are critically important for accelerator-based neutrino experiments. Single pion production (SPP) channels constitute the largest fraction of the inclusive neutrino-nucleus cross-section in the 1-5 GeV energy region, which is relevant for all accelerator-based neutrino beams. Accurate models of SPP cross-section processes are required to precisely predict the number and kinematics of neutrino interactions. These models help establish the relationship between neutrino energy and energy deposition in a neutrino detector, which is crucial for interpreting experimental results and reducing systematic uncertainties. Therefore, it is essential that these models are valid within the kinematic region of the neutrino experiments. Chiral perturbation theory (ChPT), the effective field theory of QCD, is adept at describing non-perturbative QCD processes, such as pion production, particularly at neutrino energies around 1 GeV. This makes ChPT highly useful for understanding interactions in this energy range. However, current and future neutrino experiments, such as NOvA and DUNE, operate in a wide energy beam that includes higher energies. These experiments explore a transition region where neutrino energies exceed E_ν = 1 GeV but remain below the energies where perturbative QCD becomes the dominant framework. This kinematic region, spanning both perturbative and non-perturbative regimes, features hadronic degrees of freedom that are a mixture of nucleons, resonances, and partons. 
This presents unique challenges in accurately modelling the interactions and particle productions observed in these experiments. This can explain current discrepancies in the first resonance region between MiniBooNE and MINERvA experiments, which have been discussed in Ref. <cit.>. Additionally, the transition region encompasses several processes where both resonant and non-resonant mechanisms play a role. Currently, 18 overlapping resonances with varying quantum numbers have been identified, each with resonance masses below 2 GeV. These overlapping resonances result in significant interference effects, not only among the resonances themselves but also between resonant and non-resonant interactions. Such complex interference patterns add to the difficulty of accurately understanding pion production. Therefore, a thorough understanding of these processes within the transition region is crucial for improving the precision of neutrino interaction predictions. Figure <ref> illustrates these complexities: the left panel shows how the hadron invariant mass W evolves, while the right panel depicts the corresponding Q^2 evolution, mapping out the regions explored by both current and future neutrino experiments. In addition to model development and robust predictions, a crucial aspect of theoretical work in neutrino experiments is the quantification of uncertainties. While several SPP models exist , most of them address only key processes, focusing primarily on the first and second resonance regions. However, none of these models systematically study or evaluate the systematic uncertainties inherent to the models themselves. Furthermore, existing SPP models are confined to the non-perturbative region, which does not encompass the broad kinematic range required for neutrino experiments. The primary goal of this work is to develop a comprehensive model that addresses these gaps, providing a robust framework to enhance the precision of neutrino measurements. The valid region for the SPP model presented in this paper is defined by a hadron invariant mass (W) below 2 GeV , as illustrated in Fig. <ref>. This region is characterised by broad nucleon excitations superimposed on a smooth non-resonant background. For hadron invariant masses below 1.4 GeV, the target nucleon is predominantly excited to the Δ(1232) resonance, which then decays almost immediately into a single pion and a nucleon. In this lower mass range, the Δ resonance, with its significant contribution, is the sole resonance observed within the first resonance region (1.08GeV < W < 1.4 GeV). The dynamics of the interactions become more intricate in the second resonance region (1.4 GeV < W < 1.6GeV), where the scattering processes are influenced by three isospin-1/2 resonances: the P_11(1440) Roper resonance, and two negative-parity states, D_13(1520) and S_11(1535). As W continues to rise into the third resonance region (1.6GeV < W < 2.0 GeV), the complexity further escalates with the contribution of an additional ten four-star resonances, as noted in Ref. <cit.>, significantly enriching the interaction dynamics. A commonly used form factor in theoretical models, derived from elastic electron scattering data, is the dipole form factor. However, inelastic scattering data indicate that the dipole form factor is inadequate for accurately describing excited nucleons, particularly in the transition region. 
In contrast, the meson dominance (MD) form factor model, which accounts for the interaction between leptons and nucleons via meson exchange with analogous quantum properties, offers a more accurate representation. As emphasised in Ref. <cit.>, the MD model, consistent with QCD calculations, effectively captures the nucleon form factor across both perturbative and non-perturbative regimes, particularly addressing the hadron-quark transition region. The model incorporates several parameters that can be constrained by unitarity conditions and analytic models. However, precise quantification of these form factors requires accurate data across a broad kinematic range—a challenge compounded by the scarcity of available data for neutrino interactions in this context. To address this challenge, this paper introduces a comprehensive unified model that extends its applicability to analogous processes involving not only neutrino beams but also electron, photon, and pion beams. The integration of these diverse probes is particularly advantageous, as electron, photon, and pion scattering experiments have yielded a wealth of data over the years, covering a broad range of kinematic conditions. By harnessing this extensive dataset, the model enables a more detailed and nuanced investigation of nucleon structure, offering insights that would be difficult to achieve using neutrino data alone. The unified model capitalises on the strengths of each interaction type: electron scattering data provides precision and control for probing the vector current, photon interactions offer complementary insights into the vector current at very low Q^2, and pion scattering data, guided by the Partially Conserved Axial Current (PCAC) hypothesis, delivers crucial information on the axial-vector current at low Q^2. This comprehensive dataset is particularly valuable for addressing and potentially resolving discrepancies between single-pion production (SPP) models and neutrino data, especially in the low Q^2 region (Q^2 < 0.2 , GeV^2). The advanced theoretical framework developed in this work is designed to incorporate these diverse datasets seamlessly, ensuring that the model remains consistent with fundamental QCD principles and preserves all symmetry relations. This consistency is crucial for making robust predictions about nucleon behaviour across various experimental contexts. Furthermore, the sophisticated analysis method employed in this paper allows for meticulous control over systematic uncertainties, which primarily arise from experimental data and the choice of model parametrisation. Robust predictions with quantified uncertainties are essential for precision neutrino measurements, paving the way for future discoveries. § MK MODEL (GENERAL DESCRIPTION) Let us consider the weak, electro- and photo- pion production reactions: .[ ν_l; ν̅_̅l̅ ]} (k_1) + N(p_1) → .[ l; l̅ ]} (k_2) + N(p_2) + π(q), e(k_1) + N(p_1) → e(k_2) + N(p_2) + π(q), γ(k) + N(p_1) → N(p_2) + π(q) Here N represents a nucleon and l denotes the outgoing charged lepton or neutrino in charged current (CC) and neutral current (NC) interactions. The four-momentum of each particle is indicated in parentheses. In the weak and electro-production reactions, the momentum transfer is defined by 𝐤 = 𝐤_1 - 𝐤_2, and the invariant quantities Q^2 and W are given by: Q^2 = -k^2 = -(k_1 - k_2)^2, W^2 = (k+p_1)^2 = (p_2 + q)^2. Non-invariant quantities, such as pion angles, are defined in the centre-of-mass frame of the final pion and nucleon (the isobaric frame). 
This is represented by the equation: 𝐪 + 𝐩_2 = 𝐤 + 𝐩_1 = 0. The pion angles in the isobaric frame are illustrated in Fig. <ref>. Variables in the laboratory frame are labeled by L. In this framework, the full kinematic differential cross section for single pion production in lepton-nucleon scattering is defined as: dσ(l N → l'Nπ)/dk^2 dW dΩ_π = 1/(2π)^4|q|/8M^2-k^2/(𝐤^𝐋)^2|ℳ|^2 where ℳ, the matrix element for weak and electromagnetic interactions, can be expressed as follows: ℳ(ν_l N → l N π) = G_F/√(2)a ϵ_ρ ⟨ Nπ| J_CC+^ρ |N  ⟩ ℳ(ν̅_̅l̅ N →l̅ N π) = G_F/√(2)a ϵ̅_ρ ⟨ Nπ| J_CC-^ρ |N  ⟩ ℳ(e N → e' N π) = e^2 ϵ_ρ^EM1/k^2⟨ Nπ| J_EM^ρ |N  ⟩ where G_F is the Fermi coupling constant, and e is the electric charge. ϵ^ρ is the leptonic current and a is either the cosine of the Cabibbo angle for CC interactions or 1/2 for NC interactions. ϵ_ρ = u̅_l(k_2) γ_ρ(1- γ_5) u_ν(k_1), ϵ̅_ρ = u̅_l̅(k_2) γ_ρ(1+ γ_5) u_ν̅(k_1), ϵ^EM_ρ = u̅_̅e̅(k_2) γ^ρ u_e(k_1), where u is the spinor of leptons. The lepton current (ϵ^ρ) can be interpreted as the intermediate gauge boson's polarisation vector: ϵ^ρ_λ = [C_L_λ e^ρ_L + C_R_λ e^ρ_R + C_λ e^ρ_λ], where 𝐞_L and 𝐞_R are the transverse polarisations (i.e., perpendicular to the momentum transfer), and 𝐞_λ is the longitudinal polarisation along the 𝐳 direction of the isobaric system. This gives: e^α_L = 1/√(2)[ 0 1 -i 0 ] , e^α_R = 1/√(2)[ 0 -1 -i 0 ] , e^α_λ = 1/√(|(ϵ^0_λ)^2 - (ϵ^3_λ)^2 |)[ ϵ^0_λ 0 0 ϵ^3_λ ] and, C_L_λ = 1/√(2)(ϵ_λ^1+ iϵ_λ^2) , C_R_λ = - 1/√(2)(ϵ_λ^1- iϵ_λ^2) , C_λ = √(|(ϵ^0_λ)^2 - (ϵ^3_λ)^2 |) . where λ= -(+) stands for the left (right)-handed helicity. In NC weak current and electron scattering lepton current, where the lepton mass is neglected, ϵ^ρ_+=0, therefore the gauge boson can be interpreted by three polarisations. The electromagnetic hadron current is given by J^ρ_EM = 1/2𝒱_Y^ρ + 𝒱_3^ρ. The weak hadron currents can be decomposed into vector current(𝒱): 𝒱^ρ_CC = 𝒱^ρ_1 + i𝒱^ρ_2 , 𝒱^ρ_NC = (1 - 2sinθ_W^2)𝒱^ρ_3 - sinθ_W^2)𝒱^ρ_Y - 1/2𝒱^ρ_S, and axial-vector (𝒜): 𝒜^ρ_CC = 𝒜^ρ_1 + i𝒜^ρ_2 𝒜^ρ_NC = 𝒜^ρ_3 - 1/2𝒜^ρ_S where 𝒱_1,2,3 (𝒜_1,2,3) are the component of the isovector part of the vector (axial-vector) current, 𝒱_Y and 𝒱_S (𝒜_S) are hypercharge and strange isoscalar current. Isospin symmetry establishes a connection between the matrix elements of the isovector components of the vector and axial-vector hadronic currents. Consequently, the matrix elements for antineutrino- and neutrino-induced reactions are proportional, given by: ⟨ pπ^-| J^CC-_ρ |p  ⟩ = ⟨ nπ^+| J^CC+_ρ |n ⟩ ⟨ nπ^-| J^CC-_ρ |n  ⟩ = ⟨ pπ^+| J^CC+_ρ |p  ⟩ ⟨ nπ^0| J^CC-_ρ |p  ⟩ = -⟨ pπ^0| J^CC+_ρ |n  ⟩ A real photon has two transverse polarisations. The matrix elements in Eq. <ref> can be written in terms of helicity amplitudes and the lepton coefficients in Eq. <ref>. The helicity amplitudes of the hadron current are defined with three indices: the helicity of the incident nucleon (λ_1), the helicity of the outgoing nucleon (λ_2), and the polarisation of the gauge boson (λ_k): F̃_λ_2, λ_1^λ_k = ⟨ Nπ| e^ρ_λ_k (1/2W) J^V_ρ |N  ⟩ , G̃_λ_2, λ_1^λ_k = ⟨ Nπ| e^ρ_λ_k (1/2W) J^A_ρ |N  ⟩, Therefore, there are 16 helicity amplitudes to describe the vector current and 16 helicity amplitudes to describe the axial-vector current for weak CC interactions. For electron scattering and NC channels, the gauge boson has three polarisations; therefore, there are 12 helicity amplitudes for the electromagnetic current. For photon scattering, there are two polarisations and thus 8 helicity amplitudes. 
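As a quick numerical check of the polarization decomposition above, the snippet below computes C_L, C_R and C_λ from the components of a leptonic current and verifies that C_L e_L + C_R e_R + C_λ e_λ reproduces the original vector. The sample current is an arbitrary illustrative set of numbers, and index/metric conventions are glossed over.

```python
import numpy as np

def polarization_decomposition(eps):
    """Split a numerical leptonic current eps = (eps^0, eps^1, eps^2, eps^3) into the
    left-handed, right-handed and longitudinal pieces defined above, and return the
    coefficients together with the rebuilt vector C_L e_L + C_R e_R + C_lam e_lam.
    Assumes (eps^0)^2 != (eps^3)^2 so that the longitudinal vector is well defined."""
    e0, e1, e2, e3 = [complex(c) for c in eps]
    C_L = (e1 + 1j * e2) / np.sqrt(2.0)
    C_R = -(e1 - 1j * e2) / np.sqrt(2.0)
    C_lam = np.sqrt(abs(e0 ** 2 - e3 ** 2))
    e_L = np.array([0, 1, -1j, 0]) / np.sqrt(2.0)
    e_R = np.array([0, -1, -1j, 0]) / np.sqrt(2.0)
    e_lam = np.array([e0, 0, 0, e3]) / C_lam
    rebuilt = C_L * e_L + C_R * e_R + C_lam * e_lam
    return C_L, C_R, C_lam, rebuilt

eps = np.array([1.3, 0.4, 0.2, 0.9])           # arbitrary illustrative components
C_L, C_R, C_lam, rebuilt = polarization_decomposition(eps)
print(np.allclose(rebuilt, eps))               # True: the decomposition reproduces eps
```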
The hadronic currents and helicity amplitudes involve several form factors. The most sophisticated model to describe these form factors is the meson dominance (MD) model, based on the effective Lagrangian of quantum field theory as introduced by Sakurai and Gell-Mann in Refs. <cit.>. The MD model explains the interaction between leptons and nucleons through meson exchange with analogous quantum properties. The form factors can then be expressed in terms of the meson masses (m_j), the ratio of coupling strengths between the gauge boson and the meson, and between the meson and the nucleon (a_j), summing over all possible n mesons: FF_N(k^2)= ∑_j=1^n a_j m^2_j/m_j^2 - k^2 where a list of vector-mesons and axial-mesons is given in Table <ref>. The form factor model has two shortcomings: firstly, FF_N in Eq. <ref> does not have analytic properties and does not obey the unitarity condition; secondly, the asymptotic behaviour of the form factor differs from the asymptotic scaling behaviour predicted by QCD. However, these properties can be imposed on the MD model. Imposing the conditions leads to a complex system of equations for the coupling constant ratios, making it difficult to find solutions in the general case. However, in Ref. <cit.>, an equivalent system of equations is derived for the coupling constant ratios, with coefficients that are simply even powers of the corresponding vector-meson masses. In principle, one can find solutions to these equations even in the general case, referred to as linear superconvergence relations: ∑_j=1^n m^2_j a_j =0, ∑_j=1^n m^4_j a_j =0, ⋮ ∑_j=1^n m^2(m-1)_j a_j =0, where m≤ n shows the asymptotic behaviour of the form factors: FF_N(k^2)_|k^2| →∞∼ k^-2m § RESONANCE PRODUCTION The general form of helicity amplitudes was calculated in my previous papers <cit.>, and these are reproduced in Appendix <ref> and <ref>. The 16 vector helicity amplitudes (F̃_λ_2, λ_1^λ_k) are presented in <Ref>, and the 16 axial-vector helicity amplitudes (G̃_λ_2, λ_1^λ_k) are shown in Table <ref>. The definitions include several factors, such as Clebsch-Gordan coefficients for different SPP channels, branching ratios for each resonance decaying into a single pion, and Breit-Wigner parametrisation for the broad resonance states. These factors are common across all models. However, the key part of the definition lies in the resonance production amplitudes, which depend on the specific model chosen. The production amplitudes are calculated from the Feynman diagrams shown in Fig. <ref>. The goal of this section is to calculate these amplitudes, which are defined as follows: f^(V,A)_-3 = -1/2W⟨ R, 3/2| e^ρ_R J^(V,A)_ρ |N, 1/2  ⟩ f^(V,A)_-1 = -1/2W⟨ R, 1/2| e^ρ_R J^(V,A)_ρ |N, -1/2  ⟩ f^(V,A)(λ)_0+ = 1/2W√(-k^2)/|k|⟨ R, 1/2| e^ρ_R J^(V,A)_ρ |N, 1/2  ⟩ Here f^(V,A) denotes the vector and axial-vector production amplitudes and J^(V,A) represent hadron currents. §.§ The first and second resonance regions In the first and second resonance regions, the produced resonances correspond to the first four states listed in Table <ref>. These resonances have spins of either 1/2 or 3/2. The hadronic currents associated with these resonances can be expressed as follows: ⟨ R(p)| J^ρ_3/2 |N(p_1)  ⟩ = ψ̅_α(p)Γ^αρ_3/2u(p_1,s_z) ⟨ R(p)| J^ρ_1/2 |N(p_1)  ⟩ = u̅(p)Γ^ρ_1/2u(p_1,s_z) . In these equations, u(p) represents the Dirac spinor for spin-1/2 resonances, while ψ_α(p) denotes the Rarita–Schwinger spinor for spin-3/2 resonances.
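Before moving on to the operator structure of these currents, the meson-dominance parametrisation above can be illustrated with a short numerical sketch. It builds FF_N(k^2) from a tower of meson poles and checks how imposing the linear superconvergence relations steepens the large-|k^2| fall-off; the meson masses and the trial coupling ratios used here are placeholder values, not the fitted parameters of the tables.

```python
import numpy as np

def md_form_factor(k2, masses, couplings):
    """Meson-dominance pole sum FF_N(k^2) = sum_j a_j m_j^2 / (m_j^2 - k^2)."""
    k2 = np.atleast_1d(k2)[:, None]
    return np.sum(couplings * masses**2 / (masses**2 - k2), axis=1)

def impose_superconvergence(masses, free_couplings, n_rel=3):
    """Fix the first n_rel couplings from sum a_j = 1, sum a_j m_j^2 = 0, ...
    given trial values for the remaining ones (a placeholder strategy)."""
    A = np.vstack([masses**(2 * j) for j in range(n_rel)])
    rhs = np.array([1.0] + [0.0] * (n_rel - 1)) - A[:, n_rel:] @ free_couplings
    return np.concatenate([np.linalg.solve(A[:, :n_rel], rhs), free_couplings])

m = np.array([0.775, 1.465, 1.720, 2.000, 2.270])   # toy rho-like tower (GeV)
a_naive = np.array([1.0, 0.0, 0.0, 0.0, 0.0])       # single pole, no relations
a_sc = impose_superconvergence(m, np.array([0.05, -0.02]))

k2 = -np.array([10.0, 100.0, 1000.0])               # spacelike k^2 in GeV^2
print("single pole   :", md_form_factor(k2, m, a_naive))  # falls like 1/Q^2
print("with relations:", md_form_factor(k2, m, a_sc))     # falls faster (~1/Q^6)
```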
The operators Γ^αρ_3/2 and Γ^ρ_1/2 encapsulate the interaction dynamics and depend on the specific form of the current J^ρ, as well as the quantum numbers of the resonances involved. Given that the dynamics governing resonances with spin 1/2 and 3/2 differ significantly, I will discuss these cases separately to provide a clear understanding of their respective contributions: §.§.§ Resonances with spin 3/2 In the first and second resonance regions, there are two resonances with spin 3/2: the P_33 (Δ) resonance, which has positive parity, and the D_13(1520) resonance, which has negative parity. Their corresponding interactions are defined as follows: Γ^αρ_3/2(P_33) = [𝒱^αρ_3/2 - 𝒜^αρ_3/2]γ^5 Γ^αρ_3/2(D_13) = [𝒱^αρ_3/2 - 𝒜^αρ_3/2] where the vector (𝒱^αρ_3/2) and axial-vector (𝒜^αρ_3/2) components are given by: 𝒱^αρ_3/2 = 𝒞_3^V/M(g^αρk̸ - k^αγ^ρ) + 𝒞_4^V/M^2(g^αρ k.p - k^α p^ρ)   + 𝒞_5^V/M^2(g^αρk.p_1 - k^α p_1^ρ) , 𝒜^αρ_3/2 = 𝒞_3^A/M(g^αρk̸ - k^αγ^ρ) + 𝒞_4^A/M^2(g^αρ k.p - k^α p^ρ)   + 𝒞_5^A g^αρ + 𝒞_6^A/M^2k^α k^ρ . The vector and axial-vector helicity amplitudes for the P_33 and D_13 resonances can be derived from Eqs. <ref> and <ref>. Here, we present the explicit forms of these amplitudes for P_33 resonance: f^V_-3 (P_33) = -|k|/W1/√(2𝒩)[C_3^V W_+/M + C_4^V/M^2 Wk_0 + C_5^V/M^2 (Wk_0 -k^2 )], f^V_-1 (P_33) = |k|/W1/√(2𝒩)[C_3^V/MW(k^2 - MW_+) + C_4^V/M^2 Wk_0 + C_5^V/M^2 (Wk_0 -k^2)], f^V(λ)_0+(P_33) = -|k|/W1/√(2𝒩)1/C_λ(|k|ϵ^0_λ - k_0ϵ^z_λ) [C_3^V/M + C_4^V/M^2W + C_5^V/M^2 (W- k_0)] f^A_-3(P_33) = - √(𝒩/2) [C_3^A W_-/M + C_4^A/M^2 Wk_0 + C_5^A ], f^A_-1 (P_33) = - √(𝒩/6) [C_3^A/M2|k| + k_0W_-/E_k + M - C_4^A/M^2 Wk_0 - C_5^A ], f^A(λ)_0+(P_33) = -√(𝒩/3)1/C_λ[(C_3^A/M + C_4^A/M^2W)(|kϵ^0_λ - k_0ϵ^z_λ) - C_5^A ϵ^z + C_6^A/M^2 |k|(|k|ϵ^0_λ - k_0ϵ^z_λ) ] and for D_13 resonance: f^V_-3 (D_13) = √(𝒩/2)[C_3^V/MW_- + C_4^V/M^2 Wk_0 + C_5^V/M^2(Wk_0 - k^2)], f^V_-1 (D_13) = √(𝒩/6)[C_3^V/M(W _-- 2k^2/E_k + M) + C_4^V/M^2Wk_0 + C_5^V/M^2(Wk_0 - k^2) ], f^V(λ)_0+ (D_13) = √(𝒩/3)1/C_λ(|k|ϵ^0_λ - k_0ϵ^z_λ) [C_3^V/M + C_4^V/M^2W + C_5^V/M^2 (W- k_0)] f^A_-3 (D_13) = -|k|/W1/√(2𝒩)[C_3^A W_+/M + C_4^A/M^2 Wk_0 + C_5^A ], f^A_-1 (D_13) = |k|/W1/√(6𝒩) [C_3^A/M (W+M-2k_0) - C_4^A/M^2 Wk_0 - C_5^A ], f^A(λ)_0+ (D_13) = |k|/W1/√(3𝒩)1/C_λ[(C_3^A/M + C_4^A/M^2W)(|kϵ^0_λ - k_0ϵ^z_λ) - C_5^A ϵ^z + C_6^A/M^2 |k|(|k|ϵ^0_λ - k_0ϵ^3_λ) ] where 𝒩=√(M^2 + k ^2) + M/W. The coupling factors 𝒞^V_i (i= 3-5), represent either the CC vector form factors (C_i^V), the electromagnetic form factors (C_i^N, N=p,n) or the NC form factors (C̃_i^N, N=p,n) for the resonances with spin 3/2. Similarly, 𝒞^A_i, i= 3-6, represent either the CC axial-vector form factors (C_i^A) or NC form factors (C̃_i^A). §.§.§ Resonance with spin 1/2 In the first and second resonance regions, there are two resonances with spin 1/2: the P_11(1440) resonance, which has positive parity, and the S_11(1535) resonance, which has negative parity. Their corresponding interactions are defined as follows: Γ^ρ_1/2(P_11) = [𝒱^ρ_1/2 - 𝒜^ρ_1/2] Γ^αρ_1/2(S_11) = [𝒱^αρ_1/2 - 𝒜^αρ_1/2]γ^5 where the vector (𝒱^αρ_1/2) and axial-vector (𝒜^αρ_1/2) components are given by: 𝒱^ρ_1/2 = ℱ_1^V/2M^2(k̸q^ρ - k^2 γ^ρ) + ℱ_2^V/2M(iσ^ραk_α) , 𝒜^ρ_1/2 = ℱ_Aγ^ργ^5 + ℱ_P/M k^ργ^5 . The vector and axial-vector helicity amplitudes for the P_11 and S_11 resonances can be derived from Eqs. <ref> and <ref>. 
Here, we present the explicit forms of these amplitudes for P_11 resonance: f^V_-1 (P_11) = |k|/√(W(E_k +M))[ℱ_1^V - ℱ_2^V/W_+^2 k^2] f^V(λ)_0+ (P_11) = -1/C_λ|k|/√(2W(E_k +M))(|k|ϵ^0_λ - k_0ϵ^3_λ)     1/W_+[ ℱ_1^V - ℱ_2^V] f^A_-1(p_11) = - ℱ_A √(E_k + M/W) f^A(λ)_0+ (P_11) = 1/C_λ1/√(2W(E_k +M)) [ ℱ_A [(|k|ϵ^0_λ - k_0ϵ^3_λ)    + ϵ^3_λW_+ ] + ℱ_P|k|/M(|kϵ^0_λ - k_0ϵ^3_λ) ] and for S_11 resonance: f^V_-1 (S_11) = √((E_k + M)/W)[ℱ_1^V/W_+^2 k^2 - ℱ_2^V/W_+W_-] f^V(λ)_0+ (S_11) = 1/C^V_λ√(E_k + M/2W)(|k|ϵ^0_λ - k_0ϵ^3_λ)   [-ℱ_1^V W_-/W_+^2 + ℱ_2^V/W_+] f^A_-1 (S_11) = -ℱ_A |k|/√(W(E_k +M)) f^A(λ)_0+(S_11) = -1/C_λ√(E_k +M/3W)[ ℱ_A( ϵ^0_λW_+)   + ℱ_P/M(|k|ϵ^0_λ - k_0ϵ^3_λ) ] where ℱ^V_i (i= 1,2), represent either the CC vector form factors (F_i^V), the electromagnetic form factors (F_i^N, N=p,n) or the NC form factors (F̃_i^N, N=p,n) for the resonances with spin 1/2. Similarly, ℱ_A and ℱ_P represent for CC (NC) axial-vector form factors F_A, F_P (F^N_A, F^N_P). The vector form factors C_i^V and C_i^VN (F_i^V and F_i^VN) can be related to the electromagnetic form factors C_i^p,n (F_i^p,n) as is presented in Table <ref>. §.§.§ Form factor parametrisation The Meson Dominance (MD) model, which aligns with Quantum Chromodynamics (QCD) calculations, effectively describes the nucleon form factor in both perturbative and non-perturbative regimes as highlighted in <cit.>. This model is particularly adept at addressing the hadron-quark transition region. In this work, we use a parametrization suggested by G. Vereshkov and N. Volchanskiy in Ref. <cit.>. C_α^(V,A)(k^2) = C_α ^(p)(0)/L_α(k^2)∑_k=1^Ka^(V,A)_α k m^2_k/m^2_k- k^2,   (α=3-5) g_β^(V,A)(k^2) = g_β^(p)(0)/L_β(k^2)∑_k=1^Kb^(V,A)_β k m^2_k/m^2_k - k^2,   (β=1-2) Here, m^2_k = m^2_(ρ)k represents the ρ-meson mass for the Δ (P_33(1232)) resonance, while m^2_k = (m^2_(ω)k + m^2_(ρ)k)/2 pertains to the resonances in the second region, such as P_11(1440), D_13(1520), and S_11(1535). The values for m_(ρ)k and m_(ω)k are listed in Table <ref>. The parameters a^(V,A)α k and b^(V,A)β k, which represent the coupling constant ratios of vector and axial-vector mesons to nucleons, are not free parameters. Instead, they are constrained by the linear superconvergence relations as expressed in Eq. <ref>: For α=3: ∑_k=1^K a_3 k =1,               ∑_k=1^K a_3 k m^2_k=0, ∑_k=1^K a_3 k m^4_k=0. For α=4,5: ∑_k=1^K a_α k =1,              ∑_k=1^K a_α k m^2_k=0, ∑_k=1^K a_α k m^4_k=0,          ∑_k=1^K a_α k m^6_k=0. For β=1: ∑_k=1^K b_1 k =1,               ∑_k=1^K b_3 k m^2_k=0, ∑_k=1^K b_1 k m^4_k=0. For β=2: ∑_k=1^K b_2 k =1,              ∑_k=1^K b_2 k m^2_k=0, ∑_k=1^K b_2 k m^4_k=0,          ∑_k=1^K b_2 k m^6_k=0. The logarithmic renormalisation L_α, β^(V,A)(k^2) is necessary to satisfy quark-hadron duality: L_α (β)^(A,V)(k^2) = 1 + k_α (β)^(A,V)ln^n_α(β)(1 - k^2/Λ^2_QCD) where n_1 = n_3 ≃ 3, n_1>n_2 and n_5>n_3>n_4. This parametrisation provides a robust framework for analyzing the nucleon form factors, ensuring consistency with both theoretical predictions and experimental observations. The linear superconvergence relations play a crucial role in constraining the parameters, thereby enhancing the predictive power and reliability of the MD model in various resonance regions. §.§ The Third resonance regions The third resonance region is characterised by overlapping structures of multiple resonances, each contributing less significantly to the hadronic tensor compared to those in the earlier regions. 
Additionally, this region includes several resonances with spins greater than 3/2, which are inherently more complex to describe due to the increased number of degrees of freedom. To avoid introducing an excessive number of parameters that would complicate the analysis of these overlapping resonances, I employ the quark harmonic oscillator model <cit.> to calculate the production amplitudes as described in Eq. <ref>. This approach simplifies the problem by reducing the complexity associated with higher-spin resonances. In fact, these amplitudes have already been calculated in previous works <cit.>, where a simple dipole form factor was used for each resonance. The hadronic current for each resonance in this region is related to a dipole form factor, parameterised by two adjustable quantities, F_V(0) and M_V: F_V(k^2) = F_V(0)( 1- k^2/M_V^2)^-2(1 - k^2/M^2)^n/2 Here, F_V(k^2) represents the form factor as a function of the momentum transfer squared, k^2 . The parameter F_V(0) correspond to the form factor's value at zero momentum transfer. The factor (1 - k^2/M^2)^n/2 introduces additional flexibility in the form factor to account for the effects of higher resonances where n represents the number of oscillator excitations in the quark oscillator model. By adopting this approach, the model remains manageable while still capturing the essential physics of the third resonance region. The use of a dipole form factor with adjustable parameters allows for a more practical and streamlined analysis, facilitating the study of this complex region without sacrificing accuracy. § NON-RESONANT BACKGROUND The helicity amplitudes of non-resonant background from the chiral-perturbation (ChPT) diagrams were calculated in the previous work <cit.> and are reproduced in Appendix <ref>. In this work, the dipole vector form factors of ChPT theory (low W) are changed to VMD form-factors as suggested by Lomon <cit.>. They are F_1^iv (k^2) = N/21.0317 + 0.0875(1 - k^2/0.3176)^-2/1 - k^2/0.5496 F_1^ρ(k^2) + ℛ_ρ'm_ρ'^2/m_ρ'^2 - k^2 F_1^ρ(k^2) + (1 - 1.1192N/2 - ℛ_ρ') F_1^D(k^2), F_2^iv (k^2) = N/25.7824 + 0.3907(1 - k^2/0.1422)^-1/1 - k^2/0.5362 F_2^ρ(k^2) + κ_ρ'ℛ_ρ'm_ρ'^2/m_ρ'^2 - k^2 F_2^ρ(k^2) + (κ_v - 6.1731N/2 - κ_ρ'ℛ_ρ') F_2^D(k^2), F_1^is (k^2) = ℛ_ωm_ω^2/m_ω^2 - k^2 F_1^ω(k^2) + ℛ_ϕm_ϕ^2/m_ϕ^2 - k^2 F_1^ϕ(k^2) + (1 - ℛ_ω ) F_1^D(k^2), F_2^is (k^2) = κ_ωℛ_ωm_ω^2/m_ω^2 - k^2 F_2^ω(k^2) + κ_ϕℛ_ϕm_ϕ^2/m_ϕ^2 - k^2 F_2^ϕ(k^2) (κ_s - κ_ωℛ_ω - κ_ϕℛ_ϕ) F_2^D(k^2), where κ_v=3.793, κ_s=-0.12, m_ρ'=1.465 GeV, m_ω=0.78265 GeV and m_ϕ=1.020 GeV. The vector form-factors, F_1,2^ρ, F_1,2^ω and F_1,2^D, are given by: F_1^ℳ,D(k^2) = Λ^2_1,D/Λ^2_1,D - k̃^2Λ^2_2/Λ^2_2 - k̃^2, F_2^ℳ,D(k^2) = Λ^2_1,D/Λ^2_1,D - k̃^2(Λ^2_2/Λ^2_2 - k̃^2)^2, where ℳ= ρ and ω and Λ_1,D is Λ_1 for F_1,2^ℳ and Λ_D for F_1,2^D, F_1^ϕ(k^2) = F_1^ℳ(-k^2/Λ^2_1 - k^2)^3/2 ,     F^ϕ_1(0)=0, F_2^ϕ(k^2) = F_2^ℳ(μ^2_ϕ - Λ_1^2 k^2/μ^2_ϕΛ^2_1 - k^2)^3/2, where k̃^2 = k^2 ln[(Λ^2_D - k^2)/Λ_QCD^2 ]/ln(Λ^2_D/Λ^2_QCD)  . The non-resonant background of the MK model is extended to high W (W>1.4 GeV), using the Regge trajectory approach <cit.>. The propagators of the t-channel meson exchange ChPT diagrams are replaced by the corresponding Regge propagators (ReChi). The expressions of the Regge propagators for the meson trajectories of the π and ρ are the following: 𝒫_π(t,s) = -α'_πΓ(- α_π(t) )(S/S^π_0)^α_π(t), 𝒫_ρ(t,s) = -α'_ρΓ(- α_ρ(t) )(S/S^ρ_0)^(α_ρ(t) - 1), where S^π_0 and S^ρ_0 are adjustable parameters. 
The Regge trajectory of π and ρ are as follows: α_π(t) = 0.75(t- m_π^2), α_ρ(t) = 0.53 + 0.85t. Following the Hybrid model's parametrisation <cit.>, the helicity amplitudes for non-resonant background will be: F̃ = cos^2 ϕ(W) F̃_ChPT + sin^2 ϕ(W)F̃_ReChi, where F̃= F̃_λ_2, λ_1^λ_k and ϕ(W) = π/2( 1- 1/exp (W-1.7)/0.1). F̃_ChPT are the helicity amplitudes for the ChPT background and F̃_ReChi are the modified helicity amplitudes with Regge propagators. This approach with the linear Regge trajectories (Eq. <ref>) is valid at high-energy and low-momentum transfer. To ensure that the model reproduces data at high momentum transfer, the trajectories are multiplied by a factor of (1 + 2.4Q^2/W^2)^-1, as suggested in Ref. <cit.>. The non-resonant contribution itself follows the Hernandez et al. model <cit.>, which is deduced from the chiral-perturbation (ChPT) theory Lagrangian of the pion-nucleon system and is therefore not reliable on its own at high hadron invariant mass (W>1.4 GeV); the Regge modification above extends the model to this region. The adjustable parameters in the Regge trajectory and VMD form factors are fit to the available exclusive electron scattering data in the next section. § ANALYSIS OF EXCLUSIVE SPP DATA Accurate determination of vector and axial-vector transition form factors, as well as non-resonant nucleon-pion production amplitudes, is essential for providing reliable predictions of SPP channels. These form factors involve several adjustable parameters, which can only be accurately defined through the use of experimental data. In this section, we conduct a comprehensive analysis of all available exclusive SPP data on proton and neutron targets. This joint analysis aims to determine the Q^2 dependence of all form factors described in the previous section, providing a detailed understanding of the underlying interaction dynamics. The analysis incorporates a wide range of data sources to ensure robust and accurate form factor determinations, considering various experimental conditions and measurements.
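To sketch what such a joint analysis looks like in practice, the toy code below assembles a single chi-square out of several heterogeneous datasets and minimises it. Everything here is illustrative: the one-parameter "dipole" model, the synthetic data and the use of scipy's Nelder-Mead minimiser stand in for the real form-factor model and the Minuit2 fit described later; only the overall structure (per-dataset chi-square contributions summed into one objective) reflects the strategy of this section.

```python
import numpy as np
from scipy.optimize import minimize

def chi2_dataset(params, kin, sigma_obs, sigma_err, sigma_model):
    """Chi-square contribution of one dataset (electron, photon, pion, ...)."""
    resid = (sigma_model(kin, params) - sigma_obs) / sigma_err
    return np.sum(resid**2)

def joint_chi2(params, datasets):
    """Sum of the per-dataset chi-squares entering the joint fit."""
    return sum(chi2_dataset(params, *ds) for ds in datasets)

# --- a fake two-parameter dipole model just to exercise the machinery ---
def toy_model(Q2, params):
    norm, MV = params
    return norm / (1.0 + Q2 / MV**2)**2

rng = np.random.default_rng(0)
Q2_grid = np.linspace(0.1, 3.0, 25)
truth = (1.0, 0.84)
fake_data = toy_model(Q2_grid, truth) * (1 + 0.05 * rng.standard_normal(Q2_grid.size))
datasets = [(Q2_grid, fake_data, 0.05 * fake_data, toy_model)]

res = minimize(joint_chi2, x0=[0.8, 1.0], args=(datasets,), method="Nelder-Mead")
print("best-fit params:", res.x,
      " reduced chi2:", res.fun / (Q2_grid.size - len(res.x)))
```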
The vector form factors in weak interactions are closely linked to the electromagnetic form factors. As a result, electron and photon scattering data on nucleon targets are invaluable for determining the vector form factors. Conversely, pion scattering data are essential for fitting the axial-vector form factors. By combining electron, photon, pion, and neutrino scattering data, we can achieve a comprehensive and precise extraction of both vector and axial-vector form factors. This integrated approach ensures coverage of the entire kinematic range relevant for Single Pion Production (SPP) processes. A list of resonances used in the MK model is given in <Ref>. §.§ Photo-Nucleon and Electro-Nucleon Single Pion Productions Electron and photon scattering data are crucial for this analysis as they allow for probing the vector current operators using monochromatic incident beams across a wide range of kinematic regions. Given the limitations of theoretical predictions in accurately capturing the Q^2-dependence of form factors, particularly in the transition region, these experimental data are essential for guiding the selection of optimal functions for the various vector form factors, which are then applied to the axial-vector form factors. The interactions of electrons and photons with protons are described by the following reactions: e + p → e p  π^0   ,           e + p → e n  π^+ γ + p → p  π^0   ,          γ + p → n  π^+ Similarly, the interactions of electrons and photons with neutrons are: e + n → e n  π^0   ,           e + n → e p  π^-   γ + n → n  π^0   ,          γ + n → p  π^-. In previous work presented in Ref. <cit.>, the model for electron-proton interactions and the electromagnetic MD form factors were tested for their ability to describe excited protons within the transition region. In that study, the MD form factors were fitted to all available electron-proton scattering data in the two channels described by Eq. <ref>. The results demonstrated that the model could predict the data smoothly and accurately across these kinematic ranges. In the current study, we adopt a similar analysis methodology but extend it by including the photo-proton scattering data in Eq. <ref>. This inclusion is particularly important for addressing the form factors at very low Q^2, a region that is less accessible through electron scattering alone. Photo-nucleon interactions provide complementary information that helps fill in these gaps, ensuring a more comprehensive understanding of the form factors across all relevant kinematic regions. This study also expands the investigation by incorporating data on single pion production induced by electrons and photons on neutron targets, as shown in Eq. <ref>. The recent analysis of π^- production with an electron beam has made this study possible for the first time. This advancement is crucial for determining the form factors of excited neutrons, a subject that has been less studied compared to protons. The form factors obtained from interactions involving excited protons and neutrons are anticipated to be identical for isospin 3/2 resonances due to isospin symmetry considerations, as discussed by Lalakulich et al. <cit.>. However, for isospin 1/2 resonances, differences can emerge, reflecting the distinct internal dynamics of these states. Understanding these discrepancies is essential for a comprehensive understanding of nucleon resonance behaviour.
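The isospin bookkeeping behind these statements can be made explicit with a few lines of SymPy. The sketch below decomposes the pπ^0 and nπ^+ final states (total I_3 = +1/2, as produced off a proton target) into total isospin 3/2 and 1/2 components: the larger I=3/2 coefficient of pπ^0 is what makes the Δ region more prominent in that channel, while the I=1/2 pieces are where proton- and neutron-type form factors can differ. The phase conventions are those of SymPy's Clebsch-Gordan implementation, and the channel labels are only illustrative.

```python
from sympy import S
from sympy.physics.quantum.cg import CG

half = S(1) / 2

def pion_nucleon_cg(m_pi, m_N, I):
    """Isospin coefficient < 1 m_pi ; 1/2 m_N | I, m_pi + m_N > for a piN state."""
    return CG(1, m_pi, half, m_N, I, m_pi + m_N).doit()

# Final states reachable from a proton target (total I_3 = +1/2)
states = {"p pi0": (0, +half), "n pi+": (+1, -half)}
for name, (m_pi, m_N) in states.items():
    c32 = pion_nucleon_cg(m_pi, m_N, S(3) / 2)
    c12 = pion_nucleon_cg(m_pi, m_N, half)
    print(f"{name}:  I=3/2 coefficient = {c32},  I=1/2 coefficient = {c12}")
```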
§.§ Pion-Nucleon and Neutrino-Nucleon Single Pion Productions Neutrino interactions with weak current can provide information for both vector and axial-vector currents. The SPP neutrino interactions consist of CC and NC channels for neutrino and anti-neutrino beams. The charged current SPP channels are: ν_l + p →l^- p  π^+   ,          ν̅_l + n → l^+ n  π^- ν_l +n → l^- n  π^+   ,          ν̅_l + p → l^+ p  π^- ν_l + n → l^- p  π^0   ,          ν̅_l + p → l^+ n  π^0 and the neutral current SPP channels are: ν_l + p →ν_lp  π^0   ,          ν̅_l + p →ν̅_l p  π^0 ν_l + p →ν_ln  π^+   ,          ν̅_l + p →ν̅_l n  π^+ ν_l + n →ν_ln  π^0   ,          ν̅_l + n →ν̅_l n  π^0 ν_l + n →ν_lp  π^-   ,          ν̅_l + n →ν̅_l p  π^- Experimental knowledge of the neutrino-nucleon cross section for single pion production in the GeV neutrino energy range, which is crucial for current and planned neutrino oscillation experiments, remains limited. Most of the existing data originate from low-statistics bubble chamber experiments conducted with hydrogen or deuterium targets and involve muon (anti-)neutrino beams. Unlike electron scattering experiments, where full kinematic measurements are typically available, neutrino data are generally reported as integrated cross sections or one-dimensional differential cross sections. The only data available on hydrogen targets are from CC neutrino and anti-neutrino channels measured by the BEBC experiment, which utilised very broad neutrino and anti-neutrino beams. Consequently, the most relevant weak interaction data for this study is the Q^2-differential cross section obtained from the BEBC experiment. There are more data on deuterium targets from the ANL and BNL bubble chamber experiments with neutrino beams in the few GeV range, which are more relevant for this study. In fact, all single pion production (SPP) models have used the ANL and BNL data to fit the axial form factor, assuming that nuclear effects on deuterium are negligible. However, recent measurements with an electron beam <cit.> indicate that the cross sections for hydrogen and deuterium differ by approximately 15%. As a result, I am not using the direct cross-section measurements on deuterium targets, as the nuclear effects are not negligible. Nevertheless, I assume that the ratio of the cross-section data across different channels will effectively cancel out the nuclear effects. Consequently, the already limited neutrino data will become even more restricted. Fortunately, pion scattering data on hydrogen can provide additional information for determining the axial form factor. The weak interaction, which includes both vector and axial vector currents, only has contributions from the axial vector current at very low Q^2∼0. This phenomenon is explained by the conserved vector current (CVC) relation. In this region, the axial vector current is connected to elastic pion-nucleon interactions via the partially conserved axial current (PCAC) relation: dσ/dWdQ^2(ν_μ p →μ^- p π^+) |_Q^2=0 = σ(π^+ p →π^+ p) dσ/dWdQ^2(ν̅_μ p →μ^+ p π^-) |_Q^2=0 = σ(π^- p →π^- p) Therefore, the axial form factors are fitted to the pion scattering data at very low Q^2. The Q^2-dependence of the axial form factors can only be determined using neutrino-nucleon single pion production (SPP) data, which is very scarce. The model includes 88 parameters to parametrise all the transition form factors for resonance production and non-resonant background. 
These parameters are fit to the experimental data using the Minuit2 minimiser, and the best-fit results, with a reduced χ^2=3.05, are discussed below: §.§ VMD form-factors for spin-1/2 resonances There are two spin-1/2 resonances in the second resonance region, i.e. P_11(1440) and S_11(1535), with the form-factors in equation <ref>. Five mesons from <Ref> are taken into account, and therefore K=5 for spin-1/2 resonances. As a result, there are two free parameters in b_1 k, as they satisfy the three superconvergence relations in equation <ref>, and one free parameter is allowed for b_2 k due to the four superconvergence relations in equation <ref>. The QCD-scale in the logarithmic renormalisations (L^(p)_β) can vary between Λ∈ [0.19-0.24] GeV and n_β=1=3, n_β=2=2. All adjusted parameters related to spin-1/2 resonances are reported in <Ref>. §.§ Dipole form-factors for resonances in the third region The most characteristic feature of the third region is overlapping structures of several resonances with less overall contribution to the hadronic tensor. To avoid having many parameters (as we do with VMD form-factors) in a rather small phase space, we use the Rein-Sehgal model with a simple dipole form-factor for each resonance. The hadron current of each resonance in the third resonance region is related to a dipole form-factor with two adjustable parameters, F_V(0) and M_V: F_V(k^2) = F_V(0)( 1- k^2/M_V^2)^-2(1 - k^2/M^2)^n/2 where n is the number of oscillators from the Rein-Sehgal model. The number of oscillators and the parameters F_V(0) and M_V are reported in <Ref> for the resonances in the third region. §.§ The VMD form-factors for non-resonant interactions The update for the non-resonant interaction was presented in the previous section <ref>. The adjustable parameters for the non-resonant form-factors are defined in equations (<ref>-<ref>) and the adjustable parameters for the Regge propagators are defined in equation (<ref>). All parameters related to the non-resonant interaction and their best fits are reported in <Ref>. § RESULT AND COMPARISON WITH EXPERIMENTAL DATA In this section we present results from the MK model and compare them with electron scattering data. The experimental data are measurements of the virtual photon cross section dσ_T/dΩ^∗ _π + ϵdσ_L/dΩ^∗ _π[the virtual photon cross section is defined in the previous work <cit.>] <cit.>. The selected results are chosen to cover a broad range of Q^2 and W for the two channels with pπ^0 and nπ^+ final-state hadrons in <Ref> and <Ref>, within a 1σ error band. To have an accurate estimate of the systematic uncertainty, the correlations between parameters are included to avoid double counting or cancellation of the systematic effects. We also present results from other existing models: * Unitary Isobar Model - MAID2007 <cit.> is the latest version of the unitary isobar model for partial wave analysis on the world data of pion photo- and electroproduction in the resonance region (Q^2<5.0   (GeV/c)^2). * Dynamical coupled-channels (DCC) model <cit.>. The DCC analysis is performed on more limited pion photo- and electroproduction data (Q^2<3.0   (GeV/c)^2). * Hybrid model <cit.>, which is valid at low Q^2. Like the MK model, it uses the Rarita-Schwinger formalism plus the Regge approach for the resonant and non-resonant interactions; however, the form-factors in the two models are different. Note that the region of validity of the above models is smaller than that of the MK model; therefore, the missing results from these models in <Ref> are due to their limitations.
<Ref> shows the cross-section for the e p → e n π^+ channel and <Ref> shows the cross-section for the e p → e p π^0 channel over a broad range of Q^2 (∈[0.16-6.00] (GeV/c)^2) in terms of the pion polar angle in the resonance rest frame. Plots at low invariant mass (W<1.15 GeV) are dominated by non-resonant background contributions, while at higher W the Δ resonance has a dominant contribution in plots with W ∈ [1.18- 1.28 GeV]. <Ref> show that all models have good agreement with data at low invariant mass (W<1.28 GeV) for both channels. At still higher invariant mass (W ∈ [1.4- 1.6 GeV]) the next three resonances (P_11(1440), D_13(1520), and S_11(1535)) have the dominant effects in the second resonance region. The MAID results show good agreement at low Q^2 (two top plots in <Ref>); however, at higher Q^2, MAID favours data from the e p → e p π^0 channel over the e p → e n π^+ channel. The second plot of <Ref> shows that the Hybrid model doesn't predict the backward pion well. Also, the model under-predicts data in the dip region between the first and the second resonance region, i.e. W ∈ [1.28- 1.4 GeV], which is also visible in <Ref>. The rest of the resonances in <Ref> contribute to the third resonance region (W > 1.6 GeV), and only one measurement for the e p → e n π^+ channel exists. This is presented in the third plot of <Ref>, where the MAID result doesn't show good agreement with data. Some of the datasets in <Ref> cover a large range of the hadron invariant mass (W), which allows us to show the cross-section in different resonance regions, as presented in <Ref> (<Ref>) for low (medium) Q^2. Comparing the two channels in each figure reveals the different W distributions, which are mainly due to the different Clebsch-Gordan coefficients for isospin 3/2 and 1/2 resonances. The first resonance region is dominant in the e p → e p π^0 channel due to the larger Clebsch-Gordan coefficient for the Δ (isospin 3/2) resonance. The second resonance region is more pronounced for the e p → e n π^+ channel because this region is populated mainly by isospin 1/2 resonances that have larger Clebsch-Gordan coefficients for this channel. All comments about the model comparisons in the first and the second resonance regions from <Ref> are visible in <Ref> (the third resonance region is not visible here). The DCC prediction for the e p → e n π^+ channel at low Q^2 (in <Ref>) and forward pion angle shows discrepancies with data in the second resonance region. § CONCLUSION The Single Pion Production (SPP) model presented in this work is meticulously designed to enhance our understanding and interpretation of neutrino measurements: * Comprehensive Coverage: The model encompasses all SPP channels and is applicable across the entire kinematic range relevant to the energy spans of all neutrino experiments. * NC Weak Interaction: By employing a unified model of electromagnetic and weak interactions along with isospin symmetry, the model delivers precise predictions for NC neutrino interactions, even in the absence of extensive data. This capability is particularly crucial for water Cherenkov detectors such as T2K and Hyper-K, where neutral current π^0 production constitutes a significant background. * Addressing Low Q^2: The model effectively addresses the low Q^2 region, where discrepancies with data currently exist.
This is achieved through the use of photon scattering data and the Conserved Vector Current (CVC) hypothesis for the vector current, as well as pion scattering data and Partially Conserved Axial Current (PCAC) for the axial current. * Addressing High Q^2: Neutrino-nucleon-resonance vertices are parameterized with phenomenological form factors that possess analytic properties and adhere to the unitarity condition. These form factors are asymptotically consistent with QCD calculations, ensuring the model's validity at high Q^2. * Addressing High W: The model extends its validity to high W by utilizing Regge phenomenology, replacing t-channel Feynman propagators in non-resonant interactions with the corresponding Regge trajectories. * Integration with Event Generators: The model is designed for seamless integration into nuclear theory frameworks and neutrino event generators employed by neutrino experiments, ensuring accurate and consistent predictions. * Systematic Uncertainty Control: The calculations are conducted with stringent control over systematic uncertainties stemming from experimental errors, model dependencies, parameterization choices, and extrapolation to higher energies. This rigorous approach is crucial for the reliability and precision of neutrino measurements. By addressing these key areas, the SPP model provides a robust framework for enhancing the accuracy and reliability of neutrino and anti-neutrino interaction studies, paving the way for more precise and insightful experimental results and CP discovery. § HELICITY AMPLITUDES FOR THE RESONANT INTERACTIONS The helicity amplitudes of the vector and axial-vector currents are given in <Ref> and <Ref>, where f^V,A(R) is the production amplitudes and D(R) is the decay amplitude: 𝒟^j(R)=⟨ Nπ,λ_2|R λ_R ⟩ = σ^D C_Nπ^j√(χ_E)κ C_Nπ^I f_BW where f_BW(R) is the Breit-Wigner amplitude: f_BW(R) = √(Γ_R/2π)( 1/W- M_R + iΓ_R/2 ) where Γ_R = Γ_0 (|𝐪(W)|/|𝐪(M_R)|)^2l+1, and κ= ( 2π^2 W^2/M^2 . 2/2j+1 1/|𝐪|)^1/2, Γ_0, M_R, σ^D and χ_E are given in <Ref> and C^I_Nπ are the isospin Clebsch-Gordan coefficients given in <Ref>. The explicit form of the d^j_λ,μ(θ) functions for j=l+1/2 are: d^j_1/21/2  = (l+1)^-1cosθ/2 (P'_l+1 - P'_l) d^j_-1/21/2 = (l+1)^-1sinθ/2 (P'_l+1 + P'_l) d^j_1/23/2  = (l+1)^-1sinθ/2 (√(l/l+2)P'_l+1 + √(l+2/l) P'_l) d^j_-1/23/2 = (l+1)^-1cosθ/2 (-√(l/l+2)P'_l+1 + √(l+2/l) P'_l) where P_l are Legendre polynomials and P'_l= dP_l/dcosθ. The helicity amplitudes of the vector current are given in <Ref>, where ℱ_i = K^V_i . F_i  (i=1, ...,6) where F_1 = V_1 + (V_3-V_4)(qk)/W_- + V_4W_- - V_6 k^2/W_- , F_2 =-V_1 + (V_3-V_4)(qk)/W_+ + V_4W_+ - V_6 k^2/W_+ , F_3 = V_3 - V_4 + V_25/W_+  , F_4 = V_3 - V_4 - V_25/W_-  , F_5 = V_1(W_+^2 - k^2)/2W - V_2(qk)(W_+^2 - k^2 + 2WW_-)/2W + (V_3-V_4)(W_+q_0 - (qk)) + V_4(W_+^2 - k^2)W_-/2W - V_5(qk)k_0 - V_6 (W_+^2 - k^2)W_-/2W + q_0 V_25 , F_6 =-V_1(W_-^2 - k^2)/2W + V_2(qk)(W_+^2 - k^2 + 2WW_-)/2W + (V_3-V_4)(W_-q_0 - (qk)) + V_4(W_-^2 - k^2)W_+/2W + V_5(qk)k_0 - V_6 (W_-^2 - k^2)W_+/2W - q_0 V_25 , and V_i (i=1, ...,6) are presented in <Ref> where s, u, t are invariant Mandelstam variables: s = (p_2 + q)^2 = (p_1 + k)^2 = W^2 , t = (k-q)^2 ,   and   u=(q-p_1)^2 . K^V_i are given in reference <cit.>: K_1^V = W_- O_1+ K_2^V = W_+ O_1- K_3^V = q^2 W_+ O_2-   K_4^V = q^2 W_+ O_2- K_5^V = 1/O_2+ K_6^V = 1/O_2- where O_1± = [(W^2_± - k^2)(W^2_± - m_π^2)]^1/2/2W O_2± = [(W^2_± - k^2)/(W^2_± - m_π^2)]^1/2. plainnat
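As a small numerical companion to these appendix formulas, the sketch below evaluates the running-width Breit-Wigner factor f_BW(R) for a single resonance. The Δ(1232)-like mass, width and orbital angular momentum used here are indicative round numbers inserted for the example, not the values of the tables, and the centre-of-mass pion momentum |q(W)| is computed from the standard two-body phase-space relation.

```python
import numpy as np

M_N, m_pi = 0.938, 0.138            # GeV, approximate nucleon and pion masses

def q_cm(W):
    """Pion momentum in the piN centre-of-mass frame at invariant mass W (GeV)."""
    return np.sqrt((W**2 - (M_N + m_pi)**2) * (W**2 - (M_N - m_pi)**2)) / (2.0 * W)

def breit_wigner(W, M_R, Gamma0, l):
    """Running-width Breit-Wigner amplitude f_BW(R), as defined in the appendix."""
    Gamma = Gamma0 * (q_cm(W) / q_cm(M_R))**(2 * l + 1)
    return np.sqrt(Gamma / (2.0 * np.pi)) / (W - M_R + 1j * Gamma / 2.0)

# Indicative Delta(1232)-like parameters (placeholders, not fitted values)
W = np.linspace(1.1, 1.4, 7)
print(np.abs(breit_wigner(W, M_R=1.232, Gamma0=0.117, l=1))**2)
```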
http://arxiv.org/abs/2409.03255v1
20240905052045
Asking Fast Radio Bursts (FRBs) for More than Reionization History
[ "Abinash Kumar Shaw", "Raghunath Ghara", "Paz Beniamini", "Saleem Zaroubi", "Pawan Kumar" ]
astro-ph.CO
[ "astro-ph.CO", "astro-ph.HE" ]
Corresponding author: Abinash Kumar Shaw (abinashkumarshaw@gmail.com). Abinash Kumar Shaw (ORCID 0000-0002-6123-4383): Astrophysics Research Center of the Open University (ARCO), The Open University of Israel, 1 University Road, Ra'anana 4353701, Israel; Department of Computer Science, University of Nevada Las Vegas, 4505 S. Maryland Pkwy., Las Vegas, NV 89154, USA. Raghunath Ghara (ORCID 0000-0001-9816-5070): Astrophysics Research Center of the Open University (ARCO), The Open University of Israel, 1 University Road, Ra'anana 4353701, Israel; Department of Physical Sciences, Indian Institute of Science Education and Research Kolkata, Mohanpur, WB 741 246, India. Paz Beniamini (ORCID 0000-0001-7833-1043): Astrophysics Research Center of the Open University (ARCO), The Open University of Israel, 1 University Road, Ra'anana 4353701, Israel; Department of Natural Sciences, The Open University of Israel, 1 University Road, Ra'anana 4353701, Israel; Department of Physics, The George Washington University, 725 21st Street NW, Washington, DC 20052, USA. Saleem Zaroubi (ORCID 0000-0001-9121-8467): Astrophysics Research Center of the Open University (ARCO), The Open University of Israel, 1 University Road, Ra'anana 4353701, Israel; Department of Natural Sciences, The Open University of Israel, 1 University Road, Ra'anana 4353701, Israel; Kapteyn Astronomical Institute, University of Groningen, PO Box 800, 9700AV Gröningen, The Netherlands. Department of Astronomy, University of Texas at Austin, Austin, TX 78712, USA. § ABSTRACT We propose different estimators to probe the epoch of reionization (EoR) intergalactic medium (IGM) using the dispersion measure (DM) of FRBs. We consider three different reionization histories which we can distinguish with a total of ≲ 1000 DM measurements during EoR if their redshifts are known. We note that the redshift derivatives of DM are also directly sensitive to the reionization history. The major point of this work is exploring the variance in the DM measurements and the information encoded in them. We find that the all-sky average DM(z) is biased by the LoS fluctuations in the DM measurements introduced by the ionization of the IGM during EoR. We find that the ratio σ_ DM/ DM depends directly on the ionization bubble sizes as well as the reionization history. We also find that the angular variance (coined the structure function) of DM encodes information about the duration of reionization as well as the typical bubble sizes. We establish the usefulness of variances in DM using toy models of reionization and later verify it with realistic reionization simulations. § INTRODUCTION According to the current understanding of cosmology, our universe transitioned from a cold, highly neutral phase in the past to an almost fully hot and ionized phase at present. This is understood to be a result of UV radiation from the very first objects that formed in the universe photoionizing the intergalactic medium (IGM) <cit.>. This window of transition is termed the Epoch of Reionization (EoR). The study of the EoR is crucial for answering several questions regarding the emergence of the first sources, their properties and impact on the IGM, the evolution to present-day structures, the exact timeline of this epoch, etc. Despite our efforts in the last few decades, our understanding of the EoR remains limited <cit.>.
Our present understanding of the timing and duration of EoR is guided by a few indirect observations such as the measurements of the Thomson scattering optical depth from the cosmic microwave background radiation observations <cit.> and the Gunn-Peterson troughs in the high-z quasar spectra <cit.>. Additional constraints on the timeline of EoR come from the recent observations of the high-z Ly-α emitters <cit.> and their clustering measurements <cit.>, Lyman break galaxies <cit.>, and the Ly-α damping wings in the high-z quasar spectra <cit.>. These experiments attempt to constrain the reionization history by putting bounds on the global ionization fraction of the IGM during EoR. On the other hand, the measurements of the effective optical depth of Ly-α forests (using dark-gap/pixel statistics) <cit.> suggests that the end of reionization has a longer tail extending to somewhere between z = 5.5 and 5.0 instead of z ≈ 6. However, all these analyses are either model-dependent or suffer from statistical variance, thus providing only loose bounds on the EoR timeline. Probing EoR directly using the redshifted 21-cm signal with the current instruments is also challenging because of several hindrances such as large (∼ 10^4 times) foregrounds <cit.>, thermal noise, radio frequency interference, ionospheric turbulence and other systematics. While no undisputed detection of the EoR 21-cm signal has been achieved so far, the current data from the radio-interferometric experiments have been able to provide a few upper limits on the EoR 21-cm power spectra (e.g., LOFAR: , MWA: , HERA: ), and the upper-limits are improving gradually. A few earlier works have demonstrated the potential of highly energetic astrophysical events such as gamma-ray bursts (GRBs) <cit.> and fast radio bursts (FRBs) <cit.> during EoR as a probe to measure the reionization history. In this work we focus on the FRBs, which are luminous short-duration (∼ few ms) astrophysical radio pulses which have been detected within a frequency band of 0.1-7  GHz <cit.>. Empirical studies on the all-sky event rates of FRBs based on observations find it to be ∼ 10^3/day above the fluence limit of 5 Jy ms and at a central frequency of 500 MHz <cit.>, and the rate is expected to increase significantly at lower fluence thresholds. FRB signals disperse while travelling through the ionized medium. The amount of dispersion, quantified as Dispersion Measure (DM), directly depends on the free electron content along its path. The DM of a cosmological FRB is expected to have dominant contribution coming from the electrons in the IGM. During post-EoR (z ≲ 5.5), where the IGM is almost ionized, the IGM DM roughly scales directly with the distances. Therefore the DM measurements can be turned to infer the redshift-distance of the FRB <cit.>. Conversely, knowing the redshift of the FRBs accurately can be potentially used to estimate baryonic content of IGM during the post-EoR <cit.>, probe the epoch of second helium reionization <cit.> and constrain several cosmological parameters <cit.>. In this work we explore how useful they can be as detailed probes of the EoR. Despite their enigmatic origin, a recent discovery of a galactic FRB <cit.> clearly associates at least some FRBs to magnetars. Other, less direct, evidence linking FRBs to magnetars comes from the statistical properties of the bursts, from host galaxies and offsets relative to them and from the energetics and temporal properties of the bursts <cit.>. 
Hence, we can expect a sufficiently large number of FRBs during EoR (z>6) which spans a much larger time compared to the life-time of the massive Pop III stars (∼  5 - 30  Myr) that leave behind neutron star (NS) remnants with large angular momentum and strong magnetic fields. There are indirect evidences which supports a large abundance of high-z FRBs <cit.>. With the possibility of detecting high-z FRBs <cit.>, one can turn their precise DM measurements to probe the sources and IGM during reionization. Recently, a few theoretical studies <cit.> have demonstrated the feasibility of using the DM measurements to extract the reionization history and Thomson scattering optical depth (τ_ Th). <cit.> and <cit.> have used the mean DM from their synthetic FRB population to constrain the parameters of their reionization simulation. The results in most of these works are based on the assumption of knowing the precise redshift (spectroscopic or empirically) of FRBs. However, detecting the precise spectroscopic redshifts from the host galaxies is a challenging task with the current instruments. Conversely, <cit.> suggest that the maximum value of DM for bursts spanning the EoR can provide an independent estimate of the Thomson optical depth of the universe without requiring direct redshift information. They have also shown that ∼ 40 FRBs are sufficient to estimate average electron fraction in 4 z-bins (within z = 6 to 10) with 4% accuracy, if their redshifts could be determined with 10% uncertainty. Similar results have been reported in <cit.> where they also estimated τ_ Th and the mid-redshift of reionization using the average DM. <cit.> also suggested that the reionization history can be constrained from the determination of the number of FRBs during the EoR per unit DM, i.e., dN_ FRB/d DM. Whereas, <cit.> have shown that the redshift derivatives of DM have potential to directly constrain the reionization history. In this work, we forward model the FRB signal measurements using estimators like globally-averaged DM, its redshift derivative, global dispersion and angular dispersion (along different LoS). For the first time, we demonstrate that fluctuations in the LoS in DM significantly impact measurements during the EoR, potentially biasing mean DM estimates compared to those derived using the mean ionization fraction of the IGM. We also demonstrate that d DM/dz and the scatter in DM along different LoS (as a function of z) has the potential to discern different reionization histories and morphologies of the IGM. We explore a novel aspect of angular dispersion in DM (defined as structure function in <ref>) at different redshifts. This structure function encodes information regarding the typical bubble sizes in the IGM as it probes the angular fluctuations. We validate our estimators on simple toy models of EoR light-cone signals. Later, we also apply our estimators to more realistic light-cones obtained from simulations. We also perform a comparative study between different reionization histories. For this study, we primarily assume a scenario where the redshifts of the FRBs are known to within 10% uncertainty. Later, we ignore the redshift information and compute the marginalized average DM over the EoR window. This manuscript is arranged as follows. We define DM and structure function estimators in <ref>. In <ref>, we validate our estimators with the toy ionization field model. 
This is followed by <ref>, where we briefly describe the details of the actual EoR simulations and present the corresponding results, including a discussion of the impact of the post-EoR IGM on the DM estimates. Finally, we summarize and conclude this exercise in <ref>. The cosmological simulation here uses the cosmological parameter values Ω_ m =0.27, Ω_Λ = 0.73, Ω_ b = 0.044 and h=0.7 adapted from <cit.>. § METHODOLOGY We revisit the basic theory of the IGM DM in the context of FRBs and define DM estimators which will be used in this work. §.§ Dispersion Measure The multifrequency FRB pulses disperse while traveling through an ionized medium due to their interaction with the free electrons along the way. The time delay in the arrival of the signal at frequency ν is Δ t ∝ν^-2 DM, where DM is the line integral of the free electron density. The total observed time-delay/DM will have contributions from the host galaxy (DM_host), the Milky Way galaxy including the halo (DM_MW), and the IGM (DM_ IGM). For DM_MW we have reasonably good Galactic maps, and this can be largely removed from the data. Further, DM_host is reduced by a factor of (1+z)^-1 in the observer frame and so is suppressed when considering high-z events, whereas DM_ IGM increases with z. Hence, in this work we only focus on studying DM_ IGM and ignore any further discussion of the other DM components, unless stated otherwise. The DM_ IGM for an FRB, located at an angular position θ and redshift z, is <cit.> DM_ IGM(θ, z) = c ∫_0^z dz^'n_e(θ, z^')/(1+z^')^2 H(z^') , where c is the speed of light in vacuum and n_e(θ, z) is the proper number density of free electrons. H(z)=H_0√(Ω_m 0 (1+z)^3 + Ω_Λ 0) denotes the Hubble parameter with H_0, Ω_m 0 and Ω_Λ 0 respectively being the present day values of the Hubble constant, matter density parameter and dark energy parameter. Baryons being the primary source of free electrons during and after EoR, we can write n_e in terms of the baryon density parameter Ω_ b0 and recast eq. (<ref>) as DM_ IGM(θ, z) = 13/16 3 c H_0 Ω_ b0/8 π G m_ H× ∫_0^z dz^'(1+z^') Δ(θ, z^') x_ HII(θ, z^')/√(Ω_m0(1+z^')^3 + Ω_Λ0) , where G is the gravitational constant, m_ H is the mass of the hydrogen atom, Δ is the matter overdensity and x_ HII denotes the ionization fraction of the IGM. Here, Δ includes the effects of evolution of the underlying matter density in the IGM whereas x_ HII is controlled by the photon field responsible for ionizing the IGM. On large scales, x_ HII=0 before the reionization starts and it eventually approaches unity towards the end of EoR when the IGM is almost completely ionized. We obtain both Δ and x_ HII from our simulations, which we discuss in a later section. The treatment of eq. (<ref>) assumes that hydrogen and helium constitute almost the entire baryonic content of the universe, with helium being 25% of it by mass. Our model of the IGM also assumes that the ionization of HeI to HeII occurs concurrently with HI reionization. The LoS path which the FRB signal traverses during post-reionization is effectively very large (∼ 6000   Mpc), adding a larger contribution to the total DM_ IGM. The post-EoR contribution acts as a nuisance since we are only interested in the impact of reionization on the DM measurements. Hence, we restrict the lower limit of the integral (eq. <ref>) to the end of reionization z_ end and define DM_ EoR: DM_ EoR(θ, z) = 13/16 3 c H_0 Ω_ b0/8 π G m_ H× ∫_z_ end^z dz^'(1+z^') Δ(θ, z^') x_ HII(θ, z^')/√(Ω_m0(1+z^')^3 + Ω_Λ0) , where z ≥ z_ end.
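A minimal numerical sketch of this definition is given below: it integrates the ionized-baryon column along a single line of sight through a light-cone, assuming a flat ΛCDM background with the parameter values quoted above and a simple trapezoidal rule. The homogeneous, fully ionized toy input in the usage example is only there to exercise the function; in practice Δ and x_ HII come from the light-cone simulations described later.

```python
import numpy as np

# Flat LambdaCDM parameters quoted in the text
h, Om, OL, Ob = 0.7, 0.27, 0.73, 0.044
H0 = 100.0 * h * 1.0e3 / 3.0857e22           # s^-1  (km/s/Mpc -> 1/s)
c = 2.998e8                                   # m/s
G = 6.674e-11                                 # m^3 kg^-1 s^-2
m_H = 1.6735e-27                              # kg

def dm_eor(z_grid, Delta_los, xHII_los, z_end=6.0):
    """Trapezoidal LoS integration of DM_EoR(z) for one sightline.

    z_grid    : increasing redshift samples of the light-cone cells
    Delta_los : matter overdensity in those cells
    xHII_los  : ionization fraction in those cells
    Returns DM_EoR(z) in pc cm^-3 at each z_grid point (zero below z_end).
    """
    prefac = (13.0 / 16.0) * 3.0 * c * H0 * Ob / (8.0 * np.pi * G * m_H)  # m^-2
    Ez = np.sqrt(Om * (1.0 + z_grid)**3 + OL)
    integrand = (1.0 + z_grid) * Delta_los * xHII_los / Ez
    integrand = np.where(z_grid >= z_end, integrand, 0.0)
    dm_si = prefac * np.concatenate([[0.0], np.cumsum(
        0.5 * (integrand[1:] + integrand[:-1]) * np.diff(z_grid))])
    return dm_si / 3.0857e16 / 1.0e6          # m^-2 -> pc cm^-3

# Toy usage: a fully ionized, homogeneous IGM between z = 6 and z = 10
z = np.linspace(6.0, 10.0, 400)
print(dm_eor(z, np.ones_like(z), np.ones_like(z))[-1])
```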
We estimate the sky-averaged mean (z) = ⟨(θ, z) ⟩_θ and the sample variance (z) = ⟨(θ,z) ⟩_θ^2 - ^2(z) numerically for a given z during EoR using simulations. §.§ Structure Function We aim to outline how FRBs can be used as a probe for the characteristic size of the ionization () bubbles in the IGM. One can certainly expect the DM measurements of the two nearby FRBs to be correlated <cit.> as they traces the underlying structures. To this end, we define the DM structure function, for a given LoS at θ and redshift z, as Ξ(δθ, z) = ⟨ [(θ+ δθ, z) - (θ, z)]^2 ⟩_θ, δθ , where δθ is a small angular separation for all nearby LoS and ⟨⋯⟩_θ, δθ denotes double average – first, over different rotations by an amount δθ around every θ, and next, average over different LoS directions θ. This definition utilizes the assumption that the sky is statistically homogeneous and isotropic at any particular z, which leaves Ξ a function of the magnitude δθ and z only. The dependence of Ξ on , which itself is an integrated quantity, makes it unsuitable to directly conclude anything about the ionized bubble sizes and their growth. Hence, we use the derivatives of Ξ which probe local IGM properties. ∂Ξ/∂δθ encompasses the information about how fast the structures decorrelates on the sky plane. However it still has integrated effects along the LoS, and we therefore compute the second order derivative . This has both the instantaneous and local information about the structures and their scale information. We will demonstrate how the landscape of corresponds to the different reionization histories and morphologies in the (δθ, z) plane. Later, we marginalize over δθ and z one at a time. Marginalization makes it easier to understand the behaviour of as a function of either z or δθ irrespective of the other variable and requires fewer observed bursts to be determined observationally. We finally compute the average of the derivatives and their marginalized values over various LoS θ. This provides us with the information of the mean sizes of the ionized regions in the IGM. § TOY MODEL SIMULATIONS We demonstrate the impact of the ionized bubble sizes and the rate of reionization on the estimators mentioned in <ref>, allowing us to gain intuition and test the general validity of the approach. We use an approximate and simplistic toy model of reionization to simulate the ionization () field light-cones (LCs). The toy model assumes all the ionized bubbles have the same radii and that everything inside the bubbles is completely ionized and anything outside is completely neutral. We divide the whole LC boxes into reasonably thin slices along the LoS axis and fill them with a number of spherical bubbles matching the average input ionization fraction for every z slice. We place the bubble centers in the slices uniform randomly and allow overlap between them. A LC thus created will be a binary ionization field where Δ(θ, z)=1 and (θ, z) is either 0 or 1. This field has basic differences from the LC obtained from the real simulations (see <ref>) where additional fluctuations in the free electron density arise due to perturbations in the underlying matter density field. Furthermore, as opposed to the toy model, the realistic reionization model has inherent temporal growth in the bubble sizes apart from their percolation. We simulate the toy model LCs (see Figure <ref>) within a comoving volume (500 × 500 × 1500) [h^-1  Mpc]^3 that is divided into (600 × 600 × 1800) cubic voxels, accordingly. 
This particular choice of LC volume is to roughly match with our reionization simulations as described below (<ref>). We use the asymmetric tanh(z) form of reionization history <cit.> for our toy models. This form of the history closely mimics the histories found in simulations. We fix the origin of the toy model LCs at z=6 assuming reionization ends by then. The other end of the toy model LC boxes extends up to z≈ 15. §.§ Dependence on Reionization History We study the effect of different reionization histories on the and the other derived estimators, as defined above. We generate three toy models with `Faster', `Fiducial' and `Slower' reionization histories as shown in Figure <ref> with their corresponding DMs. We mimic the ionization histories by varying the number of bubbles in each slice of the LCs with bubble radius being fixed at 10 grid units (≈ 12  Mpc). We choose the reionization mid point at z_ mid = 7.5, 8.5, 9.0 and the corresponding reionization window to be δ z = 1.0,  2.0,  2.5 (i.e. the reionization to end at the same time for these three toy models) corresponding to the `Faster', `Fiducial' and `Slower' reionization histories, respectively. We fix the asymmetry parameter α = 3. As shown in Figure <ref>, there is a slight offset between , the mean estimated over all the 600^2 grids on the transverse plane of the box and the average DM estimated using the x̅_ into eq. <ref> and ignoring the fluctuations along different LoS. As DM is a cumulative estimator, its mean value rises rapidly with z where most of the reionization is happening. At higher z it saturates, as there are no more free electrons to contribute to it. The asymptotic difference between the two mean DM estimators (solid and dashed lines) increases monotonically from faster to slower reionization at any z. This happens because the fluctuating structures exists for a larger LoS distance in the case of slower reionization history. We also show the corresponding 1σ sample variance around . Fluctuations due to the binary ionization field are not able to cause any significant (>1σ) deviations between (z) and DM(x̅_(z)) for all three histories considered here. The deviation should be enhanced for more realistic reionization LC where both and Δ contribute to the LoS fluctuations in . Also, contribution to σ_ DM accumulated from z<6 will increase the spread in DM_ EoR. Figure <ref> shows the ratio σ_ DM/ that qualitatively follows for z ≳ 7. It first increases rapidly and then saturates towards large z as the ionized regions disappear. However, we find that σ_ DM/ increases sharply towards the end of reionization (z<7). This is because has a value around zero at z≈ 6 since we ignore the contribution from the low-redshift IGM. Starting at z≈ 7, increases more rapidly while moving towards higher z than the fluctuations do and finally the ratio saturates. The saturation redshift varies depending on the reionization history. It is at low redshift for the faster reionization and vice-versa. σ_ DM/ is larger for the faster reionization history and vice-versa, which indicates a direct mapping between the relative (to the mean) fluctuations in the DM_ EoR with the emergence and sizes of the structures in the IGM. Figure <ref> also shows the derivative d/dz which directly traces the local electron density. After Gaussian smoothing, the derivatives are roughly the same towards the end of reionization (z < 6.5) where the IGM has roughly indistinguishable properties. 
However, the derivatives beyond z_ mid apparently encodes the information of the reionization history in its slope when plotted against z in a log-log plane. The slower history has a shallower slope and it increases gradually towards fiducial and faster histories. This is simply because there are more ionized bubbles for the slower history causing a larger change in at higher z. Figure <ref> shows contours of the derivatives of the structure function on the δθ-z plane. We consider only 400 LoS at every equidistant comoving slice to compute instead of all available 360,000 LoS per comoving slice. Our choice of 400[Our mock simulations have access to 400 FRBs/slice even at larger redshifts. However, in reality, the number of FRBs might drop significantly down at larger z slices (see Figure <ref>).] FRBs per comoving slice is closer to real observations and computationally tractable. On the other hand, we believe that it is a good number for statistical convergence of . We finally bin-average our estimates of all slices within equispaced z bins of width 0.5 for further use. We consider as a 2D surface and plot the contours which include top 10%, 30%, 50% and 67% of the total area under the surface. For each reionization history we depict z_ mid and the angular size corresponding to the radius of the individual bubble in our toy model. The peak (10% contour) gradually shifts to a smaller z while moving from the slower to the faster history, approximately tracking the change in z_ mid, and the contours gets more squeezed along the z axis. This clearly indicates that the peakedness of the landscape along z is directly connected to the reionization window. There is no considerable change in the extent of the contours along the δθ axis, which is expected since we are using bubbles of fixed radii here. We next marginalize over δθ. The marginalized result, ∫ ()  dδθ, is shown in Figure <ref> as a function of z. ∫ ()  dδθ peaks roughly around z_ mid. The decrease towards the end of EoR is sharper and roughly independent of reionization history. However, the drop towards higher z is related to the reionization history. The drop is slower for the slower reionization history and vice-versa. That could be simply related to the rate of emergence of the ionized bubbles in the IGM, and once the reionization is roughly around its midway, the percolation of bubbles makes it insensitive to the history. §.§ Dependence on Bubble Sizes We assess the impact of ionized bubble size on our estimators by fixing the reionization histories to the fiducial case and varying the bubble radius R. We consider R=5, 10 and 20 grid units which corresponds to 6, 12 and 24  Mpc, respectively. We have repeated the same analysis as in the <ref>. The three simulations perfectly agree with their common input reionization history, leading to the similar values. However, variations in bubble size influence the fluctuations in the DM_ EoR estimates, and consequently σ_ DM. Figure <ref> shows the contours of . Similar to Figure <ref>, we have used 400 LoS per comoving slices and bin them within a redshift window Δ z=0.5. The contours are nearly unchanged along z-axis for the different R values. The peak of the surface (depicted by 10% contour) shifts towards larger δθ values with increasing R, approximately tracking the change in the bubble size. The derivatives decrease for δθ greater than the angular bubble size θ_R, as the correlation between the structures decays out. 
Whereas for scales less than θ_R (see the right panel), the derivatives decrease as the points are tightly correlated and Ξ itself is consistently small. We can understand this with a simple argument. We distribute the bubbles uniformly in each z slice, following a Poisson distribution while preventing any overlap. Considering a redshift slice, the mean DM would approximately be DM_1× N_ col, where DM_1 is the contribution from a single bubble and N_ col is the average number of bubbles appearing along a LoS. Hence, the corresponding spread in the DM along different LoS would be σ_ DM≈ DM_1 × N_ col^1/2. Since we keep the ionization fraction of slices (and hence mean DM) constant while changing the bubble radius, the total number of the bubbles within the slice varies as N ∝ R^-3. Considering a cubical slice N_ col∝ N^1/3∝ R^-1. Since DM_1∝ R, we see that σ_ DM∝ R^1/2. This scaling is consistent with the results plotted in the left panel of Figure <ref>. We plot σ_ DM/ and its z derivative as a function of z in Figure <ref>. The trend is qualitatively similar to that shown in Figure <ref>. We find a sharp turnover and rapid rise in σ_ DM/ by the end of reionization (z<7) due to near zero value of . However for z>7, σ_ DM/ increases almost linearly towards higher z and gradually starts saturating beyond z ≈ 10. As the saturation point strongly depends on the reionization history, hence the saturation knee is almost at the same redshift for all the three R values. However, the variation in σ_ DM/ is solely due to variation in σ_ DM with R. Overall, σ_ DM increases with increasing R as shown in the Figure. The reason is clear as filling the IGM with the smaller bubbles would make the distribution of the ionized regions roughly uniform and homogeneous, and therefore the fluctuations between the different LoS would be less and vice-versa. The derivative d(σ_ DM/)/dz is a more local quantity as shown in Figure <ref>. The derivatives drop very sharply approaching -∞ for z<7 as there is a rapid increase in σ_ DM/ with decreasing z. However, we see a power-law decrement in the derivative with increasing z for z ≥ 8. The slope of this power-law drop is roughly the same for the three R values, although the magnitude scales with R in a similar way as for σ_ DM/. We again marginalize here, but this time along the z-axis, to obtain ∫ ()  dz as shown in Figure <ref> for the three R values. We observe a similar qualitative trend as seen in the peaks of the contour plots. The peak of ∫ ()  dz shifts to a larger δθ for large bubble sizes. We also note that it decreases for δθ larger than θ_R. ⟨θ_R⟩ is computed by marginalizing the θ_R(z) corresponding to the respective characteristic bubble size R in the z range of interest. § EOR SIMULATION USING GRIZZLY We now use the simulated EoR ionization field LCs to estimate the DM and other related quantities to examine how the different reionization models affect them. We simulated three EoR LCs corresponding to the three different reionization histories x̅_ as shown in Figure <ref>. In the following sections, we briefly describe the method used to simulate the EoR scenarios followed by all the corresponding estimates. §.§ EoR Simulations The EoR LCs used here are constructed by stitching thin slices from the coeval cubes simulated at several different redshifts in chronological order. We simulate coeval boxes using a CD-EoR code, grizzly <cit.>, which is based on a 1D radiative transfer technique. 
This algorithm takes the dark-matter density field and the corresponding halo catalogue from a N-body simulation to produce a map at a redshift for a particular astrophysical source model. We use the dark-matter density fields and the associated halo catalogues obtained from the PRACE[Partnership for Advanced Computing in Europe: <http://www.prace-ri.eu/>] project, PRACE4LOFAR. The input dark-matter distributions are generated using dark-matter only N-body simulation code cubep^3m [<http://wiki.cita.utoronto.ca/mediawiki/index.php/CubePM>] <cit.>. The 3D density cubes have comoving volume [500 h^-1  Mpc]^3 <cit.> which are gridded into [600]^3 voxels. The dark-matter particle distributions have been used to find the collapsed halos using spherical overdensity halo finder <cit.>. The minimum halo mass in the PRACE4LOFAR simulation is ∼ 10^9, which corresponds to ≈ 25 dark-matter particles. We simulated 63 coeval dark-matter cubes between a redshift range 6.1 ≲ z ≲ 15.6 with an equal time gap of 11.4  Myr. We consider an EoR source model where the dark-matter halos with masses larger than 10^9 host UV photon-emitting galaxies. We assume that the stellar mass of a galaxy (M_⋆) is related to the host dark matter halo mass M_ halo as M_⋆∝ M^α_s_ halo. We tune the ionization efficiency (ζ) so that the reionization ends at z∼ 6. Note that all reionization models considered in this study are inside-out in nature. Our fiducial grizzly model corresponds to a choice of α_s=1 and spans from redshift 6 to 15.6. We consider a rapidly evolving Faster and a slowly evolving Slower reionization scenario which correspond to α_s=2 and α_s=0.1 respectively. Next, we produce coeval cubes of at 63 redshifts between 6.1 and 15.6 for the aforementioned source model. We refer the readers to <cit.> and <cit.> for more details about these calculations. Finally, we used these coeval cubes of to create the LC which accounts for the evolution of with redshift. The detailed method to implement the LC effect can be found in <cit.>. The reionization histories for the three different EoR scenarios are shown in Figure <ref> while we present the corresponding LCs in Figure <ref>. The LCs clearly show the difference in the patchiness of the reionization scenarios. §.§ grizzly Simulation Results We present the results for the LCs (Figure <ref>) simulated using grizzly. We exclude the contributions coming from post-reionization IGM and the host galaxy, assuming these would be perfectly measured and subtracted in the future as we expect to detect a larger population of FRBs at lower redshifts (Figure <ref>). This optimistic assumption primarily allows us to investigate the impact of  bubble sizes and reionization histories on our estimators by evading low-z contributions. In Figure <ref>, we show the estimated from the LC simulations for `Slower', `Fiducial' and `Faster' reionization histories. The solid lines show the mean estimated using all the LoS (here 360 000 grid points) in the simulations and the shaded regions around them are the respective 1σ deviations due to cosmic variance. We consider small z-bins (Δ z = 0.5) while computing the mean and the sample variance. We overplot the estimates calculated using the average ionization fraction (Figure <ref>) for the three reionization scenarios. (z) begins with a near zero value (with low-z IGM contributions subtracted) around the end of reionization (z ≈ 6) and increases almost linearly with z until when ionization is sufficiently small (x̅_≈ 0.1), where it starts to plateau. 
This is qualitatively consistent across both the realistic and toy model reionization histories (see Figure <ref>). The rate of linear rise at lower z and the saturation value are highest for the slower reionization history, and these decrease monotonically for faster histories. The `knee' in the ⟨DM⟩(z) curves occurs at a larger redshift for slower reionization histories. The saturation values of DM differ by more than 1σ across all three histories. ⟨DM⟩(z) and DM(x̅_ HII(z)) are significantly distinct for all three reionization histories due to IGM electron density fluctuations driven by the formation and growth of ionized bubbles, as well as the underlying density perturbations. The bias due to the fluctuations is largest at larger z (z ≳ 12) where the DM saturates, and the differences between the two estimates decrease rapidly towards the end of reionization. This is because the merging and percolation of the ionized bubbles typically wash out the fluctuations towards the mid and advanced stages of reionization. The difference is smallest (≲ 1σ) for the Faster reionization scenario, where large ionization bubbles appear suddenly at lower redshifts (see bottom panel of Figure <ref>) and quickly percolate and fill up the entire IGM. In this case, the IGM is more patchy and hence small-scale fluctuations do not prevail. Relatively small ionized bubbles form for slower histories and take a longer time to fill the entire IGM. This leads to relatively more small-scale fluctuations (see Figure <ref>), thereby resulting in a larger difference (≳ 2σ) between ⟨DM⟩(z) and DM(x̅_ HII(z)) (solid and dashed lines) for the slower reionization history. As the average DM is an integrated quantity, we compute its derivative d⟨DM⟩/dz to extract instantaneous information at a given stage of reionization. We show the derivatives in Figure <ref> for the three histories. d⟨DM⟩/dz starts with a high value which is similar for the three histories. This simply indicates that the electron distribution in the IGM is roughly the same for all three scenarios nearing the end of reionization. The large values of the derivative at z ≲ 8 reflect the fact that x̅_ HII changes rapidly during the mid and end stages of the reionization. However, the difference in how fast x̅_ HII(z) evolves causes the derivatives to distinctly vary for the three histories considered here. The derivative for the faster scenario has small values but is the steepest among the three. The value of the derivatives increases for the fiducial and further to slower reionization histories, while the slopes of the curves decrease. Figure <ref> depicts σ_ DM/⟨DM⟩ as a function of z. This shows how the sample variance error in the DM measurement is related to the mean DM. The qualitative behavior of the ratio is similar for the three reionization histories. The `knee' of saturation in σ_ DM/⟨DM⟩ depends on z_ mid, as seen in the toy models (top panel of Figure <ref>). However, contrary to the toy models (Figures <ref> and <ref>), the ratio σ_ DM/⟨DM⟩ decreases with z. The 1σ cosmic variance is roughly similar for all three histories, hence the ratio scales inversely with ⟨DM⟩. This could be because, in the realistic simulations, the electron distribution in the IGM has an additional contribution coming from the underlying matter density field which is absent in our toy models. This needs further detailed investigation with a more sophisticated toy model and we defer it to future work.
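The next set of results relies on the derivative of the angular structure function. As the defining equation of the angular dispersion Ξ (eq. <ref>) is not repeated in this section, the sketch below assumes a standard second-moment form, namely the mean squared DM difference between pairs of sight lines separated by δθ, and should be read as an illustration of the procedure (400 LoS per slice, binning in δθ, numerical δθ-derivative) rather than as the exact estimator.

```python
import numpy as np

# Sketch of the angular structure-function estimator behind the contour plots.
# The exact definition of Xi(dtheta, z) is eq. (ref); this sketch assumes the
# standard second-moment form Xi(dtheta) = <[DM(n1) - DM(n2)]^2> over pairs of
# sight lines separated by dtheta, purely to make the procedure concrete. As in
# the text, 400 LoS are drawn per comoving slice and the estimates are later
# binned in z (Delta z = 0.5).

def structure_function(dm_slice, positions, theta_bin_edges, n_los=400, rng=None):
    """Binned Xi(dtheta) for one redshift slice from a random subset of LoS."""
    if rng is None:
        rng = np.random.default_rng(0)
    idx = rng.choice(len(dm_slice), size=n_los, replace=False)
    dm, pos = dm_slice[idx], positions[idx]
    dtheta = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    ddm2 = (dm[:, None] - dm[None, :]) ** 2
    iu = np.triu_indices(n_los, k=1)              # each pair counted once
    which = np.digitize(dtheta[iu], theta_bin_edges) - 1
    n_bins = len(theta_bin_edges) - 1
    xi = np.full(n_bins, np.nan)
    for b in range(n_bins):
        sel = which == b
        if sel.any():
            xi[b] = ddm2[iu][sel].mean()
    return xi

def xi_derivative(xi, theta_centres):
    """Numerical derivative of Xi with respect to dtheta (the contoured quantity)."""
    return np.gradient(xi, theta_centres)
```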
Figure <ref> shows contours of the derivatives of the structure function on the δθ-z plane, similar to Figures <ref> and <ref>. We estimate from 400 LoS per comoving slice and bin them within bins of Δ z = 0.5. The behaviour is qualitatively similar to the toy models. The contours are tightly squeezed towards lower z (10% contour at z≲ 7.2) for the faster reionization history and they gradually expand for the fiducial and slower models (10% contour at z≲ 8.1). The effects of the different IGM topologies are also clearly evident from the δθ extent of the contours. The faster history has larger bubbles (hence patchy IGM) resulting in extended angular correlations. In contrast, the angular correlations are smaller (depicted by the squeezed contours along δθ) for slower histories where the bubbles are smaller in sizes. This trend closely resembles what is observed in Figure <ref>. §.§ Impact of Non-uniform Distribution of FRBs and Post-EoR All the analyses presented above are independent of the distribution of FRBs in a LC box. There are various factors that make the distribution of FRBs non-uniform in the sky-plane and across redshift. We repeat the analysis for the grizzly simulated LCs considering a realistic redshift distribution of the sky-averaged number of FRBs. Following the estimates in <cit.>, we used the cumulative observable redshift distribution as shown in Figure <ref>. To make our estimates more realistic, we also consider the FRBs to be correlated with the overdensities in our matter density LC. This is well motivated because the overdense regions are the places which can host radiation sources and therefore also FRBs. Ideally, one should correlate the FRBs with the halo field which is precisely the location of sources. However, here we lack the exact halo locations and instead use the matter overdensity peaks as a proxy. We denote by N_ FRB^ tot the total number of FRBs observed in the range 6≤ z≤ 15. We divide the whole EoR redshift range (z=6 to 15) into bins of Δ z = 0.5 and estimate the number of FRBs per bin according to the distribution (Figure <ref>) for a given N_ FRB^ tot. For each bin, we randomly pick up the grid points which are biased by the high-dense regions. We finally use those grid points to estimate the average DM and corresponding dispersion for all the redshift bins. Our main findings in Figure <ref> depict the minimum number of total FRBs required to distinguish between the three different reionization histories. In every panel, the solid lines denote the ensemble mean of the sky-averaged DM varies as a function of N_ FRB^ tot. We compute the DM from a random sample of N_ FRB^ tot distributed across the EoR redshift range. Later, we consider 1000 such independent realizations of the distribution of N_ FRB^ tot to compute the ensemble mean ⟨ DM⟩ and the 3σ cosmic variance as shown by the shaded regions around the lines. Figure <ref> shows ⟨ DM(z_ mid)⟩ corresponding to z_ mid for the individual histories. This inherently assumes that we have redshift information about the FRBs (within an uncertainty of Δ z = 0.5). However, if we drop this assumption and just assume that the FRB observed is somewhere within the duration of EoR, we can compute the marginalized sky-averaged DM defined as DM_ margin = ∫_z_ min^z_ max DM_ EoR (z)  dP/dz  dz/∫_z_ min^z_ max dP/dz dz , where dP/dz is the probability of detecting an FRB per unit redshift (as per Figure <ref>), z_ min=6.0 and z_ max=14.5 are respectively the lower and upper bounds of the EoR redshift window. 
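A direct numerical transcription of this definition is straightforward; the short sketch below evaluates the two integrals on a redshift grid with the trapezoidal rule. The array names are placeholders for the sky-averaged DM_EoR(z) and the detection probability dP/dz of Figure <ref>.

```python
import numpy as np

# Numerical transcription of the DM_margin definition above. z_grid samples the
# EoR window, dm_eor holds the sky-averaged DM_EoR at those redshifts, and dpdz
# is the FRB detection probability per unit redshift (Figure ref).

def dm_marginalized(z_grid, dm_eor, dpdz, z_min=6.0, z_max=14.5):
    sel = (z_grid >= z_min) & (z_grid <= z_max)
    z, dm, w = z_grid[sel], dm_eor[sel], dpdz[sel]
    return np.trapz(dm * w, z) / np.trapz(w, z)
```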
This too is shown in Figure <ref>. We introduce the contribution from post-EoR (0 ≤ z ≤ 6) and consider three different scenarios – (i) Optimistic, where the contribution from the post-EoR is perfectly modelled and completely removed from the data, (ii) Moderate, where the post-EoR contribution is imperfectly removed from the individual DM(z) such that the residual low-z contribution remains within 0.5 σ_ DM at z=6, and (iii) Pessimistic, where the post-EoR contribution is completely present for every FRB measured. Note that these three scenarios are differentiated based on the how accurately we know the DM(θ, z<6) for every observed LoS θ above z=6. The assumption that we either know the redshifts of every FRB or at least can separate EoR FRBs from the post-EoR FRBs confidently is implicit within all the three scenarios and is itself non-trivial. Apart from the fact that the boundary between EoR and post-EoR is still fuzzy, the erroneous/indeterminate FRB redshifts can cause a spill over the boundary. This may bias our estimators and we plan to investigate this in greater detail in future studies. We simulate a matter density LC within the redshift range 0 ≤ z ≤ 6 (comoving distance 8542  Mpc), and assume that the electrons linearly follow the underlying density contrast. We use the publicly available gadget4[<https://wwwmpa.mpa-garching.mpg.de/gadget4/>] <cit.> to simulate the dark matter density fields within comoving cubes of volume [500 h^-1  Mpc]^3 at the lower redshifts. We choose the same cosmological parameters as mentioned in the end of <ref>. We also set the resolution (for Particle-Mesh) of the box to match those in the grizzly simulations. We simulated the particle distribution at 66 redshifts within the range 0 ≤ z ≤ 7 roughly equidistant by 200  Myr. We generated the density coeval boxes on 600^3 grids and then used them to make the density LC. We use weighted linear interpolation scheme to interpolate the density fields at the desired redshift slices from the redshifts at which the coeval boxes are generated. We cannot simulate a large box spanning the whole range up to z≤ 6. Hence, we need to repeat the boxes along the LoS (z-axis) while creating the final LC. The contribution from low-redshifts increases both and σ_ DM(z), making the separation between the two different histories more challenging. In Figure <ref>, ⟨ DM(z_ mid)⟩ for the three different histories starts at different N_ FRB^ tot. This is because, for the slower history, z_ mid is larger than that of the other histories, requiring a higher N_ FRB^ tot to populate the corresponding z-bin with at least two FRBs. ⟨ DM(z_ mid)⟩ converges very quickly for N_ FRB^ tot≈ 10; however the corresponding dispersion is large and it decreases with increasing N^ tot_ FRB, as expected. N_ FRB^ tot≥ 20 is sufficient to distinguish ⟨ DM(z_ mid)⟩ at 3σ for the optimistic scenario. This lower bound remains the same to distinguish the slower reionization history with the faster one for the moderate scenario. However, it takes nearly N_ FRB^ tot = 150 and 40 to discern ⟨ DM(z_ mid)⟩ of slower-fiducial and fiducial-faster pairs of histories, respectively. Considering the pessimistic case, it takes N_ FRB^ tot = 80, 150, 600 to distinguish between faster-slower, slower-fiducial and fiducial-faster pairs of histories, respectively. Figure <ref> also shows the variation of the ⟨ DM_ margin⟩ with N_ FRB^ tot. 
Considering the optimistic case, we need roughly 40 FRBs identified during the EoR to discern slower and faster reionization histories at 3σ with ⟨ DM_ margin⟩. Whereas the N_ FRB^ tot slightly increases to 60 and 90 for faster-fiducial and fiducial-slower pairs of histories, respectively. These numbers respectively increase to 90, 200 and 300 for the moderate case as shown in the middle panel. Finally for pessimistic scenario, we need N_ FRB^ tot≈ 220, 600, 1000 to discern (at 3σ) slower-faster, faster-fiducial and fiducial-slower pairs of reionization histories, respectively. The numbers we found are realistic, and it would be possible to detect many FRBs during EoR using the upcoming SKA-mid. § DISCUSSION Understanding the epoch of reionization (EoR) is a crucial step in learning about one of the most important eras in the cosmic history, when it transitioned from being devoid of any stars, consisting of cold and neutral gas, to hot, ionized, and teeming with the objects we see today. The first sources are supposed to drive the whole process of reionization, therefore studying the IGM during EoR can be connected with the properties and emergence of the first structures. There are many direct and indirect contemporary probes such as the redshifted 21-cm signal from  in the IGM during EoR and, Thomson scattering optical depth of the CMB photons, high-z quasar spectra, Ly-α systems at high-z. However these probes are limited by their own challenges to date. In this work, we propose to use the dispersion measure (DM) of the fast radio bursts (FRBs) from the high-z to probe the IGM during EoR. The dispersion introduced in the FRB pulse, while it travels through the ionized medium, can be used in probing the electron distribution along the line-of-sight (LoS) during EoR. We demonstrated the use of the sky-averaged DM and its derivatives to discern between the different reionization histories. Beyond this, we primarily aim to make use of the sky-averaged and the angular dispersion in the DM estimates to extract information on the ionization bubbles during the EoR. Using a toy model (Figure <ref>) of the binary ionization field (within the range 6 ≤ z ≤ 15), we see that the (z) first increases (starting from the end of EoR) roughly up to the mid-point of the reionization, z_ mid, and then tend to saturate as the ionized regions decreases towards the initial stages of the reionization (Figure <ref>). The derivative d/dz directly traces how fast the reionization progresses (Figure <ref>). The all-sky variance σ_ DM/ is larger for the reionization scenario where the bubble sizes are larger and vice-versa (Figure <ref>). We also compute where Ξ is the angular dispersion (eq. <ref>) at any redshift. We demonstrated that the contours are elongated or squeezed along z and δθ depending respectively on the varying reionization history (Figure <ref>) and bubble sizes (Figure <ref>). The impacts are clearly prominent for the marginalized structure function derivatives (Figures <ref> and <ref>). We also analyzed a more realistic reionization light-cone (LC) (Figure <ref>) generated using grizzly 1D radiative transfer code for three different reionization histories ending at the same z. The behaviour of is the same as in the toy model. We find that the different LoS variance plays a significant role for realistic IGM (Figure <ref>) and biases the average (z) by more than 1σ as compared to the case when the DM is computed using the averaged ionization fraction x̅_. 
The slope of d⟨DM⟩/dz is directly sensitive to the reionization history (Figure <ref>). σ_ DM/⟨DM⟩ also depends on the reionization history and, indirectly, on the IGM morphology (bubble sizes) in an intermingled way. Figure <ref> clearly shows that the structure-function derivative is sensitive to the reionization window as well as the typical bubble sizes. The contours are squeezed along the z axis and elongated along the δθ axis for the faster reionization history, which has relatively larger ionized bubbles, and vice-versa. Our initial analyses are rather optimistic, in that we have considered the FRBs to be located uniformly at every grid point in our reionization LC with their redshifts known within an uncertainty of Δ z =0.5. We also assumed that the contribution from the low-redshift IGM (z<6) has been perfectly removed from the DM measurements. We next consider an observationally more realistic situation where the FRB abundance varies with redshift (Figure <ref>) and the FRBs are more clustered at the highly dense regions on the sky plane. This biases ⟨DM⟩ relative to our initial results and also introduces more LoS dispersion, particularly at high redshifts where the FRB abundance drops rapidly. Taking realistic estimates of the FRB rate evolution with z, one requires ≲ 100 FRBs to be distributed across the whole EoR window in order to discern the reionization histories at 3σ (see the left column of Figure <ref>) using the mean DM only. This assumes we have removed the contribution from lower redshifts (z<6) from the DM measurements, which is the `optimistic' case. Considering a `pessimistic' case where the low redshift contribution is present, we find that the numbers could shoot as high as 200-600 if we focus on the mid-reionization redshift bin (within an uncertainty of Δ z = 0.5). Using the marginalized DM to discern between the reionization histories at 3σ might require roughly ∼ 1000 FRBs in total during EoR (see right column of Figure <ref>). The numbers presented above correspond to a particular choice of telescope sensitivity and FRB population models and are expected to vary if we change them; however, we expect them to stay within the same order of magnitude. We plan to include these effects gradually in future work along these lines. We have successfully demonstrated the potential of the derivative of the structure function and of σ_ DM as probes of the ionization bubble sizes along with the reionization history. The structure-function derivative, being a derivative, suffers from large variance, and we need a large number of FRBs (N_ FRB^ tot≲ 100 000) within the range 6 ≤ z ≤ 15 to suppress the variance significantly. The computation of the structure-function derivatives and their marginalization here considers a uniform distribution of the FRBs on the regular comoving grid. This is only done for convenience, and the derivatives of the structure function are well defined even when the sampling is very uneven in z and/or δθ. In reality, FRBs should be associated with the galaxies which are generally clustered around the high-density peaks that get ionized first. Therefore, it is highly probable to find more FRBs within the ionized regions, and that might help to compute the structure function in the vicinity of the ionized bubbles. We also plan to investigate this estimator more deeply, as it will be very useful in probing the ionized regions around the sources. § ACKNOWLEDGEMENTS AKS, RG and SZ acknowledge support by the Israel Science Foundation (grant no. 255/18). AKS is also supported by National Science Foundation (grant no. 2206602).
RG also acknowledges support from SERB, DST Ramanujan Fellowship no. RJF/2022/000141. PB is supported by a grant (no. 2020747) from the United States-Israel Binational Science Foundation (BSF), Jerusalem, Israel, by a grant (no. 1649/23) from the Israel Science Foundation and by a grant (no. 80NSSC 24K0770) from the NASA astrophysics theory program. PK’s work is funded in part by an NSF grant AST-2009619 and a NASA grant 80NSSC24K0770.
http://arxiv.org/abs/2409.02565v1
20240904093158
Efficient Extraction of Noise-Robust Discrete Units from Self-Supervised Speech Models
[ "Jakob Poncelet", "Yujun Wang", "Hugo Van hamme" ]
eess.AS
[ "eess.AS", "cs.SD" ]
Efficient Extraction of Noise-Robust Discrete Units from Self-Supervised Speech Models Jakob Poncelet, Yujun Wang, Hugo Van hamme ====================================================================================== § ABSTRACT Continuous speech can be converted into a discrete sequence by deriving discrete units from the hidden features of self-supervised learned (SSL) speech models. Although SSL models are becoming larger and trained on more data, they are often sensitive to real-life distortions like additive noise or reverberation, which translates to a shift in discrete units. We propose a parameter-efficient approach to generate noise-robust discrete units from pre-trained SSL models by training a small encoder-decoder model, with or without adapters, to simultaneously denoise and discretise the hidden features of the SSL model. The model learns to generate a clean discrete sequence for a noisy utterance, conditioned on the SSL features. The proposed denoiser outperforms several pre-training methods on the tasks of noisy discretisation and noisy speech recognition, and can be finetuned to the target environment with a few recordings of unlabeled target data. Discrete units, noise robustness, self-supervised learning, speech recognition § INTRODUCTION Self-supervised learning (SSL) has enabled the development of versatile speech models which have advanced the state-of-the-art on a wide array of speech processing tasks <cit.>. Pre-training speech models on large amounts of unlabeled data leads to better generalisation capabilities and an improved robustness against acoustic, speaker and language variations <cit.>. Furthermore, within an SSL model, different layers are able to capture various speech attributes without supervision, such as phones, word boundaries and speaker characteristics <cit.>. Depending on the application, the hidden states of the most suited layers of the SSL model can be chosen as inputs for a task-specific model <cit.>. In automatic speech recognition (ASR), great improvements have been observed by self-supervised pre-training of large multi-purpose speech models <cit.>. Most SSL methods rely on quantisation or clustering to guide the training towards discovering meaningful and distinct speech units <cit.>. Recently, there has been a growing interest in extracting discrete units from self-supervised models, as they have several advantages <cit.>. First, the conversion from waveforms or feature vectors to discrete units facilitates a strong compression which allows efficient model training, fast inference and low-cost storage. Second, temporal clustering of granular features aids the discovery of acoustic units which are strongly correlated with the content of spoken language <cit.>.
Third and finally, discretisation allows the integration with natural language processing techniques and models <cit.>. The discrete units can be treated as a pseudo-language to pre-train speech decoders <cit.> and unit language models <cit.> for spoken language processing. However, despite their impressive performance, SSL models still exhibit sensitivity to shifting domains. This effect has been observed for changing acoustic and linguistic conditions <cit.>, unseen speaker accents <cit.>, and noisy environments <cit.>. Besides a drop in performance, this has implications for discretisation. For example, HuBERT <cit.> was observed to assign clusters given by each noise condition <cit.>. As additive noise and reverberation shift the SSL model's features, the extracted discrete sequence is strongly dependent on the acoustic conditions, which impacts the performance of decoders and unit language models that are trained on clean data <cit.>. This work focuses on the robustness of discrete SSL units to distorted speech in noisy and reverberant environments, which is relevant for many applications in real-world scenarios. As SSL foundation models are becoming larger, pre-training noise-robust models from scratch or adapting SSL models to new domains forms a large burden on computing resources. Therefore, there is a strong need for small and versatile models that can adapt the SSL features to distortions. We propose a parameter-efficient denoiser to extract robust discrete units in noisy environments. The denoiser generates a clean cluster sequence from the latent features of a pre-trained SSL model for a distorted speech input, similar to a denoising auto-encoder <cit.>. Notice that denoising is not merely mapping one unit to another, but also entails inserting or deleting units. We investigate an external denoiser and an adapter-based denoiser. The denoiser approach can be applied to any pre-trained model, and requires a relatively small amount of data to train. Moreover, it does not require finetuning the SSL model itself on noisy data, and the backbone model (e.g. ASR, voice conversion model, unit language model) can be trained on discrete units extracted from clean data and does not need to be retrained with new clusters. We evaluate the generated discrete units on a denoised unit prediction task and on a distorted speech recognition task, showing the benefits of the proposed method for several SSL models. As the model is light-weight, we show that it can be efficiently adapted to new target environments. § RELATED WORK Several works have investigated noise-robust pre-training of speech SSL models with synthesised noisy mixtures, mainly by constraining the outputs <cit.> or the hidden features <cit.> to match the clean outputs or representations. For large SSL models, pre-training becomes very costly. This can be circumvented by noise-robust distillation into smaller models <cit.> or by parameter-efficient adaptation through finetuning small adapter blocks, such as Houlsby adapters <cit.>, inserted in the SSL model. Recent work <cit.> has investigated a technique to improve the augmentation invariance of discrete SSL units in the scope of generative spoken language modeling <cit.>. The outputs of an SSL model for an augmented input are mapped to the discrete clusters from a K-means teacher on the clean outputs, by training a new quantiser. However, their approach has several drawbacks. First, the quantiser is limited to a 3-layer MLP. 
Second, the quantiser that has to denoise the clusters is not conditioned on the acoustics (i.e. lower layer features in an SSL model), as it only uses the features of the K-means layer. Therefore, it has no idea of the signal SNR and how much the clean signal was distorted. Third, iterative re-training with new clusters is necessary. Finally, their research focuses on the application of small cluster vocabularies and base SSL models for generative spoken language modeling, instead of the applications for ASR. Our experiments with discretised speech units and the application to efficient ASR model training and inference follow recent success in this field <cit.>. § METHOD We aim to extract noise-robust and reverberation-robust discrete speech units from SSL models by training a denoiser model on top of a frozen pre-trained SSL encoder to generate clean discrete units for an augmented input. First, discrete units are computed for a clean speech dataset using a quantisation mechanism of choice, e.g. K-means, as shown in Figure <ref>. Second, the denoising task is learned on an augmented dataset created by adding noises and reverberation to a small dataset of clean speech. The denoiser can be external (Figure <ref>) or contain additional adapter blocks in the SSL encoder (Figure <ref>). Finally, for ASR applications, a noisy speech sample is discretised using the trained denoiser and fed to the discrete ASR model, depicted in Figure <ref>. §.§ Self-Supervised Speech Representation Learning While SSL is a very broad field spanning many research efforts in speech processing <cit.>, we apply our method to three well-known speech models, namely HuBERT, WavLM and Wav2vec2. First, HuBERT <cit.> learns speech representations by applying masked language modeling <cit.> to speech features. Discrete acoustic units are discovered by offline clustering of MFCC features or the features of a previously pre-trained model. Then, these audio tokens are predicted with a bidirectional encoder conditioned on masked latent speech features. In WavLM <cit.> this paradigm is further applied to noisy data and overlapping speech, by predicting clean target tokens for a noisy mixture. Finally, Wav2vec2 <cit.> predicts quantised latent features for masked audio frames with a context network, leveraging contrastive learning techniques. §.§ Discrete Speech Units Quantisation of speech can be part of the SSL training scheme, e.g. in wav2vec models <cit.>. However, the codebook during training is often much larger than necessary for downstream tasks. On the other hand, HuBERT-like models <cit.> have shown that offline clustering of hidden layer features leads to informative discrete speech units. In this work, we apply K-means clustering to the layer of the SSL model that is most informative for word information and performs best on a downstream ASR task <cit.>. The clustering model is learned on a small fraction of clean speech and the chosen cluster vocabulary size depends on the size and output dimension of the SSL model. We reduce the cluster sequence length by deduplication, i.e. removing repetitions <cit.>. Offline clustering is depicted in Figure <ref>. §.§ Noise-Aware Model Adaptation Previous work <cit.> has addressed the domain shift between pre-training and target domains by continual pre-training of the SSL model on target data. As HuBERT and Wav2vec2 models are trained on clean speech, adaptation to distorted speech data improves performance in noisy and reverberant environments. 
To this end, as a baseline, we adapt pre-trained HuBERT and Wav2vec2 models by continuing the pre-training process on augmented data. For HuBERT, the K-means quantiser of the pre-trained model is used to generate the target clusters for the augmented dataset. We denote continual pre-training as COPT. Moreover, HuBERT can be optimised directly to perform the denoising task, by predicting the clean clusters for an augmented sample (cf. WavLM). We abbreviate this noise-invariant pre-training as NIPT. These methods have two main drawbacks. First, they require the SSL models to be updated entirely, which is computationally expensive. Second, the adaptation can shift the hidden features, such that the quantiser of a pre-trained ASR model, trained on clean data, might be suboptimal. §.§ Efficient Learning of Robust Discrete Speech Units We propose a light-weight external denoiser module that is able to reduce the effects of additive noise and reverberation during discretisation of pre-trained SSL features, without requiring adaptation of the full SSL model. The module learns to simultaneously denoise the features and generate discrete units. Given a mixed speech signal, the denoiser encoder processes the hidden SSL features and an autoregressive decoder predicts the discrete cluster sequence of the clean speech. In SSL models, the lower layers tend to correlate with acoustic factors (speaker, environment, domain, etc.), while the upper layers are more aligned with phonetic, lexical and semantic features <cit.>. Therefore, the hidden features of all layers in the SSL encoder are combined with a learnable weighted sum <cit.>, such that the denoising of the discrete clusters is conditioned on the low-level acoustics as well. The computed weighted sum is then linearly projected to a smaller dimension for efficient modeling. The encoder processes and denoises the features, and the decoder generates a discrete cluster sequence autoregressively by attending to the encoder outputs. The decoder has to predict deduplicated cluster units, which makes the model robust against reverberation, compared to an approach with frame-based cluster targets (as in HuBERT adaptation). Therefore, the sequence lengths of the input and output are not the same, which is why an encoder-decoder approach with CTC <cit.> encoder regularisation is used for the denoiser. The outlined method is called Denoiser and depicted in Figure <ref>. It is a completely external module that only requires the hidden features of the SSL model, which can be extracted before training. Inspired by wide success in efficient model adaptation <cit.>, we also investigate a denoiser model that additionally has small residual adapter blocks inserted in the layers of the frozen SSL model. This method, denoted AdaDenoiser and depicted in Figure <ref>, has the advantages that it can adapt the SSL features directly (i.e. before the weighted sum) and the burden of the denoiser encoder is reduced, but it requires loading the SSL model during training. We use Houlsby adapters <cit.>, consisting of a down-projection to a bottleneck dimension, a non-linearity and an up-projection to the original feature dimension. The adapters are inserted after the feed-forward layers in the SSL encoder. § EXPERIMENTAL SETUP We use the train-clean-100, dev-clean and test-clean splits of LibriSpeech <cit.> for the experiments. The data configuration and training setup closely follow the clustering and ASR modeling setup from <cit.>, implemented in ESPnet <cit.>. 
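As a concrete reference for the setup detailed next, the following sketch illustrates the discretisation pipeline of Sec. <ref> that all experiments rely on: frame-level features of one SSL layer are assigned to their nearest K-means centroid and the resulting unit sequence is deduplicated. The feature-extraction function is a stand-in, and the layer index and vocabulary size in the usage comment correspond to the base HuBERT/WavLM configuration described below.

```python
import numpy as np

# Minimal sketch of offline discretisation: nearest-centroid assignment of one
# SSL layer's frame features, followed by deduplication of repeated units.
# `extract_features` is a stand-in for running the frozen SSL model and
# returning a (T, D) array of hidden features for the chosen layer.

def assign_units(features, centroids):
    """Nearest-centroid assignment; centroids come from K-means fit on clean data."""
    d2 = ((features[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)                      # one cluster index per frame

def deduplicate(units):
    """Collapse runs of repeated cluster indices (removes frame-rate repetitions)."""
    units = np.asarray(units)
    keep = np.ones(len(units), dtype=bool)
    keep[1:] = units[1:] != units[:-1]
    return units[keep]

# Usage sketch (illustrative): for HuBERT/WavLM base, layer 9 features and
# K = 500 centroids learned on 30% of train-clean-100, as described below.
# feats = extract_features(ssl_model, waveform, layer=9)
# units = deduplicate(assign_units(feats, kmeans_centroids))
```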
§.§ Offline Clustering The K-means model using pre-trained clean SSL features is trained on a randomly selected 30 percent of LibriSpeech train-clean-100. Based on previous work <cit.>, for HuBERT and WavLM, we learn K=500 cluster centroids from layer 9 features for base models, and K=2000 centroids from layer 21 features for large models. For Wav2vec2, we use layer 7 for the base model and layer 12 for the large variant. §.§ Data Augmentation For adaptation to noisy data, we use the same 30 hours data split as used to train the K-means model for a fair comparison. For every utterance, we make 5 different versions: the original clean utterance, a reverberated utterance, and 3 noisy utterances with different noise types. In total this gives a training set of 150 hours or 42k utterances. For reverberation, we sample real RIRs from the openSLR28 dataset <cit.> which contains impulse responses from the Aachen IR and RWCP datasets and the REVERB challenge. For additive noise, for every utterance we sample one noise segment from the DEMAND dataset <cit.>, one from MUSAN <cit.> and one from CHIME <cit.>. The noises are added to the clean audio with a random SNR between 0 and 20 dB. For validation, we sample 1000 utterances from dev-clean and create 4 augmentations per sample. For evaluation, we create test-clean-augmented by sampling 100 utterances from test-clean and generating 13 augmentations per utterance, including 1 reverberated and 12 noisy versions, by mixing with a noise sample from the three noise datasets at SNRs of [5,10,15,20] dB. Including the clean samples, this gives 1400 test utterances. The RIRs and noise types for evaluation are unseen during training. §.§ SSL Baselines The HuBERT and Wav2vec2 base models were pre-trained on 960h LibriSpeech and their large variants on 60kh LibriLight. WavLM was trained on augmented data. The base models have 12 Transformer layers (95M parameters) and the large models have 24 Transformer layers (316M parameters). In the adaptation experiments, we continue the pre-training of pre-trained HuBERT models in fairseq <cit.> for 150k steps with 300K tokens per batch and a learning rate of 2e-5 with polynomial decay and 20k warmup steps, on the data from Section <ref>. In case of training only residual adapters for NIPT adaptation (AdaNIPT), the learning rate is 1e-3, and the bottleneck dimension of the adapters is set to 64 or 1024 <cit.>. §.§ Denoiser Setup The denoiser module is trained on the augmented dataset from Section <ref>. The inputs are a learned weighted sum of the hidden features of the (frozen) SSL model <cit.>, which are linearly mapped to a smaller dimension of 256. The Denoiser is implemented in ESPnet as a hybrid CTC/Attention encoder-decoder model <cit.>, where the encoder is regularised with a CTC objective and the decoder is trained with a cross-entropy loss on target tokens. The model consists of an encoder with either 2 Conformer <cit.> layers (Denoiser-S) or 6 Transformer layers (Denoiser-M) and a 3-layer Transformer decoder. For the AdaDenoiser method, the encoders are smaller. For small SSL models, we use AdaDenoiser-S, which has no additional encoder, but only retains the linear down-projection layer after the weighted sum as encoder. For large SSL models, the additional encoder consists of 2 Transformer layers (AdaDenoiser-M). The decoder remains the same as in the Denoiser with 3 Transformer layers. The adapters have a bottleneck dimension of 64 and GELU non-linearity. 
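For concreteness, the snippet below sketches the two architectural ingredients just described: the learnable weighted sum over all SSL layers with a linear projection to a smaller dimension, and a Houlsby-style adapter with a bottleneck of 64 and a GELU non-linearity. It is a generic PyTorch rendition of the recipe under these assumptions, not the exact ESPnet implementation used in the experiments.

```python
import torch
import torch.nn as nn

# Sketch of (i) the learnable layer-weighted sum feeding the denoiser and
# (ii) a Houlsby-style residual adapter (down-projection, GELU, up-projection)
# as inserted after the feed-forward block of each SSL layer. Shapes are
# illustrative; this is a sketch of the general recipe.

class LayerWeightedSum(nn.Module):
    def __init__(self, n_layers: int, feat_dim: int, proj_dim: int = 256):
        super().__init__()
        self.weights = nn.Parameter(torch.zeros(n_layers))    # one weight per layer
        self.proj = nn.Linear(feat_dim, proj_dim)              # map to smaller dim

    def forward(self, hidden_states):                          # (L, B, T, D)
        w = torch.softmax(self.weights, dim=0)
        mixed = (w[:, None, None, None] * hidden_states).sum(dim=0)
        return self.proj(mixed)                                 # (B, T, proj_dim)

class HoulsbyAdapter(nn.Module):
    def __init__(self, feat_dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(feat_dim, bottleneck)
        self.act = nn.GELU()                                    # GELU non-linearity
        self.up = nn.Linear(bottleneck, feat_dim)

    def forward(self, x):
        return x + self.up(self.act(self.down(x)))              # residual adapter
```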
Convolutional subsampling (even with a factor 2) of the input features was found to be disadvantageous for the denoising task. Temporal input feature masking did not improve the denoiser either. The denoiser models are trained for 50 epochs with an effective batch size of 256 utterances, and a learning rate of 1e-3 with 5k warmup steps and exponential decay. The targets for the denoiser models are deduplicated cluster indices. The predicted clean clusters are decoded with a beam size of 20 and a CTC-weight of 0.3. §.§ ASR Modeling The ASR model follows <cit.> and consists of a hybrid CTC/Attention encoder-decoder model <cit.>, with a 12 layer E-Branchformer <cit.> encoder and 6 layer Transformer <cit.> decoder. The discrete inputs are deduplicated cluster indices. The target transcriptions are tokenised with 6000 BPE tokens. We found that especially for noisy ASR, applying BPE modeling to input cluster units does not improve performance. The discrete inputs are converted into 512-dimensional embeddings, followed by random temporal masking, convolutional subsampling, and then fed to the encoder. The ASR model is trained on LibriSpeech train-clean-100 for 300 epochs with a learning rate of 5e-4 with decay and 5K warmup steps. The model has 38M parameters and requires 12 GPU hours to train. Decoding is performed with beam size 20 and CTC weight 0.3, without language model. The ASR model is trained on discrete units extracted from clean data (train-clean-100) and is not retrained after adaptation of the discrete units and SSL models (cf. Section <ref>). § RESULTS §.§ Robust Discrete Units Augmentation-invariant discrete speech units should not depend on the presence of noise or reverberation. In this experiment, we investigate the sensitivity of SSL model discretisation to augmentations by evaluating the Unit Error Rate (UER) between the discrete cluster units extracted from a clean signal with the SSL model and the clusters extracted from distorted versions of the same signal with adapted models. The UER is computed as the Character Error Rate between the two sequences, treating the discrete units as characters. We evaluate our approach on HuBERT and Wav2vec2, which were pre-trained on clean data only, and on WavLM, which was pre-trained to perform denoising on augmented data, and compare to the other adaptation strategies. For all methods, the quantiser is trained on features of the backbone pre-trained SSL model. Table <ref> shows the results. The proposed Denoiser and AdaDenoiser reduce the UER for all models and baselines, with only a fraction of the total model parameters, indicating that the shift between noisy discrete units and clean discrete units is reduced for the same spoken sentence. Overall, the small Denoiser model is more effective than the larger Denoiser model for UER reduction. In most cases, for base variants of SSL models the AdaDenoiser outperforms the Denoiser, except for WavLM. Adapters seem to have less effect for WavLM, which was trained on augmented data. For WavLM large, the AdaDenoiser approach did not converge to a useful result and probably requires a different architecture to be optimal. For HuBERT, the strong improvements on the reverberated test set over the adaptation baselines suggest that the CTC or Attention objective with deduplicated units is more effective than frame-wise denoising for the case of reverberation. For Wav2vec2, the high UERs in the table indicate that it is the most sensitive to noise-induced feature shifting. 
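For reference, the UER reported above can be computed as sketched below: a Levenshtein (edit) distance between the deduplicated clean and distorted cluster sequences, normalised by the length of the clean sequence. The implementation is a plain dynamic-programming sketch with illustrative function names.

```python
# Sketch of the Unit Error Rate (UER): edit distance between the clean and
# distorted deduplicated unit sequences, normalised by the clean length
# (a character error rate with cluster indices playing the role of characters).

def edit_distance(ref, hyp):
    """Standard Levenshtein distance over two sequences of cluster indices."""
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        prev, dp[0] = dp[0], i
        for j, h in enumerate(hyp, start=1):
            cur = min(dp[j] + 1,            # deletion
                      dp[j - 1] + 1,        # insertion
                      prev + (r != h))      # substitution
            prev, dp[j] = dp[j], cur
    return dp[-1]

def unit_error_rate(clean_units, noisy_units):
    return edit_distance(clean_units, noisy_units) / max(len(clean_units), 1)
```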
§.§ Robust ASR In this section, we analyse the effectiveness and robustness of the discrete clusters for ASR in noisy test environments. To this end, we train a discrete ASR model on clean data (train-clean-100) with the clusters from an SSL model, and evaluate on augmented data with the clusters from the noise-adapted models. We found that retraining the ASR model on adapted units does not have a significant benefit over using ASR models trained on clean SSL units, hence the ASR model can be trained on clean data and the adaptation only uses unlabeled data. Table <ref> shows the Word Error Rates (WER). For the base SSL models, we observe strong improvements in almost all settings, both clean and noisy, even for WavLM base, which was already trained to perform denoising. Of all SSL models, Wav2vec2 benefits most from denoising, as was indicated with the UER results. For the large models, which were pre-trained on much more data, there are still improvements from the proposed adaptation. Additionally, the proposed approach requires training much fewer parameters than regular SSL adaptation. For most settings, the AdaDenoiser model outperforms the Denoiser model, especially in very low SNR regimes. In some cases, the WER is reduced by more than 50% compared to the baseline model. We experienced that an optimal performance for low SNR data requires either adaptation of the convolutional feature extraction layers of the SSL, or inserting layer-wise adapters such as in the AdaDenoiser, which can have a bigger impact on the convolutional outputs compared to the weighted sum approach in the Denoiser. For WavLM large, which is already capable of denoising and was pre-trained on augmented data, our method has expectedly less effect besides efficient test-time adaptation. We remark that in contrast to discrete unit denoising, for noisy speech recognition a denoiser model (Denoiser-M) with a more powerful encoder is beneficial. The relation between UER and WER is non-monotonic, which is reminiscent of the relation between Phone Error Rate and WER. However, we still observe a general trend of improvements for both Denoiser architectures compared to the baselines. The WER shows the optimal model for downstream speech recognition models, but depending on the application (e.g. voice conversion, language modeling) the UER might be preferred. Finally, Table <ref> details a short ablation study to validate our choice for an encoder-decoder model, which outperforms an encoder-only (with CTC) and a decoder-only variant. §.§ Test Time Adaptation Previous sections have shown the capabilities of the proposed denoiser by pre-training on a varied set of noises and evaluating on a test set with unseen noises. In practice, if one wants to apply the model in a new setting (e.g. a factory with specific noises which were not well represented in the augmented training set), it could be beneficial to adapt the model to the target environment by recording a few samples and then finetune the model. As the denoiser module is a light-weight extension with few parameters, it can be finetuned on the fly with only a limited amount of unlabeled target data. We simulate this setting by choosing a new noise type with several recordings from the NTT Ambient Noise database <cit.>. For every recorded noise sample of 30 seconds, we create 100 utterances by combining the noise with clean speech samples from the training set at different SNR levels between 0 and 20 dB. Only the encoder of the Denoiser is finetuned, the rest is frozen. 
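The construction of the finetuning set can be sketched as follows; function and variable names are illustrative, the SNR mixing rule is a generic power-based formulation, and the sketch assumes each noise recording is at least as long as the clean utterance it is mixed with.

```python
import numpy as np

# Sketch of the test-time adaptation data: each 30 s recording of the new noise
# source is mixed with clean training utterances at random SNRs between 0 and
# 20 dB, producing 100 mixtures per recording; only the Denoiser encoder is
# subsequently finetuned on these pairs.

def mix_at_snr(speech, noise, snr_db, rng):
    """Add a random crop of `noise` to `speech` at the requested SNR (in dB).
    Assumes the noise recording is at least as long as the utterance."""
    start = rng.integers(0, max(len(noise) - len(speech), 1))
    n = noise[start:start + len(speech)]
    p_s = np.mean(speech ** 2)
    p_n = np.mean(n ** 2) + 1e-12
    scale = np.sqrt(p_s / (p_n * 10 ** (snr_db / 10.0)))
    return speech + scale * n

def build_adaptation_set(clean_utts, noise_recordings, per_recording=100, rng=None):
    rng = np.random.default_rng(0) if rng is None else rng
    pairs = []
    for noise in noise_recordings:               # e.g. 30 s ambient recordings
        for _ in range(per_recording):           # 100 mixtures per recording
            speech = clean_utts[rng.integers(len(clean_utts))]
            snr = rng.uniform(0.0, 20.0)
            pairs.append((mix_at_snr(speech, noise, snr, rng), speech))
    return pairs
```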
Figure <ref> shows the UER of a finetuned Denoiser-S model in function of the amount of recorded noise samples, computed on 100 samples from test-clean augmented with unseen noises of the same noise type. We observe that finetuning to a new stationary noise source such as an old car or the inside of a train improves the pre-trained denoiser. For non-stationary noises with babbling like shopping mall recordings, finetuning has less effect. § CONCLUSION This paper focuses on reducing the sensitivity of SSL model discretisation in noisy and reverberant environments. We have proposed a small encoder-decoder denoiser model and an adapter-based variant that denoise SSL features and predict clean discrete units for noisy inputs. The method is parameter-efficient and able to improve discretisation of SSL models for noisy data, as shown for denoised unit prediction and noisy speech recognition. Future work could apply this same strategy to other fields using speech discretisation, or delve into alternatives for the bottleneck layer, larger datasets and pre-training of the decoder. IEEEbib
http://arxiv.org/abs/2409.03700v1
20240905165658
Constituent automorphism decoding of Reed-Muller codes
[ "Yicheng Qu", "Amir Tasbihi", "Frank R. Kschischang" ]
cs.IT
[ "cs.IT", "math.IT" ]
Constituent Automorphism Decoding of Reed–Muller Codes Yicheng Qu, Amir Tasbihi, and Frank R. KschischangThe authors are with the Edward S. Rogers Sr. Dept. of Electrical and Computer Engineering, University of Toronto, 10 King's College Road, Toronto, Ontario M5S 3G4, Canada. Email: , , . Submitted for publication on July 19th, 2024. September 9, 2024 =============================================================================================================================================================================================================================================================================================== § ABSTRACT Automorphism-ensemble decoding is applied to the Plotkin constituents of Reed–Muller codes, resulting in a new soft-decision decoding algorithm with state-of-the-art performance versus complexity trade-offs. Reed–Muller codes, automorphism ensemble decoding, successive cancellation list decoding. § INTRODUCTION This paper introduces the constituent automorphism (CA) decoder, a new soft-decision decoding algorithm for Reed–Muller (RM) codes. The new algorithm, a variant of automorphism ensemble (AE) decoding <cit.>, exploits the recursive Plotkin structure of RM codes, applying AE decoding to constituent RM codes located at various levels in the resulting decoding tree. By carefully adjusting the number of automorphisms applied to particular constituent decoders, CA decoding results in better performance versus complexity trade-offs than other state-of-the-art decoding algorithms such as successive cancellation list (SCL) decoding <cit.> and AE decoding itself <cit.> (where automorphisms are applied only at the root of the decoding tree). Although Reed–Muller codes are among the earliest families of codes introduced in coding theory <cit.>, contemporary interest in them arises from (a) their excellent soft-decision decoding performance at short block lengths, (b) their close connection to polar codes, (c) the relatively recent discovery that RM codes can be capacity-achieving in certain situations, and (d) the fact that RM codes arise in connection with various problems in theoretical computer science; see <cit.> for an excellent review. In <cit.>, Schnabl and Bossert view RM codes as generalized multiple concatenated (GMC) codes, introducing the first soft-decision RM decoding algorithm—herein referred to as the GMC decoder—by exploiting their recursive Plotkin structure to successively decode their constituent codes. In the literature on polar codes <cit.>, the GMC decoder is also often referred to as a successive cancellation (SC) decoder. In <cit.>, Dumer and Shabunov show that better RM soft-decision decoding performance can be obtained for the binary-input additive white Gaussian noise (BI-AWGN) channel by combining GMC decoding with list-decoding. The resulting successive-cancellation list (SCL) decoder was extended to polar codes in <cit.>. The idea of exploiting code automorphisms for decoding dates back to the 1960s <cit.>. Recent interest in automorphism-based decoders for RM codes <cit.> stems from their ability to approach maximum-likelihood (ML) decoding performance with a reasonable complexity. The general idea is to permute the received vector by the elements of a set ℰ of code automorphisms, then to decode the permuted versions in parallel via a simple sub-optimal elementary decoder. 
The inverse automorphism is then applied to each decoded word to arrive at a list of |ℰ| (not necessarily distinct) decoding candidates, where |ℰ| denotes the number of elements in ℰ. The decoder then selects the most likely codeword from the candidate list. While the automorphism-based decoder is approximately |ℰ| times computationally more complex than its elementary decoder, its inherent parallelism makes it very hardware-friendly <cit.>, which may be seen as an advantage of AE decoding compared with SCL decoding. The key idea of this paper is to apply AE decoding at the level of the Plotkin constituent codes of a Reed–Muller code. This maintains a high degree of parallelism in the decoding algorithm and, as we demonstrate, it can result in performance-complexity benefits. The remainder of this paper is organized as follows. Various coding-theoretic preliminaries needed to understand the rest of the paper are reviewed in Sec. <ref>. The CA decoder is introduced in Sec. <ref>, and its complexity is analyzed in Sec. <ref>. Simulation results showing performance and complexity tradeoffs are given in Sec. <ref>, along with comparisons to other state-of-the-art RM decoders. Some brief concluding remarks are given in Sec. <ref>. Throughout this paper, the real numbers are denoted by ℝ, the integers are denoted by ℤ, and the two-element finite field { 0, 1 } is denoted by 𝔽_2. The extended real numbers ℝ∪{ -∞, +∞} are denoted as . The (modulo-two) addition operation in 𝔽_2, equivalent to an exclusive-OR (XOR) operation, is denoted as ⊕. The soft XOR of two extended real numbers a, b ∈, denoted as a ⊞ b, is given as a ⊞ b ≜ 2tanh^-1( tanh( a/2) tanh( b/2) ), where tanh( ±∞) = ± 1, and where the symbol `≜' signifies that the right-hand side is the definition of the left-hand side. The negative log-sigmoid of x ∈ is given as (x) ≜ln(1 + exp(-x)), where (-∞) = ∞ and (∞) = 0. For any integer k, ℤ^≥ k≜{ k, k+1, …} and ℤ^≤ k≜{…, k-1, k }. For any n ∈ℤ^≥ 0, [n] ≜{ 0, 1, …, n-1 }, where [0] is the empty set. The mapping (-1)^(·): 𝔽_2 →ℝ is defined, for b ∈𝔽_2, as (-1)^b ≜ 1, if b=0; -1, if b=1. For any predicate p, 1_p denotes the indicator function 1_p ≜ 1, if p is true; 0, otherwise. The codomain of 1_p may be or 𝔽_2 as context dictates. Vectors (over or 𝔽_2) are denoted with boldface lower-case letters. A vector of length n, i.e., a vector having n components, is referred to as an n-vector. The ith component of an n-vector 𝐯 is denoted 𝐯[i], where i ∈ [n]. The empty vector ∅≜ () has zero length and no components. The concatenation of an n_1-vector 𝐮 and an n_2-vector 𝐯, in that order, is the (n_1+n_2)-vector ( 𝐮|𝐯 ) ≜ ( 𝐮[0], …, 𝐮[n_1-1], 𝐯[0], …, 𝐯[n_2-1] ). Generally (𝐮|𝐯 ) ≠ (𝐯|𝐮), although ( 𝐮|∅) = (∅|𝐮) = 𝐮. The all-zero and all-one n-vectors over 𝔽_2 are denoted as 0_n and 1_n, respectively, where the subscript may be omitted if the length is clear from the context. For any binary vector 𝐛 and any real vector λ of the same length, (-1)^𝐛λ denotes a real vector whose ith component is (-1)^𝐛[i]λ[i]. For any n ∈ℤ^≥ 0, ((a_i, b_i)_i ∈ [n]) denotes the n-tuple of 2-tuples ((a_0, b_0), (a_1, b_1), …, (a_n-1, b_n-1)), where the a_i's and b_i's are arbitrary mathematical objects. This notation may be generalized to n-tuples of m-tuples in the obvious way. For any m ∈ℤ^≥ 0, the ring of m-variate binary polynomials with indeterminates x_0, …, x_m-1 is denoted by 𝔽_2[x_0, …, x_m-1]. When m=0 we have 𝔽_2[ ] ≜𝔽_2. 
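As a brief aside, the two operations above translate directly into code; the sketch below is a literal numerical rendering of ⊞ and nls(·) for finite arguments (the extended-real corner cases with infinite LLRs, which the definitions also cover, are omitted here).

```python
import math

# Numerical sketch of the soft XOR and negative log-sigmoid defined above;
# only finite arguments are handled.

def soft_xor(a: float, b: float) -> float:
    """a ⊞ b = 2 * atanh( tanh(a/2) * tanh(b/2) )."""
    return 2.0 * math.atanh(math.tanh(a / 2.0) * math.tanh(b / 2.0))

def nls(x: float) -> float:
    """Negative log-sigmoid: ln(1 + exp(-x)); may overflow for very negative x."""
    return math.log1p(math.exp(-x))

def bipolar(b: int) -> int:
    """(-1)^b: maps 0 -> +1 and 1 -> -1."""
    return 1 - 2 * b
```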
For any r ∈ℤ, 𝔽_2^≤ r [x_0, …, x_m-1] denotes the set of polynomials in 𝔽_2[x_0, …, x_m-1] whose total degree is at most r. The total degree of the zero polynomial 0 is taken as -∞; thus for any r < 0, 𝔽_2^≤ r[x_0,…,x_m-1] = { 0 }. § PRELIMINARIES In this section we briefly review various coding-theoretic concepts needed to understand the remainder of the paper. §.§ Channels and Decoders Throughout this paper, we assume the use of a binary-input memoryless channel with input alphabet 𝔽_2, output alphabet 𝒴, and channel law given, for x ∈𝔽_2 and y ∈𝒴, as W(y | x), where W is either a conditional probability mass function or conditional probability density function according to whether 𝒴 is discrete or continuous. Without loss of generality, we assume that 𝒴 is chosen so that W(y | 0) ≠ 0 or W(y | 1) ≠ 0 for all y ∈𝒴. A standard example of such a channel is the binary symmetric channel with crossover probability p, where 𝒴 = 𝔽_2 and W(1| 0)=W(0| 1)=p. Another example is the binary erasure channel (BEC) with erasure probability ϵ, where 𝒴 = 𝔽_2 ∪{ e } and W(e | 0) = W(e | 1) = ϵ and W(0 | 1)=W(1 | 0) = 0. Another standard example, and the one we use to present simulation results, is the BI-AWGN with noise variance σ^2, where 𝒴 = ℝ, with W(y | 0) = 𝒩(1,σ^2) and W(y | 1) = 𝒩(-1,σ^2), where 𝒩(m,σ^2) denotes a Gaussian density function with parameters m and σ^2. The signal-to-noise ratio (SNR) of such a channel is 1/σ^2. Associated with each received channel output y ∈𝒴 is the log-likelihood ratio (LLR) Λ(y) ≜ln( W(y | 0) / W(y | 1)) ∈, where Λ(y) = -∞ if W(y | 0) = 0 and Λ(y) = +∞ if W(y | 1) = 0. Infinite LLRs occur, for example, for unerased symbols received at the output of the BEC. The hard decision associated with an LLR value λ = Λ(y), for some channel output y, is given by the function : →𝔽_2, where (λ) ≜1_λ < 0. Suppose that a binary codeword of length n is transmitted. Based on the corresponding channel output 𝐲∈𝒴^n, we assume that the receiver produces the log-likelihood-ratio (LLR) vector λ∈^n, where λ[i] = Λ(𝐲[i]) for i ∈ [n]. The LLR vector serves as the interface between the channel and the decoder. We also define the vector of hard-decisions (λ) ∈𝔽_2^n as a binary n-vector whose entries are hard decisions associated with λ entries, i.e., (λ)[i] ≜(λ[i]) for i ∈ [n]. A decoding rule or simply decoder for a binary code C of length n is a function D: ^n → C ∪{ F } that maps an LLR vector λ either to a codeword of C or to the symbol F (which indicates a decoding failure). Two decoders D_1 and D_2 are the same for a given channel if they almost surely produce the same decoding decision, i.e., if (D_1(λ) ≠ D_2(λ)) = 0, where λ is the LLR vector corresponding to the channel output. We write D_1 ≠ D_2 when D_1 and D_2 are not the same. The analog weight w(𝐱, λ) of a channel input 𝐱 with respect to an LLR vector λ is given as the sum of the absolute values of those components of λ whose hard-decisions disagree with the corresponding component of 𝐱, i.e., w(𝐱,λ) ≜∑_i:𝐱[i] ≠(λ[i]) |λ[i]|. The analog weight w(𝐱,λ) measures the cost of disagreements between 𝐱 and (λ). Among all possible channel inputs, the hard-decision vector (λ) itself has minimum possible analog weight, namely 0; however, (λ) may not be a codeword. Given an LLR vector λ, a maximum-likelihood (ML) decoder for a code C produces a codeword 𝐯∈ C having least analog weight with respect to λ. 
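(For concreteness, the channel interface just described — the BI-AWGN LLR, the hard-decision map, and the analog weight — can be sketched as follows. This is our own illustration; sigma2 denotes the noise variance σ², for which Λ(y) = 2y/σ² on the BI-AWGN with means ±1.

import math, random

def bi_awgn_llrs(y, sigma2):
    # Lambda(y) = ln W(y|0)/W(y|1) = 2*y/sigma2 for the BI-AWGN channel.
    return [2.0 * yi / sigma2 for yi in y]

def hard(l):
    # hd(lambda) = 1 if lambda < 0 else 0
    return 1 if l < 0 else 0

def analog_weight(x, llr):
    # w(x, lambda): sum of |lambda[i]| over positions where x disagrees with
    # the hard decision of lambda[i].
    return sum(abs(l) for xi, l in zip(x, llr) if xi != hard(l))

if __name__ == "__main__":
    # toy usage: all-zero word of length 2^4 over a BI-AWGN at 4 dB SNR
    m, sigma2 = 4, 10 ** (-0.4)            # SNR = 1/sigma2 = 4 dB
    y = [1.0 + random.gauss(0.0, math.sqrt(sigma2)) for _ in range(2 ** m)]
    llr = bi_awgn_llrs(y, sigma2)
    print(analog_weight([0] * 2 ** m, llr)))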
If codewords are equally likely to be transmitted, an ML decoder minimizes block error rate (BLER), i.e., the probability that the decoding decision disagrees with the transmitted codeword. §.§ Code Automorphisms For any positive integer n, a permutation of order n is a bijection π : [n] → [n]. The set of all permutations of order n forms a group (the symmetric group S_n) under function composition. The group identity is denoted as 𝕀. The symmetric group S_n acts on n-vectors by permutation of coordinates, i.e., for any n-vector 𝐯 and any π∈ S_n, π𝐯≜ ( 𝐯[π(0)], … , 𝐯[π(n-1)] ). A permutation π∈ S_n is called an automorphism of a code C of length n if 𝐜∈ C implies π𝐜∈ C. In other words, an automorphism of a code is a permutation which maps codewords to codewords. The set of all automorphisms of C form a subgroup of S_n called the automorphism group of C. §.§ Reed–Muller Codes Fix m ∈ℤ^≥ 0. For any polynomial p = p(x_0,…,x_m-1) ∈𝔽_2[x_0, …, x_m-1], and any binary m-vector 𝐯, called a point, let p(𝐯) = p(𝐯[0], …, 𝐯[m-1]) ∈𝔽_2 denote the evaluation of the polynomial p at the point 𝐯, i.e., the binary value obtained when x_i is substituted with 𝐯[i], for each i ∈ [m]. When m=0, we have p(∅) = 0 if p = 0 and p(∅)=1 if p=1. Define the evaluation map : 𝔽_2 [x_0, …, x_m-1] →𝔽_2^2^m as the function which maps the polynomial p ∈𝔽 [x_0, …, x_m-1] to its evaluation at all points of 𝔽_2^m in lexicographic order, i.e., (p) ≜( p(0,…,0,0), p(0,…,0,1), p(0,…,1,0), …, p(1,…,1,1)). For any r ∈ℤ and any m ∈ℤ^≥ 0, the binary Reed–Muller code of log-length m and order r, denoted as (r,m), is the image of 𝔽_2^≤ r[x_0,…,x_m-1] under the evaluation map, i.e., (r,m) ≜{(p): p ∈𝔽_2^≤ r[x_0, …, x_m-1] }. It can be shown (see, e.g., <cit.>), for any m ∈ℤ^≥ 0 and any integer r satisfying 0 ≤ r ≤ m, that (r,m) is a code of block length n = 2^m, dimension k(r,m) ≜∑_i=0^r mi and minimum Hamming distance d = 2^m-r. When r < 0, (r,m) = {0_2^m} contains only the zero codeword. When r=0, (r,m) is the binary repetition code of length 2^m. When r=m-1, (m-1,m) is the (2^m,2^m-1) single-parity check (SPC) code. When r ≥ m, (r,m) = 𝔽_2^2^m (the whole space of 2^m-vectors). These are the trivial RM codes. When 0 < r < m-1, we call (r,m) nontrivial. RM codes are one of the few algebraic code families with a well-characterized automorphism group. We denote the automorphism group of (r,m) by r,m. When (r,m) is nontrivial, it is known <cit.><cit.> that r,m is isomorphic to the general affine group GA(m, 𝔽_2). When (r,m) is trivial, r,m is equal to S_2^m (i.e., every permutation of coordinates is an automorphism). Under the lexicographic order by which the evaluation map in (<ref>) is defined, one may show that RM codes admit a Plotkin structure <cit.>. This means that any codeword 𝐜∈(r,m), where m ≥ 1, can be written as 𝐜 = (𝐮|𝐮⊕𝐯), where 𝐮∈(r,m-1) and 𝐯∈(r-1,m-1). We refer to (r,m-1) and (r-1,m-1) as the Plotkin constituents of (r,m). Of course (r,m-1) and (r-1,m-1) are themselves RM codes, having their own Plotkin constituents. The relationships among the various Plotkin constituents of (r,m) are readily visualized using a binary tree, called a Plotkin tree, and denoted rm. Each vertex in the tree is labelled with a pair of integers (r',m'); in particular, the root is labelled with (r,m). A vertex v with label (r',m') where m'>0 has two children: the left child has label (r',m'-1) and the right child has label (r'-1,m'-1). Vertices (r',m') with m'=0 are leaf vertices; they have no children. Fig. <ref> illustrates 24. 
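(The evaluation-map construction of (r,m) and its Plotkin decomposition can be reproduced directly from the definitions. The sketch below is our own illustration, not code from the paper: it builds generator rows of RM(r,m) by evaluating all monomials of total degree at most r at the 2^m points in the lexicographic order above, and exhibits the (u | u ⊕ v) split of a random codeword.

import itertools, random

def points(m):
    # All points of F_2^m in lexicographic order (x_0 leftmost, x_{m-1} fastest),
    # matching the evaluation map above.
    return list(itertools.product([0, 1], repeat=m))

def eval_monomial(subset, p):
    # Monomial prod_{i in subset} x_i evaluated at the point p.
    out = 1
    for i in subset:
        out &= p[i]
    return out

def rm_generator_rows(r, m):
    # Evaluations of all monomials of total degree <= r; these rows span RM(r, m).
    rows = []
    for deg in range(r + 1):
        for subset in itertools.combinations(range(m), deg):
            rows.append([eval_monomial(subset, p) for p in points(m)])
    return rows

def random_codeword(r, m):
    c = [0] * (2 ** m)
    for row in rm_generator_rows(r, m):
        if random.random() < 0.5:
            c = [a ^ b for a, b in zip(c, row)]
    return c

if __name__ == "__main__":
    r, m = 2, 4
    assert len(rm_generator_rows(r, m)) == 11      # k(2,4) = 1 + 4 + 6 = 11
    c = random_codeword(r, m)
    u, second = c[: 2 ** (m - 1)], c[2 ** (m - 1):]
    v = [a ^ b for a, b in zip(u, second)]         # Plotkin: c = (u | u xor v)
    # u lies in RM(r, m-1) and v in RM(r-1, m-1); a full membership test
    # (e.g., Gaussian elimination over F_2) is omitted in this sketch.
    print(u, v))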
Since it is possible for some vertices in rm to have the same label, it is useful to assign a unique variable-length binary address to each vertex. The construction is standard: the root is assigned address ∅, and throughout the tree (or any rooted subtree thereof), the left child of a vertex with the address 𝐚 receives address (𝐚| 0), while the right child receives address (𝐚| 1). However, rather than writing addresses as vectors, we will write them as binary strings. For example, Fig. <ref> illustrates the vertex addresses for a rooted subtree of 36. If the root of the tree has label (r,m), then a vertex with address 𝐚 has label (r-wt(𝐚),m-len(𝐚)), where wt(𝐚) denotes the Hamming weight of 𝐚 and len(𝐚) denotes the length of 𝐚. In principle, soft-decision ML decoding of (r,m) can be accomplished via the Viterbi algorithm; however the number of edges in the trellis diagram scales at least exponentially in the block length <cit.>, making this approach computationally infeasible for all but very short RM codes. On the other hand, ML decoding of the trivial RM codes (which include repetition codes and SPC codes) is easy. ML decoding of (m-1,m), the SPC code, is straightforward via the Wagner decoding rule <cit.>, with the main complexity being determination of the position in the received word having the smallest LLR magnitude. Soft-decision decoding of first-order RM codes (1,m) can be accomplished using the “Green Machine” decoder <cit.><cit.>, with the main complexity being the computation of a Hadamard transform of the received LLR vector. In the following sections, we discuss some of the non-ML soft-decision decoding algorithms for general RM codes. §.§ GMC Decoding of RM Codes As already noted, the GMC decoder <cit.> uses a divide-and-conquer approach that exploits the recursive Plotkin structure of RM codes, splitting the task of decoding (r,m) into that of decoding its Plotkin constituents, namely (r-1,m-1) and (r,m-1). In turn, these smaller decoding problems can themselves be split into even smaller decoding problems, until eventually an “easy” decoding problem is reached, at which point no further task-splitting is required. To make this precise, let 𝒜, called an atom set, be any collection of RM codes for which task-splitting is not required due to the availability of computationally feasible decoders for those codes. For example, 𝒜 might contain all trivial RM codes and the first-order RM codes (1,m). We will assume that an atom set always contains trivial RM codes (r,0) of length 1. An RM code not contained in 𝒜 is called composite with respect to 𝒜. The decoding function for an RM code (r,m) ∈𝒜 is denoted as A_r,m. Let λ∈^2^m, denote the LLR vector produced at the output of a binary-input memoryless channel when a codeword of (r,m) is transmitted. The GMC decoder with atom set 𝒜 is the function rm^𝒜 : ^2^m→(r,m) recursively defined by rm^𝒜(λ) ≜ A_r,m(λ), if (r,m) ∈𝒜; (𝐮|𝐮⊕𝐯), otherwise, where, assuming λ is partitioned as λ=(λ'|λ”) with λ', λ”∈^2^m-1, the binary 2^m-1-vectors 𝐯 and 𝐮 are given as 𝐯 = r-1m-1^𝒜 (λ'⊞λ”), 𝐮 = rm-1^𝒜 (λ' + (-1)^𝐯λ”), respectively. Since 𝒜 is required to include trivial length-1 RM codes, the GMC decoder is guaranteed to terminate. Note that computation of 𝐮 in (<ref>) depends on the availability of 𝐯 defined in (<ref>), and both 𝐮 and 𝐯 are required in (<ref>). 
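(Written out for the smallest atom set, i.e., with length-1 leaves only, the recursion above takes the following form. This is a sketch of ours rather than the authors' implementation, and it reuses soft_xor from the earlier sketch.

def gmc_decode(r, m, llr):
    # GMC / successive-cancellation decoding of RM(r, m) with length-1 leaves.
    if r < 0:
        return [0] * (2 ** m)                # RM(r<0, m) = {all-zero word}
    if m == 0:
        return [1 if llr[0] < 0 else 0]      # ML decision for RM(r>=0, 0)
    half = 2 ** (m - 1)
    lp, lpp = llr[:half], llr[half:]
    # v decoded from lambda' [+] lambda''   (constituent RM(r-1, m-1))
    v = gmc_decode(r - 1, m - 1, [soft_xor(a, b) for a, b in zip(lp, lpp)])
    # u decoded from lambda' + (-1)^v lambda''   (constituent RM(r, m-1))
    u = gmc_decode(r, m - 1,
                   [a + (1 - 2 * vi) * b for a, vi, b in zip(lp, v, lpp)])
    return u + [ui ^ vi for ui, vi in zip(u, v)]   # (u | u xor v))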
In effect, the GMC decoder rm^𝒜 performs a reverse post-order traversal of a rooted subtree of rm—called the GMC decoding tree with respect to 𝒜 and denoted as rm^𝒜—in which vertices with labels corresponding to codes that are composite with respect to 𝒜 are internal, and those with labels corresponding to elements of 𝒜 are leaves. We refer to the decoders for elements of 𝒜 as leaf decoders. The smallest possible atom set contains only the RM codes of length 1, in which case the GMC decoding tree for (r,m) is the same as the Plotkin tree rm. However, as noted earlier, another choice for the atom set is the one containing the trivial and the first-order RM codes, i.e., 𝒜^∗ = {(r, m): m ∈ℤ^≥ 0, r ∈ℤ^≤ 1∪ℤ^≥ m-1}. The circled vertices in Fig. <ref> illustrate 24^𝒜^∗, the GMC decoding tree for (2,4) with atom set 𝒜^∗, and Fig. <ref> shows the GMC decoding tree 36^𝒜^∗ (along with the address of each vertex). It is important to note that GMC decoding, though computationally convenient, does not always return an ML codeword, and therefore GMC decoding does not generally have the smallest possible BLER. The complexity of GMC decoding of an RM code of length n is 𝒪(n log n) <cit.>. Because of its low complexity (and its suboptimality), the GMC decoder can be used as an elementary decoder in AE decoding; see Sec. <ref>. §.§ SCL Decoding of RM Codes SCL decoders generalize GMC decoders by allowing the constituent decoders to return not just a single codeword, but rather a list of possible codewords. In SCL decoding, the atom set is usually chosen as 𝒜' ≜{(r,0): r ∈ℤ}, which contain only RM codes of length 1. To avoid an exponential growth of the decoding list, an upper bound on the returned list size at each constituent decoder is maintained, retaining only those candidate codewords of lowest cost, defined as follows <cit.>. For any LLR vector λ∈^2^m and any 𝐜∈(r,m), let (𝐜, λ) ≜∑_i ∈ [2^m]((-1)^𝐜[i]λ[i]) be the cost of decoding the LLR vector λ to the vector 𝐜, where was defined in (<ref>). If λ = (λ'|λ”), for some λ' and λ”∈^2^m-1, and if 𝐜 = (𝐮|𝐮⊕𝐯), then (𝐜, λ) = (𝐯, λ'⊞λ”)+ (𝐮, λ' + (-1)^𝐯λ”). The terms (𝐯, λ'⊞λ”) and (𝐮, λ' + (-1)^𝐯λ”) may be regarded as the Plotkin constituent costs of 𝐜 = (𝐮|𝐮⊕𝐯); the SCL decoder exploits this additive decomposition of the overall cost of 𝐜. SCL decoding proceeds recursively in the same manner as the GMC decoder, except, rather than being provided with a single LLR vector, the constituent decoders are provided with a list of one or more ordered pairs, where each pair comprises an LLR vector and an associated cost. The output of each constituent decoder is a list of triples, where each triple comprises a codeword, the index within the input list of the LLR vector from which that codeword was decoded, and the overall cost of that codeword. More precisely, a constituent SCL decoder for (r,m) with maximum output-list-size ∈ℤ^≥ 2 and input-list-size ≤ is a function _r,m^, : (^2^m×)^→ ((r,m) × [] ×)^, where = min(2^k(r,m), ). We will start by defining this function for the special case when m=0, i.e., for a leaf decoder. When m=0, the SCL decoder is a map _r,0^, : (×)^→ ((r,0) × [] ×)^, where = min(2^k(r,0), ). Thus the input to a leaf decoder is a list of pairs ((λ_0,ψ_0), …, (λ_ -1,ψ_-1)) ∈ (×)^, where λ_i is an LLR value and ψ_i is the associated cost. Two cases must be considered: * If r ≥ 0, then (r,0) = {0_1, 1_1 }. 
In this case, for each λ_i, i ∈ [], there are two possible codewords: 0 with overall cost ψ_i + (0, λ_i) = ψ_i + (λ_i) leading to the tentative output triple (0,i,ψ_i + (λ_i)), and 1 with overall cost ψ_i + (1, λ_i) = ψ_i + (-λ_i) leading to the tentative output triple (1,i,ψ_i + (-λ_i)). In total there will be 2 tentative output triples. If needed, this list of tentative triples is truncated to length , with the decoder returning only those candidates with the least overall cost. * If r < 0, then (r,0) = {0_1 }. In this case 0 is the only possible candidate codeword for each λ_i, i∈ [], leading to output triple (0,i,ψ_i + (λ_i)). All of the resulting triples are returned by the decoder. When m>0, suppose the input to the SCL decoder is the -tuple τ = (((λ'_i |λ”_i), s_i)_i ∈ []), with λ'_i and λ”_i∈^2^m-1 and s_i ∈. We will then have _r,m^, (τ) ≜(((𝐮_i |𝐮_i ⊕𝐯_q_i), p_q_i, s'_i)_i ∈ []) where 𝐯_j's and the p_j's in (<ref>) are obtained from ((𝐯_j, p_j, s_j”)_j∈ [ℓ']) = _r-1, m-1^, ( (λ'_i ⊞λ”_i,s_i)_i ∈ []), where ℓ' = min(2^k(r-1, m-1), ). Moreover, the 𝐮_j's, the q_j's, and the s'_j's in (<ref>) are obtained from ((𝐮_j, q_j, s'_j)_j∈ [ℓ”]) = _r, m-1^ℓ', ( (λ'_p_j + (-1)^𝐯_jλ”_p_j, s_j”)_j ∈ [ℓ']), where ℓ” = min(2^k(r, m-1)ℓ', ) = min(2^k(r,m), ). The overall SCL decoder for (r,m) is obtained by setting = 1 and by choosing any arbitrary real number as the initial cost of its input LLR vector. The overall decoder returns the codeword having the smallest analog weight with respect to the input LLR vector from the list of candidates returned by the decoder at the root of the decoding tree. §.§ AE Decoding of RM Codes As described in Sec. <ref> and as illustrated in Fig. <ref>, an AE decoder operates by permuting the received LLR vector λ by the elements of a set ℰ = {π_1, …, π_ℓ}, called an automorphism ensemble, of code automorphisms. Each of the ℓ permuted LLR vectors is then decoded (perhaps in parallel) using a simple sub-optimal elementary decoder such as a GMC decoder, and the corresponding inverse permutation is applied to each decoding result. The most likely codeword is then selected from the ℓ generated candidates. For any choice of atom set 𝒜 and any automorphism π∈r, m, let rmπ, 𝒜 : ^2^m→(r,m) denote the π-permuted GMC decoder, defined as rmπ, 𝒜≜π^-1∘rm^𝒜∘ π, where ∘ denotes function composition. Thus rmπ,𝒜(λ) = π^-1rm^𝒜(πλ). Of course rm𝕀, 𝒜 = rm^𝒜, i.e., the 𝕀-permuted GMC decoder is just the GMC decoder itself. Depending on the choice of π, because of the suboptimality of a GMC decoder, it may happen that rm^𝒜≠rmπ, 𝒜. This is useful, because if rm^𝒜 makes an error, there is a possibility that rmπ, 𝒜 can nevertheless decode correctly, and vice versa. Automorphism ensemble decoding of RM codes <cit.> exploits (<ref>) to decode a received LLR vector using an ensemble ℰ of automorphisms with the property, that π_1, π_2 ∈ℰ, π_1 ≠π_2 implies rmπ_1, 𝒜≠rmπ_2, 𝒜. Indeed, if rmπ_i, 𝒜, π_i ∈ℰ, produces decoding decision 𝐜_i, then the overall AE decoding decision is given, as illustrated in Fig. <ref>, by 𝐜 = min_𝐜_1,…,𝐜_ℓ w(𝐜_i,λ), where the analog-weight function w was defined in (<ref>). § CONSTITUENT AUTOMORPHISM DECODING Like all successive-cancellation decoding schemes, GMC decoders are prone to error propagation: once a leaf decoder makes an error, the error propagates to all succeeding decoders encountered in the decoding tree traversal. 
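(As an aside before developing CA decoding: the AE rule of the preceding subsection is only a thin wrapper around the elementary decoder. The sketch below is ours; it reuses gmc_decode and analog_weight from the earlier sketches and takes the ensemble as explicit index permutations, each assumed to be an automorphism of the code.

def apply_perm(pi, x):
    # group action used above: (pi x)[i] = x[pi[i]]
    return [x[pi[i]] for i in range(len(pi))]

def invert_perm(pi):
    inv = [0] * len(pi)
    for i, p in enumerate(pi):
        inv[p] = i
    return inv

def ae_decode(r, m, llr, ensemble):
    # ensemble: list of length-2^m permutations assumed to lie in Aut(RM(r, m));
    # the |E| branches could run in parallel in hardware.
    best, best_w = None, float("inf")
    for pi in ensemble:
        c = gmc_decode(r, m, apply_perm(pi, llr))   # decode the permuted LLRs
        c = apply_perm(invert_perm(pi), c)          # undo the permutation
        w = analog_weight(c, llr)
        if w < best_w:
            best, best_w = c, w
    return best)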
Because of channel polarization <cit.>, different constituent leaf decoders of an RM code observe different synthetic channels and, as a result, they have different first-error probability, the probability that they produce an error while all preceding leaf decoders have decoded correctly. For example, Table <ref> gives the binary addresses of the leaf constituent codes of 49^𝒜^∗ having the largest first-error probability at the output of a BI-AWGN channel with an SNR of 4 dB. It is clear that most decoding errors are triggered by the leaf decoder with address 111. Because of this channel polarization effect, it is sensible to devote more decoding resources to those constituent codes having relatively degraded channels, as these trigger the most decoding errors. However, a conventional AE decoder treats all constituent codes evenly. In particular, via an AE decoder with some automorphism ensemble ℰ, each constituent code of a composite RM code is decoded |ℰ| times. To remedy this issue, we introduce constituent automorphism decoding. The key idea is to apply AE decoding locally to the codes that are composite with respect to a given atom set 𝒜. With a random selection of ℰ from r, m chosen to satisfy (<ref>), we did not see any difference in BLER at the output of a BI-AWGN between various selected automorphism ensembles for any of the (r,m) codes that we tested. Thus we speculate that the performance of an AE decoder depends only on the ensemble size. As a result, we will only specify the AE size, without describing the actual automorphisms used. By default, all decoders in the decoding tree will use an AE size of one (conveniently implemented with an automorphism ensemble containing just the identity permutation 𝕀). However, certain selected composite (non-leaf) decoders will use AE decoding locally with a larger AE size. To specify the local decoders having an AE size greater than unity, we define an automorphism distribution 𝒮 as a set of (𝐚,s) pairs, where 𝐚 is the binary address of a decoder in rm^𝒜 and s is the corresponding AE size. For example, 𝒮 = { (1,2), (11,3) } denotes a CA decoder that applies AE decoding with AE size 2 at the decoding tree vertex with address 1, and AE decoding with AE size 3 at the vertex with address 11. In this notation, a conventional AE decoder with AE size s has automorphism distribution 𝒮 = { (∅,s) }. We denote the CA decoder for (r,m) with automorphism distribution 𝒮 and atom set 𝒜 as _r, m^𝒮, 𝒜: ^2^m→(r, m). In the rest of this paper, all CA decoders use the atom set 𝒜^∗. The leaf decoders associated with 𝒜^∗ are permutation invariant ML decoders, so applying AE to them cannot improve performance. Hence the AE size of leaf nodes in rm^𝒜^∗ is fixed to unity. As we will show in Sec. <ref>, most of the benefit of CA decoding arises when the addresses 𝐚 are chosen from the set {∅, 1, 11, 111, …} which are called rightmost nodes, as these correspond to the synthetic channels that polarize to relatively “bad” channels. A slightly better heuristic for choosing the vertices at which to apply AE decoding is given in Sec. <ref>. § COMPLEXITY ANALYSIS §.§ Basic Operations It is difficult to give a precise analysis of decoding complexity, as it strongly depends on the available hardware and its architecture. 
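(To fix ideas before the complexity analysis, here is a control-flow sketch of the CA decoder just defined. It is our own simplification: leaves are length-1 codes rather than the 𝒜^∗ leaf decoders used in the paper, candidate selection at each node is by analog weight with respect to that node's LLR vector (one natural choice; the paper does not pin down these implementation details), and the permutations supplied for an address are assumed to be automorphisms of the constituent code at that node. It reuses apply_perm, invert_perm, soft_xor and analog_weight from the sketches above.

def ca_decode(r, m, llr, ensembles, addr=""):
    # ensembles: dict mapping a node address (binary string, root = "") to a
    # list of coordinate permutations of the constituent code at that node.
    local = ensembles.get(addr, [list(range(2 ** m))])   # default: identity only
    best, best_w = None, float("inf")
    for pi in local:
        lam = apply_perm(pi, llr)
        if m == 0:
            c = [0] if r < 0 else [1 if lam[0] < 0 else 0]
        else:
            half = 2 ** (m - 1)
            lp, lpp = lam[:half], lam[half:]
            v = ca_decode(r - 1, m - 1,
                          [soft_xor(a, b) for a, b in zip(lp, lpp)],
                          ensembles, addr + "1")          # right child
            u = ca_decode(r, m - 1,
                          [a + (1 - 2 * vi) * b for a, vi, b in zip(lp, v, lpp)],
                          ensembles, addr + "0")          # left child
            c = u + [ui ^ vi for ui, vi in zip(u, v)]
        c = apply_perm(invert_perm(pi), c)
        w = analog_weight(c, llr)
        if w < best_w:
            best, best_w = c, w
    return best

# The automorphism distribution S = {(1, 2), (11, 3)} from the text would be
# passed as ensembles = {"1": [two automorphisms], "11": [three automorphisms]}.)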
Nevertheless, complexity (measured by the area-time product of an integrated circuit implementation, or in execution time of a software implementation) will scale approximately linearly with the number of unary and binary operations executed by the decoding algorithm. Accordingly, we will measure complexity by counting such operations (in the worst case) for each of the decoders that we consider. In practice, LLR values are often represented using a fixed point (integer) representation with a fixed word size. We will assume that vectors of length n, whether integer- or 𝔽_2-valued, are stored as sequences of n words. All decoders that we consider are implemented as sequences of the following basic unary and binary operations (operating on one or two words): addition (+), comparison (<, >, ≤, ≥), minimum and maximum (min, max), soft XOR (⊞), negative log-sigmoid (), absolute value (|·|), negation (-), binary addition (⊕), and, finally, copying a word or its negation. We assume the soft XOR and the negative log-sigmoid operations are calculated by look-up tables or hardware-friendly approximations. In a pipelined processor architecture, the fetching of operands and the storing of results take place in parallel with instruction decoding and execution, so we do not assign additional complexity for such operations. Assuming the use of pipelining, all of these basic operations can be executed in a single clock cycle on average. Accordingly, we weight these operations evenly, taking their total count as a measure of decoder complexity. Let D_r,m denote a decoder for (r,m) and let _D(r,m) denote its complexity. Except at leaf decoders, D_r,m is defined recursively in terms of D_r-1,m-1 and D_r,m-1, the decoders for its Plotkin constituents. Depending on the nature of the decoder, these constituent decoders may be invoked more than once. It follows that _D(r,m) will generally decompose (recursively) into three terms: 1) a term that accounts for one or more applications of D_r-1,m-1, 2) a term that accounts for one or more applications of D_r,m-1, and 3) a term that accounts for the preparation of LLRs and the final decision. Leaf decoders are not defined recursively, and thus each such decoder will require its own separate complexity analysis. §.§ Complexity of CA, GMC, and AE decoders For all nontrivial RM codes, assuming use of the atom set 𝒜^∗ defined in (<ref>), the only required leaf decoders are those for the (m-1,m) SPC codes and (1,m) first-order RM codes. §.§.§ (m-1,m) The worst-case complexity of applying the Wagner decoding rule to the (m-1,m) SPC code can be separated into: * converting LLRs to hard-decisions, which requires 2^m comparisons; * determining the overall parity, which requires 2^m-1 binary field additions; * checking whether the overall parity is nonzero, which requires 1 comparison, * flipping the bit with the least absolute LLR value (if the parity bit is nonzero), which requires 2^m absolute value computations, 2^m-1 comparisons, and 1 binary field addition. Adding these values gives (m-1,m) = 2^m+2. 
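(The Wagner rule whose operations were just counted fits in a few lines; a sketch of ours:

def wagner_spc_decode(llr):
    # ML decoding of the single-parity-check code RM(m-1, m), block length 2^m.
    c = [1 if l < 0 else 0 for l in llr]                 # hard decisions
    if sum(c) % 2 == 1:                                  # overall parity violated?
        i = min(range(len(llr)), key=lambda j: abs(llr[j]))
        c[i] ^= 1                                        # flip least-reliable bit
    return c

The four block-length passes visible here — hard decisions, parity accumulation, the reliability scan, and the conditional flip — account for the roughly 4·2^m = 2^(m+2) basic operations tallied above.)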
§.§.§ (1,m) The worst-case complexity of decoding (1,m), the length-2^m first-order RM code, using the Green Machine decoder can be separated into: * applying the fast Hadamard transform to the given LLR vector, which requires m · 2^m additions; * finding the index with the largest absolute value, which requires 2^m absolute value computations, and 2^m-1 comparisons; * extracting the sign of the corresponding LLR, which requires 1 comparison, * converting the sign and index to a codeword, which requires 2^m + m operations according to Algorithm <ref>. Adding these values gives (1,m) = (m+3)2^m +m . Let _(r, m, 𝒮) denote the complexity of _r, m^𝒮, 𝒜^∗, the CA decoder for a composite (r,m) under an automorphism distribution 𝒮 and with atom set 𝒜^∗. Let ℓ denote the number of automorphisms applied at the root of the decoding tree. Note that (∅, ℓ) ∈𝒮 if and only if ℓ > 1. A CA decoder for (r,m) using 𝒮 invokes CA decoders for the Plotkin constituents of the code, i.e., (r-1, m-1) and (r, m-1), using, respectively, the automorphism distributions 𝒮_1 and 𝒮_0, defined as 𝒮_1 ≜{(𝐚, s_𝐚): ((1 |𝐚), s_𝐚) ∈𝒮} and 𝒮_0 ≜{(𝐚, s_𝐚): ((0 |𝐚), s_𝐚) ∈𝒮}. The number of operations contributing to _(r, m, 𝒮) decomposes as follows: * preparing LLRs for _r-1,m-1^𝒮_1, 𝒜^∗ in accordance with (<ref>), which requires ℓ 2^m-1 soft XOR operations; * invoking ℓ instances of _r-1, m-1^𝒮_1, 𝒜^∗, which requires ℓ_(r-1, m-1, 𝒮_1) operations; * preparing LLRs for _r, m-1^𝒮_0, 𝒜^∗ in accordance with (<ref>), which requires ℓ 2^m-1 comparisons and ℓ 2^m-1 additions; * invoking ℓ instances of _r, m-1^𝒮_0, 𝒜^∗, which requires ℓ_(r, m-1, 𝒮_0) operations; * combining Plotkin constituents according to (<ref>), which requires ℓ 2^m-1 binary field additions; * choosing the codeword with minimum analog weight among ℓ candidates, which if ℓ = 1 requires no operations and if ℓ > 1 requires ℓ 2^m comparisons and ℓ (2^m - 1) additions for computing analog weight, and another ℓ - 1 comparisons for choosing the final codeword. As a result, _(r, m, 𝒮) = ℓ_(r-1, m-1, 𝒮_1) + ℓ_(r, m-1, 𝒮_0) + ℓ 2^m+1 + 1_ℓ > 1· (ℓ 2^m+1 - 1), with boundary conditions _(m-1, m, {}) = (m-1, m) and _(1,m, {}) = (1,m), defined in (<ref>) and (<ref>), respectively. GMC and AE decoders are special cases of a CA decoder and so their complexity may be expressed in terms of _. Since all nodes in the decoding tree of a GMC decoder use only the identity automorphism, the automorphism distribution for such a decoder is 𝒮 = {}; thus GMC(r,m) = _(r, m, {}). An AE decoder with an automorphism ensemble of size ℓ has automorphism distribution 𝒮={(∅, ℓ)}; thus, AE(r, m, ℓ) = _(r, m, {(∅, ℓ)}). §.§ Complexity of SCL Decoders We assume that (r,m) has a non-negative order and that the atom set 𝒜' is used. Similar to CA decoders, we first discuss the complexity of list decoding of codes in 𝒜'. We then derive the complexity of the SCL decoder for a composite code (r,m). §.§.§ (r,m) ∈𝒜' For list-truncation purposes, some of the leaf SCL decoders must find the least numbers among an unsorted list of n numbers, where n ∈ℤ. When < n, there are a number of algorithms for this task, each having its own complexity. We denote by sel(,n) the number of basic operations required for this task in the worst case. Clearly sel(,n) = 0 when ≥ n. 
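(Before turning to the SCL counts, note that the recursion above is easy to evaluate programmatically. The following sketch, ours, computes the worst-case operation counts for CA decoding with the atom set 𝒜^∗, representing an automorphism distribution as a dict from address strings to AE sizes; GMC and AE decoding are recovered as the special cases 𝒮 = {} and 𝒮 = {(∅, ℓ)}.

def c_spc(m):                      # Wagner decoding of RM(m-1, m)
    return 2 ** (m + 2)

def c_for(m):                      # Green Machine decoding of RM(1, m)
    return (m + 3) * 2 ** m + m

def c_ca(r, m, S, addr=""):
    # Worst-case basic-operation count of CA decoding of RM(r, m), 1 <= r <= m-1,
    # with automorphism distribution S (dict: address string -> AE size).
    if r == 1:
        return c_for(m)            # leaf of the A* decoding tree
    if r == m - 1:
        return c_spc(m)
    ell = S.get(addr, 1)
    cost = (ell * c_ca(r - 1, m - 1, S, addr + "1")
            + ell * c_ca(r, m - 1, S, addr + "0")
            + ell * 2 ** (m + 1))           # LLR preparation and combining
    if ell > 1:                             # selection among the ell candidates
        cost += ell * 2 ** (m + 1) - 1
    return cost

def c_gmc(r, m):
    return c_ca(r, m, {})

def c_ae(r, m, ell):
    return c_ca(r, m, {"": ell})   # all automorphisms applied at the root)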
In this paper, we use (,n) selection networks, which combine a fixed number of comparator and minimum selector (minselector) elements to achieve a deterministic number of basic operations for selection of the smallest entries from an input list of size n > <cit.>, <cit.>. A comparator is a two-input two-output computational unit, that transforms a pair (x,y) of input scores to (min(x,y),max(x,y)), requiring two basic operations. Similarly, a minselector is a two-input one-output unit, transforming a pair (x,y) to min(x,y), requiring one basic operation. Since n- inputs are dropped by an (,n) selection network, the number of minselectors in such a network will always be n-. Thus, an (,n) selection network with a total of T computational units (comparators and minselectors) requires sel(,n) = 2T - (n-) basic operations. An (,n) selection network is said to be optimal if the total number T of computational units that it contains is as small as possible; this minimum possible number of computational units is denoted as U_(n). The exact value of U_(n) is known only for small values of and n; however, it is known that <cit.> U_(n) ≥ (n-) ⌈log_2 (+1) ⌉. Fig. <ref> shows conjectured-optimal (4,8) and optimal (6,8) selection networks. Table <ref> shows the value for sel(,n) for different values of and n encountered in SCL decoding of RM codes in Sec. <ref>. The values for sel(4,8) and sel(6,8) correspond to the selection networks of Fig. <ref>. In the (6,12) case, we have assumed that the lower bound in (<ref>) is achieved with equality; however, in practice this bound may not be achievable, and thus sel(6,12) may be larger than the value given in the table. Let _(r, m, , ) denote the decoding complexity of _r,m^,. We start by considering _(r, 0, , ), i.e., the decoding complexity of the leaf decoders _r,0^,. For an input-list size ∈ℤ^≥ 1 and a maximum output-list size ∈ℤ^≥ 2, the complexity of applying _r, 0^, to decode LLR values λ_0, …, λ_ - 1 to a codeword (of length one) in (r,0) can be separated into the following steps. * Computing the overall cost of each tentative candidate codeword along with the value of that codeword. For the zero codeword this requires one call to , one addition, and one bit-copy operation, for a total cost of 3 basic operations. When r ≥ 0, the nonzero codeword requires these operations and an extra negation for an additional cost of 4 basic operations. In total, this results in (3+4 ·1_r ≥ 0) operations. * Finding the (at most) codewords having least cost from a list of 2^k(r,0) codewords, which requires sel(,2^k(r,0)) basic operations. Adding the number of basic operations for these steps gives _(r, 0, , ) = (3+4 ·1_r ≥ 0) + sel(,2^k(r,0)). §.§.§ (r,m) ∉𝒜' For an input-list size and a maximum output-list size , the complexity of applying _r,m^, to decode (r,m) ∉𝒜' decomposes as follows: * preparing LLRs for _r-1, m-1^, in accordance with (<ref>), which requires 2^m-1 soft XOR operations; * invoking _r-1, m-1^,, which requires _(r-1, m-1, , ) operations; * preparing LLRs in accordance with (<ref>), which requires ℓ' 2^m-1 comparisons and ℓ' 2^m-1 additions where ℓ' is given in (<ref>); * invoking _r, m-1^ℓ',, which requires _(r, m-1, ℓ', ) operations; * combining Plotkin constituents, which requires ℓ” 2^m-1 binary field additions where ℓ” is as (<ref>). As a result, _(r, m, , ) = _(r-1, m-1, , ) + _(r, m-1, ℓ', ) + 2^m-1( + 2ℓ' + ℓ”), with boundary conditions defined by leaf decoders according to (<ref>). 
The complexity of the overall SCL decoder with a maximum output-list size of is then _(r, m, 1, ). § NUMERICAL RESULTS In this section, using numerical simulations we provide performance versus complexity and BLER versus SNR trade-offs for the schemes described in this paper. We assume transmission over a BI-AWGN channel. Performance is measured by the gap to the constrained Shannon limit at a BLER of 10^-3. Complexity is measured as the worst-case number of operations, computed according to Sec. <ref>, normalized by the code dimension. §.§ Selection of automorphism distribution There are numerous possible ways to choose an automorphism distribution. It is an open problem to determine the automorphism distribution that leads to the best performance at a given decoding complexity. Guided by the following numerical results, we provide a heuristic method for finding “good” CA decoders. We first compare the performance-complexity tradeoffs of CA decoders for (4,9) for a large number of different choices of automorphism distribution. According to Table <ref>, most decoding errors are caused by rightmost leaf decoders. We have tested addresses in automorphism distributions involving the first 5 nodes in the reverse pre-order decoding tree traversal (namely, nodes with address ∅, 1, 11, 110, and 1100), with AE size for each address ranging from 1 to 4. The results are given in Fig. <ref>. The lower hull in Fig. <ref> is the Pareto frontier; it connects the Pareto-efficient points. A point is Pareto-efficient with respect to a given set of points if no other point in the set can achieve better performance with the same or lower complexity. The parameters of the Pareto-efficient points in this test are given in Table <ref>. We categorize the tested automorphism distributions in Fig. <ref> into four classes: (a) no rightmost nodes, (b) only root node, (c) only rightmost nodes (excluding only root node), (d) others. It is clear that CA decoders in class (a) have almost the same decoding performance as the GMC decoder, which implies that including the rightmost nodes is a necessary condition to make an improvement. The Pareto frontier has roughly 0.2 dB gain from decoders in (b), which shows the advantage of CA decoding compared to AE decoding. CA decoders with automorphism distributions satisfying (c) are called rightmost CA decoders. According to Fig. <ref> and Table <ref>, the Pareto-efficient points involve automorphism distributions where only rightmost or “nearly rightmost” vertices in the decoding tree receives an AE size greater than one. These observations suggest the following heuristic method for choosing an automorphism distribution where only rightmost composite decoders (i.e., the root or any non-leaf decoder with an all-ones address) receive an AE size greater than unity. Let m(i) denote the AE size for the right-most composite decoder at depth i in rm^𝒜^*. The heuristic would then choose: * m(i) ≤ 7 for all i, i.e., the number of automorphisms applied at any local decoder is restricted; and * if m(i) ≥ 2 then m(j) ≥ 2 for all j≥ i, i.e., AE decoding with size greater than unity should be applied at a node only when all of its right descendants also have AE size greater than unity. Fig. <ref> shows the performance of all decoders for (4,9), where AE decoding is applied at the right-most composite decoders. Included are those decoders whose complexity does not exceed that of AE-4, i.e., the CA decoder with 𝒮 = {(∅,4) }. 
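(The heuristic just described is easy to enumerate exhaustively. The sketch below, ours, reuses c_ca from the complexity sketch above and lists, for a nontrivial RM(r,m), every rightmost automorphism distribution that satisfies the two conditions and whose worst-case operation count does not exceed that of AE-4; for (4,9) this should enumerate the heuristically admissible subset of the decoders considered in the figure.

from itertools import product

def rightmost_composite_depth(r, m):
    # number of composite nodes on the rightmost path of the A* decoding tree
    # of a nontrivial RM(r, m)
    d = 0
    while 1 < r - d < (m - d) - 1:
        d += 1
    return d

def heuristic_distributions(r, m, max_ae=7):
    depth = rightmost_composite_depth(r, m)
    for sizes in product(range(1, max_ae + 1), repeat=depth):
        # once m(i) >= 2, every deeper rightmost composite node needs m(j) >= 2
        seen, ok = False, True
        for s in sizes:
            if s >= 2:
                seen = True
            elif seen:
                ok = False
                break
        if ok:
            yield {"1" * i: s for i, s in enumerate(sizes) if s >= 2}

if __name__ == "__main__":
    r, m = 4, 9
    budget = c_ca(r, m, {"": 4})                 # complexity of AE-4
    for S in heuristic_distributions(r, m):
        if c_ca(r, m, S) <= budget:
            print(S, c_ca(r, m, S)))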
It can be seen that almost all Pareto-efficient points are selected by the given heuristic and that all heuristically chosen decoders are close to the Pareto frontier (within 0.07 dB). The decoder parameters for the Pareto-efficient points in Fig. <ref> are given in Table <ref>. In the following comparisons, the CA decoders were all selected using the heuristic described above. §.§ Comparison of Decoders Fig. <ref> compares the performance achieved by different decoders for three different RM codes of rate 1/2. Table <ref> lists the complexity of each tested decoder. The AE, CA, and SCL decoders that we have tested are approximately at the same complexity level. It is clear that the CA decoding algorithm outperforms the other decoders at the same complexity level. Comparing the performance of CA and AE decoding, we find that the improvement varies across codes: the improvement for (3,7) is merely 0.1 dB, whereas there is a gain of 0.4 dB for (5,11). Fig. <ref> shows a comparison of CA decoders and AE decoders for different RM codes with the same block length (same m). The Pareto-efficient CA decoders with decoding complexity not exceeding that of AE-4 are depicted with solid marks, while the AE decoders are drawn with hollow marks. We observe that the gap between the AE decoder and the CA decoder is larger for higher order r; in other words, the gain achieved when applying CA decoding compared to AE decoding is more significant for higher-order RM codes. This trend is expected, since higher-order RM codes have more rightmost nodes, which means there are more nodes observing “bad” channel polarization, and therefore such codes benefit more from an uneven distribution of decoding resources. § CONCLUSIONS This paper has introduced the constituent automorphism (CA) decoding algorithm for RM codes, which applies AE decoding at the (mainly rightmost) constituent decoders according to a specified automorphism distribution. CA decoders can achieve better performance versus complexity trade-offs than other state-of-the-art decoding algorithms, and the benefits of CA decoding appear to increase as the order of the RM code increases. While the problem of selecting an automorphism distribution achieving the best performance at a fixed complexity remains open, we have provided a simple heuristic by which a “good” automorphism distribution may be selected. In the future, it would be interesting to study whether CA decoding can be adapted to a suitable class of polar codes. As in <cit.>, the chosen polar-code class would need to overcome the property that polar codes do not generally have a large automorphism group <cit.>.
http://arxiv.org/abs/2409.03594v1
20240905145254
A Complete Landscape of EFX Allocations of Mixed Manna on Graphs
[ "Yu Zhou", "Tianze Wei", "Minming Li", "Bo Li" ]
cs.GT
[ "cs.GT" ]
[ [ September 9, 2024 ===================== § ABSTRACT We study envy-free up to any item (EFX) allocations on graphs where vertices and edges represent agents and items respectively. An agent is only interested in items that are incident to her and all other items have zero marginal values to her. Christodoulou et al. [EC, 2023] first proposed this setting and studied the case of goods. We extend this setting to the case of mixed manna where an item may be liked or disliked by its endpoint agents. In our problem, an agent has an arbitrary valuation over her incident items such that the items she likes have non-negative marginal values to her and those she dislikes have non-positive marginal values. We provide a complete study of the four notions of EFX for mixed manna in the literature, which differ by whether the removed item can have zero marginal value. We prove that an allocation that satisfies the notion of EFX where the virtually-removed item could always have zero marginal value may not exist and determining its existence is NP-complete, while one that satisfies any of the other three notions always exists and can be computed in polynomial time. We also prove that an orientation (i.e., a special allocation where each edge must be allocated to one of its endpoint agents) that satisfies any of the four notions may not exist, and determining its existence is NP-complete. § INTRODUCTION Fair allocation of indivisible items has been broadly studied in the research fields of computer science, economics, and mathematics in the past few decades. One of the most compelling and natural fairness notions is envy-freeness (EF), which requires that every agent prefers her own bundle to any other agent's bundle. Though envy-freeness can always be satisfied for divisible items <cit.>, it is too demanding for indivisible items. An EF allocation does not exist even for the simple instance where there are two agents and one indivisible item with non-zero marginal values to both agents. The fact that envy-freeness is hard to satisfy necessitates the study of its relaxations, the most popular one among which is envy-freeness up to any item (EFX). EFX requires that any envy could be eliminated by virtually removing any item that the envious agent likes from the envied agent's bundle or any item that the envious agent dislikes from her own bundle. As remarked by <cit.>: “Arguably, EFX is the best fairness analog of envy-freeness for indivisible items.” Despite significant effort in the literature, the existence of EFX allocations still remains an open problem for indivisible items. Only a few special cases are known to admit EFX allocations. For the case of goods where every item is liked by every agent, <cit.> showed that EFX allocations exist when there are only two agents. <cit.> and <cit.> complemented this result by showing the existence of EFX allocations when there are only three agents, and when the number of goods is at most three greater than the number of agents, respectively. <cit.> also showed that EFX allocations exist when the valuations are identical or additive with identical ordering. <cit.> extended this result by showing the existence of EFX allocations when the valuations are bi-valued. 
Other restricted valuations for which EFX allocations always exist include submodular dichotomous valuations where the marginal value of any item is either 0 or 1 <cit.>, and lexicographic valuations where the value of a single item is greater than the total value of all items that are less preferred <cit.>. For the case of chores where every item is disliked by every agent, even less is known. For example, <cit.> and <cit.> showed that EFX allocations exist for ordered instances and leveled preference instances, respectively. Recently, <cit.> studied EFX allocations on graphs where vertices correspond to agents and edges correspond to indivisible goods. An agent (vertex) is only interested in the goods (edges) that are incident to her and all other edges have zero marginal values to her. Thus, each good is liked by exactly two agents in their setting. As motivated in <cit.>, a direct application of this setting is the allocation of geographical resources, for instance, natural resources among countries on the boundaries, working offices among research groups, and public areas among communities in a region, etc. <cit.> proved that EFX allocations always exist and can be computed in polynomial time for arbitrary graphs. Remarkably, this is one more rare case with more than three agents for which an EFX allocation is guaranteed to exist. They also considered a more restricted scenario where each edge must be allocated to one of its endpoint agents. In this scenario, an allocation is also called an orientation. Unfortunately, <cit.> proved that an EFX orientation may not exist, and determining whether it exists or not is NP-complete. Besides goods, recent years have seen a rapidly growing interest in the case of mixed manna in the literature of fair division. A mixed manna contains items that are goods for some agents but chores for others. Practically, the setting of mixed manna can model the scenarios where agents have different opinions on items. Many real-world scenarios involve the allocation of mixed manna. For example, when the items are paid jobs, they are goods for some people because completing them can bring extra revenue; however, they can be chores for some people who do not care much about this amount of money and would like to save time for other matters. The case of mixed manna is also a typical setting where the valuations are not monotone. <cit.>'s graphic nature also appears in the setting of mixed manna. For example, in sports games, each match (that can be viewed as an item) involves two teams (that can be viewed as the agents) and has to be hosted by one of them (i.e., home or away). Hosting a match might be a good for some teams as they can make profit and might be a chore as they cannot cover the expenses. Furthermore, the graph orientation setting can use the topology to indicate who are capable of completing what jobs (edges), so that the jobs can only be allocated to people (incident vertices) who are able to do them. The allocation setting can model the case when people really do not have any cost or benefit on the items they are not incident to. §.§ Our Problem and Results In this work, we extend the model of <cit.> to the case of mixed manna, where an edge may be liked or disliked by its endpoint agents. 
We consider the four variants of EFX for mixed manna in the literature, i.e., _0^0, ^0_-, ^+_0, and ^+_-, where the super and sub scripts indicate the items removed from the envied agent's and the envious agent's bundles respectively, +/- means an item with a strictly positive or negative margin and 0 means an item with a possibly zero margin. Similar as <cit.>, we first study the setting where each edge must be allocated to one of its endpoint agents, i.e., orientations. The main results are summarized in the second column of Table <ref>. Specifically, we show that an orientation that satisfies any of the four EFX notions may not exist, and determining its existence is NP-complete. Due to the hardness results for orientations, we also study some simple graphs such as trees, stars and paths. For these simple graphs, ^+_0 or ^+_- orientations always exist and can be computed in polynomial time. Although ^0_0 or ^0_- orientations may not exist, determining their existence is in polynomial time. We then study the setting where the edges can be allocated to any agent. The main results are summarized in the third column of Table <ref>. Specifically, we show that an ^0_0 allocation may not exist and determining its existence is NP-complete. In contrast, an allocation that satisfies any of the other three notions always exists and can be computed in polynomial time. §.§ More Related Works Mixed Manna. Since initiated by <cit.>, there has been a rapidly growing interest in fair allocation of mixed manna in recent years. <cit.> proposed a polynomial-time algorithm named double round-robin that computes an envy-free up to one item (EF1) allocation for any instance with additive valuations. <cit.> extended this result by computing EF1 allocations for instances with general valuations. Their algorithm is based on the envy-cycle procedures and also runs in polynomial time. <cit.> gave a polynomial-time algorithm that computes an EFX and Pareto-Optimal (PO) allocation for instances with identical and additive valuations. In their definition of EFX, the removed item must not have a value of zero, which as we will see, is the weakest one among the notions of EFX that we study in this paper. Other fairness notions studied for mixed manna include proportionality up to one item (PROP1) <cit.> and maximin share (MMS) <cit.>. We refer the readers to the survey by <cit.> for more details on this topic. Fair Allocation on Graphs. There are many other works that study fair allocation of indivisible items on graphs, whose settings whereas, are quite different from ours and <cit.>'s. <cit.> formalized the problem that there is an underlying graph whose vertices are indivisible items and each agent must receive a connected component of the graph. They considered several fairness notions such as proportionality, envy-freeness, maximin share, and gave hardness results for general graphs and polynomial-time algorithms for special graphs. Many following works investigated the same problem with different fairness notions or graph structures <cit.>. <cit.> considered the same model and quantified the loss of fairness when imposing the connectivity constraint, i.e., price of connectivity. <cit.> studied a similar model where each agent must receive a compact bundle of items that are “closely related”. Different from this line of works, <cit.> used a graph to reflect conflicts between items. Each vertex on the graph is an item and each edge means that its two endpoint items have a conflict. 
They require that two items that have a conflict cannot be allocated to the same agent. In other words, the bundle allocated to each agent must be an independent set of the graph. <cit.> studied fair allocation on graph where vertices are agents (as in our setting). The graph was used to relax fairness notions such that fairness only need to be satisfied for the endpoint agents of the edges. § PRELIMINARIES For any positive integer k, let [k] = {1, …, k}. In an instance of our problem, there is a graph G = (N, M) where N = {a_1, …, a_n} is the vertex set and M is the edge set. Each vertex corresponds to an agent and each edge corresponds to an indivisible item. We use vertex and agent, edge and item, interchangeably. We also write both (a_i, a_j) and e_i, j to represent the edge between a_i and a_j. Each agent a_i ∈ N has a valuation v_i: 2^M→ℝ over the edges and v_i(∅) = 0. We also write v_i(e) to represent v_i({e}). For an agent a_i, each item e ∈ M is classified as a good (if it has strictly positive marginal values to a_i, i.e., v_i(S ∪{e}) > v_i(S) for any S ⊆ M∖{e}), a chore (if it has strictly negative marginal values to a_i, i.e., v_i(S ∪{e}) < v_i(S) for any S ⊆ M∖{e}), or a dummy (if it has zero marginal value to a_i, i.e., v_i(S ∪{e}) = v_i(S) for any S ⊆ M∖{e}). Accordingly, an instance is called a goods instance (if no item is a chore for any agent), a chores instance (if no item is a good for any agent), or a mixed instance (if an item may be a good, a chore, or a dummy for any agent). Let E_i be the set of all edges that are incident to a_i, E_i^≥ 0⊆ E_i be the subset of non-chores for a_i, and E_i^>0⊆ E_i be the subset of goods for a_i. Note that in our setting, all edges that are not incident to a_i (i.e., M ∖ E_i) are dummies for a_i. An allocation 𝐗 = (X_1, …, X_n) is an n-partition of M such that X_i contains the edges allocated to agent a_i ∈ N, where X_i ∩ X_j = ∅ for any a_i, a_j ∈ N and ⋃_a_i ∈ NX_i = M. An orientation is a restricted allocation where each edge must be allocated to one of its endpoint agents. An allocation 𝐗 is partial if ⋃_a_i ∈ NX_i ⊊ M. §.§ Fairness Notions Given an allocation 𝐗, we say agent a_i envies agent a_j if v_i(X_j) > v_i(X_i). The allocation is envy-free (EF) if no agent envies the others, i.e., for every two agents a_i, a_j ∈ N, v_i(X_i) ≥ v_i(X_j). As we have seen, envy-freeness is too demanding for indivisible items. Thus, in this paper, we focus on its relaxation envy-free up to any item (EFX). For the case of mixed manna, there are four variants of EFX in the literature <cit.>, namely, _0^0, ^0_-, ^+_0, and ^+_-. _0^0 requires that any envy could be eliminated by removing any item that is not a chore for the envious agent from the envied agent's bundle or any item that is not a good from the envious agent's own bundle. Formally, An allocation 𝐗 = (X_1, …, X_n) is ^0_0 if for every two agents a_i, a_j ∈ N such that a_i envies a_j, both of the following conditions hold: * for any e ∈ X_j such that v_i(X_j ∖{e}) ≤ v_i(X_j), v_i(X_i) ≥ v_i(X_j ∖{e}); * for any e ∈ X_i such that v_i(X_i ∖{e}) ≥ v_i(X_i), v_i(X_i ∖{e}) ≥ v_i(X_j). ^0_- differs from _0^0 in that the item removed from the envious agent's bundle cannot be a dummy. More concretely, the item e considered in the second condition is subject to v_i(X_i ∖{e}) > v_i(X_i). ^+_0 differs from _0^0 in that the item removed from the envied agent's bundle cannot be a dummy, i.e., the item e considered in the first condition is subject to v_i(X_j ∖{e}) < v_i(X_j). 
^+_- differs from _0^0 in that the item removed from the envied agent's and the envious agent's bundles cannot be a dummy. Formally, An allocation 𝐗 = (X_1, …, X_n) is ^0_- if for every two agents a_i, a_j ∈ N such that a_i envies a_j, both of the following conditions hold: * for any e ∈ X_j s.t. v_i(X_j ∖{e}) ≤ v_i(X_j), v_i(X_i) ≥ v_i(X_j ∖{e}); * for any e ∈ X_i s.t. v_i(X_i ∖{e}) > v_i(X_i), v_i(X_i ∖{e}) ≥ v_i(X_j). An allocation 𝐗 = (X_1, …, X_n) is ^+_0 if for every two agents a_i, a_j ∈ N such that a_i envies a_j, both of the following conditions hold: * for any e ∈ X_j s.t. v_i(X_j ∖{e}) < v_i(X_j), v_i(X_i) ≥ v_i(X_j ∖{e}); * for any e ∈ X_i s.t. v_i(X_i ∖{e}) ≥ v_i(X_i), v_i(X_i ∖{e}) ≥ v_i(X_j). An allocation 𝐗 = (X_1, …, X_n) is ^+_- if for every two agents a_i, a_j ∈ N such that a_i envies a_j, both of the following conditions hold: * for any e ∈ X_j s.t. v_i(X_j ∖{e}) < v_i(X_j), v_i(X_i) ≥ v_i(X_j ∖{e}); * for any e ∈ X_i s.t. v_i(X_i ∖{e}) > v_i(X_i), v_i(X_i ∖{e}) ≥ v_i(X_j). Obviously, any ^0_0 allocation is also ^0_- or ^+_0, and any ^0_- or ^+_0 allocation is also ^+_-. Goods and Chores Instances. Goods instances have been well studied in <cit.>, and we will see that our results provide alternative approaches. For chores instances, we provide a discussion in Appendix <ref>. In the subsequent sections, we shall focus on the general case of mixed manna. § EFX ORIENTATIONS In this section, we elaborate on EFX orientations. Firstly, we have the following proposition. There exist graphs for which no orientation satisfies any of the four notions of EFX. Consider a connected graph where there are more edges than vertices and each edge is a chore for both its endpoint agents. Since each edge must be allocated to one of its endpoint agents, there exists an agent who receives at least two chores in any orientation. Even after removing one chore from her own bundle, this agent still receives a negative value and envies the agents who are not her neighbors. Due to this negative result, we turn to studying the complexity of determining the existence of EFX orientations. The result by <cit.> (see Theorem 2 in their paper) directly implies that determining the existence of ^0_- orientations is NP-complete. In the graphs constructed in their reduction, each edge is a good for both its endpoint agents. For such graphs, any ^0_- orientation is also ^0_0. Therefore, we have the following corollary. Determining whether an ^0_0 or ^0_- orientation exists or not is NP-complete. §.§ ^+_- Orientations In the following, we prove the below theorem for ^+_-. Determining whether an ^+_- orientation exists or not is NP-complete, even for additive valuations[Valuation v_i is additive if v_i(S) = ∑_e ∈ S v_i(e) for any S ⊆ M.]. To prove Theorem <ref>, we reduce from (3, B2)-SAT problem to the ^+_- orientation problem. A (3, B2)-SAT instance contains a Boolean formula in conjunctive normal form consisting of n variables {x_i}_i ∈ [n] and m clauses {C_j}_j ∈ [m]. Each variable appears exactly twice as a positive literal and exactly twice as a negative literal in the formula, and each clause contains three distinct literals. Determining whether a (3, B2)-SAT instance is satisfiable or not is NP-complete <cit.>. Our reduction uses a gadget to ensure that a specific agent must receive a chore if the orientation is ^+_-. One such gadget is shown in Figure <ref>. In this example, agent a_i must receive (a_i, a_1^Δ) if the orientation is ^+_-. 
Otherwise, one of the other three agents must receive at least two chores and envy a_i even after removing one chore. Given a (3, B2)-SAT instance ({x_i}_i ∈ [n], {C_j}_j ∈ [m]), we construct a graph as follows: * For each variable x_i, create two vertices a^T_i, a^F_i and one edge (a^T_i, a^F_i) with a value of 2 to both a^T_i and a^F_i. * For each clause C_j, create one vertex a_j^C. Besides, if C_j contains a positive literal x_i, create one edge (a_j^C, a^T_i) with a value of 1 to both a_j^C and a^T_i. If C_j contains a negative literal x_i, create one edge (a_j^C, a^F_i) with a value of 1 to both a_j^C and a^F_i. * Create three vertices a_1^Δ, a_2^Δ, a_3^Δ and three edges (a_1^Δ, a_2^Δ), (a_2^Δ, a_3^Δ), (a_1^Δ, a_3^Δ). Besides, for each i ∈ [n], create two edges (a_i^T, a_1^Δ) and (a_i^F, a_1^Δ). For each j ∈ [m], create one edge (a_j^C, a_1^Δ). Each of these edges has a value of -1 to both its endpoint agents. * Each vertex has an additive valuation. To visualize the above reduction, we show the graph constructed from the formula (x_1 ∨ x_2 ∨ x_3) ∧ (x_1 ∨ x_2 ∨ x_3) ∧ ( x_1 ∨ x_2 ∨ x_3) ∧ ( x_1 ∨ x_2 ∨ x_3) in Figure <ref>. We now prove that a (3, B2)-SAT instance is satisfiable if and only if the constructed graph has an _-^+ orientation. For ease of presentation, for each variable x_i, we denote by C_j^i, T, 1, C_j^i, T, 2 the two clauses that contain the positive literal x_i and by C_j^i, F, 1, C_j^i, F, 2 the two clauses that contain the negative literal x_i. For one direction, we assume that the (3, B2)-SAT instance has a satisfying assignment and use the assignment to create an _-^+ orientation as follows: * Allocate (a_1^Δ, a_2^Δ) to a_1^Δ, (a_2^Δ, a_3^Δ) to a_2^Δ, and (a_3^Δ, a_1^Δ) to a_3^Δ. Allocate each other edge that is incident to a_1^Δ to the endpoint that is not a_1^Δ. * For each variable x_i that is set to True, allocate (a_i^T, a_i^F) to a_i^T, (a_i^T, a_j^i, T, 1^C) to a_j^i, T, 1^C, (a_i^T, a_j^i, T, 2^C) to a_j^i, T, 2^C, and (a_i^F, a_j^i, F, 1^C), (a_i^F, a_j^i, F, 2^C) to a_i^F. * For each variable x_i that is set to False, allocate (a_i^T, a_i^F) to a_i^F, (a_i^F, a_j^i, F, 1^C) to a_j^i, F, 1^C, (a_i^F, a_j^i, F, 2^C) to a_j^i, F, 2^C, and (a_i^T, a_j^i, T, 1^C), (a_i^T, a_j^i, T, 2^C) to a_i^T. Next, we show that the above orientation is _-^+. For agents a_1^Δ, a_2^Δ, a_3^Δ, each of them receives one edge with a value of -1 and all edges received by other agents have non-positive values to them. After removing the edge from their bundles, they do not envy others. For each variable x_i that is set to True, agent a_i^T does not envy others since she receives a total value of 1 and each of her incident edges that she does not receive has a value of 1. Agent a_i^F receives three edges with values of 1, 1, -1, respectively. The only incident edge that she does not receive is (a_i^T, a_i^F), which is allocated to a_i^T and has a value of 2. After removing the edge with a value of -1 from her own bundle or (a_i^T, a_i^F) from a_i^T's bundle, a_i^F does not envy a_i^T. We have an analogous argument for each variable that is set to False. It remains to consider the agents that correspond to clauses. Since the assignment is satisfying, each clause contains at least one literal that is evaluated to True. This implies that each agent a_j^C receives at least one edge with a value of 1. For example, if the clause C_j contains a positive literal x_i that is evaluated to True, a_j^C receives the edge (a_i^T, a_j^C). 
Since each of a_j^C's incident edges that she does not receive has a value of 1, a_j^C does not envy other agents after removing the edge with a value of -1 from her own bundle or the edge with a value of 1 from other agents' bundles. For the other direction, we assume that the constructed graph has an _-^+ orientation and use the orientation to create a satisfying assignment as follows: for each variable x_i, if the edge (a_i^T, a_i^F) is allocated to agent a_i^T, then set x_i to True; otherwise, set x_i to False. Next, we show that the assignment is satisfying. First, since the orientation is _-^+, each agent that corresponds to a variable or a clause must receive the edge between herself and a_1^Δ that has a value of -1. For each variable x_i, if the edge (a_i^T, a_i^F) is allocated to agent a_i^T, both (a_i^F, a_j^i, F, 1^C) and (a_i^F, a_j^i, F, 2^C) must be allocated to agent a_i^F. Otherwise, a_i^F will envy a_i^T even after removing (a_i^F, a_1^Δ) from her own bundle. For a similar reason, if the edge (a_i^T, a_i^F) is allocated to a_i^F, both (a_i^T, a_j^i, T, 1^C) and (a_i^T, a_j^i, T, 2^C) must be allocated to a_i^T. For each clause C_j, agent a_j^C must receive at least one edge with a value of 1. Otherwise, a_j^C will envy the agents who receive her incident edges that have a value of 1 even after removing the edge with a value of -1 from her own bundle. This implies that each clause has a literal that is evaluated to True. Notice that in the graphs constructed in the above reduction, each edge has non-zero values to both its endpoint agents. For such graphs, an orientation is _0^+ if and only if it is _-^+, since no agent receives an edge with a value of zero. Therefore, the hardness of determining the existence of _-^+ orientations also applies to _0^+ orientations. Determining whether an ^+_0 orientation exists or not is NP-complete, even for additive valuations. §.§ Orientations on Simple Graphs Due to the above hardness results for general graphs, we also study EFX orientations on some simple graphs such as trees, stars and paths. For these simple graphs, ^+_0 or ^+_- orientations always exist and can be computed in polynomial time. Although ^0_0 or ^0_- orientations may not exist, determining their existence is in polynomial time. Trees We first consider trees and have the following result. For trees, ^+_0 or ^+_- orientations always exist and can be computed in polynomial time. Since any ^+_0 orientation is also ^+_-, it suffices to compute ^+_0 orientations. To achieve this, we first traverse the tree using the breadth-first search (BFS) algorithm and label each vertex with its layer number during the BFS algorithm. We then allocate each edge to the endpoint agent at the upper layer if the edge is a good for that agent, and to the endpoint agent at the lower layer otherwise. The formal description is provided in Algorithm <ref>. Clearly, Algorithm <ref> runs in polynomial time. It suffices to show that the computed orientation is ^+_0. Observe that for each agent a_i, the edge between herself and an agent at one higher layer is the only edge that is a good for her but is allocated to another agent, or the only edge that is not a good for her but is allocated to her. For the first case, after removing the edge from the other agent's bundle, a_i does not envy that agent. For the second case, after removing the edge from her own bundle, a_i does not envy others. Therefore, the orientation is ^+_0. 
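To make the tree procedure concrete, the following is a minimal Python sketch of the orientation described above: a BFS labels each vertex with its layer, and every edge is then given to its upper-layer endpoint exactly when it is a good for that agent. The function name, the representation of the valuations as a matrix value[i][e], and the choice of vertex 0 as the BFS root are our own illustrative assumptions and are not taken from Algorithm <ref> itself.

```python
from collections import deque

def efx_plus_zero_tree_orientation(n, edges, value):
    """Orient the edges of a tree so that the result is EFX^+_0.

    n      -- number of agents (vertices 0..n-1)
    edges  -- list of (u, w) pairs forming a tree on the n vertices
    value  -- value[i][e] is agent i's (additive) value for edge index e
    Returns a dict mapping each edge index to the agent who receives it.
    """
    adj = [[] for _ in range(n)]
    for idx, (u, w) in enumerate(edges):
        adj[u].append((w, idx))
        adj[w].append((u, idx))

    # Breadth-first search from vertex 0 to compute the layer of every vertex.
    layer = [None] * n
    layer[0] = 0
    queue = deque([0])
    while queue:
        u = queue.popleft()
        for w, _ in adj[u]:
            if layer[w] is None:
                layer[w] = layer[u] + 1
                queue.append(w)

    allocation = {}
    for idx, (u, w) in enumerate(edges):
        upper, lower = (u, w) if layer[u] < layer[w] else (w, u)
        # The upper-layer endpoint gets the edge if it is a good for her,
        # otherwise the lower-layer endpoint gets it.
        allocation[idx] = upper if value[upper][idx] > 0 else lower
    return allocation
```

Under this assignment, the only incident edge an agent can lose although it is a good for her, or receive although it is not, is the edge towards her parent, which is exactly the observation used in the proof.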
Note that since stars and paths are also trees, Theorem <ref> holds for stars and paths. However, ^0_0 or ^0_- orientations may not exist even for a simple tree that is also a star and a path. There exists a simple tree that is also a star and a path, for which no orientation is ^0_0 or ^0_-. Consider the simple tree illustrated in Figure <ref>, which is also a star and a path. Notice that the edge e_2, 3 must be allocated to a_2. Otherwise, a_3 will envy whoever receives e_1, 2 even after removing e_1, 2 from the agent's bundle. Moreover, e_1, 2 must also be allocated to a_2. Otherwise, a_2 will envy a_1 even after removing e_2, 3 from her own bundle. Since a_1 envies a_2 even after removing e_2, 3 from a_2's bundle, no orientation is ^0_0 or ^0_-. The good news is that for stars and a specific type of path, we can determine whether ^0_0 or ^0_- orientations exist or not in polynomial time. Stars We then consider stars and have the following result. For stars, determining whether an ^0_0 or ^0_- orientation exists or not is in polynomial time. If there is only one edge, allocating the edge to one of its endpoint agents gives an ^0_0 and ^0_- orientation. In the following, we assume that there are at least two edges. Observe that if an edge is a chore for its satellite agent, the edge must be allocated to the center in any ^0_0 or ^0_- orientation. To see this, let the edge be e_i, j such that a_j is the satellite and a_i is the center and suppose that e_i, j is allocated to a_j. Let another edge be e_i, k. Since e_i, k is a dummy for a_j, no matter who receives e_i, k, a_j will envy her even after removing e_i, k from her bundle. Thus, e_i, j must be allocated to a_i in any ^0_0 and ^0_- orientation. Also, observe that if the center receives an edge, each edge that is a good for its satellite agent must be allocated to the satellite. Otherwise, the satellite will envy the center even after removing the edge that the center has received. We consider the following two cases: Case 1: Each edge is not a chore for its satellite agent. In this case, we simply allocate each edge to its satellite agent, and it is easy to see that the orientation is ^0_0 and ^0_-. Case 2: There exists an edge that is a chore for its satellite agent. In this case, as we have seen, the edges that are chores for their satellite agents must be allocated to the center in any ^0_0 and ^0_- orientation. Moreover, the edges that are goods for their satellite agents must be allocated to their satellites. For the edges that are dummies for their satellite agents, it does not matter how they are allocated since the orientation is always envy-free for their satellites. Thus, if they are not goods for the center, we allocate them to their satellites and otherwise we allocate them to the center. Therefore, by checking whether the orientation is ^0_0 (resp. ^0_-) for the center, we can determine whether there exists any ^0_0 (resp. ^0_-) orientation. Paths We next consider a specific type of path and have the following result. For paths where each edge is either a good or a chore for both its endpoint agents, determining whether an ^0_0 or ^0_- orientation exists or not is in polynomial time. In the paths we focus on, ^0_0 and ^0_- are equivalent since no edge is a dummy for any of its endpoint agents. Thus, in the following, we only consider ^0_-. If there is only one edge or each edge is a good for both its endpoint agents, we can obtain an ^0_- orientation by allocating each edge to the right endpoint agent.
If there are only two edges and at least one is a chore for both its endpoint agents, an ^0_- orientation does not exist, which is easy to see by the same reasoning we made in the proof of Proposition <ref>. It remains to consider the cases where there are at least three edges and at least one is a chore for both its endpoint agents. We first observe that for each edge that is a chore for both its endpoint agents, its neighboring vertices and edges follow the same pattern if there exists an ^0_- orientation. Let one such edge be e_i_0, i_1 that is a chore for both a_i_0 and a_i_1, the two neighboring edges on its right are e_i_1, i_2, e_i_2, i_3, and the two neighboring edges on its left are e_i_-2, i_-1, e_i_-1, i_0. We claim that if there exists an ^0_- orientation, one of the following two conditions holds: * e_i_1, i_2 is a good for both a_i_1 and a_i_2, e_i_2, i_3 is a good for both a_i_2 and a_i_3, v_i_1({e_i_0, i_1}∪{e_i_1, i_2}) ≥ 0, v_i_2(e_i_1, i_2) ≤ v_i_2(e_i_2, i_3); * e_i_-1, i_0 is a good for both a_i_-1 and a_i_0, e_i_-2, i_-1 is a good for both a_i_-2 and a_i_-1, v_i_0({e_i_0, i_1}∪{e_i_-1, i_0}) ≥ 0, v_i_-1(e_i_-1, i_0) ≤ v_i_-1(e_i_-2, i_-1). To see this, first consider the case if e_i_0, i_1 is allocated to a_i_1. If e_i_1, i_2 is not a good for a_i_1 or v_i_1({e_i_0, i_1}∪{e_i_1, i_2}) < 0, a_i_1 gets a negative value and will envy whoever gets an edge that is not incident to her even after removing that edge. Therefore, in any ^0_- orientation, e_i_1, i_2 must be a good for a_i_1, v_i_1({e_i_0, i_1, e_i_1, i_2}) ≥ 0 and e_i_1, i_2 must be allocated to a_i_1. Furthermore, if e_i_2, i_3 is not a good for a_i_2 or v_i_2(e_i_1, i_2) > v_i_2(e_i_2, i_3), a_i_2 will envy a_i_1 after removing e_i_0, i_1 from her bundle. Therefore, in any ^0_- orientation, e_i_2, i_3 must be a good for a_i_2, v_i_2(e_i_1, i_2) ≤ v_i_2(e_i_2, i_3) and e_i_2, i_3 must be allocated to a_i_2. An analogous statement holds if e_i_0, i_1 is allocated to a_i_0. We are now ready to determine whether an ^0_- orientation exists or not. We consider three cases: Case 1: If there exists an edge such that it is a chore for both its endpoint agents but its neighboring vertices and edges on neither of its two sides follow the pattern we observed (i.e., satisfy the above two conditions), we can assert that there does not exist any ^0_- orientation. Case 2: If for each edge that is a chore for both its endpoint agents, the neighboring vertices and edges on both its two sides follow the pattern, we allocate each edge that is a chore to the left endpoint agent and each other edge to the right one. It is easy to verify that the orientation is ^0_-. Case 3: If there exists an edge that is a chore for both its endpoints, and its neighboring vertices and edges on only one of its two sides follow the pattern, we know how these edges must be allocated in any ^0_- orientation. Take the edge e_i_0, i_1 for example, if only the first condition holds, e_i_0, i_1 and e_i_1, i_2 must be allocated to a_i_1, and e_i_2, i_3 must be allocated to a_i_2. Therefore, we can delete these three edges and a_i_1 and a_i_2 from the path. It is easy to see that the original path has an ^0_- orientation if and only if all reduced paths have ^0_- orientations. By repeating the above reduction, either in some sub-path, Case 1 occurs and we can assert that there does not exist any ^0_- orientation; or in all sub-paths, either Case 2 occurs or each edge is a good for both its endpoint, then we can conclude that an ^0_- orientation exists. 
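As an illustration of the case analysis for stars above, the following Python sketch decides whether a star admits an ^0_0 or ^0_- orientation; the analogous procedure for the paths just discussed follows the pattern-checking argument and is omitted. The function name and the input format (the values of edge j to the center and to satellite j stored in two parallel lists) are our own assumptions, and the EFX check for the center is spelled out directly from the definitions.

```python
def star_efx_orientation_exists(center_values, satellite_values, strict_own_removal):
    """Decide whether a star admits an EFX^0_0 (strict_own_removal=False)
    or EFX^0_- (strict_own_removal=True) orientation.

    Edge j joins the center to satellite j; center_values[j] and
    satellite_values[j] are the (additive) values of edge j to the center
    and to satellite j, respectively.
    """
    m = len(center_values)
    if m <= 1:
        return True  # a single edge can always be oriented in an EFX way

    # Case 1: no edge is a chore for its satellite -> give every edge to its satellite.
    if all(v >= 0 for v in satellite_values):
        return True

    # Case 2: the orientation is forced (dummies for satellites go to the
    # center exactly when they are goods for the center).
    to_center = [j for j in range(m)
                 if satellite_values[j] < 0
                 or (satellite_values[j] == 0 and center_values[j] > 0)]
    to_center_set = set(to_center)
    bundle = sum(center_values[j] for j in to_center)

    # The satellites are fine by construction; check EFX for the center
    # against every satellite.
    for j in range(m):
        other = 0 if j in to_center_set else center_values[j]
        if bundle >= other:
            continue  # the center does not envy satellite j
        # Removing a non-negative (for the center) edge from satellite j's
        # bundle must eliminate the envy.
        if j not in to_center_set and center_values[j] >= 0 and bundle < 0:
            return False
        # Removing any removable edge from the center's own bundle must
        # eliminate the envy as well.
        for e in to_center:
            removable = (center_values[e] < 0) if strict_own_removal else (center_values[e] <= 0)
            if removable and bundle - center_values[e] < other:
                return False
    return True
```

The flag strict_own_removal is the only place where the two notions differ here: under ^0_- only strict chores may be removed from the center's own bundle, while under ^0_0 dummies may be removed as well.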
§ EFX ALLOCATIONS In this section, we elaborate on EFX allocations. §.§ _0^0 Allocations We start with the strongest one among those four notions, i.e., ^0_0. We say an edge e is priceless to an agent a_i if for any S_1, S_2 ⊆ M such that e∉ S_1 and e∈ S_2, we have v_i(S_1) < v_i(S_2). Notice that an agent envies whoever receives her priceless edge, no matter which edges she herself receives, and does not envy others as long as she receives it. We have the following proposition, which provides some characterization of _0^0 allocations on some graphs with priceless edges. For graphs that satisfy (1) each edge is a good for both its endpoint agents, (2) each agent has one priceless incident edge and (3) each priceless edge is priceless to both its endpoint agents, we have that each edge must be allocated to one of its endpoint agents in any _0^0 allocation. First, consider priceless edges. Let (a_i, a_j) be one priceless edge and assume for the sake of contradiction that it is allocated to agent a_k who is not a_i or a_j. In this case, a_k is envied by a_i and a_j, and thus cannot receive her priceless edge since it is a dummy for a_i and a_j. Otherwise, a_i and a_j will envy a_k even after removing the dummy from a_k's bundle. Moreover, a_k envies whoever receives her priceless edge even after removing (a_i, a_j) from her own bundle, which is a dummy for her, thus the allocation is not ^0_0. Then, assume that each priceless edge is allocated to one of its endpoint agents and consider non-priceless edges. Observe that each agent either envies the agent who receives the edge that is priceless to her, or is envied by that agent if she receives that priceless edge. Each envied agent cannot receive any other edge since every edge is either a good or a dummy for the agent who envies her. Otherwise, after removing the other edge from the envied agent's bundle, the envious agent still envies the envied agent, thus the allocation is not ^0_0. Each agent who envies others cannot receive any edge that is not incident to her since it is a dummy for her. Otherwise, after removing the dummy from her own bundle, the envious agent still envies others, thus the allocation is not ^0_0. Therefore, each non-priceless edge must also be allocated to one of its endpoint agents. We first show that ^0_0 allocations may not exist. There exist graphs for which no allocation is ^0_0. Consider the graph illustrated in Figure <ref>. Notice that the graph satisfies the three properties in Proposition <ref>. Therefore, each edge must be allocated to one of its endpoint agents in any ^0_0 allocation. Without loss of generality, assume that e_1, 2 is allocated to a_1, e_3, 4 to a_3, and e_1, 3 to a_1. Since a_2 envies a_1 even after removing e_1,3 from a_1's bundle, the allocation is not ^0_0. We next study the complexity of determining the existence of ^0_0 allocations and have the following result. Determining whether an _0^0 allocation exists or not is NP-complete, even for additive valuations. To prove Theorem <ref>, we reduce from the Circuit-SAT problem to the _0^0 allocation problem. The Circuit-SAT problem asks whether a given Boolean circuit has an assignment of the inputs that makes the output True; it is well-known to be NP-complete <cit.>. We first show how to simulate the OR gate, the NOT gate, the wire in the circuit and how to force the final output to be True. To achieve this, we construct four graphs, named OR gadget, NOT gadget, WIRE gadget, TRUE terminator gadget, respectively (see Figure <ref>).
It is easy to see that Proposition <ref> applies to all these four gadgets. That is, in any _0^0 allocation on each of these gadgets, each edge must be allocated to one of its endpoint agents. This enables us to represent each input (or output) in the circuit as an edge in the gadgets and its value (True or False) as the orientation of the edge. In the OR gadget (see Figure <ref>), edges (a_1, a_1^') and (a_2, a_2^') represent the two inputs of the OR gate, edge (a_3, a_3^') represents the output. The following claim shows that the OR gadget correctly simulates the OR gate. In every _0^0 allocation on the OR gadget, edge (a_3, a_3^') is allocated to a_3 if and only if edge (a_1, a_1^') is allocated to a_1 or edge (a_2, a_2^') is allocated to a_2. We first show that if (a_1, a_1^') is allocated to a_1, (a_3, a_3^') must be allocated to a_3. Since (a_1, a_1^') is priceless to a_1^' but is allocated to a_1, a_1^' envies a_1. Hence, (a_1, a_3^') must be allocated to a_3^'. Otherwise, a_1^' still envies a_1 after removing (a_1, a_3^') from a_1's bundle. Moreover, (a_3, a_3^') must be allocated to a_3. Otherwise, a_3 still envies a_3^' after removing (a_1, a_3^') from a_3's bundle. By symmetry, it holds that if (a_2, a_2^') is allocated to a_2, (a_3, a_3^') must be allocated to a_3. We then show that when (a_1, a_1^') is allocated to a_1 and (a_3, a_3^') is allocated to a_3, no matter which endpoint agent (a_2, a_2^') is allocated to, there exists an _0^0 allocation. When (a_2, a_2^') is allocated to a_2, we construct an _0^0 allocation as follows: allocate each priceless edge to the upper endpoint agent, i.e., (a_i, a_i^') to a_i for every i ∈{1, 2, 3} and (b_i, b_i^') to b_i for every i ∈{1, 2, 3}; allocate the middle four edges to the endpoint agents who are further away from b_2^', i.e., (a_1^', b_1^') to a_1^', (b_1^', b_2^') to b_1^', (b_2^', b_3^') to b_3^', (b_3^', a_2^') to a_2^'; allocate (b_2^', a_3) to b_2^', (a_1, a_3^') to a_3^', (a_2, a_3^') to a_3^'. Since each agent has a positive value for each edge she receives, to verify that the allocation is _0^0, it suffices to consider the agents who receive more than one edge (only a_3^' in the above allocation). Since both a_1 and a_2 receive their priceless edges, neither of them envies a_3^' and thus the allocation is _0^0. When (a_2, a_2^') is allocated to a_2^', we construct an _0^0 allocation as follows: allocate each priceless edge except (a_2, a_2^') and (b_1, b_1^') to the upper endpoint, i.e., (a_i, a_i^') to a_i for every i ∈{1, 3}, (b_i, b_i^') to b_i for every i ∈{2, 3}, (a_2, a_2^') to a_2^', (b_1, b_1^') to b_1^'; for the middle four edges, allocate (a_1^', b_1^') to a_1^', (b_1^', b_2^') to b_2^', (b_2^', b_3^') to b_3^', (b_3^', a_2^') to b_3^'; allocate (b_2^', a_3) to b_2^', (a_1, a_3^') to a_3^', (a_2, a_3^') to a_2. In the above allocation, only b_2^' and b_3^' receive more than one edge. For b_2^', neither b_1^' nor a_3 envies her since both of them receive their priceless edges. For b_3^', a_2^' does not envy her since she receives her priceless edge, and b_2^' does not envy her since she receives a value of ϵ_1 + ϵ_2 and thinks that b_3^' receives a value of ϵ_1. Therefore, the allocation is also _0^0. By symmetry, when (a_2, a_2^') is allocated to a_2 and (a_3, a_3^') is allocated to a_3, no matter which endpoint agent (a_1, a_1^') is allocated to, there exists an _0^0 allocation. 
We next show that if both (a_1, a_1^') and (a_2, a_2^') are allocated to their lower endpoint agents, (a_3, a_3^') must be allocated to a_3^'. It suffices to show that (b_2^', a_3) must be allocated to a_3. This is because if both (a_3, a_3^') and (b_2^', a_3) are allocated to a_3, a_3^' will envy a_3 even after removing (b_2^', a_3) from a_3's bundle. If (b_2, b_2^') is allocated to b_2^', (b_2^', a_3) must be allocated to a_3 and we are done, since otherwise b_2 will envy b_2^' even after removing (b_2^', a_3) from b_2^''s bundle. Therefore, it remains to consider the case when (b_2, b_2^') is allocated to b_2. Since (a_1, a_1^') is allocated to a_1^', (a_1^', b_1^') must be allocated to b_1^' since otherwise a_1 will envy a_1^' even after removing (a_1^', b_1^') from a_1^''s bundle. Furthermore, (b_1, b_1^') must be allocated to b_1. By the same reasoning, (a_2^', b_3^') must be allocated to b_3^' and (b_3, b_3^') must be allocated to b_3. Then consider the incident edges of b_2^' that have not been allocated so far, i.e., (b_1^', b_2^') and (b_2^', b_3^'). b_2^' must receive one of these two edges, since otherwise she will envy b_1^' even after removing (a_1^', b_1^') from b_1^''s bundle, and b_3^' even after removing (a_2^', b_3^') from b_3^''s bundle. No matter which edge b_2^' receives, (b_2^', a_3) must be allocated to a_3. To see this, let the edge that b_2^' receives be (b_1^', b_2^'). Since b_1^' receives a value of ϵ_2 and thinks that b_2^' receives a value of ϵ_1 > ϵ_2, she envies b_2^' and thus b_2^' cannot receive (b_2^', a_3) anymore. Lastly, we show that when all of (a_1, a_1^'), (a_2, a_2^') and (a_3, a_3^') are allocated to their lower endpoint agents, there exists an _0^0 allocation. We allocate each priceless edge except (b_1, b_1^') and (b_3, b_3^') to the lower endpoint agent, i.e., (a_i, a_i^') to a_i^' for every i ∈{1, 2, 3}, (b_2, b_2^') to b_2^', and (b_i, b_i^') to b_i for every i ∈{1, 3}; for the middle four edges, allocate (a_1^', b_1^') and (b_1^', b_2^') to b_1^', (b_2^', b_3^') and (b_3^', a_2^') to b_3^'; allocate (b_2^', a_3) to a_3, (a_1, a_3^') to a_1, (a_2, a_3^') to a_2. In the above allocation, only b_1^' and b_3^' receive more than one edge. For b_1^', neither a_1^' nor b_2^' envies her since both of them receive their priceless edges. By the same reasoning, neither b_2^' nor a_2^' envies b_3^'. Therefore, the allocation is _0^0. In the NOT gadget (see Figure <ref>), edge (a_1, a_1^') represents the input of the NOT gate, and edge (a_2, a_2^') represents the output. The following claim shows that the NOT gadget correctly simulates the NOT gate. In every _0^0 allocation on the NOT gadget, edge (a_2, a_2^') is allocated to a_2^' if and only if edge (a_1, a_1^') is allocated to a_1. If (a_1, a_1^') is allocated to a_1, (a_1, a_2) must be allocated to a_2 since otherwise a_1^' will envy a_1 even after removing (a_1, a_2) from a_1's bundle. Furthermore, (a_2, a_2^') must be allocated to a_2^'. By symmetry, if (a_1, a_1^') is allocated to a_1^', (a_2, a_2^') must be allocated to a_2. When (a_1, a_1^') is allocated to a_1 and (a_2, a_2^') is allocated to a_2^', allocating the edges clockwise produces an _0^0 allocation. When (a_1, a_1^') is allocated to a_1^' and (a_2, a_2^') is allocated to a_2, allocating the edges counterclockwise produces an _0^0 allocation. In the WIRE gadget (see Figure <ref>), edge (a_1, a_1^') represents the input of the wire of a circuit, and edge (a_2, a_2^') represents the output.
Since the only difference between the NOT gadget and the WIRE gadget is that the labels of a_2 and a_2^' are exchanged, we have the following claim that shows the WIRE gadget correctly simulates the wire in the circuit. In every _0^0 allocation on the WIRE gadget, edge (a_2, a_2^') is allocated to a_2 if and only if edge (a_1, a_1^') is allocated to a_1. In the TRUE terminator gadget (see Figure <ref>), edge (a_2, a_2^') is allocated to a_2 in every _0^0 allocation. In every _0^0 allocation on the TRUE terminator gadget, edge (a_2, a_2^') is allocated to a_2. For the sake of contradiction, suppose that (a_2, a_2^') is allocated to a_2^'. In this case, (a_1, a_2^') must be allocated to a_1, and (a_1^', a_2^') must be allocated to a_1^'. Otherwise, a_2 will envy a_2^' even after removing one edge except (a_2, a_2^') from a_2^''s bundle. Then neither a_1 nor a_1^' can receive (a_1, a_1^'), a contradiction. This is because the endpoint agent that receives (a_1, a_1^') will be envied by the other even after removing one edge except (a_1, a_1^') from her bundle. When (a_2, a_2^') is allocated to a_2, allocating (a_1, a_1^') to a_1, (a_1^', a_2) and (a_1^', a_2^') to a_1^', (a_1, a_2^') to a_2^' produces an _0^0 allocation. Given a circuit, we first substitute each AND gate with three NOT gates and one OR gate, and get an equivalent circuit without AND gates. For the new circuit, we construct a priceless edge with a value of +∞ for each input, and the corresponding gadget for each gate and wire. We then construct a True terminator gadget to force the final output to be True. Figure <ref> shows the graph constructed from a simple circuit with one AND gate, two inputs and one final output. Note that Proposition <ref> still applies to the graph we construct. Up to now, it is not hard to see the correctness of Theorem <ref>. To prove Theorem <ref>, we show that the circuit has a satisfying assignment if and only if the constructed graph has an ^0_0 allocation. For one direction, we assume that there exists an assignment of the inputs such that the final output of the circuit is True and use the assignment to create an allocation as follows: for each input, if it is set to True in the assignment, allocate the corresponding edge to the upper endpoint; otherwise, allocate the edge to the lower endpoint. Allocate the edge that simulates the final output to the upper endpoint. Allocate the remaining edges in each gadget according to Claims <ref> to <ref>. Clearly, the allocation is _0^0. For the other direction, we assume that there exists an _0^0 allocation in the constructed graph and use the allocation to create an assignment as follows: for each input, if the corresponding edge is allocated to the upper endpoint, set the input to True; otherwise, set the input to False. By Claim <ref>, the edge that simulates the final output must be allocated to the upper endpoint. Therefore, by Claims <ref> to <ref>, the assignment makes the final output of the circuit True. Our reduction borrows an idea from the reduction by <cit.> (see Theorem 2 in their paper) and generalizes their reduction. Our reduction can imply their result, while theirs cannot carry over to our problem since it relies on the orientation model. §.§ _-^0 Allocations We next study _-^0 and have the following theorem. For any graph, an _-^0 allocation always exists and can be computed in polynomial time. We first introduce some notations. 
Given a (partial) allocation 𝐗 = (X_1, …, X_n), let R(𝐗) denote the set of unallocated edges, i.e., R(𝐗) = M ∖⋃_a_i ∈ NX_i. We say an agent a_j is safe for another agent a_i if a_i does not envy a_j even if a_j additionally receives all of a_i's unallocated incident edges that are not chores for a_i, i.e., v_i(X_i) ≥ v_i(X_j ∪ (E_i^≥ 0∩ R(𝐗))). We next introduce some properties of allocations. We say that a (partial) allocation 𝐗 satisfies * property (1) if for every agent a_i, the value of her bundle is at least the largest value among her unallocated incident edges that are not chores for her. That is, v_i(X_i) ≥ v_i(e) for every edge e ∈ E_i^≥ 0∩ R(𝐗); * property (2) if for every envied agent a_i, the value of her bundle is at least the value of all her unallocated incident edges that are not chores for her. That is, v_i(X_i) ≥ v_i(E_i^≥ 0∩ R(𝐗)); * property (3) if for every two envied agents, there exists a non-envied agent who is safe for both of them; * property (4) if no agent receives an edge that is a chore for her. That is, e ∈ E_i^≥ 0 for any a_i ∈ N and e ∈ X_i; * property (5) if every envied agent a_i receives exactly one edge, i.e., |X_i| = 1; * property (6) if every envied agent is envied by exactly one agent; * property (7) if there is no envy cycle among the agents. That is, there does not exist a sequence of the agents a_i_0← a_i_1←⋯← a_i_s such that a_i_l envies a_i_l-1 for every l ∈ [s] and i_0 = i_s; * property (8) if for any sequence of agents a_i_0← a_i_1←⋯← a_i_s such that a_i_l envies a_i_l-1 for every l ∈ [s] and a_i_s is non-envied, we have that a_i_l is safe for a_i_0 for every l ∈ [s]. We obtain an ^0_- allocation in two parts. Part 1. In the first part, we compute a (partial) ^0_- orientation that satisfies properties (1)-(8) in Definition <ref>. Our algorithms in this part are adapted from those by <cit.>. There are two differences between our algorithms and <cit.>'s. First, since there is one more requirement in our problem that agents cannot envy others after removing a chore from their own bundles, we need to carefully allocate the edges that are chores for their endpoint agents. Second, the algorithms by <cit.> cannot guarantee property (8) and our algorithms need to deal with the case where property (8) is not satisfied. We have the following lemma. The algorithms and proofs can be seen in Appendix <ref>. For any graph, a (partial) ^0_- orientation that satisfies properties (1)-(8) in Definition <ref> can be computed in polynomial time. Part 2. In the second part, we allocate the edges that are not allocated in Part 1. We first categorize the unallocated edges into four disjoint groups: * G_1 contains each edge that has at least one non-envied endpoint agent for whom the edge is not a chore; * G_2 contains each edge that has two envied endpoints; * G_3 contains each edge that has one non-envied endpoint agent for whom the edge is a chore and one envied endpoint agent for whom it is not a chore; * G_4 contains the edges that have not been included in G_1, G_2, G_3. Notice that each edge in G_4 is a chore for both its endpoint agents. We will allocate the unallocated edges from G_1 to G_4 such that no agent will receive an edge that is a chore for her and thus property (4) will be retained. Besides, no agent will get worse off and no allocated edge will become unallocated, which will ensure that properties (1) and (2) are retained. Moreover, no new envy will occur, which will ensure that properties (6) and (7) are retained.
Furthermore, no allocated edge will be reallocated to another agent, which will ensure that an agent who is safe for some agent is always safe for that agent and thus properties (3) and (8) are retained. We will also see that the (partial) allocation is always ^0_- during the allocation process. Specifically, * For each edge in G_1, we allocate it to the non-envied endpoint agent for whom it is not a chore. * For each edge in G_2, we allocate it to the non-envied agent who is safe for both its endpoint agents. * For each edge e_i, j in G_3, we consider three cases. Without loss of generality, let a_i be the endpoint agent for whom e_i, j is not a chore and a_j be the other one for whom it is a chore. First, a_i becomes non-envied. Similar to the allocation of G_1, we allocate the edge to a_i. Second, there exists a non-envied agent a_k ≠ a_j who is safe for a_i. Similar to the allocation of G_2, we allocate e_i, j to a_k. Third, a_j is the only non-envied agent who is safe for a_i. By property (8), it must be the case that there exists a sequence of agents a_i_0← a_i_1←⋯← a_i_s such that a_i_l envies a_i_l-1 for every l ∈ [s], a_i_0 is a_i and a_i_s is a_j. For this case, we allocate e_i, j to a_i_s-1. * For G_4, we consider two cases. First, if no agent is envied, we allocate each edge to an agent who is not its endpoint. Second, if some agent is envied, we find two agents a_i and a_j such that a_i is envied by a_j and a_j is non-envied. We allocate the edges in G_4 that are incident to a_j (i.e., E_j ∩ G_4) to a_i, and the other edges in G_4 (i.e., G_4 ∖ E_j) to a_j. We have the following lemma in Part 2. For any graph, given a (partial) ^0_- orientation that satisfies properties (1)-(8) in Definition <ref>, we can compute an ^0_- allocation in polynomial time. Each edge in G_1 is allocated to the non-envied endpoint agent for whom the edge is not a chore. This does not incur new envy, since the other endpoint agent of the edge prefers her own bundle to the edge by property (1). Each edge in G_2 is allocated to a non-envied agent who is safe for both its two endpoint agents. By property (3), such a non-envied agent exists. Since the non-envied agent is safe for both the two endpoint agents, no new envy occurs, either. Moreover, during the allocation of G_1 and G_2, no agent receives an edge that is a chore for her and no envied agent receives an edge, thus the new (partial) allocation is still ^0_- and retains properties (4)-(7). Since no agent gets worse off and no allocated edge becomes unallocated, properties (1) and (2) are retained. Furthermore, since no edge that was allocated to some agent is reallocated to another agent, an agent who is safe for some agent is always safe for that agent and thus properties (3) and (8) are retained. For each edge e_i, j in G_3, without loss of generality, let a_i be the endpoint agent for whom e_i, j is not a chore and a_j be the other one for whom e_i, j is a chore. There are three cases: * First, a_i now becomes non-envied. For this case, we allocate e_i, j to a_i. By the same reasoning we made when allocating the edges in G_1, the new (partial) allocation is still ^0_- and does not break properties (1)-(8). * Second, there exists a non-envied agent a_k ≠ a_j who is safe for a_i. For this case, similarly to the allocation of G_2, we allocate e_i, j to a_k. Although a_k may not be safe for a_j, allocating e_i, j to a_k does not make a_j envy a_k since e_i, j is a chore for a_j. 
Therefore, by the same reasoning we made when allocating the edges in G_2, the new (partial) allocation is still ^0_- and does not break properties (1)-(8). * Third, a_j is the only non-envied agent who is safe for a_i. Since there is no envy cycle among the agents by property (7), there exists a sequence of agents a_i_0← a_i_1←⋯← a_i_s such that a_i_l envies a_i_l-1 for every l ∈ [s], a_i_s is non-envied and a_i_0 is a_i. Then a_i_s must be a_j. Otherwise, by property (8), a_i_s is safe for a_i, which contradicts the assumption of this case. For this case, we allocate e_i, j to a_i_s-1. Clearly, property (4) still holds. By property (8), a_i_s-1 is safe for a_i. Besides, since e_i, j is a chore for a_j, allocating e_i, j to a_i_s-1 does not incur new envy. Notice that although a_i_s-1 is envied, allocating e_i, j to her does not make the (partial) allocation not ^0_- since she is only envied by a_j by property (6) and e_i, j is a chore for a_j. Therefore, the new (partial) allocation is still ^0_- and retains properties (6) and (7). Since no agent gets worse off and no allocated edge becomes unallocated, properties (1) and (2) are retained. Furthermore, since no edge that was allocated to some agent is reallocated to another agent, an agent who is safe for some agent is always safe for that agent and thus properties (3) and (8) are retained. At last, we allocate the edges in G_4, each of which is a chore for both its endpoint agents. If no agent is envied in the new (partial) allocation, we allocate each edge in G_4 to an agent who is not its endpoint and obtain an envy-free allocation. If some agent is envied, we first find two agents a_i and a_j such that a_i is envied by a_j and a_j is non-envied. We are able to find two such agents since there is no envy cycle among the agents by property (7). Since no new envy occurs during the allocation of G_1, G_2 and G_3, a_j envied a_i in the (partial) orientation computed in Part 1 and thus e_i, j was allocated to a_i in Part 1 and is not in G_4. We allocate the edges in G_4 that are incident to a_j (i.e., E_j ∩ G_4) to a_i, and the other edges (i.e., G_4 ∖ E_j) to a_j. Although a_i is envied, she can receive the edges in E_j ∩ G_4 since she is only envied by a_j according to property (6) and these edges are chores for their endpoint agents (including a_j). a_j can receive the edges in G_4 ∖ E_j since she is non-envied and these edges are not incident to her and thus are dummies for her.
After removing the good from a_j's bundle, a_i does not envy a_j. Thus, the allocation is _0^+. The trickier graphs to deal with are those with edges that are not goods for any of their endpoint agents. For these graphs, we want to find an agent who can receive all such edges, so that we can simply allocate each remaining edge to one of its endpoint agents as above. At the same time, the allocation should be _0^+ for the agent we find. When there exists an agent a_i to whom the total value of her incident edges is non-negative (i.e., v_i(E_i) ≥ 0), we let a_i receive all her incident edges as well as all edges that are not goods for any of their endpoint agents. We then allocate each remaining edge to one of its endpoint agents for whom it is a good. Since a_i receives all her incident edges whose total value is non-negative, the allocation is _0^+ for her. However, when the total value of the incident edges is negative to every agent (i.e., v_i(E_i) < 0 for every a_i), we cannot simply allocate all incident edges to an agent as above, since the allocation may not be _0^+ for her. For this case, we let an agent receive all her incident edges that are not chores for her and allocate her other incident edges to another agent. More concretely, we first choose an edge e_i, j that is not a chore for a_i, breaking the tie by giving priority to the edges that are not chores for one endpoint agent and are chores for the other. We then let a_i receive all her incident edges that are not chores for her, as well as all edges that are not incident to her but are not goods for any of their endpoint agents. Next, we let a_j receive all her unallocated incident edges that are goods for her, as well as a_i's unallocated incident edges that are not goods for any of their endpoint agents. At last, we allocate each remaining unallocated edge to one of its endpoint agents for whom it is a good. The formal description of the above allocation process is provided in Algorithm <ref>. Now we are ready to prove Theorem <ref>. Clearly, Algorithm <ref> runs in polynomial time. It remains to show that the computed allocation is _0^+. We consider two cases: Case 1: there exists an agent a_i such that v_i(E_i) ≥ 0. In this case, no edge remains unallocated at the end of the algorithm. To see this, the incident edges of a_i, as well as those that are not incident to a_i and are not goods for any of their endpoint agents, are allocated to a_i. The remaining edges are goods for at least one of their endpoint agents and are allocated to those endpoint agents. For agent a_i, she receives a non-negative value since she receives all her incident edges whose total value is non-negative to her and other edges she receives are not incident to her and are dummies for her. Thus, each other agent receives a bundle that has a value of 0 to a_i and a_i does not envy them. For each a_j ≠ a_i, each edge that she receives is a good for her (if exists) and each other agent receives at most one edge that is a good for her. Thus a_j does not envy others after removing the good from their bundles and the allocation is ^+_0. Case 2: v_i(E_i) < 0 for every agent a_i. In this case, no edge remains unallocated at the end of the algorithm, either. To see this, the edges that are incident to a_i and are not chores for a_i, as well as those that are not incident to a_i and are not goods for any of their endpoint agents, are allocated to a_i. The edges that are incident to a_i but are not goods for any of their endpoint agents are allocated to a_j. 
The remaining edges are goods for at least one of their endpoint agents and are allocated to those endpoint agents. For agent a_i, she receives a non-negative value since she only receives her incident edges that are not chores for her and other edges she receives are not incident to her and are dummies for her. Thus, each other agent receives a bundle that has a non-positive value to a_i and a_i does not envy them. For each agent a_k ≠ a_i or a_j, each edge that she receives is a good for her (if exists) and each other agent receives at most one edge that is a good for her. Therefore, the allocation is ^+_0 for a_k. It remains to consider agent a_j. a_j does not envy each agent a_k ≠ a_i or a_j, since a_j does not receive any edge that is a chore for her and thus receives a non-negative value, and a_k does not receive any edge that is a good for a_j and thus her bundle has a non-positive value to a_j. If the edge (a_i, a_j) is a chore for a_j, a_j does not envy a_i, either, since a_i does not receive any edge that is a good for a_j. If (a_i, a_j) is not a chore for a_j, the tie-breaking rule implies that each edge in the graph either is not a chore for any of its endpoint agents, or is a chore for both its endpoint agents. Thus, all a_j's incident edges that are chores for a_j are allocated to a_i. Since v_j(E_j) < 0, a_i's bundle has a negative value to a_j and a_j does not envy a_i. Therefore, the allocation is ^+_0. Since any _-^0 or _0^+ allocation is also ^+_-, we have the following corollary. For any graph, an ^+_- allocation always exists and can be computed in polynomial time. § CONCLUSION In this paper, we give a complete computational study of EFX allocations on graphs when the items are a mixture of goods and chores. There are some future directions. In our setting, exactly two agents are interested in one common item that is incident to both of them. One immediate direction is to study the generalized setting with multi-edges where multiple edges exist between two agents or hypergraphs where more than two agents are interested in one common item. Another direction is to study the setting where agents are also interested in the edges that are not very far away from them. To bypass the hardness results of EFX orientations, we have studied some simple graphs including trees, stars and paths. One can also study complex graphs for which the existence of EFX orientations can be determined in polynomial time. § APPENDIX § A DISCUSSION ON GOODS AND CHORES INSTANCES There are two variants of EFX for goods instances in the literature, i.e., ^0 and ^+, which require that any envy could be eliminated after virtually removing any item from the envied agent's bundle that has a non-negative or a strictly positive marginal value to the envious agent, respectively. Formally, An allocation 𝐗 = (X_1, …, X_n) is ^0 if for every two agents a_i, a_j ∈ N such that a_i envies a_j and any e ∈ X_j such that v_i(X_j ∖{e}) ≤ v_i(X_j), we have v_i(X_i) ≥ v_i(X_j ∖{e}). An allocation 𝐗 = (X_1, …, X_n) is ^+ if for every two agents a_i, a_j ∈ N such that a_i envies a_j and any e ∈ X_j such that v_i(X_j ∖{e}) < v_i(X_j), we have v_i(X_i) ≥ v_i(X_j ∖{e}). For goods instances, <cit.> studied ^0 in their work and proved that determining whether ^0 orientations exist or not is NP-complete. Our reduction in Theorem <ref> provides an alternative proof to this result. It is easy to compute an ^+ orientation by allocating each edge to one of its endpoint agents.
Since for each agent, she receives a non-negative value and each other agent receives at most one of her incident edges, the orientation is ^+. <cit.> also showed that an ^0 allocation always exists and can be computed in polynomial time, which also holds for ^+ allocations since any ^0 allocation is also ^+. There are also two variants of EFX for chores instances in the literature, i.e., _0 and _-, which require that any envy could be eliminated after virtually removing any item from the envious agent's bundle that has a non-positive or a strictly negative marginal value to her. Formally, An allocation 𝐗 = (X_1, …, X_n) is _0 if for every two agents a_i, a_j ∈ N such that a_i envies a_j and any e ∈ X_i such that v_i(X_i ∖{e}) ≥ v_i(X_i), we have v_i(X_i ∖{e}) ≥ v_i(X_j). An allocation 𝐗 = (X_1, …, X_n) is _- if for every two agents a_i, a_j ∈ N such that a_i envies a_j and any e ∈ X_i such that v_i(X_i ∖{e}) > v_i(X_i), we have v_i(X_i ∖{e}) ≥ v_i(X_j). For chores instances, an _0 or _- orientation may not exist. Consider a connected graph where there are more edges than vertices and each edge is a chore for both its endpoint agents. By the pigeonhole principle, there must exist an agent who receives more than one edge. After removing one edge from her own bundle, she still receives a negative value and envies the other endpoint agents of the edges she receives. We next show that we can determine whether an _- orientation exists or not in polynomial time. To achieve this, we first allocate each edge to an endpoint agent for whom it is a dummy (if such an agent exists) and remove these edges from the graph. We can do this since _- does not consider dummies in the envious agent's bundle. After this step, each unallocated edge is a chore for both its endpoint agents. We then compare the number of unallocated edges and agents in each connected component of the graph. If there exists a connected component where the number of unallocated edges is larger than that of agents, we can conclude that there does not exist an _- orientation. This is because by the pigeonhole principle, at least one agent receives more than one chore. If the number of unallocated edges is at most that of agents for every connected component, we can compute an _- orientation by allocating at most one chore to each agent. We note that whether the existence of an _0 orientation can be determined in polynomial time remains an open problem. We conjecture that it is NP-complete. At last, an envy-free allocation can be computed by allocating each edge to an agent who is not its endpoint, which is also _0 and _-. § COMPUTING A (PARTIAL) ^0_- ORIENTATION THAT SATISFIES PROPERTIES (1)-(8) IN DEFINITION <REF> In this section, we show how to compute a (partial) ^0_- orientation that satisfies properties (1)-(8) in Definition <ref>. We achieve this in two steps. §.§ Step 1: Computing an Initial (Partial) Orientation 𝐗^1 We first construct an initial (partial) orientation 𝐗^1 by letting each agent pick one incident edge (if it exists) that she values the most among those that have not been allocated and are not chores for her. Once an agent picks an edge, the other endpoint agent of the edge will be the next to pick an edge. The formal description is provided in Algorithm <ref>. It is easy to see that the following claim holds. The initial (partial) orientation 𝐗^1 computed in Step 1 is ^0_- and satisfies properties (1) and (4)-(7) in Definition <ref>. Moreover, it is computed in polynomial time.
Clearly, Algorithm <ref> runs in polynomial time. Since each agent receives at most one incident edge that is not a chore for her, 𝐗^1 is ^0_- and satisfies properties (4)-(6). Since each agent picks the edge that she values the most among those that have not been allocated and are not chores for her, 𝐗^1 satisfies properties (1) and (7). After Step 1, either Step 2-1 or Step 2-2 will be executed, depending on whether 𝐗^1 satisfies properties (2) and (3). §.§ Step 2-1: Computing the Desired (Partial) Orientation if 𝐗^1 does not Satisfy Property (2) Since 𝐗^1 does not satisfy property (2), there exists an envied agent a_i, for whom her bundle is less valuable than the unallocated incident edges that are not chores for her (i.e., v_i(X_i^1) < v_i(E_i^≥ 0∩ R(𝐗^1))). We allocate all edges in E_i^≥ 0∩ R(𝐗^1) to a_i and release the edge she received in 𝐗^1, which may break property (1). In order to regain property (1), for any agent who prefers an unallocated incident edge to her current bundle, we allocate the unallocated edge to the agent and release all edges she received previously. We repeat the above process until property (2) is satisfied. The formal description is provided in Algorithm <ref>. We have the following claim. Taking as input a (partial) orientation that is ^0_- and satisfies properties (1) and (4)-(7) in Definition <ref>, Algorithm <ref> computes a (partial) orientation that is ^0_- and satisfies properties (1)-(7) in polynomial time. Clearly, no agent receives an edge that is a chore for her in Algorithm <ref>, thus property (4) always holds. We next show by induction that properties (1) and (5)-(7) always hold at the end of each round of the outer while-loop. Assume that at the beginning of a round, these four properties hold. Let a_i be the agent who makes the algorithm enter the outer while-loop and a_j be the agent who envies a_i. By property (5), a_i only owned e_i, j at the beginning of the round. In the outer while-loop, a_i receives all edges in E_i^≥ 0∩ R(𝐗^') and releases e_i, j. Since the value of E_i^≥ 0∩ R(𝐗^') to a_i is larger than that of e_i, j, property (1) holds for a_i. Besides, the other endpoint agents of the edges in E_i^≥ 0∩ R(𝐗^') do not envy a_i since property (1) was satisfied for them at the beginning of the round. Since a_i gets better off, she does not envy the agents whom she did not envy previously. Therefore, the new (partial) orientation retains properties (5)-(7) after a_i gets her new bundle. After a_i releases e_i, j, property (1) does not hold for a_j, and the algorithm enters the inner while-loop. In the inner while-loop, a_j receives e_i, j and releases the edges she owned previously. The value of e_i, j to a_j is larger than the total value of the edges a_j owned previously, and is larger than the value of each of these edges since none of them are chores for a_j by property (4). Furthermore, since property (1) held for a_j at the beginning of the round, the value of e_i, j to a_j is larger than the value of each of a_j's incident edges that were unallocated at the beginning of the round and are not chores for her. Thus, property (1) holds for a_j. Besides, no one envies a_j since she only receives the edge e_i, j and a_i now prefers her own bundle to e_i, j. Since a_j gets better off, she does not envy the agents whom she did not envy previously. Therefore, the new (partial) orientation retains properties (5)-(7) after a_j gets her new bundle. 
If a_j was non-envied at the beginning of the round of the outer while-loop, each agent prefers her own bundle to the edge between herself and a_j that a_j releases. Therefore, the round ends and properties (1) and (5)-(7) hold. If a_j was envied by some agent, property (1) is not satisfied for that agent and the algorithm enters the inner while-loop again. By the same reasoning, properties (1) and (5)-(7) hold after that round of the inner while-loop. By induction, properties (1) and (5)-(7) hold at the end of the the round of the outer while-loop. Therefore, by induction, properties (1) and (5)-(7) hold at the end of the algorithm. Next, we show that properties (2) and (3) hold at the end of the algorithm. Observe that the agents who receive new bundles during the algorithm become non-envied, and the agents who were non-envied previously are still non-envied since no new envy occurs during the algorithm. Therefore, the algorithm can terminate in polynomial time, and we have property (2) at the end. To see property (3), let a_k be the agent who makes the algorithm enter the last round of the outer while-loop and a_l be the agent involved in the first round of the inner while-loop in that round. At the end of that round, both a_k and a_l are non-envied, and a_l only receives e_k, l. By property (2), a_l is safe for all envied agents and property (3) holds. The (partial) orientation is ^0_- since for each agent, the edges that she receives are not chores for her by property (4), and each other agent receives at most one of her incident edges. Therefore, we complete the proof. §.§ Step 2-2: Computing the Desired (Partial) Orientation if 𝐗^1 Satisfies Property (2) but not (3) Since 𝐗^1 does not satisfy property (3), there exist two envied agents a_i and a_j such that no non-envied agent is safe for both of them. Let a_i_0← a_i_1←⋯← a_i_s be the sequence of agents such that a_i_0 is a_i, a_i_l envies a_i_l-1 for every l ∈ [s] and a_i_s is non-envied, which exists since there is no envy cycle among the agents by property (7). Similarly, let a_j_0← a_j_1←⋯← a_j_t be the sequence of agents such that a_j_0 is a_j, a_j_l envies a_j_l-1 for every l ∈ [t] and a_j_t is non-envied. We first consider the following case: Case 1: e_i, i_s is allocated to a_i_s in 𝐗^1. Since each agent receives at most one edge in 𝐗^1, a_i_s only receives e_i, i_s and does not receive any incident edge of a_j. Thus, a_j values a_i_s's bundle at zero and by property (2), a_i_s is safe for a_j. According to our assumption, a_i_s is not safe for a_i; that is, v_i({e_i, i_s}∪ (E_i^≥ 0∩ R(𝐗^1))) > v_i(X_i^1). For this case, we allocate the edges in E_i^≥ 0∩ R(𝐗^1) as well as e_i, i_s to a_i, and e_i_l-1, i_l to a_i_l for every l ∈ [s]. Figure <ref> visualizes the allocation process. We have the following claim. The (partial) orientation computed in Case 1 of Step 2-2 is ^0_- and satisfies properties (1)-(7) in Definition <ref>. Moreover, it is computed in polynomial time. Clearly, the allocation process runs in polynomial time. The (partial) orientation is ^0_- and satisfies property (4) since for each agent, the edges that she receives are not chores, and each other agent receives at most one of her incident edges. We then show that the (partial) orientation satisfies properties (1), (2) and (5)-(7). Observe that the agents who receive new bundles in the allocation process become non-envied. To see this, first consider a_i. 
Since 𝐗^1 satisfies property (1), the other endpoint agents of the edges in E_i^≥ 0∩ R(𝐗^1) do not envy a_i. Besides, a_i_s does not envy a_i, since she prefers the edge she owns currently (i.e., e_i_s-1, i_s) to e_i, i_s. For each l ∈ [s], a_i_l is non-envied, since a_i_l-1 prefers her current bundle to e_i_l-1, i_l. Also observe that these agents do not envy the agents who they did not envy previously since they get better off. These two observations give that the new (partial) orientation still satisfies properties (5)-(7). Moreover, since no agent gets worse off and no edge that was allocated previously becomes unallocated, properties (1) and (2) are still satisfied. Since a_i_1 receives only the edge between herself and a_i_0 who is non-envied, by property (2), she is safe for all envied agents. Therefore, property (3) is also satisfied and we complete the proof. When e_j, j_t is allocated to a_j_t, we can also construct a (partial) orientation that is ^0_- and satisfies properties (1)-(7), following the same process as above. Therefore, it remains to consider the case where e_i, i_s is not allocated to a_i_s and e_j, j_t is not allocated to a_j_t. In this case, a_i values a_i_s's bundle at zero, and thus a_i_s is safe for her. This implies that a_i_s is not safe for a_j; that is, a_i_s receives e_j, i_s in 𝐗^1 and v_j({e_j, i_s}∪ (E_j^≥ 0∩ R(𝐗^1))) > v_j(X_j^1). Similarly, a_j_t is not safe for a_i, a_j_t receives e_i, j_t in 𝐗^1 and v_i({e_i, j_t}∪ (E_i^≥ 0∩ R(𝐗^1))) > v_i(X_i^1). Figure <ref> illustrates the initial (partial) orientation regarding these agents. We further consider the following two cases: Case 2: a_i prefers e_i, j to e_i, j_t or is indifferent between them For this case, we let a_i receive the incident edge (if exists) that she values the most among those that are not allocated and are not chores for her (i.e., E_i^≥ 0∩ R(𝐗^1)). We then allocate e_i_l-1, i_l to a_i_l for every l ∈ [s] and release e_j, i_s. Figure <ref> illustrates the new (partial) orientation. Note that the new (partial) orientation may not satisfy property (2), since some agents who were non-envied are now envied by a_i and the incident edge e_i_s, j of a_j that was allocated now becomes unallocated. The good news is that we can prove that the new (partial) orientation is ^0_- and satisfies properties (1) and (4)-(7). Therefore, we can run Algorithm <ref> to obtain a (partial) orientation that is ^0_- and satisfies properties (1)-(7). We have the following claim. The (partial) orientation computed in Case 2 of Step 2-2 is ^0_- and satisfies properties (1)-(7) in Definition <ref>. Moreover, it is computed in polynomial time. Recall that by Claim <ref>, the initial (partial) orientation 𝐗^1 is ^0_- and satisfies properties (1) and (4)-(7). We first show that the new (partial) orientation as illustrated in Figure <ref> is also ^0_- and satisfies properties (1) and (4)-(7). Since no agent receives an edge that is a chore for her in the allocation process, property (4) holds. Moreover, for each agent, each other agent receives at most one of her incident edges, thus the new (partial) orientation is ^0_-. To see property (1), it suffices to consider agents a_i, a_j and a_i_l for l ∈ [s], since only a_i and a_i_l for l ∈ [s] receive a new bundle in the allocation process, and only a_j has an incident edge that was allocated previously but now becomes unallocated. For a_i, if she does not receive an edge, then all her unallocated incident edges are chores for her, thus property (1) holds for her. 
If she receives an edge, then the edge is the one she values the most among those in E_i^≥ 0∩ R(𝐗^1), thus property (1) also holds for her. For each l ∈ [s-1], a_i_l does not get worse off and her unallocated incident edges do not change, thus property (1) also holds for a_i_l. For the same reason, both a_i_s and a_j prefer their current bundles to each of their unallocated incident edges except e_i_s, j, which is the only edge that was allocated previously but is now unallocated. For a_i_s, she prefers her current bundle to e_i_s, j since she envied a_i_s-1 previously and now owns the edge e_i_s-1, i_s. For a_j, she prefers her current bundle to e_i_s, j since she did not envy a_i_s previously. Thus, property (1) also holds for both of them. Now consider properties (5)-(7). First, observe that except a_i_1, the agents who receive new bundles are now non-envied (i.e., a_i and a_i_l for each l ∈ [s] ∖{1}). To see this, since 𝐗^1 satisfies property (1), the other endpoint agent of the edge that a_i receives (if it exists) does not envy a_i. For each l ∈ [s] ∖{1}, a_i_l is non-envied since a_i_l-1 prefers her current bundle to e_i_l-1, i_l. Since each agent receives at most one edge, properties (5) and (6) still hold. For property (7), since a_i is the only agent who may become worse off, the only agents who were non-envied previously and now become envied and who were envied by some agent and are now envied by another agent are all a_i's neighbors. Each of them is envied by a_i who is now non-envied, thus there is still no envy cycle among the agents and property (7) still holds. Clearly, the allocation process for Case 2 of Step 2-2 runs in polynomial time. If the new (partial) orientation satisfies property (2), it also satisfies property (3). To see this, consider a_i and a_j_t. Since the edge that a_i owns currently is the one she values the most among those in E_i^≥ 0∩ R(𝐗^1) (including e_i, j) and she prefers e_i, j to e_i, j_t, she does not envy a_j_t. Moreover, since a_j_t only receives the edge between herself and a_i who is now non-envied, a_j_t is safe for all envied agents and property (3) holds. If the new (partial) orientation does not satisfy property (2), we run Algorithm <ref> once. By Claim <ref>, the final (partial) orientation is ^0_- and satisfies properties (1)-(7), and is computed in polynomial time. Case 3: a_i prefers e_i, j_t to e_i, j For this case, we allocate all edges in E_j^≥ 0∩ R(𝐗^1) as well as e_j, i_s to a_j, e_j_l-1, j_l to a_j_l for every l ∈ [t], and e_i_l-1, i_l to a_i_l for every l ∈ [s]. We then release e_i, j_t and let a_i receive the incident edge (if it exists) that she values the most among those that are not allocated and are not chores for her. Figure <ref> illustrates the new (partial) orientation. Similar to Case 2, the new (partial) orientation may not satisfy property (2), since some agents who were non-envied are now envied by a_i. But we can prove that the new (partial) orientation is ^0_- and satisfies properties (1) and (4)-(7). Therefore, after running Algorithm <ref>, we can obtain a (partial) orientation that is ^0_- and satisfies properties (1)-(7). We have the following claim. The (partial) orientation computed in Case 3 of Step 2-2 is ^0_- and satisfies properties (1)-(7) in Definition <ref>. Moreover, it is computed in polynomial time. The proof is similar to that of Claim <ref>. We first show that the new (partial) orientation is ^0_- and satisfies properties (1) and (4)-(7).
Since no agent receives an edge that is a chore for her in the allocation process, property (4) holds. Moreover, for each agent, each other agent receives at most one of her incident edges, thus the new (partial) orientation is ^0_-. To see property (1), first consider a_i. If a_i does not receive an edge, then all her unallocated incident edges are chores for her, thus property (1) holds for her. If she receives an edge, the edge is the one that she values the most among her unallocated incident edges that are not chores for her, thus property (1) also holds for her. For each l ∈ [s], a_i_l gets better off and her unallocated incident edges do not change, thus property (1) holds for her. For the same reason, property (1) also holds for a_j_l for each l ∈ [t-1]. For a_j, all her incident edges that are not chores for her are allocated, thus property (1) holds for her. For a_j_t, although e_i, j_t was allocated to her previously and is now unallocated, she prefers her current bundle to e_i, j_t since she envied a_j_t-1 and now owns the edge e_j_t, j_t-1. Thus property (1) also holds for a_j_t. Therefore, the new (partial) orientation retains property (1). To see properties (5)-(7), first observe that except a_i_1, the agents who receive new bundles are now non-envied. To see this, since 𝐗^1 satisfies property (1), the other endpoint agent of the edge that a_i receives (if it exists) does not envy a_i. For each l ∈ [s] ∖{1}, a_i_l is non-envied since a_i_l-1 prefers her current bundle to e_i_l-1, i_l. For the same reason, for each l ∈ [t], a_j_l is non-envied. For a_j, a_i_s does not envy her since a_i_s prefers e_i_s-1, i_s to e_j, i_s. The other endpoint agents of the edges in E_j^≥ 0∩ R(𝐗^1) do not envy her, either, since 𝐗^1 satisfies property (1). Since only a_j receives more than one edge and she is now non-envied, properties (5) and (6) still hold. For property (7), since a_i is the only agent who may become worse off, the only agents who were non-envied previously and now become envied, and who were envied by some agent and are now envied by another agent, are all a_i's neighbors. Each of them is envied by a_i who is now non-envied, thus there is still no envy cycle among the agents and property (7) still holds. If the new (partial) orientation satisfies property (2), it also satisfies property (3). To see this, consider a_j and a_j_1. Since a_j_1 only receives the edge between herself and a_j who is non-envied, a_j_1 is safe for all envied agents and property (3) holds. If the new (partial) orientation does not satisfy property (2), we run Algorithm <ref> once. By Claim <ref>, the final (partial) orientation is ^0_- and satisfies properties (1)-(7), and is computed in polynomial time. Now we are ready to prove Lemma <ref>. By Claims <ref>, <ref>, <ref>, <ref>, <ref>, we know that a (partial) orientation that is ^0_- and satisfies properties (1)-(7) can be computed in polynomial time. If it also satisfies property (8), then we are done. If the (partial) orientation does not satisfy property (8), then there exists a sequence of agents a_i_0← a_i_1←⋯← a_i_s such that a_i_l envies a_i_l-1 for every l ∈ [s], a_i_s is non-envied, and a_i_l is not safe for a_i_0 for some l ∈ [s]. By property (5), for each l ∈ [s-1], agent a_i_l only receives e_i_l, i_l+1 and does not receive any incident edge of a_i_0. Thus by property (2), a_i_l is safe for a_i_0. The only case in which property (8) is not satisfied is when a_i_s receives e_i_s, i_0 and is not safe for a_i_0. 
For this case, we run the same allocation process as for Case 1 of Step 2-2, as illustrated in Figure <ref>. Specifically, we allocate e_i_s, i_0 to a_i_0 as well as all her unallocated incident edges that are not chores for her, and allocate e_i_l-1, i_l to a_i_l for every l ∈ [s]. The above allocation process is repeated until property (8) is satisfied. By the same reasoning as in the proof of Claim <ref>, all the involved agents become non-envied, and ^0_- and properties (1)-(7) in Definition <ref> are retained. Therefore, the allocation process is repeated a polynomial number of times and gives a (partial) ^0_- orientation that satisfies properties (1)-(8).
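The reallocation used repeatedly in this proof, shifting the edge e_i_l-1, i_l to a_i_l along a chain of envied agents while the head of the chain collects her unallocated non-chore incident edges, is purely mechanical. The following Python fragment is only an illustrative sketch of that single step under simplifying assumptions (each agent's bundle is a set of edges, value(agent, edge) is a valuation oracle supplied by the caller, and the bookkeeping needed to verify properties (1)-(8) is omitted); it is not part of the algorithm as stated in the paper.

def edge(u, v):
    # Undirected edge between agents u and v.
    return frozenset((u, v))

def reallocate_along_path(bundles, unallocated, path, value):
    # bundles: dict agent -> set of edges; unallocated: set of edges
    # path: [a_i0, a_i1, ..., a_is], where each agent envies her predecessor
    # value(agent, e) -> float (negative means the edge is a chore for the agent)
    head = path[0]
    incident = {e for e in unallocated if head in e}
    gained = {e for e in incident if value(head, e) >= 0}
    bundles[head] = bundles.get(head, set()) | gained   # head collects non-chore edges
    unallocated -= gained
    for prev, cur in zip(path, path[1:]):
        e = edge(prev, cur)
        for b in bundles.values():                      # release e from its current owner, if any
            b.discard(e)
        unallocated.discard(e)
        bundles[cur] = {e}                              # a_il ends up holding exactly this edge
    return bundles, unallocated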
http://arxiv.org/abs/2409.02837v1
20240904160307
Evolution of radiation profiles in a strongly baffled divertor on MAST Upgrade
[ "Fabio Federici", "Matthew L. Reinke", "Bruce Lipschultz", "Jack J. Lovell", "Kevin Verhaegh", "Cyd Cowley", "Mike Kryjak", "Peter Ryan", "Andrew J. Thornton", "James R. Harrison", "Byron J. Peterson", "Bartosz Lomanowski", "Jeremy D. Lore", "Yacopo Damizia" ]
physics.plasm-ph
[ "physics.plasm-ph" ]
E-mail address: fabio.federici@ukaea.uk Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831, USA Commonwealth Fusion Systems, Cambridge, MA 02139, USA York Plasma Institute, University of York, Heslington, York, YO10 5DD, UK Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831, USA UK Atomic Energy Authority, Culham Centre for Fusion Energy, Abingdon, OX14 3DB, UK UK Atomic Energy Authority, Culham Centre for Fusion Energy, Abingdon, OX14 3DB, UK digiLab, The Quay, Exeter EX2 4AN, United Kingdom UK Atomic Energy Authority, Culham Centre for Fusion Energy, Abingdon, OX14 3DB, UK York Plasma Institute, University of York, Heslington, York, YO10 5DD, UK UK Atomic Energy Authority, Culham Centre for Fusion Energy, Abingdon, OX14 3DB, UK UK Atomic Energy Authority, Culham Centre for Fusion Energy, Abingdon, OX14 3DB, UK UK Atomic Energy Authority, Culham Centre for Fusion Energy, Abingdon, OX14 3DB, UK National Institute for Fusion Science, 322-6 Oroshi-cho, Toki 509-5292, Japan Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831, USA Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831, USA UK Atomic Energy Authority, Culham Centre for Fusion Energy, Abingdon, OX14 3DB, UK Electrical Engineering and Electronics Department, University of Liverpool, Liverpool, L69 3GJ, UK See author list of E. Joffrin et al 2024 Nucl. Fusion (https://doi.org/10.1088/1741-4326/ad2be4) See author list of J. Harrison et al 2019 Nucl. Fusion 59 112011 (https://doi.org/10.1088/1741-4326/ab121c) § ABSTRACT Plasma detachment in tokamaks is useful for reducing heat flux to the target. It involves interactions of the plasma with impurities and neutral particles, leading to significant losses of plasma power, momentum, and particles. Accurate mapping of plasma emissivity in the divertor and X-point region is essential for assessing the relationship between particle flux and radiative detachment. The recently validated InfraRed Video Bolometer (IRVB) diagnostic in MAST-U <cit.> enables this mapping with higher spatial resolution than more established methods like resistive bolometers. In previous preliminary work<cit.>, the evolution of radiative detachment was characterised in L-mode (power entering the scrape-off layer, P_SOL∼0.4MW). With a conventional divertor, the inner leg consistently detached ahead of the outer leg, and radiative detachment preceded particle flux detachment. This work also presents results from the third MAST-U experimental campaign, fuelled from the low field side instead of the high field side, including Ohmic and beam heated L-mode shots (with a power exiting the core up to P_SOL∼ 1-1.5MW). The radiation peak moves upstream from the target at lower upstream densities than the ion target flux roll-over (typically considered the detachment onset), while the inner leg detaches before the outer one. The movement of the radiation is in partial agreement with the expectations from the DLS model<cit.>, predicting a sudden shift from the target to the X-point. The energy confinement is found to be related to detachment, but there seems to be some margin between the radiation on the inner leg reaching the X-point and confinement being affected, a beneficial characteristic if it could be extrapolated to future reactors. For increasing P_SOL the particle flux roll-over is almost unaffected, while radiative detachment occurs at higher density in both legs, but much higher on the outer, suggesting an uneven distribution of the power exiting the core. 
Evolution of radiation profiles in a strongly baffled divertor on MAST Upgrade the MAST Upgrade team =============================================================================== § INTRODUCTION MAST-U is a spherical tokamak at the Culham Centre for Fusion Energy (CCFE) in the United Kingdom<cit.>. It features a double null (DN) plasma, strongly baffled divertor configurations, and can support an innovative Super-X divertor (SXD), which significantly reduces target heat loads and improves access and stability of plasma (e.g., 'particle') detachment<cit.>. In this work, we investigate the radiative power dissipation and its evolution as detachment progresses on MAST-U. To accurately measure the total emissivity profile, multiple resistive bolometer arrays are installed to monitor the core and divertor chamber. To complement the resistive system and fill the gap from the X-point to the divertor chamber (see <ref>), a prototype Infrared Video Bolometer (IRVB) was installed, aimed at the lower X-point. This diagnostic was recently validated<cit.> and its data have already been used to complement various scientific endeavors<cit.>. Until now, the resistive system has been affected by significant noise that, while still allowing the calculation of the total radiated power from the core, prevented detailed measurements of the movement of the radiation front in the divertor chamber. Although the IRVB cannot reconstruct the radiative emissivity map in the divertor chamber downstream of the baffle, due to its viewing location, its field of view (FOV) is adequate for investigating the radiative emissivity distribution in the Conventional Divertor (CD) configuration<cit.>. This paper presents the initial results from the analysis of the IRVB data from the first (MU01) and third (MU03) MAST-U experimental campaigns for L-mode CD shots, focusing on changes in the total radiated power spatial distribution along the divertor legs in connection with detachment. Usually a region with high total radiative emissivity is present on the divertor legs at the strike points or in the near scrape-off layer (SOL). When the core density increases, this region (also called the radiation front) moves upstream towards the X-point, 'detaches' from the target, moving along the separatrix. This is related to the ionisation front detachment and was the subject of significant modeling efforts using simplified analytical models, among which is the detachment location sensitivity (DLS) model, which aims to predict the location and sensitivity of the front<cit.> and will be compared here with experimental observations. § DIAGNOSTIC IMPROVEMENTS Before discussing our experimental results, we will first discuss the IRVB diagnostic implementation and its various improvements. IRVB measurements first started in MU01, further documented in references <cit.>. The geometry of the IRVB was optimised in MU02 to provide a more detailed view of the plasma around the X-point by retracting the foil from the pinhole, from 45mm to 60mm. A significantly thinner platinum absorber foil than expected (measured ∼ 0.72 μ m instead of nominal 2.5)<cit.> resulted in higher signal levels than expected <cit.>, enabling this modification. The IRVB geometry optimisation also reduced the portion of the foil shaded by the P6 coil (see <ref>) and increased the coverage of the divertor chamber (Figure 2.5a to 2.5b in <cit.>). The IRVB geometry was then verified, improving the accuracy of the geometrical calibration beyond the design specification. 
The internal pinhole location was accurately triangulated with sub-mm precision with CALCAM fits from multiple angles<cit.>, returning a ∼3.9mm shift with respect to target parameter. Together with the exact location of the IRVB flange on the vacuum vessel, the precise FOV for MU02, MU03, and the future MU04 was determined, shown in <ref>. This improved IRVB FOV characterisation has greatly improved the accuracy of IRVB measurements, enabling it to distinguish radiation at or slightly (a few cm) away from the plasma surface facing components in the divertor. After a camera image is obtained by the IRVB, a Bayesian tomographic inversion is performed to obtain a 2D radiative emissivity map. This inversion is performed with an arbitrary regularisation coefficient to reduce noise on the inversion. Unlike previous results, spatial binning has been disabled, improving the inversion by making use of the full resolution available from the camera. A running average smoother (∼ 30ms) is applied over time to remove temporal oscillations presented previously<cit.>. § RADIATIVE EMISSIVITY RESULTS L-mode DN shots are analysed in this paper, as upstream conditions are difficult to control in H-mode. The data is from the MU01 and MU03 campaigns. The only impurity present in significant quantities is the intrinsic carbon from the walls. MU01 was the first experimental campaign in MAST-U and it was often characterised by the presence of MHD activity (possibly influenced by error fields) and imprecise plasma control, which negatively affected the overall plasma performance (shots 45468, 45469, 45470, 45473). The shots are Ohmically heated with fueling from the high field side (HFS), have a plasma current I_p=600kA and have a power crossing the separatrix (P_SOL) ∼ 0.4MW (determined as the Ohmic power minus the power radiated in the core from resistive bolometry). In MU03 similar shots were performed with a more optimised scenario, yielding better overall performance, but with a higher starting density, so the transition of the radiation detaching on both legs cannot be observed (shots 47950, 47973, 48144). The shots are Ohmically heated with main fueling from the low field side (LFS), I_p=750kA and P_SOL∼ 0.5-0.6MW. LFS fuelling was employed to enable higher power L-mode operation, making the scenario compatible with off-axis (SW) 1.5-1.8 MW NBI heating <cit.> by raising the L-H threshold. These beam-heated L-mode discharges allow us to verify if the detachment evolution changes with a higher P_SOL and to better probe the initial stages of detachment (at higher q_∥ a higher n_u is expected to be required to achieve detachment), featuring I_p=750kA and P_SOL∼ 1-1.5MW. First we will describe the typical evolution of the radiative emissivity during a core density ramp where the divertor is progressively cooled in <ref>. This indicates that the radiation region detaches from the inner target at lower densities than the outer target. The outer leg dissipates significantly more power than the inner one, due to its larger volume integral. After the inner leg radiation moves towards the X-point, it progresses upstream the X-point along the inner separatrix before the target radiation disappears from the outer strike point. A MARFE-like structure appears at the HFS midplane even before the outer leg radiatively detaches. This is likely caused by HFS fuelling, as the inner gap (∼ 4 cm) is sufficiently large to avoid interactions with the HFS column and this phenomenon does not occur in shots fuelled from the LFS. 
The presence of the MARFE-like structure is observed in interpretative SOLPS simulations <cit.> and is confirmed by high speed visible light imaging, Thomson scattering (TS) measurements (which show a region of high density and low temperature penetrating the core from the HFS midplane) and resistive bolometry (increased brightness of the LOS close to the central column). The latter is routinely used to verify if a peaked emission towards the inner midplane in the IRVB inversions constitutes an artefact or a true MARFE-structure occurrence (a phenomenon more common with the optimised IRVB FOV). Although the spatial resolution is insufficient to distinguish core and SOL radiation near the separatrix, a clear inward movement towards the core is observed at higher electron densities, by comparing <ref> and <ref> with <ref>. § MODEL PREDICTIONS Our aim is to compare the movement of the total radiation distribution against model expectations. The Two Point Model (2PM)<cit.> was later modified to include the presence of a thermal or radiation front in the Thermal Front Model<cit.>. This model was further refined to consider the leg geometry and magnetic field, resulting in the Detachment Location Sensitivity (DLS) model<cit.>. Assuming that thermal front movement corresponds to detachment, this reduced model predicts that for given magnetic geometry (e.g., different inner and outer leg topologies (CD, Super-X)) the location of the detachment front depends on the control parameter C = n_u √(f_I)/q_∥,u^5/7, containing upstream conditions (upstream heat flux q_∥, u, upstream electron density n_u) and the divertor impurity concentration f_I. The impact of the magnetic geometry on the detachment onset and evolution can be quantified via the coefficient C_1 (s_∥, f) = B_f/B_u[ ∫_s_X^L_∥B(s_∥)/B_X(L_∥ - s_∥)/(L_∥ - s_X) ds_∥ + ∫_s_∥,f^s_XB(s_∥)/B_X ds_∥]^-2/7 where s_∥ <cit.> is the position along the separatrix and f denotes quantities at the front location (s_∥, f = 0 corresponds to the detachment front at the target: the detachment onset). Assuming a constant electron heat conductivity coefficient and radiative cooling function (it depends on the impurity species and its transport), n_u √(f_I)/q_∥,u^5/7∝ C_1. Therefore, the front location can be modelled as a function of magnetic geometry C_1 (s_∥, f) in terms of changes in n_u, f_I, q_∥,u. If f_I and P_SOL are constant and the MAST scaling law between power decay length at the outer midplane (λ_q) and n_u from <cit.> is used, this further reduces to n_u^1.5 ∝ C_1 (s_∥, f). The sensitivity and stability of the detachment front at a certain location (s_∥,f) can be modelled as d C_1/d s_∥, f. If C_1 decreases from the target to the X-point (d C_1/d s_∥, f < 0), the detachment front location is unstable, as a slight perturbation towards the X-point increases the power dissipated in it, pushing the front further upstream. Conversely, d C_1/d s_∥, f > 0 implies an intrinsic detachment front stabilisation. For a typical MAST-U CD configuration, the C_1 profile along the inner and outer divertor leg is shown in <ref>. The front on the inner leg is unstable up to a location very close to the X-point, while the front on the outer leg is stable. This model behaviour can be verified with IRVB measurements. § FRONT CHARACTERISATION To compare the DLS predictions with the IRVB data, it is essential to define what is meant by the term (detachment) front. 
In the DLS and other models, this is defined as an infinitely narrow region where "the electron temperature transitions between the hotter upstream region and the colder region below which is dominated by ionization, recombination, and other neutral processes"<cit.>. This region is associated with high total radiated power, usually attributed to the presence of impurities in the plasma that radiate efficiently at the temperature typical of the SOL and divertor, causing the necessary power losses. The front is idealised in the DLS model as infinitely narrow, so the (impurity) radiation front and the ionisation (detachment) front coincide. In reality this is not the case and a clear separation between the impurity radiation region and the ionisation/detachment front is observed <cit.>. Furthermore, the emissivity seldom has a single well defined peak that identifies the radiation front. A simple way is to identify the front as the region with the peak emissivity along a divertor leg and thus the highest radiative dissipation. The current MAST-U IRVB implementation, though, is characterised by significant absorber foil properties non-uniformities<cit.>, which causes local variations affecting in particular the peak emissivity obtained from the tomographic inversion. Foil non-uniformities are planned to be reduced in the future by replacing the foil with one produced with vapour deposition processes<cit.>. Additionally, the radiation front is not strongly localised on MAST-U as both impurity radiation and, downstream of the impurity radiation region, electron-impact excitation (EIE) results in significant radiative power loss <cit.>. EIE is the largest contributor to the total radiated power within the divertor chamber (in a purely hydrogenic plasma<cit.>). Due to the size of the radiation region and foil non-uniformities, the radiation peak location cannot be obtained reliably. However, the location where the radiation has a sharp decrease can be tracked more reliably and is likely reminiscent of a thermal front where T_e becomes too low (<3-5 eV) for EIE and ionisation <cit.>. Hence, the radiation region or 'front' is tracked experimentally also by tracking the location where the radiative emissivity reaches a set fraction (50% is used here) of the peak emission in the divertor leg, analogous to techniques that have tracked the ionisation front using imaging <cit.>. The peak radiative emissivity corresponds to the peak in total radiation (hydrogenic + impurity), whose movement we define as the beginning of 'radiative detachment'. This is not to be confused with (particle) detachment, when the ionisation source detaches from the target and the ion target flux rolls-over, which is in better agreement with the 50% falloff point of the radiative emissivity. In the remainder of the paper, both the peak of radiative emissivity and the 50% falloff are tracked in terms of poloidal distance from the target along the separatrix divided by the poloidal distance from the X-point to target, called L̂_x: 0 corresponds to the target, 1 corresponds to the X-point, >1 implies locations upstream of the X-point towards the midplane. It should be noted that given the IRVB FOV, the radiation cannot be reliably tracked on the outer separatrix past the X-point. To put the evolution of these two radiative power markers into perspective, their evolution is compared with the total target particle flux. 
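Before turning to that comparison, a minimal sketch may help make the two markers concrete. The Python fragment below, which is purely illustrative and not the MAST-U analysis code, extracts the normalised position L̂ of the peak emissivity and of the 50% falloff point from an emissivity profile sampled along a divertor leg; the array names, the linear interpolation, and the handling of profiles that never drop below threshold are assumptions made for the example.

import numpy as np

def radiation_front_markers(s_pol, emissivity, s_target=0.0, s_xpoint=1.0, frac=0.5):
    # s_pol: poloidal distance from the target along the separatrix (monotonically increasing)
    # emissivity: local radiated power density at each s_pol sample
    # Returns (L_hat_peak, L_hat_frac), normalised so that 0 is the target and 1 is the X-point.
    s_pol = np.asarray(s_pol, dtype=float)
    emis = np.asarray(emissivity, dtype=float)
    norm = lambda s: (s - s_target) / (s_xpoint - s_target)

    i_peak = int(np.argmax(emis))
    l_peak = norm(s_pol[i_peak])

    # 50% falloff: first crossing of frac*peak on the target side of the peak,
    # located by linear interpolation between the bracketing samples.
    threshold = frac * emis[i_peak]
    below = np.where(emis[: i_peak + 1] < threshold)[0]
    if below.size == 0:
        l_frac = 0.0                       # profile never drops below threshold before the peak
    else:
        j = below[-1]                      # last sample below threshold before the peak
        s_cross = np.interp(threshold, emis[j : j + 2], s_pol[j : j + 2])
        l_frac = norm(s_cross)
    return l_peak, l_frac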
Before detachment, the target particle flux first increases as the upstream/core density is increased, as predicted by the 2PM. Due to a combination of power limitation, volumetric recombination and momentum losses, the particle flux first plateaus (detachment onset) and then decreases or 'rolls-over' (detachment). In MAST-U, Langmuir probes (LPs) are used to monitor the particle flux, and they are present only at the outer strike points (both upper and lower in DN).<cit.> The roll-over of the particle flux is expected to be associated with the detachment of the ionisation region, and thus (if hydrogenic radiation is at least comparable with the impurity driven one) the 50% fall-off point of the radiative emissivity, from the target. The detachment of the ionisation region from the target can be observed more directly using the D_2 Fulcher band emission front as a proxy for the ionisation region <cit.>. For a CD, this can be observed by monitoring the brightness of the D_2 Fulcher emission near the strike point using the DMS diagnostic <cit.>. Although not shown here, the reduction of D_2 Fulcher brightness at the target is in agreement with the ion target flux roll-over within experimental uncertainties (e.g. it occurs at slightly higher upstream density (0.05/0.1 · 10^19#/m^3)). This confirms the self-consistency of these two detachment metrics. Finally, a movement of the radiation front upstream implies that a region of 'cold' plasma moves towards the core. Work on other tokamaks has shown that core confinement deteriorates when this radiation front is near the X-point and, subsequently, enters the core <cit.>. Such core deterioration is detrimental for future fusion reactors. To monitor this, we compare the evolution of detachment with the energy confinement time, τ_ th, defined by dW/dt=P_heat - W/τ_ th with W the stored energy and P_heat the power injected in the core. § FRONT TRACKING RESULTS After having discussed the general evolution of the radiative emissivity and defined our two different radiative front identifiers, we show their evolution as a function of upstream density at the outer separatrix midplane (n_e,up EFIT). n_e,up EFIT is obtained from smoothed Thomson scattering (TS) data and the location of the outer midplane separatrix from EFIT<cit.>. An important caveat to this technique is that n_e,up EFIT has large uncertainties due to the uncertainty of the separatrix position determination. Although this can cause systematic uncertainties between different discharges, the trend here identified by n_e,up EFIT should be relatively reliable, as similar dependencies are found in terms of Greenwald fraction. The density on the inner side is assumed to be the same, as the variation of the plasma parameters on the same flux surface in the core is limited, although this is not necessarily true for HFS fuelled discharges<cit.>. <ref> compares our tracking results with the ion target flux measurements, energy confinement time and dr_sep for the MU01 and MU03 Ohmic shots. In MU01, the roll-over is quite clear on both outer strike points, happening at n_e,up EFIT ∼ 0.5 × 10^19#/m^3, and the particle flux is quite up/down symmetric. 
This is the case even if in these shots dr_sep (the distance between the lower and upper separatrix at the outer midplane; if positive the core is more strongly coupled with the upper divertor and vice versa) is 0/-6mm, with λ_q in the range 5-15 mm (consistent with MAST scaling laws <cit.>). Such a large dr_sep/λ_q ratio is expected to cause significant heat flux asymmetries <cit.>, which seems to be inconsistent with the particle flux symmetry and the lack of asymmetries observed in high speed camera data or resistive bolometry measurements. This may indicate inaccuracies in the dr_sep retrieved from EFIT for MAST-U or larger λ_q than currently determined or expected from scaling laws. The start of the peak radiation movement from the inner target across all MU01 discharges happens at a slightly lower n_e,up than the outer peak radiation movement and thus at a lower n_e,up than the particle flux roll-over. The movement of the peak radiation is not regular for both divertor legs and there appear to be discrete steps, likely due to the foil non-uniformity previously mentioned. However, the radiation evolution difference is more noticeable on the 50% falloff marker. As mentioned, because these discharges are fuelled from the HFS, the radiation on the inner separatrix reaches the HFS midplane before the outer leg is fully detached with both markers. The evolution of the 50% falloff marker is very gradual on the outer divertor leg, in contrast to the inner leg where the radiation quickly disappears from the target and re-appears near the X-point (e.g. there are almost no points between L̂=0 and 0.8). This abrupt movement of the radiation on the inner leg, but not on the outer leg, is in agreement with the DLS model prediction in <ref> of stable detachment on the outer leg and unstable detachment on the inner leg. Further research is required using post MU02 discharges, where the IRVB viewing geometry was fully verified, to obtain further confidence in this conclusion. Lastly, the confinement time initially increases due to the increase in stored energy at the beginning of the shot, but as soon as the peak emission moves from the target there is a strong degradation. This is true also for the ratio of τ_ th to the confinement time from scaling laws identified in <cit.>. This is most likely due to the negative effect of regions with high emissivity and low temperature inside or close to the core. Ohmic shots from MU03 have higher I_p and P_SOL and reduced interactions with the main vessel and baffle plates. This increases the particle flux roll-over point from a Greenwald fraction of 0.22 to 0.35. Although | dr_sep| /λ_q is reduced (e.g. | dr_sep| <1mm with λ_q between 5-15 mm), the LPs measure a noticeably higher particle flux on the upper outer leg. This is consistent with recent experiments and simulation studies that indicate significant up/down asymmetries in the MAST-U plasma due to drifts in a connected DN, although further studies are required, particularly given the uncertainty of dr_sep.<cit.> The initial upstream density is not low enough to witness the detachment of the peak emission on either leg, but the 50% falloff still detaches from the inner target at a lower n_e,up than at the outer target, much earlier than the outer target roll-over. A rapid movement of the inner leg radiation region towards the X-point is still observed, although only a few points of L̂_50%≈ 0 are observed. The outer leg L̂_50% evolution is more gradual, in agreement with the older MU01 results. 
As the density increases, the emissivity on the entire inner separatrix increases, with similar peak emissivities near the midplane and the X-point (causing L̂_peak to oscillate), whereas the 50% falloff marker remains quite stable near the X-point. When the outer leg radiation reaches the X-point, the radiation on the inner separatrix rises further upstream. Apart from the very end of shot 47950, there is no evidence of the presence of a MARFE-like structure localised at the HFS midplane in either IRVB or resistive bolometry data. The lack of such a MARFE structure results in less degradation of P_SOL, likely explaining why the outer target flux roll-over and all phases of radiative detachment occur over a much larger range of upstream density. The energy confinement time is increased and seems to peak at a higher n_e,up than the start of the outer leg radiative detachment (L̂_peak>0) and the inner separatrix radiation reaching the X-point (L̂_50%≈ 1). The relative decrease of confinement time is also lower than in MU01 and occurs at much higher n_e,up. However, particle detachment (e.g. the outer target particle flux roll-over) happens at a higher n_e,up than the confinement peak, implying that outer target detachment (for the CD), without impurity seeding, requires a degradation of confinement for this Ohmic L-mode scenario. The results for the MU03 beam heated L-mode shots are shown in <ref>. The ratio of up/down outer particle flux is unchanged, but both particle fluxes are increased due to the higher P_SOL. The outer target ion flux increases initially more sharply than observed in the Ohmic cases, likely as the initial density achieved is lower and P_SOL is significantly higher. Because of this, the ion target flux evolution shows a clearer roll-over, although the reduction of ion target flux occurs at similar core and upstream densities. Due to the lower n_e,up/higher P_SOL starting point, it is possible to observe the inner leg L̂_50% unstable transition from the target to the X-point, consistent with the DLS model prediction. Curiously, L̂_50% transitions from 0 to 0.6, rather than 0.8. This could be consistent with a larger extent of the radiation front. Including the extent of the thermal front (rather than assuming it is infinitely narrow) is one of the goals of the DLS-Extended model<cit.>. A larger front has the effect of averaging the magnetic characteristics in a set region of the leg, providing a larger stability window for the front. The outer leg 50% falloff marker movement happens at about the same density as the particle flux roll-over, at much higher n_e,up compared to the inner leg. The energy confinement time profile is quite similar to the MU03 Ohmic case, suggesting that the increased external heating results in a proportional stored energy increase. The confinement peak occurs while L̂_50% approaches 1 on the inner leg, but before it starts to increase on the outer leg. This seems to imply that the difference in outer leg detachment does not have a significant impact on confinement. This might be due solely to the inner leg detaching first and the radiation starting to cross inside the core from there, and if a different partition of P_SOL between inner and outer leg could be achieved the relationship could be reversed. It might be possible to achieve this in single null, when more power is directed to the inner leg. 
§ CONCLUSION AND FUTURE WORK In this paper, the first scientific results exploiting the new MAST-U infrared imaging bolometer (IRVB) are presented using L-mode conventional divertor density ramp discharges. The radiation along the inner divertor leg sharply transitions from near the target to near the X-point as density is increased. In contrast, on the outer leg the radiation front detachment evolves gradually from the target to the X-point. The DLS model <cit.> suggests an unstable thermal front evolution at the inner target and a stable thermal front evolution at the outer target, consistent with experiments. After both radiation fronts have detached from the target, ultimately particle detachment is observed from the ion target flux roll-over. The fueling location can have a strong impact on the radiation evolution: fueling from the HFS causes the emergence of a MARFE-like structure at the HFS midplane that can then penetrate the core and affect core performance. Resolving this limitation and reducing plasma surface interactions with the main vessel wall resulted in a clearer separation between radiation detachment and particle flux detachment on the outer leg. The degradation of the energy confinement time occurs at higher upstream densities than the complete radiative detachment on the inner leg, but lower than the particle flux detachment. Hardware upgrades for improved measurements are underway, including a replacement of the existing IRVB IR camera, improving signal to noise levels and enabling a higher time resolution, to improve the monitoring of transients. In the next MAST-U vacuum breach, the foil will be replaced with a more uniform one<cit.>, and a second IRVB installed aimed at the upper X-point, to assess up/down asymmetries and provide a full bolometric coverage of the entire plasma volume. Future experimental campaigns, which should try to start from a lower initial upstream density, are planned to investigate higher power H-mode conditions, up/down symmetries<cit.> and the impact of inner target geometry. This work is supported by the US Department of Energy, Office of Fusion Energy Sciences under the Spherical Tokamak program, contract DE-AC05-00OR22725 and under the auspices of the EPSRC [EP/L01663X/1]. Support for M. L. Reinke’s contributions was in part provided by Commonwealth Fusion Systems. This work has been carried out within the framework of the EUROfusion Consortium, funded by the European Union via the Euratom Research and Training Programme (Grant Agreement No 101052200-EUROfusion) and from the EPSRC [grant number EP/W006839/1]. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Commission. Neither the European Union nor the European Commission can be held responsible for them. § REFERENCES Federici2023 F. Federici et al., Review of Scientific Instruments, vol. 94, no. 3, p. 033502, 3 2023. Federici2023a F. Federici et al., in 70th Annual Meeting of the APS Division of Fluid Dynamics, Denver, 2023, p. 1. Myatra2021 O. Myatra, Ph.D. dissertation, University of York, 2021. Cowley2022 C. Cowley et al., Nuclear Fusion, vol. 62, no. 8, 2022. Lipschultz2016 B. Lipschultz et al., Nuclear Fusion, vol. 56, no. 5, 2016. Morris2018 W. Morris et al., IEEE Transactions on Plasma Science, vol. 46, no. 5, pp. 1217–1226, 2018. Fishpool2013 G. Fishpool et al., Journal of Nuclear Materials, vol. 438, no. SUPPL, pp. S356–S359, 7 2013. Moulton2024 D. Moulton et al., Nuclear Fusion, pp. 
1–50, 5 2024. Verhaegh2022 K. H. Verhaegh et al., Nuclear Fusion, vol. 63, no. 1, p. 21, 2022. Verhaegh2023a K. Verhaegh et al., Nuclear Fusion, vol. 63, no. 12, 2023. Soukhanovskii2022a V. A. Soukhanovskii et al., Nuclear Materials and Energy, vol. 33, no. June, 2022. Henderson2024 S. S. Henderson et al., Submitted to Nuclear Materials and Energy, 2024. Federici2023b F. Federici, Ph.D. dissertation, University of York, 2023. Federici2024 F. Federici et al., Submitted to Review of Scientific Instruments, pp. 1–5, 2024. Silburn2020 S. A. Silburn et al., “Calcam,” 2020. Rivero-Rodriguez2018 J. F. Rivero-Rodriguez et al., in 45th EPS Conference on Plasma Physics, EPS 2018, vol. 2018-July.1em plus 0.5em minus 0.4emEuropean Physical Society, 2018, pp. 233–236. Verhaegh2023b K. Verhaegh et al., submitted to Nuclear Fusion, pp. 1–30, 11 2023. Stangeby2001 P. Stangeby, ser. Series in Plasma Physics.1em plus 0.5em minus 0.4emCRC Press, 1 2000, vol. 43, no. 2. Hutchinson1994 I. H. Hutchinson, Nuclear Fusion, vol. 34, no. 10, pp. 1337–1348, 1994. Harrison2013 J. Harrison, G. Fishpool, and A. Kirk, Journal of Nuclear Materials, vol. 438, pp. S375–S378, 2013, proceedings of the 20th International Conference on Plasma-Surface Interactions in Controlled Fusion Devices. Wijkamp2022 T. A. Wijkamp et al., Nuclear Fusion, vol. 63, no. 5, p. 056003, 5 2023. Ryan2023 P. J. Ryan et al., Review of Scientific Instruments, no. to be submitted, 2023. Kallenbach2015a A. Kallenbach et al., Nuclear Fusion, vol. 55, no. 5, 2015. Reinke2013 M. L. Reinke, in 7th IAEA Technical Meeting on Steady State Operation of Magnetic Fusion Devices, 2013. Reimold2015 F. Reimold et al., Nuclear Fusion, vol. 55, no. 3, 2015. Lao1985 L. L. Lao et al., Nuclear Fusion, vol. 25, no. 11, pp. 1611–1622, 1985. Fevrier2021 O. Février et al., Nuclear Fusion, vol. 61, no. 11, 2021. Lovell2024 J. J. Lovell et al., Submitted to Nuclear Materials and Energy, 2024. ParadelaPerez2024 I. Paradela Pérez, Nuclear Fusion, in preparation, 2024. Kryjak2024 M. Kryjak, 50th annual IOP Plasma Physics Conference, York, Tech. Rep., 2024.
http://arxiv.org/abs/2409.03126v1
20240904233603
Co-Developing Causal Graphs with Domain Experts Guided by Weighted FDR-Adjusted p-values
[ "Eli Y. Kling" ]
stat.ME
[ "stat.ME" ]
Eli Y. Kling Email: eli.kling@avanade.com Alternate Email: eli_kling@hotmail.com LinkedIn: https://www.linkedin.com/in/elikling/ Avanade, London, UK Co-Developing Causal Graphs with Domain Experts Guided by Weighted FDR-Adjusted p-values September 2024 ======================================================================================== § ABSTRACT This paper proposes an approach facilitating co-design of causal graphs between subject matter experts and statistical modellers. Modern causal analysis, starting with the formulation of causal graphs, provides benefits for robust analysis and well-grounded decision support. Moreover, this process can enrich the discovery and planning phase of data science projects. The key premise is that plotting relevant statistical information on a causal graph structure can facilitate an intuitive discussion between domain experts and modellers. Furthermore, hand-crafting causality graphs, integrating human expertise with robust statistical methodology, helps ensure responsible AI practices. The paper focuses on using multiplicity-adjusted p-values, controlling for the false discovery rate (FDR), as an aid for co-designing the graph. A family of hypotheses relevant to causal graph construction is identified, including assessing correlation strengths, directions of causal effects, and how well an estimated structural causal model induces the observed covariance structure. An iterative flow is described where an initial causal graph is drafted based on expert beliefs about likely causal relationships. The subject matter expert's beliefs, communicated as ranked scores, could be incorporated into the control of the measure proposed by Benjamini and Kling, the FDCR (False Discovery Cost Rate). The FDCR-adjusted p-values then provide feedback on which parts of the graph are supported or contradicted by the data. This co-design process continues, adding, removing, or revising arcs in the graph, until the expert and modeller converge on a satisfactory causal structure grounded in both domain knowledge and data evidence. § INTRODUCTION The current common practice for conducting data science development projects is to kick off with a "Discovery" phase, where the problem is defined. This phase is often conducted in a workshop setting, where the data scientist and the subject matter expert work together to specify the target, postulate potential drivers or features and identify the data sources. Many frameworks specify a 'hypothesis workshop' as part of the kick-off and shaping steps. These sessions are usually conducted before the statistical modeller sees the data and are typically meant as a vehicle for identifying and prioritising relevant data sources. This practice encourages blowing up the list of features to "throw" at a model. Automatic feature selection and modelling tools have been developed to address the tsunami of features. The increasing reliance on complex models and automated data-mining systems raises important questions about interpretability, inference and correct methodology implementation [<cit.>]. For example, <cit.> voices a paradigm common to statistical modelling and data mining: perform an automatic variable selection and then allow the expert to overlay the business or physical context. This practice results in models that are not easily explained, which also poses a challenge to implementing responsible AI guidelines. 
The issues and concerns above are exacerbated when the models are used to drive decisions and actions. The nature of many challenges in the business and manufacturing worlds is such that it is often impossible to design experiments to cleanly assess the impact of decisions and actions on outcomes. For instance, the Market Mix Methodology (MMM) attempts to attribute success to marketing activities in a world of confounding co-activities and poor data [<cit.>]. In cases where experimentation is not possible, there is a risk that misspecification and misuse of confounders will not only result in wrong action-to-outcome impact estimates but could also indicate the wrong direction of effect, as demonstrated by Simpson's paradox. As <cit.> state, “The Simpson's paradox is not so much of a paradox but rather a warning of how sensitive causal reasoning can be with respect to model misspecifications.” A robust approach to address these situations is to deploy techniques developed in the field of statistical causality analysis [<cit.>]. Modern statistical causal analysis, exemplified by Pearl and Mackenzie <cit.>, begins with the formulation of a causal graph. This approach offers several benefits to the quality and robustness of analysis and decision support. The causal graph serves as an intuitive tool that bridges the knowledge and beliefs of a subject matter expert (SME) with the statistical modeller's data-driven insights. More importantly, Pearl <cit.> emphasises that mapping causal mediating factors illuminates 'intrinsic properties of reality that have tangible policy implications'. <cit.> discusses the tension between the two analysis goals, explainability and predictiveness. Each has a different starting point and nuances of the methodology used. Correctly specifying a causal graph has the benefit of explainability and brings to the fore any potential introduction of bias that is not compliant with responsible AI guidelines. Once an explainable model is crafted, the statistical modeller may proceed to fit a prediction-oriented model guided by the insights of the discovery step. This is in contrast to <cit.>: "Using complex predictors may be unpleasant, but the soundest path is to go for predictive accuracy first, then try to understand why". A growing body of research explores graphical methods to visualise and communicate the results of causality analysis. For instance, <cit.> discuss supporting lay users with no specific expertise in machine learning, promoting an interactive approach aimed at "emotionally connecting" the subject matter expert. While the field of automatically discovering causal graphs is active [<cit.>], this paper focuses on an approach to aid discussion between the statistical modeller and the subject matter expert while constructing a causality graph. The discussion explores the formation of a causality graph as a tool to be used during the discovery and exploratory data analysis phase. This paper posits that plotting relevant information on a causality graph facilitates discussion between the statistical modeller and the subject matter expert. This approach does not argue for setting aside rigour for the sake of simplifying the discussion. The False Discovery Rate (FDR) or False Discovery Cost Rate (FDCR) is an appropriate measure for introducing multiplicity considerations in an intuitive way. 
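As a hypothetical illustration of this kind of display, the sketch below draws a small invented causal graph and annotates its edges with Benjamini-Hochberg (FDR) adjusted p-values; the variable names, raw p-values, and 5% level are made up for the example, and the plotting choices are not taken from the paper.

import matplotlib.pyplot as plt
import networkx as nx
from statsmodels.stats.multitest import multipletests

# Hypothetical causal graph and per-edge raw p-values (e.g. from edge-coefficient tests)
edges = {("Promo", "Traffic"): 0.004, ("Traffic", "Sales"): 0.012,
         ("Price", "Sales"): 0.030, ("Season", "Traffic"): 0.470}

reject, p_adj, _, _ = multipletests(list(edges.values()), alpha=0.05, method="fdr_bh")

G = nx.DiGraph()
for (u, v), keep, p in zip(edges, reject, p_adj):
    G.add_edge(u, v, label=f"adj-p={p:.3f}", supported=keep)

pos = nx.spring_layout(G, seed=1)
colors = ["black" if G[u][v]["supported"] else "lightgrey" for u, v in G.edges]
nx.draw_networkx(G, pos, node_color="lightsteelblue", edge_color=colors, arrows=True)
nx.draw_networkx_edge_labels(G, pos, edge_labels=nx.get_edge_attributes(G, "label"))
plt.axis("off")
plt.show()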
Figure <ref> demonstrates that this type of presentation is self-explanatory and suitable for both systematic-level discussions with subject matter experts and modelling-level discussions with trained statisticians. This paper proposes an approach to guide the construction of a causality graph that will underpin inference, estimation, and predictions. For simplicity but without loss of generality, the approach describes stepwise crafting of a Structural Equation Model (SEM) or Structural Causal Model (SCM) where the adjustment of the False Discovery Rate (FDR) is deployed as an aid and guide [<cit.>]. The paper starts with discussing which hypotheses are relevant to the co-design of a causality graph (Section <ref>). Once a family of hypotheses is defined, Section <ref> describes the control of the False Discovery Rate (FDR) and its generalisation to FDCR using costs of false alarms. Section <ref> demonstrates the use of such an adjustment in the construction of a Structural Causal Model (SCM) using a toy example. The paper concludes with Section <ref>, touching on thoughts not explored here. § IDENTIFYING THE FAMILY OF HYPOTHESES This paper follows a paradigm of collaboratively working with subject matter experts (SMEs) to translate their expertise and experience into a hypothesised causality graph. It is natural to transform this process into a set of hypotheses that reflect the SME's beliefs. The next logical step is applying some multiplicity control. The False Discovery Rate (FDR) is a natural multiplicity measure for this setting. Before delving into multiplicity error measures and their control, it is important to define the family of hypotheses and understand the co-dependence between the hypotheses, the statistics, and the resulting p-values. [<cit.>] discuss using Pearl's theorems on the properties of conditional independence relations [<cit.>] and avoiding the execution of some statistical tests to reduce the computational load. A note of caution: avoiding calculating a test does not exclude the hypothesis from the family if it is pertinent to structuring the causality graph, unless its correlation to another hypothesis is 1. In the context of assessing a causality graph, hypotheses are postulated based on the SME's experience, their prior beliefs, and suggestions derived from exploratory analysis. As <cit.> points out, the hypotheses focus on the discovery of edges in the Directed Acyclic Graph (DAG). The identification of the relevant hypotheses is somewhat derived from the typical co-design process. Note how the choice of modelling is incorporated here: * Identify the outcomes of interest and decisions that are likely to affect the outcomes * Use the Six Sigma fish–bone diagram to list potential drivers of the outcomes * Obtain, clean, and prepare data for the information listed in the fish–bone diagram * Calculate the correlations and their corresponding p-values (the null hypothesis for each pair is that there is no correlation or co-dependence) * Assign a causal direction to each pair (could also be 'no causal relationship') * Assign a belief score to each postulated causal relationship. For instance: 0 = Causal relation is not possible 1 = Could be a causal relationship 2 = It is likely this is a causal relationship 3 = Known causal relationship * Using the causality graph defined by (4) & (5), fit a Structural Causal Model (SCM). 
This naturally defines a sub-family of hypotheses: * The model does not explain the data (tested with a set of fit statistics) * The model coefficients are zero * The mean of the residuals is zero (could also hypothesise on the variance of the residuals) * A further set of hypotheses is based on the assertion that the correlations induced by the SCM in (7) are the same as seen in (4). This is in line with the field of structure learning. For example, <cit.> advise that a DAG should be as close as possible to the global dependence structure. Step (8) is linked notionally to the definition of faithfulness. A graph is faithful to some distribution if the graph connectivity represents exactly the dependencies and independences dictated by the distribution (see, for instance, <cit.>). Moreover, rather than examining the correlations, it makes sense to work with covariances so that the direction is also considered. The hypotheses in (8) provide important feedback to the co-design process, as 'unattended' correlations would be highlighted. However, the way (8) is formulated above results in counter-intuitive p-values (big is good) and does not really prove the SCM is reflecting the covariances and variances. Rather, it only shows that there is no evidence to refute that it does. (8) should be couched similarly to equivalence testing, where the null hypotheses are that the covariances and variances induced by the SCM are different from the observed ones: H_0^i: |ρ_scm^i - ρ_data^i| ≥δ H_1^i: |ρ_scm^i - ρ_data^i| < δ where, for n variables, ρ_data^i (i = 1, …, n(n+1)/2) are the observed correlations in the data, ρ_scm^i (i = 1, …, n(n+1)/2) are the matching correlations induced by the SCM, and δ is a threshold parameter. It is customary to form the above as two one-sided hypothesis tests with two p-values that should be small if the two measures are similar. Examining the p-values in (7) and (8) informs suggested changes to the causality graph. Steps (7) and (8) are repeated until the Modeller and the SME are satisfied. Thus, multiplicity of hypothesis control is called for. As the process could be viewed as a screening process where the proportion of true null hypotheses is small, the control of the False Discovery Rate (FDR) is appropriate. That is, the p-values reviewed during the process should be FDR adjusted. We propose that due to the consistency property of the FDR adjustment, it is sufficient to consider the adjustment for each iteration separately. Moreover, the FDR adjustment provides a mechanism for inclusion of the belief scores, where they can define the weights for a weighted FDR adjustment. Another source of hypotheses comes from the field of causality structure learning, where conditional independence tests underpin a stepwise (forward or backward) causality graph construction. This paper is concerned with "manual" co-design of the causality graph rather than an automated, data-driven algorithm. Thus, the hypotheses in (8) are sufficient for driving and informing the discussion between the SME and the Statistical Modeller, although not as nuanced as the conditional independence tests. It is important to understand the correlation structure among all the p-values used, as some of the procedures for the control of the FDR (e.g. the Benjamini-Hochberg procedure [<cit.>]) control the FDR only under independence or under Positive Regression Dependence (PRDS). 
In cases where complex correlations are suspected, it is possible to use the Benjamini-Yekutieli algorithm [<cit.>] or bootstrap as demonstrated by <cit.> using the foundations laid by <cit.>. A nuance of the above requirement for independence or at least PRDS is that it is sufficient to show it for the true null hypotheses. As the hypotheses pertain to the existence of a causal effect manifested by conditional dependence, the true null hypotheses that could be interdependent are those considering the same edge. Thus, there might be a correlation between the p-value assessing the coefficient for modelling an edge and the p-value for the correlation induced by the SCM for that edge. Arguably, they are measuring overlapping constructs and thus are PRDS. However, the hypotheses in (8) are formulated as an equivalence between the correlation structure induced by the SCM and the observed correlation. For a world where A does not directly cause B, the true causality graph will not have an edge linking nodes A and B (edge AB). For an SCM containing an edge AB, the hypothesis that the coefficient for the regression of B on A is zero is true. However, the hypothesis that the induced correlation between A and B is equivalent to the observed one could be true or false, as the correlation may be induced. Simple cases where a null coefficient is fully informative of the existence of the correlation can be constructed. Arguably these are simple cases where a model crafting session is not required. Therefore, the p-values considered here could be treated as independent or PRDS. For illustration purposes, consider a simple causality graph with one mediator: T → M; T → Y; M → Y. Assuming T, M & Y are continuous and the functional form linking them is linear, the Structural Equation Model (SEM) could be: M̂ = β_0 + β_1 T + ε_1 Ŷ = β_2 + β_3 T + β_4 M + ε_2 Where ε_j ∼ N(0, σ_j); j = 1, 2 The hypotheses of interest are: * H_0,i^Parms: β_i = 0; H_1,i^Parms: β_i ≠ 0; ∀ i ∈{0, 1, 2, 3, 4} * H_0,j^Noise: ε_j ∼ N(0, .); H_1,j^Noise: ε_j ≁N(0, .); ∀ j ∈{1, 2} * H_0,x,y^Cov: |SEM induced Cov(x,y) - observed Cov(x,y)| > δ; H_1,x,y^Cov: |SEM induced Cov(x,y) - observed Cov(x,y)| ≤δ; ∀ x, y ∈{T, M, Y} – upper triangle & diagonal (i) tests whether the coefficients are zero. For simple linear modelling, the p-values could be theoretically derived; (ii) tests the assumption that the noise is Gaussian distributed. This could be tested using a Chi-square test. It could be argued that the roles of H_0 and H_1 should be flipped, posing a challenge of testing for "non-normality". The process we describe is iterative. At each step, edges could be added or removed and the coefficients refitted (either backwards or forwards construction). It could be argued that the hypotheses of all the steps should be considered as one family. The consistency property of the FDR (when the number of true null hypotheses n_0 is much smaller than the number of false null hypotheses n_1) allows for working within each iteration, circumventing confusion due to having several p-values generated for the same hypothesis. Therefore, it is sufficient to control the FDR within each iteration. Before providing an example for the process, the FDR and the False Discovery Cost Rate (FDCR) and their control are discussed in the next section. § FALSE DISCOVERY [COST] RATE This section is based on technical papers by <cit.> and <cit.>. 
When testing several hypotheses simultaneously to reach an overall decision, there is a trade-off between controlling the type I error for per-hypothesis considerations and the overall hypothesis, which is usually their union. This issue of balancing error rates is known as the Multiplicity Problem. This dilemma is encountered almost universally where statistics are applied. Some of the most prominent statisticians have addressed it. However, there is neither a common nor global approach. For instance, <cit.> discussed a special case of the problem, pairwise comparisons. He continued discussing it as late as 1991 [<cit.>]. An extensive overview of the research of the Multiplicity problem may be found in <cit.>, <cit.>, <cit.>, <cit.>, and <cit.>. <cit.> demonstrated this: "For example, a particular survey may identify a small p-value, say p = .005, and claim that the associated effect is 'statistically significant.' This p-value is interpretable as follows: when there is no causal basis for the effect, there is only a 0.5% probability of observing a result as extreme as the observed result. On the other hand, it is possible that the multiplicity adjusted p-value is .15 (Adj-p = .15), which is not statistically significant. This adjusted p-value incorporates the multiple tests performed and can be interpreted as follows: when there is no causal basis for any effect tested, there is a 15% chance that somewhere in the experiment a result as extreme as the observed result of .005 will appear." Classical procedures aim to control the probability of committing at least one type-I error when considering a family of hypotheses simultaneously to control the multiplicity effect. The control of this Familywise Error Rate (FWE, see Equations <ref> & <ref>) is usually required in the strong sense (Equation <ref>), i.e., under all configurations of true and false hypotheses [<cit.>]. The main problem with classical methods is that they tend to have low power. Consequently, it has been argued that no special control is needed (e.g. [<cit.>]). An alternative, more powerful measure was introduced by <cit.>: the False Discovery Rate (FDR, Equation <ref>). It is an appropriate error rate to control in many problems where the (strong) control of the FWE is not needed. The FDR is the expected ratio of the number of erroneous rejections to the number of rejections (discoveries) and is equal to or less than the FWE. The two error rates are equal when the number of true null hypotheses (m_0) equals the number of hypotheses under test (m). When m_0 < m, the FDR may be substantially lower than the Familywise Error Rate, so an FDR controlling procedure at conventional levels can be more powerful. <cit.> provided a linear step-up procedure (BH) that controls the FDR for independent test statistics. <cit.> showed that when the test statistics are PRDS correlated, the BH procedure controls the FDR. Furthermore, they introduced a resampling-based procedure that controls the FDR [<cit.>]. A useful aspect of the BH procedure is that it provides quite consistent discoveries when applied to subsets of the family of hypotheses. For example, when evaluating individual hypotheses pertaining to enumeration districts, the family could be the whole of the United States or just a specific state. FWE control procedures usually will provide conflicting individual decisions for the different hypotheses families, whereas the FDR control will generate consistent discoveries when m_0 ≪ m. 
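To make the link between belief scores and the adjustment concrete, the following is a minimal sketch of one common formulation of a weighted linear step-up (W-BH style) procedure, in which the SME's belief scores act as weights; the normalisation of the weights and the example numbers are assumptions made for illustration, and applications should follow the cited procedures.

import numpy as np

def weighted_bh(pvals, weights, q=0.05):
    # Weighted linear step-up: reject the k smallest p-values for the largest k with
    # p_(k) <= q * (sum of the first k ordered weights) / (total weight).
    p = np.asarray(pvals, dtype=float)
    w = np.asarray(weights, dtype=float)
    order = np.argsort(p)                       # sort hypotheses by p-value
    cum_w = np.cumsum(w[order])
    thresholds = q * cum_w / w.sum()
    passed = np.where(p[order] <= thresholds)[0]
    reject = np.zeros(p.size, dtype=bool)
    if passed.size:
        reject[order[: passed[-1] + 1]] = True  # step-up: reject all up to the last passing rank
    return reject

# Example: SME belief scores used as illustrative weights (score-0 edges would simply be left out)
pvals = [0.001, 0.008, 0.040, 0.120, 0.300]
scores = [3, 2, 2, 1, 1]
print(weighted_bh(pvals, scores, q=0.05))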
<cit.>, <cit.>, <cit.>, and <cit.> frame the control of the FDR using a Bayesian paradigm. This approach allows the discussion of a prior belief on the probability that the null hypothesis is correct. This is somewhat related to the discussion in this paper of assisting experts in mapping out their beliefs on causality. Following the notation used by <cit.> and <cit.>: For a composition of m sub-hypotheses (H_0i; i=1,2,…,m) let the intersection hypothesis be H_00 = ⋂_i=1^m H_0i. Let R_i (i=0,1,2,…,m) be 1 if H_0i is rejected and zero otherwise; and let V_i (i=0,1,2,…,m) be 1 if H_0i is erroneously rejected and zero otherwise. Note, if H_0i is true then V_i=R_i. Furthermore, let the number of rejections ("Discoveries") be R = ∑_i=1^m R_i, and the number of erroneous rejections be V = ∑_i=1^m V_i. Note that R and V do not include R_0 and V_0. Let I_R=1 when R>0 otherwise I_R=0. Similarly, I_V=1 when V>0 otherwise I_V=0. To complete the notation used in this paper, define I_0 as the set of indices of the true null sub-hypotheses (I_0={j;H_0j is true, 1 ≤ j ≤ m}); and the number of true null sub-hypotheses is m_0=||I_0||. The family error measures could be defined using the above terms: Strong-FWE = P(V>0) = E[max V_j; 1≤ j≤ m] Weak-FWE = E[V_0] = E[V_0/R_0] FDR = E[V/R] Weighted FDR = WFDR = E[∑_i=1^m w_i V_i/∑_i=1^m w_i R_i] where V/R and V_0/R_0 are defined as zero when V=R=0 and V_0=R_0=0 respectively. The FWE is appropriate when even one erroneous discovery is not desired. Procedures such as the Bonferroni procedure, Holm's procedure, Hochberg's Procedure [<cit.>], and Tukey's T-method for pairwise comparisons [<cit.>], all control the FWE in the strong sense (Equation <ref>). This type of control is relevant to situations where any erroneous discovery implies a very high cost. For example, such conservativeness is required when examining the primary end-points during Phase III clinical trials. Control of the FWE in the weak sense (Equation <ref>) is achieved by testing directly the intersection null hypothesis (and not controlling the individual hypothesis). For instance, by using the multivariate Hotelling T^2 statistic. Thus, the overall type I error rate is controlled only when all the sub-hypotheses are true (m_0 = m). This situation is very common in Statistical Process Control (SPC); where once an out-of-control signal is given (the intersection hypothesis is rejected) it is assumed that it is no longer necessary to protect from erroneous sub-discoveries, and on the other hand increased power is desired. Generally, the use of the FDR (Equations <ref> & <ref>) is appropriate in situations where high power is imperative and a pre-specified percent of the wrongly rejected (individual) hypotheses is tolerable and does not affect the quality of the overall decision. This is usually characterised by the belief that m_0 ≪ m. The FDR may be used in pilot or screening studies, grouping analysis, and situations where many variables are considered such as data-mining and Bioinformatics. In such situations, the error from a single erroneous rejection is not always as crucial for drawing conclusions for the family hypothesis. Thus, we are ready to bear more errors when many hypotheses are rejected, but with less when fewer are rejected. The last notion is reflected by the control of the FDR - for which one must specify the acceptable expected proportion of wrong discoveries. For an example, see <cit.> for a review of papers that proposed adopting the FDR for multiple CUSUM charts. 
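The behaviour of the two error rates when m_0 ≪ m can also be illustrated numerically. The following self-contained sketch (all numbers are invented for illustration) simulates one-sided tests and estimates the realised FWE and FDR of Bonferroni and of the BH step-up, the latter via statsmodels.

```python
import numpy as np
from scipy.stats import norm
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
m, m0, q, reps = 100, 20, 0.05, 2000          # 20 true nulls, 80 false nulls
true_null = np.r_[np.ones(m0, dtype=bool), np.zeros(m - m0, dtype=bool)]
results = {"Bonferroni": [], "BH": []}
for _ in range(reps):
    z = rng.normal(size=m) + np.where(true_null, 0.0, 3.0)
    p = norm.sf(z)                            # one-sided p-values
    rejections = {
        "Bonferroni": p <= q / m,
        "BH": multipletests(p, alpha=q, method="fdr_bh")[0],
    }
    for name, rej in rejections.items():
        V, R = np.sum(rej & true_null), np.sum(rej)
        results[name].append((V > 0, V / R if R > 0 else 0.0))
for name, s in results.items():
    print(name, "FWE ~", round(np.mean([a for a, _ in s]), 3),
          " FDR ~", round(np.mean([b for _, b in s]), 3))
```

With many false nulls, BH keeps the FDR near the nominal level while rejecting far more hypotheses than Bonferroni, whose FWE control is much more conservative.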
Though widely researched, there are no clear-cut rules for which error measure to use and which multiplicity control procedure to use. <cit.> and <cit.> argued that this decision is part of the modelling process. They show that in situations where costs or weights can be attributed to erroneous discoveries, the weak-FWE and the FDR are special cases of a generic cost-based error measure, the False Discovery Cost Rate (FDCR, Equations <ref>). They show that the W-BH procedure [<cit.>] keeps its control also when the test statistics are PRDS on I_0 and propose a procedure for the control of the FDCR when the test statistics are PRDS. Assigning a variable cost C_i (i=1,2,…,m) to an individual erroneous discovery (e.g., rejecting H_0i results in stopping machine i for maintenance) and a fixed cost C_0 to the overall discovery (rejecting H_00 results in calling in an engineer), the cost of false discoveries is C_0 I_V_0 + ∑_i=1^m C_i V_i. In the spirit of the FDR and using the above notation, the expected proportion of the cost of false discoveries is: E[(C_0 I_V_0 + ∑_i=1^m C_i V_i)/(C_0 I_R_0 + ∑_i=1^m C_i R_i)] ≤ E[(C_0 I_V + ∑_i=1^m C_i V_i)/(C_0 I_R + ∑_i=1^m C_i R_i)], where the proportion is zero when the denominator is zero and I_R=I_{R>0} and I_V =I_{V>0}. <cit.> define the False Discovery Cost Rate as the expected ratio of the cost wasted due to erroneous discoveries to the total cost related to the discoveries: FDCR = E[(C_0 I_V_0 + ∑_i=1^m C_i V_i)/(C_0 I_R_0 + ∑_i=1^m C_i R_i)] = E[∑_i=0^m C_i V_i/∑_i=0^m C_i R_i], where the proportion is defined as zero when the denominator is zero. The Weak FWE, the FDR, and the W-FDR are special cases of this measure obtained by specific structures of the costs. When C_0=0 the FDCR is the W-FDR. The FDR is obtained by further assigning equal costs to the m hypotheses. On the other hand, when the cost consists only of C_0 the FDCR is the FWE in the weak sense. It is interesting to examine the meaning of controlling the FWE in the strong sense from the viewpoint of the FDCR. Seeking to control the probability of making any false rejection suggests that every erroneous discovery is very costly and its cost is perceived as infinite. Thus, the Strong-FWE can be approximated by the FDCR, but it cannot be expressed in the context of additive costs. The Strong-FWE may be expressed in terms of "relative cost", E[max(V_i)/max(R_i)], which does not render itself easily to economic interpretation. <cit.> propose testing simultaneously H_00 and the rest of the sub-hypotheses, using the FDCR to correct for multiplicity. In particular, H_00 may be tested through Hotelling's T^2, the weighted Simes statistic (Equation <ref>), or Fisher's statistic (Equation <ref>): P^ws = min_j [(∑_i=1^m C_i/∑_i=1^j C_(i)) P_(j)]. P^F = -2∑_i=1^m ln p_i. They showed that an FDCR controlling procedure could be constructed by applying the W-BH procedure to the m+1 p-values P_0, P_1, …, P_m, where the weighted Simes adjusted p-value for P_0 (corresponding to H_00) is used. Thus, the control of the FDCR could be achieved when the p-values associated with the m_0 true hypotheses are independent or PRDS. § CONSTRUCTING A CAUSALITY GRAPH WHILE CONTROLLING THE FDR The process of constructing a causality graph and a statistical model should involve a statistically trained data scientist and a subject matter expert (SME) working through a process similar in spirit to variable selection. They identify the target variables, potential explanatory variables, and assign beliefs to the direction of causality. 
These beliefs could be expressed as a score. Then the statistician prepares the data for analysis and iteratively constructs the causality graph and models with the SME. <cit.> propose an approach for variable selection in linear models that controls the False Discovery Rate (FDR) where the discoveries are the selected variables to be included in the model. They point out that in many practical situations, there are only a few relevant variables among the many recorded. It makes sense to view the process of constructing the causality graph as a systematic screening of variable selection hypotheses pertaining to strengths of relationships and direct and indirect causality effects. To illustrate such a process, a synthetic example was generated. The demonstration will iteratively create a Structural Causality Model where feedback to the SME is a graphical representation overlaid with the False Discovery Cost Rate (FDCR) adjusted p-values. The toy example describes an offshore wind farm operator who decides how fast to allow the turbine to turn. The key causality relationships of interest are "Rotational_RPM" → "Energy_Yield" and "Rotational_RPM" → "Perceived_Noise". Appendix A describes the construction of the toy example. Table <ref> lists the variables the SME suggested could have bearing on the outcome. 20,000 observations were generated. The univariate exploratory analysis kicks off the co-design process, facilitating conversations between the statistician and the SME. The next step is to evaluate the pairwise relationships. Table <ref> lists the Pearson pairwise correlations, and Table <ref> lists the FDR adjusted (BH adjusted) p-values obtained through bootstrapping the Pearson R^2 [<cit.>]. Reviewing the correlations and the adjusted p-values is a good starting point for designing the first draft causality graph and attributing the SME's belief. For this example, the belief is scored simply: * As an SME, I do not believe there is a causal relationship between the two variables – this is implicit by not drawing an arc in the causality graph * As an SME, I am not sure whether there is a causal relationship between the two variables * As an SME, I believe there is a causal relationship between the two variables * As an SME, I am sure there is a causal relationship between the two variables The C_i weights for the FDCR are set to 1/(belief+0.0001). All unprovided weights, such as for the tests for the intercept and C_00 (Weighted Simes), are set to 1. Assume the SME provides the following beliefs: Figure <ref> sets the legend for the graphical representation of the results. The estimates and FDCR adjusted p-values are presented alongside the edges. The non-significant adjusted p-values are highlighted as those mark where the SME and statistician should focus their discussion. Figure <ref> shows a possible outcome where the significant correlations are accounted for, and the direction of the causation is informed by the SME's experience. The covariances and variances induced by the SCM model and their corresponding FDCR adjusted p-values are presented in Figure <ref>. The next iteration takes into account that: * The estimate for Strength_degradation → Rotational_RPM is practically zero, albeit statistically significant. * The covariance between sea_temperature and Perceived_noise generated by the model is not equivalent to the measured relationship. * The covariance between wind_speed and Perceived_noise generated by the model is not equivalent to the measured relationship. 
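To make the weighting step explicit, the sketch below maps belief scores to FDCR costs C_i = 1/(belief + 0.0001), forms a weighted Simes p-value for the intersection hypothesis H_00, and applies the weighted BH (W-BH) step-up to the m+1 p-values. The edge names, beliefs, and p-values are invented for illustration, and the exact FDCR procedure in the cited work may differ in detail.

```python
import numpy as np

def weighted_simes(pvals, costs):
    """Weighted Simes p-value for H00: min_j [(sum_i C_i / sum_{i<=j} C_(i)) p_(j)]."""
    p, c = np.asarray(pvals, float), np.asarray(costs, float)
    order = np.argsort(p)
    return float(np.min(c.sum() / np.cumsum(c[order]) * p[order]))

def weighted_bh(pvals, weights, q=0.05):
    """Weighted BH step-up: reject up to the largest k with
    p_(k) <= q * sum_{i<=k} w_(i) / sum_i w_i."""
    p, w = np.asarray(pvals, float), np.asarray(weights, float)
    order = np.argsort(p)
    below = p[order] <= q * np.cumsum(w[order]) / w.sum()
    reject = np.zeros(p.size, dtype=bool)
    if below.any():
        reject[order[:np.flatnonzero(below).max() + 1]] = True
    return reject

# Hypothetical per-edge p-values and SME beliefs (scale 1-4, as above)
edges = ["RPM->Energy", "RPM->Noise", "Wind->RPM", "Degradation->RPM", "SeaTemp->Noise"]
p_sub = np.array([0.0005, 0.003, 0.0001, 0.45, 0.08])
belief = np.array([4, 3, 3, 2, 1])
C_sub = 1.0 / (belief + 1e-4)          # variable costs C_i = 1/(belief + 0.0001)
C0 = 1.0                               # unprovided weight for H00 set to 1, as above
p0 = weighted_simes(p_sub, C_sub)      # p-value attached to the intersection H00
rej = weighted_bh(np.r_[p0, p_sub], np.r_[C0, C_sub], q=0.05)
for name, r in zip(["H00"] + edges, rej):
    print(f"{name:18s} rejected: {bool(r)}")
```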
After several iterations, the model could look as presented in Figures <ref> and <ref>. § CONCLUSION This paper demonstrated how the FDR and FDCR could be put to use in the context of co-designing a causality graph in the discovery step of an analytical project. This approach favours hand-crafting the causality graph but does not preclude using concepts and tools developed for automatic causality discovery. The use of FDR control during model discovery has been proposed by several researchers. For example, algorithms such as FDR-IAMB [<cit.>] and FDR-IAPC [<cit.>] & [<cit.>] apply techniques for controlling the multiplicity of hypotheses. These algorithms aim to learn the structure of the causality graph from the data and use FDR control as a weighting for a greedy algorithm. The primary focus lies in facilitating communication between statistical modellers and subject matter experts (SMEs) by balancing statistical rigour with clear and intuitive presentation of results. One potential improvement would be to replace p-values with e-values as suggested by <cit.> and <cit.>. While the example used employs linear regression for modelling causal relationships, the discussion is not limited to this specific functional form. Statistical modellers have the flexibility to choose the modelling approach for conditional probabilities and select appropriate measures of variable (parent node) contribution. For instance, a random forest might be used as the functional representation, with variable importance as the contribution measure. The hypothesis testing framework described could be generalised. Instead of focusing on a specific parameter value, the null hypothesis could be formulated as: "the parent variable does not contribute to modelling the child." In this case, a nested bootstrap procedure is recommended by <cit.> and <cit.>: 1. [Outer] Bootstrap a distribution of p-values and perform FDR adjustment. 2. [Inner] Bootstrap a distribution of contribution measures or regression parameters for deriving p-values. When using the bootstrap to obtain the distribution of the p-values, careful thought should be given to ensuring that the correlation structure under the appropriate null hypotheses is maintained. § ACKNOWLEDGEMENTS I would like to thank Professor Yoav Benjamini for his valuable contributions to the original research and for granting permission to use material from our technical paper. § TOY EXAMPLE GENERATION The data example is deliberately simple, using mainly Gaussian relationships and a variable that is not really participating in the system. The example simulates the decision of how fast to turn a wind turbine, where the outcomes are energy produced and perceived noise. It is loosely based on research by <cit.>, who state: “By accounting for individual site conditions we confirm that load factors do decline with age, at a similar rate to other rotating machinery. Wind turbines are found to lose 1.6 ± 0.2% of their output per year, with average load factors declining from 28.5% when new to 21% at age 19. This trend is consistent for different generations of turbine design and individual wind farms. 
This level of degradation reduces a wind farm's output by 12% over a twenty year lifetime, increasing the levelised cost of electricity by 9%.” The variables in the toy example are generated as follows: Winter_Ind ∼Binomial(n=1, p=0.7) Sea_Temperature ∼𝒩(μ = 20 - Winter_Ind· 10, σ = 2) Wind_Speed ∼𝒩(μ = 40 + Winter_Ind· 20, σ = 10) Strength_Degradation ∼𝒩(μ = 1.5, σ = 0.1) Rotational_RPM ∼𝒩(μ = 1.2 + Wind_Speed/10, σ = 0.2) Energy_Yield ∼𝒩(μ = 10 + Rotational_RPM/1.5 - Strength_Degradation/10, σ = 2) Perceived_Noise ∼𝒩(μ = 20 + Rotational_RPM/1.5 - Wind_Speed/4, σ = 1) Where 𝒩(μ, σ) denotes a normal distribution with mean μ and standard deviation σ. Note: * Rotational_RPM is measured in thousands (i.e., the actual RPM is 1000 times the value used in the model). * Energy_Yield is measured in Cent/kWh. * Perceived_Noise is an arbitrary scale. The toy example was developed using Python and used capabilities provided by the DoWhy project [https://github.com/py-why/dowhy].
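For reproducibility, the generative model above translates directly into NumPy; the random seed and the printed correlation check are illustrative.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 20_000

winter = rng.binomial(1, 0.7, n)
sea_temp = rng.normal(20 - 10 * winter, 2)
wind_speed = rng.normal(40 + 20 * winter, 10)
strength_deg = rng.normal(1.5, 0.1, n)
rpm = rng.normal(1.2 + wind_speed / 10, 0.2)
energy = rng.normal(10 + rpm / 1.5 - strength_deg / 10, 2)
noise = rng.normal(20 + rpm / 1.5 - wind_speed / 4, 1)

df = pd.DataFrame({
    "Winter_Ind": winter, "Sea_Temperature": sea_temp, "Wind_Speed": wind_speed,
    "Strength_Degradation": strength_deg, "Rotational_RPM": rpm,
    "Energy_Yield": energy, "Perceived_Noise": noise,
})
print(df.corr(numeric_only=True).round(2))   # compare with the pairwise table above
```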
http://arxiv.org/abs/2409.03627v1
20240905153911
Fewer supermassive binary black holes in pulsar timing array observations
[ "Boris Goncharov", "Shubhit Sardana", "A. Sesana", "J. Antoniadis", "A. Chalumeau", "D. Champion", "S. Chen", "E. F. Keane", "G. Shaifullah", "L. Speri" ]
astro-ph.HE
[ "astro-ph.HE", "astro-ph.CO", "gr-qc" ]
boris.goncharov@me.com Max Planck Institute for Gravitational Physics (Albert Einstein Institute), 30167 Hannover, Germany Leibniz Universität Hannover, 30167 Hannover, Germany Max Planck Institute for Gravitational Physics (Albert Einstein Institute), 30167 Hannover, Germany Department of Physics, IISER Bhopal, Bhauri Bypass Road, Bhopal, 462066, India Dipartimento di Fisica “G. Occhialini", Universitá degli Studi di Milano-Bicocca, Piazza della Scienza 3, I-20126 Milano, Italy INFN, Sezione di Milano-Bicocca, Piazza della Scienza 3, I-20126 Milano, Italy INAF - Osservatorio Astronomico di Brera, via Brera 20, I-20121 Milano, Italy FORTH Institute of Astrophysics, N. Plastira 100, 70013, Heraklion, Greece Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, 53121, Bonn, Germany ASTRON, Netherlands Institute for Radio Astronomy, Oude Hoogeveensedijk 4, 7991 PD Dwingeloo, The Netherlands Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, 53121, Bonn, Germany Shanghai Astronomical Observatory, Chinese Academy of Sciences, Shanghai 200030, P. R. China School of Physics, Trinity College Dublin, College Green, Dublin 2, D02 PN40, Ireland Dipartimento di Fisica “G. Occhialini", Universitá degli Studi di Milano-Bicocca, Piazza della Scienza 3, I-20126 Milano, Italy INFN, Sezione di Milano-Bicocca, Piazza della Scienza 3, I-20126 Milano, Italy INAF - Osservatorio Astronomico di Cagliari, via della Scienza 5, 09047 Selargius (CA), Italy Max Planck Institute for Gravitational Physics (Albert Einstein Institute), Am Mühlenberg 1, 14476 Potsdam, Germany § ABSTRACT We reanalyse the second data release of the European Pulsar Timing Array (EPTA) using an observationally-driven model for ensemble properties of pulsar noise. We show that the revised gravitational wave background properties are in better agreement with theoretical expectations for the strain spectrum. Our improved model for ensemble pulsar noise properties reduces a systematic error at 1σ level and increases Bayesian odds of Hellings-Downs correlations by ∼ 10%. Fewer supermassive binary black holes in pulsar timing array observations L. Speri September 9, 2024 ========================================================================= Since 2020, Pulsar Timing Arrays (PTAs) have reported growing evidence for the nanohertz-frequency gravitational wave background in their data. The first tentative evidence came from a temporally-correlated stochastic process in pulsar timing data <cit.>. The Fourier spectrum of delays and advances in pulsar pulse arrival times resembled the expected spectral properties of the background. Most recently, PTAs — with varying levels of statistical significance <cit.> — showed that this stochastic process exhibits Hellings-Downs correlations <cit.> consistent with the isotropic unpolarised stochastic gravitational wave background. Supermassive black hole binaries at subparsec separations are expected to be a dominant source of the stochastic gravitational wave background at nanohertz frequencies <cit.>. However, the expected amplitude of the background is lower than the observations suggest. Previous EPTA analyses found the best-fit strain amplitude to be at the edge of the values simulated from supermassive black hole binary population synthesis models. 
This is visible in Figure 7 from <cit.>[This is not apparent in the analogous Figure 5 from <cit.> due to the binary population modelling differences.]. The inferred strain amplitude lies at the theoretical upper limit of the predicted astrophysical range <cit.>. It might also be in tension with the observed black hole mass function <cit.>, although see <cit.> for a different view. Furthermore, the strain spectral index of the gravitational wave background is in ≈ 2σ tension with the value corresponding to binary inspirals driven by gravitational wave emission alone. This is visible in Figure 5 in  <cit.> and Figure 11 in  <cit.>. Overall, although previous PTA results were consistent with a very broad range of assumptions about supermassive binary black hole populations <cit.>, they suggested deviations from purely gravitational wave-driven binary evolution. It was pointed out in several studies <cit.> that the standard PTA models of how noise parameters are distributed across pulsars are incorrect. These models manifest as prior probabilities in PTA data analysis. To be precise, the models are incorrect because they are `static', i.e., the shape of the distribution of noise parameters is not influenced by the data. Although imposing such priors is very unlikely to influence our conclusions about evidence for the gravitational background <cit.>, it may introduce systematic errors in our measurement of the strain spectrum <cit.>. In this Letter, we address the aforementioned problem. We employ hierarchical Bayesian inference to account for the uncertainties in our noise priors. In particular, we parametrise the noise prior distributions, which are our models of how parameters governing pulsar-specific noise are distributed across the pulsars <cit.>. These newly-introduced hierarchical parameters are called hyperparameters. Furthermore, we use a new procedure of marginalisation over hyperparameters, which is described in Section 2.2.2 of the companion paper (Goncharov et al., in prep.). Based on the new procedure, we revisit the measurement of properties of the gravitational wave background with the European Pulsar Timing Array (EPTA) <cit.>. We assess the properties of the gravitational wave background using the power law model of its characteristic strain spectrum: h_c(f) = A (f/yr^-1)^-α. Here, A is the strain amplitude, and -α is the power law spectral index[One may also find a notation where α' = -2/3 and γ = 3 - 2α'.]. The corresponding spectral index of the power spectral density [s^3] of delays(-advances) [s] induced by the background in the timing data is -γ. These stochastic timing delays resulting from the gravitational-wave redshift of pulsar spin frequency are referred to as temporal correlations. The value of γ=13/3 (α=2/3) corresponds to purely gravitational wave driven inspirals of supermassive black hole binaries <cit.>. Modelling of pulsar noise is important because it can affect the conclusions about the properties of the gravitational wave background. Pulsar noise introduces temporal correlations which may be difficult to distinguish from the effect of the gravitational background based on single pulsar data. However, unlike for the gravitational background, noise-induced timing delays have different Fourier spectra across pulsars. Our hierarchical model describes the ensemble properties of these noise spectra. Despite similar delay time series induced by both pulsar noise and gravitational background in pulsar data, the presence of the background can still be established. 
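As a quick numerical cross-check of these conventions (not part of the EPTA pipeline), the power law for h_c and the corresponding timing-residual power spectral density can be evaluated as below; the relation S(f) = h_c^2(f)/(12π^2 f^3) is the commonly used one, and the amplitude A = 2×10^-15 is an arbitrary fiducial value.

```python
import numpy as np

F_YR = 1.0 / (365.25 * 24 * 3600)              # reference frequency 1/yr in Hz

def h_c(f, A=2.0e-15, alpha=2.0 / 3.0):
    """Characteristic strain h_c(f) = A (f / yr^-1)^(-alpha)."""
    return A * (f / F_YR) ** (-alpha)

def residual_psd(f, A=2.0e-15, alpha=2.0 / 3.0):
    """Timing-residual PSD [s^3] under the common convention
    S(f) = h_c^2(f) / (12 pi^2 f^3); its power-law index is
    -(2*alpha + 3) = -gamma, i.e. gamma = 13/3 for alpha = 2/3."""
    return h_c(f, A, alpha) ** 2 / (12 * np.pi ** 2 * f ** 3)

f = np.logspace(-9, -7.5, 4)                   # nanohertz band probed by PTAs
print("gamma =", 2 * (2.0 / 3.0) + 3)          # 13/3 ~ 4.33
print(residual_psd(f))
```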
The Hellings-Downs function <cit.> determines inter-pulsar correlations of the stochastic timing delays induced by the isotropic stochastic unpolarised gravitational background. The statistical significance of the signal is established based on the Bayesian odds between (1) the Hellings-Downs-correlated stochastic process with the same spectrum of delays across pulsars, and (2) the same spectrum of delays across pulsars and no inter-pulsar correlations. Inter-pulsar correlations of the gravitational wave background do not contribute as much to a measurement of its strain spectrum as temporal correlations of the signal <cit.>. This is visible in, e.g., Figure 6 in ref <cit.> or Figure 3 in ref. <cit.>. Therefore, on one hand, even in the lack of evidence of Hellings-Downs correlations per se, the background amplitude and spectral index can still be well-constrained. This is the case for the full 25-year EPTA data which lacks evidence for inter-pulsar correlations, and this was the case for earlier studies that have identified a common-spectrum stochastic process in the PTA data <cit.>. On the other hand, improving the measurement of pulsar noise could assist in resolving Hellings-Downs correlations. Results.–Posterior distributions for A and γ are shown as contours in Figure <ref>. Solid blue contours correspond to our improved model. For comparison, dashed red contours correspond to the standard `static' noise priors used in earlier EPTA analyses <cit.>. The results are shown for both the 10-year subset of the EPTA data which showed evidence for the Hellings-Downs correlations, and the full 25-year EPTA data where the evidence is not visible[The correlations are thought to be `scrambled' by unmodelled noise from the older backend-receiver systems of the telescopes.]. The value of γ=13/3 (α=2/3) is shown as a dashed straight line. Our improved model results in a lower median-aposteriori strain amplitude of the gravitational wave background, as well as in a steeper spectral index, as shown with solid blue contours in Figure <ref>. A detailed inspection of Figure <ref> reveals that the impact is most significant for the full 25-year data, where the maximum-aposteriori amplitude (best fit) also shifts outside of 1σ levels of fully-marginalized distributions of A and γ, peaking exactly at γ=13/3. For the 10-year data, the value of γ=13/3 now lies at the edge of the 1σ credible level, but the maximum-aposteriori value remains almost unaffected. Implications.–It is common to assume that the energy loss in binary inspirals is dominated by the emission of gravitational waves. In this case, the characteristic strain spectrum of the gravitational wave background is <cit.> h_c^2(f) = 4G^5/3/3π^1/3 c^2 f^-4/3∫d^2N/dV dzℳ^5/3/(1+z)^1/3 dz, where (G,c) are the universal constants, z is redshift, ℳ is the binary chirp mass, and d^2N/(dVdz), a function of (ℳ,z), is the number density of binaries per unit comoving volume per unit redshift. The integral does not depend on a gravitational wave frequency, thus h_c∝ f^-2/3, as stated earlier. The background amplitude A depends on the mass spectrum and the abundance of supermassive binary black holes in the universe. The α=2/3 (γ=13/3) dependence is confirmed by population synthesis simulations, e.g. Figure 7 in <cit.>, where the theoretical uncertainty is only δγ∼ 0.1 at 1σ due to cosmic variance <cit.>. 
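To illustrate how the h_c^2(f) integral above maps a binary population onto the strain spectrum, the sketch below evaluates it for a deliberately toy population: a single chirp mass and an uncalibrated Gaussian-in-redshift comoving number density. The printed amplitude has no astrophysical significance, but the f^-2/3 scaling is exact.

```python
import numpy as np
from scipy.integrate import quad

G, c = 6.674e-11, 2.998e8                      # SI constants
MSUN, MPC = 1.989e30, 3.086e22                 # kg, m
F_YR = 1.0 / (365.25 * 24 * 3600)              # 1/yr in Hz

Mc = 1e9 * MSUN                                # single chirp mass (toy choice)

def d2N_dVdz(z, n0=1e-5 / MPC**3, z0=0.5, sig=0.3):
    """Toy comoving number density of binaries per unit redshift (uncalibrated)."""
    return n0 * np.exp(-0.5 * ((z - z0) / sig) ** 2)

def hc(f):
    integral, _ = quad(lambda z: d2N_dVdz(z) * Mc ** (5 / 3) / (1 + z) ** (1 / 3),
                       0.0, 3.0)
    hc2 = 4 * G ** (5 / 3) / (3 * np.pi ** (1 / 3) * c ** 2) * f ** (-4 / 3) * integral
    return np.sqrt(hc2)

print("A = hc(1/yr) =", hc(F_YR))
print("f^-2/3 scaling holds:", np.isclose(hc(2 * F_YR) / hc(F_YR), 2 ** (-2 / 3)))
```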
The tension of the previously-estimated γ≈ 3 with 13/3, reported during the announcement of evidence for the gravitational background <cit.>, has led to discussions on whether the signal is influenced by certain effects of binary evolution that make the strain spectrum appear flatter. Mechanisms of flattening h_c(f) typically involve the introduction of a more rapid physical mechanism of binary hardening[A reduction in binary separation.] compared to gravitational wave emission at <0.1 parsec separations. Such a mechanism could be an environmental effect such as stellar scattering <cit.> or the torques of a circumbinary gas disc <cit.>. It could also be due to the abundance of binaries in eccentric orbits, which lead to more prominent gravitational wave emission <cit.>[Eccentricity also results in a steeper h_c(f>10^-8 Hz) <cit.>, but PTA sensitivity declines towards high frequencies.]. In contrast, the results of our improved analysis maintain consistency with binary evolution driven only by the emission of gravitational waves. Our improved model also impacts the measurement of the strain amplitude, which can be recast in terms of the number density of supermassive black hole binaries. The new results hint that the supermassive black hole binaries are not as (over-)abundant as the earlier measurements suggested. This is visible in Figure <ref>, which is a replica of Figure A1 in <cit.>. There, green horizontal bands correspond to theoretical uncertainties on the strain amplitude at the 16th - 84th percentile level in 24 studies <cit.>. A consideration of ensemble noise properties of pulsars reduces tensions of the gravitational wave background strain amplitude with theoretical and observationally-based predictions for supermassive black hole binaries. The caveat is that the reported amplitude is referenced to f=yr^-1; the covariance between A and γ in a posterior changes following a rotation of the power law about this frequency. Therefore, the visible consistency with theoretical predictions may further improve for a different reference frequency. Observational impact.– We find that our improved model yields an increase in evidence for Hellings-Downs correlations. For the 10-year data, we find an increase in Bayesian odds by 22%, and in the 25-year data by 6%. While the increase in the signal-to-noise ratio (SNR)[Roughly, log Bayesian odds ∝SNR^2 <cit.>.] is almost negligible, consistency of the increase between the 10- and the 25-year data suggests that we have removed a systematic error arising from prior misspecification. When the model closely matches reality, one would expect a reduction of the measurement uncertainty when adding extra data. This is not visible in the original EPTA analysis, where the 1σ range for (A, γ) is roughly the same between the 10-year and the 25-year data. As shown in Figure <ref>, our improved analysis yields a larger measurement uncertainty based on temporal correlations in the 10-year data, in agreement with our expectation. This is no longer visible in Figure <ref>, suggesting that inter-pulsar correlations present in the 10-year data provide additional constraints. The shift of the posteriors towards larger spectral indices and smaller amplitudes with our improved analysis[The covariance between A and γ in Figure <ref> is along the line of equal noise power.] suggests that the louder pulsar-intrinsic noise with flatter spectra leaks into our measurement of the background strain spectrum when ensemble pulsar noise properties are not modelled. 
Because the 10-year data and the 25-year data are not independent data sets, a high degree of consistency is expected. In the original EPTA analysis, maximum-aposteriori ( A, γ) in the 25-year data differ from those of the 10-year data by around (0.3, 0.8). It is visible in red contours across all three panels in Figure <ref>. A match of the best-fit ( A, γ) between the 25-year data that does not exhibit Hellings-Downs correlations (Figure <ref>) and the 10-year data when modelling only temporal correlations and not the Hellings-Downs correlations (Figure <ref>) is achieved with our improved analysis. However, our improved analysis does not strongly impact the 10-year data when modelling inter-pulsar correlations. It supports our previous statement that inter-pulsar pulsar correlations in the 10-year data provide additional constraints on ( A, γ). Because ( A, γ) obtained with only temporal correlations is still expected to match those obtained with temporal and Hellings-Downs correlations, it is also possible that the EPTA data contains other systematic errors that are yet to be mitigated. Hypothesising where other systematic errors may stem from, we note that the North American Nanohertz Observatory for Gravitational Waves (NANOGrav) has mitigated a tension of the background spectral index with 13/3 by adopting the Gaussian process model of the dispersion variation noise <cit.>, as in the EPTA analysis <cit.>. Therefore, one potential source of a systematic error could be the mismodelling of the pulsar-specific noise that depends on a radio frequency. A very nearby binary is another example <cit.>. Frequency-wise comparison of the inferred strain spectrum against black hole population synthesis models performed earlier by the EPTA (Figure 3 in ref. <cit.>) suggests that the deviation from γ=13/3 may occur due to excess noise in two frequency bins, ≈ 10^-8 Hz and ≈ 3 × 10^-8 Hz. The rest of the spectrum seems to be consistent with γ=13/3. The aforementioned potential sources of systematic errors may require better temporal and inter-pulsar correlation models of the data as part of future work. Methods.–PTAs perform precision measurements of pulse arrival times from millisecond radio pulsars. The likelihood of delays(-advances) δ t for a vector of pulse arrival times t is a multivariate Gaussian distribution ℒ(δ t | θ), where θ is a vector of parameters of models that describe the data. From the Bayes theorem, it follows that the posterior distribution of model parameters outlining our measurement is 𝒫(θ|δ t) = 𝒵^-1ℒ(δ t | θ) π(θ), where 𝒵 is Bayesian evidence, a fully-marginalised likelihood. The term π(θ) is called prior, a model of how likely it is to find a certain value of θ in Nature. Model selection is performed based on computing the ratio of 𝒵 for pairs of models, it is referred to as the Bayes factor. The Bayes factor is equal to the Bayesian odds ratio if both models are assumed to have equal prior odds. A form of the likelihood and the description of the standard analysis methodology can be found in <cit.>. Because the total PTA noise prior is a product of noise priors for every pulsar, PTA data will inform on the distribution of θ in Nature. Our improved analysis introduces hyperparameters Λ to parametrise priors: π(θ|Λ) π(Λ). We then perform a numerical marginalisation over Λ. For more details, please refer to the companion paper (Goncharov et al., in prep.). 
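A toy numerical sketch of this idea (not the actual EPTA pipeline) is given below: per-pulsar noise estimates are treated as draws from a population with hyperparameters Λ = (μ, σ), the hyperparameter posterior is evaluated on a grid, and the noise prior for θ is then obtained by marginalising over Λ. All numbers are invented.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
theta_hat = rng.normal(-14.0, 0.5, size=25)        # crude per-pulsar noise estimates
theta_err = np.full_like(theta_hat, 0.3)           # their measurement uncertainties

def log_lik(mu, sigma):
    """ln p(theta_hat | Lambda): each estimate ~ N(mu, sigma^2 + err^2)."""
    var = sigma ** 2 + theta_err ** 2
    return np.sum(-0.5 * (theta_hat - mu) ** 2 / var - 0.5 * np.log(2 * np.pi * var))

# Hyperparameter posterior on a grid, with flat hyperpriors over the grid ranges
mus, sigmas = np.linspace(-16, -12, 120), np.linspace(0.05, 2.0, 80)
logL = np.array([[log_lik(m, s) for s in sigmas] for m in mus])
post = np.exp(logL - logL.max())
post /= post.sum() * (mus[1] - mus[0]) * (sigmas[1] - sigmas[0])

# Noise prior for a pulsar's theta, marginalised over Lambda (a 'non-static' prior)
theta = np.linspace(-17, -11, 200)
prior = np.zeros_like(theta)
dmu, dsig = mus[1] - mus[0], sigmas[1] - sigmas[0]
for i, m in enumerate(mus):
    for j, s in enumerate(sigmas):
        prior += post[i, j] * dmu * dsig * norm.pdf(theta, m, s)
print("marginalised prior integrates to ~1:", np.trapz(prior, theta))
```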
Data availability.–Second data release of the European Pulsar Timing Array <cit.> is available at https://zenodo.org/record/8091568 (zenodo.org) and https://gitlab.in2p3.fr/epta/epta-dr2 (gitlab.in2p3.fr). Acknowledgements.–We thank Bruce Allen for helpful comments on the results, contributions to the structuring of this manuscript, and valuable advice on scientific paper writing. We also thank Rutger van Haasteren for many insightful discussions about hierarchical Bayesian inference. Some of our calculations were carried out using the OzSTAR Australian national facility (high-performance computing) at Swinburne University of Technology. European Pulsar Timing Array is a member of the International Pulsar Timing Array <cit.>. J. A. acknowledges support from the European Commission (ARGOS-CDS; Grant Agreement number: 101094354). A. C. acknowledges financial support provided under the European Union's Horizon Europe ERC Starting Grant “A Gamma-ray Infrastructure to Advance Gravitational Wave Astrophysics” (GIGA; Grant Agreement: 101116134).
http://arxiv.org/abs/2409.02070v1
20240903171931
Explicit Differentiable Slicing and Global Deformation for Cardiac Mesh Reconstruction
[ "Yihao Luo", "Dario Sesia", "Fanwen Wang", "Yinzhe Wu", "Wenhao Ding", "Jiahao Huang", "Fadong Shi", "Anoop Shah", "Amit Kaural", "Jamil Mayet", "Guang Yang", "ChoonHwai Yap" ]
eess.IV
[ "eess.IV", "cs.CV" ]
Yihao Luo, Dario Sesia et al. [tnote1]Official Implementation: [https://github.com/Luo-Yihao/GHDHeart]https://github.com/Luo-Yihao/GHDHeart 1]Yihao Luocor1fn1 [cor1]Corresponding authors y.luo23@imperial.ac.uk 8,7]Dario Sesiafn1 d.sesia22@imperial.ac.uk [fn1]The two authors contribute equally. 1,3,4]Fanwen Wang 1,3,4]Yinzhe Wu 1]Wenhao Ding 1,3,4]Jiahao Huang 1,3]Fadong Shi 2,6]Anoop Shah 2,7]Amit Kaura 2,7]Jamil Mayet 1,7,3,4]Guang Yangcor1fn2 g.yang@imperial.ac.uk 1]ChoonHwai Yapcor1fn2 c.yap@imperial.ac.uk [fn2]Co-last Senior Authors: Guang Yang and Choon Hwai Yap [1]Department of Bioengineering, Imperial College London, London, UK [8]Department of Medicine, Imperial College London, London, UK [2]Department of Cardiology, Imperial College Healthcare NHS Trust, London, UK [7]National Heart & Lung Institute, Imperial College London, London, UK [3]Imperial-X, Imperial College London, London, UK [4]Cardiovascular Research Centre, Royal Brompton Hospital, London, UK [5]British Heart Foundation Centre of Research Excellence, Imperial College London, UK [6]London School of Hygiene and Tropical Medicine, London, UK XXXX XXXX XXXX XXXX XXXX § ABSTRACT Three-dimensional (3D) mesh reconstruction of the cardiac anatomy from medical images is useful for shape and motion measurements and biophysics simulations to facilitate the assessment of cardiac function and health. However, 3D medical images are often acquired as 2D slices that are sparsely sampled (e.g., large slice spacing) and noisy, and 3D mesh reconstruction on such data is a challenging task. Traditional voxel-based approaches utilize pre- and post-processing that compromises fidelity to images, while mesh-level deep learning approaches require large 3D mesh annotations that are difficult to get. Therefore, direct cross-domain supervision from 2D images to 3D meshes is a key technique for advancing 3D learning in medical imaging but it has not been well-developed. While there have been attempts to approximate the voxelization and slicing of meshes that are being optimized, there has not yet been a method for directly using 2D slices to supervise 3D mesh reconstruction in a differentiable manner. Here, we propose a novel explicit differentiable voxelization and slicing (DVS) algorithm allowing gradient backpropagation to a 3D mesh from its slices, which facilitates refined mesh optimization directly supervised by the losses defined on 2D images. Further, we propose an innovative framework for extracting patient-specific left ventricle (LV) meshes from medical images by coupling DVS with a graph harmonic deformation (GHD) mesh morphing descriptor of cardiac shape that naturally preserves mesh quality and smoothness during optimization. The proposed framework achieves state-of-the-art performance in cardiac mesh reconstruction tasks from densely sampled (CT) as well as sparsely sampled (MRI stack with few slices) images, outperforming alternatives, including marching cubes, statistical shape models, algorithms with vertex-based mesh morphing algorithms and alternative methods for image-supervision of mesh reconstruction. Experimental results demonstrate that our method achieves an overall Dice score of 90% during a sparse fitting on multi-datasets. The proposed method can further quantify clinically useful parameters such as ejection fraction and global myocardial strains, closely matching the ground truth and outperforming the traditional voxel-based approach in sparse images. 
52-04, 41A10, 65D18, 65D17, 92C55 Keywords: Cardiac Shape Mesh Reconstruction, Differentiable Slicing § INTRODUCTION Cardiovascular diseases are among the leading causes of mortality worldwide, accounting for around 17.9 million deaths annually <cit.>. Computational reconstruction of the cardiac anatomy is useful for shape and motion measurements to determine heart function for diagnosis, treatment and prognosis <cit.>. It is also useful for physics-based modelling to understand cardiac physiology and predict intervention outcomes, and for surgical planning <cit.>. However, reconstructing patient-specific cardiac meshes is challenging because of imaging imperfections (insufficient resolution or excessive noise) and the variability of cardiac shape across individuals. To date, most studies have focussed on the segmentation of voxelated medical images rather than a mesh-level reconstruction. When the final mesh result is desired, investigators typically use offline algorithms for isosurface extraction from segmentations and contours (<cit.>), via the marching cubes algorithm (<cit.>) and point cloud surface reconstruction (<cit.>). However, meshes reconstructed by these algorithms often suffer from staircase artifacts, uncontrollable topology, high polygon counts, and poor smoothness. These issues can lead to inaccuracies in representing the complex geometries of cardiac structures, and the smoothing post-processing used to address staircasing can deteriorate fidelity to the images. Further, these methods involve multi-step procedures without differentiability, which prevents the integration of this mesh reconstruction approach into deep learning frameworks for better optimization and learning. In sparsely sampled data, it is even harder to apply marching cubes, as this leads to many more topological flaws, and pre-processing the sparse images with interpolations also deteriorates fidelity to the images. A 3D mesh, as a discretization of a non-Euclidean manifold (<cit.>), was hard to optimize directly within deep learning frameworks until the recent development of graph convolutional networks (GCNs) (<cit.>). Some studies proposed deep learning-based methods to directly predict the mesh results (<cit.>). These methods typically combine a convolutional neural network (CNN) for image feature extraction and a graph convolutional network (GCN) for mesh processing. However, these data-driven methods require large, high-quality training datasets and ground truths, which can be expensive or even impossible to obtain. Several works resorted to using marching cubes to obtain such training datasets, thus retaining the limitations of marching cubes. Therefore, it is desirable to develop a differentiable algorithm to simulate the slicing of 3D objects, which allows the gradient of a loss function defined on a 2D image to be back-propagated to the mesh. This can be used to optimize the mesh directly with image-level supervision, where image-level annotations are much easier to obtain. To the best of our knowledge, prior to this work there was no explicit differentiable algorithm for this purpose, though some works have proposed alternative approximate methods such as <cit.>. Thus, the first contribution of this work is to propose a differentiable voxelization and slicing (DVS) algorithm for establishing direct, global supervision of image labels on the slices of the mesh being optimized, without approximation errors. 
Meanwhile, mesh-level learning for 3D cardiac reconstruction is usually achieved by optimizing a learnable vertex-wise deformation from a template mesh, which can cause oscillations and self-intersections unless additional regularization terms are introduced. Inspired by the graph Fourier analysis <cit.>, we propose a global mesh deformation algorithm based on graph harmonics (GHD), which decomposes the deformations of a mesh as surface Fourier waves to preserve geometric information at different levels. Representing the deformation as a global function on the mesh, rather than a local function on each vertex, can preserve mesh smoothness and quality during the optimization process, thereby overcoming the limitations of current vertex-wise deformation techniques and providing a more robust framework for cardiac mesh reconstruction. Compared to the existing mesh morphing methods and statistical shape models (SSM), such as <cit.>, our GHD algorithm tends to take the balance between the dimensional reduction and the degree of freedom and avoid the dependency on the training data, which is more flexible and efficient for the mesh reconstruction. By combining DVS and GHD, we propose a novel differentiable framework for cardiac mesh reconstruction adaptive to dense or sparse segmentation slices. This framework is flexible and can incorporate various differentiable operators as auxiliary constraints. It does not require pre- and post-processing, smoothness regularization, or prior training. It achieves SOTA performance on different datasets in both CT and MRI. §.§ Related Work In the following, we review the existing methods for cardiac mesh reconstruction and discuss their limitations. §.§.§ Voxel-Based Cardiac Mesh Reconstruction Traditionally, 3D mesh-based heart reconstruction from labeled images is conducted via the marching cubes algorithm (<cit.>), coupled with smoothing post-processing to remove the staircase texture. An example of this staircase texture can be seen in Fig. <ref>. However, the marching cubes algorithm has several limitations. Firstly, it relies on the assumption of a uniform voxel grid, which can lead to inaccuracies in representing the complex geometries of cardiac structures, smoothing post-processing, and deteriorating its fidelity to images. Additionally, the algorithm often produces meshes with many polygons, necessitating further simplification and optimization steps to make the meshes usable for computational modeling, which can introduce additional errors and distortions. Moreover, the marching cubes algorithm inherently lacks differentiation capabilities, which limits its integration with modern machine learning frameworks that require differentiable operations for optimization and learning. This non-differentiability hinders the development of end-to-end deep learning models for cardiac mesh reconstruction, where back-propagation through the entire pipeline, including the reconstruction step, is essential. More recently, some works have proposed differentiable versions of the marching cubes algorithm, such as <cit.>. These methods show promise in enabling the integration of mesh reconstruction into deep learning frameworks, facilitating end-to-end optimization and learning. However, without cross-domain supervision, these methods still rely on 3D label for training, which prevents their applications in medical imaging tasks. §.§.§ Cardiac Statistical Shape Models (SSM) SSM is another approach for image-based mesh reconstruction. 
It is traditionally derived from principal component analysis (PCA) (<cit.>) and proper orthogonal decomposition (POD) (<cit.>). These methods typically use training data to learn a low-dimensional parametric shape model that captures the variability of the cardiac shape across the population. Subsequently, a weighted sum of the shape modes is fitted to the images for the mesh reconstruction. For example, <cit.> developed a PCA-based statistical shape model from an atlas of over 1000 MR images. <cit.> developed a similar model with echocardiography scans of 435 patients to guide segmentations in other scan modalities. In addition, deep learning-based SSMs have been proposed, such as <cit.>, which uses a variational autoencoder (VAE) to learn the shape modes from a large dataset of cardiac MR images. These methods have shown high accuracy in reconstructing cardiac meshes from images. However, this approach still utilizes marching cubes to derive training mesh data and thus retains the limitations of marching cubes. Further, training meshes must all have a uniform mesh structure with a fixed number of nodes, which can limit mesh quality and increase the data preparation burden. It would thus be advantageous to have a technique that does not require training on large datasets, such as our GHD+DVS approach. §.§.§ Mesh-based Deep Learning Cardiac Reconstruction Deep learning approaches have shown great promise in cardiac mesh reconstruction. For instance, <cit.> developed a deep learning framework for whole-heart mesh reconstruction directly from 3D image data with promising accuracy. Their methods conventionally involve a segmentation module with a U-Net architecture (<cit.>) that allows supervision using ground truth annotations, followed by a mesh generation module that predicts the mesh vertices from the segmented image with a graph convolutional network (<cit.>). Another approach is the mesh-based VAE model proposed by <cit.> for non-linear dimensionality reduction and cardiac mesh generation, which also demonstrates high accuracy and physiologically plausible reconstructions. However, both approaches rely on manual preparation of mesh ground truths, which is time-consuming, and share the accuracy limitations of traditional voxel-based methods. It would thus be advantageous to have a technique that does not require training on large datasets and avoids traditional voxel-based methods, such as our GHD+DVS approach. 
The advantages of deep learning approaches include their differentiability, which allows integration into end-to-end optimization frameworks, and their ability to learn complex, non-linear mappings from data, leading to high reconstruction accuracy. However, similar to statistical shape models, these approaches are data-driven and require large, high-quality training datasets and ground truths, which can be challenging to obtain. Mesh morphing methods establish a mean template mesh of the heart, and the reconstruction process involves deforming the template mesh using local optimizations to match tissue boundaries on input images. For example, <cit.> proposed a method to warp a patient-specific ED volume mesh based on registration-based propagated surface meshes using a log barrier-based mesh warping (LBWARP) method. <cit.> proposed a method that applies a registration-based deformation field to mesh vertices, supervised by the Chamfer distance (CD) between the warped mesh and the target mesh. Mesh morphing methods can be sensitive to the template initialization, which may require complicated steps and manual effort for mitigation. However, this approach can be differentiable, and if managed well, can produce smooth reconstructions with high image fidelity. Current approaches, as above, utilize a marching cubes mesh database for training; this can be avoided when they are coupled with our proposed DVS. §.§.§ Approximating Cross-domain Supervision The typical way to facilitate image supervision of mesh reconstruction takes the form of a differentiable loss function describing the match between the mesh nodes and image voxels or image pixels along several image planes or slices, which can be optimized during the reconstruction process. For example, the fitting of statistical shape modes or mesh morphing template meshes to images requires a differentiable module that minimizes a loss term describing the proximity of mesh nodes to image boundaries. This is typically achieved via distance-based measures between mesh nodes and image boundary voxels, such as the Chamfer distance. However, these are local loss functions that are likely to generate local minima, making convergence more challenging. Having a global loss function can minimize this effect and enable better convergence and accuracy. The differentiable mesh-to-image rasterizing proposed by <cit.> compares the image-level mask contour with probabilistic mesh intersections, considering only the boundary information. This approach, however, is again a local loss function with the above limitations. Further, the projection of only the mesh boundary to the rasterization plane can be noisy. Another approach, the approximately differentiable voxelization and slicing (ADVS) method proposed by <cit.>, performs an initial offline voxelization of the canonical template, followed by voxelated image warping guided by iterative mesh fitting. This method also relies on a local distance-based loss. Moreover, supervision is implemented at the image level, meaning that back-propagation does not directly relate to mesh-level optimization but stops at the initial voxelization mask. Our proposed differentiable DVS algorithm utilizes an effective global loss function for the image supervision of mesh reconstruction. Unlike the deep learning approaches above <cit.>, DVS can achieve direct supervision at the mesh level, rather than an indirect one at the image level, facilitating back-propagation and improving convergence and thus accuracy. 
The eigenvalues represent the graph frequencies, and the eigenvectors serve as the basis for the transform. A graph signal is a function defined on the graph's nodes, with each node associated with a signal value. The GFT projects a graph signal onto the eigenvector basis of the Laplacian, enabling analysis in the spectral domain. This approach has applications in various fields, including network analysis, image processing, and machine learning on graph-structured data <cit.>. Mesh spectral processing extends these concepts to 3D meshes, which are crucial in computer graphics and geometric processing. A 3D mesh, typically represented by vertices and faces, can be analyzed using the mesh Laplacian, analogous to the graph Laplacian. The spectral decomposition of the mesh Laplacian facilitates operations like smoothing, compression, and shape analysis. Both graph Fourier and mesh spectral processing provide powerful tools for handling complex data structures, enabling more effective and efficient analysis and manipulation of signals on graphs and meshes. Key references in this field include foundational works by Shuman et al. on graph signal processing <cit.> and by Zhang et al. on spectral mesh processing <cit.>, which lay the groundwork for these transformative techniques. Inspired by these works <cit.>, our GHD approach considers the deformations of a mesh as decomposed on spectral bases to preserve geometric information at different levels. Based on the graph spectral and GFT theory, we propose a novel optimization technique for generating patient-specific left ventricle (LV) meshes via global deformations. This method ensures the smoothness and quality of the generated meshes, thereby overcoming the limitations of current vertex-wise deformation techniques and providing a more robust framework for cardiac mesh reconstruction. §.§ Contributions Our specific contributions can be summarized as follows: Differentiable Voxelization and Slicing (DVS) Algorithm. We propose the DVS algorithm for establishing direct, global supervision of image labels on mesh reconstruction. Compared to local distance-wise supervision approaches and indirect supervision approaches, this improves convergence and accuracy, contributing to a more robust ability to reconstruct from sparsely sampled images. The DVS algorithm ensures that the back-propagation process directly relates to mesh-level optimization, facilitating more precise and reliable cardiac mesh reconstructions. This technical can be widely used in general medical 3D reconstruction tasks. Graph Harmonics Deformation (GHD) Algorithm. We introduce the novel GHD algorithm for generating patient-specific left ventricle (LV) meshes quickly by morphing from a canonical mesh, where displacements are described as surface Fourier waves. GHD naturally preserves high mesh quality and smoothness, enabling robust performance even in sparsely sampled medical images. This enhances the accuracy and efficiency of cardiac mesh reconstruction. Novel Differentiable Framework for Cardiac Mesh Reconstruction from 2D Slices. By combining DVS and GHD, we propose a robust framework for differentiable cardiac mesh reconstruction adaptive to dense or sparse segmentation slices. This framework has the flexibility of incorporating various differentiable operators as auxiliary constraints, does not requiring pre-and post-processing, smoothness regularization, or prior training. It achieves SOTA performance on different datasets in both CT and MRI. 
§ THEORETICAL FRAMEWORK §.§ Differentiable Voxelization & Slicing (DVS) Voxelization and slicing are essential for image supervision on 3D cardiac mesh reconstructions. Traditional mesh-boolean-based algorithms (<cit.>) are not differentiable and cannot be integrated into optimization frameworks such as neural networks. As a result, pre- and post-processing steps are often required to obtain 3D results from image-level training, complicating visualization and evaluation. Previous works such as <cit.> have attempted to address this by using probabilistic rasterization and image-level warping for implicit voxelization and slicing. However, these methods have limitations in efficiency and accuracy of information transfer between images and meshes. Our proposed differentiable DVS algorithm can overcome these challenges and provide direct supervision between meshes and images, facilitating mesh-level optimization in medical image deep learning. Our method is inspired by classical physics field theory, where vector fields emanating from a source decay in strength according to the inverse square law with distance from the source, as seen in gravitational <cit.> and electric fields <cit.>. In this context, every query point generates a vector field whose flux through the cardiac mesh surface encodes its occupancy within the mesh. By morphing the mesh to maximize the occupancy of all voxelated points designated as inside the structure on the image, we can achieve an optimal fit to the image data. Mathematically, the field E_q generated by a single source q satisfies: ∬_∂Ω⟨ n, E_q ⟩ ds = ∭_Ω ∇· E_q dv = 4πδ_q, where δ_q equals 1 if q ∈Ω or 0 if not, and Ω is a 3D volume with ∂Ω as its meshed surface. The abstract inverse quadratic field is defined as the unit field direction vector multiplied by the inverse quadratic field strength: E_q(x) = (x-q)/||x-q|| · 1/||x-q||^2 = (x-q)/||x-q||^3. The Gauss theorem states that the surface integral of the vector flux over ∂Ω is proportional to the total influence of the sources within the surface <cit.>. Conversely, this surface integral can determine the number of sources within ∂Ω, or the binary occupancy of a point source. This is known as the Winding Number Theorem in topology <cit.>. Our DVS algorithm is formulated as the discrete version of Equation (<ref>). To determine whether a point is within a mesh boundary, we construct its vector field E_q and evaluate it at each vertex of the mesh surface ∂Ω. Next, at every vertex, we take the inner product of E_q and the normal vector n, and sum these after weighting by the surface area represented by the vertex (the area of its dual faces) <cit.>. The occupancy of the query source point q is calculated by: Ocp(q) = 1/(4π) ∑_V ∈Σ ⟨ n(V), E_q(V) ⟩· Area^*(V), where V refers to a vertex. Alternatively, the discrete integration can be calculated as a facet-wise summation, which is more stable but consumes more memory: Ocp(q) = 1/(4π) ∑_F ⊂Σ ⟨ n(F), E_q(c(F)) ⟩· Area(F), where c(F) is the centroid of the facet F. To minimize the error of the discrete integration and maintain the continuous differentiability of the algorithm, we utilize the tanh function to approximate the binary value: Ocp(q) = tanh(β·(Ocp(q) - 1/2)), where β is a hyper-parameter to control the smoothness of the approximation, usually set to 10^3 in our experiments. Fig. <ref> shows the inverse quadratic fields on the surface of a left ventricle mesh, derived from a query point inside the mesh and one outside, and the resulting occupancy value. 
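A compact PyTorch sketch of the facet-wise occupancy and its tanh smoothing, written independently of the official implementation, is given below. The cube mesh, the query points, and β are only for the sanity check, and outward face orientation is assumed.

```python
import torch

def soft_occupancy(query, verts, faces, beta=1e3):
    """Differentiable occupancy of query points w.r.t. a triangle mesh via the
    per-facet winding-number sum, followed by a tanh smooth step.
    query: (Q,3), verts: (V,3), faces: (F,3) long tensor. Returns (Q,)."""
    v0, v1, v2 = verts[faces[:, 0]], verts[faces[:, 1]], verts[faces[:, 2]]
    # n(F) * Area(F) = 0.5 * (v1-v0) x (v2-v0), assuming outward-oriented faces
    n_area = 0.5 * torch.cross(v1 - v0, v2 - v0, dim=-1)          # (F,3)
    centroid = (v0 + v1 + v2) / 3.0                               # (F,3)
    d = centroid[None, :, :] - query[:, None, :]                  # (Q,F,3) = x - q
    E = d / (d.norm(dim=-1, keepdim=True) ** 3 + 1e-12)           # inverse-square field
    occ = (E * n_area[None, :, :]).sum(-1).sum(-1) / (4 * torch.pi)
    # smooth step: ~ +1 inside, ~ -1 outside (rescale to {0,1} if preferred)
    return torch.tanh(beta * (occ - 0.5))

# Sanity check on a unit cube (12 outward-oriented triangles); gradients flow to verts
verts = torch.tensor([[0,0,0],[1,0,0],[1,1,0],[0,1,0],
                      [0,0,1],[1,0,1],[1,1,1],[0,1,1]], dtype=torch.float32,
                     requires_grad=True)
faces = torch.tensor([[0,2,1],[0,3,2],[4,5,6],[4,6,7],[0,1,5],[0,5,4],
                      [1,2,6],[1,6,5],[2,3,7],[2,7,6],[3,0,4],[3,4,7]])
q = torch.tensor([[0.5,0.5,0.5],[2.0,2.0,2.0]])
occ = soft_occupancy(q, verts, faces)
occ.sum().backward()                      # gradient w.r.t. mesh vertices exists
print(occ.detach())                       # approximately [+1, -1]
```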
The DVS formulation is differentiable, as the operators used (inner products, summations, and the continuous tanh activation) are all differentiable. Furthermore, it is a global formulation, as it considers the influence of a point source across all vertices or facets of the mesh. This contrasts with local formulations such as the Chamfer loss and differentiable rasterization loss <cit.>, which focus on localized mesh-to-voxel matching. Local formulations are prone to local minima, making convergence more challenging. A comparison of mesh-to-voxel alignment between the proposed differentiable voxelization algorithm and two implicit alternatives, ADVS <cit.> and Differentiable Rasterize <cit.>, is shown in Table <ref> in the next section. The central panels of the corresponding figure compare slicing accuracy: white marks the slicing of the original mesh; after the mesh is deformed, green shows the ground truth slicing of the updated mesh, and blue shows the slicing results of the different methods. The proposed algorithm is flexible and robust. Equation (<ref>) provides an estimated occupancy even for non-watertight meshes. The DVS algorithm can be applied to point samples in the background space as well as inside the labeled cardiac space and, in combination, produces better results. §.§ Graph Harmonic Deformation (GHD) Our proposed GHD method is a spectral model of the cardiac mesh. GHD describes the mesh deformation from a canonical or template mesh to the target mesh and is designed to preserve the mesh triangle quality and smoothness while keeping enough degrees of freedom to fully capture complex anatomic features. GHD models the mesh connectivity as a graph, and mesh deformation as displacement vectors on the mesh nodes. Mesh deformation is modeled via the Graph Fourier Transform (GFT), as a linear combination of the eigenfunctions of the Laplacian matrix of the cotangent-weighted mesh graph, each of which describes a smooth, periodically fluctuating function on the mesh surface. This effectively reconstructs the mesh surface via harmonic surface waves, hence the name Graph Harmonic Deformation. Mathematically, the graph Laplacian is defined as L = D - A, where D is the degree matrix, a diagonal matrix with the degrees of each node on its diagonal, and A is the adjacency matrix of the graph. The eigenvectors of the Laplacian matrix provide a set of basis functions called the Graph Fourier Basis, denoted by U := [u_1, …, u_N], where L · u_i = λ_i u_i, 1 ≤ i ≤ N. The eigenvectors corresponding to smaller eigenvalues of the graph Laplacian are the low-frequency Graph Fourier Basis functions, and vice versa. In this sense, the eigenvalues are the spectral energies, describing whether functions defined on the graph are fluctuating or smooth. After the basis is defined, the GFT can be expressed by inner products of an arbitrary function with the basis vectors. Mathematically, for a graph G and a real-valued function f : N_G →ℝ on G, we have the GFT of f as ϕ = U^T · f, where U^T denotes the transpose of the Graph Fourier Basis U, and ϕ is called the vector of Graph Fourier Coefficients. Conversely, the original function f can be naturally reconstructed by multiplying the Graph Fourier coefficients with the corresponding basis vectors: f = U ·ϕ. A natural smoothing filter on the graph can be formulated by preserving the first p low-frequency components of the Graph Fourier basis, i.e., f_p = ∑_i < p u_i ·ϕ_i.
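As a concrete reference for the GFT machinery above, the short NumPy sketch below assembles the plain L = D - A Laplacian of a weighted graph, computes the Graph Fourier basis, and applies the low-pass filter f_p; the cotangent-weighted variant actually used by GHD is described next. Function and argument names are illustrative, not taken from the authors' code.

```python
import numpy as np

def gft_lowpass(adjacency, signal, p):
    """Project a graph signal onto the first p Graph Fourier basis vectors.

    adjacency: (N, N) symmetric adjacency (or weight) matrix.
    signal:    (N,) or (N, 3) signal on the nodes (e.g. vertex displacements).
    p:         number of low-frequency modes to keep.
    """
    degree = np.diag(adjacency.sum(axis=1))
    laplacian = degree - adjacency                  # L = D - A
    # eigh returns eigenvalues in ascending order, so the first p columns
    # of U are the low-frequency Graph Fourier basis vectors.
    eigvals, U = np.linalg.eigh(laplacian)
    phi = U.T @ signal                              # GFT coefficients
    phi_lp = phi.copy()
    phi_lp[p:] = 0.0                                # discard high frequencies
    return U @ phi_lp, phi                          # smoothed signal, full spectrum
```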
We adopt the cotangent-weighted graph Laplacian, where the weights are defined as: ω_ij = (cot(α_ij) + cot(β_ij))/2, where α_ij and β_ij represent the angles opposite the edge e_ij joining vertices v_i and v_j in the two neighboring triangular faces, respectively. The cotangent weights provide valuable information about the local geometry of the mesh, capturing curvature and shape characteristics, and the cotangent-weighted Laplacian is a standard discrete approximation of the Laplace-Beltrami operator of the underlying manifold <cit.>. Our experiments show that the cotangent-weighted Laplacian facilitates the global deformation of the mesh and preserves the triangle quality and smoothness during the optimization without the need for additional regularization terms. To balance different kinds of geometric information and avoid numerical issues caused by possibly negative cotangent weights, i.e., cot(α_ij) + cot(β_ij) < 0, we combine three kinds of weights to form a mixed Laplacian: L_mix = L_cot + λ_normL_norm + λ_unwL_unw, where L_unw is the unweighted Laplacian and L_norm is the inversely normalized Laplacian, with weights w_ij = 1/|| x_j - x_i ||, which can be viewed as a discretization of the directional derivative on the surface. Notice that the cotangent Laplacian forms the main part of the mixed Laplacian, while the L_norm and L_unw terms can be regarded as low-weight regularization. Fig. <ref> shows the energy distributions of the various modes of the mixed Laplacian. The first mode represents the first wave number, with uniform energy across the surface; the second to fourth modes appear to be of the second wave number, where energy variations are monotonic across the surface as half-period surface functions, while modes 5-9 appear to be of the third wave number, featuring single-period surface functions with one minimum or maximum. As the number of modes at each wave number is in an arithmetic progression, with (2k-1) modes at wave number k, the total number of modes employed should logically be ∑_k=1^n (2k-1) = n^2, where n is the number of wave numbers to be included, so that no mode is missing from any particular wave number. To achieve the desired shape, we fit the GHD coefficients to minimize the loss between the deformed canonical mesh and the target. The optimization is effectively executed through gradient descent, represented mathematically as: Δϕ = -η·∂/∂ϕ Loss(ℳ_0 + Uϕ, ℳ̂), where η is the learning rate, ℳ_0 is the canonical mesh, ℳ̂ is the target mesh, and Uϕ is the displacement field, so that ℳ_0 + Uϕ is the deformed mesh. When a mesh ground truth is available, the loss function can be defined as the Chamfer distance <cit.> between the deformed mesh and the target mesh. When only image-level supervision is possible, the loss function can be defined as the Dice loss <cit.> calculated from our differentiable slicing algorithm. As the GHD naturally preserves the smoothness and quality of the mesh during mesh morphing, there is no need for smoothness regularization constraints. Consequently, the GHD can focus on minimizing the target-oriented loss, and as such reaches better convergence and accuracy with higher efficiency than traditional mesh morphing approaches. Further, the natural smoothness enables bridging between sparsely sampled image planes, allowing the GHD to remain robust even when only very few image planes or image voxels are available.
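A minimal sketch of the GHD fitting loop described above is given below, assuming that the Graph Fourier basis U of the mixed Laplacian has already been computed (for instance with an eigendecomposition as sketched earlier) and that target surface points are available for a Chamfer-style loss. All names, the number of modes, and the hyperparameters are illustrative assumptions.

```python
import torch

def fit_ghd(canonical_verts, U, target_pts, n_modes=49, lr=1e-2, steps=500):
    """Fit GHD coefficients phi so that M_0 + U @ phi approaches the target.

    canonical_verts: (N, 3) vertices of the canonical mesh M_0.
    U:               (N, N) Graph Fourier basis of the mixed Laplacian.
    target_pts:      (M, 3) points sampled from the target surface.
    n_modes:         number of low-frequency modes (n^2 for n wave numbers).
    """
    basis = U[:, :n_modes]                                  # low-frequency modes
    phi = torch.zeros(n_modes, 3, requires_grad=True)       # GHD coefficients
    opt = torch.optim.Adam([phi], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        deformed = canonical_verts + basis @ phi            # M_0 + U phi
        # Symmetric (squared) Chamfer distance between vertices and target points.
        d = torch.cdist(deformed, target_pts)               # (N, M) pairwise distances
        loss = d.min(dim=1).values.pow(2).mean() + d.min(dim=0).values.pow(2).mean()
        loss.backward()
        opt.step()
    return (canonical_verts + basis @ phi).detach(), phi.detach()
```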
§.§ Differentiable Physiologic Constraints Besides voxelization and slicing, we propose a few differentiable mesh operations for auxiliary supervision, which can be implemented in 3D and 4D cardiac reconstruction tasks. Several others are possible. §.§.§ Differentiable Thickness During the fitting of sparsely sampled image data, controlling the thickness of the cardiac wall can be difficult, and crossover of the inner and outer surfaces can happen due to insufficient image supervision. This is especially so at the apex of the heart, where the inner and outer surfaces are close in space but far apart along the surface. To address this, we introduce a differentiable thickness function that can be used to regularize thicknesses, for example by enforcing a minimum value or a value similar to that elsewhere in the cardiac chamber. Thickness(q⃗) = min_p⃗∈ℳ( ‖q⃗ - p⃗‖ + λ‖N⃗_⃗p⃗ + N⃗_⃗q⃗‖), where q⃗ is the query point on the mesh, p⃗ is the closest point on a face on the opposite side of the wall, found via the differentiable point_face_distance_forward PyTorch function, N⃗_⃗p⃗ and N⃗_⃗q⃗ are the normal vectors of the faces containing p⃗ and q⃗ respectively, and λ is a hyper-parameter to balance the distance and the normal consistency. The formulation assumes that the two points defining the local wall thickness are a minimal distance apart and have nearly opposite face normals, so that ‖N⃗_⃗p⃗ + N⃗_⃗q⃗‖ is small. The thickness function is compatible with gradient descent and can be used to formulate a traditional mean squared error loss term to supervise mesh reconstruction. §.§.§ Volume & Weak Incompressibility We further introduce a differentiable volume function for supervising mesh scaling and deformation, as well as a differentiable change-of-volume function to weakly enforce myocardial incompressibility physics. The differentiable volume is given as: V = 1/3∭_Ω∇·x⃗ dv = 1/3∬_∂Ω⟨n⃗, x⃗⟩ ds, where x⃗ is the position vector and n⃗ is the outward normal vector of the mesh surface. The volume integral is converted to a surface integral by the divergence (Gauss) theorem. The discretization of Equation (<ref>) is given as: V = 1/3∑_F ⊂Σ⟨n⃗(F), x⃗(c(F)) ⟩·Area(F), where c(F) is the centroid of the face F. The differentiable volume can be used to supervise the mesh scaling and the deformation of the myocardium. To enforce incompressibility, we use a function describing the change of volume within the mesh, which can be enforced to be zero in a loss term, expanded via the chain rule: d V/d t = 1/3d/d t∬_Σ⟨n⃗, x⃗⟩ ds = 1/3∬_Σd/d t⟨n⃗, x⃗⟩ ds = 1/3( ∬_Σ⟨d/d tn⃗, x⃗⟩ ds + ∬_Σ⟨n⃗, d/d tx⃗⟩ ds ) ≃1/3( ∬_Σ⟨Δ_t^t+1n⃗, x⃗⟩ ds + ∬_Σ⟨n⃗, D_t^t+1⟩ ds ) = 0, where Δ_t^t+1n⃗ is the difference of the normal vector between times t and t+1, and D_t^t+1 is the mesh displacement field from time t to t+1. The weak incompressibility constraint will be applied to reconstruct 4D cardiac motion. § PROPOSED GHD-DVS FRAMEWORK In this section, we present the GHD-DVS pipeline, its loss functions, the datasets used for experiments, and the performance measures used to evaluate the results. §.§ Overall Pipeline The overall pipeline for the differentiable mesh reconstruction combining DVS and GHD is shown in Fig. <ref>. The process starts with image-level pre-processing, where the raw images are de-blurred, resampled, and intensity-normalized, as is typically done <cit.>. The heart region is then segmented from the images; this can be performed manually or via a deep learning algorithm, such as the U-Net in <cit.>.
The labeled pointclouds from the right ventricle, left ventricle, left ventricle cavity and anocelia are then extracted from the segmented images. It is worth noting that the labeled point clouds are sparse and irregularly distributed when the raw image stack is sparse. In our experiments, we directly extracted the labeled point clouds from the manual annotations provided in the dataset to avoid potential errors from the segmentation model. Before the mesh reconstruction, the canonical shape is roughly aligned to the target image. The rigid orientation from the canonical mesh to the target pointclouds is obtained by optimizing the quaternion representation of the rotation matrix <cit.> and the translation vector over random sample points from the canonical shape and the target image stack. The right ventricle alignment is considered in the rigid orientation, which facilitates breaking the symmetry of the left ventricle to avoid wrong-paired fitting during the GHD optimization (i.e., the shape fit converging while vertex indices are misaligned). The canonical shape is a manually optimized LV mesh with 4000 vertices approximating the mean shape of the training dataset. Our experiments show that our mesh morphing method can be sensitive to the chosen canonical shape; however, due to the approximately axisymmetric shape of the left ventricle, it is not difficult to find a canonical shape that works well. In contrast, for more complex geometries such as cranial aneurysms attached to surrounding blood vessels, a good canonical shape close to the average of the anticipated geometries is required. The entire pipeline thus consists of the rough rigid orientation followed by the GHD optimization supervised by the differentiable slicing. The optimization is executed by the Adam optimizer <cit.>. See Fig. <ref> for the whole pipeline of our 3D mesh reconstruction. §.§ Loss Functions We adopt the Dice loss (<cit.>) to supervise the match between the reconstructed mesh and the ground truth image segmentations: Dice(ℳ_t, Msk) = 2∑_P_i ∈SamplesOcp(P_i) ·Msk(P_i) / ( ∑Ocp(P_i) + ∑Msk(P_i) ), where Ocp(P_i) is the occupancy of sample P_i with respect to the current mesh, and Msk(P_i) is the ground truth mask value obtained from slicing the images. This loss term encourages all segmented voxels in the image planes to lie within the mesh, and all non-segmented voxels to lie outside the mesh. Since the GHD is naturally smooth, we do not need to regularize smoothness. The only regularization term we use is the thickness constraint, which avoids zero thickness and mesh collapse at the apex during reconstruction from sparse image data. We constrained the apical thickness to be greater than 4 mm, as informed by reports of left ventricular wall thickness in the adult population <cit.>, using the thickness loss: Loss_th = ∑_P_i ∈SamplesSiLU(4 mm - Thickness(P_i)), which penalizes samples whose local thickness falls below 4 mm, where SiLU(x) = x/(1 + e^-x) <cit.> is the sigmoid linear unit function. The final loss function is the combination of the Dice loss and the thickness loss: Loss = Loss_Dice + λ·Loss_th, where Loss_Dice = 1 - Dice(ℳ_t, Msk) and λ is the weight of the thickness loss.
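A minimal sketch of how these two loss terms might be combined is shown below. It assumes occupancy values rescaled to [0, 1], pre-computed differentiable thickness samples in millimetres, and illustrative names and defaults; it is not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def reconstruction_loss(occ_fg, occ_bg, thickness_mm, lam=0.01, min_thick=4.0):
    """Combine the soft-Dice term and the thickness regularizer.

    occ_fg:       (Nf,) predicted occupancy at points labeled as myocardium.
    occ_bg:       (Nb,) predicted occupancy at points labeled as background.
    thickness_mm: (Nt,) differentiable wall-thickness samples, in mm.
    """
    occ = torch.cat([occ_fg, occ_bg])
    mask = torch.cat([torch.ones_like(occ_fg), torch.zeros_like(occ_bg)])
    dice = 2.0 * (occ * mask).sum() / (occ.sum() + mask.sum() + 1e-8)
    dice_loss = 1.0 - dice
    # SiLU(4 - thickness) penalizes samples thinner than the 4 mm floor.
    thick_loss = F.silu(min_thick - thickness_mm).sum()
    return dice_loss + lam * thick_loss
```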
The weight of the thickness loss is set to 0.01 in our experiments. §.§ Datasets In our study, we employed and analyzed cardiac ventricular volumetric structural imaging data derived from two computed tomography (CT) datasets, MMWHS <cit.> and CCT48 <cit.>, two magnetic resonance (MR) datasets, ACDC <cit.> and UK Biobank (UKBB) <cit.>, and one 3D echocardiography dataset, MITEA <cit.>. The CT datasets, MMWHS and CCT48, are characterized by their high-resolution images, with an in-plane resolution of 0.78 mm and a slice thickness of 1.6 mm, making them dense imaging data sources due to the finely detailed spatial resolution they provide. The MR datasets, ACDC and UK Biobank, exhibit a sparser imaging structure with larger gaps between imaging planes. Specifically, ACDC has an in-plane resolution of 1.37 to 1.68 mm with slice thicknesses varying between 5 and 10 mm. The UK Biobank provides MR imaging data with a resolution of 1.7 mm, with slice thicknesses of 6.0 mm in short-axis (SAX) views and 8.0 mm in long-axis (LAX) views. The MITEA dataset is a set of 3D echocardiography data annotated via multi-modality reconstruction with matching MRI images. MMWHS comprises 20 patients and CCT48 comprises 48 patients; we utilized data from 100 patients in ACDC, 16 patients in UK Biobank, and 10 patients in MITEA. §.§ Performance Measures We evaluated the performance of our method using several metrics. The Dice coefficient was used to measure the overlap between the reconstructed mesh and the ground truth segmentations. The Chamfer Distance (CD) and Hausdorff Distance (HD) were used to evaluate the accuracy of the reconstructed mesh's surface compared to the ground truth surface. The Chamfer Distance (CD) between the reconstructed mesh and the ground truth surface is defined as: CD(ℳ, ℳ̂) = 1/|ℳ|∑_x ∈ℳmin_y ∈ℳ̂‖ x - y ‖^2 + 1/|ℳ̂|∑_y ∈ℳ̂min_x ∈ℳ‖ y - x ‖^2, where ℳ and ℳ̂ are the reconstructed and ground truth meshes, respectively. The Hausdorff Distance (HD) is given by: HD(ℳ, ℳ̂) = max{sup_x ∈ℳinf_y ∈ℳ̂‖ x - y ‖ , sup_y ∈ℳ̂inf_x ∈ℳ‖ y - x ‖}. These performance measures provide a comprehensive evaluation of the reconstructed mesh's accuracy and fidelity to the ground truth. § RESULTS §.§ Convergence and Robustness of GHD We first evaluate the advantages of using the GHD to reconstruct the left ventricle myocardial mesh via mesh morphing, supervised by a smooth ground truth mesh, using the Chamfer distance as the loss function. We compare GHD to a traditional vertex-wise formulation (where mesh deformation is modeled directly as vertex displacements). The results in Fig. <ref> (a) show that with the same optimizer and learning rate, the GHD method converges better, to a lower Chamfer loss, demonstrating efficiency and robustness. Furthermore, visual inspection shows that the GHD better preserves triangle quality and smoothness during the optimization, whereas the vertex-wise method results in a mesh with irregular triangles and sharp edges. With stronger smoothness regularization, such irregularities in the vertex-wise approach can be reduced, but convergence becomes poorer and ends with a higher Chamfer loss. Being naturally smooth, the GHD avoids this trade-off. Preserving mesh quality is the essential advantage of GHD over vertex-wise displacement.
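For reference, the two surface metrics defined in the Performance Measures subsection can be sketched as follows for point sets sampled from the reconstructed and ground truth surfaces; this illustrative NumPy version is not the evaluation code used in the experiments.

```python
import numpy as np

def chamfer_hausdorff(pts_a, pts_b):
    """Chamfer distance (squared) and Hausdorff distance between point sets.

    pts_a, pts_b: (Na, 3) and (Nb, 3) arrays of surface points.
    """
    # Pairwise Euclidean distances.
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    d_ab = d.min(axis=1)          # for each point of A, nearest point of B
    d_ba = d.min(axis=0)          # for each point of B, nearest point of A
    chamfer = (d_ab ** 2).mean() + (d_ba ** 2).mean()
    hausdorff = max(d_ab.max(), d_ba.max())
    return chamfer, hausdorff
```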
We use the ratio of triangles containing only good angles (between 30 and 120 degrees) as a measure of mesh quality, to avoid extremely acute or obtuse triangles, which cause numerical instability in downstream applications such as finite element simulations. The good angle ratio (GAR) is defined as: GAR = | {T∈ F| ∀θ∈ T, 1/6π≤θ≤2/3π} |/| F |, where F is the set of all triangles in the mesh, and θ ranges over the angles of a triangle T. The results in Fig. <ref> show that GHD preserves mesh quality during optimization, while coordinate-based morphing decreases the GAR significantly, from 0.942 to 0.534, indicating that the mesh quality is degraded during optimization. §.§ Performance of GHD-DVS on Dense Image Data (CT) We compare GHD to coordinate-based mesh morphing algorithms and statistical shape models (SSMs) in terms of their effectiveness and robustness as shape representations, by testing them on 3D mesh reconstruction from dense CT images. The SSMs evaluated include models utilizing PCA of node coordinates as basis modes, and a model utilizing a VAE of node coordinates for reconstruction. For the PCA SSM, we investigate two models: one trained by us using a limited number of CT images, and another from <cit.>, which is trained with 1000 MR images. Unlike mesh morphing approaches (GHD and coordinate-based mesh morphing), SSMs require pre-training on a dataset. The self-trained PCA SSM and the VAE SSM are trained with 20 samples from the MMWHS dataset, while the Bai et al. PCA SSM is adopted in its pre-trained state. Twenty modes are used from each PCA during reconstruction. Ten cases from MMWHS and the 48 samples from the CCT48 dataset are used for testing all methods. The evaluation task is to fit the various models to the ground truth mesh by optimizing the learnable parameters θ (mode or latent vector weights for PCA or VAE, GHD coefficients for GHD, and vertex displacements for coordinate-based mesh morphing) via the gradient descent optimizer, as follows: Δθ = - η∇_θLoss(ℳ, ℳ̂), where η is the learning rate, Loss is either the CD or the DVS-based loss, and ℳ, ℳ̂ are the mesh being optimized and the ground truth mesh, respectively. Here, a ground truth mesh is required, as the CD can only be evaluated between two meshes. We utilized the unsmoothed marching cubes mesh as the ground truth mesh, as it can represent the labelled image. Its shortcomings of staircasing artefacts and imperfect mesh quality do not prevent the evaluation of the geometric accuracy of the various shape reconstruction algorithms. We evaluate performance using the Dice score (proportion of labelled image voxels within the mesh) and the CD and HD between the reconstructed and ground truth meshes, calculated via Eq. (<ref>) and Eq. (<ref>). Results are shown in Table <ref>. Mesh morphing methods (coordinate-based and GHD) generally outperformed SSMs (PCA and VAE) in all measures. The modal constraint in the PCA SSMs may reduce the goodness of fit between the reconstructed mesh and the image labels. The two PCA SSM models performed similarly, suggesting that the low number of training cases is not limiting the performance of our self-trained PCA SSM. The VAE SSM appeared overtrained, with good performance during training but poor performance during testing, which may be associated with the low sample size available for training. We acknowledge that it may improve with a larger training dataset, but we do not have one for further investigation.
However, this result points to an advantage of GHD, which does not require any pretraining or training dataset for reconstruction. Comparing GHD to coordinate-based mesh morphing approaches, GHD has lower CD and HD, suggesting a smoother and better-quality mesh, corroborating the results in subsection <ref>; however, GHD has a lower Dice score, which is a consequence of GHD being a lower-dimensional representation of shape. With DVS, however, the Dice performance of GHD is only insignificantly lower than that of coordinate-based mesh morphing. Further, comparing GHD to marching cubes with smoothing, the Dice and CD performance are very similar, although GHD has a slightly higher HD. The results thus suggest that GHD is a good method for mesh reconstruction on dense images, as it concurrently offers good accuracy and good mesh quality. GHD enables a reduced-dimensional representation of shape without sacrificing the ability to accurately model an unseen shape, and it does not require pre-training with a large dataset. Unlike marching cubes, GHD provides reconstructions where the number of vertices is fixed, which may make it easier to use for downstream processes such as motion quantification. Comparing CD to DVS supervision for GHD and coordinate-based mesh morphing approaches, we observe that DVS gives better Dice while CD gives better CD and HD. This is natural, as the CD loss minimizes distances between meshes, while DVS optimizes the fit with image-labelled voxels. §.§ Performance of GHD-DVS on Sparse Image Data (MRI) 3D mesh reconstruction from MRI short-axis stack images is more challenging, as the image data are sparser due to the large spacing between image planes. We tested various mesh reconstruction algorithms on the ACDC and UKBB datasets, where the MRI images are acquired with 5 to 20 slices. The MRI datasets regularly contain cases with misalignment between the image slices. Therefore, for this experiment, we manually selected 15 well-aligned cases from each dataset. All cases are down-sampled to 5 short-axis slices for the sparse MRI reconstruction experiment. Reconstructions are performed only from the short-axis stack, but the evaluation of Dice (percentage occupancy of labelled voxels within the mesh) is performed on both short-axis and long-axis (including 2- and 4-chamber view) image slices. We compared GHD+DVS to several algorithms: (1) marching cubes applied to the interpolated MRI image (interpolation conducted to achieve isotropic resolution), with or without smoothing; (2) several mesh morphing approaches that utilize the vertex-wise displacement morphing model. These mesh morphing approaches include those that use ADVS <cit.>, Differentiable Rasterization <cit.>, and our DVS as the image guidance for mesh reconstruction. The comparison is shown in Table <ref>. Results show that marching cubes without smoothing achieves only moderately good Dice, CD and HD performance. However, the mesh quality is poor, with staircasing artefacts, and the global topology is sometimes not controllable, with holes and other topological defects appearing; an example is shown in the second column of Fig. <ref>. While staircasing artefacts can be removed by smoothing, topology defects are hard to fix. Combining smoothing post-processing with marching cubes does not significantly improve the performance measures, but it improves the visual quality of the mesh. On average, coordinate-based mesh morphing approaches do not perform substantially better than the unsmoothed marching cubes approach.
However, the DVS-based mesh morphing performs better than marching cubes in almost all measures. Comparing the three techniques for image guidance, Differentiable Rasterization performs better than ADVS, achieving both higher Dice and lower CD and HD, but it does not perform as well as DVS. The results thus suggest that DVS is a superior image guidance technique during mesh reconstruction. Finally, comparing coordinate-based mesh morphing guided by DVS to GHD guided by DVS, we observe a further and substantial improvement in Dice, a mild improvement in HD, and a slightly poorer but very similar CD. This shows that GHD is a better technique than direct vertex morphing for mesh reconstruction on sparse images. Further, GHD+DVS achieved the best results among all the methods tested, showing that it can outperform the state of the art. From the visual results in Fig. <ref> and Table <ref>, we can see that GHD+DVS can reconstruct the left ventricle mesh from sparse MRI images with high accuracy and fidelity to the ground truth. The reconstructed meshes are visually similar to the ground truth meshes, with good alignment and smoothness. The results demonstrate the effectiveness of GHD+DVS for 3D mesh reconstruction from sparse MRI images. §.§ Clinical Analysis To validate the clinical applicability of GHD+DVS, we employed it to compute left ventricular (LV) volume, ejection fraction (EF), and myocardial strains. The end-diastolic volume (EDV) was determined after fitting both the marching cubes (MC) method and GHD+DVS to the same ground truth segmentations. We utilized 50 segmentations from the MITEA 3DE dataset (<cit.>), which comprises annotated 3D echocardiography scans with segmentations of the LV myocardium and cavity derived from paired cardiac MRI scans. To assess each method's ability to reconstruct the LV mesh from reduced ground truth labels, the segmentations were progressively subsampled. For the EDV and EF analyses (refer to Fig. <ref>(a & b)), we examined 10 cases, varying the number of slices from 10 to 50, ranging from sparse to dense configurations. Reconstructions were performed at both end systole and end diastole, with EF calculated as the percentage change in cardiac blood volume between these time points. Additionally, global longitudinal strain (GLS) and global circumferential strain (GCS) were derived from the UK Biobank for 15 patients and compared to manual segmentations. The results in Fig. <ref>(a) show that the marching cubes method coupled with smoothing (currently the gold standard in medical image processing) yields significantly different LV reconstructions on sparse images (p-value < 0.05), indicating a deviation from the dense-fit ground truth. In contrast, GHD+DVS provides more accurate volume quantification even in the sparsest cases, with a p-value of 0.757, as determined by linear regression analysis. Fig. <ref>(b) further illustrates that the marching cubes method tends to underestimate EF when sampling is sparse, leading to values that deviate more significantly from the ground truth. GHD+DVS, however, maintains a stronger correlation with the expected EF values, closely aligning with the ground truth. For both volume and EF quantification, the marching cubes method with 50 slices was used as the ground truth. In terms of strain analysis, Fig. <ref>(c) demonstrates that GHD+DVS closely matches manually calculated strains, underscoring its reliability for clinical application.
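As a small illustration of how the clinical indices above can be obtained from the reconstructed meshes, the sketch below computes the cavity volume of a closed mesh with the divergence-theorem formula from the Volume subsection and derives the ejection fraction from end-diastolic and end-systolic reconstructions. Function names and the signed-tetrahedron form of the discretization are illustrative choices.

```python
import numpy as np

def mesh_volume(verts, faces):
    """Volume of a closed triangle mesh via the divergence theorem.

    Equivalent to (1/3) * sum_F <n(F), c(F)> * Area(F), written here in the
    standard signed-tetrahedron form (1/6) * sum_F det[v0, v1, v2].
    """
    v0, v1, v2 = verts[faces[:, 0]], verts[faces[:, 1]], verts[faces[:, 2]]
    return np.abs(np.einsum('ij,ij->i', v0, np.cross(v1, v2)).sum()) / 6.0

def ejection_fraction(edv_mesh, esv_mesh):
    """EF as the relative change in cavity volume between ED and ES meshes."""
    edv = mesh_volume(*edv_mesh)   # (verts, faces) at end diastole
    esv = mesh_volume(*esv_mesh)   # (verts, faces) at end systole
    return 100.0 * (edv - esv) / edv
```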
§ DISCUSSION To summarize, we propose a novel method, GHD+DVS, for the differentiable reconstruction of the left ventricle myocardial 3D mesh from clinical images, which can be CT, echo, or MRI images. We propose the novel DVS approach to supervise the mesh morphing so that it fits the image labels. The winding-number-based DVS is a global loss function that enables easier and better convergence for improved accuracy. We further introduced GHD as a novel mesh morphing approach for representing the 3D mesh, which has the advantages of being naturally smooth without regularization and yet robustly flexible enough to fit complex shapes. GHD modes are derived from the canonical mesh, which serves as the shape prior, and do not need to be derived from a large dataset of mesh ground truths. This makes the approach easy to adopt, as it does not require the laborious collection and processing of large datasets. We demonstrate that the GHD+DVS approach has robust performance that challenges the state of the art. On dense data, it performs as well as the traditional mesh morphing + Chamfer loss approach and outperforms SSM and marching cubes approaches. However, with sparse MRI data, existing methods perform poorly, and GHD+DVS is the best-performing approach. One important utility of GHD+DVS is the extraction of clinically relevant measurements. We show that it can extract EF, cardiac chamber volumes, and myocardial strains accurately, performing better than the clinical gold standard of marching cubes. In this experiment, we further demonstrated that by applying the approach to different states of dynamically moving organs, such as the heart, we can describe deformations, volume changes, and other dynamic changes, suggesting a wide range of possible applications. Currently, GHD+DVS is applied frame by frame. However, it can be incorporated into a temporal framework that regularizes for temporal consistency. Besides reconstructing the LV, we propose that the GHD+DVS method can be a universal method for reconstructing various tissues and organs. For example, we successfully applied it to the reconstruction of cranial aneurysms together with the surrounding blood vessels (Fig. <ref>). The GHD+DVS reconstructions can serve as good shape inputs for flow dynamics predictor networks, and they may provide better morphological parameters for disease outcome prediction. We further envision that mesh reconstruction can also be applied to the brain, blood vessels, liver, placenta, fetus, limb parts, etc., for various biomedical research and clinical measurements. Importantly, the GHD-DVS mesh fitting framework is differentiable and can be utilized in deep learning and non-deep learning algorithms requiring mesh reconstruction toward specific objectives. For example, it can be used for modeling cardiac biomechanics according to physics constraints while supervised by motions extracted from images, or it can be used for modeling the geometry of the heart while maintaining a concurrent match to cardiac images from different scan modalities (e.g., MRI and echo). § ACKNOWLEDGEMENT Yihao Luo's PhD project is funded by Imperial College London and the China Scholarship Council (CSC). Dario Sesia's PhD project is funded by an NHLI Endowment studentship and supported by the British Heart Foundation (BHF) DTP (Doctoral Training Programme).
This work was supported in part by the ERC IMI (101005122), the H2020 (952172), the MRC (MC/PC/21013), the Royal Society (IEC\NSFC\211235), the NVIDIA Academic Hardware Grant Program, the SABER project supported by Boehringer Ingelheim Ltd, NIHR Imperial Biomedical Research Centre (RDA01), Wellcome Leap Dynamic Resilience, UKRI guarantee funding for Horizon Europe MSCA Postdoctoral Fellowships (EP/Z002206/1), and the UKRI Future Leaders Fellowship (MR/V023799/1).
http://arxiv.org/abs/2409.02471v1
20240904064317
Demographic parity in regression and classification within the unawareness framework
[ "Vincent Divol", "Solenne Gaucher" ]
stat.ML
[ "stat.ML", "cs.CY", "cs.LG" ]
Vincent Divol (vincent.divol@ensae.fr) and Solenne Gaucher (solenne.gaucher@ensae.fr), CREST, ENSAE, IP Paris Demographic parity in regression and classification within the unawareness framework § ABSTRACT This paper explores the theoretical foundations of fair regression under the constraint of demographic parity within the unawareness framework, where disparate treatment is prohibited, extending existing results where such treatment is permitted. Specifically, we aim to characterize the optimal fair regression function when minimizing the quadratic loss. Our results reveal that this function is given by the solution to a barycenter problem with optimal transport costs. Additionally, we study the connection between optimal fair cost-sensitive classification and optimal fair regression. We demonstrate that nestedness of the decision sets of the classifiers is both necessary and sufficient to establish a form of equivalence between classification and regression. Under this nestedness assumption, the optimal classifiers can be derived by applying thresholds to the optimal fair regression function; conversely, the optimal fair regression function is characterized by the family of cost-sensitive classifiers. § INTRODUCTION §.§ Motivation Recent breakthroughs in artificial intelligence have led to the widespread adoption of machine learning algorithms, exerting an increasingly influential and insidious impact on our lives. Essentially, these algorithms learn to detect and reproduce patterns using massive datasets. It is now widely recognized that these predictions carry the risk of perpetuating, or even exacerbating, the social discrimination and biases often present in these datasets <cit.>. Algorithmic fairness seeks to measure and mitigate the unfair impact of algorithms; we refer the reader to the reviews by <cit.> for an introduction. Different approaches have been developed to mitigate algorithmic unfairness. One approach focuses on individual fairness, ensuring that similar individuals are treated similarly, regardless of potentially discriminatory factors. Another approach targets group fairness, aiming to prevent algorithmic predictions from discriminating against groups of individuals. Statistical fairness falls under the latter approach and relies on the formalism of supervised learning to impose fairness criteria while minimizing a risk measure. In this work, we study risk minimization under the demographic parity criterion, which requires that predictions be statistically independent of sensitive attributes. Although this criterion, introduced by <cit.>, has some known limitations <cit.>, it finds application in a wide range of scenarios <cit.>. Its simplicity arguably makes it the most extensively studied criterion. The statistical fairness literature can be broadly divided into two currents, depending on whether the direct use of the protected attribute in predictions is permitted or not. A first line of work, studying the awareness framework, considers regression functions that make explicit use of discriminating attributes, thus treating individuals differently based on discriminating factors. For this reason, this approach is also often referred to as disparate treatment. In this work, we adopt the unawareness framework, in which disparate treatment is prohibited and the regression function cannot directly use the sensitive attribute.
Empirical evidence from simulations <cit.> indicates that within the unawareness framework, predictions often result in suboptimal trade-offs between fairness and accuracy and may induce within-group discrimination. Moreover, the authors conjecture that while the unawareness framework aims to prevent discrimination based on sensitive attributes, predictions in this setting implicitly rely on estimates of these attributes, a phenomenon later proven in <cit.> for classification problems. Nevertheless, this framework remains crucial in practice, as the direct use of sensitive attributes may be legally prohibited, or the attributes may simply be unavailable at prediction time. In this paper, we investigate the problem of fair regression under demographic parity constraints within the unawareness framework. A key difficulty in overcoming algorithmic unfairness is the limited understanding of how fair algorithms make predictions. Therefore, we focus on providing a simple mathematical characterization of the optimal regression function in the presence of fairness constraints. §.§ Problem statement Let (X,S,Y) be a tuple in 𝒳×𝒮×ℝ with distribution ℙ, where X corresponds to a non-sensitive feature in a feature space 𝒳, S is a sensitive attribute in a finite set 𝒮, and Y is a response variable that we want to predict, which has a finite second moment. To illustrate this problem with an example, assume, as in <cit.>, that X represents a candidate's skill, S is an attribute indicating groups of populations, and Y is the current market salary of the candidate. Due to historical biases, the distribution of the salary may be unbalanced between the groups. Our aim is to make predictions that are fair, and as close as possible to the current market value Y. In the unawareness framework, we cannot make explicit use of the sensitive attribute to make our predictions. Therefore, we consider regression functions of the form f : 𝒳→ℝ in the set ℱ of score functions. We want to ensure that our regression function satisfies the following demographic parity criterion. The function f : 𝒳→ℝ verifies the Demographic Parity criterion if f(X) ⊥ S. In essence, the demographic parity criterion requires that the distribution of predictions (in our example, the salary) be identical across all groups. We assess the quality of a regression function f through its quadratic risk ℛ_sq(f) = 𝔼[(Y - f(X))^2]. An optimal fair regression function f^* satisfies f^* ∈ argmin_f ∈ℱ{ℛ_sq(f) : f(X) ⊥ S }, where ℱ is the set of regression functions from 𝒳 to ℝ. Classical results show that when no fairness constraints are imposed, the Bayes regression function minimizing the squared risk ℛ_sq is a.s. equal to the conditional expectation η, where η(x) = 𝔼[Y | X = x]. In this paper, we also investigate the relationship between the classification and regression problems. When Y ∈{0,1} a.s., the quality of a classification function g : 𝒳→{0,1} can be assessed through its expected weighted 0-1 loss ℛ_y(g), where for y ∈ [0,1], ℛ_y(g) is defined as ℛ_y(g) = y·ℙ[Y = 0, g(X) = 1] + (1-y)·ℙ[Y = 1, g(X) = 0]. For the choice y = 1/2, minimizing this risk measure corresponds to maximizing the classical accuracy measure. For a given value y∈ [0,1], an optimal fair classification function g^*_y verifies g^*_y ∈ argmin_g ∈𝒢{ℛ_y(g) : g(X) ⊥ S }, where 𝒢 is the set of classification functions from 𝒳 to {0,1}. Let us again illustrate this problem with an example from recruitment.
Assume that X represents a candidate's skill, S is an attribute indicating different population groups, and Y denotes whether a human recruiter would consider the candidate qualified for a given position. Due to historical biases, the distribution of the binary response Y may be unbalanced across the groups. Our goal is to make predictions for the value of Y, or equivalently, to decide whether to accept or reject a candidate, in a way that is both accurate and fair. Specifically, under demographic parity, we aim to ensure that the probability of acceptance is the same across all groups. Classical results show that when no fairness constraints are imposed, the classifier g̅_y(x) = 1{η(x) ≥ y } is a Bayes classifier that minimizes ℛ_y(g). This relationship is at the heart of the design and study of plug-in classifiers <cit.>. Interestingly, it was recently shown that a similar relationship holds under demographic parity constraints in the awareness framework <cit.>. Extending this result to the unawareness framework has remained an open problem, which we address in this paper. Notation We first set some notation. Recall that we are given a tuple (X,S,Y) in 𝒳×𝒮×ℝ with distribution ℙ, where 𝒳 is any measurable space (the space of features) and 𝒮 is a finite set (the set of labels). For s∈𝒮, we denote by p_s the probability ℙ(S=s) and by μ_s the conditional law of X|S=s. We let μ = ∑_s∈𝒮 p_sμ_s be the marginal distribution of X. We let 𝒫(E) be the set of probability measures on a measurable space E. Moreover, we let L^1(ν) be the space of functions integrable with respect to the probability measure ν. Finally, C̊ denotes the interior of the set C. §.§ Related work Fair classification Research on optimal prediction under demographic parity constraints has primarily focused on classification, where the goal is to predict a binary response in {0,1}, as this problem is intrinsically linked to the issue of fair candidate selection, central in algorithmic fairness. This problem is well understood in the awareness setting from an algorithmic point of view <cit.>. On the theoretical side, <cit.> recently proved that the optimal classifier for the risk ℛ_y can be obtained as the indicator that the optimal fair prediction function for the squared loss f^* is above the threshold y, a result that was later extended in <cit.> to multi-class classification. Less is known about fair classification in the unawareness framework. On the algorithmic side, several works have proposed various relaxations of the demographic parity constraint, leading to tractable algorithms for computing classifiers <cit.>. On the theoretical side, <cit.> provided empirical evidence suggesting that fair classifiers may base their decisions on non-relevant features correlated with the sensitive attribute, potentially disrupting within-group ordering. This hypothesis was further confirmed by <cit.>, who characterized the optimal fair classifier in the unawareness framework. They showed that it is given by the indicator that the conditional expectation η(X) is above a threshold, which depends on the probabilities that the individual described by X belongs to the different groups. Notably, the question of whether this classifier can be obtained by thresholding the optimal fair prediction function for the squared loss remains an open problem.
Fair regression In the awareness framework, fair regression is well understood from both the algorithmic and theoretical points of view <cit.>. On the theoretical front, it has been shown that the problem of fair regression under demographic parity can be rephrased as the problem of finding the weighted barycenter of the distributions of η(X,S)=𝔼[Y|X,S] across different groups, with costs given by optimal transport problems. Assume that for all s ∈𝒮, the distribution ν_s of η(X,S) for S=s has no atoms, and let p_s = ℙ(S = s). Then, min_f is fair ℛ_sq(f) = min_ν∈𝒫(ℝ)∑_s∈𝒮 p_s𝒲_2^2(ν_s, ν), where 𝒲_2^2(ν_s, ν) is the squared Wasserstein distance between ν_s and ν. Moreover, if f^* and ν solve the left-hand side and the right-hand side problems respectively, then ν is equal to the distribution of f^*(X,S), and f^*(x,s) = (∑_s' ∈𝒮 p_s'Q_s')∘ F_s(η(x,s)), where Q_s and F_s are respectively the quantile function and the c.d.f. of ν_s. This result relates the problem of fair regression in the awareness framework to a more general optimal transport problem. Interestingly, this problem has an explicit solution, given by the quantile functions and c.d.f.s of the conditional expectation η(X,S) across the different groups. This explicit formulation yields, as an immediate consequence, that the optimal fair regression function preserves order, a property introduced in <cit.> within the awareness framework. Recall that the Bayes regression function in the awareness framework is η. A prediction function f is said to preserve order if for any two candidates (x,x') ∈𝒳^2 in the same group s ∈𝒮, η(x,s) ≤η(x',s) implies f(x, s) ≤ f(x',s). Thus, this property implies that the fairness correction does not alter the ordering of the predictions within a group. In contrast, the problem of fair regression within the unawareness framework has been seldom studied, particularly from a theoretical perspective. One reason for this is that the demographic parity constraint is more challenging to implement without disparate treatment. While algorithms complying with these constraints have been proposed by <cit.> and <cit.>, the authors do not claim that the estimators obtained are optimal in terms of risk. <cit.> propose an algorithm based on a discretization of the problem, followed by a reduction to cost-sensitive classification. However, their algorithm requires calling an oracle cost-sensitive classifier, which may not be available in practice. Additionally, their results are limited to a class of regression functions with bounded Rademacher complexity. §.§ Outline and contribution In this paper, we focus on the theoretical aspects of the problem of fair regression in the unawareness framework, specifically on characterizing and studying the optimal regression function. We extend results presented earlier in the awareness framework to this setting, albeit under the assumption that the sensitive attribute takes only two values; henceforth, we assume that 𝒮 = {1,2}. Although restrictive, this assumption is not uncommon in the literature <cit.> and covers the important case where one of the two groups includes protected individuals. Our results shed light on important phenomena, and we leave the extension to scenarios with more than two groups to future work. Similarly to the awareness case characterized in Theorem <ref>, we show that the solution to the fair regression problem in the unawareness framework is given by the solution to a barycenter problem with optimal transport costs.
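To illustrate the explicit awareness-framework solution recalled above, the following NumPy sketch composes the empirical c.d.f. of η(X,S) within a group with the p_s-weighted average of the group quantile functions, as in the formula f^*(x,s) = (∑_s' p_s' Q_s')∘ F_s(η(x,s)). It assumes access to samples of η(X,S) for each group and is a numerical illustration only; all names are hypothetical.

```python
import numpy as np

def fair_regression_awareness(eta_by_group, p_by_group):
    """Return a function (eta_value, s) -> fair prediction, following
    f*(x, s) = (sum_s' p_s' Q_s') o F_s(eta(x, s)).

    eta_by_group: dict s -> 1D array of samples of eta(X, S) given S = s.
    p_by_group:   dict s -> group probability p_s (summing to 1).
    """
    sorted_eta = {s: np.sort(v) for s, v in eta_by_group.items()}

    def cdf(s, value):                      # empirical F_s
        v = sorted_eta[s]
        return np.searchsorted(v, value, side='right') / len(v)

    def quantile(s, level):                 # empirical Q_s
        return np.quantile(sorted_eta[s], np.clip(level, 0.0, 1.0))

    def predict(eta_value, s):
        level = cdf(s, eta_value)
        return sum(p_by_group[t] * quantile(t, level) for t in sorted_eta)
    return predict
```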
We begin in Section <ref> with a brief introduction to optimal transport theory and to the main tools used in the proofs of our results. In Section <ref>, we characterize the optimal fair regression function. First, we prove in Proposition <ref> that in general, the optimal fair regression function f^* does not preserve order. Next, we demonstrate the following result, which relates fair regression in the unawareness framework to an optimal transport problem. This result is formalized in Theorem <ref>. Under mild assumptions, the optimal fair regression function f^* is given by the solution to a barycenter problem with optimal transport costs. In particular, there exists a function 𝐟^* such that f^*(x) = 𝐟^*(η(x), Δ(x)), where Δ(x) ∝ℙ(S= 1| X=x)/p_1 - ℙ(S= 2| X=x)/p_2. Comparing this result to the one provided in Theorem <ref> within the awareness framework, we note that there is no explicit formula for the optimal fair regression function within the unawareness framework. Moreover, Theorem <ref> underscores that the optimal fair regression function effectively relies on an estimate Δ(X) of the unobserved sensitive attribute S to make predictions, thereby indirectly implementing disparate treatment. This result provides a theoretical explanation for the empirical phenomenon observed by <cit.>. As noted in their work, this behavior is problematic as it can lead to basing predictions on factors that are not relevant to predicting the outcome Y, simply because they are informative for predicting the sensitive attribute S. In Section <ref>, we investigate the relationship between fair regression and fair classification when Y ∈{0,1}. We demonstrate the existence of a dichotomy based on a nestedness criterion. Recall that as the threshold y increases, the Bayes classifier g̅_y predicts 1 for a decreasing proportion of candidates; we show that this also holds for the optimal fair classifier g_y^*. We say that the fair classification problem is nested if, almost surely with respect to the measure μ of X, the prediction g^*_y(X) for the candidate X decreases as y increases. In other words, candidates rejected (i.e., with prediction 0) at low values of y cannot be accepted at higher values of y, when the proportion of accepted candidates is lower. For example, the Bayes classifier defined by g̅_y(x) = 1{η(x) ≥ y } satisfies this condition. When the nestedness criterion holds, the decision sets of the optimal fair classifiers for the different risks ℛ_y are nested. The following informal result summarizes our findings. Under mild assumptions, if the fair classification problem is nested, then the regression function f^*(x) = sup{y ∈ [0,1] : g^*_y(x) = 1} is optimal for the fair regression problem (<ref>); equivalently, the classifier g_y(x) = 1{f^*(x) ≥ y} is optimal for the fair classification problem (<ref>) for the risk ℛ_y. Conversely, if the classification problem is not nested and if f^* is the optimal fair regression function, then there exists y ∈ (0,1) such that g_y(x) = 1{f^*(x) ≥ y} is sub-optimal for the fair classification problem with risk ℛ_y. This result is formalized in Proposition <ref> and in Corollary <ref>. While nestedness may initially appear to be a natural assumption, it does not always hold. In Section <ref>, we show how to design examples of problems where this assumption is either met or violated. § A SHORT INTRODUCTION TO OPTIMAL TRANSPORT In this section, we provide a brief introduction to optimal transport.
We present the main tools that will be used in the proofs of the theorems in Sections <ref> and <ref>. We begin by providing an overview of optimal transport in Section <ref>, before discussing the multi-to-one dimensional transport problem in Section <ref>. §.§ The optimal transport problem Optimal transport provides a mathematical framework to compare probability distributions. Consider a Borel probability measure μ on a Polish space 𝒳 and a Borel probability measure ν on some other Polish space 𝒴. We are given a continuous cost function c:𝒳×𝒴→ [0,+∞], where c(x,y) represents the cost of moving a unit of mass from x∈𝒳 to y∈𝒴. The optimal transport problem consists in finding the optimal way of moving the distribution of mass μ to ν by minimizing the total displacement cost. Formally, a transport map is a measurable map T:𝒳→𝒴 such that the pushforward measure T♯μ of μ by T is equal to ν, where the pushforward measure is defined for all measurable sets B⊂𝒴 by T♯μ(B)=μ(T^-1(B)). The optimal transport problem is then the following: minimize ∫ c(x,T(x)) dμ(x) under the constraint T♯μ = ν. The existence of minimizers of the optimization problem (<ref>) is a delicate problem that depends on both the regularity of the cost function c and the properties of μ and ν. For instance, when 𝒳=𝒴=ℝ^d and c(x,y)=‖x-y‖^2, a solution exists whenever μ gives zero mass to sets of dimension smaller than d-1; otherwise, a solution may not exist, see <cit.>. When 𝒳=𝒴=ℝ^d and c(x,y)=‖x-y‖^2, the corresponding minimum is known as the (squared) Wasserstein distance between μ and ν, denoted by 𝒲_2^2(μ,ν). More generally, an optimal transport map exists whenever μ gives zero mass to sets of dimension smaller than d-1 and the cost function c(x,y)=‖x-y‖^2 is replaced by any smooth cost function c satisfying the so-called twist condition, which states that the determinant det(∂^2 c/∂ y_j∂ x_i) never vanishes. The optimal transport problem also admits a relaxed version in terms of transport plans, which is often more convenient to work with. A transport plan is a probability measure π on the product space 𝒳×𝒴 which has first marginal equal to μ and second marginal equal to ν: for all measurable sets A⊂𝒳 and B⊂𝒴, π(A×𝒴)=μ(A), π(𝒳× B)=ν(B), or, in probabilistic terms, if (X,Y)∼π, then X∼μ and Y∼ν. Informally, for x∈𝒳, the conditional law of Y|X=x describes the different locations where the mass initially at x will be sent. The cost of a transport plan π is given by ∬ c(x,y) dπ(x,y). Note that a transport map T induces a transport plan by considering the law π of (X,T(X)) (formally, π=(𝕀, T)♯μ). The optimal transport cost is defined by the following minimization problem: 𝒯_c(μ,ν)=min_π∈Π(μ,ν)∫ c(x,y) dπ(x,y), where Π(μ,ν) is the set of transport plans between μ and ν. Optimal transport plans always exist, whereas optimal transport maps may fail to do so. When optimal transport maps exist and the source measure μ has no atoms, the minimization problem (<ref>) gives the same value as the optimal transport cost defined in (<ref>), see <cit.>. Our proofs will rely heavily on the dual formulation of the optimal transport problem, which we now introduce. The c-transform of a function ϕ:𝒴→ℝ∪{+∞} is defined as ∀ x∈𝒳, ϕ^c(x)=sup_y∈𝒴(ϕ(y)-c(x,y)). The subdifferential of ϕ is defined as ∂_cϕ={(x,y)∈𝒳×𝒴: ϕ(y)-ϕ^c(x)=c(x,y)}. For the quadratic cost, these notions are closely related to the usual notions of convexity, with c-transforms being analogous to the concept of convex conjugates. Kantorovich duality <cit.> states that 𝒯_c(μ,ν) = sup_ϕ∈ L^1(ν)∫ϕ(y) dν(y)-∫ϕ^c(x) dμ(x).
Moreover, under the mild assumption that there exist two functions a∈ L^1(μ) and b∈ L^1(ν) such that c(x,y)≤ a(x)+b(y) for all x∈𝒳, y∈𝒴, the previous supremum is attained by a function ϕ, which we call a Kantorovich potential. In that case, any optimal transport plan π is supported on the subdifferential of the c-convex function ϕ, meaning that π(∂_cϕ)=1. This last condition imposes significant constraints on the structure of optimal transport plans. For the quadratic cost, this fact is the key ingredient in proving that optimal transport plans are induced by optimal transport maps. §.§ Multi-to-one dimensional optimal transport In the next section, we demonstrate that the fair regression problem within the unawareness framework can be reduced to a barycenter problem of the form: min_ν∈𝒫(ℝ) p_1𝒯_c(μ_1,ν)+p_2𝒯_c(μ_2,ν), where μ_1, μ_2 are two-dimensional probability measures and c:ℝ^2×ℝ→ [0,+∞) is a cost function. This reduction raises the question of whether the solutions to the barycenter problem (<ref>) can be characterized by transport maps. Proving that the optimal transport problem 𝒯_c(μ_s,ν) is solved by a transport map is nontrivial. Complications arise because the measures μ_s and ν are defined on spaces of different dimensions. Optimal transport problems involving spaces of different dimensions have not been as extensively studied and exhibit distinct properties compared to the standard case where both measures are defined on the same space, see <cit.>. For instance, the classical twist condition det(∂^2 c/∂ y_j∂ x_i)≠ 0 does not make sense in this setting: the matrix of mixed second derivatives of c is not square, so that the determinant is not even well-defined. <cit.> focus on the optimal transport problem between a measure μ supported on a domain 𝒳⊂ℝ^m (with m>1) and a measure ν on an interval 𝒴⊂ℝ for some cost function c:𝒳×𝒴→ [0,+∞). They demonstrate that an optimal transport map T:𝒳→𝒴 between μ and ν exists under a natural condition on (c,μ,ν) known as nestedness. For y∈𝒴, k∈ℝ, let 𝒳_≤(y,k)={x∈𝒳: ∂_y c(x,y)≤ k}. Kantorovich duality implies that an optimal transport plan between μ and ν will match an interval (-∞,y] to a set 𝒳_≤(y,k), where k=k(y) is a solution of the equation ν((-∞,y])=μ(𝒳_≤(y,k)). The triplet (c,μ,ν) is called nested if the collection of sets (𝒳_≤(y,k(y)))_y increases with y. <cit.> prove that an optimal transport map T between μ and ν exists when the problem is nested: informally, the monotonicity of (𝒳_≤(y,k(y)))_y ensures that a given x_0∈𝒳 belongs to the boundary of a single set 𝒳_≤(y_0,k(y_0)), with y_0 being equal to T(x_0). This nestedness condition will be crucial in <Ref>, where it will be used to establish the equivalence between regression and classification problems. However, in <Ref>, we will be able to show the existence of optimal transport maps for the barycenter problem (<ref>) (and consequently of optimal fair regression functions) without any nestedness condition. § FAIR REGRESSION AND THE BARYCENTER PROBLEM In this section, we characterize the solution to the fair regression problem. We begin by showing in Section <ref> that, under mild assumptions, the fair regression function does not preserve order. Then, in Section <ref>, we show that the fair regression problem can be reduced to a barycenter problem with optimal transport costs. Using the tools introduced in Section <ref>, we prove the existence of a fair optimal prediction function and study some of its properties.
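Before turning to the fair regression problem itself, the optimal transport cost 𝒯_c introduced above can be illustrated numerically for discrete measures by solving the linear program over transport plans directly. The sketch below does so with SciPy and evaluates, as an example, the two-to-one dimensional cost c((h,d), y) = (h-y)^2/|d| that reappears in the next subsection; all names and the toy data are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def discrete_ot_cost(mu_weights, nu_weights, cost_matrix):
    """Optimal transport cost T_c(mu, nu) for discrete measures, solved as a
    linear program over transport plans pi with prescribed marginals.

    mu_weights:  (n,) nonnegative weights summing to 1.
    nu_weights:  (m,) nonnegative weights summing to 1.
    cost_matrix: (n, m) matrix with entries c(x_i, y_j).
    """
    n, m = cost_matrix.shape
    c = cost_matrix.reshape(-1)                       # flatten pi row-major
    A_rows = np.kron(np.eye(n), np.ones((1, m)))      # sum_j pi_ij = mu_i
    A_cols = np.kron(np.ones((1, n)), np.eye(m))      # sum_i pi_ij = nu_j
    A_eq = np.vstack([A_rows, A_cols])
    b_eq = np.concatenate([mu_weights, nu_weights])
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return res.fun, res.x.reshape(n, m)

# Toy example with source points (h, d) in the plane and targets y on the line.
src = np.array([[0.2, 0.5], [0.8, -0.3], [0.4, 1.0]])
tgt = np.array([0.1, 0.5, 0.9])
C = (src[:, [0]] - tgt[None, :]) ** 2 / np.abs(src[:, [1]])
cost, plan = discrete_ot_cost(np.ones(3) / 3, np.ones(3) / 3, C)
```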
§.§ Fair regression functions do not preserve order Before analyzing in detail the fair regression problem in the unawareness framework, we establish a simple yet important property of fair regression functions. We begin by extending the definition of order preservation <cit.> to the unawareness framework. Recall that in this case, the Bayes prediction for a candidate x is given by η(x)=𝔼[Y|X=x]. A prediction function f is said to preserve order if, for any two candidates (x,x') ∈𝒳^2 in the same group s ∈𝒮, η(x) ≤η(x') implies f(x) ≤ f(x'). This definition is formalized below. A prediction function f preserves order if, ℙ⊗ℙ-almost surely, {η(X)<η(X') and S = S' } implies f(X)< f(X'). This property implies that the fairness correction does not alter the ordering of the predictions of the Bayes prediction function within a group. It is related to the concept of “rational ordering” introduced by <cit.> in the context of classification, where the authors require that within a group, the most able candidates are the ones accepted. Let f:𝒳→ℝ be a regression function with 𝔼[f(X)^2]< ∞ satisfying the demographic parity constraint. Assume that the Bayes regression function η does not satisfy the demographic parity constraint and that ℙ(S=s|X=x)∈ (0,1) for all s∈𝒮, x∈𝒳. Then, f does not preserve order. We prove the contrapositive: if f is a regression function satisfying the demographic parity constraint and preserving order, then the Bayes regression function also satisfies the demographic parity constraint. For a fixed group s∈𝒮, consider the joint law π_s of (η(X),f(X)), where X∼μ_s. As f preserves order, the support of the measure π_s is monotone, in the sense that ∀ (y_1,z_1), (y_2,z_2)∈ supp(π_s), y_1< y_2 implies z_1< z_2. According to <cit.>, this implies that π_s is actually the optimal transport plan for the quadratic cost between the first marginal of π_s, equal to η♯μ_s (which we denote ν_s), and the second marginal of π_s, equal to f♯μ_s (which we denote ν; this second measure does not depend on s because of demographic parity). We claim that strict monotonicity implies that the transport plan π_s takes the form of a transport map T_s transporting ν towards ν_s, that is, π_s = (T_s,𝕀)♯ν (see a proof below). So, if X∼μ_s, we have (η(X),f(X))=(T_s(f(X)),f(X)) almost surely. To put it another way, we have, for every s, η(x) = T_s∘ f(x) μ_s-almost everywhere. As ℙ(S=s|X=x)>0 for all x∈𝒳, this equality is also satisfied μ-almost everywhere. Hence, for μ-almost all x, the quantity T_s∘ f(x) does not depend on s (it is equal to η(x)). This defines a function T with η♯μ_s = T♯ f♯μ_s= T♯ν. As this measure does not depend on s, this proves that η satisfies the demographic parity constraint. To conclude our proof, it remains to prove our claim. Decompose ν as ν_1+ν_2, where ν_2 is atomless and ν_1 = ∑_j p_j δ_z_j. If f(X)=z_j, then we have η(X)=y_j for some value y_j: this value y_j has to be unique, for otherwise it would contradict the monotonicity assumption. Therefore, ν_s can be written as ν_s = ν_1s+ν_2s, where ν_1s = ∑_j p_jδ_y_j. Consider the plan π_1 = ∑_j p_j δ_(y_j,z_j). Then π_s-π_1 is a plan between ν_2s and ν_2. By <cit.>, as ν_2 is atomless, the monotonicity condition implies that it is induced by a transport map T̃_s. In total, we can define T_s by T_s(z_j)=y_j and by T_s=T̃_s on the complementary set of the atoms. Proposition <ref> implies, in particular, that in many instances, the optimal fair regression function does not preserve order.
Consequently, highly qualified individuals who belong to protected groups could potentially suffer from fairness corrections due to the demographic parity constraint. The condition that (S=s|X=x)∈ (0,1) for all s∈, x∈ ensures that the sensitive attribute S cannot be determined from the observation of X. When this condition is not satisfied, the distinction between the unawareness and awareness frameworks becomes blurred: if S can be inferred from X alone, it becomes meaningless to differentiate between a regression function that depends on both X and S, and one that depends solely on X. Furthermore, it is important to note that in the awareness framework, there do exist regression functions that satisfy the demographic parity constraint and preserve order, with the optimal fair regression function described in <Ref> being one such example. §.§ Reduction to an optimal transport problem In the following, we let ={1,2}. We assume that μ_1≠μ_2 (otherwise the Bayes regression function η already solves the fair regression problem). We now show how to transform the fair regression problem into a barycenter problem using optimal transport costs. To do so, we first leverage a reformulation of the demographic parity constraint due to <cit.>, which is based on the Jordan decomposition of the signed measure μ_1 - μ_2. Then, we show how to rephrase the regression problem as a barycenter problem, using this new constraint. Finally, we show that, under mild assumptions, the barycenter problem admits a unique solution, which is given by a transport map. §.§.§ Reformulation of the demographic parity constraint Let |μ_1-μ_2| be the variation of μ_1-μ_2 and define (μ_1 - μ_2)_+ = 1/2(|μ_1-μ_2|+μ_1-μ_2), (μ_1 - μ_2)_- = 1/2(|μ_1-μ_2|-μ_1+μ_2) the Jordan decomposition of μ_1-μ_2. The two measures (μ_1 - μ_2)_+ and (μ_1 - μ_2)_- have the same mass, which we denote by m. We define the scaled Jordan decomposition of μ_1-μ_2 as the pair of probability measures μ_+ = (μ_1 - μ_2)_+/m and μ_- = (μ_1 - μ_2)_-/m. Let μ_+/μ (resp. μ_-/μ) be the density of μ_+ (resp. μ_-) with respect to μ (that are defined uniquely μ-almost everywhere). As μ_+ and μ_- are mutually singular measures, we can always find versions of μ_+/μ and μ_-/μ such that the sets _+={x∈ : μ_+/μ(x)>0}, _-={x∈ : μ_-/μ(x)>0}, _==\ ( _+⊔_-). form a partition of , with μ_+ giving mass 1 to _+ and μ_- giving mass 1 to _-. Then, for for any three functions f_+, f_-, and f_= from to ℝ, we can define the associated function (f_+, f_-, f_=) equal to f_+ on _+, f_- on _-, and f_= on _=: (f_+, f_-, f_=)(x) = f_+(x) if x ∈_+ f_-(x) if x ∈_- f_=(x) if x ∈_=. Conversely, for any function f:→ℝ, there exist functions f_+, f_-, and f_= corresponding respectively to the restriction of f on _+, _-, and _=, i.e., such that f = (f_+, f_-, f_=). The following lemma, due to <cit.>, rephrases the demographic parity constraint in terms of μ_+ and μ_-. A regression function f : →ℝ verifies the demographic parity constraint if and only if f♯μ_+ = f♯μ_-. Lemma <ref> reveals that for any functions f, and f_+, f_-, f_= such that f = (f_+, f_-, f_=), the regression function f satisfies the demography parity constraint if and only if f_+♯μ_+ =f_-♯μ_-. The two functions f_+ and f_- can be chosen with disjoint support (in _+ and _-, respectively). Thus, the demographic parity constraint essentially reduces to the equality of the pushforward measures of two distinct probabilities (μ_+ and μ_-) by two distinct functions (f_+ and f_-). 
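When μ_1 and μ_2 are finitely supported, the scaled Jordan decomposition and the induced partition are elementary to compute, which makes the reformulation above easy to experiment with. The short Python sketch below is purely illustrative: the five support points, the group weights and the candidate prediction f are toy choices of ours, not objects from the paper.

```python
import numpy as np

# Toy discrete setting: five feature points x_1,...,x_5 (hypothetical example).
p1 = np.array([0.40, 0.30, 0.10, 0.10, 0.10])  # group-1 law mu_1
p2 = np.array([0.10, 0.10, 0.10, 0.30, 0.40])  # group-2 law mu_2

diff = p1 - p2
pos, neg = np.clip(diff, 0, None), np.clip(-diff, 0, None)   # Jordan decomposition
m = pos.sum()                                                # common mass of (mu_1 - mu_2)_+/-
assert np.isclose(m, neg.sum())
mu_plus, mu_minus = pos / m, neg / m                         # scaled Jordan decomposition

# Partition of the support induced by the decomposition.
X_plus  = np.where(pos > 0)[0]
X_minus = np.where(neg > 0)[0]
X_equal = np.where(np.isclose(diff, 0))[0]
print("X_+ =", X_plus, " X_- =", X_minus, " X_= =", X_equal)

# Demographic parity check for a candidate prediction f (toy values):
# by the lemma above, f satisfies the constraint iff the pushforwards of
# mu_plus and mu_minus by f coincide.
f = np.array([1.0, 2.0, 0.5, 2.0, 1.0])
def pushforward(weights, values):
    out = {}
    for w, v in zip(weights, values):
        out[v] = out.get(v, 0.0) + w
    return out
print(pushforward(mu_plus, f))
print(pushforward(mu_minus, f))
```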
§.§.§ A barycenter problem In order to rephrase the regression problem as a barycenter problem, we introduce further notation. We define Δ(x)= μ_+/μ(x) if x∈_+, Δ(x)= -μ_-/μ(x) if x∈_-, Δ(x)= 0 if x∈_=. Equivalently, Δ(x) is proportional to μ_1/μ(x)-μ_2/μ(x). We also define the cost c : ×→ [0,+∞] given by c(x,y) = (η(x) - y)^2/|Δ(x)| for all x∈ and all y∈ℝ. When X∼μ_±, the variables (η(X), Δ(X)) belong to Ω{(h,d)∈^2: d≠ 0}. In the following, we use bold notation to denote functions related to these two-dimensional variables. For example, we denote by (resp. ) the distributions of (η(X), Δ(X)) when X follows the distribution μ_+ (resp. μ_-). Note that the support of is included in the upper half-plane {d>0} while is included in the lower half-plane {d<0}. We define the two-to-one dimensional cost , given by (, y) = (h - y)^2/|d| for all = (h,d)∈Ω and y∈. Consider the barycenter problem minimize _(, ν) + _(,ν) over ν∈() where we recall that _(, ν) is the optimal transport cost for sending to ν with cost function , defined in Equation (<ref>). We say that a solution ν^bar of the barycenter problem is solved by optimal transport maps if _(, ν^bar) = ∫(, ())() for some transport maps :Ω→ from to ν^bar. There is a one-to-one correspondence between the set of solutions to the barycenter problem (<ref>) solved by optimal transport maps and the set of optimal fair regression functions solving (<ref>). This correspondence associates a barycenter ν^bar with optimal transport maps to the optimal fair regression function f=(f_+,f_-,η), where f_±(x)= (η(x),Δ(x)) for x∈. Classical computations show that _sq(f) = 𝔼[(η(X) - f(X))^2] + 𝔼[(η(X) - Y)^2]. Thus, minimizing the risk is equivalent to minimizing 𝔼[(η(X) - f(X))^2]. Now, 𝔼[(η(X) - f(X))^2] = ∫ (η(x)-f(x))^2 μ(x) =∫_𝒳_+ (η(x)-f(x))^2 μ/μ_+(x) μ_+(x) + ∫_𝒳_- (η(x)-f(x))^2 μ/μ_-(x) μ_-(x) + ∫_𝒳_= (η(x)-f(x))^2 μ(x). Using the definition of Δ along with Lemma <ref>, we see that any solution to the fair regression problem can be written as f = (f_+, f_-, η), where (f_+, f_-) is solution to the problem minimize ∫_𝒳_+(η(x) - f_+(x))^2/|Δ(x)|μ_+(x) + ∫_𝒳_-(η(x) - f_-(x))^2/|Δ(x)|μ_-(x) such that f_+♯μ_+ = f_-♯μ_-. The triplet (η(X), Δ(X), f(X)) defines a coupling π_f+ between and ν_f+ = f_+ ♯μ_+. Likewise, we define a coupling π_f- between and ν_f-. We can rewrite (<ref>) as 𝔼[(η(X) - (f_+, f_-, η)(X))^2] = ∫(,y)π_+f(,y) + ∫ c(,y)π_-f(,y) ≥_ (,ν_f+)+_(,ν_f-). The constraint f_+♯μ_+ = f_-♯μ_- implies that ν_f+ = ν_f-. Hence, inf_f fair𝔼[(η(X) - f(X))^2] ≥inf_ν∈()_ (,ν)+_(,ν). Reciprocally, assume that there exists ν^bar solving the above barycenter problem, and that an optimal transport map between and ν^bar is given by an application : Ω→, with () ♯=ν^bar. Likewise, we assume that there exists an optimal transport map between and ν^bar. Then, ♯ = ♯ = ν^bar. Defining f_±(x) =(η(x),Δ(x)), we have f_-♯μ_- = f_+♯μ_+ = ν^bar, and so (f_+, f_-, η) is a fair regression function. Also, we have by optimality that _(,ν^bar)+_(,ν^bar) = ∫ c(x,f_+(x))μ_+(x) + ∫ c(x,f_-(x)) μ_-(x) = [(η(X)-(f_+, f_-, η)(X))^2]. Hence, by (<ref>), the regression function (f_+, f_-, η) is optimal. This concludes the proof of Lemma <ref>. §.§.§ Transport maps for the barycenter problem The rest of this section is devoted to proving that the barycenter problem indeed admits a solution given by transport maps, which will imply that there exists a solution to the fair regression problem. We show that this holds under the following mild regularity assumption. 
The measures and give zero mass to graphs of functions in the sense that for any measurable function F:\{0}→, ({(F(d),d): d≠ 0})=0. By Fubini's theorem, this assumption is trivially satisfied if and have a density with respect to the Lebesgue measure. Another interesting example is given by the awareness framework, seen as a particular instance of the unawareness framework. Consider a triplet of random variable (X,S,Y)∼, where X∈ is a feature, S∈{1,2} is a sensitive attribute and Y∈ is a response variable of interest. Let Z = (X,S) and let be the law of the triplet (Z,S,Y). Then, there is an equivalence between considering an aware regression function f(X,S) under law and an unaware regression function f(Z) under law . Note that Z is a random variable on =×{1,2}. The laws μ_1 of Z|S=1 and μ_2 of Z|S=2 have disjoint support. It follows that _+=×{1} with μ_+=μ_1 and _-=×{2} with μ_-=μ_2. Then, Δ(x)=1/p_1 if x∈_+ and Δ(x) = -1/p_2 if x∈_-. In particular, both measures and are supported on horizontal lines in Ω. In that case, Assumption <ref> is equivalent to the fact that and have no atoms, which is exactly equivalent to the fact that the law of η(X) (for X∼μ) has no atoms. This assumption is often considered to be a minimal assumption to ensure the existence of optimal fair regression functions in the awareness framework. Hence, Assumption <ref> constitutes a generalization of this assumption to the unawareness framework. Assume that (X,Y,S)∼ is such that [Y^2]<∞. Under Assumption <ref>, there is a unique minimizer ν^bar of the barycenter problem inf_ν_ (,ν)+_(,ν). Moreover, this problem is solved by optimal transport maps . In particular, there exists a unique solution f^* of the regression problem under the demographic parity constraint (<ref>), which is given by ∀ x∈, f^*(x) = ((η(x), Δ(x) ), (η(x), Δ(x)), η(x)). Using Lemma <ref>, it is enough to show that the barycenter problem admits a unique solution ν^bar such that the corresponding transport problems _(, ν^bar) and _(, ν^bar) are solved by transport maps. Step 1: reduction to a standard transport problem. We begin by reducing the barycenter problem (<ref>) to a single two-to-two dimensional optimal transport problem _(, ). The multimarginal version of the barycenter problem reads inf_ν∈()_ (,ν)+_(,ν)= inf_ρ∈Π(·, ,)∫ ( (_1,y)+(_2,y)) ρ(y,_1,_2), where Π(·, ,) stands for the set of measures on ×Ω×Ω with second marginal and third marginal . Indeed, if ρ∈Π(·, ,), then its two first marginals provide a transport plan between its first marginal ν and , while the first and last marginals provide a transport plan between ν and . This proves that the left-hand side of Equation (<ref>) is smaller than the right-hand side. For the other inequality, consider ν∈(), with associated optimal transport plans π_+∈Π(,ν) and π_-∈Π(,ν). By the gluing lemma (see, e.g., Lemma 5.5 in <cit.>), there exists ρ∈Π(·, ,) such that the joint law of the first two marginals is equal to π_+, and the joint law of the first and last marginal is equal to π_-. Then, _ (,ν)+_(,ν)=∫ ( (_1,y)+(_2,y)) ρ(y,_1,_2), proving that the right-hand side is smaller than the left-hand side in (<ref>). This shows the validity of (<ref>). Furthermore, if ρ solves the right-hand side of (<ref>), then its first marginal ν is a barycenter. Actually, by optimality, for any (y,_1,_2) in the support of the optimal ρ, the point y necessarily minimizes the function z↦(_1,z)+(_2,z). Let us compute this minimizer. 
For _1 = (h_1, d_1) and _2 = (h_2, d_2), we have (_1,y) + (_2,y) = (h_1-y)^2/|d_1| + (h_2-y)^2 /|d_2|. This function is convex in y. The first order condition for optimality reads (y-h_1)/|d_1| + (y-h_2)/|d_2| = 0 ⟺ y = m(_1,_2)h_1 /|d_1| + h_2 /|d_2|/1/|d_1|+1/|d_2|. Moreover, the cost (_1,_2) inf_y (_1, y) + (_2, y) corresponding to this minimum is equal to (_1,_2) = (h_2-h_1)^2 /|d_1|+|d_2|. These considerations show that inf_ν∈()_ (,ν)+_(,ν)=_(,), and that optimal transport plans π^*∈Π(,) are in correspondence with barycenters ν through the formula ν=m ♯π^*. In particular, as there exists at least one optimal transport plan, the infimum in the barycenter problem is actually a minimum. Step 2: existence of a transport map. Note that (_1,_2)=(h_1-h_2)^2/|d_1|+|d_2|≤ 2h_1^2/|d_1| + 2h_2^2/|d_2|. This quantity is integrable against ⊗. Indeed, ∫h_1^2/|d_1|(h_1,d_1)=∫η(x)^2/Δ(x)μ_+(x) = ∫__+η(x)^2 μ(x) ≤[[Y|X]^2]≤[Y^2]<∞. In particular, the optimal cost _(, ) is finite. Hence, by Kantorovich duality (see <Ref>), there is a -convex function (called a Kantorovich potential) ϕ: Ω→∪{+∞} such that if we let Γ = {(_1,_2): ϕ(_1)-ϕ^(_2)=(_1,_2)} be the subdifferential of ϕ, then any optimal transport plan π satisfies π(Γ)=1, see <Ref>. We show in Appendix <ref> the following lemma. Let ϕ:Ω→∪{+∞} be a -convex function with dom(ϕ):={: ϕ()<+∞}. Then, the set of points ∈dom(ϕ) such that the partial derivative ∂_h ϕ() does not exist is included in a countable union of graphs of measurable functions F:d∈\{0}↦ F(d)∈. Let Σ be the countable union of graphs given by Lemma <ref> for the Kantorovich potential ϕ. According to Assumption <ref>, if we let Ω_0=Ω\Σ, then (Ω_0)=1. Let _1∈Ω_0 and let (_1,_2)∈Γ. Consider the function g__2:∈Ω↦ϕ()-(,_2). As ϕ^(_2) =ϕ(_1)- (_1,_2), by definition of the -transform, the function g__2 attains its maximum at _1. In particular, as ∂_h_1ϕ(_1) exists by assumption, we have ∂_h ϕ(_1)= ∂_h_1(_1,_2) = 2(h_1-h_2)/|d_1|+|d_2|. This implies that h_2 = h_1 - |d_1|+|d_2|/2∂_h_1ϕ(_1). Using this expression, we find that m(_1,_2) = h_1 - |d_1|∂_h_1ϕ(_1)/2. In particular, m(_1,_2) is uniquely determined by _1. This defines a measurable map _1∈Ω_0 ↦(_1). We extend on Ω by setting (_1)=0 if _1∈Ω\Ω_0. As explained in Step 1, for (_1,_2)∼π^*, the law ν of m(_1,_2) solves the barycenter problem _(, ). Hence, ν = m♯π^*=(𝕀 ,)♯ is a barycenter. Step 3: uniqueness of a transport map. Likewise, we show the existence of a function such that ν' =(𝕀 , )♯ is a barycenter. If we show that there is a unique barycenter, then ν=ν'=ν^bar, and the theorem is proven. We now show uniqueness of the barycenter. Let ν be any measure that solves the barycenter problem. Let π_+ (resp. π_-) be an optimal transport plan for _(,ν) (resp. _(,ν)). By the gluing lemma, there exists ρ∈Π(·, , ) whose joint law of the two first marginals is equal to π_+, and whose joint law of the first and last marginal is equal to π_-. The joint distribution π between the second and last marginal is a transport plan between and . Furthermore, as ν is a barycenter and by definition of , we have _(,) =_(,ν)+_(,ν)= ∫ ((_1,y)+c(_2,y))ρ(y,_1,_2) ≥∫(_1,_2)π(_1,_2), so that π is an optimal transport plan between and , with ν=m♯π. But then, recall that (<ref>) holds for any optimal transport plan π (for the same potential ϕ). Hence, by the same arguments as before, we have ν=()♯ for the map :_1↦ h_1 - d_1∂_h_1ϕ(_1)/2 (defined -almost everywhere). In particular, ν is uniquely determined by and through the potential ϕ. 
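The reduction carried out in Step 1 can be mimicked numerically when the measures are finitely supported (a discrete regime in which the regularity assumption above need not hold, so only the formulas for the pairwise cost and for the weighted midpoint m, not the uniqueness statement, are being illustrated). The following Python sketch, with toy support points and weights of our own choosing, solves the discrete transport problem between the two marginals as a linear program and pushes the optimal plan forward by m to produce a discrete barycenter.

```python
import numpy as np
from scipy.optimize import linprog

# Toy discrete marginals on Omega (hypothetical numbers, for illustration only).
# p_plus lives in the upper half-plane {d > 0}, p_minus in {d < 0}.
Hp, Dp, a = np.array([0.0, 1.0, 2.0]), np.array([1.0, 0.5, 1.0]), np.array([0.5, 0.3, 0.2])
Hm, Dm, b = np.array([-1.0, 0.5]),      np.array([-1.0, -0.5]),    np.array([0.6, 0.4])

# Pairwise cost  (h1 - h2)^2 / (|d1| + |d2|).
C = (Hp[:, None] - Hm[None, :]) ** 2 / (np.abs(Dp)[:, None] + np.abs(Dm)[None, :])

# Discrete optimal transport as a linear program:
# minimize <C, P>  subject to  P 1 = a,  P^T 1 = b,  P >= 0.
n, m_ = len(a), len(b)
A_eq = np.zeros((n + m_, n * m_))
for i in range(n):
    A_eq[i, i * m_:(i + 1) * m_] = 1.0          # row-sum constraints
for j in range(m_):
    A_eq[n + j, j::m_] = 1.0                    # column-sum constraints
res = linprog(C.ravel(), A_eq=A_eq, b_eq=np.concatenate([a, b]), bounds=(0, None))
P = res.x.reshape(n, m_)                        # optimal coupling pi*

# Push the coupling forward by the weighted midpoint m(z1, z2) to get the barycenter.
support, weights = [], []
for i in range(n):
    for j in range(m_):
        if P[i, j] > 1e-10:
            w1, w2 = 1.0 / abs(Dp[i]), 1.0 / abs(Dm[j])
            support.append((Hp[i] * w1 + Hm[j] * w2) / (w1 + w2))
            weights.append(P[i, j])
print("barycenter support:", np.round(support, 3))
print("barycenter weights:", np.round(weights, 3))
```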
Theorem <ref> is the counterpart of Theorem <ref>, established by <cit.> and <cit.> within the awareness framework. Both theorems demonstrate that the optimal fair regression function solves a barycenter problem with optimal transport costs. Remark <ref> further indicates that Theorem <ref> generalizes Theorem 2.3 in <cit.>, as the awareness framework can be considered as a special case of the unawareness framework. However, unlike in the awareness framework, there is no explicit formulation of the optimal fair prediction function in the unawareness framework, as the corresponding barycenter problem involves multi-to-one dimensional transport costs with no explicit solutions. Theorem <ref> reveals that the fair prediction f^*(x) only depends on the bi-dimensional feature (η(x), Δ(x)) of the candidate x. By definition, Δ(x)∝μ_1/μ(x) - μ_2/μ(x). Moreover, we have ℙ(S= 1| X=x) = p_1μ_1/μ(x) and ℙ(S= 2| X=x) = p_2μ_2/μ(x). Thus, Δ(x) ∝ℙ(S= 1| X=x)/p_1 - ℙ(S= 2| X=x)/p_2. In other words, Δ(x) reflects the probability that x belongs to the different groups. Hence, in the unawareness framework, the optimal fair regression function effectively relies on estimates of S to make its prediction. This result provides a theoretical justification for the empirical observations of <cit.>. As noted by these authors, this phenomenon may be undesirable, as it means that the predictions can rely on features not relevant to predict the response Y, simply because they are predictive of the group S. § LINKS BETWEEN CLASSIFICATION AND REGRESSION PROBLEMS We now turn to the study of the relationship between fair regression and fair classification problems within the unawareness framework. When Y∈{0,1}, classical results show that the Bayes classifier _y minimizing the risk _y(g) = y·ℙ[Y = 0, g(X) = 1] + (1-y)·ℙ[Y = 1, g(X) = 0]. is given by _y(x) = 1{(x) ≥ y }, where is the Bayes regression function minimizing _sq. Similarly, recent results by <cit.> demonstrate that in the awareness framework, the optimal fair classifier g_y^* minimizing the risk _y is given by g^*_y(x,s) = 1{f^*(x,s) ≥ y }, where f^* is the optimal fair regression function minimizing _sq. These results can be leveraged to obtain plug-in classifiers ĝ using estimates f̂ of the regression function. Somewhat less explored is the converse relationship: given a family of optimal classifiers (g_y)_y∈ [0,1] for the risks (_y)_y∈ [0,1], one could define a regression function f of the form f(x) = sup{y : g_y(x) = 1}. For example, this formulation yields the Bayes regression function when using Bayes classifiers and the optimal fair regression function when using optimal fair classifiers in the awareness framework. In both examples, this relationship may not be particularly useful since there already exists an explicit characterization of the optimal regression function. However, if this relationship were to hold in the unawareness framework, it would be significantly more valuable. Indeed, Theorem <ref> rephrases the fair regression problem as a barycenter problem with optimal transport costs but does not provide an explicit solution. <cit.> proposed leveraging this relationship to address the problem of fair regression using cost-sensitive classifiers. The authors demonstrate an equivalence between minimizing a discretized version of the risk _sq and minimizing the average of the cost-sensitive risks (_y(g_f,y))_y ∈ for a finite set , where g_f,y is defined as g_f,y(x) = 1{f(x) ≥ y}. 
To obtain the optimal fair regression function for this discretized risk, the authors assume access to an oracle that returns the regression function f such that g_f,y minimizes the average of the risks (_y(g_f,y))_y ∈. We emphasize that minimizing the average of the risks (_y(g_f,y))_y ∈ remains an open and challenging problem. In contrast, to define a regression function of the form f(x) = sup{y : g_y(x) = 1}, one only needs to solve independent cost-sensitive classification problems. Recent results by <cit.> offer an explicit characterization of these classifiers. This raises the intriguing possibility of constructing the optimal fair regression function in the unawareness framework using these fair classifiers. In this section, we demonstrate that such a construction is not always possible. To do so, we begin by providing some reminders on fair classification in the unawareness framework. §.§ Fair classification In this section, we assume that Y∈{0,1} almost surely. We consider the problem of minimizing a family of risk measures _y under the demographic parity constraint. We show that the optimal fair classifier for the risk _y is of the form g_y^κ for some κ∈ℝ, where g_y^κ is given by ∀ x∈, g_y^κ(x)= {η(x) ≥ y + κΔ(x)}. The following proposition extends Proposition 5.3 in <cit.>, and characterizes the optimal fair classifier. Let y∈, and let κ^*∈ verify μ_+η(X) ≥ y + κ^*Δ(X) = μ_-η(X) ≥ y + κ^*Δ(X). Under Assumption <ref>, g_y^κ^* solves the fair classification problem C_yminimize ℛ_y(g) such that 𝔼[g(X) | S = 1] = 𝔼[g(X) | S = 2]. Moreover, all solutions to (C_y) are a.s. equal to g_y^κ^* on ∖{x∈: η(x) = y and Δ(x) = 0}. The optimal classifier is uniquely defined outside of {η(x) = y}. While the set {η(x) = y and Δ(x) ≠ 0} has null measure under Assumption <ref>, the set {η(x) = y and Δ(x) = 0} may have positive measure. On this set, the classifiers {η(x) ≥ y} and {η(x) > y} differ, yet they are both optimal for the risk _y(g). The proof of this proposition is postponed to Appendix <ref>. As discussed in the previous section, Assumption <ref> encompasses as a special case the awareness framework. In this case, the optimal classifier presented in Proposition <ref> reduces to the optimal fair classifier in the awareness framework given by ∀ (x,s)∈×{1,2}, g_y^aware(x,s)= {η(x) ≥ y + κ^*/p_1} if s=1 {η(x) ≥ y - κ^*/p_2} if s=2, as described in <cit.>. In the unawareness framework, the optimal classifier relies on the probability that the observation X belongs to the different groups Δ(X). This behavior is similar to that of the optimal fair regression function, as established in Theorem <ref>. Next, we investigate whether the optimal classifier is envy-free. We say that a classifier g : →{0,1} is envy-free within group if ⊗-a.s., for (X,S) and (X',S') such that S = S' and (X) > (X'), we have g(X) ≥ g(X'). In essence, this property ensures that no candidate who would have been accepted by the Bayes classifier but is rejected after fairness correction envies another candidate from the same group who would have been rejected by the Bayes classifier but is accepted after fairness correction. Note that this property is weaker than order preservation, as a classifier that preserves order is envy-free within groups. Proposition <ref> reveals that in the unawareness framework, the optimal fair classifier is generally not envy free. This behavior contrasts with that of optimal fair classifiers in the awareness framework: indeed, since these classifiers preserve order, they are also envy-free. 
Let y∈. Under Assumption <ref>, if ℙ(S = s| X = x)∈ (0,1) for all s∈, x∈, then one of the following cases hold: * _y(x) = 1 g^κ^*_y(x) = 1 μ-a.s. * g^κ^*_y(x) = 1 _y(x) = 1 μ-a.s. * the classifier g^κ^*_y is not envy-free within group. Assume that <ref>. and <ref>. do not hold. Then, we have κ^* ≠ 0, and we can assume without loss of generality that κ^* > 0. Since <ref>. does not hold, we have that μ(_y(X) = 1 and g^κ^*_y(X) = 0)>0. This implies in turn that μ_+((X) = 1 and g^κ^*_y(X) = 0)>0, since _y and g^κ^*_y coincide on _=, and since by definition when κ^* >0, we have μ_-(_y(X) = 1 and g^κ^*_y(X) = 0) = 0. Similarly, we can show that since <ref>. does not hold, μ_-(_y(X) = 0 and g^κ^*_y(X) = 1) >0. Now, ℙ(S = s| X = x)∈ (0,1) for all s∈, so μ_1 ≫μ_+ and μ_1 ≫μ_-. Therefore, μ_1(_y(X) = 1 and g^κ^*_y(X) = 0) > 0, and μ_1(_y(X) = 0 and g^κ^*_y(X) = 1)> 0. This implies ℙ(_y(X) > _y(X') and g^κ^*_y(X) < g^κ^*_y(X')| S = S' = 1)>0 which concludes the proof. Extending cost-sensitive classification to = ℝ In the following, we consider the more general case where = ℝ. Although the interpretation in terms of optimal classification is no longer applicable, we can still analyze the family of functions g_y^κ defined in Equation (<ref>). The following proposition characterizes the values of the parameter κ^* such that g_y^κ^* satisfies demographic parity. These values partition the feature space equally, see <Ref>. Let y∈. Under Assumption <ref>, the set of numbers κ∈ such that μ_+η(X)≥ y + κΔ(X) = μ_-η(X)≥ y + κΔ(X) is a nonempty closed interval I(y) = [κ^-(y),κ^+(y)]. The function y↦κ^+(y) is upper semicontinuous and the function y↦κ^-(y) is lower semicontinuous. Moreover, it holds that for μ-almost all x, for all y ∈ℝ and all κ, κ' ∈ I(y), g_y^κ(x) = g_y^κ'(x). Introduce the function G:(κ,y)↦μ_+η(X)≥ y + κΔ(X) -μ_-η(X)≥ y + κΔ(X). Under Assumption <ref>, the measures and give zero mass to non-horizontal lines, implying that the function G is continuous. Furthermore, for y∈, the function G(·,y) is nonincreasing (recall that Δ(X)<0 for X∼μ_-). Hence, its zeroes form a closed interval I(y). For κ∈, the set {y∈: κ^-(y)>κ} is equal to the set {y∈: G(κ,y)>0}, which is an open set by continuity of G. This proves that κ^- is lower semicontinuous. We prove similarly that κ^+ is upper semicontinuous. It remains to prove the last statement. Fix y∈. First, we may assume without loss of generality that κ=κ^-(y) and that κ'=κ^+(y). We have μ_+η(X)≥ y + κ^+(y)Δ(X) = μ_+η(X)≥ y + κ^-(y)Δ(X) (and likewise for μ_-). Thus, we have μ_+(g^κ_y(X)≠ g^κ'_y(X)) = μ_+(g^κ_y(X)=1, g^κ'_y(X)=0) = μ_+η(X)-y/Δ(X)∈ [κ,κ']=0. The same equality holds for μ_-. Also, the equality g^κ^+(y)_y(x)=g^κ^-(y)_y(x) holds on _= (as Δ(x)=0 on _=). Hence, for a fixed y, the equality g^κ^+(y)_y(x)=g^κ^-(y)_y(x) holds for μ-almost all x. However, the set of points x (of full measure) where this equality is satisfied depends on y, so that it is not trivial to show that this equality holds simultaneously for all y∈, almost surely. To do so, we show that the set {(h,d)∈Ω: ∃ y∈ℝ, y+κ^-(y)d≤ h ≤ y+κ^+(y)d} has mass 0 under and . For y∈, let C_y={(h,d)∈Ω: y+κ^-(y)d≤ h ≤ y+κ^+(y)d}. We have previously shown that for any given y∈ℝ, g_y^κ^-(y)(x)=g_y^κ^+(y)(x) for μ-almost all x, implying that (C_y)=0. Let C=⋃_y∈ C_y. Let us show that (C)=0. Let C_1 =⋃_y∈ C_y. First, it holds that (C_1)=0. If it were not the case, as the measure is inner regular, there would exist a compact set K⊂ C_1 with (K)>0. 
But then, the compact set K is covered by the family of open sets ( C_y)_y∈. By compactness, there exists a finite cover C_y_1,…, C_y_N covering K. As each C_y_i has zero mass, we obtain a contradiction with the positivity of (K). Furthermore, if (h,d)∈ C\ C_1, then there exists y_0 with either h=y_0+κ^-(y_0)d or h= y_0+κ^+(y_0)d, with also y+κ^-(y)d≤ h ≤ y+κ^+(y)d for all y∈. This implies that C\ C_1 is included in the union of the graphs of the two functions d↦sup_y (y+κ^-(y)d) and d↦inf_y(y+κ^+(y) d). These two functions can easily be seen to be measurable because of the semicontinuity of κ^- and κ^+. Thus, by Assumption <ref>, (C\ C_1)=0. In conclusion, we have proven that (C)=0. We show likewise that (C)=0. §.§ The nestedness assumption Recall that our goal is to determine whether the optimal fair regression function can be expressed as f^*(x) = sup{ y : g_y^κ(y)(x) = 1} for a certain choice κ(y)∈ I(y). In this section, we introduce an assumption regarding the family of classifiers g_y^κ(y) and demonstrate that this assumption is necessary for the relationship to hold. Specifically, we wish to use the decision boundaries of optimal classifiers at different risk levels to define the regression function. For this to be possible, the function y ↦ g_y^κ(y)(x) must be nonincreasing for any choice of x: in other words, the rejection regions {x : g^κ(y)_y(x) < y} must be nested. We formalize this assumption in the following definition. We say that the problem corresponding to (X,Y,S)∼ is nested if there exists a choice of κ(y)∈ I(y) for all y∈ such that Nestedfor μ-almost all x∈, the map y∈↦ g^κ(y)_y(x) is nonincreasing. A straightforward (but key) property implied by nestedness is the fact that the sets ∀ y∈, A(y) = {x∈: η(x)< y + κ(y)·Δ(x)} are “almost” nested, in the sense that there exists a set of full μ-measure such that for all y'≤ y, it holds that A(y')∩⊆ A(y)∩. Assume that Assumption <ref> holds. Then, the problem is nested with choice κ(y)∈ I(y) for all y∈ if and only if for all y<y', μ(g_y^κ(y)(X)=0 and g_y'^κ(y')(X)=1)=0. Furthermore, if the problem is nested, one can always choose κ(y)=κ^+(y) for all y∈ in the definition of g_y^κ(y). The direct implication is clear. For the converse one, assume that the nestedness assumption does not hold. By definition, there exists a measurable set _0 of positive μ-mass such that y↦ g_y^κ(y)(x) is not nonincreasing for all x∈_0. It holds that either μ_+(_0)>0 or that μ_-(_0)>0. Assume without loss of generality that the first condition is satisfied, and let be the set of points x in _0∩_+ that satisfy ∀ y∈, η(x)-y/Δ(x)∉[κ^-(y),κ^+(y)] According to Proposition <ref>, μ_+()=μ_+(_0∩_+)>0. For x ∈, there exists y<y' with g_y^κ(y)(x) = 0 and g_y'^κ(y')(x) = 1. As x satisfies (<ref>), we have that η(x)< y+ κ^-(y)Δ(x) η(x)> y'+κ^+(y')Δ(x). Because the function κ^- is lower semicontinuous, for ỹ close enough to y, we also have η(x)<ỹ+κ^-(ỹ)Δ(x). Likewise, there exists ỹ' close enough to y' with η(x)>ỹ'+κ^+(ỹ')Δ(x). In conclusion, we have shown that ⊂⋃_y,y'∈ y<y'{x: η(x)< y+ κ^-(y)Δ(x) and η(x)> y'+κ^+(y')Δ(x)} In particular, as μ_+()>0, there exists y<y'∈ with μ_+(η(X)< y+ κ^-(y)Δ(X) and η(X)> y'+κ^+(y')Δ(X))>0. According to Proposition <ref> and Assumption <ref>, as the equality η(X)= y'+κ^+(y')Δ(X) happens with zero μ_+-probability, we have μ_+(g_y^κ(y)(X)=0 and g_y'^κ(y')(X)=1)>0, proving the first claim. The second claim follows from the characterization of nestedness that we have just established. Indeed, let y<y'. 
By Proposition <ref>, for μ-almost all x, g_y^κ(y)(x)=g_y^κ^+(y)(x) and g_y'^κ(y')(x)=g_y'^κ^+(y')(x). Thus, if (<ref>) holds for κ(y) and κ(y'), it also holds for κ^+(y) and κ^+(y'). As a warm-up, we begin by showing that the nestedness assumption is always verified in the awareness setting. Assume that S is X-measurable. Then, under Assumption <ref>, the classification problem is nested. We prove Proposition <ref> by contradiction. Assume that the problem is not nested. Using Lemma <ref>, there exist y < y' and a set _0 of positive μ probability such that for all x∈_0, g_y^κ(y)(x) = 0 and g_y'^κ(y')(x) = 1. Using Proposition <ref>, we can also assume without loss of generality that η(x) < y + κ^-(y)Δ(x) and η(x) > y' + κ^+(y')Δ(x) for x∈_0. Letting x∈_0, that we assume without loss of generality is in _+, the previous inequalities become y' + κ^+(y')/p_1 < η(x) < y + κ^-(y)/p_1. In words, the threshold for admission is lower at level y' that at level y. This implies in particular that μ_+(g_y'^κ^+(y')(X) = 1) ≥μ_+(g_y^κ^-(y)(X) = 1). On the other hand, since y<y', it also implies that κ^+(y')<κ^-(y). Therefore, y - κ^-(y)/p_2 < y' - κ^+(y')/p_2, so μ_-(g_y'^κ^+(y')(X) = 1) ≤μ_-(g_y^κ^-(y)(X) = 1). Using μ_+(g_y'^κ^+(y')(X) = 1) = μ_-(g_y'^κ^+(y')(X) = 1) and μ_+(g_y^κ^-(y)(X) = 1) = μ_-(g_y^κ^-(y)(X) = 1), we find that μ_+(g_y'^κ^+(y')(X) = 1) = μ_+(g_y^κ^-(y)(X) = 1). It implies that μ_+(η(X) ∈[y' + κ^+(y')/p_1, y + κ^-(y)/p_1]) = 0. Likewise, μ_-(η(X) ∈[y - κ^-(y)/p_2, y' - κ^+(y')/p_2]) = 0. In particular, for κ = κ^+(y') + p_1(y' - y), we see that y + κ/p_1 = y' + κ^+(y')/p_1< y + κ^-(y)/p_1. Thus, κ < κ^-(y), and y - κ/p_2≥ y - κ^-(y)/p_2. Moreover, κ > κ^+(y'), and y < y', so y - κ/p_2 < y' - κ^+(y')/p_2. This implies that y' - κ^+(y')/p_2 - (y - κ/p_2) = y' - y + 1/p_2(κ^+(y') + p_1(y' - y) - κ^+(y') ) > 0. so y - κ/p_2∈[y - κ^-(y)/p_2, y' - κ^+(y')/p_2]. Thus, μ_+(η(X) ≥ y + κ/p_1) = μ_-(η(X) ≥ y - κ/p_2) and κ∈ I(y). Since κ < κ^-(y), this yields a contradiction. Somewhat surprisingly, although the nestedness assumption may appear intuitive, it is not always verified. In Section <ref>, we present examples where this assumption holds and others where it does not. Before proving in the next section that under the nestedness assumption, the optimal fair classification functions g_y^κ(y) can be recovered by thresholding the optimal fair regression function f^*, we prove the converse: if the problem is not nested, there exists a value of y∈ where the classifier {f^*(x)≥ y} is suboptimal for the fair classification problem (C_y). Assume that = {0,1}, that Assumption <ref> holds and that the classification problem is not nested. Let f^* be the optimal fair regression function in the unawareness framework. Then, there exists y∈ such that the classifier x↦{f^*(x)≥ y} is not the optimal fair classifier for the risk _y. According to Lemma <ref>, there exists y<y' with μ(g_y^κ(y)(X)=0 and g_y'^κ(y')(X)=1)>0. Let _0 be the set corresponding to this event. Let us consider a classifier of the form g_y(x) = {f(x) ≥ y}. On the one hand, if μ(X∈_0 and f(X) < y ) >0, then the probability μ(X∈_0 and f(X) < y' ) is also positive, so g_y'(x) and g_y'^κ(y')(x) disagree on a set of positive probability. Now, Proposition <ref> implies that all optimal classifiers are a.s. equal, so g_y' is sub-optimal. On the other hand, if μ(X∈_0 and f(X) < y ) = 0, then g_y(x)=1 for almost all x∈_0. This implies that g_y(x) and g_y^κ(y)(x) disagree on a set of positive probability, so g_y is sub-optimal. 
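Whether a given problem is nested can also be probed empirically. The following Python sketch is based entirely on synthetic weighted samples of (η(X),Δ(X)) of our own choosing: it locates a value in I(y) by bisection, using the monotonicity in κ of the function G from the proof above, and then checks that the acceptance sets of the resulting classifiers shrink as y increases. It illustrates the definitions; it is not an algorithm proposed in this work.

```python
import numpy as np

def kappa_star(y, Hp, Dp, wp, Hm, Dm, wm, lo=-50.0, hi=50.0, iters=60):
    """Bisection on the nonincreasing map kappa -> G(kappa, y), where
    G(kappa, y) = mu_+(eta >= y + kappa*Delta) - mu_-(eta >= y + kappa*Delta),
    approximated from weighted samples of p_+ (d > 0) and p_- (d < 0)."""
    def G(k):
        return wp[Hp >= y + k * Dp].sum() - wm[Hm >= y + k * Dm].sum()
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if G(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Synthetic samples standing in for (eta(X), Delta(X)) under mu_+ and mu_- (toy choice).
rng = np.random.default_rng(0)
Hp, Dp = rng.normal(0.5, 1.0, 400), rng.uniform(0.2, 1.0, 400)     # upper half-plane
Hm, Dm = rng.normal(-0.5, 1.0, 400), -rng.uniform(0.2, 1.0, 400)   # lower half-plane
wp = np.full(400, 1 / 400.0)
wm = np.full(400, 1 / 400.0)

# Empirical nestedness check: the acceptance sets {g_y^{kappa(y)} = 1} should shrink as y grows.
H = np.concatenate([Hp, Hm]); D = np.concatenate([Dp, Dm])
prev = np.ones(len(H), dtype=bool)
nested = True
for y in np.linspace(-2.0, 2.0, 41):
    k = kappa_star(y, Hp, Dp, wp, Hm, Dm, wm)
    accept = H >= y + k * D
    nested &= not np.any(accept & ~prev)   # no point accepted at y but rejected at a smaller y
    prev = accept
print("empirically nested on this sample:", nested)
```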
§.§ Constructing a regression function using nested classifiers In the previous section, we proved that under mild assumptions, nestedness is a necessary condition for the relationship g^*_y(x) = {f^*(x) ≥ y} between the optimal fair classification and regression functions to hold. We now conclude by showing that nestedness is also a sufficient condition for this relationship to hold. We begin by defining the function f^* : → as ∀ x ∈, f^*(x) = sup{y : g_y^κ(y)(x) = 1} where g_y^κ(y) is given by Equation (<ref>). We assume without loss of generality (using Lemma <ref>) that κ(y)=κ^+(y) for all y∈. Remark that f^* is then almost measurable because of the upper semicontinuity of κ^+, in the sense that its restriction to some set of full measure is measurable (here given by the set of full measure where y↦ g_y^κ(y)(x) is nonincreasing). Assume that the classification problem is nested and that Assumption <ref> is satisfied. Then, the regression function f^* is optimal for the fair regression problem (<ref>). Before proving Theorem <ref>, we state the following corollary. Assume that the classification problem is nested, that Assumption <ref> is satisfied, and that = {0,1}. Then, the classification function g_y : y ↦{f^*(x) ≥ y} is optimal for the fair classification problem problem with cost _y, where f^* is the solution to the fair regression problem (<ref>). The proof of Corollary <ref> follows immediately by noticing that by Theorem <ref>, f^* is uniquely defined, and that the nestedness assumption and Theorem <ref> imply that g_y(x) = g_y^κ(y)(x) a.s. The rest of the section is devoted to proving Theorem <ref>. To do so, we begin by proving that f^* is a fair regression function, and by defining F, the c.d.f. of the predictions under μ_+ and μ_-. Assume that the problem is nested and that Assumption <ref> is satisfied. Let F:→ be defined by ∀ y∈, F(y) = μ_+η(X)≤ y+κ(y)Δ(X)= μ_-η(X)≤ y+κ(y)Δ(X). Then, there exists a probability measure ν^* with continuous c.d.f. F and finite second moment such that f^*♯μ_+ = f^*♯μ_-= ν^*. In particular, f^* is a fair regression function. The “almost” nestedness of the sets (A(y))_y implies that F is nondecreasing. Let us show that the function F is the c.d.f. of some continuous random variable, i.e., that it goes to 0 in -∞, that it goes to 1 in +∞, and that it is continuous. First, recall that Δ(X)>0 for X∼μ_+ and that Δ(X)<0 for X∼μ_-. Thus, if κ(y)≤ 0, then F(y)≤μ_+η(X)-y ≤ 0 and if κ(y)≥ 0, then F(y)≤μ_-η(X)-y ≤ 0. Hence, F(y)≤max{μ_+η(X)-y ≤ 0, μ_-η(X)-y≤ 0}, and F converges to 0 in -∞. Similarly, F(y)→ 1 when y converges to +∞. Next, let us show that F is continuous. Let y_0, y_1∈ be such that y_0 < y_1. Now, if κ(y_0)≥κ(y_1), then F(y_1) - F(y_0) = μ_+η(X)≤ y_1+κ(y_1)Δ(X) - μ_+η(X)≤ y_0+κ(y_0)Δ(X) ≤μ_+η(X)≤ y_1+κ(y_0)Δ(X) - μ_+η(X)≤ y_0+κ(y_0)Δ(X) Similarly, if κ(y_0)≤κ(y_1), then, (recalling that Δ(X)<0 for X∼μ_-) F(y_1) - F(y_0) =μ_-η(X)≤ y_1+κ(y_1) Δ(X) - μ_-η(X)≤ y_0+κ(y_0) Δ(X) ≤μ_-η(X)≤ y_1+κ(y_0) Δ(X) - μ_-η(X)≤ y_0+κ(y_0) Δ(X) . Thus, F(y_1) - F(y_0) ≤μ_+η(X) - κ(y_0)Δ(X) ∈[y_0, y_1] + μ_-η(X) - κ(y_0)Δ(X) ∈[y_0, y_1] We have shown that F is non-decreasing, so F(y_1) - F(y_0) ≥ 0. Under Assumption <ref>, μ_+ and μ_- give zero mass to the sets {η(X) = y_0 + κ(y_0)Δ(X)}, so F(y_1) - F(y_0) → 0 as y_1 → y_0^+. This proves that F is right-continuous. To show that F is left-continuous, we note that if κ(y_0)≥κ(y_1), then F(y_1) - F(y_0) ≤μ_+η(X)≤ y_1+κ(y_1)Δ(X) - μ_+η(X)≤ y_0+κ(y_1)Δ(X). 
Similarly, if κ(y_0)≤κ(y_1), then, F(y_1) - F(y_0) ≤μ_-η(X)≤ y_1+κ(y_1) Δ(X) - μ_-η(X)≤ y_0+κ(y_1) Δ(X) . Thus, F(y_1) - F(y_0) ≤μ_+η(X) - κ(y_1)Δ(X) ∈[y_0, y_1] + μ_-η(X) - κ(y_1)Δ(X) ∈[y_0, y_1] and F is also left-continuous. Then, let us show that ν^* has finite second moment. Let Z∼ν^*. We have [Y^2] = ∫_0^+∞(Z^2>t) t= ∫_0^+∞ (F(√(t))+(1-F(-√(t)))) t We use (<ref>) to obtain that for y∈, F(y)≤max(μ_+(η(X)≤ y), μ_-(η(X)≤ y)). But, as [Y^2]<+∞, the random variable η(X) has a finite second moment under the law of either μ_+ or μ_-. In particular, ∫_0^+∞ F(√(t)) t is finite. Similarly, ∫_0^+∞ (1-F(-√(t))) t is finite. Finally, we prove the statement f^*♯μ_+ = ν^*. Indeed, for all y_0 ∈, we have using that upper semicontinuity of κ(y)=κ^+(y) that f^*♯μ_+((-∞,y_0]) = μ_+ sup{y : g_y^κ(y)(X) = 1}≤ y_0 = μ_+ g_y_0^κ(y_0)(X) = 0=μ_+(η(X)<y_0+κ(y_0)Δ(X))=F(y_0). where the second line follows from the nestedness assumption and the fact that the line {η(X)=y_0+κ(y_0)Δ(X)} has zero mass. We show similarly that f^*♯μ_- = ν^*, thus concluding the proof of Lemma <ref>. The function f^* depends only on x through the pair (η(x),Δ(x)). Let ^*:Ω→ be defined by the relation f^*(x) = ^*(η(x),Δ(x)) for x∈_±. We show that ^* defines an optimal transport map between and ν^* with respect to the cost . Assume that the problem is nested. Then, ^* is an optimal transport map between and ν^* for the cost , with Kantorovich potential between ν^* and given by v:y↦ -2∫_0^y κ(t) t. The same holds for , with Kantorovich potential given by -v. The proof of Lemma <ref> relies on the following technical lemma, whose proof is postponned to Appendix <ref>. The function y↦κ(y) satisfies |κ(y)|≤ C(1+|y|) for some C>0. To prove Lemma <ref>, we begin by remarking that the potential v is in L^1(ν^*) because of Lemma <ref> and Lemma <ref>. Let us now show that for almost all x∈_+, v^c(x) sup_y∈ (v(y)-c(x,y)) = v(f^*(x))- c(x,f^*(x)). Let x∈_+ be a point such that y↦ g_y^κ(y)(x) is nonincreasing (almost all points satisfy this condition by nestedness). We remark that ∂_y c(x,y) = 2(y-η(x))/Δ(x). Hence, c(x,y)-c(x,f^*(x)) = ∫_f^*(x)^y 2(t-η(x))/Δ(x) t = -2∫_f^*(x)^y η(x)-t/Δ(x)-κ(t) t -2∫_f^*(x)^y κ(t) t = -2∫_f^*(x)^y η(x)-t/Δ(x)-κ(t) t + v(y)-v(f^*(x)). Assume that y≥ f^*(x). For t∈ (f^*(x),y], by definition of f^* and by nestedness, g_t^κ(t)(x)=0. Thus, η(x)-t/Δ(x)-κ(t)< 0. This implies that - 2∫_f(x)^y η(x)-t/Δ(x)-κ(t) t ≥ 0. We obtain that c(x,y)-c(x,f^*(x)) ≥ v(y)-v(f^*(x)). The same result holds when y<f^*(x). Indeed, in that case, for all t∈ [y,f^*(x)), η(x)-t/Δ(x)-κ(t)≥ 0. Hence, - 2∫_f^*(x)^y η(x)-t/Δ(x)-κ(t) t = 2∫_y^f^*(x)η(x)-t/Δ(x)-κ(t) t ≥ 0. This proves (<ref>). This relation implies that the -transform of the function v∈ L^1(ν^*) is a function w:Ω→ satisfying for μ_+-almost all x∈_+ (with =(η(x),Δ(x))) w() = v(f^*(x))- c(x,f^*(x)) = v(^*())-(,^*()). As v∈ L^1(ν^*), Kantorovich duality implies _(,ν^*) ≥∫ v(y)ν^*(y) -∫ w() ()= ∫ v(^*())()-∫ w() () = ∫(,^*()) (), see <Ref>. This shows that ^* is the optimal transport map between and ν^*. The same holds for , where we use the potential -v instead of v: precisely, we can show that we have for almost all x∈_- (-v)^c(x) sup_y∈ (-v(y)-c(x,y)) = -v(f^*(x))- c(x,f^*(x)). This concludes the proof of Lemma <ref>. Lemma <ref> shows that ^* defines an optimal transport map from to ν^*, and from to ν^*. To conclude the proof of Theorem <ref>, it remains to show that ν^* is solution to the barycenter problem described in Lemma <ref>. 
The distribution ν^* is solution to the barycenter problem described in Lemma <ref>. Let ϕ:_1∈Ω↦(_1,^*(_1))-v(^*(_1)) and let ψ:_2∈Ω↦(_2,^*(_2)) + v(^*(_2)). Using (<ref>) and (<ref>), we see that for all y∈, for -almost all _1 and -almost all _2, it holds that ϕ(_1)+ψ(_2) = (_1,^*(_1))-v(^*(_1))+ (_2,^*(_2)) + v(^*(_2)) ≤(_1,y)-v(y) + (_2,y)+v(y) = (_1,y)+(_2,y). By taking the value y that minimizes this last term, we obtain that ϕ(_1)+ψ(_2)≤(_1,_2), where is the cost function defined in (<ref>). In particular, -ϕ(_1)≥ψ^(_1). Furthermore, remark that -v(^*(_1))≤ϕ(_1)≤ c(_1,0). Thus, as v∈ L^1(ν^*) and ∫h^2/d(h,d)<+∞, it holds that ϕ∈ L^1(). Likewise, ψ∈ L^1(). By Kantorovich duality, it holds that _(,) ≥∫ψ(_2)(_2)-∫ψ^(_1) (_1) ≥∫ψ(_2)(_2)+∫ϕ(_1) (_1) = ∫(_1,^*(_1))(_1)+ ∫(_2,^*(_2))(_2) - ∫ v(^*(_1))(_1) + ∫ v(^*(_2))(_2) = _(,ν^*) + _(,ν^*) + ∫ v(y)ν^*(y)-∫ v(y)ν^*(y) = _(,ν^*) + _(,ν^*) ≥_(,). This proves that ν^* is the solution to the barycenter problem, and that f^* is an optimal regression function. § BUILDING EXAMPLES AND COUNTEREXAMPLES In the previous section, we proved that under mild assumptions, the relationship g^*_y(x) = {f^*(x) ≥ y} only holds under the nestedness assumption. In this section, we now explain how to build large classes of triplets (X,Y,S)∈××{1,2} whose distributions either satisfy or do not satisfy this criterion. The starting point of our approach consisted in associating to each distribution a pair of distributions (,)=((),()) on Ω, where we recall that () is the distribution of (η(X), Δ(X) ) when X ∼μ_±. Then, both the optimal fair regression function and the nestedness criterion are best understood in terms of the pair ((),()). However, given a pair of measure (,) on Ω, it is not a priori clear whether there exists a triplet (X,Y,S)∼ with (,)=((),()). We give a definitive answer to this problem by providing a list of necessary and sufficient conditions for the existence of such a probability distribution . We then use this theoretical result to build probability distributions for which the associated fair classification problem is either nested or not nested. Let be the distribution of a triplet (X,Y,S)∈××{1,2}, with [Y^2]<+∞. Let (,)=((),()) be the associated pair of measures on Ω. Then, it always holds that ∫_Ω |d|^-1(h,d) = ∫_1/Δ(x)μ_+(x) = ∫_μ/μ_+(x) μ_+(x) =μ(_+), while ∫_Ω| d|^-1(h,d)=μ(_-). In particular, 0< ∫_Ω |d|^-1(h,d)+∫_Ω |d|^-1(h,d) ≤ 1. Also, note that μ=p_1μ_1+p_2μ_2, so that Δ(x) = μ_+/μ(x)≤1/p_1m when x ∈_+, whereas Δ(x)≥- 1/p_2m when x∈_- (recall that m is the mass of the measure (μ_1-μ_2)_+). In particular, the supports of and are located in an horizontal strip of the form {(h,d): -M≤ d≤ M} for some M>0. The next proposition states that these two conditions are actually sufficient for the existence of a probability measure with (,)=((),()). Assume that is an uncountable standard Borel space (e.g., =[0,1]). Then, the set of pairs of measures ((),()) that can be obtained from a distribution of a triplet (X,Y,S)∈××{1,2} with [Y^2]<∞ is exactly equal to the set of pairs (,) supported on bounded horizontal strips, satisfying <Ref>, with supported on {d>0} and supported on {d<0}. This proposition allows us to easily build examples where either nestedness or nonnestedness is satisfied: one does not need to build from scratch a joint distribution on ××{1,2}, but can simply define a pair of measures (,) on Ω. 
As long as this pair satisfies the conditions given in Proposition <ref>, the existence of a probability distribution such that (,)=((),()) is ensured. We have already established that the pairs of measures ((),()) satisfy the conditions stated in Proposition <ref>. Reciprocally, consider a pair (,) satisfying <Ref>, supported on bounded horizontal strips, with supported on {d>0} and supported on {d<0}. Let a_± = ∫_Ω |d|^-1(h,d). Due to the Borel isomorphism theorem, is Borel isomorphic to ^2, so we may assume without loss of generality that =^2. Let _+ = {(h,d)∈^2: d>0}, _-={(h,d)∈^2: d<0} and _= = {(h,0): h∈}. Let μ_==δ_0. We let μ_+ = and μ_-=. Let μ(h,d) = 1/|d|μ_+(h,d) + 1/|d|μ_-(h,d) + (1-a_+-a_-)μ_=(h,d). Remark that μ is a probability measure: ∫μ = ∫1/|d|μ_+(h,d) + ∫1/|d|μ_-(h,d) + (1-a_+-a_-)∫μ_= =1. Consider m small enough so that the inequality m|d|/2≤ 1 holds on the support of μ (this is possible because the d coordinate is bounded in the support of and ). We define μ_1(h,d) = (1+m/2d) μ(h,d) and μ_2(h,d) = (1-m/2d) μ(h,d) Note that μ = 1/2μ_1+1/2μ_2. Also, μ_1 and μ_2 are probability measures, as ∫ d μ = ∫d/|d|μ_+(h,d) + ∫d/|d|μ_-(h,d) = 1-1=0. Let η(h,d)=h. We define the triplet (X,Y,S) by letting S be uniform on {1,2}. If S=1, we draw X∼μ_1 and let Y=η(X). If S=2, we draw X∼μ_2 and let Y=η(X). Let be the distribution of (X,Y,S). One can easily check that ((),())=(,), as desired. To build examples of nested and non-nested problems, we consider probability measures and supported on small horizontal segments: = 1/K∑_i=1^K ν_±^(i) where ν_±^(i) is the uniform measure on [a^(i)_±,a^(i)_±+1]×{d_±^(i)}. [A nested classification problem] Take K=1, d_+^(1)=d_-^(1)=1 and a^(1)_+=0, a^(1)_-=-1. Let be the probability associated with the pair (,) defined for this choice of parameters. Then, it holds that 1/2 ∈ I(y) for all y∈. By choosing κ(y)=1/2 for all y∈, we see that the classification problem associated with is nested. See also Figure <ref>. [A non-nested classification problem] Take K=2. Let d_+^(1)=d_-^(1)=1 and a_+^(1)=a_-^(1)=0. Let d_+^(2)=d_-^(2)=1/2, and a_+^(2)=-1, a_-^(2)=-6. Then, for y=0, I(y)={0}, so the support of ν_+^(2) is to the left of the classification threshold for y=0. But for y=-3, we have I(y)={4 } and the support of ν_+^(2) is to the right of the classification threshold. Hence, the classification problem is non-nested. See also Figure <ref>. § CONCLUSION AND FUTURE WORK This work presents the first theoretical characterization of the optimal fair regression function as the solution to a barycenter problem with an optimal transport cost. Our results also demonstrate that, under the nestedness assumption, the optimal fair regression function can be represented by the family of classifiers g_y^κ(y). Although both approaches—whether based on optimal transport or cost-sensitive classifiers—depend on the underlying distribution ℙ which is generally unknown, they pave the way for developing new algorithms that estimate these unknown quantities from observed data. Designing such estimators, along with bounding their excess risk and potential unfairness, represents a critical step toward the development of fair algorithms. While this work provides an initial characterization of the optimal fair regression function in the unawareness framework, it also has notable limitations. For instance, our results are currently limited to cases where the sensitive attribute is binary and apply only to univariate regression. 
Addressing these limitations and extending our findings to more general cases would be a valuable direction for future research. § ACKNOWLEDGEMENTS The authors gratefully acknowledge valuable and insightful discussions with Evgenii Chzhen and Nicolas Schreuder. S. G. gratefully acknowledges funding from the Fondation Mathématique Jacques Hadamard and from the ANR TopAI chair (ANR–19–CHIA–0001). alpha § ADDITIONAL PROOFS §.§ Proof of Lemma <ref> Let f:→∪{+∞} be a lower semicontinuous convex function. The domain of such a function is an interval (f). Its right derivative f'_+ is defined and finite everywhere on (f), except on the right endpoint of the interval (should the right endpoint be included in (f)) where it is equal to +∞. Such a function is upper semicontinuous, with the representation: ∀ h∈(f), f'_+(h) = inf_u>hf(u)-f(h)/u-h, where the infimum can be restricted to a countable dense collection of values u if needed. Likewise, the right derivative f'_- can be defined on (f), and is lower semicontinuous. Note also that the oscillation function (f)=f'_+-f'_-∈ [0,+∞] can be defined on (f), and is upper semicontinuous. Indeed, only one of f'_+ and f'_- can be infinite on (f) (and only at one of the endpoints of the domain), so that the difference is well defined. Recall that a function ϕ is -convex if ∀ (h,d)∈Ω, ϕ(h,d) = sup_(h',d')∈Ωϕ^(h',d') - (h-h')^2/|d|+|d'|. The function ϕ is lower semicontinuous as a supremum of continuous functions. Furthermore, for any d≠ 0, the function ϕ_d: h↦ϕ(h,d)+ h^2/|d| =sup_(h',d')∈Ωϕ^(h',d') + h^2/|d|- (h-h')^2/|d|+|d'| is convex as a supremum of lower semicontinuous convex functions. Let G:(ϕ)→ [0,+∞] be defined as G(h,d)=(ϕ_d)(h) for (h,d)∈(ϕ). Let L>0, and consider the set Σ_d,L defined as the set of points h∈(ϕ_d) such that G(h,d)≥ L^-1, (ϕ_d)'_+(h)≥ -L and (ϕ_d)'_-(h)≤ L. As the left and right derivatives of ϕ_d are nondecreasing, the set Σ_d,L is finite and its cardinality is bounded by a constant depending only on L. Let Σ_L = ⋃_d≠ 0Σ_d,L and Σ=⋃_L∈Σ_L. The set of points ∈(ϕ) such that ∂_hϕ() does not exist is equal to Σ. Let us show that for any integer L, the set Σ_L is included in a countable union of graphs of measurable functions. To do so, we use the following general result, see <cit.>. A correspondence Φ from a measurable set S to a topological space X assigns to each s∈ S a subset Φ(s) of X. We say that the correspondence is (weakly) measurable if for each open subset U⊂ X, the set Φ^ℓ(U)={s∈ S: Φ(s)∩ U≠∅} is measurable. If X is a Polish space and Φ is a measurable correspondence with non-empty closed values between S and X, then there exists a sequence (f_n)_n of measurable functions S→ X such that for every s∈ S, Φ(s)={f_1(s),f_2(s),⋯}. Let Φ be the correspondence that assigns to each d≠ 0 the subset Σ_d,L∪{0} of . As each set Σ_d,L is finite, this correspondence takes non-empty closed values. If we show that this correspondence is measurable, then Castaing theorem asserts the existence of a sequence of measurable functions (f_n)_n such that for every d≠ 0, Σ_d,L∪{0} = {f_1(d),f_2(d),⋯}. For each d, the set Σ_d,L is finite, so that Σ_d,L∪{0} = {f_1(d),f_2(d),⋯}, implying that Σ_L is included in a countable union of graphs of measurable functions. It remains to show the measurability of Φ. The function G is measurable. The representation (<ref>) implies that (h,d)↦ (ϕ_d)'_+(h) is given by a countable infimum of measurable functions, and is therefore measurable. Likewise, (h,d)↦ (ϕ_d)'_-(h) is measurable, so that G is also measurable. 
Let U⊂ be an open set. If 0∈ U, then Φ^ℓ(U)=\{0} is measurable. If 0∉U, we have Φ^ℓ(U) ={d≠ 0: ∃ h∈ [-L,L]∩ U, G(h,d)≥ L^-1, (ϕ_d)'_+(h)≥ -L, (ϕ_d)'_-(h)≤ L }. This set is the projection on the d-axis of the measurable set B={(h,d)∈Ω: h∈ [-L,L]∩ U, G(h,d)≥ L^-1, (ϕ_d)'_+(h)≥ -L, (ϕ_d)'_-(h)≤ L } Furthermore, for each d, the section {h∈: (h,d)∈ B}=Σ_d,L is compact. By <cit.>, this implies that Φ^ℓ(U) is measurable, concluding the proof of Lemma <ref>. §.§ Proof of Proposition <ref> Classical manipulations show that the risk _y(g) of a classifier g can be expressed as ℛ_y(g) = y𝔼[(1-Y)g(X)] + (1-y) 𝔼[Y(1-g(X))] = (1-y)𝔼[Y] + 𝔼[g(X)(y-η(X))]. Using the definition of μ_+, μ_- and Δ given in Section <ref>, we find that 𝔼[g(X)(y-η(X))] = ∫_𝒳_+ g(x)(y-η(x))μ/μ_+(x) μ_+(x) + ∫_𝒳_- g(x)(y-η(x))μ/μ_-(x) μ_-(x) + ∫_𝒳_= g(x)(y-η(x)) μ(x) = ∫_𝒳_+ g(x)y-η(x)/|Δ(x)|μ_+(x) + ∫_𝒳_- g(x)y-η(x)/|Δ(x)|μ_-(x) + ∫_𝒳_= g(x)(y-η(x)) μ(x). Moreover, Lemma <ref> implies that the demographic parity constraint is equivalent to the constraint 𝔼_X∼μ_+[g(X)] = 𝔼_X∼μ_-[g(X)]. Using the decomposition g = (g_+, g_-, g_=), we see that the fair classification problem can be rephrased as follows C_y'minimize 𝔼_μ_+[g_+(X)y-η(X)/Δ(X)] -𝔼_μ_-[g_-(X)y-η(X)/Δ(X)] + 𝔼_μ[1__= (X)g_+(X)(y-η(X))] such that 𝔼_μ_+[g_+(X)] = 𝔼_μ_-[g_-(X)]. The following lemma characterizes the solutions to the problem (C_y'). Under Assumption <ref>, for any optimal classifier g, there exist κ^+, κ^- such that g= (g^κ^+, g^κ^-, g_=), with g_=(x) = 1{η(x) > y} or g_=(x) = 1{η(x) ≥ y}, and g^κ(x) = 1{η(x) ≥ y+ κΔ(x)}. To conclude the proof of Proposition <ref>, it remains to prove that all optimal classifier are a.s. equal when Δ(X)≠ 0, and that the optimal classifier can be chosen as g^* = (g^κ^*, g^κ^*, g_=) for some κ^*. Denote by F_+ the c.d.f. of the random variable Z_+ = η(X)-y/Δ(X) when X∼μ_+ and by F_- the c.d.f. of Z_-= y-η(X)/Δ(X) when X∼μ_-. Let _+ (resp. _-) be the associated quantile function. To verify the demographic parity constraint, the classifier (g^κ^+, g^κ^-, g_=) must be such that F_+(κ^+) = F_-(-κ^-) (recall that Δ(X)<0 when X∼μ_-, so that g^κ^-(X)=1 if and only if Z_- ≥ -κ^-). Denoting β = F_+(κ^+)= F_-(-κ^-) and using the definition of the quantile function, we see that the law of g^κ^±(X), where X ∼μ_± is equal to the law of {U≥β} = {_±(U)≥±κ^±}, where U is a uniform random variable on [0,1]. Then, minimizing the risk of the classifier is equivalent to maximizing 𝔼[(_+(U) + _-(U))1{U≥β}] Since F_+ and F_- are continuous, u →_+(u)+_-(u) is strictly increasing and left-continuous. Straightforward computations show that the expression in (<ref>) has a unique maximum, which is attained for β^* = max{β : _+(β)+_-(β) ≤ 0 }. Hence, it holds that F_+(κ^+)=F_-(-κ^-)=β^*. Let g^*_1 = (g^κ^+_1, g^κ^-_1, g_=) and g^*_2 = (g^κ^+_2, g^κ^-_2, g_=) be two optimal classifiers. For Δ(X)> 0, they take different values only if Z_+ ∈ [κ^+_1, κ^+_2]. As F_+(κ^+_1)=F_+(κ^+_2)=β^*, this happens with zero probability. Likewise, the two classifiers are a.s. equal when Δ(X)<0. It remains to show that we can pick κ^+=κ^-. If _++_- is continuous at β^*, the proof is complete: in this case, _+(β^*)+_-(β^*)=0, and the choice κ^* = _+(β^*) = - _-(β^*) satisfies F_+(κ^*)=F_-(-κ^*)=β^*. Otherwise, _+(β^*)+_-(β^*)<0. Defining q_+ = lim inf_β→β^*_+_+(β) and q_- = lim inf_β→β^*_+_-(β), we have q_+ + q_->0. Because q_+>-q_- and _+(β^*)<-_-(β^*), there exists κ^*∈ [_+(β^*), q_+] ∩ [-q_-, - _-(β^*)]. By construction, F_+(κ^*)=F_-(-κ^*)=β^*, concluding the proof. 
§.§ Proof of Lemma <ref> Let g^* be a solution to the problem (C_y'), and let g_+^*, g_-^* and g_=^* be the restrictions of g^* to _+, _= and _=, so that g^* = (g_+^*, g_-^*, g_=^*). Straightforward computations show that we necessarily have g_=^*(X) = 1{η(X) ≥ y}1__=(X) or g_=^*(X) = 1{η(X) > y}1__=(X). Now, let us assume (without loss of generality) that g_+^* is not of the form g^κ^+. More precisely, assume that for κ^+ such that 𝔼_μ_+[g^κ^+(X)] = 𝔼_μ_+[g^*_+(X)], we have g^κ^+(X)≠ g^*_+(X) with positive μ_+-probability. Then, the classifier (g^κ^+, g^*_-, g^*_=) verifies the demographic parity constraint. Moreover, we have 𝔼_μ_+[g^κ^+(X)y-η(X)/Δ(X)] - 𝔼_μ_+[g_+^*(X)y-η(X)/Δ(X)] = 𝔼_μ_+[y-η(X)/Δ(X) g^κ^+(X)(1-g^*_+(X)) - y-η(X)/Δ(X) (1-g^κ^+(X))g^*_+(X)] = 𝔼_μ_+[(κ^+- η(X)-y/Δ(X)) g^κ^+(X)(1-g^*_+(X))] - κ^+𝔼_μ_+[g^κ^+(X)(1-g^*_+(X))] - 𝔼_μ_+[(κ^+- η(X)-y/Δ(X)) (1-g^κ^+(X))g^*_+(X)] + κ^+𝔼_μ_+[(1-g^κ^+(X))g^*_+(X)]. Since 𝔼_μ_+[g^κ^+(X)] = 𝔼_μ_+[g^*_+(X)], we obtain that 𝔼_μ_+[g^κ^+(X)(1-g^*_+(X))] = 𝔼_μ_+[(1-g^κ^+(X))g^*_+(X)]. Then, the definition of g^κ^+ implies 𝔼_μ_+[g^κ^+(X)y-η(X)/Δ(X)] - 𝔼_μ_+[g_+^*(X)y-η(X)/Δ(X)] =- 𝔼_μ_+[(κ^+- η(X)-y/Δ(X))_-(1-g^*_+(X))] - 𝔼_μ_+[(κ^+- η(X)-y/Δ(X))_+g^*_+(X)] < 0. This implies that _y((g^κ^+, g_-^*, g_=^*)) < _y((g_+^*, g_-^*, g_=^*)), which is absurd. Using a similar argument for g_-^*, we arrive to the conclusion that any optimal classifier is of the form (g^κ^+, g^κ^-, g_=^*) with g_=^*(X) = 1{η(X) ≥ y}1__=(X) or g_=^*(X) = 1{η(X) > y}1__=(X). §.§ Proof of Lemma <ref> Let us first show that κ is locally bounded. Let [y_0,y_1] be a bounded interval. Then, if κ(y)≥ M for some y∈ [y_0,y_1], F(y) = μ_+η(X)≤ y+κ(y)Δ(X)≥μ_+η(X)≤ y_0+MΔ(X) and F(y) = μ_-η(X)≤ y+κ(y)Δ(X)≤μ_-η(X)≤ y_1+MΔ(X). However, for M large enough, μ_-η(X)≤ y_1+MΔ(X) < μ_+η(X)≤ y_0+MΔ(X), and therefore it holds that κ(y)≤ M for all y∈ [y_0,y_1]. Likewise, we show that for M large enough, κ(y)≥ -M for all y∈ [y_0,y_1]. Hence, κ is locally bounded. We refine this argument to obtain a control on κ for large values of y. Let L>1 and let y>1 be such that κ(y)≥ L y. Then, because of the definition of κ(y) and as Δ(X)<0 for X∼μ_-, μ_+(η(X)≥ y) ≥μ_+(η(X)≥ y+ κ(y)Δ(X)) = μ_-(η(X)≥ y+ κ(y)Δ(X)) ≥μ_-(η(X)≥ y+ LyΔ(X)). The complementary of the region {(h,d)∈Ω: d<0, h≥ y+Lyd} in the lower half-plane {(h,d)∈Ω: d<0} is contained in the set A_L given by the union of the horizontal strip {d≥ -1/L} with the region {(h,d)∈Ω: d<0, h≤ 1+Ld}. For L large enough, (A_L)<1/2. Then it holds that μ_+(η(X)≥ y)≥ 1/2. For y large enough, this is not possible. Thus, we have shown that there exist L>0 and C>0 such that for y>C, we have κ(y)≤ Ly. Likewise, we show that there exist constants L,C>0 such that for |y|>C, |κ(y)|≤ L|y|. As κ is also locally bounded, the conclusion follows.

http://arxiv.org/abs/2409.02516v1
20240904082551
The emergence of nonlinear Jeans-type instabilities for quasilinear wave equations
[ "Chao Liu" ]
math.AP
[ "math.AP" ]
http://arxiv.org/abs/2409.03074v1
20240904205845
Three-body model of $^{6}$He with non-local halo effective field theory potentials
[ "E. C. Pinilla", "W. Leidemann", "G. Orlandini", "P. Descouvemont" ]
nucl-th
[ "nucl-th" ]
Universidad Nacional de Colombia, Sede Bogotá, Facultad de Ciencias, Departamento de Física, Grupo de Física Nuclear, Carrera 45 No 26-85, Edificio Uriel Gutiérrez, Bogotá D.C. C.P. 1101, Colombia. Department of Physics, University of Trento, V. Sommarive 14, I-38123 Trento, Italy INFN-TIFPA, V. Sommarive 14, I-38123 Trento, Italy Department of Physics, University of Trento, V. Sommarive 14, I-38123 Trento, Italy INFN-TIFPA, V. Sommarive 14, I-38123 Trento, Italy Département de Physique, C.P. 229, Université libre de Bruxelles (ULB), B 1050 Brussels, Belgium § ABSTRACT We study the ^6He Borromean nucleus in coordinate representation within a three-body model with two-body potentials derived from cluster effective field theory (EFT). These potentials are originally developed in momentum space and Fourier transformed to provide non-local potentials in configuration space. We use hyperspherical coordinates in combination with the Lagrange-mesh technique to compute the ground state energy, root mean square radius and the E1 strength distribution of ^6He. We also introduce a three-body interaction to eliminate dependencies on the cutoff parameter of the two-body potentials on the ground state energy. The E1 strength distribution exhibits a low lying resonance as expected. However it is strongly influenced by the choice of the three-body EFT interaction. Three-body model of ^6He with non-local halo effective field theory potentials P. Descouvemont September 9, 2024 ============================================================================== § INTRODUCTION Modern nuclear theory is paying a lot of attention to develop state of the art models as precise as possible to study nuclei. In particular, the derivation of nucleus-nucleus interactions from ab initio theories that allow the prediction rather than the phenomenological explanation of nuclear processes is of current interest. Effective field theory (EFT) is a powerful framework that takes advantage of the separation of scales in a physical system by integrating out its irrelevant degrees of freedom <cit.>. In nuclear physics, EFTs have become very popular to study few nucleon configurations (see for instance Refs. <cit.>). An interesting application of the EFT framework is the study of halo nuclei. Those nuclei can be seen as made up of a core plus one or more nucleons that are weakly bound to the core <cit.>. Thus, halo nuclei possess a separation of scales: a low momentum scale associated with the weakly bound character of the halo, and a high-momentum scale that corresponds to the binding energy of the core. A typical example of halo nuclei is ^6He, which can be described as an alpha core plus two weakly bound neutrons forming the halo. This nucleus is called Borromean, since none of the two-body pairs α-n or n-n form a bound state. Three-body models with phenomenological local potentials have been widely applied to study the structure and dynamics of this nucleus <cit.>. In Ref. <cit.>, a cluster EFT was used to find the ground state energy of the nucleus ^6He. This is a leading order (LO) three-body halo EFT and assumes that the two-body interactions are dominated by the S_0 n-n and P_3/2 α-n partial waves. Other partial waves are regarded as higher-order corrections in their EFT. A three-body EFT interaction was derived at LO to accurately reproduce the ^6He ground state energy and to eliminate cutoff dependencies. Once the two-body potentials are fixed, the three-body problem is solved using momentum-space Faddeev equations <cit.>. 
Cluster EFT potentials are derived in a natural way in momentum space. However, working in this space leads to convergence problems when the Coulomb interaction is present. Thus, a treatment in configuration representation is more convenient, since the Coulomb interaction can be managed in a simpler way. In coordinate space, those potentials become non-local and can be obtained by performing a double Fourier transform. In principle, the cluster EFT potentials <cit.> are more fundamental than the phenomenological ones, since they are not fitted to reproduce experimental data. Instead, they are obtained by matching to a low-energy theory the expansion of an observable, up to a chosen order, in powers of the ratio of a typical low-momentum scale over a high-momentum scale. In our case, the low-energy theory is the effective range theory. The low energy constants (LECs) are related to the terms of the expansion, which, in turn, are related to the effective range parameters. Once the experimental values of these parameters are inserted, we obtain the cluster EFT potentials at a given order. In this work, we employ two-body cluster EFT non-local potentials to study ^6He. We use a three-body model in configuration space with hyperspherical coordinates and the Lagrange-mesh technique <cit.>. In particular, we compute structure properties such as the ground state energy and the root mean square (rms) radius, as well as the electric dipole strength distribution. We Fourier transform the cluster EFT S-wave n-n potential, and the S_1/2 and P_3/2 α-n potentials derived in Refs. <cit.> at LO. An S-wave EFT three-body potential in coordinate representation at LO is introduced to reproduce the correct binding energy of ^6He and to eliminate cutoff dependencies on the two-body potentials. The constructed non-local EFT α-n potential supports an S-wave bound state that simulates a forbidden state. As this state generates spurious eigenvalues of the three-body Hamiltonian, it must be removed. In this paper, we address the elimination of forbidden states in cluster EFT potentials by extending the projection technique given in Ref. <cit.> to non-local potentials. The paper is organized as follows. In Sec. <ref>, we describe the three-body model with non-local potentials in hyperspherical coordinates. Section <ref> summarizes the derivation of the two-body potentials. In Sec. <ref>, we describe the calculation of the dipole strength distribution. Section <ref> shows the application to ^6He. Concluding remarks are given in Sec. <ref>. § THREE-BODY MODEL WITH NON-LOCAL POTENTIALS §.§ Coupled differential equations in hyperspherical coordinates Let us consider a three-body nucleus made of three clusters, each with nucleon number A_i and position coordinate r_i. The Hamiltonian of the system, for two-cluster interactions only, is given by H=∑_i=1^3 p^2_i/(2m_N A_i)+∑_i<j=1^3 V_ij, with m_N the nucleon mass and p_i the momentum of cluster i. After removing the center-of-mass motion, we need to solve H_3bΨ^JMπ=EΨ^JMπ, with E the three-body energy measured from the three-body breakup threshold. The function Ψ^JMπ is an eigenstate of the three-body Hamiltonian with total angular momentum J, projection on the z-axis M and parity π. Variational solutions of Eq. (<ref>) with E<0 are bound states and those with E>0 are the so-called pseudostates. Aiming at solving Eq.
(<ref>), we make use of the scaled Jacobi coordinates <cit.> x_k=√(μ_ij)(r_j-r_i), y_k=√(μ_(ij)k)( r_k-A_i r_i+A_jr_j/A_i+A_j), with (i,j,k) a cyclic permutation of {1,2,3}. Thus, we have three sets of scaled Jacobi coordinates. The dimensionless reduced masses μ_ij and μ_(ij)k are defined as μ_ij=A_iA_j/A_i+A_j, μ_(ij)k=(A_i+A_j)A_k/A_i+A_j+A_k. Each of the three sets of scaled Jacobi coordinates in Eq. (<ref>) defines the hyperspherical coordinates ρ^2=x_k^2+y_k^2, α_k=arctan(y_k/x_k); 0≤α_k ≤π/2, Ω_x_k=(θ_x_k,φ_x_k), Ω_y_k=(θ_y_k,φ_y_k), where ρ and α_k are called the hyperradius and the hyperangle, respectively. In the hyperspherical formalism, the eigenstate Ψ^JMπ is expanded in hyperspherical harmonics, Y^JM_γ K(Ω_5_k), as Ψ^JMπ(ρ,Ω_5_k)=ρ^-5/2∑_K=0^∞∑_γχ^Jπ_γ K(ρ) Y^JM_γ K(Ω_5_k), where χ^Jπ_γ K(ρ) is the hyperradial wave function and the hyperspherical harmonics Y^JM_γ K(Ω_5_k) are given by 𝒴^JM_γ K(Ω_5_k)=[𝒴^l_xl_yK_L(Ω_5_k)⊗χ_S]^JM, with Ω_5_k=(Ω_x_k,Ω_y_k,α) and χ_S the spin wave function of the three bodies. The term Y_l_xl_yK^LM_L(Ω_5) is defined as the following coupling Y_l_xl_yK^LM_L(Ω_5)=ϕ_K^l_xl_y(α)[Y_l_x(Ω_x)⊗ Y_l_y(Ω_y)]^LM_L, where we have omitted the subindex k and where the function ϕ_K^l_xl_y(α) is given in Ref. <cit.>. Index γ in Eq. (<ref>) stands for γ=(l_x,l_y,L,S), with l_x and l_y the orbital angular momenta associated with a set of Jacobi coordinates in Eq. (<ref>), L is the total orbital angular momentum coupled to l_x and l_y, and S is the total spin of the clusters. The parity of a state is given by π=(-1)^K. In numerical calculations, the hypermomentum K is truncated at a K_max value. After inserting Eq. (<ref>) into the Schrödinger equation for two-cluster non-local potentials only, we end up with the set of coupled differential equations <cit.> (-ħ^2/2m_N[d^2dρ^2-(K+3/2)(K+5/2)ρ^2]-E) χ^Jπ_γ K(ρ) +∑_K' γ'∫_0^∞W_γ K,γ' K'(ρ,ρ')χ^Jπ_γ' K'(ρ')dρ'=0. The non-local kernel W_γ K,γ' K'(ρ,ρ') in Eq. (<ref>) is given by W_γ K,γ' K'(ρ,ρ')=∑_k=1^3∑_l_x”l_y” (ρρ')^-3/2μ_ij^-3/2δ_LL'δ_SS' ×⟨ k,l_x”l_y”|i,l_xl_y⟩_KL ×⟨ k,l_x”l_y”|i,l'_xl'_y⟩_KLℐ(ρ,ρ'), with the integral ℐ(ρ,ρ') defined as ℐ(ρ,ρ')=∫_0^min(ρ,ρ') W_k^l_x”( x/√(μ_ij),x'/√(μ_ij)) ϕ_K^l_x”l_y”(α)ϕ_K'^l_x”l_y”(α)xx'y^2dy. In Eq. (<ref>), the W_k^l_x”( x/√(μ_ij),x'/√(μ_ij)) term is a non-local cluster-cluster interaction, x=(ρ^2-y^2)^1/2, x'=(ρ'^2-y^2)^1/2, α=arctan(y/x), α'=arctan(y/x'). The ⟨ k,l_x”l_y”|i,l_xl_y⟩_KL in Eq. (<ref>) are the Raynal-Revai coefficients <cit.>. These coefficients allow to transform the hyperspherical harmonics that depend on the set of coordinates i to the set of coordinates k. §.§ Diagonalization of the Hamiltonian with the Lagrange mesh technique The Lagrange mesh technique is a variational calculation on a mesh <cit.>. This efficient technique simplifies the calculations of the Hamiltonian matrix elements, when they are computed at the Gauss approximation associated with the mesh. Therefore, with this technique the kinetic matrix elements become analytical and the matrix elements for local potentials are diagonal and evaluated at the mesh points. Let us use the Lagrange-mesh technique to solve Eq. (<ref>) when non-local potentials are involved. 
That is, we expand the hyperradial wave function χ^Jπ_γ K(ρ) over a set of N regularized Lagrange-Laguerre functions f̂_n(ρ/h) as χ^Jπ_γ K(ρ)=1/√(h)∑_n=1^N C_γ Kn^Jπf̂_n(ρ/h), where the C_γ Kn^Jπ are the coefficients of the expansion, h is a scaling parameter that allows to adjust the mesh to the size of the physical system, and the basis functions f̂_n(ρ/h) are given in Ref. <cit.>. After inserting the expansion (<ref>) into the set of coupled differential equations (<ref>) and projecting onto 1/√(h)f̂_n(ρ/h), we get the eigenvalue problem ∑_γ'K'n'( T^Jπ_γ Kn,γ'K'n'+W_γ Kn,γ' K'n' -Eδ_γγ'δ_KK'δ_nn')C_γ' K'n'^Jπ=0, with T^Jπ_γ Kn,γ'K'n' defined as T^Jπ_γ Kn,γ'K'n'=ħ^2/2m_N( T̂^G_nn'/h^2+ (K+ 3/2)(K+5/2)/u_n^2h^2δ_nn')δ_γγ'δ_KK'. The kinetic matrix elements T̂^G_nn' of the Lagrange-Laguerre basis are given in Ref. <cit.>. The potential matrix elements W_γ Kn,γ' K'n' become W_γ Kn,γ' K'n' =1/h∫_0^∞dρ∫_0^∞dρ'f̂_n(ρ/h) W_γ K,γ' K'(ρ,ρ')f̂_n'(ρ'/h), where the numerical computation is performed following the appendix of Ref. <cit.>. § TWO-BODY EFT POTENTIALS §.§ Derivation The construction of an EFT starts with the finding of the most general Lagrangian of the system. This Lagrangian must be consistent with all the symmetries in terms of the relevant effective degrees of freedom in the energy regime considered. Galilean-invariant operators in the Lagrangian depend on the relative momenta ħk and ħk', only. Thus, the potential can be written as a series of contact interactions and their derivatives, i.e., at low energies in momentum space, we can write V(k,k')=∑_l=0^∞(2l+1)V_l(k,k')P_l(k̂,k̂'), where P_l is a Legendre polynomial of degree l and where the partial wave term of the potential V_l is written as V_l(k,k')=k^lk'^lg(k)g(k')∑_αβ=0^1k^2αλ_αβk'^2β. The matrix elements λ_αβ are defined in the matrix λ= [ λ_0 λ_1; λ_1 0; ] . Note that the limits of the sums over the indexes α and β in Eq. (<ref>) could be larger than one. However, we are restricting these sums up to k^2 terms as it will be explained later. The factor g(k)=e^-(k/Λ)^2m, in Eq. (<ref>) is a regularization function, where Λ is called the cutoff parameter. This parameter suppresses high momenta components, that is, g(k)→ 0 when Λ→∞. However to get λ_0 and λ_1 finite, Λ must also be finite and limited by the Wigner bound <cit.>. In principle, observables should not depend on Λ. The coefficients λ_0 and λ_1 in Eq. (<ref>) are the so-called low energy constants (LECs) that capture the effects from high-energy physics. The main idea is to find the LECs in Eq. (<ref>) from the matching with a low energy theory. To this end, we make use of the partial wave expansion of the Lippmann-Schwinger equation of the T-matrix <cit.> T_l(k,k')= V_l(k,k')+1/2π^2× ∫_0^∞dqq^2V_l(k,q)1ħ^2k^2/2μ-ħ^2q^2/2μ+iϵT_l(q,k'), where μ is the reduced mass of the two clusters. Here, the T-matrix is expanded as in Eq. (<ref>) and the free state |k⟩ is normalized as ⟨k|k'⟩=(2π)^3δ(k-k'). The LECs λ_0 and λ_1 are found when the potential (<ref>) is introduced in Eq. (<ref>) and it is matched with the on-shell partial wave of the T-matrix T_l T_l=-2πħ^2/μ(-1/a_l+1/2r_lk^2-ik^2l+1)^-1k^2l. Equation (<ref>) is derived from the the effective range expansion (ERE) for uncharged cluster-cluster interactions <cit.> k^2l+1δ_l=-1/a_l+1/2r_lk^2+⋯. In Eq. (<ref>) δ_l is the phase-shift for the elastic scattering in the partial wave l. The a_l and r_l are the so-called scattering length and effective range parameters, in units of fm^2l+1 and fm^-2l+1, respectively. 
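To make the separable form above concrete, the following sketch evaluates a single partial wave of the potential V_l(k,k')=k^l k'^l g(k) g(k') ∑_αβ k^2α λ_αβ k'^2β with the regulator g(k)=e^-(k/Λ)^2m on a small momentum grid. It is only an illustrative Python sketch: the function names and the numerical values of λ_0, λ_1, Λ and m are placeholders, not the LECs or cutoffs fitted in this work.

import numpy as np

def regulator(k, Lam, m=2):
    # super-Gaussian regulator g(k) = exp(-(k/Lam)**(2m)); suppresses high momenta
    return np.exp(-(k / Lam) ** (2 * m))

def v_l(k, kp, l, lam0, lam1, Lam, m=2):
    # Separable partial-wave potential:
    # V_l(k,k') = k^l k'^l g(k) g(k') * sum_{ab} k^(2a) lambda_ab k'^(2b),
    # with lambda = [[lam0, lam1], [lam1, 0]], so the bracket is lam0 + lam1*(k^2 + k'^2).
    lam = np.array([[lam0, lam1], [lam1, 0.0]])
    left = np.array([1.0, k ** 2])    # (k^0, k^2)
    right = np.array([1.0, kp ** 2])  # (k'^0, k'^2)
    return (k ** l) * (kp ** l) * regulator(k, Lam, m) * regulator(kp, Lam, m) * (left @ lam @ right)

Lam = 1.0                 # placeholder cutoff (fm^-1)
lam0, lam1 = -1.0, 0.5    # placeholder LECs, not the fitted values of this work
kgrid = np.linspace(0.0, 3.0, 4)
V = np.array([[v_l(k, kp, 0, lam0, lam1, Lam) for kp in kgrid] for k in kgrid])
print(np.round(V, 4))     # symmetric matrix, damped by the regulator at high momenta

Contracting the 2×2 λ matrix with the vectors (1, k^2) and (1, k'^2) makes explicit the truncation of the sums at k^2 discussed next.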
We consider terms up to the effective range i.e., up to k^2, which limits the indexes of the sums in Eq. (<ref>) up to 1. Once the experimental values of a_l and r_l are introduced in Eq. (<ref>) in combination with an appropriate power counting, the LECs λ_0 and λ_1 are obtained (see the Appendix for a summary and Refs. <cit.> for details). Equation (<ref>) involves partial waves of cluster-cluster potentials in coordinate representation. They can be obtained from the EFT potentials of the form (<ref>) by the double Fourier transform V(r,r')=1/(2π)^3∫ dkdk'e^-i(r·k+r'·k')V(k,k'). Expanding V(r,r') and V(k,k') in partial waves as shown in Eq. (<ref>), and with the help of the plane wave expansion in spherical waves, we have that each partial wave satisfies V_l(r,r') =(-1)^l2/π∫_0^∞∫_0^∞dkdk'k^2k'^2j_l(kr)V_l(k,k')j_l'(k'r'). In Eq. (<ref>) j_l(kr) is a spherical Bessel function of the first kind. §.§ Forbidden states The concept of forbidden states shows up in many-body theories of fermions, since the full antisymmetrization of the wave function must be taken into account. For two-cluster nuclei, the Pauli principle can be considered approximately by choosing deep nucleus-nucleus interactions containing unphysical bound states that simulate the forbidden states <cit.>. These forbidden states are located at energies E_F lower than the physical ones. When we are treating three-body nuclei, the two-body forbidden states add spurious eigenvalues associated with the three-body Hamiltonian that need to be removed. Different techniques such as the supersymmetry transform of the nucleus-nucleus potential <cit.> or the projection technique <cit.> have been introduced to remove forbidden states in local nucleus-nucleus potentials. These transformations keep the phase shifts unaffected. In this paper, we extend the projection technique <cit.> to non local potentials by substituting V_l(k,k') in Eq. (<ref>), for the l=0 α-n potential, by V_l(k,k')=V_l(k,k')+Γϕ_l(k)ϕ_l(k'), where Γ is a large constant energy value (typically Γ∼10^3 -10^9 MeV). The function ϕ_l(k) is a forbidden state or an eigenstate of the nucleus-nucleus Hamiltonian with eigenvalue E_F. Thus, ϕ_l(k) is orthogonal to the physical bound states. § E1 STRENGTH DISTRIBUTION Electric dipole excitations of weakly bound nuclei with just one bound state go directly to the continuum. If such nuclei are made of two-clusters, the calculation of the E1 strength distribution is rather simple <cit.>. However, for three-body nuclei, the computation of the three-body continuum with the correct asymptotic behavior, which is needed to compute E1 strength distributions, may involve rather heavy calculations such as solving the Faddeev equations in the continuum <cit.> or the R-matrix method <cit.>. There are other methods that extend the application of bound state variational calculations to the continuum. They are the complex scaling <cit.>, particularly suitable to treat resonant continuum regions, and integral transform (LIT) methods also in non-resonant energy regions <cit.>. Another widely used technique is the so-called pseudostate approach <cit.>. In Refs. <cit.> the E1 strength distribution of ^6He is computed in a three-body model with hyperspherical coordinates and two-body local potentials. The complex scaling and the pseudostate methods are tested with a computation that includes the correct asymptotic behavior of the continuum. 
These references show very good agreement among the computation with the exact continuum and the discretized approaches, although some ambiguity exists in the pseudostate method due to the smoothing technique necessary to derive continuous distributions. In the following we briefly explain the pseudostate method and clarify that no ambiguity exists when it is used within an integral transform approach. We start following the definition of the three-body dipole strength distribution <cit.> dB_E1/dE(E) =1/2J_0+1δ(E- E) ∑_JMπ M_0μ∑_γ_ωK_ω|⟨𝒦Ψ^JMπ_γ_ωK_ω( E)|ℳ^(E1)_μ|Ψ^J_0M_0π_0⟩|^2, where Ψ^J_0M_0π_0 is the initial bound state characterized by total angular momentum, projection on the z axis and parity, J_0, M_0, and π_0. respectively The final unbound state is represented by Ψ^JMπ_γ_ωK_ω(ε), with total angular momentum J, angular momentum projection M and parity π, where 𝒦 is the time-reversal operator and E is the excitation energy defined from the three-body breakup threshold. Note that here we have defined the response function with a Dirac Delta function to be further introduced into an integral transform. The electric dipole operator for a core+n+n system, with the Jacobi coordinate x along the n-n motion (see Ref. <cit.> for details), is given by ℳ^E1_μ(α,ρ)=eZ_c(2/AA_c)^1/2ρsinα Y_1^μ(Ω_y), with A_c and eZ_c the nucleon number and charge of the core, and A=A_c+2. On the other hand, an observable in an experiment is measured with a resolution defined by the experimental apparatus <cit.>. The latter can be represented in various forms. We will use a generical function K_resol(E, E,Γ...), where E,Γ... are the parameters of the resolution function. For example, if it is represented by a Gaussian or a Lorentzian function, E and Γ could be the center and the width of those functions. In principle Γ could also depend on E. We take it constant for simplicity of the presentation. Therefore, because of the experimental resolution, one actually measures the following integral transform of the dipole strength dB_E1/dE( E)=∫ dE dB_E1/dE(E)K_resol(E, E,Γ...). Then, introducing the theoretical E1 strength distribution (<ref>) in Eq. (<ref>) and inserting the completeness relation of states that diagonalizes the three-body Hamiltonian, one can show that for the case of the Borromean nucleus ^6He, that has J_0^π=0^+ and J^π=1^-, we obtain <cit.> dB_E1/d E=∑_λ |⟨Ψ^1M-_λ( E_λ)|ℳ^E1|Ψ^00+⟩|^2K_resol(E, E_λ,Γ...). The Ψ^1M-_λ( E_λ) are pseudostates or eigenstates of the three-body Hamiltonian at positive energies E_λ, for the partial wave J^π=1^-. What is important here is that for general positive definite Kernels such as Lorentzians or Gaussians, and the easily verified conditions about the existence of the transform and of the total strength, a theorem ensures that the transform can be calculated by means of finite norm functions (the pseudostates) <cit.>. The smoothed discretized distribution, namely the integral transform, converges exactly to the experimental folded strength distribution, at sufficiently high number of pseudostates. It has been noticed that the experimental smoothing procedure excludes higher-frequency oscillations, which therefore are irrelevant. On the other hand, theoretical results that show isolated Lorentzian/Gaussian peaks may indicate that the corresponding structure is not sufficiently resolved. Therefore an enlargement of the basis space is necessary. Response functions can be obtained from the inversion of integral transforms <cit.>. 
However, we would like to emphasize that if a smooth theoretical result is obtained with the experimental resolution, an inversion of the transform is not necessary. Problems can only arise if the experimental resolution is so small that such a smooth result is not obtained and isolated peaks due to single pseudostates appear, even if the basis is enlarged. In such a case it is more reliable to use an inversion for the transform <cit.>. § ^6HE STRUCTURE §.§ Two-body potentials The derivation of the LECs λ_0 and λ_1 for the two-cluster potential (<ref>) is performed in Refs. <cit.> and is summarized in the Appendix. We use the experimental values of the scattering length a_l and the effective range r_l cited in Table <ref>. They are taken from Refs. <cit.> for the n-n system, and from Ref. <cit.> for the α-n system. Once the EFT potentials are calculated in momentum space, they are transformed to coordinate representation by performing a double Fourier transform as indicated in Eq. (<ref>). The integrals are computed numerically with the help of a double Gauss-Legendre quadrature of N_k=300 points. Figure <ref> shows the low energy behavior of the n-n ^1S_0 phase shift computed with the two-body EFT potentials in comparison with a calculation that uses the AV18 potential <cit.>. We show the dependence on the exponent m of the Gaussian regularization function (<ref>) and on the cutoff parameter Λ_nn. As mentioned before, this parameter should be as large as possible, limited by the Wigner bound and the physics, in principle, should not depend on it. We observe a slight dependence on the cutoff parameter apart from Λ_nn=100 MeV. The low energy phase-shifts for the S-wave and P_3/2 α-n systems are studied in Refs. <cit.>. In coordinate representation, we calculate the phase shifts with the R-matrix method for non-local potentials <cit.>. We also check that the Fourier transformed EFT potentials reproduce the experimental scattering lengths a_l and the effective range r_l values. To this end, we extended Ref. <cit.> to compute effective range parameters with the R-matrix method for non-local potentials. Table <ref> gives the set of EFT potential parameters that are used to compute ground state properties of ^6He, unless mentioned otherwise. We choose Λ values as large as possible avoiding being close to Wigner bounds. On the other hand, the S-wave α-n EFT potential binds a forbidden state. We remove this state by adding a projector to the potential in momentum space as indicated in Eq. (<ref>). The functions ϕ(k) are obtaining by solving the Schrödinger equation in momentum space following Ref. <cit.> with N_p=30 Lagrange-Laguerre basis functions and scaling parameter h_p=0.3 fm. We have checked that the phase shifts remain unaffected with the addition of the projector potential. Once we have the potential (<ref>), we proceed to perform its double Fourier transform. Table <ref> shows the S-wave α-n potential parameters for m=2 and different cutoff parameters Λ^0_α n. We also show the associated forbidden state energies. These energies agree with the value of 12.38 MeV given by the Kanada et al. local potential <cit.> used in Ref. <cit.>. §.§ α+n+n ground state properties The two-body potentials with parameters given in Table <ref> are used in Eq. (<ref>) to compute the ground state energy, E_0, of ^6He. We solve the diagonalization problem as shown in Ref. <cit.> with N=60 mesh points and scaling parameter h=0.3 fm. 
The integrals over x and x' are computed with a Gauss-Laguerre quadrature with N_2=30 and h_2=0.3 fm. The integral over y is performed with N_y=250 points and a step of h_y=0.05 fm. The two-body potentials alone do not bind the system. Therefore, we introduce the three-body force V_3(ρ)=-V_03e^-(ρ/ρ_0)^2, which corresponds to an S-wave LO cluster EFT potential in coordinate representation. In Eq. (<ref>), ρ_0 plays the role of a three-body cutoff parameter. The introduction of this force allows us to reproduce the experimental ground state energy and to remove the dependence of this energy on the cutoff parameter of the two-body EFT potentials. In Fig. <ref>, we study the convergence of E_0 with respect to the maximum K value, K_max, in the expansion (<ref>) of the three-body wave function. We also check the dependence of this energy on the projector strength Γ used to eliminate the forbidden state in the S_1/2 non-local α-n potential. As is the case for local potentials <cit.>, Γ values of at least 1000 MeV are needed to make the ground state energy independent of Γ. We use V_03=-15.2 MeV and ρ_0=5 fm to obtain the theoretical value of -0.979 MeV, which is close to the experimental value -0.97546(23) MeV <cit.>. We also compute the rms radius of the α+n+n system through the expression <cit.> ⟨ r^2 ⟩_^6He=(1/6)⟨Ψ^J_0M_0π_0|ρ^2|Ψ^J_0M_0π_0⟩ +(2/3)⟨ r^2 ⟩_α, with Ψ^J_0M_0π_0 the ground state of ^6He and √(⟨ r^2 ⟩)_α the rms radius of the ^4He nucleus. For this radius we take the experimental value of 1.463(6) fm given in Ref. <cit.>. With the three-body ground state mentioned before, we obtain the value of 2.81 fm for the rms radius of ^6He. This value is in fair agreement with the experimental measurement of 2.49(4) fm <cit.>. §.§ Electric dipole strength distribution In this section we compute the electric dipole strength distribution of ^6He using Eq. (<ref>). We employ a Gaussian smoothing distribution with σ=(0.027+0.177E^0.654) MeV, which corresponds to the experimental energy resolution <cit.>. This distribution is normalized to unity from 0 to ∞. We use the same Hamiltonian and numerical conditions to compute the ground and the 1^- discrete states. Figure <ref> shows the convergence of the E1 strength distribution with the maximum value of the hypermomentum, K_max, of the 1^- pseudostates. We observe that convergence is achieved at K_max=13. In Fig. <ref> we show the convergence of the E1 strength distribution with the number of Lagrange-Laguerre basis functions involved in the diagonalization of the three-body Hamiltonian with non-local potentials. The number of pseudostates corresponds to 242, 395 and 525 for N=40, N=50 and N=60, respectively. As isolated resonances do not appear when the number of pseudostates in the sum (<ref>) is increased, we do not need to perform an inversion of the integral transform (<ref>). Aiming at assessing the effect of the cutoff parameter of the two-cluster potentials on the E1 strength distribution, we compute this quantity for different values of the Λ_nn, Λ^0_α n and Λ^1_α n cutoff parameters of the S_0 n-n, S_1/2 α-n and P_3/2 α-n potentials, respectively. We observe a strong dependence of the E1 strength distribution for the values Λ_nn=400 MeV, Λ^0_α n=500 MeV and Λ^1_α n=300 MeV, which are close to the Wigner bounds 410 MeV, 510 MeV and 340 MeV of the S_0 n-n, S_1/2 α-n and P_3/2 α-n potentials, respectively.
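As an aside, the Gaussian folding used for the E1 distribution above can be sketched in a few lines of Python: discrete pseudostate strengths are summed with a Gaussian kernel of width σ(E)=(0.027+0.177E^0.654) MeV, renormalized to unit area on [0,∞). The pseudostate energies and strengths below are made-up placeholders, and evaluating the width at the pseudostate energy is our assumption for illustration only.

import numpy as np
from math import erf, sqrt, pi

def sigma(E):
    # experimental energy resolution quoted above (MeV)
    return 0.027 + 0.177 * E ** 0.654

def kernel(E, E0, s):
    # Gaussian centered at E0 with width s, renormalized to unit area on [0, infinity)
    norm = 0.5 * (1.0 + erf(E0 / (sqrt(2.0) * s)))
    return np.exp(-0.5 * ((E - E0) / s) ** 2) / (sqrt(2.0 * pi) * s * norm)

def folded_strength(E, E_ps, B_ps):
    # dB/dE(E) ~ sum_lambda B_lambda * K(E, E_lambda): folding of the pseudostate sum
    return sum(B * kernel(E, E0, sigma(E0)) for E0, B in zip(E_ps, B_ps))

E_ps = [0.8, 1.5, 2.4, 3.9, 6.0]        # placeholder pseudostate energies (MeV)
B_ps = [0.10, 0.35, 0.20, 0.08, 0.03]   # placeholder E1 strengths (arbitrary units)
grid = np.linspace(0.1, 8.0, 5)
print([round(float(folded_strength(E, E_ps, B_ps)), 4) for E in grid])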
We also find that the ground state of ^6He exhibits a slow convergence with K_max (K_max=30), when the two-body potentials have a cutoff parameter that tends to a Wigner bound. In addition, we check the dependence of the E1 strength with the exponent m of the regularization function (<ref>). We get that this quantity differs at most 10% for all potentials in the peak region. In general, the EFT ^6He E1 strength distributions are in good agreement with the computation of the three-body model with phenomenological two-body local potentials of Ref. <cit.>. In Fig. <ref>, we show the dependence of the electric dipole strength distribution with ρ_0, i.e., with the cutoff parameter of the S-wave LO three-body potential. We use the values ρ_0=3 fm and ρ_0=5 fm. With ρ_0=3 fm, we need a depth of -47.5 MeV to reproduce the ground state energy of ^6He. This three-body potential provides a rms radius of 2.54 fm. We observe a strong dependence on the three-body cutoff parameter. This dependence could be related with the choice of the three-body interaction. We use a LO S-wave interaction, since its introduction in hyperspherical coordinates in configuration space is straightforward, and has the same functional form commonly used in three-body calculations with phenomenological local potentials <cit.>. However, as the ^6He structure is dominated by the α-n P_3/2 resonance the use of a P-wave three-body potential in our EFT three-body model may be more suitable. Refs. <cit.> introduce a three-body P-wave interaction in momentum space. However, its introduction in the coordinate representation is not direct and out of the scope of the present paper. Although the shape of the S-wave LO three-body interaction (<ref>) is the same as the one used for three-body calculations with two-body local phenomenological potentials, its effect is different. Fig. <ref> shows the dependence on ρ_0 of the E1 strength distribution of ^6He, computed with the three-body model of Refs. <cit.>. We employ the same potentials and numerical conditions. However, for the sake of simplicity in the comparison, we just use K_max=9 for the final state. Higher values will shift the peak position to lower energies. To reproduce the experimental ground state energy of ^6He, we tuned the depth of the potential to -7 MeV and -2.5 MeV, which corresponds to ρ_0=3 fm and for ρ_0=5 fm, respectively. With these three-body potentials, we find the rms radius of 2.39 fm and 2.47 fm. From Fig. <ref> we observe that the dependence of the E1 strength on ρ_0 is not as strong as for the case of EFT derived potentials. § SUMMARY AND CONCLUSIONS We have studied three-body systems with cluster EFT two-body potentials in coordinate representation, instead of momentum space, where the potentials are naturally derived. Besides avoiding complications with the Coulomb interaction, this will allow to introduce the three-body wave functions into existing low-energy reaction codes. This is necessary to achieve a more fundamental description of reaction processes and to study non-local effects in reaction theory. To this end, we have started with the ^6He nucleus by using a S_0 n-n, and a S_1/2 α-n and P_3/2 α-n EFT potentials. We have used a three-body model in hyperspherical coordinates with the Lagrange-mesh technique. The model incorporates the two-body non-local interactions that result from the Fourier transformed cluster EFT potentials in momentum space. 
We have also addressed the removal of forbidden states in cluster EFT potentials by extending the projection technique <cit.> to the non-local S_1/2 α-n potential. A LO S-wave three-body interaction is introduced in the Hamiltonian to properly bind the ^6He nucleus and to eliminate dependencies on the two-body cutoff parameter. Besides the ground state energy of ^6He, we have computed its rms radius and the electric dipole strength distribution. For the latter, we have used the pseudostate approach, which corresponds, under certain conditions, to the integral transform method. In fact, we have compared the results to the experimental data employing the same resolution as in experiment. Such a controlled pseudostate approach, which uses the discretization of the continuum, does not represent an approximation, under the conditions of the present calculation, and the obtained results can be compared directly to the data. The condition, that an increase in the number of pseudostates exhibits a convergent smooth result without isolated resonances, is verified by our calculation. Therefore an inversion of the transform has not even been necessary. An EFT should not, in principle, depend on the cutoff parameter of the renormalization function. Thus, we have checked this dependence for the two-body potentials as well as for the three-body interaction. We have observed that the E1 strength distribution depends strongly on the two-body cutoff parameter when it is close to the Wigner bound, requiring a certain care at choosing this value. Moreover we have found a rather strong dependence on the three-body cutoff. It is worth noticing that although its functional form is the same as the one used in the case of three-body calculations with phenomenological potentials, its effect results to be rather different. Because of the strong dependence on the three-body cut-off the comparison with existing experimental data is problematic. Different from a smaller cutoff a larger cutoff reproduces the energy of the resonance but gives too high a strength. This calls probably for a modification of the three-body interaction (higher order terms or P-wave interaction). On the other hand the experimental situation is also not yet fully settled. The variation of different experimental results is relatively large. In addition the experimental determination of the dipole strength is strongly model dependent as was pointed out in Ref. <cit.>. All this makes ^6He still a very interesting and challenging ground for continuing both theoretical and experimental investigations. § ACKNOWLEDGMENTS We thank C. Ji, E. Filandri and D. Baron for helping discussions on the EFTs potentials. E. C. Pinilla gratefully acknowledges the support given by the INFN. * § LECS UNDER WIGNER BOUND FOR NEUTRAL TWO-BODY INTERACTIONS In this section, we summarize the derivation of the LECs λ_0 and λ_1 of the EFT potential (<ref>) in natural units. We refer the reader to Ref. <cit.> for details. Let us make use of the partial wave expansion of the two-body Lippmann Schwinger equation (<ref>), the partial wave potential (<ref>) and a similar expansion for the T-matrix T_l(k, k')=k^lk'^lg(k)g(k')∑_i j=0^1 k^2 iτ_i j(E) k^' 2 j. After introducing Eq. (<ref>) and Eq. (<ref>) in the Lippmann Schwinger equation (<ref>), we have τ_i j= λ_i j +1/2 π^2∑_i' j'=0^1λ_i i'∫_0^∞dk k^2k^2l+2i'+2j'/E-k^2 /2μ+i ϵ g^2(k)τ_j' j, where g(k) is given by Eq. (<ref>). Equation (<ref>) corresponds to the matrix equation τ=λ+λΦ^(l)τ, with the matrix λ given by Eq. 
(<ref>) and the matrix Φ^(l) defined as Φ^(l)= [ ϕ^(l)_0 ϕ^(l)_2; ϕ^(l)_2 ϕ^(l)_4; ] . The matrix elements of Φ^(l) are given by the expression Φ^(l)_2ν=(1/2π^2)∫_0^∞dk k^2 k^{2l+2ν} g^2(k)/(E-k^2/2μ+iϵ), with ν=0,1,2. These matrix elements can be written recursively as ϕ^(l)_4=I_{2l+5}+k^2 I_{2l+3}+k^4 ϕ^(l)_0, ϕ^(l)_2=I_{2l+3}+k^2 ϕ^(l)_0, ϕ^(l)_{2ν}=I_{2l+1}+⋯+k^{2l+2}ϕ^(l)_{-1}, where the integral I_n is given by I_n=-(μ/π^2)∫_0^∞ dq q^{n-1} g^2(q). When the regulator function is defined as in Eq. (<ref>), we have I_n =-(μ/π^2)(Λ^n/n)f_{n,m}, with f_{n,m}=(1/2)^{n/2m}Γ(n/2m+1). After solving Eq. (<ref>), rescaling the LECs to λ_0=-(π^2/μ)c_0/Λ^{2l+1} and λ_1=-(π^2/μ)c_1/Λ^{2l+3}, expanding in powers of (k/Λ)^2 and comparing with the effective range expression for the on-shell T-matrix (<ref>), we obtain expressions for the coefficients c_0 and c_1 given by c_0=[(f_{3,m}/3-1/c_1)^2f_{1,m}-π/2a_0Λ-f_{5,m}/5]c_1^2, c_1={1±[1-(f_{-1,m}-r_0πΛ/4)f_{3,m}/3(f_{1,m}-π/2a_0Λ)^2]^{-1/2}}3f_{3,m}, for the l=0 potentials and c_0=[(f_{5,m}/5-1/c_1)^2f_{3,m}/3-π/2a_1Λ^3-f_{7,m}/7]c_1^2, c_1={1±[1-(f_{1,m}+r_1π/4Λ)f_{5,m}/5(f_{3,m}/3-π/2a_1Λ^3)^2]^{-1/2}}5f_{5,m}, for the l=1 potentials. Here a_l and r_l are the experimental scattering length and effective range associated with the partial wave l=0,1. For each l, the c_1 constant has two values; the negative root is chosen since it yields the smaller values, which are of more natural size.
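The closed form I_n=-(μ/π^2)(Λ^n/n)f_{n,m} quoted above can be checked against direct numerical quadrature of the defining integral; the short Python sketch below does this for a few (n,m) pairs. The values of μ and Λ are arbitrary placeholders, since they enter only as prefactors.

import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def I_n_quad(n, m, Lam, mu):
    # direct quadrature of I_n = -(mu/pi^2) * int_0^inf dq q^(n-1) g(q)^2,
    # with g(q) = exp(-(q/Lam)^(2m))
    integrand = lambda q: q ** (n - 1) * np.exp(-2.0 * (q / Lam) ** (2 * m))
    val, _ = quad(integrand, 0.0, np.inf)
    return -mu / np.pi ** 2 * val

def I_n_closed(n, m, Lam, mu):
    # closed form: I_n = -(mu/pi^2) * (Lam^n / n) * f_{n,m},
    # with f_{n,m} = (1/2)^(n/(2m)) * Gamma(n/(2m) + 1)
    f_nm = 0.5 ** (n / (2.0 * m)) * gamma(n / (2.0 * m) + 1.0)
    return -mu / np.pi ** 2 * Lam ** n / n * f_nm

mu, Lam = 1.0, 2.0   # placeholder reduced mass and cutoff (natural units)
for n in (1, 3, 5):
    for m in (1, 2):
        print(n, m, round(I_n_quad(n, m, Lam, mu), 6), round(I_n_closed(n, m, Lam, mu), 6))

The two columns agree to quadrature accuracy, which is a quick sanity check when implementing the LECs of this appendix.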
http://arxiv.org/abs/2409.03502v1
20240905131543
Testing Whether Reported Treatment Effects are Unduly Dependent on the Specific Outcome Measure Used
[ "Peter Halpin", "Joshua Gilbert" ]
stat.ME
[ "stat.ME" ]
Testing Whether Reported Treatment Effects are Unduly Dependent on the Specific Outcome Measure Used Peter F. Halpin School of Education University of North Carolina at Chapel Hill Chapel Hill, NC 27514 Joshua B. Gilbert Graduate School of Education Harvard University Cambridge, MA 02138 ============================================================================================================================================================================================================================================================= § ABSTRACT This paper addresses the situation in which treatment effects are reported using educational or psychological outcome measures comprised of multiple questions or “items.” A distinction is made between a treatment effect on the construct being measured, which is referred to as impact, and item-specific treatment effects that are not due to impact, which are referred to as differential item functioning (DIF). By definition, impact generalizes to other measures of the same construct (i.e., measures that use different items), while DIF is dependent upon the specific items that make up the outcome measure. To distinguish these two cases, two estimators of impact are compared: an estimator that naively aggregates over items, and a less efficient one that is highly robust to DIF. The null hypothesis that both are consistent estimators of the true treatment impact leads to a Hausman-like specification test of whether the naive estimate is affected by item-level variation that would not be expected to generalize beyond the specific outcome measure used. The performance of the test is illustrated with simulation studies and a re-analysis of 34 item-level datasets from 22 randomized evaluations of educational interventions. In the empirical example, the dependence of reported effect sizes on the type of outcome measure (researcher-developed or independently developed) was substantially reduced after accounting for DIF. Implications for the ongoing debate about the role of researcher-developed assessments in education sciences are discussed.This material is based upon work supported by the National Science Foundation under Grant No. 2400864. Meta-analyses of educational interventions have consistently documented the importance of methodological factors related to the choice of outcome measures. In particular, when interventions are evaluated using measures developed by researchers involved with the intervention or its evaluation, the effect sizes tend to be larger than when using independently developed measures of the same or similar constructs <cit.>. For example, two recent meta-analyses focusing on PreK-12 math <cit.> and science <cit.> documented average effect sizes that were, respectively, 0.30 SD and 0.27 SD larger when using researcher-developed outcomes. In both studies, the type of outcome measure was more strongly related to reported effect sizes than any other methodological factor, including whether or not the intervention was randomized. The differences associated with the type of outcome were an order of magnitude larger than the focal comparisons among the interventions. Several hypotheses have been proposed about why researcher-developed outcome measures are associated with larger effect sizes <cit.>. 
In the usual case that the outcome measures are educational or psychological assessments (e.g., achievement tests, self-report surveys), some of these hypotheses can be interpreted in terms of how treatment effects depend on the specific questions or “items” that make up the assessments. For example, it has been argued that researcher-developed assessments may be more narrow in scope and more closely aligned with target learning domains than independently developed assessments <cit.>. The rationale here is that assessment items less closely aligned with the intervention should show diminished treatment effects. If independently developed assessments contain relatively larger proportions of such items, this could explain the reported discrepancies between researcher-developed and independently developed outcome measures. Others have argued that the content of researcher-developed measures may be either directly or indirectly pre-exposed to participants in the treatment group <cit.>, or may be otherwise over-aligned with the intervention being evaluated (What Works Clearinghouse, ). For example, some of the questions on a researcher-developed assessment may be drawn directly from program materials seen only by the treatment group. In this situation, we would expect relatively larger “effects” on items that included pre-exposed material. More generally, the pattern of effects on pre-exposed items is unlikely to be consistent with the pattern implied by a treatment effect on the underlying construct being measured, a point we elaborate on below. A related area of research has addressed item-level heterogeneous treatment effects in the context of program evaluations. A recent re-analysis of 46 randomized control trials (RCTs) in education, economics, and health documented extensive item-level heterogeneity, with 46 of 73 outcome measures exhibiting statistically significant item-level variation <cit.>. Various implications of item-level treatment effect heterogeneity have also been documented. For example, reported treatment effects can be highly sensitive to the removal of even just a single item <cit.>, and statistically significant item-level or subscale-level effects can be “masked” by aggregate null effects <cit.>. Variation in item-level effects can also inflate the standard errors of reported treatment effects <cit.>, and correlations between item-level effects and item difficulty may lead to bias in the estimation of treatment-by-covariate interactions <cit.>. The present paper contributes to this growing area of research by showing how item-level treatment effect heterogeneity can lead to systematic bias (technically, inconsistency) in reported treatment effects. To formalize this idea, we draw from the psychometric literature to distinguish between (a) an overall treatment effect on the construct being measured, which is referred to as impact, and (b) item-level treatment effects that are incompatible with an overall treatment effect, which are referred to as differential item functioning <cit.>. By definition, impact generalizes to other assessments of the same construct (i.e., assessments that use different items) whereas DIF is dependent on the specific items that make up the outcome measure. We argue that this distinction is an important but overlooked consideration about the internal validity of research studies in education and related fields. Building on this distinction, an asymptotic test is proposed for inferring whether reported treatment effects are affected by DIF. 
The test is constructed by comparing an estimator of impact that naively aggregates over assessment items with a less efficient estimator that is highly robust to DIF. The former estimator corresponds to the usual practice of reporting treatment effects using the unweighted mean over assessment items. The latter was recently developed to facilitate item-by-item tests of DIF <cit.>. The null hypothesis that both are consistent estimators of the true treatment impact leads to a Hausman-like specification test <cit.> of whether reported treatment effects are due to impact on the target construct instead of item-level variation that would not be expected to generalize beyond the specific outcome measured used. The main result of the paper shows that the robust estimator is consistent for the true treatment impact so long as fewer than 1/2 of items exhibit DIF. This result is complemented by Monte Carlo simulations that illustrate the how the specification test performs when more than 1/2 of the items are systematically biased in the same direction (e.g., toward 0). Together, these results indicate that, while the sign of the test statistic will depend on the proportion of items that exhibit DIF, its population value will be non-zero whenever item-level variability leads to inconsistency of the naive estimate. Consequently, the specification test remains useful regardless of the proportion of assessment items that exhibit DIF. The specification test is illustrated using 34 item-level datasets obtained from 22 randomized control trials (RCTs) of educational interventions. The studies were implemented in 13 countries, addressed a wide range of age groups and educational outcomes, and the majority (21 out of 22) involved cluster randomization of treatment conditions. Although the collection of studies is a convenience sample based on public availability of item-level data, the empirical analyses offer an illustration of the testing procedure and an initial indication that reported treatment effects may often be dependent on the specific outcome measured used. In the discussion, we return to the question of how the proposed methodology can address ongoing concerns about the role of researcher-developed assessments in education sciences. Not all of these concerns can be framed in terms of item-to-item variability of treatment effects. For example, the choice of outcome measure may be systematically related to other aspects of study design <cit.>. It is also the case that item-level effects may represent scientifically interesting sources of treatment effect heterogeneity that can inform education theory <cit.>. Consequently, it is important to understand the limitations of the proposed methodology as well as the potential for future research directions. § DIF IN PROGRAM EVALUATION RESEARCH The conceptual model underlying this research is summarized in Figure 1. Panel (a) depicts a situation in which treatment status, T, directly affects a target construct, θ, that is measured by the assessment items, X_i. There are two main assumptions at play in this panel. First, it is assumed that the path from T to θ is not confounded, which can be assured through random assignment of treatment status. Second is the psychometric assumption of conditional independence, which implies that E[X_i |θ, W] = E[X_i |θ]. Traditionally, this assumption has focused on the case in which W = X_j≠ i are other items on the assessment <cit.>. 
Later, researchers also considered situations in which W may be another variable external to the assessment <cit.>. In panel (b) of the figure, the assumption of conditional independence is relaxed for W =T, so that treatment status directly affects both the target construct and the assessment items. As recently recognized in the literature on causal inference <cit.>, the model in panel (a) is tacitly assumed when taking a univariate summary of the assessment items (e.g., the unweighted mean) and using it to study the treatment effect on the target construct. This can be seen most clearly in the case where the measurement model relating X_i and θ is linear. Writing E[X_i |θ] = a_i θ + b_i implies E[X_i | T = t] = E[E[X_i |θ, T = t] | T = t] = E[E[X_i |θ] | T = t] = a_i E[θ| T = t] + b_i. The crucial step in this derivation is the use of conditional independence in the second line. The overall conclusion is that, under the model in panel (a) of Figure 1, the item-level treatment effects are known functions of the treatment effect on the target construct. Importantly, this leads to testable implications about the pattern of treatment effects over items. Under the model in panel (b) of Figure 1, Equation impact can be re-written as E[X_i | T = t] = a_i E[θ| T = t] + b_i + c_i I(T = 1), where I(T = 1) is an indicator function for treatment status and c_i is the item-specific treatment effect. This situation has been interpreted in terms of item-level treatment effect heterogeneity. As summarized in the introduction, recent research indicates that this phenomenon is quite pervasive in education and related fields. In the psychometric literature, the treatment effect on the target construct (i.e., the solid path from T to θ) is called impact, and the item-specific treatment effects (i.e., the dashed paths from T to X_i) are referred to as DIF <cit.>. An extensive methodological literature on DIF has developed since the 1990s <cit.>.[In particular, Theorem 1 of <cit.> was anticipated by <cit.>.] The present paper follows the psychometric literature by building from the conceptual model in Figure 1 towards more realistic data-analytic models that allow for non-additive DIF and different link functions in the measurement equations. The DIF literature has focused on applications in which the predictor of interest is a pre-existing characteristic of the respondents (e.g., gender, race) rather than treatment status. In this context, DIF is typically interpreted in terms of measurement bias and regarded as a threat to test fairness <cit.>. In terms of Figure 1, the overall goal of DIF analysis is to identify any non-zero red paths in panel (b) and omit those items from the assessment, leaving us with the model in panel (a). The interpretation of DIF in the context of program evaluation research is quite different. Rather than fairness, the focal issue is whether the observed treatment effect is attributable to impact on the target construct or may also reflect other sources of variation. It is argued that distinguishing these two cases is important for establishing the internal validity of studies that evaluate educational interventions. By definition, impact generalizes to other assessments of the same construct. For this reason, impact is a valid causal mechanism through which observed treatment effects can arise. 
On the other hand, DIF with respect to treatment is dependent on the specific items that appear on an assessment, and therefore does not generalize to assessments that do not use those same items. DIF becomes a threat to internal validity when it leads researchers to make incorrect inferences about impact. There are many potential sources of item-level treatment effect heterogeneity, and documenting and theorizing them may provide fertile grounds for future research <cit.>. However, the focus of the present paper is the question of internal validity: Do reported treatment effects reflect impact on the target construct, or are they dependent upon the specific items used to measure the construct? In summary, distinguishing between impact and DIF is important for understanding the causal mechanism underlying observed treatment effects. In particular, this distinction directly addresses whether treatment effects reported using one outcome measure would be expected to generalize to other measures of the same construct. The extent to which observed treatment effects are due to impact or DIF is an empirical question, but one which has received very little attention in the psychometric literature. §.§ Previous Approaches to DIF The methodology developed in this paper can be motivated by considering the following three reasons that traditional methods for DIF analysis are not well suited to the context of program evaluation research. 1. DIF with respect to treatment cannot be discovered until it is too late – i.e., after the intervention has been administered. In this context, the usual practice of addressing DIF by omitting some items from the outcome measure would be highly questionable. In terms of Figure 1, this means we are squarely stuck with panel (b), and, in particular, with the problem of estimating impact in the presence of DIF. This problem has received very little attention in the psychometric literature, because, as mentioned, the usual goal has been to identify and remove problematic items during test development. 2. The shortcomings associated with commonly used item-by-item tests of DIF also become highly problematic in the present setting. These procedures typically involve conducting a large number of statistical tests, often in combination with iteratively re-fitting many IRT models to the data, and manually selecting so-called “anchor items” that are assumed not to exhibit DIF <cit.>. The choice of anchor items is especially problematic, as it can dramatically affect the results of DIF analyses, even changing the direction of impact <cit.>. It has also been shown that these methods can fail badly when a relatively large proportion of items (> 20%) exhibit DIF in the same direction <cit.>. In general, item-by-item tests of DIF are inherently exploratory procedures that are not well suited to program evaluation settings. 3. Item-by-item tests of DIF can be replaced with omnibus tests of whether any item exhibits DIF, which are often referred to in terms of measurement invariance <cit.>. However, inferring whether or not any item exhibits DIF is not the same as inferring whether DIF affects conclusions about impact. DIF on one item may be offset by countervailing DIF on another item, a phenomenon that has been referred to as “DIF cancellation" <cit.>. When DIF is isolated to only a small proportion of items, its overall effect on estimates of impact can be negligible, which is a point illustrated in the simulations studies reported below. 
If an omnibus test of measurement invariance indicates the presence of DIF, researchers will typically proceed with the usual item-by-item tests to identify potentially problematic items, and then refit a “final" model that omits those items or allows their parameters to vary over groups. After all this, we still lack a standard procedure for inferring whether there is a non-zero difference between the estimates of impact in the original model and the final model, because the covariance of the two estimates is not available from the separate model fits. In summary, conventional item-by-item tests of DIF have many practical shortcomings, and, ultimately, are not directly relevant to the question of whether DIF affects conclusions about impact. While this question has received some attention in the psychometric literature, current approaches require that items with DIF are specified beforehand <cit.>. The methodology described in the following section seeks to circumvent this issue by directly comparing two estimators of impact. § STATISTICAL METHODOLOGY This section derives the asymptotic distribution of the difference between an estimator of impact that naively aggregates over assessment items and a less efficient estimator that is highly robust to DIF. The former corresponds to the usual practice of estimating treatment effects using the unweighted mean over items. The robust estimator was recently developed in the context of item-by-item tests of DIF <cit.>. Theorem 1 of the present paper provides a sufficient condition for the robust estimator to be consistent for the true treatment impact on the target construct. The condition is seen to be quite weak, requiring only that there are more “good” items without DIF than there are “bad” items with numerically identical bias. The null hypothesis that both the naive and robust estimators are consistent for the true treatment impact leads to a test of whether DIF affects the naive estimator. As noted, the rationale here is a lot like Hausman's () specification test. However, the derivation of the asymptotic distribution of the difference between the estimators does not require Hausman's lemma regarding the covariance of an efficient estimator and its difference with an inefficient estimator. This is a practical advantage, since Hausman's lemma is an asymptotic result that does not always obtain in finite samples <cit.>. Additionally, the alternative hypothesis considered by Hausman was that only the less efficient estimator is consistent. In the present context, the alternative hypothesis is less clear cut: the robust estimator will remain consistent up to a certain point, but both estimators may become inconsistent if a sufficiently large proportion of items exhibit DIF that systematically favors one group over the other. This limitation is discussed further in the simulation studies reported below. Both estimators are developed using IRT-based scaling with the common items non-equivalent groups (CINEG) design <cit.>. In practice, this means that the proposed methodology can be implemented as a post-estimation procedure following maximum likelihood estimation of a focal psychometric model, estimated separately in the treatment and control groups. This paper focuses on the two-parameter logistic (2PL) IRT model for binary response data, although the main results extend directly to other IRT models (e.g., for ordered categorical data). 
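Before writing down the model, it may help to sketch in Python how data of this kind arise: binary responses are generated from a 2PL model in which treatment shifts the latent trait by an amount δ_0 (impact) and, for a subset of items, also shifts the item intercept directly (DIF), mirroring panel (b) of Figure 1. All numerical values below are illustrative placeholders rather than the simulation conditions used in this paper.

import numpy as np

rng = np.random.default_rng(1)

def sim_2pl(n, a, b, mu_theta, dif=None):
    # Binary responses from logit P(X_i = 1 | theta) = a_i*theta + b_i, theta ~ N(mu_theta, 1);
    # dif holds item-specific intercept shifts applied on top of impact (0 = no DIF).
    m = len(a)
    dif = np.zeros(m) if dif is None else np.asarray(dif)
    theta = rng.normal(mu_theta, 1.0, size=n)
    logits = np.outer(theta, a) + b + dif
    p = 1.0 / (1.0 + np.exp(-logits))
    return (rng.uniform(size=(n, m)) < p).astype(int)

m, n = 10, 5000
a = rng.uniform(0.8, 2.0, size=m)     # discriminations
b = rng.uniform(-1.0, 1.0, size=m)    # intercepts
delta0 = 0.4                          # impact on the latent trait, in control-group SD units
dif = np.zeros(m)
dif[:3] = 0.5                         # items 0-2 additionally favor the treatment group (DIF)

control = sim_2pl(n, a, b, mu_theta=0.0)
treated = sim_2pl(n, a, b, mu_theta=delta0, dif=dif)
print(np.round(treated.mean(axis=0) - control.mean(axis=0), 3))

In this setup, the raw proportion-correct differences on the first three items exceed what impact alone would produce, which is exactly the pattern the methods below are designed to detect.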
§.§ Model Specification Begin by specifying the 2PL model in slope-intercept form: logit(p_gi) = a_giθ_g + b_gi, where: * g = 0 ,1 denotes the control and treatment groups, respectively. * i = 1 … m denotes the assessment items. * p is the probability of endorsing an item. * a > 0 is the item slope (also called discrimination). * b ∈ℝ is the item intercept (also called an item threshold or “easiness”). * θ_g = (θ^*_g- μ_g)/ σ_g with θ^*_g ∼ N(μ_g, σ_g^2). In this paper, we refer to a_gi and b_gi as the item parameters, μ_g and σ_g as scaling parameters, and θ_g as the latent trait. The constraints required to identify the model are imposed by standardizing the latent trait in both groups, which is discussed further in the next section. The results presented below assume that maximum likelihood estimates (MLEs) of the item parameters are available. To this end, let ν_i = [a_i0, b_i0, a_i1, b_i1]^T denote the vector of parameters for item i and ν= [ν_1^T, ν_2^T , ⋯, ν_m^T]^T. The MLEs, ν̂, have asymptotic distribution denoted by √(n) (ν̂ - ν) d→ N(0, V(ν̂)) where n = n_0 + n_1 is the total sample size and n_1 / n_0 → (0, ∞) <cit.>. The focus will be on plug-in estimators constructed using ν̂, and the asymptotic distribution of these estimators will be obtained by applying the Delta method <cit.>. §.§ Estimating impact using IRT-based scaling Standardizing the latent trait separately in both groups, as in Equation glm, implies that impact is ignored when estimating the model. However, it is possible to recover group differences on the latent trait in a post-estimation step, which is often referred to as test scaling or linking. This section outlines a variation on so-called moment-based scaling using comparable items non-equivalent groups (CINEG) procedure <cit.>. The mathematical details follow closely to the established procedures in IRT, but the notation and terminology are adapted to emphasize that the groups of respondents are defined with respect to treatment status. The scaling parameter of interest is defined as the average treatment effect on the target construct, standardized in the control group: δ_0 = (μ_1 - μ_0) / σ_0. We refer to this as impact on the target construct. It also commonly referred to as an effect size <cit.>. Because the variance of the latent trait is not identified without additional model constraints, we do not distinguish between (unstandardized) treatment effects and (standardized) effect sizes in the present paper. Next, define the item-level treatment effects: δ_i(ν) = (b_i1 - b_i0) / a_i0. In the Section <ref> of the Appendix, it is shown that δ_i(ν) = δ_0 if and only if item i does not exhibit DIF. Thus, the δ_i(ν) will be constant over items when the treatment effect on each item is due exclusively to impact. This corresponds to the situation shown in panel (a) of Figure 1 (i.e. “no DIF”). When item-level treatment effects arise from sources other than impact, the δ_i(ν) will vary over items. This corresponds to panel (b) of Figure 1. Aggregating over items leads to the following test-level treatment effect: δ(ν)= ∑_i=1^m w_i δ_i(ν), with convex weights w_i ∈ [0, 1] and ∑_i w_i = 1. The assumption that the item-level treatment effects are due exclusively to impact leads the unweighted mean as a natural test-level treatment effect. In the IRT literature <cit.>, this is referred to as “mean scaling." The unweighted mean will be written as δ_U(ν) with weights w_Ui = 1/m. 
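A minimal Python sketch of these quantities: given control-group slopes a_i0 and the intercepts b_i0 and b_i1 from the two calibrations, the item-level effects δ_i=(b_i1-b_i0)/a_i0 and the mean-scaling estimate δ_U are computed directly. The item parameter values are made up for illustration, with one item constructed to exhibit DIF.

import numpy as np

def item_effects(a0, b0, b1):
    # delta_i = (b_i1 - b_i0) / a_i0: item-level treatment effects on the latent scale
    return (np.asarray(b1, float) - np.asarray(b0, float)) / np.asarray(a0, float)

def naive_impact(a0, b0, b1):
    # delta_U: unweighted mean over items ("mean scaling")
    return item_effects(a0, b0, b1).mean()

# made-up item parameters for a five-item example; item 3 is built to exhibit DIF
a0 = [1.2, 0.9, 1.5, 1.1, 1.0]
b0 = [0.0, -0.5, 0.3, 0.8, -0.2]
b1 = [0.48, -0.14, 1.20, 1.24, 0.20]
print(np.round(item_effects(a0, b0, b1), 2))   # [0.4, 0.4, 0.6, 0.4, 0.4]
print(round(naive_impact(a0, b0, b1), 2))      # 0.44, pulled above the common value 0.4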
In the present context, the plug-in estimator δ̂_U = δ_U(ν̂) can be interpreted as the IRT analog of estimating average treatment effects using the unweighted mean over assessment items. Because δ̂_U is motivated by the assumption that no item exhibits DIF, it is referred to as a naive estimate of impact. To address the situation in which a subset of items i ∈𝒟 exhibit DIF, we can re-write Equation delta as δ(ν) = ∑_i ∈𝒟 w_i δ_i(ν) + δ_0 ∑_j ∉𝒟 w_j. Here it is assumed that 0 < m_D < m, where m_D is the number of items with DIF. From Equation delta-dif, it is apparent that there are two situations in which a plug-in estimator δ(ν̂) will remain consistent for δ_0 in the presence of DIF. The first case arises when w_i = 0 for all items that exhibit DIF. This provides an intuitive motivation for the robust estimator of impact presented in the next section – i.e., the goal is to down-weight items that exhibit DIF. The second case arises when ∑_i ∈𝒟 w_i δ_i(ν) = (1 - ∑_j ∉𝒟 w_j) δ_0. In particular, δ̂_U will remain consistent so long as DIF “cancels out” over items: δ_0 = 1/m_D∑_i ∈𝒟δ_i(ν). Consequently, testing whether a naive estimator of impact is consistent for δ_0 is less restrictive than testing whether any item exhibits DIF, because we are purposely ignoring item-to-item fluctuations that do not have a cumulative effect on impact. §.§ Estimating Impact in the Presence of DIF This section outlines a recently developed test-level treatment effect estimator that is highly robust to DIF <cit.> and presents a new result about its consistency for the true treatment impact. The construction of the estimator borrows computational techniques from robust estimation of a location parameter <cit.>. In the present setting, the item-level treatment effects δ_i(ν) play the role of “data points” whose location we wish “estimate.” This analogy is seen to lead to a robust version of Equation delta. However, it is important to clarify that the asymptotic distribution of the resulting plug-in estimator will be obtained via the Delta method and does not otherwise make use of the asymptotic theory of robust statistics. Begin by writing the weights in Equation delta in terms of a scalar-valued function ψ, to be chosen subsequently: w_i = ψ(u_i) / u_i/∑_r = 1^m ψ(u_r) / u_r. By convention, the ratio ψ(u)/u is set to 1 when u = 0. The argument u_i = u_i(ν) = δ_i(ν) - δ is a function of both the item parameters ν and a “preliminary value” of the test-level treatment effect, δ. The dependence on δ leads to iteratively re-weighted least squares (IRLS) as a computational strategy <cit.>. In the present context, it is desirable to choose ψ to be a so-called redescending function <cit.>. While many redescending functions are available, Tukey's bi-square function performs well in practice and is convenient for illustrating the main ideas. The bi-square is defined as ψ(u) = {[ u (1 - ( u/k)^2)^2 for |u| ≤ k; 0 for |u| > k; ]. . The overall idea behind the bi-square is to choose the tuning parameter k such that ψ = 0 for outliers. This has the effect of setting the weights w_i = 0, so that outliers are down-weighted to zero during estimation. It can also be confirmed that the ratio ψ(u)/u is non-negative and reaches a maximum of 1 when u = 0. Thus, in the idealized case that u_i = 0 for all items without DIF and |u_i| > k for all items with DIF, the test-level treatment effect in Equation delta becomes the unweighted mean of the items without DIF. 
As noted in the previous section, this would ensure that the resulting plug-in estimator remains consistent for true treatment impact by down-weighting any items with DIF, which is the desired behavior. The performance of the bi-square depends strongly on the choice of the tuning parameter k <cit.>. In usual applications, k is chosen to be proportional to the scale (e.g., median absolute deviation) of the data points. In the present context, we can follow a similar logic by choosing k based on the asymptotic distribution of the item-level treatment effects δ̂_i = δ_i(ν̂). Under the null hypothesis that item i does not exhibit DIF (i.e., δ_i(ν) = δ_0), this distribution can be written as √(n)(δ̂_i - δ_0) d→ N(0, V_0(δ̂_i)). Letting α denote the desired Type I Error (false positive) rate for down-weighting items with DIF, we may choose item-specific tuning parameters k_i = z_1-α/2×√(V_0(δ̂_i) / n ), where z_q denotes the q-th quantile of the standard normal distribution. This choice of k_i implies that the weights w_i will be set to zero whenever δ̂_i is beyond the (1 - α) × 100 % asymptotic confidence interval centered at δ. Weights that are computed using Equations w through k will be referred to as robust weights and denoted w_Ri. The plug-in estimator constructed by using these weights in Equation delta will be denoted as δ̂_R = δ_R(ν̂). Halpin () discussed some additional details that improve the robustness and efficiency of δ̂_R; these are summarized in Section <ref> of the Appendix. The following theorem addresses the consistency of δ̂_R. The proof is provided in Section <ref> of the Appendix. Theorem 1. Assume that there exists a value δ such that the number of items whose treatment effect is equal to that value (i.e., δ_i(ν) = δ) exceeds the number of items whose treatment effect is equal to any other particular value, say δ^* (i.e., δ_i(ν) = δ^* ≠δ). Then, as the sample size n →∞, the plug-in estimator δ̂_R defined in Equations delta, w - k converges in probability to δ. The theorem provides a relatively weak condition under which δ̂_R will remain consistent for the true treatment impact, δ_0. In the worst-case scenario in which all items that exhibit DIF have item-specific treatment effects equal to the same aberrant value δ^*, δ̂_R will remain consistent for δ_0 so long as fewer than 1/2 of items exhibit DIF. Perhaps a more natural assumption is that DIF is idiosyncratic – i.e., δ_i(ν) ≠δ_j(ν) unless they are both equal to δ_0. In this case, δ̂_R will remain consistent if just 2 items do not exhibit DIF. Importantly, the proof makes use of the modified tuning parameters k^*_i = max{k_i, ϵ}, where ϵ > 0 is a constant chosen to be less than 1/2 the smallest non-zero distance between any two item-level treatment effects, | δ_r(ν) - δ_s(ν)| for r, s = 1 … m. In practical terms, this means that standard errors of the δ̂_i must be substantially smaller than the variation among the δ_i(ν) before the mechanism ensuring consistency “kicks in." Consequently, the performance of the estimator with realistic sample sizes is an important consideration, which is addressed in the simulation studies reported below.
Thus, when the condition holds, testing the null hypothesis that Δ (ν) = 0 is equivalent to testing δ_U (ν) = δ_0. A Wald test of this hypothesis is available via the asymptotic distribution of the plug-in estimator Δ̂= Δ (ν̂). In Section <ref> of the Appendix, the Delta method <cit.> is used to derive this distribution. The relevant result is √(n) (Δ̂- Δ (ν)) d→ N(0, V(Δ̂)) where V(Δ̂) = ∑_i=1^m(1/m - v_Ri)^2 V(δ̂_i) and v_Ri = ψ'(u_i)/∑_r = 1^m ψ(u_r) / u_r . A closed-form expression for V(δ̂_i) is also given in the Appendix. Equations test and var-Delta define the proposed specification test. The plug-in estimate Δ̂ will be referred to as the Delta statistic and z = Δ̂/ SE(Δ̂) as the Delta test. When the condition stated in Theorem 1 holds, the null hypothesis Δ (ν) = 0 implies that the naive test-level treatment effect δ̂_U is a consistent estimate of the true treatment impact on the target construct, δ_0. When the null hypothesis is false, the naive effect is inconsistent for δ_0 due to item-level sources of variation. In this situation, a pragmatic interpretation is that reported treatment effects are unduly dependent upon the specific items that make up the outcome measure. It is important to emphasize that, if the condition stated in Theorem 1 does not hold, then Δ (ν) = 0 does not necessarily imply that δ_U (ν) = δ_0. This is an important limitation of the proposed test and is addressed further in the simulations reported below. § SIMULATION STUDIES The overall goal of the simulations is to complement Theorem 1 by describing the finite sample performance of the test-level treatment effects and Delta test under varying degrees of DIF. In particular, the theorem implies that the robust estimator δ̂_R will be consistent for whichever value of δ_i occurs most frequently. When fewer than 1/2 of items exhibit DIF, this implies that δ̂_R is consistent for the true treatment impact δ_0. However, when more than 1/2 of items are systematically biased towards the same value (e.g., 0), the theorem implies that the robust estimator will be a consistent estimate of that value instead of δ_0. We are especially interested in the performance of the Delta test in such situations. Additionally, the asymptotic result stated in Theorem 1 requires that the sampling error of the δ̂_i must be substantially smaller than the variation among the item-level treatment effects. Thus, a secondary interest is to compare the bias of the two estimators in finite samples. We report two simulation studies that are motivated by the two theoretical cases considered in the introduction. In the first case, the concern was that a non-null treatment effect could be “washed out" when the outcome measure includes a large proportion of items that are not aligned with the intervention. This case is modeled using an assessment in which items without DIF exhibit moderate impact on the latent trait (δ_i = 0.4), while items with DIF exhibit a null treatment effect (δ_i = 0). The value of 0.4 was chosen to reflect the estimated effect sizes on researcher-developed assessments reported in meta-analyses of math and science education <cit.>. In the second case, the concern was that a null treatment effect could be masked by pre-exposure of assessment content in the treatment group. This is modeled using a null treatment impact on items without DIF (δ_i = 0), while for items with DIF the logit of a correct response systematically favors the treatment group (b_i1 = b_i0 + e where e ∼ U(0.4, 0.5)).
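As a concrete (and purely illustrative) sketch of this data-generating step, the following Python snippet simulates 2PL responses for the two groups, with impact δ_0 on the latent trait and pre-exposure-style DIF (b_i1 = b_i0 + e) on a chosen subset of items. All names are ours, and the item-parameter ranges mirror the ranges given below.

import numpy as np

rng = np.random.default_rng(1)

def simulate_2pl(n_per_group, a, b0, delta0=0.0, dif_items=(), dif_low=0.4, dif_high=0.5):
    """Binary 2PL responses for control (g = 0) and treatment (g = 1) groups.

    a, b0     : item slopes and control-group intercepts (length m)
    delta0    : impact (treatment-group latent mean; control is N(0, 1))
    dif_items : indices of items whose treatment-group intercept is shifted by
                e ~ U(dif_low, dif_high), as in the pre-exposure design
    """
    b1 = np.array(b0, dtype=float)
    idx = list(dif_items)
    b1[idx] += rng.uniform(dif_low, dif_high, size=len(idx))
    responses = {}
    for g, (mu, b) in enumerate([(0.0, np.asarray(b0, dtype=float)), (delta0, b1)]):
        theta = rng.normal(mu, 1.0, size=n_per_group)
        p = 1.0 / (1.0 + np.exp(-(np.outer(theta, a) + b)))   # logit(p) = a*theta + b
        responses[g] = (rng.uniform(size=p.shape) < p).astype(int)
    return responses   # group -> (n_per_group x m) response matrix

# e.g. 16 items, 500 respondents per group, null impact, 4 items with DIF
a  = rng.uniform(0.5, 2.0, 16)
b0 = rng.uniform(-1.5, 1.5, 16)
data = simulate_2pl(500, a, b0, delta0=0.0, dif_items=range(4))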
Note that, in contrast with the first simulation study, the treatment effects on the biased items are not constant. This is because (a) e_i varies randomly over items, and (b) the item discrimination α_0i, which is used to compute the item-level treatment effects δ_i, also varies over items. The resulting range of possible values for the item-level treatment effects is δ_i ∈ [0.2, 1]. This is intended to reflect the situation in which non-zero item-level treatment effects are driven by a mechanism (pre-exposure) that is not related to impact on the target construct and consequently vary over items. The focal factor in both simulations is the proportion of items that exhibit DIF, p ∈ [0, 1]. As noted, this allows us to study the performance of the specification test in cases where the robust estimator is consistent for the true treatment impact (p < 1/2), as well as cases where it is inconsistent (p ≥ 1/2). Items that exhibit DIF are chosen at random in each replication. The other details of the simulation studies are summarized in Table 1. The number of assessment items is m = 16 and the number of respondents per group is n_g = 500. These are not focal factors of the study and are fixed over conditions. Item parameters vary randomly over replications. The range of item slopes is [.5, 2], representing predicted change on the logit scale for a 1 standard deviation increase in the latent trait. The range of item thresholds corresponds to [-1.5, 1.5] standard deviations on the latent trait. In each simulation, the two estimators are compared using half-violin plots, and the specification test is summarized in terms of false positive (Type I Error) and true positive (power) rates. §.§ Simulation 1: Wash-out In this simulation, the true treatment impact is δ_0 = 0.4 and all items that exhibit DIF have null effects. Theorem 1 implies that the robust estimator will be consistent for δ_0 when p < 1/2, but will instead be consistent for 0 when p > 1/2. When p = 1/2, both values are solutions and the estimator is inconsistent. Thus δ_R(ν) ∈{0, 0.4} for all simulation conditions. By contrast, δ_U(ν) will always fall between these two extreme values when 0 < p < 1. This behavior is illustrated in Figure <ref>. When p = 0, it can be seen that both estimators are centered near the true treatment effect (δ_0 = 0.4), although the naive estimator is slightly more efficient. As the number of items with null treatment effects increases up to 1/2, the robust estimator remains much less biased than the naive estimator. At p = 1/2 the distribution of the robust estimator becomes strongly bi-modal, and this is also apparent to a lesser degree for values of p close to 1/2. The robust estimator becomes a consistent estimate of the treatment effect on the biased items (i.e., δ_i(ν) = 0) when p > 1/2, and the two estimators are both centered at this value when all items exhibit null effects (p = 1). The figure shows that the behavior of the two estimators when p < 1/2 is a mirror image (reflection about the horizontal line at 0.2) of their behavior when p > 1/2. Consequently, the sign of the difference between the two estimators “flips" depending on the proportion of items with DIF. However, on average, the absolute value of the difference remains non-zero when 0 < p < 1, and is only equal to zero in the two extreme cases where p = 0 or p = 1. 
In both of these cases, the item-level treatment effects are constant over items, so it is correct to conclude that item-level treatment effect heterogeneity does not affect conclusions about impact. Based on these considerations, we may expect the Delta test to attain the nominal false positive rate when p = 0 or p = 1, and have non-zero power to reject the null hypothesis that Δ = 0 when 0 < p < 1. This conclusion is supported by the insets in Figure <ref>, which provide the rejection rates of the Delta test over the 500 replications, using a nominal rejection rate of α = .05. Overall, the results of this simulation support the conclusion that, while the sign of the Delta statistic depends on the number of items that exhibit DIF, its magnitude will generally be non-zero when a strict subset of items are systematically biased towards the same value. §.§ Simulation 2: Pre-exposure In this simulation, the true treatment impact is δ_0 = 0 and items that exhibit DIF are uniformly distributed in the range δ_i(ν) = [0.2, 1]. This situation is importantly different from the first simulation, because it is less obvious how variation in the item-level treatment effects will affect the finite sample performance of the robust estimator when the proportion of items that exhibit DIF exceeds 1/2. The theorem implies that the robust estimator should remain consistent when as few as 2 items do not exhibit DIF. However, it is also expected that large sample sizes will be required to realize this result. Figure <ref> compares the naive and robust estimators. In contrast to Simulation 1, upward rather than downward bias is induced by DIF. However, the qualitative pattern of results in Simulation 2 is very similar to Simulation 1. The robust estimator is less biased than the naive estimator, up until about p = 1/2 of the items exhibit DIF (8/16). The robust estimator becomes strongly bi-modal close to p = 1/2, and the behavior for p > 1/2 is an (approximate) mirror image of the behavior for p < 1/2. Insets show the false positive rate (when p = 0) and true positive rate (p > 0) of the specification test, using a nominal rejection rate of α = .05. The results of this simulation suggest that the specification test can be useful when more than 1/2 of the items are biased in the same direction, but not necessarily by the same amount. §.§ Summary The simulations show that, when the proportion of items with DIF is less than 1/2, the robust estimator is much less biased than the naive estimator and consequently the Delta test performs as intended. When the number of items with DIF is greater than or equal to 1/2, the robust estimator may no longer be consistent for the true treatment impact. Consequently, the difference between the robust estimator and the naive estimator (i.e., the Delta statistic) is no longer a good approximation of the difference between the true naive test-level treatment effect and the true treatment impact on the target construct. However, the simulations showed that, even after the robust estimator becomes inconsistent, the empirical distributions of the two estimators continued to have different expected values. As an intuitive explanation, we may reason as follows: Theorem 1 shows that the robust estimator should behave like the mode (most common value) of the item-level treatment effects, whereas the naive estimator is their mean.
When the distribution of the item-level effects is highly asymmetrical (e.g., when DIF systematically favors one group over the other), these two values will generally differ. The simulations support the overall conclusion that, while the sign of the test statistic depends on the number of items that exhibit DIF, its magnitude will generally be non-zero when items with DIF are systematically biased in the same direction. In such situations, the proposed specification test can be useful for inferring whether the naive estimator is affected by item-level variation, regardless of how many items exhibit DIF. § EMPIRICAL ILLUSTRATION In this section, we re-analyze 34 datasets from 22 unique RCTs. The data are a subset of a collection of 73 publicly available datasets from 46 RCTs with item-level data <cit.>.[The full collection of datasets will be publicly available at the following URL: https://doi.org/10.7910/DVN/C4TJCA.] In this re-analysis, we focus on educational interventions with children or adolescents. The datasets are summarized in Table <ref>. Together, the datasets include responses from 224,572 individuals to 784 items, representing a wide range of geographic settings and outcome measures. Our re-analysis used initial treatment assignment and considered the first post-treatment follow-up assessment. We included only datasets with more than 500 subjects and more than 10 items, after excluding any items with endorsement rates outside of the 2-98% range, items with abnormally high (>10) or low (< .25) discrimination parameter estimates, or items responded to by less than 10% of the total sample. A handful of items with polytomous responses were re-coded to binary. All IRT models were fitted separately to the treatment and control samples in each dataset. For estimation, we used Stata's IRT module and, where relevant, standard errors were clustered using the same cluster variable as the original studies. The IRT-based estimates of impact and Delta tests were computed using the robustDIF package, written in R (code is available at https://github.com/peterhalpin/robustDIF). Figure <ref> shows the Delta statistic for each dataset. At the 95%-level, the estimates were significantly different from zero in 8 out of 34 datasets (23.5%), although 3 of these were from a single study. Standard errors showed a strong association with sample size (r = -.59, using log(N students)) and a moderate association with number of items (r = -.36 using log(N items)), indicating that large studies with longer outcome measures were best suited to revealing statistical significance. In particular, two studies with large negative estimates and wide confidence bands (Datasets 18 and 20) had smaller sample sizes and relatively few items. Using a more lenient 80%-level of confidence, 15 out of 34 (44.1%) datasets showed some evidence that the naive estimate may be inconsistent for the true treatment impact. Also note that, although the Delta statistic took on both positive and negative values, the simulation studies reported above caution against interpreting the direction of the difference without taking further steps to ensure that the proportion of items that exhibit DIF is less than 1/2. This point is revisited in the Discussion. Figure <ref> shows how the absolute value of the Delta statistic was related to the naive estimates of impact, for researcher-developed (RD) and independently developed (ID) outcomes.
In general, the discrepancy between the naive and robust estimates was larger when the naive treatment effects were larger. The independently developed measures generally showed smaller naive estimates (range = [-0.05, 0.22]) and the discrepancy with the robust estimates was also smaller (range = [0.01, 0.32]). By contrast, the researcher-developed outcomes tended to show larger naive effects (range = [-0.12, 1.00]) that tended to be more discrepant from the robust estimates (range = [0.01, 0.57]). Regressing the naive estimates on the type of outcome revealed that researcher-developed assessments predict larger treatment effects (b = 0.30, SE = 0.10). This regression coefficient is in standard deviation units of the latent trait and is consistent with the research reviewed in the introduction to this paper. In a model that also included the absolute value of the Delta statistic (b = 1.53, SE = 0.32), the relationship with the type of outcome was substantially diminished (b = 0.16, SE = 0.09) and no longer significant at the 5% level. This result suggests that the well-documented relationship between reported effect sizes and the type of outcome measure may be largely due to systematic bias induced by DIF. These regression results accounted for dependence of datasets within studies by including a random effect for study, and they used precision weights to account for sampling error in the naive estimates. However, when interpreting these results, it is important to keep in mind the small sample size (34) and that 5 out of only 11 independently developed outcomes were from a single study. Figure <ref> shows the item-level effects. The datasets are shown in ascending order of the naive test-level effects, and the lighter data points indicate item-level effects that were down-weighted when estimating the robust test-level effect. The figure indicates substantial item-level treatment effect heterogeneity, which has already been documented using these data <cit.>. The standard deviation of the item-level effects was positively associated with the naive test-level effect (r = .41), showing that datasets that reported larger effects also exhibited more item-level variability. Some of the observed item-to-item variability is due to sampling error. However, some of this variability may also be due to idiosyncratic item-level sources of variation that do not reflect impact on the underlying construct (i.e., DIF). These potentially problematic item-level effects are represented by the lighter data points in the figure. In a total of 13 of 34 datasets (38.2%), at least 10% of assessment items were down-weighted to zero (i.e., were statistically different from the robust estimate at the .05 level). Moreover, the proportion of items that were down-weighted to zero was strongly associated with the naive estimate of impact (r = .63). However, it was also the case that many datasets (11 out of 34) had no items that were down-weighted to zero, and several of these were larger studies with relatively precise estimates. Together these results indicate that DIF with respect to treatment was pervasive but not ubiquitous in this sample of studies, and was especially apparent in studies that reported larger treatment effects. § DISCUSSION This paper contributes to a growing body of literature documenting the prevalence and importance of item-level heterogeneous treatment effects in program evaluation research <cit.>.
Previous approaches to studying this phenomenon have focused on random-effects models whose identification constraints require omitting parameters that describe impact on the target construct <cit.>. Consequently, these models do not characterize the extent to which individual item-level treatment effects, or test-level treatment effects that aggregate over items, may be incompatible with impact on the target construct. The present paper addressed this issue by drawing on the psychometric distinction between impact and DIF <cit.>. Taking this approach, it was shown that DIF with respect to treatment status can cause the unweighted mean of assessment items to be inconsistent for the true impact on the target construct. Consequently, DIF is a potential source of systematic bias that may affect reported treatment effects. To address this issue, a Hausman-like specification test was proposed, which compares the naive (unweighted mean) estimate of impact to a more robust alternative. Pragmatically, a statistically significant test result indicates that reported treatment effects may be unduly dependent upon the specific items that make up the outcome measure, and therefore would not be expected to generalize to other measures that do not use those same items. Theoretical and simulation-based results show that the proposed test performs as intended when fewer than 1/2 of assessment items exhibit DIF. The simulations also suggested that, while the sign of the test statistic depends on the proportion of items that exhibit DIF, its magnitude will generally be non-zero when items with DIF are systematically biased in the same direction. Thus, in these situations, the proposed specification can be useful for inferring whether the naive estimates of impact are robust to item-level sources of variation, regardless of how many items exhibit DIF. This is an important advantage over traditional methods of DIF analysis, which can fail badly when a relatively large proportion of items (> 20%) exhibit DIF in the same direction <cit.>. While recent research has improved the performance of DIF detection methods when more items (< 50%) may exhibit DIF <cit.>, the present paper is the first to explicitly consider the implications for inferences about impact. An important shortcoming of the proposed test statistic is that, by itself, its sign is not a reliable indication of whether DIF leads to upwards or downwards bias in reported treatment effects. This point can be illustrated with reference to Figure <ref>. Focusing specifically on the bottom row (Dataset 26) we see that a small minority of item-level effects are positive outliers while the overall effect is slightly negative. This could be a small subset of items with a true non-zero effect (“wash-out”), or a large subset of items with a small negative effect. Unless we are willing to make the additional assumption that fewer than 1/2 of items exhibit DIF, the test statistic itself does not allow us to differentiate these two situations. Rather, the test only tells us whether conclusions about impact would differ depending on whether or not the positive outliers were included when reporting the overall treatment effect. If the reported treatment effect does not depend on this small subset of items, the matter can be concluded. However, if the reported treatment effect is strongly influenced by the positive outliers, it would be important to address this issue when interpreting the results of the study. 
Similar considerations apply in general, not just to this single example. This shortcoming also points to a clear direction for future research: developing open data practices <cit.> and statistical methodology <cit.> to study how item-level effects depend on pre-existing features of the items. Documenting and theorizing DIF that arises due to scientifically interesting item characteristics (e.g., content, difficulty) can present exciting opportunities to refine education theory. For example, research in language literacy has used item-level analysis to evaluate how treatment effects depend on the alignment between the intervention and item content to assess learning transfer <cit.>. Application of similar models in psychopathology shows that the effect of selective serotonin re-uptake inhibitors (SSRIs) on patient-reported depression surveys is mostly driven by effects on mood rather than physical symptoms such as appetite, and these results align with the pathways by which SSRIs affect brain chemistry <cit.>. More generally, it may often be fruitful to pre-register item-specific hypotheses during the design of a study, or to consider them as ad hoc hypotheses that can nonetheless inform education theory. This can be useful for addressing issues related to treatment effect heterogeneity <cit.> as well as broader concerns about the role of measurement in evaluation contexts <cit.>. Our empirical example illustrated that DIF with respect to treatment can also play an important role in addressing ongoing concerns about the role of researcher-developed assessments in education sciences <cit.>. In this sample of studies, the average naive treatment effect was 0.30 SD larger for researcher-developed assessments as compared to independently-developed assessments (SE = 0.10). After controlling for the discrepancy between the naive estimate and the robust estimate, this relationship was substantially reduced to 0.15 SD (SE = 0.09) and no longer significant at the 5% level. In pragmatic terms, this means that the well-documented relationship between reported effect sizes and the type of outcome measure was largely explained by systematic bias induced by DIF. As noted above, this result should be interpreted tentatively due to the small size of the empirical example (34 datasets) and because 5 out of only 11 independently developed outcomes were from a single study. In addition to replicating this finding in a wider range of studies, it is important for future research to investigate how the performance of the test can be improved in realistic settings, for example by using alternative specifications of the underlying psychometric models, model-free approaches based on resampling of items <cit.>, or alternative procedures. In summary, the proposed specification test can provide a clear signal to researchers, meta-analysts, and policy makers about whether reported treatment effects are unduly dependent on the specific items that make up the outcome measure. The test is straightforward to implement as a post-estimation step using standard IRT models that are widely available in general-purpose statistical software. Consequently, it is a relatively stand-alone procedure that can be implemented alongside other checks on internal validity (e.g., balance, attrition bias).
Future research along these lines can help move education sciences beyond inherently subjective heuristics about who developed an assessment, and towards statistical methodology and empirical results that directly evaluate the sensitivity of treatment effects to the specific outcome measures used. § APPENDIX §.§ Derivation of item-level treatment effects Models that can be represented using Equation glm are identified only up to an affine transformation of the latent trait <cit.>. In Equation glm this is addressed by standardizing the latent trait in each group. When the groups may differ in the distribution of the latent trait, an alternative way to identify the model is to (a) standardize the latent trait in only one of the groups, and (b) assume some of the item parameters are equal over groups, which serves to identify the scaling parameters in the second group. In the IRT literature, this two-part scaling procedure is referred to as the “common items non-equivalent groups" (CINEG) procedure for IRT-based scaling <cit.>. To obtain Equation delta using the CINEG procedure, part (a) is achieved by standardizing the latent trait in the treatment group (i.e., μ_1 = 0 and σ_1 = 1). For part (b), we can use the following relation a_i0θ_0 + b_i0 = a^*_i0θ^*_0 + b^*_i0 in which θ_0 = (θ^*_0- μ_0)/ σ_0. This relation is implied by Equation glm and the invariance of the model under affine transformation of the latent trait. Algebraic manipulation leads to - μ_0 / σ_0 = (b^*_i0 - b_i0) / a_i0. Since we have set μ_1 = 0 to identify the model, Equation scale1 can equivalently be written as δ_0 = (μ_1 - μ_0 ) / σ_0 = (b^*_i0 - b_i0) / a_i0. Equation scale2 relates impact on the latent trait to the item parameters. However, the “unscaled" item thresholds in the control group, b^*_i0, are not observable. To address this situation, one may assume that the thresholds are equal across groups (i.e. b^*_i0 = b_i1) for some items. This is equivalent to assuming that those items do not exhibit DIF on their thresholds. The assumption leads to δ_0 = (μ_1 - μ_0 ) / σ_0 = (b_i1 - b_i0) / a_i0 = δ_i(ν), which establishes the relation between Equations delta_0 and delta-i. Note that when an item exhibits DIF on its threshold (i.e. b^*_i0≠ b_i1), Equation scale2 remains valid but second equality in Equation scale3 no longer holds. It is also important to note that both Equation scale2 and scale3 hold regardless of whether the item slopes exhibit DIF. §.§ Computational details of δ_R Robust estimation of the test-level treatment effect has been implemented in the package , which is used for the numerical examples reported in this paper. This software fine-tunes the computation of the robust estimator in a few ways, which are briefly summarized here. First, one may choose the tuning parameters in Equation k using V_0(δ̂_i - δ̂_R) rather than V_0(δ̂_i). This is possible because, under the null hypothesis that no item exhibits DIF, the V_0(δ̂_i - δ̂_R) depends only on the V_0(δ̂_i) <cit.>. Because δ̂_i and δ̂_R are positively correlated, V_0(δ̂_i - δ̂_R) ≤ V_0(δ̂_i), which results in more accurate false positive rates when flagging items with DIF in finite samples. A second point concerns the computation of V_0(δ̂_i). This asymptotic variance is a function of the individual item-level treatment effect, δ_i(ν) <cit.>. This is problematic because it implies that DIF not only biases the point estimates δ̂_i but also their variance. 
However, under the null hypothesis δ_i(ν)= δ_0, one can substitute a consistent estimate of δ_0 for the individual item-level effects when computing the plug-in estimate V_0(δ̂_i). In particular, using a robust estimate such as median(δ̂_i) in place of the δ̂_i leads to a highly robust version of δ_R <cit.>. A third detail concerns precision weighting. Computing the robust estimator using u^*_i = (δ̂_i -δ) / V_0(δ̂_i) in place of u_i = δ̂_i -δ in Equations w through k leads to asymptotic efficiency of the resulting estimator under the null hypothesis that no items exhibit DIF <cit.>. However, the robust estimator is not efficient when one or more items exhibit DIF. Finally, it is important to note that redescending loss functions, including the bi-square function used to compute δ̂_R, can result in local “bad” solutions. In particular, Lemma 1 of Section <ref> shows that, if any items exhibit DIF, local solutions will necessarily arise as n →∞ when using the tuning parameters in Equation k. Thus, when many items exhibit DIF, different starting values (i.e., choices of δ in Equation w) may lead to different solutions when computing δ̂_R. In practice, this can be dealt with by comparing solutions using multiple starting values. If the different starting values converge to different solutions, the minimum of the solutions is taken as the estimated value of δ_R. Because the estimation problem is unidimensional, this procedure is not computationally complicated or expensive. Moreover, because local solutions can arise only due to DIF, their presence can serve as a useful diagnostic procedure. §.§ Proof of Theorem 1 The set-up for the proof is adapted from Huber's () discussion of the finite sample breakdown of M-estimators of location. To simplify notation, x_i, i = 1 … m, will denote a finite collection of “data points,” which correspond to the item-level treatment effects. The “estimator” in question is defined as the global minimum of R(μ) = ∑_i=1^m ρ(x_i - μ), where ρ = ∫ψ(u) du is assumed to exist. Other than the change of notation, the loss function representation in Equation R is equivalent to the weighted mean representation in Equation delta <cit.>. To address the case of redescending M-estimators, it is assumed that ρ satisfies the following assumptions: A1 ρ: ℝ→ [-1, 0] is continuous. A2 ρ(0) = -1 is the unique minimum. A3 There exists a value of k > 0 such that ρ(u) = 0 for |u| > k. These assumptions include the (appropriately rescaled) bi-square function defined in Equation bsq as well as other commonly used redescending M-estimators <cit.>. The proof appeals to Theorem 5.7 of van der Vaart (), which states two conditions that are together sufficient to ensure the consistency of M-estimators. One condition is that the population loss function must have a unique global minimum, say μ_0. The second condition is that the sample loss function must uniformly converge in probability to the population loss function. Together these conditions imply that the minimizing argument of the sample loss function converges in probability to μ_0. The following two lemmas serve to establish these two conditions for δ̂_R. Lemma 1 describes a population loss function that is computed using a finite number of items, m. Lemma 2 shows that, as the sample size n increases without bound, the loss function of δ̂_R uniformly converges in probability to the loss function described in Lemma 1. The proof of Theorem 1 follows as an immediate corollary of the two lemmas.
Lemma 1. Let x_i, i = 1, … m, denote a finite collection of data points and let S denote the set of unique values of x_i. For each s ∈ S let m_s denote the number of data points such that x_i = s and let ϵ be the smallest distance between any two values s_i, s_j ∈ S. If we choose k ≤ϵ / 2, then R(s) = -m_s is a local minimum of R for each s ∈ S, where R is defined by Equation R and assumptions A1-A3. Proof of Lemma 1. For each s ∈ S, A2 implies that ρ(x_i - s) = -1 for x_i = s, while A3 and the choice of k imply that ρ(x_j - s) = 0 for x_j ≠ s. Thus, R(s) = -m_s is the minimum of R (by A2) on the interval [s - ϵ/2, s + ϵ/2] (by A3 and choice of k). □ Lemma 2. Let R(δ; ν̂) denote the sample loss function of the plug-in estimator δ̂_R = δ_R(ν̂) defined via Equations delta, w, bsq, using tuning parameters k_i^* = max{k_i, ϵ/2} with k_i defined in Equation k and ϵ defined in Lemma 1. Then R(δ; ν̂) uniformly converges in probability to a function that satisfies Lemma 1. Proof of Lemma 2. Uniform convergence of R(δ; ν̂) to R(δ; ν) follows from A1 and applying the continuous mapping theorem to the MLEs ν̂ <cit.>. From Equation k, it is apparent that k_i → 0 as n →∞. Thus, k^*_i →ϵ/2 so that the tuning parameter of R(δ; ν) deterministically satisfies the condition stated in Lemma 1. □ An immediate corollary of Lemma 1 is that the population loss function will have a unique global minimum when there is a unique modal (most frequent) value of the item-level treatment effects. This is the condition stated in Theorem 1. As noted in the main text, Lemma 2 utilizes a “modified” tuning parameter that simplifies the proof, but does not affect the computation of δ̂_R in finite samples. More subtle proofs may be available. Together with Theorem 1, the two lemmas imply that δ̂_R satisfies the conditions stated in Theorem 5.7 of van der Vaart (). §.§ Derivation of Equation var-Delta The asymptotic distribution of the plug-in estimators described in the main text can be derived using the Delta method <cit.>. This approach is applicable whenever a continuously differentiable function g: ℝ^K →ℝ is applied to the MLEs, ν̂∈ℝ^K. The relevant result is √(n) (g(ν̂ ) - g(ν) )d→ N(0, ∇ g(ν) ^T V(ν̂ ) ∇ g(ν )) where ∇ g is the K-dimensional gradient vector of g and V(ν̂ ) is the asymptotic covariance matrix of ν̂. The following derivation focusses on g = δ_R(ν̂) defined using Equation delta with the weights given by Equations w through k. Parallel results for δ_P and Δ are seen to follow directly. Start by writing ∂/∂ν_kδ_R(ν) = ∑_i = 1^m∂/∂ν_k w_i(ν) δ_i(ν). Note that δ_i(ν) depends only on the parameters of item i, but the weights w_i(ν) depend on the parameters of all items. Thus, if ν_k ∈ν_i we require the product rule to compute the partial derivatives in Equation partial-1, but if ν_k ∈ν_r for r ≠ i we require only the partial derivatives of the weights. These considerations lead to the following expression, in which the dependence on ν has been made implicit to simplify notation: ∂/∂ν_kδ_R(ν) = w_i∂/∂ν_kδ_i + δ_i∂/∂ν_k w_i + ∑_r ≠ iδ_r∂/∂ν_k w_r. Computing the necessary derivatives for the second term of the sum in Equation partial-2 gives δ_i∂/∂ν_k w_i = δ_i (1 - w_i) /∑_s w̃_s∂/∂ν_kw̃_i where w̃_i denotes the non-normed weights in Equation w. The third term in the sum is ∑_r ≠ iδ_r∂/∂ν_k w_r = - ∑_r ≠ iδ_r w_r/∑_s w̃_s∂/∂ν_kw̃_i .
Adding Equations partial-3 and partial-4 gives δ_i - ∑_r = 1^m δ_r w_r/∑_s w̃_s∂/∂ν_kw̃_i = δ_i - δ_R/∑_s w̃_s∂/∂ν_kw̃_i Equation partial-5 is further simplified by using the result ∂/∂ν_kw̃_i = ψ'(δ_i - δ) - w̃_i/δ_i - δ∂/∂ν_kδ_i. Substituting Equation partial-6 into Equation partial-5 gives δ_i - δ_R/δ_i - δψ'(δ_i - δ) - w̃_i/∑_s w̃_s∂/∂ν_kδ_i. In the above expression, recall that δ_R is a function of the item parameters whose gradient we desire. The quantity δ is a “preliminary value" of the target scaling parameter that is treated as a constant when computing δ_R. In practice, estimation via IRLS continues until the difference between these two values is arbitrarily small, leading to δ = δ_R in Equation partial-7. The simulation studies reported above confirm that analytical expressions for variance of δ_R(ν̂) and Δ(ν̂) obtained using this substitution correspond closely to the values obtained by directly computing their variances over simulated datasets. Using δ = δ_R in Equation partial-7 and substituting the resulting expression for the last two terms in Equation partial-2 yields ∂/∂ν_kδ_R(ν) = w_i∂/∂ν_kδ_i + ψ'(δ_i - δ) - w̃_i/∑_s w̃_s∂/∂ν_kδ_i = ψ'(δ_i - δ)/∑_s w̃_s∂/∂ν_kδ_i. Finally, using the notation v_i ≡ψ'(δ_i - δ)/∑_s w̃_s, leads to the following expression for the asymptotic variance of δ_R(ν̂): V(δ_R(ν̂)) = ∑_i = 1^m [v_i∇δ_i(ν)]^T V(ν̂ ) [v_i∇δ_i(ν)] = ∑_i = 1^m v_i^2 V( δ_i(ν̂ )) For the precision-weighted versions, treating the precision weights as ancillary constants <cit.> leads to good finite sample performance, as illustrated in the simulation studies reported in the main text. For g = δ_P, this leads to using w_Pi in place of v_i in Equation var-delta. For the precision-weighted robust estimator, Equation v becomes v^*_i ≡ψ'(δ_i - δ) / V(δ̂_i)/∑_s w̃_s / V(δ̂_s). For the differences between the precision weighted estimators, the Delta method leads to substituting v_i = (w_Pi- v^*_i) in Equation var-delta, which gives in Equation var-Delta. All that remains is to obtain the partial derivatives ∂δ_i / ∂ν_k, where δ_i was defined in Equation delta-i. The non-zero elements are listed below: ∂/∂ a_i0δ_i = - δ_i/a_i0 ∂/∂ b_i0δ_i = -1/a_i0 ∂/∂ b_i1δ_i = 1/a_i0. The derivation presented in this section applies to a relatively general choice of item-level treatment effects – this simply requires using the relevant partial derivatives in place of Equation partial-delta-i. The derivation uses Equation w but not Equation bsq, so it applies to other robust test-level treatment effects for which the function ψ is continuously differentiable. Interestingly, the derivation presented here leads to a slightly different result than provided by Halpin () and Wang et al. () Those papers also used the Delta method to obtain the variance of IRT scaling functions, but chose the function g to be estimating equation ∑_i ψ(u_i) = 0, which implicitly defines the target parameter δ. By contrast, the present paper instead chose g to be the weighted mean in Equation delta, which explicitly defines the target parameter. Comparing the weights v_i in Equation v with the corresponding expression from Halpin (2024; Equation (32)), reveals the main difference: Using the estimating equations, we must divide by the sum of the ψ'(u_i) / u_i rather than the sum of the w̃_i = ψ(u_i) / u_i. 
This difference is important when ψ is non-monotone (i.e., for redescending functions), in which case ψ' can take on positive and negative values, so the sum can be equal or close to zero, leading the weights to diverge to infinity. By contrast, w̃_i is non-negative under the (usual) assumption that ψ is odd, and by convention, it is set to 1 when u_i = 0. Consequently, the weights v_i in Equation v do not diverge (although they may be negative due to the numerator). Unreported simulation studies conducted by the author show that these two approaches generally lead to similar results, except when the δ̂_i are symmetrically distributed around δ̂, in which case the approach described in this paper is much more accurate.
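To close the Appendix, here is a minimal Python sketch that assembles the Delta statistic and its Wald test from the quantities derived above (the weights v_i of Equation v and the item-level variances). It takes the item-level effect estimates, their estimated sampling variances and a previously computed robust estimate as inputs; the helper names are ours and are not the robustDIF API.

import numpy as np

def bisquare_psi_prime(u, k):
    # derivative of Tukey's bi-square psi; zero outside |u| <= k
    t = (u / k) ** 2
    return np.where(np.abs(u) <= k, (1.0 - t) * (1.0 - 5.0 * t), 0.0)

def delta_test(delta_i, var_delta_i, delta_R, k):
    """Delta statistic and Wald z (Equations test and var-Delta).

    delta_i     : item-level effect estimates (length m)
    var_delta_i : their estimated sampling variances, i.e. V(delta_i-hat)/n
    delta_R     : robust test-level estimate, computed as sketched in the main text
    k           : bi-square tuning parameters used for the robust fit
    """
    delta_i = np.asarray(delta_i, dtype=float)
    m = len(delta_i)
    u = delta_i - delta_R
    w_tilde = np.where(np.abs(u) <= k, (1.0 - (u / k) ** 2) ** 2, 0.0)  # psi(u)/u
    v = bisquare_psi_prime(u, k) / w_tilde.sum()                        # Equation v
    Delta = delta_i.mean() - delta_R                                    # delta_U - delta_R
    se = np.sqrt(np.sum((1.0 / m - v) ** 2 * np.asarray(var_delta_i, dtype=float)))
    return Delta, Delta / se                                            # statistic, Wald z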
http://arxiv.org/abs/2409.02169v1
20240903180001
The structural properties of nearby dwarf galaxies in low density environments -- size, surface brightness and colour gradients
[ "Ilin Lazar", "Sugata Kaviraj", "Aaron E. Watkins", "Garreth Martin", "Brian Bichang'a", "Ryan A. Jackson" ]
astro-ph.GA
[ "astro-ph.GA" ]
§ ABSTRACT We use a complete sample of 211 nearby (z<0.08) dwarf (10^8 M_⊙ < M_⋆ < 10^9.5 M_⊙) galaxies in low-density environments, to study their structural properties: effective radii (R_ e), effective surface brightnesses (⟨μ⟩_ e) and colour gradients. We explore these properties as a function of stellar mass and the three principal dwarf morphological types identified in a companion paper (Lazar et al.) – early-type galaxies (ETGs), late-type galaxies (LTGs) and featureless systems. The median R_ e of LTGs and featureless galaxies are factors of ∼2 and ∼1.2 larger than the ETGs. While the median ⟨μ⟩_ e of the ETGs and LTGs is similar, the featureless class is ∼1 mag arcsec^-2 fainter. Although they have similar median R_ e, the featureless and ETG classes differ significantly in their median ⟨μ⟩_ e, suggesting that their evolution is different and that the featureless galaxies are not a subset of the ETGs. While massive ETGs typically exhibit negative or flat colour gradients, dwarf ETGs generally show positive colour gradients (bluer centres). The growth of ETGs therefore changes from being `outside-in' to `inside-out' as we move from the dwarf to the massive regime. The colour gradients of dwarf and massive LTGs are, however, similar. Around 46 per cent of dwarf ETGs show prominent, visually-identifiable blue cores which extend out to ∼1.5 R_ e. Finally, compared to their non-interacting counterparts, interacting dwarfs are larger, bluer at all radii and exhibit similar median ⟨μ⟩_ e, indicating that interactions typically enhance star formation across the entire galaxy. galaxies: formation – galaxies: evolution – galaxies: dwarf – galaxies: structure § INTRODUCTION Dwarf galaxies (M_⋆ < 10^9.5 M_⊙) dominate the galaxy census in all environments <cit.>, making them important for a complete comprehension of galaxy evolution. The physical characteristics of dwarfs make them particularly useful for probing the physics of galaxy evolution. For example, their shallow potential wells make dwarfs more sensitive probes of key physical processes, such as baryonic feedback, tidal perturbations, and ram pressure than their massive counterparts <cit.>. Similarly, the high dark-matter fractions in these systems <cit.>, make them good laboratories for probing the nature of dark matter <cit.>. In spite of their utility as cosmological probes, much of our current understanding of dwarf galaxies comes from studies in the very nearby Universe. This is because typical dwarfs are not bright enough to be detectable in the shallow surveys, e.g. the SDSS <cit.>, that have dominated the astrophysical landscape over the past few decades <cit.>. The dwarfs that do appear in these surveys tend to have high star formation rates (SFRs) which make them brighter and detectable in shallow surveys <cit.>. However, this also biases them towards objects that tend to be bluer and may skew the morphological mix towards late-type systems <cit.>. Unbiased studies of the dwarf population outside our local neighbourhood requires surveys that are both deep and wide (as is the case here).
A rich literature exists on key structural properties in massive galaxies, such as galaxy size (typically parametrised by the half-light or `effective' radius, R_ e), the effective surface-brightness (⟨μ⟩_ e, defined here as the mean surface brightness within the effective radius) and colour profiles and gradients. In the massive-galaxy regime, these structural parameters tend to change both as a function of stellar mass and morphology. For example, around M_⋆ ∼ 10^10 M_⊙, massive early-type galaxies (ETGs) have R_ e values that are around a factor of 2 smaller and ⟨μ⟩_ e values that are around a factor of 2.5 brighter than late-type galaxies (LTGs) <cit.>. However, at higher stellar masses (M_⋆ ∼ 10^11 M_⊙) massive ETGs and LTGs exhibit similar R_ e and ⟨μ⟩_ e <cit.>, while beyond this value ETGs become larger than their LTG counterparts. The colour gradients in massive ETGs are generally flat or negative i.e. massive ETGs are typically either red throughout or redder in their centres <cit.>, with the bluer outer regions attributed to late-epoch satellite accretion. Massive LTGs, on the other hand, generally have negative colour gradients in the inner regions and flat or positive gradients in the outskirts <cit.>. The progressive reddening in the outskirts is attributed to non-axisymmetric features, such as spirals or bars and/or a star formation cut off due to a surface density threshold <cit.>. While dwarf galaxies that are detected in shallow surveys, but which reside outside the local neighbourhood, are likely to show the biases outlined above, relatively unbiased explorations of dwarfs are possible in the Local Group <cit.> or in the very nearby Universe via surveys such as MATLAS <cit.>, NGVS <cit.> and FDS <cit.> which identify dwarfs either around nearby massive galaxies or in nearby groups and clusters. By construction, such studies offer insights into dwarf galaxy evolution in relatively high-density environments. In such dense environments, past work has found four main morphological classes: dwarf ellipticals, which are systems with central light concentrations and smooth light distributions like those found in the massive-galaxy regime <cit.>, dwarf `spheroidals', which are diffuse low-surface-brightness systems which lack a central light concentration <cit.>, dwarf spirals, which are rotationally-supported systems with spiral structure <cit.> and dwarf irregulars which appear chaotic in their structural appearance and are thought to host significant amounts of gas <cit.>. Several studies <cit.> have compared the structural properties of these types. For example, at a stellar mass of M_⋆ ∼ 10^8 M_⊙, the R_ e values of dwarf ellipticals and irregulars are similar (∼ 1 kpc in the g-band), with the dwarf spheroidals typically larger by around a factor of 2. At such stellar masses the typical ⟨μ⟩_ e of dwarf ellipticals and irregulars (in the g-band) is ∼ 24 mag arcsec^-2, while dwarf spheroidals are a factor of ∼4 fainter. At lower stellar masses, e.g. M_⋆ ∼ 10^7 M_⊙, all dwarf morphological types show a similar large scatter around effective radii and effective surface brightness values of R_ e,g ∼ 0.7 kpc and ⟨μ⟩_ e,g ∼ 25 mag arcsec^-2 respectively. The colour gradients in dwarfs that reside in high-density environments are found to mostly vary from flat or negative at the upper end of the stellar masses spanned by dwarfs (M_⋆ ∼ 10^9.5 M_⊙), to flat or positive at lower stellar masses <cit.>. Dwarf irregulars, in particular, generally have positive gradients <cit.>. 
While the studies described above have shaped our understanding of dwarfs in nearby high-density regions, much less is known about the bulk of the dwarf population that lives in low-density environments. To address this, we have constructed, in a recent paper <cit.>, a mass-complete, unbiased sample of 257 dwarf (10^8 M_⊙ < M_⋆ < 10^9.5 M_⊙) galaxies at z<0.08 in the COSMOS field which, at these redshifts hosts galaxies in groups and the field i.e. low-density environments. Visual inspection of ultra-deep optical images of these dwarfs from the Hyper Suprime-Cam reveals three principal morphological classes in dwarfs that inhabit low-density environments: ETGs, i.e. elliptical and S0 systems, LTGs, which show evidence of disks and `featureless' dwarfs which show neither the central light concentration seen in ETGs nor any spiral structure that typifies LTGs. 43, 45 and 10 per cent of dwarfs correspond to the ETG, LTG and featureless class, while a small fraction (2 per cent) are morphologically irregular. The featureless class is akin to the dwarf spheroidals in high-density regions. However, <cit.> label them as featureless rather than spheroidal because, at least in the field, the featureless systems are not produced by cluster-specific processes that are thought to give rise to the dwarf spheroidals in high-density environments. Indeed, as we show later in this study, the featureless dwarfs deviate strongly from the dwarf ETGs in their structural properties, which suggests that the featureless galaxies are not a subset of the ETG population. This is analogous to the findings of <cit.>, who noted significant structural differences between dwarf spheroidal and dwarf elliptical galaxies in high density environments. It is worth noting that, while the dwarf ETGs and LTGs are akin to the well-established morphologies seen in the massive-galaxy regime <cit.>, the featureless class is essentially missing in the massive-galaxy regime. <cit.> have used this sample to study the frequency of dwarfs in each morphological class and their key properties e.g. colours, local environments, incidence of interactions and the extent to which the visually classified morphologies can be separated using standard morphological parameters. This paper is a companion study, which focuses on exploring the structural properties of the Lazar et al. dwarfs, as a function of their stellar mass and morphology. In particular, we probe typical structural parameters that underpin similar work in the massive-galaxy regime: size (parametrised by R_ e), ⟨μ⟩_ e and colour profiles and gradients. The overall aim of this study, when combined with <cit.>, is to establish a low-redshift benchmark for the morphological and structural properties of dwarf galaxies in low-density environments. This is particularly desirable, given the imminent arrival of data from deep-wide surveys like the Legacy Survey of Space and Time <cit.>, which will enable similar analyses over large areas of the sky. This paper is organised as follows. In Section <ref>, we describe the sample of nearby dwarf galaxies, from <cit.>, that underpins this study. In Sections <ref> and <ref>, we explore the values of R_ e and ⟨μ⟩_ e of our dwarfs, as a function of stellar mass and morphology. In Section <ref>, we study the colour gradients of our dwarfs as a function of stellar mass and morphology. We summarise our findings in Section <ref>. § A SAMPLE OF NEARBY DWARF GALAXIES Our study is based on the dwarf galaxy sample constructed by <cit.>. 
In this section we describe aspects of the creation of this catalogue that are relevant to our study. We direct readers to <cit.> for further details about its construction. The sample is assembled using the Classic version of the COSMOS2020 catalogue <cit.>. This catalogue provides accurate physical parameters (e.g. photometric redshifts, stellar masses and star formation rates) for ∼1.7 million sources in the ∼2 deg^2 COSMOS field <cit.>. The parameter estimation employs deep photometry in 40 broadband filters spanning the UV through to the mid-infrared, from the following instruments: GALEX <cit.>, MegaCam/CFHT <cit.>, ACS/HST <cit.>, Hyper Suprime-Cam <cit.>, Subaru/Suprime-Cam <cit.>, VIRCAM/VISTA <cit.> and IRAC/Spitzer <cit.>. Aperture photometry in the optical and infrared filters is extracted using the SExtractor <cit.> and IRACLEAN <cit.> codes respectively. The physical parameters are then calculated using the LePhare SED-fitting algorithm <cit.>. The wide wavelength baseline results in photometric redshift accuracies better than ∼1 and ∼4 per cent for bright (i<22.5 mag) and faint (25<i<27 mag) galaxies respectively. The dwarf galaxy sample is then constructed by selecting objects that are classified as galaxies by LePhare (`type' = 0 in the COSMOS2020 catalogue), exhibit an extendedness of 1 in the HSC griz filters (i.e. are classified as galaxies by the HSC pipeline), have stellar masses in the range 10^8 M_⊙ < M_⋆ < 10^9.5 M_⊙, redshifts in the range z<0.08 and lie outside masked regions. The final sample contains 257 dwarf galaxies, with median redshift and stellar mass errors of 0.02 and 0.08 dex respectively. As noted in <cit.>, given the depth of the data, this dwarf galaxy sample is mass complete and therefore offers an unbiased statistical sample of galaxies which can be used to study the morphological and structural properties of the nearby dwarf population. §.§ Morphological classification via visual inspection The galaxies in this final sample are then visually classified, as described in <cit.>, using optical gri colour-composite images, and their unsharp-masked counterparts, from the HSC-SSP Ultra-deep layer. This layer has a 5σ point source depth of ∼28 magnitudes, around 5 magnitudes deeper than standard depth SDSS imaging and almost 10 magnitudes deeper than the magnitude limit of the SDSS spectroscopic main galaxy sample. The median HSC seeing is ∼0.6 arcseconds, around a factor of 2 better than the SDSS. The visual inspection is used to classify the dwarfs into three principal morphological classes: early-type galaxies (ETGs) i.e. elliptical and S0 systems, late-type galaxies (LTGs) which show evidence of disks and featureless galaxies which show neither the central light concentrations seen in ETGs or any disk structure that typifies LTGs. The inspection is also used to flag dwarfs that show evidence of interactions e.g. internal asymmetries, tidal features and tidal bridges that connect galaxies in ongoing mergers. We direct readers to Section 3 in <cit.> for more details of the morphological classification. Figure <ref> shows an abridged version of the images presented in <cit.>, with examples of dwarfs in the different morphological classes. Finally, as noted in <cit.>, the COSMOS2020 galaxy population, in our redshift range of interest, resides preferentially in low-density environments (i.e. galaxies in groups and the field). 
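A schematic of these selection cuts in Python is shown below, purely to make the criteria explicit; the column names are invented for illustration and do not correspond to the actual COSMOS2020 column naming.

import numpy as np

def select_dwarfs(cat):
    # 'cat' is assumed to be a table-like object with the (hypothetical) columns below
    is_galaxy  = cat['lephare_type'] == 0                    # classified as a galaxy by LePhare
    extended   = np.all([cat['extendedness_' + b] == 1 for b in 'griz'], axis=0)
    dwarf_mass = (cat['log_mstar'] > 8.0) & (cat['log_mstar'] < 9.5)   # 10^8 - 10^9.5 M_sun
    nearby     = cat['photo_z'] < 0.08
    unmasked   = ~cat['masked']
    return cat[is_galaxy & extended & dwarf_mass & nearby & unmasked]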
§.§ Masking, PSF correction and construction of surface-brightness profiles In each band, we manually mask flux from interloper sources, such as background galaxies and foreground stars. We use the azimuthal flux interpolation method from <cit.> to reconstruct the missing flux in the masked regions (see <cit.> for further details). Figure <ref> presents examples of the original gri colour-composite image (column 1) and the final image produced using this method (column 2), which is then used to calculate structural parameters. The morphology of each galaxy is indicated in the lower left corner of each gri galaxy image (dETG = dwarf ETG, dLTG = dwarf LTG, and dF = dwarf featureless). We construct surface-brightness profiles for our dwarfs using the Python-based galaxy ellipse fitter in the package. This ellipse fitter is based on the IRAF package <cit.>. The profiles are corrected for light smearing due to the PSF in the following way, as described in <cit.>: * We first fit 2D single-component Sérsic profiles <cit.> using the module within the Python package . * We use the resultant Sérsic index, half-light radius and the magnitude obtained from the fitting to produce a 2D model image for each galaxy using <cit.>. * We convolve this image with the Hyper Suprime-Cam PSF calculated by <cit.>. We use g and i band PSFs for the g and i band galaxy images and the g-band PSF for images in other filters (because only g and i-band PSFs are available from Montes et al.). Note that, while the results presented in this study only involve the g and i bands, structural parameters are calculated in other bands (r and z) and are available to the reader upon request. * We calculate surface-brightness profiles from the convolved and unconvolved model images using the method described above and calculate the ratio between the two in each isophotal bin. This ratio is a measure of the effect of the PSF on the surface-brightness profile. * Finally, we correct for light smearing due to the PSF by multiplying the initial surface-brightness profile (obtained from the original galaxy image) in each isophotal bin by the ratio calculated in the previous step. Examples of the final surface-brightness profiles for different dwarf morphologies are presented in column 3 of Figure <ref>. The corresponding (g-i) colour profiles are presented in column 4. We also calculate curves of growth using and apply the same PSF correction procedure to these profiles. Using the corrected curves of growth, we then calculate R_ e (studied in Section <ref>) and ⟨μ⟩_ e (studied in Section <ref>) for each dwarf galaxy. Finally, we note that before the PSF correction procedure is applied, we eliminate, from our final sample, galaxies with a Sérsic index higher than 4 and effective radii smaller than the PSF (i.e. 3 pixels). This is because such objects are either close to being unresolved or exhibit sharply peaked surface brightness profiles which hinder the performance of to produce images of the convolved and unconvolved models. Our final sample of objects comprises 211 dwarf galaxies, out of the original sample of 257 in <cit.>. § SIZE We begin in Figure <ref> by presenting R_ e as a function of stellar mass for our dwarf population. Different morphological classes are shown using different symbols, while galaxies are colour-coded using their rest-frame (g-i) colour. Interacting systems are indicated using crosses. In our redshift and stellar mass ranges of interest, the galaxy population is bimodal around (g-i) ∼ 0.7 <cit.>. 
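The correction in steps (i)-(v) can be summarised by the short Python sketch below. It is a sketch of the logic only, under the assumption that a profile-extraction routine (e.g. an ellipse fitter) is supplied by the user; it is not the authors' code.

import numpy as np
from scipy.signal import fftconvolve

def psf_correct_profile(intens_observed, model_image, psf, extract_profile):
    """PSF-correct an observed intensity profile, following steps (i)-(v) above.

    intens_observed : isophotal intensity in each radial bin of the real galaxy image
    model_image     : noise-free image of the best-fitting 2D Sersic model
    psf             : PSF image for the relevant band (normalised to unit sum here)
    extract_profile : callable returning the intensity profile of an image on the
                      same radial bins (e.g. an ellipse-fitting routine)
    """
    model_conv = fftconvolve(model_image, psf / psf.sum(), mode='same')
    ratio = extract_profile(model_image) / extract_profile(model_conv)   # per-bin PSF effect
    return intens_observed * ratio   # corrected profile; convert to mag arcsec^-2 as usual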
Red and blue galaxies are defined as those with rest-frame (g-i) greater and less than 0.7 respectively. Table <ref> summarises the median values of the effective radius for different galaxy populations. Note that, since we use photometric redshifts in this study, the physical sizes (in kpc) incur large errors when the uncertainties in the photometric redshifts are propagated through the conversion from angular to physical size (as indicated by the large y-axis error-bars in the right-hand panel of Figure <ref>). Therefore, we largely base our analysis in this section on the angular sizes (left-hand panel of this figure) which, in turn, restricts us to studying the relative sizes between different morphological classes (rather than physical sizes). However, we also show the physical sizes for completeness in the right-hand panel of this figure. Figure <ref> and Table <ref> together indicate that ETGs and LTGs are well separated in R_ e, with the median R_ e of ETGs being around a factor of 2 smaller than that of LTGs. Note that the two morphological types show similar values of median stellar mass, so this trend is not driven by the ETGs and LTGs having significantly different stellar mass distributions (which can be important since size, to a certain extent, scales with stellar mass). The featureless dwarfs lie in between the ETGs and LTGs, with a median R_ e that is around a factor of 1.2 larger than their ETG counterparts (in spite of having a median stellar mass that is around 0.5 dex smaller). The median values of R_ e of red (i.e. quenched) and blue (i.e. star-forming) galaxies do not show significant differences within each morphological class. When all morphological classes are considered together, red galaxies show a weak trend of being more compact than blue galaxies. Table <ref> indicates that interacting galaxies show modest differences in median R_ e compared to their non-interacting counterparts. The median difference when all morphologies are considered together is ∼33 per cent, suggesting that interactions act to puff up dwarfs and increase their sizes. The dwarf ETGs show the largest discrepancy between interacting and non-interacting galaxies, with the interacting ETGs being ∼42 per cent larger than their non-interacting counterparts. We proceed by comparing the trends we find in our dwarf population to those that are known in the massive galaxy regime. Studies such as <cit.> and <cit.> have used SDSS data of massive (M_⋆ > 10^9.5 M_⊙) galaxies in the nearby Universe to compare galaxy sizes as a function of morphology (where they distinguish between ETGs and LTGs using a Sérsic index threshold of 2.5). They find that the median R_ e of galaxies with M_⋆ ∼ 10^10 M_⊙ is a factor of 2 higher in massive LTGs than in their ETG counterparts, similar to the ratio seen in our dwarf sample. However, the sizes of massive LTGs and ETGs become similar at M_⋆ ∼ 10^11 M_⊙, beyond which massive ETGs exhibit larger R_ e than the LTGs. It is worth considering our findings in conjunction with the results from these studies. We note first that past studies already indicate that the slope of the effective radius – stellar mass relation in ETGs becomes flatter as we move from the massive to the dwarf regime <cit.>. Combining our Figure <ref> with Figure 4 in <cit.> then suggests that the slope of this relation is indeed much shallower in dwarf ETGs than in their massive counterparts. 
This apparent discontinuity in the slope of the effective radius – stellar mass relation in ETGs supports the notion <cit.> that the principal processes that dominate the evolution of ETGs are different in the dwarf and massive regimes. As noted already by <cit.> and <cit.>, dwarf ETGs are likely to evolve primarily via secular processes like gas accretion, while massive ETGs are influenced more by interactions. We conclude this section by considering the differences between red and blue galaxies in the massive and dwarf regimes. Recent work in the literature <cit.> suggests that in nearby (z<0.1) galaxies with M_⋆ ∼ 10^10 M_⊙ the median R_ e of star forming (blue) galaxies is a factor of 2 higher than in quiescent (red) galaxies. The median values of R_ e of these sub-populations become similar at M_⋆ ∼ 10^11 M_⊙. In the massive-galaxy regime, the relative trends between star-forming/blue and quiescent/red galaxies closely mirror those between massive ETGs and LTGs. This is largely driven by the fact that, in this regime, ETGs and LTGs dominate the red and blue populations respectively. On the other hand, our dwarf sample does not show significant differences in the median effective radius between red and blue galaxies because, as shown in <cit.>, more than 50 per cent of our dwarf ETGs are optically blue, in contrast with the massive regime where optically blue ETGs are rare <cit.>. Finally, we note that, since our sample covers the redshift range z<0.08, in principle, it also spans a range of different physical scales. However, if we restrict our galaxies to a narrower redshift range (0.06<z<0.08) the trends obtained above remain unchanged. This is largely driven by the fact that ∼70 per cent of our galaxies actually reside in this redshift range. § EFFECTIVE SURFACE BRIGHTNESS Figure <ref> presents median ⟨μ⟩_ e vs stellar mass for our dwarf galaxies. As in Figure <ref> above, different morphological classes are shown using different symbols, while galaxies are colour-coded using their rest-frame (g-i) colour. Interacting systems are indicated using crosses. A scaling relation is present, with ⟨μ⟩_ e becoming brighter with stellar mass. Table <ref> summarises the median values of ⟨μ⟩_ e for different galaxy populations. The dwarf ETGs are marginally brighter in this quantity than their LTG counterparts, with the featureless class being around 1 mag arcsec^-2 fainter. Not surprisingly, with the exception of LTGs, red galaxies within each morphological class have fainter median ⟨μ⟩_ e values than their blue counterparts, with the difference being largest in the featureless galaxies. While interacting systems are larger, as described above, they have a similar median ⟨μ⟩_ e as their non-interacting counterparts. This suggests, as already noted in <cit.> (see also the discussion in Section <ref> below), that interactions trigger star formation which boosts the surface brightness and compensates for the increase in size. It is worth noting that the featureless class is similar to ETGs in terms of R_ e but differs from the ETG population in terms of ⟨μ⟩_ e. Note that these results do not change if the analysis is restricted to the mass range where our featureless galaxies mostly reside (10^8 M_⊙ < M_⋆ < 10^8.5 M_⊙). The spatial distributions of baryons within these classes therefore show strong differences which suggests that their formation histories are distinct. 
These findings suggest that the featureless class in low-density environments should not be considered to be a subset of the ETG population. In the massive-galaxy regime, around a stellar mass of M_⋆ ∼ 10^10 M_⊙, ETGs are typically brighter in ⟨μ⟩_ e than LTGs by a factor of 2.5 <cit.>. As the stellar mass approaches M_⋆ ∼ 10^11.5 M_⊙, the median ⟨μ⟩_ e for ETGs and LTGs becomes progressively more similar. The differences between ETGs and LTGs, in terms of their ⟨μ⟩_ e, are therefore more significant in the massive regime (M_⋆ > 10^10 M_⊙) than for dwarfs. We note that, while the results presented in Figures <ref> and <ref> are based on i-band images, the relative trends do not change if other filters (e.g. g, r or z) are used. It is interesting to consider how our galaxies relate to `ultra-diffuse galaxies' (UDGs), a population of faint, diffuse galaxies that exist across a variety of environments <cit.> and have sometimes been considered to be a new class of object. In Figure <ref>, we plot ⟨μ⟩_ e vs R_ e for our dwarfs in the g, r and i bands, with typical UDG criteria indicated using a dashed rectangle <cit.>. Figure <ref> indicates that the featureless galaxies, which resemble the structure of UDGs the most, are found across a large range of surface brightnesses. Furthermore, galaxies in our sample that reside in the UDG region include members of all three morphological classes i.e. these galaxies are not morphologically distinct from the rest of the galaxy population. Taken together, this suggests that, rather than being a novel class of object, galaxies in low-density environments that satisfy the UDG criterion are simply part of the fainter and more diffuse end of the overall galaxy population <cit.>. We note that a similar conclusion has been drawn about UDG-type systems in high-density environments by <cit.>. We caution, however, that some of our galaxies in the UDG region have relatively large errors in their physical sizes and that the lower mass threshold used here (M_⋆ > 10^8 M_⊙) results in very few of our dwarfs satisfying the UDG criterion (the parameter space defined by this criterion is likely to be populated more by galaxies that have M_⋆ < 10^8 M_⊙ and are fainter). Probing galaxies fainter than those in this study is likely needed to consolidate this result. § COLOUR PROFILES AND GRADIENTS In Figure <ref>, we present median (g-i) colour profiles and colour gradients in our dwarf galaxies. Note that, throughout this section, to ensure that our results are not noisy, we only calculate the colour in a given radial bin if it has at least 10 data points. The top row presents these quantities for the full mass range spanned by our sample (10^8 M_⊙ < M_⋆ < 10^9.5 M_⊙), while the middle and bottom rows correspond to the lower and upper halves of our mass range respectively. The left and right-hand columns show the (g-i) colour and its gradient as a function of the radius normalised by R_ e. As shown in <cit.>, the featureless galaxies only appear in the lower half of our stellar mass range, as a result of which this morphological class is missing in the bottom row. In each panel the solid line represents the running median value, while the shaded region indicates the error in the running median. We first consider dwarfs across our full stellar mass range. Dwarf LTGs exhibit a `U' shaped profile with a negative gradient in their central regions out to ∼1.5 R_ e. At this point the (g-i) colour reaches a plateau, which represents the bluest region of the galaxy. 
Beyond ∼2 R_ e the colour progressively reddens and the gradient becomes positive. On the other hand, the dwarf ETGs exhibit positive colour gradients until ∼2.5 R_ e, beyond which the colour gradient becomes negative and the colours in the outskirts of the dwarf ETGs become progressively bluer. It is worth noting that even though their colour gradients are the opposite of each other, dwarf ETGs and LTGs show similar colours in their central regions (i.e. at < 0.5 R_ e). Dwarf featureless galaxies show a shallow positive (i.e. close to flat) gradient throughout their colour profiles. They generally exhibit redder central regions (out to around R ∼ R_ e) than both ETGs and LTGs, suggesting the existence of comparatively older stellar populations. At larger radii (R > R_ e) the featureless galaxies are slightly bluer than ETGs but significantly redder than their LTG counterparts. The middle and bottom rows of Figure <ref> show that there are no qualitative differences between the two mass bins in their colour profiles and gradients. However, we note that both the ETGs and LTGs have redder central regions (by ∼0.1 mag at R<R_ e) in the high mass bin compared to their counterparts in the low mass bin. Dwarf ETGs in the high mass bin also exhibit redder colours at all radii than ETGs in the low mass bin. We proceed by comparing our findings to what is known in the massive-galaxy regime. The majority of massive LTGs have negative colour gradients at R < R_ e which flatten or turn positive at outer radii <cit.>. This behaviour resembles that seen in our dwarf LTGs. The physical processes that cause the reddening of the colour in the outskirts are still a matter of debate but some models postulate that this behaviour could be driven by the outward radial migration of evolved stars via intrinsic secular processes <cit.> combined with a star formation cut-off at a certain surface density threshold in the outer regions of the galaxy <cit.>. Massive ETGs (M_⋆ > 10^9.5 M_⊙) typically have flat or negative colour gradients <cit.>. These gradients and the resultant bluer colours in the outskirts of these systems are thought to be caused by satellite accretion events <cit.>, with the stars from the satellites, which are typically bluer, settling in the outskirts of the massive ETGs. It is worth noting, however, that at the lower mass end of the massive regime (10^9.5 M_⊙ < M_⋆ < 10^10.5 M_⊙), a minority (between 10 and 30 per cent) of ETGs in relatively low-density environments do show positive gradients <cit.>, driven by recent star formation activity in their cores within the last ∼1 Gyr. When considered together with our results, this suggests that the incidence of blue ETGs (driven by blue central regions) does indeed increase with decreasing stellar mass. The differences in the colour gradients between massive and dwarf ETGs suggest that the principal mode of stellar mass growth is likely to be different. The evolution of massive ETGs is likely to be driven by `inside-out' growth, where the core of the galaxy forms earlier in cosmic time and material is then accreted through minor mergers in the outskirts. However, ETGs in the dwarf regime appear to experience `outside-in' growth, which is likely to be driven less by local environment and interactions and more by stellar feedback and secular processes <cit.>. This `outside-in' scenario is thought to proceed via star formation starting in the outskirts of the galaxy (e.g. 
via gas accretion) and propagating inwards causing gas heating and expansion due to supernova events. Since low mass galaxies have shallow potential wells, some of the gas is blown away from the outer regions of the galaxy. The gas that remains cools down and sinks deeper towards the galactic centre giving rise to further star formation events. As a result, the galaxy is left with older stellar populations in the outer regions (since the gas reservoir is depleted due to the supernova-driven galactic wind) and younger stellar populations in the galactic centre. Note that our results are consistent with the conclusions of past studies <cit.> that have suggested that the transition between inside-out and outside-in growth takes place somewhere between M_⋆ ∼ 10^10 M_⊙ and M_⋆ ∼ 10^10.5 M_⊙. It is worth separately exploring the colour profiles of two interesting sub-populations of dwarfs: ETGs which have visually identified blue cores and dwarfs that have been flagged as interacting in <cit.>. As noted in that study, significant blue cores appear to be common in dwarf ETGs, with around 46 per cent of these systems showing cores that are clearly visible by eye (two such examples can be seen in the top row of Figure <ref>). Figure <ref> shows that the blue cored dwarf ETGs are bluer than their non-cored counterparts at all radii. However, beyond R ∼ 1.5 R_ e, the colour gradients become indistinguishable in the cored and non-cored populations (i.e. the colour profiles are roughly parallel to each other). At R < 1.5 R_ e, however, the colour of the blue-cored ETGs diverges significantly from their non-cored counterparts, reaching an offset of -0.35 mag in the very centre. Most of the light from the blue core therefore appears to originate from the region enclosed within ∼ 1.5 R_ e in the blue-cored systems. We conclude our study by exploring dwarfs that have been flagged as interacting in the visual inspection performed in <cit.>. Figure <ref> shows that, regardless of morphology, interacting dwarfs tend to be bluer than their non-interacting counterparts at all radii. This is consistent with the finding in <cit.> that the median integrated colour of interacting dwarfs is typically bluer than their non-interacting counterparts. The bluer colours suggest that the interactions are enhancing the star formation activity in these systems, as has been suggested in the recent literature <cit.>. In particular, the consistent offset between the colour profiles of the interacting and non-interacting dwarfs suggests that star formation may be enhanced across the entire extent of the galaxy (rather than being localised in, say, the central regions). It is worth noting that the blueward colour offset in the interacting systems is largest in the very central regions (R < 0.5 R_ e) which suggests that the star formation enhancement is largest in the galactic centre. These results appear consistent with recent studies <cit.> which show that star formation enhancement via interactions in dwarfs could indeed be spatially extended and triggered by large-scale tidal compressions in the inter-stellar medium, which act across the inner and outer regions of the galaxy. Note that the apparent truncation in the (g-i) colour for interacting dwarf ETGs is artificial and caused by the fact that the number of data points for R > 2 R_ e,i drops below 10 (the threshold we apply for calculating the median colour in a given radial bin). 
§ SUMMARY We have used a complete, unbiased sample of 211 nearby (z<0.08) dwarf (10^8 M_⊙ < M_⋆ < 10^9.5 M_⊙) galaxies to study the structure of the dwarf population in low-density environments. In particular, we have studied galaxy size (parametrised by R_ e), ⟨μ⟩_ e and the (g-i) colour profile and colour gradient, as a function of stellar mass and morphology. Our main conclusions are as follows. * Dwarf ETGs and LTGs are well separated in R_ e, with the median R_ e of LTGs being around a factor of 2 larger than that of the ETGs. The featureless dwarfs lie in between the ETGs and LTGs, with a median R_ e that is around a factor of 1.2 larger than their ETG counterparts. * The median R_ e of red (i.e. quenched) and blue (i.e. star-forming) galaxies do not show significant differences within each dwarf morphological class. When all morphological classes are considered together, there is a weak trend of red galaxies being more compact than blue galaxies. * Dwarf ETGs are marginally brighter in median ⟨μ⟩_ e than their LTG counterparts, with the featureless class being around 1 mag arcsec^-2 fainter. * The colour profiles and gradients of dwarf ETGs differ significantly from their massive counterparts. Dwarf ETGs typically show positive gradients (i.e. bluer central regions), while massive ETGs either have red colours throughout or a red core with blue outer regions (i.e. negative or flat gradients). The divergence in colour profiles suggests that ETGs have different formation channels in the dwarf and massive regimes. Massive ETGs are likely to evolve `inside-out', as a result of minor mergers adding stars to their outskirts. Dwarf ETGs, on the other hand, evolve `outside-in', as a result of stellar-feedback driven galactic winds being more effective at quenching their outskirts while star formation continues in their central regions. * Dwarf LTGs show similar colour profiles to their massive counterparts, exhibiting negative colour gradients in their central regions (R < 1.5 R_ e) and positive gradients in the outer regions (R > 2 R_ e). * Interacting systems are larger but have a similar median ⟨μ⟩_ e as their non-interacting counterparts. This suggests that interactions trigger star formation, which boosts the surface brightness and compensates for the increase in size. Regardless of morphology, the colour profiles of interacting dwarfs are bluer than their non-interacting counterparts at all radii, with the blueward colour offset being largest in the very central regions. This indicates that, in the dwarf regime, the enhancement of star formation due to interactions typically takes place across the entire extent of the galaxy. * The dwarf featureless and ETG classes do not differ significantly in terms of their median R_ e but do differ in their median ⟨μ⟩_ e. The distributions of baryons within these classes therefore show strong differences, suggesting that their formation histories are different. This suggests that, in low-density environments, the featureless class should not be considered to be a subset of the ETG population. We note, however, that our sample of dwarf featureless galaxies is relatively small due to our lower stellar mass limit of 10^8 M_⊙. Larger samples of galaxies extending to lower stellar masses are likely needed to confirm this result. * Dwarfs in our sample that reside in the UDG region in the R_ e vs ⟨μ⟩_ e parameter space include members of all morphological classes and are a continuous extension of the galaxy population towards lower values of ⟨μ⟩_ e. 
This suggests that, rather than being a novel class of object, galaxies in low-density environments that satisfy the UDG criterion are simply part of the fainter and more diffuse end of the overall galaxy population. * The prominent blue cores that are visually identified in around 46 per cent of dwarf ETGs extend out to ∼ 1.5 R_ e. § ACKNOWLEDGEMENTS We warmly thank the anonymous referee for several constructive suggestions which helped us improve the original manuscript. We thank Pierre-Alain Duc, Elizabeth Sola and Liza Sazonova for many interesting discussions. IL and BB acknowledge PhD studentships from the Centre for Astrophysics Research at the University of Hertfordshire. SK and IL acknowledge support from the STFC (grant number ST/Y001257/1). SK and AEW acknowledge support from the STFC (grant number ST/X001318/1). SK also acknowledges a Senior Research Fellowship from Worcester College Oxford. For the purpose of open access, the authors have applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising from this submission. § DATA AVAILABILITY The structural parameters produced in this work are available via common online repositories. They can also be obtained by contacting the authors.
http://arxiv.org/abs/2409.02415v1
20240904034142
Local map Construction Methods with SD map: A Novel Survey
[ "Jiaqi Li", "Pingfan Jia", "Jiaxing Chen", "Jiaxi Liu", "Lei He" ]
cs.CV
[ "cs.CV" ]
Local map Construction Methods with SD map: A Novel Survey Jiaqi Li^*, Pingfan Jia, Jiaxing Chen, Jiaxi Liu, Lei He. Jiaqi Li is with the Department of Civil Engineering, Tsinghua University, Beijing, 100084, China (e-mail: lijq22@mails.tsinghua.edu.cn). Pingfan Jia is with the School of Computer Science, Beihang University, Beijing, China (e-mail: pingfan@buaa.edu.cn). Jiaxing Chen is with the School of Vehicle and Mobility, Tsinghua University, Beijing, 100084, China (e-mail: chenjx23@mails.tsinghua.edu.cn). Jiaxi Liu is with the Civil and Environmental Engineering Department, University of Wisconsin-Madison, WI, 53715, USA (e-mail: jliu2487@wisc.edu). Lei He is the corresponding author, with the School of Vehicle and Mobility, Tsinghua University, Beijing, 100084, China (e-mail: helei2023@mail.tsinghua.edu.cn). ================================================================================ § ABSTRACT In recent years, significant academic advancements have been made in the field of autonomous vehicles, with local maps emerging as a crucial component of autonomous driving technology. Local maps not only provide intricate details of road networks but also serve as fundamental inputs for critical tasks such as vehicle localization, navigation, and decision-making. Given the characteristics of SD maps (Standard Definition Maps), which include low cost, ease of acquisition, and high versatility, perception methods that integrate SD maps as prior information have demonstrated significant potential in the field of local map perception. The purpose of this paper is to provide researchers with a comprehensive overview and summary of the latest advancements in the integration of SD maps as prior information for local map perception methods. This review begins by introducing the task definition and general pipeline of local map perception methods that incorporate SD maps as prior information, along with relevant public datasets. It then focuses on the representation and encoding of multi-source information, as well as the methods for fusing multi-source information. In response to this burgeoning trend, the article surveys the diverse research efforts in this field. Finally, the article addresses pertinent issues and future challenges, with the aim of guiding researchers in understanding the current trends and methodologies prevalent in the field. § INTRODUCTION Local map perception is a critical and challenging task in the field of intelligent driving. It involves detailed understanding and real-time modeling of the environment surrounding the vehicle, serving as the foundation for decision-making and navigation in autonomous systems. 
Local maps not only provide information about roads and lanes but also encompass the detection and recognition of obstacles, traffic signs, pedestrians, and other dynamic or static objects. This information is essential for ensuring the safe operation of the vehicle and efficient path planning. Without precise local map perception, autonomous vehicles may deviate from their routes, cause traffic accidents, and potentially endanger passenger safety. Thus, Local map perception plays an indispensable role in the autonomous driving ecosystem. Unlike typical object detection, Local map perception must handle complex and dynamic environmental information while maintaining high accuracy under various lighting and weather conditions. For example, shadows on the road, light reflections, dynamic obstacles, and occlusions of road signs can all interfere with Local map perception. Additionally, sensor noise and data delays further complicate the perception task. Therefore, developing robust Local map perception technologies is crucial for achieving safe and reliable autonomous driving. To address these problems, researchers have proposed a variety of methods. Chen and Lei <cit.> proposed a method for visual localization and map construction using ground texture, enhancing the accuracy of positioning and the precision of map updates through global and local optimization. <cit.> enhanced online map prediction and lane topology understanding by leveraging SD maps and integrating SD map information through a Transformer encoder, which alleviates problems with obscured lane lines or poor visibility and significantly improves the performance of lane detection and topology prediction. <cit.> proposed an innovative video lane detection algorithm that enhances the feature map of the current frame using an occlusion-aware memory-based refinement (OMR) module, leveraging obstacle masks and memory information to improve detection accuracy and robustness under occlusion. RVLD<cit.> improved the reliability of lane detection by recursively propagating the state of the current frame to the next frame, utilizing the information from previous frames. Other methods, such as LaneAF<cit.>, LaneATT<cit.> and StreamMapNet<cit.>, also ease these issues. In previous autonomous driving research, high-definition maps (HD maps) have been vital. HD maps, characterized by absolute and relative accuracies within 1 meter, provide electronic maps with high precision, freshness, and richness, including extensive road and environmental information. These maps offer precise navigation and positioning services to support safe and efficient autonomous driving. However, HD maps face significant challenges, primarily in terms of real-time updates and cost control. Urban road environments change frequently, and any minor alteration can impact the driving safety of autonomous vehicles. Traditional HD map production methods require substantial time and resources, making real-time updates difficult; both <cit.> and <cit.> have pointed out similar issues. Moreover, the production and maintenance costs of HD maps are extremely high, with costs reaching thousands of dollars per kilometer using traditional methods. In this context, the "rely heavily on perception, reduce dependence on maps" approach has gained widespread recognition within the industry. This approach emphasizes the use of onboard sensors for autonomous driving perception tasks, supplemented by lightweight map information. 
This strategy reduces reliance on real-time map updates, lowering maintenance costs, while lightweight map information can effectively compensate for certain limitations of onboard sensors, enhancing the model's robustness. SD maps, which are widely used in traffic navigation and geographic information services, feature low production and maintenance costs, easy accessibility, and small data size, making them suitable as lightweight maps that assist onboard sensors in constructing local maps for autonomous driving. Despite the promising prospects of, and numerous challenges in, constructing local maps based on SD maps, there is a lack of comprehensive research reviews in this area. To address this gap, this review aims to provide a thorough overview of the latest advancements in local map construction methods utilizing SD maps. Specifically, the focus is on the application of SD map information representation methods and multimodal data fusion techniques in local map perception tasks. This research delves into the major developments, challenges, and research directions in this field. The contributions to the existing body of knowledge are as follows. First, the existing literature on local map construction using SD maps as a prior is reviewed, and the strengths and limitations of these approaches are analyzed, providing insights into their effectiveness and applicability in real-time autonomous driving applications. Second, the representation and encoding methods of information from various sensors, as well as the fusion techniques for multi-source sensor data, which are crucial for real-time local map generation, are highlighted; the underlying principles, architecture, and performance of these methods are discussed, shedding light on their feasibility and practicality. Third, key challenges and open research questions in local map construction using SD maps as a prior are identified. § BACKGROUND In this section, the definition of local map construction with SD maps is clarified, and the general pipeline for this type of task is summarized. The composition and application scenarios of SD maps are introduced. Finally, commonly used public datasets and evaluation metrics in local map perception tasks are listed. §.§ Task Definition of Local map Construction with SD Map The task of Local map perception involves creating an accurate map representing the vehicle's surrounding environment to support autonomous driving decision-making and planning. This task typically relies on data from various sensors, including cameras, LiDAR, radar, and GPS. Additionally, incorporating prior information from SD maps enhances the model's robustness and mitigates the impact of uncertainties from onboard sensors, improving the overall model performance. The core of the Local map perception task is real-time sensing and understanding of the vehicle's surroundings. The general process of neural networks used for Local map construction can be summarized into several key components, as illustrated in Fig. <ref>. After inputting surround view images and LiDAR point clouds, the overall architecture of the Local map construction network can be depicted as consisting of different parts: a backbone for image feature extraction, a PV2BEV (Perspective View to Bird's Eye View) module for perspective transformation, a module for multimodal feature fusion, and task-specific heads for lane detection. These components form the basic framework of the Local map perception network; a simplified skeleton of this pipeline is sketched below. 
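The following PyTorch sketch makes this pipeline concrete. It is a hypothetical, heavily simplified illustration: the placeholder view transform simply averages the camera features and rescales them to the BEV grid, whereas real systems use ResNet+FPN backbones, LSS- or BEVFormer-style view transformers and transformer decoders. All class names and shapes below are illustrative assumptions, not components of any specific published method.

```python
import torch
import torch.nn as nn

class ToyPV2BEV(nn.Module):
    """Placeholder view transform: collapses per-camera features onto a BEV grid.
    Real methods use depth lifting (LSS) or BEV-query cross-attention (BEVFormer)."""
    def __init__(self, in_ch, bev_ch, bev_size=(100, 100)):
        super().__init__()
        self.bev_size = bev_size
        self.proj = nn.Conv2d(in_ch, bev_ch, 1)

    def forward(self, cam_feats):            # (B, N_cams, C, h, w)
        fused = cam_feats.mean(dim=1)        # naive average over cameras
        fused = self.proj(fused)
        return nn.functional.interpolate(fused, self.bev_size,
                                         mode="bilinear", align_corners=False)

class LocalMapNet(nn.Module):
    """Backbone -> PV2BEV -> multimodal fusion -> task head."""
    def __init__(self, bev_ch=64, sd_ch=16, num_classes=3):
        super().__init__()
        self.backbone = nn.Sequential(       # stand-in for a ResNet/FPN backbone
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.pv2bev = ToyPV2BEV(64, bev_ch)
        self.fusion = nn.Conv2d(bev_ch + sd_ch, bev_ch, 3, padding=1)
        self.head = nn.Conv2d(bev_ch, num_classes, 1)   # e.g. BEV lane segmentation

    def forward(self, images, sd_map_bev):
        B, N, C, H, W = images.shape
        feats = self.backbone(images.flatten(0, 1))
        feats = feats.view(B, N, *feats.shape[1:])
        bev = self.pv2bev(feats)                             # camera BEV features
        bev = self.fusion(torch.cat([bev, sd_map_bev], 1))   # inject rasterized SD prior
        return self.head(bev)

# toy usage: 6 surround-view cameras plus a 16-channel rasterized SD map prior
net = LocalMapNet()
out = net(torch.randn(2, 6, 3, 128, 224), torch.randn(2, 16, 100, 100))
print(out.shape)   # torch.Size([2, 3, 100, 100])
```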
The images and point cloud data captured by the surround view cameras and LiDAR are first processed by a backbone to obtain (multi-scale) image features. These features are then transformed to the BEV perspective using the PV2BEV module, followed by fusion with SD map data through a modality fusion module, and finally output through different task-specific heads. §.§ Standard Definition Map SD map, short for Standard Definition Map, is a digital map technology providing basic geographic information and road network structures. It is widely used in everyday navigation and geographic information services, offering convenience to users. SD map primarily provides the centerline skeleton of roads without detailed lane information, road signs, or other high-precision environmental features. For the task of Local map construction, SD map offers three main advantages. First, SD map data is easily accessible. It can typically be obtained for free from open geographic data sources such as OpenStreetMap <cit.>, making it suitable for large-scale applications. Second, compared to HDMap, the production and maintenance costs of SD Map are significantly lower. Lastly, SD map has high universality, covering most types of roads, and can provide relevant road information for local map construction tasks. Platforms like OSM and Baidu Maps can serve as data sources for SD map. For instance, OpenStreetMap (OSM) is a collaborative project created and maintained by global volunteers, providing free, editable, and open-content maps. OSM data includes a wide range of geographic information such as roads, buildings, parks, and rivers, which users can freely access, edit, and use. §.§ Datasets In the field of BEV Local map construction, commonly used datasets include KITTI<cit.>, nuScenes<cit.>, ApolloScape<cit.>, Argoverse<cit.>, Openlane<cit.>, and Waymo<cit.> Open Dataset. The KITTI<cit.> dataset, created by the Karlsruhe Institute of Technology and the Toyota Technological Institute, provides stereo camera, LiDAR, and GPS/IMU data, covering urban, rural, and highway scenes, and is suitable for tasks such as object detection, tracking, and road detection. nuScenes<cit.>, released by Motional, includes data from six cameras, five radars, one LiDAR, IMU, and GPS, suitable for urban traffic scenarios under various weather and lighting conditions. ApolloScape<cit.>, released by Baidu, offers high-precision 3D annotated data covering various urban road scenes, suitable for tasks like lane detection and semantic segmentation. Argoverse<cit.>, released by Argo AI, includes stereo camera, LiDAR, GPS, and IMU data, providing detailed 3D annotations and lane markings, primarily used for 3D object detection and lane detection. The Waymo<cit.> Open Dataset, released by Waymo, covers a variety of weather and traffic conditions, providing high-quality data from LiDAR and cameras, suitable for tasks such as 3D object detection, tracking, and lane detection. OpenLane-V2<cit.>, also known as OpenLane-Huawei or Road Genome, is a benchmark dataset for next-generation autonomous driving scene road structure perception, jointly open-sourced by Shanghai Artificial Intelligence Laboratory and Huawei Noah's Ark Laboratory. It is the first dataset to include the topological relationships of road structures in traffic scenes. ONCE-3DLanes<cit.> dataset, a real-world autonomous driving dataset with lane layout annotation in 3D space, is a new benchmark constructed to stimulate the development of monocular 3D lane detection methods. 
It is collected in various geographical locations in China, including highways, bridges, tunnels, suburbs and downtown areas, with different weather conditions (sunny/rainy) and lighting conditions (day/night). The whole dataset contains 211K images with their corresponding 3D lane annotations in camera coordinates. CurveLanes<cit.> is a new benchmark lane detection dataset with 150K lane images for difficult scenarios such as curves and multi-lanes in traffic lane detection. It is collected in real urban and highway scenarios in multiple cities in China. All images are carefully selected so that most of them contain at least one curve lane. More difficult scenarios such as S-curves, Y-lanes, night and multi-lanes can be found in this dataset. §.§ Common Evaluation Metrics §.§.§ Metrics for Lane Extraction Mean Average Precision (mAP) is a common metric used to evaluate the performance of object detection models. mAP measures the precision of a model at various threshold levels by matching predicted bounding boxes with ground truth boxes to calculate true positives (TP), false positives (FP), and false negatives (FN). Initially, predicted boxes are matched with ground truth boxes based on a specified IoU (Intersection over Union) threshold. Then, precision (TP / (TP + FP)) and recall (TP / (TP + FN)) are calculated for each class and used to plot the Precision-Recall curve. The area under this curve is calculated using interpolation methods to obtain the Average Precision (AP) for a single class. Finally, the mean of the AP values across all classes gives the mAP, reflecting the overall detection performance of the model, with higher values indicating better performance. mAP = 1/N∑_i=1^NAP_i Mean Intersection over Union (mIoU) is a commonly used metric to evaluate the performance of semantic segmentation models. mIoU measures the classification accuracy of the model at the pixel level for various objects. The calculation involves several steps. For each class, the IoU is computed by dividing the number of intersecting pixels (Intersection) between the predicted and ground truth areas by the union of these areas (Union). This calculation is performed for each class, and the mean IoU across all classes gives the mIoU, providing an average performance evaluation of the model's segmentation accuracy, with higher values indicating better segmentation performance. mIoU = 1/C∑_c=1^CTP_c/(TP_c + FP_c + FN_c) Traditional object detection metrics like mAP may not fully capture all important aspects of the detection task, such as the estimation of object speed and attributes, and the accuracy of position, size, and orientation. Therefore, the nuScenes<cit.> Detection Score (NDS) has been proposed to comprehensively account for these factors. NDS integrates multiple key metrics to overcome the limitations of existing metrics and provide a more holistic performance evaluation. The NDS calculation formula is as follows: NDS = 1/10[ 5 ·mAP + ∑_mTP∈TP( 1 - min(1, mTP) ) ] In this formula, mAP represents the mean Average Precision, measuring detection accuracy, and the TP set contains the average values of five True Positive metrics: ATE (Average Translation Error), ASE (Average Scale Error), AOE (Average Orientation Error), AVE (Average Velocity Error), and AAE (Average Attribute Error); each error is converted into a score through 1 - min(1, mTP). A short sketch of how these detection metrics can be computed is given below. 
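To make the formulas above concrete, the following is a small sketch of how mIoU and NDS can be computed from pre-accumulated quantities. It assumes per-class pixel counts (for mIoU) and already class-averaged true-positive errors (for NDS) are available; it is illustrative only and is not the official nuScenes devkit implementation.

```python
import numpy as np

def mean_iou(tp, fp, fn):
    """mIoU from per-class true-positive / false-positive / false-negative pixel counts."""
    tp, fp, fn = map(np.asarray, (tp, fp, fn))
    iou = tp / np.maximum(tp + fp + fn, 1)   # avoid division by zero for empty classes
    return iou.mean()

def nds(mean_ap, tp_errors):
    """nuScenes Detection Score: NDS = (1/10) * [5*mAP + sum(1 - min(1, err))],
    where tp_errors = (mATE, mASE, mAOE, mAVE, mAAE), already averaged over classes."""
    assert len(tp_errors) == 5
    tp_scores = [1.0 - min(1.0, e) for e in tp_errors]
    return (5.0 * mean_ap + sum(tp_scores)) / 10.0

# toy usage with made-up numbers
print(mean_iou(tp=[90, 40], fp=[10, 5], fn=[5, 10]))                 # ~0.79
print(nds(mean_ap=0.45, tp_errors=[0.35, 0.26, 0.40, 0.30, 0.18]))   # ~0.58
```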
§.§.§ Metrics for Topology Reasoning OpenLane-V2<cit.> breaks down the task into three subtasks: 3D lane detection, traffic element recognition, and topology reasoning. The overall task performance is described using the OpenLane-V2 Score (OLS), which is the average of the metrics for each subtask. The metric for 3D lane detection, DET_l, can be expressed as the mean AP over the matching thresholds t∈ T, T={1.0,2.0,3.0} (see the equation below), where AP is calculated using the Fréchet distance. Traffic element detection is evaluated similarly to object detection using AP, with an IoU threshold set to 0.75. Traffic elements have various attributes, such as traffic light colors, which are closely related to lane accessibility, so attributes must also be considered. Assuming A is the set of all attributes, the evaluation averages the AP over all attributes, so that attribute classification accuracy is included in DET_t. OLS = 1/4[ DET_l + DET_t + f(TOP_ll) + f(TOP_lt) ] DET_l = 1/|T|∑_t ∈ TAP_t DET_t = 1/|A|∑_a ∈ AAP_a OpenLane-V2 uses the TOP score to evaluate the quality of topology reasoning, akin to the mAP metric but adapted for graphs. Essentially, this converts the topology prediction problem into a link prediction problem and calculates mAP (the mean of the APs of all vertices) to assess algorithm performance. The first step is to determine a matching method to pair ground truth and predicted vertices (i.e., centerlines and traffic elements). For centerlines, the Fréchet distance is used; for traffic elements, IoU is used. When the confidence score of an edge between two vertices exceeds 0.5, they are considered connected. The vertex AP is obtained by ranking all predicted edges of a vertex and calculating the mean of the cumulative precision: TOP = mAP = 1/|V|∑_v ∈ V∑_n̂∈N̂(v) P(n̂)1(n̂∈ N(v) )/|N(v)| § MULTIMODAL REPRESENTATION §.§ Image Data In BEV perception, the surround-view camera images are the most important input data, and the common feature extraction pipeline for these images follows the paradigm of autonomous driving perception frameworks such as BEVFormer<cit.> or LSS<cit.>. The backbone module extracts 2D image features from the various camera views through classic and lightweight convolutional networks such as ResNet-50/101<cit.>, MobileNets<cit.>, EfficientNet<cit.> and V2-99<cit.>. Among them, the ResNet series is widely used because the ResNet architecture addresses the vanishing gradient problem in deep neural networks by introducing residual blocks, and its variants enhance feature extraction capability by increasing network depth and width. These networks are extensively utilized in BEV Local map perception tasks due to their outstanding performance in image recognition and feature extraction. Typically, a Feature Pyramid Network (FPN)<cit.> module is appended to the backbone. The FPN integrates feature maps of different scales, generating more robust multi-scale feature representations; this is the de facto default configuration, and the number of fusion levels can be selected according to the network type. This multi-scale feature fusion aids in improving the detection and recognition of objects of varying sizes, thereby enhancing overall performance (a minimal backbone-plus-FPN sketch is given below). 
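As an illustration of this default configuration, the snippet below builds a ResNet-50 backbone with an FPN using torchvision (assuming a recent torchvision release, 0.13 or later) and extracts multi-scale features from a batch of surround-view images. It is a generic sketch of the backbone-plus-FPN stage, not the exact configuration of any particular method discussed in this survey.

```python
import torch
from torchvision.models.detection.backbone_utils import resnet_fpn_backbone

# ResNet-50 + FPN: the common "backbone + neck" configuration for PV feature extraction.
backbone = resnet_fpn_backbone(backbone_name="resnet50", weights=None, trainable_layers=3)

# 2 samples x 6 surround-view cameras, flattened into the batch dimension.
images = torch.randn(2 * 6, 3, 256, 448)
feats = backbone(images)   # OrderedDict of multi-scale feature maps ('0'..'3', 'pool')

for name, f in feats.items():
    print(name, tuple(f.shape))   # all levels share 256 channels (the FPN width)
```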
In addition to these lightweight and simple backbones, larger backbone networks are becoming the mainstream trend. With the success of Transformers in the field of computer vision<cit.>, Transformer-based feature extraction methods such as the Swin Transformer<cit.> have also been applied to BEV Local map perception tasks. On the nuScenes leaderboard, state-of-the-art methods typically use a pre-trained ViT-L backbone, or its variant EVA-02<cit.>. Such large pre-trained backbones are key to improving model performance, although their large parameter counts and computational complexity can seriously affect inference speed. Nevertheless, their representational power directly improves detection accuracy. Training these large models requires massive amounts of data, but data labeling is costly and limited, so self-supervised training methods are likely to become mainstream. In natural language processing, self-supervised pre-training with BERT<cit.> has demonstrated a powerful ability to learn language representations that transfer to many downstream tasks. Similarly, in computer vision, MAE<cit.> randomly masks image patches and learns from the masked images in a self-supervised manner, and masked image modeling (MIM)<cit.> based pre-training algorithms are flourishing. Such self-supervised pre-training not only alleviates the high cost of labels but also learns better image representations. Whether based on CNNs or Transformers, the ultimate goal is to obtain high-quality feature representations of the surround-view images. For BEV Local map perception tasks, feature representation is crucial as it directly affects the accuracy and robustness of the perception system. The global feature extraction mechanism of the FPN module or the Transformer can significantly improve the overall performance of the network, making it more effective for perception and decision-making in complex driving environments. §.§ Lidar Points Data In the Local map perception task of BEV, in addition to using a purely visual surround-view camera input, multimodal methods fuse information such as LiDAR point clouds and camera data to perform depth-aware BEV transformation. Compared to vision-only methods, multimodal (RGB+LiDAR) fusion methods achieve excellent accuracy at the cost of additional computational complexity. The processing of LiDAR point cloud data is a crucial step in multimodal perception tasks. The feature extraction of LiDAR point cloud data in P-MapNet<cit.> first requires voxelization of the point cloud, followed by a multi-layer perceptron (MLP) to extract local features of each point. Max pooling then selects the largest feature value from the local features to form a global feature representation, enhancing the model's global perception of the point cloud data<cit.> (a minimal PointNet-style sketch of this encoding is given at the end of this subsection). Given the LiDAR point cloud P and surround-view images I, the predicted map is M = F_2(F_1(P,I)), where F_1 represents the feature extractor, which maps the multimodal inputs to BEV features, and F_2 represents the decoder, which outputs the detection results. The method in MapLite 2.0<cit.> further integrates LiDAR point cloud data with data from other sensors and combines it with coarse road maps obtained from SD maps (such as OpenStreetMap), using the coarse road map information to refine the geometric shape and topological structure of the road. This not only improves the accuracy of the map, but also enhances the understanding of complex road environments <cit.>. It is also used to generate high-definition maps online by projecting LiDAR intensity data in the bird's-eye view. By integrating multimodal data, the method not only provides detailed spatial information but also enables precise semantic segmentation of the driving environment. 
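The voxelize-then-MLP-then-max-pool encoding described above can be sketched as a simple PointNet-style pillar encoder. The snippet below is a hypothetical, simplified illustration (a fixed number of points per pillar, no augmentation of the point features with offsets or intensity), not the actual P-MapNet implementation.

```python
import torch
import torch.nn as nn

class PillarEncoder(nn.Module):
    """PointNet-style encoder: a shared MLP over the points in each voxel/pillar,
    max-pooled to a single feature per pillar and scattered onto a BEV grid."""
    def __init__(self, in_dim=3, feat_dim=64, bev_hw=(200, 100)):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU(),
                                 nn.Linear(feat_dim, feat_dim), nn.ReLU())
        self.bev_hw = bev_hw

    def forward(self, points, pillar_index):
        # points:       (P, N, in_dim)  -- P non-empty pillars, N points each (zero-padded)
        # pillar_index: (P, 2)          -- (row, col) of each pillar on the BEV grid
        feats = self.mlp(points)              # (P, N, feat_dim) per-point local features
        pooled = feats.max(dim=1).values      # (P, feat_dim)    max over points in pillar
        H, W = self.bev_hw
        bev = points.new_zeros(pooled.shape[-1], H, W)
        bev[:, pillar_index[:, 0], pillar_index[:, 1]] = pooled.t()  # scatter to grid
        return bev                            # (feat_dim, H, W) LiDAR BEV feature map

# toy usage: 500 non-empty pillars with 32 points each
enc = PillarEncoder()
pts = torch.randn(500, 32, 3)
idx = torch.stack([torch.randint(0, 200, (500,)), torch.randint(0, 100, (500,))], dim=1)
print(enc(pts, idx).shape)   # torch.Size([64, 200, 100])
```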
§.§ SD Map Data In the context of enhancing Local map perception tasks, incorporating SD map information as prior knowledge can significantly improve the performance of vision and lidar sensors, particularly in long-distance and occlusion scenarios <cit.>. To integrate SD maps effectively into network structures while preserving their unique road information, various representations have been explored. SD maps can generally be categorized into two forms: raster and vector. An example of an SD map is illustrated in Fig. <ref>. This figure demonstrates how different forms of SD map representations can be utilized to supplement the local map construction process, thereby enhancing the overall performance of perception systems. With an SD map prior, the formulation becomes M = F_2(F_1(P,L,S)), where the feature extractor now takes multiple modalities as input: S is the SD map prior, which comes in the form of road center-line skeletons, F_1 represents the feature extractor, which maps the multimodal inputs to BEV features, and F_2 represents the decoder, which outputs the detection results. §.§.§ Representation of Raster MapLite 2.0 <cit.> was the first to introduce SD maps into Local map perception tasks. PriorLane <cit.> models the map as a binary image, where 1 represents the drivable area and 0 represents the non-drivable area. Similarly, MapVision<cit.> also uses one-hot encoding, then concatenates the position encoding information and extracts the SD map features through an encoder; the SD map is aligned with the ego data through the KEA module proposed in that article, and then fused with the sensor data to obtain a mixed representation. Both P-MapNet<cit.> and MapLite 2.0 use rasterization to represent the SD map, but they differ: in P-MapNet, a CNN is used to extract information from the rasterized SD map, which then serves as a source of additional information (i.e. keys and values) for BEV feature refinement, whereas MapLite 2.0 takes the SD map as the initial estimate of the HD map, converts it into the BEV perspective and combines it with the images input from the sensors. It is trained through a Convolutional Neural Network to predict semantic labels; finally, these semantic segmentation results are transformed into distance transforms for specific labels, and a structured estimator is used to maintain the Local map estimate and integrate the SD map prior. §.§.§ Representation of Vectors SMERF<cit.> was the first to propose a Transformer-based encoder model for road topology inference. MapEX<cit.> and SMERF have similar representations of map elements, introducing a polyline sequence representation and a Transformer encoder to obtain the final map representation of the scene. Specifically, the roads in the SD map are first abstracted in the form of polylines. For each polyline, N data points are obtained through uniform sampling. Then, after sine-cosine encoding, an N × d dimensional description of the line is obtained. Consider a vertical line with small curvature, which is characterized by very similar x or y axis values for all points: directly inputting the coordinates of these points into the model may result in insufficient differentiation of this curvature, so using a sinusoidal embedding makes this difference more apparent and improves the model's ability to distinguish such features. In practice, the coordinates of each line are normalized to the range (0, 2π) relative to the BEV range before the coordinates are embedded. These encoded data then go through several Transformer encoder layers to obtain the map feature representation (a minimal sketch of this polyline encoding is given below). 
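The polyline encoding described above can be sketched as follows. This is a hypothetical, simplified illustration of the general recipe (uniform resampling, sinusoidal embedding of normalized coordinates, then a Transformer encoder); the hyperparameters (N = 11 points, d = 32, two encoder layers) are arbitrary choices, not those of SMERF or MapEX.

```python
import math
import torch
import torch.nn as nn

def resample_polyline(pts, n=11):
    """Uniformly resample a polyline (K, 2) to n points by arc length."""
    seg = (pts[1:] - pts[:-1]).norm(dim=1)
    cum = torch.cat([torch.zeros(1), seg.cumsum(0)])
    targets = torch.linspace(0, float(cum[-1]), n)
    idx = torch.searchsorted(cum, targets).clamp(1, len(pts) - 1)
    t = (targets - cum[idx - 1]) / (cum[idx] - cum[idx - 1] + 1e-9)
    return pts[idx - 1] + t.unsqueeze(1) * (pts[idx] - pts[idx - 1])

def sine_embed(coords, d=32):
    """Sinusoidal embedding of coordinates already normalized to [0, 2*pi)."""
    freqs = 2 ** torch.arange(d // 4).float()            # d/4 frequencies per axis
    angles = coords.unsqueeze(-1) * freqs                # (..., 2, d/4)
    emb = torch.cat([angles.sin(), angles.cos()], dim=-1)
    return emb.flatten(-2)                               # (..., d)

# toy scene: M polylines in BEV metric coordinates, clipped to a 60 m x 30 m range
polylines = [torch.tensor([[0., -15.], [1., 0.], [2., 15.]]),
             torch.tensor([[-5., -15.], [-5., 15.]])]
pts = torch.stack([resample_polyline(p) for p in polylines])        # (M, N, 2)
norm = (pts - pts.new_tensor([-30., -15.])) / pts.new_tensor([60., 30.]) * 2 * math.pi
tokens = sine_embed(norm).flatten(1)                                # one token per line

encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=tokens.shape[-1], nhead=4, batch_first=True),
    num_layers=2)
map_feats = encoder(tokens.unsqueeze(0))   # (1, M, N*d) SD-map feature tokens
print(map_feats.shape)
```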
§.§.§ Encoding of Other Information In addition to encoding the polyline coordinates of the SD map, SMERF uses one-hot encoding to encode the road type into a vector of dimension K (the number of road types). For the M map elements within the perception range, an M × (N · d + K) encoding is obtained, which is transformed through several layers to obtain the map feature representation. Ablation experiments show that adding road type information improves the effectiveness of lane detection and road topology inference. § MULTIMODAL FUSION METHOD The image-only approach, exemplified by MapTR <cit.> and based on an encoder-decoder architecture, has established a classic paradigm for local map construction, paving the way for subsequent approaches. StreamMapNet <cit.> further enhances this by incorporating comprehensive temporal information, significantly improving performance in occluded areas. 3D-LaneNet <cit.> adopts an end-to-end learning framework, integrating tasks such as image encoding, spatial transformation between image views and top-down views, and 3D curve extraction into a single network. Gen-LaneNet <cit.> proposes a two-stage framework that decouples the learning of the image segmentation sub-network and the geometric encoding sub-network. Additionally, several monocular 3D lane detection methods, such as <cit.>, <cit.>, and <cit.>, focus solely on visual images as input. Numerous models, including <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, and <cit.>, also rely solely on visual images. On the other hand, HDMapNet <cit.>, a representative multimodal method, integrates LiDAR point clouds by encoding these features and predicting vectorized map elements in the bird's-eye view, achieving effective fusion of multi-sensor data. Furthermore, other models, such as <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, and <cit.>, incorporate LiDAR point cloud data as additional input. Fig. <ref> illustrates the development trends in local map construction over recent years. Considering the cost of constructing high-precision maps, MapLite 2.0 <cit.> was the first to introduce SD maps into local map perception tasks. MapEX <cit.> addresses situations where existing map information is incomplete or inaccurate by converting existing map elements into non-learnable queries and combining them with learnable queries for training and prediction. SMERF <cit.> and P-MapNet <cit.> combine the feature representation of SD maps with camera input features using a multi-head cross-attention mechanism, enabling more effective lane topology inference. To achieve effective fusion of visual BEV features and SD map semantic information, BLOS-BEV <cit.> has explored various feature fusion methods. Additionally, methods such as PriorLane <cit.>, FlexMap <cit.>, Bayesian <cit.>, TopoLogic <cit.>, LGMap <cit.>, MapVision <cit.>, RoadPainter <cit.>, and EORN <cit.> integrate SD map priors to support local map construction, a trend that is gradually gaining traction. A view transformation is required before fusion: the focus here is on converting feature information extracted from the 2D camera images (commonly referred to as the perspective view, PV) into BEV features. 
Local map perception tasks typically treat the ground as a plane and build the map in the Bird's Eye View: on the one hand, BEV facilitates the fusion of information from multiple sensors, and on the other, existing advanced BEV object detection work provides a good foundation. There are both geometric and network-based methods for converting the perspective from PV to BEV. Geometric methods can be divided into those based on homographic transformations and those based on depth estimation. Network-based methods can be divided into MLP-based and Transformer-based methods. The conversion from PV to BEV based on Transformers can usually be achieved directly with an existing BEV perception model. MapTR<cit.> in Fig. <ref> proposes an optimized GKT module based on the View Transformer module in BEVFormer. §.§ Align Due to the inherent errors in GPS signals and the influence of vehicle movement, both vectorized and rasterized SD map priors inevitably have spatial misalignment with the current BEV space, making it difficult to fully align the two. Therefore, before fusion, it is necessary to spatially align the SD map prior with the current BEV operating space. FlexMap<cit.> uses the SLAM trajectory and the corrected RTK trajectory to calculate the offset and achieve spatial alignment. To solve this problem, PriorLane<cit.> introduces a KEA (Knowledge Embedding Alignment) module to embed the SD map prior knowledge and align it with the image features in space. Specifically, a feature extraction network is first used to extract feature points from the image and from the SD map prior knowledge; these feature points are then spatially matched using an attention-based alignment algorithm, and the aligned feature points are further processed through a fusion transformer network. This enhances the accuracy and robustness of the Local map perception algorithm. Similarly, P-MapNet<cit.> first downsamples the rasterized SD map prior, and then introduces a multi-head cross-attention module that allows the network to use cross-attention to determine the most suitable alignment position, effectively enhancing the BEV features with the SD map prior. As shown in Fig. <ref>, P-MapNet's ablation experiments show that even in the case of weak alignment with the BEV space, directly concatenating the SD map prior still improves the performance of the model; on this basis, adding CNN modules and multi-head cross-attention modules further improves performance. This demonstrates the important role of SD map prior information in Local map perception tasks: even without strict alignment, simply adding rasterized SD map priors can improve model performance. A sketch of how an SD map prior can be rendered into the ego-centred BEV grid, given a (possibly noisy) ego pose, is shown below. 
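As a concrete illustration of bringing an SD-map prior into the current BEV operating space, the snippet below transforms road centre-line points from a global frame into the ego frame using the ego pose and rasterizes them onto the BEV grid. It is a hypothetical sketch of the general procedure only; a noisy ego pose simply shifts or rotates the resulting raster, which is exactly why the alignment strategies above are still needed downstream.

```python
import numpy as np

def sd_map_to_bev_raster(polylines_world, ego_xy, ego_yaw,
                         bev_range=(-30., 30., -15., 15.), resolution=0.15):
    """Rasterize SD-map centre-lines (world frame) into an ego-centred BEV occupancy grid."""
    x_min, x_max, y_min, y_max = bev_range
    H = int((y_max - y_min) / resolution)
    W = int((x_max - x_min) / resolution)
    grid = np.zeros((H, W), dtype=np.uint8)

    c, s = np.cos(-ego_yaw), np.sin(-ego_yaw)
    R = np.array([[c, -s], [s, c]])                    # world -> ego rotation

    for line in polylines_world:                       # each line: (K, 2) array
        # densify the segments so the raster is connected, then move into the ego frame
        dense = np.concatenate([np.linspace(line[i], line[i + 1], 20)
                                for i in range(len(line) - 1)])
        ego_pts = (dense - ego_xy) @ R.T
        cols = ((ego_pts[:, 0] - x_min) / resolution).astype(int)
        rows = ((ego_pts[:, 1] - y_min) / resolution).astype(int)
        ok = (rows >= 0) & (rows < H) & (cols >= 0) & (cols < W)
        grid[rows[ok], cols[ok]] = 1
    return grid

# toy usage: one straight road passing near the ego vehicle
road = [np.array([[100., 195.], [140., 205.]])]
bev = sd_map_to_bev_raster(road, ego_xy=np.array([120., 200.]), ego_yaw=0.1)
print(bev.shape, bev.sum())   # (200, 400) grid with the road rendered as occupied cells
```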
In this scheme, the SD map features are encoded into key and value vectors, which are combined with the BEV queries through cross-attention to obtain the final BEV features that fuse camera and SD map information. Beyond attention-based fusion, BLOS-BEV, shown in Fig. <ref>, explores different fusion schemes that combine visual BEV features and SD map semantics to achieve the best representation and performance; three SD map fusion techniques are compared: addition, concatenation, and cross-attention. Although all fusion methods outperform the variant that does not use SD maps, cross-attention fusion of SD maps performs best on the nuScenes and Argoverse datasets, demonstrating excellent generalization and outstanding performance over long distances (150-200 m). P-MapNet additionally incorporates point cloud information: the LiDAR point cloud is voxelized and processed by an MLP to obtain a feature representation for each point, resulting in a LiDAR BEV. The image BEV and LiDAR BEV are fused to obtain combined BEV features, and further convolutional downsampling of the fused BEV features alleviates the misalignment between the image BEV and LiDAR BEV features. Then, through a cross-attention mechanism, the SD map features interact with the fused BEV features, resulting in final BEV features that fuse the camera, the LiDAR point cloud, and the SD map. Similarly, MapVision and MapEX, shown in Fig. <ref> and Fig. <ref>, use the SD map features as keys and values and the feature map formed from multi-perspective images as queries to perform cross-attention. To address issues such as occlusion and limited sensing range, which can result in inaccuracies, RoadPainter presents a novel SD map interaction module that augments BEV features by incorporating beyond-visual-range information, as shown in Fig. <ref>. EORN, shown in Fig. <ref>, rasterizes the SD map to generate an SD map in BEV and uses an SD encoder based on ResNet-18 to extract SD map features. The SD map features are then interpolated and concatenated with the image BEV features along the channel dimension, and the fusion uses a simple two-layer convolutional neural network, the ConvFuser, which fuses the concatenated features and outputs the fused BEV features. Another variant involves a graph-based encoder that fuses SD map graphs with BEV features and combines these with the outputs of the centerline deformable decoder using a multi-head attention mechanism. The subsequent decoder can then compute and output the results for different tasks by querying the BEV features, which contain rich information. § CONCLUSION AND DISCUSSIONS This section discusses the challenges of local map construction with SD maps, points out future directions, and then draws conclusions. §.§ Challenges and Future Prospects Enhancements in SD map Encoding and Processing Methods. Proper encoding and processing methods are crucial for leveraging SD map prior information in local map perception tasks. Current studies employ relatively simple encoding and processing methods for SD map information, whether using raster or vector representations. Future research could explore more efficient encoding and feature extraction methods. Improvements in Aligning SD map Prior Information with BEV Space. Due to the accuracy limitations of GPS sensors, it is challenging to perfectly align SD map prior information with the current BEV operational space. This spatial misalignment can affect the model's detection accuracy to some extent.
Enhancing spatial alignment methods can further improve model performance. Future research could consider incorporating temporal information to enhance the alignment accuracy between SD map prior information and BEV space. Inference of Road Topological Relationships. The topological relationships in local maps can be divided into two branches: the topological relationships between roads (primarily representing road connectivity) and the topological relationships between roads and traffic signs (including traffic control signals and other directional signs). Enhancing scene understanding of the road environment is crucial for high-level autonomous driving tasks. The OpenLane-v2 dataset is the first public dataset providing topological relationships between roads and between roads and traffic signs. Current research focusing on this area is still limited. Future work could model the topological structures of road networks and the scene understanding tasks of traffic signs using graph neural network models. Incorporating More SD map Prior Information. Existing research has demonstrated that incorporating more road type information can enhance model performance. However, beyond the basic road network positions and road types, SD maps can provide richer prior information. For example, OpenStreetMap offers additional information such as the number of lanes, lane directions, and road topological relationships. Future research could attempt to integrate this diverse information as SD map priors to further enhance the robustness and accuracy of local map perception models. §.§ Conclusions In this article, the literature on local map construction using SD maps was reviewed, highlighting the pivotal role of SD maps in this task. The definition and core aspects of local map construction with SD maps were presented, demonstrating their significance in developing accurate and reliable maps. Commonly used public datasets and their corresponding evaluation metrics were enumerated. The main processes of leading technological approaches were summarized, focusing on the representation and encoding methods for data from various sensors, such as LiDAR, cameras, and radar. Advanced fusion techniques for integrating multi-source sensor data were explored, along with their respective strengths and limitations. The evaluation prospects and design trends of local map construction models were discussed. This included addressing emerging challenges, such as improving SD map alignment with BEV perspectives and enhancing encoding and processing methods. The potential of incorporating detailed SD map prior information to model road topological relationships was considered, with the goal of improving scene understanding and supporting higher-level autonomous driving tasks.
Jiaqi Li received the B.S. degree in Surveying and Mapping Engineering from Qinghai University, Xining, China, in 2022. He is currently pursuing an M.S. degree at the Department of Civil Engineering, Tsinghua University, Beijing, China. Li's research interests include computer vision and online HD mapping.
Pingfan Jia received his M.S. degree from the School of Computer Science at Beihang University in 2022. He is currently working at the Multimodal Perception and Computing Research Lab, where he is engaged in cutting-edge computer vision research. His research interests are focused on autonomous driving perception and 3D semantic scene completion.
Jiaxing Chen received an M.S. degree in Electrical and Computer Engineering from the University of Illinois, Chicago, USA, in 2021 and worked as an algorithm engineer at the National Innovation Center of Intelligent and Connected Vehicles, Beijing, China, from 2021 to 2022. He is currently pursuing a Ph.D. degree at the School of Vehicle and Mobility, Tsinghua University, Beijing, China. Chen's research interests include computer vision and perception based on the Vehicle-Road-Cloud architecture.
Jiaxi Liu received a B.E. and an M.E. degree in Mechanical Engineering at the School of Vehicle and Mobility, Tsinghua University. He is now pursuing a Ph.D. degree in the Civil and Environmental Engineering Department at the University of Wisconsin-Madison. His research interests include collaborative perception, real-time perception, vehicle-road-cloud integration systems, and LLM-assisted autonomous driving.
Lei He received his B.S. from Beijing University of Aeronautics and Astronautics, China, in 2013, and his Ph.D. from the National Laboratory of Pattern Recognition, Chinese Academy of Sciences, in 2018. From then until 2021, Dr. He served as a postdoctoral fellow in the Department of Automation, Tsinghua University, Beijing, China. He worked as the research leader of the autonomous driving algorithm teams at Baidu and NIO from 2018 to 2023. He is a Research Scientist in automotive engineering with Tsinghua University. His research interests include perception, SLAM, planning, and control.
http://arxiv.org/abs/2409.02098v1
20240903175440
CRAFT Your Dataset: Task-Specific Synthetic Dataset Generation Through Corpus Retrieval and Augmentation
[ "Ingo Ziegler", "Abdullatif Köksal", "Desmond Elliott", "Hinrich Schütze" ]
cs.CL
[ "cs.CL", "cs.AI", "cs.LG" ]
§ ABSTRACT Building high-quality datasets for specialized tasks is a time-consuming and resource-intensive process that often requires specialized domain knowledge. We propose Corpus Retrieval and Augmentation for Fine-Tuning (CRAFT), a method for generating synthetic datasets, given a small number of user-written few-shots that demonstrate the task to be performed. Given the few-shot examples, we use large-scale public web-crawled corpora and similarity-based document retrieval to find other relevant human-written documents. Lastly, instruction-tuned large language models (LLMs) augment the retrieved documents into custom-formatted task samples, which then can be used for fine-tuning. We demonstrate that CRAFT can efficiently generate large-scale task-specific training datasets for four diverse tasks: biology question-answering (QA), medicine QA and commonsense QA as well as summarization. Our experiments show that CRAFT-based models outperform or achieve comparable performance to general LLMs for QA tasks, while CRAFT-based summarization models outperform models trained on human-curated data by 46 preference points. § INTRODUCTION Large language models (LLMs) demonstrate strong generalization capabilities across diverse tasks <cit.>, but optimizing these models for specific tasks remains a considerable challenge. Although zero-shot and few-shot prompting methods provide some degree of adaptability <cit.>, task-specific fine-tuning generally delivers better performance, particularly for specialized and out-of-domain tasks <cit.>. A key challenge for effective fine-tuning is obtaining high-quality task-specific examples at large scale. Traditionally, creating high-quality datasets for specific tasks involves a time-consuming and resource-intensive process, often requiring extensive manual curation and annotation (e.g. <cit.>). This challenge is particularly acute for low-resource domains or novel tasks where existing datasets may be limited or non-existent. On the other hand, "raw" (i.e., unannotated, free-text) web-crawled corpora are known for their diversity and potential utility for various tasks <cit.>. Prior work has used raw data by targeted crawling of recipe websites <cit.> or word-specific filtering of crawling metadata to gather examples from pre-training corpora for sentiment analysis and summarization tasks via ratings <cit.> and bullet point summaries found in news articles <cit.>. These approaches either rely on a predefined task definition based on keywords, or on the targeted crawling of websites which are expected to contain the desired content. This reliance hinders the generalization of these methods to tasks where such prior knowledge is unavailable, difficult to define, or highly context-dependent. In this work, we propose Corpus Retrieval and Augmentation for Fine-Tuning (CRAFT) to curate task-specific samples from raw data for a wide variety of tasks. CRAFT only requires a small set of few-shot examples from a user to initiate the process of crawling and structuring task examples. CRAFT first detects relevant corpus examples from large-scale unannotated corpora using similarity-based retrieval. Then it uses LLMs to structure these examples into a proper task format, effectively transforming free-text documents into custom-formatted task samples for fine-tuning.
We demonstrate the effectiveness of CRAFT on four diverse tasks: three QA tasks – in biology, medicine and commonsense – as well as a text summarization generative task. Our results show that models fine-tuned on CRAFT-generated datasets achieve performance that is either better than or comparable to instruction-tuned LLMs. This holds across diverse tasks, LLMs, and dataset sizes, highlighting the effectiveness of our approach. We publicly release the code to craft datasets for other tasks as well as all datasets and checkpoints at https://github.com/ziegler-ingo/CRAFTgithub.com/ziegler-ingo/CRAFT. § RELATED WORK §.§ Optimizing LLMs for Specific Tasks Prompting: Prompts are added to the input to provide additional context that guides the computation and output of a model <cit.>. A prompt usually takes the form of a natural language instruction <cit.>. Prompting is commonly used with instruction-tuned models to define tasks and extract responses from language models, using natural language, without gradient updates. Zero-Shot Inference: Originally discovered in the vision domain, zero-shot inference <cit.> is a technique that allows models to generalize their learned knowledge from pre-training to previously unseen classes, tasks, or sample instance variants at inference time without gradient updates. Pre-training LLMs on large corpora produces semantic representations that are generally applicable to multiple downstream tasks. GPT-2 <cit.> demonstrated that the acquired capabilities can then be activated by prompting a new task in natural language. However, zero-shot inference often falls short of the performance achieved by few-shot learning <cit.>. Few-Shot Learning: In few-shot learning, the model is provided with a small number of task-specific examples at inference time. The few-shot examples are given to the model in the prompt, in a technique known as in-context learning <cit.>. While full fine-tuning generally requires a substantial amount of labeled data, few-shot learning offers an inexpensive alternative to adapt a model to a new task with a limited number of examples <cit.>. Nonetheless, few-shot learning faces several challenges, including inaccurate assessment of the underlying data distribution <cit.>, biases related to small sample sizes <cit.>, and sensitivity to shot length <cit.>, shot quality and noise <cit.>. Full Fine-Tuning: During full fine-tuning, all model parameters are updated on a large dataset with the goal of adapting the model to a domain, task or dataset <cit.>. This approach usually provides the best performance by learning task-specific patterns and relationships that may not be captured by pre-training and zero- or few-shot learning alone. However, it requires a dataset of appropriate size. Instruction Tuning: Instruction tuning <cit.> is a type of full fine-tuning that optimizes a model to produce more relevant answers to questions or instructions <cit.>. This approach enables language models to better understand and follow user intents rather than simply continuing the input text. Instruction-tuned models regularly produce answers that are preferred by humans for tasks ranging from question-answering to summarization <cit.>. The main challenge is to obtain a large high-quality dataset that is both task-specific and in the desired instruction-output format. Low-Rank Adaptation: Full fine-tuning may be too expensive for LLMs but the difference between pre-trained weights and their fine-tuned counterparts often has low rank <cit.>. 
Low-Rank Adaptation <cit.> approximates these low-rank matrices during fine-tuning, and is efficient because it freezes the full model and only learns the low-rank matrices, which typically results in learning the equivalent of 2% of the model's parameters. §.§ Synthetic Data Generation Synthetic data refers to artificially generated data that mimics the characteristics of real-world data <cit.>. It can be generated using statistical <cit.> or deep neural approaches <cit.> with the aim of replicating the patterns, distributions, and structures found in real-world datasets. Fully Synthetic Data Generation: A dataset is fully synthetic if the question or instruction, the possible context, as well as the answers are generated synthetically. For instance, Self-Instruct <cit.>, Unnatural Instructions <cit.>, Alpaca <cit.>, and Evol-Instruct <cit.> are examples of fully synthetic general-purpose data generated by LLMs. More focused approaches for task-specific fine-tuning data generation have also been proposed, especially based around the rephrasing of already existing task-specific datasets <cit.>. Methods that use general-purpose corpora have recently been proposed for generating pre-training data <cit.>. When used for fine-tuning data generation, these methods are either based around complex and resource-intensive multi-agent workflows <cit.> or are restricted to a small set of tasks, as the generation process relies on a model that has been fine-tuned for those tasks <cit.>. The two greatest drawbacks of current approaches to fully synthetic data generation are repetition and low quality. Unnatural Instructions reported that a majority of their samples have a BERTScore <cit.> of above 45% when compared to other samples in the generated dataset. Self-Instruct faces similar issues, with generated instructions often having ROUGE-L scores <cit.> greater than 0.4 compared to the provided seed instructions. Both approaches also only contain about 54%-56.5% correct samples, while the correctness rate in Alpaca is as low as 17% <cit.>. This suggests that a large portion of the samples in these datasets may not be useful for fine-tuning models. Partially Synthetic Data Generation: In partially synthetic data generation, a portion of the input, context, or output is generated synthetically, while the remaining portion is human-curated. It is distinct from approaches that combine fully synthetic and purely human-curated samples at the dataset level, such as Phi <cit.>. One recent approach is reverse instruction generation <cit.>, where a language model, provided with a human-curated output in context, generates the instruction that would have prompted this output. This produces more coherent and correct input-output pairs because the LLM does not need to generate the longer and more complex component of the data sample. There are also approaches where, conversely, the output is synthetically generated from human-curated input samples. Such methods employ distillation techniques to extract patterns from larger, more capable models to teach those patterns and skills to smaller models <cit.>. Partially synthetic data generation alleviates some of the quality and diversity concerns of fully synthetic data generation. However, taking a raw corpus document as the output can result in generating noisy or unnecessary information <cit.>, but data augmentation can mitigate these problems when generating pre-training data <cit.>. 
However, when data augmentation was used to generate fine-tuning data, it required GPT-4 <cit.> to build an intermediate synthetic dataset to fine-tune a sample creator model <cit.>. This approach can result in a sample creator model that is a distilled version of the larger model's knowledge and data, while also limiting the model's task flexibility, depending on the synthesized training data. In contrast, CRAFT produces fully synthetic data but leverages the quality and diversity advantages of human-written documents from partially synthetic data generation approaches while removing noise through augmentation. Our approach does not require intermediate datasets, nor a separately fine-tuned model, nor knowledge distillation from a larger model; instead, it relies only on a small number of human-curated examples, retrieval, and in-context learning. § THE CRAFT APPROACH §.§ Architecture Overview CRAFT is used to fine-tune language models by generating task-specific synthetic datasets, given a few human-curated examples of the task. During CRAFT (see Figure <ref>), we retrieve human-written, free-text documents from a large collection of corpora by calculating their similarity to the provided few-shots and transforming them into the task-specific format through augmentation. The only human effort required is in writing a small number of high-quality examples of the target task. CRAFT has two phases: In the initial phase, an embedding database is created from large corpora. While this phase can be resource-intensive, its cost is incurred only once for all subsequent tasks, and it can be easily expanded with new corpora. In the second phase, the user-generated, task-specific few-shot examples are embedded, enabling the retrieval of relevant documents by calculating similarity measures between few-shots and corpus documents. Once relevant documents are retrieved, an instruction-tuned LLM is used to augment the retrieved free-text documents into a task-specific design, generating synthetic task samples in the layout that is needed for instruction-tuning (illustrated in Figure <ref>). Finally, the synthetic dataset is used to fine-tune a task-specific language model. We report implementation details for the whole CRAFT framework in Appendix <ref>. §.§ Few-Shot Examples A small number of human-curated few-shots serve as the “definition” of the task, i.e., they indicate how the task is to be performed. The few-shot samples consist of three elements: (i) a long text that mirrors in language, content, and accuracy what a high-quality corpus sample from the web should look like, (ii) a natural language instruction for the task to be performed, which can take the form of a direct instruction or a question about the text, and (iii) an output that satisfies the instruction or answers the question the way that the final model should later respond. Length statistics for texts, instructions, and outputs of our few-shots can be found in the XS row of Appendix <ref>. We note that the task does not need to be explicitly specified. For example, there is no need to state the task as “biology question-answering”; it is sufficient for the human-curated few shots to focus on QA in the domain of biology. If multiple-choice questions or single-letter outputs are in the few-shots, this will result in a corresponding dataset and fine-tuned model behavior. These examples show that CRAFT is highly customizable: Few-shot examples enable users to tailor the model’s behavior to specific formats, use cases, or domains. 
Users can create few-shots with unique terminology, style preferences, or domain-specific constraints, optimizing the retrieval and the final model’s performance for particular tasks. §.§ Corpora and Embedding Database The embedding database is a key element of CRAFT as it provides, for all corpora, embeddings of human-written documents that should be retrievable for task-specific augmentation. It is, therefore, important that the embedding database encompasses a wide variety of linguistically and semantically diverse documents. This diversity can be achieved by including corpora that exhibit different writing styles, tones, and vocabularies. Task-specific, task-agnostic, public, and also private documents can provide a comprehensive coverage of relevant information. The more varied the documents in the embedding database, the better the coverage will be for diverse or rare tasks. Notably, CRAFT can also handle sensitive company data, as the encoding, storage, and retrieval can be performed on-site. §.§ Document Retrieval Our retrieval system is task agnostic, both in terms of domain and complexity, in contrast to previous approaches <cit.>. The CRAFT approach relies on human-curated few-shot examples as query documents and can dynamically retrieve any document of the base corpora. As the few-shot samples include a text containing the domain, the instruction or question, as well as the output, the resulting embedding representation of the sample contains contextualized <cit.> semantic information about both the domain and the nature of task to be performed. Relevant text documents that contain similar latent features as the few-shots are retrieved from the corpora by calculating similarity scores based on the embedded few-shots and corpus samples. As corpus size increases, the risk of retrieving redundant or similar corpus samples also increases. This is partly due to the growing volume of documents, but also because the diversity of documents within the corpora may plateau, resulting in a higher proportion of similar documents. Designing few-shots that are sufficiently diverse in topic may alleviate this issue. For example, when creating few-shots for biology question-answering, various subtopics of biology, such as genetics, anatomy, or physiology, should be covered to broaden the range of retrieved documents. §.§ Task Sample Synthesis The retrieved documents naturally contain noise <cit.> and lack the formatting required for fine-tuning. Therefore, it is necessary to convert these free-text documents into appropriate task samples by removing noise and undesired sections. To address this, we utilize instruction-tuning prompt templates <cit.> to augment the free-text documents into task-specific training data while simultaneously eliminating noise. A few-shot task template consists of three elements: (i) one or more few-shots, (ii) a corpus sample, (iii) and a brief instruction for the model to generate instruction-output pairs from the content of the corpus sample. Aside from the brief instruction, it is easy to assemble these templates from material we already have. The template only structures all information from the instruction, the few-shots, and the retrieved corpus samples to generate one continuous string that serves as input for the model generating the synthetic task samples. In this setup, the contents of the few-shots serve as in-context examples for the completion of the instruction. 
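To illustrate how such a continuous prompt string might be assembled, the sketch below concatenates a brief augmentation instruction, three randomly sampled few-shots, and one retrieved corpus document. This is a schematic reconstruction, not the authors' exact template; the field names, the wording of the instruction, and the placeholder few-shot are assumptions.

```python
import random

few_shots = [  # human-curated examples, each with a text, an instruction, and an output
    {"text": "Photosynthesis converts light energy into chemical energy ...",
     "instruction": "What does photosynthesis convert light energy into?",
     "output": "Chemical energy stored in glucose."},
]

AUGMENT_INSTRUCTION = (
    "Read the document below and write one instruction-output pair in the same "
    "style as the examples. Return the result as a JSON object with the keys "
    '"instruction" and "output".'
)

def build_prompt(corpus_doc: str, shots, n_shots: int = 3) -> str:
    """Assemble one continuous prompt string for the sample-generator LLM."""
    parts = [AUGMENT_INSTRUCTION]
    for shot in random.sample(shots, k=min(n_shots, len(shots))):
        parts.append(
            f"Document:\n{shot['text']}\n"
            f"Instruction: {shot['instruction']}\n"
            f"Output: {shot['output']}"
        )
    parts.append(f"Document:\n{corpus_doc}\nInstruction:")
    return "\n\n".join(parts)

print(build_prompt("A retrieved web document about cell biology ...", few_shots))
```

The assembled string is what the instruction-tuned generator model receives as input; the few-shots act purely as in-context examples, as described above.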
Figure <ref>, step 3, shows an example of how these templates guide the model in augmenting the corpus samples into synthetic task samples. Any instruction-tuned language model can be used for this purpose. This augmentation step not only rephrases the text but also condenses the retrieved document down to the essential information required for the task. The result of this step produces the final synthetic instruction-output pairs that can be used to fine-tune a language model. Figure <ref>, step 4, shows an actual example output from the generated pool of synthetic training samples, and Appendix <ref> provides an overview of length statistics from the stages of corpus retrieval up to the synthesized input-output pairs. § EXPERIMENTAL SETUP §.§ Tasks We generate datasets for all tasks in sizes 100, 500, 5000, and 25,000. We refer to the few-shots as a dataset of size XS, and to the sizes ranging from 100 to 25,000 as S, M, L, and XL, respectively. Implementation details related to fine-tuning can be found in Appendix <ref>. Multiple-Choice QA: We generate three synthetic QA datasets for biology (BioQA), medicine (MedQA), and commonsense (CSQA). The datasets all follow the MMLU multiple-choice format <cit.>, where each question is accompanied by a number of answer options. Exactly one of these options is correct, and the task is to identify the correct answer. The expected output is the corresponding letter label of the correct answer. Generative: In addition, we develop two synthetic datasets for the generative tasks of summarization and recipe generation (RecipeGen). The goal of summarization is to convey accurate and concise information; recipe generation is additionally focused on creating coherent and structured text that adheres to specific formatting and stylistic conventions <cit.>. To build a synthetic summarization dataset, we first select a corpus sample and instruct the model to extract an extensive but unsummarized section of text. In the second step, the extracted section is transformed into a summary format, optionally incorporating elements from the raw text, such as abstracts, conclusions, or TLDRs. This approach avoids using the original corpus samples as to-be-summarized text documents, which can be lengthy and overly broad and may result in uninformative summaries. §.§ Evaluation §.§.§ Metrics QA Tasks: All multiple-choice QA tasks are evaluated using accuracy. We follow the evaluation approach of MMLU and assess the logarithmic probabilities for the vocabulary options corresponding to the letter labels of the answer choices <cit.>. Accordingly, greedy decoding without temperature scaling is performed. Depending on the number of answer choices, this evaluation can range from options A and B to options A through E. Generative Tasks: Automated metrics like ROUGE <cit.> and METEOR <cit.> are resource-efficient as evaluation metrics but face limitations in generative tasks. They rely heavily on n-gram overlap, which may not accurately reflect the true quality of the generated text <cit.>. They also assume that the reference text provides a complete and accurate representation of the desired output, which is not always guaranteed <cit.>. Reference texts can be low-quality, contain errors and ambiguities, which leads to unreliable evaluation results. This issue is illustrated by the findings of <cit.>, who demonstrated that human reviewers frequently rate gold-standard benchmark answers among the worst answer options. 
This issue worsens when generated texts differ significantly in length from references, as subsequence-based metrics struggle to capture such variations <cit.>. As an alternative, we opt to evaluate generations using LLMs as a judge <cit.>. In this setup, the LLM effectively acts as a human annotator, providing a binary preference score for each pair of outputs, resulting in a win rate as the final metric <cit.>. This approach has become increasingly common: LLMs have been successfully employed as annotators while demonstrating high inter-rater reliability <cit.>. For general-purpose outputs, we use the popular Alpaca-Eval benchmark <cit.> that evaluates multiple LLMs on about 650 human-curated questions <cit.>. We select Llama 3 70B <cit.> as our annotator model due to its open nature and cost-efficiency for high-volume experiments. As of July 2024, Llama 3 70B ranks 4th in human agreement with a score of 67.5, close to customized GPT-4 versions at 69.2. §.§.§ Datasets To assess the quality of our synthetically generated datasets, we compare the performance of models trained on them to those trained on human-curated datasets. BioQA: We use the 800 sample test split from the biology subsection of ScienceQA <cit.>. ScienceQA sourced expert-curated question-answer pairs from online learning platforms, ensuring a high level of quality and accuracy. The dataset's answer options range from two to five, have a single correct answer per question, and are randomized to prevent pattern recognition. MedQA: We use the 4183 samples from the MedMCQA validation split <cit.>. The dataset is comprised of entrance exam questions to two of India's postgraduate institutions. All dataset samples are sourced from preparation or real exams created by medical professionals. All questions have 4 answer options with one correct answer option per question. CSQA: We benchmark on the 2541 validation set samples in CommonsenseQA 2.0 <cit.>. The dataset was generated through a gamified but controlled question generation process, where players were incentivized to design challenging yes/no questions by earning points when beating an AI model. The generated questions were then cross-validated by other players, independent validators, and another model, to ensure that the questions were well-formed, answerable, and representative of common sense. RecipeGen: We randomly sample 1000 samples from the higher-quality subsection of the RecipeNLG dataset <cit.>. The recipes were scraped from cooking websites and post-processed using a fine-grained and standardized cleaning and formatting process to ensure correctness. Each recipe features a title, the list of required ingredients, as well as the steps to produce the meal. We only include samples not present in C4, based on the URL in each dataset. Summarization: We use 1000 samples from the test split of the CNN-DailyMail dataset <cit.>. The dataset is commonly used for summarization because it consists of articles and stories from CNN and DailyMail alongside their highlights in abstract or bullet point format presented at the top of newspaper pages or websites. § RESULTS §.§ Baselines We compare the performance of CRAFT, trained on our synthetic datasets, against three baselines. The few-shot baseline is a model fine-tuned only on the XS size CRAFT dataset, with human-curated few-shots. It serves as the primary baseline since this model uses all human-curated data available in our pipeline. 
The second baseline is the instruction-tuned model, Mistral 7B Instruct v0.2 <cit.>, which has been fine-tuned on proprietary instruction-following datasets that mix various tasks and sources. This baseline provides a meaningful comparison, as it is similar in size and instruction-tuned like CRAFT models, though it is trained on undisclosed datasets of unknown quality and quantity. Thus, matching or exceeding the performance of instruction-tuned models with our synthetic data would indicate that CRAFT can produce high-quality datasets. The upper bound of expected performance is fine-tuning the models on the in-domain human-curated training splits from the chosen evaluation datasets. This baseline represents the optimal performance achievable with human-quality datasets. §.§ Scaling the Data We report the performance gain when scaling up our training data in Figure <ref>. We report the mean and standard deviation across three seeds. We observe consistent improvements across four tasks as we increase the data size. Relative to the baseline models trained with only few-shot examples, we see improvements of 17% (from 66.9 to 78.1), 12% (from 55.3 to 62.1), 23% (from 39.1 to 48.0), and 124% (from 43.7 to 97.9) for BioQA, CSQA, MedQA, and Summarization, respectively. This shows that CRAFT can be used for diverse tasks, starting with just a few curated examples. We also find appropriate scaling for each set of examples, ranging from 100 to 25,000 across all tasks. Additionally, we find that models trained with fewer examples (32, 100) exhibit much more variance than those trained with 5,000 and 25,000 examples, as indicated by the gray regions in the plots that visualize the standard deviation. For all tasks, we achieve results that are clearly better than or comparable to Mistral Instruct. It is worth noting that CRAFT uses an LLM in a limited way (to restructure and rewrite existing corpora), which seems to exclude the possibility that distillation may have played a role here. However, even if distillation were to be considered the reason for good CRAFT performance, the results indicate otherwise: we use the same model (Mistral Instruct) to paraphrase existing corpora examples but achieve even stronger results. Finally, we observe that CRAFT models outperform those trained with official human-curated large-scale data in Summarization. For other tasks, while we observe lower performance than with official data, we speculate that this could be due to in-domain evaluation for official human-curated data. We use their test split to evaluate our models, which may give these models an unfair advantage. We investigate this further in the next section. §.§ OOD Generalization and Data Contamination We now report experiments to understand the level of data contamination or similarity between test and training examples in the experiments introduced in Figure <ref>. We conduct 5-gram weighted Jaccard similarity analyses between CRAFT datasets and the test data. For each sample, we combine the instruction and output and gather 5-gram frequencies for the whole dataset. We then calculate the Jaccard similarity between the 5-gram frequency distributions of the respective CRAFT and test dataset, where 5-grams receive weight proportional to their frequency.
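A minimal sketch of this weighted 5-gram Jaccard computation is given below; it treats each dataset as a bag of word-level 5-grams and uses the standard min/max weighted Jaccard index. The whitespace tokenization and the toy data are assumptions, not the authors' code.

```python
from collections import Counter

def ngram_counts(samples, n=5):
    """Aggregate word-level n-gram frequencies over a whole dataset."""
    counts = Counter()
    for s in samples:
        tokens = (s["instruction"] + " " + s["output"]).lower().split()
        counts.update(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    return counts

def weighted_jaccard(a: Counter, b: Counter) -> float:
    """Weighted Jaccard index: sum of minimum frequencies over sum of maxima."""
    keys = set(a) | set(b)
    num = sum(min(a[k], b[k]) for k in keys)
    den = sum(max(a[k], b[k]) for k in keys)
    return num / den if den else 0.0

# toy usage with two tiny "datasets" of instruction-output pairs
craft_data = [{"instruction": "Which organelle produces most of the cell's ATP?",
               "output": "The mitochondrion produces most of the cell's ATP."}]
test_data = [{"instruction": "What does the mitochondrion produce for the cell?",
              "output": "It produces most of the cell's ATP."}]
print(weighted_jaccard(ngram_counts(craft_data), ngram_counts(test_data)))
```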
This analysis shows that all CRAFT datasets have less than 0.4% similarity with the task test set, whereas the original human-authored datasets show much higher similarities: BioQA (17.9%); CSQA (4.4%), MedQA (1.1%), and Summarization (0.3%); this may indicate some overlap between train and test splits. Since we have curated the few-shots manually, rather than copying from existing datasets, our low overlap (0.4%) may be expected. To further investigate CRAFT's performance improvement, we select four out-of-domain datasets for the biology question answering task and compare CRAFT with the human-curated baseline. In Table <ref>, we compare the human-curated baseline with CRAFT on the in-domain dataset for the baseline and four out-of-domain (OOD) datasets selected from MMLU <cit.>. Although the baseline outperforms CRAFT by more than 11% in the biology subset of the ScienceQA test set (in-domain for the baseline), CRAFT outperforms the baseline on 3 out of 4 OOD datasets. On average, CRAFT outperforms the baseline by 4%. §.§ Negative Results: Recipe Generation Out of the five tasks we selected, we observe non-scaling behavior in one task: Recipe Generation. While our manually curated few-shots are of high quality, we see a drop when scaling from 32 to 25,000 examples, as illustrated in Figure <ref>. CRAFT's performance is still better than the baseline with official human data, which means that the final dataset is usable. However, we explore why this reverse scaling occurs and examine the drop in performance. An initial analysis suggested that the CRAFT pipeline tends to find less relevant examples over time. We conducted automated data quality analysis to analyze this on a larger scale. For 500 randomly sampled instructions from different sizes of CRAFT datasets (i.e., the training sets), we used the Llama 3 8B Instruct model to answer the instructions. Then, using Llama 3 70B Instruct as a judge, we compared win rates, i.e., which output the model preferred: the gold output in the CRAFT dataset or the output generated by Llama 3 8B Instruct. We report the average win rate against the Llama outputs as the data quality metric. Higher scores indicate that the pipeline created higher quality output than Llama 3 8B Instruct's answers. We observe that data quality drops when scaling up to 25,000 examples. The 100 and 500 example sets have win rates around 0.4, while the win rate for 25K drops to 0.3. We believe this is the cause of the performance drop with scaling. While the final dataset is still useful (it outperforms the baseline with official human data), the next version of CRAFT should include either effective stopping criteria or additional validators for quality. §.§ Base Model Comparison In previous sections, we fine-tuned CRAFT models using the pretrained Mistral 7B model. Now, we repeat the experiments using the pretrained Llama 3 8B model. We observe similar trends across all tasks, and the relative improvement is comparable when scaling up from few-shots to 25,000 examples, as illustrated in Table <ref>. In all experiments, we manually curated 32-shot examples and expanded our synthetic data examples from that point. However, even curating 32 examples can be time-consuming. We can limit the number of few-shots to just eight examples to bootstrap the CRAFT process. Figure <ref> shows the results compared to the CRAFT pipeline with 32-shots. While the final results with 25,000 examples are slightly lower for 8-shots, the trend is similar for both 8- and 32-shot examples. 
We observe that the model trained with 25,000 samples, based on running CRAFT with 8 few-shots, significantly outperforms the model trained with only 32 few-shots (i.e., no extra synthetic data). This suggests that if there are time and resource limitations, using CRAFT with fewer initial examples leads to better models than trying to curate and train only on more human-curated few-shot examples. § CONCLUSION In this work, we introduced CRAFT (Corpus Retrieval and Augmentation for Fine-Tuning), a framework for generating task-specific synthetic datasets grounded in text corpora. CRAFT requires only a small set of human-curated few-shot examples to bootstrap the creation of large-scale training data by leveraging existing corpora and instruction-tuned language models. Our experiments across multiple tasks, including biology, medicine, and commonsense question-answering, as well as summarization, demonstrate that models fine-tuned on CRAFT-generated datasets can match or outperform strong baselines, including instruction-tuned models and those trained on human-curated datasets. Notably, CRAFT-based models showed better generalization capabilities on out-of-domain datasets compared to models trained on human-curated data, highlighting the robustness of our approach. While CRAFT shows promising results for most tasks, we also identified limitations in scaling performance for recipe generation, emphasizing the need for careful quality control and potential stopping criteria in future iterations. Nevertheless, the overall success of CRAFT in producing high-quality synthetic datasets with minimal human effort opens up new possibilities for efficient and adaptable model fine-tuning across a wide range of domains and tasks. § ACKNOWLEDGEMENTS IZ and DE have been supported by the European Union's Horizon 2020 research and innovation program under grant agreement No. 101135671 (TrustLLM). AK and HS have been funded by Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) project SCHU 2246/14-1. IZ was also supported by a G-Research Travel Grant to present this work at the ELLIS Doctoral Symposium 2024 in Paris. § IMPLEMENTATION DETAILS §.§ Few-Shot Design The BioQA few-shot text samples were compiled from a diverse range of sources, including textbooks <cit.>, openly accessible materials, and Encyclopedia Britannica. For MedQA, we primarily drew upon openly accessible websites, such as the National Institutes of Health, National Health Service, Food and Drug Administration, Mayo and Cleveland Clinics, and other medicine-related websites, to generate our few-shot text samples. The CSQA few-shot text samples were sourced from a variety of online resources, including blogs, articles, and other websites tailored to the specific topic at hand. From sources that are not openly accessible through websites, continuous text snippets were directly extracted and used as texts, while all other material was shortened, rephrased, or restructured by the authors. This process ensures that articles which may have been crawled through C4 <cit.> do not produce exact matches at retrieval time. Since none of these texts have direct question-answer pairs available, the question, answer options, and the answer were generated by the authors for each sample. Table <ref>, step 1, shows an example. The recipe generation few-shots were taken from blogs and other openly accessible websites.
Their design always includes a simple instruction or question to cook a meal, as well as bulleted or numbered lists of ingredients and steps to produce the meal. Sometimes, the recipes on websites were described only as continuous text. In such cases, the authors added the list of ingredients and steps to create the few-shots. Similar to the QA few-shots, it is desirable to increase the vocabulary diversity for retrieval. Therefore, we made sure to cover a wide range of recipes in the few-shots. To design summarization few-shots, we collected a wide variety of texts from websites, blogs, and magazines, as well as specialized sources such as GitHub issues. Here, the few-shots feature a text, an instruction to create a summary, as well as the summary output. On some websites, summarized versions of the main text are given, while in other cases continuous or bulleted summaries were created by the authors. §.§ Embedding Database To create the embeddings, SentenceTransformers <cit.> is used, specifically a MiniLM <cit.> version () optimized for cosine-similarity search between two document pairs. This model creates 384D embeddings, which are stored in an HDF5 database <cit.>, that allows for native storage and retrieval of array-like data. §.§ Corpora To enable retrieving human-written documents for general-purpose as well as specialized domains, we include four large corpora. The Colossal Clean Crawled Corpus <cit.> dataset consists of approximately 750GB of English data, pre-cleaned by the creators to exclude non-informative texts. We use a 305GB subset of the dataset, which excludes not-safe-for-work or offensive content. The English Wikipedia corpus comprises a diverse and high-quality collection of textual information on a wide range of topics. We use the January 2nd, 2024 dump of English Wikipedia, which we pre-process using WikiExtractor <cit.> to extract clean text documents. The Stack Exchange corpus <cit.> features a structured format with title, body, and best-voted answer collected from the Stack Exchange network. The dataset encompasses 173 distinct sub-communities, covering both technical and non-technical topics. The WikiHow corpus <cit.> presents information in a step-by-step instructional layout, making it a valuable resource for tasks such as summarization or recipe generation. Each document consists of a title, clear instructions, and accompanying text. After filtering out documents with fewer than 200 characters or more than 25,000 characters, the resulting datasets contain approximately 362 million documents from C4, 10.5 million documents from English Wikipedia, 9.5 million documents from Stack Exchange, and 190,000 documents from WikiHow. The resulting 383 million documents take up approximately 247GB of storage when GZIP compressed <cit.> and represented as 16-bit NumPy arrays <cit.>. §.§ Document Retrieval To retrieve relevant human-written samples, we employ a two-step process that approximates a global similarity search. Due to the large size of the embedding database (383 million documents), a direct global pair-wise comparison is impractical. In the first step, we divide the embedding database sequentially into subsections of approximately 350,000 documents each. We calculate cosine similarity between each subsection and the few-shot samples, and store the top 5% most similar documents in memory. This reduces the number of documents considered from 383 million to approximately 19 million. 
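A sketch of this first, chunked filtering pass is given below; it assumes the corpus embeddings are unit-normalized and can be iterated over as in-memory chunks (for instance, read from the HDF5 store), and the scoring rule (best match over the few-shots) is one plausible choice rather than the authors' exact criterion.

```python
import numpy as np

def first_pass_filter(chunks, query_embs, keep_frac=0.05):
    """Keep the top `keep_frac` most similar documents from each embedding chunk.

    chunks     : iterable of (global_offset, (n_docs, dim) float array) pairs
    query_embs : (n_fewshots, dim) unit-normalized few-shot embeddings
    Returns the global indices of the surviving documents.
    """
    kept = []
    for offset, emb in chunks:
        # for unit-normalized vectors, cosine similarity is a dot product;
        # score each document by its best match over the few-shots
        scores = (emb @ query_embs.T).max(axis=1)
        n_keep = max(1, int(keep_frac * len(scores)))
        top = np.argpartition(-scores, n_keep - 1)[:n_keep]
        kept.extend(offset + top)
    return np.asarray(kept)

# toy usage with two random chunks of 1000 documents each
rng = np.random.default_rng(0)
chunks = [(0, rng.standard_normal((1000, 8))), (1000, rng.standard_normal((1000, 8)))]
queries = rng.standard_normal((4, 8))
print(first_pass_filter(chunks, queries).shape)   # about 5% of 2000 indices survive
```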
In the second step, we recalculate cosine similarity and perform traditional top-k similarity retrieval on the filtered documents. This yields the indices of the top-k most similar documents to the few-shot examples. Optimizing solely for top-k similarity between individual few-shot samples and the embedding database may lead to limited variation and high similarity among the retrieved documents. Conversely, optimizing for similarity by averaging the embedding representations of all few-shot samples may prioritize a single dominant topic, potentially biasing the selection. To balance these approaches, we adopt a mixed similarity retrieval strategy, combining two methods: (i) 50% of samples are retrieved based on individual top-k similarity to each few-shot sample, ensuring a minimum number of similar samples for each few-shot and mitigating topic dominance, and (ii) the remaining 50% of samples are retrieved based on fully averaged similarity, which aims to help uncover more latent topics. §.§ Task Sample Synthesis To facilitate the creation of high-quality synthetic task samples, we align the text generation process to the few-shot design through in-context learning <cit.>. This approach helps reduce issues like hallucinations <cit.> and formatting errors, although it does not entirely prevent them. Specifically, three few-shot examples from the human-curated few-shots are dynamically and randomly sampled for each input prompt and corpus sample, interleaving them with the instruction to augment the text. Since each few-shot example includes a text similar in length to corpus samples, the three-shot prompting technique can result in input prompts that often exceed 10,000 tokens, sometimes reaching 20,000 tokens. For optimal performance, it is recommended to use models offering long context lengths. In our experiments, all task samples are created using Mistral 7B Instruct v0.2 <cit.> along with the few-shot task template shown in Table <ref>, step 3. During the generation process, vLLM <cit.> is used with temperature 0.7, top-k <cit.> and top-p <cit.> sampling at 40 and 0.9, respectively. The maximum generation length was determined based on preliminary experiments with the goal of accommodating full-length sample generation. For all QA datasets we limited samples to 256 tokens, whereas recipe generation and summarization were limited at 1280 and 1536 tokens, respectively. To enable quality control checks such as checking whether the QA datasets have the desired number of answer options, or whether the samples have been fully generated, we instruct the model to produce all task samples as valid JSON objects with a set number of keys and proper naming. We perform checks on each JSON object to ensure that the questions, instructions, and generative task answers exceed a reasonable minimum length, as well as that the answer option generated for the QA tasks contain valid letters. If no valid JSON is found, or keys are missing, or any formatting check fails, we discard the sample. Furthermore, we filter task samples that are either too similar to the few-shots or to other generated task samples using fuzzy string matching <cit.> at a token set similarity ratio of 0.85. To account for the removal of samples during deduplication, we recommend retrieving approximately twice as many corpus samples as the number of desired final synthetic task samples. A detailed breakdown of the retrieval and deduplication quantities for each task is provided in Appendix <ref>. 
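The formatting and deduplication checks just described can be sketched as follows. This is a simplified reconstruction that uses rapidfuzz for the token-set similarity ratio; the JSON keys and the minimum-length threshold shown here are assumptions beyond what the text states.

```python
import json
from rapidfuzz import fuzz

def parse_sample(raw: str, required_keys=("instruction", "output"), min_len=20):
    """Return the parsed task sample, or None if any formatting check fails."""
    try:
        start, end = raw.index("{"), raw.rindex("}") + 1   # extract the JSON object
        sample = json.loads(raw[start:end])
    except (ValueError, json.JSONDecodeError):
        return None
    if any(k not in sample for k in required_keys):
        return None
    if any(len(str(sample[k])) < min_len for k in required_keys):
        return None
    return sample

def deduplicate(samples, few_shots, threshold=85.0):
    """Drop samples too similar to the few-shots or to already-kept samples."""
    kept, seen = [], [f["instruction"] + " " + f["output"] for f in few_shots]
    for s in samples:
        text = s["instruction"] + " " + s["output"]
        if all(fuzz.token_set_ratio(text, t) < threshold for t in seen):
            kept.append(s)
            seen.append(text)
    return kept
```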
§.§ Training and Optimization For all experiments, low-rank adaptation <cit.> fine-tuning is performed with 16-bit BrainFloat <cit.> as the computation data type. All experiments are implemented using PyTorch <cit.> and the Hugging Face packages <cit.>. For optimization, the adaptive momentum optimizer with decoupled weight decay <cit.> of 5% and a learning rate of 1 × 10^-4 is employed. A linear learning rate schedule is applied with a warm-up ratio of 10%. Fine-tuning is performed for 3 epochs across all tasks and dataset size variations. When fine-tuning on only the human-curated few-shots, a batch size of 2 is adopted. In all other scenarios, fine-tuning is performed with a batch size of 16 or the equivalent with gradient accumulation steps. Following <cit.>, LoRA adapters are used on every linear layer, specifically on the query-, key-, value-, output-, gate-, up-, and down-projection matrices. We use a rank and α parameter of 64, the adapter's dropout rate is set to 0.1, and the bias terms of the update matrices are deactivated. In total, this adds 2.3%, or about 160 million parameters, of the model's 7 billion base parameters as LoRA adapters. The rest of the model remains frozen during fine-tuning, and the updated parameters will be merged with the base model after fine-tuning. § FILTERING STATISTICS § DATASET STATISTICS
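As a concrete companion to the training and optimization details above, the fine-tuning setup can be sketched with the Hugging Face peft and transformers libraries. This is an approximate reconstruction of the stated hyperparameters (rank and α of 64, dropout 0.1, AdamW with 5% decoupled weight decay, learning rate 1 × 10^-4, linear schedule with 10% warm-up, 3 epochs, bfloat16), not the authors' released training script; the base-model identifier and output directory are shown only for concreteness.

```python
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, TrainingArguments

model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1",
                                             torch_dtype=torch.bfloat16)

lora_config = LoraConfig(
    r=64, lora_alpha=64, lora_dropout=0.1, bias="none", task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()      # roughly 2% of the base model's parameters

training_args = TrainingArguments(
    output_dir="craft-finetune",
    per_device_train_batch_size=16,     # or an equivalent via gradient accumulation
    learning_rate=1e-4,
    weight_decay=0.05,                  # 5% decoupled weight decay (AdamW is the default)
    warmup_ratio=0.1,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    bf16=True,
)
```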
http://arxiv.org/abs/2409.03437v1
20240905113855
The first law of binary black hole scattering
[ "Riccardo Gonzo", "Jack Lewis", "Adam Pound" ]
gr-qc
[ "gr-qc", "hep-th" ]
rgonzo@ed.ac.uk Higgs Centre for Theoretical Physics, School of Physics and Astronomy, University of Edinburgh, EH9 3FD, UK J.E.Lewis@soton.ac.uk School of Mathematical Sciences and STAG Research Centre, University of Southampton, Southampton SO17 1BJ, United Kingdom A.Pound@soton.ac.uk School of Mathematical Sciences and STAG Research Centre, University of Southampton, Southampton SO17 1BJ, United Kingdom § ABSTRACT In the last decade, the first law of binary black hole mechanics played an important unifying role in the gravitational two-body problem. More recently, binary black hole scattering and the application of high-energy physics methods have provided a new avenue into this classical problem. In this Letter, we connect these two themes by extending the first law to the case of scattering orbits. We present derivations based on classical S-matrix, Hamiltonian, and pseudo-Hamiltonian methods, the last of which allows us to include dissipative effects for the first time. Finally, a “boundary to bound” map links this first law to the traditional bound-orbit version. Through this map a little-known observable for scatter orbits, the elapsed proper time, is mapped to the Detweiler redshift for bound orbits. The first law of binary black hole scattering Adam Pound September 9, 2024 ============================================= Introduction.—The discovery of gravitational waves (GWs) from compact binary systems opened a new chapter in astronomy. Given the enhanced sensitivity and expanded frequency range of future GW detectors, we expect a dramatic increase in the number and variety of detectable compact binary sources <cit.>. Increasingly accurate waveform models will be needed to detect and analyse these sources <cit.>, calling for the development of new tools to study the classical two-body problem. Motivated by GW modeling, a host of techniques have been developed to solve the two-body problem in general relativity, including numerical relativity, which numerically solves the fully nonlinear Einstein equations <cit.>; gravitational self-force (GSF) theory, a perturbative method that applies when one body is much smaller than the other <cit.>; and Post-Newtonian (PN) and Post-Minkowskian (PM) theory, weak-field expansions that apply when the two bodies are widely separated <cit.>. Historically, focus has been on the bound, inspiralling systems that are the dominant sources for GW detectors. However, the case of hyperbolic, scattering encounters is now of great interest: it is now known that data for scattering orbits can inform bound-orbit models using the effective one-body framework <cit.> or through an analytic continuation from scattering to bound observables <cit.>, spurring the development of new particle physics tools <cit.> that have enabled analytical computations of the two-body scattering Hamiltonian and related observables at high PM order <cit.>. In the bound case, synergies between different methods have consistently helped drive progress <cit.>. An important tool in those synergies has been the first law of binary black hole (BH) mechanics <cit.>, which describes how a binary system responds to variations of its parameters (see also <cit.>). This law has played an important role in the most accurate GSF waveform model <cit.> and in utilizing GSF results within PN, effective one body, and numerical relativity calculations <cit.>; see <cit.> for a review. 
For spinless particles, the binary's response to variations is determined by a basis of observables ℬ^< consisting of the periastron advance ΔΦ, the radial frequency Ω_r, and the averaged Detweiler redshift ⟨ z ⟩ <cit.>. To date, a first law for scattering scenarios has not been derived. In this Letter, we establish such a law and find the corresponding basis of scattering observables ℬ^>. Two of these observables are well studied: the deflection angle χ and the time delay Δ t. We complete the basis with a third observable: the elapsed proper time Δτ. Our approach is based on a pseudo-Hamiltonian formulation of GSF theory <cit.>. This allows us to include dissipative contributions in the first law, unlike all previous formulations for bound orbits. By comparing to bound-orbit formulations, we also establish a novel analytic continuation between the elements of the scattering ℬ^> and bound ℬ^< bases of observables. Finally, we link our calculations to high-energy physics methods, proving that the exponential representation of the classical S-matrix <cit.> provides a generating functional for the basis of scattering observables and deriving a first law from a PM Hamiltonian. Conventions We use geometric units with G=c=1 and the (-+++) metric signature. First law in the probe limit.—In the GSF approach, the smaller body (of mass m_1) is treated as a point-particle perturbing the spacetime of the large body, which we take to be a Schwarzschild BH of mass m_2. We first consider the probe limit (0SF order), in which the particle moves on a geodesic of the Schwarzschild metric g_αβ^Schw. The particle's motion is governed by the geodesic Hamiltonian H_0 = (1/2) g^μν_Schw p_μ p_ν, where p_α = m_1 g_αβ^Schwd x^β/dτ is the particle's 4-momentum and τ is its proper time. Assuming, without loss of generality, that the motion lies on the equatorial plane θ = π/2, we label the position of the particle with x^α(τ)=(t(τ), r(τ), π/2, φ(τ)). Because of Schwarzschild's Killing symmetries, the particle's energy and angular momentum E = -p_t,0 and L= p_ϕ,0 are conserved (here and below, a subscript 0 indicates the on-shell geodesic value). We now consider unbound geodesic orbits that begin and end at r=∞; such orbits have E>m_1 and L > L_crit(E), where L_crit(E) is a critical value of the angular momentum <cit.>. Following Carter's application of Hamilton-Jacobi theory <cit.>, we use the constants of motion P_i=(m_1,E,L) as canonical momenta and transform to canonical coordinates (X^i,P_i) using the type-2 generating function W(t,r,φ;P_i) = -E t + Lφ + I_r,0(r;P_i) , I_r,0(r;P_i) = ∫_r_m^r d r p_r,0(r;P_i) , where r_m(P_i) is the geodesic's minimum radius (i.e., closest approach to the BH). g^μν_ Schwp_μ,0 p_ν,0 = -m_1^2 implies p_r,0(r;P_i) = √(E^2 r^4 -r (r-2 m_2) (L^2+m_1^2 r^2))/r (r -2 m_2) . In the coordinates (X^i,P_i), where X^i=∂ W/∂ P_i, the Hamiltonian is simply its on-shell value, H_0=-m_1^2/2, meaning that Hamilton's equations become <cit.> m_1 d X^i/dτ=∂ H_0/∂ P_i = -m_1δ^i_1 . Therefore X^2 and X^3 are constants, while X^1 is linear in τ. Since X^i=∂ W/∂ P_i, this implies ∂ W/∂ E|_out = ∂ W/∂ E|_in , ∂ W/∂ L|_out = ∂ W/∂ L|_in , τ_out - τ_in = - [∂ W/∂ m_1|_out - ∂ W/∂ m_1|_in] , where “in” and “out” denote the initial, incoming state and final, outgoing state. 
Equations (<ref>) and (<ref>) imply that the total changes in coordinate time, azimuthal angle, and proper time between initial and final states are Δ t_0 = t_out - t_in = ∂/∂ E[I_r,0(r_out;P_i) - I_r,0(r_in;P_i) ], Δφ_0 = φ_out - φ_in = -∂/∂ L[I_r,0(r_out;P_i) - I_r,0(r_in;P_i) ], Δτ_0 = τ_out - τ_in = -∂/∂ m_1[I_r,0(r_out;P_i) - I_r,0(r_in;P_i) ]. We are interested in the limit where the initial and final states are defined at past and future timelike infinity, with r_ in=∞ = r_ out, passing through the single radial turning point r_m. In this limit, Δφ_0 remains finite, but I_r,0(r_ in/out;P_i), Δ t_0, and Δτ_0 all diverge. However, we can define regularized versions. Using a convenient dimensionless regulator ϵ <cit.>, we first define the scattering radial action I^>,ϵ_r,0 (P_i) = 2 ∫_r_m^+∞d r r^ϵ p_r,0(r;P_i) . This converges for Re(ϵ)<-1, but for physical observables we will be able to analytically continue to ϵ=0. In terms of I^>,ϵ_r,0 we can write the regularized r_ in/out→∞ limit of Eq. (<ref>) for the full scattering path as Δφ^ϵ_0 = π + χ^ϵ_0 = - ∂/∂ L I^>,ϵ_r,0(P_i) , where χ_0=lim_ϵ→0χ^ϵ_0 is the physical scattering angle, and Δ t^ϵ_0 = ∂/∂ E I^>,ϵ_r,0(P_i) , Δτ^ϵ_0 = - ∂/∂ m_1 I^>,ϵ_r,0(P_i) . Unlike χ^ϵ_0, the elapsed times Δ t^ϵ_0 and Δτ^ϵ_0 diverge as ϵ→0. The associated physical observables, which are well defined when ϵ→0, are relative measurements: the difference between Δ t_0 (P_i) along the geodesic and Δ t_0 (P_i,ref) along some reference orbit. In the Supplemental Material, we provide exact expressions for the geodesic scattering observables as well as the first few terms in their PM expansions (corresponding to m_1 m_2 / L≪ 1). Finally, Eqs. (<ref>) and (<ref>) can be immediately combined into a single equation, δ I^>,ϵ_r,0 = -(π + χ_0) δ L+ Δ t^ϵ_0 δ E - Δτ^ϵ_0δ m_1 . This is our first law for scattering geodesics. Here and below, we discard terms that vanish at ϵ=0, and equalities should be understood in this sense. First law at all SF orders.—Beyond leading order in the mass ratio m_1/m_2, the particle generates a metric perturbation h_αβ on the Schwarzschild background. m_1 then moves on a geodesic of a certain effective metric g̃_αβ = g_αβ^Schw + h^ R_αβ <cit.>, where h^ R_αβ is a certain regular piece of h_αβ. At 1SF order, we can write h^ R_αβ in terms of the Detweiler-Whiting Green's function G_R^αβα^'β^' <cit.>: h_R^αβ(x^μ;Γ)= 1/m_1∫_Γ G_R^αβα^'β^'(x^μ, x'^μ(τ̃')) p̃_α^'p̃_β^' dτ̃' . Here p̃_α:=g̃_αβ dx^β/ dτ̃, τ̃ is proper time in g̃_αβ, and Γ denotes the particle's phase-space trajectory. Due to curvature-induced tail effects, G_R^αβα^'β^' is nonzero for all points x'^μ in the past of x^μ, implying h_R^αβ at a point on Γ depends on the entire prior history of Γ. At higher SF orders, there is no known Green's-function form analogous to (<ref>), but an appropriate h^ R_αβ(x^μ;Γ) exists at all SF orders <cit.>. In this setting, we again consider scattering orbits with initial parameters P_i=(m_1,E,L). The particle's energy and angular momentum evolve due to dissipation, but the orbit remains planar (θ=π/2, p̃_θ=0). For L above a critical threshold, the orbit remains close to a Schwarzschild geodesic with the same initial P_i <cit.>. Since the motion is geodesic in g̃_μν, it obeys Hamilton's equations with the test-mass pseudo-Hamiltonian H = (1/2) g̃^μνp̃_μp̃_ν <cit.>. However, we deviate from <cit.> by restricting to the 4D phase space (x^A, p̃_A) satisfying the on-shell condition H = -m_1^2/2, with x^A=(r,φ). 
Solving the on-shell condition for p̃_t=-ℋ(t, x^A, p̃_A;Γ) gives the new pseudo-Hamiltonian ℋ = 1/g̃^tt[g̃^tAp̃_A-√((g̃^tAp̃_A)^2-g̃^tt(g̃^ABp̃_A p̃_B+m_1^2))] = -p_t(r,p̃_A) - h^μν_ R(t,x^A;Γ) p̃_μp̃_ν/2g^tt_ Schw p_t+ O(m_1^2/m_2^2), with t now the parameter along the trajectory. ℋ is referred to as a pseudo-Hamiltonian because it depends on the trajectory Γ={(x^A(t), p̃_A(t)) | t ∈ℝ}. Hamilton's equations in this context read d x^A/ d t= [∂ℋ/∂p̃_A] and dp̃_A/ d t=-[∂ℋ/∂ x^A] , where [·] indicates specification of Γ as the self-consistent trajectory <cit.> passing through (x^A,p̃_A); prior to that specification, Γ is treated as independent, and derivatives do not act on it. We go further by replacing m_1 by m_1' in Eq. (<ref>), setting m'_1=m_1 only when [·] is applied. Importantly, Eq. (<ref>) captures the full dynamics, including dissipation, unlike an ordinary Hamiltonian description. We derive the first law from H. Doing so will require its relationship to the redshift z: z := dτ̃/d t = [∂ℋ/∂ m_1] . To establish this relationship, we consider the normalization condition -m_1=p̃_μd x^μ/dτ̃ =(-ℋ+p̃_A [ ∂ℋ/∂p̃_A] ) z^-1 , where we used p̃_t = -ℋ together with (<ref>). Next we note the first line of (<ref>) shows that, at fixed m'_1, ℋ is a homogeneous function of (m_1,p̃_A) of order 1. Euler's homogeneous function theorem hence implies ℋ= m_1 ∂ℋ/∂ m_1+p̃_A ∂ℋ/∂p̃_A. Comparing this with (<ref>), we obtain (<ref>). Now, to derive the first law, we loosely follow <cit.> by considering how H changes under variations δ P_i of the initial data, with δ x^A_ in=0. Given (<ref>) and (<ref>), we find [δℋ] = [ ∂ℋ/∂ x^Aδ x^A ] + [ ∂ℋ/∂p̃_Aδp̃_A ] + [ ∂ℋ/∂ m_1δ m_1 ] = - dp̃_A/d tδ x^A + d x^A/d tδp̃_A + dτ̃/d t δ m_1 . Since h^ R_μν vanishes for an inertial particle in Minkowski <cit.>, its contribution to p̃_A and H vanishes in the initial state, such that p̃^ in_φ=p^ in_φ,0=L and H_ in=-p^ in_t,0=E. We isolate δ L and δ E in (<ref>) by defining `interaction' quantities 𝓅̃_φ:=p̃_φ-L, 𝓅̃_r=p̃_r, and ℋ:= H-E, such that δ E + [δℋ] = - d𝓅̃_A/d tδ x^A + d x^A/d tδ𝓅̃_A + dφ/d tδ L + dτ̃/d tδ m_1 . Next, we integrate (<ref>) along the physical scattering trajectory from t=-∞ to t=+∞, introducing the regularized integral ⟨ f⟩_Γ :=∫_Γdt r^ϵ f as in the 0SF case. Integrating the term d𝓅̃_A/d tδ x^A by parts, we obtain Δ t^ϵδ E+⟨ [ δℋ ] ⟩_Γ = δ⟨𝓅̃_A d x^A/d t⟩_Γ + Δφ^ϵδ L + Δτ̃^ϵδ m_1 , where Δ t^ϵ:=⟨ 1 ⟩_Γ, Δφ^ϵ := ⟨dφ/d t⟩_Γ, and Δτ̃^ϵ := ⟨dτ̃/d t⟩_Γ. We have discarded boundary terms by choosing Re(ϵ) sufficiently negative and discarded terms that arise from derivatives acting on r^ϵ because they vanish when analytically continued to ϵ=0. Defining also the regularized `interaction' action I^>,ϵ := ∫_Γ dt r^ϵ𝓅̃_A d x^A/d t=I^>,ϵ_r+∫_Γdφ 𝓅̃_φ , with I^>,ϵ_r:=∫_Γ dr r^ϵp̃_r, we rewrite (<ref>) as Δ t^ϵδ E+⟨ [ δℋ] ⟩_Γ = δ I^>,ϵ + Δφ^ϵδ L + Δτ̃^ϵδ m_1 . Equation (<ref>) is valid at all SF orders. We can write it in two alternative forms by absorbing ⟨ [ δℋ ] ⟩ into either `renormalized' observables B^>_ ren or variables {I^>,ϵ_ ren,E_ ren,L_ ren}. In the first approach, we write ⟨ [ δℋ ] ⟩ = ⟨ [ ∂_P_iℋ ] ⟩δ P_i and define χ^ϵ_ ren := χ^ϵ -⟨ [ ∂_Lℋ ] ⟩, Δ t^ϵ_ ren := Δ t^ϵ +⟨ [ ∂_Eℋ ]⟩, Δτ̃^ϵ_ ren := Δτ̃^ϵ -⟨ [ ∂_m_1ℋ ]⟩. Equation (<ref>) then becomes δI^>,ϵ = -(π + χ^ϵ_ ren) δ L + Δ t^ϵ_ renδ E - Δτ̃^ϵ_ renδ m_1 . In the second approach, following <cit.>'s treatment of the bound case, we define renormalized variables E_ ren = λ E , L_ ren = λ L , I^>,ϵ_ ren = λ I^>,ϵ . 
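The redshift identity z = [∂ℋ/∂m_1] used above can be made concrete in the probe (0SF) limit, where h^R_αβ = 0 and ℋ reduces to the geodesic energy -p_t obtained from the Schwarzschild mass shell: along the orbit dτ/dt = (1 - 2m_2/r) m_1/E, which coincides with ∂E/∂m_1 at fixed (r, p_r, L). A hedged sympy sketch of this special case only (our own illustration, not the full pseudo-Hamiltonian calculation):

import sympy as sp

r, m1, m2, L, pr = sp.symbols('r m1 m2 L p_r', positive=True)
f = 1 - 2*m2/r

# probe limit: solve the mass-shell condition for the energy -p_t = H0(r, p_r, L, m1)
H0 = sp.sqrt(f*(m1**2 + L**2/r**2 + f*pr**2))

redshift_from_H = sp.diff(H0, m1)     # z = dH0/dm1 at fixed (r, p_r, L)
redshift_direct = f*m1/H0             # z = dtau/dt = (1 - 2 m2/r) m1/E along the geodesic

print(sp.simplify(redshift_from_H - redshift_direct))   # -> 0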
Choosing λ(P_i) appropriately to eliminate ⟨ [ δℋ ] ⟩ from Eq. (<ref>), we are left with δI^>,ϵ_ ren = -(π + χ^ϵ) δ L_ ren + Δ t^ϵδ E_ ren - Δτ̃^ϵδ m_1 . Equations (<ref>) and (<ref>) are two equivalent forms of the first law for scattering orbits, valid at all SF orders and including all dissipative effects. From scattering to bound.—There is a well-known analytic continuation between the deflection angle χ for unbound orbits and the periastron advance ΔΦ for bound orbits, as well as between the scattering and bound radial actions <cit.>. Here, using the first laws for unbound and bound motion, we extend these analytic continuations to include all the observables in the scattering and bound bases, ℬ^> = (χ,Δ t^ϵ,Δτ^ϵ) and ℬ^< = (ΔΦ,Ω_r,⟨ z ⟩). We limit our analysis to 0SF order, as the analytic continuation for the radial action is not known to be valid beyond 0SF order due to nonlocal-in-time tail effects <cit.>. We write the first law for bound geodesics in terms of the bound radial action, I^<_r,0 (P_i) = 2 ∫_r_-(P_i)^r_+(P_i) d r p_r,0(r;P_i) , where r_∓ are the orbit's minimum and maximum radius (i.e., the radii at periapsis and apoapsis). Following the same arguments as for unbound orbits, one can write the accumulated φ, t, and τ over a single radial period (T_r,0=2π/Ω_r,0) as derivatives of I^<_r,0, leading to the first law for bound orbits <cit.>: δ I^<_r,0 = -(2π + ΔΦ_0) δ L + 2 π/Ω_r,0δ E- 2 π/Ω_r,0⟨ z ⟩_0 δ m_1 , where ⟨ z ⟩_0 := (1/ T_r,0) ∫_0^T_r,0dt (dτ /dt)_0. Knowing the scatter-to-bound map for the radial action <cit.>, I^<_r,0 (P_i) = lim_ϵ→0[I^>,ϵ_r,0(E,L,m_1)-I^>,ϵ_r,0(E,-L,m_1)], and comparing Eq. (<ref>) to Eq. (<ref>), we immediately conclude that there is an analytic continuation between the full set of scattering and bound observables: ΔΦ_0 = χ_0 (E,L,m_1) + χ_0 (E,-L,m_1) , 2 π/Ω_r,0 = lim_ϵ→0[Δ t^ϵ_0 (E,L,m_1) - Δ t^ϵ_0 (E,-L,m_1)] , 2 π⟨ z ⟩_0 /Ω_r,0 = lim_ϵ→0[Δτ^ϵ_0 (E,L,m_1) - Δτ^ϵ_0 (E,-L,m_1)] . We note that the infrared divergences in Δ t^ϵ_0 and Δτ^ϵ_0 cancel in these expressions because the divergences are independent of L; see the Supplemental Material. First law from the S-matrix.—In this section, we put our first law in the broader context of the quantum S-matrix description of the classical two-body problem <cit.>. Given the two-body initial state of well-separated massive point particles Ψ_in = p_1 p_2 of mass m_1 and m_2, the action of the unitary S-matrix operator, Ŝ = 𝒯exp(-i/ħ∫d t H_int(t)) , describes the time evolution of the state in terms of the interaction Hamiltonian H_int, with 𝒯 denoting time ordering. The classical two-body scattering dynamics in the asymptotic ħ→ 0 limit is equivalently obtained by evaluating the action, and therefore H_ int, on-shell. Motivated by (<ref>), we define the exponential representation Ŝ = exp(i N̂ /ħ) <cit.>, where N̂ is a Hermitian operator. We then study the real-valued two-body matrix element N(𝔼,q,m_1,m_2) := ⟨ p_1' p_2'|N̂|p_1 p_2⟩ , where we defined Ψ_out = p'_1 p'_2, the initial total energy 𝔼 of both particles, and the exchanged momentum q^μ = p_1'^μ-p_1^μ=p_2^μ-p_2'^μ. To make contact with the incoming total angular momentum 𝕃= (m_1 m_2 √(γ^2 - 1) b)/𝔼, where γ := p^μ_1+p_1'^μ/2p_2μ+p'_2μ/2 and b is the impact parameter in the center of mass (CM) frame, we perform the Fourier transform N^>,ϵ(𝔼,𝕃,{m_a}) = 1/4 m_1 m_2 √(γ^2-1)∫d^2+2 ϵ q/(2 π)^2+2 ϵ e^-i b(𝕃) · q/ħ N(𝔼,q,{m_a}) , using dimensional regularization with d=4+2 ϵ. 
Infrared, 1/ϵ divergences arise due to the long-range nature of gravity, but their analytic structure is understood  <cit.>. In complete generality, the expectation value (<ref>) is a function of the kinematic data (𝔼,𝕃,{m_a}), and its variation in the phase space is δ N^>,ϵ = c^ϵ_𝕃 δ𝕃+ c^ϵ_𝔼 δ𝔼 + ∑_a=1,2 c^ϵ_m_a δ m_a , where c^ϵ_𝕃, c^ϵ_𝔼, and c^ϵ_m_a are gauge-invariant coefficients. Using insights from the PM Hamiltonian description <cit.> and the relation with the radial action <cit.>, we now identify the coefficients with the observables B^>. First, by matching the scattering angle χ in the CM frame, it was shown that the matrix element (<ref>) in the conservative case agrees with the radial action up to a constant proportional to 𝕃 <cit.>: N^>,ϵ|_cons = ∫_𝒞_r^>,ϵd r p̃_r,c.m.(r;𝔼,𝕃,{m_a})_𝕀^>,ϵ_r + π𝕃 , where p̃_r,c.m. is the radial relative momentum in the CM frame and 𝒞_r^>,ϵ is the contour of integration for scattering orbits, which implicitly includes a regulator ϵ inherited from the dimensional regularization. In the Supplemental Material, using the PM conservative Hamiltonian and its symmetries in the CM frame <cit.>, we then provide a proof of the following conservative first law for the two-body scattering problem, δ N^>,ϵ = -χδ𝕃+ Δ t^ϵδ𝔼 - ∑_a=1,2Δτ^ϵ_aδ m_a , where Δτ^ϵ_a is the elapsed proper time of particle a and Δ t^ϵ is the elapsed global time. In general, the two-body incoming energy 𝔼 and angular momentum 𝕃 do not agree with the conserved one-body counterparts E and L of the GSF calculation performed earlier. Moreover, while N^>,ϵ is computed here in the CM frame, usually the GSF calculation is in the initial rest frame of the heavy BH <cit.>. However, at 0SF order, we can trivially identify 𝔼→ E + m_2 and 𝕃→ L and both frames, as the relative radial momentum p̃_r,c.m. coincides with the geodesic one p_r,0. Then (<ref>) identically matches (<ref>), and we have χ^ϵ→χ^ϵ_0, Δ t^ϵ→Δ t^ϵ_0 and Δτ^ϵ_1→Δτ^ϵ_0. Notice that δ N^>,ϵ includes also a variation with respect to m_2, which we held fixed in our earlier variations. In the GSF context, we would expect the coefficient of such a variation to be related to the surface gravity of the primary BH <cit.>, while in the PM limit it should asymptote to the elapsed proper time of the BH <cit.>. At nSF order, the matching with our SF first law is more challenging as the dynamics of the heavy BH is integrated out compared to the S-matrix case; we leave this for future work. Incorporating dissipation in this framework is possible by combining the in-in expectation value <cit.> with the exponential representation of Ŝ <cit.>: for every observable 𝒪,[Notice that in the limit ħ→ 0, the commutators (-i/ħ)[ · , · ] are equivalent to suitable Dirac brackets {·,·} which generate the time evolution in the classical phase space <cit.>.] ⟨Δ𝒪⟩ = Ψ_inŜ^† 𝒪 ŜΨ_in - Ψ_in𝒪 Ψ_in = ∑_j=1^+∞(-i)^j/ħ^j j! [N̂, [N̂, …, [N̂_j times, 𝒪] … ] ] , where now also the N̂ matrix elements with on-shell gravitons are relevant. This suggests a physical principle to connect the coefficients (c^ϵ_𝕃,c^ϵ_𝔼,c^ϵ_m_a) to observables at all orders, with dissipation included; see for example Eq. (3.48) of <cit.>. Conclusion.—In recent years, the study of unbound orbits through the S-matrix formalism has transformed the gravitational two-body problem. In this Letter, we developed a powerful new tool for such studies: an extension of the first law of binary BH mechanics to the unbound (and dissipative) case. 
Our derivation at all SF orders utilizes a novel version of the pseudo-Hamiltonian formalism <cit.>. More generally, we showed how the first law can be derived from the variation of the classical S-matrix in the phase space of the kinematic data P_i =(ℰ,L,{m_a}); see Fig. <ref>. In that context, the S-matrix can be interpreted as a generating functional of classical observables. Among those observables, we have highlighted the elapsed proper time Δτ^ϵ as a new, core element of the (regularized) basis of scattering observables ℬ^> = (χ, Δ t^ϵ, Δτ^ϵ). Using the relation between the scattering and bound radial action, we also established a full correspondence (<ref>) between the bases of scattering observables ℬ^> and bound observables ℬ^< at 0SF order, as again summarized in Fig. <ref>. This extends the well-known map between the deflection angle and periastron advance. Given the first law's varied applications for bound orbits <cit.>, we expect our work to open many new avenues for scattering calculations. We particularly encourage self-force scattering calculations <cit.> of the observables Δ t^ϵ and Δτ^ϵ. Natural extensions also present themselves. First, one might consider scattering orbits of spinning BHs <cit.>. Second, we considered only two-body matrix elements, but nothing prevents us from studying the variation of matrix elements involving on-shell graviton states, which should be related to the gravitational waveform. Finally, we hope that linking the first laws for scattering and bound orbits beyond 0SF can shed light on the tail effects that have limited the applicability of scatter-to-bound maps <cit.>. Acknowledgments. We thank Alex Le Tiec for comments on a draft of this Letter. RG would like to thank A. Ilderton, M. Porrati, K. Rajeev and C. Shi for useful discussions. JL and AP gratefully acknowledge helpful conversations with Soichiro Isoyama and Takahiro Tanaka about pseudo-Hamiltonian dynamics and with Leor Barack, Chris Kavanagh, Rafael Porto, Davide Usseglio, and Maarten van de Meent about scatter-to-bound maps. RG is grateful to the Mainz Institute of Theoretical Physics (MITP) of the Cluster of Excellence PRISMA+ (Project ID 390831469), for its hospitality and its partial support during the completion of this work. JL acknowledges the support of an STFC studentship. AP acknowledges the support of a Royal Society University Research Fellowship and a UKRI Frontier Research Grant (as selected by the ERC) under the Horizon Europe Guarantee scheme [grant number EP/Y008251/1]. § SUPPLEMENTAL MATERIAL In this Supplemental Material, we provide closed-form and PM-expanded formulas for the regularized scattering observables ℬ^>=(χ,Δ t^ϵ,Δτ^ϵ) at 0SF order. We then discuss how they relate, through analytic continuation, to the analogous observables for bound geodesics. Finally, we provide a derivation of the conservative first law for scattering orbits from the S-matrix formalism. §.§ Observables at 0SF order All of the observables can be obtained from the scattering radial action (<ref>) via Eqs. (<ref>) and (<ref>). 
Changing variable to u=1/r, we write the radial action as I^>,ϵ_r,0(P_i) = 2 ∫_0^u_ md u/u^2+ϵ √(B(u))/(1 -2 m_2 u) , B(u) = 2 L^2 m_2 (u-u_ m)(u-u_1)(u-u_2) , where the analytic form of the radial roots (u_ m,u_1,u_2) of B(u) = 0 follow from Cartan's formula <cit.>: u_ m = 1/6 m_2(1 + 2 βcos(ζ-2 π/3)) , u_1 = 1/6 m_2(1 + 2 βcos(ζ/3)) , u_2 = 1/6 m_2(1 + 2 βcos(ζ+2 π/3)) , and we have defined the constants ζ =arccos(L^2+(36 m_1^2-54 E^2) m_2^2/β ^3 L^2) , β =√(1-12 m_1^2 m_2^2/L^2) . Deflection angle.—The deflection angle χ=Δφ-π describes the total change of the azimuthal angle in the scattering process, relative to the value (π) for a straight line in Minkowski spacetime. This quantity does not require any regularization as it is infrared finite. To see how that finiteness follows from the radial action, note that the 1/ϵ divergence of the radial action is L-independent: I^>,ϵ_r,0 (P_i) |_1/ϵ = 2 m_2 (2 E^2 -m_1^2)/√(E^2-m_1^2)1/ϵ , in agreement with amplitude calculations <cit.>. Therefore, the deflection angle for a scattering geodesic, which is calculated from ∂ I^>,ϵ_r,0/∂ L, is always infrared finite. Now, to evaluate the deflection angle from (<ref>), we notice that the action of the L-derivative operator on the integration boundary u_ m vanishes because B(u_ m) = 0, and therefore we only need to act at the integrand level. This allows us to express (see the strategy adopted in <cit.>) the geodesic deflection angle as χ_0 = 4 L u_ m/√(E^2-m_1^2) F_D^(2)(1,1/2,1/2,3/2; u_ m/u_1,u_ m/u_2) , where F_D^(n) is Lauricella's hypergeometric function F_D^(n)(α,{β_j}_j=1^n, γ ;{x_j}_j=1^n)=Γ(γ)/Γ(α) Γ(γ-α) ×∫_0^1  d t t^α-1(1-t)^γ-α-1∏_j=1^n(1-x_j t)^-β_j . Coordinate time delay.—The total change in coordinate time along the geodesic is trivially infinite. Physically it only makes sense to measure the difference between this time and the time as measured along a reference path; for an appropriate reference path, that difference is the (Shapiro) time delay <cit.>, which represents the amount by which signals are slowed down in the presence of a gravitational source. Inspired by <cit.>, we take as a reference the clock of an observer moving along a scattering geodesic at very large L_ref≫ L but fixed E_ref = E and define our physical infrared-finite time-delay observable as Δ t^rel_0,L_ref = lim_ϵ→ 0[Δ t^ϵ_0(P_i) - Δ t^ϵ_0 (P_i,ref) |_𝒪(m_1m_2/L_ref)], where it is understood that we neglect subleading terms in m_1 m_2/L_ ref. Given (<ref>) and (<ref>), we notice that this definition is always infrared safe at geodesic order, as expected. We also note that this definition is completely analogous to χ_0, which we can write as χ_0 = lim_ϵ→ 0[Δφ_0^ϵ(P_i) - Δφ_0^ϵ (P_i,ref) ] because lim_ϵ→ 0Δφ_0(P_i, ref)=π. Using the definition (<ref>) and the representation (<ref>) of the radial action, we can write the regularized elapsed coordinate time as Δ t^ϵ_0 = 2 √(π)m_1/√(E^2-m_1^2) u_ m^1-ϵΓ(-1+ϵ)/Γ(-1/2+ϵ) × F_D^(3)(-1+ϵ,1,1/2,1/2,-1/2+ϵ; 2 m_2 u_ m, u_ m/u_1,u_ m/u_2), where we have used the definition (<ref>). Turning to the physical time delay (<ref>), at geodesic order the PM expansion gives the familiar structure <cit.>: Δ t^rel_0,L_ref = 2 m_2 E (3 m_1^2-2 E^2)/(E^2-m_1^2)^3/2log(L/L_ref) + 15 π E m_2^2/2 L + 𝒪(1/L^2) . Relative elapsed proper time.—The elapsed proper time along scattering orbits has received, to our knowledge, very little attention in the literature (except, possibly, for plunging orbits related to holography applications <cit.>). 
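The Cardano-type trigonometric roots and the deflection angle quoted above are straightforward to check numerically. In the sketch below (our own illustration, not from the original), the grouping of the cosine arguments is read as ζ/3 ± 2π/3, which reproduces the roots of the cubic B(u) = E^2 - (1 - 2m_2 u)(m_1^2 + L^2 u^2); the angle then follows from χ_0 = 2∫_0^{u_m} L du/√(B(u)) - π, with the turning-point singularity removed by the substitution u = u_m - t^2. The parameter values are arbitrary examples above the critical angular momentum.

import numpy as np
from scipy.integrate import quad

def cubic_roots_trig(E, L, m1, m2):
    # closed-form roots of B(u), cosine arguments read as zeta/3 + k*2*pi/3, k = -1, 0, +1
    beta = np.sqrt(1.0 - 12.0*m1**2*m2**2/L**2)
    zeta = np.arccos((L**2 + (36.0*m1**2 - 54.0*E**2)*m2**2)/(beta**3*L**2))
    return np.sort([(1.0 + 2.0*beta*np.cos(zeta/3.0 + k*2.0*np.pi/3.0))/(6.0*m2)
                    for k in (-1, 0, 1)])

def B(u, E, L, m1, m2):
    # B(u) = E^2 - (1 - 2 m2 u)(m1^2 + L^2 u^2) = 2 m2 L^2 (u - u_m)(u - u_1)(u - u_2)
    return E**2 - (1.0 - 2.0*m2*u)*(m1**2 + (L*u)**2)

def deflection_angle(E, L, m1, m2):
    u_m = min(u for u in cubic_roots_trig(E, L, m1, m2) if u > 0)   # closest approach 1/r_m
    # substitute u = u_m - t^2 so that the integrand stays finite at the turning point
    integrand = lambda t: 2.0*L*t/np.sqrt(max(B(u_m - t*t, E, L, m1, m2), 1e-300))
    dphi, _ = quad(integrand, 0.0, np.sqrt(u_m), limit=200)
    return 2.0*dphi - np.pi

E, L, m1, m2 = 1.1, 20.0, 1.0, 1.0
print(cubic_roots_trig(E, L, m1, m2))
print(np.sort(np.roots([2*m2*L**2, -L**2, 2*m1**2*m2, E**2 - m1**2]).real))  # same three roots
print(deflection_angle(E, L, m1, m2))   # finite, and falling off roughly like 1/L at large L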
Very recently,  <cit.> and <cit.> investigated the proper time from the theoretical and experimental perspective, but without focusing on the two-body problem. In the body of this Letter, we showed that this is a crucial ingredient in our basis of scattering observables. Using the same strategy as before, we find that the definition in Eqs. (<ref>) and the radial action (<ref>) give Δ τ^ϵ_0 = 2 m_1 √(π)/√(E^2-m_1^2) u_ m^1-ϵΓ(-1+ϵ)/Γ(-1/2+ϵ) × F_D^(3)(-1+ϵ,1/2,1/2,-1/2+ϵ; u_ m/u_1,u_ m/u_2) , which is the regularized elapsed proper time along scattering orbits. The elapsed proper time relative to the large-L_ ref reference orbit is, in analogy with Eq. (<ref>), Δτ^rel_0,L_ref = lim_ϵ→ 0[Δτ^ϵ_0(P_i) - Δτ^ϵ_0 (P_i,ref) |_𝒪(m_1m_2/L_ref)], which has the PM expansion Δτ^rel_0,L_ref = 2 m_1^3 m_2 /(E^2-m_1^2)^3/2log(L/L_ref) + 3 π m_1 m_2^2/2 L + 𝒪(1/L^2) . It is worth noticing that, unlike the time delay Δ t^rel, the proper time is not defined for massless geodesics and therefore Δτ^rel→ 0 as m_1 → 0. §.§ Scatter-to-bound maps at 0SF order In the body of the Letter, we derived the scatter-to-bound maps (<ref>) directly from the first laws for unbound and bound geodesics. Here we observe that these maps also follow from direct inspection of the integral representation of the observables. We begin with those representations for unbound geodesics. The observables are simply the (regularized) net change in φ, t, and τ over the orbit: π + χ_0 = Δφ_0 = 2 ∫_r_ m(P_i)^∞ d r (dφ/d r)_0 , Δ t^ϵ_0 = 2 ∫_r_ m(P_i)^∞ d r r^ϵ(d t/d r)_0 , Δτ^ϵ_0 = 2 ∫_r_ m(P_i)^∞ d r r^ϵ(dτ/d r)_0 . We next write the observables for bound orbits in analogous forms. The periastron advance ΔΦ_0 corresponds to the azimuthal precession over a period of the bound orbit, 2π + ΔΦ_0 = 2 ∫_r_-(P_i)^r_+(P_i) d r (dφ/d r)_0 , where, recall, r_±(P_i) are the two radial turning points of the bound geodesic. Similarly, the radial coordinate-time frequency Ω_r,0 and the averaged redshift ⟨ z ⟩_0 can be written as <cit.> 2 π/Ω_r,0 = 2 ∫_r_-(P_i)^r_+(P_i) d r (d t/d r)_0 , (2 π/Ω_r,0) ⟨ z ⟩_0 = 2 ∫_r_-(P_i)^r_+(P_i) d r (dτ/d r)_0 . Each of the bound-orbit integrals can be written in terms of unbound-orbit integrals by analytically continuing to E>m_1 and negative L. The key properties in this exercise are p_r,0(r;E,L,m_1) = p_r,0(r;E,-L,m_1), which follows immediately from Eq. (<ref>), and the relationship between turning points for bound and unbound geodesics <cit.>, r_±(E,L,m_1) = r_ m(E,∓ L,m_1) . Expressed in terms of p_t,0=-E, p_φ,0=L, and p_r,0, the observables read 2π + ΔΦ_0 = 2 ∫_r_-(P_i)^r_+(P_i) d r g^φφL/g^rrp_r,0(r;P_i) , 2 π/Ω_r,0 = 2 ∫_r_-(P_i)^r_+(P_i) d r -g^ttE/g^rrp_r,0(r;P_i) , (2 π/Ω_r,0) ⟨ z ⟩_0 = 2 ∫_r_-(P_i)^r_+(P_i) d r/g^rrp_r,0(r;P_i) . In each case, we pull the constant (E or L) out of the integral and use Eqs. (<ref>) and (<ref>) to write the integral as ∫_r_-(P_i)^r_+(P_i)F(r)d r/p_r,0(r;P_i) = ∫_r_-(P_i)^∞F(r)d r/p_r,0(r;P_i) - ∫_r_+(P_i)^∞F(r)d r/p_r,0(r;P_i) = ∫_r_m(P_i)^∞F(r)d r/p_r,0(r;P_i) - ∫_r_ m(E,-L,m_1)^∞F(r)d r/p_r,0(r;E,-L,m_1). Here F(r) is one of g^φφ/g^rr, g^tt/g^rr, or 1/g^rr, which in each case depends only on r. Now comparing the observables  (<ref>) to the observables (<ref>) for unbound orbits, and noting Eq. (<ref>), we recover Eq. (<ref>). 
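The bound-orbit basis ℬ^< entering the map can likewise be evaluated numerically from the integral representations above. A rough numerical sketch (our own illustration, not part of the original): in terms of u = 1/r one has dφ = L du/√R, dt = E du/(u^2(1-2m_2u)√R) and dτ = m_1 du/(u^2√R) with R(u) = E^2 - (1-2m_2u)(m_1^2 + L^2u^2), and the two smallest roots of R are the turning points; the chosen (E, L) are an arbitrary bound-orbit example.

import numpy as np
from scipy.integrate import quad

def bound_observables(E, L, m1=1.0, m2=1.0):
    # turning points: the two smallest roots of R(u) = E^2 - (1 - 2 m2 u)(m1^2 + L^2 u^2)
    u1, u2, u3 = np.sort(np.roots([2*m2*L**2, -L**2, 2*m1**2*m2, E**2 - m1**2]).real)
    # u = u1 + (u2 - u1) sin^2 x removes both square-root singularities:
    # du/sqrt(R) = 2 dx / sqrt(2 m2 L^2 (u3 - u))
    def radial_period_integral(weight):          # 2 * int_{u1}^{u2} weight(u) du / sqrt(R)
        def g(x):
            u = u1 + (u2 - u1)*np.sin(x)**2
            return 4.0*weight(u)/np.sqrt(2.0*m2*L**2*(u3 - u))
        return quad(g, 0.0, 0.5*np.pi, limit=200)[0]
    dphi = radial_period_integral(lambda u: L)                            # azimuthal advance
    T_r  = radial_period_integral(lambda u: E/(u**2*(1.0 - 2.0*m2*u)))    # radial period in t
    dtau = radial_period_integral(lambda u: m1/u**2)                      # proper time per period
    return dphi - 2.0*np.pi, 2.0*np.pi/T_r, dtau/T_r                      # DeltaPhi_0, Omega_r0, <z>_0

print(bound_observables(E=0.97, L=4.0))   # DeltaPhi_0 > 0 and 0 < <z>_0 < 1 for a bound geodesic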
§.§ Conservative first law from the S-matrix formalism We prove here the validity of the first law in the conservative case for the S-matrix, combining the relation of the PM Hamiltonian <cit.> with amplitudes and with the radial action <cit.>, which is then linked directly to the exponential representation of the S-matrix <cit.>. Albeit the Hamiltonian extracted from the amplitude is gauge-dependent, the following derivation—and therefore our first law—is independent of the choice of gauge. We first define the center of mass frame for the massive spinless particles 1 and 2 as the one where the incoming states have momenta (E_1, p⃗) and (E_2,-p⃗) while the outgoing states have momenta (E_1, p⃗^') and (E_2,-p⃗^'), with E_j = √(|p⃗|^2+m_j^2). Conservation of energy then implies |p⃗| = |p⃗^'|^2, and the exchanged momentum in the scattering process becomes q^μ = (0 ,q⃗) = (0 ,p⃗ - p⃗^'). We then introduce polar coordinates in the plane of the motion, x⃗ = (r cos(ϕ),r sin(ϕ),0), where r is the distance between the two bodies and ϕ is the azimuthal phase. We can then define an effective conservative Hamiltonian in the CM frame, H^PM_c.m.(r, |p⃗|)=∑_a=1,2√(|p⃗|^2+m_a^2)+V(r, |p⃗|) , where V(r, |p⃗|^2) is the interaction potential for spinless particles, which can be extracted systematically from conservative amplitudes order by order in the PM expansion <cit.>. Here x^i and p_i are canonically conjugate to each other <cit.>, and denoting the norm of the center of mass spatial momentum at infinity |p⃗|(r → +∞) as p_∞ = 1/4 𝔼^2 (𝔼^2-(m_1-m_2)^2)(𝔼^2-(m_1+m_2)^2) , we obtain the conservation laws for the energy and angular momentum, 𝕃 = p_ϕ = b p_∞ , 𝔼 = H(r,|p⃗|^2= p_r,c.m.^2+L^2/r^2). At this point, we can proceed as in <cit.> and consider the variation of our Hamiltonian H^PM_c.m.(r,p_r,c.m.,ϕ,𝕃,{m_a}) in the phase space: δ H^PM_c.m. = ∂ H^PM_c.m./∂ rδ r+∂ H^PM_c.m./∂ p_r,c.m.δ p_r,c.m. +∂ H^PM_c.m./∂𝕃δ𝕃 + ∑_a=1,2∂ H^PM_c.m./∂ m_aδ m_a = -d p_r,c.m./d tδ r+d r/d tδ p_r,c.m. +dϕ/d tδ𝕃 + ∑_a=1,2∂ H^PM_c.m./∂ m_aδ m_a , where we have used Hamilton's equations in the last line. We can now recognize the redshift variables z_1,z_2 <cit.>, z_a = ∂ H^PM_c.m./∂ m_a = dτ_a/d t , where τ_a is the elapsed proper time of particle a in the CM frame. We can now proceed as in our pseudo-Hamiltonian derivation, using the regularized integration ⟨ F(·) ⟩ = ∫d t r^ϵ F(·) on both sides of the equality (<ref>). In this case, though, ⟨ H(r,p_r,ϕ,𝕃,{m_a}) ⟩ = 𝔼 ⟨ 1⟩ = 𝔼 Δ t^ϵ , where Δ t^ϵ is the elapsed global time. Therefore, following similar steps to the ones in the main text and dropping total boundary terms (which are zero on-shell), we obtain δ𝕀_r^>,ϵ = -(π + χ) δ𝕃+ Δ t^ϵδ𝔼 - ∑_a=1,2Δτ^ϵ_aδ m_a , where we have defined 𝕀_r^>,ϵ = ⟨ p_r,c.m.d r/d t⟩ , χ = ⟨dϕ/d t⟩ , Δτ^ϵ_a = ⟨dτ_a/d t⟩ . Finally, we need to link the radial action defined here with the expectation value of the N̂ operator for the two-body conservative case. We now recall the proof in Sec. 3.1.3 of <cit.>, where it was shown (non-perturbatively) that χ = - ∂ N^>,ϵ(𝔼,𝕃,{m_a})/∂𝕃 , where all the other variables are kept fixed; showing therefore that, as expected, N^>,ϵ = 𝕀_r^>,ϵ + π𝕃 . This concludes the proof of the conservative first law (<ref>) in the S-matrix formalism.
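As a small cross-check of the CM kinematics used in this section, p_∞ as defined above can be rewritten as m_1 m_2 √(γ^2-1)/𝔼, which is precisely the combination entering 𝕃 = b p_∞ in the main text. A brief sympy sketch (our own illustration, assuming only 𝔼^2 = m_1^2 + m_2^2 + 2 m_1 m_2 γ for the incoming two-body state):

import sympy as sp

m1, m2, gamma = sp.symbols('m1 m2 gamma', positive=True)
Etot2 = m1**2 + m2**2 + 2*m1*m2*gamma          # total CM energy squared

p_inf2 = (Etot2 - (m1 - m2)**2)*(Etot2 - (m1 + m2)**2)/(4*Etot2)
print(sp.simplify(p_inf2 - m1**2*m2**2*(gamma**2 - 1)/Etot2))   # -> 0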
http://arxiv.org/abs/2409.03003v1
20240904180021
Electroweak gauge invariant Higgs multiplets
[ "M. Maniatis" ]
hep-ph
[ "hep-ph" ]
Black hole singularity resolution from unitarity Lucía Menéndez-Pidal September 9, 2024 ================================================ § INTRODUCTION T.D. Lee originally proposed extending the Higgs sector to two Higgs-boson doublets to provide a new source of CP violation <cit.>. This motivation is still today valid. For a review of the two-Higgs doublet model (2HDM) see for instance <cit.>. The 2HDM has been studied with respect to stability, elecroweak symmetry breaking, CP symmetries, as well as general symmetries; see for instance <cit.>. Most extensions of Higgs multiplets in the literature focus on singlet and doublet copies. In supersymmetric models, for instance, additional Higgs-boson singlets and doublets appear in a rather natural way. As an example let us mention the next-to minimal sypersymmetric model - see the reviews <cit.> - which besides two Higgs-boson doublets contains one complex singlet. What about Higgs-boson triplets, quadruplets, and so on? With respect to general multiplets there is a severe restriction arising from the parameter defined as ρ = m_W^2/cos^2 (θ_W) m_Z^2 . Here the masses of the electroweak gauge bosons are denoted by m_W and m_Z and the electroweak mixing angle by θ_W. This parameter is measured to be close to one <cit.>. As usual, from the kinetic terms of the Higgs multiplets, written gauge covariantly, we compute the masses of the electroweak gauge bosons. From the neutral components of the Higgs multiplets we get contributions to the masses of the electroweak gauge bosons. Numbering the Higgs multiplets with i, denoting their weak isospin by T(i), its 3 component by T_3(i), and the vacuum-expectation value of its neutral component by v_i, the ρ parameter at tree level is given by <cit.>, ρ = ∑_i ( T(i) (T(i)+1) - T_3^2(i) ) |v_i|^2 /2 ∑_i T_3^2(i) |v_i|^2 . From this computation of the ρ parameter follows that for Higgs-boson singlet models with T=T_3=0 and Higgs-boson doublet models with T=1/2 and T_3=±1/2 we find ρ=1, consistent with experimental observation. One special case is the Standard Model which, since it employs one Higgs-boson doublet, predicts also ρ=1 at tree level. An example of a higher multiplet resulting in ρ=1, as follows from (<ref>), are Higgs septuplets corresponding to T=3 and T_3= ± 2. Another possibility to comply with this constraint is to add multiplets with vanishing vacuum-expectation values of their neutral component. Also it is possible to have a potential with several multiplets and with the vacuum-expectation values arranged by an appropriate potential to eventually give ρ =1. Examples are the Georgi-Machacek models <cit.> with one Higgs-boson doublet besides Higgs-boson triplets. The potential is arranged in these type of models such that the vacuum-expectation values satisfy at tree level the condition ρ=1. Let us also mention the recent work <cit.>, where a Higgs-boson quadruplet is combined with a doublet. Eventually, even with (<ref>) in mind, we see that it is at least in principle possible to consider models with more general Higgs-boson multiplets. In this sense, we here would like to study Higgs potentials of arbitrary Higgs multiplets. Stability and electroweak symmetry breaking of Higgs-multiplet models have been studied in <cit.> and <cit.>; for the case of an arbitrary number of doublets in <cit.>. Couplings of Higgs multiplets to the electroweak gauge bosons have been investigated in <cit.>. 
Let us briefly return to the Standard Model, where a single Higgs-boson doublet field φ^(2)(x) belongs to C^2. This field transforms as a doublet under electroweak transformations as indicated by the superscript. It is straightforward to introduce a bilinear field, K^(2)(x) = φ^(2)^†(x) φ^(2)(x), which is bilinear in the doublet fields, real, and a singlet under electroweak gauge transformations. Bilinear fields have been studied in detail for the case of an arbitrary number of doublets <cit.>. Here we will generalize the bilinear approach to arbitrary multiplets. Specifically, the bilinear representation enables us to determine the domain of these multiplets. This domain can be split into distinct parts according to the behavior of the multiplets under electroweak symmetry breaking, that is, unbroken, fully, and partially broken electroweak symmetry SU(2)_L × U(1)_Y → U(1)_em. Having found the domain of the multiplets with respect to electroweak-symmetry breaking, we proceed and study the corresponding Higgs potential. In the potential we shall consider multiplets in the adjoint, that is, bilinear representation. This has many advantages, for instance with respect to the study of symmetries of the potential. In particular, for the case of two Higgs-boson doublets, it has been shown that CP transformations are given by reflections in terms of bilinears <cit.>. Let us mention in this context that for the case of the 2HDM, a description free of gauge redundancies has been given for the complete model <cit.>, including the Yukawa and gauge couplings. However, considering potentials of different multiplets it turns out to be possible to form gauge-invariant terms which are not bilinear in all the multiplets. One example is a gauge-invariant tensor product of a pair of Higgs-boson doublets with one Higgs-boson triplet. Motivated by a recent study of Higgs-boson quadruplets and doublets <cit.>, we show how these gauge-invariant terms can be formed systematically. The essential idea is to express the multiplets in terms of irreducible symmetric doublet representations. Finally, we use the description of Higgs-boson multiplets in both the adjoint and symmetric representations to examine the general symmetries of the potential. As an example we study standard CP symmetries for the case of general Higgs multiplets. § HIGGS MULTIPLETS We are considering Higgs-boson multiplets with respect to electroweak gauge symmetry. In analogy with ordinary spin with respect to the group SU(2), the Higgs multiplets have to have isospin T=0, 1/2, 1, 3/2, 2, … with the 3 component running from T_3 = -T, -T+1, …, +T. Considering the group U(1)_Y, each Higgs multiplet is assigned a hypercharge Y. The Higgs multiplets corresponding to isospin T have multiplicity m= 2T+1 in analogy with the spin. The charges of the components of the Higgs fields in the fundamental representation follow from the Gell-Mann-Nishijima formula Q = Y + T_3 <cit.>. If we choose, for instance, the hypercharge Y=T, we find the lowest component of the Higgs multiplet in the fundamental representation, corresponding to T_3=-T, to be electrically neutral. This of course is only one example and any hypercharge may be assigned to the multiplets. Independent of the assignment of the hypercharge, the electric charge decreases from top to bottom by one elementary charge unit from one component to the next. In general, the tensor product of m-1 copies of doublets of SU(2), that is, 2 ⊗ 2 ⊗…⊗ 2, has one totally symmetric irreducible representation of dimension m.
It is very convenient to represent all multiplets in the symmetric representation in order to form gauge-invariant tensor products among them. Next, we examine the totally symmetric irreducible representations Δ_(a_1, …, a_m-1) in detail. As usual, the round brackets indicate that the tensor is totally symmetric under any permutation of the indices. For instance, the Higgs triplet corresponding to multiplicity m=3 can be represented as, Δ_11 = ϕ^1, Δ_12 = Δ_21 = 1/√(2)ϕ^2, Δ_22 = ϕ^3 If we assign to the triplet the hypercharge Y=1 we get with the help of the Gell-Mann-Nishijima formula the expressions Δ_11 = ϕ^++, Δ_12 = Δ_21 = 1/√(2)ϕ^+, Δ_22 = ϕ^(0) , that is, (Δ_ij) = [ ϕ^++ 1/√(2)ϕ^+; 1/√(2)ϕ^+ ϕ^(0) ]. Similar, we find for a quadruplet in terms of the symmetric representation the explicit expressions, Δ_111 = ϕ^1, Δ_112 = Δ_121 = Δ_211 = 1/√(3)ϕ^2, Δ_122 = Δ_212 = Δ_221 = 1/√(3)ϕ^3, Δ_222 = ϕ^4 . In general, the totally symmetric doublet product representation for a multiplet of multiplicity m in terms of the fundamental components reads Δ_(1, …, 1_n_1, 2, …, 2_m-n_1-1) = 1/√(m-1n_1)ϕ^m-n_1. Since the doublet representation is totally symmetric as indicated by the round brackets enclosing the indices, this expression holds for all permutations of the indices 1 and 2. In Tab. <ref> we summarize the properties of the Higgs multiplets. Now we seek to determine the domain of the Higgs multiplets. To achieve this we construct the Higgs multiplets in the adjoint representation in an analogous way as outlined in <cit.> for the case of Higgs-boson doublets. The case of three Higgs-boson doublets has been studied in detail in <cit.> and in <cit.> the case of an arbitrary number of doublets is discussed. Let us denote by n_m the present number of copies of Higgs-boson multiplets with multiplicity m. In particular n_1 denotes the number of singlets, which carry electroweak isospin T=0, that is, T=0=T_3=0. Of course, singlets are by definition gauge invariant. More generally, for the case of n_m copies of Higgs bosons of multiplicity m we build the matrix .ψ^(m) = [ φ_1^(m)^; ⋮; φ_n_m^(m)^ ] = [ ϕ^1_1 ⋯ ϕ^m_1; ⋮ ⋮; ϕ^1_n_m ⋯ ϕ^m_n_m ]∈ (n_m × m) . Note that for the entries of this matrix, for instance ϕ^2_3, the subscript labels the number of the copy, whereas the superscript gives the component of this copy of the multiplet. For arbitrary multiplets the next step is to form from ψ^(m) the gauge-invariant matrix: K^(m) = ψ^(m)ψ^(m)^† = [ φ_1^(m)^†φ_1^(m) ⋯ φ^(m)_n_m^†φ^(m)_1; ⋱ ; φ_1^(m)^†φ^(m)_n ⋯ φ^(m)_n_m^†φ^(m)_n_m ]∈ (n_m × n_m) . By construction, the quadratic, gauge-invariant matrix K^(m) is hermitian, has rank smaller than or equal to m and is positive semidefinite. The rank condition follows from the fact that the (n_m × m) matrix ψ^(m) has trivially a rank smaller than or equal to m, whereas hermiticity and semi definiteness follow directly from the definition of the matrix K^(m). As an example let us consider the two-Higgs triplet model. The two triplets correspond to T=1 with multiplicity m=3. We have in this case ψ^(3) = [ φ^(3)_1^; φ^(3)_2^ ] = [ ϕ^1_1 ϕ^2_1 ϕ^3_1; ϕ^1_2 ϕ^2_2 ϕ^3_2 ]. If we assign the hypercharge Y=1 to these two triplets we will have the charges of the components, ψ^(3) = [ φ^(3)_1^; φ^(3)_2^ ] = [ ϕ^++_1 ϕ^+_1 ϕ^0_1; ϕ^++_2 ϕ^+_2 ϕ^0_2 ]. The matrix K^(3) defined in (<ref>) for the case ψ^(3) in (<ref>) is of dimension 2 × 2 and reads K^(3) = ψ^(3)ψ^(3)^† = [ φ^(3)_1^†φ^(3)_1 φ^(3)_2^†φ^(3)_1; φ^(3)_1^†φ^(3)_2 φ^(3)_2^†φ^(3)_2 ]. 
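The gauge invariance of the bilinear matrix K^(3) can be checked numerically: a gauge transformation acts on the components of each triplet copy through the spin-1 representation of SU(2) together with a U(1)_Y phase, and cancels in ψ^(3)ψ^(3)†. A hedged numerical sketch (our own illustration; the explicit spin-1 generators below are a standard choice, not taken from the text):

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)

# two Higgs triplets (copies) with random complex components, arranged as in psi^(3)
psi3 = rng.standard_normal((2, 3)) + 1j*rng.standard_normal((2, 3))

# spin-1 (triplet) representation of the SU(2) generators, plus a U(1)_Y phase with Y = 1
T1 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])/np.sqrt(2)
T2 = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]])/np.sqrt(2)
T3 = np.diag([1.0, 0.0, -1.0])
alpha, Y, chi = rng.standard_normal(3), 1.0, 0.7
Ug = expm(1j*(alpha[0]*T1 + alpha[1]*T2 + alpha[2]*T3))*np.exp(1j*Y*chi)

psi3_gauged = psi3 @ Ug.T                # each copy transforms as phi -> Ug phi

K3 = psi3 @ psi3.conj().T                # bilinear matrix K^(3) = psi psi^dagger
K3_gauged = psi3_gauged @ psi3_gauged.conj().T
print(np.allclose(K3, K3_gauged))        # True: the bilinears are gauge invariant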
The next step is to write the gauge-invariant bilinear matrix K^(m) for each set of the n_m multiplets in a basis of the unit matrix and the (generalized) Pauli matrices, K^(m) = 1/2 K^(m)_αλ_α, α∈{0, …, n_m^2-1} . Here we have added to the generalized Pauli matrices the conveniently scaled identity matrix written with an index zero, λ_α with α∈{0, …, n_m^2-1}, λ_0 = √(2/n_m)_n_m . An explicit scheme to construct the generalized Pauli matrices can for instance be found in <cit.>. The matrices λ_α satisfy the equations, (λ_αλ_β) = 2 δ_αβ, (λ_α) = √(2 n_m)δ_α 0 . Multiplying (<ref>) on both sides with (generalized) Pauli matrices and taking traces we can express the bilinears in terms of the Higgs multiplets, K^(m)_α = ( K^(m)λ_α) . These bilinears are real, gauge invariant expressions. In our example of two Higgs triplets we find for the real gauge-invariant bilinears explicitly K^(3)_0 = φ^(3)_1^†φ^(3)_1 + φ^(3)_2^†φ^(3)_2, K^(3)_1 = φ^(3)_1^†φ^(3)_2 + φ^(3)_2^†φ^(3)_1, K^(3)_2 = i φ^(3)_2^†φ^(3)_1 - i φ^(3)_1^†φ^(3)_2, K^(3)_3 = φ^(3)_1^†φ^(3)_1 - φ^(3)_2^†φ^(3)_2 . Let us determine the domain of the real bilinears K^(m)_α, α∈{0, …, n_m^2-1}. From the explicit construction we get for the zero component K^(m)_0 = ( K^(m)λ_0) = √(2)/n_m( φ_1^(m)^†φ^(m)_1 + … + φ^(m)_n_m^†φ^(m)_n_m) and we find immediately K^(m)_0 ≥ 0 . The rank condition (K^(m)) ≤ m translates to the bilinears as follows (here we follow closely <cit.>): Since ψ^(m) has trivially a rank smaller than or equal to m, this holds also for the bilinear matrix K^(m)=ψ^(m)ψ^(m)^†. We can diagonalize K^(m) by a unitary similarity transformation, U K^(m) U^† = (κ_1, …, κ_m, 0, …, 0) . Positive semidefiniteness translates for the eigenvalues to κ_i ≥ 0, i ∈{1, …, m} . For any hermitean matrix K^(m)∈ (n_m × n_m) with eigenvalues κ_i we can introduce the symmetric sums, s_0 = 1, s_k = ∑_1 ≤ i_1 < i_2 < … < i_k ≤ n_mκ_i_1κ_i_2⋯κ_i_k, k ∈{1, …, n_m} . In particular we have s_0 = 1, s_1 = κ_1 + ⋯ + κ_n_m and s_n_m = κ_1 ·…·κ_n_m = (K^(m)). The symmetric sums can be expressed recursively in terms of basis-independent traces of powers of the bilinear matrix <cit.>, s_0 = 1, s_k = 1/k∑_i=1^k (-1)^i-1 s_k-i( (K^(m))^i), k ∈{1, …, n_m} . Explicitely, the first symmetric sums read s_0 = 1, s_1 = (K^(m)) = √(n_m/2) K^(m)_0, s_2 = 1/2( ^2(K^(m)) - ((K^(m))^2) ), s_3 = 1/6( ^3(K^(m)) - 3 ((K^(m))^2)(K^(m)) + 2 ((K^(m))^3) ) . As stated before, the matrix K^(m) has rank smaller than or equal to m. Having K^(m) of rank r, with 0 ≤ r ≤ m, this is equivalent to the first r symmetric sums positive and the remaining ones vanishing. This can be shown as follows: Suppose we have a matrix K^(m) of rank r, 0 ≤ r ≤ m. The number of positive, non-vanishing eigenvalues is therefore r and without loss of generality we set κ_i > 0 for i ∈{1, …, r}, κ_r+1 = κ_r+2= … = κ_n_m = 0 . From the definition of the symmetric sums in (<ref>) we see that s_k = 0 for k>r since each summand in the sum is formed from a product of k different eigenvalues, where at least one is vanishing. Therefore, for a matrix K^(m) with rank 0 ≤ r ≤ m we have s_i > 0 for i ∈{1, …, r}, s_r+1 = s_r+2= … = s_n_m = 0 . Vice versa, suppose we have the conditions for the symmetric sums as given in (<ref>). From the definition of the sums we see that starting with the last symmetric sum condition, s_n_m=0, that at least one eigenvalue, say κ_n_m, without loss of generality, has to vanish. Recursively from the preceding condition we find that succesively one more eigenvalue has to vanish. 
Eventually we find all the equality conditions in (<ref>) to be satisfied. It remains to be shown that for s_i > 0 with i ∈{1, …, r} follows κ_i >0. This can be proven by induction. The induction base follows from considering r=1: For s_1>0 we have κ_1 = s_1>0 and this forms the base of induction. Now we assume that for s_i >0, i ∈{1, …, r} it follows that κ_i >0. Under this assumption we have to show that this holds for r+1. We can see this by considering the last condition, that is, s_r+1 = κ_1 ·…·κ_r ·κ_r+1 > 0. Under the assumption κ_1 ·…·κ_r >0 follows κ_r+1 > 0, which completes the proof. With these preparations we can now find the domain of the bilinears. Each Higgs multiplet of multiplicity m consists of m complex components, therefore, the domain is φ^(m)∈C^m. Of course, the domain is not unique with respect to gauge symmetries. For the general case, where we have n_m Higgs multiplets of multiplicity m, we have the domain ψ^(m)∈C^n_m × m. The bilinears on the other hand are constructed from the bilinear matrix, that is, K^(m) = ψ^(m)ψ^(m)^† of dimension n_m × n_m with rank smaller than or equal to m. Therefore, the domain of the bilinears with respect to n_m Higgs multiplets of multiplicity m is given in terms of the symmetric sums s_i from (<ref>), s_i ≥ 0 for i ∈{1, …, m} , s_i = 0 for i ∈{m+1, …, n_m} . The expressions for the symmetric sums in terms of bilinears are given in (<ref>). In this way we have found the domain of the multiplets in a gauge-invariant way. Let us see this in some examples: For n_1 Higgs complex singlets corresponding to multiplicity m=1, that is, φ^(1)_1, …, φ^(1)_n_1 we have ψ^(1) of rank smaller than or equal to one. Therefore, in terms of bilinears, we have s_1 ≥ 0 that is K^(1)_0 ≥ 0 as well as s_2=0, …, s_n_1=0. The case of doublets has been discussed in terms of bilinears in <cit.>. Let us recall the results. In case of n_2 Higgs-boson doublets, we have to have K^(2) of rank smaller than or equal to two. This gives s_1= K_0 ≥ 0 as well as s_2 ≥ 0 and s_3= … = s_n_2 =0. For the special case n_2=2, that is, the 2HDM we have, for instance, the domain of the bilinears K^(2)_0 ≥ 0, s_2= 1/2 (^2(K^(2)) - ((K^(2))^2)) = 1/4[ (K^(2)_0)^2 - (K^(2)_1)^2 -(K^(2)_2)^2-(K_3^(2))^2 ] ≥ 0. Note that we can always express the entries of the bilinear matrix K^(m) in terms of bilinear coefficients K^(m)_α. Let us recall that the Higgs-boson multiplets play a special role in particle physics since they provide in particular the masses to the fermions. In the Standard Model, for instance, one Higgs-boson doublet provides the masses to all fermions. In the two-Higgs doublet model, 2HDM, where the matrix K^(2) is of rank smaller than or equal to two, the observed electroweak symmetry breaking with a neutral vacuum of the doublets singles out the rank one case. The rank two case correspond to a completely broken electroweak symmetry and the rank zero case corresponds to an unbroken electroweak symmetry. With view at the Gell-Mann-Nishijima formula, we see that for any assignment of hypercharge to a multiplet the electric charge drops by one unit from top to down of the components. This means that we can have at most one neutral component of the Higgs-boson multiplets. In case only the neutral component acquires a non-vanishing vacuum-expectation value at a minimum of the potential, this therefore corresponds to the case of rank one of the matrix K^(m). It is convenient to separate these different cases corresponding to different rank conditions. 
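Both the trace recursion for the symmetric sums and the 2HDM domain statement just given are easy to confirm numerically: s_2 obtained from the recursion equals (K_0^2 - K_1^2 - K_2^2 - K_3^2)/4, it vanishes for a rank-one configuration (second doublet proportional to the first, as for a neutral vacuum), and it is strictly positive for a generic rank-two configuration. A short numerical sketch (our own illustration, not from the text):

import numpy as np

rng = np.random.default_rng(2)
sigma = [np.eye(2), np.array([[0, 1], [1, 0]]), np.array([[0, -1j], [1j, 0]]), np.diag([1.0, -1.0])]

def checks(psi2):
    K = psi2 @ psi2.conj().T                                # bilinear matrix of the two doublets
    K_alpha = np.array([np.trace(K @ s).real for s in sigma])
    s1 = np.trace(K).real                                   # s_1 = tr K = K_0 for n_2 = 2
    s2 = 0.5*(np.trace(K).real**2 - np.trace(K @ K).real)   # trace recursion for s_2
    return s1, s2, 0.25*(K_alpha[0]**2 - K_alpha[1]**2 - K_alpha[2]**2 - K_alpha[3]**2)

phi1 = rng.standard_normal(2) + 1j*rng.standard_normal(2)
phi2 = rng.standard_normal(2) + 1j*rng.standard_normal(2)

print(checks(np.array([phi1, phi2])))                 # generic rank 2: s_2 > 0
print(checks(np.array([phi1, (0.3 - 1.2j)*phi1])))    # rank 1 (neutral-vacuum-like): s_2 ~ 0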
We refer to the studies mentioned above for a detailed discussion of this subject in the 2HDM and the nHDM. In analogy to the nHDM this can be done by imposing Lagrange multipliers ensuring the rank conditions. § THE HIGGS MULTIPLET POTENTIAL We now consider the general case of a multi-Higgs potential, where, corresponding to each multiplicity m with m=1, 2, 3, …, we have n_m copies of Higgs-boson multiplets. In the potential we typically would like to consider only terms up to the fourth power of Higgs-boson fields, since otherwise we would spoil renormalizability. However, with respect to the analysis here, we are not restricting the order of the potential. Starting with n_1 copies of Higgs-boson singlets, there are, due to the singlet nature, no restrictions with respect to symmetry for any term which may appear in the potential. As singlets have isospin zero, their hypercharge equals their electric charge. Therefore, gauge invariance imposes the restriction that each potential term must have a total electric charge of zero. Concerning multiplets with multiplicity larger than one, we consider their irreducible totally symmetric tensor product representations. As we have seen, a multiplet of multiplicity m can be written in the symmetric representation with m-1 SU(2) indices, Δ_(i_1… i_m-1). Writing all multiplets in this form allows us to build in a convenient way all gauge-invariant expressions which may appear in the potential. These multiplets transform in each index in the fundamental representation of SU(2). With the index notation borrowed from <cit.> we write conjugated multiplets with upper indices, Δ^*i_1, …, i_m-1≡ (Δ_i_1, … i_m-1)^* with i_k ∈{1,2} . For instance, we write the anti quadruplet in the form Δ^*(ijk). The conjugated multiplets transform under SU(2) in every upper index in the anti-fundamental representation. Having the notation with upper and lower indices available, any product of multiplets with paired, that is, contracted indices, one above and one below, ensures invariance under SU(2) transformations. The hypercharge of the conjugated multiplets is also opposite to the original multiplets. An example, where we have one copy of the Higgs-boson doublet, is the potential of the Standard Model, where a quadratic term is conventionally written as φ^†φ. In the symmetric representation we write this term Δ^*iΔ_i. This term is manifestly gauge-invariant since the index i is contracted and any hypercharge assigned to the doublet appears once with positive and once with negative sign. Another example is the product Δ_i Δ_j Δ^* ij denoting an SU(2)-invariant product of two doublets with one triplet. In general we have to take care that the assignments of the hypercharges result in total hypercharge zero in every tensor product forming a term of the potential. For instance, if we assign to the doublets Δ_i the hypercharge +1/2 and to the triplet Δ_(ij) the hypercharge +1, we find the expression Δ_i Δ_j Δ^* ij to be gauge-invariant. We see at this point that we can form terms exclusively built from an odd number of copies of multiplets of the same multiplicity only if we assign to them a vanishing hypercharge. On the other hand, forming tensor products of different multiplicities, we can form odd terms with non-vanishing hypercharges. An example is the term Δ_i Δ_j Δ^* ij with the appropriate assignments of hypercharge given in the last paragraph.
We can also form fundamental representations from anti-fundamental representations, that is, we can lower indices by applying the epsilon tensor, ϵ_ijΔ^* … j … with ϵ = i σ_2 = [ +0 1; -1 0 ] . An expression of this form transforms with respect to the index i in the fundamental representation of SU(2). An example is a term Δ_j Δ_k Δ^*mϵ_miΔ^* ijk which is invariant under SU(2). Eventually we arrive at the general recipe to construct a gauge invariant potential: * Write all multiplets beyond singlets as totally symmetric tensors, Δ_i, Δ_(ij), Δ_(ijk), …. * Form conjugated multiplets with upper indices Δ^*i, Δ^* (ij), Δ^*(ijk), …. * Build products of the multiplet terms only with contracted indices. Here we consider that indices of the conjugated multiplets can be lowered by multiplying with ϵ = i σ_2. Singlets have no further restriction in this respect since they do not carry an SU(2) index. * We ensure that the assigned hypercharge in every tensor product adds to zero. We now would like to illustrate this approach in some examples. Let us again consider the potential of the Standard Model employing one copy of a Higgs-boson doublet φ^(2). In the symmetric representation we write the doublet as Δ_i. We form the conjugated doublet Δ^* i and find the well-known, most general potential up to order four, V_SM = μ^2 Δ^* iΔ_i + λ (Δ^* iΔ_i)^2 . Next, let us consider the two-Higgs doublet model: in this case we have two copies of Higgs-boson doublets which we write as Δ_1 i and Δ_2 i with the additional lower index labelling the number of the copy. The most general potential reads V_2HDM = μ^2_abΔ_a^* iΔ_b i + λ_abcdΔ_a^* iΔ_b iΔ_c^* jΔ_d j , with a,b,c,d ∈{1, 2} . Note that we have to sum over pairs of SU(2) indices i, j, as well as over pairs of the copy labels a, b, c, d. Extending the range of the indices labelling the copies of the doublets, we can generalize the potential to cases with an arbitrary number of doublets. Eventually let us consider the case of one doublet corresponding to m=2 with the assignment of hypercharge +1/2 and one triplet, that is m=3 with hypercharge +1. We find the most general gauge-invariant potential up to order four, V_23 = μ^2 Δ^* iΔ_i + M^2 Δ^* ijΔ_ij + c_1 Δ_ijΔ^* iΔ^* j + c_1^* Δ^* ijΔ_iΔ_j + λ_1 Δ^* iΔ_iΔ^* jΔ_j + λ_2 Δ^* ijΔ_ijΔ^* klΔ_kl + λ_3 Δ^* ijΔ_jkΔ^* klΔ_li + λ_4 Δ^* ijΔ_ijΔ^* kΔ_k .
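The building blocks of V_23 can be checked numerically. Representing the triplet as a symmetric 2×2 tensor Δ_(ij) transforming as Δ → U Δ U^T under SU(2), and assigning the hypercharges Y=+1/2 to the doublet and Y=+1 to the triplet, the cubic contraction Δ_i Δ_j Δ^* ij is invariant under the full gauge group. A hedged numerical sketch with random field values and a random gauge element (our own illustration, not from the text):

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
sigma = [np.array([[0, 1], [1, 0]]), np.array([[0, -1j], [1j, 0]]), np.diag([1.0, -1.0])]

# random SU(2) element and a U(1)_Y phase parameter
a = rng.standard_normal(3)
U = expm(1j*sum(ai*si for ai, si in zip(a, sigma))/2)
chi = 0.4

# random doublet Delta_i (Y = +1/2) and symmetric triplet Delta_(ij) (Y = +1)
d = rng.standard_normal(2) + 1j*rng.standard_normal(2)
T = rng.standard_normal((2, 2)) + 1j*rng.standard_normal((2, 2))
T = 0.5*(T + T.T)                             # totally symmetric in its two SU(2) indices

def invariant(d, T):
    return d @ T.conj() @ d                   # Delta_i Delta_j Delta*^{ij}

d_g = np.exp(1j*0.5*chi)*(U @ d)              # doublet: fundamental rep, hypercharge 1/2
T_g = np.exp(1j*1.0*chi)*(U @ T @ U.T)        # triplet: each lower index fundamental, hypercharge 1
print(np.isclose(invariant(d, T), invariant(d_g, T_g)))   # True: gauge-invariant cubic term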
Under basis changes, the symmetric representations of the multiplets Δ, where we suppress the symmetric  indices, transform in the same way as the multiplets in the (anti) fundamental representation, that is, Δ^(m)_a → U^(m)_abΔ^(m)_b , Δ^(m)*_c →Δ^(m)*_d U^(m)†_dc. The indices a, b, c, d label the copies of the multiplets. We may consider basis changes of only specific multiplets, or all multiplets. In a model of two Higgs-boson doublets and two Higgs-boson triplets we could for instance be interested in a basis change of the two doublets and/or of the two triplets. A Higgs potential is invariant under a change of basis if every gauge-invariant term is invariant. Let us consider a general term built from arbitrary multiplets, λ^a_1… a_n c_1 … c_mΔ^(m_1)_a_1…Δ^(m_n)_a_nΔ^*(m_n+1)_c_1…Δ^*(m_n+m)_c_m. We emphasize that the multiplets can have different multiplicities indicated by the superscript index, whereas the lower indices a_k and c_k label the copy. This potential term is invariant under a changes of bases, (<ref>), if we simultaneously transform the parameter as λ^a_1… a_n c_1 … c_m→ U^(m_n+1)_c_1 d_1… U^(m_n+m)_c_m d_mλ^b_1… b_n d_1 … d_m U^(m_1) †_b_1 a_1… U^(m_n) †_b_n a_n . The unitary transformations correspond to the copies present with respect to the multiplets. Let us also consider terms of the potential written in terms of the adjoint representation, that is in terms of bilinears, which transform as shown in (<ref>). Let us consider an arbitrary term written in terms of bilinears, ξ^μ_1 …μ_n K^(m_1)_μ_1… K^(m_n)_μ_n . This term is invariant under changes of bases, if we simultaneously with the changes of bases transform the parameter as ξ^μ_1 …μ_n→ξ^ν_1 …ν_n R^(m_1) _ν_1 μ_1… R^(m_n) _ν_n μ_n Also, we may have mixed terms in the potential written in terms of bilinears and symmetric representations. Then the indices of the parameters have to transform accordingly, that is, every copy index corresponding to a multiplet in the symmetric representation with respect to the unitary transformations, and every index with respect to a bilinear with respect to an orthogonal rotation. § SYMMETRIES OF THE HIGGS-MULTIPLET POTENTIAL Similar to the study of basis transformations we can find an appropriate treatment of symmetries of the Higgs-multiplet potential. Let us mention that this is a generalization of the study of symmetries in the 2HDM potential; see for instance <cit.>. First of all we note that in a Higgs-multiplet model we have in particular to consider the kinetic terms of the Higgs multiplets. Therefore we have the condition that any symmetry transformation of the multiplets must be respected by the corresponding kinetic terms. Apart from a transformation of the space-time argument of the Higss-multiplet fields, any symmetry transformation must therefore be a subgroup of an unitarity transformation. For the n_m copies of multiplets with multiplicity m in the fundamental representation, the symmetry transformation extends the basis change transformations (<ref>) by transformation of the argument, that is, ψ^(m)(x) →ψ'^(m)(x) = U^(m)ψ^(m)(x') , with U^(m)∈ (n_m × n_m) . For the bilinears, the unitary transformation corresponds to an orthogonal transformation besides the transformation of the space-time argument x → x'; compare with (<ref>), K'^(m)_α(x) = X^(m)_αβ K^(m)_β(x') α, β∈{0, …, n_m^2-1} . 
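The matrix R^(m)(U^(m)) can be constructed explicitly from R_αβ = (1/2) tr(U^†λ_α U λ_β), and its stated properties, together with the transformation law of the bilinears, are easy to confirm numerically. A short sketch for two copies, where the λ_a are the Pauli matrices (our own illustration, not from the text):

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(4)
pauli = [np.array([[0, 1], [1, 0]]), np.array([[0, -1j], [1j, 0]]), np.diag([1.0, -1.0])]

# a random unitary change of basis among two copies of a multiplet
H = rng.standard_normal((2, 2)) + 1j*rng.standard_normal((2, 2))
U = expm(1j*0.5*(H + H.conj().T))

# R_ab = 1/2 tr(U^dagger lambda_a U lambda_b)
R = np.array([[0.5*np.trace(U.conj().T @ pa @ U @ pb).real for pb in pauli] for pa in pauli])
print(np.allclose(R @ R.T, np.eye(3)), np.isclose(np.linalg.det(R), 1.0))   # orthogonal, det = +1

# the bilinears transform as K'_a = R_ab K_b
psi = rng.standard_normal((2, 3)) + 1j*rng.standard_normal((2, 3))    # e.g. two triplet copies
K_a = lambda p: np.array([np.trace(p @ p.conj().T @ pa).real for pa in pauli])
print(np.allclose(R @ K_a(psi), K_a(U @ psi)))                        # True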
Generalizing (<ref>), any symmetry transformation should be of the form K^(m)_0(x) → K'^(m)_0(x) = K^(m)_0(x'), K^(m)_a(x) → K^'(m)_a(x) = X^(m)_ab K^(m)_b(x'), X^(m) = (X^(m)_ab) ∈ O(n_m^2-1) , a, b ∈{1, …, n_m^2-1} . Note that for any symmetry transformation matrix X^(m) of the bilinears, there is, except for gauge transformation, a unique unitary transformation U; compare with (<ref>) and appendix <ref>, U^(m)^†λ_a U^(m) = X^(m)_a bλ_b . Eventually, we consider the irreducible symmetric representations. Taking the transformation of the argument into account we want to consider the symmetry transformations; compare with (<ref>), Δ^(m)_a(x) → U^(m)_abΔ^(m)_b(x') , Δ^(m)*_c(x) →Δ^(m)*_d(x') U^(m)†_dc. Note that the indices a, b, c, d label the copies of the multiplets. Now, a Higgs potential is invariant under a symmetry transformation, if every gauge-invariant term is invariant. Let us consider a general term build from arbitrary copies of multiplets in the symmetric representation, λ^a_1… a_n c_1 … c_mΔ^(m_1)_a_1(x) …Δ^(m_n)_a_n(x) Δ^*(m_n+1)_c_1(x) …Δ^*(m_n+m)_c_m(x). We emphasize again that the multiplets can have different multiplicities written as an additional upper index. We consider here the general case of n multiplets in the fundamental and m multiplets in the anti-fundamental representation. The term (<ref>) is invariant under the symmetry transformation (<ref>) if the parameter fulfills the condition λ^a_1… a_n c_1 … c_m = U^(m_n+1)_c_1 d_1… U^(m_n+m)_c_m d_mλ^b_1… b_n d_1 … d_m U^(m_1) †_b_1 a_1… U^(m_n) †_b_n a_n . Similar, we find the condition for a term of the potential written in terms of the adjoint representation, that is in terms of bilinears. Let us consider an arbitrary term written in terms of bilinears, ξ^μ_1 …μ_n K^(m_1)_μ_1(x) … K^(m_n)_μ_n(x) . This term is invariant under the transformation (<ref>), if the bilinear parameter fulfills the condition ξ^μ_1 …μ_n = ξ^ν_1 …ν_n X^(m_1) _ν_1 μ_1… X^(m_n) _ν_n μ_n . Also, we may have mixed terms in the potential written in terms of bilinears and symmetric representations. Then the condition of invariance follow from the parameters: Under the transformation with respect to every index corresponding to a symmetric representation, and every index with respect to a bilinear the parameter have to stay invariant. Let us turn to spontaneous symmetry breaking. This refers to the case of a symmetry of the potential, that is, we have a symmetry respected by the potential but not by the vacuum. In the case of a symmetric representation of the Higgs multiplets let us suppose we have a symmetry transformation (<ref>) such that (<ref>) holds. Similar, for the case of potential terms written in bilinear form, we assume to have a transformation (<ref>) such that (<ref>) holds (and similar for the mixed case). The symmetry is often called explicit in this context. For a vacuum of the potential we can then check whether this symmetry is respected. This requires for the multiplets in the symmetric representation, ⟨Δ^(m)_a ⟩ = U^(m)_ab⟨Δ^(m)_b ⟩ , ⟨Δ^(m)*_c ⟩ = ⟨Δ^(m)*_d ⟩ U^(m)†_dc, a,b,c,d ∈{1, …, n_m} . For the case of bilinear representations we have to check at the vacuum, ⟨ K^(m)_a ⟩ = X^(m)_ab⟨ K^(m)_b ⟩ , a,b ∈{1, …, n_m^2-1} . In the case that also (<ref>), respectively (<ref>) holds, the vacuum respects the symmetry, that is, the symmetry is not spontaneously broken. 
§ CP TRANSFORMATIONS IN THE HIGGS MULTIPLET MODEL As an application of the symmetry study in the last section let us consider the standard CP transformation of the multiplets. The CP transformation is defined by charge conjugation of the multiplets simultaneously with a parity transformation of the space-time argument, ψ^(m)(x) →ψ'^(m)(x) = ψ^(m)^*(x'), with x=(t, x), x'=(t, -x) . Similar to the terminology in work <cit.> we call these standard CP transformations in contrast to generalized CP transformations where simultaneously the multiplets are unitarily mixed. Indeed it has been shown that the simultaneous mixing of the multiplets can give new types of transformations which do not correspond to standard CP transformations in a different basis. The bilinear matrix K^(m)(x) transforms under the standard CP transformation as K^(m)(x) →K^'(m)(x) = ψ'^(m)(x) ψ'^(m)^†(x) = (ψ^(m)(x') ψ^(m)^†(x') )^ = K^(m)^(x') . For the bilinears we get K^(m)_α(x) → K^'(m)_α(x) = (K^'(m)(x) λ_α) = (K^(m)(x') λ_α^) = C̃^(m)_αβ K^(m)_β(x'), α, β∈{0, …, n_m^2-1} , where we define the matrix C̃^(m) by λ_α^ = C̃^(m)_αβλ_β . From the properties of the generalized Pauli matrices we find in particular, C̃^(m)_00 = 1 , and all off-diagonal terms of C̃^(m) vanishing. Therefore, we drop the trivial first row and the first column and define the matrix C^(m)=(C^(m)_a b) with a, b ∈{1, …, n_m^2-1}. Since the generalized Pauli matrices can be decomposed into symmetric and antisymmetric matrices, we get for the matrices C^(m), C^(m) = (± 1, …, ± 1) . The transformations of the bilinears are then, K^(m)_0(x) → K^(m)_0(x’), K^(m)_a(x) → C^(m)_ab K^(m)_b(x’) with λ_a^ = C̃^(m)_abλ_b, a, b ∈{1, …, n_m^2-1} . We see that, besides the space-time transformation of the argument of the multiplet fields, CP transformations give proper or improper rotations, depending on the number of symmetric and antisymmetric generalized Pauli matrices. For example, for the cases of two or three copies of multiplets of the same multiplicity we find reflection symmetry transformations, that is, matrices C^(m) with determinant minus one. However, for instance in the case of four or five multiplets of the same multiplicity we can see that the standard CP transformations correspond to proper rotations in bilinear space. Eventually, the irreducible symmetric representations transform under standard CP transformations as Δ^(m)_a(x) →Δ^*(m)_a(x') , Δ^*(m)_c(x) →Δ^(m)_c(x'). Note that we have suppressed again the indices. Having formulated the standard CP transformation in terms of the bilinears, (<ref>), and in terms of the symmetric representation, (<ref>) we can verify the potential with respect to this symmetry. As discussed in the last section this requires to check the conditions (<ref>) in the case of the symmetric representation, respectively (<ref>) in the case of bilinear representations. Note that this study of standard CP transformations can directly be generalized to non-standard CP transformations, where an arbitrary unitary mixing is permitted - in addition to the complex conjugation and the parity transformation of the field argument. In the case we have a potential explicitly invariant under standard CP transformations, the conditions (<ref>), respectively (<ref>) determine whether this symmetry is respected by the vacuum. Explicitly, these conditions for the standard CP symmetry considered here read ⟨Δ^(m)_a ⟩ = ⟨Δ^*(m)_a ⟩ . for the symmetric representations. 
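The sign matrices C^(m) defined above are fixed entirely by which generalized Pauli matrices are symmetric and which are antisymmetric; from λ_a^T = C̃_ab λ_b one gets C_ab = 1/2 tr(λ_a^T λ_b). A minimal NumPy sketch for n_m = 2 (illustrative only):

import numpy as np

sig = np.array([[[0, 1], [1, 0]],
                [[0, -1j], [1j, 0]],
                [[1, 0], [0, -1]]])            # Pauli matrices, n_m = 2

# C_ab from  sigma_a^T = C_ab sigma_b,  using tr(sigma_a sigma_b) = 2 delta_ab
C2 = 0.5 * np.real(np.einsum('aji,bji->ab', sig, sig))
print(np.diag(C2))                             # [ 1. -1.  1.]  ->  C^(2) = diag(1, -1, 1)
print(np.linalg.det(C2))                       # -1 (up to rounding): an improper rotation for two copies
# the triplet case works the same way with the eight Gell-Mann matrices (the +/-1 pattern
# then depends on the chosen ordering of the generators)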
In the case of bilinears we have the conditions ⟨ K^(m)_a ⟩ = C^(m)_a b⟨ K^(m)_b ⟩ . Of course, (<ref>) and (<ref>) require to minimize the potential. As an example let us consider the standard CP transformations in the case of n_2 copies of Higgs-boson doublets. We write the potential in the symmetric representation, V_NHDM = μ^2_abΔ_a^* iΔ_b i + λ_abcdΔ_a^* iΔ_b iΔ_c^* jΔ_d j . with a,b,c,d ∈{1, … n_2} . The potential has to be real, requiring for the quadratic term that we must have μ^2^*_abΔ_a iΔ_b^* i = μ^2_abΔ_a^* iΔ_b i = μ^2^*_baΔ_a^* iΔ_b i, that is, μ^2_ab = μ^2^*_ba. Similar we have λ_abcd = λ_badc^*. The CP transformation (<ref>) is given by Δ_a i→Δ_a^* i and Δ_a^* i→Δ_a i. Using the relation μ^2_ab = μ^2^*_ba we find μ^2_abΔ_a iΔ_b^* i = μ^2^*_abΔ_a iΔ_b^* i = μ^2_abΔ_a iΔ_b^* i, that is, the condition μ^2_ab = μ^2^*_ab as expected. Similar we get the condition λ_abcd = λ_abcd^*. § EXAMPLE POTENTIAL OF TWO HIGGS-BOSON DOUBLETS AND THREE TRIPLETS Let us illustrate the formalism in an explicit example of a potential with two Higgs-boson doublets, φ_1^(2) = [ ϕ_1^1; ϕ_1^2 ], φ_2^(2) = [ ϕ_2^1; ϕ_2^2 ] . and three Higgs-boson triplets, φ_i^(3) = [ ϕ_i^1; ϕ_i^2; ϕ_i^3 ], i ∈{1,2, 3} . This corresponds to n_2=2 and n_3=3. Usually we want the potential, as well as the complete Lagrangian of the model to be restricted by a symmetry. For a physically viable model this is typically required since, for instance, for two Higgs-boson doublets, the most general Yukawa couplings would lead to large flavor-changing neutral currents - which have never been observed. Here we will assume a specific potential of the form V_example = m_2^2 (φ_1^(2)^†φ_1^(2) + φ_2^(2)^†φ_2^(2)) + m_3^2 (φ_1^(3)^†φ_1^(3) + φ_2^(3)^†φ_2^(3) + φ_3^(3)^†φ_3^(3)) + λ_2 (φ_1^(2)^†φ_1^(2) + φ_2^(2)^†φ_2^(2))^2 + λ_3 ( φ_1^(3)^†φ_1^(3) + φ_2^(3)^†φ_2^(3) + φ_3^(3)^†φ_3^(3))^2 + λ_23 (φ_1^(2)^†φ_2^(2) + φ_2^(2)^†φ_1^(2)) · (φ_1^(3)^†φ_2^(3) + φ_2^(3)^†φ_1^(3)) We translate the Higgs-doublet and triplet fields to the bilinear formalism. We form the 2 × 2 matrices ψ^(2) and the 3 × 3 matrices ψ^(3), ψ^(2) = [ φ_1^(2)^; φ_2^(2)^ ], ψ^(3) = [ φ_1^(3)^; φ_2^(3)^; φ_3^(3)^ ] . Then we can built the bilinear 2 × 2, respectively, 3 × 3 matrices, K^(2) = ψ^(2)ψ^(2)^† = [ φ_1^(2)^†φ_1^(2) φ_2^(2)^†φ_1^(2); φ_1^(2)^†φ_2^(2) φ_2^(2)^†φ_2^(2) ], K^(3) = ψ^(3)ψ^(3)^† = [ φ_1^(3)^†φ_1^(3) φ_2^(3)^†φ_1^(3) φ_3^(3)^†φ_1^(3); φ_1^(3)^†φ_2^(3) φ_2^(3)^†φ_2^(3) φ_3^(3)^†φ_2^(3); φ_1^(3)^†φ_3^(3) φ_1^(3)^†φ_2^(3) φ_3^(3)^†φ_3^(3) ] . Now the bilinears follow in the basis of the Pauli and the Gell-Mann matrices together with the identity matrices σ_0 = _2, and λ_0 = √(2/3)_3, K^(2)_α = ( K^(2)σ_α), α∈{0, …, 3}, K^(3)_β = ( K^(3)λ_β), β∈{0, …, 8} . Explicitly, we find K^(2)_0 = φ^(2)_1^†φ^(2)_1 + φ^(2)_2^†φ^(2)_2, K^(2)_1 = φ^(2)_1^†φ^(2)_2 + φ^(2)_2^†φ^(2)_1, K^(2)_2 = i φ_2^(2)^†φ^(2)_1 - i φ^(2)_1^†φ^(2)_2, K^(2)_3 = φ^(2)_1^†φ^(2)_1 - φ^(2)_2^†φ^(2)_2 , and K^(3)_0 = √(2/3) ( φ^(3)_1^†φ^(3)_1 + φ^(3)_2^†φ^(3)_2 + φ^(3)_3^†φ^(3)_3 ), K^(3)_1 = φ^(3)_1^†φ^(3)_2 + φ^(3)_2^†φ^(3)_1, K^(3)_2 = φ^(3)_1^†φ^(3)_3 + φ^(3)_3^†φ^(3)_1, K^(3)_3 = φ^(3)_2^†φ^(3)_3 + φ^(3)_3^†φ^(3)_2, K^(3)_4 = i φ^(3)_2^†φ^(3)_1 - i φ^(3)_1^†φ^(3)_2, K^(3)_5 = i φ^(3)_3^†φ^(3)_1 - i φ^(3)_1^†φ^(3)_3, K^(3)_6 = i φ^(3)_3^†φ^(3)_2 - i φ^(3)_2^†φ^(3)_3, K^(3)_7 = φ^(3)_1^†φ^(3)_1 - φ^(3)_2^†φ^(3)_2, K^(3)_8 = 1/√(3) ( φ^(3)_1^†φ^(3)_1 + φ^(3)_2^†φ^(3)_2 - 2 φ^(3)_3^†φ^(3)_3 ) . 
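As a quick numerical consistency check of the explicit doublet components just listed, the following snippet (illustrative, not part of the derivation) builds K^(2) = ψ^(2) ψ^(2)† for two random doublets and compares K^(2)_α = tr(K^(2) σ_α) with the component formulas:

import numpy as np

rng = np.random.default_rng(2)
phi1, phi2 = (rng.normal(size=2) + 1j * rng.normal(size=2) for _ in range(2))

sig = np.array([[[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]]])
psi = np.array([phi1, phi2])                   # rows are the (transposed) doublets
Kmat = psi @ psi.conj().T                      # the 2x2 bilinear matrix K^(2)

K0 = np.real(np.trace(Kmat))
K1, K2, K3 = (np.real(np.trace(Kmat @ s)) for s in sig)

assert np.isclose(K0, np.vdot(phi1, phi1) + np.vdot(phi2, phi2))
assert np.isclose(K1, np.vdot(phi1, phi2) + np.vdot(phi2, phi1))
assert np.isclose(K2, 1j * np.vdot(phi2, phi1) - 1j * np.vdot(phi1, phi2))
assert np.isclose(K3, np.vdot(phi1, phi1) - np.vdot(phi2, phi2))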
The inversion of these relations allow us to replace all the scalar products of the type φ^(m)_i^†φ^(m)_j by the bilinears, φ^(2)_1^†φ^(2)_1 = 1/2( K_0^(2) + K_3^(2)), φ^(2)_2^†φ^(2)_1 = 1/2( K_1^(2) - i K_2^(2)), φ^(2)_2^†φ^(2)_2 = 1/2( K_0^(2) - K_3^(2)) , and φ^(3)_1^†φ^(3)_1 = K_0^(3)/√(6) + K_7^(3)/2 +K_8^(3)/2√(3), φ^(3)_2^†φ^(3)_1 = 1/2( K_1^(3) - i K_4^(3)), φ^(3)_3^†φ^(3)_1 = 1/2( K_2^(3) - i K_5^(3)), φ^(3)_2^†φ^(3)_2 = K_0^(3)/√(6) - K_7^(3)/2 +K_8^(3)/2√(3), φ^(3)_3^†φ^(3)_2 = 1/2( K_3^(3) - i K_5^(3)), φ^(3)_3^†φ^(3)_3 = K_0^(3)/√(6) - K_8^(3)/√(3) . Note that we get the remaining expressions of the scalar products by recognizing that (φ_i^(m)^†φ_j^(m))^* = φ^(m)_j^†φ^(m)_i and recalling that the bilinears K_μ^(m) are real. Now we express the potential (<ref>) in terms of bilinears by using (<ref>), (<ref>), giving, V_example = m_2^2 K^(2)_0 + √(3/2) m_3^2 K^(3)_0 + λ_2 (K^(2)_0)^2 + 3/2λ_3 (K^(3)_0)^2 + λ_23 K^(2)_1 K^(3)_1 . We study the potential with respect to standard CP symmetries. From (<ref>) we find the explicit transformation matrices for the two doublets and the three triplets, C^(2) = (1, -1, 1), C^(3) = (1,1, -1,-1,-1,1, 1) . We can now check the conditions of the parameters, (<ref>), to see whether or not the potential is explicitly CP conserving. In particular we find the only non-trivial condition λ_23 C^(2)_11 C^(3)_11 = λ_23. With view at (<ref>) this condition is fulfilled, therefore, the potential is explicitly symmetric under CP transformations. § CONCLUDSIONS There is substantial motivation to explore models with Higgs multiplets beyond singlets and doublets. Despite the stringent constraints imposed by the electroweak ρ parameter, viable scenarios still exist where Higgs bosons with higher multiplicities can play a role. Building on our previous work, we have demonstrated that gauge redundancies in the description of Higgs multiplets generally obscure the physical interpretation of these models. For an arbitrary number of Higgs-boson multiplets, we have employed gauge-invariant methods to determine the domain, utilizing bilinear representations similarly to the approach used for multiple Higgs-boson doublets. However, the potential may include terms that, while gauge-invariant, are not bilinear in all the present multiplets. Using methods akin to those discussed in <cit.>, we treated the multiplets within the irreducible symmetric representation, with indices. Just as Lorentz invariance is reflected by contracted indices, contracted indices signify invariance. This approach allows for the construction of all gauge-invariant potential terms for an arbitrary assignment of hypercharge to the multiplets. Finally, we have investigated general symmetries in terms of bilinears and symmetric representations. Specifically, we illustrated how standard CP transformations manifest in multi-Higgs potentials and provided an explicit example to demonstrate this. We are very thankful to Otto Nachtmann, Lohan Sartore, and Kristjan Kannike for valuable comments and suggestions. This work is supported by Chile ANID FONDECYT project 1200641. § ONE-TO-ONE MAP OF BILINEARS TO HIGGS MULTIPLETS Here we want to show the one-to-one correspondence of the Higgs bilinears to the Higgs multiplets - except for unphysical gauge degrees of freedom. The proof of this is a generalization of the proof given in theorem 5 of <cit.>. We consider the general case of n_m copies of Higgs multiplets φ^(m)_1(x), …, φ^(m)_n_m(x). 
As outlined above these n_m multiplets of the same type are written together in one matrix, ψ^(m) = [ φ_1^(m)^; ⋮; φ_n_m^(m)^ ] = [ ϕ^1_1 ⋯ ϕ^m_1; ⋮ ⋮; ϕ^1_n_m ⋯ ϕ^m_n_m ] ∈ (n_m × m) . We shall proof now that for any hermitean, positive semidefinite matrix K^(m) of dimension n_m × n_m of rank less than or equal to m, there are Higgs multiplets φ^(m)_1(x), …, φ^(m)_n_m(x) such that K^(m) = ψ^(m)ψ^(m)^†. For a given matrix K^(m) of dimension n_m × n_m the Higgs multiplets are determined uniquely - except for a electroweak gauge transformation . Suppose we have a hermitean, positive semidefinite matrix K^(m) of dimension n_m × n_m of rank less than or equal to m. We can diagonalize this matrix such that K^(m) = W(x) ( κ_1(x), …, κ_m(x), 0, …, 0) W^†(x) The matrices W(x) are unitary matrices of dimension (n_m × n_m) and the eigenvalues fulfill κ_i(x) ≥ 0, i ∈{1, …, m} . Having diagonalized K^(m) we set ψ(x) = W(x) ( [ √(κ_1(x)) 0; 0 ⋱ 0; 0 √(κ_m(x)); 0; ]). We see that ψ(x) has the form (<ref>) and that K^(m)(x) = ψ(x) ψ^†(x). It remains to show that the ψ(x) fulfilling K^(m)(x) = ψ(x) ψ^†(x) are unique up to gauge transformations. Suppose, we have two field configurations ψ(x) and ψ'(x) of the multiplets giving the same matrix K^(m)(x) = ψ(x) ψ^†(x) = ψ'(x) ψ'^†(x) Diagonalization of K^(m) as in (<ref>) then reads ( [ [ κ_1(x) 0; ⋱; 0 κ_m(x) ] 0; 0 0 ]) = ( W^†ψ) ( W^†ψ)^† = ( W^†ψ' ) ( W^†ψ' )^† . From this follows that we can write ( W^†ψ) = ( [ χ^(m)_1(x)^; ⋮; χ^(m)_n_m(x)^; 0 ]), ( W^†ψ' ) = ([ χ'^(m)_1(x)^; ⋮; χ'^(m)_n_m(x)^; 0 ]), where χ^(m)_i^†(x) χ^(m)_j(x) = χ'^(m)_i^†(x) χ'^(m)_j(x) = κ_i(x) δ_ij . From this we conclude that there is a unitary matrix U_G ∈ U(m × m) such that χ'^(m)_i = U_G χ^(m)_i, i ∈{1, … m} . With this we find for the multiplets ψ' and ψ W^†(x) ψ'(x) = W^†(x) ψ(x) U_G^ (x), and noting that W(x) is unitary we find eventually ψ'(x) = ψ(x) U_G^ (x), that is, as stated, ψ'(x) and ψ(x) are related by a gauge transformation. JHEP
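The constructive part of this proof is easily mirrored numerically. The following NumPy sketch (an added illustration for n_m = 3 copies of a doublet, m = 2) diagonalizes K, rebuilds the fields from W and the square roots of the eigenvalues, and confirms that the result reproduces K and differs from the original fields only by a unitary m × m matrix acting from the right, i.e., by a gauge transformation:

import numpy as np

rng = np.random.default_rng(4)
n_m, m = 3, 2                                  # three copies of a doublet: K is 3x3, rank <= 2

psi = rng.normal(size=(n_m, m)) + 1j * rng.normal(size=(n_m, m))
K = psi @ psi.conj().T                         # hermitean, positive semidefinite, rank <= m

# diagonalize and rebuild the fields as in the proof
kappa, W = np.linalg.eigh(K)
order = np.argsort(kappa)[::-1]                # put the (up to) m nonzero eigenvalues first
kappa, W = np.clip(kappa[order], 0.0, None), W[:, order]
psi_rec = W @ np.vstack([np.diag(np.sqrt(kappa[:m])), np.zeros((n_m - m, m))])

assert np.allclose(psi_rec @ psi_rec.conj().T, K)      # the bilinears are reproduced

# uniqueness up to gauge: psi_rec = psi U_G with a unitary m x m matrix U_G
U_G = np.linalg.pinv(psi) @ psi_rec
assert np.allclose(U_G.conj().T @ U_G, np.eye(m))
assert np.allclose(psi @ U_G, psi_rec)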
http://arxiv.org/abs/2409.02858v1
20240904163256
Revisiting ILP Models for Exact Crossing Minimization in Storyline Drawings
[ "Alexander Dobler", "Michael Jünger", "Paul J. Jünger", "Julian Meffert", "Petra Mutzel", "Martin Nöllenburg" ]
cs.DS
[ "cs.DS", "cs.CG" ]
Revisiting ILP Models for Exact Crossing Minimization in Storyline Drawings ============================================================== § ABSTRACT Storyline drawings are a popular visualization of interactions of a set of characters over time, e.g., to show participants of scenes in a book or movie. Characters are represented as x-monotone curves that converge vertically for interactions and diverge otherwise. Combinatorially, the task of computing storyline drawings reduces to finding a sequence of permutations of the character curves for the different time points, with the primary objective being crossing minimization of the induced character trajectories. In this paper, we revisit exact integer linear programming (ILP) approaches for this NP-hard problem. By enriching previous formulations with additional problem-specific insights and new heuristics, we obtain exact solutions for an extended new benchmark set of larger and more complex instances than had been used before. Our experiments show that our enriched formulations lead to better performing algorithms when compared to state-of-the-art modelling techniques. In particular, our best algorithms are on average 2.6–3.2 times faster than the state-of-the-art and succeed in solving complex instances that could not be solved before within the given time limit. Further, we show in an ablation study that our enrichment components contribute considerably to the performance of the new ILP formulation. § INTRODUCTION Storyline drawings are a well-studied visualization style for complex event-based temporal interaction data and have been popularized by the xkcd comic “Movie Narrative Charts” in 2009 <cit.>. They show a set of characters, e.g., from the plot of a movie or book, and how they interact or co-occur in a sequence of events over time, e.g., by participating in the same scene or conversation of the evolving story. A storyline drawing is a two-dimensional diagram, where the x-axis represents time and the y-dimension is used for the vertical grouping of characters according to their interaction sequence. The exact temporal distance of interactions is usually not depicted, only their order. Each character is represented as an x-monotone curve, and interactions are represented by vertically grouping the curves of the participating characters at the x-coordinate corresponding to the interaction time. Characters that are not participating in an interaction at any specific point in time are vertically separated from each other. <Ref> shows an example of a storyline drawing. Due to their popularity and the intuitive data encoding, they are well suited for visual storytelling and have since been used as visual metaphors for representing a variety of different event-based data sets beyond the original book and movie plots <cit.>, e.g., for software projects <cit.>, newspaper articles <cit.>, political debates on social media <cit.>, visual summaries of meeting contents <cit.>, scientific collaborations <cit.>, sports commentary <cit.>, and gameplay data <cit.>. There is one main degree of freedom when computing and optimizing storyline drawings: the (vertical) linear order and positioning of the characters at each discrete time step. The only hard constraint is that all characters participating in the same interaction must be consecutive as a group. This degree of freedom can thus be used to minimize the number of crossings between character curves, their wiggles (i.e.
the amount of vertical movement of character lines in the visualization), and any excessive white space in the diagram, which are the three major objectives that have been identified for computing storyline drawings <cit.>. While the number of crossings is determined purely combinatorially by the sequence of character permutations, wiggles and white space depend on the actual y-coordinates assigned to each character curve at each point in time. In this paper, our interest is the combinatorial crossing minimization problem. It is the primary objective in practical storyline optimization pipelines <cit.>, where it forms the input to the subsequent steps of reducing wiggles and white space while maintaining the character order. Additionally, crossing minimization is one of the most fundamental graph drawing problems <cit.> and it is well known that graph drawings with fewer crossings increase readability <cit.>. Unfortunately, crossing minimization in storyline drawings is an -hard problem <cit.> and hence practical approaches for storyline visualization usually resort to heuristics, even though they cannot guarantee optimal solutions. We consider the crossing minimization problem from the opposite side and revisit exact integer linear programming (ILP) approaches <cit.> for computing provably optimal solutions. Such approaches often lead to practical exact algorithms. Our goal is to improve on the runtime performance of such exact methods by enriching the models with both new problem-specific insights and better heuristics. Faster exact algorithms for crossing minimization in storyline drawings are practically relevant for two reasons: firstly, solving moderately sized instances to optimality within a few seconds provides a strictly better alternative to commonly used suboptimal heuristics, and secondly, knowing optimal solutions for a large set of representative benchmark instances (even if their computations take several minutes or up to a few hours) is a prerequisite for any thorough experimental study on the performance of non-exact crossing minimization heuristics and for generating crossing-minimum stimuli in user experiments. Related Work. Tanahashi and Ma <cit.> introduced storyline drawings as an information visualization problem, provided the first visual encoding model, and defined the above-mentioned optimization criteria (crossings, wiggles, white space). They suggested a genetic algorithm to compute storyline drawings. Ogawa and Ma <cit.> used a greedy algorithm to compute storylines to depict software evolution. Due to slow computation times of previous methods, Liu et al. <cit.> split the layout process into a pipeline of several subproblems ordered by importance, the first one being crossing minimization. They solved the character line ordering by an iterated application of a constrained barycenter algorithm, a classic heuristic for multi-layer crossing minimization <cit.>. Their results were obtained in less than a second and had fewer crossings than those computed by the genetic algorithm <cit.>, which took more than a day to compute on some of the same instances. Tanahashi et al. <cit.> enhanced previous methods to take into account streaming data and apply a simple sequential left-to-right sorting heuristic. Recent practical works on storyline drawings focus on other aspects, such as an interactive editor <cit.> or a mixed-initiative system including a reinforcement learning AI component <cit.>; both these systems apply a two-layer crossing minimization heuristic <cit.>. 
Several authors focused on the combinatorial crossing minimization problem and its complexity. Kostitsyna et al. <cit.> observed that the -hardness of the problem follows from a similar bipartite crossing minimization problem <cit.> and proved fixed-parameter tractability when the number of characters is bounded by a parameter k. Gronemann et al. <cit.> were the first to model the problem as a special type of tree-constrained multi-layer crossing minimization problem. They implemented an exact branch-and-cut approach that exploits the equivalence of the quadratic unconstrained 0/1-optimization problem with the maximum cut problem in a graph. They managed to solve many instances with up to 20 characters and 50 interactions optimally within a few seconds. Van Dijk et al. <cit.> proposed block-crossing minimization in storyline drawings, which counts grid-like blocks of crossings rather than individual crossings. They showed -hardness and proposed greedy heuristics, a fixed-parameter tractable algorithm, and an approximation algorithm. In a follow-up work, van Dijk et al. <cit.> implemented and experimentally evaluated an exact approach for the block crossing minimization problem using SAT solving. A different variation of storylines was studied by Di Giacomo et al. <cit.>, who considered ubiquitous characters as x-monotone trees with multiple branches, enabling characters to participate in multiple simultaneous interactions; they solved the crossing minimization aspect using an adaptation of the previous SAT model <cit.>. Dobler et al. <cit.> consider time interval storylines, where additionally to the order of characters, the order of time steps in so-called time-intervals can be permuted. The problem is also similar to crossing minimization in layered graph drawing, which was introduced by Sugiyama et al. <cit.>. The problem is to draw a graph with its vertices on multiple parallel lines while minimizing crossings. A notable difference to storyline crossing minimization is that vertices can have arbitrary degree and that edges can span more than one layer. For a survey of algorithms and techniques in layered graph drawing, we refer to Healy and Nikolov <cit.>. Contributions. The contributions of this paper are the following: * We identify structural properties of storyline drawings and prove that there exist crossing-minimum drawings satisfying them, reducing the search space of feasible solutions. * We propose a new ILP formulation exploiting these structural insights in order to (i) significantly reduce the number of required constraints and (ii) apply symmetry breaking constraints to strengthen the ILP model. * We introduce several new heuristics that support the exact solver, either as initial heuristics to improve branch-and-bound pruning or for deriving integral solutions from fractional ones during the incremental ILP solving process. * We have compiled a new benchmark set of storyline instances, including those of earlier studies, as well as several challenging new ones. * We have conducted a detailed experimental evaluation of our new ILP model using the above benchmark set. We compare its ability to solve instances with state-of-the-art ILP models. Moreover, in an ablation study, we show that our further enhancements (e.g., adding symmetry breaking constraints and novel heuristics) contributes considerably to the performance of both the new and several state-of-the-art ILP formulations. 
* We show that our ILP models are able to solve previously unsolved instances from the literature and obtain a speedup of 2.6–3.2 compared to the state of the art. Data sets, source code, evaluation, and a visualization software are available on https://doi.org/10.17605/OSF.IO/3BUA2. § PRELIMINARIES Permutations. Given a set X={x_1,…,x_n}, a permutation π is a linear order of its elements, or equivalently, a bijective mapping from {1,2,…,|X|} to X. For x,x'∈ X we write x≺_πx' if x comes before x' in π. For Y⊆ X, π[Y] is the permutation π restricted to Y, formally, for y,y'∈ Y, y≺_π[Y]y' if and only if y≺_πy'. For two permutations π,ϕ of two sets X and Y with X∩ Y=∅, we denote by π⋆ϕ their concatenation. Given two permutations π,π' of the same set X, the inversions between π and π' is the number of pairs x,x'∈ X such that π^-1(x)<π^-1(x') and π'^-1(x)>π^-1(x'). Problem input. A storyline instance consists of a 4-tuple (T,𝒞,ℐ, A) where T={t_1,t_2,…,t_ℓ} is a set of totally ordered time steps (or layers), 𝒞={c_1,c_2,…,c_n} is a set of characters, and ℐ={I_1,I_2,…,I_m} is a set of interactions. Each interaction I∈ℐ has a corresponding time step (I) ∈ T and consists of a set of characters (I) ⊆𝒞. Further, A maps each character c∈𝒞 to a consecutive set of time steps, i.e., A(c)={t_i,t_i+1,…, t_j} for 1≤ i≤ j≤ℓ. We say that character c is active at the time steps in A(c), it starts at t_i and ends at t_j. We define AC(t) for t∈ T as the set of all characters c∈𝒞 active at time t, i.e., AC(t)={c∈𝒞| t∈ A(c)}. Clearly, for each interaction I∈ℐ, (I)⊆AC((I)) must hold. Next, we define the set of all characters active in the time interval [t_i,t_j] (1≤ i≤ j≤ℓ) as AC(t_i,t_j)=AC(t_i)∩AC(t_i+1)∩…∩AC(t_j). For a time step t∈ T we define the set of interactions at t as ℐ(t)={I∈ℐ|(I)=t} and its corresponding set of characters as CI(t)=⋃_I∈ℐ(t)(I). Without loss of generality, for the interactions at time step t we assume that |ℐ(t)| 0 and that the sets of characters of the interactions ℐ(t) are pairwise disjoint. This is a reasonable assumption as characters usually participate in at most one interaction at any given time, e.g. in movies. Important notation is also illustrated in <ref>. Problem output. Solutions to storyline instances (T,𝒞,ℐ, A) consist of a sequence of ℓ permutations S=(π_1,π_2,…,π_ℓ) such that π_i is a permutation of AC(t_i) for all i=1,…,ℓ satisfying the condition that the set of characters of each interaction I∈ℐ(t_i) appears consecutively. We call S a storyline solution or drawing. The number of crossings cr(π_i, π_i+1) between two consecutive permutations π_i and π_i+1 is defined as the number of inversions of the two permutations π_i[C] and π_i+1[C], where C=AC(t_i)∩AC(t_i+1). The number of crossings in a storyline solution is ∑_i=1^ℓ-1cr(π_i,π_i+1). The problem addressed in this paper is the following. [Storyline Problem] Given a storyline instance (T,𝒞,ℐ,A), find a storyline drawing S with the minimum number of crossings. § STANDARD MODELS FOR THE STORYLINE PROBLEM The most natural ILP formulation to solve <ref> has a quadratic objective function and is based on the linear ordering model, which uses binary variables in order to encode the linear ordering at each time step. The number of crossings between two subsequent time steps is then given by the number of inversions of the two permutations. From now on, we assume that characters c_u,c_v,c_w are pairwise different, even if we write for example c_u,c_v∈ C for some set C of characters. 
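Since the crossing count is nothing but the number of inversions between restricted permutations, it can be computed with a few lines of Python; the following reference sketch (ours, quadratic time, and without checking that interaction characters are consecutive) matches the definitions above:

def inversions(p, q):
    # number of pairs ordered differently in the permutations p and q (same element set)
    pos = {c: k for k, c in enumerate(q)}
    return sum(1 for a in range(len(p)) for b in range(a + 1, len(p))
               if pos[p[a]] > pos[p[b]])

def crossings(perms):
    # total crossings of a drawing S given as a list of layer permutations
    total = 0
    for pi, pj in zip(perms, perms[1:]):
        common = set(pi) & set(pj)                  # characters active on both layers
        total += inversions([c for c in pi if c in common],
                            [c for c in pj if c in common])
    return total

# example: crossings([list("abc"), list("bac"), list("bca")]) == 2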
§.§.§ Quadratic Model (QDR) For each time step t_i,i=1,2,…,ℓ and each tuple of characters c_u,c_v∈AC(t_i) we introduce a binary ordering variable x_i,u,v which is equal to 1 if and only if c_u≺_π_ic_v. The quadratic model QDR is given as follows: min ∑_i=1^ℓ-1∑_c_u,c_v∈AC(t_i,t_i+1)x_i,u,vx_i+1,v,u QDR x_i,u,v = 1- x_i,v,u for all i=1,…,ℓ; c_u,c_v∈AC(t_i) with u<v EQ x_i,u,v+x_i,v,w+x_i,w,u≤ 2 for all i=1,…,ℓ; c_u,c_v,c_w∈AC(t_i) LOP x_i,u,w=x_i,v,w for all i=1,…,ℓ; I∈ I(t_i); TREE c_u,c_v∈(I), u<v; c_w∈AC(t_i)∖(I) x_i,u,v∈{0,1} for all i=1,…,ℓ; c_u,c_v∈AC(t_i), BIN The character curves for c_u and c_v cross between the two layers t_i and t_i+1 if and only if one of the terms x_i,u,vx_i+1,v,u and x_i,v,ux_i+1,u,v equals 1. The (<ref>) and (<ref>) constraints ensure transitivity of the set of characters AC(t_i) present at time step t_i and guarantee that they define a total order. For all interactions I∈ I(t_i) the (<ref>) constraints ensure that characters from I appear consecutively at the respective time step t_i. §.§.§ Linearized Model (LIN) The standard linearisation of quadratic integer programs introduces additional variables y_i,u,v that substitute the quadratic terms x_i,u,vx_i+1,v,u for all t_i, i=1,2,…,ℓ-1 and each tuple of characters c_u,c_v∈AC(t_i,t_i+1) in the objective function. In order to link the new variables with the ordering variables, we introduce the following constraints: y_i,u,v≥ x_i,u,v-x_i+1,u,v for all i=1,…,ℓ; c_u,c_v∈AC(t_i,t_i+1) CR Obviously, the variable y_i,u,v is forced to 1, if the character c_u is before c_v at time step t_i in the solution represented by the y-variables, and the order of both characters is reversed at time step t_i+1. The linearised model (<ref>) is given as follows. min ∑_i=1^ℓ-1∑_c_u,c_v∈AC(t_i,t_i+1)y_i,u,v LIN y_i,u,v,x_i,u,v satisfy (<ref>), (<ref>), (<ref>), (<ref>), and (<ref>) §.§.§ Max-Cut Model (CUT) Gronemann et al. <cit.> have suggested a formulation based on the transformation of the problem into a quadratic unconstrained binary program with additional (TREE) constraints, which is then solved using a maximum cut approach. Here, we omit the detour via the quadratic binary program and directly provide the corresponding maximum cut formulation. Starting with a feasible storyline drawing Ŝ=(π̂_1,…,π̂_ℓ), we define the graph G_M=(V_M,E_M): The vertex set V_M is given by a vertex v^* and the union of the sets V_i (i=1,…,ℓ), where V_i has a vertex c_uv^i for each pair c_u,c_v∈AC(t_i) with c_u≺_π̂_ic_v. We introduce an edge between the vertices c_uv^i and c_pq^i+1 if the corresponding characters coincide. In the case that c_u=c_p and c_v=c_q, the (type-1) edge e=e_uv^i gets a weight of w_e=-1, and in the case that c_u=c_q and c_v=c_p, the (type-2) edge e=e_uv^i gets a weight of w_e=1. We define the constant K as the number of edges of type (2). The intention is the following: An edge of type (1) results in a crossing if and only if it is in the cut, and an edge of type (2) results in a crossing if and only if it is not in the cut. This construction allows for associating the maximum cut objective function values W to corresponding crossing numbers K-W. In particular, W=0 for the empty cut corresponds to the number of crossings K in Ŝ. In order to guarantee that the characters of an interaction appear consecutively, we introduce type-3 edges with weight 0 from the additional vertex v^* to every vertex in V_i for all i=1,…,ℓ, and add the additional constraints (<ref>). 
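Before completing the max-cut formulation below, we note that the models (QDR) and (LIN) can be prototyped almost verbatim with a modelling API. The following gurobipy sketch (purely illustrative; it keeps both ordering variables per pair and adds all (LOP) constraints eagerly, i.e., without the variable reduction and lazy-constraint handling of our actual implementation described in Section <ref>) builds the linearized model:

import itertools
import gurobipy as gp
from gurobipy import GRB

def build_lin(layers, interactions):
    # layers[i]: list of characters active at t_i; interactions[i]: list of character sets at t_i
    mdl = gp.Model("storyline-LIN")
    x = {}                                                       # ordering variables x[i, u, v]
    for i, chars in enumerate(layers):
        for u, v in itertools.permutations(chars, 2):
            x[i, u, v] = mdl.addVar(vtype=GRB.BINARY)
        for u, v in itertools.combinations(chars, 2):            # (EQ)
            mdl.addConstr(x[i, u, v] + x[i, v, u] == 1)
        for u, v, w in itertools.permutations(chars, 3):         # (LOP), added eagerly here
            mdl.addConstr(x[i, u, v] + x[i, v, w] + x[i, w, u] <= 2)
        for I in interactions[i]:                                # (TREE)
            for u, v in itertools.combinations(list(I), 2):
                for w in set(chars) - set(I):
                    mdl.addConstr(x[i, u, w] == x[i, v, w])
    obj = gp.LinExpr()
    for i in range(len(layers) - 1):
        for u, v in itertools.permutations(set(layers[i]) & set(layers[i + 1]), 2):
            y = mdl.addVar(vtype=GRB.BINARY)                     # crossing variable y[i, u, v]
            mdl.addConstr(y >= x[i, u, v] - x[i + 1, u, v])      # (CR)
            obj += y
    mdl.setObjective(obj, GRB.MINIMIZE)
    return mdl, x

# usage: mdl, x = build_lin([list("abc"), list("abc")], [[{"a", "b"}], [{"b", "c"}]]); mdl.optimize()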
We introduce a binary variable z_e for every edge e∈ E_M in the graph, which is 1 if and only if the edge is contained in the computed cut. The following model guarantees that every optimal solution corresponds to a constrained maximum cut in the graph G_M that provides the optimal solution to the storyline problem. The constraints (<ref>) capture the fact that any intersection of a cut and a cycle in a graph has even cardinality. The correctness is provided in <cit.>, see also <cit.>. max ∑_e∈ E_Mw_ez_e CUT ∑_e∈ Fz_e-∑_e∈ C∖ Fz_e≤| F| -1 , F⊆ C, | F | oddCYC 0≤ z_(v^*,c_uv^i)+z_(v^*,c_vw^i)-z_(v^*,c_uw^i)≤ 1 for all i=1,…,ℓ; c_uv^i,c_vw^i,c_uw^i∈ V_i LOPC with c_u≺_π̂_ic_v≺_π̂_ic_w z_(v^*,c_uw^i)=z_(v^*,c_vw^i) if c_u,c_v≺_π̂_ic_w z_(v^*,c_wu^i)=z_(v^*,c_wv^i) if c_u,c_v≻_π̂_ic_w for all i=1,…,ℓ; I∈ I(t_i); c_u,c_v∈(I); c_w∈AC(t_i)∖(I) TRC z_e∈{0,1} BIC § STRUCTURAL PROPERTIES OF STORYLINE SOLUTIONS In this section, we identify structural properties of storyline solutions that will help us to optimize the models proposed in <ref>, and that guide the exact optimization process. For the results in this section, we introduce some definitions and observe some properties of the function . Let π be a permutation of the set X, and ϕ be a permutation of the set Y⊆ X. We define the permutation ψ=(π,ϕ) as the permutation of the set X such that ψ(i)=π(i) if π(i)∈ X∖ Y ϕ(j) if c=π(i)∈ Y with j=π[Y]^-1(c) In particular, all elements in X∖ Y are ordered according to π, and all elements in Y switch their position so that they are ordered according to ϕ. Notice that (π,ϕ)[Y]=ϕ. Let π=(c,e,b,d,f,a,g) and ϕ=(a,b,c). Then (π,ψ)=(a,e,b,d,f,c,g). Consider three consecutive time steps t_i,t_i+1,t_i+2 (1≤ i≤ℓ-2), we observe a special type of triangle inequality. Let C⊆AC(t_i,t_i+2). For any feasible solution S we have (π_i[C],π_i+1[C])+(π_i+1[C],π_i+2[C])≥(π_i[C],π_i+2[C]). Next, consider two permutations π_i,π_i+1 of a storyline solution such that C=AC(t_i)∩AC(t_i+1). Consider two sets X,Y⊆ C. We define (π_i,π_i+1,X,Y) as the amount of (c,c')∈ X× Y such that either c≺_π_ic' and c'≺_π_i+1c, or c'≺_π_ic and c≺_π_i+1c'. I.e., these are the crossings between time steps t_i and t_i+1 with one character in X and one character in Y. We set (π_i,π_i+1,X)=(π_i,π_i+1,X,X). It holds that (π_i,π_i+1,X)=(π_i[X],π_i+1[X]). Let X,Y be sets such that X∪̇Y=C⊆AC(t_i,t_i+1). It holds that (π_i,π_i+1, C)=(π_i,π_i+1,X)+(π_i,π_i+1,Y)+(π_i,π_i+1,X,Y). Now, we define two properties of storyline drawings. <ref> captures that the relative order of characters in an interaction can be propagated backwards. Let S=(π_1,π_2,…,π_ℓ) be a solution to a storyline instance (T,𝒞,ℐ, A). Let I∈ℐ, t_i=(I) and C=(I). Let 1< j(I)≤ i be the index of the earliest time step t_j(j) such that C⊆AC(t_j(I),t_i) and ∀ k∈{j(I)+1,…, i}:CI(t_k)∩ C=∅∃ I∈ℐ(t_k):C⊆(I). We say that S is I-consistent if ∀ k∈{j(I),j(I)+1,…,i}:π_k[C]=π_i[C]. Further, we say that S is type-1-consistent if it is I-consistent for all I∈I. <ref> defines the property that between suitable pairs of interactions with the same set of characters, these characters are kept together between the two time steps. Note that this is not implied by type-1 consistency. Let S=(π_1,π_2,…,π_ℓ) be a solution to a storyline instance (T,𝒞,ℐ, A). Consider two interactions I_1,I_2∈ℐ such that * (I_1)=(I_2)=C, * i=(I_1)<(I_2)=j, and * ∀ k∈ℕ: i<k<j⇒[CI(t_k)∩ C=∅∃ I_3∈ℐ(t_k):C⊆(I_3)]. We say that S is (I_1,I_2)-consistent if ∀ i<k<j:∃π^a,π^b:π_k=π^a⋆π_i[C]⋆π^b. 
Further, we say that S is type-2-consistent if it is (I_1,I_2)-consistent for all such pairs (I_1,I_2). The following lemma shows that we can achieve type-1 consistency for storyline drawings without increasing the number of crossings. Essentially, if a storyline solution is not type-1 consistent for an interaction I, we can propagate the relative order of characters in that interaction forward from the time step t_j(I) from <ref>. lemmatypeonelemma Let (T,𝒞,ℐ, A) be an instance with a solution S. We can construct from S a type-1-consistent solution S' such that (S')≤(S). If S is type-2-consistent, so is S'. Let S=(π_1,π_2,…,π_ℓ) be a solution such that there exists I∈ℐ such that S is not I-consistent. We choose such an interaction I that is as “late as possible”, i.e. I∈ℐ(t_i) with i maximal. We construct S'=(π_1',π_2',…,π_ℓ') such that * (S')≤(S), * S' is I-consistent, * for each I'∈ℐ such that S is I'-consistent and (I')≥(I), S' is also I'-consistent. Let t_i=(I) and C=(I) and define j(I)<i as in <ref>. For all 1≤ k≤ℓ such that k≤ j(I) or k>i we set π_k'=π_k. For k∈{j(I)+1,j(I)+2,…,i} we set π_k'=(π_k,π_j(I)[C]). It is clear that the new solution S' is I-consistent and is still a solution to the storyline instance (characters involved in interactions are still consecutive in the respective time steps). We first argue that (S')≤(S). First, it is clear that (π'_k,π'_k+1)=(π_k,π_k+1) for k≤ j(I)-1 and k>i as the involved permutations did not change. It remains to show that ∑_k=j(I)^i(π_k',π_k+1')≤∑_k=j(I)^i(π_k,π_k+1). We also have that ∑_k=j(I)^i(π_k',π_k+1')= ∑_k=j(I)^i(π_k',π_k+1',C)+ ∑_k=j(I)^i(π_k',π_k+1',AC(t_k,t_k+1)∖ C)+ ∑_k=j(I)^i(π_k',π_k+1',C,AC(t_k,t_k+1)∖ C), and ∑_k=j(I)^i(π_k,π_k+1)= ∑_k=j(I)^i(π_k,π_k+1,C)+ ∑_k=j(I)^i(π_k,π_k+1,AC(t_k,t_k+1)∖ C)+ ∑_k=j(I)^i(π_k,π_k+1,C,AC(t_k,t_k+1)∖ C). Thus it is enough to show that the inequality holds for the respective terms separately. * ∑_k=j(I)^i(π_k',π_k+1',C)≤∑_k=j(I)^i(π_k,π_k+1,C): We have ∑_k=j(I)^i(π_k,π_k+1,C) =∑_k=j(I)^i(π_k[C],π_k+1[C]) ≥∑_k=j(I)^i(π_k[C∩AC(t_i+1)],π_k+1[C∩AC(t_i+1)]) ≥(π_j(I)[C∩AC(t_i+1)], π_i+1[C∩AC(t_i+1)]) =(π'_i[C∩AC(t_i+1)], π_i+1'[C∩AC(t_i+1)]) =∑_k=j(I)^i(π_k',π_k+1',C) The first inequality holds because C∩AC(t_i+1) is a subset of C. The second inequality holds because of <ref>. The last equality holds because the only crossings between characters from C from t_j(I) to t_i+1 appear between t_i and t_i+1 as the relative order of characters from C in S' is the same from j(I) to i. * ∑_k=j(I)^i(π_k',π_k+1',AC(t_k,t_k+1)∖ C)≤∑_k=j(I)^i(π_k,π_k+1,AC(t_k,t_k+1)∖ C): Let k∈{j(I), j(I)+1,…,i} and let C_a=AC(t_k,t_k+1)∖ C. We have that (π_k',π_k+1',AC(t_k,t_k+1)∖ C) =(π_k'[C_a],π_k+1'[C_a]) =(π_k[C_a],π_k+1[C_a]) =(π_k,π_k+1,AC(t_k,t_k+1)∖ C). The key is that π_k[C_a]=π_k'[C_a] and π_k+1[C_a]=π_k+1'[C_a], as we did not change the relative order of non-interaction characters, that is, characters that are not in C. As the equality holds for each term, it also holds for the sum. * ∑_k=j(I)^i(π_k',π_k+1',C, AC(t_k,t_k+1)∖ C)≤∑_k=j(I)^i(π_k,π_k+1,C, AC(t_k,t_k+1)∖ C): We again argue for each k∈{j(I),j(I)+1,…,i} separately and consider two cases. If k=i then the relative order of character pairs (c,c')∈ C× (AC(t_k,t_k+1)∖ C) is the same for π_i and π_i' as C must be consecutive in t_i, and π[AC(t_k,t_k+1)∖ C] = π'[AC(t_k,t_k+1)∖ C]. The relative order is also the same in π_i+1' and π_i+1 as π_i+1'=π_i+1. Thus (π_k,π_k+1,C,AC(t_k,t_k+1))∖ C)=(π_k',π_k+1',C,AC(t_k,t_k+1)∖ C) holds. 
In the remaining case, we have that k∈{j(I),j(I)+1,…,i-1}. Consider a character c∈AC(t_k,t_k+1)∖ C. Let α be the amount of characters from C that are crossed by c between t_k and t_k+1 in S. We show that c crosses at most α characters from C between t_k and t_k+1 in S', which is enough to show the claim. Let abv_k be the amount of c'∈ C such that c'≺_π_kc. Define abv_k+1 by replacing k with k+1 in the definition. Equivalently define abv_k' and abv_k+1' by replacing π_k by π_k' and π_k+1 by π'_k+1, respectively. By the definition of we have that abv_k=abv_k' and abv_k+1=abv_k+1'. By the pigeonhole principle, c must cross at least β=|abv_k-abv_k+1| characters from C between t_k and t_k+1 (for S and S'). Notice that β is the exact amount of crossings for S' and β≤α must hold as β is a lower bound. Thus c crosses less than or equal characters from C between t_k and t_k+1 in S' when compared to S. Putting these inequalities together, we get (S')≤(S). It is easy to see that we constructed S' such that it satisfies (2). It remains to show (3), i.e. that for each I'∈ℐ such that S is I'-consistent and (I')≥(I), S' is also I'-consistent. We show the claim by contradiction, so assume to the contrary that there exists some I' with (I')≥(I), S is I'-consistent, and S' is not. First, it is worth noting that we only change relative orders of character pairs involving at least one character from C. We consider different cases. * (I')=(I): Then (I)∩(I')=∅. Hence, we did not change relative orders of character pairs from (I') and we obtain a contradiction. * (I')>(I)=t_i. We again consider two cases. * (I')⊆(I): It follows that j(I')<i and further j(I')≤ j(I). As we changed orders for character pairs from (I'), we have that S is already not I'-consistent, a contradiction. * (I')⊈(I): Then j(I')≥ i. But we only changed orders up to i, so S' is already not I'-consistent, a contradiction. Lastly, assume that S is type-2-consistent, we prove that S' is as well. We proceed by contradiction. Hence, let there be (I_1,I_2) as in <ref> such that S is (I_1,I_2)-consistent, but S' is not. It follows that the interval [t_j(I)+1,t_i] has non-empty intersection with [(I_1),(I_2)]. It further follows that (I_1)⊆ C. But if S was already (I_1,I_2)-consistent, we did not change relative orders nor positions of characters in (I_1) during the above process, a contradiction. The proof is concluded by applying the above procedure inductively. A similar result with a related proof argument holds for type-2 consistency. lemmatypetwolemma Let (T,𝒞,ℐ, A) be an instance with a solution S. We can construct from S a type-2-consistent solution S' such that (S')≤(S). If S is type-1-consistent, so is S'. Let S be a solution that is not type-2-consistent, i.e. there exist interactions I_1,I_2∈ℐ such that S is not (I_1,I_2)-consistent with i=(I_1),j=(I_2) and C=(I). Choose such a pair such that j-i is maximized. We construct S'=(π_1',π_2',…,π_ℓ') such that * (S')≤(S), * S' is (I_1,I_2)-consistent, and * for each I_1',I_2'∈ℐ such that S is (I_1',I_2')-consistent, S' is also (I_1',I_2')-consistent, as follows: For k∈{1,2,…,i}∪{j+1,j+2,…,ℓ} we set π_k'=π_k. Now we find a character from C that has the fewest crossings with characters not in C between t_i and t_j. Formally, we define c^*∈ C such that c^*=*argmin_c∈ C∑_k=i^j-1(π_k,π_k+1,{c},AC(t_k,t_k+1∖ C)). For each k∈{i+1,i+2,…,j} we do the following. Let π^a,π^b be such that π_k=π^a⋆ (c^*)⋆π^b, where (c^*) is the unit permutation of the set {c^*}. 
We set π'_k=π^a[AC(t_k)∖ C]⋆π_i[C]⋆π^b[AC(t_k)∖ C]. Informally, we first remove C from π_k and then insert the permutation π_i[C] at the position of c^*. We first argue that (S')≤(S). First, it is clear that (π'_k,π'_k+1)=(π_k,π_k+1) for k≤ i-1 and k>j as the involved permutations did not change. It remains to show that ∑_k=i^j(π_k',π_k+1')≤∑_k=i^j(π_k,π_k+1). We split up the crossings into types, as in the proof of <ref>; i.e. we consider crossings between pairs C× C,AC(t_k,t_k+1)× C, and AC(t_k,t_k+1)×AC(t_k,t_k+1), respectively. * ∑_k=i^j(π_k',π_k+1',C)≤∑_k=i^j(π_k,π_k+1,C): We have ∑_k=i^j(π_k,π_k+1,C) =∑_k=i^j(π_k[C],π_k+1[C]) ≥∑_k=i^j(π_k[C∩AC(t_j+1)],π_k+1[C∩AC(t_j+1)]) ≥(π_i[C∩AC(t_j+1)], π_j+1[C∩AC(t_j+1)]) =(π'_j[C∩AC(t_j+1)], π_j+1'[C∩AC(t_j+1)]) =∑_k=i^j(π_k',π_k+1',C) The first inequality holds because C∩AC(t_i+1) is a subset of C. The second inequality holds because of <ref>. The last equality holds because the only crossings between characters from C from t_i to t_j+1 appear between t_j and t_j+1 as the relative order of characters from C in S' is the same from i to j. * ∑_k=i^j(π_k',π_k+1',AC(t_k,t_k+1)∖ C)≤∑_k=i^j(π_k,π_k+1,AC(t_k,t_k+1)∖ C): Let k∈{i,i+1,…, j} and let C_ac=AC(t_k,t_k+1)∖ C. We have that (π_k',π_k+1',AC(t_k,t_k+1)∖ C) =(π_k'[C_ac],π_k+1'[C_ac]) =(π_k[C_ac],π_k+1[C_ac]) =(π_k,π_k+1,AC(t_k,t_k+1)∖ C). The key is that π_k[C_ac]=π_k'[C_ac] and π_k+1[C_ac]=π_k+1'[C_ac], as we did not change the relative order of non-interaction characters, that is, characters that are not in C. As the equality holds for each term, it also holds for the sum. * ∑_k=i^j(π_k',π_k+1',C, AC(t_k,t_k+1)∖ C)≤∑_k=i^j(π_k,π_k+1,C, AC(t_k,t_k+1)∖ C): This inequality holds by definition of c^*. Each character from C is involved in the same crossings between t_i and t_j. The crossings between t_j and t_j+1 are exactly the same, as relative orders of character pairs in C× (AC(t_k,t_k+1)∖ C) are the same in S and S' for k=j,j+1. The following is a direct consequence. For each storyline instance (T,𝒞,ℐ, A) there exists a crossing-minimum solution S that is type-1-consistent and type-2-consistent. <ref> is the main ingredient for a new ILP formulation given in <ref>. It shows that we can in specific cases assume the order of characters C_a above and C_b below an interaction at time step t_i to be equal to the relative order at t_i-1. This is similar to type-1-consistency, where the relative order of characters in an interaction sometimes can be kept. theorempropthm Let (T,𝒞,ℐ, A) be a storyline instance. There exists a crossing-minimum solution S=(π_1,π_2,…,π_ℓ) with the following property. For all t_i∈{t_2,t_3,…,t_ℓ} with |ℐ(t_i)|=1, where ℐ(t_i)={I}, the following holds. * ∃ C_a,C_b:π_i=π_i[C_a]⋆π_i[(I)]⋆π_i[C_b], * if C_a⊆AC(t_i-1,t_i), then π_i[C_a]=π_i-1[C_a], * if (I)⊆AC(t_i-1,t_i), then π_i[(I)]=π_i-1[(I)], and * if C_b⊆AC(t_i-1,t_i), then π_i[C_b]=π_i-1[C_b]. Consider a crossing-minimum solution S^*=(π_1^*,π_2^*,…,π_ℓ^*). We construct a new storyline instance (T,𝒞,ℐ', A) where ℐ'=ℐ∪ℐ_S^*, with ℐ_S^* containing the following interactions. For each t_i∈{t_2,t_3,…,t_ℓ} with |ℐ(t_i)|=1 and ℐ={I}, let C_a,C_b∈AC(t_i) such that π_i^*=π_i[C_a]*π^*_i[(I)]⋆π^*_i[C_b]. If C_a⊆AC(t_i-1,t_i), then add to ℐ_S^* the interaction I_a with (I_a)=C_a and (I_a)=t_i. If C_b⊆AC(t_i-1,t_i), then add to ℐ_S^* the interaction I_b with (I_b)=C_b and (I_b)=t_i. Note that S^* is still a crossing-minimum solution for the new instance. Hence, we apply <ref> to S^* for the new storyline instance. 
We obtain a new solution S that is crossing-minimum and type-1-consistent. Type-1-consistency for new interactions implies (2) and (4), type-1 consistence of the original interactions implies (3). Lastly, S is also a crossing-minimum solution of the original instance as (S)≤(S^*) by <ref>, and the statement follows. § REFINING THE ILP MODELS We apply our structural insights from <ref> to the models (besides the (<ref>)-model) to obtain a new ILP formulation, including a reduction of the number of (LOP) constraints in <ref> via <ref> and the inclusion of additional symmetry breaking constraints in <ref> via <ref>. §.§ The Propagated Linear Ordering Model (PLO) For our new formulation, we take the linearized model (<ref>) as basis, but remove some of the (<ref>)-constraints for time step t_i as we can make use of propagating the ordering at t_i-1 by <ref> as follows. If ℐ(t_i) for i>1 contains only one interaction I, and no characters outside the interaction start at t_i (i.e., AC(t_i)∖AC(t_i-1)⊆CI(t_i)), we only include a part of the (<ref>)-constraints for time step t_i using a representative character c_w∈(I): From the set of (<ref>)-constraints containing at least one character in AC(t_i)∖(I), we keep only those that contain exactly two characters in AC(t_i)∖(I) and the representative character c_w∈(I). This is sufficient, because we can define the order of the active characters in t_i relative to the order of the characters in the interaction I based on <ref>. Hence, let c_w be a representative character from the set (I), and consider a pair of characters c_u,c_v∈AC(t_i)∖(I). By <ref>, if both c_u and c_v are above or below c_w, then their relative order can be fixed by their relative order at t_i-1. Otherwise, their relative order is already given by their relative order to c_w. That is, if, e.g., c_u is above c_w and c_v is below c_w, then we know that c_u is above c_v. To ensure the above, we add the following constraints in addition to the (<ref>)-constraints for c_u, c_v, and c_w at time step t_i. x_i,u,v ≥ x_i-1,u,v+x_i,u,w+x_i,v,w-2PROP-R1 x_i,u,v ≥ x_i-1,u,v+x_i,w,u+x_i,w,v-2PROP-R2 The two constraints ensure that c_u is above c_v if the requirements are met. By switching c_u and c_v, these constraints also ensure the case that c_u is below c_v. If additionally (I)⊆AC(t_i-1) we can apply <ref> (3) to further reduce the number of those (<ref>)-constraints, whose triples are taken from the set (I): In this case, we do not add any of the (<ref>)-constraints for the characters in I, but instead for each pair c_u,c_v∈(I), we add the following constraint ensuring that the relative order of c_u and c_v is the same for t_i and t_i-1. x_i,u,v=x_i-1,u,vPROP-I If both reductions for (<ref>) apply, we get a quadratic rather than cubic number of constraints for t_i. We call this formulation propagated linear order (PLO). Note that this idea of reducing the number of (<ref>)-constraints also works for any of the other standard ILP models. theoremplotransitivity Every optimal solution to the formulation (PLO) corresponds to a crossing minimum storyline drawing. We only have to show that solutions to PLO satisfy transitivity constraints for the ordering variables, as we include the constraint set (<ref>) and (<ref>). Optimality follows from <ref>. We show this by induction on i – the time steps. For the base case of i=1 the variables x_1,u,v for c_u,c_v∈AC(t_1) satisfy transitivity, as we include all (<ref>) constraints for t_1. 
For the induction step, assume that transitivity is satisfied for all time steps t_i' with i'<i. We show that transitivity is satisfied for t_i. We perform a case distinction. * If |ℐ(t_i)|>1 or (AC(t_i)∖AC(t_i-1))⊈CI(t_i), transitivity is clearly satisfied, as we again include all (<ref>)-constraints for this time step. * Otherwise, there is exactly one interaction I at time t_i. Consider now three distinct characters c_u,c_v,c_w∈AC(t_i). We show transitivity between this triple for t_i by considering memberships with respect to I. * If {c_u,c_v,c_w} contains at least one character from (I) but {c_u,c_u,c_w}⊈(I) then transitivity for these three characters is achieved because of the (<ref>)-constraints. * If {c_u,c_v,c_w}∩(I)=∅, then we know that {c_u,c_v,c_w}⊆AC(t_i-1). Thus, transitivity between this triple follows from (<ref>) and (<ref>) and because, by the induction hypothesis, the same triple satisfies transitivity for time step t_i-1. * If {c_u,c_v,c_w}⊆(I), we again have two cases. If (I)⊈AC(t_i-1), transitivity is clearly satisfied as we add all (<ref>)-constraints between triples in (I) for this time step. Otherwise, transitivity for this triple follows from the induction hypothesis and (<ref>). §.§ Symmetry Breaking Constraints We introduce the set (SBC) of symmetry breaking constraints that are based on <ref> and might improve the solving process of the models, as they constitute equalities: * We can assume that a crossing-minimum solution is type-1 consistent. Thus let I∈ℐ with t_i=(I) and let j(I) be defined as in <ref>. For all pairs c_u,c_v∈(I) and all j(I)≤ k<i we can add the following constraint enforcing type-1 consistency: x_k,u,v=x_i,u,vSBC-1 * We can assume that a crossing-minimum solution is type-2 consistent. Thus, let I_1,I_2∈ℐ be two distinct interactions satisfying the properties of <ref>. Let i=(I_1) and j=(I_2). For all i<k<j, all pairs c_u,c_v∈(I_1), and all c_w∈AC(t_k)∖(I_1) we add the following constraint, enforcing type-2 consistency: x_k,u,w=x_k,v,wSBC-2 § IMPLEMENTATION In this section, we discuss relevant implementation details and new heuristic-based approaches to improve our algorithms. §.§ Initial Heuristic Our initial heuristic follows a decomposition strategy. The original problem instance is first split into smaller subproblems, each comprising many consecutive time steps, which are of such a size that it is possible to quickly compute a crossing minimal solution for this subproblem. The initial heuristic solution S_h is computed by optimally solving these subsequent “slices” of the original problem instance and piecing together a global solution. This works as follows. First, a crossing minimal solution S_1 = (π_1, …, π_ℓ̂) is computed for the first many time steps of the original problem using the [section:newformulation]PLO ILP formulation. The first many layers of S_1 are assigned to the initial heuristic solution S_h. In the next iteration, we compute an optimal solution for time steps ,…,+-1 while enforcing the ordering of layer ŝ by fixing the ordering variables accordingly. Again, we fix the first many permutations of this solution as the permutations of the corresponding time steps in the heuristic solution S_h. This is continued until the heuristic solution is computed for all time steps t_1,t_2,…,t_ℓ. 
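Schematically, this slicing loop can be written as follows (a sketch only; solve_window is a hypothetical routine that solves the PLO model on the layers start,…,end-1 with the ordering of the preceding layer fixed, and the two window parameters are named slice_size and keep here, with the concrete values discussed right below):

def sliced_initial_heuristic(num_layers, solve_window, slice_size=30, keep=5):
    # builds a heuristic drawing S_h by exactly solving overlapping windows of slice_size
    # layers (via the PLO model) and keeping only the first `keep` layer orders of each window
    S_h, start = [], 0
    while start < num_layers:
        end = min(start + slice_size, num_layers)
        fixed_order = S_h[-1] if S_h else None     # ordering enforced at the window boundary
        window = solve_window(start, end, fixed_order)
        S_h.extend(window[:keep])
        start += keep
    return S_h[:num_layers]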
We choose < in order to include as much information about the global problem into the sliced subproblem as possible (in the form of the time steps that are about to come), while achieving a good tradeoff between time needed to solve all subproblems and the number of crossings of S_h. Experimental evaluation showed that = 5, = 30 yields a good tradeoff between runtime and solution quality. §.§ Rounding and Local Improvement Heuristics We propose a rounding heuristic that exploits fractional LP-solutions. Furthermore, we try to improve these solutions as well as incumbent solutions found by the solver software by proposing three local improvement heuristics Rem-DC, Push-CR, and SL-Bary. Rounding Heuristic. We propose a strategy to round fractional solutions of the ordering variables x_i,u,v to valid integer solutions corresponding to a drawing of the storyline instance. In particular, we compute permutations π_i for t_i, going from i=1 to ℓ in this order, and convert them to integer solutions of ordering variables in the natural way. This works as follows. First, for each interaction I∈ℐ(t) we compute a permutation π_I of its characters (I). For this we compute for each c∈(I) the value (c) which is computed as A+B where A is the number of c'∈(I) such that x_i,c',c>0.5+ϵ and B is the number of c' such that |x_i,c,c'-0.5|≤ϵ, {c,c'}⊆AC(t_i-1), and c'≺_π_i-1 c[This is useful as setting all ordering variables to 0.5 is a valid solution to the LP relaxation of most considered models and thus ordering variables often assume this value]. Clearly, B is only positive if i>1. Then π_I is computed by sorting (I) by their -values. If the model contains symmetry breaking constraints, this sometimes leads to an infeasible ordering. In this case, we find an order π_I that also satisfies the symmetry breaking constraints imposed by π_i-1 (some pairs of characters must have the same relative order in π_i and π_i-1) as follows. We construct π_I iteratively by always selecting the character c∈(I) that has the smallest value (c) and, furthermore, there is no character c' which was not selected yet and needs to precede c according to π_i-1 and the symmetry constraints. This is implemented in quasi-linear time using a priority queue. Local Improvement Heuristics. Rem-DC (remove double crossings) This heuristic finds pairs of characters that cross twice, and both crossings can be removed without increasing the total number of crossings. Formally, this is possible for a drawing S and two characters c,c' if there exist 1≤ i<j≤ℓ with j-i>1 such that * c and c' cross between t_i and t_i+1, and t_j-1 and t_j, and * for all k∈ℕ with i<k<j, c and c' either belong to the same interaction in t_k, or they both are in no interaction for t_k. Then for all k as above we can exchange c and c' in π_k. This removes the double-crossing between c and c' and further does not introduce new crossings. Push-CR This heuristic proceeds from i=2,…,ℓ in this order and tries to push crossings between π_i-1 and π_i forward by one time step: Let C be a maximal set of characters such that (1) all characters in C appear consecutively in π_i, (2) C⊆AC(t_i-1,t_i), and (3) all characters in C either appear in the same interaction in t_i or no character in C is part of an interaction. For each such set of characters C we replace in π_i, π_i[C] by π_i-1[C]. By similar arguments as in <ref> this never increases the number of crossings. 
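For illustration, one pass of Push-CR can be implemented as follows (a sketch; group[i][c] is assumed to give the interaction of character c at layer t_i, or None if it is in no interaction):

def push_cr(perms, group):
    # for each layer i >= 1, maximal consecutive blocks of characters that were already active
    # at layer i-1 and that share the same interaction (or are all interaction-free) adopt
    # their relative order from layer i-1
    for i in range(1, len(perms)):
        prev_pos = {c: p for p, c in enumerate(perms[i - 1])}
        new_perm, block = [], []

        def flush():
            block.sort(key=lambda c: prev_pos[c])   # take over the order from layer i-1
            new_perm.extend(block)
            block.clear()

        for c in perms[i]:
            if c in prev_pos and (not block or group[i][c] == group[i][block[-1]]):
                block.append(c)
            else:
                flush()
                if c in prev_pos:
                    block.append(c)
                else:
                    new_perm.append(c)
        flush()
        perms[i] = new_perm
    return perms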
Bary-SL Lastly, we describe a variant of the barycenter heuristic <cit.> for storylines that iteratively improves a storyline drawing by updating π_i for 1≤ i≤ℓ based on π_i-1 and π_i+1 or one of them if not both exist. It is only applied to π_i if |ℐ(t_i)|=1. Informally, we say that a pair of characters c,c' is comparable if c and c' have the same relative order in π_i-1 and π_i+1. We compute an ordering π_i such that most comparable pairs have the same relative order in π_i-1,π_i,π_i+1 as follows. We compute the directed auxiliary graph G_C whose vertex set is a subset S of AC(t_i) and which contains an arc from c to c' for each comparable pair c and c' such that c is before c' in π_i-1 and π_i+1. Then, an order of S is built by iteratively selecting the vertex from G_C with the fewest incoming arcs. We also ensure that characters c that are not part of I are above or below the characters in I depending on which option leads to fewer crossings between c and (I) with respect to the considered time steps t_i-1,t_i,t_i+1. The algorithm computing the order based on the graph G_C is then applied to the characters in the interaction yielding π_I, and those not in the interaction yielding π_C, respectively. The ordering π_C is inserted into the maximum position of π_I such that all characters before π_C “prefer” being above the interaction with regard to crossings with (I). The new π_i is only accepted if it decreases the number of crossings. Both Bary-SL and Push-CR are applied successively to layers 2,…,ℓ. This is repeated five times and applied to valid integer solutions found by the solver and the rounding heuristic described above. If enabled, the rounding heuristic is applied to every LP solution found by the solver. Rem-DC is applied five times to each pair of characters. §.§ Max-Cut Implementation Details Since the original implementation of Gronemann et al. <cit.> is not available, we provide our own implementation that was optimized beyond their algorithm. After reading the input, we first find an initial starting solution by applying adapted barycenter techniques as described in <cit.>. We start the root relaxation with the objective function and the tree constraints (<ref>) as the only constraints, and start separating the odd cycle (<ref>) (as suggested by Charfreitag et al. <cit.>) and the (<ref>) constraints. The (<ref>) constraints are separated by complete enumeration. Whenever a new LP solution is available, all nonbinding inequalities are eliminated, and we try to exploit the information in the (fractional) solution in order to obtain a better incumbent solution. The root phase ends when no violated inequalities are found. Then the branch-and-cut phase is started by changing the variable types from continuous in the interval [0,1] to binary. In the Gurobi “MIPSOL” callbacks at branch-and-cut nodes with an integer solution, we check if the integral solution is the characteristic vector of a storyline drawing. If so and if the number of crossings is lower than the one of the incumbent solution, the latter is updated, otherwise, the exact (<ref>) and (<ref>) separators are called to provide violated inequalities that are passed to Gurobi as lazy constraints. In the Gurobi “MIPNODE” callbacks it is tried to exploit the fractional solution for a possible update of the incumbent solution. §.§ Implementation of the ILP Models The models (<ref>), (<ref>), and (PLO) include many symmetries regarding ordering variables and crossing variables. 
For each t_i∈ T and pair of characters c_u,c_v∈AC(t_i) we only keep the ordering variables x_i,u,v and crossing variables y_i,u,v with u<v. The constraints are adjusted with standard projections <cit.>. We take the linearized model (<ref>) as basis for our refined ILP model described in Section <ref>, because preliminary experiments showed no performance gain from refining (<ref>) instead. Further, the linearized model (<ref>) is competitive with the max-cut approach when implemented in Gurobi. Therefore, we decided on refining the linearized model that is simpler to implement and more accessible when compared with the max-cut approach. Furthermore, implementing any of the ILP models naively includes up to 𝒪(ℓ n^3) (<ref>)-constraints in the model. We have experimented with adding these constraints during a cutting-plane approach and also by including them into the Gurobi solver as lazy constraints, i.e., constraints that the solver can decide to include at later stages during the solving process. We decided to always add (<ref>) as lazy constraints, as this leads to the best performance. Hence, we consider the following algorithms for our experimental evaluation. * : the max-cut formulation (as a baseline) implemented as described in <ref> * : the linearized model (<ref>) with (<ref>)-constraints included as lazy constraints * : the quadratic model (<ref>) with (<ref>)-constraints included as lazy constraints * : the PLO formulation with (<ref>)-constraints included as lazy constraints The latter three algorithms are by default extended with the symmetry breaking constraints described in <ref> (), the initial heuristic from <ref> (), and the rounding and local improvement heuristics from <ref> (). This is not done for , as it should serve as a state-of-the-art baseline and allow comparison with Gronemann et al. <cit.>. § EXPERIMENTS AND EVALUATION In our experimental evaluation, we are interested in the following research questions. Q1: Does the algorithm based on our new ILP model dominate the state-of-the-art model ? Will we be able to solve hard instances that have not been solved to optimality before? How do the various algorithms compare to each other? Q2: What effect do the structural insights have when applied to the -formulation? Q3: What is the effect of the newly introduced components , , and ? In the following we describe our experimental setup, our benchmark instances, and the results of our study. We also provide our results and analysis on https://doi.org/10.17605/OSF.IO/3BUA2. §.§ Setup Systems employed for all experiments have AMD EPYC 7402, 2.80GHz 24-core CPUs and 1024GB of RAM, running Ubuntu 18.04.6 LTS; experiments were run using a single thread. is implemented in and compiled with , , and flag , all remaining code is written in , compiled with and in mode. To solve the ILPs we used Gurobi 11.0.1. The time limit is 3600s (same as Gronemann et al. <cit.>) and the memory limit is 16GB for all experiments. We do not know the memory limit for Gronemann, however memory was certainly not our limiting factor. The time for the initial heuristic is negligible (<1% of the overall runtime), so it is not counted towards the solving time. To mitigate performance variability, we ran each instance-setting combination with five different seeds provided to Gurobi; data displayed below corresponds to the seed with the median runtime. With this, an instance counts as “solved in the time limit”, if the majority of the five runs does not time out. 
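To make the lazy-constraint mechanism described in the implementation section above concrete, the following sketch shows how such constraints can be separated in a Gurobi callback. This is a Python/gurobipy illustration written for this exposition, not our C++ implementation; the variable layout and the choice of the per-layer transitivity (linear-ordering) constraints as the lazily separated family are assumptions of the sketch.

```python
import gurobipy as gp
from gurobipy import GRB

def build_ordering_model(layers):
    """layers: list of character lists, one per time step (illustrative input)."""
    m = gp.Model("storyline-lo")
    m.Params.LazyConstraints = 1                       # required so cbLazy() is honored
    idx = [(i, u, v) for i, chars in enumerate(layers)
           for u in chars for v in chars if u < v]
    x = m.addVars(idx, vtype=GRB.BINARY, name="x")     # x[i,u,v] = 1 iff u is above v at layer i
    m._x, m._layers = x, layers
    return m, x

def transitivity_callback(model, where):
    """Separate violated per-layer transitivity constraints at integer solutions."""
    if where != GRB.Callback.MIPSOL:
        return
    x, val = model._x, model.cbGetSolution(model._x)
    for i, chars in enumerate(model._layers):
        cs = sorted(chars)
        for a in range(len(cs)):
            for b in range(a + 1, len(cs)):
                for c in range(b + 1, len(cs)):
                    u, v, w = cs[a], cs[b], cs[c]
                    if val[i, u, v] + val[i, v, w] - val[i, u, w] > 1.5:
                        model.cbLazy(x[i, u, v] + x[i, v, w] - x[i, u, w] <= 1)
                    if val[i, u, w] - val[i, u, v] - val[i, v, w] > 0.5:
                        model.cbLazy(x[i, u, w] - x[i, u, v] - x[i, v, w] <= 0)

# usage: model, x = build_ordering_model(layers)
#        ... add crossing variables and the objective ...
#        model.optimize(transitivity_callback)
```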
Source code for the new formulations is available on https://doi.org/10.17605/OSF.IO/3BUA2. §.§ Test Data The instances used in our computational study are taken from the literature <cit.>. However, we also present a new data set, including existing instances, in a specifically designed storyline data format, together with tools for transformation and visualization of the storyline layouts on https://doi.org/10.17605/OSF.IO/3BUA2. We also provide data on best known crossing numbers. The existing instances from Gronemann et al. <cit.> consist of three book instances from the Stanford GraphBase database <cit.>, i.e., Anna Karenina (anna), Les Misérables (jean) and Adventures of Huckleberry Finn (huck), and the movie instances TheMatrix, Inception, and StarWars. The instances gdea10, gdea20 from Dobler et al. <cit.> consist of publication data from 10 (resp. 20) authors from the GD conference. The publication instances ubiq1, ubiq2 are from Di Giacomo et al. <cit.>. Furthermore, anna and jean are split up into slices of 1-4 consecutive chapters as was done by Gronemann et al. <cit.>[We could not replicate this process fully equivalently, as sometimes our optimal crossing numbers are different to those of Gronemann et al. <cit.>.]. The new instances consist of scenes from nine blockbuster movies, namely Avatar, Back to The Future, Barbie, Forrest Gump, Harry Potter 1, Jurassic Park, Oceans 11, Oppenheimer, and Titanic. In all 59 resulting instances, characters are active from their first interaction to their last interaction, and most instances have one interaction per time step. §.§ Evaluation Several instances could be solved within 30s by all algorithms, others could not be solved within the time limit by any of the algorithms. The three instances TheMatrix, Inception, and StarWars used commonly in heuristic storyline visualization were all solved within 450ms. We exclude all these instances and focus on the 23 challenging instances that remain. Out of these, the maximum number of characters is 88, and the maximum number of layers is 234. A table showing detailed statistics of test data and executions of each algorithm is given in <ref>. Answering and . Figure <ref> displays the number of instances solved over time for each algorithm. We observe that the algorithms differ in their ability to solve challenging instances: solves the most, followed by and , with last. In fact, solves one instance that cannot be solved by any other algorithm, and additionally solves six instances that could not be solved by Gronemann et al. <cit.> and three more than within the same time limit. Hence, we answer the first part in positively. For further illustration, the exact runtimes per instance are also shown in <ref>. Answering , the structural insights as applied in reduce the number of constraints by a factor of five on average, comparing and . More so, they enhance Gurobi's capabilities of strengthening the LP relaxation, as the two instances not solved by are solved by in the root, while starts branching early and times out. enters branching in all 23 instances, in two, in three, in seven instances. solves 21 out of 23 instances in the root, the remaining two with branching. Furthermore, we computed the speedup factor of , , and , when compared with on instances where both respective algorithms did not time out. This factor is the runtime of divided by the runtime of, e.g., . The geometric means of these values are 2.6 for , 3.2 for , and 2.7 for . 
Hence, our new algorithms are 2.6–3.2 times faster than the state-of-the-art algorithm . Ablation study to answer . We conduct an ablation study to discern the impact each of the methods proposed in <ref> has on the algorithms' performance. To this end, we enable all the proposed methods as the baseline configuration for , , and , namely ,   and . Then, each component is disabled one at a time to measure the component's impact on overall performance. In <ref> we present the speedup factors of the algorithms vs. their counterparts with the specific component disabled (see also <ref> for more details). From this table, we conclude that and the are beneficial for all algorithms, while has a small to no noticeable impact. This is further supported by <ref>, which shows that disabling or , results in all the formulations solving fewer instances (curves with no and no are below the baseline). Further, <ref> depicts the speedup factors per instance. This is because introduces equalities between two variables, and hence improve presolving capabilities and reduce the search space that solvers have to explore. The heuristics of help the solver find optimal solutions early in the process. This answers . Further observation. By inspecting <ref> in more detail, we can make the following interesting observations. * enters branching on every instance. This results in solving very small instances faster than the other algorithms, however it struggles to solve larger instances. * The larger the instance, the fewer constraints has compared to . * Most of the instances can be solved within the root of the branch-and-bound tree for most formulations. Large Instances. Finally, we demonstrate that our implementations are capable of solving even very large instances to proven optimality: Figure <ref>(a) shows the raw drawing of the data for Victor Hugo’s Les Misérables <cit.> as provided in the data file of the Stanford GraphBase <cit.>. After roughly 7 hours of single thread computation, we obtained the proven crossing minimum layout shown in Figure <ref>(b). § CONCLUSION AND FUTURE WORK As shown in our experimental study, our new methods and algorithms dominate the state-of-the-art algorithms and are able to solve large instances to optimality, while the newly introduced improvements are beneficial towards all considered formulations. We observe two directions for future work. * Our new components for improvement could be implemented into the max-cut formulation. However, initial experiments have shown that the simple linearized formulation (<ref>) performs comparably to the more complex max-cut formulation, hence we expect that this will result in a negligible or no improvement over our proposed formulations. * Out of our 59 instances (see https://doi.org/10.17605/OSF.IO/3BUA2) we were able to solve 55 when increasing the time limit. The remaining four unsolved instances should pose a challenge to engineer new exact methods for crossing minimization in storylines. § DETAILED STATISTICS ON INSTANCES AND ALGORITHM EXECUTIONS. |l|l|r|r|r|r|r| Table of runtimes for the 23 instances. B&B = number of branch-and-bound nodes, Root LB = lower bound computed at the root node, number of constraints, overall runtime in seconds, and the crossing number or interval of best lower bound and upper bound if not solved within time limit. The value t.l. means time limit and |𝒞|, ℓ, and |ℐ| are the number of characters, layers, and interactions, respectively. 
Instance Algorithm B&B Root LB #constraints Time[s] crossings Table of runtimes for the 23 instances (continued) Instance Algorithm B&B Root LB #constraints Time[s] crossings Avatar 1 229 238496 106.4 229 |𝒞|=35 628 226 820944 284.2 229 ℓ=54 3295 80 358.5 229 |ℐ|=54 1 229 841.9 229 Barbie 1 138 1125220 304.0 138 |𝒞|=67 1 138 6393964 123.4 138 ℓ=63 558 48 66.6 138 |ℐ|=63 1 138 923.6 138 HarryPotter1 1 236 520981 776.9 236 |𝒞|=52 597 229 2361796 1123.5 236 ℓ=54 11288 77 t.l. [223,239] |ℐ|=54 1 236 1444.9 236 Oceans11 1 275 1271525 536.9 275 |𝒞|=60 578 271 7018542 1087.9 275 ℓ=96 11908 82 t.l. [244,284] |ℐ|=96 1 275 2106.1 275 anna1-2 1 57 1370481 47.8 57 |𝒞|=57 1 56 6926150 38.2 57 ℓ=116 561 38 37.6 57 |ℐ|=116 1 57 53.1 57 anna1-3 1 108 4241743 1634.2 108 |𝒞|=83 1 108 30520530 966.5 108 ℓ=164 1792 52 t.l. [104,113] |ℐ|=164 1 108 2857.8 108 anna2-3 1 28 1793109 29.4 28 |𝒞|=67 1 28 10538684 12.8 28 ℓ=106 100 22 9.3 28 |ℐ|=106 1 28 30.6 28 anna2-4 9 76 3738699 701.4 76 |𝒞|=80 1 76 25966988 472.7 76 ℓ=155 562 47 136.3 76 |ℐ|=155 2 76 1339.5 76 anna3-5 1 107 4882063 1157.5 107 |𝒞|=88 1 107 36830405 919.4 107 ℓ=168 2570 59 1168.5 107 |ℐ|=168 1 107 1404.2 107 anna4-5 1 70 1568650 80.9 70 |𝒞|=60 1 70 8222852 48.9 70 ℓ=120 1642 37 124.8 70 |ℐ|=120 1 70 112.1 70 anna4-6 1 157 3228069 2087.8 157 |𝒞|=71 119 157 19956977 1960.4 157 ℓ=176 1090 77 2138.1 157 |ℐ|=176 1 157 2890.4 157 anna5-6 1 76 1821015 84.6 76 |𝒞|=63 1 76 10021591 64.9 76 ℓ=127 782 37 92.6 76 |ℐ|=127 1 76 83.0 76 anna6-7 1 78 1586302 155.5 78 |𝒞|=60 1 78 8300109 73.9 78 ℓ=118 917 33 110.2 78 |ℐ|=118 1 78 232.6 78 anna6-8 193 121 2266237 2700.7 121 |𝒞|=65 556 118 13025530 2155.1 121 ℓ=146 3193 52 2352.8 121 |ℐ|=146 1 121 2280.4 121 anna7 1 8 525061 0.6 9 |𝒞|=47 1 8 2151380 0.6 9 ℓ=62 3 6 0.4 9 |ℐ|=62 824 8 39.2 9 gdea20 1 41 108934 26.5 41 |𝒞|=19 1 41 212494 11.3 41 ℓ=100 171 19 8.6 41 |ℐ|=100 1 41 92.2 41 huck 1 41 2130471 5.7 42 |𝒞|=74 1 41 13537255 3.3 42 ℓ=107 140 30 4.5 42 |ℐ|=107 9547 41 2148.8 42 jean1-3 1 53 3616478 72.8 53 |𝒞|=73 1 53 24790611 44.4 53 ℓ=253 1070 36 108.9 53 |ℐ|=253 1 53 123.5 53 jean1-4 1 172 5643798 3306.5 172 |𝒞|=79 1 172 42117296 2739.6 172 ℓ=329 1792 75 t.l. [149,183] |ℐ|=329 1 156 t.l. [156,565] jean2-3 1 33 663871 30.7 33 |𝒞|=41 1 33 2671943 11.0 33 ℓ=158 181 24 7.5 33 |ℐ|=158 1 33 56.3 33 jean2-4 1 154 1358514 3150.6 154 |𝒞|=47 210 154 6283384 t.l. [154,159] ℓ=234 1587 68 t.l. [138,156] |ℐ|=234 1 148 t.l. [148,468] jean3-4 1 128 780027 1943.7 128 |𝒞|=41 330 126 3209062 t.l. [127,132] ℓ=175 1720 53 2157.5 128 |ℐ|=175 1 127 t.l. [127,170] jean4-5 1 96 521469 125.7 96 |𝒞|=36 1 96 1939809 111.3 96 ℓ=149 1476 50 171.2 96 |ℐ|=149 1 96 238.0 96
http://arxiv.org/abs/2409.02864v1
20240904164314
Bioinformatics Retrieval Augmentation Data (BRAD) Digital Assistant
[ "Joshua Pickard", "Marc Andrew Choi", "Natalie Oliven", "Cooper Stansbury", "Jillian Cwycyshyn", "Nicholas Galioto", "Alex Gorodetsky", "Alvaro Velasquez", "Indika Rajapakse" ]
cs.AI
[ "cs.AI", "cs.IR", "cs.SE" ]
§ ABSTRACT We present a prototype for a Bioinformatics Retrieval Augmentation Data (BRAD) digital assistant. BRAD integrates a suite of tools to handle a wide range of bioinformatics tasks, from code execution to online search. We demonstrate BRAD's capabilities through (1) improved question-and-answering with retrieval augmented generation (RAG), (2) BRAD's ability to run and write complex software pipelines, and (3) BRAD's ability to organize and distribute tasks across individual agents and teams of agents. We use BRAD for the automation of bioinformatics workflows, performing tasks ranging from gene enrichment and searching the archive to automatic code generation and running biomarker identification pipelines. BRAD is a step toward the ultimate goal of developing a digital twin of the laboratory, driven by self-contained loops of hypothesis generation and testing for digital biology experiments. § INTRODUCTION Large language models (LLMs) and scientific foundation models have expanded the potential applications of artificial intelligence <cit.>. Originally designed for machine translation, LLMs have now found applications in areas such as question-and-answering, code generation, and data analysis. These models are driving innovation across disciplines, advancing research in aerospace <cit.>, chemical engineering <cit.>, and bioinformatics <cit.>. Allowing models to function as independent agents capable of analyzing data and making decisions, or as collaborative teams where agents share instructions and results, offers a promising avenue for enhancing human-in-the-loop supervision in AI-driven discovery <cit.>. By enhancing scientific understanding and accelerating discovery, these technologies offer unprecedented opportunities for research. Although a single LLM can accomplish a myriad of tasks, multi-agent systems are increasing in popularity <cit.>. By delegating tasks to an array of individual LLM-powered agents, these agents can collaborate to solve various tasks, such as software development <cit.> and biomedical discovery <cit.>. In bioinformatics, multi-agent systems can power autonomous labs equipped with data analysis, enhancing scientific knowledge and accelerating discovery <cit.>. Our ultimate goal is the design of a self-driving, digital biology laboratory. Self-driving labs integrate a lab's experimental and analytical processes and can vastly accelerate scientific progress. Although they are still in their infancy <cit.>, closed-loop systems have already been implemented in other domains such as chemistry <cit.> and synthetic biology <cit.>. The Bioinformatics Retrieval Augmentation Data (BRAD) digital assistant integrates recent LLM and foundation-model technologies with traditional bioinformatics tools, creating an advanced LLM-based agent that can be chained together with others into a multi-agent system capable of complex decision-making and of executing digital biology experiments. The prototype BRAD research assistant utilizes an LLM to operate an array of modules.
The Lab Notebook module integrates documentation, protocols, and literature using Retrieval Augmented Generation (RAG) <cit.>, while the Digital Library module enables searches across online data and literature repositories. Additionally, the Software module automates code generation and execution. Together, these main features—supported by a robust software framework—allow BRAD to provide in-depth responses to research-grade user queries by performing its own independent research. We demonstrate BRAD's ability to execute standard research workflows efficiently. Individually, a BRAD bot can handle tasks like literature searches, gene set enrichment from databases such as Enrichr or Gene Ontology, and generation and execution of Python and MATLAB software. When organized effectively, BRAD bots can work in tandem to solve more complex challenges such as biomarker identification, network analysis, or exploratory data analysis. This collaboration between BRADs allows each agent to tackle tasks of greater complexity, producing results that surpass the sum of their individual efforts (see <ref>). The remainder of this article first presents the anatomy (or software architecture) of BRAD and then presents several results demonstrating BRAD's ability to dynamically learn new topics, together with use cases such as biomarker identification and enrichment. § ANATOMY OF BRAD At the heart of this architecture is an LLM that orchestrates the usage of BRAD's different modules. These modules enable BRAD to interface directly with documentation, published research, the internet, software, and user data. This modular architecture grants BRAD the flexibility to integrate diverse tools while offering users a framework to customize and expand BRAD's capabilities. When responding to a user query, BRAD's modules adhere to a standard template (see <ref>, left): (1) a semantic router <cit.> selects the most appropriate module to handle the query; (2) within the selected module, a series of prompts is used to retrieve literature, search online, execute code, or perform other module-specific tasks; (3) an LLM summarizes BRAD's findings and continues the conversation. For queries demanding multiple modules, BRAD's Planner module acts similarly to a ReAct agent <cit.> by breaking down the query into a series of tasks to be handled by individual modules. These tasks are executed sequentially, with BRAD compiling the results from each module to form a cohesive response. The Planner can create linear workflows or more advanced pipelines that include feedback loops, branching, and other control flow mechanisms. The backbone of BRAD's design is code that supports its integration with other software and maintains its ease of use. Each BRAD chatbot or chat session outputs all figures, downloaded data, and other relevant information to a centralized output directory. This directory also maintains a comprehensive log that tracks BRAD's interactions with the user, databases, and other tools. Built on the LangChain framework, BRAD can utilize LLMs and LLM APIs either locally or from providers such as NVIDIA (https://build.nvidia.com/meta/llama3-70b) and OpenAI (https://openai.com/). It can also be integrated into any LangChain or LangGraph architecture, ensuring broad compatibility with standard frameworks <cit.>. In the remainder of this section, we outline the capabilities of BRAD's main modules, with further implementation details available in SI <ref>. §.§ Lab Notebook Module The Lab Notebook module connects BRAD to internally stored documents, including literature, protocols, textbooks, and any other files.
It employs RAG to tailor the LLM's responses based on curated, user-provided information. BRAD can choose between a previously constructed document database or create one dynamically. For further detail on database creation, see SI <ref>. RAG functions in two phases: retrieval and generation <cit.>. During retrieval, the aforementioned database is searched to find chunks of text relevant to the user's query. In the generation phase, the retrieved documents are combined with the user query to create a prompt for the LLM. This approach enhances response quality and reduces hallucinations <cit.>. The module allows users to build the database from their own literature and to customize the retrieval and generation processes (see <ref>). BRAD supports various techniques, including similarity-based search, maximum marginal relevance (MMR) <cit.>, and multi-query retrieval during the retrieval phase, as well as compression <cit.> and reranking <cit.> in the generation phase. Example Use Case. As an example of the output, see <ref> (Q1): How can we view Pore-C as a hypergraph? This question requires the synthesis of information from different fields. For instance, Pore-C is a recently developed experimental genomics technique that characterizes three-dimensional chromatin structure by identifying multi-way contacts in the chromatin polymer. On the other hand, hypergraphs are a higher-order generalization of graphs that allows edges to contain multiple nodes. Since Pore-C data captures multi-way contacts, hypergraphs are a natural way to view Pore-C data. Using OpenAI's GPT-4o mini as the underlying LLM for response generation, we equipped BRAD with three different configurations to test BRAD's question-answering capabilities: a vanilla LLM, a standard RAG with no enhancements (i.e., only MMR retrieval and generation), and an enhanced RAG with additional features, including MMR, multi-query retrieval, reranking, and contextual compression. When BRAD employs a RAG to improve its response, it is supplemented with ten chunks, generated by recursively chunking the papers in our database (see <https://rajapakse.lab.medicine.umich.edu/papers/digital-library>). In our view, the enhanced RAG, where BRAD makes full use of its retrieval and generation enhancements, best answers this question. Despite the fact that this question has a clear-cut meaning, it is still quite open-ended, allowing the BRAD chatbots freedom to dive into the specifics as much as possible. Consequently, we deem the vanilla LLM's response to be the worst. In particular, its discussion of Higher-Order Interactions parrots its own previously generated text. Additionally, although it claims to give perspectives on how to analyze and visualize the data, it completely forgets to discuss visualization techniques. The standard RAG is also overly repetitive, repeating information across the first paragraph and the enumerated list of concepts. However, its discussion about hyperedges is more insightful. Even though it also fails to give pointed recommendations about visualization tools, it at least comments on the capability rather than omitting the discussion altogether. As a result, we believe the enhanced RAG best answers the question. It fixes many of the issues that the standard RAG fixes, without the overly repetitive nature of the standard RAG. Specifically, its discussion on Higher-Order Interactions and Analysis and Algorithms adds new insights compared to both the vanilla LLM and the standard RAG. Q1: How can we view Pore-C as a hypergraph?
Enhanced RAG: Pore-C is a technique used to study the three-dimensional organization of the genome, particularly in the context of chromatin interactions. To view Pore-C as a hypergraph, we can consider the following aspects: ⋮ In summary, conceptualizing Pore-C data as a hypergraph allows for a more nuanced understanding of genomic interactions, accommodating the complexity of higher-order relationships that are inherent in chromatin organization.

Standard RAG: Pore-C is a technique used to study the three-dimensional organization of the genome, particularly in the context of chromatin interactions. Viewing Pore-C data as a hypergraph can provide a more nuanced understanding of the complex relationships between genomic elements. In a hypergraph, nodes represent individual genomic features (such as genes, regulatory elements, or chromatin domains), and hyperedges represent interactions or relationships that can connect multiple nodes simultaneously. This is particularly relevant for Pore-C data, where interactions are not limited to pairs of genomic elements but can involve multiple elements at once. ⋮ In summary, viewing Pore-C data as a hypergraph allows for a richer representation of the complex interactions within the genome, facilitating deeper analysis and understanding of chromatin architecture and its functional implications.

Vanilla LLM: Pore-C is a technique used to study the three-dimensional organization of the genome, particularly focusing on chromatin interactions. To view Pore-C as a hypergraph, we can consider the following aspects: ⋮ In summary, conceptualizing Pore-C data as a hypergraph allows for a more nuanced understanding of the complex interactions within the genome, enabling researchers to explore the functional implications of these interactions in a more comprehensive manner. See SI <ref> for the full vanilla LLM output.

Quantitative Assessment. Beyond the qualitative assessment of the Lab Notebook module, we employed the RAG Assessment (RAGAs) framework to quantify the advantage of BRAD's RAG system <cit.>. We generated a set of research questions and evaluated BRAD's ability to answer them using various features of the Lab Notebook module. These questions were manually classified into three different topics: Mathematics, Biology, and Computer Science and Data. This framework employs an evolution-based question generation approach to produce a diverse range of questions <cit.>. This process begins by invoking an LLM to generate a basic set of questions from the document database. Then, it takes those same questions as input and asks the LLM to "evolve" the questions by making them more complex or simpler. This prompt engineering is performed iteratively until the final set of questions is generated. BRAD then answers the generated questions, and its performance is scored based on the expected answer for each question. A detailed explanation of how each metric is calculated is included in SI <ref>.
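For concreteness, a minimal sketch of such an evaluation with the ragas package is shown below. The dataset fields are illustrative placeholders rather than our actual questions, the exact column and metric names vary between ragas versions, and the framework itself calls an LLM judge (assumed here to be configured via an OpenAI API key).

```python
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import (answer_correctness, answer_relevancy,
                           context_precision, context_recall, faithfulness)

# One illustrative record per evaluated question (real runs use the generated question set).
eval_data = {
    "question":     ["How can Pore-C be viewed as a hypergraph?"],
    "answer":       ["<BRAD's response>"],
    "contexts":     [["<retrieved chunk 1>", "<retrieved chunk 2>"]],
    "ground_truth": ["<reference answer>"],          # "ground_truths" in older ragas versions
}

result = evaluate(
    Dataset.from_dict(eval_data),
    metrics=[answer_correctness, answer_relevancy,
             context_precision, context_recall, faithfulness],
)
print(result.to_pandas().mean(numeric_only=True))     # mean score per metric
```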
Using the same document database, we generated three types of questions: (1) simple questions whose answers are found directly in the database, (2) reasoning questions that require deeper understanding of a single piece of text, and (3) multicontext questions that require understanding multiple documents to answer successfully. We posed the generated set of questions to each of the same three BRAD chatbots as before and compared their responses using the RAGAs evaluation metrics. Compared to its counterparts, the standard RAG has the greatest mean answer correctness overall, indicating it is the best performer. However, the enhanced RAG has the highest mean answer correctness on questions related to computer science and data. We believe that the meaning of mathematical symbols is lost in the summaries produced during contextual compression, leading to the enhanced RAG having a lower mean answer correctness than the standard RAG. Faithfulness, a proxy for reliability, is relatively low for all three models. This demonstrates that the BRAD chatbots do not regurgitate what was presented in the papers. Interestingly, the standard RAG consistently has substantially higher faithfulness. Although this may be perceived as a positive, there can be adverse effects, as shown in Q1, where the standard RAG parrots text found in the retrieved documents but does not provide synthesis or further analysis. Context precision is fairly high for all three models, indicating that chunks relevant to the ground truth are being retrieved for the RAG systems and that the vanilla LLM, which has the lowest context precision, still provides on-topic answers. We see significantly less variability in the enhanced RAG's context precision on multi-context questions, validating reranking's effectiveness. This is further supported by higher answer correctness scores for questions that span multiple topics, as BRAD needs to identify the most important chunks from at least two distinct subtopics. Finally, both the standard and enhanced RAG significantly outperform the vanilla LLM with respect to answer relevancy, demonstrating considerable mitigation of hallucinations. Note that context recall, which is not shown, was consistently above 95% for all three versions. §.§ Software Module BRAD's Software module features a multistage pipeline that utilizes LLMs to write and execute code in response to user queries. It supports running Python, MATLAB, and Bash scripts. Although BRAD is primarily designed for workflows that involve saving data to files at each stage, as is common in bioinformatics, this module also supports generated code that avoids writing files between stages. BRAD performs a series of steps to safely execute a piece of software (see <ref> and SI <ref>). The process starts with BRAD reviewing the documentation of all available scripts and leveraging the LLM to select the code that best addresses the task. Based on the user input and the selected code, BRAD uses the software documentation, few-shot learning examples, and the LLM to generate the appropriate code. Refer to SI <ref> for details on the implementation. To ensure safe execution of generated code, BRAD is restricted to running whitelisted software. This allows BRAD to verify that the correct paths, inputs, outputs, and other arguments are provided for each script. Additional checks ensure the closure of all delimiters and guard against other issues identified while testing LLM-generated code. Errors identified during code validation are sent back to the LLM along with the original query for refinement.
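A minimal sketch of this generate-validate-execute loop is shown below; generate_call and static_checks stand in for BRAD's internal prompt and validation routines and are not its actual function names.

```python
import subprocess

def run_software_task(query, script, generate_call, static_checks, max_attempts=3):
    """Generate, validate, and execute a call to a whitelisted script (sketch).

    script        -- the whitelisted script selected for the task (with its documentation)
    generate_call -- generate_call(query, docs, feedback) -> shell command (LLM-backed)
    static_checks -- static_checks(cmd) -> list of problems (bad paths, open delimiters, ...)
    """
    feedback = ""
    for _ in range(max_attempts):
        cmd = generate_call(query, script["documentation"], feedback)
        problems = static_checks(cmd)
        if problems:
            feedback = "; ".join(problems)            # send validation issues back to the LLM
            continue
        proc = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        if proc.returncode == 0:
            return proc.stdout
        feedback = proc.stderr                        # runtime error -> another refinement round
    raise RuntimeError(f"no runnable call after {max_attempts} attempts: {feedback}")
```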
This iterative process of code generation and validation may continue until the code executes without error. Once all issues are resolved, the code is run. We also designed the module to execute generated code directly, without calling whitelisted scripts. While this approach allows BRAD to fully utilize generated code, it introduces several challenges: (1) ensuring the safety of LLM-generated code, and (2) generating bug-free, free-form code with LLMs. To demonstrate this feature, we designed a script that enables the safe execution of a full software library with BRAD. However, ensuring that generated code is error-free remains a challenge. §.§ Digital Library Module The Digital Library module provides BRAD with access to an array of literature, archives, and online databases (see <ref>). Building on a standard RAG, BRAD can retrieve and share information from these sources, either directly with the user or through other modules. Below, we outline the integration of several databases; new databases can be added with relative ease. Enrichr. Enrichr is a comprehensive gene set enrichment tool commonly used in many bioinformatics pipelines <cit.>. We integrated Enrichr into BRAD to leverage gene set annotations, curated pathways, and ontology knowledge to enhance BRAD's responses to user queries. Gene Ontology. The Gene Ontology (GO) database is an initiative to provide a uniform approach and vocabulary for studying genes and gene products across different species <cit.>. When gene names, GO terms, or other keywords are identified, BRAD has the ability to define these GO terms and retrieve charts that diagram their relationships with other genes. Moreover, the GO terms are associated with papers that may be integrated via other databases and modules of BRAD. Literature Repositories. BRAD integrates the arXiv, bioRxiv, and PubMed literature databases. BRAD can display search results to the user, link articles, summarize recent abstracts, download and display papers, and embed new papers into its database. Integration with Other Modules. Online retrieval works symbiotically with the other modules. For instance, the Lab Notebook RAG and the literature database search complement each other. These modules can be used in tandem to perform literature reviews and construct RAG databases (see <ref> and SI <ref>). When the Lab Notebook does not contain literature relevant to a user query, new literature can be obtained to improve the RAG database. Conversely, when new literature is obtained, the RAG is used to summarize the documents and respond to the user. Similar iterations can occur between the software engine and the database searches. §.§ Additional BRAD Modules BRAD also contains modules responsible for documentation generation, planning and executing multistage queries, code generation, and more. Refer to SI <ref>-<ref> for additional implementation details. § DIGITAL LABORATORY This section presents various examples and results achieved by BRADs. We demonstrate the ability to (1) perform a comprehensive search about a gene using network analysis, multiple databases, and RAG, (2) automatically build a RAG database targeting a specific question from scratch, (3) run biomarker identification pipelines, and (4) perform automatic exploratory data analysis. The first three examples are performed using a single BRAD chatbot, while the exploratory data analysis is done with multiple BRAD agents that pass information between one another.
Each use case demonstrates how a single BRAD or a team of BRADs tackles or automates complex research tasks under human supervision. §.§ Enhanced Search Integrating the Digital Library and Lab Notebook modules improves the quality of the RAG responses generated by the LLM. Consider, for instance, Q2: What role does PCNA play in the cell cycle and imaging? BRAD orchestrates the following process: (1) identify which genes interact with PCNA; (2) determine the pathways related to PCNA; and (3) use the related genes and pathways to search the literature and formulate a comprehensive answer. In addition to providing a detailed text response, this process generates supplementary materials such as images, spreadsheets, and other documentation to offer further context to the user (see <ref> and SI <ref>). Q2: What role does PCNA play in the cell cycle and imaging?

System Output: PCNA's Role in the Cell Cycle: PCNA (Proliferating Cell Nuclear Antigen) plays a crucial role in the cell cycle, specifically in the S phase. It is loaded onto DNA during S phase, and its interaction with the minimal human Cdt1 PIP degron triggers the degradation of a cell cycle reporter, providing a direct physical readout of DNA replication. PCNA's Role in Cell Cycle Imaging: PCNA is a DNA replication sensor that can be used to track the cell cycle. It appears as a diffuse nuclear pattern in G1 phase, punctate in S phase, diffuse again in G2 phase, and diffuses throughout the whole cell in M phase. It is a high-contrast S/G2 transition marker, but requires high-resolution images and regular nuclear morphology and monolayer culture. ... See <ref> for the full output and generated documents.

§.§ Building a RAG Database We evaluated BRAD's capacity to respond to questions without a pre-existing database. BRAD was tasked with constructing a database from scratch to address research questions. By alternating between the Digital Library and Lab Notebook modules (see fig. <ref>, bottom), BRAD successfully built a local literature database and retrieved relevant information online to answer the questions. Consider the question Q3: What are single cell foundation models? As these are a recent technology, BRAD has no information regarding these models. Recognizing the insufficiency of available information from the empty database, BRAD gathers research papers and articles related to single-cell analysis and foundation models. With this new information, BRAD enhances its response. This process was repeated twice until the LLM deemed the response satisfactory. The final BRAD response provided a fairly detailed overview of single-cell foundation models, including several notable examples and their use cases. For a comprehensive summary of how BRAD executed this process, see SI <ref>. Q3: What is a single cell foundation model?

RAG, Iteration 1 (empty database): Based on the provided source and content, the initial database is empty, and I couldn't find any information regarding “single cell foundation models." I am confident that I cannot provide an accurate answer to this question with the current information available.
Search terms: * Single cell foundation models * Machine learning for single cell data analysis * Single cell machine learning * Deep learning for single cell analysis

RAG, Iteration 2 (populated database): Single cell foundation models are artificial intelligence models designed to analyze and understand data from individual cells. They are constructed using single-cell transcriptomic data, which measures the expression levels of genes in individual cells. These models utilize machine learning algorithms, such as transformers, to learn gene-level and cell-level representation. They can accurately categorize known cell types and have the potential to uncover new biological insights in single-cell RNA sequencing data. Single cell foundation models are used in genomics and multi-omics research, showing promise in various downstream tasks, such as identifying cell types and states, discovering novel cell populations, inferring gene regulation networks, and more. Examples of single cell foundation models include scGPT, Genefomer, and scFoundation. They are a growing field of study, with ongoing advancements expected as single-cell technologies and omics sciences continue to develop.

§.§ Biomarker Identification and Enrichment Biomarker identification – finding genes that indicate a biological state – remains a challenge <cit.>. A typical framework for sensor selection involves two steps: (1) modeling and (2) observability analysis. Equipped with modeling and observability software from <cit.>, BRAD was given several datasets <cit.> and tasked with uncovering potential biomarkers. After normalizing the data, BRAD built several models of the dynamics for each dataset using the Dynamic Mode Decomposition and Data Guided Control frameworks. BRAD then passed the models into the observability software to identify and rank the contribution of each gene as a sensor. Finally, BRAD queried the top-ranked genes in Enrichr to determine the relevant biological pathways and processes associated with the data. BRAD built models of the dynamics both in the data space and based upon Geneformer embeddings <cit.>, which were also generated by the Software module. While both processes model the dynamics, BRAD correctly identified, based on the documentation found in the observability code, that the observability codes should not be applied in Geneformer embedding space. This constraint results from the inability to interpret observability results in embedding space. A challenge for single-cell foundation models will be to make these results interpretable in the original space of the data, perhaps using a decoder or an alternative dynamics perspective. Despite reporting that the observability codes did not make sense to execute, BRAD still ran these codes in the embedded space. §.§ Automated Cellular Reprogramming Analysis A team of BRADs was employed to study in silico perturbations of fibroblasts toward hematopoietic stem cells (HSCs). Several hundred reprogramming regimes were identified with Data Guided Control <cit.>, each consisting of a subset of ten transcription factors, and after embedding each recipe with Geneformer <cit.>, the BRADs were tasked with performing exploratory analysis to identify the best recipe. Coordination. A central BRAD generated high-level instructions for a team of five BRAD workers based on detailed documentation of the experiment. Each worker used the Planner module to translate these objectives into tasks for the Software module.
Expert review led to adjustments, including switching to cosine similarity, removing the literature review task, and adapting file paths for our lab's system. This human-in-the-loop (HITL) approach ensures the BRAD team's analysis aligns with our research goals and lab environment. Execution. After revising the plans, the BRAD team executed its analyses, generating figures, code, and new data files. In minutes, the BRADs produced outputs that would typically take humans hours or even days. However, some duplication of files suggested that better data structuring was needed initially. While the figures were unpolished, they enabled rapid exploratory analysis of the cell clusters arising from different transcription factor reprogramming recipes, ensuring reproducibility and allowing for human feedback. All outputs from the BRADs require thorough human review, shifting researchers' roles from writing and debugging code to validating software, documenting analysis pipelines, and reviewing results. This shift enhances research efficiency and aligns with the software development life cycle, where validation and testing are essential in ensuring accuracy. § DISCUSSION This work demonstrates the potential of leveraging LLMs to power semi-autonomous bioinformatics assistants, such as BRAD. Our prototype BRADs were deployed on an array of tasks, ranging from standard research tasks, such as searching the literature or gene enrichment, to more complex procedures, such as executing software pipelines and performing exploratory data analysis. Through automation, we achieved improved speed, reproducibility, and documentation of our workflows, albeit on well-known, standard tasks in a prototyped environment. The major strengths of the BRAD system lie in its ability to reliably perform common research tasks. The breakdown and distribution of problems, such as the multiple tools used to answer Q2, the automated execution of the biomarker pipeline, and the distributed workflow on Geneformer, split seemingly complex tasks into small, manageable problems that can be understood and executed by an LLM. This modular approach offers a blueprint for scaling up the number of collaborating agents while still allowing humans fine control over the different aspects of the pipeline. There are two central challenges in our current system. First, providing instructions that allow an LLM to split and distribute a complex task in a thoughtful way is a nontrivial problem for the user. Inconsistencies between the organization and capabilities of the user's instructions, the LLM, and the available software cause issues. Furthermore, the detection of an error or corruption in a multi-stage or multi-BRAD workflow is often unclear to the system, particularly without user intervention. Maintaining a HITL is both necessary to effectively use BRAD and an important stopgap for preventing errors from propagating in the system. For instance, in the Geneformer example, the human was used to modify parameters or instructions to the individual agents prior to their execution. Allowing users to interact with the plan ensures that the initial human instructions are appropriately interpreted prior to distributing the instructions to the team of agents. To augment this, each human-approved plan is saved for potential future use in order to maximize the HITL contributions. The second challenge of this work is effectively passing information between different modules and agents.
For instance, the structure, purpose, and organization of new data files created by one agent in the Geneformer experiment must be communicated between agents that touch the same file. Passing information between modules of a single BRAD is managed with memory, and we plan to develop new solutions for passing information between multiple agents. In summary, this work highlights the potential use cases of BRADs working as a Multi Agent system to act as research assistants. Despite the persisting challenges, there currently are opportunities to use s for well-defined and repetitive tasks, such as running software pipelines or performing enrichment. BRAD, as a tool, successfully demonstrates these capabilities. Moving forward, we aim to enhance the planning and coordination between modules and multiple BRAD agents, with the goal of advancing towards a fully autonomous digital laboratory. § ACKNOWLEDGMENTS We thank the members of the Rajapakse Lab, Adam Lord, and Santosh Srivastava for helpful and inspiring discussions. We also thank Vivan Nyati, a student of Greenhills, Ann Arbor, for his valuable contributions over the summer. This work was supported by the Defense Advanced Research Projects Agency award number HR00112490472 (IR), the Air Force Office of Scientific Research (AFOSR) award number FA9550-22-1- 0215 (IR), support from NVIDIA (IR), and NIGMS GM150581 (JP). Supporting Information § ANATOMY OF BRAD EXTENDED This section provides an in-depth overview of BRAD's software architecture, intended for readers who are either interested in implementing a similar system or seeking to understand the technical specifications of BRAD. For a more comprehensive guide, please refer to the https://drive.google.com/file/d/1X8hSXvoGOEv4bR4B3mf3EYRF-bbIzRsd/view?usp=sharingsoftware manual here. §.§ Base Architecture Behind each module of BRAD is a framework that standardizes information flow through BRAD, memory, and external resources. The configuration parameters for each module are specified in . These default settings can be overridden by supplemental configuration files when instantiating individual BRAD instances. Module specific parameters are further detailed in their respective sections. Language Model Integration. BRAD's integration is built atop the LangChain interface for querying s. This supports a wide array of s that could be integrated into a BRAD chatbot. For ease of use, BRAD provides a simplified interface to run s hosted by https://build.nvidia.com/explore/discoverNVIDIA and https://platform.openai.com/docs/overviewOpenAI, and also supports locally hosted https://github.com/ggerganov/llama.cpp. Output Directory. An output directory is created for each BRAD chatbot. These output directories are the location where any downloaded database results, new literature, images, or other main results will be placed. Within , the parameter allows users to specify where these directories should be created. New directories are placed in the appropriate location and time stamped. After exiting a chatbot, the session state will be saved within the output directory and can be used to turn on BRAD from the same session or starting point in the future. Logging. A detailed log is provided within each output directory. The log is a formatted file that tracks the state of BRAD as it responds to each query. It records user inputs, outputs, and the specific actions taken, such as calls, file operations, or the use of other module-specific tools. 
There are three primary uses for these log files: (1) Clarity: The log files record every database query, executed software, referenced literature, and other resources used. Users can refer to the log file to understand how BRAD processes an input. (2) Documentation: BRAD's documentation generation features, including both code and PDF generation, utilize the log files to detail the process after completion. (3) Debugging: These logs are useful for identifying errors. Separate from the log files but also useful for understanding how BRAD responds to a user query, a flag can be set in the configuration to have BRAD display each of the steps taken along the way. Utilities. A large array of utilities has been developed to standardize the interface between the LLM and each module. This includes parsing responses, reading data from files, managing the output directory, and more. §.§ Lab Notebook Module RAG employs a two-step process: retrieval and generation. During the retrieval phase, a database is searched to identify documents and information similar to the user query. During the generation phase, the retrieved documents and user query are combined to form a single prompt for the LLM. Based upon the additional context provided in the retrieved documents, this RAG can improve responses by (1) reducing hallucinations, (2) improving the accuracy of responses, and (3) providing verifiable sources for each response. BRAD contains several features to augment the traditional RAG system. Database Construction and Chunking. Chunking breaks documents into smaller pieces, or chunks, of text that can later be used to help an LLM respond to a user query. It is important to divide documents to create more focused text that RAG models can use to generate better responses. Although there are various ways to split up texts, BRAD employs two methods: * Recursive chunking divides text into smaller chunks based on size and a set of separators, such as periods and line breaks. * Semantic chunking encodes sentences into an embedding space and then applies similarity metrics to determine which sentences should be grouped into the same chunk. Although semantic chunking creates more interpretable chunks, recursive chunking better captures complex linguistic structures. By identifying sections with a high density of separators, we can filter out chunks that belong to the reference sections of a paper. The distribution of these separators is bimodal, with little data in the middle range, helping to prevent the RAG from retrieving chunks filled with buzzwords but lacking substantive content. Similarity Search v. MMR. Given a user query, BRAD has two mechanisms for retrieving the initial text chunks: similarity search and maximum marginal relevance (MMR). In both methods, the user query is embedded as a vector so that similarity metrics, primarily cosine distance, quantify the relevance of chunks of text to the user query. Similarity search compares chunks to the prompt and selects the chunks most similar to the prompt. This is a straightforward approach to chunk retrieval but suffers from possible redundancies in the chunks, which may hamper BRAD's response. MMR instead selects chunks that maximize similarity to the prompt while penalizing similarity to the chunks already selected, with the trade-off controlled by a parameter λ. The MMR-based retrieval therefore selects a more diverse set of chunks, mitigating the redundancy issue.
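The sketch below shows the MMR selection rule over pre-embedded chunks; the embeddings and the value of λ are assumed to be given, and the code is an illustration rather than BRAD's implementation.

```python
import numpy as np

def mmr_select(query_vec, chunk_vecs, k=10, lam=0.7):
    """Maximum marginal relevance over embedded chunks (minimal sketch).

    lam = 1 selects purely by relevance to the query; lam = 0 purely by
    diversity with respect to the chunks already selected.
    """
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    relevance = [cos(query_vec, c) for c in chunk_vecs]
    selected, remaining = [], list(range(len(chunk_vecs)))
    while remaining and len(selected) < k:
        def score(i):
            redundancy = max((cos(chunk_vecs[i], chunk_vecs[j]) for j in selected), default=0.0)
            return lam * relevance[i] - (1 - lam) * redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected            # indices of the chosen chunks, in selection order
```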
Multi-query Retrieval. A challenge of retrieval is that questions or queries with similar semantic meaning can be worded differently. To mitigate this, multi-query retrieval takes the original user input and uses an LLM to create several different ways of expressing the same query. For each new expression, as well as the original user query, the aforementioned retrieval methods are applied to find chunks relevant to the user query. Compression. Since the retrieved chunks can be unorganized, we use contextual compression to summarize the contents of each of the selected chunks. This method passes each chunk, one at a time and along with the user query, to an LLM with instructions to summarize the retrieved text, highlight the pieces of text relevant to the user's query, and remove information not relevant to the user's query. Since the summaries are often shorter than the chunks themselves, this method can be effective when the context window of the LLM is a limiting factor in the generation stage of RAG. While this is the most computationally expensive feature of BRAD's RAG system, since an LLM call is used for each retrieved chunk, it produced the greatest improvements in the quality of BRAD's responses. Reranking. Reranking is a common feature in RAG systems and provides a secondary validation of the relevance of retrieved chunks of text prior to the generation stage. To maximize the relevance and diversity of chunks given to the LLM during generation, we implemented a PageRank-based reranker. From the retrieved chunks, we create a weighted graph whose nodes are the chunks of text and whose edge weights are the similarities between chunks. Then, the PageRank algorithm is used to select the most relevant chunks to be used in the generation phase. With this secondary filter, an increased number of text chunks can be retrieved from the RAG database without necessarily passing duplicated or irrelevant information to the LLM. Scalability of Database and Runtime. Prior to running the RAG, creating the vectorized text database can take several minutes to hours, depending on its size. For instance, our database of more than 500 papers and books comprises roughly 18,000 pages. Once the database is created, however, BRAD runs quickly regardless of the database size. Furthermore, the choice of LLM can also heavily influence computational cost. While the enhancements mentioned above improve the performance of the RAG, they also introduce additional computational costs. §.§ Software Module BRAD's coding module provides a multistage pipeline that uses LLMs in conjunction with custom documentation both to analyze data and to improve the response to a user's query. BRAD contains implementations to execute Python, MATLAB, and Bash scripts or pipelines. Multistage Execution Pipeline. A series of steps is required for BRAD to run a piece of code. The process begins with a user's query, where BRAD first reads the documentation of all available code to determine the appropriate script to execute. With the user's input and the selected code, BRAD then rereads, in greater detail, the documentation of the code to execute. Using this information, along with a set of examples based on the principle of few-shot learning, an LLM then writes a snippet of code to execute the appropriate script. Once an initial block of code is generated, a series of validations is performed prior to execution.
To address issues found while testing this pipeline with Llama 2 and 3, among other models, the proposed code is verified to ensure closure of all delimiters, correct pointing to an output directory for all read and write operations, and the absence of other common syntax errors. After these validations, the code is compiled to ensure it is bug-free before execution. Any errors identified during the validation stages are passed, along with the code and the original query, back to the LLM for refinement. This iterative process of code generation and validation may be repeated several times until the code executes without error. Once all issues are resolved, the code is run. Software Specification. In order for BRAD to run code, the following requirements are enforced: * Location: All code that a user wants BRAD to run must be included in a BRAD-specific path. * Documentation: Any code to be run must be well documented, adhering to standard Python or MATLAB conventions, enabling BRAD to understand the software's purpose and execution process. * Output Arguments: Any code that writes information to disk should accept at least one argument through which BRAD can indicate the correct output directory for new files. These minimal requirements ensure that BRAD can effectively manage, understand, and execute the code while maintaining a consistent and organized workflow. Additionally, these three requirements are intended to allow for ease of adoption of previously generated code. Software Limitations and Considerations. While useful and effective, this framework for LLM-powered execution of code has several limitations and vulnerabilities. It expects bug-free software from the user, which can be challenging to ensure. It can also be susceptible to malicious code if the scripts executed by BRAD are not curated. §.§ Digital Library Beyond traditional text-based retrieval, there is a growing collection of online databases common to bioinformatics workflows that can be integrated into the RAG pipeline. Here, we outline the integration of several databases and highlight that our architecture allows for the seamless addition of new databases into BRAD; see <ref>. Enrichr. Enrichr is a comprehensive gene set enrichment tool commonly used in many bioinformatics pipelines <cit.>. We integrated Enrichr into BRAD to leverage gene set annotations, curated pathways, and ontology knowledge to enhance BRAD's responses to user queries. This allows for enrichment of literature-mined gene sets, as well as those found in user data, directly within the chatbot's interface (a sketch of such a query appears below). Gene Ontology. The Gene Ontology (GO) database is a bioinformatics initiative to provide a uniform approach and vocabulary for studying genes and gene products across different species <cit.>. This database aims to provide a consistent, curated vocabulary and annotations for genes across different species, making it an excellent resource to integrate into BRAD. When gene names, GO terms, or other keywords are identified in BRAD's responses to the user, BRAD has the ability to define these GO terms and download charts that diagram their relationships with other genes. Moreover, the GO database provides papers associated with different terms, and these papers can be integrated via the web-scraping codes.
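As an illustration, gene lists can be submitted to Enrichr through its public REST API roughly as sketched below. Endpoint paths and library names reflect the API at the time of writing and may change; error handling is omitted, and the code is not BRAD's internal implementation.

```python
import requests

ENRICHR = "https://maayanlab.cloud/Enrichr"

def enrich(genes, library="GO_Biological_Process_2023"):
    """Submit a gene list to Enrichr and return enrichment results for one library (sketch)."""
    payload = {"list": (None, "\n".join(genes)), "description": (None, "BRAD gene set")}
    user_list_id = requests.post(f"{ENRICHR}/addList", files=payload).json()["userListId"]
    resp = requests.get(f"{ENRICHR}/enrich",
                        params={"userListId": user_list_id, "backgroundType": library})
    # each entry: [rank, term, p-value, z-score, combined score, overlapping genes, adj. p-value, ...]
    return resp.json()[library]

# example: top_terms = enrich(["PCNA", "MCM2", "CDK1"])[:10]
```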
However, databases may sometimes fail to provide satisfactory answers because they contain irrelevant information or lack the latest research. To address this, BRAD incorporates the ability to update its literature database. We implemented searching of arXiv, bioRxiv, and PubMed, but could broaden this process to include additional databases. BRAD can display search results to the user, link articles, summarize recent abstracts, download and display papers, and embed the new papers into its database. This enables BRAD to form a closed, human-in-the-loop system. User feedback, either in the form of detected poor responses or of explicitly telling BRAD to update, allows BRAD to improve its retrieval knowledge base over time. When retrieving new documents on PubMed, BRAD only gathers publicly accessible papers. This ultimately limits the documents BRAD has access to and may exclude key information and notable findings. To combat this problem, users may directly add their own documents to databases to ensure their favorite findings are accounted for in BRAD's responses. §.§ Planner Module Unlike other modules of BRAD that respond to individual user queries, the planner module can handle complex queries that require multiple tools. Given a user input, the scheduler determines a series of smaller tasks, each executed by one module at a time, whose collective output will address the user's query. These tasks are shown to and edited by a human, after which the set of tasks is automatically executed; see <ref> for a high-level schematic of a workflow involving the planner. §.§ Module Selection and Control Flow A series of mechanisms are utilized by BRAD to determine the appropriate module to execute specific tasks or to orchestrate complex processes involving multiple modules. Module Selection with Semantic Routing. Directing a query to the RAG or any other module is a standard control flow problem. To route a user's query to the appropriate module or function, BRAD uses a series of semantic routers that make fast, real-time decisions about which databases or modules to use in response to a user query. Semantic routers are prediction or classification models that process user queries to determine the appropriate path for the prompt to flow through the chatbot. At a high level, BRAD uses semantic routing to determine if a query requires fetching data online, access to private data or custom software, or a standard RAG response. At a lower level, these routers can be used to identify specific function calls and parameters from queries to BRAD. Adaptive Routing through Reinforcement Learning. Routers are initialized with a series of prompts to guide BRAD's control flow, and these prompts are refined over time via a simple reinforcement learning procedure. When interacting with BRAD, semantic routers direct user prompts to use different modules, perform online retrieval, or query the text database. Users can force a query down a path or indicate if a prior query was misrouted. As users interact with BRAD, the set of prompts directed down different paths is recorded. This feedback is incorporated into the router, enabling similar queries to follow the appropriate route in the future.
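A minimal sketch of such a semantic router is shown below. The embed argument stands in for any sentence-embedding model, and the route names and example prompts are hypothetical placeholders rather than BRAD's actual configuration; the record_feedback function mirrors the adaptive-routing update described above.

```python
import numpy as np

# Illustrative semantic router: send a query down whichever path has the most
# similar example prompt. Route names and examples are hypothetical.
ROUTES = {
    "web_search": ["find recent papers on this topic", "search pubmed for new studies"],
    "software":   ["run my clustering script", "execute the RNA-seq pipeline"],
    "rag":        ["what does the literature say about this gene", "summarize these papers"],
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def route(query: str, embed) -> str:
    """`embed` is any callable mapping a string to a fixed-length numpy vector."""
    q = embed(query)
    scores = {name: max(cosine(q, embed(p)) for p in prompts)
              for name, prompts in ROUTES.items()}
    return max(scores, key=scores.get)

def record_feedback(query: str, correct_route: str) -> None:
    # Misrouted or user-forced queries are appended to the example set so that
    # similar queries follow the appropriate route in the future.
    ROUTES[correct_route].append(query)
```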
Control Flow. The router can manage more intricate task schedules beyond simple linear execution. For example, the scheduler might first execute RAG to gather initial information and then, based on the quality of the answer, proceed to search the web for additional literature. The scheduler's combination of an instruction pointer and an instruction list allows for sophisticated control flow patterns, including loops, if/else statements, and other logic. For instance, consider the scenario where BRAD receives a query addressed to the RAG but has no text database for retrieval. In this case, the scheduler might construct a three-stage loop consisting of: (1) download literature relevant to the user's query and include it in the database; (2) propose an answer to the user's query based on RAG; and (3) determine if the answer in stage (2) is satisfactory, in which case the chain ends, or, if the answer is unsatisfactory, the instruction pointer returns to stage (1). In stage (3), BRAD uses the LLM at its core to evaluate complex control flow decisions and determine if previous outputs in the chain satisfy particular conditions set by the scheduler. §.§ Prompt Templates A core component of BRAD is a prompt template library designed to interface with various modules and generate responses for the user. This library of templates contains two distinct classes and is based on recent prompting techniques. Input v. Output Prompts. We developed two classes of prompts corresponding to the primary uses of an LLM within BRAD. The first class of prompts is designed to transform a user's query into actions, leveraging the LLM to generate or execute code. This includes tasks such as data or software selection, generating or editing code to execute, or selecting databases and online search terms, among other tasks. In these prompts, a series of techniques, such as structured output and few-shot prompting, are used to improve the responses. The second class of prompts is used to generate output responses for the user. These include the generation stage of RAG, where pieces of text are formatted into a prompt template to improve user responses, prompts used to summarize the data resources and code utilized, and prompts for generating output PDF documentation. These prompts also convey specific instructions and often are templated to insert information and data retrieved while executing various modules. Few-Shot Prompting. Few-shot prompting is a technique that improves responses by including a set of example outputs as input to the model. For instance, to execute code with BRAD, a set of prompts has been constructed that contains examples of how to construct an appropriate piece of code in either MATLAB or python. These examples are both built into BRAD's prompt library and can additionally be provided in the documentation of the software being run. By providing examples, both the consistency of the response structure and the quality of the LLM's output are improved. Chain of Thought Prompting. Chain of thought prompting is a technique that uses worked-through examples as input to an LLM to demonstrate how to reason through complex or multistep queries. When asking an LLM to solve a problem, the user can specify in the prompt that the problem should be solved using a series of steps. In BRAD's code execution program, a series of such prompts are provided. When formulating a function call to execute a script, the LLM is asked to generate an appropriate piece of code based on the user's input, software documentation, and few-shot examples of running the code.
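For concreteness, a hypothetical prompt template of this kind, combining few-shot examples with a chain-of-thought instruction, might look like the following; the wording and field names are illustrative and are not BRAD's exact prompts.

```python
# Illustrative code-generation prompt template (hypothetical wording).
CODE_PROMPT = """You are helping execute bioinformatics software.

User request: {user_query}

Documentation of the selected script:
{documentation}

Examples of correct calls:
{few_shot_examples}

First explain, step by step, how the script should be called to satisfy the
request. Then output only the final code, writing all files to {output_dir}.
"""

prompt = CODE_PROMPT.format(
    user_query="Cluster the single-cell data and plot the result",
    documentation="cluster.py --input <h5ad file> --outdir <directory>",
    few_shot_examples="python cluster.py --input data.h5ad --outdir results/",
    output_dir="output-directory/",
)
```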
The output of the prompt is structured to require the LLM to explain its chain of thought and the reasoning behind the constructed code. § RESULTS AND EXAMPLES This section presents extended results and implementation details related to the use cases of BRAD discussed in the paper. §.§ Lab Notebook Evaluation §.§.§ Qualitative Example Since all of the BRAD responses to Q1 were too long to include in the main text, the complete output is provided here. Q1: How can we view Pore-C as a hypergraph? [Enhanced RAG] Pore-C is a technique used to study the three-dimensional organization of the genome, particularly in the context of chromatin interactions. To view Pore-C as a hypergraph, we can consider the following aspects: * Vertices and Edges: In a hypergraph, vertices represent entities, and hyperedges connect multiple vertices. In the context of Pore-C, the vertices can represent genomic loci (e.g., genes, regulatory elements, or specific regions of the genome). The hyperedges can represent the interactions between these loci, where a single hyperedge can connect multiple loci that interact with each other within a specific spatial context. * Higher-Order Interactions: Unlike traditional graphs where edges connect pairs of vertices, hypergraphs allow for the representation of higher-order interactions. In Pore-C, certain genomic regions may interact with several other regions simultaneously, which can be effectively captured by hyperedges. This is particularly relevant in understanding complex regulatory networks and chromatin architecture. * Data Representation: When analyzing Pore-C data, one can construct a hypergraph where each hyperedge corresponds to a set of loci that are in close spatial proximity or show significant interaction frequencies. This representation can help in identifying clusters of interacting regions and understanding the functional implications of these interactions. * Analysis and Algorithms: Viewing Pore-C data as a hypergraph opens up various analytical approaches, such as community detection, clustering, and network analysis, which can provide insights into the organization and function of the genome. Algorithms designed for hypergraphs can be applied to identify key regulatory regions, interaction hubs, and other biologically relevant structures. * Visualization: Hypergraphs can be visualized using specialized software that can handle higher-order relationships, allowing researchers to explore the complex interactions revealed by Pore-C data in a more intuitive manner. In summary, conceptualizing Pore-C data as a hypergraph allows for a more nuanced understanding of genomic interactions, accommodating the complexity of higher-order relationships that are inherent in chromatin organization. Q1: How can we view Pore-C as a hypergraph? [Standard RAG] Pore-C is a technique used to study the three-dimensional organization of the genome, particularly in the context of chromatin interactions. Viewing Pore-C data as a hypergraph can provide a more nuanced understanding of the complex relationships between genomic elements.
In a hypergraph, nodes represent individual genomic features (such as genes, regulatory elements, or chromatin domains), and hyperedges represent interactions or relationships that can connect multiple nodes simultaneously. This is particularly relevant for Pore-C data, where interactions are not limited to pairs of genomic elements but can involve multiple elements at once. Here’s how you can conceptualize Pore-C as a hypergraph: * Nodes: Each node in the hypergraph can represent a specific genomic feature, such as a gene, enhancer, or other regulatory elements. * Hyperedges: A hyperedge connects multiple nodes that interact with each other. In the context of Pore-C, a hyperedge could represent a chromatin interaction that involves several genomic regions simultaneously. For example, if a gene interacts with multiple enhancers, this relationship can be captured as a single hyperedge connecting the gene and all relevant enhancers. * Multiplicity of Interactions: Pore-C data often reveals that genomic regions can interact with many others, which is a key characteristic of hypergraphs. This allows for a more comprehensive representation of the complex interactions that occur within the nucleus. * Analysis: By treating Pore-C data as a hypergraph, researchers can apply various graph-theoretical approaches to analyze the structure and dynamics of chromatin interactions. This can include identifying clusters of interacting regions, understanding the role of specific genomic features in the overall architecture, and exploring how these interactions change under different conditions. * Visualization: Hypergraphs can be visualized using specialized software that allows for the representation of multi-way interactions, making it easier to interpret the complex relationships inherent in Pore-C data. In summary, viewing Pore-C as a hypergraph allows for a richer representation of genomic interactions, capturing the complexity of chromatin organization and facilitating advanced analyses of genomic architecture. Q1: How can we view Pore-C as a hypergraph? [GPT-4o] Pore-C is a technique used to study the three-dimensional organization of the genome, particularly focusing on chromatin interactions. To view Pore-C as a hypergraph, we can consider the following aspects: * Vertices and Edges: In a hypergraph, vertices represent entities, and hyperedges connect multiple vertices. In the context of Pore-C, the vertices can represent genomic loci (e.g., genes, regulatory elements, or specific regions of the genome). The hyperedges can represent the interactions between these loci, where a single hyperedge can connect multiple loci that interact with each other. * Higher-Order Interactions: Unlike traditional graphs where edges connect pairs of vertices, hypergraphs allow for the representation of higher-order interactions. In Pore-C data, a single interaction event may involve multiple genomic regions simultaneously, which can be effectively captured by hyperedges. * Complexity of Genomic Interactions: The genome is not organized in a linear fashion; rather, it exhibits complex interactions that can involve multiple regions.
By modeling these interactions as a hypergraph, we can better represent the multifaceted nature of chromatin architecture and the relationships between different genomic elements. * Analysis and Visualization: Viewing Pore-C data as a hypergraph can facilitate advanced analytical techniques, such as community detection, clustering, and network analysis, which can help identify functional modules or regulatory networks within the genome. * Applications: This hypergraph representation can be useful for understanding how different genomic regions collaborate in processes such as gene regulation, transcription, and the overall spatial organization of the genome. In summary, conceptualizing Pore-C data as a hypergraph allows for a more nuanced understanding of the complex interactions within the genome, enabling researchers to explore the functional implications of these interactions in a more comprehensive manner. §.§.§ RAG Assessment (RAGAs) evaluation metrics Five metrics were used to evaluate the utility of BRAD's RAG system: * Faithfulness: This quantifies the consistency between BRAD's answer and the literature. * Answer Relevance: This evaluates if the response is applicable to the particular question. * Context Precision: This quantifies the ranking of the chunks BRAD uses to formulate its response relative to the ground truth. * Context Recall: This measures the use of context in BRAD's response. * Answer Correctness: This is the F1 score of claims that can be verified from the ground truth. All metrics range from 0 to 1, with higher scores indicating better performance. In order to determine these five metrics, a combination of the following four components was used: question, response, context, and ground truth. Since the questions are based on cutting-edge research topics, their ground truth is based on research papers in our aforementioned database. The context is the chunks of text RAG uses to generate a response. To develop quantifiable metrics from responses, the authors of <cit.> proposed the following scheme. For an individual question, the context used to construct the question and the answer produced by BRAD's RAG are split into small, verifiable claims. Then, statements from BRAD's answer can be compared in a one-for-one fashion to determine which of BRAD's claims are or are not substantiated by the literature. This approach allows us to construct a standard confusion matrix where: * TP: True Positives are statements that are present in both the ground truth and the generated response. * FP: False Positives are statements that are present in the generated response but not the ground truth. * TN: True Negatives are statements that are present in neither the ground truth nor the generated response. * FN: False Negatives are statements that are present in the ground truth but not in the generated response. Faithfulness. Faithfulness refers to the factual consistency of the generated response to a question against the given context. Since this metric is entirely based on the given context, if the context is faulty, then a factually incorrect answer can still be considered faithful. Each of the generated claims is checked against the given context to determine if the claim can be inferred from the context. Faithfulness = Number of Claims from Response inferred from Context / Number of Claims from Response. Answer Relevance. Answer Relevance is a measure of how closely the generated response and given prompt are related.
Answers that are incomplete or contain redundant information are typically assigned lower scores. We use the following formula to calculate answer relevance: Answer Relevance = 1/N ∑_k=1^N (e_g_k · e_o)/(‖e_g_k‖ ‖e_o‖), where N is the number of generated questions, e_g_k is the vectorized embedding of the kth generated question from a generated response, and e_o is the vectorized embedding of the original question. Context Precision. Context Precision measures how highly ranked ground-truth-relevant chunks are compared to other chunks in the context of the response. We use the following formula to calculate context precision: Context Precision = 1/(Number of Relevant Chunks) ∑_k=1^N (Number of Relevant Chunks up to position k · δ_k)/k, where δ_k is 1 if the chunk at position k is relevant and 0 otherwise. Although this does not directly prove anything inherently about the generated response, it does serve as a metric to demonstrate the effectiveness of the PageRank reranking. Context Recall. Context Recall measures how well the retrieved context covers the ground truth. It is calculated similarly to faithfulness. Context Recall = Number of Ground Truth claims that can be linked to the context / Number of Ground Truth claims. Answer Correctness. Answer Correctness measures how the generated response compares to the ground truth. It is calculated by averaging the semantic similarity between the generated response and the ground truth with their factual similarity. The semantic similarity is calculated by embedding both the ground truth and the generated response into the same embedding space and calculating the cosine similarity between the two vectors (similar to the answer relevance calculation). The factual similarity is calculated through the F1 score F1 = TP/(TP + 1/2(FP + FN)). §.§ Questions for Quantitative Experiment Below is the list of questions used for the quantitative experiment. * How do B cells undergo affinity maturation during the germinal center reaction? * How is the timely clearance of neutrophils critical for the resolution of inflammation? * How did Levinson and Smith use the method of proving the divergence integrated along any limit cycle to establish uniqueness in their study of limit cycles? * What steps were involved in the PacBio library preparation for the American Gut Project samples? * How does Morse theory relate to the number of critical points in a Morse function on a manifold? * How do ectopically expressed transcription factors interact with endogenous components of recipient cells' transcriptional network in induced lineage reprogramming? * How are the kinetic parameters for each gene estimated using nonlinear least-squares methods in single-cell kinetics data? * How has in vivo reprogramming with viral vectors been used as a strategy for regeneration of the central nervous system in mice? * How does Batch Normalization apply a linear transformation to the output mean in the context of OrthDNNs? * How is protein identification confirmed in the antibody characterization process? * How does the concept of the unit interval relate to the Kraft inequality in coding theory? * How is the mean radius of gyration (Rg2) of chromosome 12 TADs calculated and presented in the data analysis? * How many critical points does the height function on the n-sphere have? * How do B cells gain access to the GC reaction and what role does inter-clonal competition play in this process?
* What are some challenges in studying cell-cycle dynamics, particularly in terms of protein levels and post-translational modifications? * How do the highly non-linear dynamics of cooperative and competitive regulation of transcription factors impact the differentiation efficiency in cell reprogramming experiments? * What is the significance of periodic network connectivity in the analysis of the discrete-time consensus algorithm? * How did the recognition of the power of computational techniques contribute to the growth of network analysis and network science? * How can physicians claim CME credits for reading content from the Journal of Investigative Dermatology? * How does the inclusion of temporal information in the system impact the complexity and effectiveness of the detector-generation process in immune-inspired network IDS? * How is β-catenin involved in the promitogenic effect of TGF-β? * How is IRF2 editing associated with keratinocyte differentiation and skin barrier formation? * How did the use of a "physiological" buffer impact the activity of RNA polymerases in cells during the experiment? * How does the spatial organization of human chromosomes change during the cell cycle? * How does the substitution of serines with glutamate in cyclin B1 affect its rate of nuclear import in Xenopus oocytes? * How is nuclei isolation performed for skeletal muscle tissue samples? * How can Hodge Decomposition be used to identify tie strength with high accuracy? * How does CTCF binding factor play a role in creating metastable states related to DNA methylation? * How did Lou Pecora's research interests shift from solid-state physics to chaos theory while working at the U.S. Naval Research Laboratory in Washington? * How are nucleoids released from Escherichia coli and what is their structure? * How does Cyclin E control S phase progression during Drosophila embryogenesis? * How does the experimental efficiency impact the quality of in silico contact maps in Hi-C, SPRITE, and GAM experiments? * What are some of the mathematical problems discussed by David Hilbert in his lecture at the Second International Congress of Mathematics in 1900? * How can sequence count data be used in differential expression analysis? * How were peaks called in the ChIP-seq data analysis, and what role did input controls play in this process? * How is the level of nucleosome occupancy at the TSS related to accessibility during mitosis? * How can the parameters of an affine function be obtained by evaluating the function at specific vectors? * How does over-fitting affect the RMS prediction error in data fitting models? * How does DEC205 targeting affect the interactions between GC BCs and TFH cells in the context of antigen presentation? * How do nearby trajectories behave close to the chaotic attractor in the Lorenz system? * How do deep neural networks contribute to the mapping of gene expression levels from WSIs in the study of tumor heterogeneity and survival outcomes in breast and lung cancer? * How does the spectral dimension of a simplicial complex affect the possibility of frustrated synchronization in a network? * What is the significance of transforming a matrix A into real canonical form B in the context of solving differential equations? * What is the significance of matrix inverses in solving linear equations and least squares problems? * How does supervised learning typically approach binary classification tasks? 
* How can data-driven algorithmic approaches help in predicting transcription factors for directed cell conversion? * Why were researchers shocked to find that the longest REM episodes occurred near the beginning of sleep, rather than near the end? * What was explored in the study regarding the temporal program of gene expression in human fibroblasts in response to serum? * How are homomorphism numbers calculated in the context of randomly weighted graphs? * How does the CalCB method correct for known Hi-C biases and CNV in the estimation of Hi-C contact matrices? * How can weights be optimized to minimize downside risk for a given target annualized return? * How can a higher-order tensor be regarded as a formal representation of a linear mapping between different spaces, such as matrix and vector spaces, based on the concept of an isomorphic link? * How is haplotype imputation utilized in single-cell chromatin conformation capture? * How are vector fields defined and approximated in the context of integrability? * How are local connectivity patterns represented in hypergraphs? * How does Edge PageRank differentiate itself from traditional centrality measures in networks? * How does T2T-CHM13 improve variant calls compared to GRCh38 in the context of the 1KGP datasets? * How has the direct conversion of various types of cells been successfully achieved and what are the potential applications of this method in medical treatment and research? * How does the local track-correction module improve tracking performance in EllipTrack? * How do binary relationships affect complex topologies in network science? * What happens to most cells in Drosophila melanogaster embryos after entering G1 phase? * What causes two Hopf bifurcation points to merge in degenerate cases? * How does a monotone subsequence in (xn) in R imply a convergent subsequence for every bounded sequence in R? * How does a spiral scaffold affect chromosome loop formation during cell division, especially in relation to inner and outer loop sizes in various models? * How are eigenvalues of a square matrix affected by adding a matrix with all entries as 1? * How do dominant-negative mutations contribute to genetic dominance in cancer development despite other initiation mechanisms? * How do germinal centers aid immune response coordination and high-affinity antibody generation, considering TFH cell movement, GC reuse, and GC B cell clonal restriction? * How do Gata4, Mef2c, and Tbx5 affect induced cardiac transcriptional reprogramming? * What concept explains cascades of period doubling bifurcations in mathematical ecology models like the logistic map and Ricker model, discovered by Feigenbaum in discrete-time dynamical systems? * How does a monotone subsequence in a bounded sequence in R relate to the Bolzano-Weierstrass Theorem? * How do manifolds contribute to proving saddle equilibrium states in dynamical systems? * What controls cell cycles in Drosophila embryogenesis, coordinating phases with growth and G1 arrest until larva hatches? * What role does Myc activity play in apoptosis and oncogenic risk in Burkitt’s lymphoma? * How does a magnetic field affect the supercurrent behavior observed by experimentalists studying tunneling supercurrents? * How has math influenced immunology, especially with help from physicists? * How does the requirement for adjacent edges to share two nodes in the random walk process affect its representation and Markovian nature, changing the use of traditional graph and matrix techniques for analysis? 
* How do plasma cells differ from effector cells in humoral immunity roles, given the complex regulation mechanisms at cellular and molecular levels? * How can feature mappings enhance regression models with dimension reduction and standardized features? * How can protein expression in cells be controlled through growth factors availability or production in other cell types? * How do post-translational modifications affect E2F1 stability and function in response to DNA damage like cisplatin? * How do positive eigenvalues in a matrix connect to Sylvester's Law of Inertia and the span of eigenvectors for positive eigenvalues in the transformed matrix? * How did Gemma Frisius propose longitude could be determined, as pursued by Huygens and others? * How to calculate minimal state variables for a given transfer function in linear dynamical systems? * How can campaign strategies be influenced by clustered voter representatives, and what is the role of k? * How does finding a compatible similarity measure benefit workload and efficiency in stochastic consensus clustering? * How does the spindle assembly checkpoint help ensure correct sister-chromatid alignment during mitosis? * How does V(D)J recombination impact allelic exclusion in B-cell development? * How do transcription factors affect cardiac reprogramming through co-occupied binding sites and synergistic effects of Tbx5 and Gata4? * What is the initial mechanism that holds sister chromatids together by intertwining duplicated DNA molecules at adjacent replication forks? * What technological innovation enabled sound and image transmission in the 20th century, leading to advancements in electronics? * How does Ime2 stimulate Ndt80 in yeast for meiosis I and G1/S progression? * Which locus in the (Re λ, Imλ)-plane indicates the Neimark-Sacker bifurcation with a stable closed invariant curve outside the circle? * How does Mengerian property connect to hypergraphs in graph theory? * How does AEP relate to entropy in information theory when dividing sequences into typical and nontypical sets based on sample entropy? * What kind of trajectories in the restricted three-body problem return close to their initial state? * How is Collagenase IV used in iPS cell dissociation for culture before transferring to a new dish on SNL feeder cells? * How was the "global" category of smooth manifolds and maps created from the "local" category of open sets in Euclidean space and smooth maps? * What makes t-SNE unique in visualizing high-dimensional data effectively? * How does Ig gene rearrangement in B-cell development relate to Ig gene allelic exclusion regulation? * How is Tucker decomposition connected to higher-order PCA using factor matrices and a core tensor? * How does RIG-I activation by RN7SL1 impact cGAS-STING signaling in metastatic cells? * How is the preselection productive HCDR3 length distribution approximated to calculate the percentage of initial productive IgH rearrangements removed during B cell development? * What stress-response system in mammalian cells prevents excessive cell proliferation by activating p53 in response to overproduction of key mitogenic signaling proteins? * How does RIG-I activation by RN7SL1 impact cGAS-STING signaling in cancer cells? * How does serum stimulation affect DNA replication timing and gene expression in fibroblasts? * How many common neighbors do (1,1) and (2,2) have in a SRG? * What was discussed by Laemmli about metaphase chromosome bands? 
* What spatial changes occur in Hi-C maps as cells progress from prometaphase to metaphase in mitosis? * What's the condition for a map's derivative to be an immersion between smooth manifolds? * What concept represents non-stationary, periodic solutions in dynamical systems and why are they significant in nonlinear dynamics? * How does community modularity change with τ values in time-evolving analysis, shown by a decreasing trend from 0.70 to 0.36? * How do high-order interactions affect diversity in ecological communities in simulations with different interaction strengths and species numbers? * What was the AS change in FOXP1 gene at day 10 post neural induction and its impact on exon 18 inclusion in H9 hESCs? * How does aligning subjective and objective probability distributions address model misspecification concerns in rational expectations models? * How do miRNAs affect E2F1 protein levels in cells with 3'UTR reporter due to c-myc and E2F1, considering transcript expression in senescent populations? * How do pigmented cells and chromatin affect skin patterns in zebrafish? * How can vectors model cash flow movement in finance? * How do p53 activators affect CDK2 activity before mitosis in relation to p21 expression? * How do Fos and Jun proteins interact in wound healing, and what role do HIF factors play in the hypoxic response to tissue injury? * How can genetic analysis help prioritize therapy for cancer patients without actionable oncogene mutations, considering gene signature overlap with specific cell types in different body regions? * What roles do cdcgenes play in cell cycle control, especially in mitosis entry and S phase replication origin activation? * What role do homoclinic orbits play in bifurcation diagrams in curve dynamics with limit cycles and saddle points? * What components form the inner kinetochore in higher eukaryotic cells and how do they aid in chromosome attachment and tension during cell division? * What are the implications of recent research on converting glial cells, like astrocytes, into neurons for brain repair therapies? * What's the BSC capacity with crossover probability for a coding scheme with two 3-length codewords and an average error probability of 0.285? * How does Morse theorem affect density of Morse functions in space of smooth functions on a manifold? * What defines a true somatic stem cell in terms of self-renewal and progeny generation, considering differentiation and lineage commitment in Waddington's model? * How does the cell cycle affect epigenetic marks for memory, considering mitosis and S-phase impact on gene expression and chromatin? * How do Wnts affect somite patterning in myogenesis? * How do Wnt proteins affect somite cell commitment? * How do cells in an epithelium maintain adhesive contacts with neighbors during division, and how does this relate to junction formation and cell cleavage in cytokinesis? * How is transversality defined in dynamical systems with periodic orbits and hypersurfaces? * How do basic fibroblast growth factor-impregnated gelatin microspheres affect fibroblast migration in 3D wound healing? * How is the adjacency matrix used in link prediction algorithms in different fields? * How do Gata4, Mef2c, and Tbx5 affect induced cardiac myocyte reprogramming and heart repair? * Which company provided the artificial dura for the primate BMI task? * How to simplify integro-differential equations using real canonical form for an n x n matrix? 
* Can you provide examples of research studies on dynamical processes in complex networks beyond pairwise interactions? * What is Herman's theorem's role in analyzing Morse-Smale vector fields with the Poincare-Bendixson theorem? * How do algorithms for integer multiplication use polynomial rings and tensors? * How does LZ algorithm efficiency impact file compression in terms of bits for pointers and match lengths in LZ77? * How do tangent vectors and constants characterize an expanding map on a compact Riemannian manifold, and how does this relate to stability in topological spaces? * How are external circuit parameters chosen to place complex-conjugate poles slightly right of jω-axis in a multiterminal device circuit? * How do V, D, and J segment rearrangements impact antibody and T cell receptor encoding in the immune system? * How can a cell create lasting memory of a response using transcription and DNA replication? * How does TF over-expression affect network dominance in the omnigenic model, considering simplified regulatory networks and TFs' role in cellular reprogramming? * How does a graphon relate to the node set [0,1] in terms of structure and connectivity? * What is the role of the Nbs1 subunit in activating ATM through the MRN complex in response to double-strand breaks? * What are the key features and practical uses of the Watts model in social contagions on networks, considering its relationship with Granovetter's threshold models and the incorporation of simplicial complexes in modeling transmission dynamics? * What happens to the fixed points of period-seven cycles as r changes in one or two-parameter bifurcations? * How does vertex-transitivity demonstrate the relationship between a graph's structure and its automorphisms, especially in Cayley graphs and the lexicographic product? * How does Cyclin B1-Cdk1 complexes affect CDK2 activity post-mitosis in proliferating cells? * How is the two-parameter bifurcation diagram constructed for a system with codim 1 and codim 2 bifurcations of equilibria, limit cycles, and homoclinic orbits? * How to reconstitute functional modules in vitro using gene expression and morphology from various cell types? * How do centrosomes help with sister chromatid bi-orientation during spindle assembly? * How does scRNA-seq compare to snRNA-seq in studying transcriptional reprogramming and chromatin interactions? * How does removing ECB sequences affect Start synchronization in yeast cells, considering growth rate differences? * Which fellowship organization supported the research at the Yale Stem Cell Center Genomics Core Facility? * How do factors contribute to hiHeps induction for drug metabolism? * How do aged HSCs compare to younger HSCs in terms of regenerative potential, proliferation, and lineage tracing data? * How is the Edge PageRank equilibrium point represented in a dynamical system, considering node centrality and random walks? * How can Sis interference in Vj-Jj interval prevent secondary recombination events on the same Ig allele? * How can periodic solutions be shown under Hopf conditions with Lyapunov-Schmidt reduction and Poincaré normalization? * How does lateral inhibition affect feather primordia spacing in chicks, considering growth and regeneration in pattern formation? * How does the lasso penalty promote sparsity in regression for neuroimaging data? * How is the Laplace operator shown in TT-format for tensors in linear operators and elliptic PDE solutions? 
* What advantage does CD9 offer in isolating murine hematopoietic stem cells? * How is a rewriting event in GNA models defined in relation to node mapping and Kronecker matrix sampling? * How is fresh frozen tissue fixed with Formaldehyde for immunofluorescence staining in mouse embryo tissue? * What causes unexplained stock market fluctuations attributed to decentralized networks and herd behavior? * What causes motor neuron degeneration in ALS and how do interactions with glia contribute? §.§ Building a Database Typically, we use our prior knowledge and publications of interest to construct a literature database for BRAD. However, the web scraping module makes it easy for BRAD to construct a new database aimed at a particular topic area or question. To demonstrate the utility of the RAG and Web Scraping features integrated into BRAD, we tested BRAD's ability to respond to questions without a pre-existing database. BRAD was initially provided an empty database. By alternating between RAG and web scraping, following <ref>, BRAD was able to adequately answer complex research questions by retrieving relevant information online. For instance, consider the question: “What are single cell foundation models?" Initially, BRAD has no information on this topic. Responding with only the RAG module, BRAD correctly identifies this lack of information and can trigger a web search of the topic. Realizing the insufficiency of information, BRAD then proceeded to perform a web search to gather recent research papers and articles relevant to single cell analysis and foundation models. Equipped with the new information, BRAD's answer to the initial question improves: it is able to discuss this research area, providing the names of several notable single cell foundation models and a discussion of their use cases. This process, outlined in <ref>, is designed by the planner module with clear instructions from the user. The LLM engine of BRAD is used at multiple stages throughout this process. Besides its use in the RAG and Web Search modules, the LLM also acts as an evaluator to determine the quality of the RAG's responses. Once the response is of sufficient quality, the LLM signals to exit the loop. Additionally, we have implemented safeguards to prevent the system from executing an infinite loop. Question: What is a single cell foundation model? [RAG: Iteration 1 (empty database)] Based on the provided source and content, the initial database is empty, and I couldn't find any information regarding “single cell foundation models." I am confident that I cannot provide an accurate answer to this question with the current information available. [Web Search] arXiv search terms: “single cell foundation models", “machine learning for single cell data analysis" [RAG: Iteration 1 (populated database)] Single cell foundation models are large-scale pretrained deep language models used for analyzing and annotating single-cell RNA sequencing (scRNA-seq) data. They can help improve the accuracy and speed of cell type annotation and identification in various research areas.
Examples of these models include scBERT and a method for ultra-fast cell-type identification using specific marker combinations from single-cell transcriptomic data. These models are used to facilitate queries and analysis of large cell atlas databases, such as the Tabular Muris and Tabular Sapiens, which contain single cell transcriptomic data for mice and humans, respectively. Additionally, single cell foundation models can utilize transfer learning to enable predictions in network biology and gene regulation, making them valuable tools in understanding cell behavior and interactions in various biological processes. However, these models should be used to aid in basic scientific research and generating or validating mechanistic hypotheses, rather than providing definitive answers. [Web Search] arXiv search terms: “single cell machine learning", “deep learning for single cell analysis" [RAG: Iteration 2 (populated database)] Single cell foundation models are artificial intelligence models designed to analyze and understand data from individual cells. They are constructed using single-cell transcriptomic data, which measures the expression levels of genes in individual cells. These models utilize machine learning algorithms, such as transformers, to learn gene-level and cell-level representation. They can accurately categorize known cell types and have the potential to uncover new biological insights in single-cell RNA sequencing data. Single cell foundation models are used in genomics and multi-omics research, showing promise in various downstream tasks, such as identifying cell types and states, discovering novel cell populations, inferring gene regulation networks, and more. Examples of single cell foundation models include scGPT, Genefomer, and scFoundation. They are a growing field of study, with ongoing advancements expected as single-cell technologies and omics sciences continue to develop. Question: What is the role of MYOD in Weintraub's cellular reprogramming experiment? [RAG: Iteration 1 (empty database)] Based on the information provided in the initial database, I don't have any specific data about Weintraub's cellular reprogramming experiment or the role of MYOD in that experiment. After searching the literature and documents from a text database, I found that Weintraub's experiment aimed to reprogram fibroblasts into myoblasts using the MyoD gene. MyoD is a myogenic regulatory factor that plays a critical role in muscle cell differentiation and development. It reprograms fibroblasts to a muscle cell fate by binding to specific muscle-specific DNA elements and regulating the expression of muscle-specific genes. However, I couldn't find detailed information on the exact role of MYOD in this experiment. It's recommended to consult the original research article for more comprehensive insights. [Web Search] PubMed search terms: “MYOD", “Weintraub", “cellular reprogramming", “transcription factor", “myogenic differentiation", “muscle cell development", ...
[RAG: Iteration 2 (populated database)] MYOD is a transcription factor that plays a crucial role in Weintraub's cellular reprogramming experiment, where it successfully reprogrammed fibroblasts into skeletal muscle cells when expressed alone. MYOD initiates cell reprogramming by contacting E-boxes within closed chromatin and interacting with various regulatory layers such as nucleosome density, histone post-translational modifications, and chromatin folding. Additionally, MYOD can interact with other factors, such as the basal transcription machinery and the transcription elongation factor b, to co-regulate gene expression. In the context of Weintraub's experiment, MYOD's co-activation involves interactions with the epigenetic machinery. MYOD's binding and activity can be modulated by other factors, including those indirectly recruited by MYOD to its target sites. Furthermore, MYoD makes use of long noncoding RNAs (lncRNAs) in cellular reprogramming. For instance, LncMyoD plays a role in the first step of MYOD-mediated trans-differentiation by interacting with MYOD across the genome and increasing chromatin accessibility at the co-occupied sites. LncMyoD is essential for the differentiation of muscle precursors and the myogenic conversion of fibroblast cells, but it is not needed for the differentiation of already committed myoblasts. These interactions and regulatory roles of MYOD highlight the complexity of the mechanisms underlying cellular reprogramming and the importance of understanding the molecular dynamics of transcription factors and their interacting partners in this process. §.§ Advanced Search Below, we show the full output produced by BRAD in the process of answering Q2: What role does PCNA play in the cell cycle and imaging? To answer this, BRAD executed a pipeline consisting of (1) searching the STRING and HuRI databases <cit.> for first-order interactions of PCNA, (2) performing gene enrichment, and (3) using the RAG system to find other relevant information. After completing these tasks, a report-generation module was used to produce a report summarizing the entire process. This module, although not a focal point of the paper, is designed to process the chat log and produce a single document that describes BRAD's analysis. The goal is to ensure BRAD's work is transparent, well-documented, and clear, even for users unfamiliar with AI, so they can easily understand what occurred. It is implemented by using LLMs to generate LaTeX code, according to a user-specified template configured similarly to whitelisted software, to explain the results of each module. In the example below, which begins on the following page, BRAD specifies an extended user prompt, characteristic of inputs that could initiate this pipeline. Then, a summary of the full pipeline is provided. The final main section is a summary of the outputs, in which BRAD lists the main information retrieved at each step of the pipeline, a table and chart generated along the way, and references to the literature used. The report is generated by BRAD entirely in LaTeX and then automatically compiled. Although the formatting, writing, and style of the report are not always consistent or ideal, we find that the reports often reflect and summarize the pipelines well. That said, BRAD can also provide the LaTeX used to generate the report, from which a user could make edits.
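As a rough sketch of how such a report can be assembled and compiled, the snippet below fills a user-supplied LaTeX template from the chat log and runs pdflatex; the llm callable, the %%BODY%% placeholder, and the structure of chat_log are assumptions for illustration rather than BRAD's actual implementation.

```python
import subprocess
from pathlib import Path

# Illustrative report builder: summarize each logged step with an LLM, fill a
# LaTeX template, and compile it. Not BRAD's actual code.
def build_report(llm, chat_log, template: str, outdir: str = "report") -> Path:
    sections = []
    for step_name, output in chat_log:            # chat_log: list of (step, output) pairs
        prompt = (f"Summarize the following {step_name} output as a LaTeX section, "
                  f"using only standard packages:\n{output}")
        sections.append(llm(prompt))
    tex = template.replace("%%BODY%%", "\n\n".join(sections))
    out = Path(outdir)
    out.mkdir(exist_ok=True)
    (out / "report.tex").write_text(tex)
    # Compile the generated LaTeX; any errors are surfaced for manual editing.
    subprocess.run(["pdflatex", "-interaction=nonstopmode", "report.tex"],
                   cwd=out, check=False)
    return out / "report.pdf"
```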
The remaining pages are a report automatically generated by BRAD in response to Q2 (attached document: BRAD-output-2.pdf).
http://arxiv.org/abs/2409.02325v1
Defect Landscape Engineering to Tune Skyrmion-Antiskyrmion Systems in FeGe
[ "Jiangteng Liu", "Ryan Schoell", "Xiyue S. Zhang", "Hongbin Yang", "M. B. Venuti", "Hanjong Paik", "David A. Muller", "Tzu-Ming Lu", "Khalid Hattar", "Serena Eley" ]
Affiliations: Department of Electrical & Computer Engineering, University of Washington, Seattle, WA 98195; Sandia National Laboratories, Albuquerque, NM 87123; Department of Applied Physics, Cornell University, Ithaca, NY 14853; Department of Physics, Colorado School of Mines, Golden, CO 80401; Platform for the Accelerated Realization, Analysis, and Discovery of Interface Materials, Cornell University, Ithaca, NY 14853; School of Electrical & Computer Engineering, University of Oklahoma, Norman, OK 73019; Center for Quantum Research and Technology, University of Oklahoma, Norman, OK 73019; Center for Integrated Nanotechnologies, Sandia National Laboratories, Albuquerque, NM 87123; Department of Nuclear Engineering, University of Tennessee, Knoxville, TN 37996. Corresponding author: serename@uw.edu § ABSTRACT A promising architecture for next-generation, low-energy spintronic devices uses skyrmions—nanoscale whirlpools of magnetic moments—as information carriers. Notably, schemes for racetrack memory have been proposed in which skyrmions and antiskyrmions, their antiparticles, serve as the logical bits 1 and 0. However, major challenges exist in designing skyrmion-antiskyrmion-based computing. The presence of both particles in one material is often mutually exclusive, such that few systems have been identified in which they coexist, and in these systems their appearance is stochastic rather than deterministic. Here, we create a tunable skyrmion-antiskyrmion system in FeGe films through ion irradiation and annealing, and detail the structural properties of the films under these various conditions. Specifically, we irradiate epitaxial B20-phase FeGe films with 2.8 MeV Au^4+ ions, showing evidence that the amorphized regions preferentially host antiskyrmions at densities controlled by the irradiation fluence. In this work, we focus on a subsequent, systematic electron diffraction study with in-situ annealing, demonstrating the ability to recrystallize controllable fractions of the material at temperatures ranging from approximately 150^∘C to 250^∘C, enabling further tunability of skyrmion/antiskyrmion populations. We describe the crystallization kinetics using the Johnson-Mehl-Avrami-Kolmogorov model, finding that growth of crystalline grains is consistent with diffusion-controlled one-to-two dimensional growth with a decreasing nucleation rate. The procedures developed here can be applied towards the creation of skyrmion-antiskyrmion systems for energy-efficient, high-density data storage, spin wave emission produced by skyrmion-antiskyrmion pair annihilation, and, more generally, testbeds for research on skyrmion-antiskyrmion liquids and crystals. § INTRODUCTION Skyrmions are topologically protected chiral spin textures that have garnered interest as potential information carriers in low-energy spintronic devices. These textures originate from an antisymmetric exchange interaction called the Dzyaloshinskii-Moriya interaction (DMI) that can arise in magnetic materials with strong spin-orbit coupling and broken inversion symmetry.
Unlike the exchange interaction that leads to ferromagnetic order (a collinear spin arrangement), the DMI can tilt magnetic moments such that the combined interactions engender whirlpool-like spin structures that behave as particles that can be controlled by applied currents.<cit.> Skyrmions were first observed in B20-phase cubic materials, such as MnSi,<cit.> Fe_1-xCo_xSi,<cit.> and FeGe,<cit.> due to isotropic DMI originating from the non-centrosymmetric crystal structure. On the other hand, if the DMI is anisotropic, non-symmetric skyrmions or antiskyrmions — the antiparticles of skyrmions — can form.<cit.> Degrees of freedom within these localized magnetic textures can be indexed by parameters representing the helicity, vorticity, and topological charge Q = (1/4π)∫m(r)· [∂_xm(r)×∂_ym(r)] dx dy, in which m(r) is the magnetization unit-vector field.<cit.> Consequently, antiskyrmions have opposite topological charge from skyrmions, e.g. -1 for skyrmions and +1 for antiskyrmions.<cit.> This difference in topological charge makes skyrmion-antiskyrmion systems attractive for binary data encoding in racetrack memory and logic applications. Skyrmion-antiskyrmion systems will also serve as fertile playgrounds for studying the dynamics of skyrmion-antiskyrmion crystals and liquids<cit.> as well as skyrmion-antiskyrmion pair annihilation, which is predicted to produce a spin wave. <cit.> Few materials have been identified in which these particles coexist,<cit.> namely in Mn_2RhSn,<cit.> Co/Ni multilayers<cit.> and our work on disordered FeGe.<cit.> In fact, in a recent study,<cit.> we created a skyrmion-antiskyrmion system in epitaxial B20-phase FeGe films by ion beam modification, which induced amorphous regions within the crystalline matrix. Low-temperature electrical transport and magnetization studies revealed a strong topological Hall effect with a double-peak feature that served as a signature of skyrmions and antiskyrmions. The system can be represented by the Hamiltonian<cit.> H_ij = -∑_<i,j>J_ij(S_i·S_j) - ∑_<i,j>D_ij·(S_i×S_j) -K∑_i(S_i^z)^2 -∑_i μ_0 BS_i^z. Here, the first term is the symmetric component in which S_i and S_j represent neighboring spins and J_ij is the symmetric exchange constant. The second term is the antisymmetric exchange interaction in which the DMI vector D_ij depends on local bond geometry.<cit.> The third contribution is the uniaxial anisotropy K that, along with the direction of the DMI vector D_ij, should differ in amorphous versus crystalline regions because both depend on long-range order of the lattice, which is broken during amorphization. Lastly, in the fourth term, B is the z-oriented applied magnetic field, S_i^z is the z component of S_i, and μ_0 is the magnetic moment of the magnetic atom. In this study, we introduce another avenue for tunability of the skyrmion-antiskyrmion system in irradiated FeGe. We first create a bath of coexisting skyrmions and antiskyrmions through ion beam modification at different fluences. Second, through in-situ annealing and selective area electron diffraction, we demonstrate the ability to systematically recrystallize the amorphized regions and study the associated recrystallization kinetics. By analyzing the recrystallization kinetics using the Johnson–Mehl–Avrami–Kolmogorov (JMAK) model, we describe the dynamic processes that govern the formation of crystalline regions from the amorphous matrix. 
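For readers who wish to evaluate the topological charge defined above on simulated or reconstructed magnetization data, a minimal finite-difference sketch is given below; it is purely illustrative (the test profile and grid are hypothetical) and is not part of the analysis in this work.

```python
import numpy as np

# Numerically evaluate Q = (1/4*pi) * integral of m . (dm/dx x dm/dy) dx dy
# for a discretized unit magnetization field m(x, y). Illustrative only.
def topological_charge(m: np.ndarray, dx: float, dy: float) -> float:
    """m has shape (3, Ny, Nx) and unit norm at every grid point."""
    dm_dy, dm_dx = np.gradient(m, dy, dx, axis=(1, 2))
    integrand = np.einsum("iyx,iyx->yx", m, np.cross(dm_dx, dm_dy, axis=0))
    return float(integrand.sum() * dx * dy / (4.0 * np.pi))

# Idealized skyrmion-like test profile: m_z goes from -1 at the core to +1 far away.
L, N = 20.0, 256
x = np.linspace(-L / 2, L / 2, N)
X, Y = np.meshgrid(x, x)
r, phi = np.hypot(X, Y), np.arctan2(Y, X)
theta = np.pi * np.exp(-r / 3.0)                  # polar angle of m
m = np.stack([np.sin(theta) * np.cos(phi),
              np.sin(theta) * np.sin(phi),
              np.cos(theta)])
print(topological_charge(m, x[1] - x[0], x[1] - x[0]))   # approximately -1
```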
This understanding of the recrystallization kinetics not only enhances our ability to fine-tune the defect landscape for skyrmion and antiskyrmion stabilization but also provides insights into the thermal stability and transformation pathways of these topological structures. Ultimately, a deeper understanding of these mechanisms paves the way for more effective engineering of skyrmion-antiskyrmion systems in FeGe, potentially leading to advancements in low-energy, high-density spintronic devices. § RESULTS AND DISCUSSION §.§ Growth and Ion-Beam Modification of FeGe Films Using molecular beam epitaxy, we grew approximately 55-nm thick epitaxial films of FeGe in the B20-phase, which has space group P2_13 with eight atoms per unit cell and cubic structure, as shown in Fig. <ref>(a). To characterize the material structure, we performed x-ray diffraction (XRD), scanning transmission electron microscopy (S/TEM), and selected area electron diffraction (SAED) on the films or lamellae prepared by focused ion beam microscopy. Details regarding the materials growth procedure, STEM characterization, and lamella preparation are included in the Methods section. XRD spectra (shown in Ref. [Eley2024]) reveal a peak at 2θ = 33.1^∘ that is consistent with the FeGe (111) B20-phase. Figure <ref>(b) displays a bright field cross-sectional STEM image of the as-grown FeGe film and the inset shows the corresponding SAED pattern. In general, SAED images show a two-dimensional (2D) slice of the reciprocal lattice, resulting in sharp spots from reflections off lattice planes. Single crystals with little disorder produce only sharp spots, whereas defined and diffuse rings result from nanocrystalline powders and amorphous samples, respectively. Here, we see only bright diffraction spots, revealing a highly ordered crystalline structure, with no observable diffuse ring. We subsequently irradiated cleaved 6 × 6 cuts of the films with 2.8 MeV Au^4+ ions using a 6 MV High Voltage Engineering (HVE) EN Tandem Van de Graaff Accelerator at the Sandia National Laboratories Ion Beam Lab. To test the effects of various defect levels, each film was irradiated with a fluence of 10^13 and 10^14 , inducing 10^-1 and 1 displacement per atom (dpa), respectively, as determined by the Stopping and Range of Ions in Matter (SRIM) simulations, shown in Fig. <ref>(a). The SRIM results predict that the irradiation process should introduce a homogeneous distribution of defects throughout the film depth, without implanting Au-ions into the FeGe. Further details of the ion beam modification process and corresponding SRIM simulations are included in the Methods section. To identify phases as well as determine the effects of irradiation on the crystalline structure, chemical composition throughout the film depth, and crystalline orientation, we performed STEM imaging, electron energy loss spectroscopy (EELS), and SAED. The low magnification cross-sectional STEM image in Fig. <ref>(b) shows a top-layer carbon coating (applied for microscopy), FeGe layer, and Si (111) substrate. In addition to these three layers, the EELS results in Fig. <ref>(c) show a surface oxide, the thin FeSi seed layer that was formed on the substrate to mediate growth of epitaxial B20-phase FeGe, and no detectable Au concentration within the measurement resolution (consistent with simulations indicating that Au was not implanted in the FeGe). In the high-resolution annular bright-field STEM image in Fig.
<ref>(d) of a film irradiated at 10^13 we also see irradiation-induced amorphous regions within the crystalline matrix, highlighted using red and purple false color overlays, respectively. The presence of amorphous regions is also revealed in the SAED pattern, in which a distinct diffuse ring is notable that did not exist in the SAED image of the pristine sample [Fig. <ref>(b) inset]. §.§ In-situ recrystallization of Irradiated FeGe The next goal was to develop an annealing procedure to tune the density and sizes of the amorphized versus crystalline regions in the irradiated FeGe films, which would subsequently enable tunability of the relative skyrmion and antiskyrmion populations. To actively monitor the crystalline orientation, we performed SAED with in-situ annealing on plan view lamellae of films irradiated at 10^13 and 10^14 , as well as a pristine lamella, for reference. The electron beam was incident on the FeGe (111) lattice plane and the annealing process followed the stepwise heating profile depicted in Fig. <ref>. Further details describing the experimental setup are included in the Methods section and SAED data for the pristine film is presented in the Supplementary Materials. Figure <ref>(a-h) presents the raw SAED patterns of the sample irradiated at 10^13 with each panel showing results at a fixed location while heating at the indicated temperature. We see that as the temperature rises, the number of diffraction spots noticeably increases while the intensity of the diffuse ring diminishes, a trend suggestive of heating-induced recrystallization of the amorphous regions. To approximate the relative fraction of crystalline regions and the effect of each heating stage on this fraction, we separate the raw data into amorphous and crystalline components, and then integrate to convert the 2D diffraction data into one-dimensional (1D) profiles of normalized intensity. Finally, the intensity data is plotted against the reciprocal space parameter g = 1/d, where d is the spacing between crystalline reflection planes. Refer to the Supplementary Material for more details on our signal separation procedure. Figures <ref>(i) and <ref>(j) display the resulting separated 1D profiles for the amorphous and crystalline components, respectively. For the amorphous component [Fig. <ref>(i)], the diffuse ring at 1/d = 4.9 nm^-1 matches expected values for amorphous FeGe, as reported in previous studies,<cit.> and the peak intensity decreases with increasing temperature. On the other hand, Fig. <ref>(j) illustrates an increase in the number and intensity of diffraction peaks associated with the crystalline component as the temperature rises. To identify phases associated with these peaks, we used the Materials Project database<cit.> of calculated and experimental diffraction pattern data for the Fe-Ge and Fe-Ge-O systems, and subsequently found that these peaks correspond to the B20-phase of FeGe as well as to the cubic Fe_3Ge-Fm3m phase. More information regarding the phase identification process is discussed in the Supplementary Materials. We repeated the in-situ annealing and SAED process on the sample irradiated at the higher fluence of 10^14 , observing similar recrystallization behavior, as shown in Fig. <ref>. It is important to note that the peak intensities in the SAED patterns are influenced by the image contrast during data acquisition. 
Though this was not adjusted during the annealing process, it did differ between runs for each sample such that peak intensities should not be compared between the different samples. As expected, the width and relative intensity of the diffuse ring (compared to the diffraction spots) is significantly stronger than observed in the sample irradiated at a lower fluence. This can be attributed to the higher irradiation dose inducing larger amorphous regions. Additionally, the greater number of diffraction spots suggests that the recrystallized grains possess more grain boundaries with varying orientations. We again separate the amorphous and crystalline components into 1D profiles of intensity versus g, shown in Figs. <ref>(i) and <ref>(j), respectively. Similar to the other irradiated film, Fig. <ref>(i) reveals that the intensity of the amorphous component decreases with temperature, but we also see that the diffuse ring persists over a wider range of temperatures. This indicates that the initial higher density of amorphous regions in sample irradiated at 10^14 results in increased stability of the amorphous phase during annealing. Regarding the crystalline component, shown in Fig. <ref>(j), we again see an increase in the number and intensity of diffraction peaks with temperature. In this case, the identifiable phase is primarily B20 FeGe. However, additional unidentified phases are observed at temperatures between 100-180^∘C at a 1/d of 4.9 nm^-1 and 5.5 nm^-1, which may originate from lattice strain, oxides, or various other defects such as twin boundaries and stacking faults.<cit.> §.§ Recrystallization Kinetics of Irradiated FeGe To determine the evolution of the effective crystalline volume fraction with temperature and time, we characterize the recrystallization kinetics using the Johnson–Mehl–Avrami–Kolmogorov (JMAK) model.<cit.> This model provides an estimate of the local Avrami exponent n, which can be used to extract information about nucleation and grain growth<cit.> — the nucleation rate at different stages of crystallization (increase or decrease), the growth mechanism (interface or diffusion controlled), and the dimensionality of the grain growth in the system (3D-bulk materials, 2D-thin film, 1D-nanotube). The classic JMAK model assumes that the phase transformation occurs under isothermal conditions and that the time t evolution of the effective crystalline volume fraction X can be expressed as<cit.> X(t) = 1 - exp[-k(t-t_0)^n], where k is the kinetic coefficient, which is constant under a fixed temperature, and t_0 is induction time (the duration between when sample heating initiates and the onset of crystallization).<cit.> Consequently, the local Avrami exponent depends on the crystalline volume fraction as follows: <cit.> n(X) = dln[-ln(1-X)]/dln(t-t_0) . Because our stepwise heating profile alternates between isothermal steps and temperature ramps, we also consider the extended JMAK model applicable for non-isothermal conditions. In this model, the crystalline volume fraction depends on the Avrami exponent, time, and temperatures as<cit.> X_extended(t) = 1 - exp{ -[ ∫_t_0^t k(T) dt ]^n }. 
For a constant heating rate where dT/dt ≡β, the term ∫^t_t_0 k(T) dt can be rewritten as ∫^T_T_0 k(T)/β dT such that ∫^T_T_0 k(T)dT = k'(T-T_0).<cit.> Hence, the temperature dependence of the crystalline volume fraction becomes X_extended(T) = 1 - exp{- [k'(T-T_0)/β]^n }, where k'(T) = k'_0exp(-E_a/RT), k'_0 is a constant, E_a is the activation energy, T_0 is the onset temperature for crystallization, and R is the gas constant. Rearranging terms and taking the logarithm of both sides of the equation, we see that the local Avrami exponent under non-isothermal conditions can be expressed as<cit.> n_extended(X) = [RT^2/(RT^2+E_a(T-T_0))] dln[-ln(1-X)]/dln[(T-T_0)/β]. Based on Eqs. (<ref>) and (<ref>), we calculated the local Avrami exponent using both the classic and extended JMAK model to describe the nucleation and growth mechanism of our irradiated FeGe samples. To apply these models, the effective crystalline volume fraction X must first be obtained. Having separated the diffraction data into amorphous and crystalline components, we can apply a classic Rietveld-based metric called the Degree of Crystallinity (DOC) to extract the crystalline volume fraction <cit.>. Here, DOC is simply defined as the area-under-the-curve of the diffraction intensity for the crystalline component I_cryst(g) normalized by the area-under-the-curve for the total diffraction intensity — the sum of I_cryst(g) and the intensity of the amorphous component, I_amorph(g): DOC = ∫I_cryst dg/(∫I_cryst dg + ∫I_amorph dg). Figure <ref>(a) compares the temperature dependence of the effective crystalline volume fraction for the as-grown and irradiated films, obtained using Eq. (<ref>), and the diffraction data are shown in Figs. <ref>(i,j) and <ref>(i,j). First, notice that the effective crystalline fraction for the pristine sample is around 0.92 pre-annealing and falls to around 0.82 by the end of the annealing sequence, whereas an X of around 1 would ideally be expected for a perfectly crystalline film. We attribute this reduced extracted crystallinity to artefacts — the broadening of the diffraction spots and the presence of Kikuchi lines, highlighted in Supplementary Materials Fig. S1. Likely owing to slight movements of the sample, the Kikuchi lines shift, causing an inconsistent broadening of the diffraction spots at different temperatures. Extensive diffraction data collected on the pristine sample are included in the Supplementary Materials. Before the recrystallization process starts, we see that the sample irradiated at 10^14 exhibits a crystallized volume fraction of approximately 0.22, while the film irradiated with the lower fluence has a significantly higher initial crystallized volume fraction of approximately 0.6. This disparity is consistent with our more qualitative observations from microscopy and diffraction studies, all showing that the higher ion dose leads to more amorphous regions, resulting in a lower crystallized volume fraction. We also see from Fig. <ref>(a) that after annealing at a maximum temperature of 250^∘C, X is nearly equivalent for the film irradiated at 10^13 and the as-grown film. However, the crystallized volume fraction for the sample irradiated at 10^14 reaches a lower value compared to the lower-dosed sample. This suggests that either longer annealing times are required or there are defects that have larger activation energies which require higher temperatures to eliminate.
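The degree of crystallinity and the local Avrami exponent defined above reduce to a few lines of numerical integration and differentiation once the separated one-dimensional profiles and the measured X series are available as arrays. The sketch below is only an illustration of those definitions; it is not the analysis code released with this work, and the use of NumPy, the array names, and the simple finite-difference derivative are choices made here for brevity.

import numpy as np

def degree_of_crystallinity(g, I_cryst, I_amorph):
    # DOC = area under I_cryst(g) divided by the total area (crystalline + amorphous).
    a_c = np.trapz(I_cryst, g)
    a_a = np.trapz(I_amorph, g)
    return a_c / (a_c + a_a)

def local_avrami_exponent(t, X, t0):
    # Classic (isothermal) JMAK: n(X) = d ln[-ln(1 - X)] / d ln(t - t0).
    # Requires 0 < X < 1 and t > t0.
    return np.gradient(np.log(-np.log(1.0 - X)), np.log(t - t0))

def local_avrami_exponent_extended(T, X, T0, beta, Ea, R=8.314):
    # Non-isothermal (extended) JMAK with the RT^2 / (RT^2 + Ea (T - T0)) prefactor.
    # Ea and R must be in consistent units (e.g., Ea in J/mol with R = 8.314 J/(mol K));
    # if Ea is given per atom, replace R by Boltzmann's constant.
    y = np.log(-np.log(1.0 - X))
    x = np.log((T - T0) / beta)
    return (R * T**2 / (R * T**2 + Ea * (T - T0))) * np.gradient(y, x)

Applied to the measured X(T) curves, these routines produce n(X) traces of the kind discussed next.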
Figure <ref>(a) also reveals a distinct difference in the temperature at which X(T) upturns in each sample, showing that recrystallization in the sample irradiated at 10^14 occurs at a higher temperature compared to the one irradiated at 10^13 . This shift suggests that the higher density of the amorphous phase leads to greater phase stability and thus requires a higher activation energy for recrystallization. To validate this explanation, further experiments are needed, for example, differential scanning calorimetry to calculate the activation energy using the Kissinger method.<cit.> Next, from our crystallized volume fraction data, we can apply the JMAK model to determine the Avrami exponent and then extract information regarding the nucleation rate and mechanism from these values. As shown in Fig. <ref>, our heating profile is stepwise and therefore neither completely isothermal nor non-isothermal. Consequently, we apply both the classic and extended JMAK models, Eqs. (<ref>) and (<ref>), respectively. For the extended model, which considers non-isothermal conditions, we approximate the temperature sweep as linear, with β = 2^∘C/min, and use an activation energy of E_a = 0.1011 × 10^-19 J/atom for FeGe, based on Ref. [FeGeActivationE]. Figures <ref>(b, c) show the calculated local Avrami exponents n(X) plotted against the crystallized volume fraction X for the films irradiated at 10^13 and 10^14 , respectively. To now resolve the nucleation rate and growth mechanism, we consider that the local Avrami exponent n(X) can be described using the equation n = a + bc.<cit.> Here, a ≥ 0 provides information on the relative nucleation rate: a = 0 corresponds to no nucleation during growth, 0 < a < 1 to a decreasing, a = 1 to a constant, and a > 1 to an increasing nucleation rate with time. The term b relates to the growth mechanism, for which b = 1 indicates interface-controlled growth and b = 0.5 signifies diffusion-controlled growth. Lastly, the term c indicates the growth dimension and can therefore only be 1, 2, or 3. Based on our data, the possible indices a, b, and c are not unique. We discuss the possibilities here, which are also summarized in Table <ref>. It has been previously reported that crystallization of Fe-Ge films is diffusion controlled,<cit.> therefore we assume b = 0.5. Consequently, under this assumption, the lower limit for n(X) is 0.5, calculated from the minimum values c = 1 and a = 0. Therefore, for the film irradiated at 10^13 , our n(X) data that can be applied to the model lie within the range 0.64 < X < 0.85, highlighted in grey shading in Fig. <ref>(b). According to the classic JMAK model, the n value initially increases to approximately 1.2 before rapidly decreasing to below 1 with increasing temperature and X. Similarly, n(X) calculated from the extended JMAK model exhibits comparable behavior, increasing initially, though it peaks at a lower value of 1.0 before decreasing. We then use the average Avrami exponent n — 0.9 and 0.8 for the JMAK and extended JMAK models, respectively — to determine the indices. For an increasing nucleation rate (a > 1) to exist, n(X) must be greater than 1.5 even for the lowest possible c = 1, again considering b = 0.5. Under the constraint of n < 1.0 and the assumption of b = 0.5, c can only be 1 and a < 1. Therefore, both models conclude that grain growth in the film irradiated at 10^13 is diffusion-controlled, one-dimensional growth with a decreasing nucleation rate.
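The index bookkeeping just described can be made explicit with a short enumeration. The sketch below simply lists, for a given average exponent, the (a, b, c) combinations allowed by a ≥ 0 and c ∈ {1, 2, 3} under the diffusion-controlled assumption b = 0.5 used in the text; the function name and printed labels are placeholders, and the exponent values are the averages reported above.

def feasible_indices(n_avg, b=0.5):
    # n = a + b*c with a >= 0 and integer growth dimension c in {1, 2, 3}.
    # a < 1: decreasing, a = 1: constant, a > 1: increasing nucleation rate.
    combos = []
    for c in (1, 2, 3):
        a = n_avg - b * c
        if a >= 0:
            trend = "decreasing" if a < 1 else ("constant" if a == 1 else "increasing")
            combos.append({"a": round(a, 2), "b": b, "c": c, "nucleation": trend})
    return combos

for label, n_avg in [("1e13 film, classic JMAK", 0.9), ("1e13 film, extended JMAK", 0.8),
                     ("1e14 film, classic JMAK", 1.1), ("1e14 film, extended JMAK", 1.0)]:
    print(label, feasible_indices(n_avg))

Running this reproduces the conclusions drawn here: only c = 1 survives for the lower-dose film, while c = 1 or 2 remains possible for the higher dose, as discussed next.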
We now consider the film irradiated at 10^14 , examining our n(X) data within the range of 0.27 < X < 0.65, which is highlighted in grey shading in Fig. <ref>(c). This applicable range was determined using a lower limit of n(X) = 0.5. We see that the recrystallization behavior follows a similar pattern to that in the lower-dose film: the nucleation rate decreases since n(X) mostly remains below 1.5. However, the peak n values are higher, reaching 1.6 and 1.34 for the extended and classic JMAK models, respectively. In this case, the average n for the classic JMAK model is 1.1, and for the extended JMAK model it is 1.0. Accordingly, 1.0 ≤ n ≤ 1.1, a < 1, and b = 0.5 indicate that the grain growth is dominated by diffusion-controlled one-to-two dimensional growth with a decreasing nucleation rate. The difference between n(X) for the classic and extended JMAK models reaches a maximum of approximately 16% in both samples, with all terms remaining within a range that yields the same conclusions regarding nucleation. Both models indicate our irradiated FeGe films experienced a decrease in nucleation rate at all times. This is consistent with the non-zero crystallized volume fraction at the start of annealing because these unaffected crystalline regions can act as nuclei. Notably, the higher dimensional growth observed in the film irradiated at 10^14 compared to the one irradiated at 10^13 aligns with the experimental conditions: the higher density of amorphous regions in the former leads to less restricted growth directions. § CONCLUSIONS In conclusion, we developed a method of creating a tunable skyrmion-antiskyrmion system in B20-phase FeGe through ion beam modification and annealing, which is of high interest because few systems exist in which skyrmions and antiskyrmions coexist. First, in a previous study <cit.>, we showed that ion beam modification of epitaxial FeGe films through irradiation with 2.8 MeV Au^4+ ions at different fluences induces varying densities of amorphized regions. From extensive Hall effect studies, we observed a topological Hall effect evidencing skyrmions in the as-grown film and a double-peak feature in the Hall effect data that is consistent with the presence of skyrmions and antiskyrmions in the ion-beam-modified films. Given that skyrmion formation is favorable in the crystalline regions and antiskyrmion formation in the amorphous regions, the relative populations of each quasiparticle appear dependent on the volume fractions of crystalline and amorphous regions. In this study, we added another degree of tunability to the system by developing an annealing process that progressively recrystallizes the ion-beam-modified films. By monitoring the heating-induced changes in the crystalline structure through in-situ SAED, we determined that as the temperature increases, the crystalline volume fraction is first constant, then sharply increases, before plateauing. We also found that the temperature at which the crystalline volume fraction sharply increases depends on initial structural conditions and that the mostly amorphized FeGe from high fluence irradiation does not fully recrystallize at temperatures up to 250^∘C within the duration of our annealing process (104 minutes total, 3 minutes at 250^∘C). Lastly, by applying both JMAK models, we found that the nucleation rate in both samples decreases during recrystallization, which suggests the formation of larger grains with fewer grain boundaries.
This reduction in grain boundary defects may affect the formation and motion of skyrmions and antiskyrmions in the system, in this case reducing skyrmion-defect interactions. Furthermore, the higher dimensional growth observed in the sample irradiated with a 10^14 fluence implies a more complex, diverse defect landscape that skyrmions and antiskyrmions must navigate. More generally, the process we developed can be applied to creating skyrmion-antiskyrmion systems in FeGe for technological applications such as racetrack memory<cit.> and logic<cit.> or for fundamental research on phenomena such as spin wave emission by skyrmion-antiskyrmion pair annihilation <cit.> as well as skyrmion-antiskyrmion liquids and crystals. <cit.> § METHODS §.§ Film Growth Our FeGe films on Si (111) substrates were prepared at the Platform for the Accelerated Realization, Analysis, and Discovery of Interface Materials (PARADIM) at Cornell University, applying a process similar to that detailed in Ref.  PhysRevMaterials.2.074404 for growth of Mn_xFe_1-xGe films. Using molecular beam epitaxy in a Veeco GEN10 system (2 × 10^-9 torr base pressure), an FeSi seed layer was first formed by depositing a monolayer of Fe onto a 7 × 7 reconstructed Si (111) surface then flash annealing at 500C. Fe and Ge sources were then co-deposited from 40 cc effusion cells at a rate of 0.5 Å/s and substrate temperature of 200C. This resulted in B20-phase, epitaxial FeGe films that were fairly uniform over 1.5 inch substrates. §.§ Ion Beam Modification The FeGe films were irradiated in a 6 MV HVE EN Tandem Van de Graaff Accelerator at the Ion Beam Lab at Sandia National Laboratories. Prior to the irradiation process, the beam energy and fluences were chosen based on estimations of resulting atomic displacements from Stopping Range of Ion in Matter (SRIM) simulations.<cit.> The SRIM simulation considered the densities, displacement energies, lattice energies, and surface energies indicated in Table <ref>. FeGe films cleaved into 6 x 6mm fragments were adhered onto a Si backing plate with double sided carbon tape and mounted into the tandem end station which was pumped to a base pressure of at least 1e-6torr. Ion irradiation was performed using 2.8 MeV Au^4+ ions with an ion beam current of 100nA, equivalent to an ion flux of 1.56e11ions/cm^2. §.§ Scanning Transmission Electron Microscopy and Focus Ion Beam Milling Lamellae were prepared using a Thermo Fisher Helios G496 UX-focused ion beam (FIB) at PARADIM. Plan-view lamellae were used for SAED and cross-sectional lamellae was fabricated for STEM and EELS measurements. Initially, a protective overlayer stack of 20 - 30 nm of carbon and 0.8–1 μm of Pt was deposited on the sample surface. Subsequently, a Ga-ion beam was used to carve out the thinned lamellae, which is then attached to a needle using a sputtered Pt paste to facilitate transfer onto a TEM grid. Both the cross-section and plan-view lamellae went through successive milling on both sides down to a thickness of 200 - 500 nm using a 30 keV Ga-ion beam. Lastly, the lamellae were milled at 5 keV until the Pt layer thickness fell below 20 nm. Throughout the plan-view FIB thinning process, the film surface and substrate underwent milling at a 2-3^∘ angle with respect to the ab-plane, ensuring a continuous gradient across different regions. The STEM images depicted in Fig. <ref>(b) and Fig. <ref>(d) as well as the EELS data in Fig. <ref>(c) were captured using a Thermo-Fisher Scientific Spectra 300 STEM in the PARADIM facility. 
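As a rough consistency check on the irradiation parameters given in the Ion Beam Modification subsection above, the quoted flux follows directly from the beam current and the 4+ charge state. The short sketch below assumes the current is delivered over a 1 cm^2 area (an assumption, since the scanned area is not stated here) and also estimates the irradiation time needed to reach the two fluences; it is illustrative only and not part of the experimental procedure.

E_CHARGE = 1.602176634e-19   # elementary charge in coulombs

def ion_flux(current_A, charge_state, area_cm2=1.0):
    # Ions per cm^2 per second delivered by a beam of the given current and charge state.
    return current_A / (charge_state * E_CHARGE * area_cm2)

flux = ion_flux(100e-9, 4)               # ~1.56e11, matching the value quoted above
for fluence in (1e13, 1e14):
    seconds = fluence / flux
    print(f"{fluence:.0e} ions/cm^2 -> {seconds / 60:.1f} min of irradiation")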
§.§ Selective Area Electron Diffraction The SAED data captured during the in-situ annealing process were collected using a 300 kV Titan Transmission Electron Microscope at Sandia National Laboratories (SNL). The sample was sandwiched onto a single-tilt Gatan heating holder using a screw. Then, after the chamber was pumped down to 1.5×10^-9 torr, the sample was heated from 50^∘C to 250^∘C using the stepwise profile depicted in Fig. <ref>. This heating process alternated between a temperature ramp of 5^∘C/min for 2 minutes and then a 3-minute hold. At each holding step, the temperature was allowed to stabilize for one minute, after which the following data was collected over a period of two minutes: a bright-field (BF) TEM image of the entire sample at a magnification of 3800X, a diffraction pattern using the selected area aperture with sample distance of 480 mm with an area of 0.35 μ m^2, and a BF TEM image of where the SAED data was taken, both at 8100X. The exposure time for all the images was 1 s and, throughout the imaging process, the SAED location remained fixed and microscope voltage was maintained at 300 kV. § SUPPLEMENTARY MATERIALS See Supplementary Materials for selected area electron diffraction results on the as-grown FeGe film as well as further details regarding our SAED data analysis process. § AUTHOR DECLARATIONS §.§ Conflict of Interest The authors have no conflicts to disclose. §.§ Authors' contributions S.E. conceived and designed the experiment, with input from D.M., K.H., and T.L. M.B.V. conducted the SRIM simulations and coordinated sample preparation, irradiation, and microscopy based on Hall effect studies. H. P. grew the FeGe films. X.Z. and H.Y. prepared the FIB samples. X.Z. collected and analyzed EELS and STEM images with advice from D.M. H.Y. also collected TEM images, and conducted room-temperature SAED measurements of the pristine film, with advice from D.M. R.S. and K.H. collected the TEM and diffraction data under various temperatures. J.L. analyzed, interpreted, and modeled the diffraction data. S.E. and J.L. wrote the manuscript. S.E. and T.L. assisted with general data analysis and interpretation. All authors commented on the manuscript. This material is based upon work supported by the National Science Foundation under grants DMR-1905909 and DMR-2330562 at the Colorado School of Mines and University of Washington. This work also made use of synthesis and microscopy facilities at the Platform for the Accelerated Realization, Analysis, and Discovery of Interface Materials (PARADIM), which are supported by the National Science Foundation under Cooperative Agreement No. DMR-2039380. Part of this work was performed at the Center for Integrated Nanotechnologies, a DOE Office of Science User Facility, and Sandia National Laboratories, managed and operated by NTESS, LLC, a wholly owned subsidiary of Honeywell International, Inc., for the U.S. DOE’s National Nuclear Security Administration. The views expressed in the article do not necessarily represent the views of the U.S. DOE or the United States Government. § DATA AVAILABILITY STATEMENT The data that support the findings of this study are available on Mendeley Data (doi.org/10.17632/xd8s497nsz.2) as a zip file. 
This includes .csv files containing the data used in figures, microscopy images (.tif files), python code used for the analysis, and an Origin file (.opju) containing all figures and data spreadsheets, which can be opened using Origin Viewer, a free application that permits viewing and copying of data contained in Origin project files.
http://arxiv.org/abs/2409.02245v1
20240903191948
FastVoiceGrad: One-step Diffusion-Based Voice Conversion with Adversarial Conditional Diffusion Distillation
[ "Takuhiro Kaneko", "Hirokazu Kameoka", "Kou Tanaka", "Yuto Kondo" ]
cs.SD
[ "cs.SD", "cs.AI", "cs.LG", "eess.AS", "stat.ML" ]
FastVoiceGrad: One-step Diffusion-Based Voice Conversion with Adversarial Conditional Diffusion Distillation
==========================================================================================================
§ ABSTRACT Diffusion-based voice conversion (VC) techniques such as VoiceGrad have attracted interest because of their high VC performance in terms of speech quality and speaker similarity. However, a notable limitation is the slow inference caused by the multi-step reverse diffusion. Therefore, we propose FastVoiceGrad, a novel one-step diffusion-based VC that reduces the number of iterations from dozens to one while inheriting the high VC performance of the multi-step diffusion-based VC. We obtain the model using adversarial conditional diffusion distillation (ACDD), leveraging the ability of generative adversarial networks and diffusion models while reconsidering the initial states in sampling. Evaluations of one-shot any-to-any VC demonstrate that FastVoiceGrad achieves VC performance superior to or comparable to that of previous multi-step diffusion-based VC while enhancing the inference speed.[Audio samples are available at <https://www.kecl.ntt.co.jp/people/kaneko.takuhiro/projects/fastvoicegrad/>.] § INTRODUCTION Voice conversion (VC) is a technique for converting one voice into another without changing the linguistic content. VC began to be studied in a parallel setting, in which mappings between the source and target voices are learned in a supervised manner using a parallel corpus. However, this approach encounters difficulties in collecting a parallel corpus. Alternatively, non-parallel VC, which learns mappings without a parallel corpus, has attracted significant interest. In particular, the emergence of deep generative models has ushered in breakthroughs. For example, (variational) autoencoder (VAE/AE) <cit.>-based VC <cit.>, generative adversarial network (GAN) <cit.>-based VC <cit.>, flow <cit.>-based VC <cit.>, and diffusion <cit.>-based VC <cit.> have demonstrated impressive results. Among these models, this paper focuses on diffusion-based VC because it <cit.> outperforms representative VC models (e.g., <cit.>) and has a significant potential for development owing to advancements in diffusion models in various fields (e.g., image synthesis <cit.> and speech synthesis <cit.>). Despite these appealing properties, its limitation is the slow inference caused by an iterative reverse diffusion process to transform noise into acoustic features (e.g., the mel spectrogram[For ease of reading, we hereafter focus on the mel spectrogram as a conversion target but other acoustic features can be equally applied.]) as shown in Figure <ref>(a). This requires at least approximately five iterations, typically dozens of iterations, to obtain sufficiently high-quality speech. This is disadvantageous compared to other deep generative model-based VC (e.g., VAE-based VC and GAN-based VC discussed above) because they can accomplish VC through a one-step feedforward process. To overcome this limitation, we propose FastVoiceGrad, a novel one-step diffusion-based VC model that inherits strong VC capabilities from a multi-step diffusion-based VC model (e.g., VoiceGrad <cit.>), while reducing the required number of iterations from dozens to one, as depicted in Figure <ref>(b).
To construct this efficient model, we propose adversarial conditional diffusion distillation (ACDD), which is inspired by adversarial diffusion distillation (ADD) <cit.> proposed in image synthesis, and distills a multi-step teacher diffusion model into a one-step student diffusion model while exploiting the abilities of GANs <cit.> and diffusion models <cit.>. Note that ADD and ACDD differ in two aspects: (1) ADD addresses a generation task (i.e., generating data from random noise), while ACDD addresses a conversion task (i.e., generating target data from source data); and (2) ADD is applied to images, while ACDD is applied to acoustic features. Owing to these differences, we (1) reconsider the initial states in sampling (Section <ref>) and (2) explore the optimal configurations for VC (Section <ref>). In the experiments, we examined the effectiveness of FastVoiceGrad for one-shot any-to-any VC, in which we used an any-to-any extension of VoiceGrad <cit.> as a teacher model and distilled it into FastVoiceGrad. Experimental evaluations indicated that FastVoiceGrad outperforms VoiceGrad with the same step (i.e., one-step) reverse diffusion process, and has performance comparable to VoiceGrad with a 30-step reverse diffusion process. Furthermore, we demonstrate that FastVoiceGrad is superior to or comparable to DiffVC <cit.>, another representative diffusion-based VC, while improving the inference speed. The remainder of this paper is organized as follows: Section <ref> reviews VoiceGrad, which is the baseline. Section <ref> describes the proposed FastVoiceGrad. Section <ref> presents our experimental results. Finally, Section <ref> concludes the paper with a discussion on future research. § PRELIMINARY: VOICEGRAD VoiceGrad <cit.> is a pioneering diffusion-based VC model that includes two variants: a denoising score matching (DSM) <cit.>-based and denoising diffusion probabilistic model (DDPM) <cit.>-based models. The latter can achieve a VC performance comparable to that of the former while reducing the number of iterations from hundreds to approximately ten <cit.>. Thus, this study focuses on the DDPM-based model. The original VoiceGrad was formulated for any-to-many VC. However, we formulated it for any-to-any VC as a more general formulation. The main difference is that speaker embeddings are extracted using a speaker encoder instead of speaker labels, while the others remain almost the same. Overview. DDPM <cit.> represents a data-to-noise (diffusion) process using a gradual nosing process, i.e., x_0 →x_1 →…→x_T, where T is the number of steps (T = 1000 in practice), x_0 represents real data (mel spectrogram in our case), and x_T indicates noise x_T ∼𝒩(0, I). By contrast, it performs a noise-to-data (reverse diffusion) process, that is, x_T →x_T-1→…→x_0, using a gradual denoising process via a neural network. The details of each process are as follows: Diffusion process. Assuming a Markov chain, a one-step diffusion process q(x_t | x_t-1) (t ∈{1, …, T }) is defined as follows: q(x_t | x_t-1) = 𝒩(x_t; √(α_t) x_t-1, β_t 𝐈), where α_t = 1 - β_t. Owing to the reproductivity of the normal distribution, q(x_t | x_0) can be obtained analytically as follows: q(x_t | x_0) = 𝒩(x_t; √(α̅_̅t̅) x_0, (1 - α̅_̅t̅) 𝐈), where α̅_t = ∏_i=1^t α_i. Using a reparameterization trick <cit.>, Equation (<ref>) can be rewritten as x_t = √(α̅_̅t̅) x_0 + √(1 - α̅_̅t̅) ϵ, where ϵ∼𝒩(0, 𝐈). In practice, β_t is fixed at constant values <cit.> with a predetermined noise schedule (e.g., a cosine schedule <cit.>). 
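For concreteness, the closed-form noising step x_t = √(ᾱ_t) x_0 + √(1 - ᾱ_t) ε together with a cosine schedule can be sketched in a few lines. This is an illustrative implementation only, written here in PyTorch with placeholder names; it is not the authors' code, and the clamping bounds are arbitrary choices for numerical safety.

import math
import torch

def cosine_alpha_bar(T=1000, s=0.008):
    # ᾱ_t for a cosine schedule: f(u) = cos^2(((u + s)/(1 + s)) * π/2), ᾱ_t = f(t/T)/f(0).
    u = torch.linspace(0, 1, T + 1)
    f = torch.cos((u + s) / (1 + s) * math.pi / 2) ** 2
    return (f / f[0]).clamp(1e-5, 1.0)           # indexed by t = 0, ..., T

def q_sample(x0, t, alpha_bar):
    # Diffuse a clean mel-spectrogram x0 (shape (B, 80, frames)) to step t (shape (B,)).
    ab = alpha_bar[t].view(-1, *([1] * (x0.dim() - 1)))
    eps = torch.randn_like(x0)
    xt = ab.sqrt() * x0 + (1.0 - ab).sqrt() * eps
    return xt, eps                                # eps is the regression target for ε_θ

The same routine, evaluated at a single large step, is what later produces the diffused source mel-spectrogram used as the one-step initial state.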
Reverse diffusion process. A one-step reverse diffusion process p_θ(x_t-1 | x_t) is defined as follows: p_θ(x_t-1 | x_t) = 𝒩(x_t - 1; μ_θ(x_t, t, s, p), σ_t^2 𝐈), where μ_θ indicates the output of a model that is parameterized using θ, conditioned on t, speaker embedding s, and phoneme embedding p, and σ_t^2 = 1 - α̅_t - 1/1 - α̅_̅t̅β_t. Unless otherwise specified, x_0, s, and p are extracted from the same waveform. Through reparameterization <cit.>, Equation (<ref>) can be rewritten as x_t - 1 = μ_θ(x_t, t, s, p) + σ_t z, where z∼𝒩(0, I). Training process. The training objective of DDPM is to minimize the variational bound on the negative log-likelihood 𝔼 [ - log p_θ (x_0) ]: ℒ_DDPM(θ) = 𝔼_q(x_1:T | x_0) [ - logp_θ(x_0:T)/q(x_1:T | x_0) ]. Using Equation (<ref>) and the following reparameterization μ_θ(x_t, t, s, p) = 1/√(α_t) ( x_t - 1 - α_t/√(1 - α̅_̅t̅) ϵ_θ(x_t, t, s, p) ), Equation (<ref>) can be rewritten as follows: ℒ_DDPM(θ) = ∑_t=1^T w_t 𝔼_x_0, ϵ [ ‖ϵ - ϵ_θ(x_t, t, s, p) ‖_1 ], where ϵ_θ represents a noise predictor that predicts ϵ using x_t, t, s, and p. See <cit.> for detailed derivations of Equations (<ref>)–(<ref>). Here, w_t is a constant and is set to 1 in practice for better training <cit.>. In the original DDPM <cit.>, the L2 loss is used in Equation (<ref>); however, we use the L1 loss according to <cit.>, which shows that the L1 loss is better than the L2 loss. Conversion process. When ϵ_θ is trained, VoiceGrad can convert the given source mel-spectrogram x_0^src into a target mel-spectrogram x_0^tgt using Algorithm <ref>. Here, we use the superscripts src and tgt to indicate that the data are related to the source and target speakers, respectively. In this algorithm, a target speaker embedding s^tgt and a source phoneme embedding p^src are used as auxiliary information. To accelerate sampling <cit.>, we use the subsequence { S_K, …, S_1 } as a sequence of t values instead of { T, …, 1 }, where K ≤ T. Owing to this change, α_S_k is redefined as α_S_k = α̅_S_k/α̅_S_k-1 for k > 1 and α_S_k = α̅_S_k for k = 1. σ_S_k is modified accordingly. Note that VC is a conversion task and not a generation task; therefore, x_0^src is used as an initial value of x (line 1) instead of random noise x_T ∼𝒩(0, I), which is typically used in a generation task. For the same reason, the initial value of t is adjusted from T to S_K < T (line 2) to initiate the reverse diffusion process from the midterm state rather than from the noise. § PROPOSAL: FASTVOICEGRAD §.§ Rethinking initial states in sampling In Algorithm <ref>, the two crucial factors that affect the inheritance of source speech are the initial values of (1) x and (2) t. Rethinking the initial value of x. When the initial value of x is set to x∼𝒩(0, 𝐈) (a strategy used in generation), no gap occurs between training and inference; however, we cannot inherit the source information, that is, x_0^src, which is useful for VC to preserve the content. In contrast, when x_0^src is directly used as the initial value of x (the strategy used in VoiceGrad), we can inherit the source information, but a gap occurs between training and inference. Considering both aspects, we propose the use of a diffused source mel-spectrogram x_S_K^src, defined as x_S_K^src = √(α̅_S_K) x_0^src + √(1 - α̅_S_K) ϵ. In line 1 of Algorithm <ref>, x_S_K^src is used instead of x_0^src. The effect of this replacement is discussed in the next paragraph. Rethinking the initial value of t (i.e., S_K). 
As S_K is closer to T, x can be transformed to a greater extent under the assumption that it contains more noise, but can corrupt essential information. As this is a nontrivial tradeoff, it is empirically investigated. Figure <ref> shows the relationship between S_K and DNSMOS <cit.> (corresponding to speech quality) and that between S_K and speaker verification accuracy (SVA) <cit.> (corresponding to speaker similarity). We present the results for two cases in which x_0^src and x_S_K^src are used as the initial values of x. K was set to 1; that is, one-step reverse diffusion was conducted. We observe that SVA improves as S_K increases because x is largely transformed toward the target speaker in this case. When x was initialized with x_0^src, DNSMOS worsens as S_K increases. In contrast, when x was initialized with x_S_K^src, DNSMOS worsens once but then becomes better, possibly because, in the latter case, the gap between training and inference is alleviated via a diffusion process (Equation (<ref>)) as S_K increases. Both scores decreasd significantly when S_K = 1000, where x was denoised under the assumption that x is noise. These results indicate that the one-step reverse diffusion should begin under the assumption that x contains the source information, albeit in extremely small amounts.[Note that, if K is sufficiently large, adequate speech can be obtained even with S_K = 1000 at the expense of speed.] A comparison of the results for x_0^src and x_S_K^src indicates that x_S_K^src is superior, particularly when considering the SVA. Based on these results, we used x_S_K^src with S_K = 950 in the subsequent experiments. Figure <ref> shows the results for this setting. §.§ Adversarial conditional diffusion distillation Owing to the difficulty in learning a one-step diffusion model comparable to a multi-step model from scratch, we used a model pretrained using the standard VoiceGrad as an initial model and improved it through ACDD. Inspired by ADD <cit.>, which was proposed for image generation, we used adversarial loss and score distillation loss in distillation. Adversarial loss. Initially, we considered directly applying a discriminator to the mel spectrogram, similar to the previous GAN-based VC (e.g., <cit.>). However, we could not determine an optimal discriminator to eliminate the buzzy sound in the waveform. Therefore, we converted the mel spectrogram into a waveform using a neural vocoder 𝒱 (with frozen parameters) and applied a discriminator 𝒟 in the waveform domain. More specifically, adversarial loss (particularly least-squares GAN <cit.>-based loss) is expressed as follows: ℒ_adv(𝒟) = 𝔼_x_0 [ (𝒟(𝒱(x_0)) - 1)^2 + (𝒟(𝒱(x_θ)))^2], ℒ_adv(θ) = 𝔼_x_0 [ (𝒟(𝒱(x_θ)) - 1)^2 ], where x_0 represents a mel spectrogram extracted from real speech. x_θ represents a mel spectrogram generated using x_θ = μ_θ(x_S_k, S_K, s, p) (one-step denoising prediction defined in Equation (<ref>)), where x_S_K is the S_K-step diffused x_0 via Equation (<ref>). The adversarial loss is used to improve the reality of x_θ through adversarial training. Furthermore, following the training of a neural vocoder <cit.>, we used the feature matching (FM) loss, defined as ℒ_FM(θ) = 𝔼_x_0[ ∑_l = 1^L 1/N_l ‖𝒟_l(𝒱(x_0)) - 𝒟_l(𝒱(x_θ)) ‖_1 ], where L indicates the number of layers in 𝒟. 𝒟_l and N_l denote the features and the number of features in the l-th layer of 𝒟, respectively. ℒ_FM(θ) bears x_θ closer to x_0 in the discriminator feature space. Score distillation loss. 
The score distillation loss <cit.> is formulated as follows: ℒ_dist(θ) = 𝔼_t, x_0 [ c(t) ‖x_ϕ - x_θ ‖_1 ], where x_ϕ is one-step denoising prediction (Equation (<ref>)) generated by a teacher diffusion model parameterized with ϕ (frozen in training): x_ϕ = μ_ϕ(sg(x_θ, t), t, s, p). Here, sg denotes the stop-gradient operation, x_θ, t is the t-step diffused x_θ via Equation (<ref>), and t ∈{ 1, …, T }. c(t) is a weighting term and is set to α_t in practice to allow higher noise levels to contribute less <cit.>. ℒ_dist(θ) encourages x_θ (student output) to match x_ϕ (teacher output). Total loss. The total loss is expressed as follows: ℒ_ACDD(θ) = ℒ_adv(θ) + λ_FM ℒ_FM(θ) + λ_dist ℒ_dist (θ), ℒ_ACDD(𝒟) = ℒ_adv(𝒟), where λ_FM and λ_dist are weighting hyperparameters set to 2 and 45, respectively, in the experiments. θ and 𝒟 are optimized by minimizing ℒ_ACDD(θ) and ℒ_ACDD(𝒟), respectively. § EXPERIMENTS §.§ Experimental settings Data. We examined the effectiveness of FastVoiceGrad on one-shot any-to-any VC using the VCTK dataset <cit.>, which included the speeches of 110 English speakers. To evaluate the unseen-to-unseen scenarios, we used 10 speakers and 10 sentences for testing, whereas the remaining 100 speakers and approximately 390 sentences were used for training. Following DiffVC <cit.>, audio clips were downsampled at 22.05kHz, and 80-dimensional log-mel spectrograms were extracted from the audio clips with an FFT size of 1024, hop length of 256, and window length of 1024. These mel spectrograms were used as conversion targets. Comparison models. We used VoiceGrad <cit.> (Section <ref>) as the main baseline and distilled it into FastVoiceGrad. A diffusion model has a tradeoff between speed and quality according to the number of reverse diffusion steps (K). To investigate this effect, we examined three variants: VoiceGrad-1, VoiceGrad-6, and VoiceGrad-30, which are VoiceGrad with K = 1, K = 6, and K = 30, respectively. VoiceGrad-1 is as fast as FastVoiceGrad, whereas the others are slower. For an ablation study, we examined FastVoiceGrad_adv and FastVoiceGrad_dist, in which score distillation and adversarial losses were ablated, respectively. As another strong baseline, we examined DiffVC <cit.>, which has demonstrated superior quality compared to representative one-shot VC models <cit.>. Based on <cit.>, we used two variants: DiffVC-6 and DiffVC-30, that is, DiffVC with six and 30 reverse diffusion steps, respectively. Implementation. VoiceGrad and FastVoiceGrad were implemented while referring to <cit.>. We implemented ϵ_θ using U-Net <cit.>, which consisted of 12 one-dimensional convolution layers of 512 hidden channels with two downsampling/upsampling, gated linear unit (GLU) activation <cit.>, and weight normalization <cit.>. The two main changes from <cit.> were that speaker embedding s was extracted by a speaker encoder <cit.> instead of a speaker label, and t was encoded by sinusoidal positional embedding <cit.> instead of one-hot embedding. We extracted p using a bottleneck feature extractor (BNE) <cit.>. We implemented 𝒱 and 𝒟 using the modified HiFi-GAN-V1 <cit.>, in which a multiscale discriminator <cit.> was replaced with a multiresolution discriminator <cit.> that showed better performance in speech synthesis <cit.>. We trained VoiceGrad using the Adam optimizer <cit.> with a batch size of 32, learning rate of 0.0002, and momentum terms (β_1, β_2) = (0.9, 0.999) for 500 epochs. 
We trained FastVoiceGrad using the Adam optimizer <cit.> with a batch size of 32, learning rate of 0.0002, and momentum terms (β_1, β_2) = (0.5, 0.9) for 100 epochs. We implemented DiffVC using the official code.[<https://github.com/huawei-noah/Speech-Backbones/tree/main/DiffVC>] Evaluation. We conducted mean opinion score (MOS) tests to evaluate perceptual quality. We used 90 different speaker/sentence pairs for the subjective evaluation. For the speech quality test (qMOS), nine listeners assessed the speech quality on a five-point scale: 1 = bad, 2 = poor, 3 = fair, 4 = good, and 5 = excellent. For the speaker similarity test (sMOS), ten listeners evaluated speaker similarity on a four-point scale: 1 = different (sure), 2 = different (not sure), 3 = same (not sure), and 4 = same (sure), in which the evaluated speech was played after the target speech (with a different sentence). As objective metrics, we used UTMOS <cit.>, DNSMOS <cit.>, and character error rate (CER) <cit.> to evaluate speech quality. We used DNSMOS (MOS sensitive to noise) in addition to UTMOS (which achieved the highest score in the VoiceMOS Challenge 2022 <cit.>) because we found that UTMOS is insensitive to speech with noise, which typically occurs when using a diffusion model with a few reverse diffusion steps. We evaluated speaker similarity using SVA <cit.>, in which we verified whether converted and target speech are uttered by the same speaker. We used 8,100 different speaker/sentence pairs for objective evaluation. The audio samples are available from the link indicated on the first page of this manuscript.foot:samples §.§ Experimental results Table <ref> summarizes these results. We observed that FastVoiceGrad not only outperformed the ablated FastVoiceGrads (FastVoiceGrad_adv and FastVoiceGrad_dist) and VoiceGrad-1, which have the same speed, but was also superior to or comparable to VoiceGrad-6 and VoiceGrad-30, of which calculation costs were as six and 30 times as FastVoiceGrad, respectively. Furthermore, FastVoiceGrad was superior to or comparable to DiffVCs (DiffVC-6 and DiffVC-30) in terms of all metrics.[On the Mann–Whitney U test (p-value > 0.05), FastVoiceGrad is not significantly different from VoiceGrad-30/6 and DiffVC-30 but significantly better than the other baselines for qMOS, and FastVoiceGrad is significantly better than all baselines for sMOS.] For a single A100 GPU, the real-time factors of mel-spectrogram conversion and total VC (including feature extraction and waveform synthesis) for FastVoiceGrad are 0.003 and 0.060, respectively, which are faster than those for DiffVC-6 (fast variant), which are 0.094 and 0.135, respectively. These results indicate that FastVoiceGrad can enhance the inference speed while achieving high VC performance. §.§ Application to another dataset To confirm this generality, we evaluated FastVoiceGrad on the LibriTTS dataset <cit.>. We used the same networks and training settings as those for the VCTK dataset, except that the training epochs for VoiceGrad and FastVoiceGrad were reduced to 300 and 50, respectively, owing to an increase in the amount of training data. Table <ref> summarizes the results. The same tendencies were observed in that FastVoiceGrad not only outperformed VoiceGrad-1 (a model with the same speed) but was also superior to or comparable to the other baselines. 
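For reference, the complete FastVoiceGrad conversion benchmarked in these experiments amounts to one forward-diffusion step at S_K = 950 followed by a single denoising prediction. The sketch below assumes a trained one-step denoising prediction mu_theta (the μ_θ of Section 2), precomputed ᾱ values, and externally extracted speaker and phoneme embeddings; all names are placeholders, and this is not the released implementation.

import torch

@torch.no_grad()
def fastvoicegrad_convert(x_src, s_tgt, p_src, mu_theta, alpha_bar, S_K=950):
    # x_src: source mel-spectrogram (B, 80, frames); s_tgt: target speaker embedding;
    # p_src: source phoneme (BNE) embedding; mu_theta: μ_θ(x_t, t, s, p).
    ab = alpha_bar[S_K]
    eps = torch.randn_like(x_src)
    x_sk = ab.sqrt() * x_src + (1.0 - ab).sqrt() * eps          # diffused source initial state
    t = torch.full((x_src.shape[0],), S_K, dtype=torch.long, device=x_src.device)
    x_tgt = mu_theta(x_sk, t, s_tgt, p_src)                      # single reverse-diffusion step
    return x_tgt                                                 # pass to the vocoder for audio

A waveform is then obtained by feeding x_tgt to the (frozen) neural vocoder, as in the training setup described above.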
§ CONCLUSION We proposed FastVoiceGrad, a one-step diffusion-based VC model that can achieve VC performance comparable to or superior to multi-step diffusion-based VC models while reducing the number of iterations to one. The experimental results demonstrated the importance of carefully setting the initial states in sampling and the necessity of the joint use of GANs and diffusion models in distillation. Future research should include applications to advanced VC tasks (e.g., emotional VC and accent correction) and an extension to real-time implementation. § ACKNOWLEDGEMENTS This work was supported by JST CREST Grant Number JPMJCR19A3, Japan.
http://arxiv.org/abs/2409.02403v1
20240904030346
Pseudo-timelike loops in signature changing semi-Riemannian manifolds with a transverse radical
[ "W. Hasse", "N. E. Rieger" ]
math.DG
[ "math.DG", "gr-qc", "math-ph", "math.MP" ]
Pseudo-timelike loops in signature changing semi-Riemannian manifolds with a transverse radical
================================================================================================
W. Hasse^1,2 and N. E. Rieger^3,4 (n.rieger@math.uzh.ch)
[1] Institute for Theoretical Physics, Technical University Berlin, Hardenbergstr. 36, Berlin, 10623, Germany
[2] Wilhelm Foerster Observatory Berlin, Munsterdamm 90, Berlin, 12169, Germany
[3] Department of Mathematics, University of Zurich, Winterthurerstrasse 190, Zurich, 8057, Switzerland
[4] Current address: Mathematics Department, Yale University, 219 Prospect Street, New Haven, 06520, CT, USA
In 1983, Hartle and Hawking introduced a conceptually intriguing idea involving signature-type change, which led to the no-boundary proposal for the initial conditions of the universe. According to this proposal, the universe has no beginning because there is no singularity or boundary in spacetime; however, there is an origin of time. Mathematically, this entails signature-type changing manifolds where a Riemannian region smoothly transitions to a Lorentzian region at the surface where time begins. We present a coherent framework for signature-type changing manifolds characterized by a degenerate yet smooth metric. We then adapt firmly established Lorentzian tools and results to the signature-type changing scenario, introducing new definitions that carry unforeseen causal implications. A noteworthy consequence is the presence of locally closed time-reversing loops through each point on the hypersurface. By imposing the constraint of global hyperbolicity on the Lorentzian region, we demonstrate that for every point p∈ M, there exists a pseudo-timelike loop with point of self-intersection p. Or put another way, there always exists a closed pseudo-timelike path in M around which the direction of time reverses, and a consistent designation of future-directed and past-directed vectors cannot be defined. § INTRODUCTION According to popular ideas about quantum cosmology, classical cosmological models contain an initial Riemannian region of Euclidean signature joined to a final semi-Riemannian region with the usual Lorentzian signature <cit.>. In 1983, Hartle and Hawking <cit.> introduced a conceptually intriguing idea involving signature-type change, which led to the no-boundary proposal for the initial conditions of the universe. According to the Hartle–Hawking proposal, the universe has no beginning because the spacetime lacks any singularity or boundary.[Although singularities can be considered points where curves terminate at finite parameter values, providing a general definition remains difficult <cit.>.] In such singularity-free universes, there is no distinct beginning, but they do have an origin of time <cit.>. Since a signature-type changing metric is necessarily either degenerate or discontinuous at the locus of signature change <cit.>, we will allow for the metric to become degenerate. Hence, in the present article we will discuss singular semi-Riemannian manifolds for which the metric constitutes a smooth (0,2)-tensor field that is degenerate at a subset ℋ⊂ M, where the bilinear type of the metric changes upon crossing ℋ. Although the compatibility of the Riemannian and Lorentzian domains is assumed to be established, insofar as the metric should be smooth on the interface ℋ, the behavior of curves as they cross this interface still requires further study.
Moreover, in a manifold where the signature changes from (+,+,…,+) to (-,+,…,+), the conventional concept of timelike (or spacelike) curves does not exist anymore. This gives rise to a new notion of curves called pseudo-timelike and pseudo-spacelike curves. In order to define these curves we make a detour to draw upon the concept of the generalized affine parameter which we use as a tool to distinguish genuine pseudo-timelike (and pseudo-spacelike, respectively) curves from curves that asymptotically become lightlike as they approach the hypersurface of signature change.   We endeavor to adapt well-established Lorentzian tools and results to the signature changing setting, as far as possible. This task proves to be less straightforward than anticipated, necessitating the introduction of new definitions with unexpected causal implications, reaching a critical juncture in our exploration. We draw upon the definition of pseudo-time orientability and the given absolute time function to decide whether a pseudo-timelike curve is future-directed. This establishes the definition for the pseudo-chronological past (and pseudo-chronological future) of an event.   In this article we show that for signature-type change of the delineated type, all these considerations lead to a surprising theorem revealing the non-well-behaved nature of these manifolds. In a sufficiently small region near the junction of signature change ℋ, transverse signature-type changing manifolds with a transverse radical exhibit local anomalies: Specifically, each point on the junction facilitates a closed time-reversing loop, challenging conventional notions of temporal consistency.[In more informal terms, in general relativity, a closed timelike curve is a smooth, timelike loop where, at every intersection point, the direction of movement is consistently the same. In contrast, a loop is a broader concept where a timelike curve loops back on itself, but the direction of movement at the intersection points might not always be the same. This is a more intuitive explanation; for a precise mathematical definition and its extension to a setting with signature-type change, see Definition <ref>.] Or put another way, there always exists a closed pseudo-timelike path in M around which the direction of time reverses, and along which a consistent designation of future-directed and past-directed vectors cannot be defined. By imposing the constraint of global hyperbolicity on the Lorentzian region, the global analog can be proven by showing that for every point p∈ M, there exists a pseudo-timelike loop such that the intersection point is p. In simpler terms, a transverse, signature-type changing manifold with a transverse radical has always pseudo-timelike loops. §.§ Transverse type-changing singular semi-Riemannian manifold Unless otherwise specified, the considered manifolds, denoted as M with dimension (M)=n, are assumed to be locally homeomorphic to ℝ^n. Moreover, these manifolds are expected to be connected, second countable, and Hausdorff. This definition also indicates that all manifolds have no boundary. Additionally, we will generally assume that the manifolds under consideration are smooth. Unless stated otherwise, all related structures and geometric objects (such as curves, maps, fields, differential forms, etc.) are assumed to be smooth as well.   A singular semi-Riemannian manifold is a generalization of a semi-Riemannian manifold. 
It is a differentiable manifold having on its tangent bundle a symmetric bilinear form which is allowed to become degenerate. Let (M,g) be a singular semi-Riemannian manifold and let p∈ M. We say that the metric changes its signature at a point p∈ M if any neighborhood of p contains at least one point q where the metric's signature differs from that at p. We align with <cit.> in requiring that (M,g) be a semi-Riemannian manifold with dim M≥2, where g is a smooth, symmetric, degenerate (0,2)-tensor on M, and ℋ:={q∈ M:g|_q is degenerate}. This means ℋ is the locus where the rank of g fails to be maximal. In addition, we assume that one connected component of M∖ℋ is Riemannian, denoted by M_R, while all other connected components (M_L_α)_α∈ I⊆ M_L⊂ M are Lorentzian, where M_L:=⋃_α∈ I M_L_α represents the Lorentzian domain. Furthermore, we assume throughout that the point set ℋ, where g becomes degenerate, is not empty. Moreover, we impose the following two conditions <cit.>:   * We call the metric g a codimension-1 transverse type-changing metric if d(det([g_μν]))_q≠0 for any q∈ℋ and any local coordinate system ξ=(x^0,…,x^n-1) around q. Then we call (M,g) a transverse type-changing singular semi-Riemannian manifold <cit.>.  This implies that the subset ℋ⊂ M is a smoothly embedded hypersurface in M, and the bilinear type of g changes upon crossing ℋ. Moreover, at every point q∈ℋ there exists a one-dimensional subspace, denoted as the radical Rad_q⊂ T_qM, within the tangent space T_qM that is orthogonal to all of T_qM at that point.  * The radical Rad_q is transverse for any q∈ℋ. Henceforward, we assume throughout that (M,g) is a singular transverse type-changing semi-Riemannian manifold with a transverse radical, unless explicitly stated otherwise. Recall that the radical at q∈ℋ is defined as the subspace Rad_q:={w∈ T_qM:g(w,·)=0}. This means g(v_q,·)=0 for all v_q∈ Rad_q. Note that the radical can be either transverse or tangent to the hypersurface ℋ. The radical Rad_q is called transverse <cit.> if Rad_q and T_qℋ span T_qM for any q∈ℋ, i.e. Rad_q+T_qℋ=T_qM. This means that Rad_q is not a subset of T_qℋ, and obviously, Rad_q is not tangent to ℋ for any q. As a consequence of the above two conditions, and inspired by <cit.>, we get the following theorem.   Let M be a singular semi-Riemannian manifold endowed with a (0,2)-tensor field g and the surface of signature change defined as ℋ:={q∈ M:g|_q is degenerate}. Then (M,g) is a transverse, signature-type changing manifold with a transverse radical if and only if for every q∈ℋ there exist a neighborhood U(q) and smooth coordinates (t,x^1,…,x^n-1) such that g=-t(dt)^2+g_ij(t,x^1,…,x^n-1)dx^idx^j, for i,j∈{1,…,n-1}.   In the style of time-orthonormal coordinates in Lorentzian geometry, we denote the coordinates in Theorem <ref> as radical-adapted Gauss-like coordinates. It is now possible to simplify matters by using these coordinates whenever dealing with a transverse, signature-type changing manifold with a transverse radical. Notably, signature-type change and the radical-adapted Gauss-like coordinates imply the existence of a uniquely determined, coordinate-independent, natural absolute time function h(t,x̂):=t in the neighbourhood of the hypersurface <cit.>. Then the absolute time function establishes a foliation <cit.> in a neighborhood of ℋ, such that ℋ is a level surface of that decomposition. §.§ Statement of results Before presenting the main results we require some new definitions.
Given a differentiable curve γ[a,b]→ M, with [a,b]⊂ℝ, where -∞<a<b<∞. Then the curve γ=γ^μ(u)=x^μ(u) in M is called pseudo-timelike (respectively, pseudo-spacelike) if for every generalized affine parametrization (see Definition <ref>) of γ in M_L ∃ ε>0 such that g(γ',γ')<-ε (respectively, g(γ',γ')>ε).   In simpler terms, we call a curve pseudo-timelike if it is timelike in the Lorentzian domain M_L and does not become asymptotically lightlike as it approaches the hypersurface where the signature changes. Consequently, a pseudo-timelike loop is a generalization of a pseudo-timelike curve that loops back on itself. However, unlike a regular closed curve where the direction of movement would be the same at every intersection point, in a pseudo-timelike loop, the direction of movement at the intersection points is not necessarily the same (see Definition <ref>).   A vector field V on a signature-type changing manifold (M,g) is pseudo-timelike if and only if V is timelike in M_L and its integral curves are pseudo-timelike.   A signature-type changing manifold (M,g) is pseudo-time orientable if and only if the Lorentzian region M_L is time orientable.   In a sufficiently small region near the junction of signature change, transverse, signature-type changing manifolds with a transverse radical exhibit local anomalies. Specifically, each point on the junction gives rise to the existence of closed time-reversing loops, challenging conventional notions of temporal consistency.   Let (M,g̃) be a transverse, signature-type changing, n-dimensional (n≥2) manifold with a transverse radical. Then in each neighborhood of each point there always exists a pseudo-timelike loop.   The existence of such pseudo-timelike curves locally near the hypersurface that loop back to themselves, gives naturally rise to the question whether this type of curves also occur globally. In the global version a key notion is global hyperbolicity which plays a role in the spirit of completeness for Riemannian manifolds. By imposing the constraint of global hyperbolicity on the Lorentzian region, we demonstrate   Let (M,g̃) be a pseudo-time orientable, transverse, signature-type changing, n-dimensional (n≥2) manifold with a transverse radical, where M_L=M∖(M_R∪ℋ) is globally hyperbolic. Assume that a Cauchy surface S is a subset of the neighborhood U=⋃_q∈ℋU(q) of ℋ, i.e. S⊆(U∩ M_L)=⋃_q∈ℋ(U(q)∩ M_L), with U(q) being constructed as in Theorem <ref>. Then for every point p∈ M, there exists a pseudo-timelike loop such that p is a point of self-intersection. § PSEUDO-CAUSAL AND PSEUDO-LIGHTLIKE CURVES In an n-dimensional manifold (M,g) where the signature-type changes from (+,+,,+) to (,+,,+), the conventional concept of a timelike curves does not make sense anymore. From a suitable given point in the Lorentzian region, the junction might be reached in finite proper time, but there is no time concept in the Riemannian region. Hence, curves in the Riemannian domain are devoid of causal meaning and cannot be distinguished as timelike, spacelike or null. In signature-type changing manifolds this gives rise to a novel notion of curves. In order to define those curves we have to make a detour to draw upon the concept of the generalized affine parameter. §.§ Properties of the Generalized Affine Parameter In this section we want introduce the notion of pseudo-timelike curves and pseudo-spacelike curves. 
However, we need a method to discern genuine pseudo-timelike (and pseudo-spacelike, respectively) curves from those that asymptotically become lightlike as they approach the hypersurface of signature change. The generalized affine parameter will prove useful to draw this distinction. For this, we require a notion of completeness so that every C^1 curve of finite length as measured by such a parameter has an endpoint. Ehresmann <cit.> and later Schmidt <cit.> appear to have been the first ones to propose using so-called generalized affine parameters to define the completeness of general curves <cit.>. The generalized affine parameter turns out to be a particularly useful quantity for probing singularities because it can be defined for an arbitrary curve, not necessarily a geodesic.   Let M be an n-dimensional manifold with an affine connection and γ: J→ M a C^1 curve on M. Recall that a smooth vector field V along γ is a smooth map V: J→ TM such that V(t)∈ T_γ(t)M for all t∈ J. Such a smooth vector field V along γ is said to be a parallel field along γ if V satisfies the differential equation ∇_γ'V(t)=0 for all t∈ J (see <cit.> for further details).   Choose now any t_0∈ J and a C^1 curve γ: J⟶ M through p_0=γ(t_0). Let {e_1,e_2,…,e_n} be any basis for T_γ(t_0)M. Let E_i be the unique parallel field along γ with E_i(t_0)=e_i for 1≤ i≤ n. Then {E_1(t),E_2(t),…,E_n(t)} forms a basis for T_γ(t)M for each t∈ J. We can now write γ̇(t), the vector tangent to γ at γ(t), as a linear combination of the elements of the chosen basis with coefficients V^i(t): γ̇(t)=∑_i=1^nV^i(t)E_i(t), with V^i: J⟶ℝ for 1≤ i≤ n. Then the generalized affine parameter μ=μ(γ,E_1,…,E_n) of γ(t) associated with this basis is given by μ(t)=∫_t_0^t√(∑_i=1^n[V^i(t)]^2)dt=∫_t_0^t√(δ_ijV^i(t)V^j(t))dt, t∈ J. The assumption that γ is C^1 is necessary to obtain the vector fields {E_1,E_2,…,E_n} through parallel translation.   Furthermore, we have   The curve γ has a finite arc-length in the generalized affine parameter μ=μ(γ,E_1,…,E_n) if and only if γ has finite arc-length in any other generalized affine parameter μ=μ(γ,E̅_1,…,E̅_n).   Note that the generalized affine parameter of a curve depends on the chosen basis. In effect, one treats the parallel-transported basis of vectors as though they were the orthonormal basis of a Riemannian metric and then defines the “length” of γ(t) accordingly. Note that if the metric g is positive definite, the generalized affine parameter defined by an orthonormal basis is arc-length. This characterization of completeness captures exactly the distinction we set out to draw. Also, the beauty of this definition is that μ can be defined on any C^1 curve; it works for null curves just as well as for timelike or spacelike curves. Moreover, any curve of unbounded proper length automatically has an unbounded generalized affine parameter <cit.>.   If only one generalized affine parameter reaches a finite value, all of them do - and that is the only information we need with respect to completeness. This reasoning is based on the following estimate: For any two bases of T_γ(t)M which are parallelly transported along γ, the components V^i(t) with respect to the other basis are given by Ṽ^j(t)=∑_i=1^nA_i^jV^i(t). We then have γ̇(t)=∑_i=1^nV^i(t)E_i(t)=∑_i=1^nṼ^i(t)Ẽ_i(t). The constants A_i^j are the entries of a constant, non-degenerate n× n matrix A. Hence, there exists its inverse matrix A^-1, with entries a_i^j, such that V^j(t)=∑_i=1^na_i^jṼ^i(t). 
Accordingly, the generalized affine parameters with respect to these bases are μ(t)=∫_t_0^t√(∑_i=1^n[V^i(t)]^2)dt and μ̃(t)=∫_t_0^t√(∑_i=1^n[Ṽ^i(t)]^2)dt. From this it follows that |Ṽ^j(t)|=|∑_i=1^nA_i^jV^i(t)|≤∑_i=1^n| A_i^j|| V^i(t)|≤max_i,j| A_i^j|·∑_i=1^n| V^i(t)|. Then, by virtue of the Cauchy-Schwarz inequality: |Ṽ^j(t)|^2≤max_i,j| A_i^j|^2(∑_i=1^n| V^i(t)|·1)^2≤max_i,j| A_i^j|^2(∑_i=1^n| V^i(t)|^2)·(∑_i=1^n1)=n·max_i,j| A_i^j|^2(∑_i=1^n| V^i(t)|^2). Thus we have ∑_j=1^n|Ṽ^j(t)|^2≤∑_j=1^n(n·max_i,j| A_i^j|^2(∑_i=1^n| V^i(t)|^2))=n^2·max_i,j| A_i^j|^2(∑_i=1^n| V^i(t)|^2). On the other hand, we get ∑_j=1^n| V^j(t)|^2≤ n^2·max_i,j| a_i^j|^2(∑_i=1^n|Ṽ^i(t)|^2). Combining both estimates yields ∑_j=1^n|Ṽ^j(t)|^2≤ n^2·max_i,j| A_i^j|^2(∑_i=1^n| V^i(t)|^2)≤ n^2·max_i,j| A_i^j|^2(n^2·max_i,j| a_i^j|^2(∑_i=1^n|Ṽ^i(t)|^2)) ⟺ 1/(n^2·max_i,j| A_i^j|^2)·∑_j=1^n|Ṽ^j(t)|^2 ≤∑_i=1^n| V^i(t)|^2≤ n^2·max_i,j| a_i^j|^2(∑_i=1^n|Ṽ^i(t)|^2) ⟹ c_1·√(∑_j=1^n|Ṽ^j(t)|^2)≤√(∑_i=1^n| V^i(t)|^2)≤ c_2·√(∑_i=1^n|Ṽ^i(t)|^2), with c_1:=1/√(n^2·max_i,j| A_i^j|^2) and c_2:=√(n^2·max_i,j| a_i^j|^2), ⟹ c_1·μ̃(t)≤μ(t)≤ c_2·μ̃(t). §.§ Application of the generalized affine parameter in a signature-type changing manifold Let M=M_L∪ℋ∪ M_R be an n-dimensional transverse type-changing singular semi-Riemannian manifold with a type-changing metric g, and ℋ:={q∈ M:g|_q is degenerate} the locus of signature change. We further assume that one component, M_L, of M∖ℋ is Lorentzian and the other one, M_R, is Riemannian.   Given a continuous and differentiable curve γ:[a,b]⟶ M, with [a,b]⊂ℝ, where -∞<a<b<∞. Then the curve γ=γ^μ(u)=x^μ(u) is a pseudo-lightlike curve if   * its tangent vector field in the Lorentzian component M_L is null, * its tangent vector field in the Riemannian component M_R is arbitrary. A similar definition applies for a pseudo-causal curve. Note that an analogous definition for pseudo-timelike and pseudo-spacelike curves turns out to be problematic as the definition would also include curves that asymptotically become lightlike as they approach ℋ, see Figure <ref>.   For example we may refer to the metric g=t(dt)^2+(dx)^2 defined on ℝ^2, and the non-parametrized, non-geodesic curve γ given by tan x=2/3√(| t|^3)·sgn(t), with -π/2<x<π/2. We rearrange this equation so that the variable t is by itself on one side (Figure <ref>):   3/2tan x=sgn(x)·|3/2tan x|=sgn(t)·| t|^3/2 ⟺ sgn(t)·| t|=sgn(x)·(|3/2tan x|)^2/3 ⟺ t=sgn(x)·(|3/2tan x|)^2/3.   Reintroducing the transformation as suggested by Dray <cit.> T=∫_0^t√(|t̃|)dt̃=2/3√(| t|)^3·sgn(t) gives us the metric expression g=sgn(T)(dT)^2+(dx)^2, and for the curve γ we get T=tan x. Hence, the curve in the (T,x)-coordinate system is just the tan-function and its derivative is 1/cos^2(x). As a result, γ is timelike in M_L, approaching the light cone from timelike infinity, and tangentially touches the light cone at T→0 (where the derivative becomes 1/cos^2(0)=1 in the limit). These are the sort of curves we want to avoid in our definition. Note that the (t,x)-coordinates are characterized by the fact that, unlike the (T,x)-coordinates, they cover the entire manifold M. Moreover, if the curve γ=(T(s),x(s)) is parametrized by arc length s, then in the (t,x)-coordinate system both dx/ds and dt/ds diverge in M_L:   -1=-(dT/ds)^2+(dx/ds)^2=-(dT/dx·dx/ds)^2+(dx/ds)^2=(-(dtan x/dx)^2+1)(dx/ds)^2=(-1/cos^4x+1)(dx/ds)^2   ⟺(dx/ds)^2=-1/(-1/cos^4x+1)   ⟹lim_x→0dx/ds=lim_x→0±√(-1/(-1/cos^4x+1))=±∞.   
-1=-(dT/ds)^2+(dx/ds)^2=(-1+1/(dT/dx)^2)(dT/ds)^2=(-1+1/(dtan x/dx)^2cos^4x)(dT/dt)^2(dt/ds)^2   =(-1+1/(1+tan^2x)^2)·| t|(dt/ds)^2=(-1+1/(1+T^2)^2)·| t|(dt/ds)^2   =(-1+1/(1+4/9| t|^3)^2)·| t|(dt/ds)^2   ⟺(dt/ds)^2=-1/(-1+1/(1+4/9| t|^3)^2)·| t|   ⟹t→0limdt/ds=t→0lim±√(-1/(-1+1/(1+4/9| t|^3)^2)·| t|)=±∞.   While the components of γ' do not diverge in the (T,x)-coordinate system, both dx/ds and dt/ds diverge in M_L in the (t,x)-coordinate system. Because of this dependency of coordinates the criterion of divergence is not useful for defining pseudo-timelike and pseudo-spacelike curves. That is where the coordinate-independent generalized affine parameter comes into play.   Let M=M_L∪ℋ∪ M_R be an n-dimensional transverse type-changing singular semi-Riemannian manifold, g be a type-changing metric, and ℋ:={q∈ M:g|_q is degenerate} the locus of signature change. We further assume that one component, M_L, of M∖ℋ is Lorentzian and the other one, M_R, is Riemannian. Given a continuous and differentiable curve γ[a,b]→ M, with [a,b]⊂ℝ, where -∞<a<b<∞. Then the curve γ=γ^μ(u)=x^μ(u) in M is called pseudo-timelike (respectively, pseudo-spacelike) if for every generalized affine parametrization of γ in M_L ∃ ε>0 such that g(γ',γ')<-ε (respectively, g(γ',γ')>ε).[Since Definition <ref> is already independent of a choice of coordinates and instead refers to a (generally anholonomic) basis, the above Definition <ref> is also coordinate independent. The independence of Definition <ref> from the choice of this basis is a direct consequence of Proposition <ref>. In particular, in the case of a basis change we just relegate to the Estimate <ref>.]   Revisiting Example <ref>, we find that both coordinate vector fields, ∂/∂ T and ∂/∂ x, are covariantly constant in M_L and M_R (this is because the Christoffel symbols all vanish in the (T,x)-coordinate system). Hence, we can parallel transport ∂/∂ T and ∂/∂ x along any curve in M_L and M_R, with the transport being path-independent (no anholonomy). Since we aim at parametrizing the curve γ by the generalized affine parameter μ with respect to the coordinate vector fields ∂/∂ T=1/√(| t|)∂/∂ t and ∂/∂ x we are able to start with an arbitrary parametrization. Hence, let γ(t)=(T(t),x(t)) be parametrized by t, and then γ̇(t)=dT/dt∂/∂ T+dx/dt∂/∂ x. By means of Definition <ref> we immediately get V^0(t)=dT/dt=√(| t|) and   V^1(t)=dx/dt=d/dtarctan(2/3√(| t|)^3sgn(t)). And in M_L this yields V^1(t)=√(| t|)/1+4/9| t|^3. Consider now γ̃(t(s))=γ(s), in which γ̃ is related to the curve γ by reparametrization of γ by t. With this notation we have the basis fields E_γ̃(t),0=1/√(| t|)∂/∂ t and E_γ̃(t),1=∂/∂ x along γ̃. The reparametrized curve γ̃(t(s)) also gives γ̇̃̇(t)=V^i(t)E_γ̃(t),i=∂/∂ t+dx/dt∂/∂ x.   The Definition <ref> for the generalized affine parameter gives dμ/dt=√((V^0(t))^2+(V^1(t))^2)=√(| t|+| t|/(1+4/9| t|^3)^2). It now follows easily that for the reparametrization of γ̂(t) by the generalized affine parameter μ (i.e. γ̂(μ(t))=γ̃(t)) we have in M_L:   g(γ̇̂̇(μ(t)),γ̇̂̇(μ(t)))=g(dγ̂(μ(t))/dμ,dγ̂(μ(t))/dμ)=g(1/dμ/dtγ̇̂̇(t),1/dμ/dtγ̇̂̇(t)) =g(∂/∂ t+dx/dt∂/∂ x,∂/∂ t+dx/dt∂/∂ x)/(dμ/dt)^2=t+(dx/dt)^2/| t|+| t|/(1+4/9| t|^3)^2=t+| t|/(1+4/9| t|^3)^2/| t|+| t|/(1+4/9| t|^3)^2. Taking the limit t→0^-limt+| t|/(1+4/9| t|^3)^2/| t|+| t|/(1+4/9| t|^3)^2=0 reveals that the curve γ is not pseudo-timelike as it does not meet the ε-requirement of Definition <ref>.     
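The limit computed above can also be checked numerically. The following short script, added here purely as an illustration, evaluates the quotient obtained in the last display as t→0^-:

```python
# Numerical check of the limit above: along the curve tan x = (2/3)|t|^{3/2} sgn(t),
# the g-norm of the tangent vector of the generalized-affine reparametrization
# tends to 0 as t -> 0^- , so no eps > 0 satisfies the eps-requirement above.
import numpy as np

t = -np.logspace(-6, 0, 7)               # approach H = {t = 0} from the Lorentzian side t < 0
bump = np.abs(t) / (1 + 4/9 * np.abs(t)**3)**2
print((t + bump) / (np.abs(t) + bump))   # about -0.35 at t = -1, tending to 0 near t = 0
```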
In Section <ref> we repeatedly rather vaguely referred to the concept of a timelike (or spacelike, respectively) curve that asymptotically becomes lightlike. The above example highlights how the notion of “asymptotically lightlike” should be understood. A timelike (or spacelike, respectively) curve in M_L that is not pseudo-timelike (or pseudo-spacelike, respectively) can be thus specified as asymptotically lightlike.   Finally, if we modify the previously discussed curve γ by keeping the t-coordinate but stating x=0, we get a curve α. With the same notation as above, we then get V^0(t)=√(| t|), V^1(t)=0 and dμ/dt=√(| t|). Hence, this results in g(α̇̂̇(μ(t)),α̇̂̇(μ(t)))=1/| t|g(∂/∂ t,∂/∂ t)=t/| t|=-1 in the Lorentzian region M_L. The curve α is pseudo-timelike as it obviously does meet the ε-requirement of Definition <ref>.   This disquisition makes it clear why the notion of the generalized affine parameter is necessary and useful in order to define pseudo-timelike and pseudo-spacelike curves. If we were to loosen the requirement in Definition <ref> by replacing “for every generalized affine parametrization of γ in M_L” with “for every affine parametrization of γ in M_L”, then no curve that is timelike in the Lorentz sector and reaches the hypersurface ℋ would be pseudo-timelike throughout the entire manifold M. (However, this statement applies only to curves that actually reach the hypersurface ℋ. Timelike curves that lie entirely within M_L and maintain a “distance” from ℋ due to a tubular neighborhood within M_L also satisfy the relaxed condition, as they only have affine parameters with g(γ', γ') = const < 0.)   Similarly, any timelike curve in M_L would meet the requirements of a pseudo-timelike curve if we modified the definition by requesting “for a suitable parametrization of γ in M_L” instead of “for every generalized affine parametrization of γ in M_L”. In this regard, the concept of the generalized affine parameter is the right tool to tell apart suitable from unsuitable curves for the definition of pseudo-timelike and pseudo-spacelike curves.   Interestingly, our rationale for the new definition of a pseudo-timelike curve is reminiscent of the analysis undertaken in <cit.>. In Section 2 of <cit.> the distinction between causal curves, timelike almost everywhere curves and timelike curves is introduced in which the latter one is defined as follows: A timelike curve is a causal curve γ I⟶ M such that g(γ',γ')<-ε almost everywhere for some ε>0.  The author illustrates the situation in his Figure 1 which contrasts a timelike curve with a timelike almost everywhere curve. The latter one can not be viewed as a timelike curve because it approaches a null vector at its break point. Compared to our setting, however, the culprit here is that the curve is not differentiable at the breaking point. However, if we were to make the curve differentiable by bending its upper section, it still wouldn't be timelike. Its restriction to the lower region before the inflection point is timelike, but it cannot be extended upwards into a timelike curve. Now, imagine we are not in Minkowski spacetime, but instead, a signature-type change occurs at the (former) inflection point so that the 'upper half' of the space becomes Euclidean (in this case, the figure would correspond to the (T,x)-coordinates, not the (t,x)-coordinates, in the toy model). 
In this scenario, the curve restricted to the Lorentz sector would be timelike, but after the signature change is 'reversed', it cannot be extended upwards into a timelike curve. In this sense, the entire curve in the signature-changing version of this example is not pseudo-timelike.   Now we can slightly modify the definition of a (simply) closed curve in order for it to correctly apply to signature-type changing singular semi-Riemannian manifolds M with a metric g:   A smooth, pseudo-timelike curve γ I⟶ M is said to be chronology-violating when there is a subset of γ[I] homeomorphic to S^1 such that there are at least two parameters s_1,s_2∈ I that satisfy γ(s_1)=γ(s_2), and γ belongs to one of the following two classes:[Note that this means that there must be at least one such subset to fulfill this definition.] * The pseudo-timelike curve γ is periodic, i.e the image γ[I] is homeomorphic to S^1. Moreover, for s_1, s_2∈ I the associated tangent vectors, γ'(s_1) and γ'(s_2), are timelike and positively proportional. We denote this type of curve as closed pseudo-timelike curve. * The curve γ intersects itself for s_1, s_2∈ I and the associated tangent vectors, γ'(s_1) and γ'(s_2), are timelike whereas the tangent directions are not necessarily the same (i.e. they do not need to be positively proportional). This type of curve is said to contain a loop. § GLOBAL STRUCTURE OF SIGNATURE-TYPE CHANGING SEMI-RIEMANNIAN MANIFOLDS First, let us revisit the definitions of the following concepts related to manifold orientability.    <cit.> A smooth n-dimensional manifold M is orientable if and only if it has a smooth global nowhere vanishing n-form (also called a top-ranked form).[An orientation of M is the choice of a continuous pointwise orientation, i.e. the specific choice of a global nowhere vanishing n-form.]   For a differentiable manifold to be orientable all that counts is that it admits a global top-ranked form - it is not important which specific top-ranked form is selected.   To ensure thoroughness, we also want to mention the definition of parallelizability, which likewise does not involve any metric and is therefore again applicable to manifolds with changing signature types. It is well-known that a manifold M of dimension n is defined to be parallelizable if there are n global vector fields that are linearly independent at each point. We define it similarly to the approach in <cit.>:   A smooth n-dimensional manifold M is parallelizable if there exists a set of smooth vector fields {V,E_1,…,E_n-1} on M, such that at every point p∈ M the tangent vectors {V(p),E_1(p),…,E_n-1(p)} provide a basis of the tangent space T_pM. A specific choice of such a basis of vector fields on M is called an absolute parallelism of M.   Equivalently, a manifold M of dimension n is parallelizable if its tangent bundle TM is a trivial bundle, so that the associated principal bundle of linear frames has a global section on M, i.e. the tangent bundle is then globally of the form TM≃ M×ℝ^n. Moreover, it is worth pointing out that orientability and also parallelizability are differential topological properties which do not depend on the metric structure, but only on the topological manifold with a globally defined differential structure.   It is worth mentioning that given an absolute parallelism of M, one can use these n vector fields to define a basis of the tangent space at each point of M and thus one can always get a frame-dependent metric g by defining the frame to be orthonormal. 
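This last remark can be made concrete in coordinates: declaring a given frame orthonormal with respect to a chosen signature η produces the frame-dependent metric g=(E^-1)^T η E^-1, where the columns of E are the frame vectors. The following sketch uses a hypothetical frame on ℝ^2 and a Lorentzian choice of η; none of these choices is taken from the text.

```python
# Frame-dependent metric obtained from an absolute parallelism by declaring the
# frame orthonormal (hypothetical two-dimensional example).
import numpy as np

def frame(p):
    t, x = p
    return np.column_stack(([1.0, 0.0], [x, 1.0]))   # columns = frame vectors at p

eta = np.diag([-1.0, 1.0])                # chosen signature for the frame

def metric(p):
    E_inv = np.linalg.inv(frame(p))
    return E_inv.T @ eta @ E_inv          # g(u, v) = eta(E^{-1} u, E^{-1} v)

print(metric((0.0, 2.0)))                 # a Lorentzian metric, by construction
```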
Moreover, the special orthogonal group, denoted SO(n,ℝ), acts naturally on each tangent space via a change of basis, it is then possible to obtain the set of all orthonormal frames for M at each point qua the oriented orthonormal frame bundle of M, denoted F_SO(M), associated to the tangent bundle of M.   The next three definitions, however, depend not only on the underlying manifold but also on its specific type-changing metric g. For our purpose, let (M,g) be a smooth, signature-type changing manifold (possibly with boundary).   A vector field V on a signature-type changing manifold (M,g) is pseudo-timelike if and only if V is timelike in M_L and its integral curves are pseudo-timelike (in the sense of Definition <ref>).[Keep in mind that a timelike vector field is a vector field V on a spacetime manifold (M,g) where the vectors at every point are timelike, meaning g(V(p),V(p))<0 for all points p on the manifold.]   A signature-type changing manifold (M,g) is pseudo-time orientable if and only if the Lorentzian region M_L is time orientable.[A pseudo-time orientation of such a manifold (M,g) corresponds to the specific choice of a continuous non-vanishing pseudo-timelike vector field V on M.]   A singular semi-Riemannian manifold is pseudo-time orientable if and only if there exists a vector field X ∈𝔛(M) that is pseudo-timelike. "⟹" Let a singular semi-Riemannian manifold (M,g) be pseudo-time orientable. This means the Lorentzian region M_L is time orientable. A Lorentzian manifold is time-orientable if there exists a continuous timelike vector field. Accordingly, there must exist a continuous timelike vector field X ∈𝔛(M_L) in the Lorentzian region. As per Definition <ref>, a vector field X in a signature-type changing manifold is pseudo-timelike if and only if X is timelike in M_L and its integral curve is pseudo-timelike; this means that X is allowed to vanish on M_R.[In this part of the proof, the only thing that matters is whether the “pseudo-timelike vector field” is allowed to vanish on M_R. This question is independent of whether the “generalized affine parameter” condition is required in M_L, because the issue of whether the vector field “is allowed to vanish on M_R” concerns only its “magnitude”, while the “generalized affine parameter” condition pertains solely to its “direction” (specifically, that the vector field is not asymptotically lightlike).] Hence, we can extend the vector field X arbitrarily to all of M, and per definition X ∈𝔛(M) is pseudo-timelike.   "⟸" Let X∈𝔛(M) be a pseudo-timelike vector field in a singular semi-Riemannian manifold (M,g). Hence, as per Definition <ref>, X is timelike in M_L. A Lorentzian manifold is time-orientable if and only if there exists a timelike vector field. Since X is a timelike vector field on M_L, the Lorentzian region M_L is time-orientable. Then, according to Definition <ref>, the signature-type changing manifold (M,g) is pseudo-time orientable. According to that, such a definition of a pseudo-time orientation is possible if M_L admits a globally consistent sense of time, i.e. if in M_L we can continuously define a division of non-spacelike vectors into two classes. For a transverse, signature-type changing manifold (with a transverse radical), this definition arises naturally because, in M_R, all vectors can be considered spacelike. Additionally, all non-spacelike vectors on ℋ are lightlike.[Note that this applies generally, including in the case of a tangent radical, since there are no timelike vectors on ℋ. 
However, the subsequent division into two classes requires a transverse radical.] In the case that Rad_q∩ T_qℋ={0} ∀ q∈ℋ, these lightlike vectors can be naturally divided into two classes: those pointing towards M_L and those pointing towards M_R.   Consider the classic type of a spacetime M with signature-type change which is obtained by cutting an S^4 along its equator and joining it to the corresponding half of a de Sitter space, Figure <ref>. The de Sitter spacetime is time-orientable <cit.>, hence M is pseudo-time orientable.   A signature-type changing manifold (M,g) of dimension n is pseudo-space orientable if and only if it admits a continuous non-vanishing spacelike (n-1)-frame field on M_L. This is a set of n-1 pointwise orthonormal spacelike vector fields on M_L.[A pseudo-space orientation of a manifold (M,g) corresponds to the specific choice of a continuous non-vanishing field of orthonormal spacelike (n-1)-beins on M_L.]   Every parallelizable manifold M is orientable.   In Lorentzian geometry the fact of M being time-orientable and space-orientable implies that M is orientable <cit.>. The proposition below illustrates that this result from Lorentzian geometry cannot be applied to signature-type changing manifolds.   Even if a transverse, signature-type changing manifold (M,g) with a transverse radical is pseudo-time orientable and pseudo-space orientable, it is not necessarily orientable. Consider an arbitrary manifold of (M)=2 with a change of signature, for which the conditions of Proposition <ref> are given (in higher dimensions, the same idea can be carried out through a trivial augmentation of dimensions). In case this manifold is non-orientable, there is nothing to show. However, if it is orientable, cut out a disk from the Riemannian sector and replace it with a crosscap, equipped with any Riemannian metric. In a tubular neighborhood of the cutting line, construct a Riemannian metric that mediates between the metrics of the crosscap and the rest (this is possible due to the convexity of the space formed by all Riemannian metrics). This surgical intervention results in the transition to a non-orientable manifold with a change of signature. Since the intervention is limited to the Riemannian sector, the conditions of the proposition remain unaffected. Thus Proposition <ref> is proven.   One can always “switch” between non-orientability and orientability using the crosscap. Starting with an orientable manifold, one transitions to non-orientable by replacing a crosscap (if already present) with a disk. If no crosscap is present, such a transition occurs by replacing a disk with a crosscap.   The Möbius strip 𝕄 has a non-trivial vector bundle structure over S^1, which means that the bundle cannot be trivialized globally. Specifically, the Möbius strip is a line bundle over S^1 with a non-trivial twist.[The Möbius strip is particularly interesting because it can be found on any arbitrary non-orientable surface. Additionally, any Lorentzian manifold 𝕄×ℝ^n based on the Möbius strip crossed with ℝ^n either fails to be time-orientable or space-orientable <cit.>.] Hence, 𝕄 is neither parallelizable nor orientable.   To see this, consider the Möbius strip 𝕄 = ℝ×ℝ/∼ with the identification (t,x) ∼ (t̃,x̃) ⟺ (t̃, x̃) = ((-1)^kt, x + k), k ∈ℤ. Notice that the identification has no bearing on proper subsets of ((-1)^kt, x + k), k ∈ℤ, and the fibre ℝ is a vector space.   
As 𝕄 is a fiber bundle over the base space S^1, a section of that fiber bundle must be a continuous map σ: S^1⟶𝕄 such that σ(x) = (h(x), x) ∈𝕄. For σ to be continuous, h must satisfy -h(0) = h(k). The intermediate value theorem guarantees that there is some x̃∈ [0, k] such that h(x̃) = 0. This means that every section of 𝕄 intersects the zero section, and the sections that form a basis for the fibre are not non-zero everywhere.   A pseudo-spacetime is a 4-dimensional, pseudo-time oriented, semi-Riemannian manifold with a type-changing metric.   Let (ℝ^n, g) be a transverse, signature-type changing n-manifold with a transverse radical, and let ℋ⊂ℝ^n be a codimension one closed hypersurface of signature change without boundary.[Here “closed” is meant in the topological sense of “the complement of an open subset of ℝ^n” and not in the manifold sense of “a manifold without boundary that is compact.”] Then ℋ is always orientable. This can be shown by a purely topological argument, as in <cit.>.   Let (M,g) be a transverse, signature-type changing, oriented, n-dimensional manifold with a transverse radical, and let ℋ⊂ M be the hypersurface of signature change. Then ℋ is also oriented. The hypersurface of signature change, as a closed submanifold of codimension one, is the inverse image of a regular value of a smooth map f M →ℝ. Specifically, ℋ = f^-1(c) for some regular value c ∈ℝ. The manifold M is oriented, so its tangent bundle TM is oriented, meaning there is a consistent choice of orientation on each tangent space T_p M for p ∈ M. Since ℋ is a hypersurface in M, at each point q ∈ℋ, the tangent space T_q ℋ is a subspace of the tangent space T_q M of dimension n-1, and therefore Tℋ is a subbundle of TM. The remaining direction in T_q M can be described by a normal vector N(q), which is a vector in T_q M that is perpendicular to T_q ℋ.   Since M is oriented, for each point q ∈ℋ, the tangent space T_q M has an orientation that can be described by an ordered basis, say {v_1, …, v_n-1, N(q)}, where {v_1, …, v_n-1} is an oriented basis for T_q ℋ and N(q) is the normal vector. Hence, this induces a consistent orientation on T_q ℋ across all points q ∈ℋ, since the orientation of M provides a consistent choice of N(q) across ℋ. Therefore, ℋ inherits a consistent orientation from M, proving that ℋ is oriented.   Moreover, without loss of generality, we can choose 1 as a regular value (see also <cit.>). Thus, ℋ := f^-1(1) = {p ∈ M | f(p) = 1} is a submanifold of M of dimension n-1. For every q ∈ℋ, the tangent space T_q ℋ = T_q (f^-1(1)) to ℋ at q is the kernel (df_q) of the map df_q T_q M → T_1 ℝ. Then T_q ℋ = ⟨grad f_q ⟩^, and therefore the gradient grad f yields an orientation of ℋ.   Provided a transverse, signature-type changing manifold (M,g) with a transverse radical is pseudo-time orientable, then we can choose one of the two possible time orientations at any point in each connected component of M_L, and thus designating the future direction of time in the Lorentzian regime. On ℋ all non-spacelike vectors are lightlike and smoothly divided into two classes in a natural way: the vectors located at an initial base point on ℋ are either pointing towards M_L or towards M_R. This together with the existent absolute time function (that establishes a time concept <cit.> in the Riemannian region) can be considered as arrow of time on M.   Let (M,g) be a pseudo-time orientable, transverse, signature-type changing n-dimensional manifold with a transverse radical. 
Then in the neighborhood of ℋ the absolute time function h(t,x̂):= t, where (t,x̂):=(t,x^1,…,x^n-1), imposes a natural time direction by postulating that the future corresponds to the increase of the absolute time function. In this way, the time orientation is determined in M_L.   Note that ∂_t, with an initial point on ℋ, points in the direction in which t=h(t,x̂) increases while x_i remains constant. Away from the hypersurface, the future direction is defined relative to ℋ by the accordant time orientation of M_L. Recall that functions of the type, such as the absolute time function, typically lead to metric splittings by default. (Future-Directed) A pseudo-timelike curve (see Definition <ref>) in (M,g) is future-directed (in the sense of Definiton <ref> and Remark <ref>) if for every point in the curve   (i) within M_L the tangent vector is future-pointing, and (ii) on ℋ the associated tangent vector with an initial base point on ℋ is future-pointing, if applicable.     Respective past-directed curves are defined analogously. Notice that, per assumption, one connected component of M∖ℋ is Riemannian and all other connected components (M_L_α)_α∈ I⊆ M_L⊂ M are Lorentzian. This configuration could (at least locally) potentially allow for a M_L-M_R-M_L-sandwich-like structure of M, where ℋ consists of two connected components (ℋ_α)_α∈{1,2}. Consequently, this would also imply the existence of two absolute time functions, see Figure <ref>.   Let (M,g) be a pseudo-time orientable, transverse, signature-type changing n-dimensional manifold with a transverse radical. ℐ^-(p)={q∈ M q≪ p} is the pseudo-chronological past of the event p∈ M. In other words, for any two points q,p∈ M, we write q≪ p if there is a future-directed pseudo-timelike curve from q to p in M. ℐ^+(p)={q∈ M p≪ q} is the pseudo-chronological future of the event p∈ M. In other words, for any two points p,q∈ M, we write p≪ q if there is a future-directed pseudo-timelike curve from p to q in M.   Interestingly, this definition leads to the following peculiar situation: Recall that any curve is denoted pseudo-timelike if its M_L-segment is timelike. To that effect, all curves that steer clear of M_L (and do not have a M_L-segment) are also considered pseudo-timelike. When p∈ℋ∪ M_R then the pseudo-chronological past of p is ℐ^-(p)=M∖ M_L and the pseudo-chronological future of p is ℐ^+(p)=M, see Figure <ref>. § CHRONOLOGY VIOLATING PSEUDO-TIMELIKE LOOPS In Section <ref>, we introduced the notion of closed pseudo-timelike curves on a signature-type changing background and we demonstrated how they must be defined to ensure that the concept of causality remains meaningful. In this section, we will reveal the non-well-behaved nature of transverse, signature-type changing, n-dimensional manifolds with a transverse radical. §.§ Local pseudo-timelike loops In a sufficiently small region near the junction of signature change, these manifolds exhibit local anomalies. Specifically, each point on the junction gives rise to the existence of closed time-reversing loops, challenging conventional notions of temporal consistency. One of our main results, Theorem <ref>, can now be proved quite easily.   Let (M,g̃) be a transverse, signature-type changing, n-dimensional (n≥2) manifold with a transverse radical. Then in each neighborhood of each point there always exists a pseudo-timelike loop. 
Let g̃=-t(dt)^2+g̃_jk(t,x^1,…,x^n-1)dx^idx^k, j,k∈{1,…,n-1}, be a transverse, signature-type changing metric with respect to a radical-adapted Gauss-like coordinate patch (U_φ,φ) with U_φ∩ℋ≠∅.[This is, U_φ is sufficiently small to be expressed in the adapted radical-adapted Gauss-like coordinate system ξ(U_φ).] Choose smooth coordinates (t_0,x_0^1,…,x_0^n-1) with t_0>0 and ξ_0>0, such that C_0:=[0,t_0]× B_ξ_0^n-1=[0,t_0]×{x∈ℝ^n-1|∑_k=1^n-1(x^k)^2≤ξ_0^2}⊂ℝ^n is contained in the domain of the coordinate chart (open neighborhood) U_φ. Then C_0×𝕊^n-2=C_0×{v∈ℝ^n-1|∑_k=1^n-1(v^k)^2=1} as a product of two compact sets is again compact.   Next, consider the function G̃ C_0×𝕊^n-2⟶ℝ, (t,x^1,…,x^n-1,v^1,…,v^n-1)↦g̃_jk(t,x^1,…,x^n-1)v^jv^k. As G̃ is a smooth function defined on the compact domain C_0×𝕊^n-2, by the Extreme Value Theorem it has an absolute minimum G_0. Hence, on (U_φ,φ) we can uniquely define g̃_0=-t(dt)^2+G_0δ_jkdx^jdx^k, j,k∈{1,…,n-1}.   By this definition, for all nonzero lightlike vectors X∈ T_pM, p∈ C_0 with respect to g̃_0, we have g̃_0=-t(X^0)^2+G_0δ_jkX^jX^k=0⟺-t(X^0)^2=-G_0δ_jkX^jX^k, then   g̃(X,X)=-t(X^0)^2+g̃_jk(t,x^1,…,x^n-1)X^jX^k =-G_0δ_jkX^jX^k+g̃_jk(t,x^1,…,x^n-1)X^jX^k =δ_jkX^jX^k·(-G_0+g̃_rs(t,x^1,…,x^n-1)X^r/√(δ_abX^aX^b)X^s/√(δ_cdX^cX^d))≥0. Clearly, g̃(X,X)≥0 because G_0>0 per definition and δ_jkX^jX^k=t(X^0)^2/G_0≥0. Therefore, the vector X∈ T_pM, p∈ C_0 is not timelike with respect to g̃. This means, within C_0 the g̃-light cones always reside inside of the g̃_0-light cones, i.e. g̃≤g̃_0 in C_0. The cull cones of g̃_0 are more opened out than those of the metric g̃. Denote p_0∈ C_0 by (t(p_0),x^1(p_0),…,x^n-1(p_0))=(t_0,x_0^1,…,x_0^n-1).   As (M,g̃) is an n-dimensional manifold for which in the neighborhood of ℋ radical-adapted Gauss-like coordinates exist, we can single out the time coordinate that defines the smooth absolute time function t whose gradient in M_L is everywhere non-zero and timelike. Hence, (M,g̃)|_U_φ can be decomposed into spacelike hypersurfaces {(U_φ)_t_i} which are specified as the level sets (U_φ)_t_i=t^-1(t_i) of the time function.[This collection of space-like slices {(U_φ)_t} should be thought of as a foliation of U_φ into disjoint (n-1)-dimensional Riemannian manifolds.] The restriction (g̃_0)_t_i of the metric g̃_0 to each spacelike slice makes the pair ((U_φ)_t_i,(g̃_0)_t_i) a Riemannian manifold.   For a lightlike curve α(t) I⟶ U_φ with starting point p_0, we have δ_jkdx^j/dtdx^k/dt>0 for each slice (U_φ)_t_i with t≠0. Lightlike curves with starting point p_0 can be parametrized with the Euclidean arc length σ in B_ξ_0^n-1, such that (g̃_0)_t(α̇(σ),α̇(σ))=δ_jkdx^j/dσdx^k/dσ=1, ∀ σ∈ I, where I is some interval in ℝ. More precisely, σ can be considered as arc length (parameter) in terms of some auxiliary Riemannian metrics, each defined on a hypersurface with t=const. Consequently we get 0=g̃_0(α̇(σ),α̇(σ))=-t(α̇^0)^2+G_0δ_ikα̇^jα̇^k =-t(dt/dσ)^2+G_01δ_ikdx^j/dσdx^k/dσ=-t(dt/dσ)^2+G_0, and this implies dσ/dt=±√(t/G_0)⟹σ(t)=±∫√(t/G_0)dt=±2/3t√(t/G_0)+const.   Since σ is given as a function of t, it represents the arc length from the starting point at t(p_0)=t_0 to t(0)=0. Then past-directed g̃_0-lightlike curves emanating from p_0 reach the hypersurface at t=0 after passing through the arc length distance σ=±∫_0^t_0√(t/G_0)dt=±2/3t_0√(t_0/G_0)+const.=±2/3√(t_0^3/G_0)+const along the said section of the curve from the fixed starting point p_0.   
If this arc length distance satisfies σ≤ξ_0, then the past-directed g̃_0-lightlike curves α(t) (emanating from p_0) reach the hypersurface at t=0 while remaining within C_0. Accordingly this is also the case for g̃-lightlike curves emanating from p_0. Conversely, if σ>ξ_0 then there exist past-directed g̃_0-lightlike curves emanating from p_0 that reach the hypersurface outside of C_0.  In this case, we have σ=2/3√(t_0^3/G_0)>ξ_0⟺ t_0>(9/4ξ_0^2· G_0)^1/3 and we must adjust the new starting point p_1=(t_1,x_0) accordingly by setting t_1≤(9/4ξ_0^2· G_0)^1/3<t_0. Thereby we make sure that all past-directed g̃-lightlike curves emanating from p_1 hit the hypersurface ℋ without leaving C_0. That is I_g̃^-(p_1)⊂ C_0⊂ U_φ⊂ M, where I_g̃^-(p_1) is the g̃-chronological past of the event p_1∈ M_L, restricted to M_L∪ℋ, see Figure <ref>.   It now suffices to connect two such points x̂_1,x̂_2∈I_g̃^-(p_0)∩ℋ (or, if need be, I_g̃^-(p_1)∩ℋ) in an arbitrary fashion within the Riemannian sector M_R. In this way a pseudo-timelike loop is generated, provided U_φ was chosen small enough.   In summary, for each neighborhood U(q) that admits radical-adapted Gauss-like coordinates ξ=(t,x̂)=(t,x^1,…,x^n-1) centered at some q∈ℋ, and U(q)∩ℋ≠∅, we are able to pick a point p_0∈ U(q) and an associated compact set C_0⊂ U(q). For the metric g̃ there exists a corresponding uniquely (i.e., only dependent on the chosen set C_0) defined metric g̃_0 with g̃≤g̃_0 within C_0.[The set C_0 does not need to be “maximal” (in some sense) and is therefore not unique.] Then we must distinguish between two cases, that is   i) with respect to the metric g̃_0 we have I_0^-(p_0)⊂ C_0, then also I^-(p_0)⊂ C_0 with respect to g̃, ii) with respect to the metric g̃_0 we have the situation I_0^-(p_0)⊈ C_0, then there exists a point p_1=(t_1,x_0)∈ C_0∖ℋ with t_1<t_0, such that I_0^-(p_1)⊂ C_0, hence also I^-(p_1)⊂ C_0 with respect to g̃.   Thus, for any point q∈ℋ we can find a sufficiently small neighborhood Ũ⊂ U(q) containing a point p∈ M_L, such that all past-directed causal curves emanating from that point reach the hypersurface within a sufficiently small set C_0.
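The quantitative part of this argument can be traced through in the simplest two-dimensional model g̃=-t(dt)^2+(dx)^2, for which G_0=1. The following toy computation (all numerical values are hypothetical illustration choices) picks a starting point p_0=(t_0,x_0) with σ(t_0)≤ξ_0 and locates the two points where the past-directed null curves from p_0 meet ℋ; connecting two points of I^-(p_0)∩ℋ through the Riemannian sector t<0 then closes a pseudo-timelike loop through p_0.

```python
# Toy version of the loop construction in the proof above, for g~ = -t (dt)^2 + (dx)^2.
import numpy as np

G0, xi0 = 1.0, 0.5                        # minimum G_0 of the spatial block and half-width of C_0
t0_max = (9/4 * xi0**2 * G0) ** (1/3)     # largest t_0 with sigma(t_0) <= xi_0
t0, x0 = 0.8 * t0_max, 0.0                # starting point p_0 = (t_0, x_0)

sigma = 2/3 * np.sqrt(t0**3 / G0)         # Euclidean arc length down to H = {t = 0}
print(sigma <= xi0)                        # True: the construction stays inside C_0

# Past-directed null curves dx/dt = +/- sqrt(t) from p_0 hit H at:
print(x0 - sigma, x0 + sigma)
# Points of H strictly between these two values lie in I^-(p_0); joining two of
# them by an arbitrary curve in the Riemannian sector t < 0 yields the loop.
```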
Hence, a Lorentzian manifold is always infinitesimally time- and space-orientable, and a continuous designation of future-directed and past-directed for non-spacelike vectors can be made (infinitesimally and therefore, by continuity, also locally).[In case the Lorentzian manifold is time-orientable, a continuous designation of future-directed and past-directed for non-spacelike vectors can be made allover.]    Having said that, the infinitesimal properties of a manifold with a signature change are identical to those of a Lorentzian manifold only within the Lorentzian sector. However, when examining the Riemannian sector and the hypersurface, specific distinctions arise. The Riemannian sector and the hypersurface are not infinitesimally modelable by a Minkowski space. While the Riemannian sector reveals an absence of a meaningful differentiation between past- and future-directed vectors, on the hypersurface, one has the flexibility to make arbitrary assignments of such distinctions at the infinitesimal level. If one now determines on the hypersurface whether the direction towards the Lorentzian sector is the future or past direction, it is not only a reference to the tangent space at a point. Rather, it is a local consideration.   In the context of local considerations, in a Lorentzian manifold the existence of a timelike loop that flips its time orientation (i.e. the timelike tangent vector switches between the two designated components of the light cone) is a sufficient condition for the absence of time orientability. Based on the previous theorem (at the beginning of the present subsection), this is also true for a transverse, signature-type changing manifold (M,g̃) with a transverse radical: As we have proved above, through each point on the hypersurface ℋ we have locally a closed time-reversing loop. That is, there always exists a closed pseudo-timelike path in M around which the direction of time reverses, and along which a consistent designation of future-directed and past-directed vectors cannot be defined.   An observer in the region M_L near ℋ perceives these locally closed time-reversing loops (Figure <ref>) as the creation of a particle and an antiparticle at two different points q̂,q∈ℋ.[Such locally closed time-reversing loops around ℋ obviously do not satisfy the causal relation ≪ as introduced above.] This could be taken as an object entering the Riemannian region, then resurfacing in the Lorentzian region and proceeding to move backwards in time.   So in a transverse, signature-type changing manifold (M,g̃), the hypersurface with its time-reversing loops could be tantamount to a region of particle-antiparticle origination incidents. Moreover, Hadley <cit.> shows for Lorentzian spacetimes that a failure of time-orientability of a spacetime region is indistinguishable from a particle-antiparticle annihilation event. These are then considered equivalent descriptions of the same phenomena. It would be interesting to explore how this interpretation can be carried over to signature-type changing manifolds.   For fields, take the conjugate ψ_t^A = e^-iĤtψ^* of ψ_t = e^iĤtψ: The unitary temporal evolution of the field operator for antiparticles arises from the temporal evolution of the field operator for particles by applying the same Hamiltonian operator to the adjoint field operator under time reversal. Some literature <cit.> points to the idea that concepts in quantum field theory are predicated on acausal properties derived from general relativity. In this context, Blum et al. 
<cit.> stress the importance of the CPT theorem (quoting verbatim): “CPT theorem is the statement that nothing would change—nobody would notice and the predictions of physics would not be altered—if we simultaneously replace particles by antiparticles and vice versa. Replace everything by its mirror image or, more exactly, exchange left and right, up and down, and front and back, and reverse the flow of time. We call this simultaneous transformation CPT, where C stands for Charge Conjugation (exchanging particles and antiparticles), P stands for parity (mirroring), and T stands for time reversal.” §.§ Global pseudo-timelike loops The existence of such pseudo-timelike curves locally near the hypersurface that loop back to themselves, gives rise to the question whether this type of curves also occur globally. We want to elucidate this question in the following.[A spacetime is a Lorentzian manifold that models space and time in general relativity and physics. This is conventionally formalized by saying that a spacetime is a smooth connected time-orientable Lorentzian manifold (M,g) with M=4. But in what follows we want to study the n-dimensional (n≥2) case.]    <cit.> A connected time-orientable Lorentzian manifold (M,g) is said to be stably causal if there exists a nowhere-vanishing timelike vector field V_a such that the Lorentzian metric on M given by g':=g_ab-V_aV_b admits no closed timelike curves. In other words, if (M,g) is stably causal then, for some timelike V_a, the metric g':=g_ab-V_aV_b on M is causal.   A partial ordering < is defined in the set of all Lorentzian metrics Lor(M) on M in the following way: g<g' iff all causal vectors for g are timelike for g'. Then the metric g_λ=g+λ(g'-g), ∀ λ∈[0,1] is a Lorentzian metric on M, as well. Also, recall that g<g' means that the causal cones of g are contained in the timelike cones of g'. A connected time-orientable Lorentzian manifold (M,g) is stably causal if there exists g'∈ Lor(M), such that g'>g, with g' causal.    Stable causality is the necessary and sufficient condition for the existence of a smooth global time function, i.e. a differentiable map T M→ℝ such that whenever p<<q ⟹ T(p)<T(q).    <cit.> A connected, time-orientable Lorentzian manifold (M,g) is called globally hyperbolic if and only if it is diamond-compact and causal, i.e., p∉ J^+(p) ∀ p∈ M.[Diamond-compact means J(p,q):= J^+(p)∩ J^-(q) is compact for all p,q∈ M. Note that J(p,q) is possibly empty.]   An equivalent condition for global hyperbolicity is as follows <cit.>. A connected, time-orientable Lorentzian manifold (M,g) is called globally hyperbolic if and only if M contains a Cauchy surface. A Cauchy hypersurface in M is a subset S that is intersected exactly once by every inextendible timelike curve in M.[An inextendible curve is a general term that refers to a curve with no endpoints; it either extends infinitely or it closes in on itself to form a circle—a closed curve. Specifically, an inextendible timelike curve is a curve that remains timelike throughout its entire length and cannot be extended further within the spacetime. In mathematical terms, a map α(a,b)→ M is an inextendible timelike curve in (M,g) if α(t) does not approach a limit as t increases to b or decreases to a, and α(t) remains timelike for all t∈(a,b). This distinguishes it from inextendible curves of other causal types, such as null or spacelike curves.]   
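The deformation g'=g_ab-V_aV_b in the definition of stable causality strictly widens the light cones; a quick numerical illustration in two-dimensional Minkowski space with V=∂_t (toy choices, not taken from the text) makes this explicit:

```python
# Cone-opening deformation g' = g_ab - V_a V_b from the definition of stable causality,
# illustrated for 2D Minkowski space and the timelike field V = d/dt.
import numpy as np

g = np.diag([-1.0, 1.0])                  # Minkowski metric
V_cov = g @ np.array([1.0, 0.0])          # covector V_a of V = d/dt
g_prime = g - np.outer(V_cov, V_cov)
print(g_prime)                            # diag(-2, 1): strictly wider light cones

v = np.array([1.0, 1.0])                  # null for g ...
print(v @ g @ v, v @ g_prime @ v)         # 0.0 and -1.0: ... timelike for g'
```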
In 2003, Bernal and Sánchez <cit.> showed that any globally hyperbolic Lorentzian manifold M admits a smooth spacelike Cauchy hypersurface S, and thus is diffeomorphic to the product of this Cauchy surface with ℝ, i.e. M splits topologically as the product ℝ× S. Specifically, a globally hyperbolic manifold is foliated by Cauchy surfaces. If M is a smooth, connected time-orientable Lorentzian manifold with boundary, then we say it is globally hyperbolic if its interior is globally hyperbolic.   The next theorem is partially based on the Local Loops Theorem <ref> and can be considered a generalization to the global case. Let (M,g̃) be a pseudo-time orientable, transverse, signature-type changing, n-dimensional (n≥2) manifold with a transverse radical, where M_L=M∖(M_R∪ℋ) is globally hyperbolic. Assume that a Cauchy surface S is a subset of the neighborhood U=⋃_q∈ℋU(q) of ℋ, i.e. S⊆(U∩ M_L)=⋃_q∈ℋ(U(q)∩ M_L), with U(q) being constructed as in Theorem <ref>. Then for every point p∈ M, there exists a pseudo-timelike loop such that p is a point of self-intersection. Let (M,g̃) be a pseudo-time orientable transverse, signature-type changing, n-dimensional (n≥2) manifold with a transverse radical, where M_L is globally hyperbolic with g̃|_M_L=g. Moreover, there is a neighborhood U=⋃_q∈ℋU(q) of ℋ sufficiently small to satisfy the conditions for Theorem <ref>, and per assumption there exists a Cauchy surface S_ε⊆(U∩ M_L), ε>0.   Due to <cit.> we know that M_L admits a splitting M_L=(ℝ_>0)_t× S_t=⋃_t∈ℝ_>0S_t, such that the Lorentzian sector M_L is decomposed into hypersurfaces (of dimension n-1), specified as the level surfaces S_t=𝒯^-1(t)={p∈ M_L𝒯(p)=t},t∈ℝ_>0, of the real-valued smooth temporal function 𝒯 M_L⟶ℝ_>0 whose gradient grad𝒯 is everywhere non-zero and, clearly, d𝒯 is an exact 1-form. Within the neighborhood U=⋃_q∈ℋU(q) this foliation ⋃_t∈ℝ_>0S_t can be chosen in such a way that it agrees with the natural foliation given by the absolute time function h(t,x̂):= t, see Remark <ref> and Definiton <ref>.[Recall that a smooth function T M⟶ℝ on a connected time-orientable Lorentzian manifold (M,g) is a global time function if T is strictly increasing along each future-pointing non-spacelike curve. Moreover, a temporal function is a time function T with a timelike gradient gradT everywhere. Since M_L is globally hyperbolic it admits a smooth global time function T and consequently it admits <cit.> a temporal function 𝒯. Hence, in the Lorentzian sector M_L there exists a global temporal function 𝒯 M_L⟶ℝ_>0, and grad𝒯 is orthogonal to each of the level surfaces S_t=𝒯^-1(t)={p∈ M_L𝒯(p)=t},t∈ℝ_>0, of 𝒯. Note that 𝒯=t is a scalar field on M_L, hence grad𝒯=gradt=(dt)^#.]   Moreover, the level surfaces (S_t)_t∈ℝ are Cauchy surfaces and, accordingly, each inextendible pseudo-timelike curve in M_L can intersect each level set S_t exactly once as 𝒯 is strictly increasing along any future-pointing pseudo-timelike curve.[Since 𝒯 is regular the hypersurfaces S_t never intersect, i.e. S_t∩ S_t'=∅ for t≠ t'.] Then, these level-sets S_t are all space-like hypersurfaces which are orthogonal to a timelike and future-directed unit normal vector field n.[In other words, the unit vector n is normal to each slice S_t, and g restricted to S_t is Riemannian.]   For ε sufficiently small, the level Cauchy surface S_ε=𝒯^-1(ε)={p∈ M_L𝒯(p)=ε},ε∈ℝ_>0 is contained in U∩ M_L=⋃_q∈ℋ(U(q)∩ M_L).[This is true because all neighborhoods U(q) with q ∈ℋ can be chosen such that the sets U(q) have a compact closure. 
Thus, the U(q) are not “infinitely wide,” and there exists a strictly positive value ε_max, such that for all ε < ε_max, the level Cauchy surface S_ε is contained in U ∩ M_L.]   Therefore, based on Theorem <ref>, for any p=(ε,x̂)∈ S_ε⊆(U∩ M_L) all past-directed and causal curves emanating from that point reach the hypersurface ℋ. The global hyperbolicity of M_L implies that every non-spacelike curve in M_L meets each S_t once and exactly once since S_t is a Cauchy surface. In particular, the spacelike hypersurface S_ε is a Cauchy surface in the sense that for any p̅∈ M_L in the future of S_ε, all past pseudo-timelike curves from p̅ intersect S_ε. The same holds for all future directed pseudo-timelike curves from any point p̅̅̅∈ M_L in the past of S_ε. Consequently, by virtue of Theorem <ref> and the above argument, all past-directed pseudo-timelike curves emanating from any p̅∈ M_L reach the hypersurface ℋ. Analogously we can conclude that any point p̅∈ M_L can be reached by a future-directed pseudo-timelike curve starting at some suitable point in ℋ. Recall that, based on Remark <ref>, we also know that ℐ^+(q)={p∈ M:q≪ p}=M, that is, any point in M=M_R∪ℋ∪ M_L can be reached by a future-directed pseudo-timelike curve from q∈ℋ, see Figure <ref>.   We now obtain a loop with intersection point p in M_L if, for sufficiently small ε, we first prescribe the intersection point p=(ε,x̂)∈ S_ε. And then we connect the two points lying in ℋ of the intersecting curve sections through an arbitrary curve segment in the Riemannian sector M_R (through a suitable choice of the two curve segments, we can ensure that different points on ℋ are obtained).   Theorem <ref> explicitly states that through every point in M, there always exists a pseudo-timelike loop. Therefore, this assertion holds also true for points located on the hypersurface or within the Riemannian region. In this casees, the situation is as follows: (i) If the given point lies on the hypersurface, p∈ℋ, choose a timelike curve segment that connects it to S_ε (with ε sufficiently small), then proceed from there along another timelike curve segment to another point on the hypersurface, and connect both points in the Riemannian sector. (ii) If the given point lies in the Riemann sector, p∈ M_R, choose an arbitrary loop of the form similar to those loops constructed in the proof of Theorem <ref>, and modify this loop within the Riemannian sector such that it passes through the specified point there.   The prototype of a spacetime M with signature-type change is obtained by cutting an S^4 along its equator and joining it to the corresponding half of a de Sitter space. It is a well-known fact that the full de Sitter spacetime is globally hyperbolic <cit.>, with the entire manifold possessing a Cauchy surface. When we restrict to half de Sitter space—by choosing an appropriate region bounded by a Cauchy surface—this region retains global hyperbolicity. This is because the Cauchy surface of the full de Sitter spacetime remains valid in the half-space, ensuring that every inextendible non-spacelike curve still intersects this surface exactly once. As a result, the Lorentzian sector, which corresponds to half de Sitter space, is also globally hyperbolic. Consequently, there are chronology-violating pseudo-timelike loops through each point in M.   
Let (M,g̃) be a pseudo-time orientable, transverse, signature-type changing, n-dimensional (n≥2) manifold with a transverse radical, where M_L is globally hyperbolic, and S⊆(U∩ M_L)=⋃_q∈ℋ(U(q)∩ M_L) for a Cauchy surface S. Then through every point there exists a path on which a pseudo-time orientation cannot be defined. § FINAL THOUGHTS The intriguing facet of the potential existence of closed timelike curves within the framework of Einstein's theory lies in the physical interpretation that CTCs, serving as the worldlines of observers, fundamentally permit an influence on the causal past. This can also be facilitated through a causal curve in the form of a loop, i.e., the curve intersects itself. In the case of a non-time-orientable manifold, there would then be the possibility that at the intersection, the two tangent vectors lie in different components of the light cone. Thus, the “time traveler” at the encounter with himself, which he experiences twice, may notice a reversal of the past and future time directions in his surroundings during the second occurrence, even including the behaviour of his or her younger version. Regardless of whether this effect exists or not, during the second experience of the encounter, which he perceives as an encounter with a younger version of himself, the traveler can causally influence this younger version and its surroundings.   § ACKNOWLEDGMENTS NER is greatly indebted to Richard Schoen for generously welcoming her into his research group and to Alberto Cattaneo for affording her with creative independence throughout the duration of this research endeavor. Moreover, NER acknowledges partial support of the SNF Grant No. 200021-227719. This research was (partly) supported by the NCCR SwissMAP, funded by the Swiss National Science Foundation.     Data availability: No data was used for the research described in the article. 10 Aguirre+LafuenteE. Aguirre-Dabán and J. Lafuente-López. Transverse Riemann-Lorentz type-changing metrics with tangent radical. Differential Geom. Appl. 24(2) (2006), 91–100. Aguirre - On the Conformal Geometry of Transverse Riemann-Lorentz ManifoldsE. Aguirre, V. Fernández and J. Lafuente. On the conformal geometry of transverse Riemann-Lorentz manifolds. J. Geometry and Physics 57(7) (2007), 1541–1547. Beem + Ehrlich + Easley - Global Lorentzian GeometryJ. K. Beem, P. E. Ehrlich and K. L. Easley. Global Lorentzian Geometry. Marcel Dekker, New York, 2nd edition (1996). Bernal+Sanchez - On smooth Cauchy hypersurfaces and Gerochs splitting theoremA. N. Bernal and M. Sánchez. On smooth Cauchy hypersurfaces and Geroch's splitting theorem. Commun. Math. Phys. 243(3) (2003), 461–470. Bernal + Sanchez - Further Results on the Smoothability of Cauchy Hypersurfaces and Cauchy Time FunctionsA. N. Bernal and M. Sánchez. Further Results on the Smoothability of Cauchy Hypersurfaces and Cauchy Time Functions. Lett. Math. Phys. 77 (2006), 183–197. Blum et alA. S. Blum and A. Martínez de Velasco. The genesis of the CPT theorem. The European Physical Journal H. 47(5) (2022). Bott + TuR. Bott and L. W. Tu. Differential Forms in Algebraic Topology. Springer, New York (1982). BredonG. E. Bredon. Topology and Geometry. Springer, New York (1993). Dray - Gravity and Signature ChangeT. Dray, G. Ellis, C. Hellaby, C. Manogue, Gravity and Signature Change, Gen.Rel.Grav. 29 (1997), 591–597. Dray - General Relativity and Signature Change T. Dray. General Relativity and Signature Change. Advances in Differential Geometry and General Relativity, eds. 
http://arxiv.org/abs/2409.02808v1
20240904152528
Towards Edge-Based Data Lake Architecture for Intelligent Transportation System
[ "Danilo Fernandes", "Douglas L. L. Moura", "Gean Santos", "Geymerson S. Ramos", "Fabiane Queiroz", "Andre L. L. Aquino" ]
cs.DB
[ "cs.DB", "cs.AI", "cs.NI" ]
dfc@laccan.ufal.br Federal University of Alagoas Av. Lourival Melo Mota, S/N, Tabuleiro do Martins Maceio Alagoas Brazil 57072-970 douglas.moura@dcc.ufmg.br Federal University of Minas Gerais Av. Pres. Antônio Carlos, 6627, Pampulha Belo Horizonte Minas Gerais Brazil 31270-901 gean.santos@laccan.ufal.br Federal University of Alagoas Av. Lourival Melo Mota, S/N, Tabuleiro do Martins Maceio Alagoas Brazil 57072-970 geymerson.ramos@inria.fr Univ Lyon, Inria, INSA Lyon, CITI 56 Bd Niels Bohr, 69100 Villeurbanne Villeurbanne France 69100 fabiane.queiroz@laccan.ufal.br Federal University of Alagoas Av. Lourival Melo Mota, S/N, Tabuleiro do Martins Maceio Alagoas Brazil 57072-970 alla@laccan.ufal.br Federal University of Alagoas Av. Lourival Melo Mota, S/N, Tabuleiro do Martins Maceio Alagoas Brazil 57072-970 § ABSTRACT The rapid urbanization growth has underscored the need for innovative solutions to enhance transportation efficiency and safety. Intelligent Transportation Systems (ITS) have emerged as a promising solution in this context. However, analyzing and processing the massive and intricate data generated by ITS presents significant challenges for traditional data processing systems. This work proposes an Edge-based Data Lake Architecture to integrate and analyze the complex data from ITS efficiently. The architecture offers scalability, fault tolerance, and performance, improving decision-making and enhancing innovative services for a more intelligent transportation ecosystem. We demonstrate the effectiveness of the architecture through an analysis of three different use cases: (i) Vehicular Sensor Network, (ii) Mobile Network, and (iii) Driver Identification applications. [500]Distributed Systems Edge Computing [300]Distributed Systems Data Lake [100]Intelligent Transportation Systems Vehicular Sensor Network [100]Intelligent Transportation Systems Handover [100]Intelligent Transportation Systems Driver Identification Towards Edge-Based Data Lake Architecture for Intelligent Transportation System Andre L. L. Aquino =============================================================================== § INTRODUCTION Urbanization is a global phenomenon characterized by the growth and development of large cities resulting from migrating people from rural to urban areas. It involves concentrating economic activities and infrastructure in urban centers <cit.>. Rapid urbanization has modernized many people's lives and brought numerous challenges to transportation systems. These challenges include air pollution, noise, traffic congestion, and accidents <cit.>.
In this context, Intelligent Transportation System (ITS) emerges as a potential solution to address the impacts of urbanization and improve the efficiency and safety of transportation systems <cit.>. ITS integrates advanced communication and information technologies with transportation infrastructure to support various applications and services, such as real-time traffic management, road safety, smart parking, and autonomous vehicles <cit.>. These applications transmit data through various communication technologies and obtain them from heterogeneous data sources, such as vehicles, road infrastructure, and citizens. The rapid growth of ITS systems made analyzing and processing the data challenging for traditional data processing systems presenting a new Big Data scenario <cit.>. However, solutions are emerging to provide a flexible and scalable data store to handle the new ITS Big Data <cit.>. To provide this support, the Logical Data Lake (DL) concept <cit.> enables efficient management of ITS Big Data, allowing advanced analytics, inference, and making data-driven decisions. Despite their proven benefits, existing DL architectures suffer from certain limitations when applied in the context of ITS: (i) Vehicles typically move at high speeds, resulting in frequent disconnections, especially when combined with reduced communication range; (ii) ITS data is dispersed and heterogeneous, which poses a significant challenge for data integration; (iii) Cloud architectures can suffer from high communication and computing overheads due to the concentration of data and processing in a single central server. To address these challenges, we propose a novel edge-based data lake architecture that provides an abstraction layer and enables efficient data integration, cleansing, and inference for decision-making applications in ITS. Multi-access Edge Computing (MEC) infrastructure provides processing and storage resources at the network edge, such as at the cellular base station <cit.>. In our approach, we design a distributed architecture across servers located at the network edge and in the cloud, with each layer utilizing the data differently. The main contribution of this research is a novel edge-based data lake architecture that offers a scalable and efficient solution to handle the diverse data requirements of ITS, enabling enhanced decision-making, improved operational efficiency, and innovative services for a more intelligent and connected transportation ecosystem. In our approach, data ingestion is performed logically and allows the direct and transparent utilization of distributed data, resulting in more efficient use of resources. We analyze three use cases. The first use case is a Vehicular Sensor Network (VSN) application, which relies on vehicle data to determine a subset of vehicles to act as aggregation points and perform data offloading. The second use case is a mobile network application, which uses data from mobile devices to optimize the handover process. Finally, the third use case is the driver identification application, which integrates data from different automotive sensors to identify individual drivers. The results show our proposed architecture's effectiveness in providing seamless data integration, low-latency processing, and flexible support for advanced data analysis in transportation systems. 
§ RELATED WORK Big Data Analytics has the potential to revolutionize transportation systems, bringing significant improvements to traffic management, reducing congestion, and increasing road safety. However, many issues and challenges still need to be addressed to ensure the effective use of these technologies, such as effective management, storage, and processing of large and complex data sets. Existing architectures explore using advanced big data management technologies, such as cloud computing, data warehousing, and data lakes, to address these issues and handle this data efficiently and cost-effectively. Guerreiro et al. <cit.> propose an ETL (Extract, Transform, and Load) architecture for big data analytics in ITS applications, addressing an application scenario on dynamic toll charging for highways. The proposed architecture can efficiently process and model large volumes of raw traffic data using Apache Spark and SparkSQL for data processing and MongoDB for data storage. Similarly, Ramos et al. <cit.> present a zone-based data lake architecture for smart cities and governments. Their approach enables the ingestion and integration of heterogeneous data sources from IoT systems, social media, data streams, information systems databases, and Data Warehouses. Additionally, they integrate all data via a flexible metadata system, which enables manual data labeling, automatic metadata extraction, and data queries through the metadata. Zhu et al. <cit.> provided a comprehensive survey of Big Data Analytics in ITS and introduced a three-layer architecture composed of a data collection layer, a data analytics layer, and an application layer. They also presented several use cases of Big Data Analytics in ITS, such as road traffic flow prediction, road traffic accident analysis, public transportation service planning, personal travel route planning, and others. Darwish and Bakar <cit.> proposed a fog-based architecture for ITS big data real-time processing called RITS-BDA. The fog nodes can form clusters vertically or horizontally, sharing computational and storage resources. Singh et al. <cit.> proposed BlockIoTIntelligence architecture of converging blockchain and AI for the Internet of Things (IoT). They present cloud, fog, edge, and device layers hierarchically and use Blockchain technology to mitigate the security and privacy issue and provide decentralized big data analysis. Dang et al. <cit.> propose a zone-based data lake architecture for IoT, Small, and Big Data. The authors consider an analysis-oriented metadata model for data lakes that includes the descriptive information of datasets and their attributes and all metadata related to the machine learning analyses performed on these datasets. The authors implemented a web application of data lake metadata management that allows users to find and use existing data, processes, and analyses by searching relevant metadata stored in a NoSQL data store within the data lake. They also present two real-world use cases: dataset similarity detection and machine learning guidance. Table <ref> presents a comparison of the existing architectures for big data ITS. Although previous works have presented promising solutions for ITS big data, they often rely on centralized architectures based on cloud infrastructure. Furthermore, there is a lack of a deep discussion about the infrastructure level design and the exploration of potential use cases within the transportation ecosystem. 
Our study aims to introduce a novel data lake architecture for ITS that utilizes the capabilities of Edge Computing to address the limitations of previous works. The proposed architecture offers an abstraction layer and performs data lake operations at the network's edge in a virtualized and distributed manner, enabling scalability, efficiency, and transparency. § EDGE-BASED DATA LAKE ARCHITECTURE The data lake plays a crucial role in the overall architecture, corresponding to a data storage and intelligence layer. Dealing with heterogeneity in Big Data landscapes and scaling in distributed environments are the main features that make it well suited for ITS platforms <cit.>. This layer collects, catalogs, cleans, and transforms data from IoT devices. These tasks compose a process known as data wrangling, which enriches the data and makes it suitable for analysis <cit.>. Nevertheless, the ultimate feature is providing data analytics, the core of the application layer. Data management is always a challenge when facing data heterogeneity. Data lakes address this issue by storing all collected data in raw format and providing an evolving framework for data structuring and refinement <cit.>. Such a methodology is known as the Zone-Based Data Model <cit.>. Data collected from external sources are logically assigned to an ingested data zone <cit.>. We transform the data to enhance its quality, so that the outcome can move into the following zones <cit.>. Such transformations run until the data reaches the maximum level of maturity, at which point it is ready to be analyzed by decision-makers <cit.>. A data governance layer traverses all the zones to assure data security, privacy, quality, and monitoring <cit.>. Data governance requirements are different for each zone. At the ingestion zone, data must satisfy conditions for security and consistency. The distillation zone should enforce data privacy by controlling access. The processing zone reinforces data quality, and the monitoring zone provides reports on the health of specific components and the general infrastructure of the data lake. Data governance should increase and stack as the data flows from the ingestion zone to the insights zone. However, this is only one of several variants of zone architecture; such architectures generally differ in the number and characteristics of the zones <cit.>. We usually deploy data lakes in the cloud due to the considerable computing resources available and the elasticity in their allocation. Their costs are often tied only to resource demand, skipping the infrastructure maintenance costs. However, in an environment with a considerable volume of sensors constantly providing data and a demand for real-time analysis, the latency in uploading it to the cloud can be a bottleneck. Nevertheless, given the edge's more constrained computing assets, this does not justify deploying the data lake exclusively at the edge either. Thus, the cooperative adoption of both approaches can balance the trade-off between physical resources and latency to minimize costs. In this light, the proposed data lake architecture is internally composed of two distributed data lakes running at the edge and in the cloud. Although independent, these data lakes exchange data, operating together organically. Our architecture (Figure <ref>) considers four layers: the IoT Device Layer, the Edge Data Lake, the Cloud Data Lake, and the Application Layer. The IoT Device Layer represents the mechanisms used to handle the data used by devices embedded in vehicles and other traffic-related devices.
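Stepping back to the zone-based model described above, the following is a minimal Python sketch of how a record could be promoted from the ingestion zone toward the insights zone, with a simple governance check at each transition. The zone names follow the text, while the class, functions, and rules are hypothetical placeholders rather than part of the proposed architecture; in the architecture this refinement would be carried out by the data lake's processing components.

```python
from dataclasses import dataclass, field

ZONES = ["ingestion", "distillation", "processing", "insights"]

@dataclass
class DataRecord:
    payload: object
    metadata: dict = field(default_factory=dict)
    zone: str = "ingestion"

# Hypothetical governance rules per target zone, standing in for the security,
# privacy, quality, and monitoring requirements described in the text.
GOVERNANCE_CHECKS = {
    "distillation": lambda rec: rec.payload is not None,                # consistency
    "processing":   lambda rec: "owner" in rec.metadata,                # access control
    "insights":     lambda rec: rec.metadata.get("schema") is not None  # quality
}

def promote(record: DataRecord) -> DataRecord:
    """Move a record one zone forward if the target zone's governance check passes."""
    idx = ZONES.index(record.zone)
    if idx == len(ZONES) - 1:
        return record                      # already ready for decision-makers
    target = ZONES[idx + 1]
    if GOVERNANCE_CHECKS[target](record):
        record.zone = target               # the refinement step itself is omitted here
    return record

# Example: a raw sensing sample refined step by step up to the insights zone.
rec = DataRecord(payload={"speed_kmh": 42.0},
                 metadata={"owner": "vehicle-123", "schema": "vsn/v1"})
for _ in ZONES:
    rec = promote(rec)
print(rec.zone)  # insights
```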
The Edge Data Lake and Cloud Data Lake layers collect, store, and process the data in batches or in real time using distributed technologies at the edge or in the cloud. The Application Layer uses the processed data. Examples of ITS applications are congestion detection, traffic management, accident prediction, and other urban mobility solutions. Finally, all these layers exchange their data horizontally through a communication bus. The communication bus implements a lightweight messaging protocol to enable communication between all architecture components. An example of such a protocol is the Message Queuing Telemetry Transport (MQTT) <cit.>, which is designed to work on top of the TCP/IP protocol. This event-driven protocol acts on a publish/subscribe model, facilitating real-time communication within an ITS ecosystem. §.§ IoT Device Layer The IoT Device Layer, also known as the perception layer, is where the smart sensors reside in the data lake architecture. These devices may include various types of sensors, such as intra-vehicular sensors (e.g., chemical detectors, video cameras, and vibration/acoustic sensors), traffic control sensors (e.g., traffic lights, radar, inductive loops), and infrastructure sensors (e.g., roadside units and roadside weather stations). This layer collects data from the environment and makes it available to other layers for heavy processing and analysis. Wireless communication technologies like Bluetooth, Wi-Fi, Zigbee, LoRa, 5G, and others are commonly used in the device layer to transmit data. In addition, the layer has a processing unit to enable the performance of specific tasks and functions, such as data transformations, device actuator control, and communication. There is also a local storage component for buffering the data collected by the sensors and the processing outcome. This set of components can help reduce the amount of data that needs to be transmitted over the network and reduce bandwidth consumption. Finally, the DL Interface handles all communication with the other layers. §.§ Edge and Cloud Data Lake Layers The Edge Data Lake layer is mainly responsible for: (i) collecting data from sensors; (ii) keeping frequently used data stored (hot data); (iii) data cataloging and metadata model inference; (iv) real-time and reduced batch processing; and (v) interfacing with MEC applications. Additionally, the goal of the Cloud Data Lake layer is to extend the computing power of the edge. Hence, we design it for: (i) storing huge volumes of sporadically used data (cold data); and (ii) costly batch processing. The data kept by the ITS infrastructure may reside in any layer of the architecture. The system sends data from the lowest storage-capacity tier to the highest as required for more costly processing or to free up storage space. It executes the reverse direction when a layer with less computing power requests data stored in the next layer. Consequently, hot data stays in the tiers closest to the application, while cold data accumulates in the cloud. The Edge and Cloud Data Lake layers comprise Storage Systems, Processing Engines, and DL Interfaces. Moreover, the Edge Data Lake monitors and integrates the data, working as a virtual middleware in the data exchange of the other layers. For this purpose, two additional modules support it: the Data Pipeline Orchestrator and the Metadata System. The Data Pipeline Orchestrator is responsible for data ingestion from the IoT Device Layer and data exchange with the Cloud Data Lake. In the first case, the DL Interface intermediates the connection with the application, passing the received data (stream or batch) to the Orchestrator and indicating a location in the Edge Storage System.
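The ingestion path just described assumes that devices reach the Edge Data Lake over the communication bus. Since the bus follows MQTT's publish/subscribe model, the sketch below shows how a device could publish a sensing sample and how the edge-side DL Interface could consume it, using the Eclipse Paho MQTT client for Python; the broker address, topic scheme, and payload format are hypothetical and not prescribed by the architecture.

```python
import json
import paho.mqtt.publish as publish
import paho.mqtt.subscribe as subscribe

BROKER_HOST = "edge-dl.example.org"            # hypothetical edge broker address
SENSOR_TOPIC = "its/vsn/vehicle-123/speed"     # hypothetical topic naming scheme

# Device side: publish one sensing sample over the communication bus.
def publish_sample():
    payload = json.dumps({"speed_kmh": 42.0, "ts": 1693910400})
    publish.single(SENSOR_TOPIC, payload, qos=1, hostname=BROKER_HOST)

# Edge side: the DL Interface subscribes and would hand each sample to the
# Data Pipeline Orchestrator together with a target location in the
# Edge Storage System (here we only print it).
def on_message(client, userdata, message):
    sample = json.loads(message.payload)
    print(f"ingesting {message.topic}: {sample}")

def run_edge_subscriber():
    subscribe.callback(on_message, "its/vsn/+/speed", qos=1, hostname=BROKER_HOST)
```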
When exchanging data with the Cloud Data Lake, the DL Interface informs the source and target data locations in the cloud and edge Storage Systems, and the Orchestrator transfers the data in batches. Note that the Orchestrator is a distributed system; therefore, data transfer occurs in large volumes in parallel. The Storage System stores the data on the edge. The Data Pipeline Orchestrator and the Processing Engine feed it; in the latter case, the stored data results from some processing task. In addition, it returns data to all other components of the Edge Data Lake when requested via the DL Interface. The Metadata System is in charge of logically integrating all data and making it accessible to the user through homogeneous modeling. It also stores and manages information about the usage and performance of the data lake, for example, data lineage, history of data access, and data transfers. When new data is ingested into or created in the Edge Data Lake, it receives its location in the Storage System and the available elementary metadata. In addition, the Metadata System performs some processing and delegates the bulky tasks to the Processing Engine to extract relationships, hidden patterns, and semantic descriptors. When the system transfers data from the edge to the cloud, the metadata is preserved, changing only its location to reflect the data's address in the cloud Storage System. In this way, the data remains virtually ingested at the Edge Data Lake and can be brought back when required. All metadata is essential to empower data governance, enabling discovery, integrated queries, and unsupervised meta-analysis. Note that big data implies big metadata, so the Metadata System pushes toward an implementation in a distributed environment. Hence, the metadata structure is spread across the computing nodes, which share the content with others when requested. The Processing Engine receives requests from the Metadata System and the DL Interface to extract meta-features and refine data as the data evolves; the output returns to the requester or is saved into the Storage System. The DL Interface communicates with the MEC and translates the requests into coordinated tasks for the other modules within the Edge Data Lake. The Edge Data Lake runs distributed along the MEC servers, sharing computing resources with the applications. Figure <ref> presents a possible implementation of edge services on a MEC server. All module nodes run side by side with MEC applications in virtual machines (VMs). The Virtualization Layer manages the virtual machines, allocating resources to each of them and intermediating the communication between them and external ones on other MEC servers or clouds. The MEC Platform consists of the essential functionality required to run MEC applications. It can also enable MEC applications to provide and consume MEC services (e.g., radio network information, location, and traffic management services). Finally, the Local Storage saves the sensing data, the VMs with their data, and the MEC operational data. §.§ Application Layer The Application Layer is responsible for providing valuable services to end-users based on the data processed by the data lake. Applications typically run on the MEC host but can also run in the cloud, on the device, or a combination. The choice of execution location depends on factors like the application's requirements, resource availability, and network conditions. These applications include typical ITS use cases, such as autonomous vehicles, smart parking, environmental monitoring, and video streaming.
Other tasks can also run on this layer, such as data lake operations (e.g., distributed processing) and complex tasks (e.g., video processing, machine learning, augmented reality, and gaming) offloaded by IoT devices. The Application Layer has a Storage System and a Processing Engine to perform its operations autonomously and a DL Interface to communicate with the other layers of the architecture. In addition, applications can interact with the infrastructure that supports them by collecting and storing data in the Storage System and changing its setup via the Processing Engine. This capability enables, for example, the adaptation of network settings to optimally meet the demand for services. § USE CASES OF ITS DATA LAKE In this section, we illustrate the potential of the proposed architecture by discussing three examples of use cases that demonstrate its benefits in ITS applications. We consider Vehicular Sensor Network, Mobile Network, and Driver Identification applications, illustrated in Figure <ref>. §.§ Vehicular Sensor Network Application Vehicular Sensor Network (VSN) refers to a promising remote sensing paradigm in which vehicles equipped with various sensing devices, powerful processing units, and wireless communication can act as mobile sensors and monitor the urban environment. VSNs will enable various new ITS applications through the remote processing of data periodically collected by vehicles, such as improving road safety, traffic management, intelligent navigation, pollution monitoring, urban surveillance, and forensic investigations. Many applications require a periodic upload of the sensory data transmitted to a monitoring center on the internet. However, transmitting massive data over the cellular network can compromise network resources. Data offloading is a potential solution to save bandwidth and prevent cellular network overload, in which only a minimal number of vehicles, called aggregation points, transmit data on the cellular network. An aggregation point is responsible for collecting the sensory data generated by neighboring vehicles using Device-to-Device (D2D) communication, aggregating the data, and uploading it. Our challenge is to determine which vehicles will act as aggregation points. In this scenario, following the flow of Figure <ref>, the vehicles represent the IoT Device Layer and generate sensing data with a high space-time correlation. We summarize these data through aggregation operators that combine data from several sources into a single value. First, vehicles share location information with the Edge Data Lake through the DL Interface. The MEC Host represents the Edge Data Lake. Next, the Processing Engine processes the raw data in parallel and transforms it into a suitable data format for storage in the Storage System. The Metadata System adds additional metadata information, such as creation date, data format, geographic location, and others. The MEC Host hosts the neighbor discovery service to provide a mechanism for devices to discover nearby devices. A MEC application runs the centrality-based algorithm <cit.> to solve the aggregation point selection problem, as sketched below. The algorithm uses the neighbor discovery service to model the graph and then selects the aggregation points. Finally, aggregation points collect sensory data from neighboring vehicles using D2D communication. The aggregation point stores the data locally in the Storage System, and the Processing Engine performs data aggregation. The aggregated data is transmitted to the Cloud Data Lake for later analysis through the DL Interface.
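Below is a minimal sketch of a greedy, closeness-centrality-based selection over the proximity graph provided by the neighbor discovery service, written with NetworkX. It is only an illustrative reading of the selection step; the cited centrality-based algorithm's exact tie-breaking, stopping criterion, and multi-hop accounting may differ.

```python
import networkx as nx

def select_aggregation_points(neighbors, k_hops=1):
    """Greedy closeness-centrality selection of aggregation points.

    neighbors: adjacency from the neighbor discovery service,
               e.g. {"v1": {"v2"}, "v2": {"v1", "v3"}, ...}
    k_hops:    D2D collection radius of an aggregation point (1 or 3 in the text).
    """
    graph = nx.Graph()
    graph.add_nodes_from(neighbors)
    graph.add_edges_from((v, u) for v, nbrs in neighbors.items() for u in nbrs)

    centrality = nx.closeness_centrality(graph)
    uncovered = set(graph.nodes)
    aggregation_points = []

    # Repeatedly pick the most central still-uncovered vehicle and mark its
    # k-hop neighborhood as covered by that aggregation point.
    while uncovered:
        best = max(uncovered, key=lambda v: centrality[v])
        aggregation_points.append(best)
        covered = nx.single_source_shortest_path_length(graph, best, cutoff=k_hops)
        uncovered -= set(covered)

    return aggregation_points

# Toy example with four vehicles in a line; two aggregation points suffice.
print(select_aggregation_points(
    {"v1": {"v2"}, "v2": {"v1", "v3"}, "v3": {"v2", "v4"}, "v4": {"v3"}}))
```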
The proposed scheme has the following steps: (i) Awareness - the neighbor discovery service establishes the proximity relationship between the vehicles using the processed location data stored in the Edge Data Lake; (ii) Modeling - the MEC models the graph from the neighbor discovery service information; (iii) Selection - the centrality-based algorithm uses closeness centrality in a greedy approach to select which vehicles will access the cellular uplink; (iv) Upload - finally, a vehicle selected to access the cellular uplink receives a message from the MEC requesting the upload, and the vehicle can then transmit the collected data. The Data Pipeline Orchestrator intermediates the data exchange with the Cloud Data Lake. The sensory data is ingested into the Cloud Data Lake via an API, processed, and stored in the distributed file system with added metadata for organization and additional information. We evaluate the proposed solution in a realistic simulation scenario derived from data traffic (TAPASCologne <cit.>) containing more than 700,000 individual car trips over 24 hours. We compare our approach with a reservation-based (RB) algorithm <cit.>, and the results indicate that the aggregation rate improved by up to 10.45%. There is an increase in the cost of uploading during peak hours, when there is a greater volume of vehicles on the roads. In the TAPASCologne data traffic, we observe that the upload cost reached 167.50 kB/s at peak time. The high cost of uploading, especially in periods of higher demand, requires offloading techniques to reduce the traffic generated on the network and save bandwidth. In this way, Figure <ref> presents, in detail, the aggregation rate. It corresponds to the ratio between the data volume after and before aggregation. The results suggest a significant reduction in sensing data transmitted over the cellular network in both approaches. When we use single-hop communication (1-hop), the centrality-based algorithm has an aggregation rate close to that of the RB algorithm. In this scenario, the aggregation point will only collect data from its immediate neighbors. However, we can improve the aggregation rate by increasing the number of hops. Multi-hop communication (3-hops) enables data collection from more distant vehicles, resulting in a better aggregation rate. The centrality-based algorithm reaches an aggregation rate of 83.14% in peak hours. This use case presents a scenario in which the architecture enables the implementation of a solution to select aggregation points and offload sensing data. §.§ Mobile Network Application The ecosystem of mobile networks is highly dynamic, dense, and connected. As users move between communication technologies, they undergo handover processes, transitioning from one access point or base station (represented by the evolved Node B, or eNB) to another <cit.>. It is essential to maintain users' connections without any perception of interruption or reduction in transfer rates while they are on the move. Mobile network solutions must ensure that user services are available even in scenarios of high mobility, such as airplanes traveling at speeds of up to 500 km/h <cit.>. Accurate strategies are essential for transferring and allocating users' resources. Handovers should happen seamlessly and quickly in areas that do not negatively affect mobility and connectivity. Optimizing the allocation of users to mobile network base stations can reduce the number of handovers.
In typical 5G environments, the involvement of multiple stakeholders can hinder the open exchange of information regarding resource availability, operational status, performance capabilities, and service contracts <cit.>. The lack of data sharing is a big challenge that limits the capacity of systems to manage complex chains of end-to-end services. Considering 5G architectures, these difficulties can be surpassed or minimized through a distributed ledger and an operational data lake <cit.>. The operational data lake is viewed as a logically centralized repository, as shown in Figure <ref>. In this scenario, the IoT Device Layer can represent anything connected to the mobile network, such as a vehicle or a smartphone. Through the DL Interface, it shares location updates and network service status, or consumes best route directions, optimized connection spots, and general information regarding anomalous events from the data lake. The Edge Data Lake stores this information, and its Processing Engine processes it and makes it available to mobile users, such as the vehicle. Additionally, the eNB represents the Edge Data Lake. In this case, the Edge Data Lake keeps all data required for network management and optimization. The Cloud Data Lake enhances and provides additional information to the data in the Edge Data Lake. Data stored in data lakes receive little to no processing at the ingestion stage; the goal is to avoid information loss. Suppose a mobile data service provider realizes it needs to offer a new service or optimize existing ones. The data will be available to the processing zone, which is responsible for data filtering, transformation, aggregation, real-time analytics, event detection, and alerting. Furthermore, by harvesting the tools provided by the insights layer of the data lake, decision-makers can analyze and compare different models to select solutions while following data governance rules to attend to users' demands. Once the application and the data lake are in place, we demonstrate their execution through a simulation. The objective of our solution is to find the best eNB activation considering the vehicle trajectory. Figure <ref> presents the mobility scenario of a user that starts at the bottom left and proceeds to the final destination at the top right. The eNB allocation considers the region of São Paulo City, Brazil. The real-world eNB locations are provided by Telebrasil's dataset <cit.>. The simulation region covers an area of (1947.65 × 1878.95)m^2 and comprises 26 eNBs provided by a local phone carrier. Each base station is labeled from 0 to 25. We utilize the SUMO (Simulation of Urban MObility) simulator <cit.> to generate the route (highlighted in blue in Figure <ref>), which yields 432 UE (User Equipment) location readings. These readings can serve as input for allocation models, such as the ones discussed by Ahmadi et al. <cit.> and Ramos et al. <cit.>. Figure <ref> shows the allocation results for both models (inner results: Ahmadi et al.; outer results: Ramos et al.). We can see that the UE is allocated through the same eNBs and performs 7 handovers, following the allocation sequence 20 -> 8 -> 25 -> 5 -> 2 -> 3 -> 13 -> 19. The models present differences regarding the allocation period along the route. For instance, the model proposed by <cit.> kept the UE connected to eNB 5 for 32.9% of the entire route, while <cit.> kept it connected for 21.9%. A decision maker can interpret the results and choose models that can keep the UE connected for longer periods in each eNB.
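As a rough illustration of how such an allocation can be post-processed, the sketch below assigns each UE location reading to its nearest eNB and counts the resulting handovers and per-eNB connection shares. This is only a naive nearest-station baseline; it is not the allocation model of Ahmadi et al. or Ramos et al. referenced above, and the coordinates are hypothetical stand-ins for the Telebrasil/SUMO data.

```python
import math

def nearest_enb_allocation(ue_readings, enb_positions):
    """Assign each UE location reading to the closest eNB (naive baseline)."""
    allocation = []
    for (x, y) in ue_readings:
        nearest = min(enb_positions, key=lambda e: math.dist((x, y), enb_positions[e]))
        allocation.append(nearest)
    return allocation

def count_handovers(allocation):
    """A handover happens whenever the serving eNB changes between readings."""
    return sum(1 for prev, cur in zip(allocation, allocation[1:]) if prev != cur)

def time_share(allocation):
    """Fraction of readings served by each eNB (e.g., 32.9% on eNB 5 in the text)."""
    share = {}
    for enb in allocation:
        share[enb] = share.get(enb, 0) + 1
    return {enb: n / len(allocation) for enb, n in share.items()}

# Hypothetical toy data: three eNBs and a short straight route.
enbs = {20: (0.0, 0.0), 8: (500.0, 300.0), 5: (1100.0, 700.0)}
route = [(50.0 * i, 30.0 * i) for i in range(25)]   # stand-in for the 432 readings
alloc = nearest_enb_allocation(route, enbs)
print(count_handovers(alloc), time_share(alloc))
```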
Since human mobility is highly predictable <cit.>, we can use the data from this route stored in the Cloud Data Lake the next time the UE starts this specific route. By planning ahead, the data lake can run prediction models through the Processing Engine to reduce the number of handovers by selecting the eNBs with minimal service loss for the user. For example, instead of the 7 handovers initially performed, eNB 25 and eNB 3 can be removed, resulting in the eNB handover sequence 20 -> 8 -> 5 -> 2 -> 13 -> 19. §.§ Driver Identification Application The problem of driver identification involves classifying drivers based on their behavior, in our case using machine learning models. We treat it as a multivariate time series classification problem due to the nature of the data collected from various sensors over time. In this use case, we focus on the feature generation process of machine learning models. This process is responsible for generating new features from existing ones to enhance the performance of machine learning models. It involves various techniques, such as mathematical transformations, aggregations, interactions, or domain-specific knowledge. We propose to use information theory quantifiers <cit.> as new features in machine learning models. Information theory allows quantifying the information in a dynamic system, such as driver identification. Integrating principles from information theory into feature generation can achieve a more comprehensive and meaningful representation of the data. These new features enhance machine learning models' capability to handle complexity and extract pertinent knowledge from the data. We use the statistical complexity and entropy measures as features, considering a time window of 30 seconds (i.e., a sequence of 30 samples). We assumed that each time window was a stationary series. For our study, we used a real-world dataset with driving data from four drivers <cit.>. The drivers made several trips using the same vehicle along a fixed path. Each record contains 51 features associated with automotive sensors. However, we used only nine features, which are commonly adopted in the literature <cit.>. These features include accelerator pedal value, intake air pressure, absolute throttle position, long-term fuel, engine speed, torque of friction, engine coolant temperature, engine torque, and vehicle speed. Figure <ref> shows the driver identification framework. First, the IoT Device Layer, represented by smartphones or the OBD-II interface, collects the data. After that, it transmits the data to the Edge Data Lake, which is responsible for executing the driver identification models. Before this processing is possible, we must generate the features and train the model. To perform this generation, the Edge Data Lake sends the data to the Cloud Data Lake. Then, the data is processed, the features are extracted, and the machine learning models are trained. Finally, the trained model is transferred back to the Edge Data Lake, enabling the user to perform low-latency inference tasks. We compare our proposed feature generation, based on statistical complexity and entropy, with the approaches commonly used in the literature <cit.>. We trained seven classifiers using the One-vs-Rest method: K-Nearest Neighbors (KNN), Linear Support Vector Machine (SVM), Radial Basis Function (RBF) SVM, Decision Tree, Random Forest, Multi-Layer Perceptron (MLP), and Naive Bayes. We split the dataset into a training set (75%) and a testing set (25%), using a small dataset and a large dataset with 300 and 9,700 samples from each driver, respectively.
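A minimal sketch of this pipeline is given below: an ordinal-pattern (permutation) entropy is computed per 30-sample window for each of the nine signals, and a One-vs-Rest classifier is trained with scikit-learn. The exact entropy/complexity estimators, windowing, and hyperparameters of the study are not specified here, so the choices below (permutation entropy of order 3, a Random Forest base learner, and synthetic data) are illustrative assumptions only.

```python
import numpy as np
from itertools import permutations
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier

def permutation_entropy(window, order=3):
    """Normalized permutation (ordinal-pattern) entropy of a 1-D window."""
    patterns = list(permutations(range(order)))
    counts = dict.fromkeys(patterns, 0)
    for i in range(len(window) - order + 1):
        counts[tuple(np.argsort(window[i:i + order]))] += 1
    p = np.array([c for c in counts.values() if c > 0], dtype=float)
    p /= p.sum()
    return float(-(p * np.log(p)).sum() / np.log(len(patterns)))

def window_features(windows):
    """windows: array (n_windows, 30, n_signals) -> one entropy value per signal."""
    return np.array([[permutation_entropy(w[:, s]) for s in range(w.shape[1])]
                     for w in windows])

# Hypothetical data: 400 windows of 30 samples over 9 sensor signals, 4 drivers.
rng = np.random.default_rng(0)
X_windows = rng.normal(size=(400, 30, 9))
y = rng.integers(0, 4, size=400)

X = window_features(X_windows)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = OneVsRestClassifier(RandomForestClassifier(n_estimators=100, random_state=0))
clf.fit(X_train, y_train)
print("accuracy:", clf.score(X_test, y_test))
```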
Figure <ref> presents the accuracy of each classification model. The results indicate that the classifiers' performance is influenced by the features used and the size of the dataset. In the small dataset, we can see that Entropy-Complexity has a lower performance than the other approaches. Furthermore, we can significantly improve most of the classifiers when we combine the new features with the pre-processing scheme from the literature. However, in the large dataset, the literature approach was equal or better for the classifiers used. It is noted that all approaches decreased in accuracy for the large dataset compared to the small dataset. Information theory measures can provide additional features that help to distinguish patterns, behaviors, and classes in time series. Although the results for the large dataset show lower precision than the literature solution in some classifiers, for the small dataset we obtain a significant improvement. The feature extraction step represents a significant computational overhead in training ML models. However, with the high computing power of the Cloud Data Lake or a decentralized solution on the Edge Data Lake, we can reduce the workload and improve the efficiency of real-time data processing. § CONCLUSION In this work, we presented a distributed Edge-Based Data Lake Architecture for storing and processing ITS data. This architecture is a promising approach to handling different types of data, ranging from automotive to meteorological data, in which the data lake can provide a distributed platform for data integration. The convergence of edge computing and data lake technologies offers significant advantages, such as reducing latency, improving data security, and optimizing bandwidth usage. We analyzed three use cases of the proposed architecture. Besides the data organization and easy access through well-established interfaces, our qualitative analysis revealed a significant improvement in the organization and structure of ITS applications, such as identifying explicitly where the data is collected, stored, and processed. In particular, for the driver identification use case, we identified the possibility of increasing performance by using distributed processing in the Edge Data Lake. Overall, the proposed architecture represents a valuable contribution to the field of ITS and can pave the way for new advancements. However, some challenges remain to be addressed, such as a data selection algorithm for offloading and a detailed metadata system architecture for complete data integration. This study was partly financed by the Research Foundation of the State of Alagoas (FAPEAL) under grant E:60030.0000000352/2021 and the National Council for Scientific and Technological Development (CNPq) under grant 407515/2022-4.
http://arxiv.org/abs/2409.03716v1
20240905171939
Limits and correlations of T and Z components of CPT-odd coefficients of the Standard Model Extension at DUNE
[ "L. A. Delgadillo", "O. G. Miranda", "G. Moreno-Granados", "C. A. Moura" ]
hep-ph
[ "hep-ph" ]
ldelgadillof2100@alumno.ipn.mx Departamento de Física, Escuela Superior de Física y Matemáticas del Instituto Politécnico Nacional, Unidad Adolfo López Mateos, Edificio 9, 07738 Ciudad de México, Mexico Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049, China omar.miranda@cinvestav.mx Departamento de Física, Centro de Investigación y de Estudios Avanzados del IPN Apdo. Postal 14-740 07000 Ciudad de México, Mexico guadalupe.moreno@cinvestav.mx Departamento de Física, Centro de Investigación y de Estudios Avanzados del IPN Apdo. Postal 14-740 07000 Ciudad de México, Mexico Center for Neutrino Physics, Virginia Tech, Blacksburg, VA, 24061, USA celio.moura@ufabc.edu.br Centro de Ciências Naturais e Humanas, Universidade Federal do ABC - UFABC, Av. dos Estados, 5001, 09210-580, Santo André-SP, Brazil § ABSTRACT We consider the possible effect of the Standard Model Extension (SME) coefficient (a_L)^Z in the neutrino propagation and discuss how this can affect DUNE limits on the coefficient (a_L)^T found elsewhere. Based on an analysis that considers both coefficients, we find new constraints for them coming from a DUNE-like experiment. Furthermore, we investigate the correlations of the standard oscillation parameters, the leptonic CP-violating phase, δ_CP, and the atmospheric mixing angle, sin^2 θ_23, with respect to the SME coefficients (a_L)^T and (a_L)^Z. Limits and correlations of T and Z components of CPT-odd coefficients of the Standard Model Extension at DUNE C. A. Moura September 9, 2024 ============================================================================================================= § INTRODUCTION On the neutrino oscillation frontier, where nearly all of the available data is compatible with the standard three-flavor oscillation paradigm, precision measurements can be performed to investigate Beyond Standard Model (BSM) physics that may disturb the oscillation pattern. Looking for perturbations on the neutrino oscillation standard model brings new opportunities to access, for example, the Planck scale. In the three neutrino oscillation picture, NOvA <cit.>, T2K <cit.>, and recently IceCube <cit.> collaborations are leading the precision measurements on both the atmospheric oscillation parameters, Δ m^2_32 and sin^2 θ_23, and the CP-violating phase, δ_CP <cit.>. The current measurement of the angle θ_23 demonstrates consistency with maximal mixing (i.e., θ_23=π/4). This has implications for our understanding of neutrino masses and the nature of neutrino flavor mixing. Furthermore, the precise determination of sin^2 θ_23 is crucial for refining our understanding of the neutrino mass hierarchy, the ordering of neutrino mass eigenstates, and for probing possible sources of CP violation in the neutrino sector. On the other hand, accurately determining δ_CP is a primary goal of several ongoing and planned long-baseline neutrino oscillation experiments. Present-day measurements of δ_CP have been reported, mainly by NOvA and T2K collaborations. Their results seem to differ, and might be a challenge to interpret them if this difference persists in the future. While the NOvA collaboration best-fit agrees with a value of δ_CP≃ 0.8 π <cit.>, a similar analysis from the T2K collaboration reports δ_CP≃ 1.4 π <cit.>, under the neutrino mass Normal Ordering (NO). A recent joint oscillation analysis of the Super-Kamiokande and T2K collaborations is consistent with a best-fit value of δ_CP≃ 1.4 π <cit.>. 
Nevertheless, some proposals to alleviate such discrepancy on the measurement of δ_CP include: neutrino Non-Standard Interactions (NSI) <cit.> and sterile neutrinos <cit.>, among other BSM scenarios <cit.>. The Standard Model of particle physics (SM) is thought to represent a low-energy effective gauge theory of a more fundamental theory in which the Planck scale (M_Pl∼ 10^28 eV) would be the natural mass scale. Likewise, the Standard Model Extension (SME) framework <cit.> parameterizes the possible breakdowns of the Lorentz and CPT symmetries that might arise from a fundamental theory that incorporates both, the gravitational force from one side, and the strong, weak, and electromagnetic interactions of the SM from the other side. Such potential violations of the CPT and Lorentz symmetries may arise at very high energies, well above the electroweak scale, for instance, within the context of string theory <cit.>. Furthermore, it has been argued that potential Lorentz Invariance Violation (LIV) and CPT symmetry breakdown in the neutrino sector might result from neutrino NSI with scalar fields <cit.>. For reviews regarding CPT violation and LIV in the neutrino sector, see, e.g., Refs. <cit.>. As far as neutrino oscillation experiments are concerned, CPT violation in the neutrino sector was suggested in order to explain the Liquid Scintillator Neutrino Detector (LSND) and MiniBooNE anomalies <cit.>. For studies regarding possible LIV and CPT-violating effects in various neutrino oscillation experiments, we refer to <cit.>. Moreover, non-oscillatory probes of Lorentz and CPT breakdowns in the neutrino sector include: beta decay <cit.>, double beta decay <cit.>, and cosmic neutrino background <cit.>. In this paper, we examine the particular BSM case of potential violations of Lorentz invariance and CPT symmetries within the context of accelerator-based long-baseline neutrino oscillation experiments, such as the Deep Underground Neutrino Experiment (DUNE) <cit.>, a next-generation long-baseline neutrino oscillation experiment. Long-baseline neutrino experiments have sensitivity to CPT symmetry and Lorentz invariance violations through SME coefficients proportional to their baseline. One can see this through the oscillation probability expression, which can be parameterized as <cit.>, P_ν_β→ν_α^(1) = 2L(P_𝒞^(1))_αβ , where L is the baseline and (P_𝒞^(1))_αβ= Im((S^(0)_αβ)^*(𝒞^(1))_αβ) . In Eq. (<ref>), |(S^(0)_αβ)|^2 = P^(0)_ν_β→ν_α , is the standard three neutrino flavor oscillation probability. Considering only the CPT-odd coefficients (a_L)^A_αβ, for A=T,Z and α,β=e,μ,τ: (𝒞^(1))_αβ = (a_L)^T_αβ - N̂^Z (a_L)^Z_αβ , where N̂^Z = -sinχsinθcosϕ+cosχcosθ, is the Z component of the vector representing the neutrino propagation direction in the Sun-centered frame in terms of local spherical coordinates at the detector (see Fig. 2 of Ref. <cit.> for an illustration). In Eq. (<ref>), χ is the colatitude of the detector, θ is the angle at the detector between the beam direction and vertical, and ϕ the angle between the beam and east of south. Notice that we are not taking into account sidereal variations so the components A=X,Y of (a_L)^A_αβ are not relevant. The aim of this work is to investigate the consequences of the possible coexistence of the time (T) and spatial (Z) components of the CPT-odd coefficients of the SME. 
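As a quick numerical illustration of the expressions above, the sketch below evaluates N̂^Z from the local angles at the detector and the combination (𝒞^(1))_αβ = (a_L)^T_αβ - N̂^Z (a_L)^Z_αβ that enters the first-order probability correction. The angle values used here are placeholders, not the DUNE values tabulated in the appendix; this is only a hedged restatement of the formulas in Python.

```python
import numpy as np

def n_hat_z(chi, theta, phi):
    """Z component of the beam direction in the Sun-centered frame:
    N^Z = -sin(chi) sin(theta) cos(phi) + cos(chi) cos(theta),
    with chi the detector colatitude, theta the beam angle from vertical,
    and phi the beam angle east of south (all in radians)."""
    return -np.sin(chi) * np.sin(theta) * np.cos(phi) + np.cos(chi) * np.cos(theta)

def effective_coefficient(a_T, a_Z, chi, theta, phi):
    """Combination C^(1) = a^T - N^Z a^Z for one flavor entry (in GeV)."""
    return a_T - n_hat_z(chi, theta, phi) * a_Z

# Placeholder angles (NOT the DUNE values of the appendix table).
chi, theta, phi = np.radians(45.0), np.radians(84.0), np.radians(200.0)
a_T, a_Z = 1.0e-23, 8.0e-23      # GeV, of the order probed in the text
print(n_hat_z(chi, theta, phi), effective_coefficient(a_T, a_Z, chi, theta, phi))
```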
§ EXPERIMENTAL CONTEXT AND THEORETICAL FRAMEWORK Long-baseline neutrino oscillation experiments are crucial for solving puzzles in the conventional three-neutrino paradigm as well as investigating different new physics scenarios, such as possible breakdowns of the Lorentz and CPT symmetries. In recent neutrino oscillation analyzes <cit.>, the possible violation of CPT symmetry, that would imply violation of the Lorentz invariance principle, was studied in the framework of the SME, in such a way that CPT-odd coefficients can have their time-component constrained through the neutrino propagation in long baseline experiments. However, from equations (<ref>)-(<ref>), we see that if the Z-component of the same coefficient is considered in the analysis, a correlation arises between them which affects the predicted limits on the time component alone. We introduced the SME phenomenology of CPT symmetry violation and LIV through the coefficients (ã_L)^A_αβ found in Eq. (<ref>) as part of the oscillation probability. Henceforth, we will focus on the Hamiltonian picture of neutrino propagation, which is directly related to our computing methodology explained in the next section, Section <ref>. We describe the neutrino propagation from the source to the detector considering the Hamiltonian containing three components, Ĥ=Ĥ_vacuum + Ĥ_matter + Ĥ_LIV , where Ĥ_vacuum describes the neutrino propagation in vacuum, Ĥ_matter contains the Mikheyev-Smirnov-Wolfenstein (MSW) matter potential, and Ĥ_LIV includes the perturbative effects from LIV and possibly CPT symmetry violation. Explicitly, considering only the relevant CPT-odd coefficients of the SME neutrino sector: Ĥ_LIV = ( [ a_ee a_e μ a_e τ; a_e μ^* a_μμ a_μτ; a_e τ^* a_μτ^* a_ττ; ]) - N̂^Z ( [ a_ee^Z a_e μ^Z a_e τ^Z; a_e μ^Z* a_μμ^Z a_μτ^Z; a_e τ^Z* a_μτ^Z* a_ττ^Z; ]) , where a_αβ represent the time component and a_αβ^Z the Z-spatial component of the coefficients for each neutrino oscillation channel (α,β=e,μ,τ). From now on, we neglect the script T from the time component and simplify the notation so that (a_L)^T_αβ = a_αβ and (a_L)^Z_αβ = a^Z_αβ. § METHODOLOGY In this study, we use GLoBES <cit.> and its additional NSI plugin snu.c <cit.> which was modified to implement the CPT-odd SME coefficients at the Hamiltonian. Moreover, to simulate a DUNE-like experimental configuration, we use the available DUNE configuration ancillary files <cit.> for GLoBES and specifications from DUNE technical design report (TDR) <cit.>, where can be found more details and information on the experiment. DUNE will consist of up to four modules, each containing approximately 10 kton liquid argon detectors, with a mean neutrino energy around 2.5 GeV, located at 1285 km from the beam source (on-axis) with a 1.2 MW power in the first phase of operations. We consider a time exposure of 10 years, evenly distributed between neutrino and antineutrino modes. We employ a minimum square analysis to quantify the statistical significance to CPT-odd SME coefficients, using both neutrino and antineutrino data sets. The total χ^2-function is given as <cit.> χ^2 = ∑_ℓχ̃^2_ℓ + χ^2_prior, where the corresponding χ̃^2_ℓ-function stands for each channel ℓ, with ℓ= ( ν_μ(ν_μ)→ν_e (ν_e), ν_μ(ν_μ)→ν_μ (ν_μ) ), and is provided as in Ref. 
<cit.> χ̃_ℓ^2= min_ξ_j[ ∑ _i^n_bin 2 { N_i,test^3 ν+LIV( Ω, Θ, {ξ_j})-N_i,true^3ν + N_i,true^3νlogN_i,true^3ν/N_i,test^3 ν+LIV( Ω, Θ, {ξ_j})}                 + ∑_j^n_syst(ξ_j/σ_j)^2 ], where N_i, true^3ν are the simulated events at the i-th energy bin, considering the standard three neutrino oscillations framework, N_i, test^3ν +LIV( Ω, Θ, {ξ_j}) are the computed events at the i-th energy bin including CPT-odd SME coefficients (one parameter at a time). In addition, Ω = {θ_12, θ_13, θ_23, δ_CP, Δ m_21^2, Δ m^2_31} is the set of neutrino oscillation parameters, while Θ = {|a_αβ|, ϕ_αβ, a_αα, ⋯} is the set of either isotropic, a_αβ, or Z-spatial, a_αβ^Z, SME coefficients, and {ξ_j} are the nuisance parameters to account for the systematic uncertainties. Moreover, σ_j are the systematic uncertainties as reported in the DUNE TDR <cit.>. To obtain our simulated events, we consider the neutrino oscillation parameters from Salas et al. <cit.> as true values, shown in Table <ref>. Furthermore, the implementation of external input for the standard oscillation parameters on the χ^2 function is performed via Gaussian priors <cit.> χ^2_prior= ∑_k^n_priors(Ω_k,true-Ω_k,test)^2/σ^2_k . The central values of the oscillation parameter priors, Ω_k,true, are fixed to their best-fit value from Ref. <cit.>, considering NO and σ_k corresponding to the 68.27% confidence level (C.L.) for each parameter k. § RESULTS AND DISCUSSIONS In this section, we present our results of the projected limits and sensitivities to the CPT-odd SME coefficients a_αβ and a_αβ^Z at a DUNE-like experimental configuration. First, we examine the correlations among the SME coefficients a_αβ and a_αβ^Z. The correlation between the time and the Z-component of the CPT-odd coefficients for DUNE is depicted in Fig. <ref>. Because the Z-component is proportional to N̂^Z, which is relatively small for the DUNE location (N̂^Z ≃ 0.16), constraints on the time component are stronger in this case. We obtain N̂^Z from the values of the angles relative to the DUNE location found in Table <ref>, Appendix <ref>. Second, in Figs. <ref> and <ref>, we examine the correlations of the coefficients a_αβ and a_αβ^Z with δ_CP. Moreover, we assess the impact of the flavor-changing a_αβ and a_αβ^Z on the future determination of δ_CP. A robust measurement of δ_CP can be accomplished even in the presence of Lorentz violating effects, as shown in Fig. <ref>. The correlations of a_αβ and a_αβ^Z with sin^2 θ_23 are presented in Figs. <ref> and <ref>. Just as importantly, the LIV phases, ϕ_αβ and ϕ_αβ^Z, might compromise the determination of δ_CP, and similarly for the atmospheric mixing angle, θ_23 <cit.>. In Fig. <ref>, we show the projected sensitivities to the isotropic coefficients a_αβ, as well as the anisotropic Z-spatial coefficients a_αβ^Z. Finally, in Fig. <ref>, we display the correlations of the LIV phases, ϕ_αβ and ϕ_αβ^Z, with respect to δ_CP. §.§ Correlations of time and Z-spatial components the CPT-odd SME coefficients Previous assessments on the isotropic SME coefficients a_αβ considered a_αβ^Z=0 <cit.>. However, in this paper, we consider the possibility of correlations between a_αβ and a_αβ^Z. In Fig. <ref>, we display in dashed lines the expected 95% C.L. sensitivity regions in the |a_αβ| vs. |a_αβ^Z| and (a_αα-a_ττ ) vs. (a_αα^Z-a_ττ^Z) projection planes, respectively on the left and right panels. The projected 95% C.L. 
sensitivities to the isotropic non-diagonal SME coefficients are |a_e τ|≲ 0.92× 10^-23 GeV (|a_e τ^Z|=0), |a_e μ|≲ 0.75× 10^-23 GeV (|a_e μ^Z|=0), and |a_μτ|≲ 0.68× 10^-23 GeV (|a_μτ^Z|=0), accordingly. However, existing bounds on the isotropic SME coefficients from the Super-Kamiokande experiment set |a_e τ| < 2.8 × 10^-23 GeV, |a_e μ| < 1.8 × 10^-23 GeV, and |a_μτ| < 5.1 × 10^-24 GeV <cit.>; all bounds at 95% C.L. Hence, a DUNE-like experimental configuration could improve the limits set on the isotropic SME coefficients |a_e μ| and |a_e τ| with respect to those from Super-Kamiokande. §.§ Correlations of the SME coefficients with δ_CP The correlations among δ_CP and the off-diagonal SME coefficients, |a_αβ| and |a_αβ^Z|, are displayed in Fig. <ref>. In both situations, we have marginalized over the atmospheric mixing angle, θ_23, around its 1σ uncertainty <cit.>, as well as the CPT-odd phases, ϕ_αβ and ϕ_αβ^Z, in the interval [0-2π]. Furthermore, we find that the presence of the flavor-changing (e-μ) and (e-τ) channels have a greater impact on the determination of δ_CP . Regarding the diagonal SME coefficients, taking advantage of the freedom to redefine the diagonal elements up to a global constant, we conduct an analysis of the elements a_αα-a_ττ and a_αα^Z-a_ττ^Z. The correlations with respect to δ_CP and sin^2θ_23 are shown in Figs. <ref> and <ref>, accordingly. In Fig. <ref>, we display our results of the projected 95% C.L. sensitivity regions (a_αα-a_ττ)-δ_CP (left panel) and (a_αα^Z-a_ττ^Z)-δ_CP (right panel). We notice that the determination of δ_CP is more affected by the coefficients a_ee-a_ττ and a_ee^Z-a_ττ^Z than it is by the μμ-ττ coefficients. Here, have marginalized over the atmospheric mixing angle θ_23 around its 1σ uncertainty <cit.>. Besides, all the remaining oscillation parameters were fixed to their best-fit values from Table <ref>. In Fig. <ref>, we show the precision sensitivity to the leptonic CP-phase in the presence of LIV. We observe that the precision sensitivity of δ_CP is modified, at high significance (√(Δχ^2)≳ 7σ C.L.), by the inclusion of non-zero SME coefficients (|a_αβ| or |a_αβ^Z|) from the off-diagonal (e-μ) and (e-τ) channels, while remaining practically unchanged by the presence of the (μ-τ) channel. Hence, a robust measurement of δ_CP can be accomplished even in the presence of Lorentz violating effects from the non-diagonal channels. §.§ Correlations of the SME coefficients with sin^2 θ_23 In Fig. <ref>, we show the correlations between the off-diagonal SME coefficients, |a_αβ| and |a_αβ^Z|, with the atmospheric mixing angle, sin^2θ_23. In this case, we marginalized over δ_CP around its 1σ uncertainty <cit.>, as well as corresponding CPT-odd phases, ϕ_αβ and ϕ_αβ^Z, in the interval [0-2π]. In both cases, we notice that the determination of sin^2θ_23 is mostly impacted by the SME coefficients a_e τ and a_e τ^Z, while the (e-μ) and (μ-τ) channels have a mild influence on the determination of the atmospheric mixing angle. In Fig. <ref>, we observe that the diagonal SME coefficients have a similar impact on the determination of the mixing angle, showing a positive (negative) correlation for the isotropic SME coefficients, a_ee-a_ττ (a_μμ-a_ττ), and negative (positive) correlation for the Z-spatial ones, a_ee^Z-a_ττ^Z (a_μμ^Z-a_ττ^Z). §.§ Sensitivities and correlations of the LIV phases (ϕ_αβ, ϕ_αβ^Z) with δ_CP In Fig. <ref>, we present our results of the expected sensitivities to the SME coefficients a_αβ and a_αβ^Z at DUNE, the projected 95% C.L. 
sensitivities to the isotropic SME coefficients are |a_e τ|≲ 1.6 × 10^-23 GeV, |a_e μ|≲ 0.9 × 10^-23 GeV, and |a_μτ|≲ 0.8 × 10^-23 GeV, respectively, while for the Z-spatial coefficients, the expected 95% C.L. sensitivities are |a_e τ^Z|≲ 7.8 × 10^-23 GeV, |a_e μ|≲ 5.7 × 10^-23 GeV, and |a_μτ|≲ 5.2 × 10^-23 GeV, accordingly. In Fig. <ref>, we display the correlations of δ_CP with the corresponding LIV phases ϕ_αβ and ϕ_αβ^Z. The magnitude of the non-diagonal SME coefficients were fixed to |a_αβ| = 1.0 × 10^-23 GeV and |a_αβ^Z| = 8.0 × 10^-23 GeV. We notice that the LIV phases, ϕ_αβ and ϕ_αβ^Z, have a similar impact on the reconstructed δ_CP phase. Furthermore, the expected allowed value of the leptonic CP phase is δ_CP / π= 1.08 ± 0.6 at 95% C.L. The (e-μ) and (e-τ) channels show higher correlations. § CONCLUSIONS High energy neutrino oscillations provide an excellent opportunity to investigate potential breakdowns of the Lorentz and CPT symmetries. In this study, we have analyzed the impact of either the isotropic or anisotropic Z-spatial coefficients for CPT-odd Lorentz violation on the determination of the mixing angle θ_23 and leptonic CP-phase δ_CP, as well as the projected sensitivities and correlations, considering a DUNE-like setup. The determination of the aforementioned atmospheric mixing angle θ_23 and phase δ_CP will be mostly impacted by effects from either the isotropic or Z-spatial CPT-odd SME coefficients from the (e-τ) channel; effects from the (e-μ) channel are moderate, and the impact from the (μ-τ) channel is muted: Figs. <ref> and <ref>. In addition, we observed that the presence of the corresponding LIV phases, ϕ_αβ and ϕ_αβ^Z, will have a similar effect on the reconstructed CP-phase, showing a significant correlation among the flavor-changing channels, (e-μ) and (e-τ), accordingly: Fig. <ref>. Regarding the diagonal elements, (ee-ττ) and (μμ-ττ), the determination of the atmospheric mixing angle θ_23 will be slightly influenced by the presence of either isotropic or anisotropic coefficients: Fig. <ref>. Effects from the diagonal (ee-ττ) channel will have a considerable impact on the leptonic CP-phase determination: Fig. <ref>. On the other hand, the expected 95% C.L. sensitivities to the isotropic SME coefficients are |a_e τ|≲ 1.6 × 10^-23 GeV, |a_e μ|≲ 0.9 × 10^-23 GeV, and |a_μτ|≲ 0.8 × 10^-23 GeV (left panel of Fig. <ref>), while for the Z-spatial coefficients, the projected 95% C.L. sensitivities are |a_e τ^Z|≲ 7.8 × 10^-23 GeV, |a_e μ|≲ 5.7 × 10^-23 GeV, and |a_μτ|≲ 5.2 × 10^-23 GeV, accordingly (right panel of Fig. <ref>). Besides, we have explored the correlations among the isotropic and Z-spatial sectors (Fig. <ref>), and limits on the isotropic a_αβ can be relaxed depending on the limits set on the a_αβ^Z coefficients. While current limits set on the (e-τ) and (e-μ) channels from the Super-Kamiokande experiment can be improved considering a DUNE-like configuration, that is not the case for the (μ-τ) channel, confirming a complementarity between the two experiments. This paper represents the views of the authors and should not be considered a DUNE collaboration paper. § ACKNOWLEDGEMENTS We would like to acknowledge J. S. Diaz for useful discussions. This work was partially supported by SNII-México and CONAHCyT research Grant No. A1-S-23238. § DUNE LOCATION DEFINITIONS In our analysis we consider a fixed value of N̂^Z. Here we show how the value of N̂^Z change in accordance with the determination of the experiment location. As displayed in Fig. 
<ref>, and from Eqs. (<ref>) and (<ref>), experiment location resulting in bigger values of N̂^Z could enhance the sensitivity of the Z-spatial SME coefficients (ã_L)^Z_αβ. 84 NOvA:2021nfi M. A. Acero et al. [NOvA], Phys. Rev. D 106 (2022) no.3, 032004 doi:10.1103/PhysRevD.106.032004 [arXiv:2108.08219 [hep-ex]]. NOvA:2023iam M. A. Acero et al. [NOvA], Phys. Rev. D 110 (2024) no.1, 1 doi:10.1103/PhysRevD.110.012005 [arXiv:2311.07835 [hep-ex]]. T2K:2023smv K. Abe et al. [T2K], Eur. Phys. J. C 83 (2023) no.9, 782 doi:10.1140/epjc/s10052-023-11819-x [arXiv:2303.03222 [hep-ex]]. T2K:2023mcm K. Abe et al. [T2K], Phys. Rev. D 108 (2023) no.7, 072011 doi:10.1103/PhysRevD.108.072011 [arXiv:2305.09916 [hep-ex]]. IceCube:2024xjj R. Abbasi et al. [IceCube], [arXiv:2405.02163 [hep-ex]]. T2K:2024wfn K. Abe et al. [T2K and Super-Kamiokande], [arXiv:2405.12488 [hep-ex]]. Denton:2020uda P. B. Denton, J. Gehrlein and R. Pestes, Phys. Rev. Lett. 126 (2021) no.5, 051801 doi:10.1103/PhysRevLett.126.051801 [arXiv:2008.01110 [hep-ph]]. Chatterjee:2020kkm S. S. Chatterjee and A. Palazzo, Phys. Rev. Lett. 126 (2021) no.5, 051802 doi:10.1103/PhysRevLett.126.051802 [arXiv:2008.04161 [hep-ph]]. Delgadillo:2023lyp L. A. Delgadillo and O. G. Miranda, Phys. Rev. D 108 (2023) no.9, 095024 doi:10.1103/PhysRevD.108.095024 [arXiv:2304.05545 [hep-ph]]. Chatterjee:2020yak S. S. Chatterjee and A. Palazzo, [arXiv:2005.10338 [hep-ph]]. deGouvea:2022kma A. de Gouvêa, G. Jusino Sánchez and K. J. Kelly, Phys. Rev. D 106 (2022) no.5, 055025 doi:10.1103/PhysRevD.106.055025 [arXiv:2204.09130 [hep-ph]]. Lin:2023xyk H. X. Lin, J. Tang and S. Vihonen, [arXiv:2312.11704 [hep-ph]]. Colladay:1998fq D. Colladay and V. A. Kostelecky, Phys. Rev. D 58 (1998), 116002 doi:10.1103/PhysRevD.58.116002 [arXiv:hep-ph/9809521 [hep-ph]]. Kostelecky:1988zi V. A. Kostelecky and S. Samuel, Phys. Rev. D 39 (1989), 683 doi:10.1103/PhysRevD.39.683 Kostelecky:1991ak V. A. Kostelecky and R. Potting, Nucl. Phys. B 359 (1991), 545-570 doi:10.1016/0550-3213(91)90071-5 Kostelecky:1995qk V. A. Kostelecky and R. Potting, Phys. Lett. B 381 (1996), 89-96 doi:10.1016/0370-2693(96)00589-8 [arXiv:hep-th/9605088 [hep-th]]. Colladay:1996iz D. Colladay and V. A. Kostelecky, Phys. Rev. D 55 (1997), 6760-6774 doi:10.1103/PhysRevD.55.6760 [arXiv:hep-ph/9703464 [hep-ph]]. Gu:2005eq P. H. Gu, X. J. Bi and X. m. Zhang, Eur. Phys. J. C 50 (2007), 655-659 doi:10.1140/epjc/s10052-007-0217-7 [arXiv:hep-ph/0511027 [hep-ph]]. Ando:2009ts S. Ando, M. Kamionkowski and I. Mocioiu, Phys. Rev. D 80 (2009), 123522 doi:10.1103/PhysRevD.80.123522 [arXiv:0910.4391 [hep-ph]]. Simpson:2016gph F. Simpson, R. Jimenez, C. Pena-Garay and L. Verde, Phys. Dark Univ. 20 (2018), 72-77 doi:10.1016/j.dark.2018.04.002 [arXiv:1607.02515 [astro-ph.CO]]. Klop:2017dim N. Klop and S. Ando, Phys. Rev. D 97 (2018) no.6, 063006 doi:10.1103/PhysRevD.97.063006 [arXiv:1712.05413 [hep-ph]]. Capozzi:2018bps F. Capozzi, I. M. Shoemaker and L. Vecchi, JCAP 07 (2018), 004 doi:10.1088/1475-7516/2018/07/004 [arXiv:1804.05117 [hep-ph]]. Farzan:2018pnk Y. Farzan and S. Palomares-Ruiz, Phys. Rev. D 99 (2019) no.5, 051702 doi:10.1103/PhysRevD.99.051702 [arXiv:1810.00892 [hep-ph]]. Gherghetta:2023myo T. Gherghetta and A. Shkerin, Phys. Rev. D 108 (2023) no.9, 9 doi:10.1103/PhysRevD.108.095009 [arXiv:2305.06441 [hep-ph]]. Arguelles:2023wvf C. A. Argüelles, K. Farrag and T. Katori, PoS ICRC2023 (2023), 1415 doi:10.22323/1.444.1415 [arXiv:2402.18126 [hep-ph]]. Arguelles:2023jkh C. A. Argüelles, K. Farrag and T. 
Katori, doi:10.1142/9789811275388_0047 [arXiv:2401.15716 [hep-ph]]. Lambiase:2023hpq G. Lambiase and T. K. Poddar, JCAP 01 (2024), 069 doi:10.1088/1475-7516/2024/01/069 [arXiv:2307.05229 [hep-ph]]. Cordero:2023hua R. Cordero and L. A. Delgadillo, Phys. Lett. B 853 (2024), 138687 doi:10.1016/j.physletb.2024.138687 [arXiv:2312.16320 [hep-ph]]. Arguelles:2024cjj C. A. Argüelles, K. Farrag and T. Katori, [arXiv:2404.10926 [hep-ph]]. Kostelecky:2003cr V. A. Kostelecky and M. Mewes, Phys. Rev. D 69 (2004), 016005 doi:10.1103/PhysRevD.69.016005 [arXiv:hep-ph/0309025 [hep-ph]]. Kostelecky:2011gq A. Kostelecky and M. Mewes, Phys. Rev. D 85 (2012), 096005 doi:10.1103/PhysRevD.85.096005 [arXiv:1112.6395 [hep-ph]]. Diaz:2016xpw J. S. Diaz, Symmetry 8 (2016) no.10, 105 doi:10.3390/sym8100105 [arXiv:1609.09474 [hep-ph]]. Torri:2020dec M. D. C. Torri, Universe 6 (2020) no.3, 37 doi:10.3390/universe6030037 [arXiv:2110.09186 [hep-ph]]. Moura:2022dev C. A. Moura and F. Rossi-Torres, Universe 8 (2022) no.1, 42 doi:10.3390/universe8010042 Barenboim:2022rqu G. Barenboim, Front. in Phys. 10 (2022), 813753 doi:10.3389/fphy.2022.813753 Murayama:2000hm H. Murayama and T. Yanagida, Phys. Lett. B 520 (2001), 263-268 doi:10.1016/S0370-2693(01)01136-4 [arXiv:hep-ph/0010178 [hep-ph]]. Barenboim:2001ac G. Barenboim, L. Borissov, J. D. Lykken and A. Y. Smirnov, JHEP 10 (2002), 001 doi:10.1088/1126-6708/2002/10/001 [arXiv:hep-ph/0108199 [hep-ph]]. Barenboim:2004wu G. Barenboim and N. E. Mavromatos, JHEP 01 (2005), 034 doi:10.1088/1126-6708/2005/01/034 [arXiv:hep-ph/0404014 [hep-ph]]. Kostelecky:2004hg V. A. Kostelecky and M. Mewes, Phys. Rev. D 70 (2004), 076002 doi:10.1103/PhysRevD.70.076002 [arXiv:hep-ph/0406255 [hep-ph]]. MiniBooNE:2011pix A. A. Aguilar-Arevalo et al. [MiniBooNE], Phys. Lett. B 718 (2013), 1303-1308 doi:10.1016/j.physletb.2012.12.020 [arXiv:1109.3480 [hep-ex]]. Katori:2012pe T. Katori [MiniBooNE], Mod. Phys. Lett. A 27 (2012), 1230024 doi:10.1142/S0217732312300248 [arXiv:1206.6915 [hep-ex]]. Bahcall:2002ia J. N. Bahcall, V. Barger and D. Marfatia, Phys. Lett. B 534 (2002), 120-123 doi:10.1016/S0370-2693(02)01714-8 [arXiv:hep-ph/0201211 [hep-ph]]. DoubleChooz:2012eiq Y. Abe et al. [Double Chooz], Phys. Rev. D 86 (2012), 112009 doi:10.1103/PhysRevD.86.112009 [arXiv:1209.5810 [hep-ex]]. MINOS:2008fnv P. Adamson et al. [MINOS], Phys. Rev. Lett. 101 (2008), 151601 doi:10.1103/PhysRevLett.101.151601 [arXiv:0806.4945 [hep-ex]]. MINOS:2010kat P. Adamson et al. [MINOS], Phys. Rev. Lett. 105 (2010), 151601 doi:10.1103/PhysRevLett.105.151601 [arXiv:1007.2791 [hep-ex]]. IceCube:2010fyu R. Abbasi et al. [IceCube], Phys. Rev. D 82 (2010), 112003 doi:10.1103/PhysRevD.82.112003 [arXiv:1010.4096 [astro-ph.HE]]. Super-Kamiokande:2014exs K. Abe et al. [Super-Kamiokande], Phys. Rev. D 91 (2015) no.5, 052003 doi:10.1103/PhysRevD.91.052003 [arXiv:1410.4267 [hep-ex]]. SNO:2018mge B. Aharmim et al. [SNO], Phys. Rev. D 98 (2018) no.11, 112013 doi:10.1103/PhysRevD.98.112013 [arXiv:1811.00166 [hep-ex]]. Barenboim:2017ewj G. Barenboim, C. A. Ternes and M. Tórtola, Phys. Lett. B 780 (2018), 631-637 doi:10.1016/j.physletb.2018.03.060 [arXiv:1712.01714 [hep-ph]]. Barenboim:2018ctx G. Barenboim, M. Masud, C. A. Ternes and M. Tórtola, Phys. Lett. B 788 (2019), 308-315 doi:10.1016/j.physletb.2018.11.040 [arXiv:1805.11094 [hep-ph]]. KumarAgarwalla:2019gdj S. Kumar Agarwalla and M. Masud, Eur. Phys. J. C 80 (2020) no.8, 716 doi:10.1140/epjc/s10052-020-8303-1 [arXiv:1912.13306 [hep-ph]]. Rahaman:2021leu U. Rahaman, Eur. Phys. J. 
C 81 (2021) no.9, 792 doi:10.1140/epjc/s10052-021-09598-4 [arXiv:2103.04576 [hep-ph]]. Ngoc:2022uhg T. V. Ngoc, S. Cao, N. T. H. Van and P. T. Quyen, Phys. Rev. D 107 (2023) no.1, 016013 doi:10.1103/PhysRevD.107.016013 [arXiv:2210.13044 [hep-ph]]. Sarker:2023mlz A. Sarker, A. Medhi and M. M. Devi, Eur. Phys. J. C 83 (2023) no.7, 592 doi:10.1140/epjc/s10052-023-11785-4 [arXiv:2302.10456 [hep-ph]]. Raikwal:2023lzk D. Raikwal, S. Choubey and M. Ghosh, Phys. Rev. D 107 (2023) no.11, 115032 doi:10.1103/PhysRevD.107.115032 [arXiv:2303.10892 [hep-ph]]. Sahoo:2021dit S. Sahoo, A. Kumar and S. K. Agarwalla, JHEP 03 (2022), 050 doi:10.1007/JHEP03(2022)050 [arXiv:2110.13207 [hep-ph]]. Agarwalla:2023wft S. K. Agarwalla, S. Das, S. Sahoo and P. Swain, JHEP 07 (2023), 216 doi:10.1007/JHEP07(2023)216 [arXiv:2302.12005 [hep-ph]]. Mishra:2023nqf S. Mishra, S. Shukla, L. Singh and V. Singh, [arXiv:2309.01756 [hep-ph]]. Barenboim:2018lpo G. Barenboim, C. A. Ternes and M. Tórtola, Eur. Phys. J. C 79 (2019) no.5, 390 doi:10.1140/epjc/s10052-019-6900-7 [arXiv:1804.05842 [hep-ph]]. Sahoo:2022nbu S. Sahoo, A. Kumar, S. K. Agarwalla and A. Dighe, Phys. Lett. B 841 (2023), 137949 doi:10.1016/j.physletb.2023.137949 [arXiv:2205.05134 [hep-ph]]. Majhi:2022fed R. Majhi, D. K. Singha, M. Ghosh and R. Mohanta, Phys. Rev. D 107 (2023) no.7, 075036 doi:10.1103/PhysRevD.107.075036 [arXiv:2212.07244 [hep-ph]]. Barenboim:2023krl G. Barenboim, P. Martínez-Miravé, C. A. Ternes and M. Tórtola, Phys. Rev. D 108 (2023) no.3, 035039 doi:10.1103/PhysRevD.108.035039 [arXiv:2305.06384 [hep-ph]]. Pan:2023qln S. Pan, K. Chakraborty and S. Goswami, Eur. Phys. J. C 84 (2024) no.4, 354 doi:10.1140/epjc/s10052-024-12541-y [arXiv:2308.07566 [hep-ph]]. Arguelles:2015dca C. A. Argüelles, T. Katori and J. Salvado, Phys. Rev. Lett. 115 (2015), 161303 doi:10.1103/PhysRevLett.115.161303 [arXiv:1506.02043 [hep-ph]]. Fiza:2022xfw N. Fiza, N. R. Khan Chowdhury and M. Masud, JHEP 01 (2023), 076 doi:10.1007/JHEP01(2023)076 [arXiv:2206.14018 [hep-ph]]. Cordero:2024nho R. Cordero and L. A. Delgadillo, [arXiv:2407.02729 [hep-ph]]. Shukla:2024fnw S. Shukla, S. Mishra, L. Singh and V. Singh, [arXiv:2408.01520 [hep-ph]]. IceCube:2021tdn R. Abbasi et al. [IceCube], Nature Phys. 18 (2022) no.11, 1287-1292 doi:10.1038/s41567-022-01762-1 [arXiv:2111.04654 [hep-ex]]. Testagrossa:2023ukh F. Testagrossa, D. F. G. Fiorillo and M. Bustamante, [arXiv:2310.12215 [astro-ph.HE]]. Telalovic:2023tcb B. Telalovic and M. Bustamante, [arXiv:2310.15224 [astro-ph.HE]]. Diaz:2013saa J. S. Díaz, A. Kostelecký and R. Lehnert, Phys. Rev. D 88 (2013) no.7, 071902 doi:10.1103/PhysRevD.88.071902 [arXiv:1305.4636 [hep-ph]]. Lehnert:2021tbv R. Lehnert, Phys. Lett. B 828 (2022), 137017 doi:10.1016/j.physletb.2022.137017 [arXiv:2112.13803 [hep-ph]]. KATRIN:2022qou M. Aker et al. [KATRIN], Phys. Rev. D 107 (2023) no.8, 082005 doi:10.1103/PhysRevD.107.082005 [arXiv:2207.06326 [nucl-ex]]. EXO-200:2016hbz J. B. Albert et al. [EXO-200], Phys. Rev. D 93 (2016) no.7, 072001 doi:10.1103/PhysRevD.93.072001 [arXiv:1601.07266 [nucl-ex]]. CUPID:2019kto O. Azzolini et al. [CUPID], Phys. Rev. D 100 (2019) no.9, 092002 doi:10.1103/PhysRevD.100.092002 [arXiv:1911.02446 [nucl-ex]]. Diaz:2015aua J. S. Diaz and F. R. Klinkhamer, Phys. Rev. D 93 (2016) no.5, 053004 doi:10.1103/PhysRevD.93.053004 [arXiv:1512.00817 [hep-ph]]. DUNE:2020jqi B. Abi et al. [DUNE], Eur. Phys. J. C 80 (2020) no.10, 978 doi:10.1140/epjc/s10052-020-08456-z [arXiv:2006.16043 [hep-ex]]. Diaz:2009qk J. S. Diaz, V. A. Kostelecky and M. 
Mewes, Phys. Rev. D 80, 076007 (2009) doi:10.1103/PhysRevD.80.076007 [arXiv:0908.1401 [hep-ph]]. Bailey:2006fd Q. G. Bailey and V. A. Kostelecky, Phys. Rev. D 74 (2006), 045001 doi:10.1103/PhysRevD.74.045001 [arXiv:gr-qc/0603030 [gr-qc]]. Huber:2004ka P. Huber, M. Lindner and W. Winter, Comput. Phys. Commun. 167 (2005), 195 doi:10.1016/j.cpc.2005.01.003 [arXiv:hep-ph/0407333 [hep-ph]]. Huber:2007ji P. Huber, J. Kopp, M. Lindner, M. Rolinec and W. Winter, Comput. Phys. Commun. 177 (2007), 432-438 doi:10.1016/j.cpc.2007.05.004 [arXiv:hep-ph/0701187 [hep-ph]]. Kopp:2006wp J. Kopp, Int. J. Mod. Phys. C 19 (2008), 523-548 doi:10.1142/S0129183108012303 [arXiv:physics/0610206 [physics]]. Kopp:2007rz J. Kopp, M. Lindner, T. Ota and J. Sato, [arXiv:0710.1867 [hep-ph]]. DUNE:2021cuw B. Abi et al. [DUNE], [arXiv:2103.04797 [hep-ex]]. DUNE:2020lwj B. Abi et al. [DUNE], JINST 15 (2020) no.08, T08008 doi:10.1088/1748-0221/15/08/T08008 [arXiv:2002.02967 [physics.ins-det]]. DUNE:2020ypp B. Abi et al. [DUNE], [arXiv:2002.03005 [hep-ex]]. Huber:2002mx P. Huber, M. Lindner and W. Winter, Nucl. Phys. B 645 (2002), 3-48 doi:10.1016/S0550-3213(02)00825-8 [arXiv:hep-ph/0204352 [hep-ph]]. deSalas:2020pgw P. F. de Salas, D. V. Forero, S. Gariazzo, P. Martínez-Miravé, O. Mena, C. A. Ternes, M. Tórtola and J. W. F. Valle, JHEP 02 (2021), 071 doi:10.1007/JHEP02(2021)071 [arXiv:2006.11237 [hep-ph]].
http://arxiv.org/abs/2409.03094v1
20240904214711
A Bayesian Optimization through Sequential Monte Carlo and Statistical Physics-Inspired Techniques
[ "Anton Lebedev", "Thomas Warford", "M. Emre Şahin" ]
stat.CO
[ "stat.CO", "cs.DC", "physics.data-an", "G.3; G.4; D.2.12" ]
The Hartree Centre, Keckwick Ln, Warrington, UK {anton.lebedev, emre.sahin}@stfc.ac.uk University of Manchester, M13 9PL, Manchester, UK thomas.warford@student.machester.ac.uk Springer Copyright Notice Copyright (c) 2023 This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. Published in: Lecture Notes in Computer Science 14077, proceedings of the 23rd International Conference, Prague, Czech Republic, July 3 - 5, 2023, Proceedings, Part V. A Bayesian Optimization through Sequential Monte Carlo and Statistical Physics-Inspired Techniques Anton Lebedev^1, Thomas Warford^2, M. Emre Şahin^1 ================================================================================================== § ABSTRACT In this paper, we propose an approach for an application of Bayesian optimization using Sequential Monte Carlo (SMC) and concepts from the statistical physics of classical systems. Our method leverages the power of modern machine learning libraries such as NumPyro and JAX, allowing us to perform Bayesian optimization on multiple platforms, including CPUs, GPUs and TPUs, and to run in parallel. Our approach offers a low barrier to entry for exploring the methods while maintaining high performance. We present a promising direction for developing more efficient and effective techniques for a wide range of optimization problems in diverse fields. § INTRODUCTION Bayesian optimization of ever-growing models has become increasingly important in recent years and significant effort has been invested in achieving a reasonable runtime-to-solution. Unfortunately, most optimization tasks are implemented and optimized within a specific framework, resulting in a single optimized model. The proliferation of such implementations is difficult, as it requires both domain expertise and knowledge of the specific framework and programming language. Probabilistic programming frameworks such as Stan <cit.> and (Num)Pyro <cit.> provide support for efficient optimisation methods such as the Hamiltonian Monte Carlo (HMC) algorithm, which can explore complex high-dimensional probability distributions. These frameworks are powerful tools for statistical analysis and inference. Stan excels at handling complex hierarchical models with ease, which is often challenging in other probabilistic programming frameworks. Its user-friendly interface makes it accessible to those with little experience in Bayesian inference and statistical modelling, making it a popular choice for the accurate and efficient analysis of complex models. It provides domain experts with a performant tool for this task that requires little to no programming knowledge. It achieves this by defining a "scripting language" for models and translating these into C++ code. It suffers, however, from a lack of inherent parallelism and a formulation of its methods in heavily-templated C++.
NumPyro, built on JAX <cit.>, enables efficient exploration of high-dimensional probabilistic models using different methods, while JAX itself combines the flexibility and ease-of-use of NumPy with the power and speed of hardware accelerators for efficient numerical computing. Additionally, JAX offers compatibility and portability by supporting code execution on a variety of hardware. Inspired by HMC and SMC descriptions in <cit.>, we implemented an HMC algorithm in Python using the NumPyro and JAX frameworks, with DeepPPL <cit.> utilized to translate existing Stan models into their Python equivalents. In this paper we present preliminary findings from our implementation of SMC for Bayesian parameter searches developed with its physical origins intact. § METHOD DESCRIPTION Upon review of the SMC algorithm <cit.> and its HMC kernel, it has become apparent that the approach resembles the cooling process of an ideal gas in a potential field with unknown minima, and that the algorithm's development would benefit from an understanding based on physical systems. To facilitate this understanding, we have undertaken a reformulation of the SMC and HMC algorithms to more accurately reflect the particle ensembles of statistical physics. A simple first step was the reintroduction of a temperature into the expressions for probability, seeing as the probabilities used in these methods are the maximum-entropy probabilities of an ensemble (collection) of particles at a fixed energy: p_i = e^-E_i/(k_B T)/∑_j=1^N e^-E_j/(k_B T) . Here k_B is the Boltzmann constant (carrying the dimensions of energy per degree of temperature), T is the temperature, and E_i = ℋ(q_i,p_i) is the energy of particle i at position q_i with momentum p_i, given a Hamiltonian: ℋ(q,p) = p^2/2m + V(q) . As is common, the Hamiltonian encodes the dynamics of the system. Since we seek to determine the parameters that maximise the log-likelihood of the model, the potential can be defined <cit.> as: V(q) := -ln(P(q|X) ) , where P(q|X) is the posterior probability density of the model, X is the vector containing the observations, and q is the (position) vector in parameter space - the parameters of the model. The formulation of (<ref>) commonly used in mathematics implies T = 1/k_B or an arbitrary definition of k_B := 1. The former fixes a degree of freedom of the method, whilst the latter allows for a variation, but removes the dimensionality of the constant which, in conjunction with T, ensures that the argument of the exponential remains a dimensionless number or quantity. Whilst functionally of no consequence, retaining the physical dimensions of the respective quantities allows for sanity checks of formulae during development. The distribution of the "auxiliary momentum" described in <cit.> is naturally dependent on T, with a higher temperature T resulting in a broader distribution of the momenta (faster particles) and a wider area of the initial sample space covered. The HMC process is rather simple and described in <cit.> in sufficient detail. In contrast, sequential Monte-Carlo is provided in a less legible form in <cit.>. Hence we provide here the simple version we turn our attention to: * Initialise the position samples to a normal distribution on ℝ^n and the momentum samples to a normal distribution with temperature T. * Iterate for a given number of SMC steps: * Propagate each particle in the ensemble using HMC.
* Determine the lowest energy for all particles, subtract it from every energy (renormalisation) and store it for future processing. * Compute and store the average parameter value. * Determine the effective size N_𝑒𝑓𝑓𝑒𝑐𝑡𝑖𝑣𝑒 of the ensemble according with <cit.>. * If resampling is necessary select N_𝑒𝑓𝑓𝑒𝑐𝑡𝑖𝑣𝑒 particles with largest weights (lowest energies) and duplicate them according to their probability until the original ensemble size is reached. Then reset the momenta to a thermal distribution and the weights of the particles to 1/N. * Compute the moving average of the stored averages, using the stored energies as weights in accordance with (<ref>). Here we must note the rescaling of the weights by subtraction of the lowest energy, which is physically motivated by the freedom to choose the origin of the energy scale. Similarly, the weighted moving average in the last step results in a smoothing of the excursions of the mean after a resampling step by weighting means with large associated energies exponentially smaller (the higher the energy the more unlikely a configuration is to occur). § NUMERICAL EXPERIMENTS §.§ Models To demonstrate the effectiveness of the implementation we have selected two simple models: * A sequence of M independent tosses of two coins - the Coin Toss (CT) model. * Item Response Model with Two-Parameter Logistic (IRT 2PL). §.§.§ CT model The CT model assumes complete ignorance of the a-priori coin bias p and its maximum a-posteriori probability estimator can be determined formally to be: P̂_MAP = K/N . Here K is the number of observed heads and N = 40 is the total number of observations. This allows us to check the numerical approximation of (<ref>) as well as its gradients, ensuring proper functioning of the implementation. The true parameters of the coin bias for each of the two coins are p_1 = 1/2 , p_2=3/4 . The potential of the CT model, along with a few sample trajectories of the particles propagating therein are shown in fig. <ref> prior to the constraining to the support of the model, and in fig. <ref> after. Here we note that we chose not to invert the momentum in case the new phase-space point (q,p) passes the Metropolis-Hastings acceptance test, contrary to alg. 1 of <cit.>. As can be seen in the figures, such an inversion results in a rather slow sampling of the potential in the case of a smooth potential. It is, however, beneficial for a rough potential and hence likely better in most practical applications. §.§.§ IRT 2PL Item Response Theory (IRT) <cit.> is a statistical model that is widely used in a variety of research fields to analyze item responses in assessments or surveys. The two-parameter logistic (2PL) model is a specific type of IRT model that assumes each item has two parameters: the difficulty parameter and the discrimination parameter. The model assumes that the item responses y_i for i = 1, …, I are Bernoulli distributed with a logit link function. The probability of responding correctly to item i is given by: P(y_i=1 | θ, a_i, b_i) = 1/1 + exp(-a_i(θ - b_i)) where θ is the person's latent ability, a_i is the discrimination parameter for item i, and b_i is the difficulty parameter for item i. §.§ Experimental Results Given the physical interpretation of the SMC iteration and the weighted average derived therefrom we expected - a-priori - a rather rapid and smooth convergence towards the true parameter values. 
Given a smooth potential, as in the case of the CT model, it is furthermore expected that convergence to the estimated parameters is faster for lower temperatures T. §.§.§ Quality of Estimation As can be seen in fig. <ref>, all estimates of the bias of a fair coin converge within 5 iterations to the true value 1/2 (indicated by the dashed red line). In case of a rather high temperature (more precisely: thermal energy) of T = 10 1/k_B, the apparent limit value deviates noticeably from the true value; given the remarks above, this is not unexpected and will be remedied in a future iteration of the method. In the case of the biased coin, with p_2 = 3/4, it becomes obvious that the current development stage of the method may suffer from a freeze-in (c.f. the T=1 case) or from a bias in the implementation. For comparison, results of a reference (non-public) implementation are marked as '+', showing a similar convergence behaviour but without the offsets. §.§.§ Performance - HMC Although a reference implementation is available for comparison, its drawbacks are its complexity and limited execution architectures. Our development utilises JAX and NumPyro, allowing us to run the method for variable models on a variety of architectures. Here we present the performance data obtained for the pure HMC implementation on different architectures. One can observe in fig. <ref> the behaviour of the run time when running our version of HMC on multiple CPUs, parallelised via MPI. For the CT model, run times for up to 2048 particles are dominated by communication and data management overhead. This is a surprisingly small number, given the simplicity of the model and the limited amount of observed data fed into it. Overall the run time growth is sub-linear up until 65538 particles for 1 CPU. The resulting speed-up of using more than one CPU is depicted in fig. <ref>, demonstrating a sub-linear increase even with two CPUs. This observation confirms that the model is too small to scale effectively, at least for fewer than 128k particles (c.f. the IRT model below). In contrast to the CT model is the IRT model, whose runtimes are displayed in fig. <ref>. One can immediately see that the run time is dominated by overhead only below 256 particles per device (here: CPU). It is also important to note that the run time using 4 CPU cores is larger than when only 3 are utilised. We attribute this to the apparent and unintentional spawning of multiple Python processes per MPI process. Our observations show that each MPI process spawns roughly 2 Python processes. This will be a point of future investigation. This observation explains the apparent super-linear speed-up for the CPUs displayed in fig. <ref>. One can also observe the order of magnitude reduction in the overall run time in the case of 65538 particles, when one GPU is used. The sub-linear speed-up from 1 GPU to 2 GPUs can here be explained by the fact that the devices were asymmetrically bound to the system (PCIE-x16 vs. PCIE-x8) as well as that two devices of different generations were used: a GTX 1080 Ti and an RTX 3060. The latter point shows the flexibility of JAX and NumPyro, which allowed us to run the same method on multiple CPU cores and multiple, heterogeneous, GPUs using MPI without having to modify the code!
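To make the temperature-dependent weighting, effective-size check, and resampling steps of the SMC iteration described in Section 2 concrete, the following minimal NumPy sketch illustrates one way they could be computed. The function names and the resampling threshold of half the ensemble size are our own illustrative choices (the actual implementation uses JAX/NumPyro); this is a sketch, not code from the published implementation.

```python
import numpy as np

def boltzmann_weights(energies, kT):
    """Boltzmann weights of the ensemble, renormalised by the lowest energy.

    Subtracting the minimum energy uses the freedom to choose the origin of
    the energy scale and avoids numerical underflow in the exponentials.
    """
    shifted = energies - energies.min()
    w = np.exp(-shifted / kT)
    return w / w.sum()

def effective_size(weights):
    """Standard effective-sample-size estimate, 1 / sum(w_i^2)."""
    return 1.0 / np.sum(weights ** 2)

def resample_if_needed(positions, energies, kT, rng, frac=0.5):
    """Duplicate high-weight (low-energy) particles when the ensemble degenerates.

    After resampling, momenta would be redrawn from a thermal (normal)
    distribution at temperature kT and all weights reset to 1/N.
    """
    n = len(energies)
    w = boltzmann_weights(energies, kT)
    if effective_size(w) < frac * n:
        idx = rng.choice(n, size=n, p=w)   # multinomial resampling
        positions, energies = positions[idx], energies[idx]
    return positions, energies

# Illustrative usage: resampling an ensemble and forming an energy-weighted
# average of per-iteration mean estimates, as in the last SMC step above.
rng = np.random.default_rng(0)
positions = rng.normal(size=(1000, 2))
energies = rng.normal(5.0, 1.0, size=1000)
positions, energies = resample_if_needed(positions, energies, kT=1.0, rng=rng)
iteration_means = rng.normal(0.5, 0.05, size=20)
iteration_energies = rng.normal(5.0, 1.0, size=20)
w = boltzmann_weights(iteration_energies, kT=1.0)
print("weighted running estimate:", np.sum(w * iteration_means))
```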
§ CONCLUSIONS AND FUTURE WORK In conclusion, we have demonstrated that using the NumPyro and JAX frameworks along with intuition from classical mechanics and statistical physics, it is possible to re-create an SMC Bayesian optimisation process, whilst enriching it with an intuitive understanding of the respective steps. The selected framework enabled us to create a simple, easy to maintain, implementation of HMC and SMC that can be parallelised to multiple computing devices of varying architectures on demand. Our implementation outperforms a similar implementation in Stan by a factor of ∼ 2 on a single CPU (core) and scales well to multiple CPUs/GPUs, with larger gains obtained for larger models (or models with more observation data) and more sampling particles. In the near future, we plan to extend MPI parallelism from the core HMC stage to the entire SMC iteration, as well as perform a thorough performance analysis and optimisation of our code, since preliminary checks indicate the existence of, e.g., unnecessary host-device data transfers. On the theoretical side we plan to continue the reformulation of SMC in the language of statistical physics and expect the possibility to include maximum-entropy methods and thermalisation/annealing into the framework, utilising the long history of MC in physics <cit.>. Having a formulation of SMC that utilises terminology of (statistical) physics we hope to be able to extend the approach from the classical onto the quantum domain. This holds the potential of including existing quantum resources into Bayesian optimisation processes without requiring the user to know the intricacies of the new architecture. § ACKNOWLEDGEMENTS During PRACE Summer of High-Performance Computing 2022, TW was able to implement parallel HMC using the discussed formulations and included correctness checks based on physical systems and models with known closed-form solutions in the implementation.
http://arxiv.org/abs/2409.03352v1
20240905085522
On-orbit calibration and long-term performance of the DAMPE trigger system
[ "Wen-Hao Li", "Chuan Yue", "Yong-Qiang Zhang", "Jian-Hua Guo", "Qiang Yuan" ]
physics.ins-det
[ "physics.ins-det", "astro-ph.IM", "nucl-ex" ]
On-orbit calibration and long-term performance of the DAMPE trigger system
Wen-Hao Li^1,2, Chuan Yue^1,*, Yong-Qiang Zhang^1, Jian-Hua Guo^1,2, Qiang Yuan^1,2
^1 Key Laboratory of Dark Matter and Space Astronomy, Purple Mountain Observatory, Chinese Academy of Sciences, Nanjing 210023, Jiangsu Province, China
^2 School of Astronomy and Space Science, University of Science and Technology of China, Hefei 230026, Anhui Province, China
^* Corresponding author: yuechuan@pmo.ac.cn
§ ABSTRACT The DArk Matter Particle Explorer (DAMPE) is a satellite-borne particle detector for measurements of high-energy cosmic rays and γ-rays. DAMPE has been operating smoothly in space for more than 8 years since launch on December 17, 2015. The trigger logic of DAMPE is designed according to the deposited energy information recorded by the calorimeter. The precise calibration of the trigger thresholds and their long-term evolutions are very important for the scientific analysis of DAMPE. In this work, we develop a new method for the threshold calibration, considering the influence from the electronics noise, and obtain the long-term evolutions of the trigger thresholds. The average increase rate of the trigger thresholds for the first 4 layers of the calorimeter is found to be ∼0.9% per year, resulting in variations of the high-energy trigger efficiency of cosmic ray electrons by ∼ -5% per year at 2 GeV and less than ∼ -0.05% above 30 GeV. Keywords: DAMPE, trigger threshold calibration, trigger efficiency § INTRODUCTION The origin of cosmic rays (CRs) and the nature of dark matter (DM) are very important fundamental questions in physics and astronomy. The DArk Matter Particle Explorer (DAMPE; also known as “Wukong”) is a space particle detector which is dedicated to studies of CR physics and the indirect detection of DM via precise observations of high-energy CRs and γ-rays <cit.>. On December 17, 2015, DAMPE was launched into a sun-synchronous orbit at an altitude of 500 km, and has been working smoothly for more than 8 years since then. The on-orbit performance of the DAMPE detector is very good and stable <cit.>, enabling us to achieve precise measurements of the electron plus positron spectrum <cit.>, proton spectrum <cit.>, helium spectrum <cit.>, boron-to-carbon and boron-to-oxygen ratios <cit.>, and the Forbush decreases associated with coronal mass ejections <cit.>. DAMPE also detects a number of γ-ray sources <cit.>, and gives the most stringent limits on monochromatic γ-ray line emission thanks to its excellent energy resolution <cit.>.
A schematic view of the DAMPE payload with an example of an electromagnetic shower in the four subdetectors is shown in Figure <ref>. Combining the signals from these four sub-detectors, one can measure the charge, direction, and energy of incident particles, and distinguish electrons/photons from protons mainly using the shower topologies in the BGO calorimeter. Signals from the four sub-detectors are controlled by the trigger system and synchronized in the data acquisition (DAQ) system <cit.>. The trigger system is designed to simultaneously achieve a high efficiency for target particles and a low global event rate to reduce the dead time of the electronics <cit.>. The trigger efficiency, which depends strongly on the accuracy and stability of the calibrated trigger thresholds, is one of the most important quantities for precise flux measurements. Different from simply setting the threshold to the analog-to-digital converter (ADC) value corresponding to half of the maximum count <cit.>, we develop a new and more precise approach to determine the trigger thresholds based on the signal profile. Furthermore, we investigate the long-term variations of the thresholds as well as the consequence on the electron flux measurements in this study. § THE TRIGGER SYSTEM OF DAMPE The trigger system of DAMPE is operated by the joint work of the BGO calorimeter and the trigger board. Figure <ref> shows a brief trigger logic scheme of DAMPE. The external trigger is utilized in ground calibration before the launch (e.g., the beam test). The internal trigger is used for the on-orbit calibration of the detector. In the scientific operation mode, four event trigger logics, i.e., Unbiased Trigger, Minimum Ionizing Particle (MIP) Trigger, High Energy Trigger (HET), and Low Energy Trigger (LET), are used to record different types of events. A detailed trigger logic design can be found in <cit.>. For different event trigger logics, different thresholds are set for different VATA chips in the front-end electronics (FEEs) of the BGO calorimeter. The final threshold is a combination of a common threshold (Vthr) and a DAC threshold, as shown in Figure <ref>. When the electronics signal amplitude in a single BGO crystal exceeds the threshold, a hit signal is generated and sent to the trigger board in the DAQ crate. If the hit signals from different layers satisfy a certain type of trigger logic, the trigger board makes a decision and sends a trigger signal to the FEEs of each sub-detector to activate the digitization procedure. After that, the FEEs pack the data with zero-suppression and send them to the DAQ system. A detailed introduction to the design of the readout electronics for the BGO calorimeter of DAMPE can be found in <cit.>, and details about the DAQ system of DAMPE can be found in <cit.> and <cit.>. The pedestals can be affected by many factors, and temperature is one of the main ones. Figure <ref> gives the long-term trends of the temperature of layer 1 bar 3 (top panel), and the evolution of the mean values (middle panel) and widths (bottom panel) of the pedestals for the same channel of the BGO. The relatively short-period variations of the mean values clearly reflect the temperature effect on the detector <cit.>. In addition, a decreasing trend over the years can also be seen, mainly due to electronics aging and an increase in detector temperature. The standard deviations remain relatively stable during the operation.
Therefore, precisely calibrating the thresholds is very important for calculating the detection efficiency and the particle fluxes. Among the four event trigger logics, the HET, which is the most widely used for scientific analyses, is designed to record high energy CR or photon events with a good shower development in the BGO calorimeter. A set of high energy thresholds in the top four BGO layers is required to initiate the HET. In this work we will focus on the discussion of the HET. § THRESHOLD CALIBRATION METHOD To cover a wide dynamic range of about 10^6 for each BGO crystal, a photomultiplier tube (PMT) with multi-dynode readouts is coupled to each end <cit.>. The scintillation light signal is read out from three different sensitive dynodes 2, 5, and 8 (Dy2, Dy5, and Dy8) of the PMT, which correspond to the low-gain, medium-gain, and high-gain channels, respectively. For the HET logic, specifically, the output signals from the Dy5 on the positive end (P5 channel) from the top three layers and the Dy8 on the negative end (N8 channel) from the fourth layer are employed for the hit generation <cit.>. To precisely calibrate the signal thresholds, the specific channels that are used for the trigger decision must first be identified. As long as the electronics signal amplitude of a specific channel in one BGO bar of the top four layers exceeds the corresponding threshold, this channel is the target of interest. Using the on-orbit data, we can obtain the readout signal distribution for such a target channel. Ideally, the ADC distribution should show a "cutoff" at the trigger threshold position, as illustrated by the red line in the left panel of Figure <ref>. However, since there is electronics noise (see the bottom panel of Figure <ref>), the pedestal-subtracted ADC distribution shows a gradual rising shape. This shape is caused by the electronic noise; it influences only the trigger efficiency around the threshold, but does not change the value of the threshold. Assuming a Gaussian distribution of the pedestal, the resulting signal distribution can thus be described as f(x) = f_org(x) ∫_x_0-x^∞ 1/(√(2π)σ) e^-x'^2/(2σ^2) dx', where f_org(x) is the original signal distribution around the threshold cutoff, x_0 is the detector threshold, and σ is the standard deviation of the pedestal. The integral of the Gaussian function represents the probability that the fluctuation of the pedestal can compensate the signal and make the total readout exceed the threshold. The above formula reduces to f(x) = f_org(x) H(x-x_0) in the ideal case when σ→ 0, where H(x-x_0) is the step function. § RESULTS AND DISCUSSION §.§ Trigger thresholds We identify the specific channels that participate in the trigger decision for each HET event. After that, we get the ADC distribution of each channel using events accumulated in one day. Then the trigger threshold of each channel is obtained by fitting the readout signal distribution with Eq. (<ref>), assuming a power-law form of f_org(x). In a wide energy range, the spectrum of cosmic rays is not a simple power law due to the geomagnetic field. However, in a relatively narrow range around the threshold, a power law function may well approximate the spectral behavior.
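As an illustration of how the smeared-threshold model above could be fitted in practice, the following Python sketch fits a power law multiplied by the Gaussian integral (equivalently, a normal CDF) to a binned, pedestal-subtracted ADC distribution. The function names and the initial parameter guesses are our own placeholders, not values from the actual DAMPE calibration code.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def threshold_model(x, amp, gamma, x0, sigma):
    """Power-law spectrum around the threshold, multiplied by the probability
    that a Gaussian pedestal fluctuation lifts the signal above threshold x0.

    The integral of the Gaussian from (x0 - x) to infinity equals the normal
    CDF evaluated at (x - x0) / sigma.
    """
    return amp * np.power(x, -gamma) * norm.cdf((x - x0) / sigma)

def fit_threshold(bin_centers, counts, p0=(1e5, 2.0, 300.0, 5.0)):
    """Fit the rising edge of the ADC distribution and return the threshold.

    Returns the fitted threshold x0 (in ADC counts) and its 1-sigma uncertainty.
    """
    popt, pcov = curve_fit(threshold_model, bin_centers, counts, p0=p0)
    return popt[2], float(np.sqrt(pcov[2, 2]))
```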
The ADC peak position for each channel is found by an adaptive pre-search, and the upper limit of the fit is set a little higher than the peak position, as we assume the ADC distribution follows a power law within a reasonable signal range. The ADC distributions of on-orbit data and the fitting results for four channels are shown in Figure <ref>. The rising profile of the signal distribution around the threshold can be well fitted by the theoretical function, Eq. (<ref>). It should be noted that the power law variables are all set as free parameters in the fit of the daily data, considering the solar modulation of the cosmic ray spectrum. The uncertainty of the fitted threshold value is typically ∼ 0.2 ADC, which is 5 times smaller than the uncertainty of ∼ 1 ADC from the direct counting method. One can note from Figure <ref> that there are some discrete spikes in the ADC distributions, which are from the differential nonlinearity of the AD976 chip <cit.> implemented in the electronics readout of the PMT. These spikes differ from one another. Statistically they can be averaged when calculating the total energy deposit for an event, so such an effect does not result in a significant bias on the energy measurement. However, if the spikes are located in the rising part of the distribution, they will strongly affect the threshold determined with the traditional method, i.e., the ADC value at half of the maximum. On the other hand, the fitting method is less sensitive to those spikes, and the obtained threshold value is more robust. The time evolution of the trigger thresholds for one channel in each of the first four layers is given in Figure <ref>, from January 2016 to June 2022. The results derived with the traditional method <cit.> are also shown for comparison. The traditional method regards the spectrum before the threshold as a constant value, which is unreasonable. Therefore, we use a power-law function to represent the cosmic ray spectrum around the threshold within a reasonable range. Our new method gives more accurate thresholds with notably smaller scatter compared with the old method. Besides the relatively short-term variations which mainly correlate with temperature <cit.>, long-term declines of the thresholds for all four layers are visible. To evaluate the long-term variation more precisely, we convert the trigger thresholds in ADC units to deposited energy (MeV), considering the gain of the PMTs and removing the temperature effect[There is a strong correlation between the scintillation light yield and temperature, which results in variations of the ADC values of the proton MIP spectrum. Through re-scaling the peak ADC value to the expected peak energy, approximately 23 MeV, we can effectively eliminate the temperature effect.]. In daily on-orbit calibrations, the PMT gain is characterized by the peak of the proton MIP spectrum of each Dy8 channel <cit.>. The trigger threshold energy is thereby calculated from Dy8 as E_tri = ADC_tri · E_MIP / ADC_MIP. For the Dy5 channel, the ADC value needs to be converted to the Dy8 ADC value with the dynode-58 conversion parameters, and the trigger threshold is thereby calculated as E_tri = (ADC_tri · k_58 + b_58) · E_MIP / ADC_MIP, where k_58 and b_58 are the slope and the intercept of the dynode-58 relation, ADC_MIP is the peak ADC value of the on-orbit proton MIP spectrum, and E_MIP is the peak energy of the simulated proton MIP spectrum. The long-term evolutions of the trigger threshold energy are shown in Figure <ref>.
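The conversion of a fitted ADC threshold into a deposited-energy threshold, as given by the two relations above, is simple arithmetic; a hedged sketch follows, where the argument names are ours and the dynode-58 parameters and MIP peak values are assumed to come from the daily on-orbit calibration.

```python
def threshold_energy_dy8(adc_tri, adc_mip, e_mip=23.0):
    """Threshold energy (MeV) for a Dy8 channel: E_tri = ADC_tri * E_MIP / ADC_MIP."""
    return adc_tri * e_mip / adc_mip

def threshold_energy_dy5(adc_tri, k58, b58, adc_mip, e_mip=23.0):
    """Threshold energy (MeV) for a Dy5 channel.

    The Dy5 reading is first converted to an equivalent Dy8 ADC value through
    the dynode-58 relation (slope k58, intercept b58) before applying the
    proton-MIP energy scale (peak energy of roughly 23 MeV).
    """
    return (adc_tri * k58 + b58) * e_mip / adc_mip
```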
We find that the threshold ADC of the first three layers is half of that of the fourth layer, while the threshold energy is about five times that of the fourth layer. This is because the desired threshold is 10 MIPs for the first three layers and 2 MIPs for the fourth layer, so the thresholds in MeV units differ by a factor of 5. However, the difference of the thresholds in ADC values is a combined result of the trigger logics and of the photo-attenuation sheets placed on different sides. The high energy trigger logic set is (L1 P5 & L2 P5 & L3 P5 & L4 N8), and the gain of Dy8 is about 30 times larger than that of Dy5. The attenuation coefficient of the photo-attenuation sheet attached to the positive side is about 5 times smaller than that of the negative side. All these effects lead to the differences of the threshold signals between the first 3 layers and the 4th layer. The threshold energy increases with time, as shown in Figure <ref>; consequently, particles with the same energy need a larger upward pedestal fluctuation to exceed the threshold, and hence the trigger efficiency becomes lower. Using a straight line to fit the long-term evolution, we obtain the relative variation rate of each channel for each bar of the first four layers, as shown in Figure <ref>. For most of the channels, the trigger threshold shows a positive increase rate. The average increase rates for different layers are given in Table 1. We can see that for deeper layers the average increase rate seems smaller; however, the change rate of different bars in the same layer varies considerably, which causes a relatively large error on the average increase rate, so the differences between the first three layers are not very significant. The overall increase rate of the trigger threshold energy is ∼0.9%/year, resulting in a decrease of the on-orbit trigger efficiency of DAMPE. Different from the threshold ADC results shown in Figure <ref>, the threshold energy increases with time. The general increase of the trigger thresholds is primarily due to a combined influence from the aging of the BGO crystals and the related electronics. The relevant reasons can be summarized as follows: (1) the radiation damage of the BGO crystals, causing a decrease of the scintillation light production; (2) the decay of the PMT gain; (3) the long-term changes of the electronics. In addition, the on-orbit temperature of the BGO shows an increasing trend, which could induce a long-term change in the scintillation light production of the BGO crystals, thereby affecting the trigger threshold. These effects are mixed together and result in the long-term variations of the trigger thresholds, so it is very difficult to precisely calculate the influence of each individual cause. §.§ Impact on electron trigger efficiency Taking the HET efficiency for electrons as an example, we study the impact of variations of the trigger thresholds on the efficiency calculation using the Monte Carlo (MC) simulation data. The MC simulation is performed with the GEANT4 toolkit <cit.> based on an accurate geometric model including both the payload and the satellite platform. In general, the simulation process includes particle generation, transportation simulation, digitization and reconstruction <cit.>. In the digitization, we add the electronic responses and convert the raw hits in each detection unit into digital signals, which have the same format as the raw flight data. The real trigger logics with the energy thresholds (MeV) as obtained above are then applied to the digitized data.
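To illustrate how the HET logic and the calibrated energy thresholds translate into an efficiency estimate on the digitized simulation, the sketch below applies the (L1 P5 & L2 P5 & L3 P5 & L4 N8) condition to per-layer trigger-channel deposits. The array layout and function names are illustrative assumptions on our part, not the actual DAMPE analysis code.

```python
import numpy as np

def het_fired(max_edep_trigger_channels, thresholds):
    """High Energy Trigger decision for a single event.

    max_edep_trigger_channels: for each of the first four BGO layers, the
    largest energy deposit (MeV) seen by the trigger channels (P5 for layers
    1-3, N8 for layer 4). A layer hit is generated when this exceeds the
    calibrated threshold, and the HET requires hits in all four layers.
    """
    return bool(np.all(max_edep_trigger_channels[:4] > thresholds[:4]))

def het_efficiency(mc_events, thresholds):
    """Fraction of simulated electron events that satisfy the HET condition."""
    decisions = [het_fired(ev, thresholds) for ev in mc_events]
    return float(np.mean(decisions))
```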
By applying the trigger thresholds of specific times to the MC data, the long-term variation of the trigger efficiency can be estimated. The time dependence of the HET efficiency for MC electrons in different energy bins is shown in Figure <ref>. For all energy bins, the HET efficiency shows a clear decreasing trend, as expected from Figure <ref> since the threshold energy increases with time. The change of the trigger efficiency can be well described by a linear function (see the red solid lines in Figure <ref>). The energy dependence of the variation rate of the HET efficiency is shown in Figure <ref>. The variation rate is ∼-5%/year at 2 GeV, -0.15%/year at 20 GeV, and decreases to -0.05%/year above 30 GeV. The measurement of the CR electron spectrum above 20 GeV is thus not significantly affected by the time variations of the trigger thresholds. However, for the low-energy measurements, such as the time variations of cosmic-ray electron fluxes associated with solar activity <cit.> and the long-term analyses of GeV γ-ray sources, the evolution of the trigger efficiency must be properly addressed. § CONCLUSION The on-orbit trigger system of DAMPE has been working stably since the launch. The trigger threshold calibration is a key issue for the precise calculation of the trigger efficiency. In this work, we develop an adaptive fitting method for the trigger threshold calibration by considering the influence of the electronics noise. Compared with the traditional method, which simply adopts the ADC value at half of the maximum number of counts, this new method is more precise and less sensitive to the differential nonlinearity of the chips. We obtain the trigger thresholds of the HET and their long-term evolution. The results show that the threshold energy mostly shows an increasing trend with time, with an average increase rate of ∼0.9%/year. The change rates for the four layers utilized in the HET indicate that such long-term evolution could be caused by several factors, such as changes in the related electronics and radiation damage in space. As a result, the HET efficiency of electrons shows a decreasing trend. The time variation rate of the HET efficiency is ∼-5%/year at 2 GeV and -0.15%/year at 20 GeV. The variation of the high energy trigger efficiency needs to be taken into account in analyses that rely on the high energy trigger, such as γ-ray analyses and observations of fine time structures in the cosmic ray fluxes. § DECLARATION OF COMPETING INTEREST The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. § DATA AVAILABILITY Data will be made available on request. § ACKNOWLEDGEMENTS This work is supported by the National Key Research and Development Program of China (No. 2022YFF0503301), the National Natural Science Foundation of China (Nos. 12220101003, 12173099, 12003075, 12227805), the CAS Project for Young Scientists in Basic Research (No. YSBR-061, YSBR-092), the Strategic Priority Program on Space Science of Chinese Academy of Sciences (No. E02212A02S), the Youth Innovation Promotion Association CAS, the Young Elite Scientists Sponsorship Program by CAST (No. YESS20220197). [Chang(2014)]Chang2014J.
Chang, Dark Matter Particle Explorer: The First Chinese Cosmic Ray and Hard γ-ray Detector in Space, Chinese Journal of Space Science 34 (5) (2014) 550-557, https://www.sciengine.com/CJSS/doi/10.11728/cjss2014.05.550;JSESSIONID=5ff3b494-ed90-4aab-a2ca-6d08a079b3ef [Chang et al.(2017)]Chang2017J. Chang, G. Ambrosi, Q. An, et al., The DArk Matter Particle Explorer mission, Astropart. Phys. 95 (2017) 6-24, https:// doi.org/10.48550 /arXiv.1706.08453 [Ambrosi et al.(2019)]Ambrosi2019G. Ambrosi, et al., The on-orbit calibration of DArk Matter Particle Explorer, Astropart. Phys. 106 (2019) 18-24, https://doi.org/10.1016/ j.astropartphys.2018.10.006. [Dong et al.(2019)]Dong2019T.-K. Dong, et al., Charge measurement of cosmic ray nuclei with the plastic scintillator detector of DAMPE, Astropart. Phys. 105 (2019) 31-36, https://doi.org/10.1016/j.astropartphys.2018.10.001. [Ma et al.(2019)]Ma2019P.-X. Ma et al., A method for aligning the plastic scintillator detector on DAMPE, Res. Astron. Astrophys. 019 (2019) 082, DOI 10.1088/1674-4527/19/6/82. [Tykhonov et al.(2018)]Tykhonov2018A. Tykhonov et al., Internal alignment and position resolution of the silicon tracker of DAMPE determined with orbit data, Nucl. Instrum. Methods Phys. Res. A 893 (2018) 43-56, https://doi.org/10.1016/j.nima.2018.02.105. [Zhang et al.(2016)]Zhang2016 Z.-Y. Zhang, et al. The calibration and electron energy reconstruction of the BGO ECAL of the DAMPE detector. Nucl. Instrum. Methods Phys. Res. A, 2016, 836, 98-104, https://doi.org/10.1016/j.nima.2016.08.015. [Ambrosi et al.(2017)]Ambrosi2017G. Ambrosi et al. (DAMPE collaboration), Direct detection of a break in the teraelectronvolt cosmic-ray spectrum of electrons and positrons, Nature 552 (2017) 63–66, https://doi.org/10.1038/nature24475. [An et al.(2019)]An2019Q. An, et al. (DAMPE Collaborabtion), Measurement of the cosmic-ray proton spectrum from 40 GeV to 100 TeV with the DAMPE satellite, Sci. Adv. 5 (2019) , eaax3793, arXiv: 1909.12860 [Alemanno et al.(2021a)]Alemanno2021aF. Alemanno et al. (DAMPE collaboration), Measurement of the Cosmic Ray Helium Energy Spectrum from 70 GeV to 80 TeV with the DAMPE Space Mission, Phys. Rev. Lett. 126 201102 (2021), https://link.aps.org/doi/10.1103/PhysRevLett.126.201102. [Alemanno et al.(2022b)]Alemanno2022bF. Alemanno et al. (DAMPE collaboration), Detection of spectral hardenings in cosmic-ray boron-to-carbon and boron-to-oxygen flux ratios with DAMPE, Sci. Bull. 67 2162 (2022), https://www.sciencedirect.com/science/article/pii/S2095927322004492. [Alemanno et al.(2021b)]Alemanno2021bF. Alemanno et al. (DAMPE collaboration), Observations of Forbush Decreases of Cosmic-Ray Electrons and Positrons with the Dark Matter Particle Explorer, Astrophys. J. Letters 920 L43 (2021), doi: 10.3847/2041-8213/ac2de6. [Xu et al.(2018)]Xu2018Z.-L. Xu et al, An algorithm to resolve γ-rays from charged cosmic rays with DAMPE, Res. Astron. Astrophys. 18 27 (2018). https://doi.org/10.1088/1674–4527/18/3/27. [Duan et al.(2021)]Duan2021K.-K. Duan et al. (DAMPE collaboration), Observations of gamma-ray sources with DAMPE, in: the 37th International Cosmic Ray Conference (ICRC 2021), Berlin , Germany, 2021, 631. [Shen et al.(2021)]Shen2021Z.-Q. Shen et al. (DAMPE collaboration), Analyzing the Fermi Bubbles with DArk Matter Particle Explorer, in: the 37th International Cosmic Ray Conference (ICRC 2021), Berlin , Germany, 2021, 640. [Alemanno et al.(2022a)]Alemanno2022aF. Alemanno et al. 
(DAMPE collaboration), Search for gamma-ray spectral lines with the DArk Matter Particle Explorer, Sci. Bull. 67 679 (2022), https://doi.org/10.1016/j.scib.2021.12.015. [Cheng et al.(2023)]Cheng2023J.-G. Cheng, Y.-F. Liang, E.-W. Liang, Search for the gamma-ray spectral lines with the DAMPE and the Fermi-LAT observations, Phys. Rev. D 108 063015 (2023). https://journals.aps.org/prd/abstract/10.1103/PhysRevD.108.063015. [Yu et al.(2017)]Yu2017Y.-H. Yu et al., The plastic scintillator detector for DAMPE, Astropart. Phys. 94 1 (2017), https://www.sciencedirect.com/science /article/pii/S0927650516302031. [Azzarello et al.(2016)]Azzarello2016P. Azzarello et al., The DAMPE silicon tungsten tracker, Nucl. Instrum. Methods Phys. Res. A 831 378 (2016). https://api.semanticscholar.org/CorpusID:124526016. [Zhang et al.(2012)]Zhang2012Y.-L. Zhang et al., A high dynamic range readout unit for a calorimeter, Chinese Physics C 36 71 (2012), doi: 10.1088/1674-1137/36/1/012. [Huang et al.(2020)]Huang2020Y.-Y. Huang et al., Calibration and performance of the neutron detector onboard of the DAMPE mission, Res. Astron. Astrophys. 20 153 (2020), arXiv: 2005.07828. [Zhang et al.(2019)]Zhang2019 Y.-Q. Zhang, et al., Design and on-orbit status of the trigger system for the DAMPE mission, Res. Astron Astrophys. 19(9), 123 (2019), DOI 10.1088/1674-4527/19/9/123 [Feng et al.(2015)]Feng2015C.-Q. Feng et al., Design of the Readout Electronics for the BGO Calorimeter of DAMPE Mission, Nuclear Science, 62, 3117-3125(2015), DOI 10.1109/RTC.2014.7097466 [Wang et al.(2017)]Wang2017Y.-P. Wang, Wen S.C., Jiang W. et al., Temperature effects on MIPs in the BGO calorimeters of DAMPE, Chinese Physics C, 41, 106001(2017), DOI 10.1088/1674-1137/41/10/106001. [Analog Devices Inc.(2005)]AD976 Analog Devices Inc. (2005). AD976/AD976A Data Sheet, available online: <https://www.analog.com/media/en/technical-documentation/data-sheets/AD976_976A.pdf> (accessed on January 15th, 2024). [Allison, et al.(2016)]Allison2016J. Allison et al., Recent developments in Geant4, Nucl. Instrum. Methods Phys. Res. A 835 (2016) 186–225, http://dx.doi.org/10.1016/j.nima.2016.06.125. [Jiang et al.(2020)]Jiang2020W. Jiang et al., Comparison of Proton Shower Developments in the BGO Calorimeter of the Dark Matter Particle Explorer between GEANT4 and FLUKA Simulations, Chin. Phys. Lett. 37 (2020) 119601893, https://doi.org/10.1088/0256-307X/37/11/119601
http://arxiv.org/abs/2409.03606v1
20240905151235
Performance of Empirical Risk Minimization For Principal Component Regression
[ "Christian Brownlees", "Guðmundur Stefán Guðmundsson", "Yaping Wang" ]
econ.EM
[ "econ.EM" ]
Performance of Empirical Risk Minimization For Principal Component Regression Christian Brownlees^†,* Guðmundur Stefán Guðmundsson^ Yaping Wang^† September 9, 2024 ================================================================================ § ABSTRACT This paper establishes bounds on the predictive performance of empirical risk minimization for principal component regression. Our analysis is nonparametric, in the sense that the relation between the prediction target and the predictors is not specified. In particular, we do not rely on the assumption that the prediction target is generated by a factor model. In our analysis we consider the cases in which the largest eigenvalues of the covariance matrix of the predictors grow linearly in the number of predictors (strong signal regime) or sublinearly (weak signal regime). The main result of this paper shows that empirical risk minimization for principal component regression is consistent for prediction and, under appropriate conditions, it achieves optimal performance (up to a logarithmic factor) in both the strong and weak signal regimes. Keywords: empirical risk minimization, principal component regression, time series, oracle inequality JEL: C13, C14, C22, C55 [0] ^† Department of Economics and Business, Universitat Pompeu Fabra and Barcelona GSE; e-mail: , . ^ Department of Economics and Business Economics, Aarhus University; e-mail: . ^* Corresponding author. We have benefited from discussions with Gabor Lugosi, David Rossell and Piotr Zwiernik. Christian Brownlees acknowledges support from the Spanish Ministry of Science and Technology (Grant MTM2012-37195); the Ayudas Fundación BBVA Proyectos de Investigación Cientìfica en Matemáticas 2021; the Spanish Ministry of Economy and Competitiveness through the Severo Ochoa Programme for Centres of Excellence in R&D (SEV-2011-0075). Guðmundur Stefán Guðmundsson acknowledges financial support from the Danish National Research Foundation (DNRF Chair grant number DNRF154). § INTRODUCTION Principal component regression (PCR) is a regression methodology with a long and well established tradition that can be traced back to at least <cit.> and <cit.>. In a nutshell, PCR consists in forecasting a prediction target of interest on the basis of the principal components extracted from a potentially large set of predictors. PCR is a popular tool for forecasting in macroeconomics where it is documented to perform favourably relative to a number of competing approaches <cit.>. In this paper we study the properties of PCR from a learning theory perspective. Our main contribution consists in establishing nonasymptotic prediction performance guarantees for PCR. Our main result may be interpreted as a nonasymptotic analogue of classic asymptotic results on the prediction properties of PCR obtained in <cit.> <cit.>. An important feature of our analysis is that we treat PCR as a regularization procedure and we do not assume that the data are generated by a factor model. In particular, as is customary in learning theory, the relation between the prediction target and the predictors is not specified. That being said, as the factor model feinschmecker will recognize, our framework relies on assumptions and proof strategies analogous to the ones used in the factor model literature.
In particular we build upon classic contributions such as <cit.>, <cit.>, <cit.> and <cit.> among others. PCR may be described as a two-step procedure. Let 𝒟 = { (Y_t, X_t')' }_t=1^T be a stationary sequence of zero-mean random vectors taking values in 𝒴×𝒳⊂ℝ×ℝ^p. The goal is to forecast the prediction target Y_t using the p-dimensional vector of predictors X_t=(X_1 t,…,X_p t)'. The first step of PCR consists in computing the T × K principal components matrix P = ( P_1, …, P_T)' associated with the T × p predictor matrix X = ( X_1, …, X_T)', for some appropriate choice of K. This may be defined as the solution of the constrained least squares problem ( B , P ) = min_ B ∈ℝ^p × K P ∈ℝ^T × K X - P B' _F^2   s.t. 1 T P' P = I_K, 1 p B' B is diagonal, where ·_F denotes the Frobenius norm. As is well known, P is given by √(T) times the first K eigenvectors of the matrix X X'. It is useful to remark here that the principal components allow us to express the vector of predictors X_t as a linear combination of the matrix of coefficients B with the K principal components P_t plus a residual vector u, that is X_t = B P_t + u_t  , where u_t = X_t - B P_t. The second step of PCR consists in computing the K× 1 least squares coefficients vector ϑ̂ associated with the regression of the T× 1 target variable vector Y = (Y_1, …, Y_T)' on the principal components matrix P. This is the solution to the least squares problem ϑ̂ = min_ϑ∈ℝ^K Y - Pϑ^2_2  , where ·_2 denotes the Euclidean norm. It is straightforward to check that ϑ̂ = P' Y / T. PCR may be interpreted as regularized empirical risk minimization.[ We remark that in this paper we use the expression “empirical risk minimization for principal component regression” only for simplicity. A more appropriate name for the procedure we study would be “regularized empirical risk minimization based on principal component analysis”.] Consider the class of prediction rules indexed by θ∈ℝ^p given by f_θ t = θ ' X_t. Then PCR can be cast as the regularized empirical risk minimization problem given by θ̂_PCR∈min_θ∈ℝ^p R_T(θ)    s.t.   V_R' θ = 0  , where R_T(θ) = 1 T∑_t=1^T ( Y_t - f_θ t )^2 is the empirical risk and V_R = ( v_K+1 , … , v_p )  , where v_i is the eigenvector associated with the i-th largest eigenvalue of the sample covariance matrix of the predictors Σ = X' X/T. The vector θ̂_PCR defined in (<ref>) is the solution to a least squares problem subject to a set of linear constraints. It is straightforward to verify that f̂^PCR_t = θ̂_PCR' X_t = ϑ̂' P_t, which implies that the forecasts produced by PCR may be equivalently expressed as linear forecasts based on constrained least squares. One of the main objectives of learning theory is to obtain a bound on the predictive performance of the ERM relative to the optimal risk that can be achieved within the given class of prediction rules. We define the risk of a prediction rule as R( θ ) = 𝔼[ ( Y_t - f_θ t )^2 ]. The optimal risk is defined as the risk of a prediction rule associated with a θ^* such that R( θ^* ) = min_θ∈ℝ^p𝔼[ ( Y_t - f_θ t )^2 ]  . The conditional risk of PCR is used to measure predictive performance. This is given by R( θ̂_PCR ) = 𝔼[ . ( Y_t - f̂^PCR_t )^2 | θ̂_PCR = θ̂_PCR(𝒟) ]  , where (Y_t, X_t)' in (<ref>) denotes an element of the process assumed to be drawn independently of 𝒟. The performance measure in (<ref>) can be interpreted as the risk of the ERM obtained from the “training” sample 𝒟 over the “validation” observation (Y_t, X_t)'. 
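Before turning to the formal performance guarantees, it may help to note that the two-step estimator described above takes only a few lines to compute. The sketch below is ours rather than the authors'; the function name pcr_fit, the simulated factor-type design and all constants are illustrative assumptions, and the data are taken to be centred.

```python
import numpy as np

def pcr_fit(X, Y, K):
    """Two-step PCR as described above. Assumes X (T x p) and Y (T,) are already centred."""
    T, p = X.shape
    # Step 1: principal components. P = X V_K Lam_K^{-1/2} satisfies P'P / T = I_K and
    # coincides (up to signs) with sqrt(T) times the leading eigenvectors of X X'.
    lam, V = np.linalg.eigh(X.T @ X / T)
    top = np.argsort(lam)[::-1][:K]
    V_K, lam_K = V[:, top], lam[top]
    P = X @ V_K / np.sqrt(lam_K)                        # (T, K) principal components
    # Step 2: least squares of Y on P; since P'P / T = I_K this is simply P'Y / T.
    vartheta_hat = P.T @ Y / T
    # Equivalent x-space coefficients: X @ theta_pcr reproduces P @ vartheta_hat.
    theta_pcr = V_K @ (vartheta_hat / np.sqrt(lam_K))
    return vartheta_hat, theta_pcr, P

# Toy usage on a simulated factor-type design (purely illustrative).
rng = np.random.default_rng(0)
T, p, K = 200, 50, 3
F = rng.standard_normal((T, K))
B = rng.standard_normal((p, K))
X = F @ B.T + 0.5 * rng.standard_normal((T, p))
Y = F @ np.array([1.0, -0.5, 0.25]) + 0.1 * rng.standard_normal(T)
vartheta_hat, theta_pcr, P = pcr_fit(X - X.mean(0), Y - Y.mean(), K)
print(np.allclose(P @ vartheta_hat, (X - X.mean(0)) @ theta_pcr))   # True
```

The printed True confirms, on this toy design, the equivalence noted above between forecasts based on the estimated principal components and the constrained least squares forecasts in the x-space.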
Then our aim is to find a pair (B_T(p,K),δ_T) such that B_T(p,K) → 0 and δ_T→ 0 as T →∞ for which R( θ̂_PCR ) ≤ R(θ^*) + B_T(p,K) holds with probability at least 1 - δ_T for any T sufficiently large. The inequality in (<ref>) is commonly referred to as an oracle inequality. Oracle inequalities such as (<ref>) provide non-asymptotic guarantees on the performance of the ERM and imply that the ERM asymptotically performs as well as the best linear predictor. We remark that the performance measure in (<ref>) allows us to keep our analysis close to the bulk of the contributions in the learning theory literature (which typically focus on the analysis of i.i.d. data) and facilitates comparisons. We remark that <cit.> and <cit.> consider alternative performance measures such as the conditional out-of-sample average risk of the ERM, which has a more attractive interpretation for time series applications. It turns out that these alternative measures lead to essentially the same theoretical analysis at the expense of introducing additional notation. We therefore focus on the performance measure defined in (<ref>) for clarity. This paper is related to various strands of the literature. First, the vast literature on approximate factor models, principal component analysis and spiked covariance models, which includes <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.> and <cit.>. In particular, this work is close to the important contribution of <cit.> that studies the properties of a large class of high-dimensional models, which includes factor models, and establishes results on the predictive performance of such a class. Second, it is related to the literature on the small-ball method, which includes <cit.>, <cit.>, <cit.> and <cit.>. Third, the literature on empirical risk minimization for linear regression, which includes <cit.> and <cit.> among others. In particular, our contribution is close to the subset of the literature that deals with dependent data, as in <cit.>. The rest of the paper is structured as follows. Section <ref> introduces additional basic notation, assumptions and preliminary results. Section <ref> contains the main result of the paper and its proof is outlined in Section <ref>. Concluding remarks follow in Section <ref>. All the remaining proofs are given in the appendix. § NOTATION, PRELIMINARIES AND ASSUMPTIONS In this section we lay out the assumptions required for our analysis. Our assumptions may be interpreted as the union of standard conditions employed in the approximate factor model literature <cit.> and conditions employed in the learning literature based on the small-ball method <cit.>. We introduce some basic notation. For a generic vector x ∈ℝ^d we define x _r as [ ∑_i=1^d |x_i|^r ]^1/r for 1 ≤ r < ∞ and max_i=1,…,d |x_i| for r=∞. For a generic random variable X ∈ℝ we define X _L_r as [𝔼 ( |X|^r ) ]^1/r for 1 ≤ r < ∞ and inf{ a : ℙ( |X|> a ) = 0 } for r=∞. For a positive semi-definite matrix 𝐌 we use 𝐌^1/2 to denote the positive semi-definite square root matrix of 𝐌 and 𝐌^ -1/2 to denote the generalized inverse of 𝐌^1/2. For a generic matrix 𝐌 we use 𝐌_2 to denote the spectral norm of 𝐌. We begin with a preliminary lemma that establishes the population analogue of the principal components decomposition of the prediction vector X_t introduced in (<ref>). Let X_t be a zero-mean p-dimensional random vector with Σ = 𝔼( X_t X_t'). 
For any K ∈{1, … p}, define Λ_K = diag(λ_1,…,λ_K), Λ_R = diag(λ_K+1,…,λ_p), V_K = ( v_1,…, v_K) and V_R = ( v_K+1,…, v_p) where λ_1, …, λ_p and v_1, …, v_p denote the sequence of eigenvalues of Σ in a non-increasing order and the corresponding sequence of eigenvectors. Then, (i) it holds that X_t = 𝐁 P_t + u_t  , where 𝐁 = V_K Λ_K^1/2, P_t = Λ_K^-1/2 V_K' X_t and u_t = V_R ( V_R' V_R)^-1 V_R' X_t with 𝐁' 𝐁 diagonal, 𝔼 ( P_t P_t' ) = I_K, 𝔼( u_t u_t') = V_R Λ_R V_R' and 𝔼 ( P_t u_t' ) = 0_K × p. (ii) Let θ^* ∈ℝ^p be the vector of coefficients of the best linear predictor of Y_t based on X_t. Then, it holds that X_t'θ^* = P_t ' ϑ^* + u_t ' γ^* , where ϑ^* = Λ_K^1/2 V_K' θ^* and γ^* = V_R ( V_R' V_R)^-1 V_R' θ^*. Parts (i) of Lemma <ref> states that X_t can decomposed into a linear combination of the matrix of coefficients 𝐁 with the random vector P_t plus the residual random vector u_t, where P_t and u_t are orthogonal. We call P_t the population principal components and u_t the idiosyncratic component. Part (ii) implies that the best linear predictor for Y_t can be alternatively represented as a function of the population principal components P_t and the idiosyncratic component u_t with the coefficient vectors ϑ^* and γ^*. Note that since P_t and u_t are orthogonal, ϑ^* may be interpreted as the best linear predictor based on the population principal components P_t. We remark that, clearly, in the definitions of u_t and γ^* the matrix V_R ( V_R' V_R)^-1 V_R' may be simplified to V_R V_R' but we prefer to express it in this way to emphasize that this is a projection matrix. Last note that the decomposition established in part (ii) holds for any vector in ℝ^p but for our purposes it suffices to focus on the vector of coefficients of the best linear predictor. Also notice that the vector of coefficients of the best linear predictor may not be unique, but this raises no issues in our setup. We lay out the assumptions of our analysis. We say that the d-dimensional random vector U is sub-Gaussian with parameters C_m > 0 if, for any ε>0, it holds that ℙ( sup_ v : v _2=1 | v' U | > ε) ≤exp( -C_m ε^2 )  . For a univariate random variable U this is equivalent to ℙ( | U | > ε ) ≤exp( -C_m ε^2 ). [Distribution] (i) There exists a positive constant C_m such that Y_t, Y_t - P_t'ϑ^* and Z_t=Σ^-1/2 X_t with Σ = 𝔼( X_t X_t') are sub-Gaussian with parameter C_m. (ii) There exists a constant C_Z and a p-dimensional spherical random vector S such that for any B ∈ℬ(ℝ^p) it holds that ℙ( Z_t ∈ B ) ≤ C_Z ℙ( S ∈ B ), where the spherical random vector S is such that its density exists and the marginal densities of its components are bounded from above. Part (i) states that the tails of the data decay exponentially. More precisely it assumes that the prediction target Y_t, the prediction error of the best linear predictor based on the principal components P_t given by Y_t - P_t'ϑ^* and the standardized predictors Z_t have sub-Gaussian tails. We remark that such a condition is fairly standard in the analysis of large-dimensional factor models <cit.>. We also remark that the sub-Gaussian condition may be replaced by a sub-Weibull condition at the expense of longer proofs <cit.>. Part (ii) is a regularity condition on the density of the standardized predictors Z_t that is required to establish upper bounds on the probability of a certain event associated with the vector of predictors X_t in one of the intermediate propositions of our analysis. The same condition is assumed in <cit.>. 
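Before moving on to the dependence assumptions, the decomposition of Lemma <ref> can be checked numerically. The snippet below is an illustrative sketch of ours; the covariance matrix, the sample size and the seed are arbitrary choices.

```python
import numpy as np

def population_decomposition(Sigma, K):
    """Population decomposition of the lemma: B = V_K Lam_K^{1/2}, P_t = Lam_K^{-1/2} V_K' X_t,
    u_t = V_R V_R' X_t, so that X_t = B P_t + u_t with E(P_t P_t') = I_K and E(P_t u_t') = 0."""
    lam, V = np.linalg.eigh(Sigma)
    order = np.argsort(lam)[::-1]
    lam, V = lam[order], V[:, order]
    V_K, lam_K, V_R = V[:, :K], lam[:K], V[:, K:]
    B = V_K * np.sqrt(lam_K)                                  # p x K loadings
    return B, V_K, lam_K, V_R

rng = np.random.default_rng(1)
p, K, n = 20, 2, 200_000
A = rng.standard_normal((p, p))
Sigma = A @ A.T                                               # arbitrary positive definite Sigma
B, V_K, lam_K, V_R = population_decomposition(Sigma, K)
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
P = X @ V_K / np.sqrt(lam_K)                                  # rows are P_t'
U = X @ V_R @ V_R.T                                           # rows are u_t'
print(np.allclose(X, P @ B.T + U))                            # exact identity X_t = B P_t + u_t
print(np.abs(P.T @ P / n - np.eye(K)).max())                  # ~0: E(P_t P_t') = I_K
print(np.abs(P.T @ U / n).max())                              # ~0: E(P_t u_t') = 0
```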
Let ℱ_-∞^t and ℱ_t+l^∞ be the σ-algebras generated by { (Y_s, X_s')': -∞≤ s ≤ t} and { (Y_s, X_s')': t + l ≤ s ≤∞} respectively for some t ∈ℤ and define the α-mixing coefficients α(l) = sup_A ∈ℱ_-∞^t, B ∈ℱ_t+l^∞| ℙ(A ∩ B ) - ℙ(A) ℙ(B) |. [Dependence] There exist constants C_α>0 and r_α>0 such that the α-mixing coefficients satisfy α(l) ≤exp( -C_α l^r_α ). <ref> states that the process { (Y_t, X_t')'} has geometrically decaying strong mixing coefficients, which is a fairly standard assumption in the analysis of large dimensional time series models <cit.>. [Eigenvalues] There is an integer K ∈{1,…,p}, a constant α∈ (1/2,1] and a sequence of non-increasing nonnegative constants c_1,…,c_p with c_K>0 such that, λ_i = c_i p^α for i=1,…,K, λ_i = c_i for i=K+1,…,p. <ref> states that the first K eigenvalues of the covariance matrix Σ diverge as the cross-sectional dimension p becomes large. The rate of divergence is determined by α. We distinguish between two regimes that depend on the value of this constant. When α=1 we say that we are in the strong signal regime, which is analogous to (classic) factor models with pervasive factors <cit.>. When α∈ (1/2,1) we say that we are in the weak signal regime, which is analogous to weak factor models <cit.>. We remark that this assumption is weaker than <cit.>, which only allows for strong signals. We also point out that K here is assumed to be known and that there is a large literature devoted to the estimation of this quantity <cit.>. Last, it is important to emphasize that the assumption allows for the non-diverging eigenvalues of Σ to be zero, so our framework allows Σ to be singular. [Number of Predictors and Principal Components] (i) There are constants C_p>0 and r_p ∈ (0,r_p) such that p = ⌊ C_p T^r_p⌋ where r_p = r_α∧1 r_α+1 r_α- α. (ii) There are constants C_K>0 and r_K ∈ [0,r_K) such that K = ⌊ C_K T^r_K⌋ where r_K = 1-r_p( 1- α) ∧1 3 + 2 r_α. <ref> states that the number of predictors and the number of principal components are allowed to increase as a function of the sample size T. The rate of growth of these quantities depends on the rate of decay of the strong mixing coefficients (as measured by r_α) and the strength of the signal (as measured by α). The less dependence and the stronger the signal, the larger the numbers of allowed predictors and principal components. The assumption allows the number of predictors to be larger than the sample size T whereas the number of principal components is at most T^1/3 (up to a proportionality constant). It is important to emphasize that when the signal is weaker, the maximum rate of growth of the number of predictors is smaller. The condition on the maximum rate of growth of the number of predictors r_p is analogous to condition (9) in <cit.>, who study the properties of factor models in the weak signal regime. [Identification/Small-ball] There exist positive constants κ_1 and κ_2 such that, for each θ_1, θ_2 ∈ℝ^p, and for each t = 1, …, T ℙ( | f_θ_1 t - f_θ_2 t | ≥κ_1 f_θ_1 t - f_θ_2 t_L_2) ≥κ_2  . <ref> is the so-called small-ball assumption, and it is stated here as it is formulated in <cit.>. This assumption can be interpreted as an identification condition. If we define δ=(θ_1-θ_2) then the condition is equivalent to ℙ( | δ' X_t | ≥κ_1 δ' X_t _L_2) ≥κ_2, which can be seen as requiring the random variable δ' X_t to not have excessive mass in a neighbourhood around zero. 
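To make the eigenvalue and small-ball assumptions concrete, the sketch below builds a covariance matrix with spiked leading eigenvalues of order p^α and then checks the small-ball inequality by Monte Carlo for a Gaussian design, for which ℙ( | δ' X_t | ≥κ_1 δ' X_t _L_2) = 2(1-Φ(κ_1)) for every δ. The construction, the constants and the seeds are our own illustrative choices.

```python
import numpy as np
from math import erf, sqrt

def spiked_covariance(p, K, alpha, seed=0):
    """Covariance with the eigenvalue structure assumed above: the leading K eigenvalues
    equal c_i * p**alpha, the remaining eigenvalues are bounded constants."""
    rng = np.random.default_rng(seed)
    c_head = np.linspace(2.0, 1.0, K)
    lam = np.concatenate([c_head * p**alpha, np.ones(p - K)])
    Q, _ = np.linalg.qr(rng.standard_normal((p, p)))          # arbitrary orthonormal eigenbasis
    return (Q * lam) @ Q.T

p, K, alpha = 200, 3, 0.75                                    # weak signal regime
Sigma = spiked_covariance(p, K, alpha)
print(np.sort(np.linalg.eigvalsh(Sigma))[::-1][:K + 1])       # spikes of order p**alpha, then O(1)

# Small-ball check for a Gaussian design: delta'X_t ~ N(0, delta'Sigma delta).
rng = np.random.default_rng(1)
X = rng.multivariate_normal(np.zeros(p), Sigma, size=20_000)
delta = rng.standard_normal(p)
kappa1 = 0.5
z = X @ delta / np.sqrt(delta @ Sigma @ delta)                # standardised, so ~ N(0, 1)
kappa2 = 2 * (1 - 0.5 * (1 + erf(kappa1 / sqrt(2))))          # 2 * (1 - Phi(kappa1))
print(np.mean(np.abs(z) >= kappa1), kappa2)                   # both approximately 0.617
```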
We remark that the constants κ_1 and κ_2 measure the strength of the identification in the sense that the larger the value of these constants the stronger the identification condition. We also remark that <cit.> discuss conditions that imply <ref>. § PERFORMANCE OF EMPIRICAL RISK MINIMIZATION We now state our main result on the performance of empirical risk minimization. Suppose <ref>–<ref> are satisfied. Then for any η>0 there exists a constant C>0 such that, for any T sufficiently large, R( θ̂_PCR ) ≤ R( θ^*) + (θ^*)' V_R Λ_R V_R' θ^* + C [ 1 p^2α-1 + ( p T p^α)^2 p^2 r_α + K T ] log(T), holds with probability at least 1-T^-η. The theorem establishes a regret bound on the excess risk of PCR relative to the best linear predictor that can be obtained on the basis of the predictors X_t. The gap is made up of two terms. The first can interpreted as the approximation error of PCR and the second as the estimation error. The approximation error measures the gap between the performance of the best linear predictor based on the population principal components P_t and the best linear predictor based on the predictors X_t. The estimation error measures the gap between the performance of PCR relative to the best linear predictor based on the population principal components P_t. A number of additional remarks on Theorem <ref> are in order. First, it is insightful to provide an alternative representation of the approximation error of PCR. This may be equivalently expressed as (θ^*)' V_R Λ_R V_R' θ^* = Λ^1/2_R V_R' θ^* ^2_2 = Λ^1/2_R V_R' V_R ( V_R' V_R)^-1 V_R' θ^* ^2_2 = (γ^*)' V_R Λ_R V_R' γ^*  . This highlights that the approximation error of PCR is small when the projection of the best linear predictor θ^* on the subspace spanned by the population eigenvectors v_K+1, …, v_p is small. Differently put, if the contribution of the idiosyncratic component vector u_t is negligible then PCR has a negligible approximation error. Clearly, when γ^* = 0 the best linear predictor based on X_t coincides with the best linear predictor based on P_t and the approximation error is zero. Second, it is interesting to provide some comments on the behaviour of the approximation error. Under the rate conditions of <ref> the approximation error is asymptotically negligible as T →∞. We remark that the condition p /( p^α T) → 0 as T→∞ is also required by <cit.> in the analysis of approximate weak factor models. The approximation error is also influenced by the degree of persistence in the data, as measured by r_α, and in particular the more dependent the data are the slower the convergence of the approximation error to zero. We remark that the results of <cit.>, among others, do not depend on the degree of persistence of the data. This is due to the fact that their analysis relies on higher level conditions that imply sharp rates of convergence of certain key estimators in the analysis. It is also interesting to note that the approximation error is made up of three terms that can be easily associated with the different estimation problems embedded in PCR. The first two terms capture the estimation error of the principal components whereas the third one can be interpreted as the estimation error the principal component regression if the population principal components were observed. Last, an important question concerning the approximation error is whether the rate obtained for it in the theorem is optimal. We provide insights on this question below in Section <ref>. 
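As a purely numerical illustration of how the bound behaves, the helper below evaluates the three estimation-error terms for given (T, p, K, α, r_α) with C = 1. The exponents follow our reading of the displayed bound, which the flattened notation leaves some room to interpret, so the function should be treated as a sketch rather than a definitive statement of the rate.

```python
import numpy as np

def estimation_error_terms(T, p, K, alpha, r_alpha):
    """The three terms inside the brackets of the theorem, each multiplied by log(T) (C = 1):
    p^{-(2*alpha - 1)},  (p / (T * p^alpha))^2 * p^{2 / r_alpha},  K / T."""
    terms = np.array([
        p ** -(2 * alpha - 1),
        (p / (T * p ** alpha)) ** 2 * p ** (2 / r_alpha),
        K / T,
    ])
    return terms * np.log(T)

# Strong vs. weak signal with p = T, K = 3 and geometrically mixing data (r_alpha = 2).
for alpha in (1.0, 0.75):
    rows = [estimation_error_terms(T, p=T, K=3, alpha=alpha, r_alpha=2).round(5)
            for T in (200, 800, 3200)]
    print(alpha, rows)
```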
§.§ Optimal Performance A natural question that arises upon inspection of Theorem <ref> is whether the learning rate for the approximation error is, in some appropriate sense, optimal. We compare the learning rate established by the theorem with the optimal learning rate that could be achieved if the population principal components were observed. In this case it is well known that the optimal rate is of the order K/T <cit.>, which is achieved by the least squares estimator based on the population principal components. In this section we assume that the approximation error is zero for ease of exposition. For a given choice of the signal strength α, degree of dependence r_α and the rate of growth of the number of principal components r_K, it is straightforward to verify that the optimal rate can be recovered in Theorem <ref> provided that 1-r_K 2α-1 < 1+r_K 2-2α+2/r_α , and that the rate of growth of the number of parameters allowed is such a scenario is r_p ∈[ 1-r_K 2α-1 , 1+r_K 2-2α+2/r_α]  . In particular we note that the optimal rate can be achieved in the both the strong signal (α=1), and the weak signal (α<1) cases, provided that α > 2/3. The larger the values of α, r_α and r_K, the larger the range of admissible growth rates for the number of predictors r_p. It is interesting to provide concrete examples of these conditions. In the presence of a strong signal (α=1), a fixed number of principal components (r_K=0) and independent data (r_α = ∞)[The proofs of this manuscript assume that r_α is finite.] we obtain that R(θ̂) - R(θ^*) ≤ C ( 1/p + 1/T^2 + K/T) log (T) = O( K/Tlog(T) ), when r_p ∈ [1,∞). In the presence of a weak signal (α<1), a fixed number of principal components (r_K=0) and independent data (r_α = ∞) we obtain that R(θ̂_PCR) - R(θ^*) = O ( 1/p^2 α - 1 + p^2 - 2 α/T^2 + K/T) log (T) = O( K/Tlog(T) ), when r_p ∈ [ 1/(2α-1) , 1/(2-2α) ]. In particular when α = 3/4 the rate of growth of the number of parameters required to achieve optimal performance is r_p=2. § PROOF OF MAIN RESULT We introduce some additional notation. In this section and in the appendices we use the subscript s to denote the index of an observation in the training sample 𝒟 and the subscript t to denote the index of a validation observation. Also, for some random variable X we use X _L_2 to denote √(𝔼( X^2 | 𝒟 ) ). The proof of Theorem <ref> begins with a decomposition of an upper bound of the excess risk of the empirical risk minimizer. Define the approximate rotation matrix H = Λ_K ^-1/2 V_K' V_K Λ_K ^1/2 <cit.> and the infeasible PCR estimator ϑ̃ = min_ϑ∈ℝ^K 1 T∑_t=1^T ( Y_t - P_t'ϑ )^2. The following lemma provides a useful bound for the excess risk. We remark that the lemma only requires stationarity and finite variance. Suppose that { (Y_t, X_t')' } is a stationary sequence taking values in 𝒴×𝒳∈ℝ×ℝ^p such that 𝔼(Y_t^2) < ∞ and 𝔼(X_i t^2) < ∞ for all i = 1, …, p. Then it holds that R( θ̂_PCR ) - R( θ^*) ≤ 2 max_1 ≤ s ≤ T{ Y^2_s }𝔼( P_t - H P_t ^2_2 | 𝒟 ) + 4 ϑ̃ - H ' ϑ̂^2_2 + 4 ϑ^* - ϑ̃^2_2 + 2 u_t'γ^* ^2_L_2 . Lemma <ref> implies that in order to control the excess risk of PCR it suffices to provide appropriate bounds on the four terms on the right hand side of (<ref>). The following four propositions establish such bounds. The first two propositions control the terms 𝔼( P_t - H P_t ^2_2 | 𝒟 ) and ϑ̃ - H ' ϑ̂^2_2. We remark that these two terms capture the risk of PCR that is due to the estimation of the principal components. 
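Before presenting the formal arguments, a short simulation of the approximate rotation H defined above may help fix ideas: with a strong signal the sample components P̂_t track H P_t closely, in line with the proposition that follows. The design, the constants and the seed below are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)
T, p, K = 500, 100, 2
lam = np.concatenate([np.array([2.0, 1.5]) * p, np.ones(p - K)])      # strong signal (alpha = 1)
Q, _ = np.linalg.qr(rng.standard_normal((p, p)))
Sigma = (Q * lam) @ Q.T
X = rng.multivariate_normal(np.zeros(p), Sigma, size=T)

# Population quantities: V_K, Lam_K and P_t = Lam_K^{-1/2} V_K' X_t.
w, V = np.linalg.eigh(Sigma)
o = np.argsort(w)[::-1]
V_K, lam_K = V[:, o[:K]], w[o[:K]]
P_pop = X @ V_K / np.sqrt(lam_K)

# Sample quantities: V_hat_K, Lam_hat_K and P_hat_t = Lam_hat_K^{-1/2} V_hat_K' X_t.
w_hat, V_hat = np.linalg.eigh(X.T @ X / T)
oh = np.argsort(w_hat)[::-1]
V_hat_K, lam_hat_K = V_hat[:, oh[:K]], w_hat[oh[:K]]
P_hat = X @ V_hat_K / np.sqrt(lam_hat_K)

# H = Lam_hat_K^{-1/2} V_hat_K' V_K Lam_K^{1/2}; eigenvector sign flips are absorbed by H.
H = (V_hat_K / np.sqrt(lam_hat_K)).T @ V_K * np.sqrt(lam_K)
print(np.mean(np.sum((P_hat - P_pop @ H.T) ** 2, axis=1)))            # small: order 1/p plus noise
```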
The proof of the propositions is based on standard arguments from the approximate factor model literature <cit.>. Suppose <ref>–<ref> are satisfied. Then for any η>0 and any T sufficiently large, 𝔼 ( P_t - H P_t ^2_2 | 𝒟 ) ≤c_K+1 2 c_K^21 p^2α-1, holds with probability at least 1-T^-η. Suppose <ref>–<ref> are satisfied. Then for any η>0 there is C>0 such that, for any T sufficiently large, ϑ̃ - H ' ϑ̂^2_2 ≤ C { K T + log (T) T + [ (p + log(T) )^r_α +1 r_α p^αT]^2 + 1 p^2α}log(T), holds with probability at least 1-T^-η. The next proposition controls the term ϑ^* - ϑ̃^2_2. We remark that this term may be interpreted as the risk of the least squares estimator of PCR based on the population principal components. The proof of the proposition is based on the so-called small-ball method, which is the same proof strategy used in <cit.>. Suppose <ref>–<ref> are satisfied. Then for any η>0 there is a C>0 such that, for any T sufficiently large, ϑ^* - ϑ̃^2_2≤ C K log (T) T, holds with probability at least 1-T^-η. The following proposition controls the error u_t'γ^* ^2_L_2, which can be interpreted as the approximation error of PCR. Suppose <ref> and <ref> are satisfied. Then it holds that u_t'γ^* ^2_L_2 = (θ^*)' V_R Λ_R V_R' θ^*  . The claim of Theorem <ref> follows from Propositions <ref> to <ref> together with the implication rule, the C_r inequality, the union bound and the fact that the sub-Gaussian assumption on Y_t (<ref>) implies that for each η>0 there is a C>0 such that ℙ( max_1 ≤ s ≤ T{ Y^2_s }≤ C log(T) ) ≤ 1-T^-η . § CONCLUSIONS This paper establishes predictive performance guarantees for principal component regression. Our analysis has a number of highlights. First, the analysis we carry out is nonparametric, in the sense that the relation between the prediction target and the predictors is not specified and, in particular, we do not assume that the prediction target is generated by a factor model. Second, our framework considers both the cases in which the largest eigenvalues of the covariance matrix of the predictors diverge linearly in the number of predictors (strong signal regime) or sublinearly (weak signal regime). A highlight of our results is that we show that, under appropriate conditions, PCR achieves optimal performance (up to a logarithmic factor) in both the strong signal and weak signal regimes. § PROOFS We recall that Σ = ( V_K Λ_K V_K' + V_R Λ_R V_R') where V_K = ( v_1,…, v_K), V_R = ( v_K+1,…, v_p), Λ_K = diag(λ_1,…,λ_K) and Λ_R = diag(λ_K+1,…,λ_p). Analogously, we have that Σ = ( V_K Λ_K V_K' + V_R Λ_R V_R') where V_K = (v̂_1,…,v̂_K), V_R = (v̂_K+1,…,v̂_p), Λ_K = diag(λ̂_1,…,λ̂_K) and Λ_R = diag(λ̂_K+1,…,λ̂_p). (i) We have that 𝐁 P_t + u_t = V_K V_K' X_t + ( I_p - V_K V_K') X_t = X_t. We have that 𝐁' 𝐁 = Λ_K^1/2 V_K' V_K Λ_K^1/2 = Λ_K, that 𝔼 ( P_t P_t' ) = Λ_K^-1/2 V_K' ( V_K Λ_K V_K' + V_R Λ_R V_R') V_K Λ_K^-1/2 = I_K, 𝔼 ( u_t u_t' ) = V_R V_R' ( V_K Λ_K V_K' + V_R Λ_R V_R') V_R V_R'= V_R Λ_R V_R', and that 𝔼 ( P_t u_t' ) = Λ_K^-1/2 V_K' ( V_K Λ_K V_K' + V_R Λ_R V_R' ) V_R V_R' = 0_K × p. (ii) Note that X_t'θ^* = P_t' 𝐁' θ^* + u_t'θ^* and that P_t' 𝐁' θ^* = P_t'Λ^1/2_K V_K' θ^* = P_t'ϑ^* and u_t'θ^* = u_t' ( I_p - V_K ( V_K' V_K)^-1 V_K') θ^* = u_t' γ^* since I_p - V_K ( V_K' V_K)^-1 V_K' is idempotent. Consider the excess risk decomposition given by R( θ̂_PCR ) - R( θ^*) = Y_t - P'_t ϑ̂^2_L_2 - Y_t - P'_t H ' ϑ̂^2_L_2 + Y_t - P'_t H ' ϑ̂^2_L_2 - Y_t - P'_t ϑ̃^2_L_2 + Y_t - P'_t ϑ̃^2_L_2 - Y_t - P'_t ϑ^* ^2_L_2 + Y_t - P'_t ϑ^* ^2_L_2 - Y_t - X_t' θ^* ^2_L_2 . 
The projection theorem, the fact (a+b)^2≤ 2 a^2 + 2 b^2, the fact 𝔼( P_t u_t' ) = 0 and the Cauchy-Schwarz inequality imply Y_t - P'_t ϑ̂^2_L_2 - Y_t - P'_t H ' ϑ̂^2_L_2 = Y_t - X'_t θ^* + X'_t θ^* - P'_t ϑ̂^2_L_2 - Y_t - X'_t θ^* + X'_t θ^*- P'_t H ' ϑ̂^2_L_2 = X'_t θ^* - P'_t ϑ̂^2_L_2 - X'_t θ^*- P'_t H ' ϑ̂^2_L_2 ≤ 2 P'_t ϑ^* - P'_t H ' ϑ̂ + u_t'γ^* ^2_L_2 + 2 P'_t H ' ϑ̂ - P'_t ϑ̂^2_L_2 - P'_t ϑ^*- P'_t H ' ϑ̂^2_L_2 - u_t'γ^* ^2_L_2 = P'_t ϑ^* - P'_t H ' ϑ̂^2_L_2 + 2 P'_t H ' ϑ̂ - P'_t ϑ̂^2_L_2 + u_t'γ^* ^2_L_2 ≤ 2 P'_t ϑ^* - P'_t ϑ̃^2_L_2 + 2 P'_t ϑ̃ - P'_t H ' ϑ̂^2_L_2 + 2 P'_t H ' ϑ̂ - P'_t ϑ̂^2_L_2 + u_t'γ^* ^2_L_2 ≤ 2 ϑ^* - ϑ̃^2_2 + 2 ϑ̃ - H ' ϑ̂^2_2 + 2 ϑ̂_2^2 𝔼( P_t - H P_t ^2_2 | 𝒟 ) + u_t'γ^* ^2_L_2 . To see how the projection theorem applies to the second equality note that P'_t ϑ̂ = X_t' V_K Λ^-1/2_K ϑ̂ and P'_t H ' ϑ̂ = X'_t V_K Λ^-1/2_K H ' ϑ̂. Furthermore, we have that ϑ̂_2 = 1 T P' Y _2 ≤√(1 T) P' _2 √(1 T) Y _2 = √( 1 T) Y _2 ≤max_1 ≤ s ≤ T |Y_s|  . Next, the projection theorem, the Cauchy-Schwarz inequality and the inequality 2 a b ≤ a^2 + b^2 imply that Y_t - P'_t H ' ϑ̂^2_L_2 - Y_t - P'_t ϑ̃^2_L_2 = Y_t - P'_t ϑ^* + P_t' ( ϑ^* - H ' ϑ̂ ) ^2_L_2 - Y_t - P'_t ϑ^* + P_t' ( ϑ^* - ϑ̃ ) ^2_L_2 = P_t' ( ϑ^* - H ' ϑ̂ ) ^2_L_2 - P_t' ( ϑ^* - ϑ̃ ) ^2_L_2 = 𝔼[ . [ P_t' ( ϑ^* - H ' ϑ̂ ) - P_t' ( ϑ^* - ϑ̃ ) ] [ P_t' ( ϑ^* - H ' ϑ̂ ) + P_t' ( ϑ^* - ϑ̃ ) ] | 𝒟] = 𝔼[ . [ P_t' ( ϑ̃ - H ' ϑ̂ ) ] [ P_t' ( ϑ^* - H ' ϑ̂ + ϑ^* - ϑ̃ ) ] | 𝒟] = 𝔼[ . [ P_t' ( ϑ̃ - H ' ϑ̂ ) ] [ 2 P_t' ( ϑ^* - ϑ̃) + P_t' ( ϑ̃ - H ' ϑ̂ ) ] | 𝒟] = P_t' ( ϑ̃ - H ' ϑ̂ ) ^2_L_2 + 2 𝔼[ . ( ϑ^* - ϑ̃)' P_t P_t' ( ϑ̃ - H ' ϑ̂ ) | 𝒟] = ϑ̃ - H ' ϑ̂^2_2 + 2 ( ϑ^* - ϑ̃)' ( ϑ̃ - H ' ϑ̂ ) ≤ϑ̃ - H ' ϑ̂^2_2 + 2 ϑ^* - ϑ̃_2 ϑ̃ - H ' ϑ̂_2 ≤ 2 ϑ̃ - H ' ϑ̂^2_2 + ϑ^* - ϑ̃^2_2  . The projection theorem implies Y_t - P_t' ϑ̃^2_L_2 - Y_t - P_t' ϑ^* ^2_L_2 = P_t' ϑ̃ - P_t' ϑ^* ^2_L_2 = ϑ̃ - ϑ^* ^2_2, Y_t - P'_t ϑ^* ^2_L_2 - Y_t - X_t' θ^* ^2_L_2 = u_t' γ^* ^2_L_2 . The claim then follows from (<ref>), (<ref>), (<ref>), (<ref>) and (<ref>). §.§ Proof of Proposition <ref> We begin by noting that P_t - H P_t = Λ_K^-1/2 V_K' X_t - Λ_K ^-1/2 V_K' V_K Λ_K ^1/2Λ_K^-1/2 V_K' X_t = Λ_K^-1/2 V_K' ( I_p - V_K V_K' ) X_t = Λ_K^-1/2 V_K' V_R V_R' X_t = Λ_K^-1Λ_K^-1/2 V_K' V_KΛ_K V_K' V_R V_R' X_t = Λ_K^-1Λ_K^-1/2 V_K' ( V_KΛ_K V_K' + V_RΛ_R V_R') V_R V_R' X_t = Λ_K^-1Λ_K^-1/2 V_K' 1 T X' X V_R V_R' X_t = Λ_K^-11√(T) P' 1 √(T) X V_R V_R' X_t  . This implies that for any ε>0 ℙ ( 𝔼[ . P_t - H P_t ^2_2| 𝒟] ≥ε ) ≤ℙ ( 𝔼 [ . Λ_K^-1^2_2 1 √(T) P' ^2_2 1 √(T) X V_R V_R' X_t ^2_2| 𝒟 ] ≥ε ) = ℙ( Λ_K^-1_2^2 𝔼[ . 1 √(T) X V_R V_R' X_t ^2_2| 𝒟] ≥ε) ≤ℙ( Λ_K^-1_2^2 ≥ε_1 ) + ℙ( 𝔼[ . 1 √(T) X V_R V_R' X_t ^2_2| 𝒟] ≥ε_2 )  , for some ε_1,ε_2>0 such that ε = ε_1 ε_2. We proceed by establishing bounds for the two terms in (<ref>). First, we note that it follows from Proposition <ref> that for any η>0, for all T sufficiently large, it holds that ℙ( Λ_K^-1_2^2 ≥4 c_K^21 p^2 α) = O(1 T^η)  . Second, we establish a bound for the second term of equation (<ref>). We have that ℙ( 𝔼[ . 1 √(T) X V_R V_R' X_t ^2_2| 𝒟] ≥ε_2 ) = ℙ( 1 T 𝔼[ .∑_s=1^T ( X_s' V_R V_R' X_t )^2 | 𝒟] ≥ε_2 ) = ℙ ( 1 T 𝔼[ . ∑_s=1^T ( X_s' V_R V_R' X_t X_t' V_R V_R' X_s ) | 𝒟] ≥ε_2 ) = ℙ ( 1 T ∑_s=1^T X_s' V_R Λ_R V_R' X_s ≥ε_2 ) = ℙ ( p ( 1 T ∑_s=1^T X_s' V_R Λ_R V_R' X_s p - (Λ_R^2) p ) + (Λ_R^2) ≥ε_2 ) ≤ℙ ( p ( 1 T ∑_s=1^T X_s' V_R Λ_R V_R' X_s p - (Λ_R^2) p ) + c_K+1^2 p ≥ε_2 ) where we have used the fact that 𝔼( X_s' V_R Λ_R V_R' X_s) = ( Λ_R^1/2 V_R' 𝔼( X_s X_s') V_R Λ_R^1/2 ) = (Λ_R^2) < c_K+1^2 p. 
It is straightforward to verify that the sequence { X_s' V_R Λ_R V_R' X_s / p - (Λ_R^2) / p} satisfies the conditions of Lemma <ref>, implying that for each η>0 there exists a constant C such that, for all T sufficiently large, it holds that ℙ ( 𝔼[ . 1 √(T) X V_R V_R' X_t ^2_2| 𝒟] ≥ c_K+1^2 p + C p√(log(T) T )) = O(1 T^η)  . The claim of the proposition then follows from inequalities (<ref>) and (<ref>). Suppose <ref>–<ref> are satisfied. Then for any η>0, for any T sufficiently large, we have that ℙ( λ̂_K ≥ c_K p^α / 2 ) ≥ 1 - T^-η. <ref> implies λ̂_K ≥λ_K -|λ_K - λ̂_K| ≥ c_Kp^α - Σ - Σ_2 where the last inequality follows from Weyl's inequality. To establish the claim, it suffices to show that for all T sufficiently large, Σ - Σ_2 ≤c_K 2 p^α holds with probability at least 1 -T^-η. Using the representation of Lemma <ref> we get Σ - Σ = 1 T ∑_t=1^T (𝐁 P_t+ u_t )(𝐁 P_t+ u_t )'- 𝔼( X_t X_t') = 1 T 𝐁∑_t=1^T ( P_t P_t' - I_K) 𝐁' + 1 T∑_t=1^T( u_t u_t' - 𝔼( u_t u_t' ) ) + 1 T ∑_t=1^T (𝐁 P_t u_t') + 1 T ∑_t=1^T ( u_t P_t' 𝐁') = D_1 + D_2 + D_3 + D_4 . This, in turn, implies that ℙ( Σ - Σ_2 > ε) ≤∑_i=1^4 ℙ( D_i _2 > ε_i )  , for some ε_1,ε_2,ε_3, ε_4>0 such that ε = ε_1 +ε_2+ ε_3 +ε_4. Proposition <ref> and <ref> imply that for any η>0 there is a positive constant C_1 such that D_1 _2 > 1 T ∑_t=1^T ( P_t P_t' - I_K) _2 𝐁𝐁' _2 ≥ p^α C_1 ( √( K T ) + √(log T T )) = ε_1 holds with probability at most T^-η. Proposition <ref> implies that for any η>0 there is a positive constant C_2 such that D_2 _2 > C_2 [ ( p + log(T) )^r_α +1 r_α T + √( ( p + log(T) )^r_α +1 r_α T )] = ε_2 holds with probability at most T^η. Proposition <ref> and <ref> imply that for any η>0 there is a positive constant C_3 such that D_3 _2 > 1 T ∑_t=1^T P_t u_t' _2 𝐁_2 ≥ p^α/2 C_3 √( p K log T T) = ε_3 holds with probability at most T^-η. Last, note that D_3 _2 = D_4 _2 . Combining the inequalities in (<ref>), (<ref>), (<ref>) we get ℙ ( Σ - Σ_2 < ε) ≤ 1-4 T^η , where ε may be defined as ε = p^α C_1 ( √( K T ) + √(log T T )) + p^α C_2 [ 1 p^α ( p + log(T) )^r_α +1 r_α T + 1 p^α√( ( p + log(T) )^r_α +1 r_α T )] + 2 p^α C_3 √( p^1-α K log T T). The claim follows after noting that <ref> implies that, for all T sufficiently large, ε≤c_K 2 p^α. Suppose <ref>–<ref> are satisfied. Then for any η>0 there exists a positive constant C such that, for any T sufficiently large, 1 T ∑_t=1^T u_t u_t' - 𝔼( u_t u_t' ) _2≥ C [ ( p + log(T) )^r_α +1 r_α T + √( ( p + log(T) )^r_α +1 r_α T )] holds with probability at most T^-η. We assume that c_K+1 is positive (when c_K+1 is zero the claim is trivial). Consider the isotropic random vectors Σ_u^-1/2 u_t where Σ_u = 𝔼( u_t u_t' ) = V_R V_R' Σ V_R V_R' and note that <cit.> implies that if 𝒩 is a 1 4-net of 𝒮^p-1 then 1 T ∑_t=1^T Σ_u^-1/2 u_t u_t' Σ_u^-1/2 - I_p _2 = max_ x ∈𝒮^p-1| 1 T∑_t=1^T ( u_t' Σ_u^-1/2 x)^2-1 | ≤ 2 max_ x ∈𝒩| 1 T∑_t=1^T ( u_t' Σ_u^-1/2 x)^2-1 | = 2 max_ x ∈𝒩| 1 T∑_t=1^T W_ x t| , where W_ x t = ( u_t' Σ_u^-1/2 x)^2-1. To establish the claim we first show that for all T sufficiently large (<ref>) can be bounded with high probability. We begin by establishing a bound for a fixed vector x ∈𝒩. We enumerate three properties of the sequence { W_ x t}. First, 𝔼(W_ x t)=0 since 𝔼[( u_t' Σ_u^-1/2 x)^2] = x_2^2=1. Second, W_ x t is the de-meaned square of a sub-Gaussian random variable with parameter C_m (which is independent of x). To see this note that u_t' Σ_u^-1/2 x = Z_t' V_R Λ_R^1/2 V_R' V_R Λ_R^-1/2 V_R ' x = Z_t' V_R V_R' x where Z_t = Σ^-1/2 X_t. 
Define ρ = V_R V_R' x _2 and notice that ρ∈ [0,1]. If ρ∈ (0,1] then for any ε>0 it holds ℙ ( | u_t' Σ_u^-1/2 x| ≥ε ) = ℙ( | Z_t' V_R V_R' x ρ| ≥ερ) ≤exp (-C_m (ε /ρ )^2 ) ≤exp (-C_m ε ^2 ) where C_m>0 is defined in <ref>. If ρ = 0 then for any ε>0 it holds ℙ ( | u_t' Σ_u^-1/2 x| ≥ε ) = 0 ≤exp (-C_m ε ^2 ). Therefore u_t' Σ_u^-1/2 x is sub-Gaussian with parameter C_m. Third, the sequence {W_ x t}_t=1^T inherits the mixing properties of the sequence {(Y_t, X_t')'}_t=1^T spelled out in <ref>. These three facts imply that { W_ x t} satisfies the conditions of <cit.> (the C_r inequality and <cit.> imply that the condition spelled out in equation (1.33) of <cit.> is satisfied). Define ε_T = C_1^*( p + log(T) )^r_α +1 r_α T + [ C_1^* ( p + log(T) )^r_α +1 r_α T ]^1/2 and q_T = ⌈ T C^*_2 (p + log(T) )^1 r_α⌉ - 1 for positive constants C_1^* and C_2^* to be chosen below. Notice that <ref> implies that for all T sufficiently large it holds that q_T ∈ [1,T 2], as required by the theorem. Then for any r≥ 3 there exist a positive constant C_1 that depends on C_m and r such that, for all T sufficiently large, ℙ( | 1 T∑_t=1^T W_ x t| > ε_T ) ≤ a_1 Texp(-q_T ε_T^2 C_1 + C_1 ε_T) + a_2 Tα( ⌊ T q_T +1 ⌋)^2r 2r+1 holds, where a_1 T = 2 T / q_T + 2 [ 1 + ε_T^2 / (C_1 + C_1 ε_T) ], and a_2 T = 11 T( 1 + C_1 / ε_T ). We proceed by bounding the r.h.s. of (<ref>). First, for all T sufficiently large, we have a_1 Texp(-q_T ε_T^2 C_1 + C_1 ε_T) ≤( 2 T + 2 + 2 ε_T C_1 ) exp(-min( ε_T, ε_T^2 ) 2 C_1 q_T ) ≤exp( log( 3 T + 2 ε_T C_1 ) - C_1^*(p + log(T) )^r_α +1 r_α 2 T C_1 ( T C^*_2 (p + log(T) )^1 r_α -1 )) ≤exp( - ( C_1^* 2 C_1 C_2^* - 1 ) (p + log T))  , where in the second inequality we use the fact that min(ε_T,ε_T^2) ≥ C_1^* (p + log(T) )^r_α +1 r_α / T and the last from the condition r_p < r_α. Second, for all T sufficiently large, we have a_2 Tα( ⌊ T q_T +1 ⌋)^2r 2r+1≤exp( 2 log(T ) - C_α2 r 2r+1( T q_T +1 - 1)^r_α) ≤exp( 2 log(T ) - 1 2 C_α (C^*_2)^r_α2r 2r+1( p + log(T) ) ) ≤exp( - ( C_α (C^*_2)^r_αr 2r+1-1 ) ( p + log(T) ) )  . Combining (<ref>) and (<ref>) we get that for a given x, for all T sufficiently large, it holds ℙ( | 1 T∑_t=1^T W_ x t| > ε_T ) ≤exp( - ( C_1^* 2 C_1 C_2^* -1) (p + log T)) + exp( - ( C_α (C^*_2)^r_αr 2r+1-1 ) ( p + log(T) ) )  . The next step consist in taking the union bound over all the vectors x ∈𝒩. It follows from <cit.> that the cardinality of a 1 4-net 𝒩 of the unit sphere 𝒮^p-1 is bounded by 9^p. Setting C_2^* = ((η + log(9) + 1) (2r+1) r C_α)^1 r_α and C_1^* = 2 C_1 C_2^* (η + log(9) + 1) in (<ref>) we then obtain that, for all T sufficiently large, ℙ ( max_ x ∈𝒩| 1 T∑_t=1^T W_ x t| > ε_T ) ≤ 9^p max_ x ∈𝒩ℙ( | 1 T∑_t=1^T W_ x t| > ε_T ) ≤ 2 exp( log(9) p - (η + log(9))( p + log(T) ) ) < 2 exp(-ηlog(T)) = 2 T^η . The claim follows after noting that, for all T sufficiently large, it holds ℙ( 1 T ∑_t=1^T u_t u_t' - 𝔼( u_t u_t' ) _2 > 2 Σ_u_2 ε_T ) = ℙ( 1 T ∑_t=1^T Σ_u^-1/2 u_t u_t' Σ_u^-1/2 - I_p _2 > 2 ε_T ) ≤ℙ( max_ x ∈𝒩| 1 T∑_t=1^T W_ x t| > ε_T ) = O(1 T^η) , and noting that <ref> implies that Σ_u_2 = c_K+1. Let λ_min( M) denote the smallest eigenvalue of the square matrix M. Suppose <ref>, <ref> and <ref> are satisfied. Then for any η>0 there exists a positive constant C such that, for any T sufficiently large, (i) ℙ( 1 T ∑_t=1^T P_t P_t' - 𝔼( P_t P_t' ) _2≥ C ( √( K T ) + √(log T T )) ) = 1 T^η , (ii) ℙ( λ_min( 1 T ∑_t=1^T P_t P_t' ) ≤1 2) = 1 T^η , (iii) ℙ( ( 1 T ∑_t=1^T P_t P_t')^-1 - ( 𝔼( P_t P_t' ) )^-1_2≥ C ( √( K T ) + √(log T T )) ) = 1 T^η . 
(i) Following the same arguments of Proposition <ref> and denoting by 𝒩 the 1 4-net of the unit sphere 𝒮^K-1 we get that 1 T ∑_t=1^T P_t P_t' - 𝔼( P_t P_t' ) _2≤ 2 max_ x ∈𝒩| 1 T∑_t=1^T ( P_t' x)^2-1 | = 2 max_ x ∈𝒩| 1 T∑_t=1^T W_ x t|  , where W_ x t = ( P_t' x)^2-1. To establish the claim, we first show that for all T sufficiently large (<ref>) can be bounded with high probability. We note two properties of the sequence { W_ x t}. First, 𝔼(W_ x t)=0 since 𝔼[( P_t' x)^2] = x_2^2=1. Second, W_ x t is sub-exponential with parameter C'_m (which is independent of x and depending on C_m). To see this note that | P_t' x | = | Z_t' V Λ^1/2 V' V_K Λ_K^-1/2 x | = | Z_t' V_K x| where Z_t = Σ^-1/2 X_t and that V_K x _2 ∈ [0,1]. Consider the decomposition ∑_t=1^T W_ x t = ∑_t=1^T W'_ x t + ∑_t=1^T W”_ x t where W'_ x t = W_ x t1(|W_ x t| ≤ b_T) - 𝔼( W_ x t1(|W_ x t| ≤ b_T ) ) and W”_ x t = W_ x t1(|W_ x t| > b_T) - 𝔼( W_ x t1(|W_ x t| > b_T) ). Then for any ε>0 we have that ℙ( | ∑_t=1^T W_ x t| > ε) ≤ℙ( |∑_t=1^T W'_ x t| > ε 2) + ℙ( | ∑_t=1^T W”_ x t| >ε 2)  . The sequence { W'_ x t}_t=1^T is such that W_ x t' _∞ < 2b_T and has the same mixing properties as {(Y_t, X_t)}_t=1^T spelled out in <ref>. These two facts imply that { W'_ x t} satisfies the conditions of <cit.>. Define ε_T = C^* ( √( K T ) + √( T log T) ), b_T = 2 C_m' [ log ( 2 W_ x t_L_2√(T) / C^* ) + C^*(K+log T ) ] and M_T = ⌊ b_T^-1 T^1 2 / ( √( K ) + √(log T) )⌋ for positive constant C^* to be chosen below. For all T sufficiently large, we have that M_T ∈ [1,T] (note <ref> implies r_k < 1/3) and 4 (2 b_T) M_T < ε_T, as required by the theorem. Then, for all T sufficiently large, ℙ( | ∑_t=1^T W'_ x t| > ε_T ) < 4 exp( - ε_T^2 64 T M_T D(T,M_T) + 16 3 b_T M_T ε_T ) + 4 T M_Texp( -C_α M_T^r_α) holds with D(T,M_T) = 𝔼[ ( ∑_t=1^M_T W'_ x t)^2 ]. Define γ(l) = |Cov(W'_ x t, W'_ x t+l)| for l=0,…,T-1 and note that D(T,M_T) ≤ M_T ∑_l=-T+1^T-1γ(l). Define C_m 4 = W_ x t^2_L_4 (which is a constant depending only on C_m). Then, for l=0,… it holds that γ(l) ≤ 12 α(l)^1 2 W'_ x t^2_L_4≤ 48 α(l)^1 2 W_ x t^2_L_4≤ 48 α(l)^1 2 C_m,4 , where the first inequality follows from Davydov's inequality <cit.> and the second follows from the fact that W_ x t' _L_4≤ 2 W_ x t1(|W_ x t| ≤ b_T) _L_4≤ 2 W_t_L_4. This implies that D(T,M_T) < C_σ^2 M_T where C_σ^2 = 96C_m,4∑_l = 0^∞α(l)^1 2∨ 1. We then use the inequality a^2 + b^2 ≤ (a+b)^2 for a,b>0 to get that ℙ( | ∑_t=1^T W'_ x t| > ε_T ) ≤ 4 exp( - C^*2 (√(KT)+√(T log(T)) )^2 64 T C_σ^2 + 16 3 b_T M_T ε_T ) + 4 T M_Texp( - C_α M_T^r_α) ≤ 4 exp( - C^*2 K T + C^*2 T log(T) 64 T C_σ^2 + 16 3 C^*T ) + 4 T exp( - C_α M_T^r_α) ≤ 4 exp( - C^*2 64 C_σ^2 + 16 3 C^*( K + log(T) ) ) + 4 exp(log(T)- C_α M_T^r_α)  . Note that for all T sufficiently large, we have M_T^r_α > K + log(T) since r_K < 1 3 + 2 r_α as implied by <ref>. We may write that ℙ( | ∑_t=1^T W'_ x t| > ε_T ) ≤ 5 exp( - C^*2 64 C_σ^2 + 16 3 C^*( K + log(T) ) )   . The sequence { W_ x t”}_t=1^T is such that, for all T large enough, ℙ( | ∑_t=1^T W”_ x t| > ε_T) ≤1 ε_T𝔼| ∑_t=1^T W”_ x t| ≤T ε_T𝔼 | W”_ x t | ≤2 T ε_T𝔼| W_ x t1(|W_ x t| > b_T) | ≤2T ε_T W_ x t_L_2 1( |W_ x t| > b_T ) _L_2 = 2T ε_T W_ x t_L_2ℙ( |W_ x t| > b_T )^1 2≤2 T W_ x t_L_2ε_Texp( - C_m' b_T 2 ) = 2 T W_ x t_L_2 C^* (√(K T ) + √( T log(T) )) exp( - C_m' b_T 2 ) ≤2 W_ x t_L_2√(T) C^*exp( - C_m' b_T 2 ) = exp( log(2 W_ x t_L_2√(T) C^*) - C_m' b_T 2 ) = exp( -C^* ( K + log(T) ))  , where the first inequality follows from Markov's inequality. 
Combining (<ref>) and (<ref>) we have that, for all T sufficiently large, max_x ∈𝒩ℙ( | 1 T∑_t=1^T W_ x t| > ε_T T ) ≤exp( -C^* (K + log (T)) ) + 5 exp( - C^*2 64 C_σ^2 + 16 3 C^*( K + log(T) ) )  , for the fixed vector x. It remains to take the union bound over all the vectors x ∈𝒩. Using the same arguments of Proposition <ref> we have that, for all T sufficiently large, ℙ( max_x ∈𝒩| 1 T∑_t = 1^T W_ x t| > ε_T T ) ≤ 9^K max_x ∈𝒩ℙ( | 1 T∑_t = 1^T W_ x t| > ε_T T) ≤ 9^K exp( -C^* (K + log(T)) ) + 9^K · 5 exp( - C^*2 64 C_σ^2 + 64 3 C^*( K + log(T) ) ) ≤1 T^η where the last inequality follows for a sufficiently large choice of the constant C^*. The claim follows after noting that, for all T sufficiently large, it holds ℙ( 1 T ∑_t=1^T P_t P_t' - 𝔼( P_t P_t' ) _2 > 2 ( √( K T ) + √(log(T) T)) ) ≤ℙ( max_ x ∈𝒩| 1 T∑_t=1^T W_ x t| > ε_T T ) ≤1 T^η . (ii) We begin by noting that λ_min(1 T ∑_t=1^T P_t P_t') = min_ x ∈𝒮^K-1 x' ( 1 T ∑_t=1^T P_t P_t' ) x = min_ x ∈𝒮^K-1 x' I_K x - x' ( I_K- 1 T ∑_t=1^T P_t P_t' ) x ≥ 1 - 1 T ∑_t=1^T P_t P_t' - I_K _2  . It then follows from part (i) that for any η>0, for all T sufficiently large, 1 - 1 T ∑_t=1^T P_t P_t' - I_K _2 ≥1 2 holds with at least probability 1-T^-η, which implies the claim. (iii) First, we condition on the event of part (ii), which implies that for any η>0, for all sufficiently large T, ( 1 T ∑_t=1^T P_t P_t')^-1 exists. Then we note that conditional on the same event for any η>0 there exists a positive constant C such that, for all sufficiently large T, it holds that ℙ ( ( 1 T ∑_t=1^T P_t P_t')^-1 - 𝔼( P_t P_t' )^-1_2≥C 2 ( √( K T ) + √(log(T) T )) ) = ℙ ( ( 1 T ∑_t=1^T P_t P_t')^-1( I_K - 1 T ∑_t=1^T P_t P_t')_2≥C 2 ( √( K T ) + √(log(T) T )) ) ≤ℙ ( ( 1 T ∑_t=1^T P_t P_t')^-1_2 ≥1 2) + ℙ( 1 T ∑_t=1^T P_t P_t' - 𝔼( P_t P_t' ) _2≥ C ( √( K T ) + √(log(T) T )) ) = 2 T^η , where we have used the fact that 𝔼( P_t P_t' )^-1 = I_K. The unconditional probability of this event being realized is ( 2T^-η ) (1 - T^-η ) = T^-η. Suppose <ref>, <ref>, <ref> and <ref> are satisfied. Then for any η>0 there exits a positive constant C such that, for any T sufficiently large, it holds that ℙ( 1 T∑_t=1^T P_t u_t' _2 ≥ C √( p K log(T) T)) = 1 T^η . We assume that c_K+1 is positive (when c_K+1 is zero the claim is trivial). Let V_ij,t denote P_itu_jt and V_j,t denote the j-th column of P_t u_t' where 1≤ i≤ K and 1 ≤ j ≤ p. We begin by showing that, for each j, the sequence of K-dimensional random vectors { V_j,t}_t=1^T satisfies the conditions required in Lemma <ref>. Lemma <ref>.(ii) and (iii) establish that P_t and u_t are sub-Gaussian random vectors. Standard results on sub-Gaussian random variables imply that ℙ( sup_ v : v _2 = 1 | V_j t ' v | > ε) ≤ℙ( sup_ v : v _2 = 1 | P_t ' v | | u_jt | > ε) ≤exp(-C_m' ε) for some C_m' > 0. Lemma <ref> implies that 𝔼( V_j,t) = 𝔼( P_tu_jt) = 0, hence V_j,t is a zero-mean random vector. Moreover, standard properties of strong mixing processes imply that { V_j,t}_t=1^T inherits the mixing properties of the sequence {(Y_t, X_t)'}_t=1^T spelled out in <ref>. Lastly, <ref> implies that K = ⌊ C_K T^r_K⌋ where r_K<1. Then the conditions of Lemma <ref> applied to the sequence { V_j,t}_t=1^T are satisfied and picking η' = η+r_p we get that there exists a C such that, for all T sufficiently large, ℙ( 1 T∑_t=1^T P_t u_t' _2 ≥ C √( p K log(T) T)) ≤ℙ( √(p)max_1≤ j ≤ p1 T∑_t=1^T V_j,t_2 ≥ C √( p K log(T) T)) ≤ p max_1≤ j ≤ pℙ( 1 T∑_t=1^T Z_j,t_2 ≥ C √( K log(T) T)) = p T^η' = C T^r_p T^η+r_p = C T^η . 
§.§ Proof of Proposition <ref> Recall that ϑ̃ = ( 1 T P' P )^-11 T P ' Y and ϑ̂ = 1 T P' Y and note that ϑ̃ - H ' ϑ̂ = ϑ̃ - H ^-1 H H'ϑ̂ = ( 1 T P' P )^-11 T P ' Y - H ^-1 H H' 1 T P' Y = [ ( 1 T P' P )^-11 T P ' Y - 1 T P ' Y ] - H ^-1( H H' 1 T P' Y - 1 T P' Y + 1 T P' Y - 1 T H P' Y ) = [ ( 1 T P' P )^-1 - I_K ] 1 T P ' Y - H ^-1[ ( H H' - I_K ) √(1 T) P' + √(1 T) ( P' - H P' ) ] √(1 T) Y  . Then the triangle inequality and the implication rule imply that for any ε>0 and ε_i>0 for i=1,…,5 such that ε = ε_1ε_2 + ε_3 ε_5 + ε_4 ε_5 it holds that ℙ( ϑ̃ - H ' ϑ̂_2 > ε ) ≤ℙ( ( 1 T P' P )^-1 - I_K _2 > ε_1 ) + ℙ( 1 √(T) P' _2 1 √(T) Y_2 > ε_2 ) + ℙ( H H' - I_K _2 > ε_3 ) + ℙ( 1 √(T) P' - H P' _2 > ε_4 ) + ℙ( H^-1_2 1 √(T) Y_2 > ε_5 )  . The claim then follows from Proposition <ref>, Proposition <ref>, Proposition <ref> and Proposition <ref> by setting ε_1 = C_1 ( √( K / T ) + √(log(T) /T ) ), ε_2 = √( 6 log (T) / C_m ), ε_3 = C_2 [ √( K / T ) + √(log(T) / T ) + (p + log(T))^r_α +1 r_α / (p^α T) + 1 /p^α ], ε_4 = C_3 [ (p + log(T))^ r_α + 1 r_α / (p^αT) +1 / p^α ] and ε_5 = √( 3 log (T) / (2 C_m) ). The same propositions imply that the r.h.s. of (<ref>) can be bounded by O(T^-η). Suppose <ref>–<ref> are satisfied. Then for all η>0 there exists a C > 0 such that, for any T sufficiently large, √(1 T) P - P H' _2 ≤ C [ ( p + log(T) ) ^ r_α + 1 r_α p^αT + 1 p^α] holds with probability at least 1-T^-η. Let P = X V_K Λ_K^-1/2 and P = X V_K Λ_K^-1/2 where X = ( X_1 , … , X_T )'. We note that P - P H' = X V_K Λ_K^-1/2 - X V_K Λ_K ^1/2Λ_K^-1/2 V_K' V_K Λ_K ^-1/2 = X ( I_p - V_K V_K' ) V_K Λ_K^-1/2 = X V_R V_R' V_K Λ_K^-1/2 = X V_R V_R' V_KΛ_K V_K' V_K Λ_K^-1/2Λ_K^-1 = X V_R V_R' ( V_KΛ_K V_K' + V_RΛ_R V_R') V_K Λ_K^-1/2Λ_K^-1 = 1 T X V_R V_R' X' X V_K Λ_K^-1/2Λ_K^-1 = 1 √(T) X V_R V_R' X' 1 √(T) PΛ_K^-1 = 1√(T)∑_t=1^T u_t u_t' 1 √(T) PΛ_K^-1 . This implies that for any ε,ε_1,ε_2>0 such that ε = ε_1 ε_2 it holds ℙ ( √(1 T) P - P H' _2 > ε ) ≤ℙ ( 1 T ∑_t=1^T u_t u_t' _2 1 √(T) P_2 Λ_K^-1_2 > ε ) = ℙ( 1 T ∑_t=1^T u_t u_t' _2 Λ_K^-1_2 > ε) ≤ℙ( Λ_K^-1_2 >ε_1 ) + ℙ( 1 T∑_t=1^T u_t u_t' _2 > ε_2 )  . We proceed by bounding the two terms on the r.h.s. of the last inequality. First we note that Proposition <ref> implies that, for all T sufficiently large ℙ( Λ_K^-1_2 > 2 c_K p^α) = 1- ℙ( 1 λ_K≤2 c_K p^α) = O(1 T^η)  . Second, we note that 1 T∑_t=1^T u_t u_t' _2 ≤1 T∑_t=1^T u_t u_t' - 𝔼( u_t u_t') _2 + 𝔼( u_t u_t') _2 = 1 T∑_t=1^T u_t u_t' - 𝔼( u_t u_t' ) _2 + c_K+1 . Proposition <ref> then implies that, for all T sufficiently large, we have ℙ( 1 T ∑_t=1^T u_t u_t' _2 > C ( (p+log(T))^r_α + 1 r_α T + √((p+log(T))^r_α +1 r_α T)) + c_K+1) = O(1 T^η) . The claim of the proposition then follows from inequalities (<ref>) and (<ref>) after noting that C ( (p+log(T))^r_α + 1 r_α T + √((p+log(T))^r_α +1 r_α T)) + c_K+1≤ C' ( (p+log(T))^r_α + 1 r_α T + c_K+1)  . Suppose <ref>–<ref> are satisfied. Then for any η>0 there exist a positive constant C such that, for any T sufficiently large, H H'- I_K _2≤ C ( √( K T ) + √(log(T) T ) + (p+log(T))^r_α +1 r_α p^α T + 1 p^α) , holds with probability at least 1- T^-η. By repeated application of the triangle inequality we get H H'- I_K _2≤ H H' - 1 T H P' P H' _2 + 1 T H P' P H' - I_K _2 ≤ H _2^2 I_K - 1 T P' P _2 + 1 T H P' P H' - 1 T P' P H' + 1 T P' P H'- 1 T P' P_2 ≤ H _2^2 1 T∑_t=1^T P_t P_t' - I_K _2 + 1 T ( P H'- P )' P H' _2 + 1 T P' ( P H'- P ) _2 ≤ H _2^2 1 T∑_t=1^T P_t P_t' - I_K _2 + 1 √(T) P - P H' _2 ( 1 √(T) P _2 H _2 + 1 √(T) P_2 )  . 
For any ε>0, ε_1>0, ε_2>0, ε_3>0 and ε_4>0 such that ε = ε_1ε_2 + ε_3 ε_4 it holds that ℙ ( H H'- I_K _2≥ε ) ≤ℙ( H _2^2 ≥ε_1 ) + ℙ( 1 T∑_t=1^T P_t P_t' - I_K _2≥ε_2 ) + ℙ( √(1 T) P - P H' _2 ≥ε_3 ) + ℙ( 1 √(T) P _2 H _2 + 1 √(T) P_2 > ε_4 )  . The claim then follows from Proposition <ref>, Propositions <ref> and Proposition <ref> by setting ε_1 = 2 c_1/c_K, ε_2 = C_1 ( √( K / T ) + √(log T /T ) ), ε_3 = C_2 [ (p + log(T)) ^ r_α + 1 r_α / (p^αT) +1/p^α] and ε_4 = 2 √(c_1/c_K) + 1. The same propositions imply that the r.h.s. of (<ref>) can be bounded by O(T^-η). Suppose <ref>–<ref> are satisfied. Then for any η>0, for any T sufficiently large, it holds that (i) ℙ ( √(1 / T) P _2 ≥√(2) ) = O( T^-η ), (ii) ℙ ( H _2≥√(2c_1 / c_K) ) = O( T^-η ), (iii) ℙ ( H^-1_2≥2 √(c_1 / c_K) ) = O( T^-η ), (iv) ℙ ( √( 1 / T) Y _2 > √(ηlog(T)/C_m ) ) = O( T^-η ). (i) By the triangle inequality, the fact that 𝔼( P_t P_t' ) = I_K and Proposition <ref> we have that for any η>0 there exists a positive constant C such that, for all T sufficiently large, √(1 T) P ^2_2 ≤1 T P' P - I_K _2 + I_K _2 ≤ C ( √( K T ) + √(log T T )) + 1 ≤ 2 holds with probability at least 1-O(T^-η). (ii) We begin by noting that H_2 = Λ_K ^-1/2 V_K' V_K Λ_K ^1/2_2≤Λ_K ^-1/2_2 V_K _2 V_K _2Λ_K ^1/2_2 = √(c_1 p^α)Λ_K ^-1/2_2  , where we have used the fact that <ref> implies Λ_K ^1/2_2 = √(c_1 p^α). Now we may write ℙ( H _2≥√(2c_1 c_K)) ≤ℙ( √(c_1 p^α)Λ_K ^-1/2_2 ≥√(2c_1 c_K)) = ℙ( c_1 p^α1 λ_K≥ 2 c_1 c_K) = 1- ℙ( λ_K ≥ c_K p^α 2) ≤ O(1 T^η) , where the last inequality is implied by Proposition <ref>. (iii) We begin by noting that H^-1_2 = H' ( H')^-1 H^-1_2 ≤ H _2 ( H')^-1 H^-1_2 = H _2 I_K + ( H')^-1 H^-1 - I_K _2 ≤ H _2 + H _2 ( H')^-1 H^-1 - I_K _2 ≤ H _2 + H _2 H H' - I_K _2 1- H H' - I_K_2  , where we have used the inequality A^-1 - B^-1_2 ≤ A^-1_2^2 B - A _2 1 - A^-1_2 B - A _2  , where A and B are invertible n× n matrices. The claim then follows from Proposition <ref> and part (ii) of this proposition. (iv) By <ref>, for any ε > 0, ℙ( √( 1 / T) Y _2 > ε) ≤ℙ( max_t ≤ T |Y_t| > ε) ≤ T ℙ(|Y_t| > ε) ≤ T exp ( -C_m ε^2 ). Thus, by choosing ε = √(ηlog(T)/C_m ) we have that √( 1 / T) Y _2 > √( (1+η)log(T)/C_m ) holds at most with probability O(T^-η). §.§ Proof of Proposition <ref> Define the empirical risk differential for an arbitrary ϑ∈ℝ^K as ℒ_ϑ = R_T(ϑ) - R_T(ϑ^*) = 1 T∑_t=1^T ( P_t' ϑ - P_t' ϑ^* )^2 + 2 T∑_t=1^T (Y_t- P_t' ϑ^* )( P_t' ϑ - P_t' ϑ^* )  . Assume that it holds that P_t ' ϑ - P_t ' ϑ^* _L_2 = ϑ - ϑ^* _2 > 72 C^1 2_σ^2κ_1^2 κ_2 √(K log (T) T) . Condition on the events of Proposition <ref> and Proposition <ref>, for any η>0, for any T sufficiently large, with probability at least 1-O(T^-η), we have that 1 T∑_t=1^T ( P_t ' ϑ - P_t ' ϑ^* )^2 (a)≥κ_1^2 κ_2 2 P_t ' ϑ - P_t ' ϑ^* ^2_L_2(b)> 36 C^1 2_σ^2 P_t ' ϑ - P_t ' ϑ^* _L_2 √(K log (T) T) (c)≥| 2 T∑_t=1^T (Y_t- P_t ' ϑ^* )( P_t ' ϑ - P_t ' ϑ^* ) |  , where (a) follows from Proposition <ref>, (b) follows from condition (<ref>), and (c) follows from Proposition <ref>. Thus, conditional on the events of Proposition <ref> and Proposition <ref> and assuming (<ref>) holds we have high probability that ℒ_ϑ > 0. Since the empirical risk minimizer ϑ̂ satisfies ℒ_ϑ̂≤ 0 then conditional on the same events we have P_t ' ϑ̂ - P_t ' ϑ^* _L_2≤ 72 C^1 2_σ^2κ_1^2 κ_2 √(K log (T) T). The claim then follows after noting that Y_t - P_t' ϑ̃^2_L_2 - Y_t - P_t' ϑ^* ^2_L_2 = P_t' ϑ̃ - P_t' ϑ^* ^2_L_2≤ C_σ^2( 72 κ_1^4 κ_2^2 )^2 K log (T) T . 
It is important to emphasize that the L_2 norm in (<ref>) is the L_2 norm conditional on {ϑ̃ = ϑ̃(𝒟)}. The equality in (<ref>) follows from the fact that 𝔼 [( Y_t- P_t ' ϑ^* )( P_t' ϑ^*-f̃^PCR_ϑ t )] = 𝔼 [( Y_t- P_t' ϑ^* ) P_t' (ϑ^* -ϑ )] = 0, which is implied by the first order condition for a minimum for R(ϑ), as ϑ^* is the unique value of ϑ such that 𝔼 [( Y_t- P_t ' ϑ ) P_t'] = 0. Suppose <ref>–<ref> are satisfied. Then for any η>0, for all T sufficiently large and any ϑ∈ℝ^K, 1 T∑_t=1^T ( P_t ' ϑ^* - P_t ' ϑ )^2 ≥κ_1^2 κ_2 2 P_t ' ϑ^* - P_t ' ϑ^2_L_2 , holds with probability at least 1-T^-η. This follows from <cit.>. Note that the proposition there establishes an analogous claim for η=1, but inspection of the proof shows that it is straightforward to allow for any η>0. Note that those results require that r_K < r_α / (r_α + 1), which is implied by <ref>. Also, note that <ref> implies that the small-ball condition is also satisfied by P_t ' ϑ^* - P_t ' ϑ. Suppose <ref>–<ref> are satisfied. Then for any η>0 and any ϑ∈ℝ^K, there exists a positive constant C such that, for all T sufficiently large | 1 T∑^T_t=1(Y_t- P_t ' ϑ^*)( P_t ' ϑ^*- P_t ' ϑ ) | ≤ C P_t ' ϑ^* - P_t ' ϑ_L_2 √(K log (T) T) , holds with probability at least 1-T^-η. We begin by noting that for any ϑ∈ℝ^K ∖ϑ^* we have that | ∑^T_t=1(Y_t- P_t ' ϑ^*)( P_t ' ϑ^*- P_t ' ϑ) / T P_t ' ϑ^* - P_t ' ϑ_L_2| = | 1 T∑^T_t=1 (Y_t- P_t ' ϑ^*) P_t' ν| = | 1 T∑^T_t=1 [ (Y_t- P_t ' ϑ^*) P_t' ] ν| = | 1 T∑^T_t=1 W_t'ν|  , where W_t=(W_1 t,…,W_K t)' with W_i t = (Y_t- P_t ' ϑ^*) P_it and ν = (ϑ^*-ϑ) / P_t ' ϑ^* - P_t ' ϑ_L_2. Note that ν_2 = 1. Then, for any ϑ∈ℝ^K ∖ϑ^* it holds ℙ( | ∑^T_t=1(Y_t- P_t ' ϑ^*)( P_t ' ϑ^*- P_t ' ϑ)| T P_t ' ϑ^*- P_t ' ϑ_L_2 > ε) ≤ℙ( sup_ϑ∈ℝ^K ∖ϑ^* | ∑^T_t=1(Y_t- P_t ' ϑ^*)( P_t ' ϑ^*- P_t ' ϑ ) | T P_t ' ϑ^*- P_t ' ϑ_L_2 > ε) = ℙ( sup_ν: ν_2 =1| 1 /T∑^T_t=1 W'_tν| >ε) ≤ℙ( 1/T∑^T_t=1 W_t _2 > ε)  . <ref> and Lemma <ref> imply that W_t is sub-exponential for some parameter C_m'>0 for each i. Standard properties of strong mixing processes imply that { W_t} inherits the mixing properties of { (Y_t, X_t')' } spelled in <ref>. Assuming that <ref> is satisfied we have that Lemma <ref> holds which implies that, for any η>0 there exists a positive constant C such that, for all T sufficiently large, sup_ϑ∈ℝ^K ∖ϑ^* | ∑^T_t=1(Y_t- P_t ' ϑ^*)( P_t ' ϑ- P_t ' ϑ^* ) | T P_t ' ϑ - P_t ' ϑ^*_L_2≤ C √(K log (T) T) holds with probability at least 1-O(T^-η). This implies the claim. §.§ Proof of Proposition <ref> The claim follows from the fact that u_t' γ^* ^2_L_2 = (γ^*)' V_R Λ_R V_R' γ^* = (θ^*)' V_R Λ_R V_R' θ^* from Lemma <ref>. § AUXILIARY RESULTS = 1 Suppose <ref> and <ref> are satisfied. Then we have that (i) there exists some constant C_1>0 such that P_t is a sub-Gaussian vector with parameter C_1; (ii) if c_K+1>0 then there exists some constant C_2>0 such that u_t is sub-Gaussian vector with parameter C_2 otherwise u_t is degenerate at zero. (i) Since P_t = Λ_K^-1/2 V_K' X_t = V_K' Z_t, we have that for any ε>0 ℙ( sup_ v: v _2 = 1 | v' P_t| > ε) ≤ℙ( sup_ v: v _2 = 1 | v' Z_t| > ε) ≤exp(-C_m ε ^2)  . Then, we have that P_t is a sub-Gaussian vector with parameter C_1=C_m. (ii) We assume that c_K+1>0. Since u_t = V_R V_R' X_t = V_R Λ_R^1/2 V_R' Z_t, we have that for any ε>0 ℙ( sup_ v: v _2 = 1 | v' u_t| > ε) ≤ℙ( sup_ v: v _2 = 1 c_K+1^1/2 | v' Z_t| > ε) ≤exp(-C_m c_K+1ε ^2)  , where we have used the fact that V_R Λ_R^1/2 V_R'_2 ≤ V_R _2 Λ_R^1/2_2 V_R'_2 = c_K+1^1/2 . 
Then we have that that u_t is a sub-Gaussian vector with parameter C_2=C_m/c_K+1. Let { Z_t }_t=1^T be a stationary sequence of d-dimensional zero-mean random vectors. Suppose (i) for any ε>0 it holds sup_1≤ i≤ dℙ( |Z_i t | > ε ) ≤exp( -C_m ε ) for some C_m>0; (ii) the α-mixing coefficients of the sequence satisfy α(l) < exp( -C_α l^r_α ) for some C_α>0 and r_α > 0; and (iii) d = ⌊ C_d T^r_d⌋ for some C_d>0 and r_d ∈ [0, 1]. Then for any η>0 there exists a positive constant C_η such that, for any T sufficiently large, it holds that ℙ( 1 T∑_t=1^T Z_t _2 > C_η √(d log (T) T )) ≤1 T^η . Let C^* denote a positive constant to be chosen below. Note that ℙ( 1 T∑_t=1^T Z_t _2 ≥ C^* √(d log (T) T )) ≤ d max_1 ≤ i ≤ dℙ( | ∑_t=1^T Z_i t| ≥ C^* √(T log (T) ))  . Let ∑_t=1^T Z_i t = ∑_t=1^T Z'_i t + ∑_t=1^T Z”_i t where Z'_i t = Z_i t1(|Z_i t| ≤ b_T) - 𝔼( Z_i t1(|Z_i t| ≤ b_T ) ) and Z”_i t = Z_i t1(|Z_i t| > b_T) - 𝔼( Z_i t1(|Z_i t| > b_T) ). Then we have ℙ( | ∑_t=1^T Z_i t| > C^* √(T log (T))) ≤ℙ( |∑_t=1^T Z'_i t| > C^* 2√(T log (T))) + ℙ( | ∑_t=1^T Z”_i t| > C^* 2√(T log (T) ))  . The sequence { Z'_i t}_t=1^T has the same mixing properties as { Z_t}_t=1^T and sup_1 ≤ i ≤ d Z_i t' _∞ < 2b_T. Define ε_T = (C^*/2) √(T log (T) ), b_T = 2 (r_d + 1/2 + η ) C^-1_m log(T) and M_T = ⌊ b_T^-1√(T / log (T))⌋. For any T sufficiently large M_T ∈ [1,T] and 4 (2 b_T) M_T < ε_T, implying that the conditions of Theorem 2.1 of <cit.> are satisfied. Then we have ℙ( | ∑_t=1^T Z'_i t| > ε_T ) < 4 exp( - ε_T^2 64 T M_T D(T,M_T) + 16 3 b_T M_T ε_T ) + 4 T M_Texp( -C_α M_T^r_α)  , with D(T,M_T) = sup_1≤ i≤ d𝔼[ ( ∑_t=1^M_T Z'_i t)^2 ]. Define γ(l) = sup_1≤ i≤ d |Cov(Z'_i t, Z'_i t+l)| for l=0,1,… and note that D(T,M_T) ≤ M_T ∑_l=-M_T+1^M_T-1γ(l). For l=0,… it holds that γ(l) ≤ 12 α(l)^1 2sup_1≤ i ≤ d Z'_i t^2_L_4≤ 48 α(l)^1 2sup_1≤ i ≤ d Z_i t^2_L_4 where the first inequality follows from Davydov's inequality <cit.> and the second one follows from the fact that Z_i t' _L_4≤ 2 Z_i t1(|Z_i t| ≤ b_T) _L_4≤ 2 Z_i t_L_4 This implies that D(T,M_T) < M_T C_σ^2 where C_σ^2 = ( 96 C_m,4∑_l = 0^∞α(l)^1 2 ) ∨ 1 where C_m,4 = sup_1≤ i ≤ d Z_i t^2_L_4. We then have that d max_1≤ i ≤ dℙ( | ∑_t=1^T Z'_i t| > ε_T ) ≤ 4 C_d exp( r_d log (T) - (C^*)^2 log (T) 256 C_σ^2 + 32 3 C^* ) + 4 C_d T^1+r_dexp( - C_α M_T^r_α) < 4 C_d exp( -ηlog (T) ) + 4 C_d exp( 2log(T) - C_α M_T^r_α) = 5 C_d T^η , where the second inequality follows from a sufficiently large choice of the constant C^*. The sequence { Z”_i t}_t=1^T is such that d max_1≤ i ≤ dℙ( | ∑_t=1^T Z”_i t| > ε_T) ≤d ε_Tmax_1≤ i ≤ d𝔼| ∑_t=1^T Z”_i t| ≤d ε_T∑_t=1^T max_1≤ i ≤ d𝔼 | Z”_i t | ≤2dT ε_Tmax_1≤ i ≤ d𝔼| Z_i t1(|Z_i t| > b_T) | ≤2dT ε_Tmax_1≤ i ≤ d Z_i t_L_2 1( |Z_i t| > b_T ) _L_2 = 2dT ε_Tmax_1≤ i ≤ d Z_i t_L_2ℙ( |Z_i t| > b_T )^1 2 < 2d T ε_Tσ^2 exp( - C_m 2b_T ) ≤4 C_d σ^2 C^* √(log T )exp( (r_d + 1 2) log(T) - C_m 2 b_T) < 1 T^η , where the first inequality follows from Markov's inequality and σ^2 = sup_1≤ i ≤ d Z_i t_L_2. Equations (<ref>) and (<ref>) imply the claim. natbib
http://arxiv.org/abs/2409.02581v1
20240904100311
Object Gaussian for Monocular 6D Pose Estimation from Sparse Views
[ "Luqing Luo", "Shichu Sun", "Jiangang Yang", "Linfang Zheng", "Jinwei Du", "Jian Liu" ]
cs.CV
[ "cs.CV" ]
Object Gaussian for Monocular 6D Pose Estimation from Sparse Views Luqing Luo, Shichu Sun, Jiangang Yang, Linfang Zheng, Jinwei Du, Jian Liu September 9, 2024 ================================================================================ § ABSTRACT Monocular object pose estimation, as a pivotal task in computer vision and robotics, heavily depends on accurate 2D-3D correspondences, which often demand costly CAD models that may not be readily available. Object 3D reconstruction methods offer an alternative, among which recent advancements in 3D Gaussian Splatting (3DGS) afford compelling potential. Yet its performance still suffers, and it tends to overfit, when input views are few. Embracing this challenge, we introduce SGPose, a novel framework for sparse view object pose estimation using Gaussian-based methods. Given as few as ten views, SGPose generates a geometric-aware representation by starting with a random cuboid initialization, eschewing reliance on Structure-from-Motion (SfM) pipeline-derived geometry as required by traditional 3DGS methods. SGPose removes the dependence on CAD models by regressing dense 2D-3D correspondences between images and the reconstructed model from sparse input and random initialization, while geometric-consistent depth supervision and online synthetic view warping are key to its success. Experiments on typical benchmarks, especially on the Occlusion LM-O dataset, demonstrate that SGPose outperforms existing methods even under sparse view constraints, underscoring its potential in real-world applications. § INTRODUCTION Monocular pose estimation in 3D space, while inherently ill-posed, is a necessary step for many tasks involving human-object interactions, such as robotic grasping and planning <cit.>, augmented reality <cit.>, and autonomous driving <cit.>. Driven by deep learning approaches, it has achieved impressive performance even in cluttered environments. The most studied task in this field assumes that the CAD model of the object is known a priori <cit.>, but the need for such predefined geometric information limits its applicability in real-world settings. To reduce reliance on specific object CAD models, recent research has shifted toward category-level pose estimation <cit.>, aiming to generalize across objects within the same category. However, these methods typically require extra depth information and can falter on instances with varying appearances. Emerging real-world demands call for an object pose estimator that is generalizable, flexible, and computationally efficient. Ideally, a new object can be reconstructed from casually taken reference images, without the need for fine-grained, well-textured 3D structures. 
Reconstruction-based methods have shown the feasibility of this proposal <cit.>; they reconstruct the 3D object from multi-view RGB images to substitute for the missing CAD model. However, reconstruction-based methods have long relied on a fixed budget of high-quality given images and the prerequisite use of Structure-from-Motion (SfM) techniques, resulting in a notably tedious and costly training process. Our method deviates from these requirements by pioneering an efficient object reconstruction method that thrives on limited reference images and the convenience of random initialization. Capitalizing on the high-quality scene representation and real-time rendering of 3DGS <cit.>, we unveil SGPose, a novel Sparse View Object Gaussian framework for monocular 6D pose estimation. The proposed SGPose develops geometric-aware depth to guide object-centric 3D representations from RGB-only input. Requiring a mere ten views, SGPose achieves competitive performance for object pose estimation, heralding its readiness for real-world deployment. In our work, we extend a variant of 3DGS <cit.> by formulating Gaussian primitives as elliptic disks instead of ellipsoids to derive depth rendering. As illustrated in Fig. <ref>, the conceived geometric-consistent constraints guide depth acquisition. The resulting depth enables reliable online synthetic view warping and eases the challenges of sparse views for traditional 3DGS methods. Additionally, an online pruning scheme is incorporated based on the geometric-consistent depth supervision, toning down common sparse view reconstruction issues like floaters and background collapses. In contrast to the conventional reliance on SfM pipelines for point cloud initialization in 3DGS, the proposed SGPose opts for a random initialization from a cuboid of 4,096 points. The proposed object Gaussian efficiently generates dense correspondence maps between image pixels and object coordinates (2D-3D correspondences) using geometric-aware depth rendering, serving as a keystone advantage for monocular 6D pose estimation. An adapted GDRNet++ framework <cit.> is utilized to assess 6D pose estimation on the LM <cit.> and Occlusion LM-O <cit.> datasets. Our SGPose takes sparse view images and pose annotations to create synthetic views, object masks, and dense correspondence maps. Notably, for the Occlusion LM-O dataset, we render data similar to PBR (Physically Based Rendering) data <cit.>, which further enhances the performance of the proposed method. By matching state-of-the-art performance across CAD-based and CAD-free approaches, we highlight the efficiency and flexibility of our method. To sum up, our main contributions are: * Taking only RGB images as input, the proposed geometric-aware object Gaussian derives accurate depth rendering from random point initialization; * The rendered depth ensures a reliable synthetic view warping and an effective online pruning, addressing the issue of overfitting under sparse views at an impressively low time cost; * By generating dense 2D-3D correspondences and images that simulate real occlusions using the proposed object Gaussian, our SGPose framework achieves CAD-free monocular pose estimation that is both efficient and robust. § RELATED WORK §.§.§ CAD-Based Object Pose Estimation Many previous works on pose estimation rely on known CAD models. 
Regression-based methods <cit.> estimate pose parameters directly from features in regions of interest (RoIs), while keypoint-based methods establish correspondences between 2D image pixels and 3D object coordinates either by regression <cit.> or by voting <cit.>, often solve poses by using a variant of Perspective-n-Points (PnP) algorithms <cit.>.NOCS <cit.> establishes correspondences between image pixels and Normalized Object Coordinates (NOCS) shared across a category, reducing dependency on CAD models at test time. Later works <cit.> build upon this idea by leveraging category-level priors to recover more accurate shapes. A limitation of these methods is that objects within the same category can have significant variations in shape and appearance, which challenges the generalization of trained networks. Additionally, accurate CAD models are required for generating ground-truth NOCS maps during training. In contrast, our framework reconstructs 3D object models from pose-annotated images, enabling CAD-free object pose estimation during both training and testing phases. §.§.§ CAD-Free Object Pose Estimation Some endeavors have been made to relax the constraints of CAD models of the objects. RLLG <cit.> uses multi-view consistency to supervise coordinate prediction by minimizing reprojection error. NeRF-Pose <cit.> trains a NeRF-based <cit.> implicit neural representation of object and regresses object coordinate for pose estimation. Gen6D <cit.> initializes poses using detection and retrieval but requires accurate 2D bounding boxes and struggles with occlusions. GS-Pose <cit.> improves on Gen6D <cit.> by employing a joint segmentation method and 3DGS-based refinement. OnePose <cit.> reconstructs sparse point clouds of objects and extracts 2D-3D correspondences, though its performance is limited on symmetric or textureless objects due to its reliance on repeatable keypoint detection. While OnePose++ <cit.> removes the dependency on keypoints resulting in a performance enhancement. Unlike these methods, which require numerous input images for training, we directly leverage the power of 3DGS <cit.> for geometric-aware object reconstruction from sparse, pose-annotated images to achieve pose estimation. § METHODS An overview of the proposed method is presented in Fig. <ref>. Given sparse views as input, the dense 2D-3D correspondence maps are encoded in the conceived object Gaussian naturally, by supervising geometric-aware depth. Consequently, the created synthetic views and correspondence maps are availed to a downstream pose estimator, to achieve CAD-free monocular Pose Estimation. §.§ Depth Rendering of Geometric-aware Object Gaussian The object geometry is described by Gaussian primitives of probability density function as <cit.>, 𝒢(𝐱)=e^-1/2(𝐱-μ)^⊤Σ^-1(𝐱-μ), where 𝐱 is a point in world space to describe the target object and μ is the mean of each Gaussian primitive (which also is the geometric center). Thus, the difference vector 𝐱-μ indicts the probability density of 𝐱, which peaks at the center μ and decreases as departing from it. By treating Gaussian primitives as the elliptical disks <cit.>, the covariance matrix Σ is parameterized on a local tangent plane centered at μ with a rotation matrix and a scaling matrix. Concretely, the rotation matrix R is comprised of three vectors 𝐭_u, 𝐭_v and 𝐭_w, where two orthogonal tangential vectors 𝐭_u and 𝐭_v indicate the orientations within the local tangent plane, and 𝐭_w=𝐭_u ×𝐭_v represents the normal perpendicular to the plane. 
The scaling matrix S depicts the variances of Gaussian primitives on corresponding directions, noted that there is no distribution in the direction of 𝐭_w since the Gaussian primitive is defined on a flat elliptical disk. Thereby, the 3 × 3 rotation matrix R=[𝐭_u, 𝐭_v, 𝐭_w] and the scaling matrix S=[s_u, s_v, 0] form up the covariance matrix as Σ = RSS^⊤R^⊤. By leveraging the world-to-camera transformation matrix W and the Jacobian of the affine approximation of the projective transformation matrix J, the projected 2D covariance matrix Σ^' in camera coordinates is given as, Σ^'=J W Σ W^⊤ J^⊤. By virtue of the same structure and properties are maintained by skipping the third row and column of Σ^'  <cit.>, a 2 × 2 variance matrix Σ^2 D (corresponding to 𝒢^2 D) is obtained, 𝒢^2 D(𝐱^') = e^-1/2(𝐱^'-μ^')^⊤(Σ^2 D)^-1(𝐱^'-μ^'), where 𝐱^' and μ^' stands for the projected points of 𝐱 and μ in the screen space, respectively. Furthermore, the local tangent plane is defined as, X(u, v)=μ+s_u 𝐭_u u+s_v 𝐭_v v=𝐇(u, v, 1,1)^⊤, where 𝐇=[[ s_u 𝐭_u s_v 𝐭_v 0 μ; 0 0 0 1 ]]=[[ RS μ; 0 1 ]] is a homogeneous transformation matrix mapping point (u,v) on local tangent plane into the world space. Suppose there is a ray emitting from the camera optical center onto the screen space. The geometric-aware depth d^geo is hence defined as the distance between the camera and the Gaussian primitive along the ray. Accordingly, the homogeneous coordinate of the point (u,v) projected onto the screen is <cit.>, (u^', v^', d, w)^⊤=W X(u, v)=W 𝐇(u, v, 1,1)^⊤. where w is usually set to 1 (the homogeneous representation describes a point when w ≠ 0, while it depicts a ray when w = 0). This point can be further represented as the intersection of two orthogonal planes corresponding to u^' and v^' <cit.>. Specifically , u^'-plane is defined by a normal vector (-1, 0, 0) and an offset u^', the 4D homogeneous plane thus is 𝐡_u^'=(-1, 0, 0, u^'). Similarly, v^'-plane is 𝐡_v^'=(0, -1, 0, v^'). Conversely, both planes can be transformed back to the local tangent plane coordinates as, 𝐡_u=(W𝐇)^⊤𝐡_u^', 𝐡_v=(W𝐇)^⊤𝐡_v^', in which (W𝐇)^⊤ is equivalent to (W𝐇)^-1 as show in <cit.>. According to <cit.>, since the screen point (u^', v^') must lie on both u^'-plane and v^'-plane, for any point (u, v, 1, 1) on the elliptical disk, the dot product of the transformed plane 𝐡_u and 𝐡_v with the point (u, v, 1, 1) should be zero, 𝐡_u ·(u, v, 1,1)^⊤=𝐡_v ·(u, v, 1,1)^⊤=0, by solving the equation above, the coordinates of the screen point (u^', v^') on the local tangent plane are yielded, u=𝐡_u^2 𝐡_v^4-𝐡_u^4 𝐡_v^2/𝐡_u^1 𝐡_v^2-𝐡_u^2 𝐡_v^1, v=𝐡_u^4 𝐡_v^1-𝐡_u^1 𝐡_v^4/𝐡_u^1 𝐡_v^2-𝐡_u^2 𝐡_v^1, where 𝐡_u^i, 𝐡_v^i are the elements of the 4D homogeneous plane parameters. Thus far, the Gaussian primitives can be expressed with respect to (u,v). Suppose Σ^2 D M=I, by transforming the 2D covariance matrix Σ^2 D into the identity matrix I, the probability density function can be rewritten as standardized Gaussian (with mean of zero and deviation of one), 𝒢(𝐱^')=e^-1/2(M(𝐱^'-μ^'))^⊤(M(𝐱^'-μ^')), where 𝐱^' can be further replaced by (u, v) via some linear transformations as, 𝒢(u, v)=e^ -1/2(u^2+v^2). To further take account of numerical instability introduced by inverse homogeneous transformations of Eq. <ref>, a lower bounded Gaussian is imposed <cit.>, 𝒢̂(u,v)=max{𝒢(u,v), 𝒢((u^',v^')-μ^'/r)}. 
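To make the elliptical-disk parameterization and the ray-splat intersection above concrete, a minimal NumPy sketch follows. It is an illustrative reconstruction rather than the authors' implementation: the tangent vectors are assumed orthonormal, W stands for the full world-to-screen transform used in the equations, and the lower-bound radius r is the low-pass filter radius discussed next.

```python
import numpy as np

def splat_covariance(t_u, t_v, s_u, s_v):
    """Covariance of an elliptical-disk Gaussian: Sigma = R S S^T R^T,
    with R = [t_u, t_v, t_w] and S = diag(s_u, s_v, 0)."""
    t_w = np.cross(t_u, t_v)                 # disk normal, no extent along it
    R = np.stack([t_u, t_v, t_w], axis=1)    # 3x3 rotation
    S = np.diag([s_u, s_v, 0.0])
    return R @ S @ S.T @ R.T

def ray_splat_uv(mu, t_u, t_v, s_u, s_v, W, u_p, v_p):
    """Intersect the camera ray through screen pixel (u', v') with the local
    tangent plane X(u, v) = mu + s_u t_u u + s_v t_v v."""
    H = np.zeros((4, 4))
    H[:3, 0] = s_u * t_u
    H[:3, 1] = s_v * t_v
    H[:3, 3] = mu
    H[3, 3] = 1.0                            # homogeneous tangent-plane-to-world map
    M = (W @ H).T                            # transpose used in place of the inverse, as in the text
    h_u = M @ np.array([-1.0, 0.0, 0.0, u_p])  # u'-plane pulled back to (u, v) coordinates
    h_v = M @ np.array([0.0, -1.0, 0.0, v_p])  # v'-plane pulled back to (u, v) coordinates
    denom = h_u[0] * h_v[1] - h_u[1] * h_v[0]
    u = (h_u[1] * h_v[3] - h_u[3] * h_v[1]) / denom
    v = (h_u[3] * h_v[0] - h_u[0] * h_v[3]) / denom
    return u, v

def bounded_gaussian(u, v, uv_screen, mu_screen, r=np.sqrt(2) / 2):
    """Lower-bounded Gaussian: fall back to a screen-space low-pass filter of
    radius r when the projected disk degenerates."""
    g_exact = np.exp(-0.5 * (u * u + v * v))
    d = (np.asarray(uv_screen) - np.asarray(mu_screen)) / r
    g_filter = np.exp(-0.5 * float(d @ d))
    return max(g_exact, g_filter)
```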
When the elliptic disk is projected as the segment line in some cases, a low-pass filter (centered at μ^' with radius r) is wielded to guarantee sufficient points passed toward the screen space (the radius is set as √(2) / 2 empirically by following <cit.>). Suppose the opacity of i-th Gaussian primitive is α_i, by considering the alpha-weighted contribution along the ray, the accumulated transmittance is, T_i=∏_j=1^i-1(1-α_j 𝒢̂_̂ĵ(u, v)). To this end, the proposed object Gaussian renders both image and depth map of the object. The final color is c^α=∑_i ∈𝒩T_i α_i 𝒢̂_̂î(u, v) c_i, with c_i as the view-dependent appearance represented by spherical harmonics (SH) <cit.>. The alpha-blended depth map is formulated via the summation of geometric-aware depth d^geo of each Gaussian primitive as, d^α=∑_i ∈𝒩T_i α_i 𝒢̂_̂î(u, v) max{d^geo_i | T_i>σ}, where σ=0.5 is a threshold deciding whether the rendered depth valid. Noted that the maximum depth along the ray is selected if T_i does not reach the threshold as in <cit.>. §.§ Geometric-Consistency under Sparse Views In the circumstance of extremely sparse view reconstruction, the object Gaussian struggles with over-fitting <cit.>, where the background collapse and floaters are commonly witnessed even the rendered view deviates marginally from the given one <cit.>. In principle, the effective solutions involve online synthetic view augmentation and geometric-consistent depth supervision. Notably, effective online synthetic view augmentation remarkably reduces the need of a high budget of real images, and the multi-view geometric consistency prevents significant fluctuations on rendered depth. Synthetic View Warping Given what is at stake, it is intuitive to warp synthetic views online to enrich training samples, which encourages model to adapt from a diverse set of perspectives and brings better generalization capabilities on unseen views. Owing to lack of ground truth of synthetic views, the proposed alpha-blended depth map plays an essential role in warping process. Specifically, the rendered depth d^α is used to transform the given view into 3D points, which are re-projected as pixels of synthetic views. Formally, the pixel (u^'_g,v^'_g) of a given view is warped as (u^'_w,v^'_w) of an unseen view, which is (u_w^', v_w^')=K T[d^α K^-1(u_g^', v_g^', 1)^⊤], where K is the camera intrinsic, and the rendered depth d^α serves as the pixel-wise scaling factor to confine the re-projection within a meaningful range. Moreover, the transformation T from give view to warped view is obtained via perturbations (including rotations and translations) sampled from normal distribution randomly, by making use of tool provided in <cit.>. The warped pixels are assembled as the ground truth image I_w, ℒ_warp=ℒ_1 (Î, I_w). While the ground truth image I_w and the rendered image Î of a specific synthetic view establish supervision of the image warping loss via ℒ_1. Depth Supervision under Geometric-Consistent Considering each Gaussian is in tandem with the depth distribution of a certain region in the scene, to concentrate the geometric-aware depth of Gaussian primitives along the ray is beneficial to refine each Gaussian's contribution to overall distribution. Accordingly, the geometric-consistent loss is employed as ℒ_geo=∑_i, jω_i ω_j|d^geo_i-d^geo_j|, where ω_i= T_i α_i 𝒢̂_̂î(u, v) is the blending weight of i-th Gaussian primitive <cit.>. 
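As a concrete illustration of the alpha-blended depth and the depth-guided view warping described above, the sketch below composites per-ray depth and warps pixels of a given view into a randomly perturbed synthetic view. It is a simplified reconstruction under assumptions: the pinhole intrinsics K, the perturbation magnitudes, and the helper names are illustrative and not taken from the released code.

```python
import numpy as np

def alpha_blend_depth(alphas, gaussians, depths, sigma=0.5):
    """Composite per-Gaussian geometric depths along one ray (NumPy arrays),
    following d_alpha = sum_i T_i * alpha_i * G_i * max{d_i | T_i > sigma}."""
    T, d_alpha = 1.0, 0.0
    d_max = float(depths.max())              # maximum depth used when T drops below sigma
    for a, g, d in zip(alphas, gaussians, depths):
        w = T * a * g
        d_alpha += w * (d if T > sigma else d_max)
        T *= (1.0 - a * g)                   # accumulated transmittance
    return d_alpha

def sample_perturbation(rot_std_deg=15.0, trans_std=(0.01, 0.01, 0.05)):
    """Random SE(3) perturbation T for online synthetic view warping (illustrative values)."""
    ang = np.clip(np.random.normal(0.0, np.deg2rad(rot_std_deg), 3),
                  -np.deg2rad(45.0), np.deg2rad(45.0))
    cx, cy, cz = np.cos(ang)
    sx, sy, sz = np.sin(ang)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = np.random.normal(0.0, trans_std)
    return T

def warp_pixels(K, T, depth, us, vs):
    """(u'_w, v'_w) = K T [ d_alpha K^{-1} (u'_g, v'_g, 1)^T ] for a batch of pixels."""
    rays = np.linalg.inv(K) @ np.stack([us, vs, np.ones_like(us)], axis=0)
    pts = rays * depth[None, :]              # back-project with the rendered depth
    pts_h = np.vstack([pts, np.ones((1, pts.shape[1]))])
    cam = (T @ pts_h)[:3]
    pix = K @ cam
    return pix[0] / pix[2], pix[1] / pix[2]
```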
Geometric-consistency Guided Online Pruning Lastly, inspired by <cit.>, an online floaters pruning strategy is implemented by introducing a peak depth, d^peak= d^geo_max _i(ω_i). The peak depth <cit.> is acquired by selecting the Gaussian of highest blending weight, which also is the Gaussian of the highest opacity. To implement the multi-view geometric-aware depth comparison, the alpha-blending depth d^α and peak depth d^peak are compared under each given view. Generally, the alpha-blending depth locates slightly behind the peak depth, the differences result in a candidate region for pruning. While the corresponding opacity α_i of peak depth within the region guides the online floater pruning. §.§ 2D-3D Correspondence Generation The proposed SGPose starts from RGB data alone, without taking advantage of external depth information, yet it effectively renders reliable geometric-aware depth maps. Unlike traditional CAD-free pipelines that heavily rely on geometric initialization from SfM methods such as COLMAP <cit.>, our method handles the random initialization of a cuboid that approximates the bounding box of object. The differentiable optimization of the proposed object Gaussian is expressed as ℒ=ℒ_image+ℒ_warp+λ_1 ℒ_geo+λ_2 ℒ_normal. Concretely, ℒ_image is the image rendering loss combining ℒ_1 with the D-SSIM term from <cit.>, which is implemented in the given views only. ℒ_image for given views and ℒ_warp for synthetic views follow the identical optimization pipeline, which update the object Gaussian model alternatively. The reasons of implementing such a training strategy are two-folded. Firstly, data from respective views exhibit disparate geometric details, tackling with them independently accommodates the model to the diversified data distributions; Secondly, the stand-alone Gaussian densification and pruning mitigate fluctuations brought by view alternating. λ_1 is set as 10^4 to align up the scale of depth term with the other ones, and λ_2=0.005 for normal loss ℒ_normal=∑_i ω_i(1-n_i^⊤ϕ(u,v)) to facilitate the gradients of depth maps ϕ(u,v) in line with normal maps n_i <cit.>. Overall, the geometric-consistent supervision under sparse views reconstructs the desirable object Gaussian. The dense 2D-3D correspondences, generated to fully replace the CAD models, along with synthetic view color images and object masks, serve as the ground truth for a modified GDRNet++ <cit.> to perform monocular pose regression. Among them, the generation of 2D-3D correspondences and the simulation of realistic occlusions in images are essential for the task. For dense 2D-3D correspondences, object points are obtained by transforming the rendered depth map into 3D points of camera coordinates via the known camera's intrinsic, and in turn mapping the points to world space via the specific view parameters (rotations and translations). Pixel coordinates are calculated from the rendered object mask. Thus, the 3D points in world space and corresponding pixel coordinates are stack orderly as dense 2D-3D correspondences of any specific view, which is 𝐌_2D-3D= [[ 𝐑_obj^⊤(d^α K^-1(u^', v^', 1)^⊤-𝐭_obj); (u^', v^')_mask ]], where 𝐑_obj and 𝐭_obj are the specific view parameters. § EXPERIMENTS In this section, extensive experiments are conducted to demonstrate the competitive performance of the proposed SGPose. §.§.§ Datasets The proposed SGPose is evaluated on two commonly-used datasets, which are LM <cit.> and LM-O <cit.>. 
LM is a standard benchmark for 6D object pose estimation of textureless objects, which consist of individual sequences of 13 objects of various sizes in the scenes with strong clutters and slight occlusion. Each contains about 1.2k real images with pose annotations. The dataset is split as 15% for training and 85% for testing. LM-O is an extension of LM, from which to annotate one sequence of 8 objects with more severe occlusions of various degrees. For the LM dataset, approximately 1k images with 2D-3D correspondence maps are rendered for each object. For LM-O, which is designed for pose estimation in scenarios with occlusions, 50k images with significant occlusions on transparent backgrounds are rendered and involved in training with a ratio of 10:1, by following the convention of CAD-based methods for a fair comparison <cit.>. §.§.§ Implementation Details Ten real images and corresponding pose annotations are taken as input of the proposed geometric-aware object Gaussian, which are selected from real data in LM. Different selection strategies are conducted, e.g., selecting samples uniformly, randomly, in term of maximum rotation differences and maximum Intersection over Union (IoU). The simple uniform selection is adopted in our method taking account of real-world practice. The proposed object Gaussian tailors respective optimization strategies to the supervision signals. The number of points of random initialization is set as 4,096. The geometric-consistent loss involves in training at iteration 3,000 and the normal loss is enabled at iteration 7,000 as in <cit.>. Ten sparse views are given, and two synthetic views are created online around each give view at iteration 4,999, that is, 30 images involve in training for each object. The synthetic image warping spans 40% of training phase once it is activated, which updates the model alternatively with the image rendering loss. Both image rendering loss and image warping loss are endowed with equal weights, dominating the training process. Noted that it’s a little tricky to conduct pruning effectively in our problem settings. Regions of Interests (RoIs) are remained for training and backgrounds are masked out, it is possible that pruning techniques working well for distant floaters remove part of the foreground objects mistakenly, which is more pronounced when the objects are thin and tall. Thus, the objects phone and driller do not apply pruning empirically. For the occluded scene generation for LM-O dataset, we utilize pose annotations from PBR (Physically Based Rendering) <cit.> for image rendering. Each object is rendered onto the image using its respective geometric-aware object Gaussian, with all objects rendered sequentially in a single image. Realizing that individual object Gaussian do not inherently represent occlusions, we overlay the rendered images with the visible masks from PBR data to simulate occlusion scenarios effectively. The rendered masks, crafted from the proposed geometric-aware object Gaussian, is generated by mapping color images to a binary format. That is, assigning “1” to pixels within the object and “0” to those outside, thus producing a boolean array congruent with the original image's dimensions. It is possible that incorporating an extra mask loss for supervision could improve the performance of mask rendering in future work. The 2D bounding boxes for pose estiamtion are obtained by borrow an off-the-shelf object detector yolov3 <cit.>. 
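To illustrate how the rendered outputs described above are turned into training targets, the sketch below binarizes a rendered color image into an object mask and assembles the dense 2D-3D correspondence map of the Methods section from the rendered depth and the view pose. It is a schematic reconstruction; the array layouts and the zero threshold for the transparent background are assumptions.

```python
import numpy as np

def render_to_mask(rgb, thresh=0.0):
    """Binary object mask from a rendered RGB image on a transparent/black
    background: 1 inside the object, 0 outside."""
    return (rgb.sum(axis=-1) > thresh).astype(np.uint8)

def dense_correspondences(depth, mask, K, R_obj, t_obj):
    """Stack pixel coordinates with their back-projected 3D object points:
    X_obj = R_obj^T (d * K^{-1} [u, v, 1]^T - t_obj), for masked pixels only."""
    vs, us = np.nonzero(mask)                       # pixel coordinates inside the mask
    d = depth[vs, us]
    pix_h = np.stack([us, vs, np.ones_like(us)], axis=0).astype(np.float64)
    cam = (np.linalg.inv(K) @ pix_h) * d[None, :]   # points in camera coordinates
    obj = R_obj.T @ (cam - t_obj.reshape(3, 1))     # map into object/world space
    coords_2d = np.stack([us, vs], axis=1)          # (N, 2) pixel coordinates
    coords_3d = obj.T                               # (N, 3) object coordinates
    return coords_2d, coords_3d
```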
§.§.§ Evaluation Metrics We evaluate our method with the most commonly used metrics including ADD(S)-0.1d and Proj@5pix. ADD(S)-0.1d measures the mean distance between the model points transformed from the estimated pose and the ground truth. If the percentage of mean distance lies below 10% of the object’s diameter (0.1d), the estimated pose is regarded as correct. For symmetric objects with pose ambiguity, ADD(-S) measures the deviation to the closet model point <cit.>. Proj@5pix computes the mean distance between the projection of 3D model points with given predicted and ground truth object poses. The estimated pose is considered correct if the mean projection distance is less than 5 pixels. §.§ Comparison with State-of-the-Arts §.§.§ Results on LM The proposed method is compared with the CAD-based methods DPOD <cit.>, PVNet <cit.>, CDPN <cit.>, GDR-Net <cit.>, SO-Pose <cit.> and CAD-free methods RLLG <cit.> , Gen6D <cit.>, OnePose <cit.>, OnePose++ <cit.>, GS-Pose <cit.>, and Nerf-Pose <cit.> on metric of ADD(S)-0.1d and Proj@5pix. As shown in Tab. <ref>, even under the setting of sparse view training, our method achieves comparable performance compared to most CAD-free baselines that are trained with more than 100 views, and is on par with the CAD-based methods. Notably, our proposed object Gaussian is trained on only 10 views, whereas Nerf-Pose uses 156 views for OBJ-NeRF training. In brief, the objects where our method outperforms the best CAD-free method (i.e., NeRF-Pose^†) are highlighted in bold, and where it surpasses the best CAD-based method (i.e., SO-Pose) are in italic bold. Noteworthy, Gen6D^† is refined on a subset of the LM dataset, and NeRF-Pose^† is trained on relative camera pose annotations. As show in Tab. <ref>, SGPose demonstrates an impressive 98.51% average performance using only 10 given views, outperforming all baselines according to the metric of Proj@5pix. Noted that our method uses YOLOv3 <cit.> as the object detector, while all the others use the more recent YOLOv5 <cit.>. §.§.§ Results on LM-O As demonstrated in Tab. <ref>, our method is compared with state-of-the-arts w.r.t. the metric of average recall (%) of ADD(-S). Among CAD-based methods, “real+pbr” outperforms “real+syn” because “pbr” data <cit.> incorporate occlusions in object placement, with random textures, materials, and lighting, simulating a more natural environment compared to individually rendered synthetic data. Given the heavy occlusions typical of the LM-O dataset, training with “pbr” data significantly enhances performance. In our setting, we do not have access to CAD models nor do we leverage “pbr” data. Instead, we render the synthetic images that replicates the occlusion scenarios found in the LM-O dataset, using our proposed object Gaussian. Exemplary, we exceed Nerf-Pose <cit.> by 5.83% with 55.03% compared to 49.2%, also rival NeRF-Pose^†, which is trained on relative camera poses instead of ground truth pose annotations, by 3.63%. Impressively, we even slightly outperform the best CAD-based method SO-Pose <cit.>. §.§ Ablations §.§.§ Qualitative comparison of 2D-3D Correspondence The qualitative results of selected objects are presented in Fig. <ref>, where the transformed 3D bounding boxes are overlaid with the corresponding images. As observed, the predicted poses (green boxes) mostly align with the ground truth (blue boxes). The images are cropped and zoomed into the area of interest for better visualization. 
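For completeness, here is a small sketch of the two metrics used in the evaluation protocol above, ADD(-S) with a 10%-of-diameter threshold and Proj@5pix. It is a generic reference implementation rather than the official evaluation code, and the nearest-neighbour search for symmetric objects is done brute-force for clarity.

```python
import numpy as np

def add_metric(pts, R_gt, t_gt, R_pred, t_pred):
    """Mean distance between model points transformed by GT and predicted poses."""
    p_gt = pts @ R_gt.T + t_gt
    p_pr = pts @ R_pred.T + t_pred
    return np.linalg.norm(p_gt - p_pr, axis=1).mean()

def adds_metric(pts, R_gt, t_gt, R_pred, t_pred):
    """Symmetric variant: deviation to the closest transformed model point."""
    p_gt = pts @ R_gt.T + t_gt
    p_pr = pts @ R_pred.T + t_pred
    d = np.linalg.norm(p_gt[:, None, :] - p_pr[None, :, :], axis=-1)
    return d.min(axis=1).mean()

def proj_metric(pts, K, R_gt, t_gt, R_pred, t_pred):
    """Mean 2D reprojection distance of model points (Proj@5pix uses a 5-pixel cut-off)."""
    def project(R, t):
        cam = pts @ R.T + t
        uvw = cam @ K.T
        return uvw[:, :2] / uvw[:, 2:3]
    return np.linalg.norm(project(R_gt, t_gt) - project(R_pred, t_pred), axis=1).mean()

def is_correct(pts, diameter, K, gt, pred, symmetric=False):
    """Return (ADD(S)-0.1d correct, Proj@5pix correct) for one prediction."""
    R_gt, t_gt = gt
    R_pred, t_pred = pred
    add = (adds_metric if symmetric else add_metric)(pts, R_gt, t_gt, R_pred, t_pred)
    proj = proj_metric(pts, K, R_gt, t_gt, R_pred, t_pred)
    return add < 0.1 * diameter, proj < 5.0
```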
The rendered images are exhibited in column (c) and (h), compared to the reference CAD models in column (a), our object Gaussian successfully retains both the silhouette and the details of the objects. The accurate geometric shapes rendered from the object Gaussian ensure the performance of the subsequent pose estimation. Nonetheless, given the inherently challenging nature of sparse view reconstruction, some imperfections in the predicted shapes are also evident. §.§.§ Training with Occlusions Since LM-O is a more challenging dataset presenting complex occlusions of objects, the integration of synthetic data that captures diverse poses and realistic occlusions is beneficial for enhancing performance. We thus avail the object Gaussian to render such images to enrich training. Quantitatively, as shown in Tab. <ref>, two different synthetic data are rendered for training, "Occluded object" indicates images containing multiple objects with occlusions, whereas "Individual object" signifies images with a single unoccluded object. We observe that the use of "Occluded object" rendering improves the performance of the proposed SGPose by large margins under all the objects. For the LM-O dataset, the proposed SGPose generates images and 2D-3D correspondences that demonstrate a diverse range of poses and realistic occlusions, as shown in Fig. <ref>. In the training process, the synthetic images are integrated at a 10:1 ratio with real images, meaning that for every ten real images, one synthetic image is included. The 2D-3D correspondence maps are projected onto the target object in the images for visualization. Compared to training with individual object rendering, the inclusion of occluded object rendering remarkably enhances the model's performance in complex scenarios where objects have partial visibility. §.§.§ Effectiveness of Synthetic View Warping As shown in Tab. <ref>, successful reconstruction of the object ape in LM requires a minimum of 33 images without synthetic view warping. By synthesising 20 novel view alone with 10 given images, the proposed SGPose maintains the performance. This demonstrates the effectiveness of synthetic view warping in reducing the reliance on real images. §.§.§ Effectiveness of Online Pruning Point cloud accuracy is quantified as the proportion of reconstructed point clouds that fall within a specified distance threshold (e.g., 3mm) relative to the ground truth point clouds, where the vertices of the object meshes serve as the ground truth reference <cit.>. The point cloud accuracy is evaluated without and with online pruning for our proposed object Gaussian, following the established protocols in <cit.> and <cit.>. The results presented in Tab. <ref> demonstrate that the online pruning removes outliers from sparse view object Gaussian reconstruction, resulting in a more accurate and compact representation. Additionally, pose estimation comparison of selected objects w.r.t. ADD(S)-0.1d and Proj@5pix without and with online pruning is presented in Tab. <ref>. Besides, sparse view reconstruction poses a challenge for object with thin and long geometry, such as lamp and glue. The timely application of online pruning, initiated as divergence threatens, ensures the model to be reconstructed successfully. §.§ Qualitative Results of Synthetic View Warping The synthetic view for each real image is generated by introducing a controlled amount of noise to the given view, ensuring that the synthetic images retain a realistic and plausible appearance. 
The perturbation parameters are carefully chosen to keep the object within the camera's field of view. Specifically, the Euler angles for rotation perturbation are sampled from a normal distribution with a standard deviation of 15°, capped at an upper limit of 45°. The translation perturbation along each axis is independently sampled from a normal distribution with standard deviations of 0.01 m for the x and y axes, and 0.05 m for the z-axis, respectively. Fig. <ref> displays the qualitative results. Columns (a) and (e) present the ground truth of given views, while columns (b) and (f) show the corresponding rendered images from SGPose. Columns (c) and (g) illustrate the ground truth of synthetic views, and columns (d) and (h) exhibit the rendered synthetic images, respectively. The rendered results are obtained at iteration 30k upon completion of the training. §.§ Implementation and Runtime Analysis The experiments are conducted on a platform with an Intel(R) Xeon(R) Gold 5220R 2.20GHz CPU and Nvidia RTX 3090 GPUs with 24 GB of memory. Given ten 640×480 images as input, the object Gaussian takes about 10 minutes to reconstruct one object and renders images, masks, and 2D-3D correspondences in real time. An ImageNet <cit.> pre-trained ConvNeXt <cit.> network is leveraged as the backbone of our pose regression network; for a 640×480 image, the proposed SGPose takes about 24 ms for inference. § LIMITATIONS In future work, we plan to reduce the training time to enable portable online reconstruction and pose estimation, thereby facilitating real-time, end-to-end pose estimation suitable for real-world applications. § CONCLUSION The proposed SGPose presents a monocular object pose estimation framework, effectively addressing the limitations of traditional methods that rely on CAD models. By introducing a novel approach that requires as few as ten reference views, the derived geometric-aware depth guides the object-centric Gaussian model to perform synthetic view warping and online pruning effectively, showcasing its robustness and applicability in real-world scenarios under sparse view constraints. The occlusion data rendered from the proposed object Gaussian substantially enhances the performance of pose estimation, setting SGPose as the state of the art on the Occlusion LM-O dataset.
http://arxiv.org/abs/2409.02189v1
20240903180051
Collaboratively Learning Federated Models from Noisy Decentralized Data
[ "Haoyuan Li", "Mathias Funk", "Nezihe Merve Gürel", "Aaqib Saeed" ]
cs.LG
[ "cs.LG" ]
Collaboratively Learning Federated Models from Noisy Decentralized Data Haoyuan Li1, Mathias Funk1, Nezihe Merve Gürel2, Aaqib Saeed1 1Eindhoven University of Technology, Eindhoven, Netherlands 2Delft University of Technology, Delft, Netherlands {h.y.li, m.funk, a.saeed}@tue.nl, n.m.gurel@tudelft.nl September 9, 2024 ========================================================================================================================================================================================================================================================== § ABSTRACT Federated learning (FL) has emerged as a prominent method for collaboratively training machine learning models using local data from edge devices, all while keeping data decentralized. However, accounting for the quality of data contributed by local clients remains a critical challenge in FL, as local data are often susceptible to corruption by various forms of noise and perturbations, which compromise the aggregation process and lead to a subpar global model. In this work, we focus on addressing the problem of noisy data in the input space, an under-explored area compared to label noise. We propose a comprehensive assessment of client input in the gradient space, inspired by the distinct disparity observed between the density of gradient norm distributions of models trained on noisy and clean input data. Based on this observation, we introduce a straightforward yet effective approach to identify clients with low-quality data at the initial stage of FL. Furthermore, we propose a noise-aware FL aggregation method, namely Federated Noise-Sifting (FedNS), which can be used as a plug-in approach in conjunction with widely used FL strategies. Our extensive evaluation on diverse benchmark datasets under different federated settings demonstrates the efficacy of FedNS. Our method effortlessly integrates with existing FL strategies, enhancing the global model's performance by up to 13.68% in IID and 15.85% in non-IID settings when learning from noisy decentralized data. federated learning, decentralized AI, data quality, data-centric machine learning § INTRODUCTION Recent advances in machine learning (ML) have led to a surge in data generated by edge devices. Nonetheless, the ubiquity and heterogeneity of data across these devices pose a significant challenge to the efficacy of training generalizable models. In response to this pressing need, federated learning (FL) is gaining traction as a decentralized paradigm in the field of distributed ML. In FL, the model is learned in a decentralized manner where each client device collaboratively trains a global model without directly sharing the local data with the centralized server <cit.>. Despite this, one of the primary challenges in ML is data quality, which directly impacts the performance and reliability of ML models. The complexity and nature of data in the context of deep models, which require large-scale data, significantly amplify these challenges <cit.>. Data quality is multifaceted and can be compromised in both the input and feature space, leading to issues such as data incompleteness, feature corruption, and label inconsistency <cit.>. Notably, the issue of maintaining high-quality data becomes particularly challenging in FL due to its decentralized nature. FL helps mitigate domain-specific concerns such as IP security by requiring that the server has no visibility into client data, which in turn increases the difficulty of ensuring data quality. 
Moreover, FL is generally vulnerable to (often non-malicious) client failures, especially when it comes to noisy model updates, which can be seen as a special case of data perturbation <cit.>. These failures typically occur on the client side, over which the server has no control. Specifically, unreliable clients unintentionally distort the generalization of the global model by learning from noisy samples or labels with no control over the data collection phase (e.g., due to no direct interest by the federated client). In practice, the utilization of edge devices for FL is a double-edged sword: a large amount of data can be harvested to learn the model, but such data also often entails significant noise contamination. Particularly for tasks of interest like object detection, data collected from image sensors are susceptible to visual distortions, often attributed to the client's lack of technical expertise or environmental interference <cit.>. In this work, we mainly focus on the problem of noisy data in the input space, where the features in the client data are (non-maliciously) corrupted, an issue that has received far less attention than the well-studied corruption of the label space. To address this, we first investigate the gradient of the loss function, aiming to detect the presence of noisy client data. Our motivation stems from the fact that the gradient space of the model provides an informative signal regarding the usefulness of the data <cit.>, as it directly captures joint information between representations and output <cit.>. Inspired by this finding, we pose the question: does the gradient provide a meaningful way to distinguish between the models of noisy and clean federated clients? From the density of the gradient norm, we observe a distinct disparity between models trained on noisy and clean input data. We thus exploit the richness of the gradient space to propose a straightforward yet effective approach to identify clients with low-quality data at the beginning stage (i.e., the first training round) of FL. 
We further propose a noise-aware FL aggregation method, namely Federated Noise Sifting (FedNS). Specifically, it re-weights the contributions of client models based on the data quality (i.e., clean or noisy input) while aggregating a global model (see Figure <ref>). FedNS can be easily used as a plug-in approach in conjunction with widely used FL strategies. To demonstrate the efficacy of FedNS, we perform extensive experiments on diverse benchmark datasets under different heterogeneous settings. Our findings show that our proposed method effortlessly integrates and works well with existing FL aggregation strategies, such as FedAvg <cit.>, FedProx <cit.>, FedTrimmedAvg <cit.>, and FedNova <cit.>, which makes it widely applicable. Main contributions: Our work makes the following key contributions: * Novel single-interaction approach for identifying noisy clients. We propose an approach that employs gradient norm analysis to identify and categorize noisy (low-quality model updates) clients at the initial training stage (i.e., the first round) in a federated context without revealing individual client data for inspection. * Robust and plug-in weights aggregation strategy. We introduce a robust and plug-in weights aggregation strategy to mitigate the adverse impacts of noisy clients, named FedNS. It offers a versatile and easily integrable solution to enhance the global model's generalization. We further substantiate the effectiveness of FedNS through its application to corrupted client data across six datasets and four FL strategies. * Systematic conceptualization of noisy FL and benchmark. We present a concept of noisy FL to understand non-malicious client-side data corruption. Based on this concept, we build the noisy datasets benchmark specifically tailored for FL environments with noisy input (we will release the code upon acceptance). § RELATED WORK Dealing with data quality issues is a well-studied problem in machine learning (ML). Significant efforts have been devoted to mitigating the adverse effect of low-quality data on the generalization capabilities of ML models. <cit.>, <cit.>, and <cit.> reveal that while Deep Neural Networks (DNNs) exhibit advanced generalization abilities, they are notably susceptible to low-quality data, particularly distortions like noise and blur, which considerably diminish their performance. To tackle this challenge, previous work enhances DNN robustness to image distortions by selectively correcting the activations of the most noise-susceptible convolutional filters <cit.>. Likewise, <cit.> treat image distortion identification as multi-label learning and train a multi-task DNN model to improve model generalization. Instead of reformulating the architecture of the DNN to improve the robustness of the model, <cit.> proposes to stabilize the training progress against input perturbations. In a complementary study, <cit.> shows that fine-tuning and re-training the model using noisy data can effectively alleviate the effect of image distortion. In addition to this, numerous studies have concentrated on addressing label noise, a form of low-quality data in the label space. DNNs trained on noisy labels tend to overfit the noise patterns, which degrades generalization performance on unseen data. Most of the existing methods address representation learning with noisy labels by estimating the noise transition matrix and innovatively combining methods like adaptation layers, loss corrections, and regularization techniques <cit.>. 
While these approaches yield substantial results in centralized learning, their applicability is notably limited in federated learning (FL) scenarios, where data is distributed across myriad devices, and the data quality from each client can not be guaranteed due to its private nature. Recent works in FL mainly focus on addressing the data quality issues pertaining to the label space. Many strategies have been developed to deal with the disparity of label quality in FL. These methods mitigate the impact of label noise by conducting either client selection to re-weight model updates <cit.> or data sampling for label correction or exclusion <cit.>. Additionally, other works tackle this challenge by correcting the label error. <cit.> perform label noise correction using consensus-derived class-wise information for dynamic noise identification and label correction. <cit.> tackles label noise in federated learning by updating local models with globally aligned centroids and correcting labels through global model insights. In spite of the progress that has been made in resolving label noise, data subset selection <cit.>, data valuation <cit.>, dealing with low-quality data in input space (i.e., when noise is in the samples like in images) in the federated setting still remains unexplored. Furthermore, we recognize that client data may be susceptible to backdoor attacks via adversarial methods during both inference and training time <cit.>; these concerns fall outside the scope of our investigation. To this end, our method aims for a flexible and efficient solution suitable for a range of FL strategies to learn robust models using decentralized data. § METHOD We introduce our robust federated learning (FL) method, named Federated Noise Sifting (), designed for learning from noisy input. We begin with defining the challenge of FL in the context of input noise and then describe our single-interaction approach to discover and classify noisy and clean clients. Then, we propose a generic noise-aware aggregation method that can be paired with any federated strategy to mitigate the impact of contributions from noisy clients. §.§ Problem Definition In a typical FL scenario, a total of K clients collaboratively train a machine learning model by optimizing a global objective function <cit.>. To formulate this setting, consider K disjoint local datasets by clients, where each client indexed by k holds a private dataset D_k = {(x_i, y_i)}_i=1^|D_k|, with each pair (x_i, y_i) representing a data sample such that x_i ∈𝒳 as the input and y_i ∈𝒴 as the corresponding label. The objective of the clients is to minimize the loss ℒ(θ) aggregated from the individual loss of each local client ℓ(θ, D_k) with respect to the aggregated server model parameters θ: !min_θℒ(θ) = min_θ1/∑_k ∈ [K]|D_k|∑_k ∈ [K]∑_(x_i, y_i) ∈ D_kℓ(θ; {(x_i, y_i)}) = min_θ∑_k=1^K w_k ℓ(θ; D_k) where w_k denotes the weight assigned to the k-th client's loss function. Here, we take w_k proportional to the size of local data D_k of client k such that w_k = |D_k| / ∑_k=1^K |D_k|, similar to that of <cit.>. Noisy Federated Learning We focus on a noisy federated learning setting where perturbations are present in the input space 𝒳, that often occur due to the unexpected variability in data collection or physical conditions and other environmental factors <cit.>. Consider an FL setup consisting of K participating clients, where M out of K clients' local datasets are partially contaminated by some noise, whose function is defined as η(·): 𝒳↦𝒳. 
Conversely, the local datasets of the remaining N clients, where N=K-M, are unaffected. To simplify the exposition, we index the noisy clients by m and the clean clients by n, and denote the respective client data by D_m and D_n, where {∪_m D_m}∪{∪_n D_n}= ∪_k D_k. We assume that the local dataset of each noisy client m can be further split disjointly into clean data D_m^clean = {(x_i, y_i)}_i=1^N_m^clean and noisy data D_m^noisy = {(η(x_j), y_j)}_j=1^N_m^noisy, such that D_m^clean ∪ D_m^noisy = D_m and D_m^clean ∩ D_m^noisy = ∅, where N_m^clean and N_m^noisy denote the numbers of clean and noisy data samples, respectively. In such a noisy federated learning setting, the global objective can be expressed as: min_θ ℒ(θ) = min_θ ( ∑_n ∈ [K] s.t. D_n is clean w_n ℓ(θ; D_n) + ∑_m ∈ [K] s.t. D_m is noisy w_m ℓ(θ; D_m) ) = min_θ ( ∑_n ∈ [K] s.t. D_n is clean w_n ℓ(θ; D_n) + ∑_m ∈ [K] s.t. D_m is noisy w_m ( |D_m^clean|/|D_m| ℓ(θ; D_m^clean) + |D_m^noisy|/|D_m| ℓ(θ; D_m^noisy) ) ), which aggregates the losses of the clean clients and the noisy clients. The loss of the noisy clients can be further written as: ℓ_noisy(θ) = ∑_m=1^M w_m ( ∑_i=1^N_m^clean ℓ(x_i, y_i; θ) + ∑_j=1^N_m^noisy ℓ(η(x_j), y_j; θ) ). Noisy Input Data In decentralized data collection, input corruption typically occurs on the client side, influenced by varying types of real-world noise and degrees of corruption ranging from weak to strong. To address this, we propose a definition for noisy FL data that accounts for the intensity and level of noise in each client's local dataset. Consider the partition of a global training dataset D into K local datasets D_k for k∈[K]. This partitioning follows either an Independent and Identically Distributed (IID) or a non-IID setting <cit.>. Within this set of K clients, we randomly pick M clients as the noisy clients, whose local datasets are contaminated by input noise, such as Gaussian blur, contrast, and defocus blur (see the extended list in Section <ref>). Consequently, the local dataset D_m of the m-th noisy client is transformed into noisy local data by applying a randomly selected transformation τ from the set of data transformations 𝐓 with a specific severity level ξ∈{low, medium, high}. This can be expressed as: D_m^noisy = { η(x, τ, ξ) | x ∈ D_m, τ∈𝐓}. Thus, we define the noise level of client m as NL_m = |D_m^noisy|/|D_m|, which represents the fraction of corrupted data samples over the entire local data of client m. We further introduce image noise severity to quantify the extent of distortion in digital images. Severity levels are scaled from low to high, indicating the intensity of the noise. An illustration of different levels of image noise severity is shown in Figure <ref>. In addition to image distortion, we further introduce patch-based image corruption to simulate the condition in which nullified data are injected into the client data. We term this injection of nullified images as patch-based corruption. 
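Before turning to the specific patch types described next, the sketch below shows how a client's local set would be corrupted at a chosen noise level NL_m and severity ξ. The two corruption functions (additive Gaussian noise and a contrast change) are simple stand-ins for the full transformation set 𝐓 of the benchmark, and images are assumed to be float arrays in [0, 1]; this is illustrative code, not the released benchmark generator.

```python
import numpy as np

# Stand-ins for the benchmark's transformation set T; severity maps to a strength factor.
SEVERITY = {"low": 0.3, "medium": 0.6, "high": 1.0}

def gaussian_noise(img, s):
    return np.clip(img + np.random.normal(0.0, 0.1 * s, img.shape), 0.0, 1.0)

def contrast(img, s):
    return np.clip((img - img.mean()) * (1.0 - 0.7 * s) + img.mean(), 0.0, 1.0)

TRANSFORMS = [gaussian_noise, contrast]

def corrupt_client_data(images, labels, noise_level, severity="high", rng=None):
    """Corrupt a fraction noise_level = |D_m^noisy| / |D_m| of a client's samples
    with a randomly chosen transform at the given severity; labels stay untouched."""
    rng = rng or np.random.default_rng()
    n_noisy = int(round(noise_level * len(images)))
    noisy_idx = rng.choice(len(images), size=n_noisy, replace=False)
    out = [img.copy() for img in images]
    for i in noisy_idx:
        tau = TRANSFORMS[rng.integers(len(TRANSFORMS))]
        out[i] = tau(out[i], SEVERITY[severity])
    return out, labels, set(noisy_idx.tolist())
```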
We characterize these corruptions in the following three forms (but note that many others could exist): black patches, patches with Gaussian noise, and generative patches (we use StyleGANv2) <cit.>, illustrated in Figure <ref>. Here, specific data samples in the client's local dataset are substituted with such patches, while the corresponding labels remain unchanged, e.g., a random generative patch has an object label assigned to it. §.§ Single-Interaction Discovery of Noisy Clients Unlike centralized learning, where the server has direct access to inspect all the data, FL presents a unique challenge due to the decentralized and private nature of the data. In FL, clients contribute to the server by sharing their model parameters rather than raw data <cit.>, making it impossible to visually verify the quality of data. To address this issue, we propose a novel approach that can discover the noisy client with only a single-interaction (or in a one-shot manner) using only a local model. Specifically, we identify the noisy clients by evaluating the gradient norms <cit.> calculated from local data during the initial training round without compromising the client's data confidentiality. The challenge in training deep neural networks on input (such as images) with noise and distortions is that the model can easily affected by the perturbations, thus producing overconfident predictions on distorted input data samples <cit.> or leading to poor generalization. Motivated by the findings of <cit.>, we empirically discover that a client's local model trained on distorted inputs exhibits distinct behavior in its gradient space compared to a model trained on clean inputs (see Figure <ref>). In other words, the magnitude of gradients, when trained on clean or corrupted data, is indicative of variation in the model's learning process. The gradient norm effectively encapsulates the relationship between input features and the model’s output. Intuitively, we propose a single-interaction method to estimate data distortion by evaluating the gradient norm of the softmax cross-entropy loss ℒ during the backpropagation phase of local model training that we get at no extra cost. Given the model parameterized by θ, we define the loss of the input batch data 𝐱 with its ground true labels 𝐲 as: ℒ(θ; 𝐱, 𝐲) = -log( e^f_𝐲(θ; 𝐱) / T/∑_c=1^C e^f_c(θ; 𝐱) / T) where the f(·) signifies the model's output function, f_y(θ; 𝐱) is the output logit from the model f for the ground true label 𝐲 given the input 𝐱, and f_c(θ; 𝐱) is the output logit from the neural network for class c. The gradient norm of the input batch data 𝐱 over C classes is defined as follows: g(𝐱) = ∇_θℒ(θ; 𝐱, 𝐲) _p The term g(𝐱) refers to the gradient norm of the input batch data 𝐱. Considering the k-th client within a set of clients, the overall local dataset of client D_k is split into B mini-batches. Hence, we compute the gradient norms for the entire local dataset D_k, expressed as { g(𝐱_b) }_b=1^B. After the initial round of local training, the server aggregates the gradient norm results from each of the N clients, forming a score vector by computing variance. By applying the kernel density estimation on the aggregated gradient norm vectors from the clients, we observe a distinct separation in the distributions of clean and noisy clients in Figure <ref>. Subsequently, we apply the K-means clustering on the set of all client gradient norm vectors to distinguish two clusters, as illustrated in Figure <ref>. 
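A minimal sketch of the single-interaction scoring just described follows: each client reports the variance of its per-batch last-layer gradient L1-norms after the first local round, and the server clusters these scalar scores with K-means. The PyTorch and scikit-learn usage is illustrative, and the assumption that the model exposes its final linear layer as model.fc (as in torchvision's ResNet-18) is ours.

```python
import numpy as np
import torch
import torch.nn.functional as F
from sklearn.cluster import KMeans

def client_gradient_score(model, loader, device="cpu"):
    """Variance of per-batch L1 gradient norms of the last layer on local data."""
    model.to(device)
    norms = []
    for x, y in loader:
        model.zero_grad()
        loss = F.cross_entropy(model(x.to(device)), y.to(device))
        loss.backward()
        norms.append(model.fc.weight.grad.abs().sum().item())  # L1 norm of the last layer
    return float(np.var(norms))

def split_clients_by_score(scores):
    """Cluster per-client scalar scores into two groups with K-means."""
    s = np.asarray(scores, dtype=np.float64).reshape(-1, 1)
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(s)
    return km.labels_, km.cluster_centers_.ravel()
```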
We then compute the centroid of each cluster, categorizing the one with the higher centroid value as `clean' and its counterpart as `noisy', inspired by the findings of <cit.>. We examine the outcomes of client clustering within the FedNS framework. During the initial training round, the server categorizes all clients into either clean or noisy clusters based on gradient norms. We then visualize 9 images sampled from both clean and noisy clusters. As shown in Figure <ref>, the images from the noisy cluster are contaminated by a mixture of noise, whereas those from the clean cluster remain intact. A key advantage of our proposed method is its implementation at the beginning of the federated training process, eliminating the need for additional communication rounds. By utilizing only the aggregated gradient norms from each client, our approach is directly usable (as a plug-in) with various FL strategies. Our method incorporates only the gradient norm (i.e., a scalar value) along with standard model parameters during the aggregation phase, thereby not increasing communication overhead. Moreover, this method enhances security by requiring clients to share only scalar values, significantly reducing the risk of sensitive data leakage. §.§ Federated Noise Sifting In FL, the formulation of the global model normally involves the aggregation of local models from each client, weighted based on the total count of data samples in their respective local datasets <cit.>. When an average aggregation strategy is applied and clients' data follow the IID setting, the contribution of each client's model parameters to the server is equivalent. Consequently, in scenarios of noisy federated learning, the contribution of noisy clients can seriously degrade the generalization quality of the server-side aggregated model, as detailed in Table <ref>. Normally, the optimization objective of a standard noisy FL problem is to minimize the loss from both noisy and clean clients defined in Equation <ref>, where the model parameters from each client are weighted by the amount of data it holds divided by the total number of data samples across all clients. To mitigate the detrimental effect of noisy clients, we propose a noise-sensitive weights aggregation strategy that is resilient to data corruption during the federated training process, termed FedNS. Specifically, we introduce the weighting factors α and β to control the contribution of model updates from clean and noisy clients, respectively. Given a set of client updates and their corresponding weights, the global parameter θ is updated by aggregating the weighted updates from all clients, including both clean and noisy ones. Consider θ_g^t as the global parameter at training round t, with K total clients comprising N clean clients and M noisy clients. Let w_k be the weight and θ_k be the model parameter of client k. Thus, we formulate the proposed global parameter update as follows: θ_g^t ← θ_g^t-1 - η( ∑_n ∈ [K] s.t. D_n is clean w_n α ∇ℓ(θ_n^t; D_n) + ∑_m ∈ [K] s.t. D_m is noisy w_m β ∇ℓ(θ_m^t; D_m) ) where ∇ℓ(θ_n^t; D_n) represents the gradient of the loss function for the clean client n at training round t, while ∇ℓ(θ_m^t; D_m) denotes the corresponding gradient for the noisy client m in the same training round. η denotes the learning rate. The coefficients α and β are utilized to proportionally scale the model update from clean and noisy clients, respectively. 
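In parameter-averaging form, the update rule above amounts to scaling each client's data-size weight by α or β before normalizing. The sketch below shows this noise-aware aggregation for parameters stored as NumPy arrays; it is a schematic rendering of the rule, not the released implementation, and the β = 0.3 default simply mirrors the ablation reported later.

```python
import numpy as np

def fedns_aggregate(client_params, client_sizes, noisy_flags, alpha=1.0, beta=0.3):
    """Noise-aware aggregation: weight each client by its data size scaled by
    alpha (clean) or beta (noisy), then average the parameter dictionaries."""
    sizes = np.asarray(client_sizes, dtype=np.float64)
    scale = np.array([beta if noisy else alpha for noisy in noisy_flags])
    w = sizes * scale
    w = w / w.sum()                                  # normalized contribution per client
    agg = {}
    for name in client_params[0]:
        agg[name] = sum(w_k * params[name] for w_k, params in zip(w, client_params))
    return agg
```

Setting α = β = 1 recovers the standard data-size weighting of FedAvg, while β < α damps, rather than discards, the noisy clients' contribution.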
Concisely, our approach involves assigning a higher weighting factor to clean clients and a reduced one to noisy clients to lower their contributions to the overall global model. We provide the complete FedNS algorithm in Algorithm <ref>, which can be easily plugged into various FL strategies such as standard Federated Averaging (FedAvg); in the algorithm, N denotes the local clients, T the total training rounds, E the local training epochs, and η the learning rate, and at each round t the server samples a set S_t of clients that can be divided into n clean clients and m noisy clients. § EXPERIMENTS §.§ Datasets, Models, and Configuration Datasets. To demonstrate the efficacy of FedNS, we develop the noisy datasets benchmark based on <cit.>. We consider the image classification task and test our approach across various datasets, including CIFAR10/100 <cit.>, Path-MNIST <cit.>, Fashion-MNIST <cit.>, EuroSAT <cit.>, and Tiny-ImageNet <cit.>. We further apply the data corruptions defined in Section <ref> to these datasets to create the noisy input for FL. Specifically, we categorize data corruption into distortion and patch-based noise. In practice, distortion corruption consists of noise addition (e.g., Gaussian noise, shot noise, impulse noise, pixelated noise), blur (e.g., defocus blur, glass blur, motion blur, Gaussian blur), photometric distortions (e.g., brightness, contrast, saturate), synthetic distortions (e.g., snow, fog, elastic, spatter), and compression artifacts (e.g., JPEG compression). Moreover, we further evaluate our method on patch-based obfuscation noise, including black patches, Gaussian noise patches, and StyleGAN-based patches <cit.>. Baselines and Implementation. We apply FedNS in conjunction with four widely used FL strategies to evaluate its effectiveness, namely FedAvg <cit.>, FedProx <cit.>, FedTrimmedAvg <cit.>, and FedNova <cit.>. We employ ResNet-18 as the main model architecture, utilizing mini-batch SGD as the universal local optimizer for all FL strategies. Moreover, we further evaluate FedNS on a different neural architecture, ConvMixer-256/8, in Table <ref>. The optimizer uses a learning rate of 0.01, an SGD momentum of 0.9, and a weight decay of 10^-4. We set the number of local training epochs to 5 and the global communication rounds to 150 across all datasets. We consider a setup with N = 20 clients for our experiments unless mentioned otherwise. This choice aligns with standard practices in FL and accommodates our computational limitations. The training set is distributed under both IID and non-IID settings. For the main experiments, we divide the client set into 15 noisy clients and 5 clean clients, with full client participation r_p = 1.0. We compute the L_1-norm of the gradient of the last layer for noisy client detection in all cases; see Table <ref> for a comparison between the effectiveness of the L_1 and L_2 norms, as well as the impact of batch size on detection performance. We conducted all experiments on NVIDIA A10 GPUs. §.§ Results and Ablation Studies Participation of noisy clients deteriorates the performance of the global model. To validate the efficacy of our proposed method, we first conduct an experiment for model training with clean and noisy input across all the datasets and utilize the same noise configuration for our further empirical evaluation. With this, we aim to evaluate the upper-bound performance that can be achieved when learning from a mixture of noisy and clean clients. 
Table <ref> presents the comparative results of average accuracy for all considered datasets. We focus on three specific distortions (i.e., defocus blur, Gaussian blur, contrast) due to their significant impact on degrading the model's generalization capability to simulate the worst case in noisy FL. For the generation of distorted data used in the experiments summarized in Table <ref>, each noisy sample was produced by randomly selecting a distortion type with the configuration characterized by the noise severity level ξ = high. We set the noise level to NL_m = 100% for every client m for the experiments of both Table <ref> and Figure <ref>. We see the participation of noisy clients leads to a significant degradation in the model's generalization capability across all tasks, indicating the detrimental impact of noisy data in the FL environment. Furthermore, due to the inherent lack of visibility into the data from these federated clients, the resultant global model tends to be of low quality and it may become challenging in a real-world setting to identify the underlying reasons for its poor performance. FedNS significantly improves standard federated aggregation methods. We investigate the robustness of our proposed method by applying on six image datasets with different settings under the noisy scenario. As shown in Table <ref>, the performance of all aggregation methods exhibits a general trend of improvement by simply plugging FedNS to the considered strategies. In particular, we consider the worst-case with heterogeneous data setting in Table <ref>, where 15 out of 20 noisy clients participate in the federated training with high noise severity and 100% noise level. Adding to FL strategies yields better overall performance among all the datasets, especially for some vulnerable datasets (e.g., Path-MNIST) that are sensitive to data corruption. Additionally, we demonstrate the efficacy of in dealing with patch-based noise on CIFAR-10 and Path-MNIST as presented in Figure <ref>. From Figure <ref>, we observe that FedNS consistently boosts the performance of the aggregation method across all the datasets and settings. This further shows that FedNS is capable of handling various types of noises a model can encounter in a real-world setting. Impact of noisy client distributions on performance across varying numbers of clients. We evaluate FedNS under the influence of noise client percentages with different amounts of clients at scale, where K denotes the number of clients. We select K from { 30, 50, 100 } and set the noisy client percentage as 30%, 50%, 80% , respectively. As shown in the figure, the results exhibit a consistent pattern of variation in model performance across different scenarios. Specifically, as the percentage of noisy clients increases, there is a gradual decline in test accuracy for both IID and non-IID distributions. However, FedNova+NS (IID) consistently outperforms the other methods, maintaining higher accuracy levels across all settings. This demonstrates the robustness and effectiveness of the FedNova+NS method in mitigating the adverse effects of noisy clients, especially as the number of clients increases. Impact of Client Participation Rate (r_p). Next, we examine how the client participation rate (r_p) affects the performance of . We conduct experiments on CIFAR-10 and Path-MNIST, maintaining a fixed noise configuration across all settings. 
We vary r_p to simulate different device participation patterns, choosing values from r_p ∈0.2, 0.5, 1.0, where r_p = 1.0 denotes full client participation. In practice, the server ensures full client participation during the first training round to identify noisy clients. Figure <ref> provides the results. We observe that incorporating  significantly enhances performance across all client participation patterns. Our experiments suggest that  remains robust to variations in client availability and data quality. Robustness of  on mixed noise conditions. Next, we investigate the robustness of  under complex noise conditions that involve a combination of different noise types, including the distortions and patch-based noises as described in Section <ref>. Table <ref> shows the performance gains achieved by  are particularly significant in high-variance datasets such as Path-MNIST and Tiny-ImageNet, where FedNova+NS demonstrates substantial improvements over standard aggregation. Notably,  consistently enhances the global model's performance across all evaluated datasets, showcasing its robustness in handling intricate noise scenarios. Our results highlight the effectiveness of  in mitigating the impact of complex noise conditions, where decentralized data may be subject to various types of distortions at the same time. Evaluation on Weight Factor(β) of Noisy Client. We further investigate the effect of weight factor β on overall performance in FedNS. The weight factor β controls the weights of aggregated noisy client local models, as described in Equation <ref>. We considered various values of β∈{0, 0.1, 0.3, 0.5, 0.7, 1.0 }, where β=0 signifies the exclusion of noisy model weights, and β=1.0 indicates the direct aggregation of model weights. As shown in Table <ref>, the setting of β=0.3 yields the best performance, suggesting an optimal trade-off. Interestingly, the exclusion of noisy model weights (β=0.0) leads to a degradation in the generalization capability of the global model. This observation suggests that incorporating mitigated noisy data enhances the robustness of the global model. FedNS on Real-world Human Annotation Errors. In this experiment, we extend our investigation to assess the efficacy of FedNS in addressing real-world data quality issues, specifically human annotation errors. While this work focuses on mitigating data corruption in the input space, we identify that FedNS can effectively handle label noise - a well-studied problem in noisy FL. Label noise occurs when data labels are incorrectly assigned while the input features remain unaltered. Specifically, we assessed FedNS on CIFAR-10/100N, two benchmark datasets featuring real-world noisy labels resulting from human annotation errors <cit.>. We employed FedAvg and FedNova as baseline approaches, adhering to the training configuration outlined in Section <ref>. From Table <ref>, we observe that FedNS consistently improved the performance across all the experiments. Our findings demonstrate that FedNS not only excels in mitigating input space corruption but also shows promising results in handling label noise. The ability to address both input space corruption and label noise positions FedNS as a valuable tool for practitioners dealing with real-world datasets, where multiple types of data imperfections may coexist. Impact of Initial-Round Client Participation on FedNS Performance. 
In FL, the assumption that all clients must be warmed up and participate from the very first training round may not always hold true, as some clients may only participate in later rounds. To address this, we explore the scenario where only a subset of clients participates in the initial round. Instead of collecting gradient norms solely in the first round, we iteratively apply FedNS until all clients have been engaged. We evaluate this approach by setting the first-round participation rates to 10%, 50%, and 100%, with 100% representing full client participation. The results, presented in Table <ref>, highlight the robustness of FedNS across varying levels of initial-round client participation. FedNS on Different Noise Level (NL) of Client Data. In Table <ref>, we examine the performance of  across various noise configurations. Specifically, we consider noise levels NL∈{ 50%, 80%, 100%} and noise severity ξ∈{ Medium, High }. These results demonstrate that  substantially enhances model generalization under high noise conditions, especially when clients' data is completely corrupted (i.e., 100% noise level and high severity). Conversely, in scenarios with milder noise where the impact on federated models is minimal, the implementation of  does not negatively affect the generalization process. Hence, we limit our experiments to noise levels at or above 50%, as noise levels below this threshold have negligible impact on model performance. Hyperparameters Selections for Noisy Clients Detection. One of the essential components of  is using gradient norms to identify noisy clients. We investigate several factors that may affect the performance of detecting noisy clients, specifically the selection of the L_p-norm and the batch size of gradient norms for clustering. This ablation experiment is performed on the CIFAR-10 and Path-MNIST datasets, as presented in Table <ref>. The comparison between the L_1-norm and L_2-norm shows that the L_1-norm generally outperforms higher-order norms. Moreover, we evaluate the impact of batch size by setting the mini-batch size to range from 1 to 128. Our experiments suggest that a larger batch size effectively reduces the variance of the gradient estimates. The results, shown in Table <ref>, indicate that a batch size selection around 16 ∼ 64 yields better performance across all settings. Evaluating  with alternative model architecture. In this experiment, we investigate the robustness of  when applied to a different model architecture. We employ the ConvMixer-256/8 <cit.> model and train it using FedNova on a range of datasets. The noise configuration remains consistent with the details provided in Section <ref>, and we evaluate the model's performance under both IID and non-IID settings. As shown in Table <ref>, the federated aggregation method, when paired with , achieves enhanced performance across all considered datasets. These improvements, observed consistently across tasks, demonstrate the adaptability and effectiveness of  when integrated with alternative architectural paradigms. The results highlight the versatility of our approach in enhancing the performance of different federated model regardless of the neural architecture, further emphasizing its potential as a robust and flexible method for FL. § CONCLUSION In this paper, we study the training of the federated neural networks within a noisy environment in which the input space of client data is corrupted. 
We propose to utilize the gradient norm of local training updates to discover the noisy clients. We then propose FedNS, an effective method designed to integrate with diverse FL aggregation methods to mitigate the impact of model updates from noisy clients. Our experimental results on several benchmark datasets show that FedNS is robust against data perturbation and significantly boosts the generalization of federated models. We further present extensive ablation studies to provide a better understanding of FedNS. Our findings highlight the importance of FL approaches that prioritize robustness when training models on decentralized data. By focusing on this aspect, we can enhance the reliability and stability of FL in real-world settings, where data quality may vary significantly across participating clients. Future work will investigate FedNS's efficacy in the language modeling domain and learning from multi-modal noisy data in a decentralized setting.
http://arxiv.org/abs/2409.02172v1
20240903180001
Diverse dark matter profiles in FIRE dwarfs: black holes, cosmic rays and the cusp-core enigma
[ "Sophie Koudmani", "Douglas Rennehan", "Rachel S. Somerville", "Christopher C. Hayward", "Daniel Anglés-Alcázar", "Matthew E. Orr", "Isabel S. Sands", "Sarah Wellons" ]
astro-ph.GA
[ "astro-ph.GA", "astro-ph.HE" ]
firstpage–lastpage An efficient observational strategy for the detection of the Oort cloud Eran O. Ofek1,⋆, Sarah A. Spitzer1, Guy Nir2 Received ; accepted ======================================================================= § ABSTRACT Dwarf galaxies have historically posed challenges to the cold dark matter (CDM) model and, while many of the so-called `dwarf galaxy problems' have been mitigated by incorporating baryonic processes, the observed diversity of dwarf galaxy rotation curves remains a contentious topic. Meanwhile, the growing observational samples of active galactic nuclei (AGN) in dwarf galaxies have prompted a paradigm shift in our understanding of dwarf galaxy evolution, traditionally thought to be regulated by stellar feedback. In this study, we explore the potential role of AGN feedback in shaping dark matter distributions and increasing the diversity of dwarf galaxy rotation curves, using a new suite of cosmological zoom-in simulations of dwarf galaxies with the fire-3 model. Our findings indicate that the presence of active black holes (BHs) in dwarf galaxies can lead to diverse outcomes, ranging from cuspier to more core-like profiles. This variability arises from the dual role of BHs in providing additional feedback and regulating the extent of stellar feedback. Consistent with previous research, we find that AGN feedback is most impactful when cosmic ray (CR) modelling is included, with CRs from any source significantly influencing dark matter profiles. Overall, our results highlight that the interplay between stellar feedback, BHs, and CRs produces a broad spectrum of dark matter density profiles, which align with observed correlations between rotation curve shapes and baryonic dominance. This underscores the importance of including the full range of baryonic processes in dwarf galaxy simulations to address the persistent `small-scale challenges' to the CDM paradigm. methods: numerical – galaxies: active – galaxies: dwarf – galaxies: evolution – galaxies: formation – dark matter § INTRODUCTION Dwarf galaxies are pivotal in the Lambda – Cold Dark Matter (ΛCDM) model, serving as the building blocks of hierarchical structure formation. Defined as galaxies with stellar masses below 3 × 10^9 (similar to the mass of the Large Magellanic Cloud), dwarf galaxies are crucial astrophysical probes in near-field cosmology and galaxy formation studies. Due to their shallow potential wells, dwarf galaxies are ideal testbeds for studying galactic feedback processes (`baryonic physics'). They are also highly dark-matter dominated, making them valuable for testing dark matter models. However, their sensitivity to both dark matter and baryonic processes has led to several controversies, collectively known as the `dwarf galaxy problems', which highlight discrepancies between observations and ΛCDM predictions from N-body simulations <cit.>. Key issues include the missing satellites problem, which notes a deficit in the observed number of Milky Way satellites compared to theoretical predictions <cit.>, and the too-big-to-fail problem, where the central masses of the largest observed satellites do not match those of the most massive simulated subhaloes <cit.>. Additionally, the cusp-core problem refers to the observation that dwarf galaxy dark matter halo profiles are less centrally peaked or `cuspy' than predicted by the CDM model with some dwarf galaxies instead exhibiting core-like profiles <cit.>. 
These discrepancies have led to explorations of alternative dark matter models, such as warm dark matter (WDM) <cit.>. WDM models, which include a free-streaming cut-off at dwarf galaxy scales, can address the missing satellite problem and reduce central halo densities, thus partially resolving the too-big-to-fail problem <cit.>. However, WDM struggles with the cusp-core problem, as the resulting cores are not large enough <cit.>. Another approach involves self-interacting dark matter (SIDM), where particles scatter elastically, creating cored density profiles and reducing halo ellipticity <cit.>. Constraints from strong lensing measurements of galaxy clusters suggest a velocity-dependent cross section that decreases from dwarf to cluster scales <cit.>. Additionally, fuzzy dark matter (FDM), which proposes extremely low-mass particles like ultralight axions, produces cored central profiles and suppresses small-scale structures in the dwarf regime due to their long de Broglie wavelengths <cit.>. However, there has also been a large body of theoretical work focusing on improving the baryonic physics modelling which may largely alleviate the `dwarf galaxy problems' observed in pure CDM simulations. In particular, star formation suppression from reionization could significantly decrease the number of bright dwarf satellites <cit.> thereby resolving the missing satellite problem. Furthermore, stellar feedback <cit.> may significantly decrease the central dark matter densities of dwarf galaxies. In particular, it has been demonstrated that cyclic supernova (SN) bursts play an important role in creating cores in dwarf galaxies <cit.> and that late-time star formation is crucial for maintaining these cores <cit.>. Notwithstanding this progress in alternative dark matter models and improved baryonic physics, some prominent discrepancies between simulations and observations still appear difficult to resolve without fine-tuning the models. In particular, theoretical models significantly struggle with explaining the observed diversity of dwarf rotation curves <cit.>. Due to the self-similar nature of structure formation in the ΛCDM universe, the full mass profile of a halo can be characterised by a single parameter such as the virial mass <cit.>. Hence dwarf galaxy rotation curve shapes are predicted to be almost identical for similar maximum circular velocities – and yet a great diversity is observed ranging from extremely cuspy profiles to large cores for the same inferred dark matter halo mass <cit.>. Galaxy formation simulations based on CDM have traditionally struggled to reproduce this observed diversity of dwarf rotation curves <cit.>. Some groups find that SIDM may be able to improve these discrepancies <cit.>, whilst others find no preference between SIDM and CDM simulations with stellar feedback <cit.>. The most fundamental issue for CDM simulations lies in obtaining a balance in the burstiness of star formation histories, with some simulation set-ups predominantly producing smooth (or quenched) star formation histories and cusps <cit.> whilst others predominantly produce bursty star formation histories and cores <cit.>. One potential solution could be additional feedback processes in the dwarf galaxy regime interacting with SNe and thereby modulating the star formation histories, leading to a natural diversity in rotation curves <cit.>. Recently, the role of active galactic nuclei (AGN) in resolving the remaining dwarf galaxy problems has garnered significant interest. 
Traditionally, AGN feedback was only considered in massive galaxies, with stellar feedback thought to dominate in low-mass galaxies. However, these models have been called into question by the growing observational samples of AGN in dwarfs, with detections spanning the whole electromagnetic spectrum from X-ray <cit.> to optical <cit.> to IR <cit.> and to radio observations <cit.>. Intriguingly, recent targeted surveys indicate high AGN occupation fractions, on the order of 10 per cent, in dwarf galaxies, suggesting there may not be a drastic decline of AGN activity in the dwarf regime <cit.>. These tantalising observations have motivated theorists to investigate this largely unexplored regime. Analytical models point towards favourable AGN energetics in the dwarf regime <cit.> and suggest that AGN activity may be able resolve all of the remaining `dwarf galaxy problems' <cit.>. However, hydrodynamical simulations have painted a more complex picture. AGN activity in dwarf galaxies may be significantly suppressed by stellar feedback evacuating gas from the central region <cit.> and black holes (BHs) may be wandering in dwarf galaxies due to their shallow potential wells <cit.> further decreasing their accretion efficiency. The modelling of BH growth is another crucial aspect as the fiducial Bondi model suppresses the growth of low-mass BHs compared to gas-supply-limited or torque-driven schemes due to its strong dependence on BH mass <cit.>. The cosmic environment also has a crucial influence on AGN activity in the dwarf regime with minor mergers triggering AGN episodes whilst long-term residence in dense environments is detrimental to BH growth <cit.>. Despite these caveats, various groups have identified physical regimes where AGN in dwarfs can accrete efficiently in accordance with observed samples. These dwarf AGN significantly influence their host galaxies by driving powerful outflows <cit.> and suppressing star formation <cit.>. Given this potentially significant impact by AGN on the baryon cycle in dwarf galaxies, could AGN feedback also impact the dark matter distributions in dwarfs as predicted by analytical models? The interaction between AGN feedback, star formation, and dark matter in dwarf galaxies has not been thoroughly explored, though some trends have been noted, such as AGN mildly suppressing central dark matter densities <cit.>, hinting at a possible role of AGN in driving cusp-to-core transformations in dwarfs. However, the central density suppressions by the AGN in these simulations were mostly minor and did not change the qualitative nature of the central profiles. This is likely due to the effective equation of state <cit.> employed for modelling the interstellar medium (ISM) in these simulations, which leads to smoother star formation histories alongside less bursty AGN feedback at low redshifts, often operating as `maintenance-mode' feedback <cit.>. Recently, <cit.> examined the role of BH feedback in core formation within the NIHAO simulations and found negligible effects in low-mass systems, largely due to inefficient BH growth in this mass regime, constrained by the Bondi accretion scheme. In contrast, they found significant impacts of BH feedback on core formation in more massive galaxies, aligning with earlier studies that show efficient AGN activity can create cores in these galaxies <cit.>. 
It is therefore timely to investigate whether efficient AGN activity can induce cusp-to-core transformations in dwarf galaxies and potentially account for the observed diversity in dwarf rotation curves. Clearly, within this context it is crucial to model the multi-phase nature of the ISM but also to consider the role of non-thermal components such as magnetic fields or cosmic rays (CRs) <cit.>. In particular, it has been shown that CRs may drive core formation <cit.> though their impact in the dwarf regime remains controversial as gas-rich mergers may re-establish cores in these systems <cit.>. What is more, it has been found that CRs may significantly boost the efficiency of AGN feedback <cit.> by suppressing cold gas accretion from the CGM due to enhanced CR pressure support. Therefore, it is important to also assess the role of CRs when investigating cusp-core transformations with AGN. The aim of this paper is to systematically investigate the impact of AGN feedback and CRs on dark matter profiles in dwarf galaxies and to assess whether incorporating these additional baryonic processes may be the key to resolving the diversity of rotation curve problem. To this end, we perform simulations with and without CRs and/or AGN of three different dwarf haloes with explicit multi-phase ISM physics within the fire-3 galaxy formation framework and contrast the resulting dark matter profiles, their cosmic evolution and their connection with different feedback processes. The remainder of this paper is structured as follows. In Section <ref>, we introduce our three dwarf simulation set-ups and describe the different variations of the fire-3 galaxy formation model explored in this work. The results of our analysis are presented in Section <ref>. Firstly, we focus on the stellar and BH properties of our simulated dwarfs, including visualisations of the stellar distributions in Section <ref>, comparisons with observed nearby low-mass galaxies (see Section <ref>) and observed BH masses in dwarfs (see Section <ref>). We then analyse how our different feedback set-ups influence the dark matter profiles at z=0 in Section <ref> and examine the driving forces behind the diversity of dark matter profiles in <ref>. We investigate the cosmic evolution of the central dark matter densities in Section <ref> and assess whether this could explain the observed diversity of rotation curves in Section <ref>. We discuss our results and caveats to our modelling in Section <ref> and conclude in Section <ref>. § METHODOLOGY §.§ General set-up Our simulation suite is based on a set of 3 initial conditions (ICs) of dwarf haloes selected from the fire ICs set, spanning a range of halo masses in the `classical dwarf' regime. We select two low-mass dwarfs: m10q and m10y, with z=0 halo masses of =8×10^9 and =1.4×10^10 , respectively. Our third halo, m10z, is an intermediate-mass dwarf with z=0 halo mass = 3.5×10^10 . All of these ICs were generated using the music IC generator <cit.> and have been thoroughly explored with the fire-2 model <cit.>. In the standard fire-2 runs, m10q has a quiescent growth history and is relatively isolated, m10y forms early and has a large core, and m10z is an ultra-diffuse galaxy. To simulate these systems, we use the gizmo hydrodynamics+gravity code, employing the mesh-free finite mass method <cit.> that tracks constant-mass gas resolution elements in a Lagrangian fashion. Our simulations include both baryons and dark matter, with mass resolutions of = 262 and = 1303 , respectively. 
The gravitational softening for the dark matter, stars and BH particles is set to ϵ_soft = 0.0625 ckpc h^-1, whereas the softening for the gas resolution elements is set adaptively. This allows us to easily resolve core formation as demonstrated in previous work <cit.>. §.§ Galaxy formation model For the galaxy formation physics, we use the newly updated fire-3 model <cit.> which tracks the state of the multi-phase ISM and multiple forms of stellar feedback, including feedback from SNee[We couple the SN momentum following <cit.> and do not yet include the updates to the momentum coupling from <cit.>. Though we note that these changes mainly affect high-mass galaxies at low resolution, whilst the impact on high-resolution dwarfs is very limited.] of Type Ia and II, stellar winds from OB and AGB stars as well as multi-wavelength photo-heating and radiation pressure. Star formation occurs in gas that is molecular, dense, self-gravitating, self-shielding, and Jeans-unstable. All simulations include magnetic fields, using the magneto-hydrodynamics methods from <cit.> and <cit.>. The fire-3 model also has new optional physics modules compared to the previous fire <cit.> and fire-2 <cit.> models. Two of the most important optional features are a novel sub-grid BH model, including seeding, accretion and feedback, as well as the inclusion of CR physics. To separate the impacts of CRs and BHs, we perform four simulations for each set of ICs, including runs without CRs or BHs, with both CRs and BHs and with only CRs or BHs, respectively. For the runs without BHs but with CRs, no BHs are seeded and the CRs only stem from SNe. For the runs without CRs and with BHs, we incorporate all AGN feedback channels except for CR injection and SNe do not inject CRs either. See Table <ref> for an overview of the simulation suite. The modelling of BHs and CRs remains highly uncertain. Several studies have investigated the impact of different modelling assumptions for CRs <cit.> and BHs <cit.> within the fire model. For our study, we base our BH model on the extensive study by <cit.> who identified a modelling space that allows for efficient BH feedback whilst still matching observational constraints, in particular with regards to star formation distributions. We summarise these BH modelling choices in Section <ref> and point the interested reader to <cit.> for more details and an in-depth parameter study of alternatives to our choices. For the CR modelling, we employ the numerically-efficient subgrid CR model introduced in <cit.> allowing us to explore several different IC set-ups to z=0 whilst minimising the computational overhead costs that would arise from explicit CR transport. We summarise the key characteristics of the CR subgrid model in Section <ref> and refer the interest reader to <cit.> for a more detailed overview of this approach. §.§.§ BH modelling For our BH modelling, we largely adopt the default fire-3 BH implementation <cit.>, except for the AGN feedback injection where we use the continuous wind mode (rather than particle spawning) and for the CR injection where we employ the subgrid model from <cit.>. Below we summarise the salient details as well as our modifications to the fire-3 BH model and refer the interested reader to <cit.> and <cit.> for a more in-depth description and parameter studies. BHs may be seeded into the simulation from any star-forming gas particles that satisfy physically-motivated properties for the formation of low-mass seeds. 
In particular, the gas particles must have extremely high densities (Σ_gas≳ 5000 pc^2) and low metallicities (Z_gas≲0.001) in order to spawn BH particles of mass =100. With a DM particle mass of M_DM∼ 10^3, we therefore do not fully resolve the dynamical friction forces that would drive the orbits of our seed BHs towards the centre of the galaxy. Hence we follow <cit.> and employ a subgrid prescription where the BHs are `drifted' towards the local binding energy extremum of the stars, dark matter and BH particles so that they are not artificially ejected <cit.>. Similarly, since we cannot resolve the dynamics of binary BHs, two BHs are merged if their interaction kernels and force softenings overlap and they are gravitationally bound to one another. Once a BH particle is present, it grows via accretion by draining gas from the nearest N∼256 gas cell neighbours. <cit.> parameterized the accretion rate as ≡η_accΩ, where η_acc is the accretion efficiency, is the gas mass within the BH kernel, and Ω = √(GM_tot/R) is the orbital frequency determined at the kernel size R, based on the total mass enclosed with R, M_tot. fire-3 modulates the BH accretion rate via a gas reservoir that represents an accretion disc from which the BH particle grows at a mass growth rate = M_α disc / t_dep, where M_α disc is the mass of the accretion disc and t_dep = 42 Myr (1 + / M_α disc)^0.4 is the depletion time. While <cit.> tested various distinct accretion models (i.e. variations in η_acc), their best-fitting models favoured accretion that was powered by gravitational torques. Physically, these torques result from asymmetries in a galaxy's gravitational potential, or from interactions between (a) the dark matter and gas, (b) the stellar component and gas, or (c) the self-interaction of the gas with itself via shocks and dissipation <cit.>. The accretion efficiency in the case of gravitational torque-dominated accretion comes from the work in <cit.>, η_acc = C ([ + M_d] / M_d)^1/6/1 + 3M_d,9^1/3(/M_d), where C is a resolution-dependent constant, is the BH mass, M_d is the mass of angular-momentum support material, and M_d,9 = M_d / 10^9. The model has been studied extensively in the context of cosmological zoom-in simulations in <cit.> and <cit.>. It is important to note that we choose a value of C = 0.1 in order to have reasonable BH masses within our three simulated dwarf galaxies. However, the parameter C cannot take into account all sub-grid processes that may occur below the resolution scale. One important piece of (partially) unresolved physics is the impact of stellar feedback on the accretion flow in the vicinity of the BH. <cit.> introduce a scaling factor f_acc (such that η_acc→ f_accη_acc) that depends on the local gravitational acceleration, a_g,eff = GM(<R)/R, as f_acc = a_g,eff/a_g,eff + a_g,crit, where a_g,crit≈ 10^-7 cm s^-2. Below a_g,crit (an effective surface density of Σ_tot∼ 3000 pc^2), a significant fraction of gas is expelled due to stellar feedback <cit.>. However, we emphasise that in addition to this stellar feedback effect on unresolved scales, our simulations explicitly capture the impact of stellar feedback on larger resolved scales (comparable to the size of the BH kernel). The accretion process itself drives powerful outflows, jets and radiation that influence the properties of the galaxies hosting the BHs, collectively coined BH feedback. The fire-3 BH feedback model includes three channels: radiative feedback, mechanical feedback, and CR feedback. 
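Before detailing these channels, the accretion parameterisation summarised above can be collected into a short numerical sketch (not part of the simulation code). The exponents on R are written here in their dimensionally consistent form (R^3 in the orbital frequency, R^2 in the effective acceleration), the example masses and kernel size are arbitrary, and the buffering of the inflow through the α-disc reservoir with t_dep = 42 Myr (1 + M_BH/M_α disc)^0.4 is omitted for brevity.

```python
import numpy as np
from astropy import units as u, constants as const

def eta_acc_torque(m_bh, m_d, C=0.1):
    """Torque-driven accretion efficiency; C = 0.1 is the normalisation adopted in this work."""
    ratio = ((m_bh + m_d) / m_d).decompose().value
    m_d9 = (m_d / (1e9 * u.Msun)).decompose().value
    return C * ratio**(1.0 / 6.0) / (1.0 + 3.0 * m_d9**(1.0 / 3.0) * (m_bh / m_d).decompose().value)

def f_acc_suppression(m_enc, R, a_crit=1e-7 * u.cm / u.s**2):
    """Sub-grid stellar-feedback suppression f_acc = a_g / (a_g + a_crit), with a_g = G M(<R) / R^2."""
    a_g = (const.G * m_enc / R**2).to(u.cm / u.s**2)
    return float(a_g / (a_g + a_crit))

def mdot_inflow(m_bh, m_gas, m_d, m_enc, R):
    """Rate at which kernel gas is drained towards the BH/alpha-disc sink particle."""
    omega = np.sqrt(const.G * m_enc / R**3)        # orbital frequency at the kernel size
    eta = eta_acc_torque(m_bh, m_d) * f_acc_suppression(m_enc, R)
    return (eta * m_gas * omega).to(u.Msun / u.yr)

# Arbitrary illustrative numbers, not taken from the simulations:
print(mdot_inflow(m_bh=1e4 * u.Msun, m_gas=1e5 * u.Msun,
                  m_d=1e6 * u.Msun, m_enc=2e6 * u.Msun, R=50 * u.pc))
```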
Each mode injects energy at a rate proportional to the BH accretion rate . As gas falls into a BH, it loses gravitational energy that is converted into radiation, with an efficiency that is dependent on the location of the innermost stable circular orbit of the BH. While the location of that orbit depends on the spin of the BH, the typical efficiency used in the literature is ϵ_r = 0.1, implying that 10% of the rest mass-energy accretion rate is converted into radiation. Given that we do not track the BH spin evolution in our simulations, we also adopt this fiducial radiative efficiency value in our work. In fire-3, that radiation is treated in the same manner as the stellar feedback — with multiband radiation transport and metallicity-dependent opacities using the LEBRON method <cit.>, except using a quasar template spectrum <cit.>. The radiation momentum flux depends on the luminosity absorbed (L_abs) in the gas as ṗ = L_abs / c. There are other outflows generated from the accretion process besides radiation-driven winds. For example, there may be jets or hydromagnetic winds from the accretion disc itself. The physics that drives these processes occurs on scales that are much below the resolution of our simulations and, therefore, need to be parameterized into a sub-grid model. The fire-3 BH model assumes that these winds have mass outflow rates proportional to the accretion rate. We assume a mass loading of one, such that Ṁ_out =. The kinetic energy rate in the wind itself depends on the wind velocity at launch, , such that Ė_mech = ^2 / 2. <cit.> recast the mechanical luminosity in terms of the new mechanical efficiency, η_mech≡^2 / (2c^2), such that Ė_mech = η_mech c^2. In this work, we choose = 10,000 (see e.g. , compiled in fig. 3 of ) and, hence, have η_mech = 5.6×10^-4. The fire-3 model also includes a CR model and, therefore, it also includes a BH feedback channel that accounts for relativistic massive particles accelerated by the AGN jets. We describe the CR propagation model in Section <ref>, but note here that the CRs are injected at a rate Ė_CR = η_CR c^2, where η_CR is the efficiency of CR feedback. In this work, we use an efficiency of η_CR = 0.01 which was identified as yielding realistic galaxy properties in the <cit.> parameter study. For the mechanical and radiative feedback, we use the solid angle weighting to inject the AGN feedback <cit.>. Following <cit.>, we inject the majority of the AGN energy within a relatively narrow opening angle by weighting the AGN wind energy and radiation pressure received by each gas cell as w(θ) = ϵ_jet (ϵ_jet + cos^2θ)/(ϵ_jet+1)(ϵ_jet+(1-cos^2θ)). We set ϵ_jet=0.35 which leads to an effective base opening angle of ∼ 7. The angle weighting is applied with respect to the total gas angular momentum accreted by the BH, J_gas, which is measured in the centre-of-mass frame of the BH – accretion disc sink particle. This allows for AGN energy injection without significantly disrupting the small-scale gas supply in the central region which favours repeated bursts and/or continuous AGN activity allowing us to probe the `physically interesting' regime for AGN influencing cusp to core transformations. As discussed in <cit.>, the large-scale collimation properties of the jet are not hugely sensitive to the small-scale opening angle since the feedback energy chooses the path of least resistance effectively recollimating even for large opening angles, also see <cit.>. 
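As a compact numerical summary of these feedback channels (again purely illustrative, with an arbitrary example accretion rate), the adopted efficiencies and the solid-angle weighting can be evaluated as follows; the sketch simply reproduces the expressions quoted above.

```python
import numpy as np

C_KMS = 299_792.458                                     # speed of light [km/s]

def mechanical_efficiency(v_wind_kms=1.0e4):
    """eta_mech = v_wind^2 / (2 c^2); ~5.6e-4 for the 10,000 km/s winds adopted here."""
    return v_wind_kms**2 / (2.0 * C_KMS**2)

def feedback_luminosities(mdot_msun_yr, eps_r=0.1, eta_cr=0.01, v_wind_kms=1.0e4):
    """Energy injection rates [erg/s] of the three AGN channels for a given accretion rate.

    Each efficiency multiplies Mdot c^2; eps_r = 0.1 and eta_cr = 0.01 are the fiducial values.
    """
    mdot_cgs = mdot_msun_yr * 1.989e33 / 3.156e7        # Msun/yr -> g/s
    c_cgs = 2.998e10                                    # cm/s
    return {"radiative": eps_r * mdot_cgs * c_cgs**2,
            "mechanical": mechanical_efficiency(v_wind_kms) * mdot_cgs * c_cgs**2,
            "cosmic_ray": eta_cr * mdot_cgs * c_cgs**2}

def wind_weight(theta, eps_jet=0.35):
    """Solid-angle weighting w(theta) used to deposit the wind energy and radiation pressure."""
    c2 = np.cos(theta) ** 2
    return eps_jet * (eps_jet + c2) / ((eps_jet + 1.0) * (eps_jet + (1.0 - c2)))

print(f"eta_mech = {mechanical_efficiency():.2e}")              # ~5.6e-4
print(feedback_luminosities(1e-5))                              # example Mdot = 1e-5 Msun/yr
print(wind_weight(np.array([0.0, np.pi / 4, np.pi / 2])))       # weight 1 on-axis, ~0.07 in the plane
```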
Long-range radiation transport and CRs are treated via the LEBRON approximation and injected isotropically <cit.>. §.§.§ CR modelling The fire-3 model includes a comprehensive CR model that explicitly follows the multi-species/multi-spectral CR dynamics <cit.> with the entire distribution function <cit.> integrated with the magnetohydrodynamics (MHD) solver. However, there are also optional, simplified models that make a series of approximations valid in our regime of interest — dwarf galaxies. In particular, we use the sub-grid CR model described in <cit.> and briefly describe our motivation for this model below. The primary assumption is that the CR energy spectrum is spatially constant, and that CRs above ≳ 1 GeV dominate the CR pressure <cit.>. This “single-bin” model only evolves the integrated CR energy density, drastically reducing the computational time while providing a reasonable approximation of the CR pressure, which is the dominant variable that impacts galaxy evolution. The study in <cit.> showed that higher values of the isotropic diffusion coefficient κ are necessary to explain γ-ray luminosities in dwarf and L^*-galaxies. Importantly, the values of the diffusion coefficient that best match the observations leads to most CRs escaping from galaxies with low gas density (i.e. dwarfs), while in starbursting galaxies with dense gas, the CR energy is lost almost entirely to collisions. Once the CRs escape the dense gas, the adiabatic gain/loss terms from the P dV work on the low density gas also becomes negligible. Given that several of the CR physical processes are seemingly subdominant in the dwarf regime, <cit.> made a series of further simplifying assumptions that are applicable in a `streaming+diffusion' limit. In particular, the reduced sub-grid model assumes that the CR energy equation is in steady state (∂_t e_cr→ 0) and that the magnetic fields below the resolution scale are isotropically `tangled'. The latter assumption allows the anisotropic diffusion tensor κ_|| to be replaced with an isotropic diffusion coefficient κ, averaged over the resolution scale (and spatio-temporally constant). The dominant loss term for CR energy is assumed to be a combination of hadronic/pionic, Coulomb, and ionization loses — therefore, neglecting losses from diffusive reacceleration <cit.>. Streaming contributions from advective and Alfvén velocities are also ignored, leading to a two parameter model depending on the isotropic diffusion rate κ and a streaming velocity v_κ. An approximate solution for the spatial distribution of CR energy density for such a model is presented in <cit.>, and depends on an integral of an exponentially-declining function of the loss function. The resultant CR model is solved using a LEBRON-type approximation <cit.>. The `optical depth' of the CRs is equivalent the loss function we describe above, and the CR energy at a given distance from a source (i.e. SNe or AGN) is solved using the gravity tree in GIZMO. In fire-3, 10 per cent of the SN energy is converted to CRs and, as we mention above, 10 per cent of the BH luminosity is in the form of CRs. In this work, we use the values from <cit.>; κ = 5×10^28 cm^2 s^-1 and v_κ = 20 km s^-1. § RESULTS §.§ Overview of the simulation suite We begin our analysis of the simulation suite by inspecting visualisations of our simulated dwarfs' stellar distributions. 
Figure <ref> shows mock u/g/r Hubble images of the stars in each dwarf set-up, made by ray tracing through the gas: for each pixel, we sum up the luminosities of the star particles in each band along the line of sight. Dust associated with the gas elements attenuates the light, assuming a Milky Way-like attenuation curve in each band and a fixed dust-to-metals ratio. The side length of each projection is 10 kpc. The three rows correspond to our three different ICs whilst the different columns represent our four physics variations based on the fire-3 physics modules: the fiducial fire-3 set-up (without CRs or AGN), fire-3 with AGN, fire-3 with CRs and fire-3 with both CRs and AGN. We note that there are several uncertainties associated with modelling AGN and CRs in galaxy formation simulations and here we only explore a small subset of this parameter space that was identified by <cit.> as producing realistic galaxy properties. Furthermore, our simulations likely represent an upper limit for the impact of CRs since we are using an effective subgrid model for the CR population <cit.> which only accounts for local losses and assumes a constant diffusion and effective streaming speed. See Section <ref> and <cit.> for more details. Keeping these caveats in mind, we note that both CRs and AGN have a significant impact on the stellar distributions. Generally, the addition of CRs leads to more compact stellar distributions, whereas the simulations without CRs also have an extended diffuse component. This is partly due to CRs suppressing star formation as the additional pressure support prevents gas accretion from the CGM onto the galaxy <cit.>. For the runs with AGN feedback, the picture becomes even more complex. The runs without CRs but with AGN appear qualitatively similar to their non-AGN counterparts if not slightly more diffuse. The runs with CR, however, display systematic differences with added AGN feedback. This is consistent with the analysis from <cit.> where they find that CR-driven feedback is among the most efficient AGN feedback channels. In all cases, the runs with AGN are more compact and redder, pointing towards older stellar populations. Overall, these visualisations demonstrates that both AGN and CRs have a significant non-linear impact on dwarf galaxy evolution. §.§ Stellar assembly Before assessing the impact of our baryonic modelling choices on dark matter distributions, it is vital to confirm that our feedback implementations produce realistic dwarf galaxy properties. To this end, we assess the stellar properties of our simulated dwarfs and compare these with observational constraints in Fig. <ref>, which shows the stellar mass – halo mass (SMHM) relation, galaxy colours and stellar half mass radii. Note that in this and later plots the symbol style indicates the model variations explored in our simulation suite, as indicated by the legend. §.§.§ SMHM relation Firstly, we examine if the simulated dwarf galaxies align with observational expectations for the SMHM relation. The left panel of Fig. <ref> shows our 12 dwarf zoom-in simulations in M_halo – M_stellar space at redshift z=0. For comparison, we plot several SMHM relations from the literature, including <cit.> and <cit.>. The low-mass end of the SMHM relation remains largely unconstrained by observations, evident from the significant differences between these models. 
The <cit.> models extend down to log(M_halo / ) = 10.0, and the <cit.> relation was shown to also be valid down to the dwarf regime, at least to log(M_halo / ) = 10.0, by <cit.>. We plot the extrapolated relations by <cit.> and <cit.> as dashed and dotted lines, respectively. Extrapolating SMHM relations below the minimum mass considered has pitfalls, especially regarding physical processes like reionization suppression, which are effective at low masses but not included in most of the models. The scatter is especially difficult to model in this regime since more bursty star formation histories at lower stellar masses are expected to increase the scatter whilst high-redshift quenching due to reionization suppression may act to substantially reduce the scatter in the dwarf regime <cit.>. Our simulations with the m10q ICs, the least massive halo, are firmly in the extrapolated regime, whilst m10y and m10z are at the lower end of validity for the <cit.> and <cit.> relations. Hence for our simulations comparing to SMHM relations constructed for low-mass galaxy samples, rather than the fiducial SMHM relations for the general galaxy population, is more instructive. We include SMHM relations derived for dwarf galaxies by <cit.>, <cit.>, and <cit.> as shaded regions. The differences in these relations arise due to different assumptions for surface-brightness incompleteness correction, see discussion in <cit.>. We also show various observed data points from the SPARC data base <cit.> to indicate the scatter in the observations. We note that the empirical relations as well as the SPARC data base use different definitions for the halo mass, either employing the virial mass as defined by the mass enclosed in a sphere whose mean density is 200 times the critical density of the Universe at z=0 or the halo mass based on the collapse of a spherical top-hat perturbation following <cit.>. We have checked for our simulated dwarfs that the difference between these two definitions is at most 0.1 dex, and plot the simulated halo masses as the average value between these two definitions. The simulated stellar mass is calculated within twice the stellar half mass radius. Given the significant uncertainty and scatter in SMHM relations, we do not adjust the stellar masses for different radial cut-offs or initial mass functions. From this comparison with the empirical relations and SPARC observations, none of our models can be ruled out based on their integrated stellar masses. All our simulations fall within the range of the empirical SMHM relations and are well within the scatter of the observed data. The simulation set-ups with CRs (indicated by squares and diamonds) are generally less efficient at forming stars than the simulations without CRs (indicated by stars and crosses). Most simulations fall towards the upper end of the predicted SMHM relations, with the simulation the only set-up resulting in an `undermassive' galaxy. Though we note that even this set-up is still within the scatter of the most-up-to-date dwarf galaxy SMHM relation by <cit.>. §.§.§ Galaxy colours Next, we compare the galaxy colours of our simulated dwarfs with observational data. The middle panel of Fig. <ref> presents the g - r colour versus stellar mass for our simulated dwarfs. 
For comparison, we display the colours of SDSS galaxies from the NASA Sloan Atlas (NSA) as a grey contours, the colours of Milky-Way dwarf satellites from <cit.> as grey squares and the ELVES dwarf satellites, a nearly volume-limited sample of Milky Way-like hosts in the Local Volume, from <cit.> as grey diamonds. We use the <cit.> stellar population synthesis model to calculate galaxy colours as a function of stellar age and metallicity, assuming a <cit.> IMF, including all star particles within twice the stellar half-mass radius. For the MW dwarf satellites, we convert the B - V colours using the transformation equations from <cit.>, noting that while these equations are for stars, they should approximate galaxies well unless strong emission lines are present. One important caveat is that our simulated galaxy colours do not account for dust attenuation. However, we checked that, as expected for dwarf galaxies, our simulations have relatively low metallicities and therefore dust attenuation is expected to be minimal. All simulated dwarfs are overall in good agreement with both the local dwarf galaxy constraints and the colour distribution of SDSS galaxies. Although we note that SDSS is very incomplete in the mass regime we consider here and the ELVES dwarfs were selected to be satellites while our dwarfs are field objects. The SDSS galaxies cluster around g-r∼0.35, whilst most of our dwarfs are offset towards somewhat redder colours, clustering around g-r∼0.45. Accounting for dust attenuation would likely make this offset more severe. However, this discrepancy is also inherently linked with completeness bounds in SDSS in this mass range, with blue galaxies likely overrepresented. Interestingly, the set-up has a similar stellar mass to its AGN counterpart , but is shifted towards bluer colours due to a late-time burst in star formation which is also visible in the stellar projections in Figure <ref>. Conversely, the set-up has a lower stellar mass and is redder than its AGN counterpart, indicating a complex interplay between CRs and AGN activity. For m10y, neither CRs nor the AGN have a strong impact on the integrated galaxy colours, though there are small, yet systematic, offsets with the CR runs being redder than the no CR runs, and the AGN runs being redder than their counterparts without AGN. Finally, for m10z, the set-up with neither CRs nor AGN yields the bluest galaxy colour, , whilst the three other set-ups all have relatively similar colours, despite spanning an order of magnitude in stellar mass, indicating that all three of these formed the majority of their stars early in cosmic history. §.§.§ Stellar sizes We now turn towards examining the stellar sizes which provide crucial clues to both the star formation and dynamical histories of our dwarfs. We plot the stellar half mass radii of our simulated dwarfs at z=0 as colour-coded symbols. For comparison, we also plot the observed effective radii of local dwarf galaxies as grey diamonds based on the ELVES catalogue from <cit.>, which also includes the Milky Way dwarf measurements from <cit.> as a subsample. To make this comparison more consistent, we calculate the simulated stellar half mass radii based on the averages of 2D projections along random sightlines. Again there is appreciable scatter in the observations and our simulated dwarfs fall well within this scatter. As we found from our visual inspection of the stellar distributions in Fig. <ref>, the stellar distributions are generally more compact with CRs. 
The impact of AGN feedback on the stellar sizes is more complex, leading to both more extended distributions (e.g., compare with ) or more compact distributions (e.g., compare with ). The differences between the AGN and no-AGN set-ups are more pronounced if CRs are present, in agreement with the findings from <cit.> that CRs enhance the efficacy of the AGN feedback in fire. Overall our stellar sizes are in good agreement with the observations, though the simulation is slightly more compact than local dwarfs at similar stellar masses. However, we note that the observed radii are based on stellar light whilst we are calculating the simulated radii based on the stellar mass distribution. Observational effects can add significant scatter leading to both more compact half-light radii <cit.> or more extended observed distributions <cit.>, with the stellar sizes also being significantly dependent on the wavelength <cit.>. §.§ BH assembly Having analysed the stellar properties, we turn to analyse the evolution of the BHs in our simulations in the context of observational constraints. Fig. <ref> shows our simulated dwarfs in stellar mass – BH mass space. The evolutionary tracks of the dwarf simulations from z=4 to z=0 are plotted as dotted lines, colour-coded by simulation setup. Note that the BHs are generally seeded around z ∼ 8–10 with the exact seeding time depending on the metallicity and density conditions. However, we here focus on later redshifts as by z=4, the most massive progenitor dwarf hosts a BH for all model variations so that we can track these alongside the stellar mass growth of the main progenitor. Integer redshifts are marked with the marker symbols for the respective simulation set-ups as indicated in the legend. We note that there is substantial uncertainty at the low-mass end of the BH – galaxy scaling relations due to the difficulty in reliably measuring the masses of intermediate-mass BHs (IMBHs) which have a much smaller gravitational sphere of influence and generally lower luminosities. Indeed there are no BH mass measurements for dwarf galaxies in the stellar mass range covered by our simulations, 10^6 ≲ M_stellar≲ 10^8. For more massive dwarfs there are only very limited measurements and we show these observed BH masses in massive dwarfs for reference, including virial BH mass estimates for active dwarf galaxies from <cit.> and <cit.> as dark-grey squares and stars, respectively, and measurements from <cit.> (based on dynamics, reverberation mapping or scaling from the stellar velocity dispersion in the specific case of TDE events that seem to have reliable determinations of the host galaxies) as dark-grey hexagons and triangles for mean values and upper limits, respectively. Error bars are omitted for clarity. We note that there are no dynamical nuclear IMBH mass measurements below M_stellar∼ 10^9 and, excluding upper limits, the lowest BH mass estimate based on TDEs stems from a host galaxy of stellar mass M_stellar = 2.5 × 10^8. Below this mass, we are firmly in the extrapolated regime and indicate this by greying out the local BH – galaxy scaling relations below this mass. To extend our comparison to lower-mass systems, we include IMBH candidates from globular clusters around the Milky Way and M31, compiled by <cit.>, plotted in light-grey. Some globular clusters, like ω Cen, are hypothesised to be remnants of dwarf galaxies disrupted by the Milky Way's tidal field <cit.>. 
The presence of IMBHs in globular clusters is controversial, with significant uncertainty in BH mass values. Thus, we show a range for each cluster, indicating the lowest and highest BH mass values reported in the literature, with crosses for mean values and triangles for lower/upper limits. In addition, we plot an updated firm lower limit for the BH mass in ω Cen which was recently determined by <cit.> and a new IMBH detection in M31's most massive globular cluster which may also originate from a stripped dwarf galaxy <cit.>, both indicated in olive-green. We also show several observed BH mass – stellar mass scaling relations from the literature to give an indication of the scatter in the extrapolated relations. The <cit.> relations are based on dynamical measurements by <cit.> as well as more recent dynamical measurements and upper limits in the low-mass regime. We plot the relation for the total galaxy sample as a solid black line, and the early-type and late-type galaxy relations as loosely and densely dashed black lines, respectively. The late-type relation has a significantly lower normalization, likely due to a weaker correlation between disc mass and BH mass <cit.>. We also show the <cit.> scaling relation, based on local AGN, including dwarfs, as a dashed-dotted black line. This relation is significantly shallower than that of <cit.>, leading to larger BH masses in the low-mass dwarf regime. For the classical dwarf mass range covered by our simulations, M_stellar≲ 10^8, we would expect a flattening of the scaling relations, with the transition mass dependent on the seeding mechanism <cit.>, which sets a lower limit for BH masses. The extrapolated BH – galaxy scaling relations may hence be considered lower limits for BH masses in the dwarf regime. We also plot the high-redshift scaling relation derived by <cit.> from overmassive BHs uncovered by JWST to indicate the high BH mass to stellar mass ratios that can be reached with optimal growth conditions in the early Universe (modulo observational uncertainties). To estimate upper limits, we show the nuclear star cluster (NSC) – galaxy scaling relations. NSCs and massive BHs often coexist in galaxies, with NSCs being the dominant central mass component in dwarfs[The scatter for the M_BH/M_NSC ratio at a given galaxy mass spans over three orders of magnitude, showing a trend of increasing ratio with galaxy mass, and BHs only start to dominate at stellar masses above a few times 10^10.]. <cit.> compile NSC mass measurements extending to low-mass dwarfs with stellar masses ≲ 10^6, making the relation valid for the entire z=0 galaxy stellar mass range considered here. Keeping these observational caveats in mind, we find that all of our dwarf AGN end up being overmassive compared to the extrapolated local BH – galaxy scaling relations. Most runs are offset by about an order of magnitude with the exception of which closely follows the early-type relation from <cit.> and is only minimally offset from the <cit.> relation. Again, we emphasise that these offsets are not necessarily unexpected given that we are in the heavily extrapolated regime for these relations. Indeed the offsets are not as severe as the intrinsic high-redshift offsets inferred from recent JWST observations by <cit.> and well within the bounds provided by the nuclear star cluster relation. For the redshift evolution, we plot the BH mass of the most massive progenitor for redshifts z=1,2,3,4. 
We note that in all cases the set-ups without CRs have more significant stellar and BH mass growth due to the weaker AGN feedback. The difference is especially significant at low redshifts. At late times, between z=1 and z=0, all simulations with CRs only have minimal stellar mass growth. For and , there is still significant BH growth during this time indicating that whilst the CR pressure has shut down star formation, the BH is still able to accrete. For BH growth is already shut down from z=3, however, between z=4 and z=3, this run similarly experiences a rapid BH growth phase whilst star formation is already quenched. We also investigated the role of BH mergers versus accretion in growing the BHs in our dwarf simulations and find that in all cases BH growth is predominantly driven by gas accretion. In particular, we verified that merging with other BH seeds represents a negligible growth channel with a maximum of two, five and nine seed BHs forming in the m10q, m10y and m10z simulations, respectively. With a seed mass of 100, this represents only a very small fraction of the final BH mass in all cases. §.§ Dark matter distributions Fig. <ref> shows the spherically averaged dark matter density profiles as a function of radius. The three panels display these profiles for our three sets of ICs, respectively. For these profiles, we centre the dark matter distributions on the minimum potential of the main halo[Note that we also investigated alternative centering methods, including the shrinking sphere method from <cit.>, however, we find that in practice the differences are minimal as the common halo centre definitions are in good agreement for our dwarfs.]. In each case, we plot the mean z=0 profiles, averaged over Δ T=500 Myr, for our four different galaxy formation model variations as colour-coded solid lines as indicated by the legend. We also performed additional dark-matter only (DMO) runs and we plot these as solid black lines. In each cases the temporal dispersion of these profiles is indicated by shaded regions which show the 1σ scatter in the densities over Δ T=500 Myr. We note that for the majority of the simulation runs, the central density is not very sensitive to the temporal bin width as the evolution of the central density is relatively steady over the last Gyr. The simulation, however, experiences significant fluctuations in the central density from ∼ 12–13 Gyr due to mergers with substructures, so for our z=0 profiles we choose the bin width to be small enough as to not be affected by these transient features. For all of the simulated profiles, we highlight the central dark matter density at r=150 pc, a characteristic radius for distinguishing cusps from cores in observations <cit.>, with colour-coded symbols. Furthermore, we indicate the gravitational softening length at r=62.5 pc and the Power radius[We note that the required particle number for resolving the central region depends on the radius and density contrast in question but generally ranges between N=1000–3000; here we follow <cit.> and set N=2000.] <cit.> as grey and colour-coded dashed vertical lines, respectively. Within these radii, the dark matter profiles may be impacted by numerical heating effects. In addition, we show the expected NFW profile based on the virial mass as grey shaded regions indicating the 2σ scatter from the halo mass – concentration relation <cit.> to indicate the typical variations in profiles due to varying halo concentrations. 
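For concreteness, profiles of this kind can be reproduced from the particle data with a few lines of post-processing. The sketch below (plain numpy, with an NFW reference curve and a crude fixed-particle-number proxy for the convergence radius) is illustrative only and is not the analysis pipeline used for the figures; the bin limits and the assumed critical density are example values.

```python
import numpy as np

def radial_density_profile(pos, mass, centre, rmin=0.05, rmax=50.0, nbins=40):
    """Spherically averaged density profile from particle data.

    pos: (N, 3) coordinates [kpc]; mass: (N,) masses [Msun]; centre: (3,) halo centre
    (e.g. the potential minimum). Returns shell mid-radii [kpc] and densities
    [Msun kpc^-3] in logarithmically spaced shells.
    """
    r = np.linalg.norm(np.asarray(pos) - np.asarray(centre), axis=1)
    edges = np.logspace(np.log10(rmin), np.log10(rmax), nbins + 1)
    m_shell, _ = np.histogram(r, bins=edges, weights=mass)
    vol_shell = 4.0 / 3.0 * np.pi * np.diff(edges**3)
    return np.sqrt(edges[:-1] * edges[1:]), m_shell / vol_shell

def nfw_density(r, m200, c, rho_crit=136.0):
    """NFW profile rho(r) = rho_s / [(r/r_s)(1 + r/r_s)^2] for a halo of mass M200 [Msun].

    rho_crit ~ 136 Msun kpc^-3 is the z=0 critical density for h ~ 0.7; r in kpc.
    """
    r200 = (3.0 * m200 / (4.0 * np.pi * 200.0 * rho_crit)) ** (1.0 / 3.0)
    r_s = r200 / c
    rho_s = (200.0 / 3.0) * rho_crit * c**3 / (np.log(1.0 + c) - c / (1.0 + c))
    x = np.asarray(r) / r_s
    return rho_s / (x * (1.0 + x) ** 2)

def convergence_radius_proxy(r, n=2000):
    """Radius enclosing n particles -- a crude stand-in; the Power et al. criterion
    proper is based on the local two-body relaxation time."""
    r = np.sort(np.asarray(r))
    return r[n - 1] if r.size >= n else np.nan
```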
The DMO runs are in good agreement with the NFW expectations, with m10y having slightly higher and m10z having slightly lower concentration than the mean relation. First of all, we find that the introduction of AGN and CRs introduces a significant amount of scatter. The simulations with the default fire-3 model, without CRs or AGN, all lead to distinct cores at r=150 pc. The addition of CRs and/or AGN, however, leads to a variety of outcomes from stronger cores to weakened cores to cusps at the same inner radius[We note that the central densities at r=150 pc are within the Power radius in most cases, however, for all cases, the cusp versus core categorisations based on the 2σ scatter around NFW expectations are unchanged between r=150 pc and the Power radius, demonstrating that the qualitative results are robust.]. For simulations without CRs, the addition of an AGN enhances the core size and further suppresses the central density for all three cases, though it should be noted that the magnitude of the additional density suppression varies and is significantly weaker for m10y at r=150 pc. For simulations with CRs but without AGN, the profiles become cuspier than their no-CR counterparts for m10q and m10y, whilst for m10z there is a small suppression of the central densities with CRs. The addition of an AGN to these CR-based set-ups leads to a further increase of the central densities for m10q, whilst the central densities for m10y are minimally decreased. For m10z there is a much stronger impact, with the central density notably enhanced, yet still in the core regime. Overall, there is a variety of outcomes when introducing CRs and/or AGN to dwarf simulations which mostly do not follow clear trends due to the dual role of these processes as an additional source of feedback that may either suppress star formation (and therefore suppress core formation) or enhance core formation via the additional energy input. Hence great care has to be taken to untangle the contribution from these additional baryonic processes and their interplay with star formation. A further consideration is that small perturbations in hydrodynamical simulations (e.g. floating-point round-off or random number generators) can lead to chaotic-like behaviour with the small perturbations growing over time and manifesting as macroscopic differences in galaxy properties <cit.>. However, the variability we observe here is significantly larger than the variability expected from this `numerical butterfly effect' which typically leads to central density fluctuations of at most 10 to 15 per cent <cit.> whereas we observe variations ranging from 70 to 90 per cent for our different sets of ICs and galaxy formation physics configurations (see Table <ref>). Furthermore, as we show in the next Section, these central density trends are relatively steady as a function of cosmic time which would not be expected if the variability resulted from the amplification of small numerical perturbations. Overall, this then points to a physical origin of the diversity of dark matter profiles that we observe in our simulations. §.§ Driving forces of core formation Our analysis from Section <ref> indicates that our different feedback physics configurations significantly affect the central dark matter distributions of dwarf galaxies. In this Section, we explore the driving forces behind these trends by contrasting the cosmic evolution of the central dark matter densities with SN and AGN energy injection histories. Fig.
<ref> shows the central dark matter densities at r=500 pc as a function of cosmic time. We choose this radius as it is small enough to showcase the differences between the models explored whilst being large enough to ensure we are not being dominated by numerical heating effects, i.e. we choose a radius that is at least the size of the Power radius for all set-ups explored. Again we average the dark matter densities over time bins of Δ T=500 Myr and indicate the mean densities as solid lines and the standard deviation in each bin by the shaded regions. The m10q set-up is shown in the first panel. Reflecting its early formation history, the dark matter density trends we found at z=0 in Fig. <ref> have persisted for the past 6 Gyr. The central densities are relatively steady with only showing significant evolution after redshift z=1 with the central density of this run showing a slow decline. Similarly for m10y (middle panel), the z=0 trends have persisted for the past ∼5 Gyr. Both of the CR-based runs, and , do not show any significant evolution at low redshifts, whilst the runs without CRs, and show a steady decline from z∼1 onwards. Note that the density fluctuations for these runs are also much stronger, reflecting the potential perturbations characteristic of core formation <cit.>. Focusing on early times between redshifts z=4 and z=1, we find that most of the runs are quite similar, except for which exhibits a strong core during that period. We also note that at t ∼ 5 Gyr, the m10y system experiences a major merger. For the DMO run this significantly raises the central density, and the strong early core for the set-up is erased by this merger. For the m10z system (right panel), there is significantly more late-time evolution as well as stronger fluctuations as reflected in its more irregular morphology, see Fig. <ref>. Indeed, the general central density trends we observe at z=0 have only been in place for the past ∼2 Gyr and the run experiences strong fluctuations for most of this period. We have verified that these potential fluctuations are driven by late-time mergers with substructures that pass through the centre and are independent of the slice width chosen or the method used to determine the halo centre when calculating the central density. Between redshifts z=2 and z=0.5, the set-ups , , and , all show a relatively similar redshift evolution of their central densities. The set-up, however, closely follows the DMO run and even surpasses it from t=7–9 Gyr, indicating strong baryonic cooling at the centre. At z∼ 0.5 a merger induces density fluctuations for all of our m10z set-ups, but most prominently for the CR-based runs and . After this merger, the two CR-based set-ups have relatively steady central densities. On the other hand, the runs without CRs, and experience significant density suppressions at late times. In the following two subsections, we turn to investigate the origin of these cosmic evolution patterns in the dark matter central densities, examining SN and AGN energy injection as a function of time. §.§.§ SN feedback Fig. <ref> shows the injected SN energy as a function of cosmic time with the instantaneous energy shown as dashed lines and the cumulative energy shown as solid lines, both binned over 250 Myr for clarity. The three columns correspond to our three sets of ICs. The no-AGN and AGN runs are shown in the top and bottom row, respectively. 
We focus on the SN energy injected in the central region of the halo within a fixed 1 kpc aperture (in physical coordinates). This choice is motivated by the fiducial inner radius considered by observers for determining the shapes of dwarf galaxy rotation curves (see Section <ref>) and it also corresponds to the scale of the half-mass radius for most of our systems (see Fig. <ref>). The SN energy injection rates are recalculated from the stellar ages and metallicities based on Starburst99 tables for a <cit.> IMF. We focus here on the SN II rates, which constitute the majority of SN events <cit.>, for simplicity. For reference, we also indicate the z=0 binding energy of the respective haloes as dotted lines, which we estimate as f_bar M_vir V_vir^2, where f_bar is the universal baryon fraction and V_vir is the virial velocity of the halo <cit.>. Firstly, we note that both CRs and AGN have a significant impact on star formation, and therefore the SN energy histories. As discussed in previous works <cit.>, due to their shallow potential wells, dwarf systems are very sensitive to baryonic feedback processes so that even small changes can significantly affect star formation histories. This highly variable behaviour is quite clearly illustrated by the m10q set-up in the first column. With CRs, the system experiences a more powerful initial burst which is powerful enough to quench the dwarf system from z ∼ 6. Hence both and end up with cuspy profiles, even though the cumulative injected SN energy for slightly exceeds the z=0 binding energy, as the absence of late-time star formation means that a cusp can be re-established quite easily after the initial burst. Note that has a slightly reduced central density at z=0 compared to due to low-level late-time star formation activity; however, these bursts are not powerful enough to induce a core. Without CRs, the initial star formation burst is less powerful, allowing for continued star formation activity. For the set-up, there are a few more powerful bursts at intermediate redshifts, which then quench the system at much later times, from z∼ 1.5. For the set-up, the AGN regulation means that the intermediate redshift star formation bursts are less intense so that star formation can continue until z=0. Both and end up with cumulative SN energy almost an order of magnitude above the z=0 binding energy leading to cored profiles; however, has a more extreme core due to the continued star formation activity <cit.>. For m10y, there is a similar picture albeit not as extreme as for the m10q halo. With CRs, high-redshift star formation is somewhat burstier, and the more powerful bursts then also mean that most of the SN activity is restricted to z ≳ 1. This is still sufficient to (cumulatively) exceed the z=0 binding energy and induce a core for both of the CR-based runs; however, the cores are significantly weaker and less extended than for the runs without CRs. Both and have continued star formation activity until z=0, higher cumulative injected SN energies, and therefore much stronger cores. Adding in AGN only has a small impact for both and compared to their no-AGN counterparts. Again the bursts at high redshift are somewhat weaker, allowing for more SN activity at lower redshifts and therefore somewhat stronger cores. However, as discussed in Sections <ref> and <ref>, the differences induced by AGN feedback are only minimal for the m10y profiles at z=0. For m10z, we obtain the most complex interplay between star formation, AGN activity and CRs.
Here, the set-ups without CRs have more powerful initial star formation bursts (underlining that the initial intensity of the burst is not necessarily directly dictated by the galaxy formation physics configuration). Overall, , , and all experience episodes of star formation for most of cosmic time, although for , those are shifted to lower redshifts, whilst for the simulation SN activity is more concentrated at higher redshifts. experiences bursty star formation throughout cosmic time, leading to the highest cumulative injected SN energy and the most pronounced core. Likewise, the cumulative SN energy in the and set-ups also exceeds the z=0 binding energy and both of these result in cored profiles. Interestingly, however, star formation in the set-up is significantly suppressed, and powerful bursts are mostly restricted to high redshifts (z>2), so that the total injected SN energy remains well below the z=0 binding energy by a factor of three. This indicates that for the set-up, the core formation may be driven by the AGN. This analysis underlines that CRs and AGN primarily affect cusp-to-core transformation, and the diversity of dark matter profiles, by regulating star formation histories in our simulations. The only set-up where the AGN feedback is actively driving core formation is , where star formation is too heavily suppressed to drive core formation. In the cases of m10q and m10y, the AGN is not efficient at suppressing star formation, leading to similar or higher levels of SN activity compared to the equivalent no-AGN set-ups. However, in these scenarios, the AGN may still enhance core formation; we examine this possibility in the next section. §.§.§ AGN feedback In the fire-3 model, AGN feedback is injected in several channels including mechanical feedback, radiative feedback and CRs (see Section <ref> for details). Comparing this multi-channel AGN energy to the SN energy that is predominantly injected mechanically is not straightforward, especially since we are most concerned with feedback-induced gravitational potential perturbations. Therefore, we need to assess how the AGN energy will couple dynamically. Fully tracking and analysing coupling efficiencies of different AGN feedback modes is beyond the scope of this paper, so we make some simplifying assumptions based on previous literature. For the AGN, only a very small fraction of the injected energy is directly coupled in the form of mechanical winds, with only ∼0.6 per cent of the AGN luminosity injected as wind energy (see Section <ref>). For the radiative feedback, on the other hand, the full luminosity is injected into the simulation domain using the LEBRON method <cit.> for radiation transport. This radiation transport then predicts the radiation pressure forces, i.e. momentum flux, from the absorbed photon luminosity. Overall only a small fraction of the luminosity will therefore be energetically coupled to the gas as a radiatively driven wind. The LEBRON method has been most thoroughly tested for stellar feedback. Whilst AGN radiation has a different spectrum, we can still draw some tentative conclusions from these studies for our estimates. For the stellar feedback, only 0.4 L/c of the photon momentum couples to the gas in dwarfs, whilst the remainder escapes the galaxy, and the vast majority of this photon coupling occurs in the single-scattering regime <cit.>. We here assume that a similar fraction of the luminosity would couple to the gas for radiative AGN feedback. 
Following <cit.>, a fraction of approximately 5 per cent of that available AGN luminosity is then coupled as mechanical energy in the form of a radiatively driven wind in the single scattering regime, yielding an overall mechanical energy efficiency of 2 per cent compared to the AGN luminosity for the radiative feedback channel. For the AGN-driven CR feedback, 1 per cent of the accreted rest-mass energy, i.e. 10 per cent of the luminosity with the assumed radiative efficiency of 10 per cent, is injected as CRs. For CR transport, we employ the subgrid model from <cit.>, which interpolates between the limit in which CRs escape the galaxies with negligible losses and that in which CRs lose most of their energy catastrophically before escaping, using a formalism akin to the LEBRON method. For dwarf galaxies, we are generally far from the proton calorimetric limit where most energy is lost before CRs escape the dense gas, and we assume that (as with the radiation) 40 per cent of the CRs couple before escaping – which likely provides an upper limit. This then yields an overall mechanical energy efficiency of 4 per cent compared to the AGN luminosity for the CR feedback channel. Keeping in mind that these assumed efficiencies may be overestimating the actual coupled AGN energy, we show the `effective' injected AGN energy based on our assumed efficiencies for the different channels as a function of cosmic time in Fig. <ref>. Again the cumulative energy is shown as the solid lines and the instantaneous energy is shown as dashed lines binned over 250 Myr for clarity. The AGN energy is based on the accretion rate of the most massive BH in the main halo at z=0 and its most massive progenitor halo at higher redshifts. The three different panels represent our three different ICs and the colour coding of the lines indicates the different physics configurations as listed in the legends. Overall, we note that AGN energy injection is more steady than SN energy injection, with the AGN in most set-ups being active continuously, except for where the AGN activity, just like the star formation, is quenched at early times and is then mostly quiescent at late times apart from a relatively weak late-time burst. In all cases, the injected AGN energy matches or exceeds the binding energy, though we caution that due to the extremely uncertain mechanical coupling efficiencies this has to be taken with a grain of salt and the AGN energy that is actually coupled mechanically to the gas may be lower. Furthermore, we note that whilst the SN feedback is distributed throughout the galaxy, the AGN energy injection is only focussed at the centre, with radiation and mechanical winds injected in a collimated fashion, and therefore AGN feedback is likely less efficient than SN feedback in affecting the ISM at a fixed energy injection rate. Indeed, past simulations have demonstrated that a significant fraction of the AGN wind energy escapes the ISM once the winds have opened up a central cavity <cit.>. For m10q, in the CR-based run , the AGN does not have a significant impact on the central densities since the late-time activity, which is crucial for core formation, is mostly suppressed. Without CRs, for the set-up, there is a sharp AGN burst at t ∼ 8 Gyr which is also associated with pronounced central dark matter density fluctuations and followed by a steady decline in the central dark matter density suggesting that the AGN may be contributing to the formation of the core with this set-up.
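To make this bookkeeping explicit, the following sketch accumulates the `effective' mechanical AGN energy from an accretion-rate history using the channel efficiencies quoted above (0.6 per cent for winds, 2 per cent for radiation and 4 per cent for CRs relative to the AGN luminosity, with an assumed radiative efficiency of 10 per cent) and compares it to the f_bar M_vir V_vir^2 binding-energy estimate; the variable names and input history are placeholders rather than outputs of the simulation code.

import numpy as np

C_LIGHT = 2.998e10   # cm/s
MSUN_G = 1.989e33    # g
YR_S = 3.156e7       # s

def effective_agn_energy(mdot_msun_yr, dt_yr, eps_rad=0.1,
                         f_wind=0.006, f_radpress=0.02, f_cr=0.04):
    """Cumulative 'effective' coupled AGN energy [erg] from a binned accretion-rate history.

    mdot_msun_yr : BH accretion rate per time bin [Msun/yr]
    dt_yr        : width of each time bin [yr]
    eps_rad      : assumed radiative efficiency, L_AGN = eps_rad * Mdot * c^2
    f_*          : assumed coupled fractions of L_AGN for winds, radiation pressure and CRs
    """
    mdot_cgs = np.asarray(mdot_msun_yr) * MSUN_G / YR_S
    l_agn = eps_rad * mdot_cgs * C_LIGHT**2              # erg/s
    e_rate = (f_wind + f_radpress + f_cr) * l_agn        # erg/s that couples mechanically
    return np.cumsum(e_rate * dt_yr * YR_S)              # erg

def binding_energy(m_vir_msun, v_vir_kms, f_bar=0.16):
    """Rough halo binding-energy estimate f_bar * M_vir * V_vir^2 [erg]."""
    return f_bar * m_vir_msun * MSUN_G * (v_vir_kms * 1e5) ** 2

# e_cum = effective_agn_energy(mdot_history, dt_yr=2.5e8)   # 250 Myr bins
# print(e_cum[-1] / binding_energy(1e10, 30.0))             # illustrative halo values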
For m10y, the set-up has a strong AGN burst at t∼ 5.5 Gyr, which is again associated with strong central density fluctuations indicating gravitational perturbations as well as a drop in the star formation rate. With the CRs, the AGN energy injection is again significantly suppressed (though not completely shut down like in the set-up) and the relatively low levels of activity do not have a significant impact on core formation. For m10z, contrary to the other two sets of ICs, the energy injected from the CR-based run, , exceeds the AGN energy injected with the equivalent no-CR set-up . This is mostly driven by a significant burst in the set-up at t∼ 9 Gyr, which takes the overall injected AGN energy budget above the binding energy of the halo and is followed by strong density fluctuations. Most notably, even though central star formation is completely suppressed from t∼ 10 Gyr in the simulation, the AGN activity persists at relatively high levels, maintaining the core in the absence of SN-driven winds. This is driven by two factors in our AGN modelling. Firstly, at late times the ratio between BH mass and subgrid accretion disc mass in the run is quite large (M_BH/M_d∼ 10^4), which leads to depletion timescales of the order of ∼ 1 Gyr (also see Section <ref>), allowing for persistent AGN activity even after the inflows onto the BH – accretion disc particle have subsided. Secondly, whilst the gas density is significantly decreased in the central region (and therefore the gas is not star-forming), the BH is still able to accrete at low rates from this supply since there is no explicit density criterion for the BH accretion. This likely represents an optimistic estimate of the BH accretion rate and we discuss the implications and caveats of these modelling choices in more detail in Section <ref>. For the set-up the AGN has a significant burst at t∼12 Gyr which again is associated with strong central density fluctuations suggesting that here the AGN may also enhance the formation of the core. Overall, we note for most of our AGN set-ups, including , , , , and , the equivalent no-AGN set-up already has a SN-induced core. For all of these cases, apart from , the addition of an AGN further decreases the central densities enhancing the existing core. However, it is difficult to disentangle whether this additional decrease is driven directly by AGN energy injection or indirectly by suppressing high-redshift star formation and `delaying' SN activity to later times favouring core formation. Indeed, it is likely that both of these factors contribute to the enhancement of cores. The set-up is our only simulation where the AGN is efficient in globally suppressing star formation compared to its no-AGN counterpart (also see Fig. <ref>). Here, there is no SN activity for the last ∼3.5 Gyr from z∼ 0.5 and only weak SN activity between z=2 and z=0.5, yet the core is maintained. Indeed, the central density significantly decreases from z=2 to z=1 and is then maintained at low levels despite the late-time merger at z ∼ 0.5, providing strong evidence for AGN-driven core formation. Overall, we conclude that AGN feedback can induce significant scatter in dark matter density profiles, predominantly indirectly by regulating star formation histories but also in some cases directly by driving powerful potential fluctuations. 
§.§ Cosmic evolution of central dark matter densities in observational context We further analyse the origin of the diversity of our simulated dark matter profiles by examining the central densities in the context of observations. Fig. <ref> shows the central dark matter densities at r=150 pc as a function of virial mass (left panel), stellar mass to virial mass ratio (middle panel) and BH mass to virial mass ratio (right panel). The dotted lines indicate the (binned) redshift evolution of our simulated dwarfs from z=3 to z=0, again binned over 500 Myr. For clarity we only highlight the z=0 data point with the marker symbol corresponding to our different physics configurations, as indicated by the legend. In the left panel we also show the expected central densities for an NFW profile and a cored profile following the coreNFW profile from <cit.> as grey lines with the 2σ concentration scatter indicated by dark-grey and light-grey shaded regions, respectively. Furthermore, we show the central densities of 16 observed nearby dwarf galaxies, including classical dwarf spheroidals and gas-rich dwarf irregulars from <cit.>. The first panel demonstrates that our simulations are generally in good agreement with the observed central densities of nearby dwarfs, keeping in mind that the sample sizes are limited in both cases (16 observed dwarfs versus 12 simulated dwarfs). Two of the m10z simulations, and , lead to very strong cores, with the central densities being more severely suppressed than in the observations. Our simulations do not reproduce the observed strong cusps with central densities above the NFW mean relation. However, we note that the vast majority of our dwarfs have late star formation histories, with only the CR set-ups for m10q and m10y resulting in an early truncation of star formation, as defined by <cit.>, with no activity for the past 6 Gyr. These dwarfs with early truncation are associated with cuspy profiles, hinting that we would require more early-forming set-ups to reproduce these strong cusps. The middle panel shows the simulated and observed central densities as a function of stellar mass to virial mass ratio M_stellar/M_vir. As discussed in <cit.>, the observed dwarfs show a clear trend with higher M_stellar/M_vir being associated with lower central dark matter densities. Moreover, in the observations, as well as in previous theoretical studies, it is found that no stellar-feedback-driven cores form below a critical ratio of (M_stellar/M_vir)_crit = 5 × 10^-4 since the integrated SN energy is insufficient to drive the required potential fluctuations <cit.>. We also find a strong correlation between M_stellar/M_vir and central density suppression for our simulated dwarfs. Indeed, the correlation is significantly tighter than for the observed dwarfs likely due to increased scatter induced by measurement uncertainties in the observations. Looking at the redshift evolution tracks of our simulated dwarfs, we find that most systems also follow these trends in a temporal sense with two notable exceptions. The CR-based runs with the m10q ICs show a slight decrease as the M_stellar/M_vir decreases. However, this is mostly a by-product of these runs being quenched at very high redshifts so that the central dark matter density evolution is dominated by the general assembly history of the halo with the virial mass increasing and the halo concentration decreasing (whilst the stellar mass remains constant) at late times.
The second exception to the M_stellar/M_vir trend, however, is more notable: for the set-up the central densities as a function of M_stellar/M_vir are steadily decreasing so that a core forms despite the z=0 M_stellar/M_vir ratio being below the critical ratio identified by <cit.> and <cit.>. This provides further evidence that this run could be experiencing AGN-driven core formation. We further examine this possibility in the third panel where we plot the simulated central densities (as well as their binned redshift evolution from z=3) for the AGN-based simulation set-ups as a function of BH mass to virial mass ratio M_BH/M_vir. For the runs without CRs, , , and , the central densities again steadily decrease as a function of M_BH/M_vir. However, we caution that this does not necessarily indicate that the BHs are contributing to core formation due to the strong correlation between M_BH and M_stellar (see Fig. <ref> for scaling relations of our simulated dwarfs). The set-up is the only simulation where the central density ratio decreases as M_BH/M_vir decreases, as with the stellar mass to halo mass ratio, and this mainly reflects that the central density is unaffected by baryonic processes at late times. The set-up has a very shallow gradient again reflective of the relatively quiescent late-time evolution. Finally, the set-up exhibits a significant decrease in central densities as the M_BH/M_vir increases (opposite trend from M_stellar/M_vir) providing further support for the BH-driven core formation scenario. §.§ The diversity of rotation curves As discussed in Section <ref>, one of the most prominent remaining dwarf galaxy `problems' pertains to the observed diversity of dwarf galaxy rotation curves. From our analysis in the previous sections, we have found that AGN and CRs may significantly increase the scatter in dark matter density profiles and hence may contribute to resolving this on-going controversy. Ultimately, to make statistical predictions for dwarf galaxy rotation curve shapes, we would need a much larger simulation sample spanning a larger range of environments and halo masses. Nevertheless, we can use our simulation suite to examine the broad trends for rotation curve shapes and assess the potential for more sophisticated galaxy formation models including the impact of AGN and CRs at the low-mass end to resolve the long-standing diversity of rotation curves problem. In Fig. <ref>, we present circular velocity curves for all our dwarf simulations. As for the dark matter profiles, we average the velocity curves over Δ T=500 Myr and indicate the 1σ scatter with the shaded regions. Following the methodology presented in <cit.>, we plot circular velocity profiles for the total matter content of our dwarfs (dark matter, gas, stars, and BHs) as solid lines and for baryons (gas, stars, and BHs) as dotted-dashed lines. The total circular velocity curves for each set of ICs converge between 1 to 10 kpc, with the extent of the matter deficits in the central regions reflecting the core sizes from Fig. <ref>. For the baryonic circular velocity curves, however, there are significant differences. We also examine the gas and stellar distributions (not shown here) and find that both of these are significantly modified for galaxies with pronounced cores, which tend to have more extended stellar distributions (also see Figure <ref>). For the cored dwarfs this then leads to slowly rising rotation curves for both dark matter and stars. 
Dwarfs that are quiescent at z=0, and generally have cuspier profiles, are more gas poor (especially at the centre), leading to overall lower baryonic circular velocities in the central region. Following <cit.>, we quantify the cusp versus core nature of the profiles by calculating characteristic velocity ratios, comparing the inner `fiducial' rotation velocity V_fid with the asymptotic flat rotation velocity V_max. The fiducial rotation velocity is measured at the fiducial inner radius defined as: r_fid = (V_max/35 km s^-1) kpc. The rotation curve shape parameter η_rot can then be defined as η_rot = V_fid/V_max = V(r_fid)/V_max. Rapidly rising rotation curves have η_rot≲ 1, with the NFW profile yielding a rotation curve shape parameter of η_rot∼ 0.65. Cored profiles have η_rot≪ 1; see <cit.> for a detailed discussion of this parameter. As in <cit.>, we also inspect a second velocity ratio – the baryonic importance parameter, which is defined as η_bar = (V_b,fid/V_fid)^2 = (V_b(r_fid)/V(r_fid))^2, where V_b,fid is the contribution from baryons (gas, stars, BHs) to the circular velocity curve. Under the assumption of spherical symmetry, η_bar represents the mass fraction of baryons within the fiducial radius. The two velocity ratios η_rot and η_bar can be used to plot the `cuspiness' of the rotation curves against the contribution from baryons in the inner region of the galaxy. We plot this distribution in the rightmost panel in Fig. <ref>. For comparison, we also show observed data as collated by <cit.>. We restrict the observed galaxy sample to the classical dwarf regime with V_max≤ 60 km s^-1 to compare this more easily and directly with our simulated dwarfs. The observational data stems mainly from the SPARC sample and, in all cases, the rotation curves were inferred from high-resolution HI and/or Hα velocity fields. Given that all of the observational rotation curves are inferred from the baryonic contributions, the dwarfs with extremely low baryonic contributions are missing from the observed samples. Similarly to <cit.>, we therefore show our low-η_bar simulated dwarfs at η_bar = 0.05 for clarity. Focussing first on the observations, we find that, as discussed in <cit.>, the baryonic importance parameter appears to be a poor predictor of rotation curve shapes with a significant amount of scatter in the data. Specifically, this plot highlights the appearance of profiles ranging from cusps to weak cores at low baryonic dominance and cusps to strong cores at high baryonic dominance in the observations. This large amount of scatter may naively seem at odds with the theory that baryons are driving cusp to core transformations and has been extremely difficult to reproduce with simulations. Despite the high scatter, there are nevertheless some interesting observational trends where strong cores in dwarfs tend to be associated with higher baryonic importance and strong cusps tend to be associated with lower baryonic importance. However, both of these aspects seem counter-intuitive if cores are associated with strong outflows and cusps with baryons cooling at the centre. Our simulations cover almost all observed scenarios: cusps and weak cores at low baryonic dominance (e.g., and ) and cusps and strong cores at high baryonic dominance (e.g., and ). The only missing scenario is the `ultra cuspy' dwarfs at both low and high baryonic dominance, which might be influenced by environmental factors, suggesting our sample might be too small to cover the required range of assembly histories.
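For reference, the two velocity ratios defined above can be evaluated from tabulated circular velocity curves as in the following minimal sketch; the input arrays are placeholders and the linear interpolation at r_fid is an assumption for illustration, not necessarily the exact procedure used for the figure.

import numpy as np

def shape_parameters(r_kpc, v_tot_kms, v_bar_kms):
    """Rotation curve shape parameter eta_rot and baryonic importance parameter eta_bar.

    r_kpc     : radii of the circular velocity curve [kpc]
    v_tot_kms : total circular velocity (DM + gas + stars + BH) [km/s]
    v_bar_kms : baryonic contribution to the circular velocity [km/s]
    """
    v_max = np.max(v_tot_kms)
    r_fid = v_max / 35.0                        # r_fid = (V_max / 35 km/s) kpc
    v_fid = np.interp(r_fid, r_kpc, v_tot_kms)
    v_bar_fid = np.interp(r_fid, r_kpc, v_bar_kms)
    eta_rot = v_fid / v_max                     # ~0.65 for NFW, << 1 for strong cores
    eta_bar = (v_bar_fid / v_fid) ** 2          # ~ baryonic mass fraction within r_fid
    return eta_rot, eta_bar

# Example usage with a toy curve:
# r = np.linspace(0.1, 10.0, 100)
# eta_rot, eta_bar = shape_parameters(r, v_tot, v_bar)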
To reproduce these extremely cuspy dwarfs with CDM-based simulations, we would need strong cosmic inflows so that feedback becomes inefficient leading to overcooling <cit.>. Another possible pathway for inferring ultra-cuspy dwarfs from gas rotation curves constitutes strong, transient perturbation to the gas, usually resulting in an offset between the gas, stars and dark matter. However, in the fiducial fire-3 model (as well as in the SPARC sample), these transient ultra-cuspy dwarfs are generally associated with more massive systems (M_halo≳ 10^11 and V_max≳ 50 km s^-1), see <cit.> for details. Nevertheless, even with our small sample, we obtain a range of profiles at fixed baryonic importance ranging from cusps to cores, a feat that has been historically difficult to reproduce in simulations <cit.>. We are able to reproduce these trends due to the interplay of the different feedback processes resulting in varying overall feedback efficiencies for the same baryon content, either enhancing or reducing the overall feedback impact compared to simulations that do not include CRs or BHs. Cusps are generally associated with highly concentrated stellar distributions in gas-poor dwarfs, whilst cores are created by galactic fountain activity in gas-rich dwarfs that produces a core through cyclic feedback events but does not expel the baryons. In this manner, we are able to match both the large observed scatter and the weak observed trends with baryonic dominance. § DISCUSSION §.§ Comparison with previous theoretical work We have investigated the impact of BH feedback and CRs on the cusp versus core problem in dwarf galaxies with a special focus on whether including additional baryonic feedback channels may help explain the observed diversity of rotation curves in dwarf galaxies. Several groups have investigated the role of stellar feedback in driving cusp to core transformations in galaxy formation simulations with mixed results <cit.>. Most simulations tend to either predominantly produce cusps or predominantly produce cores which appears to be closely related to the star formation threshold employed and, more crucially, whether the cold, dense ISM phase is resolved in the simulations <cit.>. Nevertheless, for a given ISM model it has proven very challenging for simulations to reproduce the observed diversity of dwarf rotation curves <cit.>. The past decade has unveiled growing observational samples of dwarf galaxies with AGN. These observations raise the question of whether active BHs may increase the diversity in star formation histories as well as directly impact central dark matter distributions and thereby increase the diversity of dwarf rotation curves in simulations. Previous studies found systematic yet small-scale suppressions of central dark matter densities by AGN feedback in dwarfs <cit.>, yet in both cases, the simulations did not explicitly resolve the multiphase ISM. Cusp-to-core transformations with AGN feedback were also explored by <cit.>. However, they employed the fiducial Bondi accretion model for BH growth, which suppresses the growth of low-mass BHs such that AGN activity in dwarfs is insignificant in these simulations and does not affect dark matter distributions. In our study, we investigate, for the first time, the impact of AGN feedback on dark matter profiles in dwarfs with a resolved multi-phase ISM model and a BH growth scheme that does not suppress AGN activity in dwarf galaxies. 
Several works exploring BH physics within the fire model have found that CRs appear to be a key ingredient for effective AGN feedback <cit.>, so we also investigate the impact of CR feedback on the interplay between baryons and dark matter in dwarfs. We find that efficient AGN feedback can actively drive core formation and, together with CRs, significantly enhances the diversity of dwarf rotation curves by leading to more varied star formation histories. Indeed, in many cases our runs including CRs (with or without AGN) lead to cuspier profiles due to suppressed star formation activity. This is in contrast with recent findings from <cit.>, who find that CRs are prone to enhance core formation. We note that for one of our set-ups (compare to ), we also observe this behaviour, and indeed it is plausible that CRs could act to either suppress or enhance cores depending on their interaction with the ISM and the assembly history of the given dwarf galaxy. Our most important finding is that BHs and CRs as additional baryonic processes lead to varying overall feedback efficiencies that can lead to both cuspier or more cored profiles for a given assembly history. We also demonstrate the promising potential for these additional baryonic processes to resolve the diversity of dwarf galaxy rotation curves problem: our simulations exhibit a wide variety of profiles, including cuspy profiles at low baryonic dominance and strong cores at high baryonic dominance, thereby reproducing the puzzling relationship between rotation curve shapes and the gravitational importance of baryons from the observations. The former is associated with highly concentrated stellar distributions in gas-poor dwarfs, whilst the latter is a signature of galactic fountain activity in gas-rich dwarfs that produces a core through cyclic feedback events but does not expel the baryons. One remaining discrepancy with the observations lies in the `extremely' cuspy dwarfs which are even denser than NFW expectations. Within our relatively modest sample size of simulations we are unable to reproduce these. Apart from the obvious need to explore more halo masses and environments, as these objects may be linked to strong cosmic inflows which lead to overcooling <cit.>, in future work, it would also be important to explore observational uncertainties in rotation curve measurements as well as theoretical uncertainties in the modelling of BHs and CRs, and we discuss both of these aspects in the following sections. §.§ Observational uncertainties In observations, the dark matter distribution of dwarf galaxies may be inferred from the rotation curves of gas or stars, yet it has been shown that in practice these rotation curves may be highly unreliable tracers of the `true' underlying circular velocity distribution. In particular, various dynamical perturbations may lead to large discrepancies of 50 per cent or more <cit.>. Processes that are associated with inducing AGN activity, including mergers and strong cosmic gas inflows, lead to inaccurate estimates of the matter distributions in dwarfs. Interestingly, all of these studies find that observational uncertainties are more likely to lead to an underestimation of the true circular velocity, i.e. overestimation of the occurrence of cored profiles in observations, whilst the inverse error is less common though may still occur in the central few kiloparsecs due to dynamical phenomena, especially for more massive dwarfs <cit.>. 
Apart from the observational uncertainties in inferring dark matter mass distributions, it also remains extremely challenging to determine the drivers behind core formation from observations. In particular, there are no observational constraints on the potential role of AGN in cusp-to-core transformations. This is mainly due to the number of observed AGN in nearby dwarf galaxies being very limited. To further complicate matters, AGN are associated with disturbed gas rotation curves – see e.g. <cit.>, who find a strong association between AGN activity and disturbed gas kinematics in observed dwarf rotation curves, making it even more difficult to establish a link between AGN activity and central dark matter densities. JWST may partly alleviate these issues by disentangling rotational and outflow components with high-resolution NIRSpec-IFU observations <cit.>. From our simulations, we predict that if cored dwarf galaxies were found below the critical mass ratio of (M_stellar/M_vir)_crit = 5 × 10^-4, this would be a strong indicator of AGN-driven core formation in dwarfs. §.§ Theoretical uncertainties The diversity of models presented in this work (with or without CRs and with or without AGN) is meant to represent different outcomes of `the same underlying physics' under different conditions. In particular, the impact of AGN feedback is expected (and observed) to be highly variable in the dwarf regime. The AGN activity levels in dwarfs are highly dependent on the seeding model assumed (with some dwarfs possibly not hosting any BHs at all) and whether the BH is off-centre for the majority of the dwarf's history, i.e. the no-AGN model here may represent the same underlying physics as the AGN model but in a situation where no BH is seeded or only an extremely weakly accreting off-centre BH is present. Hence just seeding and BH dynamics could drive substantial variability in addition to other effects such as varying feedback efficiencies depending on BH spin evolution. Similarly, the CR subgrid model we employ here represents a case where losses within the galaxy are minimal and hence the impact of CRs in our dwarfs likely represents an upper limit. With full CR transport the outcome will likely lie somewhere in-between the no-CR and CR simulations presented in this work. It would require significantly higher resolution simulations (and correspondingly more detailed BH modelling) and explicit CR transport to depict the full variability in dark matter distributions introduced by baryonic physics within one galaxy formation model. Our simulations provide strong motivation for exploring such detailed models in future work. Below we discuss the caveats of our CR and BH modelling in more detail. §.§.§ CR modelling Whilst the focus of our paper lies on the interplay between AGN feedback and dark matter distributions in dwarfs, we also assess the impact of CRs on the cusp versus core problem since CR injection has been identified as an important AGN feedback channel. For this, we take advantage of the CR subgrid model by <cit.>, which allows for the inclusion of the impact of CRs on galaxy formation without the significant computational overhead associated with full CR transport. As discussed in previous sections, for dwarf galaxies, this subgrid model presents a good approximation since, in this mass regime, we are far away from the calorimetric limit for protons. Consequently, we can safely assume that losses within the galaxy only play a secondary role.
Nevertheless, with this subgrid model, we may still be somewhat overestimating the effects of CR feedback at late cosmic times, as we are not fully capturing the inhomogeneity of CRs in the CGM, which may modify thermal instabilities <cit.>. In particular, we find that this implementation of CR feedback suppresses star formation so strongly that the addition of CRs generally leads to cuspier profiles. Indeed <cit.> also find in their validation simulations with a 10^11 halo that the subgrid CR model leads to suppressed star formation and a somewhat weaker core compared to an equivalent simulation with full CR transport, though these differences are small (∼ 1 per cent) and both CR models lead to a weaker core compared to their no-CR set-up. Our results should therefore be regarded as an upper limit for the impact of CR feedback and, whilst exploring alternative CR implementations is beyond the scope of this paper, our work provides strong motivation for investigating cusp-to-core transformations with AGN and full CR transport in future work. §.§.§ BH modelling The modelling of AGN feedback in cosmological simulations remains subject to significant theoretical uncertainties due to the vast dynamic range of relevant scales spanning (at least) 14 orders of magnitude from the event horizon (∼ 10^-6 pc for Sgr A*) to the cosmic web (∼ 10^8 pc). Hence BH growth, feedback, and dynamics cannot be modelled ab-initio and need to be included as `subgrid' processes in cosmological simulations. Here, we model the BH growth based on the torque-based accretion model <cit.>, which allows for efficient BH growth in the dwarf regime because it does not suffer from the strong BH mass dependence inherent to the widely used Bondi model <cit.>. This leads to relatively high BH masses compared to the extrapolated scaling relations – although we emphasise again that this comparison should be interpreted with caution because we would expect a flattening of the scaling relations in this mass regime <cit.>. Nevertheless, from an accretion perspective, our simulations may be exploring an upper limit on the impact of BHs in the dwarf regime. The inflow rates provided by the torque-based model are then coupled to a gas reservoir representing the accretion disc, with the depletion time of this reservoir set by the thin α-disc model. However, there are a few caveats to this approach. Firstly, the α-disc is generally only applicable in the radiatively efficient accretion regime. Secondly, this approach only tracks the mass flow rates through the disc and does not follow angular momentum flow rates. This in turn means that the BH spin evolution cannot be tracked self-consistently; therefore we have to assume constant efficiencies for the disc and jet luminosities, which otherwise could be directly inferred from the disc and BH properties <cit.>. For the AGN feedback, we only explore one set of parameters for the three AGN channels (mechanical winds, radiation and CRs) that allows us to match the main observed characteristics of nearby dwarf galaxies; see Fig. <ref>. However, as discussed in <cit.>, there are several other `plausible' AGN feedback set-ups within the fire model, leaving a large parameter space to be explored. In particular, we assume collimated injection for winds and radiation; however, the cusp versus core problem may be sensitive to alternative injection geometries, as discussed in <cit.> for the case of stellar feedback. 
This may allow us to also have a direct impact on dwarf core formation at lower halo masses, as in the set-up, without compromising in terms of producing realistic stellar properties. The final caveat in our model is that we do not have sufficient resolution to self-consistently model BH dynamical friction. Hence we employ a drift force to model the unresolved dynamical friction, which generally ensures that the BHs remain close to the centre of the halo. However, in dwarf galaxies, off-centre BHs may actually be physical <cit.> and in particular for massive BH binaries, the BHs orbiting around the centre could be an important contribution to core formation via core scouring. This has mainly been investigated in the context of massive elliptical galaxies <cit.>, however, these mechanisms may also translate to the dwarf regime and, in turn, the central dark matter densities of dwarfs are expected to affect binary shrinking times <cit.>. To also account for the dynamical effects of massive BHs on core formation in future work, it will be important to include the unresolved effects of dynamical friction as accurately as possible following dynamical friction estimators based on high-resolution N-body simulations <cit.>. § CONCLUSIONS In this work, we have investigated the impact of feedback from massive BHs on cusp-to-core transformations and the associated `diversity of rotation curves problem' in dwarf galaxies with a new suite of high-resolution cosmological zoom-in simulations based on the fire-3 galaxy formation model. Our suite includes three different dwarf haloes spanning a range of masses and environments within the classical dwarf regime of 8 × 10^9 < M_halo < 4 × 10^10. For each of these haloes, we investigate simulations with and without AGN as well as with and without CRs yielding four physics variations for each of the three haloes. We note that these different set-ups may represent different outcomes of the same underlying galaxy formation physics under different conditions, with the efficiency of both AGN and CRs expected to be highly variable in dwarf galaxies. Our main conclusions are as follows: * AGN may drive core formation directly as an additional source of feedback dynamically heating the central dark matter distribution; see the set-up. * AGN may also enhance core formation indirectly by suppressing high-redshift star formation and shifting SN activity to later times, which favours core formation at z=0; see e.g. the and set-ups. * CRs may also indirectly affect cusp-to-core transformations by regulating star formation histories – for most of our simulations, this leads to suppressed star formation and therefore suppressed core formation; see e.g. the and set-ups. * Our simulation suite is in good agreement with observed dark matter central density distributions, broadly following the trends from the nearby dwarfs in <cit.>. In particular, cores are generally more pronounced for higher stellar-to-halo mass ratios, M_stellar / M_vir. The only exception to this trend comes from the AGN-driven core in the set-up, which has a strongly suppressed M_stellar / M_vir ratio and a cored profile. This parameter space is a key target for observational searches for cores induced by AGN feedback. * BH feedback and CRs create a variety of cusps and cores in circular velocity profiles, with correlations between rotation curve shapes and baryonic influence that align with observations, potentially helping to resolve the diversity of rotation curves problem in dwarf galaxies. 
Overall, our findings suggest that AGN in dwarf galaxies can significantly impact central dark matter densities. BHs influence dwarf galaxy rotation curves through two key mechanisms: directly driving core formation via AGN feedback and indirectly regulating core formation by altering star formation histories and SN activity. CRs further contribute to this regulation by enhancing AGN feedback and suppressing star formation, thereby influencing core formation. The combined effects of AGN and CRs lead to a spectrum of central dark matter densities, reflecting the varying levels of AGN activity, star formation, CR influence, and their non-linear interaction. Due to the bursty star formation and accretion histories in dwarfs, AGN feedback is likely to occur in a highly stochastic manner; this may explain the wide range of dark matter central densities inferred from dwarf observations at fixed halo mass. Linking surveys of dwarf rotation curves with searches for AGN signatures will be crucial for future observations to elucidate the impact of BHs on dark matter profiles and to determine whether the diversity of rotation curves problem may be explained by baryonic processes. § ACKNOWLEDGEMENTS The authors are grateful for helpful discussions with Vasily Belokurov, Martin Bourne, Jenny Greene, Vid Irsic and Sergio Martin-Alvarez. The authors would also like to thank Isabel Santos-Santos for providing the observational rotation curve data. The simulations presented in this work were run on the Flatiron Institute's research computing facilities (the Iron compute cluster), supported by the Simons Foundation. SK has been supported by a Flatiron Research Fellowship and a Junior Research Fellowship from St Catharine's College, Cambridge. The Flatiron Institute is supported by the Simons Foundation. DAA acknowledges support by NSF grant AST-2108944, NASA grant ATP23-0156, STScI grants JWST-GO-01712.009-A and JWST-AR-04357.001-A, Simons Foundation Award CCA-1018464, and Cottrell Scholar Award CS-CSA-2023-028 by the Research Corporation for Science Advancement. Support for ISS was provided by NSF Collaborative Research Grant 2108318. SW received support from the NASA RIA grant 80NSSC24K0838. § DATA AVAILABILITY The data underlying this article will be shared on reasonable request to the corresponding author.
http://arxiv.org/abs/2409.02775v1
20240904145508
Hydromechanical field theory of plant morphogenesis
[ "Hadrien Oliveri", "Ibrahim Cheddadi" ]
physics.bio-ph
[ "physics.bio-ph", "2020 MSC: 74B20, 74F10, 74F20, 92B99" ]
Hadrien Oliveri^1,2,3,* and Ibrahim Cheddadi^4,5,† (*Corresponding author: holiveri@mpipz.mpg.de; †Corresponding author: ibrahim.cheddadi@univ-grenoble-alpes.fr) ^1 Max Planck Institute for Plant Breeding Research, Cologne, 50829, Germany; ^2 Max Planck Institute of Molecular Cell Biology and Genetics, Dresden, 01307, Germany; ^3 Center for Systems Biology Dresden, Dresden, 01307, Germany; ^4 Université Grenoble Alpes, CNRS, Grenoble INP, TIMC, Grenoble, 38000, France; ^5 Laboratoire de Reproduction et Développement des Plantes, Université de Lyon, ENS de Lyon, UCBL, INRAE, CNRS, Inria, Lyon, 69364, France. § ABSTRACT The growth of plants is a hydromechanical phenomenon in which cells enlarge by absorbing water, while their walls expand and remodel under turgor-induced tension. In multicellular tissues, where cells are mechanically interconnected, morphogenesis results from the combined effect of local cell growths, which reflects the action of heterogeneous mechanical, physical, and chemical fields, each exerting varying degrees of nonlocal influence within the tissue. To describe this process, we propose a physical field theory of plant growth. This theory treats the tissue as a poromorphoelastic body, namely a growing poroelastic medium, where growth arises from pressure-induced deformations and osmotically-driven imbibition of the tissue. From this perspective, growing regions correspond to hydraulic sinks, leading to the possibility of complex non-local regulations, such as water competition and growth-induced water potential gradients. More generally, this work aims to establish foundations for a mechanistic, mechanical field theory of morphogenesis in plants, where growth arises from the interplay of multiple physical fields, and where biochemical regulations are integrated through specific physical parameters. Keywords: plant mechanics, morphogenesis, growth, elasticity, morphoelasticity, poroelasticity. MSC 2020: 74B20, 74F10, 74F20, 92B99. Simply the force slept in the seed; an incipient prototype lay, enclosed in itself, bent beneath the husk: leaf and root and germ, only half formed and colourless; thus, dry, the kernel keeps a quiet life preserved; it swells, striving upward, entrusting itself to gentle moisture, and rises at once from the surrounding night. (Johann Wolfgang von Goethe, Die Metamorphose der Pflanzen, 1798) § INTRODUCTION §.§ Modelling the growth of plant tissues Morphogenesis is the fundamental process by which biological organisms establish their shape. This phenomenon relies upon multiple genetic, chemical, mechanical, and physical regulations, which are typically multiscale, thermodynamically open, out-of-equilibrium, and coupled in a so-called complex system. To achieve a rational understanding of morphogenesis, dedicated models are indispensable; in particular, a grand challenge is to build mechanistic continuum theories that describe the emergent physical behaviour of living tissues at the macroscopic scale. In plants, the key phenomenon driving morphogenesis is growth, the general process by which a physical body deforms by gaining mass <cit.>.
Indeed, in plant tissues, adjoining cells are mechanically interconnected and constrained by an extracellular cell wall matrix which prevents cell migration and topological rearrangements <cit.>. Therefore, complex organ shapes result from the anisotropic and heterogeneous growth (and division) of the cells, generating large solid deformations of connected tissue regions <cit.>. The theory of morphoelasticity is a modern mechanical theory of growth, which has emerged historically from the nonlinear field theories of elasticity and elastoplasticity <cit.>. This framework has been adapted with remarkable success to a wealth of biological systems <cit.>, and provides a natural setting to model plant growth at the tissue scale <cit.>. Historically, an important problem in growth theory has been the characterisation of growth-induced mechanical instabilities, which refer to the loss of stability and change in configuration undergone by a growing tissue as a result of growth-induced internal stresses <cit.>. In this context, the growth deformation is treated typically as a constant bifurcation parameter controlling the onset of instability. In this respect, the focus lies on the static elasticity problems arising from growth, rather than the physical origin and kinetics of growth itself. In contrast, the so-called problem of morphodynamics consists in studying growth as a variable of a dynamical system whose evolution reflects the physical state of the tissue, reflecting the inherently dynamic nature of developmental processes <cit.>. From a biological point of view, plant growth is controlled dynamically through a multitude of genes and hormones that form heterogeneous patterning fields <cit.>. To address the role of genetic patterning in morphogenesis, continuum computational models have been proposed, based on the so-called notion of specified growth, detailed in <cit.> and applied in various contexts, see <cit.> <cit.>. Specified growth refers to the intrinsic growth a volume element undergoes when isolated from the rest of the tissue, reflecting some local biological identity (e.g. gene expression, polarity or, more abstractly, the action of morphogens) which controls the kinematics of growth. Then, given a specified growth field, a compatible deformation can be obtained in a second step, by minimising the body's elastic energy. This approach has been instrumental in demonstrating how spatially heterogeneous gene expression patterns control the emergence of complex shapes in three dimensions. However, from a biophysical perspective, specified growth remains a fundamentally phenomenological and essentially kinematic representation of growth, as its precise link to cellular mechanics–specifically the explicit physical action of genes–is not fully characterised. In contrast, the aim of biophysics is to explain growth from physical and mechanical principles. In this context, physically-based discrete models describe tissues at the cellular level, incorporating detailed cellular structure and geometry, mechanical anisotropies, and fine mechanisms underlying growth <cit.>. The goal in these models is then to reproduce the growth phenomenon from first principles, thereby allowing for rigorous testing of hypotheses at the cellular scale through specific biophysical parameters. However, this type of approach relies on a discretised representation of the tissue, thus it is often doomed to remain computational.
While cellular models offer detailed insights, they do not enjoy the generality, minimalism, scalability, and analytic tractability of a continuum mathematical theory. §.§ The hydromechanical basis of cell growth To build such a theory on physical grounds, we start with the basic physics of cell expansion. Plant cells grow by absorbing water from their surroundings, a process driven by their relatively high osmolarity <cit.>. This phenomenon is captured by the historical model of <cit.>, later expanded in a somewhat similar manner by <cit.> and <cit.> <cit.>. This model describes the elongation of a single cylindrical cell of length ℓ. The expansion rate ℓ̇ of such a cell can be expressed in terms of the volumetric influx of water through <cit.> ℓ̇ = k_c ℓ (π - p), with k_c the effective hydraulic conductivity of the cell; π and p respectively the excess osmotic and hydrostatic pressures with respect to the outside; and where the overdot denotes differentiation w.r.t. time t. The quantity ψ = p - π, the water potential of the cell relative to the outside, characterises the capacity of the cell to attract water. Note that (<ref>) does not define a closed system, as the pressure p is related to the mechanics of the cell wall via the balance of forces. The growth of plant cells is a complex process that involves loosening, yielding under tension, and remodelling of the cell walls <cit.>. This process is modelled as a Maxwell-type viscoelastoplastic creep. In an elongating cylindrical cell, stress can be explicitly written in terms of pressure, and this rheological law can be written as ℓ̇/ℓ = Φ ⟨ p - y ⟩ + ṗ/E, with ⟨ x ⟩ = max(x, 0) the ramp function; y a threshold pressure above which growth occurs; Φ the so-called extensibility of the cell; and E the effective Young's modulus of the cell. Equations (<ref>) now form a closed system for ℓ and p, illustrating that the turgor pressure is not a control parameter of the rate of growth, but a dependent variable of the cell expansion process. Indeed, in the steady growth regime (ṗ = 0), the pressure is fully determined by the three parameters y, π and α_c = k_c / (k_c + Φ), and can be eliminated from the system: ℓ̇/ℓ = α_c Φ (π - y), p = α_c π + (1-α_c) y (with y ≤ p ≤ π). Recently, the Lockhart model served as a starting point for a vertex-based, multicellular hydromechanical model of plant morphogenesis, including water movements between cells, and mechanically-driven cell wall expansion <cit.>. This generalisation of the single-cell model generates several additional emergent spatiotemporal effects that arise from the coupling between wall expansion and water movements within a large developing tissue, such as hydraulic competition effects and water gradients. Overall, the relationship between water and growth, as highlighted by the Lockhart-Ortega-Cosgrove model and its multidimensional generalisation, is central to the physical functioning of plant living matter, and the basic notion that water must travel from sources (e.g. vasculature) to growing regions is a fundamental principle of development. §.§ Hydromechanical field theory for plant growth Building on this notion, we here develop a physical field theory of plant morphogenesis including tissue mechanics, wall synthesis, and hydraulics. Viewing the tissue as a network of cells exchanging water, a natural formalism is the theory of porous media, which describes the deformations of fluid-saturated materials.
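Before turning to this continuum description, the single-cell picture above can be made concrete numerically. The following sketch (Python; all parameter values are illustrative assumptions, not measurements) integrates the Lockhart-Ortega-Cosgrove equations (<ref>) and checks that the relative elongation rate and the pressure converge to the steady values (<ref>).

# Minimal numerical sketch of the single-cell Lockhart-Ortega-Cosgrove model (illustrative parameters).
import numpy as np
from scipy.integrate import solve_ivp

k_c, Phi, E, pi_o, y = 1.0, 2.0, 20.0, 0.6, 0.3   # hypothetical nondimensional values

def rhs(t, z):
    """z = [l, p]: cell length and turgor pressure."""
    l, p = z
    dldt = k_c * l * (pi_o - p)                          # water uptake (Lockhart)
    # wall rheology (Ortega): dl/(l dt) = Phi*<p - y> + (dp/dt)/E
    dpdt = E * (dldt / l - Phi * max(p - y, 0.0))
    return [dldt, dpdt]

sol = solve_ivp(rhs, (0.0, 50.0), [1.0, 0.0], max_step=0.01)
p_final = sol.y[1, -1]
alpha_c = k_c / (k_c + Phi)
print("final relative growth rate:", k_c * (pi_o - p_final))   # ~ alpha_c*Phi*(pi_o - y)
print("steady prediction         :", alpha_c * Phi * (pi_o - y))
print("final pressure            :", p_final, "vs", alpha_c * pi_o + (1 - alpha_c) * y)

As expected from (<ref>), the pressure settles between y and π, and the growth rate approaches α_c Φ (π - y).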
The modern theory of porous media has a complex history originating in the seminal works of Fick and Darcy on diffusion, and Fillunger, Terzaghi and Biot on soil consolidation, formalised later within the context of the theory of mixtures, notably based on contributions by Truesdell and Bowen <cit.>. There is now a rich body of literature exploring the application of these concepts to biology in various contexts, including biological growth. However, despite the clear significance of water in plant mechanics <cit.>, the role of hydromechanics in the context of plant growth modelling has but marginally been examined. Several notable works have featured linear poroelastic descriptions of plant matter, occasionally including growth <cit.>; however, these studies are limited to simple geometries and have not culminated in a general three-dimensional theory of growth. While the limiting role of water fluxes within plant tissues has long been debated <cit.> and remains challenging to assess experimentally, recent years have seen a resurgence of interest in this problem, with new studies supporting the potential for hydromechanical control in tissue development <cit.>. Building on these recent advancements, we propose a hydromechanical model for plant tissues seen as morphoelastic, porous materials encompassing the solid matrix of the cells and their fluid content. Technically speaking, our approach is analogous to previous models, for instance for tumour growth <cit.> or brain oedema <cit.>, however the specificities of plant living matter demand a dedicated treatment. Overall, this work aims to bridge the conceptual gap between cell biophysics and growth at the tissue scale, by describing growth as a consequence of simultaneous mechanical deformations and imbibition of the cells, thereby extending the seminal analysis of Lockhart to the framework of continuum media. The goals are then (i) to establish theoretical foundations for a generic theory of plant growth within the framework of nonlinear continuum mechanics (<ref>); and (ii) to characterise the guiding principles that emerge from multiple couplings between mechanical, physical and hydraulic fields (<ref>). § GENERAL THEORY §.§ Geometry and kinematics §.§.§ Total deformation We describe the geometry and kinematics of the evolving tissue viewed as a continuum domain embedded in the three-dimensional space <cit.>. We introduce the initial configuration ℬ_0 ⊂ℝ^3 at time t=0 of the body, and its current configuration ℬ⊂ℝ^3 at t≥ 0; see <ref>(a). The deformation of the body from ℬ_0 to ℬ is described by a smooth map χ: ℬ_0 ×ℝ→ℬ that sends material points X⃗∈ℬ_0 onto spatial points x⃗ = χ(X⃗,t)∈ℬ, at a given time t∈ℝ. The deformation gradient tensor is defined as F χ, with the material (or Lagrangian) gradient, with respect to the reference configuration. Finally, the Jacobian determinant of the deformation, J F, measures the local volumetric expansion due to χ (i.e. v = J V). Next, we describe the kinematics of the deformation by introducing the material and spatial (or Eulerian) velocities, respectively V⃗ and v⃗, defined as V⃗ (X⃗, t)=v⃗ (x⃗, t)= χ (X⃗, t)t. A natural measure of the relative rate of deformation is then the gradient of spatial velocity, L v⃗ = Ḟ F^-1, with (·) = F^-⊤ (·) the Eulerian gradient with respect to the current configuration. We also recall the standard kinematic formulae J̇ / J= v⃗ = L, with the Eulerian divergence and the trace operator. 
Here, the overdot denotes the material derivative J̇ = Jt, related to the Eulerian derivative Jt through the generic formula J̇ = Jt + v⃗· J. §.§.§ Growth We model growth using the framework of morphoelasticity <cit.>. The fundamental postulate of morphoelasticity is the multiplicative decomposition of deformations into an elastic deformation tensor A and a growth tensor G <cit.>: F = A G ; see <ref>(a). Constitutively, the growth deformation G is assumed to be anelastic, meaning it does not contribute to the strain energy, and describes the configurational change in the local reference configuration of the body due to local mass addition. In contrast, the elastic deformation A generates stresses and enters the strain energy function. Note that there is no requirement for the growth deformation to be compatible, that is, in general there is no deformation embeddable in the Euclidean space of which 𝐆 is the gradient, unlike F which is compatible by definition <cit.>. We define the Jacobian determinants J_A A and J_G G, so that (<ref>) J = J_AJ_G; and the rate of growth L_G Ġ G ^-1 and rate of elastic deformation L_A Ȧ A ^-1, with L = L_A + A L_G A^-1 . Finally, we introduce Γ L_G = J̇_G/J_G, measuring the relative rate of volumetric expansion due to growth. Note that, in the context of plant morphogenesis, most authors have restricted their attention to linear elasticity. In contrast, as we do not require A or G to be small here, our approach is geometrically exact and can capture arbitrarily large strains, and rich elastic behaviours such as strain-stiffening effects <cit.>. §.§ Balance laws §.§.§ Balance of mass We model the growing tissue as a poromorphoelastic solid representing the cell matrices and the water content; see <ref>(b). Specifically, the tissue is viewed as a triphasic porous medium composed of a solid phase representing the extracellular cell wall matrix (`s'), a pure fluid phase (`f'), and the cytoplasmic osmolytes (`o'). We define v⃗_s, v⃗_f, and v⃗_o, the Eulerian velocities of the respective phases, with the solid velocity chosen as the reference: v⃗_s = v⃗. We denote with ϕ_s, ϕ_f, ϕ_o the volume fractions of the solid, fluid and osmolyte components, respectively, in the current configuration (that is, the true volume occupied by each component per unit volume of tissue). Assuming that the material is saturated, we have ϕ_s + ϕ_f + ϕ_o = 1. The pore space volume occupancy is measured by the Eulerian porosity ϕ 1 - ϕ_s, which represents the ratio of pore space volume to current tissue volume. Similarly, the intermediate porosity Φ J_A ϕ measures the current pore space volume per unit intermediate volume. The Lagrangian porosity Φ^0 Jϕ= J_G Φ measures the current pore space volume per unit reference volume. Here, we assume that all phases are inherently elastically incompressible <cit.>. This assumption implies that in a macroscopic elastic deformation, the pores will shrink or dilate so as to conserve the volume of the solid phase. Thus, we can relate the solid volume fraction in the unstressed configuration (Φ_s, the solid volume fraction observed upon relieving the stresses) to that in the current configuration ϕ_s, through Φ_s = J_Aϕ_s = J_A - Φ . During growth and remodelling of the tissue, cells secrete new solid material, divide and form new cell walls that may affect the structure of the cell matrix locally, thus, the porosity. 
To represent this effect, we assume generally that the solid volume fraction may evolve as Φ̇_s = ℋ(x⃗, t, Φ_s, G,…), where ℋ is a function of the different variables that symbolises the local evolution of the matrix structure during growth. For example, in a scenario where cells do not divide but maintain a constant cell wall thickness, we can assume ℋ = - ΓΦ_s/3. We assume here that the tissue maintains locally a constant homeostatic cellular structure, so that ℋ=0 and Φ_s is a constant. We can rewrite (<ref>) using (<ref>) as ϕ_st+ ( ϕ_s v⃗_s)= ℋ/J_G + Γϕ_s := ξ_s, with ξ_s representing the source of solid material. Here we assume that the availability of the solid material required for growth is non-limiting, i.e., we ignore the physiological aspects of biosynthesis underlying growth (photosynthesis, nutrient uptake). Similarly, the balance of fluid mass is expressed through ϕ_ft + (ϕ_fv⃗_f) = ξ_f, where ξ_f is a possible source of water that accounts, typically, for a bulk vascularisation of the tissue. A possible constitutive law for ξ_f is discussed later in this paper. Lastly, the balance of mass for the osmolytes is expressed through ϕ_ot + ϕ_ov⃗_o = ξ_o. with ξ_o representing the synthesis/uptake of osmolytes. Summing (<ref>) using (<ref>), we obtain a single volume balance equality <cit.>: ϕ_sv⃗ + ϕ_fv⃗_s + ϕ_ov⃗_o = ξ_s + ξ_f + ξ_o. §.§.§ Balance of momentum Since the growth and fluid transport happen at timescales (hours) that are much longer than the timescale of elastic relaxation (seconds), we neglect inertia. Thus, we write the balance of momentum in terms of the Cauchy stress tensor T as T + b⃗ = 0⃗ , T = T^⊤, with b⃗ a density of applied body loads (per unit current volume), typically self-weight in our context. Henceforth, we focus on the case b⃗=0⃗. The symmetry of the stress tensor results from the absence of applied torque and micropolar structure in the material. In a more general theory, asymmetric stresses may be introduced to reflect the possible local twist of the cells <cit.>. §.§.§ Balance of internal energy To complete the theory, we discuss the thermodynamics of the system. The balance of energy for the total internal energy ℰ of a region Ω⊂ℬ encompasses internal work contributions 𝒫, heat 𝒬, and addition of new material 𝒮: ℰt = 𝒫 + 𝒬 + 𝒮 , where ℰ = ∫_Ω e v, with e the internal energy per unit volume of mixture. The work rate encompasses the work of internal forces, the works required to make room for new fluid, osmolyte and solid material (w_s) during growth, and the work w̅_o accounting for the active transport of the osmolytes, reflecting chemical processes that we do not address explicitly. Altogether, we have 𝒫 = ∫_ΩT⃗ : L v + ∫_Ω p ξ_f v- ∫_∂Ω p j⃗_f ·a⃗+ ∫_Ω p ξ_o v- ∫_∂Ω p j⃗_o ·a⃗ + ∫_Ω w_s ξ_s v + ∫_Ωw̅_o v , where j⃗_f = ϕ_f w⃗_f and j⃗_o = ϕ_o w⃗_o are the relative fluxes of fluid and osmolytes through the mixture; with w⃗_f v⃗_f - v⃗ the seepage velocity of the fluid and w⃗_o v⃗_o - v⃗; and where a⃗ and v denote respectively the outward normal surface element and the volume element for the integration. The heat contribution encompasses a bulk source r and a flux q⃗ through the boundary: 𝒬 = ∫_Ω r v - ∫_∂Ωq⃗·a⃗. Energy is also added to the system through addition of new material. We note e_f, e_o and e_s the internal energy of the fluid, the osmolytes and the solid so that 𝒮 = ∫_Ω e_s ξ_s v + ∫_Ω e_f ξ_f v - ∫_∂Ω e_f j⃗_f ·a⃗+∫_Ω e_o ξ_o v- ∫_∂Ω e_o j⃗_o ·a⃗ . 
Using the divergence theorem to eliminate the boundary terms, then the Leibniz rule for differentiation under the integral sign, and the localisation theorem, we derive the local form of the balance of internal energy et + ( e v⃗ ) = T⃗ : L + r + h_f ξ_f + h_o ξ_o + h_s ξ_s+ w̅_o- h_f j⃗_f + h_o j⃗_o + q⃗, where we have introduced h_f = e_f + p, h_o = e_o + p and h_s = e_s + w_s the enthalpies per unit volume of pure fluid, osmolytes and solid material, respectively. Alternatively, it is convenient to express the balance of energy in reference configuration as Ė^0 = P⃗ : Ḟ + R^0 + h_f Ξ_f^0 + h_o Ξ_o^0 + h_s Ξ_s^0 + W̅_o^0- h_f J⃗_f^0 + h_o J⃗_o^0 + Q⃗^0, where denotes the Lagrangian divergence; E^0 = Je is the internal energy per unit reference volume; P = J T F^-⊤ is the Piola–Kirchhoff stress <cit.>; and Q⃗^0 = J F^-1q⃗, J⃗_o^0 = J F^-1j⃗_o and J⃗_f^0 = J F^-1j⃗_f are the flux terms pulled back to the reference configuration. We define the quantities Φ_f^0 = Jϕ_f, Φ_o^0 = Jϕ_o, and Φ_s^0 = Jϕ_s=J_GΦ_s, measuring respectively the current volume of each component per unit reference volume, so that Ξ_s^0=Φ_s^0Γ, Ξ_o^0 = Jξ_o and Ξ_f^0 = Jξ_f are the local production of solid, osmolytes and fluid volume per unit reference volume; R^0 = Jr is the heat source per unit reference volume; W̅_o^0 = Jw̅_o is the work due to active transport per unit reference volume. §.§.§ Imbalance of entropy The second law of thermodynamics is expressed through the Clausius-Duhem inequality, expressed in terms of the entropy density S^0 (entropy per unit initial volume) as <cit.> Ṡ^0 ≥R^0 /θ+ s_f Ξ_f^0 + s_oΞ_o^0 + s_fΞ_s^0 + S̅_o^0 - Q⃗^0 /θ + s_f J⃗_f^0 + s_o J⃗_o^0 , with θ denoting the thermodynamic temperature; and where s_f, s_o and s_s are the volumetric entropy densities of the added fluid, osmolyte and solid materials, respectively; and S̅_o^0 denotes the entropy contribution from active transport. Henceforth, we assume isothermal conditions for simplicity, so we take Q⃗^0 = 0⃗ and R^0= 0. Introducing the Helmholtz free energy Ψ^0 = E^0 - θ S^0 and applying the energy balance equation (<ref>), we can reformulate the previous inequality as Ψ̇^0 ≤P⃗ : Ḟ + g_f Ξ_f^0 + g_o Ξ_o^0 + g_s Ξ_s^0 + G̅_o^0 - g_f J⃗_f^0 + g_o J⃗_o^0 , where g_s = h_s - θ s_s, g_f = h_f - θ s_f and g_o = h_o - θ s_o denote the Gibbs free energy densities of the solid, fluid and solute material, respectively; and G̅_o^0 W̅_o^0 - θS̅_o^0. As detailed in the next section, the inequality (<ref>) is useful to derive constitutive laws for growth and material transport. §.§ Constitutive laws §.§.§ Coleman-Noll procedure To close the system we need to formulate constitutive laws for the material. Therefore, we follow the approach of <cit.> and apply the Coleman-Noll procedure to the dissipation inequality (<ref>) to constrain thermodynamically the constitutive laws <cit.>. Firstly, we decompose the free energy Ψ^0 into a solid part Ψ_s^0 and a fluid part Ψ_f ^0 = Φ_f^0 g_f +Φ_o^0 g_o - Φ^0 p <cit.>; so that Ψ^0 = Ψ_s^0 +Φ_f^0 g_f +Φ_o^0 g_o - Φ^0 p. For an inherently incompressible solid component, the porosity Φ ^0 = J_G (J_A - Φ_s ) is determined by the deformation. We introduce the mechanical free energy of the material in the form W^0 ( A, G, p) = Ψ_s^0( A, G) - p J_G (J_A - Φ_s), encompassing an elastic contribution and a hydrostatic contribution. 
The energy per unit intermediate volume is W( A, p) = J_G^-1 W^0 ( A, G, p) = Ψ_s ( A) - p ( J_A - Φ_s), where Ψ_s( A) = J_G^-1Ψ_s^0 ( A, G) defines the strain energy density of the solid, per unit intermediate volume. Then, combining (<ref>) with the balance of mass relations Φ̇^0_f = Ξ_f^0 - J⃗_f^0 and Φ̇^0_o = Ξ_o^0 - J⃗_o^0 (<ref>), and the Gibbs-Duhem equality ṗΦ^0 = ġ_fΦ_f^0 +ġ_o Φ_o^0, we obtain Ẇ^0 ≤P⃗ : Ḟ - ṗΦ_f^0 + G̅_o^0 - J⃗_f^0 · g_f - J⃗_o^0 · g_o . Finally, expanding Ẇ^0 as Ẇ_0 J_G ^-1 = W G^-⊤:Ġ + W A : Ȧ + Wpṗ and substituting into (<ref>) using (<ref>) provides 0 ≤ P G^⊤ - J_GW A:Ȧ + A^⊤ P + g_s Φ_s^0 - J_G W G^-⊤:Ġ + G̅_o^0 - J⃗_f^0 · g_f - J⃗_o^0 · g_o . By the argument of <cit.>, the inequality (<ref>) must hold for any admissible process. In particular, since Ȧ is arbitrarily prescribed, the following standard equalities must hold universally P = W^0 A G^-⊤ ⇔ T = J_A^-1 A W A which provides the standard stress-strain relation for a hyperelastic material. The resulting dissipation inequality, 0 ≤ A^⊤ P + g_s Φ_s^0 - J_G W G^-⊤:Ġ + G̅_o^0 - J⃗_f^0 · g_f - J⃗_o^0 · g_o , highlights the different modes of dissipation in the system, coming from distinct biophysical processes, specifically growth and transport. Thus, these contributions must satisfy (<ref>) individually, hence g_s Φ_s 1- B: L_G ≥ 0 , G̅_o^0 -J⃗_f^0 · g_f - J⃗_o^0 · g_o ≥ 0. Here, B⃗ W 1-J_A A^⊤ T A^-⊤ denotes the Eshelby stress tensor, which appears as an important quantity for growth <cit.>. §.§.§ Growth law We discuss the constitutive law for the growth of the cell matrix. Focusing on the solid component, we first decompose the stress additively into a solid and fluid part as T = T_s - ϕ p 1 where T_s is the partial stress due to the solid component <cit.>: T_s = J_A^-1 A Ψ_s A - ϕ_s p 1. We can rewrite (<ref>) in terms of the partial Eshelby stress B_s Ψ_s 1 - ϕ_s J_A A^⊤ T_s A^-⊤ (= B) as g_sΦ_s 1 - B_s: L_G ≥ 0 . This dissipation inequality expresses a thermodynamic constraint on the growth law. In particular, the case g_s=0, where the solid is added with no energy to the matrix, defines a passive growth process <cit.>. Generally, from (<ref>), a natural path is then to adopt the Eshelby stress B_s as a driving force for growth. A common approach then is to postulate the existence of a homeostatic stress B^* such that B^*: L_G = Φ_sg_sΓ, so a natural growth law can be chosen of the general form L_G = ℋ : ( B^* - B_s) with ℋ a fourth-order extensibility tensor such that the inequality holds for any B_s <cit.>. In plants however, it is unlikely that such homeostatic stress is actively maintained. Instead, the growth of plant cell walls is generally seen as a passive, plastic-like process <cit.> which involves complex anisotropies and nonlinear threshold effects <cit.>. Whether simple growth laws based on the Eshelby stress can reproduce basic properties of plant matter remains unclear to us. Thus, we reserve this problem for another occasion and instead adopt a simpler, phenomenological approach. Indeed, strain-based growth laws of the form L_G = 𝒢( A) have been adopted <cit.>, where typically, the cell walls expand once they surpass a certain strain threshold. These growth laws are relatively simple to parameterise and capture elegantly some observed phenomenological properties of plant matter, for example the commonly-accepted fact that growth is slower in rigid tissues or along the direction of load-bearing cellulose microfibrils (all other things being equal). 
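Before specialising the growth law, the hyperelastic part of the constitutive relation (<ref>) can be checked numerically. The sketch below (Python) uses a compressible neo-Hookean energy of the form adopted later in <ref>; the material constants and stretches are illustrative assumptions, the pore pressure is set to zero, and the stress is assembled as J_A^-1 (∂Ψ_s/∂A) A^⊤, which coincides with the expression in (<ref>) for the symmetric (diagonal) elastic deformations considered in the examples of this paper.

# Finite-difference check of the hyperelastic stress for a compressible neo-Hookean solid.
# Illustrative sketch; mu, lam and the chosen stretches are arbitrary test values.
import numpy as np

mu, lam = 1.0, 1.0

def J(A):
    return np.linalg.det(A)

def Psi_s(A):
    # Psi_s = mu/2 (tr(A^T A) - 3 - 2 ln J_A) + lam/2 (J_A - 1)^2
    return 0.5 * mu * (np.trace(A.T @ A) - 3.0 - 2.0 * np.log(J(A))) + 0.5 * lam * (J(A) - 1.0) ** 2

def dPsi_dA(A, h=1e-6):
    # central finite differences of Psi_s with respect to the components of A
    D = np.zeros((3, 3))
    for i in range(3):
        for j in range(3):
            Ap, Am = A.copy(), A.copy()
            Ap[i, j] += h
            Am[i, j] -= h
            D[i, j] = (Psi_s(Ap) - Psi_s(Am)) / (2 * h)
    return D

A = np.diag([1.10, 0.95, 1.02])                     # a diagonal (symmetric) elastic stretch
T_fd = dPsi_dA(A) @ A.T / J(A)                      # stress from the generic formula
T_cf = mu / J(A) * (A @ A.T - np.eye(3)) + lam * (J(A) - 1.0) * np.eye(3)   # closed form
print(np.allclose(T_fd, T_cf, atol=1e-6))           # True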
Following previous strain-based models, we postulate a strain-based growth law where we assume that during growth, the strain is maintained close to a strain threshold E^*. Therefore, we posit a linear growth law of the form 𝒢(A) = 𝒦 : ⟨ E - E^* ⟩, with 𝒦 a fourth-order tensor controlling the rate and anisotropy of the growth (with units of inverse time); E = (A^⊤ A - 1)/2 the symmetric Lagrangian elastic tensor; E^* a threshold strain tensor; and ⟨·⟩ an invariant extension of the ramp function to symmetric tensors, that is, for any symmetric tensor S with eigenvalues S_i and corresponding normalised eigenvectors s⃗_i, we define ⟨ S ⟩ = ∑_i max(0, S_i) s⃗_i ⊗ s⃗_i. For comparison, (<ref>) is a generalisation of the growth law introduced by <cit.>. Note that there is no guarantee anymore that the dissipation inequality (<ref>), g_s Φ_s ≥ B_s : 𝒢(A) / tr 𝒢(A), will always be satisfied with g_s = 0. (However, in practice, the r.h.s. appears to be indeed negative in many cases studied here.) We conclude that strain-based growth processes may not be universally passive. We note that, recently, some authors have proposed that material insertion and pectin expansion may also drive growth, independently of turgor <cit.>. This hypothesis, albeit contentious <cit.>, implies that growth in plant cell walls may be–at least partly–an active process. §.§.§ Transport Next, we discuss the second inequality (<ref>) for the material fluxes. Using J⃗_o^0 = J ϕ_o F^-1 w⃗_o and J⃗_f^0 = J ϕ_f F^-1 w⃗_f and postulating an active transport contribution of the form g̅_o = ϕ_o f̅⃗_o · w⃗_o, with f̅⃗_o an effective force sustaining active transport, (<ref>) becomes -ϕ_f w⃗_f · ∇g_f + ϕ_o w⃗_o · (f̅⃗_o - ∇g_o) ≥ 0.
To make progress, we introduce Onsager's reciprocal relations <cit.> -ϕ_f ∇g_f = L_fo (v⃗_f - v⃗_o) + L_fs (v⃗_f - v⃗) = - L_fo w⃗_o + (L_fs + L_fo 1) w⃗_f, ϕ_o f̅⃗_o - ϕ_o ∇g_o = L_of (v⃗_o - v⃗_f) + L_os (v⃗_o - v⃗) = - L_of w⃗_f + (L_os + L_of 1) w⃗_o, where L_of = L_fo is a fluid-osmolyte drag coefficient; and L_os and L_fs are positive-definite symmetric second-order tensors that characterise the anisotropic osmolyte-solid and fluid-solid drags, respectively, to reflect the anisotropic structure of the solid matrix. Indeed, substituting (<ref>) into (<ref>) gives w⃗_f · (L_fs w⃗_f) + w⃗_o · (L_os w⃗_o) + L_fo (w⃗_f - w⃗_o)^2 ≥ 0, which is always satisfied. These relations may be viewed as a generalisation of Fick's law to several transported species. Inverting (<ref>) and using the Gibbs-Duhem equality ϕ ∇p = ϕ_f ∇g_f + ϕ_o ∇g_o, we obtain the relative velocities w⃗_f = [L_os L_fs + L_of (L_os + L_fs)]^-1 [ϕ_o L_fo f̅⃗_o + L_os ϕ_o ∇g_o - ϕ (L_os + L_of 1) ∇p], w⃗_o = (L_os + L_of 1)^-1 [ϕ_o (f̅⃗_o - ∇g_o) + L_fo w⃗_f]. In particular, in the limit of high osmolyte-solid drag L_os ≫ L_fs, L_of, we derive w⃗_f ≈ (L_fs + L_of 1)^-1 (ϕ_o ∇g_o - ϕ ∇p), w⃗_o ≈ ϕ_o L_os^-1 (f̅⃗_o - ∇g_o). Biologically, this assumption expresses the idea that water and osmolytes are exchanged by cells via distinct pathways: in the absence of active transport, the osmolytes remain mostly confined within the cells (with some slow diffusive leakage) and are not convected by the fluid. Following <cit.>, the drag coefficients are given as L_fs = ϕ K^-1 and L_of = R_g θ c_o / D_o, with K the second-order symmetric permeability tensor that characterises the permeability of the mixture to the fluid (expressed in unit area per pressure per time), assumed not to depend on the elastic deformation A; physically, K reflects the overall effects of cell topology, wall and cell membrane permeabilities, as well as the permeability linked to other water routes (e.g. the apoplasmic route). Further, c_o is the molar concentration of osmolytes; D_o is the diffusivity of the osmolytes in the fluid; and R_g is the universal gas constant. The chemical potential g_o of the solutes can be related to their molar concentration c_o in the fluid through g_o = g_o^0 + (R_g θ / v_o) ln(c_o / c_o^0), with v_o the molar volume of the osmolytes; and g_o^0 and c_o^0 reference values, taken to be constant. Using the formulae j⃗_f = ϕ_f w⃗_f and c_o = ϕ_o / (v_o ϕ) with (<ref>), we obtain finally j⃗_f ≈ ϕ_f (K^-1 + (R_g θ c_o / (D_o ϕ)) 1)^-1 (∇(R_g θ c_o) - ∇p). For small concentration, more specifically for R_g θ c_o ≪ D_o ϕ K^-1, we recover a Darcy-type law j⃗_f ≈ -ϕ_f K ∇ψ, where ψ = p - π as in (<ref>); with the osmotic pressure π given by the van 't Hoff relation π = R_g θ c_o. Combining (<ref>), we can finally derive <cit.> ∇·v⃗ + ∇·j⃗_o - ∇·[(1 - ϕ_o - Φ_s/J_A) K ∇ψ] = ξ_s + ξ_o + ξ_f. §.§.§ Elastic constitutive relation Using Terzaghi's effective stress principle <cit.>, the effective Cauchy stress T for the mixture can be expressed as the sum of the partial stresses T_s and T_f = -p 1 for the solid and the fluid respectively (with 1 the identity tensor): T = T_s - p 1. We then can write the balance of linear and angular momenta (<ref>) as ∇·T_s = ∇p, T_s = T_s^⊤. Macroscopically, we assume that the solid is hyperelastic with respect to the unstressed virtual state, with Gibbs free energy function W = W(A) expressing the macroscopic strain energy per unit volume (of the intermediate state). The Cauchy stress is obtained from the Clausius-Duhem inequality and is expressed as T_s = J_A^-1 A ∂W_s/∂A. §.§.§ An alternative, stress-based growth law Lastly, we outline an alternative formulation of the growth law, in which the evolution of the growth tensor G is driven by stress rather than elastic strain.
In the biophysics community, plant cell growth has been widely described as a dissipative process akin to plastic yielding <cit.>, as initially described by Lockhart <cit.>. In this line, several authors have proposed multidimensional extensions to Lockhart's model, based on plasticity theory <cit.>, or in the form of phenomenological stress- or strain-based tensorial growth laws <cit.>. Following this idea, we make the explicit postulate that growth is a thermodynamically passive process. That is, we postulate a growth law that satisfies the Clausius-Duhem inequality in the absence of external entropy sinks: M : L_G ≥ 0 ., A natural choice for the growth law is then to take L_G as a function of M. Drawing on viscoplasticity theory, we postulate that the yield of the material can be described through a yield function Ψ, such that the elastic domain (assumed to be convex), that is the region in M-space where the material behaves purely elastically, is given by Ψ( M) ≤ 0. In other words, growth occurs if and only if Ψ( M) > 0. Then, in the anelastic domain, the yield rate is derived from the so-called principle of maximum dissipation via the normality criterion L_G = 1/ηΨ M for Ψ( M)>0, with η> 0 a growth constant akin to a viscosity (expressed with units of pressure time). For simplicity, we here restrict our attention to scenarios where A and T_s commute, then M is symmetric and we have M = J_A^-1 T_s. This is true for isotropic materials, but also for problems that have appropriate symmetries. Since M is symmetric, we can write its spectral decomposition in terms of its eigenvalues m_i (i=1,2,3) and an orthonormal family of eigenvectors m⃗_i (in corresponding order): M = ∑_i=1^3 m_i m⃗_i ⊗m⃗_i , with ⊗ denoting the tensor product. Similar to <cit.>, we assume that line elements grow along direction m⃗_i only when m_i exceeds a threshold value m^*≥0. Therefore, we define a threshold tensor M^* m^* 1, along with the excess stress tensor Δ M M - M^*, where · extends the ramp function to symmetric tensors (<ref>). A minimal, convex yield function capturing the aforementioned growth phenomenology can be then given in terms of the second main invariant J_2 = m_1 - m^*^2+m_2 - m^*^2+m_3 - m^*^2 of Δ M: Ψ ( M)= J_2/2. Using (<ref>), we obtain L_G =1/ 2ηJ_2 M = 1/η∑_i=1^3 m_i - m^*m⃗_i ⊗m⃗_i . Note that the terms growth and plasticity have been used often interchangeably by the plant biophysics community. However, despite formal analogies between the two concepts, plasticity and growth are fundamentally distinct phenomena <cit.> . While plasticity reflects microscopic reorganisation of materials (metals typically), plant tissues are virtually unable to rearrange at the cellular level, but can gain mass through wall secretion and cell divisions. For example, in (<ref>), L_G is positive semi-definite, expressing the fact that there growth induces no contraction of the material in any direction. This behaviours differs substantially from plastic materials, which typically undergo isochoric anelastic deformations with L_G=0. In our context, a useful simplification comes from the fact that, in plants, the fluid content actually accounts for most of the volume of the mixture, thus, water mass balance is barely affected by the small volume of the osmolyte and the cell walls. Therefore, we next take ξ_s ≪ξ_f, j⃗_o ≪j⃗_f, ϕ_o, ϕ_s≪ϕ and T_s ≈ T + p 1. 
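Before assembling the closed system, we note that both growth laws above hinge on the tensor ramp ⟨·⟩. A minimal implementation (Python) is sketched below; the isotropic choice 𝒦 = 𝕀/τ and all numerical values are assumptions made purely for illustration.

# Spectral ramp <S> = sum_i max(0, S_i) s_i (x) s_i, and the two growth-rate tensors.
import numpy as np

def tensor_ramp(S):
    """Keep only the tensile (positive) eigen-directions of a symmetric tensor."""
    w, V = np.linalg.eigh(S)
    return (V * np.maximum(w, 0.0)) @ V.T

def strain_based_LG(A, E_star, tau=1.0):
    E = 0.5 * (A.T @ A - np.eye(3))                  # Lagrangian elastic strain
    return tensor_ramp(E - E_star) / tau             # L_G = K : <E - E*>, with K = I/tau (assumption)

def stress_based_LG(M, m_star, eta=1.0):
    return tensor_ramp(M - m_star * np.eye(3)) / eta # L_G = (1/eta) sum_i <m_i - m*> m_i (x) m_i

A = np.diag([1.08, 1.01, 0.99])                      # only the first stretch exceeds the strain threshold
print(strain_based_LG(A, E_star=0.05 * np.eye(3)))
M = np.diag([0.5, 0.2, -0.1])                        # only the first eigenstress exceeds the yield threshold
print(stress_based_LG(M, m_star=0.3))

In both cases only the eigen-directions exceeding the threshold contribute, so the resulting L_G is positive semi-definite and growth never contracts the material, consistently with the discussion above.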
§.§ A closed system of equations Combining (<ref>), we obtain the following system of equations: v⃗- K ψ = ξ_f , ϕ_ot + ϕ_ov⃗ + j⃗_o = ξ_o, T_s + b⃗= p, T_s = T_s ^⊤ , F = A G , L_G = 𝒦: E - E^*, T_s = J_A^-1 A Ψ_s A, ψ = p - R_gθϕ_o / v_o. In total, we have a closed system of thirty-three partial-differential equations for thirty-three variables: the nine components of A, the nine components of G, the three components of χ, the nine components of T⃗_s, and the three scalar variables p, ψ and ϕ_o. §.§ Boundary conditions This system must be equipped with appropriate boundary conditions. Defining the outer normal n⃗ to the boundary ∂ℬ, typical boundary conditions encountered in our context describing the flux of water through the boundary are Robin boundary conditions of the form j⃗_f ·n⃗ = - K ψ·n⃗= k (ψ^* - ψ) on ∂ℬ, expressing a flux across the boundary due to a difference of water potential with the outside ψ^*, where k is the interfacial hydraulic conductivity. No-flux boundary conditions K ψ·n⃗ = 0 are a particular case of (<ref>). Note that when k→∞, the condition (<ref>) simplifies to a Dirichlet constraint ψ = ψ^* on ∂ℬ. Similarly, the mass balance equation for the osmolytes may be equipped with Dirichlet (fixing c_o at the boundary) or flux boundary condition. The usual boundary condition for the stress (<ref>) is Tn⃗ = t⃗_0 on ∂ℬ , with t⃗_0 the applied traction density. §.§ Apparent elasticity of a growing tissue Given a solution to the system, an adscititious problem is to characterise the effective elasticity of the turgid tissue subject to residual stresses. Indeed, at timescales shorter than that of growth and water transport, water is effectively trapped in the tissue and the mixture can then be seen as a macroscopically incompressible hyperlastic material. Given a field of elastic pre-deformation A and pressure p, the effective strain energy function of the mixture with respect to an incremental, superimposed deformation  is Ŵ( Â, p̂) = J_A^-1 W (  A, p ) - p̂ ( - 1 ) , where p̂ is an undetermined Lagrange multiplier that accounts for the incompressibility constraint  = 1. The associated Cauchy stress is then T̂ = ÂŴ = T ( A,p) - p̂ 1. This expression is useful to describe the overall elastic response of the system to external forces, e.g. in the context of compression experiments performed on entire tissues <cit.>, or to explore mechanical stability under growth-induced differential stresses or external loads <cit.>. § LONGITUDINAL GROWTH: HYDRAULIC COMPETITION AND GROWTH-INDUCED WATER GRADIENTS §.§ General problem To illustrate the behaviour of the system, we first examine the simple scenario of a growing, straight cylindrical rod so we reduce the system (<ref>) to one dimension. We define s, S and S_0, the arc lengths in the current, intermediate and initial configurations respectively. The associated total, growth and elastic stretches are λsS_0, γSS_0 and αsS, with λ=αγ (<ref>). From (<ref>), we derive vs = α̇/α + γ̇/γ, with v the longitudinal Eulerian velocity (positive towards increasing s). Assuming that the rod is unloaded, the balance of linear momentum (<ref>) for the axial partial stress t_s yields t_s=p. System (<ref>) then reduces to sv - K ψs = ξ_f , ϕ_ot + sϕ_o v + j_o = ξ_o, vs = 1/2 τα^2 - α^*^2 + αt + v/ααs , p = Ψ_sα, ψ = p - R_gθϕ_o /v_o, where τ is the characteristic time of wall synthesis during growth (i.e. 𝒦 = 1/τ); K = K is the permeability coefficient; and j⃗_0=j_0 is the osmolyte flux. 
For simplicity, we focus in this section on infinitesimal elastic deformations and define ϵ = α - 1 the infinitesimal strain, with ϵ≪ 1, and the strain threshold ϵ^* = α^* - 1≥ 0. Thus, Hooke's law provides p = E ϵ, with E the effective Young's modulus of the solid. Here we assume that the osmotic pressure π is maintained constant w.r.t s and t, so we elide (<ref>) and substitute ψs=ps into (<ref>). Similarly, we take K, ϵ^*, and τ constant and homogeneous. Indeed, the focus here is on the role of a spatially heterogeneous stiffness, which has been linked experimentally <cit.> and theoretically <cit.> to organ patterning in development. Therefore, we allow the rigidity to vary spatially as E(s,t) = E_0 f(s) with f a dimensionless function of order unit, and E_0 a characteristic Young's modulus of the rod. Finally, taking τ, E_0 and √(KE_0τ) as reference time, pressure and length, respectively, we can nondimensionalise the system through the substitutions ṽ =v √(τ/KE_0), s̃ = s/√(KE_0τ), t̃ = t/τ, ξ̃_f = ξ_f τ, p̃ = p/E_0. In particular, the length ℒ_h √(KE_0τ) defines the characteristic hydromechanical length of the system. Altogether, we obtain to O(ϵ): ṽs̃ = p̃s̃ + ξ̃_f, ṽs̃ = ϵ- ϵ^* + ϵt̃ +ṽϵs̃ , with p̃ = f ϵ. To simplify notations, we drop the tildes and use nondimensionalised variables. For comparison, this system ressembles the static model of <cit.>, and extends the Lockhart-Cosgrove-Ortega model (<ref>) to a one-dimensional continuum. Note also that the r.h.s of (<ref>) is always negative or zero here, thus, by the argument of <ref>, our strain-based growth law is compatible with a passive growth process. In the next two sections, we study the effect of heterogeneous elastic moduli on the growth behaviour of the rod. §.§ Material heterogeneity and hydraulic competition Here we are interested in the boundary effects arising between two regions with different mechanical properties, specifically the effect of different stiffnesses. Therefore, we consider a rod with initial arclength S_0∈-L_0/2,L_0/2, divided in the middle into two part of different rigidity, that is, we posit the rigidity field given by f (s(S_0)) = 1 - ηθ(S_0), where 0 ≤η < 1 denotes the stiffness step at the interface s=0; and with θ the Heaviside step function (i.e. θ(x) = 0 if x<0 and θ(x) = 1 if x≥ 0). The region S_0>0 is softened, with stiffness 1-η, with respect to the base stiffness equal to unity. Assuming s(0)=v(0)=0, without loss of generality (i.e. considering an observer located at the interface), we have f(s)= 1 - ηθ(s). Here we assume no-flux boundary conditions at both ends of the domain, however we allow water entry through a bulk source ξ_f = k (p^* - p), with k the effective permeability with the outside, and p^* a constant base effective pressure, encompassing the excess osmolarity relative to the outside, and the outer hydrostatic pressure. To gain insight into the shape of the solutions, we focus on the steady regime, i.e. on self-similar stationary solutions on an infinite line. On setting ϵt=0 in (<ref>), the problem reduces to: v' = p” + k(p^* - p), v' = ϵ- ϵ^* + v ϵ', with the apostrophe denoting derivative with respect to s, and where p=fϵ. To make progress, we consider a small perturbative softening η≪ 1. Therefore we expand all variables as power series of η, i.e. p = p_0 + η p_1 + …, ϵ = ϵ_0 + ηϵ_1 + … and v = v_0 + η v_1 + …, and then treat each order in (<ref>) separately. To ease calculations, it is also convenient here to ignore the threshold effect in (<ref>) and assume p^*≥ϵ≥ϵ^*. 
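The steady system (<ref>) can also be solved numerically, which provides a useful check on the asymptotic solution constructed below. A possible sketch is given here (Python, using scipy's collocation solver); the smoothed stiffness step of width w, the finite domain, the pinning v(-L)=0 and all parameter values are assumptions of the sketch only.

# Direct numerical solution of the steady rod problem with a smoothed stiffness step (illustrative).
import numpy as np
from scipy.integrate import solve_bvp

k, p_star, eps_star, eta, L, w = 0.5, 1.0, 0.2, 0.1, 20.0, 0.05

def f(s):                    # smoothed stiffness profile, f = 1 - eta*theta(s)
    return 1.0 - 0.5 * eta * (1.0 + np.tanh(s / w))

def fp(s):                   # its derivative
    return -0.5 * eta / (w * np.cosh(s / w) ** 2)

def rhs(s, y):
    p, q, v = y                                         # y = [p, p', v]
    eps = p / f(s)                                      # strain, from p = f*eps
    deps = (q * f(s) - p * fp(s)) / f(s) ** 2           # eps' by the quotient rule
    vp = np.maximum(eps - eps_star, 0.0) + v * deps     # v' = <eps - eps*> + v eps'
    return np.vstack([q, vp - k * (p_star - p), vp])    # p'' = v' - k(p* - p)

def bc(ya, yb):
    return np.array([ya[1], yb[1], ya[2]])              # no flux p'(+-L)=0; frame choice v(-L)=0

s = np.linspace(-L, L, 2001)
p0 = (eps_star + k * p_star) / (1.0 + k)                # Lockhart-like base state
a0 = (p_star - eps_star) / (1.0 + 1.0 / k)              # base elongation rate
y0 = np.vstack([np.full_like(s, p0), np.zeros_like(s), a0 * (s + L)])
sol = solve_bvp(rhs, bc, s, y0, max_nodes=100000)
print(sol.status, sol.y[0, 0], sol.y[0, -1])            # pressure levels in the stiff / soft regions

For small η, the computed pressure profile should approach the asymptotic expression derived next, with a lower pressure plateau in the softened region.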
The base solution at O (1) has uniform pressure and velocity gradient and is given by p_0= ϵ_0 = ϵ^* + kp^*/1+k, v_0 = a s, with a = p^*-ϵ^* /1+k^-1 . Unsurprisingly, p_0 can be identified to the Lockhart pressure (<ref>). At Oη, we have v_1' = p_1” - k p_1, v_1'= ϵ_1 + v_0 ϵ_1' . Eliminating v_1' and using p_1 = ϵ_1 - ϵ_0 θ(s) and p_0=ϵ_0, we obtain a single equation for p_1, p_1” - as p_1' -1+k p_1 = p_0θ(s), defined on both regions s<0 and s>0. A general solution to this equation can be expressed in terms of the Kummer confluent hypergeometric function _1F_1, the gamma function Γ and the Hermite polynomials (of the first kind) H_λ. By imposing the condition that pressure must be bounded and continuously differentiable at s=0, we can determine the four integration constants and obtain the compound asymptotic solution p (s) = p_0 + η p_0 /k+1π( π c)/Γ( c/2 ) Γ(1-c) H_-c(s√(a/2))- θ(s) - θ(-s) h_-cs√(a/2)+ O (η ^2), with c k+1/a; and h_λ = _1F_1(- λ/2; 1/2;x^2) the Hermite functions (π denotes here the ratio of a circle's circumference to its diameter). The expansion rate v' is obtained directly by substituting this expression into (<ref>). Example solutions are shown in <ref>. As can be seen, the softening of the right-hand-side region results in a pressure drop in that region, reflecting the reduced mechanical resistance of the cell walls to water entry. As pressure decreases smoothly at the junction between the soft to the stiff region, the elongation rate v' jumps discontinuously, reaching a global minimum at s=0^- and a maximum at s=0^+. This jump directly results from the strain-based growth law, and the strain discontinuity at s=0. Due to the pressure gradient, water seeps towards increasing s, i.e. from the stiff region to the soft region. The characteristic length of this seepage, L_c -p'(0)/2Δ p ≤ 1, increases when external water supply k decreases, as illustrated by the inset in <ref>(a). This result shows the emergence of hydraulic competition between the two regions. This competition concerns a larger portion of the tissue when external water supply becomes small. The presence of an external source of water k also tends to reduce the difference in pressure between the two regions (thus the seepage), as can be seen from the pressure drop Δ p = η p_0 / 1+k between the two asymptotes s= -∞ and s=+∞. This gap is maximal when k≪ 1, where water becomes scarce, with p(s) = ϵ^* - η/2θ (-s)^s + θ (s) (2-^-s)+ O (η ^2). In this limit example however, the volume of water in the rod is conserved, thus as the right-hand side region grows along a region of effective size unity, the other side, which is actually under the growth threshold, has to shrink (thus this limit is actually nonphysical). This issue is addressed later in <ref>. For comparison, <ref> also shows the case where internal fluxes along the rod are suppressed (K=0), illustrated by the dashed lines. In this case, the pressure and the velocity gradient are piece-wise continuous, with p(s)= p_0 - η p_0 θ(s)/k+1+ O (η ^2), v'(s) = a + η p_0θ(s)/1+k^-1 + O (η ^2), corresponding to the asymptotes for the general case. In the general case where the two regions exchange water, this jump in expansion rate v' across the interface is amplified by a factor 1+k^-1 with respect to the situation with zero flux. In particular, in situations where water supply is impeded (i.e. k≪ 1) and the overall growth becomes slower, this amplification factor diverges as the effect of hydraulic competition becomes more visible. 
In other words, although one might initially expect that water movements along the rod would serve to smooth out heterogeneities, we show that, near the interface (| s|≲ L_c), the combined effects of growth, hydraulics and mechanics actually accentuates the effect of heterogeneous mechanical properties on the growth dynamics. Interestingly, such effects have been proposed to play a role in the initiation of organs in the shoot apical meristem <cit.> (see <ref>). §.§ Water gradient in apically growing organs Plant shoots and roots grow typically along an elongating apical region <cit.>, while the rest of the tissue stiffens and stops growing. Kinematically, this type of growth is akin (but not physically equivalent) to tip growth. Here, we explore the existence of tip-like, self-similar solutions to our system. Therefore, we focus on the steady regime (<ref>), and define s, the arclength measured from the apex of the plant towards the base. In this scenario, v is then the velocity of the domain measured in the co-moving frame attached to the apex. To illustrate the role of chemical growth regulation in our framework, we here assume that growth is controlled by a growth hormone (e.g. auxin) with concentration ρ. We posit that the hormone is actively convected with velocity v_a from the apex (s=0) towards the base (s=∞), diffuses with diffusion coefficient D and is reabsorbed with rate constant β. In the steady regime, the concentration profile obeys Dρ” - ( v+v_a)ρ' - (v'+β) ρ = 0 . In the convection-dominated regime D ρ ', vρ≪ v_aρ, the hormone concentration is given by ρ (s) = ρ_0 ^-s/σ, where ρ_0 ρ(0) is the concentration at the apex; and where σβ/v_a sets the length scale for the hormonal regulation <cit.>. In plants, transported hormones such as auxin are involved in the control of cell wall elastic properties <cit.>. Therefore, we assume that the effect of the hormone is to reduce the elastic modulus of the tissue. Assuming a small, linear effect of the hormone on the elastic modulus, we write f≈1 - ηρ/ρ_0, with η≪ 1. Note that an alternative mechanism producing a gradient in rigidity could be a gradual stiffening of the tissue as the cells age and move away from the apex <cit.>. Firstly, we assume that no growth occurs in the absence of apical softening (i.e. if η = 0), thus we choose ϵ^*≥ p^*. Indeed, in <ref>, the choice ϵ^*<p^* resulted in exponential growth, precluding then the possibility of tip-like growth regimes. As previously, we solve (<ref>) asymptotically to first order in η. The base solution is p_0=ϵ_0=p^* and v_0=0. At O (η ), we have p_1” - (1+k) p_1 = p_0^-s/σ. Here k can be interpreted biologically as a rate constant characterising the effective permeability between the growing tissue and the organ's vascular bundle. Note that (<ref>) is valid only as long as the strain is above the growth threshold, i.e. ϵ>ϵ^*. Away from the tip, under this growth threshold, the system is in the purely elastic regime and obeys p_1” - k p_1 = 0. The general solutions for (<ref>) can be obtained easily. The difficulty is then to determine the unknown arclength Σ at which the threshold is attained. Enforcing the condition that pressure p is bounded and continuously differentiable, and using ϵ (Σ^-)=ϵ (Σ^+) = ϵ^* along with the no-flux condition at the tip p'(0)=0, we can express all the integration constants as functions of Σ. 
The latter is ultimately identified as the root of some (slightly uncomely) transcendental equation which can be solved numerically, given values of k, η, σ, ϵ^*, and p^*. <ref> shows example solutions. As can be seen in <ref>(a), the softening of the distal region results in a drop in pressure there. In the non-growing region (i.e. beyond the intersection point between the solid line and the dashed line), the pressure converges to its base value p^* as ^-s√(k), i.e., perhaps surprisingly, there exists a water potential gradient of lengthscale k^-1/2 resulting from the softening of the shoot over a length σ. We can compute the tip velocity with respect to the base as v_∞ = v(Σ), which is, as expected, a growing function of k; see the inset in <ref>(b). In the limit case k≫ 1, with p=p^*, the apex grows along a region of maximal size Σ = σlogηϵ^* / ϵ^*- p^*. This length is positive only if η > 1 - p^*/ϵ^*, i.e. when the equilibrium pressure p^* is not too far from the growth threshold ϵ^*, given a certain level of softening η, otherwise the pressure is too low to produce any growth at all. The tip velocity is then maximal, given by v_∞≈ p^* σp^*/ϵ^*-1+η+σ (p^*-ϵ^*) log(ηϵ^*/ϵ^*-p^*). Conversely, when k→ 0, the growth zone vanishes and v_∞→ 0. In this case, the base of the plant located at s→∞ is the only possible source of water but cannot sustain permanently the gradient in water potential necessary for growth. Both limit cases are plotted with solid black lines in <ref>. §.§ Parameter estimates The biological relevance of hydraulic gradients is predicated upon the ratio of hydromechanical length ℒ_h = √(KEτ) to growth region size. To estimate this ratio, we consider a linear chain of identical Lockhart cells of average length ℓ (maintained constant through cell-division), hydraulically-insulated from their environment, but exchanging water with their two adjacent neighbours with membrane conductivity κ. From dimensional considerations, we expect the bulk conductivity for this chain of cells to be K ∼ℓκ. We take κ = 10^-8–10^-7.-1.-1 <cit.>. Taking the turgor pressure p to be of the order of 1 <cit.> and the strain ϵ =0.05, the bulk elastic modulus is E=p/ϵ=20 . Taking ℓ=10 and estimating τ = 10^1–10^3 , we obtain ℒ_h ≈ 1–10 ℓ; namely, for this chain of cells, the length of interest for hydromechanical control is on the order of 1 to 10 cells. In practice, these values are very hard to measure with precision, and may vary a lot between different scenarios. § GROWTH OF A CYLINDER: HYDRAULICS AND RESIDUAL STRESS IN STEM DEVELOPMENT §.§ Overview We now move on to a fully three-dimensional, nonlinearly elastic scenario to study the growth of a cylindrical stem. Here, the focus is on the interplay between heterogeneous material properties, water fluxes, and growth in multiple dimensions, illustrating how hydraulic effects and differential material properties can constrain the growth and internal stresses within a simple three-dimensional structure. §.§ Governing equations We consider the growth of a cylinder of initial radius A and length L, and current radius a and length ℓ. The initial domain is parameterised by the system of cylindrical coordinates (R,Θ,Z), with R the radial coordinate, Θ the azimuthal angle, and Z the axial coordinate (<ref>). In an axisymmetric deformation, a point R,Θ,Z in the reference configuration is moved to the location r,θ,z in the current configuration. 
Thus, the deformation χ is given explicitly by r= r(R,t), θ =Θ, and z = z(Z,t), with r(0,t) = 0, z(0,t) = 0 at R=0. In virtue of the symmetry, we can identify the two orthonormal bases associated with both systems of cylindrical coordinates: E⃗_R=e⃗_r, E⃗_Θ =e⃗_θ and E⃗_Z = e⃗_z. Further assuming that the gradient of deformation is invariant by Z, the problem is effectively one-dimensional and depends only on r and t. The deformation gradient is given by F = r' e⃗_r⊗E⃗_R + r/R e⃗_θ⊗E⃗_Θ + ℓ/L e⃗_z⊗E⃗_Z, where the apostrophe denotes differentiation w.r.t. R; and ⊗ is the tensor product. The Eulerian and Lagrangian velocities are given respectively by v⃗ = v e⃗_r + z ζ̇ e⃗_z and V⃗ = V e⃗_r +(Z ζ̇ℓ/L) e⃗_z, with ζ̇ℓ̇/ℓ the relative rate of elongation. The growth and elastic tensors can be written in the cylindrical basis as G = γ_r,γ_θ,γ_z and A = α_r,α_θ,α_z. By (<ref>), we have r' = α_rγ_r, r/R=α_θγ_θ and ℓ/L = α_zγ_z. The gradient of velocity can be related to the growth and elastic stretch rates through (<ref>) vr =γ̇_r/γ_r + α̇_r/α_r , v/r = γ̇_θ/γ_θ + α̇_θ/α_θ , ζ̇= γ̇_z/γ_z + α̇_z/α_z. By symmetry, the stress is also diagonal in the basis {e⃗_r, e⃗_θ,e⃗_z }, with the total and partial Cauchy stresses given by T=T_r,T_θ,T_z and T_s=t_r,t_θ,t_z, respectively. The only non-vanishing component in the balance of momentum (<ref>) is then t_rr + t_r-t_θ/r = pr. Assuming that the end caps are free and not subject to any load, we obtain the boundary condition for the stresses at the end caps <cit.>, ∫_0^a (t_z - p) r r= 0. Further, we assume that the outer pressure at r=a is zero, so (<ref>) yields the condition p(a) = t_r(a) . The balance of mass (<ref>) reads 1/rrr v - K r ψr + ζ̇= ξ_f , where K = K 1. The source ξ_f represents the water intake from a bulk vasculature modelled as a single reservoir with water potential ψ^*, i.e. ξ_f = k(ψ^* - ψ). Assuming that the system is far from hydraulic equilibrium, i.e. ψ^* ≫ψ, we have ξ_f ≈ kψ^*, which is taken to be a function of r only. Thus (<ref>) can be integrated directly as v-K ψr = 𝒳_f - ζ̇r/2 , with 𝒳_f (r) 1/r∫_0^r ξ_f(r') r' r' , where we have used the no-flux regularity condition ψr = 0, and the geometric conditions r=0 and v=0 at R=0. For simplicity, here we take ξ_f constant so that 𝒳_f = ξ_f r /2. Similarly the osmotic pressure π is here assumed to not vary across the domain, so we elide the balance of osmolyte mass equation (<ref>) and we take simply ψr = pr. The cylinder exchanges water through its boundary at R=A, so we use the boundary condition (<ref>) ψ(a) = ψ_a ⇔ p(a) = p_a, with p_a π(a) + ψ_a. In the example treated next, we assume ξ_f=0 so that water enters the domain only through the boundary (however, for the sake of generality, we keep ξ_f in the following derivations). In plants, the organisation of the vascular bundle may vary considerably between species. The situation presented here mimics a type of architecture where the vascular bundle–xylem and phloem–is located near the epidermis, as illustrated in <ref>(b). For more complex vascular systems, we may assume a non-zero source ξ_f, or an additional water exchange point at R=0 (as in roots, where the vascular bundle is located generally near the centre ). More in general, it is easy to extend the problem to the case of a vascular bundle placed at an arbitrary position r_v ∈0,a by treating the two problems r< r_v and r> r_v separately <cit.>. 
The growth law (<ref>) corresponds to γ̇_i/γ_i = 1/2τ_i(R)α_i^2-α^*^2, with i∈{r,θ,z}; and τ_r, τ_θ and τ_z denote the characteristic times of material synthesis in the three separate directions of the cylinder. In growing stems, it is well known that heterogeneous material properties result in mechanical tension within the epidermis, a phenomenon called tissue tension, manifesting the existence of residual stresses. These stresses emerge from differential growth of the tissue, where the core grows relatively faster than the epidermis <cit.>. A classic experiment consists in peeling a stalk of rhubarb, which results in the detached cortex bending outward and shortening, along with rapid elongation of the exposed core when incubated in water <cit.>, revealing tension in the cortex and compression below. This difference in growth rate is likely to be due to higher growth extensibility of the cell walls of the inner tissues <cit.>. Therefore, we assume that the characteristic time τ_z is larger near the epidermis than at the origin, so that, for equal strain levels, the epidermis undergoes slower growth. We posit τ_z(R) = τ_0 + ΔτR/A^2, where τ_0 is the value at the origin, and Δτ≥ 0 defines the increment in τ_z between the core and the epidermis. Finally, the elastic constitutive equations read t_i = α_i/J_AΨ_sα_i . For simplicity, we here use an isotropic, compressible neo-Hookean strain energy function Ψ_s(α_r, α_θ, α_z) =μ/2(α_r^2+α_θ ^2+α_z^2-3-2 log (α_rα_θα_z)) +λ/2 (α_θα_r α_z-1)^2, where the material coefficients μ and λ identify to the Lamé coefficients in the linear regime. Note that in the limit of incompressibility (λ≫ 1), no growth can occur since the solid cannot expand to absorb the fluid. Henceforth we set λ = μ (corresponding to a Poisson ratio of 1/4 in the limit of linear elasticity). For simplicity, we assume that the elastic moduli are uniform <cit.>. In fact, note that a heterogeneity in elastic rigidity would not be sufficient to capture differential growth on its own. Indeed, in a scenario where only μ would be spatially heterogeneous, the axial stretch α_z (thus the growth rate) would still be uniform in a cylindrical deformation. As in <ref>, we can nondimensionalise the system using √(Kμτ_0), τ_0 and μ as reference length, time and pressure, respectively. On eliminating γ̇_θ/γ_θ and γ̇_z/γ_z using (<ref>), re-expressing the problem in the reference configuration, and rearranging the terms, we obtain a closed system of seven equations for the seven variables γ_r, α_r, α_θ, α_z, V, r, p, defined on the fixed domain ℬ_0: r' = α_rγ_r t_r ' - p'/r' = t_θ - t_r/r , p'/r' = 1/2(ζ̇- ξ_f) r + V , γ̇_r/γ_r = α_r^2-α^*^2, α̇_r/α_r + 1/2τ_r α_r^2-α^*^2 = V' /r' , α̇_θ/α_θ + 1/2τ_θα_θ^2-α^*^2= V/r , α̇_z/α_z + 1/2τ_z α_z^2-α^*^2 = ζ̇, where t_r and t_θ are given in terms of the elastic stretches via (<ref>). This system is equipped with the boundary conditions (<ref>) and the integral constraint (<ref>) that is enforced via the undetermined parameter ζ̇. Note that (<ref>) has a geometric removable singularity at R=0 due to the boundary constraint r(0,t)=0, which generates computational difficulties. This issue is easily alleviated by considering a perturbed boundary condition r(ϵ,t) ≈ϵα_r(ϵ,t) γ_r(ϵ,t) at R=ϵ, where ϵ≪ 1. §.§ Analysis of the solutions §.§.§ Steady regime Before solving the full dynamical problem, we first restrict our attention to steady growth regimes for which we enforce V=α̇_r=α̇_θ=α̇_z=0 and γ_r=γ_θ= 1 (no radial growth). 
In this scenario, (<ref>) can be integrated directly, revealing that a permanent parabolic pressure profile is maintained across the elongating stem, as predicted by <cit.>: p - p_a = 1/4(ζ̇- ξ_f) (r^2 - a^2). In particular, this pressure is non-negative if a^2 ≤4p_a/ζ̇- ξ_f; otherwise, the radius is too large to allow for even distribution of water given the elongation rate ζ̇, and a zone of negative pressure forms at the centre of the stem. Eq. (<ref>) may be viewed as a scaling constraint on growth, linking the kinematics (ζ̇) and geometry (a), for a given water supply (p_a, ξ_f). The rest of the system can be solved numerically (<ref>). <ref>(a–c) illustrate the steady solution and its dependency on the (nondimensionalised) stem radius A and the parameter Δτ. The density map in <ref>(a) shows the difference in axial stress T_z measured between the epidermis (R=A) and the origin (R=0), with red indicating states of epidermal tension where T_z(A)>T_z(0), and dashed line showing the level set T_z(A)=T_z(0). We show three cross-sections of the stem showing the stress distribution T_z in three cases: 1–uniform τ_z; 2–heterogeneous τ_z and thin stem; and 3–heterogeneous τ_z and thicker stem. <ref>(b, c) show respectively the elongation rate ζ̇ and mean pressure, in the A-Δτ space. <ref>(d) shows the profiles of pressure p and stresses T_z and t_z for each state labelled 1, 2 or 3 in <ref>(a). For a uniform τ_z, i.e. when Δτ=0, the axial stress is compressive in the epidermis and tensile in the core, with T_z(A)<0<T_z(0), see case 1 in <ref>(a, d). This is purely a hydraulic effect due to the higher pressure at the surface, p(A)=p_a. As can be seen, this heterogeneity in stress and pressure becomes more visible when the radius A increases. When a gradient in wall extensibility is introduced, i.e. for Δτ>0, different scenarios are possible. For a relatively slender stem (i.e. above the dashed line), a state of epidermal tissue tension is observed, where both the total stress T_z and the partial solid stress t_z are maximal at the epidermis, see case 2 in <ref>(a, d). In this example, the core is in compression (T_z(0)<0); however, the solid matrix is generally still subject to tensile stresses (t_z(R)>0 for all R), indicating that the cells remain turgid and that their walls are indeed under tension, despite the overall compressive stress. This observation challenges the conception that the cell walls should be compressed and possibly buckled due to overall tissue compression <cit.>. Overall, the distribution of stresses within a tissue is non trivial, in particular, the macroscopic stress–the one released upon cutting the tissue–is distinct from the stress experienced by the cell wall matrix. For thicker stems, below the dashed line in <ref>(a), epidermal tension can no longer be maintained as hydraulic effects override the prescribed heterogeneity in extensibility. Indeed, as can be seen in <ref>(c, d), the pressure becomes low near the origin, indicating a water deficit due to the increased distance to the source, as is especially visible in case 3 in <ref>(a, d). As a result, a lower elongation rate ζ̇ is observed; see <ref>(b). Here, even if the reduced epidermal extensibility would tend to promote epidermal tension, the epidermis is actually in compression, due to its better perfusion. A somewhat counterintuitive effect is observed where, unlike T_z, the axial stress t_z actually increases with R, i.e. 
the tension in the solid matrix is maximal in the epidermis notwithstanding the global compression. This effect can be interpreted in light of the heterogeneous pressure profile: while the epidermis has high turgor pressure, generating tension in the walls but overall compression within the tissue (since the epidermis is constrained by the core), the pressure near the origin is too low to generate much tension in the cell walls, and most of the solid stress there is provided by the epidermis. For even larger radii, i.e. below the solid line in <ref>(a), a central region appears where the pressure at the origin is negative, i.e. the inequality (<ref>) is violated. This extreme growth-induced effect results from the high deficit in water, and from axial tensile forces applied to the core by the epidermis, which effectively create a suction. While it is unclear whether such growth-induced negative pressure can exist, insofar as it results from the saturation assumption (<ref>), we suspect that the relative water deficit within the core, and the associated tensile stresses, could potentially participate in cavity opening during stem hollowing, described mathematically by <cit.>. Following the developments of <ref>, we also assess the mechanical resistance of the stem under axial loads (i.e. its axial linear elastic modulus). We denote by α̂ = (α̂_r, α̂_θ, α̂_z) the incremental elastic deformation tensor, where α̂_r = ∂r̂/∂r and α̂_θ = r̂/r, with r̂ the radial coordinate in the incrementally deformed configuration. We first remark that the incompressibility condition (α̂_z r̂/r) ∂r̂/∂r = 1 is separable and can be integrated directly as r̂ = r/√(α̂_z), from which we obtain α̂_r = α̂_θ = 1/√(α̂_z). The stresses are given by (<ref>) as T̂_i = t_i(α̂_r α_r, α̂_θ α_θ, α̂_z α_z) − p − p̂, with i∈{r,θ,z}. Taking the first variation of (<ref>) around the base solution α̂_i = 1, we derive δT̂_i = 𝒯_i δα̂_z − δp̂, with 𝒯_i ≔ ∂t_i/∂α_z − (1/2)(∂t_i/∂α_r + ∂t_i/∂α_θ). Then, on integrating the balance of momentum ∂(δT̂_r)/∂r = ((𝒯_θ − 𝒯_r)/r) δα̂_z, and using the identity δT̂_z − δT̂_r = (𝒯_z − 𝒯_r) δα̂_z, we finally obtain the total virtual reaction force of the whole stem, δF ≔ 2π∫_0^a δT̂_z r dr = ℳ_z δα̂_z, with ℳ_z the effective longitudinal linear elasticity modulus given by ℳ_z = 2π∫_0^a (𝒯_z − 𝒯_r − ∫_r^a ((𝒯_θ − 𝒯_r)/r') dr') r dr. As can be seen, ℳ_z depends on the pre-existing stresses and stretches within the stem. In particular, in the absence of pre-stress (α_r=α_θ=α_z=1, p=0), we recover the known value ℳ_z^0 = 3πμ B^2 for the linear response of an incompressible neo-Hookean tube under uniaxial load. <ref> shows the dependency of the relative stiffness ℳ_z / ℳ_z^0 on A and Δτ. As can be seen by comparing the level sets of <ref> with those of <ref>(a), the variation in axial stiffness of the stem is related to the presence of axial residual stresses, measured by T_z(a) - T_z(0). This observation supports the hypothesis that tissue tension confers higher rigidity to the stem <cit.>. However, unfortunately, ℳ_z does not inform us directly on the resistance of the stem to buckling, even for a thin stem, insofar as the residually-stressed cylinder is effectively heterogeneous and anisotropic. To that end, a full and likely tedious perturbation analysis including asymmetric modes is required <cit.>. §.§.§ Dynamic regime These different states of the system can be observed in the full dynamic problem, as illustrated in <ref>. Here, we simulate the three-dimensional growth of a stem with Δτ = 2 taken as constant.
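A useful reference point for the simulations that follow is the steady-regime constraint derived above: the pressure at the axis remains non-negative only while a^2 ≤ 4p_a/(ζ̇ − ξ_f). The minimal sketch below is our own illustration of this criterion; the parameter values are arbitrary nondimensional choices, not those used for the figures.

```python
# Illustrative check of the steady-regime radius constraint (nondimensional,
# arbitrary parameter values): p(r) = p_a + (zeta_dot - xi_f) * (r**2 - a**2) / 4.
import numpy as np

p_a = 0.2        # epidermal (surface) pressure  -- assumed value
zeta_dot = 1.0   # relative elongation rate      -- assumed value
xi_f = 0.0       # no distributed fluid source   -- assumed value

a_crit = np.sqrt(4.0 * p_a / (zeta_dot - xi_f))   # radius at which p(0) reaches zero

for a in (0.5 * a_crit, 1.5 * a_crit):
    p_axis = p_a - 0.25 * (zeta_dot - xi_f) * a**2   # pressure at the axis, r = 0
    print(f"a/a_crit = {a / a_crit:.1f}:  p(0) = {p_axis:+.3f}")
```

Beyond this critical radius the steady solution develops negative pressure at the axis, which corresponds to the region below the solid line in the figure.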
To account for the rapid elongation of the stem, we also assume that the extensibility is smaller in the r and θ directions <cit.>. As predicted earlier, epidermal tension is maintained until a critical radius is reached, at which point pressure becomes too low at the origin and epidermal compression appears. As the radius increases and the mean pressure decreases, the elongation rate ζ̇ also decreases. § GROWTH OF A SPHERE As a final example, we study the full model (<ref>) including the transport of osmolytes, that is, instead of assuming a constant osmotic pressure as in <ref>, we here assume that the osmolytes can diffuse within the growing domain, here a thick spherical shell. This example illustrates how physicochemical details of organ growth can be included, for instance in the context of fruit growth, which involves complex transport of sugar, gas exchanges and water fluxes. Although we do not aim here to model the sheer physiological complexity of fruit growth and maturation, our approach provides a paradigm to generalise detailed zero-dimensional multi-compartment approaches–e.g. <cit.>–to a continuum. We study the growth of a hollow sphere of inner and outer radii a and b in the current configuration (A and B respectively in the reference configuration). We introduce the system of spherical coordinates (r,θ,φ) in the current configuration and (R,Θ,Φ) in the reference configuration; see <ref>(a). We assume the problem to be spherically symmetric so that, as in <ref>, the problem only involves the coordinates r and t. For simplicity, we neglect active transport of the osmolytes (f̅_o = 0) and we write L_os = (c_o R_gθ / D_os) 1, with D_os the diffusivity of the osmolytes in the solid, so that (<ref>) can be written as a standard Fickian flux j⃗_o = − D_os v_o (∂c_o/∂r) e⃗_r. We assume that the outer hydrostatic pressure is zero on both faces of the shell. For the flux of material, we assume a Dirichlet condition (<ref>) at r=a. At the outer boundary r=b, we postulate a transpiration outflux associated with a water potential ψ_ev and conductivity k_b. Altogether, we have t_r(a,t) = p(a,t), π(a,t) = π_a, ψ(a,t) = ψ_a, and t_r(b,t) = p(b,t), ∂ψ/∂r(b,t) = k_b(ψ_ev − ψ), ∂π/∂r(b,t) = 0, with ψ_a the water potential across the inner boundary and π_a the osmotic pressure at r=a. We assume ξ_f = ξ_o = 0, so that the only source of water and osmolyte mass is the inner boundary. Following a procedure similar to that of <ref>, we derive the (nondimensionalised) governing equations for the growing sphere: ∂t_r/∂r + (2/r)(t_r − t_θ) = ∂p/∂r, v − ∂(p − π)/∂r = (b^2/r^2) ḃ − k_b(ψ_ev − p_b + π_b), vπ − D_os v_o ∂π/∂r = b^2 ḃ π_b / r^2, γ̇_r/γ_r = (1/2)(α_r^2 − (α^*)^2), α̇_r/α_r + (1/2)(α_r^2 − (α^*)^2) = ∂v/∂r, α̇_θ/α_θ + (1/2)(α_θ^2 − (α^*)^2) = v/r, t_r = (1/α_θ^2) ∂Ψ_s/∂α_r, t_θ = (1/(α_r α_θ)) ∂Ψ_s/∂α_θ, where we have used the approximated van 't Hoff relation π ≈ R_gθ ϕ_o / v_o, the two no-flux boundary conditions (<ref>) to integrate (<ref>), and the neo-Hookean strain energy (<ref>); and where π_b ≔ π(b) and p_b ≔ p(b) are the undetermined osmotic and hydrostatic pressures at the outer boundary. We have assumed that the osmolyte diffusion is fast and in the quasi-steady regime, so that we set ∂ϕ_o/∂t = 0 in (<ref>). All parameters are taken to be uniform across the domain and constant in time. <ref>(b) illustrates the evolution of pressure in a typical simulation. We see that during growth, the pressure drops in regions located far from the central source (similar to <ref>).
This simulation illustrates again the dynamic nature of pressure in the context of growth and osmotic regulations with multiple interfaces. This approach may be beneficial for integrating spatiotemporal details of physiological and mechanical regulations in fruit development, as well as nonlocal couplings, which cannot be captured using more basic zero-dimensional models. § DISCUSSION While many continuum models of plant tissue morphogenesis have relied on phenomenological kinematic specifications of the growth behaviour, the establishment of a mechanistic theory of morphogenesis requires modelling growth in relation to more fundamental physical and mechanical fields <cit.>. In plants, it is widely accepted that cellular growth results from mechanical deformations of the cell walls driven by cell osmolarity. However, the explicit connection between pressure and growth has not been systematically integrated into continuum models. To bridge this gap, we proposed a formulation of plant developmental processes that extends the paradigm of Lockhart, Cosgrove and Ortega to a hydromechanical continuum description of the growth phenomenon. The role of pressure has been considered in numerous multicellular discrete models which have described growth as a result of turgor-induced cell wall expansion. However, typically, these models have treated pressure as a prescribed biological parameter, thereby neglecting the fundamentally mechanical nature of pressure. Physically, this modelling simplification originates in the assumption that growth is primarily limited by the cell wall extensibility Φ, and thus that water exchanges across cell membranes are instantaneous (i.e. p≈π). While this assumption may hold true in a first approximation, such a simplification does not allow for a complete representation of the physical events occurring during growth. Recent reevaluation of water conductivity contributions in various systems has also revealed a nuanced role of hydraulic effects, which may be non-negligible <cit.>. These advancements motivate the development of new models based on a proper formulation of turgor. Indeed, the fundamental interplay between pressure, fluxes, osmolarity and cell mechanics is critical for a proper understanding of the notion of turgor, with profound experimental and conceptual implications. Thus, we assert that a robust physical description of growth should be grounded in clear balance relations. Our poroelastic theory captures the growth of a tissue as the result of simultaneous mechanically-induced cell wall expansion and water fluxes. Thus, pressure is treated as a dependent mechanical variable that mediates the growth deformations indirectly, through the balance of forces. This integrated perspective is directly in line with the original philosophy of Lockhart's approach. A key mass balance principle can be stated as follows: For growth to occur, water must flow to fill the expanding volume, irrespective of the cause of cell wall expansion. In other words, growing regions correspond to hydraulic sinks and fast-growing regions are associated with lower water potential, as exemplified in <ref>. Such a sink is characterised by the development of growth-induced gradients of water potential and a water flux directed towards the growing region, an idea which has surfaced in the literature, in particular in the work of Boyer and coworkers <cit.>.
This flux, captured here through a Darcy-type law (<ref>), introduces a fundamental growth-induced hydromechanical length ℒ_h = √(KEτ_0), which reflects the combined effects of growth (τ_0), tissue permeability (K), and elasticity (E). For growing avascular domains larger than this typical length, interesting nonlocal couplings may emerge. For example, in <ref>, we predicted that a fast-growing region will absorb water from its neighbourhood, thereby hindering growth in that neighbourhood. This idea is reminiscent of patterning mechanisms based on lateral inhibition <cit.>. Such a phenomenon has been predicted in a previous theoretical study <cit.> and was recently observed experimentally in Arabidopsis thaliana shoot apices, where peripheral cells adjacent to incipient organs exhibit a shrinkage consistent with a water deficit <cit.>. Here we characterise mathematically the magnitude and spatial extent of this inhibition (<ref>). In particular, we show that inhibition will be amplified and more spatially extended in weakly vascularised tissues (k≪ 1), where the inhibition zone is expected to have a width ∼ℒ_h. The role of hydromechanical effects is also illustrated in three dimensions in our model of a growing stem. Firstly, we showed that the classic epidermal tension hypothesis could be naturally captured by a gradient in cell extensibility, reproducing the phenomenology of previous continuum models <cit.>. Secondly, for thick stems where the distance to the vascular bundle increases, the distribution of residual stresses was greatly perturbed by hydraulic scaling effects when pressure at the core became very low. Naturally, in reality, the vascular architecture of plants may evolve as part of the developmental process to maintain adequate vascularisation. Therefore, we anticipate that our predictions may not apply universally, and the model must be adapted to capture more realistic scenarios. Overall, this work aims to exhibit guiding principles of hydromechanical control of plant morphogenesis and illustrate the complex, nonlocal and fundamentally dynamic behaviours that emerge from integrating fundamental growth mechanisms in space and time. Several questions remain open. The most pressing problem is to formulate a constitutive law describing growth, specifically the process by which the cell walls expand and solid mass is added to the system. Although a strain-based growth law could capture the basic phenomenology of plant growth, its mechanistic interpretation is not fully established; in particular, it remains unclear which specific mechanical quantity should be adopted as a driver of growth, and whether an appropriate growth law can be derived from rational mechanics considerations (<ref>). Furthermore, a realistic growth law should also be able to capture the multiscale link between local cell structures and anisotropies and the overall growth of the tissue at the continuum level. In this context, promising efforts to derive continuum representations from the cellular structures using multiscale analysis are emerging <cit.>, and we hope that our work will motivate further advances in this area. Another question concerns the link between the effective permeability tensor and the microscopic details of water routes within the tissue. In the context of direct cell-to-cell water transport <cit.>, an approach would consist of deriving the effective hydraulic conductivity of a periodically-repeating representative cell network using two-scale analysis, as described by <cit.>.
More generally, the biological details of water transport in different tissues are still an active subject of research; thus, the precise biophysical interpretation of the effective conductivity remains to be better characterised. For example, an interesting extension to our model would be to treat the apoplasmic route and the transmembrane exchanges between cell vacuoles separately <cit.>. Lastly, a natural extension to this work is to model the role of morphogens (e.g. hormones, such as auxin, or genes), using regular advection-diffusion equations, or more complex, nonlinear reaction-diffusion-advection systems <cit.>. In the spirit of our approach, these morphogens should regulate specific physical and mechanical properties of the system, such as the rigidity, the osmolarity, the growth threshold or the cell extensibility, thereby indirectly influencing growth. Overall, this work lays the foundations of a field theory of plant morphogenesis–a closed mathematical framework in which the growth phenomenon emerges as the product of multiple, coupled physical, chemical and mechanical fields acting more or less nonlocally. The construction of such theories is a formidable challenge in plant biomechanics and, in general, in the study of active living materials. In this broader context, the unassuming plant provides an interesting paradigm to build a general theory of living tissues. § ACKNOWLEDGEMENT I.C. acknowledges support from the Institut rhônalpin des systèmes complexes (IXXI), and the Agence Nationale pour la Recherche through the research project HydroField. The authors are grateful to Alain Goriely, Andrea Giudici, Christophe Godin, and Arezki Boudaoud for insightful discussions. § NUMERICS AND IMPLEMENTATION We integrate the system using a relaxation method (backward time, centred space); see <cit.>. All simulations were implemented in Wolfram Mathematica 14.0 (https://www.wolfram.com). Source code is available upon request. § TABLES OF MAIN QUANTITIES We recapitulate in <ref> the main quantities used in the paper. § REFERENCES [Ali et al., 2023]ali2023revisiting Ali, O., Cheddadi, I., Landrein, B., and Long, Y. (2023). Revisiting the relationship between turgor pressure and plant cell growth. New Phytologist, 238(1):62–69. [Ali et al., 2014]ali2014physical Ali, O., Mirabet, V., Godin, C., and Traas, J. (2014). Physical models of plant development. Annual Review of Cell and Developmental Biology, 30(1):59–78. [Ali et al., 2019]ali2019simulating Ali, O., Oliveri, H., Traas, J., and Godin, C. (2019). Simulating turgor-induced stress patterns in multilayered plant tissues. Bulletin of Mathematical Biology, 81(8):3362–3384. [Ali and Traas, 2016]ali_force-driven_2016 Ali, O. and Traas, J. (2016). Force-Driven Polymerization and Turgor-Induced Wall Expansion. Trends in Plant Science, 21(5):398–409. [Alim et al., 2012]alim2012regulatory Alim, K., Hamant, O., and Boudaoud, A. (2012). Regulatory role of cell division rules on tissue growth heterogeneity. Frontiers in Plant Science, 3:174. [Alonso-Serra et al., 2024]alonso2024water Alonso-Serra, J., Cheddadi, I., Kiss, A., Cerutti, G., Lang, M., Dieudonné, S., Lionnet, C., Godin, C., and Hamant, O. (2024). Water fluxes pattern growth and identity in shoot meristems. Nature Communications, 15(1):1–14. [Ambrosi et al., 2011]Ambrosi2011 Ambrosi, D., Ateshian, G. A., Arruda, E. M., Cowin, S. C., Dumais, J., Goriely, A., Holzapfel, G. A., Humphrey, J.
D., Kemkemer, R., Kuhl, E., Olberding, J. E., Taber, L. A., and Garikipati, K. R. (2011). Perspectives on biological growth and remodeling. Journal of the Mechanics and Physics of Solids, 59(4):863–883. [Ambrosi et al., 2019]ambrosi2019growth Ambrosi, D., Ben Amar, M., Cyron, C. J., DeSimone, A., Goriely, A., Humphrey, J. D., and Kuhl, E. (2019). Growth and remodelling of living tissues: perspectives, challenges and opportunities. Journal of the Royal Society Interface, 16(157):20190233. [Ambrosi and Guana, 2007]ambrosi2007stress Ambrosi, D. and Guana, F. (2007). Stress-modulated growth. Mathematics and mechanics of solids, 12(3):319–342. [Ambrosi et al., 2012]ambrosi2012interplay Ambrosi, D., Preziosi, L., and Vitale, G. (2012). The interplay between stress and growth in solid tumors. Mechanics Research Communications, 42:87–91. [Barbacci et al., 2013]barbacci2013another Barbacci, A., Lahaye, M., and Magnenet, V. (2013). Another brick in the cell wall: biosynthesis dependent growth model. PLoS One, 8(9):e74400. [Bassel et al., 2014]bassel2014mechanical Bassel, G. W., Stamm, P., Mosca, G., Barbier de Reuille, P., Gibbs, D. J., Winter, R., Janka, A., Holdsworth, M. J., and Smith, R. S. (2014). Mechanical constraints imposed by 3D cellular geometry and arrangement modulate growth patterns in the Arabidopsis embryo. Proceedings of the National Academy of Sciences, page 201404616. [Bedford and Drumheller, 1983]bedford1983theories Bedford, A. and Drumheller, D. S. (1983). Theories of immiscible and structured mixtures. International Journal of Engineering Science, 21(8):863–960. [Ben Amar and Goriely, 2005]BenAmar2005 Ben Amar, M. and Goriely, A. (2005). Growth and instability in elastic tissues. Journal of the Mechanics and Physics of Solids, 53:2284–2319. [Bessonov et al., 2013]bessonov2013deformable Bessonov, N., Mironova, V., and Volpert, V. (2013). Deformable cell model and its application to growth of plant meristem. Mathematical Modelling of Natural Phenomena, 8(4):62–79. [Bou Daher et al., 2018]daher2018anisotropic Bou Daher, F., Chen, Y., Bozorg, B., Clough, J., Jönsson, H., and Braybrook, S. A. (2018). Anisotropic growth is achieved through the additive mechanical effect of material anisotropy and elastic asymmetry. eLife, 7:e38161. [Boudaoud, 2010]boudaoud2010introduction Boudaoud, A. (2010). An introduction to the mechanics of morphogenesis for plant biologists. Trends in plant science, 15(6):353–360. [Boudaoud et al., 2023]boudaoud2023multiscale Boudaoud, A., Kiss, A., and Ptashnyk, M. (2023). Multiscale modeling and analysis of growth of plant tissues. SIAM Journal on Applied Mathematics, 83(6):2354–2389. [Boudon et al., 2015]boudon_computational_2015 Boudon, F., Chopard, J., Ali, O., Gilles, B., Hamant, O., Boudaoud, A., Traas, J., and Godin, C. (2015). A Computational Framework for 3D Mechanical Modeling of Plant Morphogenesis with Cellular Resolution. PLoS Computational Biology Computational Biology, 11(1):e1003950. [Boyer, 1988]boyer1988cell Boyer, J. S. (1988). Cell enlargement and growth-induced water potentials. Physiologia Plantarum, 73(2):311–316. [Boyer et al., 1985]boyer1985control Boyer, J. S., Cavalieri, A., and Schulze, E. D. (1985). Control of the rate of cell enlargement: excision, wall relaxation, and growth-induced water potentials. Planta, 163:527–543. [Bozorg et al., 2016]bozorg_continuous_2016 Bozorg, B., Krupinski, P., and Jönsson, H. (2016). A continuous growth model for plant tissue. Physical Biology, 13(6):065002. 
[Bussières, 1994]Bussieres1994 Bussières, P. (1994). Water Import Rate in Tomato Fruit: A Resistance Model. Annals of Botany, 73(1):75–82. [Chakraborty et al., 2021]CHAKRABORTY2021110736 Chakraborty, J., Luo, J., and Dyson, R. J. (2021). Lockhart with a twist: Modelling cellulose microfibril deposition and reorientation reveals twisting plant cell growth mechanisms. Journal of Theoretical Biology, 525:110736. [Chapman and Shabala, 2017]chapman2017effective Chapman, S. J. and Shabala, A. (2017). Effective transport properties of lattices. SIAM Journal on Applied Mathematics, 77(5):1631–1652. [Cheddadi et al., 2019]cheddadi2019coupling Cheddadi, I., Génard, M., Bertin, N., and Godin, C. (2019). Coupling water fluxes with cell wall mechanics in a multicellular model of plant development. PLoS Computational Biology, 15(6):e1007121. [Chickarmane et al., 2010]chickarmane2010computational Chickarmane, V., Roeder, A. H., Tarr, P. T., Cunha, A., Tobin, C., and Meyerowitz, E. M. (2010). Computational morphodynamics: a modeling framework to understand plant growth. Annual review of plant biology, 61:65–87. [Cieslak et al., 2016]cieslak2016integrating Cieslak, M., Cheddadi, I., Boudon, F., Baldazzi, V., Génard, M., Godin, C., and Bertin, N. (2016). Integrating physiology and architecture in models of fruit expansion. Frontiers in plant science, 7:1739. [Coen, 1999]coen2000art Coen, E. (1999). The art of genes: How organisms make themselves. Oxford University Press, Oxford. [Coen and Cosgrove, 2023]coen2023mechanics Coen, E. and Cosgrove, D. J. (2023). The mechanics of plant morphogenesis. Science, 379(6631):eade8055. [Coen et al., 2017]coen2017genes Coen, E., Kennaway, R., and Whitewoods, C. (2017). On genes and form. Development, 144(23):4203–4213. [Coen et al., 2004]coen_genetics_2004 Coen, E., Rolland-Lagan, A.-G., Matthews, M., Bangham, J. A., and Prusinkiewicz, P. (2004). The genetics of geometry. Proceedings of the National Academy of Sciences of the United States of America, 101(14):4728–4735. [Coleman and Noll, 1963]Coleman1963167 Coleman, B. D. and Noll, W. (1963). The thermodynamics of elastic materials with heat conduction and viscosity. Archive for Rational Mechanics and Analysis, 13(1):167 – 178. [Corson et al., 2009]corson_turning_2009 Corson, F., Hamant, O., Bohn, S., Traas, J., Boudaoud, A., and Couder, Y. (2009). Turning a plant tissue into a living cell froth through isotropic growth. Proceedings of the National Academy of Sciences, 106(21):8453–8458. [Cosgrove, 1981]cosgrove1981 Cosgrove, D. J. (1981). Analysis of the dynamic and steady-state responses of growth rate and turgor pressure to changes in cell parameters. Plant Physiology, 68(6):1439–1446. [Cosgrove, 1985]cosgrove1985cell Cosgrove, D. J. (1985). Cell wall yield properties of growing tissue: evaluation by in vivo stress relaxation. Plant physiology, 78(2):347–356. [Cosgrove, 1993]cosgrove1993water Cosgrove, D. J. (1993). Water uptake by growing cells: an assessment of the controlling roles of wall relaxation, solute uptake, and hydraulic conductance. International journal of plant sciences, 154(1):10–21. [Cosgrove, 2005]cosgrove2005growth Cosgrove, D. J. (2005). Growth of the plant cell wall. Nature reviews molecular cell biology, 6(11):850–861. [Cosgrove, 2016]cosgrove2016plant Cosgrove, D. J. (2016). Plant cell wall extensibility: connecting plant cell growth with cell wall structure, mechanics, and the action of wall-modifying enzymes. Journal of experimental botany, 67(2):463–476. 
[Cosgrove, 2018]cosgrove2018diffuse Cosgrove, D. J. (2018). Diffuse growth of plant cell walls. Plant physiology, 176(1):16–27. [Cosgrove and Anderson, 2020]cosgrove2020plant Cosgrove, D. J. and Anderson, C. T. (2020). Plant cell growth: do pectins drive lobe formation in Arabidopsis pavement cells? Current Biology, 30(11):R660–R662. [Coussy, 2003]coussy2004poromechanics Coussy, O. (2003). Poromechanics. John Wiley & Sons, Chichester. [Curtis, 1914]curtis1914nature Curtis, C. C. (1914). Nature and development of plants. Henry Holt and Company, New York, 4 edition. [Dainty, 1963]DAINTY1963279 Dainty, J. (1963). Water relations of plant cells. In Preston, R. D., editor, Volume 1, Advances in Botanical Research, pages 279–326. Academic Press. [De Boer, 1992]de1992development De Boer, R. (1992). Development of porous media theories—a brief historical review. Transport in porous media, 9:155–164. [De Boer, 2012]de2012theory De Boer, R. (2012). Theory of porous media: highlights in historical development and current state. Springer Berlin, Heidelberg. [Dequeker et al., 2024]dequeker2024biophysical Dequeker, B., Šalagovič, J., Retta, M., Verboven, P., and Nicolaï, B. M. (2024). A biophysical model of apple (Malus domestica Borkh.) and pear (Pyrus communis L.) fruit growth. Biosystems Engineering, 239:130–146. [Dervaux and Ben Amar, 2008]dervaux2008morphogenesis Dervaux, J. and Ben Amar, M. (2008). Morphogenesis of growing soft tissues. Physical Review Letters, 101:068101. [DiCarlo and Quiligotti, 2002]DICARLO2002449 DiCarlo, A. and Quiligotti, S. (2002). Growth and balance. Mechanics Research Communications, 29(6):449–456. [Dumais, 2021]dumais2021 Dumais, J. (2021). Mechanics and hydraulics of pollen tube growth. New Phytologist, 232(4):1549–1565. [Dumais and Forterre, 2012]dumais2012vegetable Dumais, J. and Forterre, Y. (2012). “Vegetable dynamicks”: the role of water in plant movements. Annual Review of Fluid Mechanics, 44:453–478. [Dunlop et al., 2010]dunlop2010theoretical Dunlop, J. W., Fischer, F. D., Gamsjäger, E., and Fratzl, P. (2010). A theoretical model for tissue growth in confined geometries. Journal of the Mechanics and Physics of Solids, 58(8):1073–1087. [Dupuy et al., 2007]dupuy2007system Dupuy, L., Mackenzie, J., Rudge, T., and Haseloff, J. (2007). A system for modelling cell-cell interactions during plant morphogenesis. Annals of botany, 101(8):1255–1265. [Dyson et al., 2012]dyson_model_2012 Dyson, R. J., Band, L. R., and Jensen, O. E. (2012). A model of crosslink kinetics in the expanding plant cell wall: Yield stress and enzyme action. Journal of Theoretical Biology, 307:125–136. [Eggen et al., 2011]eggen2011self Eggen, E., de Keijzer, M. N., and Mulder, B. M. (2011). Self-regulation in tip-growth: The role of cell wall ageing. Journal of theoretical biology, 283(1):113–121. [Epstein and Maugin, 2000]epstein2000thermomechanics Epstein, M. and Maugin, G. A. (2000). Thermomechanics of volumetric growth in uniform bodies. International Journal of Plasticity, 16(7-8):951–978. [Erickson, 1976]erickson1976modeling Erickson, R. O. (1976). Modeling of plant growth. Annual review of plant physiology, 27(1):407–434. [Fishman and Génard, 1998]fishman1998biophysical Fishman, S. and Génard, M. (1998). A biophysical model of fruit growth: simulation of seasonal and diurnal dynamics of mass. Plant, Cell & Environment, 21(8):739–752. [Forterre, 2013]forterre2013slow Forterre, Y. (2013). Slow, fast and furious: understanding the physics of plant movements. 
Journal of experimental botany, 64(15):4745–4760. [Forterre, 2022]forterre2022basic Forterre, Y. (2022). Basic soft matter for plants. In Jensen, K. and Forterre, Y., editors, Soft Matter in Plants: From Biophysics to Biomimetics, number 15 in Soft Matter Series, chapter 1, pages 1–65. The Royal Society of Chemistry, London. [Fozard et al., 2013]fozard_vertex-element_2013 Fozard, J. A., Lucas, M., King, J. R., and Jensen, O. E. (2013). Vertex-element models for anisotropic growth of elongated plant organs. Frontiers in Plant Science, 4(233). [Fraldi and Carotenuto, 2018]fraldi2018cells Fraldi, M. and Carotenuto, A. R. (2018). Cells competition in tumor growth poroelasticity. Journal of the Mechanics and Physics of Solids, 112:345–367. [Fridman et al., 2021]fridman2021root Fridman, Y., Strauss, S., Horev, G., Ackerman-Lavert, M., Reiner-Benaim, A., Lane, B., Smith, R., and Savaldi-Goldstein, S. (2021). The root meristem is shaped by brassinosteroid control of cell geometry. Nature plants, 7(11):1475–1484. [Geitmann and Ortega, 2009]geitmann_mechanics_2009 Geitmann, A. and Ortega, J. K. E. (2009). Mechanics and modeling of plant cell growth. Trends in Plant Science, 14(9):467–478. [Goriely, 2017]Goriely2017 Goriely, A. (2017). The mathematics and mechanics of biological growth, volume 45 of Interdisciplinary applied mathematics. Springer-Verlag, New York. [Goriely et al., 2010]goriely2010elastic Goriely, A., Moulton, D. E., and Vandiver, R. (2010). Elastic cavitation, tube hollowing, and differential growth in plants and biological tissues. Europhysics Letters, 91(1):18001. [Goriely et al., 2008a]goriely_elastic_2008 Goriely, A., Robertson-Tessi, M., Tabor, M., and Vandiver, R. (2008a). Elastic growth models. In Mondaini, R. P. and Pardalos, P. M., editors, Mathematical modelling of biosystems, volume 102 of Applied Optimization, chapter 1, pages 1–44. Springer-Verlag Berlin Heidelberg. [Goriely and Tabor, 1998]goriely1998spontaneous Goriely, A. and Tabor, M. (1998). Spontaneous helix hand reversal and tendril perversion in climbing plants. Physical Review Letters, 80:1564–1567. [Goriely et al., 2008b]goriely2008nonlinear Goriely, A., Vandiver, R., and Destrade, M. (2008b). Nonlinear Euler buckling. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 464(2099):3003–3019. [Green et al., 2010]green_genetic_2010 Green, A. A., Kennaway, J. R., Hanna, A. I., Bangham, J. A., and Coen, E. (2010). Genetic Control of Organ Shape and Tissue Polarity. PLoS Computational Biology Biol, 8(11):e1000537. [Gurtin et al., 2010]gurtin2010mechanics Gurtin, M. E., Fried, E., and Anand, L. (2010). The mechanics and thermodynamics of continua. Cambridge University Press, New York. [Haas et al., 2020]haas2020pectin Haas, K. T., Wightman, R., Meyerowitz, E. M., and Peaucelle, A. (2020). Pectin homogalacturonan nanofilament expansion drives morphogenesis in plant epidermal cells. Science, 367(6481):1003–1007. [Haas et al., 2021]HAAS2021100054 Haas, K. T., Wightman, R., Peaucelle, A., and Höfte, H. (2021). The role of pectin phase separation in plant cell wall assembly and growth. The Cell Surface, 7:100054. [Hamant et al., 2008]hamant2008developmental Hamant, O., Heisler, M. G., Jonsson, H., Krupinski, P., Uyttewaal, M., Bokov, P., Corson, F., Sahlin, P., Boudaoud, A., Meyerowitz, E. M., Couder, Y., and Traas, J. (2008). Developmental patterning by mechanical signals in Arabidopsis. Science, 322(5908):1650–1655. [Hamant and Traas, 2010]hamant2010mechanics Hamant, O. and Traas, J. 
(2010). The mechanics behind plant development. New Phytologist, 185(2):369–385. [Holland et al., 2013]holland2013mechanics Holland, M. A., Kosmata, T., Goriely, A., and Kuhl, E. (2013). On the mechanics of thin films and growing surfaces. Mathematics and Mechanics of Solids, 18(6):561–575. [Holzapfel, 2000]holzapfel2000nonlinear Holzapfel, G. A. (2000). Nonlinear solid mechanics. John Wiley & Sons, Chichester. [Jia et al., 2018]jia2018curvature Jia, F., Pearce, S. P., and Goriely, A. (2018). Curvature delays growth-induced wrinkling. Physical Review E, 98(3):033003. [Johnson and Lenhard, 2011]johnson2011genetic Johnson, K. and Lenhard, M. (2011). Genetic control of plant organ growth. New Phytologist, 191(2):319–333. [Kelly-Bellow et al., 2023]KellyBellow2023 Kelly-Bellow, R., Lee, K., Kennaway, R., Barclay, J. E., Whibley, A., Bushell, C., Spooner, J., Yu, M., Brett, P., Kular, B., Cheng, S., Chu, J., Xu, T., Lane, B., Fitzsimons, J., Xue, Y., Smith, R. S., Whitewoods, C. D., and Coen, E. (2023). Brassinosteroid coordinates cell layer interactions in plants via cell wall and tissue mechanics. Science, 380(6651):1275–1281. [Kennaway and Coen, 2019]kennaway2019volumetric Kennaway, R. and Coen, E. (2019). Volumetric finite-element modelling of biological growth. Open biology, 9(5):190057. [Kennaway et al., 2011]kennaway_generation_2011 Kennaway, R., Coen, E., Green, A., and Bangham, A. (2011). Generation of Diverse Biological Forms through Combinatorial Interactions between Tissue Polarity and Growth. PLoS Computational Biology, 7(6):e1002071. [Khadka et al., 2019]khadka2019feedback Khadka, J., Julien, J.-D., and Alim, K. (2019). Feedback from tissue mechanics self-organizes efficient outgrowth of plant organ. Biophysical journal, 117(10):1995–2004. [Kierzkowski et al., 2012]kierzkowski_elastic_2012 Kierzkowski, D., Nakayama, N., Routier-Kierzkowska, A.-L., Weber, A., Bayer, E., Schorderet, M., Reinhardt, D., Kuhlemeier, C., and Smith, R. S. (2012). Elastic domains regulate growth and organogenesis in the plant shoot apical meristem. Science, 335(6072):1096–1099. [Kierzkowski et al., 2019]kierzkowski2019growth Kierzkowski, D., Runions, A., Vuolo, F., Strauss, S., Lymbouridou, R., Routier-Kierzkowska, A.-L., Wilson-Sánchez, D., Jenke, H., Galinha, C., Mosca, G., et al. (2019). A growth-based framework for leaf shape development and diversity. Cell, 177(6):1405–1418. [Krause et al., 2023]krause2023concentration Krause, A. L., Gaffney, E. A., and Walker, B. J. (2023). Concentration-dependent domain evolution in reaction–diffusion systems. Bulletin of Mathematical Biology, 85(2):14. [Kuhl, 2014]kuhl2014growing Kuhl, E. (2014). Growing matter: A review of growth in living systems. Journal of the Mechanical Behavior of Biomedical Materials, 29:529–543. [Kutschera, 1989]kutschera1989tissue Kutschera, U. (1989). Tissue stresses in growing plant organs. Physiologia Plantarum, 77(1):157–163. [Kutschera, 2001]KUTSCHERA2001851 Kutschera, U. (2001). Gravitropism of axial organs in multicellular plants. Advances in Space Research, 27(5):851–860. [Kutschera et al., 1987]kutschera1987cooperation Kutschera, U., Bergfeld, R., and Schopfer, P. (1987). Cooperation of epidermis and inner tissues in auxin-mediated growth of maize coleoptiles. Planta, 170(2):168–180. [Kutschera and Niklas, 2007]kutschera_epidermal-growth-control_2007 Kutschera, U. and Niklas, K. J. (2007). The epidermal-growth-control theory of stem elongation: An old and a new perspective. Journal of Plant Physiology, 164(11):1395–1409. 
[Lang et al., 2015]lang2015propagation Lang, G. E., Vella, D., Waters, S. L., and Goriely, A. (2015). Propagation of damage in brain tissue: coupling the mechanics of oedema and oxygen delivery. Biomechanics and modeling in mechanobiology, 14:1197–1216. [Laplaud et al., 2024]laplaud2024assessing Laplaud, V., Muller, E., Demidova, N., Drevensek, S., and Boudaoud, A. (2024). Assessing the hydromechanical control of plant growth. Journal of the Royal Society Interface, 21(214):20240008. [Lee et al., 2019]lee2019shaping Lee, K. J. I., Bushell, C., Koide, Y., Fozard, J. A., Piao, C., Yu, M., Newman, J., Whitewoods, C., Avondo, J., Kennaway, R., Marée, A. F. M., Cui, M., and Coen, E. (2019). Shaping of a three-dimensional carnivorous trap through modulation of a planar growth mechanism. PLoS Biology, 17(10):e3000427. [Liang and Mahadevan, 2009]liang2009shape Liang, H. and Mahadevan, L. (2009). The shape of a long leaf. Proceedings of the National Academy of Sciences, 106(52):22049–22054. [Liu et al., 2022]LIU20221974 Liu, S., Strauss, S., Adibi, M., Mosca, G., Yoshida, S., Dello Ioio, R., Runions, A., Andersen, T. G., Grossmann, G., Huijser, P., Smith, R. S., and Tsiantis, M. (2022). Cytokinin promotes growth cessation in the Arabidopsis root. Current Biology, 32(9):1974–1985.e3. [Liu et al., 2013]liu2013pattern Liu, Z., Swaddiwudhipong, S., and Hong, W. (2013). Pattern formation in plants via instability theory of hydrogels. Soft Matter, 9(2):577–587. [Lockhart, 1965]lockhart1965analysis Lockhart, J. A. (1965). An analysis of irreversible plant cell elongation. Journal of theoretical biology, 8(2):264–275. [Long et al., 2020]long2020cellular Long, Y., Cheddadi, I., Mosca, G., Mirabet, V., Dumond, M., Kiss, A., Traas, J., Godin, C., and Boudaoud, A. (2020). Cellular heterogeneity in pressure and growth emerges from tissue topology and geometry. Current Biology, 30(8):1504–1516. [Martre et al., 2011]martre2011modelling Martre, P., Bertin, N., Salon, C., and Génard, M. (2011). Modelling the size and composition of fruit, grain and seed by process-based simulation models. New Phytologist, 191(3):601–618. [Martyushev and Seleznev, 2006]martyushev2006maximum Martyushev, L. M. and Seleznev, V. D. (2006). Maximum entropy production principle in physics, chemistry and biology. Physics reports, 426(1):1–45. [Meinhardt and Gierer, 2000]Meinhardt2000 Meinhardt, H. and Gierer, A. (2000). Pattern formation by local self-activation and lateral inhibition. BioEssays, 22(8):753–760. [Menzel and Kuhl, 2012]menzel2012frontiers Menzel, A. and Kuhl, E. (2012). Frontiers in growth and remodeling. Mechanics research communications, 42:1–14. [Merks et al., 2011]merks2011virtualleaf Merks, R. M., Guravage, M., Inzé, D., and Beemster, G. T. (2011). VirtualLeaf: an open-source framework for cell-based modeling of plant tissue growth and development. Plant physiology, 155(2):656–666. [Molz and Boyer, 1978]molz1978growth Molz, F. J. and Boyer, J. S. (1978). Growth-induced water potentials in plant cells and tissues. Plant Physiology, 62(3):423–429. [Molz and Ikenberry, 1974]molz1974water Molz, F. J. and Ikenberry, E. (1974). Water transport through plant cells and cell walls: theoretical development. Soil Science Society of America Journal, 38(5):699–704. [Molz et al., 1975]molz1975dynamics Molz, F. J., Truelove, B., and Peterson, C. M. (1975). Dynamics of rehydration in leaf disks 1. Agronomy Journal, 67(4):511–515. 
[Mosca et al., 2018]mosca2018modeling Mosca, G., Adibi, M., Strauss, S., Runions, A., Sapala, A., and Smith, R. S. (2018). Modeling plant tissue growth and cell division. In Morris, R. J., editor, Mathematical modelling in plant biology, pages 107–138. Springer, Cham. [Mosca et al., 2024]mosca2024growth Mosca, G., Eng, R. C., Adibi, M., Yoshida, S., Lane, B., Bergheim, L., Weber, G., Smith, R. S., and Hay, A. (2024). Growth and tension in explosive fruit. Current Biology, 34:1010–1022. [Moulton et al., 2020]moulton2020multiscale Moulton, D. E., Oliveri, H., and Goriely, A. (2020). Multiscale integration of environmental stimuli in plant tropism produces complex behaviors. Proceedings of the National Academy of Sciences, 117(51):32226–32237. [Newell et al., 2008]NEWELL2008421 Newell, A. C., Shipman, P. D., and Sun, Z. (2008). Phyllotaxis: Cooperation and competition between mechanical and biochemical processes. Journal of Theoretical Biology, 251(3):421–439. [Nonami and Boyer, 1993]nomami1993 Nonami, H. and Boyer, J. S. (1993). Direct demonstration of a growth-induced water potential gradient. Plant Physiology, 102(1):13–19. [Oliveri et al., 2024]Oliveri2024 Oliveri, H., Moulton, D. E., Harrington, H. A., and Goriely, A. (2024). Active shape control by plants in dynamic environments. Physical Review E, 110(1):014405. [Oliveri et al., 2018]oliveri2019regulation Oliveri, H., Traas, J., Godin, C., and Ali, O. (2018). Regulation of plant cell wall stiffness by mechanical stress: a mesoscale physical model. Journal of mathematical biology, 78(3):625–653. [Onsager, 1931]onsager1931reciprocal Onsager, L. (1931). Reciprocal relations in irreversible processes. I. Physical review, 37(4):405. [Ortega, 1985]ortega_augmented_1985 Ortega, J. K. E. (1985). Augmented Growth Equation for Cell Wall Expansion. Plant Physiology, 79(1):318–320. [Ortega, 2010]ortega2010plant Ortega, J. K. E. (2010). Plant cell growth in tissue. Plant physiology, 154(3):1244–1253. [Passioura and Boyer, 2003]passioura2003tissue Passioura, J. B. and Boyer, J. S. (2003). Tissue stresses and resistance to water flow conspire to uncouple the water potential of the epidermis from that of the xylem in elongating plant stems. Functional Plant Biology, 30(3):325–334. [Peng et al., 2022]peng2022differential Peng, Z., Alique, D., Xiong, Y., Hu, J., Cao, X., Lü, S., Long, M., Wang, Y., Wabnik, K., and Jiao, Y. (2022). Differential growth dynamics control aerial organ geometry. Current Biology, 32(22):4854–4868. [Peters and Tomos, 1996]peters1996history Peters, W. S. and Tomos, A. D. (1996). The history of tissue tension. Annals of Botany, 77(6):657–665. [Philip, 1958]philip1958propagation Philip, J. (1958). Propagation of turgor and other properties through cell aggregations. Plant Physiology, 33(4):271. [Pieczywek and Zdunek, 2017]pieczywek2017compression Pieczywek, P. M. and Zdunek, A. (2017). Compression simulations of plant tissue in 3D using a mass-spring system approach and discrete element method. Soft matter, 13(40):7318–7331. [Plant, 1982]plant1982continuum Plant, R. E. (1982). A continuum model for root growth. Journal of Theoretical Biology, 98(1):45–59. [Press et al., 2007]press2007numerical Press, W. H., Teukolsky, S. A., Vetterling, W. T., and Flannery, B. P. (2007). Numerical recipes: The art of scientific computing. Cambridge University Press, New York, 3 edition. [Preziosi and Farina, 2002]preziosi2002darcy Preziosi, L. and Farina, A. (2002). On Darcy's law for growing porous media. 
International Journal of Non-Linear Mechanics, 37(3):485–491. [Rebocho et al., 2017]Rebocho2017 Rebocho, A. B., Southam, P., Kennaway, J. R., Bangham, J. A., and Coen, E. (2017). Generation of shape complexity through tissue conflict resolution. eLife, 6:e20156. [Robinson and Kuhlemeier, 2018]robinson2018global Robinson, S. and Kuhlemeier, C. (2018). Global compression reorients cortical microtubules in Arabidopsis hypocotyl epidermis and promotes growth. Current Biology. [Rodriguez et al., 1994]rodriguez1994stress Rodriguez, E. K., Hoger, A., and McCulloch, A. D. (1994). Stress-dependent finite growth in soft elastic tissues. Journal of Biomechanics, 27(4):455–467. [Rojas et al., 2011]rojas_chemically_2011 Rojas, E. R., Hotton, S., and Dumais, J. (2011). Chemically Mediated Mechanical Expansion of the Pollen Tube Cell Wall. Biophysical Journal, 101(8):1844–1853. [Rudge and Haseloff, 2005]rudge2005computational Rudge, T. and Haseloff, J. (2005). A computational model of cellular morphogenesis in plants. In European Conference on Artificial Life, pages 78–87. Springer. [Rueda-Contreras et al., 2018]rueda2018curvature Rueda-Contreras, M. D., Romero-Arias, J. R., Aragon, J. L., and Barrio, R. A. (2018). Curvature-driven spatial patterns in growing 3D domains: A mechanochemical model for phyllotaxis. PLoS One, 13(8):e0201746. [Sachs, 1865]sachs1865 Sachs, J. (1865). Handbuch der Experimental-Physiologie der Pflanzen: Untersuchungen über die allgemeinen Lebensbedingungen der Pflanzen und die Functionen ihrer Organe, volume 4 of Handbuch der Experimental-physiologie der Pflanzen. Wilhelm Engelmann, Leipzig. [Sachs, 1882]sachs1882vorlesungen Sachs, J. (1882). Vorlesungen über Pflanzen-physiologie, volume 1. W. Engelmann, Leipzig. [Sassi et al., 2014]sassi_auxin-mediated_2014 Sassi, M., Ali, O., Boudon, F., Cloarec, G., Abad, U., Cellier, C., Chen, X., Gilles, B., Milani, P., Friml, J., Vernoux, T., Godin, C., Hamant, O., and Traas, J. (2014). An Auxin-Mediated Shift toward Growth Isotropy Promotes Organ Formation at the Shoot Meristem in Arabidopsis. Current Biology, 24(19):2335–2342. [Schopfer, 2006]schopfer2006biomechanics Schopfer, P. (2006). Biomechanics of plant growth. American journal of botany, 93(10):1415–1425. [Tiero and Tomassetti, 2016]tiero2016morphoelastic Tiero, A. and Tomassetti, G. (2016). On morphoelastic rods. Mathematics and Mechanics of Solids, 21(8):941–965. [Vandiver and Goriely, 2008]vandiver2008tissue Vandiver, R. and Goriely, A. (2008). Tissue tension and axial growth of cylindrical structures in plants and elastic tissues. Europhysics Letters, 84(5):58004. [Vandiver and Goriely, 2009]vandiver2009morpho Vandiver, R. and Goriely, A. (2009). Morpho-elastodynamics: the long-time dynamics of elastic growth. Journal of biological dynamics, 3(2-3):180–195. [Wada, 2012]wada2012hierarchical Wada, H. (2012). Hierarchical helical order in the twisted growth of plant organs. Physical Review Letters, 109(12):128104. [Wang and Zhao, 2015]wang2015three Wang, Q. and Zhao, X. (2015). A three-dimensional phase diagram of growth-induced surface instabilities. Scientific reports, 5(1):8887. [Whitewoods and Coen, 2017]whitewoods2017growth Whitewoods, C. D. and Coen, E. (2017). Growth and development of three-dimensional plant form. Current Biology, 27(17):R910–R918. [Whitewoods et al., 2020]whitewoods2020evolution Whitewoods, C. D., Gonçalves, B., Cheng, J., Cui, M., Kennaway, R., Lee, K., Bushell, C., Yu, M., Piao, C., and Coen, E. (2020). 
Evolution of carnivorous traps from planar leaves through simple shifts in gene expression. Science, 367(6473):91–96. [Xue et al., 2016]XUE2016409 Xue, S.-L., Li, B., Feng, X.-Q., and Gao, H. (2016). Biochemomechanical poroelastic theory of avascular tumor growth. Journal of the Mechanics and Physics of Solids, 94:409–432. [Zhang et al., 2024]zhang2024mechanism Zhang, H., Xue, F., Guo, L., Cheng, J., Jabbour, F., DuPasquier, P.-E., Xie, Y., Zhang, P., Wu, Y., Duan, X., Kong, H., and Zhang, R. (2024). The mechanism underlying asymmetric bending of lateral petals in Delphinium (Ranunculaceae). Current Biology, 34:755–768. [Zhang et al., 2020]zhang2020wox Zhang, Z., Runions, A., Mentink, R. A., Kierzkowski, D., Karady, M., Hashemi, B., Huijser, P., Strauss, S., Gan, X., Ljung, K., and Tsiantis, M. (2020). A WOX/auxin biosynthesis module controls growth to shape leaf form. Current biology, 30(24):4857–4868. [Zhao et al., 2020]zhao2020microtubule Zhao, F., Du, F., Oliveri, H., Zhou, L., Ali, O., Chen, W., Feng, S., Wang, Q., Lü, S., Long, M., Schneider, R., Sampathkumar, A., Godin, C., Traas, J., and Jiao, Y. (2020). Microtubule-mediated wall anisotropy contributes to leaf blade flattening. Current Biology, 30(20):3972–3985. [Zhu and Melrose, 2003]zhu2003mechanics Zhu, H. and Melrose, J. (2003). A mechanics model for the compression of plant and vegetative tissues. Journal of theoretical biology, 221(1):89–101.
http://arxiv.org/abs/2409.03511v1
20240905132218
A Cantor spectrum diagonal in O_2
[ "Philipp Sibbel", "Wilhelm Winter" ]
math.OA
[ "math.OA", "46L05" ]
§ ABSTRACT We prove the existence of a C^*-diagonal in the Cuntz algebra 𝒪_2 with spectrum homeomorphic to the Cantor space. In 1986, Kumjian defined the notion of a C^*-diagonal in a C^*-algebra as an analogue to the notion of a Cartan subalgebra in von Neumann algebras (<cit.>). He showed that diagonal C^*-pairs occur as groupoid C^*-algebras of twisted étale equivalence relations. In <cit.>, Renault noted that the notion of a C^*-diagonal is sometimes too restrictive as it does not cover a large number of canonical maximal abelian subalgebras in important C^*-algebras. As an example, he mentioned the Cuntz algebras 𝒪_n and more generally graph algebras, which contain obvious regular maximal abelian subalgebras that are not C^*-diagonals. The difference is that these C^*-algebras can be constructed as groupoid C^*-algebras from groupoids which have some isotropy, more precisely, they are only topologically principal as opposed to principal. As a remedy, Renault defined the weaker notion of a Cartan subalgebra in a C^*-algebra. Let A be a C^*-algebra. An abelian sub-C^*-algebra D in A is called a Cartan subalgebra if (0) D contains an approximate unit of A, (1) D is maximal abelian in A, (2) D is regular, i.e. the normalisers of D, 𝒩_A(D) := {n ∈ A | n^*Dn ⊂ D and nDn^* ⊂ D}, generate A as a C^*-algebra, and (3) there exists a faithful conditional expectation Ψ of A onto D. D is called a C^*-diagonal if, moreover, (4) the unique extension property of pure states is satisfied, that is, every pure state on D has a unique extension to a (necessarily pure) state on A. Renault then showed the analogous result to Kumjian's theorem, that is, the reduced C^*-algebra of a second-countable, topologically principal, Hausdorff, étale, twisted groupoid contains a canonical Cartan subalgebra corresponding to the unit space, and every Cartan subalgebra in a separable C^*-algebra uniquely arises in this fashion (<cit.>). A Cartan subalgebra then is a C^*-diagonal if and only if the corresponding groupoid is principal. In the following years, motivated by results in the theory of von Neumann algebras, Cartan subalgebras were studied intensively in the C^*-algebraic setting. Several existence and non-uniqueness results for Cartan subalgebras and C^*-diagonals were obtained (e.g. <cit.>, <cit.>, <cit.>, <cit.>, <cit.>), where the interplay between the study of Cartan subalgebras and classification theory of C^*-algebras attracted particular attention. In <cit.>, Li completed the proof that every classifiable (in the sense of Elliott's programme; cf. <cit.>) simple C^*-algebra has a Cartan subalgebra and that a unital, separable, simple C^*-algebra with finite nuclear dimension contains a Cartan subalgebra if and only if it satisfies the Universal Coefficient Theorem (UCT) of Rosenberg and Schochet (also see <cit.>).
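For orientation, recall the prototypical example behind these definitions: the diagonal matrices D_n ⊂ M_n. Here D_n is maximal abelian, the permutation matrices normalise D_n (so D_n is regular), deleting the off-diagonal entries defines a faithful conditional expectation, and every pure state on D_n, d ↦ d_ii, extends uniquely to M_n (any extension ψ satisfies ψ(e_ii)=1, hence ψ(x) = ψ(e_ii x e_ii) = x_ii). Thus D_n ⊂ M_n is a C^*-diagonal, and the subalgebras D_n^∞ ⊂ M_n^∞ and D_2^∞ ⊂ 𝒪_2 discussed below are built from infinitely many copies of this picture.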
In <cit.>, Cuntz introduced an important class of C^*-algebras, which he denoted by 𝒪_n, for 2 ≤ n ≤∞, and which were later called Cuntz algebras. For n finite, 𝒪_n is the universal C^*-algebra generated by n isometries S_1, …, S_n with pairwise orthogonal range projections which add up to the unit. The Cuntz algebra 𝒪_n then contains a subalgebra isomorphic to M_n^∞, the UHF algebra of type n^∞. It is generated by elements of the form S_μ S_ν^* for finite sequences μ, ν∈{1,…,n}^k of the same length, where S_μ = S_μ_1… S_μ_k for μ = (μ_1,…,μ_k). It is well-known that the subalgebra D_n^∞ generated by diagonal matrices in M_n^∞ is a C^*-diagonal. The corresponding subalgebra in the Cuntz algebra 𝒪_n, which we also denote by D_n^∞, is generated by elements of the form S_μ S_μ^* and turns out to be a Cartan subalgebra in 𝒪_n; however, the unique extension property of pure states is not satisfied and therefore it is not a C^*-diagonal in 𝒪_n (cf. <cit.>). In particular, an element in {1,…,n}^ℕ, the spectrum of D_n^∞, corresponds to a pure state on D_n^∞ with a unique extension to 𝒪_n if and only if the element is an (eventually) aperiodic sequence (<cit.>, <cit.>). The natural question whether 𝒪_n contains a C^*-diagonal is surprisingly difficult to answer. In <cit.>, Hjelmborg presented a construction of 𝒪_2 as the C^*-algebra of a principal groupoid; by Renault's and Kumjian's work (<cit.>, <cit.>), this yields a C^*-diagonal in 𝒪_2. Hjelmborg's argument factorises through Kirchberg–Phillips classification and, in particular, it uses Kirchberg's 𝒪_2-absorption theorem, which states that A ⊗𝒪_2 is isomorphic to 𝒪_2 if and only if A is a simple, separable, unital and nuclear C^*-algebra. The spectrum of Hjelmborg's C^*-diagonal is 𝕋×{1,2}^ℕ, and in particular one-dimensional. Via tensoring with itself, one then also obtains C^*-diagonals with higher-dimensional spectra in 𝒪_2. Until now it remained unknown whether 𝒪_2 contains a C^*-diagonal with Cantor spectrum. Below we will give an affirmative answer – and here is why we are excited about this: A long-term goal is to develop a structure theory for Cartan and diagonal sub-C^*-algebras of a given classifiable C^*-algebra. This programme is in its infancy, and it is natural to focus on spectra which are as universal as possible, and in particular robust under taking tensor products. Perhaps more important, we hope to eventually employ K-theoretic tools to build distinguishing invariants, and Cantor spectrum promises the most direct access to such methods. We write M_n for the n × n matrices with coefficients in ℂ, and 𝒦≅𝒦(ℓ^2(ℕ)) for the compact operators on an infinite-dimensional separable Hilbert space. For i,j ∈{1,…,n}, let e_ij denote the standard matrix units in M_n (respectively in 𝒦). For a non-unital C^*-algebra A, we write A^+ for its unitisation. Before giving full details, let us outline the construction. Let X be a Cantor space and fix a minimal homeomorphism on X. Then, the associated ℤ-action is free, which implies that 𝒞(X), the C^*-algebra of continuous functions on X, is a C^*-diagonal in the crossed product C^*-algebra 𝒞(X) ⋊_α ℤ (see <cit.>), where α denotes the induced action on 𝒞(X). Let v ∈𝒞(X) ⋊_α ℤ be the unitary implementing the action, i.e. v h v^* = α (h) for all h ∈𝒞(X). Let D_2^∞ denote the canonical Cartan subalgebra in the Cuntz algebra 𝒪_2. It is generated by elements of the form S_μ S_μ^* for μ∈{1,2}^k and k ∈ ℕ. Then, D_2^∞⊗𝒞(X) ⊂𝒪_2 ⊗ (𝒞(X) ⋊_α ℤ) is a Cartan subalgebra which is not a C^*-diagonal (see <cit.>).
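For later use, note the elementary computation underlying this construction: in 𝒪_2 ⊗ (𝒞(X) ⋊_α ℤ) we have (S_i ⊗ v)^*(S_i ⊗ v) = S_i^*S_i ⊗ v^*v = 1 ⊗ 1 for i = 1,2, and (S_1 ⊗ v)(S_1 ⊗ v)^* + (S_2 ⊗ v)(S_2 ⊗ v)^* = (S_1S_1^* + S_2S_2^*) ⊗ vv^* = 1 ⊗ 1, so S_1 ⊗ v and S_2 ⊗ v satisfy the defining relations of 𝒪_2; by universality and simplicity of 𝒪_2, they generate a unital copy of 𝒪_2 inside 𝒪_2 ⊗ (𝒞(X) ⋊_α ℤ).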
However, we will see that D := D_2^∞⊗𝒞(X) ⊂ C^*(D_2^∞⊗𝒞(X), S_1 ⊗ v, S_2 ⊗ v) =: B is a C^*-diagonal, and the ambient C^*-algebra B is nuclear and simple; see Lemmas <ref> and <ref>. But now, the infinite tensor product D^⊗∞⊂ B^⊗∞ is still a C^*-diagonal with Cantor spectrum in a simple nuclear C^*-algebra. B contains a unital copy of the Cuntz algebra 𝒪_2, and since the latter is strongly self-absorbing (see <cit.>), we have that B^⊗∞≅ B^⊗∞⊗𝒪_2; cf. <cit.>. The isomorphism with 𝒪_2 then follows from Kirchberg's 𝒪_2-absorption theorem, and we have our main result: D^⊗∞ is a C^*-diagonal in B^⊗∞ with spectrum homeomorphic to the Cantor space, and B^⊗∞ is isomorphic to the Cuntz algebra 𝒪_2. We now carry out the steps above in detail. D ⊂ B as in (<ref>) is a C^*-diagonal. Since D_2^∞⊗𝒞(X) is a Cartan subalgebra in 𝒪_2 ⊗ (𝒞(X) ⋊_α ℤ) by <cit.>, it is maximal abelian also in B and there is a faithful conditional expectation B → D_2^∞⊗𝒞(X). To see that D_2^∞⊗𝒞(X) is regular in B, it suffices to check that S_1 ⊗ v and S_2⊗ v are normalisers. But for elementary tensors of the form d ⊗ f ∈ D_2^∞⊗𝒞(X), we have (S_i ⊗ v) (d ⊗ f)(S_i ⊗ v)^* = (S_i d S_i^* ⊗ vfv^*) and (S_i ⊗ v)^* (d ⊗ f)(S_i ⊗ v) = (S_i^* d S_i ⊗ v^*fv), which again lie in D_2^∞⊗𝒞(X). Now, since sums of elementary tensors are dense in the closed subalgebra D_2^∞⊗𝒞(X) and since conjugation with S_i ⊗ v and (S_i ⊗ v)^* is continuous, it follows that S_i ⊗ v are indeed normalisers. Thus, it only remains to show the unique extension property of pure states. For this, consider an arbitrary pure state on D_2^∞⊗𝒞(X). It is of the form ρ⊗σ for a pure state ρ on D_2^∞ and a pure state σ on 𝒞(X). Let ω be an extension of ρ⊗σ to a pure state on 𝒪_2 ⊗ (𝒞(X) ⋊_α ℤ). Since ω(1_𝒪_2⊗ · ) is a state on 𝒞(X) ⋊_α ℤ extending σ, it must be the unique extension, denoted by σ̄, and hence pure. It follows from <cit.> that ω = ρ̄⊗σ̄ for a state ρ̄ on 𝒪_2, which extends ρ. Note that a priori, ρ̄ does not need to be unique on 𝒪_2, but we claim that on B, ω = ρ̄⊗σ̄ already is uniquely determined. To see this, we consider an arbitrary product k = (d ⊗ f) k_1 … k_l for l ∈ ℕ, where d ∈ D_2^∞ and f ∈𝒞(X), and each k_i is S_1⊗ v, S_1^*⊗ v^*, S_2⊗ v or S_2^*⊗ v^*. As the action on X is free and σ̄ is obtained using the conditional expectation of 𝒞(X) ⋊_α ℤ onto 𝒞(X) followed by the point evaluation σ, ρ̄⊗σ̄ (k) can only be nonzero if v and v^* occur the same number of times in the product. Therefore, if ρ̄⊗σ̄ (k) is nonzero, then k also contains the same number of S_i's and S_i^*'s. Thus, the first tensor factor of k has to be in the subalgebra of 𝒪_2 generated by elements of the form S_μ S_ν^* with ℓ(μ) = ℓ(ν). On this subalgebra, which is isomorphic to the UHF algebra of type 2^∞, ρ̄ also is uniquely determined. Now, since the linear span of elements like k is dense in B, ω = ρ̄⊗σ̄ is uniquely determined on B. Our proof that B is nuclear closely follows Cuntz's argument for 𝒪_n. Our initial proof of the simplicity of B also followed Cuntz's ideas, but discussions with S. Evington prompted us to use a result by Echterhoff (<cit.>). In order to set up notation, we recall the construction in the case n=2 from <cit.>, where 𝒪_n was presented as a crossed product cut down by a projection. We fix an isomorphism ϕ : 𝒦→𝒦⊗ M_2 such that ϕ(e_11) = e_11⊗ e_11 and let id_M_2^∞: M_2^∞→ M_2^∞ denote the identity map. Then, we can define an automorphism Φ : 𝒦⊗ M_2^∞→𝒦⊗ M_2^∞ via Φ = ϕ⊗id_M_2^∞, that is Φ= ϕ⊗id_M_2^∞: 𝒦⊗ M_2 ⊗…⟶ (𝒦⊗ M_2) ⊗ M_2 ⊗… = 𝒦⊗ M_2^∞.
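Concretely, since ϕ(e_11) = e_11⊗ e_11, the automorphism Φ acts on elementary tensors over the corner e_11 by inserting one additional tensor factor: Φ(e_11⊗ e_i_1 j_1⊗…⊗ e_i_k j_k⊗1_M_2⊗…) = e_11⊗ e_11⊗ e_i_1 j_1⊗…⊗ e_i_k j_k⊗1_M_2⊗…. It is this one-sided shift that produces the Cuntz isometries after cutting down by a suitable projection, as recalled next.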
Now, Φ canonically extends to an automorphism Φ^+ on the unitisation (𝒦⊗ M_2^∞)^+ and we obtain a -action on (𝒦⊗ M_2^∞)^+. Thus, we can consider the crossed product (𝒦⊗ M_2^∞)^+ ⋊_Φ^+. Let u ∈ (𝒦⊗ M_2^∞)^+ ⋊_Φ^+ be the unitary implementing the action, i.e. ux u^*= Φ^+(x) for all x ∈ (𝒦⊗ M_2^∞)^+. Now, 𝒪_2 is (isomorphic to) the crossed product (𝒦⊗ M_2^∞)^+⋊_Φ^+ cut down by the projection p: = e_11⊗1_M_2^∞∈ (𝒦⊗ M_n^∞)^+, i.e. 𝒪_2 ≅ p ((𝒦⊗ M_2^∞)^+⋊_Φ^+) p, sending the generators S_1 and S_2 to up and (e_11⊗ e_21⊗1_M_2⊗… ) up, respectively (also see <cit.>). Note that under this identification, we have S_i_1… S_i_kS_j_k^* …S_j_1^* = e_11⊗ e_i_1 j_1⊗…⊗ e_i_k j_k⊗1_M_2⊗… , so D_2^∞⊂𝒪_2 is identified with e_1 1⊗ D_2^∞⊂ e_11⊗ M_2^∞⊂ p ((𝒦⊗ M_2^∞)^+ ⋊_Φ^+) p. We define elements S̃_1 := S_1 ⊗ v = up ⊗ v, S̃_2 := S_2 ⊗ v = (e_11⊗ e_21⊗1_M_2⊗… ) up ⊗ v in 𝒪_2 ⊗ (𝒞(X) ⋊_α)⊂((𝒦⊗ M_2^∞)^+⋊_Φ^+ )⊗ (𝒞(X) ⋊_α). Then, S̃_1 and S̃_2 also satisfy the Cuntz relations with unit p̃:= p ⊗1_𝒞(X), i.e. S̃_i^* S̃_i = S̃_1 S̃_1^*+S̃_2S̃_2^* = p̃=p̃^*=p̃^2 and p̃S̃_i p̃=S̃_i. Furthermore, we have the identification B = C^*(D_2^∞⊗𝒞(X), S_1 ⊗ v , S_2 ⊗ v) ≅C^*(e_11⊗ D_2^∞⊗𝒞(X), S̃_1 , S̃_2), when viewing B as a subalgebra of ((𝒦⊗ M_2^∞)^+⋊_Φ^+ )⊗ (𝒞(X) ⋊_α). B is nuclear and simple. First, we note that Φ^+ ⊗α yields a -action on (𝒦⊗ M_2^∞)^+ ⊗𝒞(X) which is implemented by u ⊗ v. Thus, by the universal property of the crossed product (cf. <cit.>), there exists a surjection ((𝒦⊗ M_2^∞)^+ ⊗𝒞(X)) ⋊_Φ^+ ⊗α⟶C^*((𝒦⊗ M_2^∞)^+ ⊗𝒞(X), u ⊗ v) =:E. We consider E as a subalgebra of ((𝒦⊗ M_2^∞)^+ ⋊_Φ^+ ) ⊗( 𝒞(X) ⋊_α). As crossed products of nuclear C^*-algebras by amenable groups are nuclear (<cit.>), we find that ((𝒦⊗ M_2^∞)^+ ⊗𝒞(X)) ⋊_Φ^+ ⊗α is nuclear. Now, since E is a quotient of a nuclear C^*-algebra, it is nuclear as well (<cit.>, <cit.>), and so is the hereditary sub-C^*-algebra p̃ E p̃ (<cit.>). We now check that inside ((𝒦⊗ M_2^∞)^+ ⋊_Φ^+) ⊗ (𝒞(X) ⋊_α), B coincides with p̃Ep̃, which will conclude the proof of nuclearity. This will be done analogously to the presentation of 𝒪_2 as a crossed product cut down by a projection (see <cit.>). First, note that since S̃_1 = p̃S̃_1p̃ and S̃_2 = p̃S̃_2p̃, we have B ⊂p̃ Ep̃. For the reverse inclusion, let us consider an a ∈ E ⊂ ((𝒦⊗ M_2^∞)^+ ⋊_Φ^+ ) ⊗( 𝒞(X) ⋊_α) of the form a = ∑_i=-N^N x_i u^i ⊗ y_i v^i, where N ∈ and x_i ∈ (𝒦⊗ M_2^∞)^+, y_i ∈𝒞(X) for all i= -N,…,N. Elements of this form are dense in E. When writing x̃_i = u^-i x_i u^i and ỹ_i = v^-i y_i v^i for i < 0, we obtain a = ∑_i<0 u^i x̃_i ⊗ v^i ỹ_i+ x_0 ⊗ y_0+ ∑_i>0 x_i u^i ⊗ y_i v^i. Using that up = S_1 = 1_𝒪_2 S_1 = pup, we see that px_i u^i p = (px_ip)(up)^i for i > 0 and pu^ix̃_i p = ((up)^*)^-i p x̃_i p for i<0. This yields p̃ a p̃ = ∑_i<0 ((up)^*)^-i p x̃_ip ⊗ v^i ỹ_i+ px_0p ⊗ v_0 + ∑_i>0 (px_ip)(up)^i ⊗ y_i v ^i = ∑_i<0 (S_1^*)^-i p x̃_ip ⊗ v^i ỹ_i+ px_0p⊗ y_0+ ∑_i>0 (px_ip)S_1^i ⊗ y_i v^i = ∑_i<0 ((S_1⊗ v)^*)^-i (p x̃_ip ⊗ỹ_i)+ px_0p ⊗ y_0+ ∑_i>0 ((px_ip) ⊗ y_i)(S_1 ⊗ v)^i. Thus, p̃ E p̃ is generated as a C^*-algebra by p (𝒦⊗ M_2^∞)^+ p ⊗𝒞(X) = e_11⊗ M_2^∞⊗𝒞(X) together with S̃_1. Recall from (<ref>) that S̃_i_1…S̃_i_kS̃_j_k^* …S̃_j_1^* = e_11⊗ e_i_1 j_1⊗…⊗ e_i_k j_k⊗1_M_2⊗…⊗1_𝒞(X), whence e_11⊗ M_2^∞⊗𝒞(X) is generated by S̃_1, S̃_2 and e_11⊗1_M_2^∞⊗𝒞(X). Therefore, p̃ E p̃ is generated by S̃_1, S̃_2 and e_11⊗1_M_2^∞⊗𝒞(X), which yields that p̃ E p̃ is contained in B. We further observe that p̃Ep̃ = p̃ F p̃, where F = C^*(𝒦⊗ M_2^∞⊗𝒞(X), u ⊗ v). 
By the universal property of the crossed product, there exists a ^*-homomorphism (𝒦⊗ M_2^∞⊗𝒞(X)) ⋊_Φ⊗α⟶ F, the image of which contains p̃ F p̃ as a hereditary subalgebra. By <cit.>, the crossed product (𝒦⊗ M_2^∞⊗𝒞(X)) ⋊_Φ⊗α is simple, hence so is its image in F, and therefore also the hereditary subalgebra p̃ F p̃, which in turn is isomorphic to B. We proceed by taking infinite tensor products and by <cit.>, we again obtain a C^*-diagonal D^⊗∞⊂ B^⊗∞. The spectrum of D^⊗∞ is the product of the individual spectra and therefore again a Cantor space. Note that B^⊗∞ is still simple, separable, unital and nuclear. Furthermore, 𝒪_2 embeds unitally into increasingly higher tensor factors of B^⊗∞, hence embeds unitally into the central sequence algebra ℓ^∞( , B^⊗∞) / c_0( , B^⊗∞) ∩ (B^⊗∞)^'. This yields B^⊗∞≅ B^⊗∞⊗𝒪_2 (<cit.>, <cit.>), since 𝒪_2 is strongly self-absorbing (cf. <cit.>). Now by Kirchberg's 𝒪_2-absorption theorem (<cit.>, <cit.>) and since B^⊗∞ is simple, separable, unital and nuclear, we have B^⊗∞≅ B^⊗∞⊗𝒪_2 ≅𝒪_2; cf. <cit.>. Hence, we have found a C^*-diagonal in 𝒪_2 whose spectrum is a Cantor space and the proof of our theorem is complete. We close with a remark and some open questions which are motivated by both our construction and our result. B cannot coincide with 𝒪_2 and it is therefore necessary to take the infinite tensor product B^⊗∞ in our construction. Indeed, S. Evington pointed out to us that in general B carries interesting and non-trivial K-theoretic information; this will be pursued in subsequent work. (i) To what extent does our construction depend on the choice of the Cantor minimal system, and what would be a distinguishing invariant? (ii) Do (D ⊂ B) or (D^⊗∞⊂ B^⊗∞) have finite diagonal dimension in the sense of <cit.>? (iii) Does the Cuntz algebra 𝒪_n for n ≥ 3, or a general Kirchberg algebra contain a C^*-diagonal (with Cantor spectrum)? plain
http://arxiv.org/abs/2409.02548v1
20240904091000
A Joint Time and Energy-Efficient Federated Learning-based Computation Offloading Method for Mobile Edge Computing
[ "Anwesha Mukherjee", "Rajkumar Buyya" ]
cs.DC
[ "cs.DC" ]
A Joint Time and Energy-Efficient Federated Learning-based Computation Offloading Method for Mobile Edge Computing Anwesha Mukherjee and Rajkumar Buyya, Fellow, IEEE A. Mukherjee is with the Department of Computer Science, Mahishadal Raj College, Mahishadal, West Bengal, 721628, India, and a Research Visitor in the Cloud Computing and Distributed Systems (CLOUDS) Laboratory, School of Computing and Information Systems, University of Melbourne, Australia. (e-mail: anweshamukherjee2011@gmail.com). R. Buyya is with the Cloud Computing and Distributed Systems (CLOUDS) Laboratory, School of Computing and Information Systems, University of Melbourne, Australia. (e-mail: rbuyya@unimelb.edu.au). ============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================ § ABSTRACT Computation offloading at lower time and lower energy consumption is crucial for resource limited mobile devices. This paper proposes an offloading decision-making model using federated learning. Based on the task type and the user input, the proposed decision-making model predicts whether the task is computationally intensive or not. If the predicted result is computationally intensive, then based on the network parameters the proposed decision-making model predicts whether to offload or locally execute the task. According to the predicted result the task is either locally executed or offloaded to the edge server. The proposed method is implemented in a real-time environment, and the experimental results show that the proposed method has achieved above 90% prediction accuracy in offloading decision-making. The experimental results also present that the proposed offloading method reduces the response time and energy consumption of the user device by ∼11-31% for computationally intensive tasks. A partial computation offloading method for federated learning is also proposed and implemented in this paper, where the devices which are unable to analyse the huge number of data samples, offload a part of their local datasets to the edge server. For secure data transmission, cryptography is used. The experimental results present that using encryption and decryption the total time is increased by only 0.05-0.16%. The results also present that the proposed partial computation offloading method for federated learning has achieved a prediction accuracy of above 98% for the global model. Federated learning, Offloading, Training Time, Encryption, Energy consumption. § INTRODUCTION The energy and latency-aware computation offloading has gained a significant research interest in the last few years with the rapid growth in wireless technology and the growing popularity of Internet of Things (IoT). The IoT devices and the mobile devices including laptops, tablets, smartphones, have limited battery life and computational resources. In such a case, execution of computationally intensive tasks may not be feasible inside these mobile devices. 
However, even if the device is able to execute the task, local execution may take more time than offloading the task to the edge server or cloud. On the other hand, offloading all tasks may not be fruitful from the perspective of latency and energy consumption: for tasks with little computation, offloading can consume more time than local execution. Hence, the decision between offloading and local execution is important. Nowadays, mobile devices run various types of recommendation applications, for which model training is important. Many applications use reinforcement learning, machine learning, or deep learning algorithms to train the model. Further, local model training may not always be possible for resource-constrained mobile devices. However, the user's data is confidential, and the user may not wish to share the data with the server. Also, transmitting a large amount of data to the server for analysis creates a huge overhead on the server and greatly increases the network traffic. Federated learning (FL) <cit.> is an emerging technology where the devices locally train models using their local datasets and exchange model updates with the server. Finally, a global model is developed, and each device that serves as a client has a personalized model and the global model. The use of FL protects data privacy as no data sharing takes place, and through collaborative learning a global model with high prediction accuracy can be obtained. However, a device may not be able to analyse a large dataset using deep learning algorithms. In that case, the dataset is partitioned into two parts: one part is used for local model training, and the other part is offloaded to the server. However, during data offloading to the server, protection of data privacy is vital. §.§ Motivation and Contributions A mobile device has a computational task to execute. The decision regarding whether to locally execute it or offload it to the edge server is crucial with respect to the response time and energy consumption. Hence, a decision-making model is required to decide whether to offload or locally execute. The number of data samples available for training on a single mobile device may not be very high, and the network parameters' values also change over time. In such a case, a deep learning model trained on a comparatively small number of samples may not provide predictions with high accuracy. Further, the users' data are different, and hence the local model weights are different. Thus, collaborative training can provide a better prediction model so that the response time for task execution and the energy consumption of the user device during that period can be minimal. The first motivation of this work is to develop a collaborative decision-making model with high prediction accuracy to decide whether to offload or locally execute. The use of FL for collaborative model training with privacy protection has gained popularity for various IoT applications. In centralized federated learning (CFL), the devices working as clients train local models with their own datasets and exchange model updates with the server to build a global model <cit.>. In decentralized federated learning (DFL), the devices form a collaborative network among themselves to exchange model updates for developing a generalized model <cit.>. However, mobile devices may not always be able to locally train a deep learning model using a huge number of data samples. In such a case, conventional CFL and DFL models may not be feasible. 
The second motivation of this work is to propose a secure partial computation offloading framework for FL to deal with this issue. To address these issues, our paper makes the following contributions: * To address the first motivation, a CFL method is proposed for decision-making regarding whether to offload or locally execute. The proposed model is named as FL-based Offloading Decision-Making Model (FLDec). At first FLDec is used to decide whether a task is computationally intensive or not, based on the requested computation and user input. Here, as the underlying data analysis model Multi-layer Perceptron (MLP) is used, which is suitable for nonlinear function classification tasks. If the prediction result is computationally intensive, then the FLDec is used to decide whether to offload the task or locally execute it, based on the network parameters such as uplink and downlink traffic, network throughput, etc. Here, Long Short-Term Memory (LSTM) network is used as the underlying data analysis model, which is suitable for sequential data analysis. As the network parameters like throughput, uplink and downlink traffic are dynamic and change over time, we use LSTM in this case as the underlying model to capture the sequential pattern. * To address the second motivation, a secure partial computation offloading method for FL is proposed and implemented, which is referred to as Federated Offloading (FedOff). The user devices have large data samples, which are split into two parts. One part is used as the local dataset to train the local model, and the other part is offloaded to the edge server after encryption. The devices train their local models and exchange model updates with the edge server. The edge server after receiving data from the devices, trains a model with the received dataset after decryption, and stores the model update. After receiving model updates from the devices i.e. clients, the server performs aggregation considering the received updates as well as the stored update. The updated global model is then shared with the devices. In this work, we use symmetric key cryptography for data encryption during transmission to the edge server. However, the keys of individual devices are different. The rest of the paper is organized as follows. Section <ref> presents the existing works on offloading and FL. Section <ref> demonstrates the proposed offloading method. Section <ref> evaluates the performance of the proposed approach. Finally, Section <ref> concludes the paper. § RELATED WORK The existing research works on computation offloading, FL, and federated offloading are briefly discussed in this section. The computation offloading is a significant research area of mobile cloud computing (MCC). Service delay minimization <cit.> is one of the primary issues for mobile devices. The computation offloading from mobile device to the remote cloud suffered from delays, connectivity interruption, etc. <cit.>. The use of edge computing (EC) overcomes this problem by bringing resources at the network edge. The IoT/mobile devices are connected with the edge server, and the edge server is connected with the cloud <cit.>. The mobile devices offload the resource exhaustive computations to the edge server. The edge server executes the computation and sends the result to the mobile device, or forwards the request to the cloud server. The cloud server after executing the computation sends the result. The computation offloading in mobile edge computing (MEC) was discussed in <cit.>. 
The computation offloading in EC networks was illustrated in <cit.> along with a discussion on optimization methods. A joint task offloading approach with resource allocation was proposed for MEC environment in <cit.>. In <cit.>, user mobility-based computation offloading method was proposed using game theory. In <cit.>, the use of machine learning (ML) in computation offloading was reviewed. The reinforcement learning (RL) also gained popularity in computation offloading. In <cit.>, the use of RL in computation offloading was discussed in detail. The use of deep RL for task offloading was discussed in <cit.>. A deep RL-based method for action space recursive decomposition was proposed in <cit.>. FL with edge computing has gained popularity in recent years <cit.>. In <cit.>, an edge-based FL was proposed where the edge servers train their local models and exchange updates with the cloud to build the global model. However, in MEC-based FL, the mobile devices can also train their local models with own datasets and exchange model updates with the edge server that is connected with the cloud. The edge server performs aggregation in that case. The edge servers can also exchange their model updates with the cloud for developing a global model, and in that case cloud server acts as the aggregator. In <cit.>, FL with deep RL was used for computation offloading. Bidirectional LSTM (Bi-LSTM)-based FL was proposed for offloading decision-making and resource allocation in <cit.>. In <cit.>, FL was used for computation offloading in Edge-IoT systems. In <cit.>, FL-based task offloading was proposed based on multi-feature matching. To mitigate the straggler effect of FL, an offloading method was proposed in <cit.>. To deal with straggler effect and accelerate the local training in resource-constrained devices, offloading in FL was discussed in <cit.>. Limitations of existing approaches and comparison with proposed work: In the existing FL-based offloading approaches, either the offloading decision-making using FL or partial computation offloading in FL, was explored. In our work, we propose a computation offloading decision-making framework using FL, as well as a partial computation offloading method for FL. Unlike the related works on offloading decision-making <cit.>, we have not only measured the time and energy consumption for the proposed methods but also the accuracy and model loss for the proposed approaches. In <cit.> and <cit.>, the authors though measured the model loss, the energy consumption was not measured. In comparison with the existing partial computation offloading methods <cit.>, the uniqueness of the proposed work is the use of cryptography for secure data transmission to the server. The comparison of the proposed and existing offloading approaches is presented in Table <ref>. From the table we observe that the proposed approach is unique compared to the existing schemes. § PROPOSED OFFLOADING METHODS The proposed approach is divided into two parts. The first part is on decision-making regarding offloading. In this case, we have used FL to build a decision-making model regarding whether to offload or not. The second part is on partial computation offloading in FL. In the second part, we propose a secure partial computation offloading framework for FL for resource limited mobile devices. §.§ Proposed Offloading Decision-making Method We propose a decision-making method regarding whether to offload a task or not, based on its computational intensiveness, network parameters, etc. 
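To fix ideas before the formal description, the two-stage inference step of FLDec on a user device can be sketched in Python as follows. This is only an illustrative sketch: the model objects intensiveness_mlp and offload_lstm stand for the FL-trained MLP and LSTM detailed in the following subsections, and the numerical feature encoding and the 0.5 decision threshold are assumptions made for illustration; the actual procedure is the one given in the Algorithms and Table referenced below.

# Minimal sketch of the FLDec inference pipeline on a user device (illustrative).
import numpy as np

def fldec_decide(task_type, task_input, net_window,
                 intensiveness_mlp, offload_lstm, prefers_remote=False):
    """Return 'local', 'edge', or 'cloud' for a requested task C."""
    if prefers_remote:                 # user explicitly asked for remote access
        return "cloud"

    # Stage 1 (MLP): is the task computationally intensive?
    # task_type and task_input are assumed to be numerically encoded features.
    task_features = np.array([[task_type, task_input]], dtype=np.float32)
    intensive = intensiveness_mlp.predict(task_features).ravel()[0] > 0.5
    if not intensive:
        return "local"

    # Stage 2 (LSTM): given recent network conditions (throughput, latency,
    # uplink/downlink traffic over a short time window), offload or not?
    seq = np.expand_dims(np.asarray(net_window, dtype=np.float32), axis=0)
    offload = offload_lstm.predict(seq).ravel()[0] > 0.5
    return "edge" if offload else "local"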
The mathematical notations used in our approach FLDec are summarized in Table <ref>. The problem statement and the proposed method are discussed as follows. §.§.§ Problem statement A device has a computational task C that is to be executed locally or remotely. * Objective 1: Decide whether C is computationally intensive or not. * Objective 2: If it is computationally intensive, then decide whether to offload C or not. §.§.§ Proposed framework To address the first and second objectives, we use the FL-based model. The steps of developing a decision-making model using FL are presented in Algorithm <ref>. Here, the clients are the participating user devices, and the server is the edge server. For the first objective, we use MLP as the underlying data analysis model. Based on the requested computational task and the respective input, it is predicted whether the task would be computationally intensive or not. For a computational task, the input plays an important role. For example, multiplication of two 3x3 matrices takes much less time to compute than the multiplication of two 300x300 matrices. Thus, based on the input as well as the task type (matrix operation, sorting, searching, etc.), it is decided whether the task is likely to be computationally intensive or not. To address the second objective, we use LSTM as the underlying model. Based on network parameters such as network throughput, latency, uplink and downlink traffic, etc., it is decided whether to offload the task or not. If the decision is to offload, C is offloaded to the edge server. If the edge server is unable to execute C, then it forwards the request to the cloud. The cloud server executes the task and sends the result to the device. The proposed offloading method is presented in Algorithm <ref>. The proposed method is pictorially depicted in Fig. <ref>. §.§.§ Computational complexity The time complexity of the FL process depends on the time complexity of model initialization, local model training, exchange of model updates, and aggregation. The time complexity of model initialization is O(1). The computational complexity of model initialization is given as O(w_ℳ_in), where w_ℳ_in denotes the weights of the initial model. The time complexity of local model training is given as O(ℛ·ℰ· (𝒟_k/B) · w_k), where w_k denotes the model weights for client k. The computational complexity of local model training is given as O(ℛ·ℰ· (𝒟_k/B) · w_k · |𝒦|). The time complexity for model aggregation is given as O(ℛ· |𝒦| · w_k). The computational complexity for model aggregation is also given as O(ℛ· |𝒦| · w_k). The time complexity and computational complexity for the exchange of model updates are both given as O(w_ℳ + |𝒦|· w_k), where w_k denotes the model weights of client k and w_ℳ denotes the model weights of the server. §.§ Proposed Federated Offloading Method In this section, we first discuss the problem statement, and then propose a partial computation offloading method for FL, which is referred to as federated offloading. In the proposed method, the user devices are the clients and the server is the edge server. The mathematical notations used in our approach FedOff are summarized in Table <ref>. §.§.§ Problem statement Suppose the model of an application A needs to be developed using FL, each device has a respective local dataset 𝒟_k_A, and the number of rounds is ℛ_A. In FL, each of the participating devices needs to locally train the model with its local dataset. However, model training with a large number of data samples may not be feasible for all the devices. 
In that case, the devices offload a portion of their datasets to the server. The server trains the model with the offloaded datasets and stores the model. During aggregation, the server considers this trained model along with the model updates received from the devices. However, how large a portion of the data should be offloaded is a significant question. Further, data security during offloading to the server is another issue. The objective is to propose a secure federated offloading method with good prediction accuracy and minimal loss. §.§.§ Proposed framework In the proposed federated offloading method, each participating user device k splits its local dataset into two parts: one part is processed locally (D_k_loc) and the other part is sent to the edge server (D_k_ser). After splitting the dataset, the client sends the second part (D_k_ser) to the server. The server receives D_k_ser from each participating device k, accumulates the collected data, and fits its model with the resulting dataset. Each device k trains its model with its local dataset (D_k_loc) and sends the model update to the server. After receiving the model updates from the clients, the server performs aggregation, including the model update obtained from the offloaded data. The process of model update is repeated for the number of rounds ℛ_A. The proposed federated offloading method is presented in Fig. <ref>. During data transmission from the clients to the server, encryption is used to protect the data from unauthorized access. §.§.§ Computational complexity The time complexity of the proposed federated offloading method depends on the time complexity of model initialization, data transmission from clients to the server, local model training, exchange of model updates, model training at the server, and aggregation. The time complexity of model initialization is O(1). The computational complexity of model initialization is given as O(w_ℳ_in), where w_ℳ_in denotes the weights of the initial model. The time complexity and computational complexity of data transmission from clients to the server are both given as O(|𝒦|· (𝒟_k_ser/S)), where S denotes the data transmission speed. The time complexity of local model training is given as O(ℛ_A ·ℰ· (𝒟_k_loc/B_A) · w_k_A), where w_k_A denotes the model weights for client k for application A. The computational complexity of local model training is given as O(ℛ_A ·ℰ· (𝒟_k_loc/B_A) · w_k_A·|𝒦|). The time complexity and computational complexity for the exchange of model updates are both given as O(w_ℳ_A + |𝒦|· w_k_A), where w_k_A denotes the model weights of client k and w_ℳ_A denotes the model weights of the server for application A. The time complexity of model training at the server is given as O(ℛ_A ·ℰ· (𝒟_loc/B_A) · w_ℳ_A), where w_ℳ_A denotes the model weights of the server for application A. The time complexity and computational complexity for model aggregation are both given as O(ℛ_A · N_model· w_i), where w_i denotes the model weights of ℳ_i and ℳ_i ∈ Model_all. §.§.§ Total time consumption The total time consumption in the proposed federated offloading method is determined as follows: T_foff=T_train+T_crypt, where T_train=T_init+T_tr+T_loc+T_exm+T_ser+T_agg, with T_init=O(1), T_tr=O(|𝒦|· (𝒟_k_ser/S)), T_loc=O(ℛ_A ·ℰ· (𝒟_k_loc/B_A) · w_k_A), T_exm=O(w_ℳ_A + |𝒦|· w_k_A), T_ser=O(ℛ_A ·ℰ· (𝒟_loc/B_A) · w_ℳ_A), and T_agg=O(ℛ_A · N_model· w_i), where w_i denotes the model weights of ℳ_i and ℳ_i ∈ Model_all. 
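To make the training procedure above concrete, the following Python sketch outlines one FedOff round. It is a schematic under stated assumptions rather than the exact implementation: local_train, the Keras-style set_weights/fit/get_weights model interface, and the treatment of the server-side model (trained on the pooled offloaded partitions) as one additional FedAvg participant weighted by its sample count are illustrative choices.

# Schematic of one FedOff round: clients train on their retained partitions,
# the server trains on the union of the offloaded partitions, and all updates
# are aggregated with FedAvg-style weighting by sample count.
import numpy as np

def fedavg(weight_sets, sample_counts):
    """Sample-count-weighted average of per-layer weight arrays."""
    total = float(sum(sample_counts))
    n_layers = len(weight_sets[0])
    return [sum((n / total) * w[layer] for w, n in zip(weight_sets, sample_counts))
            for layer in range(n_layers)]

def fedoff_round(global_weights, clients, server_model, offloaded_data, local_train):
    """One round of FedOff; returns the new global weights."""
    updates, counts = [], []

    # Server-side update on the (decrypted) offloaded partitions D_k_ser.
    x_off, y_off = offloaded_data
    server_model.set_weights(global_weights)
    server_model.fit(x_off, y_off, epochs=1, verbose=0)
    updates.append(server_model.get_weights())
    counts.append(len(x_off))

    # Client-side updates on the retained partitions D_k_loc.
    for client in clients:
        w_k, n_k = local_train(client, global_weights)  # -> (weights, |D_k_loc|)
        updates.append(w_k)
        counts.append(n_k)

    # Aggregate the server update together with all client updates.
    return fedavg(updates, counts)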
§ PERFORMANCE EVALUATION In this section, firstly we analyse the performance of the proposed offloading decision-making method, and then analyse the performance of the proposed federated offloading framework. We have performed a real experimental analysis in the CLOUD lab, The University of Melbourne, to evaluate the performance of the proposed approaches. Python 3.8.10 has been used for implementation, and Tensorflow has been used for deep learning-based data analysis. For exchange of model updates in FL, MLSocket has been used. §.§ Performance of FLDec The proposed offloading decision-making model, FLDec, is evaluated based on prediction accuracy, training time, training energy consumption, and the response time in task execution and energy consumption of the user device during that period considering different real-time offloading case studies. The time consumption is measured in seconds (s) and the energy consumption is measured in joule (J). §.§.§ Experimental setup For the experiment, we have provisioned four virtual machines from the RONIN cloud environment of Amazon AWS, to act as four user nodes. Each of the user nodes has 4GB memory. We also have provisioned two virtual machines from the RONIN cloud for using as the edge server and the cloud server instance. The edge server has 4GB memory and the processor is Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz. The cloud server instance has 4GB memory and 2vCPUs. The number of rounds in FL has been set to 10, and α_r=1. §.§.§ Prediction accuracy, Training time, and Energy consumption of user device In the proposed framework, CFL is used. For predicting computational intensiveness of a task, we use MLP as the underlying model, and for offloading decision-making, we use LSTM as the underlying model. The parameter values used in MLP and LSTM are presented in Table <ref>. We have used our own dataset[<https://github.com/AnuTuli/OffFed>] for predicting whether a task is computational intensiveness or not. We have executed several computational tasks such as basic calculator design, matrix multiplication, file creation, sorting, searching, etc., using different input values. Based on the outcomes, we developed the dataset. We have obtained the prediction accuracy of 94.74% for the global model using the FL-based decision-making model, based on MLP. The results are presented in Fig. <ref>. For the local models we have achieved an accuracy of 91.17-94.74% after round three. The global model loss after round 10 is presented in Fig. <ref>. As we observe from the figure, the model converges when the loss becomes minimal. After predicting the computational intensiveness, we have used another dataset[<https://www.kaggle.com/datasets/ucimachinelearning/task-offloading-dataset/data>] that considers network parameters to predict whether to offload or locally execute, using the proposed FL-based decision-making model, based on LSTM. The accuracy, precision, recall, and F1-score, achieved by the global model and the local models of the four participating devices after round 5 are presented in Fig. <ref>. As we observe, the global model has achieved an accuracy of >99% and the local models have achieved >97% accuracy. The precision, recall, and F1-score are ≥0.99 for the global model, and ≥0.96 for the local models. The global model loss after round 10 is presented in Fig. <ref>, and we observe that the model converges when the loss becomes minimal. The confusion matrix of the global and local models are presented in Fig. <ref>. 
Here, two classes are presented as the two decisions: (1) locally execute, (2) offload. The total training time of the edge server including the communication time is 110.36s. The training time including the communication time for each of the participating devices are presented in Fig. <ref>, and we observe that the time consumption is 200-350s for the local models. The energy consumption of the participating devices during the entire period are also measured and presented in Fig. <ref>, and we observe that the energy consumption of the device is 1000-1600J during the training periods. Comparison with decision-making without FL: The accuracy, precision, recall, and F1-score achieved by the local and global models using LSTM without FL are also measured, and we have observed that the prediction accuracy is above 65% for all the local models and above 70% for the global model. The precision, recall, and F1-score for the all local models are above 0.6 and for the global model it is ≥0.6. The results are presented in Fig. <ref>. As we observe, the accuracy, precision, recall, and F1-score using FL are better for the global model as well as for all the local models. §.§.§ Task offloading, Response time, and Energy consumption of user device We have considered different tasks in our experiment. The FL-based model is used for offloading decision-making. At first the model is used to check whether the requested task is computationally intensive or not based on the input features such as task type and the input. If the task is predicted as computationally intensive, then, based on the input features such as uplink traffic, downlink traffic, bandwidth, latency, etc., the decision is taken whether to offload or locally execute it. For the tasks with simple computation usually local execution is performed. However, sometimes the user may need to remotely access the content or result of computation. Thus, we have considered user preference also when a request is received. If the user looks for remote access, the task is offloaded to the cloud through the edge server. Otherwise, based on the outcome of the FL-based decision-making model, the task is either locally executed or offloaded to the edge server. The response time for the considered task execution scenarios and the energy consumption of the user device during the period, are presented in Table <ref>. As the first task type, we have considered a basic calculator design that performs four operations: addition, subtraction, multiplication, and division. Most of the user devices contain this basic application, which can work irrespective of the Internet connectivity. The user inputs numbers, and the calculator generates the results according to the requested operation. As the second task type, we have considered matrix multiplication, where based on the order taken as input from the user, it is decided whether it would be computationally intensive or not. If the prediction result is computationally intensive, the FL-based decision-making model is used to predict whether to offload or not based on the present network throughput, uplink and downlink traffic, etc. Based on the prediction result, the task is either offloaded or locally executed. Here, we have considered four different cases. For the first case, the two matrices are of order 50x50, and based on the prediction result, the task has been executed locally. The response time and energy consumption of the device during the period are measured, and presented in Table <ref>. 
For the other three cases, based on the prediction results, the tasks are offloaded. The response time and energy consumption of the device during the period are measured, and presented in Table <ref>. As the third task type, we have considered file creation. The file has been created locally or remotely, according to the user preference. We have measured the response time and energy consumption of the device during the period, and presented the results in Table <ref>. For evaluating the performance for a wide range of users, we performed a simulation in MATLAB2024. The average data transmission speed considered in the simulation is 1000Mbps, and the task size is randomly chosen from the range 10,000-50,000 bits. In Fig. <ref>, the average response time for 100-1000 users' requests for task execution are presented. The average energy consumption of the user device during the periods are also computed and presented in Fig. <ref>. As we observe the average response time lies in the range of 3-4.25s, and the average energy consumption lies in the range of 14-22J. Comparison of task offloading using FLDec with baselines: As the baseline for comparison, we considered two scenarios: (i) all tasks are offloaded, (ii) all tasks are locally executed. We observe from Table <ref> that for the task Calculator, the proposed scheme FLDec suggested local execution. If the task has been offloaded, then the response time and energy consumption for addition, subtraction, multiplication, and division operations would be ∼90%, ∼95%, ∼85%, and ∼96%, higher than the local execution. For the task matrix multiplication, four scenarios are considered as presented in Table <ref>. For the first scenario, the FLDec has decided local execution, and it has ∼57% lower time and energy consumption than offloading it. For the other three scenarios, FLDec has suggested offload, and we have observed that the response time and energy consumption both are reduced by ∼11-31% than local execution. Hence, we observe that if all the considered tasks are locally executed, then for higher order matrix multiplication, the response time and energy consumption would be high. Similarly, if all the considered tasks are offloaded, for calculator, the response time and energy consumption would be high for all of the considered operations, and for matrix multiplication the first scenario would take more time and the energy consumption also would be high. Thus, from the results we observe that the proposed FL-based decision-making model has taken appropriate decisions for the considered case studies, and reduced the response time and energy consumption. §.§ Performance of FedOff The performance of the proposed partial computation offloading method for FL which is referred to as federated offloading or FedOff, is evaluated in terms of prediction accuracy, precision, recall, and F1-Score of the global as well as local models, total time consumption, and energy consumption of the user device during the period. The implementation diagram of the proposed framework is presented in Fig. <ref>. §.§.§ Experimental setup To evaluate the performance of FedOff, we have provisioned six virtual machines from the RONIN cloud environment. Among them five act as the clients, and one acts as the edge server. Each of the clients has 4GB memory. The edge server has 4GB memory and the processor is Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz. The number of rounds was set to 5, and α_r=1. 
To evaluate the performance of the proposed federated offloading method, we have considered an activity recognition dataset[<https://www.kaggle.com/datasets/nurulaminchoudhury/harsense-datatset>]. Activity recognition is one of the popular application used by mobile users. Thus, we consider this application in our work. §.§.§ Prediction accuracy The dataset has been split among five user devices as their local datasets and the server as the global dataset. The clients i.e. devices split their datasets into the following ratios: (i) local-25%, offload-75%, (ii) local-50%, offload-50%, and (iii) local-75%, offload-25%. The portion of data to be offloaded has been encrypted using Fernet cryptography, and then sent to the server. Fernet is a symmetric key cryptography method, which is based on AES-128 for encryption and HMAC using SHA256 for authentication. As symmetric key cryptography is used, the same key is used for encryption as well as decryption. However, all the devices have different keys. As the latency is vital, we have considered symmetric key cryptography in FedOff that can protect data security but at lower time consumption. We have measured the latency in data transmission from the experiment. The accuracy, precision, recall, and F1-Score of the global model, achieved by FedOff are presented in Fig. <ref>. The average accuracy, precision, recall, and F1-Score of the local models, using FedOff are presented in Fig. <ref>. The results show that FedOff has achieved >98% accuracy for the global model and >94% accuracy for the local models. The precision, recall, and F1-score are 0.98 for the global model, and >0.92 for the local models. We have also conducted the experiment for conventional FL, where no data offloading takes place, and only model updates are exchanged between the clients and the server. We have also conducted the experiment for the case where the devices are unable to train models, and the data from the devices are fully offloaded to the edge server. The edge server trains the model for individual datasets, and then performs aggregation. After developing the model, the server sends it to the clients, so that they can perform prediction using the model in future. The accuracy, precision, recall, and F1-Score of the global model for conventional FL and full offloading are presented in Fig. <ref>. The average accuracy, precision, recall, and F1-Score of the local models for conventional FL are presented in Fig. <ref>. As we observe, the accuracy, precision, recall, and F1-Score for the global model in the proposed method are almost same as that of conventional FL, and better than the case of full offloading. The global model loss for FedOff for the considered three scenarios ((i) local-25%, offload-75%, (ii) local-50%, offload-50%, and (iii) local-75%, offload-25%) are presented in Fig. <ref>. We observe that for each of the case, the loss tends to 0, which along with the nature of the curve indicates that the model has converged. §.§.§ Time and Energy consumption The total time for FedOff for the three considered scenarios ((i) local-25%, offload-75%, (ii) local-50%, offload-50%, and (iii) local-75%, offload-25%) are measured as shown in Fig. <ref>. The total time includes the total training time (training time plus communication time for model updates exchange) considering all rounds, the time consumption for data transmission, and the time consumption for data encryption and decryption. 
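The encryption and decryption steps accounted for in this total time use Fernet as described above; a minimal usage sketch is given below. The serialization with pickle and the way each device's key is made available to the server are illustrative assumptions, not the exact implementation. On the server side, the decryption is applied per device (with that device's key) before the offloaded samples are pooled for training.

# Minimal sketch of securing the offloaded partition D_k_ser with Fernet
# (AES-128 with HMAC-SHA256 authentication, one symmetric key per device).
import pickle
from cryptography.fernet import Fernet

device_key = Fernet.generate_key()   # held by the device and shared with the server
cipher = Fernet(device_key)

def encrypt_partition(x_offload, y_offload):
    """Serialize and encrypt the offloaded samples before transmission."""
    payload = pickle.dumps((x_offload, y_offload))
    return cipher.encrypt(payload)          # bytes sent to the edge server

def decrypt_partition(token, key):
    """Server side: recover the offloaded samples with the device's key."""
    payload = Fernet(key).decrypt(token)
    return pickle.loads(payload)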
The total time for conventional FL and for full offloading is also measured and presented in the figure. The average energy consumption of the devices during the total period is presented in Fig. <ref>. As we observe, the total time and energy consumption are lowest in conventional FL, where the devices locally analyse the full dataset and exchange model updates with the server. However, if the devices cannot analyse the full dataset, then case 1 consumes the least time and energy. Case 2 can also be adopted, where 50% of the data is locally analysed and the remaining 50% is offloaded to the server. If the device is unable to analyse the dataset at all, then the entire dataset is offloaded to the server. We observe from Figs. <ref> and <ref> that the use of cryptography for privacy protection during data transmission consumes very little additional time and energy. The use of encryption and decryption increases the time and average energy consumption by only 0.05-0.16% but ensures data privacy protection during transmission. §.§ Comparison with Existing Offloading Methods This section compares the proposed approaches with the existing offloading schemes. Comparison with existing offloading schemes using simulation: In Fig. <ref>, the total time in task execution using FLDec is compared with the task execution time of existing offloading schemes <cit.>, with respect to the task size. For this comparison, we have performed a simulation based on different task sizes (1000-3000 bits), while performing task offloading or local execution using FLDec, and determined the time consumption in task execution. The total energy consumption (device and edge server) during the task execution period is also determined and compared with the existing offloading schemes in Fig. <ref>, with respect to the task size. As we observe from the results, FLDec has ∼75-79%, ∼47-57%, and ∼66-74% less time consumption than the joint task offloading and resource allocation method (JORA) <cit.>, the multi-feature matching scheme (MFM) <cit.>, and the action space recursive decomposition method (ASRD) <cit.>, respectively. We also observe that FLDec has ∼33-42%, ∼45-58%, and ∼65-68% less energy consumption than MFM, JORA, and ASRD, respectively. Since FLDec uses an FL-based decision-making model to decide whether to offload or not based on task type, user input, network parameters, etc., accurate decision-making is achieved, and the time consumption as well as the energy consumption in task execution are minimal. Finally, we observe that FLDec outperforms the existing offloading approaches with respect to total time and energy consumption in task execution. Comparison with existing schemes for offloading in FL using experiments: We carried out a comparative study between the proposed partial computation offloading method FedOff and the existing offloading methods for FL. As the datasets used are different, we have considered the contributions for comparison along with prediction accuracy and training time. The comparative study is presented in Table <ref>, and we observe that the model loss was 0.2-0.5 in <cit.>, and the system delay was 10-50s per round. The number of rounds in <cit.> was 100 to converge and reach minimum loss. Hence, the total delay was 1000-5000s in <cit.>, considering all rounds. The prediction accuracy was 80% for <cit.>, and the training time was 343-701s per round. The total number of rounds in <cit.> was 100 to converge. 
Hence, the total training time was 34300-70100s in <cit.>, considering all rounds. As we observe, FedOff achieved >98% accuracy for the global model and >94% accuracy for the local models. The number of rounds to converge and get minimum loss in FedOff was 5. The total time (training time considering all rounds+data transmission time+time consumption for data encryption and decryption) was 200-500s in FedOff, and the average training time per round was 40-100s. The average energy consumption of the participating user devices during the total training period was 1000-2500J. Unlike our proposed method, related works <cit.> have not considered the energy consumption and secure data offloading aspects. § CONCLUSION AND FUTURE WORK This paper proposes an FL-based offloading decision-making model for mobile devices. Based on the task type and the user input, the FL-based decision-making model predicts whether the task is computationally intensive or not, using MLP. If the predicted result is computationally intensive, then the proposed FL-based decision-making model predicts whether to offload or locally execute the task based on the network parameters, using LSTM. Based on the predicted result the task is either locally executed or offloaded to the edge server. If the edge server is unable to execute it or if the user preference is remote execution, then, the task is offloaded to the cloud. The experimental results present that the proposed offloading decision-making method has achieved above 90% prediction accuracy, and the considered case scenarios show that the proposed method provides the result at lower response time and lower energy consumption of the user device. The results also present that the proposed offloading method has reduced the response time by ∼11-31% for computationally intensive tasks. We have also proposed a partial computation offloading method for FL in this paper, where the user devices offload a part of their local dataset to the edge server if they are unable to analyse a huge dataset. For secure data transmission at lower time symmetric key cryptography has been used. The experimental results show that the proposed partial computation offloading method for FL has achieved >98% accuracy for the global model and >94% accuracy for the local models. The results also present that the use of cryptography increases the time by 0.05-0.16% but ensures data privacy protection during transmission. As part of our future work, we will extend the proposed work to enhance prediction accuracy in situations such as when the user's requested task does not fall in the trained task category or the input provided for the task fall apart from the training dataset. Generative artificial intelligence can be used with FL for providing a good prediction model if the training dataset has a very limited number of samples. IEEEtran -0.05plus -1fil [ < g r a p h i c s > ] Anwesha Mukherjee is an Assistant Professor of the Department of Computer Science, Mahishadal Raj College, West Bengal, India. She is also a Research visitor in the CLOUDS Lab, The University of Melbourne, Australia. Her research interests include mobile computing, IoT, cloud computing, edge computing, machine learning, and federated learning. -1.5plus -1fil [ < g r a p h i c s > ] Rajkumar Buyya (Fellow, IEEE) is a Redmond Barry distinguished professor and director of the Cloud Computing and Distributed Systems (CLOUDS) Laboratory, The University of Melbourne, Australia. 
He has authored more than 800 publications and seven text books including “Mastering Cloud Computing” published by McGraw Hill, China Machine Press, and Morgan Kaufmann for Indian, Chinese and international markets respectively. He is one of the highly cited authors in computer science and software engineering worldwide (h-index=168, g-index=371, 151,200+ citations).
http://arxiv.org/abs/2409.03608v1
20240905151428
Temperature shift of magnetic-field-dependent photoluminescence features of nitrogen-vacancy ensembles in diamond
[ "Irena Rodzoń", "Xue Zhang", "Viktor Ivády", "Huijie Zheng", "Arne Wickenbrock", "Dmitry Budker" ]
quant-ph
[ "quant-ph", "physics.app-ph", "physics.atom-ph" ]
These authors contributed equally to this work. Department of Physics of Complex Systems, ELTE Eötvös Loránd University, Egyetem tér 1-3, H-1053 Budapest, Hungary MTA-ELTE Lendület “Momentum” NewQubit Research Group, Pázmány Péter, Sétány 1/A, 1117 Budapest, Hungary Department of Physics, Chemistry and Biology, Linköping University, 581 83 Linköping, Sweden hjzheng@iphy.ac.cn § ABSTRACT Recently, significant attention has been paid to magnetic-field-dependent photoluminescence (PL) features of the negatively charged nitrogen-vacancy (NV) centers in diamond. These features are used for microwave-free sensing and are indicative of the spin-bath properties in the diamond sample. Examining the temperature dependence of the PL features allows one to identify both temperature-dependent and temperature-independent features, and to utilize them in diamond-based quantum sensing and dynamic nuclear polarization applications. Here, we study the thermal variability of many different features visible in a wide range of magnetic fields. To this end, we first discuss the origin of the features and tentatively assign the previously unidentified features to cross relaxation in NV-center-containing multi-spin systems. The experimental results are compared with theoretically predicted temperature shifts deduced from a combination of thermal expansion and electron-phonon interactions. A deeper insight into the thermal behavior of a wide array of the features may come with important consequences for various applications in high-precision NV thermometry, gyroscopes, solid-state clocks, and biomagnetic measurements. Temperature shift of magnetic-field-dependent photoluminescence features of nitrogen-vacancy ensembles in diamond Dmitry Budker September 9, 2024 ================================================================================================================== § INTRODUCTION Negatively charged nitrogen-vacancy (NV^-) centers in diamond are widely used for quantum sensing. These centers are sensitive to magnetic field, electric field, temperature, rotations, and strain, which enables the development of magnetometers, electric sensors, gyroscopes, thermometers, and multisensors <cit.>. Typically, NV-based sensors utilize optically detected magnetic resonance (ODMR) techniques involving microwave transitions between the ground-state Zeeman sublevels; see, for example, Ref. <cit.>. Microwave-free techniques have also been developed recently <cit.> based on cross-relaxation and energy-level anti-crossing-related features in the magnetic field dependence of light-induced photoluminescence of the center <cit.>. For a sensor to reliably report a value of the measured parameter (e.g., magnetic field), it is important to understand and account for the effect of other parameters (e.g., temperature) to separate the effects of the two and, ideally, measure multiple parameters at the same time. With an eye on this, in this work we investigate the temperature dependence of several cross-relaxation features used for microwave-free magnetometry and dynamic nuclear polarization applications. The zero-field splitting (ZFS) of the ground-state electronic m_s=0 and m_s=±1 sublevels was found to have a nonlinear dependence on temperature <cit.>, which was analyzed theoretically <cit.>. 
The theoretical models identify the mechanism of the temperature dependence of the ZFS (along with those of the optical ZPL and strain-induced splittings) as a combination of thermal expansion and electron-phonon interactions. Recording the red/near-infrared photoluminescence intensity from a diamond sample with a high concentration of nitrogen-containing color centers (including NV^-) as a function of the magnetic field along one of the crystallographic axes, one observes a pattern (Figs. <ref> & <ref>) with a number of sharp features (see also Ref. <cit.>), the origin of most of which can be traced to cross-relaxation between NVs of different orientations or NVs and other paramagnetic centers such as P1 <cit.>. In addition to the sharp features, the photoluminescence also shows a broader (several hundred gauss) feature centered at zero field. This is due to the NV^- centers with axes that are not parallel to the magnetic field (i.e., “off-axis NV centers”). The transverse component mixes electron-spin sublevels and reduces the efficiency with which the NV centers are optically pumped into the “bright” ground state with zero projection on the axis, thus reducing the photoluminescence <cit.>. Also of note is the behavior of the photoluminescence curves as a function of temperature. The drop of photoluminescence towards 400 G at low temperatures is related to the dynamics in the excited state and was recently discussed in <cit.>, [The work of Ref. <cit.> examined single NV centers and there are notable differences between NV centers in different strain environments. Of note is that our samples show similar behavior although they were synthesized with different methods (E6–CVD,S2-HPHT), which generally produce different strain environment.]. In this paper, we study the temperature dependence of the cross-relaxation features in the magnetic spectrum in the temperature range of 4–300 K. We present the frequency shifts of each cross-relaxation feature and compare them with theoretical predictions. Features with relatively low-temperature dependence are observed, pointing to potential sensing applications. In addition, we develop a scheme to efficiently and reliably predict cross-relaxation features of various multi-spin systems that we later employ to assess the origin of the magnetic field-dependent PL features. This in turn helps us to predict the temperature shift of the features and underpin our temperature-dependence measurements. § THEORY We begin by listing the observed sharp features in the magnetic-field dependence of the photoluminescence and discuss their origin, see Table <ref>. In order of increasing magnetic field, we focus on features appearing at the magnetic fields of 93 G, 163 G, 342 G, 512 G (P1 center), 591 G, 732 G, 954 G, and 1024 G (the GSLAC), respectively. As can be seen in Table <ref>, with the exception of the feature at 93 G and 163 G, all other features can be associated with various cross-relaxation conditions in the system, including cross-relaxation among the NVs of different orientations, NV-P1 cross-relaxation, as well as NV-^13C related processes. For the methodology used to predict the magnetic field values of possible PL features and for an in-depth analysis of the considered multi-spin systems, see Appendix B and Appendix C, respectively. 
As the initial step in predicting the temperature dependence of the positions of the cross-relaxation features, we assume that the only parameter in the Hamiltonian that changes with temperature is the NV zero-field-splitting (ZFS) parameter D. This is motivated by the fact that the temperature dependence of the ODMR resonances is well described by this assumption <cit.>. In the presence of a magnetic field at an angle with the NV axis, the eigenstates of the NV Hamiltonian are superpositions of magnetic sublevels with the amplitude of each sublevel depending on the magnitude and angle of the magnetic field with respect to the NV axis. We therefore diagonalize the corresponding NV Hamiltonian to find the eigenenergies as a function of D and, correspondingly, of T. The results of this analysis are predictions of the temperature dependence, Table <ref>, that we compare with the experimental data below. We note that there are significant differences predicted for different features. A general Hamiltonian H for the ground state can be written as: H = H_z+H_ s , where H_z represents Zeeman energy Hamiltonian and H_ s=H_ ee+H_ N+H_P1 is spin-spin interaction that includes spin interaction of electron-electron H_ ee, of electron-nuclei H_ N, and of P1 centers H_P1. The full expression of those Hamiltonians can be found in Ref. <cit.>. The spin Hamiltonian for the electron-electron interaction H_ee can be written as: H_ee=(D+d^||_gs)S_z^2+d^x_gs(S_x^2-S_y^2)+d_gs^y(S_xS_y+S_yS_x) , where d^||_gs and d^x_gs,d^y_gs are the electric field NV^- coupling constants. The temperature dependence of D is described by a fitting formula <cit.> assuming two relevant phonon modes, D(T)=D_0+c_1n_1+c_2n_2 , where D_0 is the ZFS at zero temperature, and c_i describes how the corresponding phonon mode shifts the ZFS by modulating the mean positions and vibrational amplitudes of the atoms in the lattice, c_1=-54.91 MHz and c_2=-249.6 MHz respectively. n_i is the mean occupation number of the ith mode, n_i=1/e^Δ_i/k_BT-1 , where Δ_i is the mode energy with Δ_1=58.73 meV and Δ_2=145.5 meV, respectively and k_B is the Boltzmann constant. Assuming that D is the only temperature-dependent parameter in the Hamiltonian and replacing D by D(T) in the spin Hamiltonian, we can get the frequency shifts of all the cross-relaxation features at different temperatures. The detailed discussion can be found in the Appendix. § EXPERIMENTAL DETAILS §.§ Diamond samples We studied two diamond samples. The [111]-cut high-pressure high-temperature (HPHT) synthesised diamond sample S2 from Sumitomo with high density of color centers ([N]∼100 ppm, <cit.>) was chosen because the features were best visible in it. The second sample, [111]-cut CVD (Chemical Vapour Deposition) sample from Element-6 ([NV]^-∼3.8 ppm, ^13C-depleted), was investigated only for the three main features (GSLAC, 591 G and P1 center) to compare with the results obtained for the S2 sample. §.§ Experimental setup The schematic of the experiment apparatus is shown in Fig. <ref>. The diamond sample is placed on a copper plate that is subsequently arranged on the top of a cold finger in a helium flow-through cryostat (Janis ST-500) mounted on the surface of an optical table. The copper plate has good thermal conductivity in a wide range of temperatures (4 K-300 K), in addition, it is cut to reduce eddy currents. The magnetic field is generated with two electromagnets in an arrangement designed to maximize the field homogeneity and minimize thermal drifts. 
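For reference, the two-phonon-mode expression for D(T) quoted above can be evaluated numerically and converted into a predicted field shift of a feature. The short sketch below does this for the GSLAC, whose on-axis position is set (neglecting hyperfine structure) by the condition γ_e B ≈ D, so the field shift is approximately ΔD/γ_e. The zero-temperature value D_0 used here is a nominal assumed number for illustration only; the mode parameters c_i and Δ_i are the fitted values quoted in the text.

# Sketch: temperature dependence of the NV zero-field splitting D(T) and the
# implied shift of the GSLAC feature position (on-axis field, hyperfine ignored).
import numpy as np

K_B   = 8.617333e-5                      # Boltzmann constant, eV/K
GAMMA = 2.8025                           # NV electron gyromagnetic ratio, MHz/G
D0    = 2877.0                           # assumed zero-temperature ZFS, MHz (illustrative)
C     = np.array([-54.91, -249.6])       # mode coefficients c_1, c_2, MHz
DELTA = np.array([58.73e-3, 145.5e-3])   # mode energies Delta_1, Delta_2, eV

def zfs(T):
    """Two-phonon-mode fit: D(T) = D_0 + c_1 n_1 + c_2 n_2."""
    T = np.asarray(T, dtype=float)
    n = 1.0 / np.expm1(DELTA / (K_B * T[..., None]))   # Bose occupation numbers
    return D0 + np.sum(C * n, axis=-1)

def gslac_shift(T, T_ref=295.0):
    """Predicted GSLAC field shift in gauss, relative to T_ref."""
    return (zfs(T) - zfs(T_ref)) / GAMMA

temps = np.array([10.0, 100.0, 200.0, 295.0])
print(np.round(gslac_shift(temps), 2))   # shift in gauss relative to 295 K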
The two magnets are mounted on a rotation stage (not shown in Fig. <ref>), enabling equal rotation of both magnets to change the projection of the magnetic field on the z axis. The vertical position of the magnets as well as the spatial separation between them is fixed so that the diamond is located precisely in the middle, a region with optimal magnetic homogeneity. The accessible range of the magnetic field generated by the setup is between 0 to over 1024 G (GSLAC feature region). Light with a wavelength of 532 nm emitted by a CW laser (Laser Quantum GEM 532) goes through an acousto-optic modulator (AOM) and is guided with a partial reflector (PR). The reflected beam is used to monitor the output power of the laser and stabilize the intensity using a proportional–integral–derivative (PID) controller. The transmitted light is coupled into an optical fiber. The diverging beam from the output of the fiber passes through an objective and is focused on the diamond. The objective is mounted on a motorized 3D stage (not shown in Fig. <ref>) so we can optimize the parking spot of the beam on the diamond and obtain the maximum photoluminescence. The photoluminescence is then directed through a dichroic mirror and focused on the photodiode with a lens. The temperature around the diamond can be controlled within an uncertainty of ± 0.1 K in a range of 300 K-4 K, using a temperature controller (LakeShore 335). In practice, when the temperature stabilized at one setting point, we collected the magnetic spectrum of the photoluminescence by sweeping the current supplied to the magnets. The magnetic shifts of the cross-relaxation features were calculated by comparing the position of features in the spectra acquired at different temperatures. The results are shown in Fig. <ref>. § RESULTS §.§ Temperature dependence of the cross-relaxation features We designate the magnetic spectrum acquired at room temperature as a baseline, and then all the magnetic spectra at different temperatures are compared with it. The magnetic shifts are determined for the six cross-relaxation features, which are presented in Fig. <ref>. Of the prominent features at 512 G, 591 G and 1024 G, respectively, the experimental results agree well with the theoretical predictions for both diamond samples. In Fig. <ref>, only the results with the S2 sample are shown as colored dots, and the dashed lines are calculated predictions. Of particular interest are the so-called “small features” at 163 G, and 93 G, respectively. These features appear to have small shift as a function of temperature, the black symbols in Fig. <ref>, although the uncertainties are larger compared to those for more prominent features. One may notice the increasing data point fluctuation in the low-temperature region (below 150 K). This is because of the drop in photoluminescence at low temperatures making it harder to determine the center position of the profile. Despite that, the experimental data agree well for all features with theoretical predictions. The insensitivity of particular cross-relaxation features to temperature highlights promising potential applications. It enables the decoupling of the sensitivity of NV centers to temperature and to other environmental parameters, such as electric field, magnetic field, stress, etc., and the development of sensors immune to temperature fluctuation of the environment. Furthermore, the magnetic spectrum of the S2 sample presents a feature at 732 G as shown in Fig. 
<ref>, which we assign to a level crossing of a coupled system of two off-axis NV^- centers and a P1 center, see Table <ref> and Appendix B. The temperature shift for this feature does not appear to follow the general behavior seen for other features; however, more experiments would need to be done to make quantitative conclusions about this relatively small feature. Relying on our tentative assignment of the feature, we suspect that the structure of the 732 G feature is not always resolved in our measurements, making it hard to identify the temperature shift of the feature precisely. §.§ Hyperfine structure As part of the analysis, we measured the separations and relative amplitudes between the substructure peaks in the cross-relaxation features and found that the distances between individual peaks and their relative amplitudes for a given feature did not significantly change with temperature. This is shown for the separations in Fig. <ref>. This is important for data processing because when we cannot resolve the substructure due to, for example, poor signal-to-noise ratio, we can assume that the lineshape does not change with temperature to produce a systematic effect. § DISCUSSION & CONCLUSION We report on the identification and temperature dependence of cross-relaxation features in the magnetic spectrum for NV centers in diamonds. Theoretically, we investigate the magnetic field dependence of the energy levels and look for (avoided) level crossings of various composite systems of NV^- centers, P1 centers, and ^13C nuclear spins. We identify the causes for all the features except the low-filed features at 93 G and 163 G. By taking into account the temperature dependence of ZFS, the magnetic shifts of all features can be theoretically predicted. We carried out two measurements with two different diamond samples, E6 and S2. For the prominent features at 512 G, 591 G, and 1024 G, the experimental results agree well with the theoretical predictions for both samples. In addition, we discovered that two small cross-relaxation features, at 93 G and 163 G, seen with the S2 sample do not show a significant shift with temperature. The immunity of these features to temperature fluctuations in the environment may be of significance to NV-based quantum sensing technology. We also present magnetic shifts of the feature at 732 G, which exhibits distinctive behavior compared with other features. § ACKNOWLEDGEMENTS We thank Marcus Doherty, Andrey Jarmola, Till Lenz, Sean Lourette, Patrick Maletinsky, Matthew Markham, and Dieter Suter for informative discussions and helpful advice. This work was supported by the EU, project HEU-RIA-MUQUABIS-101070546, by the DFG, project FKZ: SFB 1552/1 465145163 and by the German Federal Ministry of Education and Research (BMBF) within the Quantumtechnologien program via the DIAQNOS project (project no. 13N16455). X. Z. was supported by Beijing National Laboratory for Condensed Matter Physics (2023BNLCMPKF013), National Natural Science Foundation of China (project no. 12404550), and by the Fundamental Research Funds for the Central Universities (DUT24RC(3)016). H.Z. acknowledges the support of Beijing Natural Science Foundation (L233021). V.I. was supported by the National Research, Development and Innovation Office of Hungary (NKFIH) within the Quantum Information National Laboratory of Hungary (Grant No. 2022-2.1.1-NL-2022-00004) and project FK 145395. V.I. also acknowledges support from the Knut and Alice Wallenberg Foundation through the WBSQD2 project (Grant No. 
2018.0071). § APPENDIX A The Zeeman Hamiltonian H_z can be written as H_z=μ_B g_s B·S , where μ_B≈1.4 MHz/G is the Bohr magneton, g_s≈ 2 is the Landé g-factor for the electron, and B, S represent the magnetic field and electron-spin operator, respectively. The Hamiltonian of the interaction between the electron spin S and nuclear spin I (which can be that of either ^14N or ^15N) can be expressed as H_N= S·𝐀_N ·I + γ_N B·I + I· 𝐐 · I , where 𝐀_N represents the hyperfine interaction tensor, γ_N is the nuclear gyromagnetic ratio, and 𝐐 denotes the nuclear-quadrupole-splitting tensor. The Hamiltonian of the P1 center, which has electronic spin S_P1=1/2 and nuclear spin I_P1=1 (in the case of ^14N), is H_P1 = S_P1·𝐀·I_P1 + γ_e B·S_P1 - γ_I B·I_P1 + I_P1· 𝐐_P1 · I_P1 where 𝐀 is the hyperfine tensor which is diagonal when the quantization axis is parallel to the symmetry axis of the P1 center, 𝐀=[ A_⊥ 0 0; 0 A_⊥ 0; 0 0 A_∥ ] with A_⊥=81.3 MHz and A_∥=114 MHz <cit.>, γ_e is the electron gyromagnetic ratio, γ_I is the nuclear gyromagnetic ratio, and 𝐐_P1 denotes the nuclear-quadrupole-splitting tensor for P1 centers <cit.>. To predict the positions of cross-relaxation features, we calculate the eigenvalues of the respective Hamiltonians as a function of the magnetic field and search for the cross points in the transition energies, as shown in the figures in Table <ref>. The frequency (magnetic) shifts of those features when temperature changes are calculated by taking the difference in the eigenvalues by plugging D(T) in the spin Hamiltonian. § APPENDIX B Here we briefly describe the methodology used to identify allowed and avoided crossings of the energy levels of finite dimensional multi-spin systems. The literature uses different terminologies and pictures to describe relaxation phenomena of multi-spin systems. Line crossings and cross-relaxation features of distinct spins can also be discussed in terms of energy level crossing and avoided crossing in the composite Hilbert space of the spins. The fact that two spins have a resonance in their transition energies equally means that the energy-level structure of the composite system exhibits a degeneracy. When cross-relaxation occurs between the spins, the degeneracy is lifted by a coupling Hamiltonian term that mixes the states of the two spins. Hereinafter, we study composite systems and look for relevant energy level (avoided) crossings to identify magnetic field values of actual and potential features in the magnetic field-dependent photoluminescence (PL) signal. To this end, we construct the Hamiltonian H of a coupled spin system and diagonalize it. By shifting the diagonal element of the Hamiltonian, we make sure to have all eigenvalues in the positive domain at all considered values of the magnetic field B. This enables us to order the eigenstates ψ_i using the absolute values of the eigenenergies ε_i and avoid unwanted changes in the ordering when scanning the magnetic field that can give rise to false signals in our methodology. To look for (avoided) crossings that include the |m_S=0⟩ state of the NV center, we compute the projection p_i (B) = | ⟨m_S=0|ψ_i (B)⟩ |^2 and plot against the value of the magnetic field applied along the quantization axis. Here, we are particularly interested in crossings that involve the |m_S=0⟩ state since the mixing of these states with |m_S=± 1⟩ can lead to features in the photoluminescence signals. Crossings of the |m_S= +1⟩ and |m_S= -1⟩ states are omitted in our analysis. 
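To make this bookkeeping concrete, the following sketch is our own simplified illustration (operator conventions, the coupling strength, and all numerical values are assumptions rather than the authors' code): it builds the composite Hamiltonian of an on-axis NV electron spin (S = 1) coupled isotropically to a P1 electron spin (S = 1/2), neglecting the nuclear spins, diagonalizes it on a grid of axial magnetic fields, and tracks the projections p_i = |⟨m_S = 0|ψ_i⟩|^2 defined above. In this stripped-down model the crossings near 512 G and 1024 G show up as abrupt swaps of the p_i traces (true crossings); the hyperfine and anisotropic terms treated in the appendices turn some of them into avoided crossings (smooth dips).

import numpy as np

def spin_ops(s):
    """Return (Sx, Sy, Sz) for spin quantum number s, basis ordered m = s ... -s."""
    m = np.arange(s, -s - 1, -1)
    dim = len(m)
    Sp = np.zeros((dim, dim))
    for i in range(dim - 1):
        Sp[i, i + 1] = np.sqrt(s * (s + 1) - m[i + 1] * (m[i + 1] + 1))
    Sx = (Sp + Sp.T) / 2
    Sy = (Sp - Sp.T) / 2j
    Sz = np.diag(m)
    return Sx, Sy, Sz

# Assumed parameters (illustrative only)
D = 2870.0        # MHz, NV zero-field splitting at room temperature
gamma_e = 2.8025  # MHz/G, electron gyromagnetic ratio
J = 1.0           # MHz, ad hoc isotropic NV-P1 coupling (the appendices use
                  #      anisotropic, sometimes overestimated, coupling tensors)
SHIFT = 5000.0    # MHz, diagonal shift keeping all eigenvalues positive (Appendix B trick)

Sx, Sy, Sz = spin_ops(1.0)   # NV electron spin (S = 1)
sx, sy, sz = spin_ops(0.5)   # P1 electron spin (S = 1/2); nuclear spins omitted here
I3, I2 = np.eye(3), np.eye(2)

P0 = np.kron(np.diag([0.0, 1.0, 0.0]), I2)   # projector onto the NV |m_S = 0> state

def hamiltonian(B):
    """Composite NV + P1 spin Hamiltonian (MHz) for an axial field B (gauss)."""
    H_nv = D * Sz @ Sz + gamma_e * B * Sz
    H_p1 = gamma_e * B * sz
    H_int = J * (np.kron(Sx, sx) + np.kron(Sy, sy) + np.kron(Sz, sz))
    return np.kron(H_nv, I2) + np.kron(I3, H_p1) + H_int

B_grid = np.linspace(0.0, 1100.0, 2201)
p = np.zeros((B_grid.size, 6))
for k, B in enumerate(B_grid):
    eps, psi = np.linalg.eigh(hamiltonian(B) + SHIFT * np.eye(6))
    order = np.argsort(np.abs(eps))          # order the eigenstates by |energy|
    p[k] = [np.real(psi[:, i].conj() @ P0 @ psi[:, i]) for i in order]

# Jumps or dips of the p_i curves flag crossings that involve |m_S = 0>,
# e.g. the NV-P1 feature near 512 G and the GSLAC near 1024 G.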
As an example, we discuss the case of a coupled on-axis NV center and P1 center with a ^14N hyperfine interaction included. Figure <ref> a) depicts the magnetic field dependence of energy eigenstates ε_i. The energy levels exhibit crossings and avoided crossings at different values of the external magnetic field. In Fig. <ref> b) one can see the computed values of the p_i projections as a function of the magnetic field. The p_i values drop from 1 to 0 at certain magnetic field values signaling relevant crossing and avoided crossings of the states |m_S=0⟩ and |m_S=-1⟩. Note that the crossings of the |m_S=± 1⟩ manifold are not indicated by the drop of the p_i values. For instance, close to zero magnetic fields, the energy levels do exhibit crossing in the |m_S=± 1⟩ manifold, however, since these crossings do not influence the |m_S = 0⟩ contribution of the eigenstates, no change can be observed in the p_i values. At the GSLAC and around halfway between the GSLAC and B=0, we observe significant variation of the p_i values indicating the presence of relevant crossing and avoided crossings. At the GSLAC we can see two types of features in the magnetic field dependence of the p_i values. There are four straight vertical lines close to the GSLAC as well as a Lorentzien-like peak pluss dip structure touching each other at the magnetic field value of the GSLAC. The former indicates true crossings while the latter indicates avoided crossings. The true crossings observed in Fig. <ref> may also be relevant for cross-relaxation processes, since additional perturbations, such as off-axis magnetic field, strain, and nuclear spins may enable mixing of the states beyond what is described by our Hamiltonian and lead to features in the PL signal. Indeed, GSLAC features as well as the four satellites have been experimentally observed and studied theoretically <cit.> and observed in our measurements, see Fig. <ref> and Fig. <ref>. The characteristic P1 cross relaxation feature at around 512 G is also indicated by the change of the p_i values. As another example, we depict the energy levels and the p_i values for a pair of non-collinear NVs as a function of the external magnetic field parallel to the symmetry axis of one of the NV centers in Fig. <ref>. We can observe three avoided crossings and spin mixing features, one at the GSLAC, one at 591 G, and one at 0 magnetic field. The latter indicates the so-called zero-field feature observed in NV ensembles, see <cit.> and references therein. Finally, we note that our method is not indicative of the depth of the dips caused by the different spin-relaxation features. For this, dynamic simulations can be run once the origins of the features are understood. § APPENDIX C In this Appendix, we examine NV systems including different spins, such as on- and off-axis NV-centers, P1 centers, nitrogen, and carbon nuclear spins, up to five-spin clusters. The nuclear spins interact with their corresponding electron spin through the hyperfine coupling tensors (established in the literature). For simplicity, we omit the nitrogen spin of the NV center, however, include the nitrogen of the P1 centers, which can give rise to resolvable hyperfine or superhyperfine structures. For the electron spin-electron spin coupling, we use ad hoc, sometimes overestimated, coupling terms to identify the nature of the crossings. This approach does not alter our statements, however, it helps us to separate true crossings from avoided crossings. 
To avoid a shift of the crossing point, we keep the secular component of the coupling tensor low. In the following, we discuss relevant few-spin systems that either explain unknown PL features or support previous identifications. On-axis NV center interacting with two or three P1 centers. As can be seen in Fig. <ref>, the presence of two interacting P1 centers can give rise to an additional broad resonance feature at around 342 G. Considering a ^14N hyperfine coupling tensor with a principal axis parallel to the axis of the NV center, the crossing feature can extend from 314 G up to 368 G and include numerous lines. These results are in remarkable agreement with the experimental observations reported recently in Ref. <cit.>. The inclusion of a third P1 center, not shown, can give rise to an additional broad feature centered around 257 G. Both of these features can be seen in Fig. <ref>(b), although the detailed structure and the center of the peaks cannot be resolved and identified. Off-axis NV center interacting with two or three P1 centers. Figure <ref> depicts the energy eigenstates and the p_i values as a function of the magnetic field. Interestingly, numerous crossings including the |m_S =0⟩ state are revealed. There are altogether nine separate lines observed, with the central line located at 591 G. This feature may contribute to the 591 G feature often observed in experiments and assigned to the on-axis NV - off-axis NV system; however, due to the mixing of the NV states, the PL signature of the off-axis NV - two P1 system may be suppressed. When a third P1 center is added to the system, not shown, two additional crossing regions appear at lower magnetic fields. The centers of the potential cross-relaxation features are at 347 G and 497 G. Two interacting on-axis NV centers with and without one first neighbor ^13C nuclear spin. As can be seen in Fig. <ref>(a), the energy levels of two on-axis NV centers also exhibit a crossing at 342 G. This effect is referred to as NV-NV auto cross-relaxation in Ref. <cit.> and is mentioned as the main source of the PL features observed at 342 G. Due to the lack of always-present strong nuclear-spin interactions, however, auto cross-relaxation of aligned NV centers cannot account for the broad feature with several distinct dips centered around 342 G. Presumably, both NV-2×P1 systems and two-NV systems contribute simultaneously to the broad feature at 342 G seen in our experiments as well as in Ref. <cit.>. When a single ^13C nuclear spin from the first shell of one of the NV centers is included in the system, we observe split lines ∼13 G away from the central line at 342 G; however, this cannot account for the breadth of the feature at this magnetic field value. Interestingly, we observe a ∼1 G shifted zero-field feature as well as new crossings on the left side of the GSLAC indicated by the p_i values. The notable ones are the true crossing at 879 G and an avoided crossing at 952-956 G. It is important to note that these new crossings close to the GSLAC are not, strictly speaking, GSLAC features. The true and avoided crossings at 879 G and 952-956 G happen within a single branch of the energy levels, which includes both |m_S =0⟩ and |m_S = ± 1⟩ states of the two NV centers, and not at the GSLAC crossing of two branches. Finally, the predicted temperature shift of the 956 G feature is depicted in Fig. <ref>. One on-axis and one off-axis NV center interact with one first neighbor ^13C nuclear spin.
Previously, we considered the case of two on-axis NV centers and one ^13C nuclear spin. By changing the symmetry axis of one of the NV centers, we again observe new crossings, see Fig. <ref>. Close to the magnetic field of the GSLAC, we find two avoided crossings at 1005 G and 1048 G. Furthermore, close to the 591 G feature, we find both a series of true crossings and avoided crossings at magnetic field values 551-556 G, 584 G, 606 G, and 636-642 G. The crossings may give rise to satellite dips close to the 591 G feature. Finally, we find additional crossings at the zero field feature at around 4-7 G and 29-32 G, some of which are allowed and could give rise to small dips at the zero field feature. Two off-axis NV centers interact with one nearest-neighbor ^13C nuclear spin. Considering two off-axis NV centers, one of which includes a ^13C nuclear spin, we observe three avoided crossings in the range of 500-700 G, see Fig. <ref>(b). In particular, we find the centers of the avoided crossings at 501 G, 572 G, and 700 G. The energy levels, however, do not show apparent crossings at these magnetic field values, see Fig. <ref>(a), since these features are also related to intra-branch crossings. The former two overlap with other cross-relaxation features, while the latter could not be resolved in our experiments. In addition to these features, our method reveals numerous crossings in the 24-34 G interval close to the zero-field feature. One on-axis and one off-axis NV center interact with one P1 center. The most complicated energy level structure is observed for this system, which includes three different electron spins and a ^14N nuclear spin. In addition to the numerous apparent crossings, the p_i values indicate numerous other crossings within the bands. The crossings of different branches occur at 332-342 G, 365-394 G, 492-502 G, 795-868 G, and at the GSLAC. The crossings close to the zero field feature spread from 0 G to 96 G. To judge the relevance of these crossings and to estimate the magnitude of the related PL features, spin-dynamics simulations are needed, which is beyond the scope of the present article. Two off-axis NV centers interact with a P1 center. The solution of the two off-axis NV centers and one P1 center is considerably simpler compared to the previous case and gives rise to only a single crossing feature centered around 732 G with additional split crossing lines at 695 G, 714 G, 750 G, and 769 G. Interestingly, the central line perfectly matches the feature at 732 G consistently observed in our temperature-dependent measurement. We tentatively assign this crossing to the observed PL feature and numerically study the shift of the crossing point, i.e., we calculated the temperature shift of the 732 G peak of the two off-axis NV centers plus P1 center system assuming that it is solely caused by the change in D. As can be seen in Fig. <ref>, we observe a total of -1.84 G shift of the feature when increasing the temperature from 0 to 300 K. The magnitude of the shift is comparable to our measurements for the 732 G feature, see Fig. <ref>. However, the experimental data show a sudden change in the shift not reproduced in our simulations. A possible reason could be the predicted structure of the 732 G feature, which is not resolved in our measurement and makes the identification of the position of the feature more challenging.
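For reference, the size of such D-driven shifts can be estimated directly from the two-phonon parametrization of D(T) quoted in the Theory section. The sketch below is our own illustration (not the authors' code; the zero-temperature value D_0 is an assumed, illustrative number): it evaluates the change of D between 0 and 300 K and the corresponding shift of the GSLAC, whose position is simply D/γ_e. Features involving off-axis centers, such as the one at 732 G, shift less, as obtained from the full diagonalization.

import numpy as np

# Two-phonon model D(T) = D0 + c1*n1 + c2*n2 with the mode parameters quoted above.
c = np.array([-54.91, -249.6])    # MHz
delta = np.array([58.73, 145.5])  # meV, phonon mode energies
k_B = 8.617333e-2                 # meV/K
D0 = 2877.0                       # MHz, assumed zero-temperature ZFS (illustrative)
gamma_e = 2.8025                  # MHz/G

def D_of_T(T):
    """ZFS at temperature T (K) from the two-phonon fitting formula."""
    if T <= 0:
        return D0
    n = 1.0 / np.expm1(delta / (k_B * T))   # Bose occupation of each mode
    return D0 + np.sum(c * n)

dD = D_of_T(300.0) - D_of_T(0.0)            # about -7 MHz between 0 and 300 K
print(f"Delta D(0 -> 300 K) = {dD:+.1f} MHz")
print(f"GSLAC field shift   = {dD / gamma_e:+.2f} G")   # about -2.6 G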
http://arxiv.org/abs/2409.03642v1
20240905160028
Derivation of normal forms for dispersive PDEs via arborification
[ "Yvain Bruned" ]
math.AP
[ "math.AP", "cs.NA", "math.NA", "math.PR", "math.RA" ]
http://arxiv.org/abs/2409.02190v1
20240903180127
$Z_2$ flux binding to higher-spin impurities in the Kitaev spin liquid: mechanisms and implications
[ "Masahiro O. Takahashi", "Wen-Han Kao", "Satoshi Fujimoto", "Natalia B. Perkins" ]
cond-mat.str-el
[ "cond-mat.str-el" ]
http://arxiv.org/abs/2409.03448v2
20240905115639
Shapiro steps in strongly-interacting Fermi gases
[ "Giulia Del Pace", "Diego Hernández-Rajkov", "Vijay Pal Singh", "Nicola Grani", "Marcia Frómeta Fernández", "Giulio Nesti", "Jorge Amin Seman", "Massimo Inguscio", "Luigi Amico", "Giacomo Roati" ]
cond-mat.quant-gas
[ "cond-mat.quant-gas", "cond-mat.supr-con", "physics.atom-ph" ]
E-mail: delpace@lens.unifi.it Affiliations: Department of Physics, University of Florence, 50019 Sesto Fiorentino, Italy; European Laboratory for Nonlinear Spectroscopy (LENS), University of Florence, 50019 Sesto Fiorentino, Italy; Istituto Nazionale di Ottica del Consiglio Nazionale delle Ricerche (CNR-INO) c/o LENS, 50019 Sesto Fiorentino, Italy; Quantum Research Centre, Technology Innovation Institute, Abu Dhabi, UAE; Instituto de Física, Universidad Nacional Autonoma de Mexico, C.P. 04510 Ciudad de Mexico, Mexico; Department of Engineering, Campus Bio-Medico University of Rome, Rome, Italy; INFN-Sezione di Catania, Via S. Sofia 64, 95127 Catania, Italy; Dipartimento di Fisica e Astronomia, Università di Catania, Via S. Sofia 64, 95123 Catania, Italy § ABSTRACT We report the observation of Shapiro steps in a periodically driven Josephson junction between strongly-interacting Fermi superfluids of ultracold atoms. We observe quantized plateaus in the current-potential characteristics, the height and width of which mirror the external drive frequency and the junction nonlinear response. Direct measurements of the current-phase relationship showcase how Shapiro steps arise from the synchronization between the relative phase of the two reservoirs and the external drive. Such a mechanism is further supported by the detection of periodic phase-slippage processes, in the form of vortex-antivortex pairs. Our results are corroborated by a circuital model and numerical simulations, overall providing a clear understanding of Shapiro dynamics in atomic Fermi superfluids. Our work demonstrates phase-coherent and synchronization effects in driven strongly-interacting superfluids, opening prospects for studying emergent non-equilibrium dynamics in quantum many-body systems under external drives. Driven many-body systems exhibit rich and complex behaviors that extend far beyond their equilibrium properties <cit.>.
A paradigmatic example is provided by Josephson junctions (JJs) under driving currents. A Josephson junction comprises two superconducting islands separated by a thin insulating barrier. Josephson showed that a dissipationless current I can flow across the junction, maintained solely by the phase difference ϕ between the two superconducting reservoirs <cit.>. Above a critical current I_c, the junction transitions into the so-called voltage state: a finite voltage V develops across the junction, which is directly related to the phase-difference dynamics through the Josephson-Anderson relation V = (ħ/2e) ϕ̇, ħ being the reduced Planck constant and e the electron charge. In this regime, the Josephson current time evolution is characterized by the frequency ω_J = 2eV/ħ. The junction is thus an element that operates in cycles with a natural frequency controllably set by the externally injected current <cit.>. When the JJ is driven by a combined DC & AC bias with frequency ω, the junction dynamics becomes synchronized with the external drive: ω_J = n ω, n being a positive integer. This synchronization results in quantized voltage plateaus, known as Shapiro steps, appearing at ⟨ V⟩_t = n ħω/2e in the time-averaged voltage-current characteristics <cit.>. Shapiro steps arise from a phase-locking process where ϕ advances by 2π n during each cycle of the driving field, establishing coherence between the junction phase and the driving field. Since their observation in superconducting Josephson junctions (SJJs) <cit.>, Shapiro steps have been investigated in various nonlinear systems, ranging from charge density waves <cit.> and colloidal systems <cit.> to superconducting nanowires <cit.> and ^3He weak links <cit.>. Shapiro steps in SJJs play a crucial role in metrology, calibration within electronics, and in defining quantum standards <cit.>. Recent years have witnessed significant progress in developing and operating circuit architectures employing ultracold atoms <cit.>. These atomtronic circuits feature enhanced control and flexibility of their working conditions, which may go beyond conventional electronics. One of the most promising atomic platforms for implementing such circuits consists of ultracold Fermi gases <cit.>. These systems provide the unique opportunity to realize different regimes of superfluidity, ranging from the Bardeen-Cooper-Schrieffer (BCS) limit of weakly-bound fermion pairs to a Bose-Einstein condensate (BEC) of tightly bound molecules, including the intermediate universal, strongly-correlated unitary regime <cit.>. In particular, unitary Fermi gases (UFG) serve as prototypical examples of strongly-interacting systems, sharing similarities with other strongly-interacting Fermi systems found in nature, spanning a broad range of energy and length scales, from nuclear matter to neutron stars <cit.>. Research on quantum transport phenomena within these systems has explored a variety of configurations, including mesoscopic two-terminal setups <cit.>, ring geometries <cit.>, optical lattices <cit.>, and scenarios involving disorder <cit.> or periodic modulation of the harmonic trapping potential <cit.>. Josephson effects have been observed in both three-dimensional <cit.> and two-dimensional <cit.> unitary superfluids, providing insights into the microscopic properties of these gases. However, despite various theoretical proposals <cit.>, the observation of Shapiro steps in atomic Josephson junctions has remained elusive.
Recently, Shapiro steps have been predicted to occur by periodically oscillating the position of a thin tunneling barrier within a weakly interacting Bose-Einstein condensate <cit.>. While such a protocol opens new avenues for exploring Shapiro steps in atomic junctions, in particular for Fermi superfluids, it remains an open question whether strong interactions could affect their dynamics. Interactions are expected to enhance phase coherence across the junction and potentially reduce decoherence effects associated with the finite compressibility of atomic superfluids <cit.>, but their role in synchronization processes is yet to be understood. Investigating these effects is crucial for deepening our understanding of strongly-interacting systems and for revealing their microscopic behavior under external drives. In this work, we report the observation of Shapiro steps in a periodically driven atomic Josephson junction connecting two strongly-interacting Fermi superfluids. Similar to SJJs, we find that the external driving frequency determines the height of Shapiro steps, while the driving amplitude affects their width. Moreover, we gain a detailed microscopic understanding of the physical mechanism behind the Shapiro steps by directly measuring the phase dynamics across the junction. By analyzing the current-phase relation under the external drive and observing the phase evolution across different Shapiro steps, we demonstrate the synchronization between the junction phase and the external drive. We observe that the Shapiro dynamics is accompanied by vortex-antivortex pair emissions, in agreement with the results of Ref. <cit.>. Our results showcase the response of a many-body strongly-interacting system to an external drive, resulting in synchronized dynamics. § CREATING AND DRIVING THE ATOMIC JUNCTION We produce strongly-interacting Fermi superfluids of N ∼ 2×10^4 atom pairs by cooling a balanced mixture of the first and third lowest hyperfine states of ^6Li to temperatures ∼ 0.3(1) T_c, where T_c is the superfluid critical temperature <cit.>. Interparticle interactions are parametrized by 1/k_Fa, where k_F = √(2mE_F)/ħ is the Fermi wave vector, E_F the Fermi energy, and a the s-wave scattering length. We tune a in the vicinity of the Feshbach resonance at 690 G <cit.>, enabling the exploration of various superfluid regimes across the BEC-BCS crossover. In this work, we focus on two different interaction regimes: a UFG with Fermi energy E_F/h = 11.8(4) kHz, and a molecular BEC at 1/k_Fa = 3.3(1) with E_F/h = 9.2(3) kHz. The gas is confined in the x-y plane using a box-like potential with dimensions L_x × L_y = 125 × 17.5 μm^2, and along the vertical z direction by a strong harmonic confinement <cit.>. The trap configuration establishes a nearly homogeneous in-plane atomic density with the superfluid remaining in the three-dimensional regime. To engineer the atomic JJ, we superimpose a thin repulsive optical barrier onto the atomic sample, which can be displaced along the x-axis, see Fig. <ref> (A). The barrier intensity profile and position are dynamically controlled by a digital micromirror device (DMD), whose designed pattern is projected onto the atomic plane using a high-resolution microscope objective <cit.>. The barrier features a FWHM of 0.9(1) μm along the x-direction <cit.>, comparable to the healing length of the investigated superfluids, ranging between 0.6 and 1.7 μm for typical gas parameters.
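As a consistency check of these length scales, the Fermi wave vector quoted above can be converted into the characteristic superfluid length 2π/k_F and compared with the barrier FWHM; the snippet below is our own arithmetic, using only the numbers quoted in the text, and reproduces the ∼1.7 μm scale at unitarity.

import numpy as np

h = 6.62607015e-34          # J s
hbar = h / (2 * np.pi)
m = 6.015 * 1.66053907e-27  # kg, mass of a 6Li atom

E_F = h * 11.8e3            # J, unitary Fermi energy quoted in the text (E_F/h = 11.8 kHz)
k_F = np.sqrt(2 * m * E_F) / hbar

print(f"k_F      = {k_F:.2e} 1/m")
print(f"2*pi/k_F = {2 * np.pi / k_F * 1e6:.2f} um")  # ~1.7 um, vs. barrier FWHM ~0.9 um

The BEC-side length scale quoted later, ξ_L ≈ 0.6 μm, follows instead from the healing length ξ_L = ħ/√(2mgn_0) given in the Supplementary Materials.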
The barrier height, V_0, is set to be larger than the gas chemical potential, μ_0, with V_0 / μ_0= 1.3(1) for the unitary, and 1.2(1) for the BEC JJs, ensuring that the junction operates always in the weak coupling or tunneling regime <cit.>. We inject currents into our junction by precisely moving the tunneling barrier along specific trajectories controlled with the DMD <cit.>. The in-plane homogeneous density of the superfluid ensures that the injected current I(t) is independent of its longitudinal position, and it is determined only by the instantaneous barrier velocity v(t). The instantaneous current reads I(t) = v(t) ñ L_y = v(t) N/L_x, with ñ=∫ n_3D(z) dz = N/L_x L_y the planar density. To introduce a biased alternating current, we move the barrier according to the trajectory x(t) = v_DC t + x_ACsin (ω t) <cit.>, giving rise to the instantaneous injected current I(t) = I_DC + I_ACcos(ω t), with I_DC = v_DC N/L_x, and I_AC = ω x_AC N/L_x, (Fig <ref>B). We explore the junction operation under different driving conditions by tuning the trajectory parameters v_DC, x_AC, and ω, as illustrated in Fig. <ref> B for bias (i) and composite current injections (ii). The junction dynamics can be portrayed with the well-known washboard potential analogy <cit.>, as sketched in Fig. <ref> C. The phase of the JJ follows the same equation of motion as that of a classical particle subject to a viscous force moving in a tilted washboard potential, where the tilt is given by the external current I. For a DC bias above I_c, the particle enters the running-phase regime, where ⟨ϕ̇⟩_t ≠ 0, and the junction operates resistively [Fig. <ref> C-(i)]. For an AC drive, in the overdamped regime, where friction overtakes inertia, the velocity locks to the driving tilt, resulting in a steady but synchronized descent, crossing n potential wells during each modulation cycle [Fig. <ref> C-(ii)]. This resonant motion locks the mean particle velocity to fixed values ⟨ϕ̇⟩_t = nω, leading to distinct, equally-spaced velocity steps, the Shapiro steps. § SHAPIRO STEPS IN A UNITARY JUNCTION We probe the response of our junction by measuring the I - Δμ characteristic curves as a function of the bias current, I_DC, after three driving periods T <cit.>, where T= 2π / ω. Here, Δμ = μ_L - μ_R represents the chemical potential difference between the two reservoirs measured from in-situ density images. Curves for I_AC = 0 and I_AC>0 for typical driving parameters are shown in Fig. <ref> A-B. In the undriven scenario , we recover the hallmark behavior of a current-biased JJ, featuring a zero-imbalance plateau for DC currents below I_c <cit.>. For the AC-driven JJ instead , we observe the emergence of distinct plateaus, with number, widths, and heights strongly depending on the driving parameters. To characterize the plateau properties, we perform a phenomenological fit of each step in the I - Δμ curve using multiple, patched sigmoidal functions <cit.>, from which we get the height Δμ_n, and the width Δ I_n of each step (inset of Fig. <ref> A). The step height is in agreement with Δμ_n = n ħω for various driving frequencies independently on I_AC (Fig. <ref> C), as it is determined exclusively by the external driving frequency, while the driving strength I_AC affects only the position and width of each step (Fig. <ref> D). To quantitatively describe the observed I - Δμ characteristics, we exploit the resistively and capacitively shunted junction (RCSJ) circuital model <cit.>. 
In this model, the junction is set in parallel with resistive and capacitive components, with the total circulating current being: I(t) = I_c sinϕ + ħ Gϕ̇ + ħ C ϕ̈. The supercurrent contribution is given by the sinusoidal current-phase relation I_c sinϕ, the resistive and capacitive contributions are G ϕ̇ and C ϕ̈, respectively. Similarly as for SJJ, the Josephson-Anderson relation Δμ = -ħϕ̇ connects the phase dynamics to the chemical potential build-up <cit.>, with Δμ playing the role of the junction voltage. The Stewart-McCumber parameter β_c = I_c C/ ħ G^2 determines the dynamic regime of the junction, discriminating between underdamped (β_c ≫ 1) and overdamped (β_c ≪ 1) <cit.>. In the latter, the undriven solution is analytically given by Δμ = G^-1√(I_DC^2-I_c^2), which we use to fit the DC measurements to extract I_c and G. In the driven scenario instead, the I-Δμ characteristic is evaluated by numerically solving Eq.<ref>. As reported in Fig. <ref> A-B, the overdamped solutions of the RCSJ model display the characteristic Shapiro steps matching the measured ones. We note that even though we estimate β_c≈ 14(2) for the unitary superfluid, the overdamped RCSJ numerical solutions well capture the observed steps. In particular, the step widths obtained from the numerical solutions of the overdamped RCSJ model are in agreement with the experimentally measured values (Fig. <ref> D). Note that this result is analogous to the Bessel function behavior of the Shapiro steps found analytically under voltage-driven modulation <cit.>, Δ I_n/I_c = |J_n (V_ac/ħω)|, but quantitatively deviates from it at small driving amplitudes, as expected under current-driven modulation <cit.>. § PHASE DYNAMICS AND SYNCHRONIZATION To link the Shapiro steps with the phase dynamics, we tune our atomic junction to operate in the BEC regime at 1/k_Fa=3.3(1). Here, the reduced interactions allow to leverage matter-wave interference between the expanding reservoirs to directly measure the relative phase ϕ <cit.>, accessing both Δμ-I and ϕ-I characteristics <cit.>, as reported in Fig. <ref> A-B for ω = 2 π×175Hz, I_AC/I_c = 0, and I_AC/I_c =1.4(1). In the non-modulated case for I_DC<I_c, we recover the expected DC Josephson trend <cit.>, i.e. ϕ = arcsin(I_DC/I_c). The measured ϕ shows larger errorbars only for currents near and above I_c, attributed to the running phase in the voltage state <cit.>. The driven scenario instead displays two main features: a diminished step contrast in the I-Δμ curve as compared to the unitary junction, and a well-defined phase ϕ within the plateau region even at finite Δμ, presenting significant fluctuations only in the transition regions between plateaus <cit.>. To support our findings, we compare the experimental results with numerical simulation performed with a dynamical classical-field method under similar experimental conditions <cit.>. The numerical simulations quantitatively agree with our results for Δμ-I and ϕ-I characteristics, capturing the smoothness of the Shapiro steps in the experimental data. The agreement with the numerical simulation evidences that the observed lower contrast of the plateaus in the BEC is inherent to our junction, which nevertheless displays Shapiro steps dynamics in a broad range of modulation parameters (Fig. <ref> of Supplementary Materials). We demonstrate the dynamical synchronization between the junction phase and the external driving by monitoring ϕ as a function of time in the 0-th and 1-st Shapiro steps (Fig. 
<ref> C); the 2-nd Shapiro step is additionally shown in Fig. <ref> of the Supplementary Materials. The interference fringes show an oscillating behavior in both cases, highlighted by the measured relative phase ϕ (Fig. <ref> D), which showcases the phase-locking and synchronized dynamics induced by the driving. Note that phase-locking effects have also been observed in a superfluid system in a Floquet-driven tilted double-well potential <cit.>. Whereas in the 0-th step, the phase oscillates in phase with the injected current without reaching the critical values for phase slips, in the 1-st step ϕ reaches this critical limit once every cycle, except for the transient behavior of the first modulation, leading to periodic phase slips, marked in Fig. <ref> C-D (ii) by the red arrows. The phase-slippage process produces a clear spike in ϕ for both the dynamical classical-field simulation and the overdamped RCSJ solution, associated with an increase of the error bar on the measured phase <cit.>. Depending on the junction dimensionality, phase slips manifest as different topological excitations, from a soliton in 1D <cit.>, to a vortex-antivortex pair in 2D <cit.>, to vortex rings in 3D <cit.>. The kind of excitation created is determined by which one is energetically favorable <cit.>, or arises from a multi-step process such as soliton decay into different topological excitations <cit.>. In our BEC junction, we directly observe the presence of vortex-antivortex pairs as localized density depletions in the superfluid after a short time of flight <cit.> (Fig. <ref> E). For the 1-st Shapiro step, we observe vortices generated just to the left of the barrier (i-ii) that subsequently move towards the bulk of the reservoir (iii-iv). After an additional modulation period, a second vortex-antivortex pair is similarly emitted from the barrier region (v). The dynamical classical-field simulations corroborate this mechanism (Fig. <ref>-<ref> of Supplementary Materials): at the end of the first cycle, the phase difference accumulated along the junction releases a solitonic excitation from the barrier, which then quickly decays into vortex-antivortex pairs via the snake instability <cit.>. The periodic emission of vortex-antivortex pairs in Shapiro steps is a proxy of the underlying synchronized phase dynamics of the junction, which undergoes n phase-slippage processes per modulation cycle in the n-th step, as pictorially illustrated in the washboard potential analogy of Fig. <ref> C(ii) for n=1. By measuring the number of vortex pairs N_d as a function of the injected current (Fig. <ref>), we probe the synchronization also in the strongly-interacting regime. In the non-modulated case, vortices are observed to proliferate for I_DC>I_c, signaling a contribution to the junction conductance due to collective excitations, in agreement with previous observations in three-dimensional atomic JJs <cit.>. For the AC drive instead, the number of detected vortex pairs shows a step-like trend that closely resembles the Shapiro steps observed in the I- Δμ curve measured under the same driving conditions. In particular, in the 2-nd Shapiro step, N_d doubles with respect to the 1-st step, highlighting that, as a result of the underlying synchronized phase dynamics, the number of phase slips is proportional to the step number. The dissipative and resistive dynamics giving rise to the Shapiro steps in our atomic JJ is dominated by collective excitations, playing a role similar to that of the quasiparticles produced by Cooper-pair breaking in SJJs.
In fact, in all explored configurations, the junction always works in the regime of Δμ≪Δ, with Δ the superfluid gap, where broken pairs are energetically suppressed and the only accessible excitations are collective ones, namely phonons, solitons and vortices <cit.>. § CONCLUSIONS We have observed Shapiro steps in a current driven atomic JJ composed of two weakly-coupled strongly-interacting Fermi gases. Shapiro steps appear as quantized plateaus in the relative chemical potential, the analog of the voltage drop in SJJs, occurring at integer multiples of the external driving frequency. As the driving amplitude increases, the widths of these plateaus exhibit a non-linear dependence, arising from the interplay between the external drive and the non-linear characteristics of the atomic Josephson junction. We probe the synchronization process at the heart of Shapiro steps by directly accessing the microscopic phase dynamics of the junction featuring periodic phase-slippage processes which manifest as vortex-antivortex pairs. At the more fundamental level, our results also disclose the interplay between quantum phase coherence and dissipation dynamics in a strongly correlated Fermi system. Our work opens prospects for a deeper comprehension of how quasiparticles and quantum phase slips contribute to quantum transport, potentially leading to advancements in understanding the complex dynamics within driven superconducting networks <cit.>. Our driven atomtronic circuit can be indeed exploited to investigate driven atomic JJ arrays, whether arranged linearly or in annular configurations <cit.>. In turn, this opens the way to simulate important paradigms in quantum synchronization, as provided by Kuramoto-like models <cit.>, by driven atomic JJ in the presence of strong interactions and with disordered couplings between the links <cit.>. Finally, as Shapiro steps provide the voltage standards in quantum electronics <cit.>, we could leverage the quantized steps to access a direct measurement of chemical potential differences <cit.>. This approach would be specifically valuable in the intermediate regimes of the crossover superfluids, where the equation of state is challenging <cit.>. We note that Shapiro steps have been very recently observed also in a driven atomic JJ with weakly interacting bosons. § ACKNOWLEDGMENTS We thank Francesco Marino, Ludwig Mathey, and Klejdja Xhani for the discussions and Francesco Scazza for careful reading of the manuscript. G.R. and G.D.P. acknowledge financial support from the PNRR MUR project PE0000023-NQSTI. G.R. acknowledges funding from the Italian Ministry of University and Research under the PRIN2017 project CEnTraL and the Project CNR-FOE-LENS-2023. The Authors acknowledge support from the European Union - NextGenerationEU for the “Integrated Infrastructure initiative in Photonics and Quantum Sciences" - I-PHOQS [IR0000016, ID D2B8D520, CUP B53C22001750006]. This publication has received funding under the Horizon Europe program HORIZON-CL4-2022-QUANTUM-02-SGA via project 101113690 (PASQuanS2.1). J.A.S. acknowledge financial support from CONAHCyT grant CF-2023-I-72; DGAPA-UNAM-PAPIIT grant IN105724; CIC-UNAM grant LANMAC-2024, and European Community's Horizon 2020 research and innovation program under grant agreement n° 871124. § AUTHOR CONTRIBUTIONS G. D. P., V. P. S., L. A. and G. R. conceived the experiment. G. D. P., D. H.-R., N. G., M. F. F. and G. N. performed the experimental work, acquired the data and carried out the data analysis. V. P. S. 
performed numerical simulations. L. A and G. R. supervised the work. All authors contributed extensively to the discussion and interpretation of the results, and took part in writing the manuscript. Science § SUPPLEMENTARY MATERIALS §.§ Experimental method §.§.§ Gas preparation We initially realize a superfluid atomic Fermi gas by evaporatively cooling the first and third lowest hyperfine state of ^6Li atoms in a red-detuned, cigar-shaped harmonic trap at 690G. To obtain a superfluid in the BEC regime, during the last part of the evaporative cooling procedure we sweep the magnetic field at 633G, where the molecular scattering length is a_M = 0.6 a = 1029 a_0. After the production of the superfluid, we adiabatically ramp up in 100ms a TEM_0,1-like laser beam, that confines the atoms in the ẑ direction creating a harmonic potential, and the hard wall potential with a rectangular shape in the x-y plane created with the DMD. Both these two potentials are realized with a blue-detuned laser at 532nm. The vertical harmonic confinement has a trap frequency ω_z =2π×685(5)Hz at unitarity and ω_z =2π×416(4)Hz in the BEC regime. Subsequently, we adiabatically ramp down the initial cigar-shape potential in 100ms, and we let the system equilibrate for at least 50ms before ramping up the tunneling barrier. The superfluid produced in such a hybrid trap is homogeneous in the x-y plane since the residual harmonic confinement arising from the TEM_0,1-like and the curvature of the magnetic field corresponds to 2.5Hz, therefore negligible for the dynamics studied in this work. The Fermi energy of the system in the hybrid trap is computed directly from a Thomas-Fermi approximation yielding <cit.>: E_F = ħ^2 k_F^2/2m = 2 ħ√(ħπω_z N/m A), where ħ is the reduced Plank constant, m the mass of a ^6Li atom, and A = L_x × L_y, the area of the box trap. The chemical potential of the unitary superfluid is calculated as μ_0/h = ξ^3/4 E_F/h = 5.9 (2)kHz, where ξ≃ 0.37 is the Bertsch parameter; whereas, for the molecular BEC, we calculate it as μ_0/h = (3 π N ħ^2 ω_z a_M / 2 A √(m) )^2/3 / h = 1.20(5)kHz <cit.>. The healing length in the BEC regime is ξ_L = 0.59(1)μ m, while in the UFG the typical length scale of the superfluid is given by 2π/k_F ≃1.67(3) μ m. To produce the atomic JJ, we ramp up the barrier in its initial position in 10ms and wait for 30ms before starting the movement. The homogeneity of the cloud before ramping up the barrier ensures the same density and chemical potential in the two reservoirs at t = 0ms. The chemical potential difference Δμ is extracted by taking an in situ image of the cloud after the barrier movement and computing μ_L (μ_R) from the measured number of atoms N_L (N_R) in the left (right) reservoir. This way of computing Δμ is analogous to a measurement of the time average ⟨Δμ⟩_t. In fact, as it is shown in Fig. <ref> A, the instantaneous density accumulation and depletion, created in the vicinity of the barrier because of its motion, propagate in time away from the barrier position inside the two reservoirs, traveling at the speed of sound (measured as in Ref. <cit.>). Consequently, the time evolution of the instantaneous chemical potential difference at the barrier position, Δμ_b (t), is mapped in different positions in space. The black horizontal dashed line in the figure indicates the time at which the measurement is performed, corresponding to three modulation periods. 
At this time, the initial density modulations have traveled up to the edge of the cloud, covering the full reservoirs. For this reason, the time average value of Δμ_b (t) is mapped onto the measured value of the global chemical potential difference, Δμ, between the two reservoirs. This reasoning is expected to be less valid for higher frequencies, for which the period is shorter and the density modulations do not reach the edges at the end of the three cycles, leading to a lower value of the measured Δμ. Given the lower speed of sound in the BEC regime, the decrease happens at lower frequencies with respect to the UFG regime. We note that all the measurements of Δμ reported in this work have been performed by imaging the cloud after three modulation periods. This is the largest number of cycles allowed by the size of the junction for the range of barrier velocities spanned. For each ω, we also acquired a corresponding DC curve with I_AC=0, moving the barrier at constant velocity for a time of 3 T. §.§.§ Barrier characterization The barrier intensity, shape, and position are controlled with the DMD. The effective movement of the barrier is obtained by creating a sequence of different light patterns with the DMD, with the barrier position following the desired equation of motion, and projecting them onto the atomic plane through the high-resolution microscope objective at a frame rate of 12.5 kHz. Even though the DMD-made barrier movement is intrinsically discretized, the small size of a single DMD-pixel imaged on the atomic plane, 0.25 μm, ensures the smoothness of the barrier movement. We characterize the properties of the barrier movement by monitoring the position of the barrier as a function of time in the case of Fig. <ref> B. The barrier position is obtained by performing a Gaussian fit of the density depletion of the in situ images as a function of time. The extracted positions are indicated by the white points in Fig. <ref> A. We fit the trajectory with the equation of motion of the barrier x(t) = v_DC t + x_AC sin(ω t). The fitted parameters are compatible with the set values within a 1% error for v_DC and a 0.5% error for x_AC and ω. The Fourier transform (FT) of the observed trajectories, taken with respect to the corresponding trajectories with the same parameters but x_AC = 0 μm, x̅_b(t), shows an almost single-frequency behavior, with a small contribution of the second and third harmonics, as shown in Fig. <ref> C, confirming that our protocol provides a clean and monochromatic AC drive. The measurements reported in this work have been performed with values of x_AC in the range 1-6 μm, limited by the finite resolution of the DMD-projecting setup and the frame rate of the DMD, respectively. The intensity of the profile created to produce the tunneling barrier is calibrated with a phase imprinting method <cit.>. We create a small JJ of dimension 37.5 × 17.5 μm^2 and imprint a phase on one of its reservoirs by applying a homogeneous optical potential U_0 for a short time interval Δ t. For Δ t < h/μ, we realize an almost pure phase excitation, with the phase imprinted by the light on the illuminated reservoir given by ħϕ = U_0 Δ t. To calibrate the potential U_0, we measure ϕ from the interference arising in the time-of-flight (TOF) expansion of the junction, using the phase of the non-imprinted reservoir as a reference. Figure <ref> A-B shows an example of an interference pattern, which we fit to extract ϕ with a sinusoidal function.
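The fringe fit mentioned here can be carried out with a standard least-squares routine; the sketch below is our own illustration on synthetic data (the paper does not provide its fitting code, and the fixed fringe wavelength used here is a placeholder). It extracts the relative phase ϕ from a one-dimensional integrated density profile.

import numpy as np
from scipy.optimize import curve_fit

wavelength = 4.0  # um, fixed fringe wavelength (placeholder; in the experiment it is
                  # fixed to the average value from a preliminary fit to all the data)

def fringe(x, amp, phi, offset):
    """Sinusoidal interference pattern with a fixed wavelength."""
    return offset + amp * np.cos(2 * np.pi * x / wavelength + phi)

# Synthetic integrated density profile around the barrier position
x = np.linspace(-10.0, 10.0, 200)                  # um
rng = np.random.default_rng(1)
profile = fringe(x, 0.3, 0.8, 1.0) + 0.02 * rng.normal(size=x.size)

popt, pcov = curve_fit(fringe, x, profile, p0=[0.2, 0.0, 1.0])
phi_fit = np.mod(popt[1], 2 * np.pi)               # relative phase across the junction
phi_err = np.sqrt(np.diag(pcov))[1]                # 1-sigma uncertainty from the fit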
By repeating this process for different values of the imprinting pulse power, we extract a calibration for the imprinted potential U_0 (Fig. <ref> C). To characterize the barrier height and width we image the DMD-produced barrier intensity pattern on a service camera in the DMD-projecting optical system. We acquire images of the barrier in different positions throughout the area of the JJ and extract their height and size via a Gaussian fit along the x- direction. To account for the finite resolution of the imaging system after the service camera, before performing the fit, we convolve the images with a Gaussian of FWHM= 0.63 μm, well-approximating the Point Spread Function (PSF) of the microscope objective. The barrier properties are barely varying while changing their position, providing the average values reported in the main text for V_0 and FWHM. In particular, to obtain the value of the barrier height V_0 we compare the fitted barrier height from the image with the pattern used to create the phase imprinting and employ the calibration factor to convert it into energy units. §.§.§ Phase and vortex detection The relative phase between the two reservoirs is measured from the interference fringes arising after a TOF expansion of 2ms, after abruptly switching off all the confinements. The short TOF employed for this measurement ensures that interference fringes appear only close to the barrier region (Fig. <ref>), allowing for a precise measurement of the relative phase at the junction ϕ. In particular, to quantitatively extract ϕ we restrict to a region around the instantaneous position of the barrier (dashed red rectangle area in the figure) and fit the density profile integrated along the y-direction with a sinusoidal function. We constrain the wavelength of the fringes to the same value for all the acquired interferograms, as this quantity depends only on the TOF. We fix the best fringe wavelength as the average wavelength from a preliminary sinusoidal fit to all the data acquired under different conditions. In Fig. <ref> B-C we report the measured phase evolution in the second Shapiro step, under the same conditions as in Fig. <ref> of the main text. Both in the experimental data and in the overdamped RCSJ model numerical solutions, two phase-slips processes are evident in each modulation cycle. To detect vortices in the BEC regime we employ a TOF technique as well, but, to avoid the simultaneous presence in the images of interference fringes, we ramp down the barrier in 0.24ms, wait 4ms and then perform the TOF expansion. To increase the contrast of the vortices, we ramp down the DMD-made potential during the first 1ms of TOF expansion. To observe the presence of vortices in the UFG regime we add to the TOF expansion a fast sweep of the Feshbach magnetic field to map the system in a molecular BEC, where vortices are visible as clear holes in the density <cit.>. In particular, at the end of the movement of the barrier , we wait 1ms before rapidly removing the barrier from the system in 1.5ms and then we wait 1.5ms before performing the TOF procedure, starting the magnetic field ramp after the end of the barrier movement. In both BEC and UFG regimes, the waiting time between the barrier ramp down and the TOF expansion is expected not to affect significantly the number of vortices in the system. We note that, since N_d of Fig. 
<ref> of the main text is measured after three modulation cycles, we would expect to see a higher number of vortex-antivortex pairs, corresponding to the periodic phase-slippage process. However, the finite size of the junction along the y-direction facilitates the vortices to escape from the bulk density, so that the measured N_d most likely corresponds to the number of vortex-antivortex pairs emitted during the last modulation cycle. We note that with a similar protocol as for detecting vortices, employing a fast sweep during the TOF, the relative phase at the junction could be measurable also at unitarity from the interferograms <cit.>. However, this technique provides more noisy interferograms, and the lower signal/noise together with the presence of vortices makes the extraction of ϕ harder. For this reason, we decided to perform the phase measurement in the BEC regime. §.§ Circuital model simulations We study the expected behavior of Δμ and of ϕ in the framework of the RCSJ model, in which the phase evolution is described by Eq. (<ref>) of the main text. We solve numerically the equation in the overdamped regime (β_c ≪ 1), neglecting the capacitance, i.e. the second derivative term. In this limit, the I_AC=0 case solution Δμ = G^-1√(I_DC^2-I_c^2) is used to extract the values of G and I_c by fitting the observed value of Δμ as a function of I_DC. These values are then used in order to simulate the dynamic in the I_AC≠ 0 case. Figure <ref> displays the results for values of the parameters as the case plotted in Fig. <ref>A-B of the main text, using the Josephson-Anderson relation Δμ = -ħϕ̇. In particular, we extract the expected average value of Δμ from the time average value of the phase derivative ⟨ϕ̇⟩ for a total evolution time of 10periods. Figure <ref>C shows the expected behavior in the absence of modulation (I_AC=0, blue line) and the predicted shape of Shapiro steps for a I_AC=1.3 I_c. In Fig. <ref>D we plot the corresponding phase at the end of 3 complete periods for the same cases of I_AC=0 and I_AC=1.3 I_c. This plot shows a similar trend to the one shown in Fig. <ref>B of the main text, apart for the regions in the transition between the steps in the modulated case, where this quantity shows a not well-defined behavior. In particular, Fig. <ref>E, shows the value of the distance between the phase at a given value of I_DC resulting from different initial values of the phase ϕ_0: Δϕ^2 (I_DC) = ∑_i,j ( ϕ(I_DC,ϕ_0 = ϕ_0,i) - ϕ(I_DC,ϕ_0 = ϕ_0,j))^2. This value diverges in this transition region. Under a voltage modulation, the Shapiro step half-widths follow the well-known relation Δ I_n/I_c = |J_n(V_AC/ħω)|. However, quantitative deviations occur under current modulation, especially for the first few steps, as shown in Fig. <ref>. §.§ Phase Fluctuations In our system, we refer to the phase fluctuations in terms of the fluctuations of the relative phase given small deviations in the initial conditions of ϕ, and not to stochastic noise terms in the RCSJ model. These fluctuations in the initial conditions are natural in our experimental setup and modify the measurement of the phase we perform after TOF, as discussed in the previous section. To quantify the stability of the phase measurements we make use of the overdamped RCSJ model. We compare the phase evolution of from 100 simulations of Eq. (<ref>) given different initial conditions ϕ(t=0) = ε, where ε follow a uniform distribution in the domain [-2π/10, 2π/10]. 
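A minimal numerical sketch of this ensemble procedure is the following (Python with scipy; dimensionless units with ħ = I_c = G = 1, illustrative drive parameters, and one common sign convention for the Josephson-Anderson relation), intended as a sketch of the method rather than a reproduction of the actual simulations.

import numpy as np
from scipy.integrate import solve_ivp

hbar, Ic, G = 1.0, 1.0, 1.0                     # dimensionless units (illustrative)

def rcsj_rhs(t, phi, Idc, Iac, omega):
    # Overdamped RCSJ: I(t) = Ic*sin(phi) + G*dmu, with dmu = -hbar*dphi/dt
    I = Idc + Iac * np.cos(omega * t)
    return -(I - Ic * np.sin(phi)) / (G * hbar)

def mean_dmu(Idc, Iac, omega, n_periods=10, phi0=0.0):
    T = 2 * np.pi / omega
    sol = solve_ivp(rcsj_rhs, (0, n_periods * T), [phi0],
                    args=(Idc, Iac, omega), max_step=T / 200)
    # Time-averaged chemical potential difference from <dphi/dt>
    return -hbar * (sol.y[0, -1] - sol.y[0, 0]) / (sol.t[-1] - sol.t[0])

omega, Iac = 2.0, 1.3 * Ic
rng = np.random.default_rng(0)
for Idc in (0.5, 1.0, 1.5, 2.0):
    dmus = [mean_dmu(Idc, Iac, omega, phi0=eps)
            for eps in rng.uniform(-2 * np.pi / 10, 2 * np.pi / 10, size=20)]
    print(f"Idc/Ic = {Idc:.1f}: <dmu> = {np.mean(dmus):.3f} +/- {np.std(dmus):.3f}")

Scanning I_DC in this way produces plateaus in the time-averaged Δμ, with the spread over initial phases concentrated near the transitions between plateaus, mirroring the behavior described below.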
The ε range is chosen based on the fluctuations observed without any external current injected in the JJ. We compare our experimental data to the simulation results after three modulation cycles, see Fig. <ref>. Let us note that the data points in Fig. <ref> A-B display a larger error bar near the transitions between the plateau regions (B), and near and above I_c (A), also visible in the measured ϕ as larger errorbars. This behavior is recovered from the numerical simulations shown in Fig. <ref> D-E, where the variance measured over the ensemble of different initial conditions is non-zero only in the transition regions while maintaining a well-defined value in the central region of the plateaus. §.§ Classical-field simulation method To simulate the dynamics of ^6Li_2 molecules on the BEC side we use classical-field dynamics within the truncated Wigner approximation <cit.>. The condensate is described by the Hamiltonian Ĥ = ∫d r[ ħ^2/2m∇ψ̂^†( r) ·∇ψ̂( r) + V( r) ψ̂^†( r)ψ̂( r) + g/2ψ̂^†( r)ψ̂^†( r)ψ̂( r)ψ̂( r)], where ψ̂( r) and ψ̂^†( r) are the bosonic annihilation and creation field operators, respectively. The 3D interaction parameter is given by g=4π a_D ħ^2/m_D, where a_D is the molecular s-wave scattering length and m_D = 2 m is the molecular mass. Following the experiments we choose: a_D = 0.6 a=1029 a_0, where a_0 is the Bohr radius, V( r) = V(z) = m_D ω_z^2 z^2/2 the harmonic trapping potential, with ω_z = 2 π× 416Hz being the trap frequency in the z direction. For our simulations, we consider the same dimensions of the experimental Josephson junction, by mapping the real space on a lattice system of 253 × 37 × 8 sites with the lattice discretization length l= 0.5 μmm. We note that l is chosen such that it is to be comparable or smaller than the healing length ξ_L = ħ/√(2mgn_0) and the de Broglie wavelength, where n_0 is the density <cit.>. In the classical-field representation, we replace the operators ψ̂ in Eq. <ref> and in the equations of motion by complex numbers ψ. We sample the initial states ψ(t=0) in a grand canonical ensemble of temperature T and chemical potential μ via a classical Metropolis algorithm. For all simulations, we use T= 40 and adjust μ such that the total atom number N is about 2 × 10^4. Each initial state is propagated using the classical equations of motion. To create a Josephson junction we add the term ℋ_ex = ∫d r V(x,t) n( r, t), where n( r, t) = |ψ( r , t)|^2 is the local density and V(x, t) is the barrier potential of the form V(x,t) = V_0 (t) exp[- 2( x-x_0- x(t) )^2/w^2]. Here, V_0(t) is the height and w is the width of the barrier. The initial location of the barrier potential is fixed at x_0= 32, while x(t) is the driving term. As the barrier size is at the limit of the experimental resolution, the experimental estimation of its height is particularly challenging. For this reason, we perform the numerical simulation by fixing the value of the barrier size to be compatible with the confidence range of the experimental one and scan V_0/μ_0 to obtain the best agreement with the experimental data of AC and DC drive reported in Fig. <ref>. We choose w=0.7, and ramp up V_0(t) linearly to V_0/μ_0=1.45 over 100 and then wait for 50 to achieve equilibrium, where μ_0 is the chemical potential averaged over the z direction. This creates a weak link at x_0 by separating the cloud into two subsystems (referred to as the left and right reservoirs). 
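To illustrate only the propagation step of such a classical-field calculation (not the grand-canonical Wigner sampling, the 3D geometry, or the experimental parameters), a minimal one-dimensional split-step sketch in dimensionless units with a moving Gaussian barrier could look as follows; the initial state, grid, and all numbers are placeholders, and a faithful calculation would first prepare the ground state (e.g., by imaginary-time evolution) and ramp the barrier up as described above.

import numpy as np

# Dimensionless 1D Gross-Pitaevskii sketch (hbar = m = 1), split-step Fourier propagation
L, N, dt, g = 120.0, 1024, 2e-3, 1.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
kinetic = np.exp(-0.5j * dt * k**2)

def barrier(t, V0=1.5, w=1.0, x0=0.0, v_dc=0.05, x_ac=2.0, omega=0.5):
    # Moving Gaussian barrier V(x,t) with a DC + AC drive (placeholder parameters)
    xb = x0 + v_dc * t + x_ac * np.sin(omega * t)
    return V0 * np.exp(-2 * (x - xb)**2 / w**2), xb

psi = np.sqrt(np.maximum(1.0 - (2 * x / L)**2, 0.0)).astype(complex)  # smooth initial cloud

def step(psi, t):
    V, xb = barrier(t)
    psi = psi * np.exp(-0.5j * dt * (V + g * np.abs(psi)**2))
    psi = np.fft.ifft(kinetic * np.fft.fft(psi))
    psi = psi * np.exp(-0.5j * dt * (V + g * np.abs(psi)**2))
    return psi, xb

for n in range(20000):
    psi, xb = step(psi, n * dt)

# Crude imbalance proxy; the analysis here converts reservoir atom numbers to chemical potentials
nL = np.sum(np.abs(psi[x < xb])**2)
nR = np.sum(np.abs(psi[x >= xb])**2)
print("relative atom-number imbalance z =", (nR - nL) / (nR + nL))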
The driving term is given by x(t) = v_DC t + x_ACsin(ω t), where v_DC is the DC barrier velocity, x_AC is the AC driving amplitude and ω is the AC driving frequency <cit.>. Similarly to the experiment, we write the current as I(t) = I_DC + I_ACcos(ω t), where I_DC = v_DC I_c/v_c and I_AC = x_ACω I_c/v_c, with I_c being the critical current and v_c the critical velocity. For various driving parameters, we analyze I-Δμ characteristic curves as a function of the bias current I_DC after three driving periods. Δμ= μ_R - μ_L is the chemical-potential difference between the left (μ_L) and right (μ_R) reservoir, which is determined using the total atom number in each reservoir. In Fig. <ref>A of the main text, we show the simulation results of I-Δμ curves for undriven (I_AC/I_c=0) and driven (I_AC/I_c=1.4 and ω=2 π× 175) junctions. We perform simulations of the AC response of the junction as a function of the driving frequency ω and report these results in Fig. <ref>B, which are in excellent agreement with the experimental findings. To determine the height of Shapiro steps we fit I-Δμ curves with sigmoid functions as in the experiments. To determine the ϕ-I characteristic of the junction we calculate the phase difference in the vicinity of the barrier ϕ = ϕ(x-2l) - ϕ(x+2l), where x(t) is the dynamic location of the barrier at time t. In Fig. <ref> we show the time evolution ϕ(t), averaged over many samples, at varying I_DC/I_c for the AC-driven system. From the phase change at the end of driving we obtain the ϕ-I curves, shown in Fig. <ref>B of the main text. §.§.§ Vortex dipole creation: phase slip process As mentioned in the main text, depending on the junction dimensionality, phase slips can manifest as different topological excitations <cit.>, in particular for our experimental conditions as vortex-antivortex pair in two dimensions, and arise from a multi-step process as soliton decays into a different topological excitation <cit.>. Our classical-field simulations corroborate this mechanism as shown in Fig. <ref>-<ref>, where a phase difference is accumulated along the extended junction at the end of the second (Fig. <ref>) and third cycle (Fig. <ref>). After the solitonic excitation is released from the barrier, the soliton decays into multiple vortex-antivortex pairs possibly via a snake-instability <cit.>. It is important to note that the depinning of the soliton and the subsequent decay process happens on a timescale faster than the modulation period. The critical wavevector at which the snake instability occurs near unitarity <cit.>, and at unitarity reaches k_c∼ 0.93 k_F and decreases in the BEC side of the crossover according to the formula k_c ∼√(2)ξ_L^-1. The critical wavelength is therefore λ_c=2π/k_c = √(2)πξ_L≈ 2.5(2) μm in the BEC, and λ_c=2π/k_c= 1.8(2) μm in the UFG. In particular, our junction transverse size is large compared to the critical length L_y/ λ_c ≈ 7, allowing multiple "snake" oscillations along the junction transverse direction. §.§ Shapiro steps in the BEC regime In this section, we report the characterization of the Shapiro step properties for the BEC junction (1/k_Fa = 3.3±0.1), together with a detailed description of the phenomenological fitting procedure, employed to analyze all the I - Δμ curves, independently from the interaction regime. The DC current-voltage data are fitted with the analytical solution of the overdamped RCSJ model, namely with Δμ = √( I_DC^2 - I_c^2)/G (dark blue dashed line of Fig. <ref> A), keeping both I_c and G as free parameter. 
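The DC fit mentioned in the last sentence is a two-parameter least-squares problem; a minimal sketch (Python with scipy, with placeholder arrays standing in for the measured DC curve) is:

import numpy as np
from scipy.optimize import curve_fit

def dc_rcsj(I, Ic, G):
    # Overdamped RCSJ DC branch: dmu = sqrt(I^2 - Ic^2)/G above Ic, 0 below
    return np.where(np.abs(I) > Ic, np.sqrt(np.clip(I**2 - Ic**2, 0.0, None)) / G, 0.0)

I_dc = np.linspace(0.0, 3.0, 30)                               # placeholder bias currents
dmu_meas = dc_rcsj(I_dc, 1.0, 2.0) + 0.02 * np.random.randn(I_dc.size)

(Ic_fit, G_fit), _ = curve_fit(dc_rcsj, I_dc, dmu_meas, p0=[0.8, 1.0])
print(f"Ic = {Ic_fit:.2f}, G = {G_fit:.2f}")

The fitted I_c and G are then reused, as described earlier, to solve the driven overdamped equation.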
On the other hand, to quantitatively characterize the Shapiro step height and width, we fit the AC I - Δμ curve with as many independent sigmoid functions as the number of visible steps (solid lines in Fig. <ref> A). In particular, we fit the n-th step with the function: Δμ = Δμ_n^rel/(1+ exp( -(I_DC-I_n)/Γ)) + Δμ_n-1^rel, where Δμ_n^rel is the relative height of the n-th step and I_n its position. From these parameters we compute the n-th step height as Δμ_n = ∑_i=0^nΔμ_i^rel and its width as Δ I_n = I_n+1-I_n. The step height characterization as a function of the modulation frequency in the BEC regime is reported in Fig. <ref>. Here, the experimental data (filled symbols) are compared with the results of classical-field numerical simulations (empty symbols), performed as described in the dedicated section, and analyzed with the same fitting procedure as described above. The numerical simulations are in very good agreement with the experimental results, both confirming the presence of Shapiro steps at Δμ = n ħω in the BEC regime in the explored frequency range. In particular, in both cases, we observe a reduction of the measured step height for increasing modulation frequency, which could be due to the coupling of the drive with trap excitations along the z-direction. In fact, for AC drive frequencies approaching the vertical trap frequency, we expect the Shapiro dynamics to mix with trap excitations, possibly redistributing the injected energy between the two contributions. We observe a similar reduction also for the junction at unitarity at ω≃ 2 π× 500Hz, compatible with this scenario given the higher trap frequency in this regime. As already mentioned above, a reduction of the step height at higher modulation frequency could also arise from the finite speed of propagation of the Δμ generated at the barrier, set by the speed of sound. For high ω, the distance traveled by the Δμ pulses during three modulation cycles can be shorter than the junction size, so that the measured Δμ can be lower than the stationary value. In Fig. <ref> A, we also report the overdamped RCSJ numerical solutions for the AC I - Δμ curve (red and blue dashed lines), obtained by using the values of I_c and G extracted from the fit of the DC curve. The numerical solutions qualitatively reproduce the measured steps, despite the smoother trend visible both in the experimental data and in the numerical simulations, which is also apparent in the data reported in Fig. <ref> A of the main text. Such a discrepancy most likely arises from the larger capacitance in the BEC regime (h C = 6.5(6)s) with respect to unitarity (h C = 1.8(2)s). The capacitance is here computed as the inverse of the charging energy E_c = 1/C = ∂μ_L/∂N_L + ∂μ_R/∂N_R ≃ 4 ∂μ_0/∂N, and is therefore proportional to the compressibility of the gas. In the BEC regime, the larger compressibility, with respect to unitarity, allows density excitations of higher contrast to propagate in the junction as a consequence of the AC drive, making the amplitude of the order parameter in the two reservoirs spatially inhomogeneous. As a result, the dynamics of the junction are no longer well described in terms of the relative phase ϕ alone, and the RCSJ model becomes less accurate. We note that density excitations are observable also in the unitary junction as a consequence of the AC drive (see Fig. <ref>), but the strong interactions reduce their contrast and thus keep the overdamped RCSJ model a good approximation.
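As an illustration of the sigmoid analysis described at the beginning of this section, the sketch below fits a synthetic two-step curve with a sum of sigmoids and extracts step heights and widths; it uses a single global parametrization rather than fitting each step independently as in the text, and all numbers are placeholders.

import numpy as np
from scipy.optimize import curve_fit

def shapiro_steps(I, *params):
    # Sum of sigmoids; params = (dmu_rel_1, I_1, Gamma_1, dmu_rel_2, I_2, Gamma_2, ...)
    out = np.zeros_like(I, dtype=float)
    for dmu_rel, I_n, gamma in np.reshape(params, (-1, 3)):
        out += dmu_rel / (1.0 + np.exp(-(I - I_n) / gamma))
    return out

I_dc = np.linspace(0.0, 3.0, 200)                              # placeholder AC-driven curve
dmu = shapiro_steps(I_dc, 1.0, 1.0, 0.05, 1.0, 2.0, 0.05) + 0.03 * np.random.randn(I_dc.size)

p0 = [0.8, 0.9, 0.1, 0.8, 1.9, 0.1]                            # guesses for two visible steps
popt, _ = curve_fit(shapiro_steps, I_dc, dmu, p0=p0)
fits = np.reshape(popt, (-1, 3))
heights = np.cumsum(fits[:, 0])                                # dmu_n = sum of relative heights
widths = np.diff(fits[:, 1])                                   # Delta I_n = I_{n+1} - I_n
print("step heights:", np.round(heights, 2), "step widths:", np.round(widths, 2))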
http://arxiv.org/abs/2409.02769v1
20240904144842
Magnetic, Kinetic, and Transition regime: Spatially-segregated structure of compressive MHD turbulence
[ "Guang-Xing Li", "Mengke Zhao" ]
astro-ph.GA
[ "astro-ph.GA", "astro-ph.SR", "physics.comp-ph", "physics.plasm-ph" ]
Guang-Xing Li (ORCID 0000-0003-3144-1952), South-Western Institute for Astronomy Research, Yunnan University, Chenggong District, Kunming 650500, China; Mengke Zhao (ORCID 0000-0003-0596-6608), School of Astronomy and Space Science, Nanjing University, 163 Xianlin Avenue, Nanjing 210023, People’s Republic of China § ABSTRACT Turbulence is a complex physical process that emerges in multiple areas of modern physics, and in ionized environments such as interstellar gas the magnetic field can be dynamically important. However, the exact role of the magnetic field in the ionized gas remains unclear. We use the Alfven Mach number M_ A = √(E_ k/E_B) to describe the importance of the magnetic field relative to the turbulent motion, and reveal diverse ways of mutual interaction. At low M_ A (magnetic regime), the magnetic field is well described as force-free. Despite the strong magnetic field, the motion of the gas does not stay aligned with the magnetic field. In the regime of intermediate M_ A (magnetic-kinetic transition regime), the velocity field and the magnetic field exhibit the highest degree of alignment, which is likely the result of a rapid relaxation. At high M_ A (kinetic regime), both the magnetic field and the velocity field are irregular, with no alignment. We find observational counterparts to these regimes in observations of interstellar gas. The results highlight the diverse behavior of gas in MHD turbulence and guide future interpretations of the role of the magnetic field in astrophysical observations. § INTRODUCTION Turbulence is a complex process that has puzzled scientists since the age of Leonardo da Vinci, and the capability of turbulence to control the evolution of interstellar gas has been known for decades <cit.>. Understanding the role of the magnetic field in compressible turbulence can be crucial for our understanding of turbulence and for interpreting astrophysical observations. Turbulence is a complex, multi-scale process <cit.> best described using scaling relations <cit.>. Past studies of compressible magnetohydrodynamics (MHD) turbulence have followed this tradition, where describing the statistical properties of the region has been a priority <cit.>. Others have astrophysical applications in mind and have focused on global quantities extracted from the simulation box <cit.>. The alignment between vector quantities such as the magnetic field B⃗, the velocity v⃗, and the current J⃗ offers insight into the behavior of magnetized fluids. In astrophysical research, the alignment between B⃗ and v⃗ is assumed to be an indicator of the magnetic field's ability to affect the motion of the gas. The alignment between B⃗ and v⃗ in the strongly magnetized regime is often taken for granted, and this picture is the foundation for understanding phenomena such as winds from disk-star systems <cit.>, where the picture of beads on a wire has been widely accepted. In this picture, field lines of the magnetic field behave like rigid wires, which guide the motion of the gas. However, it is unclear to what extent we can trust this picture. On the other hand, <cit.> have shown that the rapid alignment between v⃗ and B⃗ is the result of a rapid relaxation process, in which the magnetic field does not need to be dominant. The alignment between J⃗ and B⃗ is also critical. In magnetized fluids, an interesting phenomenon is the emergence of force-free fields, where the magnetic pressure much exceeds the plasma pressure, such that the Lorentz force must vanish to ensure a global balance.
This force-free field is thus a direct indication of the dominance of the magnetic energy over other energetic terms. We note that the Lorentz for F⃗_⃗l⃗ is proportional to J⃗×B⃗, and vanishes when J⃗ is parallel to B⃗. The alignment between J⃗ and B⃗ is thus a clear indication of the force-free field. We study the alignment between B⃗, v⃗ and J⃗ at regions of different degrees of magnetization. To quantify the importance of the magnetic field, we use the Alfven Mach number M_ A = √(E_ k/E_B), where E_ k is the kinetic energy density and E_B is the magnetic energy density. We study the importance of the magnetic field under different conditions as characterized by M_ A. By analyzing the alignment between the magnetic field B⃗, velocity v⃗ and current J⃗, we reveal different behaviors of the system under different M_ A. § DATA AND METHOD We use numerical simulations of MHD equations performed using the Enzo code <cit.> with the constrained transport turned on. The simulation conducted in this study analyzed the impact of self-gravity and magnetic fields on supersonic turbulence in isothermal molecular clouds, using high-resolution simulations and adaptive mesh refinement techniques <cit.>. The simulation we use has initial β≈ E_ k / E_ B =0.2. However, as the simulation proceeds, different subregions have different β and M_ A. The Alfven Mach number M_ A is the indicator of magnetic field in MHD numerical simulation: M_ A = √(E_k/E_B) = √(ρσ_v^2/2/B^2/8π) = √(4πρ)σ_v/B, which is the square root of the energy ratio between kinetic energy density E_ k and magnetic energy density E_B. Based on the M_ A, we divide the MHD turbulence into three regimes: magnetic regime, B-k transition, and kinetic regime, and study the relation between the magnetic field and the gas motion. From the magnetic field B⃗, the current J⃗ can be evaluated as J = 1/μ_0∇×B⃗ , and we study the alignment between the magnetic field B⃗ and the current J⃗, and the alignment between the magnetic field B⃗ and the velocity v⃗ at different M_ A. § THREE REGIMES OF MHD TURBULENCE Based on the Alfven Mach number M_ A, we divide the simulation box into three regimes: magnetic regime (M_ A < 0.3), B-k transition (0.3< M_ A<3), and kinetic regime (M_ A>3), and study the alignment between the magnetic field B⃗, velocity v⃗, and current J⃗ at different M_ A. The relationship between the magnetic field-velocity angle θ_B⃗-v⃗, the magnetic field-current angle θ_B⃗-J⃗ and the Alfven Mach number is shown in Fig. <ref>. When plotting these distributions, we subtract the distribution expected if the vectors are randomly oriented, and focus on the additional alignment caused by physics (see Sec. <ref>). With increasing M_ A, the alignment between the magnetic field B⃗ and the current J⃗ changes from aligned to not aligned, and the alignment between the magnetic field B⃗ and the velocity v⃗ change from almost no alignment (θ_B⃗-v⃗≈ 30^∘), to a weak alignment (θ_B⃗-v⃗≈ 20^∘), back to no alignment (θ_B⃗-v⃗≈ 30^∘). We note that the magnetic force is f⃗_ L = J⃗×B⃗ . When J⃗ is parallel to B⃗, the magnetic force vanishes, and this field configuration is called the force-free configuration. The magnetic force is activated if the angle between J⃗ and B⃗ is large. The monotonic decrease of alignment between B⃗ and J⃗ at increasing M_ A is related to the decrease in the importance of the magnetic field, leading to the system moving away from the force-free regime. 
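A minimal sketch of these diagnostics is given below (Python/numpy on placeholder cubes; Gaussian units as in the definition of M_ A, μ_0 set to one in code units, centered differences for the curl, and alignment angles folded to [0°, 90°]); it is an illustration of the quantities used here, not the actual analysis pipeline.

import numpy as np

def alignment_angle(a, b):
    # Angle (deg) between two vector fields of shape (3, Nx, Ny, Nz), folded to [0, 90]
    cos = np.abs(np.sum(a * b, axis=0)) / (np.linalg.norm(a, axis=0) * np.linalg.norm(b, axis=0))
    return np.degrees(np.arccos(np.clip(cos, 0.0, 1.0)))

def current(B, dx=1.0):
    # J = curl(B)/mu_0 via centered differences (mu_0 = 1 in code units)
    Jx = np.gradient(B[2], dx, axis=1) - np.gradient(B[1], dx, axis=2)
    Jy = np.gradient(B[0], dx, axis=2) - np.gradient(B[2], dx, axis=0)
    Jz = np.gradient(B[1], dx, axis=0) - np.gradient(B[0], dx, axis=1)
    return np.stack([Jx, Jy, Jz])

def alfven_mach(rho, v, B):
    # M_A = sqrt(E_k/E_B) = sqrt(4*pi*rho) * |v| / |B|
    return np.sqrt(4.0 * np.pi * rho) * np.linalg.norm(v, axis=0) / np.linalg.norm(B, axis=0)

# Placeholder cubes standing in for a simulation snapshot
rng = np.random.default_rng(1)
rho = rng.lognormal(size=(32, 32, 32))
v = rng.normal(size=(3, 32, 32, 32))
B = rng.normal(size=(3, 32, 32, 32))

MA = alfven_mach(rho, v, B)
theta_Bv = alignment_angle(B, v)
theta_BJ = alignment_angle(B, current(B))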
Based on the Alfven Mach number and the behavior of the system, we divide the simulation into three regimes: * The magnetic regime (M_ A<0.3): the magnetic energy is far above the local kinetic energy; B and v do not stay aligned, yet B and J are aligned. * The transition regime (B-k transition, 0.3< M_ A<3): the magnetic energy and kinetic energy have similar densities; B and v stay aligned, and B and J evolve from aligned to not aligned as M_ A increases. * The kinetic regime: the kinetic energy is far above the local magnetic energy; B and v are not aligned, and B and J are not aligned. These regimes are plotted in Figs. <ref>, <ref> and <ref>, and are summarized in Fig. <ref>. Some additional slices through the simulation box can be found in Sec. <ref> §.§ Magnetic Regime The first regime we identify is the magnetic regime, where the magnetic energy is far above the local kinetic energy (M_ A< 0.3). This regime has two characteristic properties: the alignment between B and J, which points to the formation of a force-free field, and the lack of alignment between B and v. The strong alignment between B and J indicates that the field configuration is force-free, with other forces being dynamically unimportant. We also observe a lack of alignment between B and v, which challenges the common understanding of the magnetic field as a set of “wires” that guide the motion of the gas. In contrast, the motion of the gas does not appear to stay aligned with the orientation of the magnetic field lines. A strong magnetic field does not necessarily lead to motions that follow the field lines. From astronomical observations, we identify the magnetic regime in low-density regions around some molecular clouds. One such example is the existence of striations located at the outer part of the Taurus molecular cloud <cit.>. §.§ B-k Transition At 0.3<M_ A <3, where the magnetic energy and the kinetic energy are similar, we find a transition regime characterized by a breakdown of the alignment between B⃗ and J⃗, and a strong alignment between B⃗ and v⃗. The breakdown of the B⃗-J⃗ alignment results from a decrease in the importance of the magnetic field. However, the alignment between B⃗ and v⃗ deserves further discussion. We find that the B⃗-v⃗ alignment only occurs in the B-k transition regime, where the magnetic and kinetic energy have similar densities. This finding challenges the common understanding that the alignment between B⃗ and v⃗ indicates the dominance of the magnetic field. This alignment has also been found by <cit.>, where it results from a “rapid and robust relaxation process in turbulent flows”. Our findings support their conclusion. However, different from the claim that “the alignment of the velocity and magnetic field fluctuations occurs rapidly in magnetohydrodynamics for a variety of parameters”, in our case this alignment only occurs in the transition regime with moderate M_ A. In the B-k transition regime, B⃗ and J⃗ are still moderately aligned, with θ≈ 15^∘. From the reported values of the Alfven Mach numbers (or the mass-to-flux ratio) <cit.>, we believe that a large fraction of the existing observations are probing gas located in this transition regime. Examples include the envelope of the massive star-forming region Orion A <cit.>, and other star formation regions such as Taurus, L1551 and so on <cit.>. §.§ Kinetic Regime In the kinetic regime with high kinetic energy (high M_ A), B and v are not aligned, and B and J are not aligned. This lack of alignment is the result of the weak magnetic field.
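A short cell-selection sketch for these three regimes, assuming per-cell arrays of M_ A and of the two alignment angles as in the previous sketch (random placeholder arrays here), is:

import numpy as np

rng = np.random.default_rng(2)
MA = rng.lognormal(mean=0.0, sigma=1.0, size=100000)           # placeholder per-cell M_A
theta_Bv = rng.uniform(0.0, 90.0, size=MA.size)                # placeholder angles (deg)
theta_BJ = rng.uniform(0.0, 90.0, size=MA.size)

edges = [0.0, 0.3, 3.0, np.inf]                                # magnetic / transition / kinetic
labels = ["magnetic (M_A<0.3)", "transition (0.3<M_A<3)", "kinetic (M_A>3)"]
regime = np.digitize(MA, edges) - 1

for i, name in enumerate(labels):
    sel = regime == i
    print(f"{name}: {100 * sel.mean():.1f}% of cells, "
          f"<theta_Bv> = {theta_Bv[sel].mean():.1f} deg, <theta_BJ> = {theta_BJ[sel].mean():.1f} deg")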
In astronomical observations, this corresponds to dense, collapsing regions with strong turbulence, such as Cyg N44 in DR21 <cit.>. § CONCLUSIONS We investigate the effect of a magnetic field on supersonic turbulence. By evaluating the Alfven Mach number M_ A at different locations in a turbulence box and studying the alignment between the magnetic field B⃗, the velocity v⃗ and the current J⃗, we reveal the different behavior of the system under various conditions. These regimes include: * The magnetic regime (M_ A<0.3): the magnetic energy is far above the local kinetic energy; B and v do not stay aligned, yet B and J are aligned. The magnetic field is force-free. * The transition regime (B-k transition, 0.3< M_ A<3): the magnetic energy and kinetic energy have similar densities; B and v are aligned, and B and J evolve from aligned to not aligned as M_ A increases. We also observe alignment between the magnetic field and the gas motion. However, this alignment between B⃗ and v⃗ does not imply a strong magnetic field but rather a rapid relaxation process in turbulent flows. * The kinetic regime: the kinetic energy is far above the local magnetic energy; B and v are not aligned, and B and J are not aligned. They are summarized in <ref>. Since there is a correlation between the Alfven Mach number and the gas density (Fig. <ref>), the magnetic regime exists in the lower-density part and the kinetic regime in the higher-density part. The transition regime is an intermediate state between the two. Using observational data, we find cases that support the existence of these regimes. The results guide the interpretation of new observations. They break down the common understanding of the magnetic field as a rigid wire that guides gas motion and replace it with the complex behavior of the gas under different conditions. The alignment between B⃗ and J⃗ points to the dominance of the magnetic field, and the alignment between B⃗ and v⃗ is likely the result of a rapid self-organization process in turbulent flows. Some supporting cases are identified from observations of the interstellar medium. We reveal various regimes where the fluid behaves differently under different conditions. To our knowledge, this is the first time these different regimes are clearly outlined. The alignment between these quantities has been studied in a recent paper <cit.> where the authors reported strong alignments in the strongly magnetized regime and some scale-dependent alignment behavior. We are revealing a much clearer picture, with the Alfven Mach number as the only parameter dictating the behavior of the fluid and the alignments. § REMOVING THE PROJECTION EFFECT IN ANGLE DISTRIBUTIONS The probability density function p(θ) of the angle between two randomly oriented vectors in an n-dimensional space is: p(θ) = Γ(n/2)/Γ((n-1)/2) sin^n-2(θ)/√(π), where n is the number of dimensions and θ is the angle between the two random vectors. In our alignment analysis, we study the angle between quantities measured in 3D. When n=3, we have p(θ) = 1/2 sinθ, so that two randomly selected vectors have an alignment angle clustered around θ=45^∘, caused purely by this geometric projection effect. To remove the projection effect, we use the probability density function p(θ) to weight the angle distribution: I(θ_ corrected) = I(θ_ original) / p(θ), where I(θ_ original) represents the original angle distribution and I(θ_ corrected) represents the corrected angle distribution, with the projection effect removed.
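This correction can be applied directly to a histogram of measured angles; a minimal sketch (Python/numpy, with angles folded to [0°, 90°] and a synthetic random-vector test whose corrected histogram should come out approximately flat) is:

import numpy as np

def corrected_angle_histogram(theta_deg, nbins=18):
    # Histogram of 3D alignment angles on [0, 90] deg, divided by p(theta) ~ sin(theta)
    counts, edges = np.histogram(theta_deg, bins=nbins, range=(0.0, 90.0))
    centers = 0.5 * (edges[1:] + edges[:-1])
    p_theta = 0.5 * np.sin(np.radians(centers))
    return centers, counts / (p_theta * counts.sum())

# Sanity check with randomly oriented 3D vectors
rng = np.random.default_rng(3)
a = rng.normal(size=(3, 200000))
b = rng.normal(size=(3, 200000))
cos = np.abs(np.sum(a * b, axis=0)) / (np.linalg.norm(a, axis=0) * np.linalg.norm(b, axis=0))
theta = np.degrees(np.arccos(np.clip(cos, 0.0, 1.0)))
centers, I_corr = corrected_angle_histogram(theta)
print(np.round(I_corr / I_corr.mean(), 2))                     # close to one in every bin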
§ 2D SLICE OF MAGNETIC FIELD, VELOCITY AND CURRENT Fig. <ref> shows the distribution of the density ρ, velocity field v, magnetic field B, and current field J in the X-Y plane, i.e., a 2D slice through the 3D simulation box.
http://arxiv.org/abs/2409.02333v1
20240903232705
Admissible groups over number fields
[ "Deependra Singh" ]
math.NT
[ "math.NT", "math.RA", "11R32, 12F12, 16K20 (Primary) 16S35, 12E30, 16K50 (Secondary)" ]
§ ABSTRACT Given a field K, one may ask which finite groups are Galois groups of field extensions L/K such that L is a maximal subfield of a division algebra with center K. This connection between inverse Galois theory and division algebras was first explored by Schacher in the 1960s. In this manuscript we consider this problem when K is a number field. For the case when L/K is assumed to be tamely ramified, we give a complete classification of number fields for which every solvable Sylow-metacyclic group is admissible, extending J. Sonn's result for K = ℚ. For the case when L/K is allowed to be wildly ramified, we give a characterization of admissible groups over several classes of number fields, and partial results in other cases. § INTRODUCTION A central simple algebra over a field K is a finite dimensional associative K-algebra such that its center is K and it has no non-trivial two-sided ideals. It is called a central division algebra if every non-zero element is a unit (for example, the algebra of quaternions over ℝ). The dimension of a central division algebra as a K-vector space is always a square <cit.>, and the square root of this dimension is called its index. If D is a central K-division algebra of index n then a subfield L of D containing K is maximal among all such subfields if and only if its degree over K is n <cit.>. Such a subfield is called a maximal subfield of D. For example, the complex numbers are a maximal subfield of the division algebra of quaternions over ℝ. Given a field K, the classical inverse Galois problem asks whether or not every finite group appears as the Galois group of some Galois extension of K. With the terminology in the previous paragraph, one can also ask the following question. Which finite groups G are Galois groups of field extensions L/K such that L is a maximal subfield of a central division algebra over K? Such a group G is called admissible over K or K-admissible, and the field L is called K-adequate. This connection between inverse Galois theory and division algebras was first explored by Schacher <cit.>. In this paper we investigate this problem further, and prove results about admissible groups over number fields. See the next section for discussion concerning the motivation for this problem. Over number fields, most results have focused on tamely ramified adequate extensions and Sylow metacyclic subgroups <cit.>, <cit.> (Sylow-metacyclic groups are those whose Sylow subgroups are metacyclic). Our results concern both tamely and wildly ramified adequate extensions. For tamely ramified adequate extensions, we extend Sonn's result <cit.>, and characterize number fields over which every solvable Sylow-metacyclic group is tamely admissible (see Definition <ref> for the term “tamely admissible”). More precisely, we show the following result in Theorem <ref>: Let K be a number field. Then * A solvable Sylow-metacyclic group is tamely admissible over K if and only if each of its Sylow subgroups is tamely admissible over K. * Every 2-metacyclic group is tamely admissible over K if and only if K ∩{√(-1), √(2), √(-2)} = ∅. * Let p be any odd prime, and let α_p be a primitive element of the unique degree-p extension of ℚ contained in the field extension ℚ(ζ_p^2)/ℚ.
Then every p-metacyclic group is tamely admissible over K if and only if α_p ∉ K. While a -admissible group is necessarily Sylow-metacyclic (Theorem 4.1 of <cit.>), it is also known that for any given finite group G there is some number field K over which G is admissible (Theorem 9.1 of <cit.>). So a natural question is to understand how admissible groups behave as we go to higher degree number fields. As we will see, if the admissible group is not Sylow-metacyclic then any corresponding adequate extension must be wildly ramified. We investigate this phenomenon further, and the following theorem describes a key result for admissibility of p-groups in this context (see Theorem <ref>): Let K be a finite Galois extension of , and p be an odd rational prime such that ζ_p ∉ K, and p decomposes in K. Let G be a p-group. Then * If ζ_p ∉ K_p then G is K-admissible if and only if d(G) ≤ [K_:_p] + 1. * If ζ_p ∈ K_p then G is K-admissible if and only if G can be generated by [K_:_p] + 2 many generators x_1, x_2, …, x_n satisfying the relation x_1^p^s[x_1,x_2][x_3,x_4] … [x_n-1,x_n] = 1 where p^s is such that ζ_p^s∈ K_p but ζ_p^s+1∉ K_p. Here d(G) denotes the minimum number of generators of G. The admissibility problem in the general case is open even in the case of p-groups. The key challenge seems to be to handle the case when ζ_p ∈ K. But if we narrow our scope to special classes of number fields, more can be said. The following result characterizes the admissibility of odd p-groups over quadratic number fields (see Corollary <ref> for this assertion, and Definition <ref> for the term “decompose”): Let K be a quadratic number field, and G be an odd p-group for some rational prime p. Then G is K-admissible if and only if one of the following conditions holds: * prime p decomposes in K and d(G) ≤ 2, or, * prime p does not decompose in K and G is metacyclic. Analogous results for number fields of degree 3 or 4 over appear in Propositions <ref> and <ref>. We also discuss results over Galois number fields, number fields of degree 2^n over , number fields of odd degree over , and cyclotomic number fields. This paper is organized as follows. Section 1 provides additional motivation and context for the admissibility problem. In Section 2, we discuss admissibility of Sylow-metacyclic group over number fields, and characterize number fields for which every solvable Sylow-metacyclic subgroup is admissible, extending Sonn's result. Section 3 goes beyond Sylow-metacyclic groups and studies how degree of the number field influences the class of admissible groups. Finally, in Section 4, we specialize to special classes of number fields where we can make stronger statements, including Galois number fields, cyclotomic number fields, and number fields of degrees 2,3, and 4. We sometimes include extra hypotheses in stating results where doing so would make the statements simpler, and indicate how the results extend to more general situations. §.§ Acknowledgements The author would like to thank Professors David Harbater, Daniel Krashen, and Florian Pop for a number of very helpful conversations concerning material in this manuscript and related ideas. This paper is part of author’s Ph.D. thesis, completed under the supervision of Prof. David Harbater at the University of Pennsylvania. § BACKGROUND AND MOTIVATION The following observations provide motivation for studying the admissibility problem. (i) Cross product algebras provide an explicit way to work with central simple algebras over a field. 
More specifically, each Brauer class α∈Br(K) has a representative central simple algebra which is a G-cross product algebra over K for some finite group G, but a division algebra need not be a cross product algebra in general. On the other hand, essentially by definition, a finite group G is admissible over (a field) K if and only if there is a G-cross product division algebra over K. (ii) Let K be a field such that per(α) = ind(α) for every Brauer class α∈Br(K) (for example, a global field or a local field). Let L/K be a finite G-Galois extension with n = [L:K]. Then L is K-adequate if and only if the n-torsion abelian group H^2(G, L^⋆) has an element of order exactly equal to n (Proposition 2.1 of <cit.>). (iii) Let K be a field and let f(x) ∈ K[x] be an irreducible polynomial. One may ask whether there exists a (finite dimensional) central division algebra over K containing a root α of f(x). If per(α) = ind(α) for every α∈Br(K), then such a division algebra exists if and only if the Galois closure of K(α) is a K-adequate extension (follows from Proposition 2.2 of <cit.>). This was Schacher's motivation in the original paper to study the admissibility problem. In light of Question <ref>, the admissibility problem can be thought of as a non-commutative version of the inverse Galois problem. In particular, a K-admissible finite group first needs to be a Galois group over K. Thus if K has no non-trivial Galois groups (e.g., K is separably closed), then no non-trivial group is admissible over K. Similarly, if Br(K) = 0 then there are no non-trivial admissible groups over K since there are no non-trivial K-central division algebras. This is true in particular for C_1 fields (quasi-algebraically closed fields), which includes the following fields: * separably closed fields; * finite fields; * function field of a smooth curve over an algebraically closed field, e.g., (t); * a complete discretely valued field with an algebraically closed residue field, e.g., ((t)); * maximal unramified extension of a complete discretely valued field with a perfect residue field, e.g., _p^ur. Every finite group is known to be Galois over fields of type (iii) in the above list (by <cit.> in characteristic 0, and <cit.> in characteristic p>0). This shows that even if every finite group is Galois over a field K, there may not be any non-trivial groups admissible over K. On the other hand, if K is a local field, then every finite group which is Galois over K is also admissible over K. In fact, the following stronger statement is true. If K is a local field, then every finite Galois extension L/K is K-adequate. To see this, note that since period equals index for local fields, L is K-adequate if and only if H^2(G, L^⋆) has an element of order [L:K] (Proposition 2.1 of <cit.>). But H^2(G, L^⋆) is cyclic of order [L:K] for a local field K, and the result follows. Like the inverse Galois problem, the admissibility problem remains open in general. But unlike the inverse Galois problem, the groups that occur in this fashion are often quite restricted. For example, while every finite group is expected to be realized as a Galois group over , by Theorem 4.1 of <cit.> a -admissible group must be Sylow-metacyclic (a metacyclic group is an extension of a cyclic group by another cyclic group). On the other hand, every finite group is admissible over some number field (Theorem 9.1 of <cit.>). While the problem is open in general, including over , some results are known. 
Sonn <cit.> proved the admissibility of solvable Sylow metacyclic groups over . Many non-solvable groups with metacyclic Sylow subgroups have also been shown to be admissible over as well as over other classes of number fields, for example <cit.>, <cit.>, <cit.>, <cit.>. Since not every non-solvable Sylow metacyclic group is known to be even Galois over <cit.>, the problem of completely characterizing admissible groups over number fields remains out of reach at present. In <cit.>, groups that are admissible over function fields over certain complete discretely valued fields were characterized using patching techniques. Since the Brauer group is intimately related to division algebras over a field, it plays a key role in studying admissibility. Let K be a global field, and L/K be a G-Galois extension for some finite group G. For a place of K, let D_ denote the decomposition group corresponding to for the field extension L/K, and let L_ denote a completion of L with respect to an extension of the absolute value of K corresponding to . By Proposition 2.1 of <cit.>, L is K-adequate if and only if H^2(G, L^⋆) has an element of order exactly equal to [L:K]. Using this observation and the exact sequence 0 → H^2(G, L^*) →⊕_ H^2(D_, L_^*) →/→ 0 from class field theory, Schacher <cit.> obtained the following arithmetic criterion for the extension L/K to be K-adequate: [Schacher's Criterion] Let K be a global field, and L/K be a G-Galois extension for some finite group G. The field extension L/K is K-adequate if and only if for each rational prime p dividing the order of G, there are two distinct places _1,_2 of K such that the decomposition groups corresponding to these places in the field extension L/K contain a p-Sylow subgroup of G. According to Schacher's criterion, admissibility of G over K is equivalent to the existence of a G-Galois field extension of K (“inverse Galois problem”) with certain conditions on the decomposition groups of places of K (“local conditions”). This refinement of the inverse Galois problem with extra local conditions is a problem that is open in general, including for solvable groups. For example, while Shafarevich's construction shows that every solvable group can be realized as a Galois group over any number field, there is no known way to realize the given local extensions <cit.>. Grunwald-Wang theorem (Theorem 5 of Chapter 10 in <cit.>) was the first result of this kind for cyclic Galois extensions, and the most far reaching result of this kind is Neukirch's generalization of the Grunwald-Wang theorem to solvable groups of order coprime to roots of unity in the global field (Theorem 9.5.9 of <cit.>). We make extensive use of this result in addition to results on embedding problems. Observe that if the group G is a p-group for some rational prime p then in Schacher's criterion above the decomposition groups corresponding to places _1, _2 need to be the whole group G. In this sense, the structure of the Galois group of the maximal p-extension of a local field yields important insights into the admissibility problem. § SYLOW METACYCLIC GROUPS We start with two definitions. Let G be a group. We say G is a metacyclic group if it is an extension of a cyclic group by another cyclic group. I.e., there is an exact sequence of groups: 1 →/n→ G →/m→ 1 for some integers m,n > 1. A Sylow-metacyclic group is a group such that all of its Sylow subgroups are metacyclic. 
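To make the definition concrete, the short pure-Python sketch below models a metacyclic group explicitly as a semidirect product Z/e ⋊ Z/f with the generator of Z/f acting by x ↦ x^q, and verifies the defining relations for the dihedral group of order 8 (e = 4, f = 2, q = 3); the pair encoding of elements is only an illustration and is not notation used in this paper.

from itertools import product

def metacyclic(e, f, q):
    # Z/e semidirect Z/f with action a -> q*a; requires q^f = 1 (mod e)
    assert pow(q, f, e) == 1
    elems = list(product(range(e), range(f)))                  # (a, b) stands for x^a * y^b
    def mul(g, h):
        (a1, b1), (a2, b2) = g, h
        # y^b1 * x^a2 = x^(q^b1 * a2) * y^b1
        return ((a1 + pow(q, b1, e) * a2) % e, (b1 + b2) % f)
    return elems, mul

elems, mul = metacyclic(4, 2, 3)                               # dihedral group of order 8
x, y, one = (1, 0), (0, 1), (0, 0)

def power(g, n):
    r = one
    for _ in range(n):
        r = mul(r, g)
    return r

assert len(elems) == 8
assert power(x, 4) == one and power(y, 2) == one
assert mul(mul(y, x), y) == power(x, 3)                        # y x y^{-1} = x^3 (here y = y^{-1})
assert mul(x, y) != mul(y, x)                                  # the group is non-abelian

The same encoding works for any pair of cyclic groups and any action with q^f ≡ 1 (mod e), which is the construction used in the lemma on semidirect products below.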
Schacher observed that if K is a number field to which the p-adic valuation on extends uniquely, then the p-Sylow subgroups of any K-admissible group are necessarily metacyclic (see Theorem 10.2 of <cit.>). This follows immediately from Schacher's criterion noted above. In particular, this is true for the field of rational numbers , and so a -admissible group must be Sylow-metacyclic. In the converse direction, Sonn <cit.> proved that every solvable Sylow-metacyclic group is admissible over . As noted before, there are examples of non-solvable Sylow metacyclic groups that are not even known to be Galois over , so the converse direction remains open for non-solvable groups. With this background, one may ask whether every Sylow-metacyclic group is admissible over every number field. This is known to be false. For example, the dihedral group of order 8 is not admissible over (i) (see <cit.>). So we can refine our question and ask: Can we classify the number fields K for which every solvable Sylow-metacyclic group is K-admissible? In this direction, Liedahl proved a necessary and sufficient criterion for a given odd metacyclic p-group to be admissible over a given number field (Theorem 30 of <cit.>). Before we state this result, we set some notation and terminology. For a number field K/, we say that a rational prime p decomposes in K if the p-adic valuation on extends to at least two inequivalent valuations on K. We say that a group G is tamely admissible over K if there exists an adequate G-Galois extension L/K that is tamely ramified over K. Let ζ_e ∈ be a primitive e-th root of unity for some integer e ≥ 1. For an integer q coprime to e, let σ_e,q denote the field automorphism of (ζ_e)/ determined by σ_e,q(ζ_e) =ζ_e^q. Let K be a number field and let G be a metacyclic p-group for some prime number p. We say that G admits a Liedahl presentation for K if G admits a presentation ⟨ x,y | x^e = 1, y^f = x^i, yxy^-1 = y^q ⟩ such that (ζ_e) ∩ K ⊆(ζ_e) is contained in the fixed field of σ_e,q. K(ζ_e) [dl, dash] [dr, dash] (ζ_e) [dr, dash] K [dl, dash] (ζ_e) ∩ K[d, dash] With this notation, we can restate Liedahl's result as follows. Let K be a number field, and let G be an odd metacyclic p-group for some prime number p. Then G is admissible over K if and only if at least one of the following holds: * p decomposes in K. * G has a Liedahl presentation for K. This criterion was later extended by Neftin to solvable Sylow metacyclic groups under the assumption that the adequate extension is tamely ramified (Theorem 1.3 of <cit.>). With our terminology, we can restate Neftin's result as follows. Let K be a number field and G be a solvable group. Then G is tamely admissible over K if and only if for each prime p dividing the order of the group G, the p-Sylow subgroups of G admit a Liedahl presentation for K. Note that this result has an extra hypothesis of tame admissibility. Building on their work, we classify number fields for which all solvable Sylow-metacyclic groups are admissible. This generalizes Sonn's result that is such a number field, and provides a complete answer to Question <ref>. This is the main result of this section, and is the content of Theorem <ref>. We first need some auxiliary lemmas. The following lemma is a well-known result and follows from a group theory argument. We include here a proof for completeness. Every metacyclic group G is a quotient of a semidirect product G' of two cyclic groups. 
Moreover, if G is a p-group for some prime number p, then G' can be chosen to be a p-group. Let G be a metacyclic group with presentation ⟨ x,y | x^e = 1, y^f =x^i, yxy^-1 = x^q ⟩. Let r be the order of y in G. Since x = y^rxy^-r = x^q^r, we have q^r ≡ 1 ( e). This allows us to define the semidirect product G' = /e⋊/r with presentation ⟨x̃,ỹ|x̃^e = 1, ỹ^r = 1, ỹx̃ỹ^-1 = x̃^q ⟩. Mapping x̃→ x, ỹ→ y defines a surjective group homomorphism G' ↠ G. It is clear from the construction that if G is a p-group for some rational prime p then so is G'. Let k be a non-archimedean local field, and k'/k be a tamely ramified G-Galois extension. Let e be the ramification index and f be the residue degree of this extension, and let q be the number of elements in the residue field of k. Then G has a presentation <x,y | x^e = 1, y^f = x^i, yxy^-1 = x^q> for an appropriate integer i. In particular, G is metacyclic. The following result is a consequence of Proposition 2.2 of <cit.>. Let K be a field such that (α) = (α) for every α∈(K) (e.g., a global field). If G is admissible (resp., tamely admissible) over K and N ⊴ G is a normal subgroup then G/N is admissible (resp., tamely admissible) over K. The following lemma shows that the presence of roots of unity constrains the tamely ramified admissible groups to be “more abelian”. Let k be a non-archimedean local field and l a prime different from the residue characteristic of k. If ζ_l^n∈ k for some n ≥ 0, then any Galois l-extension of degree dividing l^n+1 is necessarily abelian. Let k' | k be a G-Galois extension of degree d such that d | l^n+1. Let e be the ramification index and f be the residue degree of this extension. Since the residue characteristic of k is different from l, the extension k' | k is tamely ramified. If e = l^n+1 then the extension k' | k is totally and tamely ramified, and therefore it is a cyclic extension (Corollary 1 to Proposition 4.1 in <cit.>). So without loss of generality we can assume that e | l^n. Let m | k be the maximal unramified extension inside k' | k. k' [d, dash, "e"] m [d, dash, "f"] k Then k' | m and m | k are cyclic Galois extension of degrees e and f respectively. By Galois theory, there is an exact sequence of groups: 1 →/e→ G →/f→ 1. Here G is the Galois group of k' | k. Moreover, G has a presentation (see Lemma <ref>): <x,y | x^e = 1, y^f = x^i, yxy^-1 = x^q> where q is the number of elements in the residue field of k. Since char(k) ≠ l and ζ_l^n∈ k, we must have ζ_l^n∈_q (the residue field). Therefore the order of subgroup ⟨ζ_l^n⟩⊆_q^* divides q-1, the order of _q^*. Since e divides l^n, it follows that e divides q-1. Therefore x^q = x in G, and thus yxy^-1 = x in the above presentation. This implies that G is abelian. Note that n needs to be at least 2 for the above lemma to say something non-trivial since a group of order l or l^2 is necessarily abelian. Let K be an number field and p be a prime number such that ζ_p^n∈ K and p does not decompose in K. Let G be a finite group such that its p-Sylow subgroup is non-abelian of order ≤ p^n+1. Then G is not admissible over K. If G were admissible over K, then by Schacher's Criterion (see Criterion <ref>) there will be two distinct places _1, _2 of K such that K__1 and K__2 admit Galois extensions with Galois groups containing a p-Sylow subgroup of G. Since p ∈ does not decompose in K, one of these two places must have residue characteristic different from p. Without loss of generality, assume that _1 is that place, and let k = K__1. 
Let k' | k be a Galois extension of local fields such that the Galois group contains a p-Sylow subgroup H of G. Let m be the fixed field of H in this extension, and so k' | m is a H-Galois extension. k' [d, dash, "H"] m [d, dash] k Since ζ_p^n∈ K ⊂ k ⊆ m, and residue characteristic of m is different from p, this contradicts Lemma <ref> since H is non-abelian. Let p be a rational prime number. Then the unique non-abelian group /p^2⋊/p is not admissible over (ζ_p^n) for n ≥ 2. Since p does not decompose in (ζ_p^n) and the p-Sylow subgroup is the whole group, the result follows from the previous corollary. This generalizes the observation in <cit.> that the dihedral group D_8 of order 8 is not admissible over Q(i). For example, the unique non-abelian group /9 ⋊/3 is not admissible over (ζ_9). In fact, there is another field strictly contained in (ζ_p^2) over which the non-abelian group /p^2⋊/p is not admissible. This is shown in Lemma <ref> and requires the following lemma as an ingredient. Let k'/k be a finite G-Galois extension of non-archimedean local fields where G is a p-group for a prime number p different from the residue characteristic of k. If the ramification index is p then G is abelian. Let m be the maximal unramified extension of k contained in k'. By Lemma <ref>, G sits in an exact sequence 1 →Gal(k'/m)=/p→ G →Gal(m/k)=/p^a→ 1, and has a presentation ⟨ x, y | x^p = 1, y^p^a = x^i, yxy^-1=x^q ⟩ for some appropriate i, and q is the number of elements in the residue field of k. Here x generates the inertia group, and y is a lift of the Frobenius automorphism. Since q and p are coprime, we have y^p-1xy^p-1 = x^q^p-1 = x, i.e., y^p-1 and x commute with each other. But y' = y^p-1 is another lift of the Frobenius automorphism. So x and y' generate G, and therefore G is abelian. For any rational prime number l, the non-abelian semi-direct product /l^2 ⋊/l is not admissible over the unique degree l number field K inside (ζ_l^2). If G = /l^2 ⋊/l were admissible over K, then by Schacher's Criterion (see Criterion <ref>) there would exist a G-Galois extension L/K such that over two places of K, the decomposition group would be the whole group G. We show that this is not possible. Since l totally ramifies in K, one of these places must have residue characteristic different from l. Let be that place of K, and p be its residue characteristic. Let k = K_, and k'/k be a G-Galois extension. Note that since p ≠ l, the rational prime p is unramified in K/. This is because K ⊂(ζ_l^2) and the only prime that ramifies in (ζ_l^2)/ is l. Since G is non-abelian, must be non-archimedean. Since the residue characteristic of is different from l, k'/k is a tamely ramified extension. Since G is non-abelian, the ramification index cannot be l by Lemma <ref>. As a result, the only possibility for the ramification index is l^2. Let m be the maximal unramified extension inside k'/k, and so k'/m is totally and tamely ramified extension of degree l^2. Therefore ζ_l^2∈ m. If q is the number of elements in the residue field of k, then q^l ≡ 1 ( l^2). k' [d, dash, "l^2"] m [d, dash, "l"] k [d, dash, "f"] _p We now look at the splitting behavior of rational prime p ≠ l in the extension K/. Since K/ is an abelian extension, this is determined by class field theory. First consider the case when prime p splits in K. This happens if and only if the order f of p in (/l^2)^* divides (l-1). If p ≡ 1 ( l^2), then k already has ζ_l^2, and so by Lemma <ref>, this is not possible. 
Otherwise, p^l ≢1 ( l^2) and since p = q, we get ζ_l^2∉ m. So this case is not possible either. Finally, consider the case when prime p stays inert in K. In this case q = p^l (since [K:] = l), and so we have q^l = p^l^2≡ 1 ( l^2). But p^l(l-1)≡ 1 ( l^2), and therefore p^l ≡ 1 ( l^2). But this means that q ≡ 1 ( l^2), and hence ζ_l^2∈ k. But this contradicts Lemma <ref>. The following lemma isolates a useful result whose main ideas are contained in the proof of Theorem 27 and 28 of <cit.>. Let K be a number field and G be a metacyclic p-group for some prime number p. Then the following are equivalent. * G is tamely admissible over K. * There are infinitely many rational primes l such that l splits completely in K and _l admits a G-Galois extension. * There is a non-archimedean place v of K with residue characteristic different from p such that the completion K_v admits a G-Galois extension. (i)(ii). Since G is a solvable Sylow-metacyclic group and it is tamely admissible over K, the hypotheses of Theorem 1.3 of <cit.> (or see its reformulation as Theorem <ref> above) are satisfied. Therefore G has a Liedahl presentation for K (see Definition <ref> for a definition of Liedahl presentation). It follows from the proof of Theorem 27 of <cit.> that if G has a Liedahl presentation for K then there are infinitely many rational primes l that completely split in K, and have the property that _l admits a G-Galois extension. (ii)(iii) is clear. (iii)(i). Note that G is a solvable Sylow-metacyclic group. If we can show that G has a Liedahl presentation for K, then the tame admissibility of G over K will follow from Theorem 1.3 of <cit.>. We now follow the proof of Theorem 28 in <cit.> to argue that G has a Liedahl presentation for K. Let k = K_v, and let k'/k be a G-Galois extension given in the hypothesis. Since the residue characteristic of k is different from p, and G is a p-group, it follows that k'/k is tamely ramified. Therefore, by Lemma <ref>, G has a presentation <x,y | x^e = 1, y^f = x^i, yxy^-1 = x^q>, where e is the ramification index, f is the residue degree, and q is the number of elements in the residue field of k. Since G is a p-group, both e and f are powers of p. Let ζ_e be a primitive e-th root of unity. In order to show that the presentation above is a Liedahl presentation of G for K, we need to argue that ζ_e ↦ζ_e^q fixes K ∩(ζ_e). By Galois theory, that is the same thing as ζ_e ↦ζ_e^q being an automorphism of K(ζ_e)/K. Since e is coprime to the residue characteristic of k, the extension k(ζ_e)/k is unramified. Therefore ζ_e ↦ζ_e^q is an automorphism of k(ζ_e)/k. Restricting k to K shows that ζ_e ↦ζ_e^q is an automorphism of K(ζ_e)/K. We are now in a position to classify the number fields for which every metacyclic p- group is tamely admissible. Starting with odd primes, we have the following Let K be a number field, and p be an odd rational prime. The following are equivalent: * Every metacyclic p-group is tamely admissible over K. * The (unique) non-abelian group /p^2⋊/p is tamely admissible over K. * Let (α) be the unique degree p subfield of (ζ_p^2) for some primitive element α. Then α∉ K (equivalently, K ∩(ζ_p^2) ⊆(ζ_p)). (i)(ii) is clear since G = /p^2⋊/p is metacyclic. (ii)(iii). Let α be a primitive element as in the assertion (iii). For the sake of contradiction, assume that α∈ K. By Lemma <ref>, there exists a rational prime l that splits completely in K, such that the local field _l admits G = /p^2⋊/p as a Galois group. 
Since (α) ⊂ K, the prime l must split in (α) as well, and therefore by Lemma <ref> (iii), G is admissible over (α). This contradicts Lemma <ref>. Therefore, α∉ K. (iii)(i). Assume that α∉ K for a primitive element as in (iii). By Lemma <ref> and <ref>, it suffices to show that every semidirect product of cyclic p-groups is admissible over K. So let G = /e⋊/f be a semi-direct product of cyclic p-groups with a presentation ⟨ x,y | x^e = 1, y^f =1, yxy^-1 = x^q ⟩, corresponding to a group action φ φ : /f→Aut(/e). Since Aut(/e) ≅Gal((ζ_e)/), the action can also be thought of as φ̃ : /f→Gal((ζ_e)/), where the generator of the group /f maps to the automorphism ζ_e ↦ζ_e^q. Let H = im(φ̃), and consider the following diagram of field extensions. K(ζ_e) [dl, dash] [dr, dash] (ζ_e) [d, dash] [dr, dash] K [dl, dash] M =(ζ_e)^H [dr, dash] L = (ζ_e) ∩ K[d, dash] If we can show that the above presentation for G is a Liedahl presentation for K, then Theorem 1.3 of <cit.> (or its reformulation as in Theorem <ref>) shows that G is tamely admissible over K. To show that the above presentation (4.1) is a Liedahl presentation, we need to argue that L is fixed by the automorphism σ_e,q = ζ_e ↦ζ_e^q ∈Aut((ζ_e)/). The automorphism σ_e,q is a generator of H, and the fixed field of σ_e,q is M. Thus it suffices to show that L ⊆ M. We do this using basic Galois theory. The extension (ζ_e)/ is an extension of degree (p-1)p^i for some i ∈∪{0}. Since H is a p-group (being the image of a p-group), its fixed field M must have degree (p-1)p^j over (for some j ≤ i). On the other hand, since (ζ_e)/ is a cyclic extension and the extension (α)/ is of degree p, (α) ⊆ L if and only if p | [L:]. Since we are assuming that α∉ L, we must have that p ∤ [L:]. Equivalently, [L:] | (p-1). This means [L:] | [M:]. Once again, since (ζ_e)/ is a cyclic extension, this implies L ⊆ M and we are done. For the even prime 2, the situation is a bit more involved, and we need to consider degree two extensions in (ζ_8). To formulate the precise result, we recall some notation. Let Q_16 be the generalized quaternion group of order 16 with presentation ⟨ x,y | x^8 = 1, x^4 = y^2, yxy^-1 = x^7⟩. Let SD_16 be the semi-dihedral group of order 16 with presentation ⟨ x,y | x^8 = 1 = y^2, yxy^-1 = x^3 ⟩. We also need the following two lemmas. Let K be a number field such that K ∩{√(-1), √(-2)}≠∅. Then Q_16 is not tamely admissible over K. Moreover, if the 2-adic valuation on extends uniquely to K, then Q_16 is not admissible over K (either tamely or wildly). Using Schacher's Criterion <ref>, it suffices to show that there are no places of K with residue characteristic different from 2 such that the completion k = K_ admits Q_16 as a Galois group. Let k be such a completion, and let _q be the residue field of k. If √(-1)∈ K then q ≡ 1 (mod 4). In this case, Theorem 3.1 of <cit.> says that k cannot admit Q_16 as a Galois group. If √(-2)∈ K then q ≡ 1 or 3 (mod 8). Once again, Theorem 3.1 of <cit.> asserts that k does not have a Q_16-Galois extension. Therefore, in both cases, k does not have a Q_16-Galois extension. That is what we needed to conclude the proof. Let K be a number field with √(2)∈ K. Then SD_16 is not tamely admissible over K. Moreover, if the 2-adic valuation on extends uniquely to K, then SD_16 is not admissible over K (either tamely or wildly). Using Schacher's Criterion <ref>, it suffices to show both cases there are no places of K with residue characteristic different from 2 such that k = K_ admits G = SD_16 as a Galois group. 
Suppose there were a G-Galois extension l/k with ramification index e and residue index f. Of course e ≠ 1, 16 since G is not a cyclic group. By Lemma <ref>, e cannot be 2 either since G is non-abelian. The group G has a unique cyclic normal subgroup of order 4, but the quotient group is the Klein four-group, which is not cyclic. Thus e = 4 is not possible either. Therefore we must have e = 8 and f = 2. Let _q be the residue field of k = K_. Since √(2)∈ K, q ≡± 1 mod 8. If q ≡ 1 mod 8 then ζ_8 ∈ k, which contradicts Lemma <ref>. Therefore we must have q ≡ -1 mod 8. In this case, by Lemma <ref>, G has a presentation ⟨ a,b | a^8 = 1, b^2 = a^i, bab^-1 = a^q⟩ for some integer i. Since q ≡ -1 mod 8, and a has order 8, this is the same thing as ⟨ a,b | a^8 = 1, b^2 = a^i, bab^-1 = a^-1⟩. But a quick check shows that G has no two elements g,h such that g^8 = 1, and hgh^-1 = g^-1. For a number field K, the following are equivalent: * Every metacyclic 2-group is tamely admissible over K. * The groups Q_16 and SD_16 are tamely admissible over K. * K ∩{√(-1), √(2), √(-2)} = ∅ (equivalently, K ∩(ζ_8) =). (i)(ii). This follows because both Q_16 and SD_16 are metacyclic. Both of them have a cyclic normal subgroup of order 8 and the quotient by that subgroup is the cyclic group of order 2 (In fact, SD_16 is a semidirect product /8⋊/2). (ii)(iii). By Lemma <ref>, the number field K cannot contain either √(-1) or √(-2) since Q_16 is tamely admissible over K. Similarly, by Lemma <ref>, K cannot contain √(2) since SD_16 is tamely admissible over K. (iii)(i). The argument for this implication proceeds the same way as in the implication (iii) (i) in Proposition <ref>. So let K,M,L as in that proof, and e is a power of 2. If L ≠ then L ∩{√(-1), √(2), √(-2)}≠∅ since (i), (√(2)), and (√(-2)) are the only degree 2 extensions inside (ζ_e) (here, e is a power of 2). But K ∩{i, √(2), √(-2)} = ∅ by hypothesis. Therefore L ∩{i, √(2), √(-2)} = ∅, and we get L = ⊆ M. By combining Proposition <ref> and Proposition <ref> we get the main result of this section as the following theorem. Note that part (i) of the theorem reduces the admissibility of a general solvable Sylow-metacyclic group to p-groups, and those cases are handled by Proposition <ref> and Proposition <ref>. Let K be a number field. Then * A solvable Sylow-metacyclic group is tamely admissible over K if and only if all of its Sylow subgroups are tamely admissible over K. * Every 2-metacyclic group is tamely admissible over K if and only if K ∩{√(-1), √(2), √(-2)} = ∅. * Let p be any odd prime, and let α_p be a primitive element of the unique degree p-extension over contained in the field extension (ζ_p^2)/.. Then every p-metacyclic group is tamely admissible over K if and only if α_p ∉ K. Part (i) follows from Theorem 1.3 of <cit.>. Part (ii) is Proposition <ref>, and part (iii) is Proposition <ref>. We now describe some applications of this theorem. Let K be a number field and G be a metacyclic p-group for some prime number p. If p is tamely ramified in K/, then G is tamely admissible over K. If p is an odd prime then it is wildly ramified in (α_p)/, where α_p is as defined in Theorem <ref>. Since p is assumed to be tamely ramified in K/, it follows that α_p ∉ K. A similar argument shows that {i, √(2), √(-2)}∩ K = ∅ if p = 2. So the conclusion follows from Theorem <ref>. Let K be a number field, and let G be a solvable Sylow-metacyclic group with the property that for every prime number p that divides |G|, the prime p is tamely ramified in K/. 
Then G is tamely admissible over K. By the previous corollary each p-Sylow subgroup of G is tamely admissible over K. The result then follows from Part (i) of Theorem <ref>. We note a characterization of admissible solvable Sylow-metacyclic groups in certain cases (with no restriction on the adequate extension being tamely ramified) as a corollary of Theorem <ref>: Let K be a number field, and let G be a solvable Sylow-metacyclic group with the property that if p divides |G| then p is tamely ramified in K/ and the p-adic valuation on extends uniquely to K. Then G is K-admissible if and only if G is Sylow-metacyclic. Let G be a K-admissible group. Let p be a rational prime that divides |G|. Since the p-adic valuation extends uniquely to K, the p-Sylow subgroup of G must be metacyclic by Theorem 10.2 of <cit.>. The converse direction follows from Theorem <ref>. Let K be an abelian number field with square free conductor. Then every solvable Sylow-metacyclic group is admissible over K. With the hypothesis on the conductor, we have K ⊆(ζ_m) for a square free integer m. Therefore K ∩(ζ_p^2) = K ∩(ζ_p) for every odd prime p, and K ∩(ζ_8) =. So the hypotheses of part (ii) and (iii) of Theorem <ref> are satisfied, and the result follows. Let K = (ζ_m) be a cyclotomic field. Then every solvable Sylow-metacyclic group is tamely admissible over K if and only if m is square free. If m is square free then then by the previous corollary every solvable Sylow-metacyclic group is tamely admissible over (ζ_m). On the other hand, if p^2 divides m for some prime p then (α_p) ⊂(ζ_p^2) ⊆ K = (ζ_m) where α_p is as in Theorem <ref>. By Theorem <ref>, not every solvable Sylow-metacyclic group is tamely admissible over K. § RESULTS OVER GENERAL NUMBER FIELDS In the previous section, we largely focused on the case of adequate extension being tamely ramified. Over , this is not really a restriction, at least for solvable groups. This is because Sonn's proof (<cit.>) shows that every -admissible solvable group is in fact tamely admissible. But for a general number field this may not be true. We start with the following result which follows immediately from Schacher's criterion (Criterion <ref>): Let K be a global field and L/K be a G-Galois adequate extension which is tamely ramified. Then G must be Sylow-metacyclic. Let p be a prime number that divides the order of G. By Schacher's criterion, there is a place v of K such that the the Galois group of the local extension L_v/K_v (i.e., the decomposition group) contains a p-Sylow subgroup G_p of G. By taking the fixed field of G_p, we get a G_p-Galois extension L_v/L_v^G_p. By the hypothesis, this is a tamely ramified extension. But the Galois group of a tamely ramified finite field extension of a local field is metacyclic (see Lemma <ref>). Therefore H is metacyclic. Since this is true for every prime p that divides the order of the group G, it follows that G is a Sylow-metacyclic group. In this section, we allow our adequate extensions to be wildly ramified, in order to go beyond Sylow-metacyclic groups. Admissible groups are often characterized in terms of generators of their p-Sylow subgroups. To that end, if G is a p-group, let d(G) denote the minimum number of generators of G. Since G is a p-group, every minimal generating set of G has d(G) elements by the Burnside basis theorem. Here, “minimal” is with respect to the partial ordering on sets given by inclusion. For a number field K/, let Σ_K denote the set of places (inequivalent valuations) of K. 
If the extension K/ is a Galois field extension then let e_p denote the ramification degree and f_p denote the residue degree of a prime p. Let K be a number field. Let p be an odd rational prime that decomposes in K and such that p = _1^e_1_2^e_2…_m^e_m in K with [K__1:_p] ≥ [K__2:_p] ≥…≥ [K__m:_p]. If ζ_p ∉ K__i for i = 1, …, m then a p-group G is K-admissible if and only if d(G) ≤ [K__2:_p] + 1. Suppose that G is a K-admissible p-group, and L/K is a K-adequate G-Galois extension. If G = {1} then the conclusion is true, so assume that |G| > 2 since p is an odd prime. By Schacher's criterion, there are at least two places of K such that the decomposition group at these places is the whole group G. Since |G| > 2, these places are necessarily non-archimedean. Let k_1, k_2 be the completion of K at any two such places, and let l_1/k_1 and l_2/k_2 be the local Galois extensions coming from the global extension L/K. Note that the valuation corresponding to k_1 (and k_2) might have more than one prolongation to L, but the completion of L over each of those prolongations will be isomorphic to l_1/k_1 (and l_2/k_2, respectively) since L/K is a Galois extension. If one of these local fields, say k_1, has residue characteristic different from p then the extension l_1/k_1 is tamely ramified, and therefore G is a metacyclic group. In particular, d(G) ≤ 2, and so the conclusion is true. If both k_1 and k_2 have residue characteristic equal to p, then they are one of the fields K__i for i = 1, …, m. Without loss of generality assume that [k_1:_p] ≥ [k_2:_p]. Since ζ_p ∉ k_2, By a result of Shafarevich (Theorem 3 in II.5, <cit.>), the absolute Galois group of the maximal p-extension of k_2 is a free prop-p group on [k_2:_p] + 1 generators. Since G is a quotient of such a free pro-p group, d(G) ≤ [k_2:_p] + 1 ≤ [K__2:_p] + 1. This proves one direction of the theorem. For the other direction, let G be a p-group with d(G) ≤ [K__2:_p] + 1. Let k_1 = K__1 and k_2 = K__2. Once again, by Theorem 3 in II.5 of <cit.>, there exist G-Galois extensions l_1/k_1, l_2/k_2 over the local fields k_1, k_2. Since the local field k_1 does not have a primitive p-th root of unity, neither does the global field K. Therefore the hypothesis of Neukirch's generalization of Grunwald-Wang theorem (Corollary 2 in <cit.>) are satisfied, and so there exists a G-Galois global extension L/K that has l_1/k_1, l_2/k_2 as completions. By Schacher's criterion, L/K is a K-adequate extension, and therefore G is K-admissible. Let K be a number field. Let p be an odd rational prime that decomposes in K and such that p = _1^e_1_2^e_2…_m^e_m in K. The local fields K__i for i = 1, …, m do not contain a primitive p-th root of unity in each of the following situations, and therefore the conclusion of Theorem <ref> is valid: * The prime p is unramified in K/; * (p-1) ∤ [K__i:_p] for i = 1, …, m; * K/ is Galois, and (p-1) ∤ [K:]. Since p is odd, the extension _p(ζ_p)/_p of local fields is totally ramified. Therefore, if the prime p is unramified in K/ then ζ_p ∉ K__i. This proves part (i). The fact that the extension _p(ζ_p)/_p has degree p-1 shows part (ii) and part (iii) as well. It follows that for a number field K, away from a set of finitely many primes (for example, the set of ramified primes), K-admissible p-groups are completely determined by Theorem <ref>. Moreover, a group G for which each prime p dividing the order of G satisfies the conditions in Theorem <ref> is K-admissible. 
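The numerical content of the theorem above is simple enough to state as a computation: given the local degrees [K_℘_i:_p] of the primes above p, sorted in decreasing order, and the minimal number of generators d(G) of the p-group G, one compares d(G) against the second-largest local degree plus one. The following Python sketch only illustrates that bookkeeping; the function name and example degrees are ours, not part of the source, and it presupposes that the hypothesis ζ_p ∉ K_℘_i has been verified separately.

def admissible_p_group(local_degrees, d_G):
    """Criterion of the theorem above, for an odd prime p that decomposes in K:
    a p-group G is K-admissible iff d(G) <= [K_{p_2}:Q_p] + 1, where the bound
    uses the second-largest local degree among the primes above p.
    Assumes zeta_p lies in none of the completions (checked separately)."""
    if len(local_degrees) < 2:
        raise ValueError("p must decompose in K (at least two primes above p)")
    second_largest = sorted(local_degrees, reverse=True)[1]
    return d_G <= second_largest + 1

# Toy example: p splits in a quartic field with local degrees [2, 1, 1].
# The bound is 1 + 1 = 2, so the (at most) 2-generated p-groups -- e.g. the
# metacyclic ones -- pass, while the elementary abelian (Z/p)^3 with d = 3 fails.
print(admissible_p_group([2, 1, 1], d_G=2))  # True
print(admissible_p_group([2, 1, 1], d_G=3))  # False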
For example, in case of Galois number fields, we get the following Proposition <ref>. For a finite group G, let G_p denote a p-Sylow subgroup of G (all such subgroups are conjugates of each other and hence isomorphic). For a number field K that is Galois over , let K_p denote the completion of K at a valuation extending the p-adic valuation (all such completions are isomorphic over _p since K is assumed to be Galois over ). Let K be a Galois number field. Let G be an odd order group such that for each p dividing |G|, * p decomposes in K. * Either p is unramified in K, or (p-1) ∤ [K:]. * d(G_p) ≤ [K_p : _p]+1 Then G is K-admissible. Let G be such a group. Since G has odd order, it is a solvable group. Let p be a rational prime such that p | |G|. Since p decomposes in K, there are at least two inequivalent prolongation (extension) of the p-adic valuation to K. Let k_1, k_2 be the completition of K at any of these two inequivalent extensions. If p is unramified in K or (p-1) ∤ [K:] then as in Theorem <ref>, k_1,k_2 do not contain a primitive p-th root of unity. Therefore, by Theorem 3 in II.5, <cit.>, the absolute Galois group of the maximal p-extension of k_i, i = 1,2 is a free pro-p group on [K_p:_p] + 1 generators. Since d(G_p) ≤ [K_p:_p] + 1, there exist G_p-Galois extensions of local fields l_1/k_1 and l_2/k_2. Also note that ζ_p ∉ since ζ_p ∉ k_1. Similarly, for each p | |G|, we can get these G_p-Galois local extension over two distinct completions of K. Since G_p ↪ G and ζ_p ∉ K, the hypotheses of Corollary 3 of <cit.> are satisfied. Thus there is a G-Galois global extension L/K that realizes these G_p-Galois extensions of local fields at the completions. Therefore, Schacher's criterion implies that L/K is a K-adequate extension, and thus G is K-admissible. The above theorem provides sufficient conditions for a group to be K-admissible. Unlike the case of rational numbers, the question of necessary conditions remains open for general number fields K, once we go beyond p-groups and allow wildly ramified adequate extensions. But in some special cases the above conditions are also necessary. For example, in the case of nilpotent groups we can say more due to the following lemma which follows from taking the tensor products of appropriate division algebras: A nilpotent group G is admissible over a global field if and only if all of its Sylow subgroups are. This leads to the following result. Let K be a finite Galois extension of , and G be an odd nilpotent group with |G| coprime to the discriminant of K. Then G is admissible over K if and only if for each p | |G| one of the following two conditions holds: * prime p decomposes in K and d(G_p) ≤ f_p + 1, or, * prime p does not decompose in K and G_p is metacyclic. For a general nilpotent group, Lemma <ref> reduces it to the case of p-groups. So assume that G is a p-group for some odd prime number p. The prime p is unramified in K by the hypothesis that |G| is coprime to the discriminant of K. If p decomposes in K then by Theorem <ref>, G is K-admissible if and only if d(G) ≤ f_p + 1. If p does not decompose in K, then by Theorem 10.2 of <cit.>, G is metacyclic. Conversely, by Theorem <ref>, a metacyclic p-group is admissible over K. This proves the corollary for an odd p-group G. For a given number field K, the above results potentially leave out a finite set of primes for the admissibility of p-groups. If such a prime p does not decompose in K then any admissible p-group is necessarily metacyclic. 
In that case, one could use Theorems <ref> and <ref> to check when a metacyclic p-group is admissible over K. On the other hand, if such a prime p decomposes in K then an admissible group need not be metacyclic, and that situation is not completely understood. Nevertheless, we can obtain partial results. Theorem 10.1 of <cit.> shows that if a p-group G is admissible over a Galois number field K with [K:] = n, then d(G) ≤ (n/2) + 2. The following result can be seen as a strengthening of this theorem. Let K be a finite Galois extension of , and p be an odd rational prime such that ζ_p ∉ K, and p decomposes in K. Let G be a p-group. Then * If ζ_p ∉ K_p then G is K-admissible if and only if d(G) ≤ [K_:_p] + 1. * If ζ_p ∈ K_p then G is K-admissible if and only if G can be generated by [K_:_p] + 2 many generators x_1, x_2, …, x_n satisfying the relation x_1^p^s[x_1,x_2][x_3,x_4] … [x_n-1,x_n] = 1 where p^s is such that ζ_p^s∈ K_p but ζ_p^s+1∉ K_p. The case when ζ_p ∉ K_p follows from Theorem <ref>. So assume that ζ_p ∈ K_p, and let p^s be the largest power of p such that ζ_p^s∈ K_p. Since [(ζ_p): _p] = p-1, we get that n = [K_p:_p] + 2 is an even number, and n ≥ 4. Let G be a K-admissible p-group. If G is metacyclic and generated by g_1 and g_2, then the free pro-p group F_n on n generators x_1, … x_n has a surjective map to G by x_2 ↦ g_1, x_4 ↦ g_2 and x_i ↦ 1, i ≠ 2,4. Clearly, this map satisfies the relation x_1^p^s[x_1,x_2][x_3,x_4] … [x_n-1,x_n] = 1. Now consider the case when G is not metacyclic. Let L/K be a G-Galois K-adequate extension. By Schacher's criterion, there will be two distinct places of K for which the decomposition group corresponding to the adequate extension L/K will be the whole group G. Let k be the completion of K at one such place, and l/k be a corresponding G-Galois extension of local fields coming from the extension L/K (since L/K is Galois, all such local extensions over k will be k-isomorphic). Since tamely ramified Galois extensions of local fields have metacyclic Galois groups and G is not metacyclic, the residue characteristic of k must be p, i.e. k ≅ K_p. Since ζ_p ∈ k by assumption, the absolute Galois group of the maximal p-extension of k is a Demuškin pro-p of rank [k:_p]+2 (Theorem 4 in <cit.>). In particular, it's the pro-p group on [k:_p]+2 generators x_1, …, x_n with one relation x_1^p^s[x_1,x_2][x_3,x_4] … [x_n-1,x_n] = 1 (Theorem 7 of <cit.>), and the result follows. In the other direction, assume that G is finite p-group that can be generated by [k:_p]+2 many generators subject to the given relation. Since p decomposes in K, there are at least two distinct completions k_1, k_2 with residue characteristic p. Once again by Theorem 7 of <cit.>, there exist G-Galois local extensions l_1/k_1 and l_2/k_2. Since ζ_p ∉ K, Corollary 3 of <cit.> asserts the existence of a G-Galois global field extension L/K that has l_i/k_i, i = 1,2 as completions. By Schacher's criterion this suffices to show that L/K is K-adequate and G is admissible over K. Note that since we assumed p to be an odd prime, if ζ_p ∈ K_p then n = [K_p:_p] is divisible by (p-1). In particular, it is an even number and the above description makes sense. Adapting the proof of Theorem <ref> we get a result in the converse direction of Theorem 10.1 of <cit.>: Let K be a finite Galois extension of , and G be an odd order group such that for each prime p that divides |G|, the prime p decomposes in the number field K and K does not have a primitive p-th root of unity. 
If d(G_p) ≤ ([K_p:_p]/2) + 1 for each Sylow p-subgroup G_p then G is K-admissible. By Schacher's criterion, it suffices to construct a G-Galois field extension L/K such that for each prime p dividing |G|, there are two places of K for which the decomposition group is a p-Sylow subgroup of G. Since ζ_p ∉ K for each prime p dividing |G|, the hypothesis of Corollary 3 in <cit.> are satisfied. Therefore it suffices to show that for each prime p dividing the order of G, there are two (distinct) completions of K that admit G_p-Galois field extensions, where G_p is a p-Sylow subgroup of G. So let p divide |G|, and since p decomposes in K, let k_1, k_2 be two distinct completions of K with respect to valuations extending the p-adic valuation. By hypothesis, d(G_p) ≤ ([k_i:_p]/2)+1 for each i = 1,2. If ζ_p ∉ k_1 (and so ζ_p ∉ k_2 either since k_1 ≅ k_2 over _p) then by Theorem 3, the absolute Galois group of the maximal p-extension of k_i, i = 1,2 is a free pro-p group on [K_p:_p]+1 generators. In particular, k_1,k_2 admit G_p-Galois field extensions. On the other hand, if ζ_p ∈ k_1 (and so also in k_2) then the absolute Galois group of the maximal p-extension of k_i has a presentation with [K_p:_p] + 2 generators x_1, …, x_n with one relation x_1^p^s[x_1,x_2][x_3,x_4] … [x_n-1,x_n] = 1 (Theorems 7.5.11 and 7.5.12, <cit.>). Sending each x_i for i odd number to 1 gives a surjection to a free pro-p group on ([K_p:_p]/2) + 1 generators, and thus there is a G_p-Galois extension of local fields over both k_1 and k_2. This finishes the proof. The proof of the above theorem uses a result in <cit.> that generalizes the Grunwald-Wang theorem, and it also uses the description of the Galois group of maximal p-extension of local fields as Demuškin groups <cit.>, i.e., Poincaré groups of dimension 2. The presentations of these groups have a striking similarity to that of pro-p completion of fundamental groups of topological surfaces, and one might ask whether that analogy in the sense of arithmetic topology can be useful in providing an alternative description of admissible groups in this case. Similar to the case of Proposition <ref>, this result partially extends to more general solvable groups, as well as to non-Galois number fields. In the case that G is admissible and p does not decompose in K (i.e., the p-adic valuation on extends uniquely to K), one of the two places in Schacher's criterion must have residue characteristic different from p. This forces G to be metacyclic, and the characterization of admissible groups in that case is already known (see <cit.>). § ADMISSIBILITY OF TEXT-GROUPS OVER SPECIAL CLASSES OF NUMBER FIELDS This section contains results about admissibility of p-groups after specializing to certain classes of number fields, such as Galois number fields, number fields of degree 2^n and odd degree over , and finally the cyclotomic fields. As a corollary to Theorem <ref> and Theorem <ref> we get the following result for number fields that are Galois over . Here f_p is the residue degree of prime p. Let K be a Galois number field. An odd p-group with p coprime to the discriminant of K/ is K-admissible if and only if one of the following conditions holds: * prime p decomposes in K and d(G) ≤ f_p+1, or, * prime p does not decompose in K and G is metacyclic. A special class of Galois number fields are the cyclotomic number fields of type K = (ζ_l^r) for l a prime number. Since l is the only ramified prime in K/, Corollary <ref> leaves out only the case of l-groups. 
Since l does not decompose in K, any admissible l-group must be metacyclic by Theorem 10.2 of <cit.>. As far as the K-admissibility of l-metacyclic group is concerned, it depends on the field. Every metacyclic l-group is known to be admissible over (ζ_l), for example, by the discussion following Proposition 32 in <cit.>, or by Corollary <ref>. But it follows from Prop <ref> that there are metacyclic l-groups that are not admissible over (ζ_l^r) for r ≥ 2. §.§ Number fields of degree 2^n Specializing further to number fields Galois over that have degree [K:] a power of 2, we have Let K be a Galois number field of degree 2^n, and G be an odd p-group. Then the following assertions hold: * If p does not decompose in K, then G is K-admissible if and only if G is metacyclic. * If p decomposes in K, and either (p-1) ∤ [K_p:_p] or p is unramified in K then G is K-admissible if and only if d(G) ≤ [K_p:_p] + 1. Consider first the case when p does not decompose in K. The prime p does not divide [K:] since p is odd, and so the result follows from Corollary <ref>. The case when p decomposes in K follows from Proposition <ref>. Note that in order for (p-1) to divide the local degree [K_p:_p] which is a power of 2, p must be a Fermat prime and smaller than or equal to [K_p:_p]/2. At the time of writing this manuscript, only 5 Fermat primes are known (namely, 3, 5, 17, 257, 65537) and this list is conjectured to be exhaustive. Since every quadratic extension is automatically Galois, we can use the above corollary in that case. Moreover, there are no exceptional Fermat primes for quadratic extensions, and thus we get a complete characterization of admissible p-groups for odd primes p. Let K be a quadratic number field, and G be an odd p-group for some rational prime p. Then G is K-admissible if and only if one of the following conditions holds: * prime p decomposes in K and d(G) ≤ 2, or, * prime p does not decompose in K and G is metacyclic. Apply Corollary <ref> and observe that if a prime p splits in K then [K_p:_p] = 1. The above corollary leaves out the case of 2-groups. We point out that there are examples of quadratic number field K and metacyclic 2-groups that are not admissible over K. For example, the dihedral group of order 8 is known to not be (i)-admissible. (It follows from Corollary <ref>, for example) The next group of number fields with degree a power of two are quartic number fields, and that case is more involved than the case of quadratic number fields. First, the field can be non-Galois, and second, there is the possible Fermat prime 3 even if the field is Galois over . When the field is non-Galois, our strategy is to look at the various possible splittings of primes, and argue that the local field cannot contain p-th roots of unity. The precise result is Let K be a quartic number field. Then a p-group G for p ≠ 2,3 is admissible over K if and only if one of the following two conditions hold: * p does not decompose in K, and G is metacyclic. * p decomposes in K, and d(G) ≤| pmin([K_:_p]) + 1. Note that the case when K/ is Galois follows from Corollary <ref> after observing the following two points. First, that the only exceptional Fermat prime in this case is 3, which is excluded from the statement. Second, if the prime p decomposes in K then [K_:_p] is same for each extending the p-adic valuation. The general case is proven with a similar argument as in the Corollary <ref>. We start with the case when p does not decompose in K. 
In that case, since p does not divide 4 = [K:], the result follows from Corollary <ref>. In the case that p decomposes in K, we argue that none of the completions at p contain ζ_p. Let | p, and there are the following two subcases. * K_/_p is unramified. Since p is odd, and for odd primes _p(ζ_p)/_p is ramified. It follows that K_ does not contain a primitive p-th root of unity. * If K_/_p is ramified then the ramification degree can only be 2 or 3 since we assumed that p decomposes in K and [K:] = 4. If ζ_p ∈ K_ then the ramification degree must be at least four since p ≥ 5 and [_p(ζ_p) : _p] = p-1. Therefore ζ_p ∉ K_. So the hypothesis of Theorem <ref> is satisfied, and | pmin([K_:_p]) equals the second largest degree of local extensions as in Theorem <ref>. §.§ Odd degree number fields Similar to Corollary <ref>, the Galois number fields of odd degree are another special class of number fields where we can prove stronger results. Let K be a Galois number field whose degree [K:] is an odd number, and G be an odd p-group. Then the following assertions hold: * If p does not decompose in K then G is K-admissible if and only if G has a Liedahl presentation for K. (See <ref> for a definition of Liedahl presentation.) * Moreover, if in addition p is tamely ramified in K, then G is K-admissible if and only if it is metacyclic. * If p decomposes in K, then G is K-admissible if and only if d(G) ≤ [K_p : _p] + 1. The case when p does not decompose in K follows from Liedahl's result (Theorem <ref>) and Corollary <ref>. By the hypothesis, the degree of the number field [K:] is odd, whereas (p-1) is an even integer since p is assumed to be odd. It follows that (p-1) ∤ [K:]. The case where p decomposes in K now follows from Theorem <ref> in light of the observation that K/ is a Galois extension and (p-1) ∤ [K:]. The first odd degree case is the case of cubic number fields. Arguing similar to the case of quartic number fields, we get Let K be a cubic number field, and G be a p-group for p ≠ 2,3. Then G is K-admissible if and only if one of the following conditions holds: * prime p does not decompose in K and G is metacyclic. * prime p decomposes in K and d(G) ≤ 2. Since p ≠ 2,3, the case when K/ is Galois follows from Theorem <ref> once we observe that if p decomposes in K then K_p = _p. Next consider the case when K/ is not Galois. If p does not decompose in K then the result once again follows from Corollary <ref>. Finally, consider the case that p decomposes in K, and let k be any completion of K for a valuation extending the p-adic valuation on . We have [_p(ζ_p):_p] = p-1 ≥ 4 since p ≥ 5, and so ζ_p ∉ k since [k:_p] ≤ 3. Therefore, we can invoke Theorem <ref>. The result follows once we observe that for a cubic number field the second biggest local degree is necessarily 1 in Theorem <ref>. The exceptional case of p = 3 in Proposition <ref>, and more generally the case of p | [K:] in Theorem <ref>, can have a more involved description of admissible p-groups. An example of this phenomenon is Lemma <ref>, where it was shown that the non-abelian semi-direct product /l^2 ⋊/l is not admissible over the unique degree l number field K inside (ζ_l^2). In particular, /9 ⋊/3 is not admissible over the cubic number field (ζ_9+ζ_9^-1). Author Information: Deependra Singh Department of Mathematics, Emory University, Atlanta, GA 30322, USA email: deependra.singh@emory.edu
http://arxiv.org/abs/2409.02596v1
20240904102707
An Analysis of Linear Complexity Attention Substitutes with BEST-RQ
[ "Ryan Whetten", "Titouan Parcollet", "Adel Moumen", "Marco Dinarelli", "Yannick Estève" ]
cs.LG
[ "cs.LG", "cs.CL", "cs.SD", "eess.AS" ]
An Analysis of Linear Complexity Attention Substitutes with BEST-RQ
Ryan Whetten, Titouan Parcollet, Adel Moumen, Marco Dinarelli, Yannick Estève
§ ABSTRACT Self-Supervised Learning (SSL) has proven to be effective in various domains, including speech processing. However, SSL is computationally and memory expensive. This is in part due to the quadratic complexity of multi-head self-attention (MHSA). Alternatives for MHSA have been proposed and used in the speech domain, but have yet to be investigated properly in an SSL setting. In this work, we study the effects of replacing MHSA with recent state-of-the-art alternatives that have linear complexity, namely, HyperMixing, Fastformer, SummaryMixing, and Mamba. We evaluate these methods by looking at the speed, the amount of VRAM consumed, and the performance on the SSL MP3S benchmark. Results show that these linear alternatives maintain competitive performance compared to MHSA while, on average, decreasing VRAM consumption by around 20% to 60% and increasing speed from 7% to 65% for input sequences ranging from 20 to 80 seconds. self-supervised learning, speech, efficiency, linear complexity § INTRODUCTION Self-supervised learning (SSL) is an approach to training machine learning models where pseudo-targets are extracted from the data itself. Since SSL is unsupervised, these models can be pre-trained on immense amounts of unlabeled data, and then obtain good results on downstream tasks using minimal amounts of labeled data. SSL methods have proven to be useful in a variety of domains, including speech processing <cit.>, where SSL has reached state-of-the-art performance in tasks like Automatic Speech Recognition (ASR) <cit.>, Emotion Recognition (ER) <cit.>, Automatic Speaker Verification (ASV) <cit.>, Spoken Language Understanding (SLU) <cit.>, and Automatic Speech Translation (AST) <cit.>. Despite their performance, training SSL models is still very costly in terms of the amount of data, GPUs, and time needed. For example, Google USM was trained on 12 million hours (or over 1,369 years) of audio <cit.>, and the base and large XLS-R models were trained with 128 and 200 GPUs, respectively <cit.>. Even in efforts to make state-of-the-art models like HuBERT and data2vec more efficient, these SSL models still require around 1,000 A100 GPU hours of training <cit.>. In a study of the architectures of SSL models for speech <cit.>, the authors identified three main culprits: (i) the Acoustic Feature Extractor (AFE), which transforms the raw waveform into a latent representation; (ii) the context encoder, which is often a large Transformer <cit.> or Conformer <cit.>; and (iii) the SSL training objective. When looking at these three culprits, BEST-RQ <cit.> seems to be theoretically one of the most efficient SSL models for speech that has been proposed. BEST-RQ starts with Mel Filterbanks, addressing culprit (i).
Then, it creates pseudo labels using a frozen, randomly initialized linear projection and codebook coupled with cross-entropy training, addressing culptrit (iii). In contrast, other models, such as wav2vec 2.0 <cit.>, use a learnable codebook and typically use a combination of objectives, slowing down training. For instance, previous studies showed that BEST-RQ obtain comparable downstream performance to wav2vec 2.0 while being 2.4 times faster to train <cit.>. However, for culprit (ii), the context encoder, BEST-RQ uses Conformer layers, which are computationally expensive. Conformers, like transformers, are expensive partly due to multi-head self-attention (MHSA) time complexity being quadratic with respect to the input sequence length. Therefore, to address culprit (ii), one needs to search for an efficient alternative to MHSA. Many studies have been conducted to reduce the complexity of MHSA, but only few have been applied to speech tasks and none have have been applied to SSL for speech. Examples of such methods include HyperMixing <cit.>, Fastformer <cit.>, SummaryMixing <cit.>, and Mamba <cit.>. The main contribution of this work is to address culprit (ii) by, for the first time, evaluating the most promising linear time complexity alternatives to MHSA in an SSL for speech setting. Downstream experiments conducted following the MP3S benchmark <cit.> show that our linear-time complexity BEST-RQ maintains performance with an equivalent MHSA BEST-RQ while decreasing VRAM consumption by around 20% to 60% and increasing inference speed from 7% to 65% for input sequences from 20 to 80 seconds. As a second contribution, we open-source the code in the widely used SpeechBrain toolkit <cit.>, enabling the community to experiment with efficient SSL models for speech[<https://github.com/whettenr/brq-att-alt-exp>]. § BACKGROUND Multi-head self-attention (MHSA) has quadratic time complexity with respect to the input length due attention weights being calculated by a dot product between every query-key token pair. Despite its complexity, this dot product operation enables each token to access the global context which is important for reaching high performance <cit.>. In research on reducing the complexity of MHSA, there are two overarching methods: (i) those aiming to approximate this pair-wise token computation with a lower cost, and (ii) those that do not seek to mimic this pair-wise token computation, but instead introduce global context in another fashion. Some methods in the first family include sparse attention, such as BigBird and Longformer <cit.>, which use a combination of sliding window, global, and random attention to achieve linear complexity. The main issue with these sparse attention methods is that they cannot fully model global context as they operate on windows. Other methods in this family, such as Linformer and Linear Transformer <cit.> approximate attention by computing low-rank approximates of the key and value matrices or use the dot-product of kernel feature maps and make use of the associative property to achieve linear complexity. Yet, in practice, the aforementioned methods are still computationally expensive <cit.>. In contrast, Fastformer <cit.> has proven to perform well on text data, outperforming previously mentioned sparse attention methods and low-rank approximates. This is done by making use of additive attention and element-wise multiplication to summarize the query and key matrices as vectors resulting in linear complexity while fully modeling global context. 
Fastformer has also been applied to speech and proved to work well on ASR tasks with the branchformer architecture <cit.>. Due to these performances and the availability of its implementation, Fastformer is considered as a good representative candidate of the first family of methods. The other family of solutions, which do not aim to approximate the self-attention mechanism, include, HyperMixing <cit.>, SummaryMixing <cit.>, and Mamba <cit.>. HyperMixing was first introduced with the HyperMixer <cit.>. The HyperMixer is an extension of the MLPMixer <cit.>. The disadvantage of MLPMixer is that it can not handle inputs of varying length, making it not suitable for NLP or speech tasks. HyperMixer extends MLPMixer by using hypernetworks <cit.> to capture global context for inputs that vary in length. HyperMixer proved to have good performance on text tasks and fully-supervised ASR with the HyperConformer <cit.>. Alternatively, SummaryMixing does not use hypernetworks, but instead the input is passed through a parametrized function, such as a multi layer perceptron, which is averaged across all time steps, resulting in a summary vector. To introduce global context to the input sequence, this summary vector is concatenated to the input at each time step and then fed through another parametrized function, which becomes the final output. SummaryMixing proved to perform just as well as MHSA on ASR, keyword spotting, and SLU, while decreasing the amount of required memory and training time. Lastly, Mamba is a method for sequence modeling based on state space models <cit.>. State space models can be thought of as a combination of recurrent neural networks and convolutional neural networks, which scale linearly with respect to the sequence length. Mamba has shown good performance on text, audio (which were in an auto regressive SSL setting), and a two supervised speech tasks <cit.>: ASR and speech enhancement. HyperMixing, SummaryMixing, and Mamba all have available implementations, and are therefore selected as representatives of the second family of methods. Despite the good performance of these substitutes for MHSA on supervised tasks, their performance in an SSL setting for speech tasks has not yet been studied, and the focus of these studies has been mostly on ASR. Furthermore, these methods have not been scaled to large models, e.g. from only about 5 to 100M parameters for speech tasks <cit.>, which contrasts with the current scale of SSL models. Thus, in this work we explore these unexamined aspects of linear complexity alternatives by (i) experimenting in an SSL setting, (ii) looking at performance on a variety of speech tasks using a benchmark from the community, and (iii) scaling up model size to above 300M parameters. § ATTENTION ALTERNATIVES IN BEST-RQ In this section, we give an overview of BEST-RQ and describe how Fastfastformer, SummaryMixing, HyperMixing, and Mamba achieve linear complexity. Let us define the input to the self-attention module as 𝐗∈ℝ^T × d = [𝐱_1, 𝐱_2, ...,𝐱_T] or a sequence of vectors of dimension d with T time steps. The difference between methods lies in how this input is transformed to contain information about the global context. BEST-RQ is, to this day, the most efficient SSL paradigm. It is notably used by large scale models such as Google USM <cit.> and Universal-1 from AssemblyAI. BEST-RQ begins with Mel Filterbanks which are passed through two paths, (i) the model, and (ii) the random-projection quantizer. 
For path (i), a random portion of the Mel Filterbanks are masked and then passed through two convolutional layers, a series of conformer layers, and then a linear layer. For path (ii) the unmasked Mel Filterbanks are passed through a randomly initialized frozen linear layer. Then a codebook look up is performed following: y = min_i norm_l2(c_i) - norm_l2(Am) , where m is a stacked portion of four Mel Filterbank frames, A is the linear projection, and c is the codebook. The index, y, from this codebook look up is used as the pseudo-target for pre-training for that portion of the input. Then, the cross-entropy loss is calculated between the masked sections of the output of the linear layer from the path (i) and the corresponding pseudo-targets from path (ii). BEST-RQ originally uses self-attention and, therefore, exhibits quadratic time complexity. Fastformer. The complexity is reduced to linear by using additive attention to summarize the attention matrices as a single vector. To show how this is done, let 𝐐, 𝐊∈ℝ^T × d, be the standard linear transformations from the transformer composed of 𝐐 = [𝐪_1, 𝐪_2, ...,𝐪_T] and 𝐊 = [𝐤_1, 𝐤_2, ...,𝐤_T]. 𝐐 is summarized as a query vector 𝐪 by the following: α_t = softmax(𝐰_q^T 𝐪_t/√(d)); 𝐪 = ∑^T_t=1α_t 𝐪_t. where 𝐰_q ∈ℝ^d is a learnable vector used with each vector of 𝐐 to generate the attentions score α_t. The summary query vector 𝐪 is calculated as a weighted some of the attention scores with their respective 𝐪_t. Then, 𝐪 is multiplied by each vector in 𝐊, resulting in a global context-aware key matrix that relies on the more efficient element-wise multiplication instead of dot products. HyperMixing reduces the complexity by using a token mixing multi-layered perceptron (TM-MLP) from the MLP-Mixer <cit.>, which can be described as: 𝐓𝐌-𝐌𝐋𝐏(𝐗) = 𝐋𝐚𝐲𝐞𝐫𝐍𝐨𝐫𝐦(𝐖_1(σ(𝐖^T_2 𝐗^T))), where W_1, W_2 are weight matrices and σ represents some non-linear activation function. This is a standard MLP except the linear operations are performed on the transposed 𝐗. This can be taught of as operating on the tokens instead of the channels which introduces global context as it allows information to pass between tokens. The key difference between MLP-Mixer and HyperMixing is that 𝐖_1and 𝐖_2 are generated by another MLP and thus can vary in length. SummaryMixing. The input, 𝐗, is passed into two functions f and s taking the form, in practice, of two MLPs. The output of s is averaged across all time steps, resulting in a summary vector s̅. To introduce global context to the input sequence, s̅ is concatenated to the output of f at each time step, t, and then fed through another function, or MLP in this case, called c. The output of c becomes the final output 𝐡. The overall SummaryMixing mechanism can be expressed as: s̅ = 1/T∑_t=1^T s(𝐱_t); 𝐡 = c(f(𝐗), s̅). Calculating the summary vector s̅ is an average and thus is linear with respect to the input length. As a result, SummaryMixing is able to introduce global context with linear complexity. Mamba. The memory complexity is reduced to linear by processing the input sequence in an unidirectional manner similar to a recurrent network. Being a selective state space model, Mamba represents the past information of 𝐗 as an hidden state h_t with constant memory consumption, hence solving the problematic quadratic time complexity from MHSA. 
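Before turning to the details of Mamba, it may help to make the linear cost of these token-mixing alternatives concrete. The following is a minimal PyTorch-style sketch of SummaryMixing as described by the equations above: a mean-pooled summary vector is concatenated back to a per-frame transformation f(X) and passed through a combiner c. This is an illustrative re-implementation under our own naming and with arbitrary hidden sizes and activations, not the official SpeechBrain module.

import torch
import torch.nn as nn

class SummaryMixing(nn.Module):
    """Illustrative SummaryMixing block: h = c([f(X), s_bar]) with
    s_bar = mean_t s(x_t). Cost is linear in the sequence length T."""
    def __init__(self, d_model, d_hidden):
        super().__init__()
        self.s = nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU())      # summary branch
        self.f = nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU())      # local branch
        self.c = nn.Sequential(nn.Linear(2 * d_hidden, d_model), nn.GELU())  # combiner

    def forward(self, x):                                   # x: (batch, T, d_model)
        summary = self.s(x).mean(dim=1, keepdim=True)       # (batch, 1, d_hidden)
        summary = summary.expand(-1, x.size(1), -1)         # broadcast to every time step
        return self.c(torch.cat([self.f(x), summary], dim=-1))  # (batch, T, d_model)

x = torch.randn(2, 100, 80)                                 # e.g. 100 frames of 80-dim features
print(SummaryMixing(80, 128)(x).shape)                      # torch.Size([2, 100, 80])

Mamba, in contrast, achieves linear scaling not through token mixing but through a recurrent state update with constant memory.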
To do so, the model employs a discretized state transition matrix A̅∈ℝ^N × N, an input discretized projection matrix B̅∈ℝ^H × 1, and an output projection matrix C ∈ℝ^1 × H to compute the hidden state h_t as follow: h_t = A̅h_t-1 + B̅X_t, y_t = Ch_t. To reduce the computation overhead brought by the recurrence, an hardware-aware parallel scan algorithm introduced in <cit.> allows to unroll equation <ref> as the input sequence 𝐗 convolved with a structured kernel composed of A̅, B̅, and C fixed. With this improvement, the throughput of Mamba is five times higher than a Transformer. Finally, to introduce global context to the model, similar to MHSA, we use a bidirectional Mamba as motivated by <cit.>. § EXPERIMENTS In this section, we describe our SSL pre-training protocol and summarize the downstream tasks, giving a brief overview of the datasets and evaluation metrics for each task. §.§ Self-supervised Pre-training All models are open-sourced in the SpeechBrain <cit.> library along with the hyperparameters for each model. Architecture details. We use the implementation of BEST-RQ from <cit.> which uses Conformer layers <cit.> with relative sinusoidal positional embedding and the PyTorch multi-head self-attention. For all models the pre-training framework stays the same. We only replace the MHSA cell with either Hypermixing, Fastformer, SummaryMixing, or Mamba. The small models have 12 conformer layers and the large models have 24 layers. We adjust the hidden dimensions to make all the models have around the same number of parameters, that is, 95M and 315M for the small and large ones respectively. The detailed list of hyperparameters can be found in the open-sourced SpeechBrain recipes. Pre-training details. We pre-train all our models for the same amount of steps, set to 200k using the 960 hours of training data from the LibriSpeech <cit.> dataset. Although 200k steps is a lower number than other state-of-the-art models trained for 400k or 800k steps <cit.>, previous research has empirically shown that, 200k steps is sufficient to compare SSL model performance <cit.>. Dynamic batching is used, meaning audio files are grouped or bucketed together with those of a similar length, and the number of samples per batch varies based on the bucket size in order to keep the number of seconds of input per batch similar. In our experiments, and following the literature <cit.>, we set the batch size to 1.7 hours for all models. §.§ Downstream Tasks and Metrics For the downstream evaluation we use a portion of the tasks from the Multi-Probe Speech Self-Supervision benchmark or MP3S <cit.>. In MP3S, pre-trained models are frozen and a learned weighted sum of the outputs of the hidden layers of the pre-trained models are fed into a downstream model called a probe. Because results were shown to vary depending on the probe’s architecture, for each task two probes are provided. Furthermore, and as SSL models are commonly fine-tuned with the downstream use case, we added an evaluation with a full fine-tuning for ASR. Automatic Speaker Verification (ASV) is a binary classification task where given 2 utterances, the goal is to determine whether the speakers are the same. The evaluation metric for ASV is the Equal Error Rate (EER). For the dataset, we use VoxCeleb1 <cit.>, which is made of utterances from celebrities sourced from YouTube. The dataset is divided into train and test splits, which we used accordingly. The probes for this task are the X-Vectors <cit.> and ECAPA-TDNN <cit.>. 
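The probing protocol described above, in which a frozen SSL encoder provides hidden-layer outputs that are combined through a learned weighted sum before being fed to a task-specific probe, can be sketched as follows. This is only a schematic of the general MP3S-style setup, not the benchmark's actual code: the encoder call signature (output_all_layers), the utterance-level mean pooling, and the linear head are placeholder assumptions on our part.

import torch
import torch.nn as nn

class WeightedSumProbe(nn.Module):
    """Frozen SSL model + learnable softmax weights over its hidden layers,
    followed by a small downstream head (here, a linear classifier)."""
    def __init__(self, ssl_model, num_layers, d_model, num_classes):
        super().__init__()
        self.ssl_model = ssl_model.eval()
        for p in self.ssl_model.parameters():      # keep the SSL encoder frozen
            p.requires_grad = False
        self.layer_logits = nn.Parameter(torch.zeros(num_layers))
        self.head = nn.Linear(d_model, num_classes)

    def forward(self, wav):
        with torch.no_grad():
            # assumed interface: returns per-layer features stacked as (L, B, T, d)
            layers = torch.stack(self.ssl_model(wav, output_all_layers=True))
        w = torch.softmax(self.layer_logits, dim=0)
        feats = (w[:, None, None, None] * layers).sum(dim=0)   # (B, T, d)
        return self.head(feats.mean(dim=1))                    # utterance-level pooling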
Intent Classification (IC) is a classification task of predicting the main purpose or objective of an utterance. We use accuracy as the evaluation metric. For this task, we use the SLURP dataset <cit.> containing 18 intents coming from single-turn user interaction with a voice assistant. Some examples of intents are email, calendar, or play (as in play a song). The probes for this task are a linear probe, in which the output of the SSL model are average-pooled along the time dimension and then passed into a linear classifier, and an LSTM probe, which consists of a two-layered BiLSTM followed by a linear classification layer. Emotion Recognition (ER) is the task of predicting the emotion of a speaker. Similar to IC, we use accuracy to measure performance. We use the IEMOCAP dataset, which consists of 10 actors performing scripts with four different emotions (neutral, happy, sad and angry). The probes for this task are a linear probe, like IC, and an ECAPA-DNN probe, like ASV. Automatic Speech Recognition (ASR) is the task of transcribing what was said in an utterance. For ASR, we measure performance by the word error rate (WER). We use the LibriSpeech train-clean-100 for training, dev-clean for validation, and test-other for final testing. We report on the final test WER without a language model and with the official 4-gram language model[<openslr.org/11/>] using beam search and shallow fusion. Also, and to test performance in low-resource and out-of-domain settings, which is one use-case of SSL models, we use the two low-resource language ASR datasets proposed in MP3S, Welsh and Basque from the CommonVoice 11.0 dataset <cit.>. For LibriSpeech, we use the 2-layered BiLSTM (LSTM) and ContextNet <cit.> probes, and for CommonVoice we use the linear and LSTM probes offered by MP3S. As mentioned, it is common for pre-trained models to be fine-tuned on a given task instead of being frozen. As a representative of this, we fine tune the models using the 100 hours of labeled data in the train-clean-100 split of LibriSpeech and evaluate on the test-clean and test-other splits. The downstream architecture for this task is a feed-forward neural network with CTC loss. § RESULTS We first evaluate the speed and memory gains on a controlled toy task (Figure <ref>). The findings are then extended to real SSL pre-training and the MP3S benchmark. Speed and Memory Evaluation. We give the models randomly generated data of lengths that range from 10 to 80 seconds with 10 seconds intervals. We set the batch size to 6, which was chosen to prevent running out of memory too quickly with MHSA. We perform 10 runs at each input length and time the forward pass as well as measure the VRAM consumption. Only the forward pass is evaluated to concur with a deployment scenario. Averages of computation time and peak VRAM over the 10 runs alongside the 95% bootstrapped confidence interval are plotted in Figure <ref>. Measurements were taken on an isolated node with a Nvidia A100 80GB GPU. For input sequences of 10 seconds, the amount of time and VRAM needed is relatively similar for all models. However, a difference starts to appear with 20 seconds, where the alternatives, on average, use 24% less memory, and run 7% faster relative to MHSA. On the other extreme, at an input of 80 seconds, the alternatives use on average 64% less memory and run 65% faster. 
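The toy benchmark above boils down to timing forward passes and recording peak GPU memory at several input lengths. A hedged sketch of that measurement loop is given below; the model constructor, the 80-dimensional filterbank-like features, and the 100-frames-per-second framing (a 10 ms hop) are our own placeholder choices rather than the exact experimental script.

import time
import torch

def profile_forward(model, seconds, frames_per_second=100, d_feat=80,
                    batch_size=6, n_runs=10, device="cuda"):
    """Return mean forward-pass time (s) and peak VRAM (GB) for random inputs
    of a given duration, mirroring the toy speed/memory evaluation."""
    model = model.to(device).eval()
    x = torch.randn(batch_size, seconds * frames_per_second, d_feat, device=device)
    times = []
    torch.cuda.reset_peak_memory_stats(device)
    with torch.no_grad():
        for _ in range(n_runs):
            torch.cuda.synchronize(device)
            t0 = time.time()
            model(x)
            torch.cuda.synchronize(device)
            times.append(time.time() - t0)
    peak_gb = torch.cuda.max_memory_allocated(device) / 1024**3
    return sum(times) / len(times), peak_gb

# for length in range(10, 90, 10):
#     print(length, profile_forward(my_encoder, length))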
SummaryMixing and Fastformer were the fastest compared to MHSA with 15% and 11% increase in speed with an input of 20 seconds and 70% and 71% increase in speed at 80 seconds, respectively. In terms of memory, Mamba and Fastformer proved to be the most memory efficient with 28% and 29% decrease in peak memory with an input of 20 seconds, and 67% and 66% decrease in memory at 80 seconds, respectively. These findings, however, need to be validated with real-scale experiments to make sure that these alternatives can reach decent downstream performance. For pre-training time estimates, we run 5 epochs and measure the time and max VRAM on Nvidia V100 32GB GPUs keeping the batch size at 1.7 hours as in the full pre-training. We scale these numbers up to 200k steps to get an estimate of the full number GPU hours. All of the alternatives prove to be faster and use less memory except the HyperMixing-LG and Mamba-LG models. We report these in Table <ref>. Speech Recognition on LibriSpeech. With the LSTM probe (Table <ref>), MHSA performed better than Mamba, HyperMixing, and SummaryMixing. For the base models with the Contextnet probe, the base version of Mamba outperformed MHSA. For larger models with the Contextnet probe, SummaryMixing performed better than MHSA except with the LM on the test-other split, in which SummaryMixing was 0.01 behind MHSA. Despite a dedicated parameter tuning, the large Fastformer falls significantly behind the others. We then fine-tuned the SSL models on the train-100 subset of LibriSpeech (Table <ref>). Confidence intervals come from 1000 bootstraps on the test-clean set with a LM. Interestingly, the confidence intervals of HyperMixing and Mamba base models and the SummaryMixing Large model overlap with MHSA best performance. It is therefore unclear if MHSA would always outperform these alternatives. Speech Recognition on CommonVoice. This dataset tells a different story and we hypothesis that this is due to a major issue with most speech SSL model benchmarks. Indeed, Librispeech is always used both during the pre-training and downstream evaluation, introducing a clear bias. With CommonVoice (Table <ref>), and with both probes, MHSA is not the best model anymore. With the linear probe, HyperConformer and SummaryMixing both performed better than MHSA on Welsh and Bask. With the LSTM probe, Mamba performed the best followed by HyperMixing, then SummaryMixing or MHSA depending on the language. This demonstrates the ability of these alternative methods to generalize well to out-of-domain and low-resource datasets compared to MHSA. Speaker Verification. As for CommonVoice, results varied depending on the probe used, however, MHSA clearly is not the best performing solution (Table <ref>). For the ECAPA probe HyperMixing proved to be the best with an EER of 3.65 and 3.54 for the base and large models respectively. With the Xvectors probe, HyperMixing performed best out of the small models with an EER of 8.54 and for the large model SummaryMixing performed best with and EER of 8.30. Intent Classification. The accuracies observed on this task validate our previous findings (Table <ref>). All alternatives, except Fastformer, and with both probes performed significantly better than MHSA. Emotion Recognition. This task tells a similar story to intent classification, but with a slightly different leader board (Table <ref>). Again, MHSA is not the best performing solution. 
Indeed, Mamba performed the best with the linear probe, reaching an accuracy of 64.10% and 66.15% for the base and large model respectively, compared to 60.1% and 62.1% for MHSA. For the ECAPA-TDNN probe, SummaryMixing performed the best with an accuracy of 64.34% and 65.21% for the base and large model respectively. § DISCUSSION AND CONCLUSION One of the goals of SSL pre-training is to develop a model that can represent data, without the need for labeled data, in a way that is useful for a variety of downstream tasks. As this process is expensive, making SSL speech models less resource intensive is an active area of research. Part of this expense is due to multi-head self-attention. While alternatives exist, when it comes to speech tasks, they have only been applied to fully supervised tasks. In this work, we explore replacing multi-head self-attention with four state-of-the-art alternatives: HyperMixing, Fastformer, SummaryMixing, and Mamba in an SSL setting for speech tasks. We show that for sequences of 20 seconds or more these alternatives are substantially faster and consume less memory. Based on the toy test, and considering that about 75% of LibriSpeech is below 15 seconds, we believe that we would see a greater difference in pre-training time and memory when pre-training with a set such as <cit.>, where all pre-training audio was cropped to be between 32 and 64 seconds. However, for inputs under approximately 20 seconds, the amounts of time and memory the models consume are all relatively similar. This threshold, however, could drop drastically if the model does not combine Mel Filterbanks with a two-dimensional CNN, as the acoustic feature extractor plays a critical role in the length of the sequence arriving at the self-attention module. For instance, one may expect that replacing this CNN with a one-dimensional one would bring this threshold down to 10 seconds. Nevertheless, many files in common speech datasets are shorter than 20 seconds, and many speech tasks do not require global context from anything over 20 seconds. Splitting audio files on silence for long-audio processing or streaming is also possible, which calls into question the necessity of these alternatives to MHSA. We believe that future work could involve processing long audio files and developing Large Audio Foundation Models, similar to the research trend of including more context in Large Language Models (LLMs). With this trend in mind, one could imagine performing common LLM tasks, such as summarization, that require long context directly from audio without the need for a transcription. Nevertheless, as a result of our findings, we believe that unless one is specifically working with long audio files or from the raw waveform, further speed and memory gains will not be obtained by replacing MHSA with alternatives of linear complexity. We believe that more efficient SSL models for speech might instead be reached through other architectural changes, pruning/quantization methods, and data selection.
http://arxiv.org/abs/2409.02989v1
20240904180001
Chasing the beginning of reionization in the JWST era
[ "Christopher Cain", "Garett Lopez", "Anson D'Aloisio", "Julian B. Munoz", "Rolf A. Jansen", "Rogier A. Windhorst", "Nakul Gangolli" ]
astro-ph.CO
[ "astro-ph.CO", "astro-ph.GA" ]
Chasing the beginning of reionization in the JWST era
Christopher Cain (clcain3@asu.edu), Garett Lopez, Anson D'Aloisio, Julian B. Muñoz, Rolf A. Jansen, Rogier A. Windhorst, and Nakul Gangolli
School of Earth and Space Exploration, Arizona State University, Tempe, AZ 85287-6004, USA; Department of Physics and Astronomy, University of California, Riverside, CA 92521, USA; Department of Astronomy, The University of Texas at Austin, 2515 Speedway, Stop C1400, Austin, TX 78712, USA
§ ABSTRACT Recent JWST observations at z > 6 may imply galactic ionizing photon production in excess of prior expectations. Under observationally motivated assumptions about escape fractions, these suggest a z ∼ 8-9 end to reionization, in strong tension with the z < 6 end required by the Lyα forest. In this work, we use radiative transfer simulations to understand what different observations tell us about when reionization ended and when it started. We consider a model that ends too early (at z ≈ 8) alongside two more realistic scenarios that end late at z ≈ 5: one that starts late (z ∼ 9) and another that starts early (z ∼ 13). We find that the latter requires up to an order-of-magnitude evolution in galaxy ionizing properties at 6 < z < 12, perhaps in tension with recent measurements of ξ_ ion by JWST, which indicate little evolution. We also study how these models compare to recent measurements of the Lyα forest opacity, mean free path, IGM thermal history, visibility of z > 8 Lyα emitters, and the patchy kSZ signal from the CMB. We find that neither of the late-ending scenarios is conclusively disfavored by any single data set. However, a majority of these observables, spanning several distinct types of observations, prefer a late start. Not all probes agree with this conclusion, hinting at a possible lack of concordance between observables. Observations by multiple experiments (including JWST, Roman, and CMB-S4) in the coming years will either establish a concordance picture of reionization's early stages or reveal systematics in data and/or theoretical modeling. § INTRODUCTION Despite an explosion of new data in the past decade probing cosmic reionization, little is known about how and when the process began. The ending of reionization, believed to occur at 5 < z < 6, has been constrained by observations of the Lyα forest of high-redshift QSOs <cit.>. Direct (and indirect) measurements of the mean free path from QSO spectra <cit.>, and measurements of the IGM thermal history at z ≥ 5.5 <cit.>, have further corroborated this picture. The electron scattering optical depth to the CMB <cit.> constrains the midpoint of reionization to be z ∼ 7.5. Evidence of damping wings in high-redshift z ∼ 7-8 QSOs <cit.> and measurements of the neutral fraction based on (non)detections of Lyα emitters <cit.> indicate that the IGM was partially neutral at these redshifts. Understanding the timeline of reionization is crucial for revealing the nature of the ionizing sources that drove it.
Explaining how reionization could be driven by galaxies alone has been historically challenging thanks to the high early measurements of τ_ es <cit.> which required very early and/or extended reionization histories. This tension eased as the measured value of τ_ es steadily decreased. Several recent works <cit.> have showed that galaxies can complete reionization by z ≈ 6 under physically reasonable assumptions about their ionizing properties. Recent evidence for an end later than z = 6 further relaxed demands on galaxy ionizing output (although see ). Concurrent efforts demonstrated that AGN are unlikely to have contributed the majority of the ionizing budget responsible for reionization (e.g. , although see ). These findings paint a relatively simple, consistent picture of reionization: it ended at 5 < z < 6, was in progress at z ∼ 7-8, and was likely driven by galaxies. Prior to JWST, the precise timing of reionization's midpoint and especially its early stages were not tightly constrained. Constraints on reionization's midpoint from τ_ es <cit.> spanned a range of ± 0.75 in redshift (at 1σ), and few direct constraints on the first half of reionization existed. The space of models proposed by the aforementioned works span a wide range of possibilities[The scenarios proposed by  (early-starting, gradually ending reionization), and  (late-starting, rapidly ending) reionization, roughly bracket the proposed possibilities. ] without contradicting observations. Indeed, one important goal for JWST is to probe the properties of galaxies at z > 7-8 in hopes of learning more about reionization's early stages. However, the first JWST results may be complicating, as much as clarifying, our understanding of reionization. JWST has allowed for measurements of the UV luminosity function (UVLF) at much higher redshifts than HST, up to z ∼ 14 <cit.>. It has also allowed us to measure the ionizing efficiency of galaxies, ξ_ ion, above z = 6 <cit.>. Recently,  pointed out that a face-value interpretation of recent UVLF and ξ_ ion measurements from JWST () combined with observationally motivated assumptions about ionizing escape fractions (f_ esc) suggests reionization ended around z ∼ 8-9, inconsistent at > 2σ with the Planck τ_ es measurement[Although the tension is slightly smaller with the recent re-measurement of τ_ es from Planck data by  - see Figure <ref>. ]. This result represents a stark reversal from the historical problem of galaxies producing too few ionizing photons to complete reionization on time <cit.>. Several bright Lyα emitters (LAEs) at z > 8 <cit.> have also been observed, with the highest-redshift detection to date at z = 10.6 <cit.> by JWST. These observations may be surprising if the IGM is mostly neutral at these redshifts, since observing Lyα emission requires some level of ionization around galaxies <cit.>. The top panel of Figure <ref> shows the volume-averaged ionized fraction (x_ HII^ V) for the three reionization models we will study in this work. The early start/early end model is motivated by the aforementioned findings of , and has a midpoint (endpoint) of z_ mid = 8.5 (z_ end = 8). The late start/late end model is motivated by Lyα forest observations at 5 < z < 6 and the Planck τ_ es measurement, has z_ mid = 6.5 and z_ end = 5. The early start/late end model also has z_ end = 5, but an earlier midpoint (z_ mid = 7.5) and a start at z ∼ 13. These three models broadly represent the three types, or categories, of possible reionization histories. 
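The mapping from these ionization histories to the CMB optical depth is a single line-of-sight integral over x_ HII^ V(z). The minimal sketch below uses a toy tanh parameterization and illustrative cosmological parameters rather than the simulated histories; it is meant only to make the bookkeeping explicit (later midpoints give lower τ_ es).

```python
# Minimal sketch: electron-scattering optical depth for a toy tanh ionization
# history.  Illustrative only -- the tau_es values in the figure come from the
# simulated x_HII(z) histories, not from this parameterization.
import numpy as np
from scipy.integrate import quad

sigma_T = 6.652e-29                     # Thomson cross section [m^2]
c_ms    = 2.998e8                       # [m/s]
G       = 6.674e-11
Om, Ob, h = 0.305, 0.048, 0.68
H0    = 100.0 * h * 1.0e3 / 3.086e22    # [1/s]
Y_He  = 0.247                           # helium mass fraction (assumed)
rho_c = 3.0 * H0**2 / (8.0 * np.pi * G)             # critical density [kg/m^3]
n_H0  = (1.0 - Y_He) * Ob * rho_c / 1.6726e-27      # mean H density today [1/m^3]

def x_HII(z, z_mid, dz=0.5):
    """Toy tanh reionization history with midpoint z_mid."""
    return 0.5 * (1.0 + np.tanh((z_mid - z) / dz))

def tau_es(z_mid, z_max=30.0):
    def integrand(z):
        Hz = H0 * np.sqrt(Om * (1.0 + z)**3 + (1.0 - Om))
        # electrons from H plus singly ionized He (He II reionization ignored)
        n_e = n_H0 * x_HII(z, z_mid) * (1.0 + Y_He / (4.0 * (1.0 - Y_He)))
        return c_ms * sigma_T * n_e * (1.0 + z)**2 / Hz
    return quad(integrand, 0.0, z_max)[0]

for z_mid in (6.5, 7.5, 8.5):   # rough midpoints of the three scenarios
    print(f"z_mid = {z_mid}: tau_es ~ {tau_es(z_mid):.3f}")
```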
The bottom panel shows τ_ es for each, compared with the  measurement (black) and the recent re-analysis of Planck data from , with shaded regions denoting ± 1σ uncertainties. The late start/late end model is consistent within 1 σ with , and the early start/late end case is similarly consistent with . In this work, we study the observational properties of the three reionization models shown in Figure <ref> using radiative transfer (RT) simulations. We will demonstrate, in accord with previous works, that observations from the z ≲ 6 Lyα forest strongly disfavor the early start/early end model, in agreement with τ_ es. We will also compare the two late-ending models to a wide range of observations with the goal of understanding whether existing data supports a late or an early start to reionization. This work is organized as follows. <ref> describes how we calibrate our three reionization models and discusses their implications for galaxy properties in light of recent JWST observations. In <ref>, we describe our methods for running RT simulations of reionization and forward-modeling various observables. We compare our models to several sets of complementary observations in <ref>, discuss the implications of our findings in <ref>, and conclude in <ref>. After discussing an observable, we will bold-face the name of the reionization model (if any) preferred by that observable. Throughout, we assume the following cosmological parameters: Ω_m = 0.305, Ω_Λ = 1 - Ω_m, Ω_b = 0.048, h = 0.68, n_s = 0.9667 and σ_8 = 0.82, consistent with  results. All distances are in co-moving units unless otherwise specified. § IMPLICATIONS OF JWST GALAXY OBSERVATIONS §.§ Reionization Model Calibration The main input to our RT simulations (described in <ref>) is the globally averaged ionizing photon emissivity of sources verses redshift, Ṅ_γ(z). In this section we describe how we use a combination of JWST observations and Lyα forest data to construct (or “calibrate”) Ṅ_γ(z) for our three models. Our starting point is the amount of non-ionizing UV light produced by galaxies, which is quantified by the UV luminosity function (UVLF). This has been measured up to z ∼ 14 by JWST using both photometry and spectroscopy <cit.>. The top two panels of Figure <ref> show two sets of UVLFs that we use in our analysis. In both panels, the dashed curves show the z < 8 UVLFs measured by  with HST. In the left panel, the solid curves denote the double-power-law (DPL) fits from  at 8 ≤ z ≤ 12, and the dotted curve is the measurement from  at z = 14.5. In the right panel, the dotted curves show results for the redshift-dependent UVLF parameters given in Eq. 3-6 of  (which is a best-fit to measurements from ). At z > 8, the UVLFs in the left panel are a factor of ∼ 3 lower than those in the right panel. We will use these sets to roughly bracket observational uncertainties on the UVLF. The lower left panel shows the integrated UV luminosity density, ρ_ UV, vs. redshift. The curves show the logarithmic average of ρ_ UV calculated using the two sets of UVLFs in the top panels, and the shaded regions show the spread between them. For illustration, we integrate the UVLF down to two limiting magnitudes - a bright cutoff of M_ UV^ cut = -17 (black) and a fainter M_ UV^ cut = -13 (magenta). The background shading denotes the redshift ranges covered by HST and JWST data. Note that at z > 8, the redshift evolution of ρ_ UV(z) is fairly insensitive to M_ UV^ cut, the main difference being normalization. 
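The ρ_ UV curves in the lower left panel are luminosity-weighted integrals of the UVLF down to M_ UV^ cut. A minimal sketch of that integral, using a double power law with placeholder parameters rather than the published fits shown in the upper panels:

```python
# Minimal sketch: rho_UV as a luminosity-weighted integral of a double power
# law UVLF down to a limiting magnitude.  The DPL parameters are placeholders,
# not the published fits plotted in the upper panels.
import numpy as np
from scipy.integrate import quad

def phi_dpl(M, phi_star=1.0e-4, M_star=-20.5, alpha=-2.1, beta=-4.0):
    """Double power law UVLF, phi(M) in [mag^-1 cMpc^-3] (placeholder values)."""
    x = 10.0 ** (0.4 * (M - M_star))
    return phi_star / (x ** (alpha + 1.0) + x ** (beta + 1.0))

def L_uv(M):
    """UV luminosity [erg/s/Hz] of an AB absolute magnitude M."""
    return 10.0 ** (-0.4 * (M + 48.6)) * 4.0 * np.pi * (10.0 * 3.086e18) ** 2

def rho_uv(M_cut, M_bright=-24.0):
    """rho_UV [erg/s/Hz/cMpc^3], integrated from M_bright down to M_cut."""
    return quad(lambda M: phi_dpl(M) * L_uv(M), M_bright, M_cut)[0]

for M_cut in (-17.0, -13.0):
    print(f"M_UV^cut = {M_cut}: rho_UV ~ {rho_uv(M_cut):.3e} erg/s/Hz/cMpc^3")
```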
The insensitivity to M_ UV^ cut arises because both sets of UVLFs have faint-end slopes close to α = -2.1 at z ≥ 8, with little evolution across that redshift range <cit.>. This is in contrast to some pre-JWST expectations, which predicted a steepening of the faint-end slope with redshift and a corresponding shallow evolution in ρ_ UV for faint M_ UV^ cut <cit.>. Instead, ρ_ UV evolves quickly with redshift, with a factor of ∼ 10 evolution between z = 8 and 13. The lower right panel quantifies the redshift evolution of galaxy ionizing properties in our models using ⟨ f_ escξ_ ion⟩_L_ UV≡Ṅ_γ(z)/ρ_ UV(z). This is the UV-luminosity (L_ UV)-weighted average of the product of f_ esc and ξ_ ion for the galaxy population. The black, red, and blue curves show this quantity for our three reionization models (see legend) assuming the average ρ_ UV curve for M_ UV^ cut = -13 (lower left panel). The shaded regions show the spread in this quantity (for fixed Ṅ_γ(z)) arising from observational uncertainty in ρ_ UV(z), as shown in the lower left. We calibrate Ṅ_γ(z) for the early start/early end model such that its ⟨ f_ escξ_ ion⟩_L_ UV agrees with the observationally motivated model from  (which assumes M_ UV^ cut = -13), shown by the faded gray dashed curve. Consistent with the findings of , this model ends reionization early at z ≈ 8 (see Figure <ref>). Our late start/late end scenario is motivated by the possibility that the tension with τ_ es and the Lyα forest in the early start/early end model can be solved with a simple redshift-independent re-scaling of Ṅ_γ(z). This could be the case if the measurements of ξ_ ion assumed in  (from ) are systematically biased high, and/or if the same is true of the f_ esc values they inferred from the results of . We find that scaling Ṅ_γ(z) down by a factor of 5 from the early start/early end case brings the end of reionization to z ∼ 5. We then further adjust Ṅ_γ(z) at z < 7 at the few-percent level until we achieve good agreement with the Lyα forest at z ≤ 6 (as we will show in <ref>). The cyan-dashed curve shows the observationally motivated model from , multiplied by 3.5, which also has redshift evolution similar to this scenario. However, the late start/late end scenario is not the only possibility allowed by τ_ es and the Lyα forest. It could also be that Ṅ_γ(z) is consistent with the  model at the highest redshifts (z ≳ 10), but declines at lower redshifts in such a way that reionization ends at z < 6, as required by the Lyα forest. This scenario is represented by our early start/late end model (red dashed curve). This model also ends reionization at z = 5, and by adjusting its Ṅ_γ(z) at z < 10, we can also achieve agreement with the Lyα forest. This model requires a factor of ≈ 10 decline in ⟨ f_ escξ_ ion⟩_L_ UV between z = 10 and 6, a decrease much steeper than suggested by the results of  and . This could be achieved if ξ_ ion, f_ esc, M_ UV^ cut, or some combination of these evolves significantly across this redshift range (see <ref>). §.§ Is the early start/late end model plausible? Given the discrepancy between the evolution of ⟨ f_ escξ_ ion⟩_L_ UV in the early start/late end model and in the  and  models, it is natural to ask whether this scenario is plausible. The top panel of Figure <ref> shows ⟨ f_ escξ_ ion⟩_L_ UV for this model, alongside observationally and theoretically motivated scenarios that make various assumptions about the redshift evolution of ξ_ ion and/or f_ esc. The dotted curve is the  model multiplied by 0.2, which agrees with the late start/late end case.
The dashed curve is the same, except that we extrapolate ξ_ ion (Eq. 4 in ) outside the range of M_ UV and redshift within which it was fit to data. For the dot-dashed curve, we further replace the observationally motivated f_ esc prescription assumed in  with the global f_ esc(z) from the flagship THESAN simulation <cit.>. We have re-scaled each of the gray curves by different constants to bring them as close as possible to the early start/late end model. We obtain the best agreement for the dot-dashed curve, which boosts the redshift evolution of both ξ_ ion and f_ esc relative to . This shows that the early start/late end model is plausible given evolution in f_ esc and/or ξ_ ion, but only under “favorable” assumptions about both. The bottom panel of Figure <ref> illustrates another mechanism that could work in the direction of making reionization start earlier: evolution in M_ UV^ cut. The red curve assumes that the M_ UV^ cut evolves from -10 at z = 12 to -19 at z = 6 (M_ UV^ cut(z) = -19 + 3/2(z - 6)), which causes ρ_ UV to decline much less rapidly with redshift than it does in the bottom left panel of Figure <ref>. This allows ⟨ f_ escξ_ ion⟩_L_ UV in the early start/late end scenario to evolve less quickly, in better agreement with the  model (dotted curve). This type of behavior in the galaxy population could arise from decreasing dust obscuration <cit.> and/or feedback from the IGM reducing or shutting off star formation in low-mass halos at lower redshifts <cit.>. The evolution in M_ UV^ cut assumed in this illustrative example is extreme, and likely ruled out by existing observations <cit.>, but serves to show how an evolving M_ UV^ cut could support the early start/late end model. These comparisons suggest that a factor of a few of evolution in each of ξ_ ion and f_ esc, and perhaps some in M_ UV^ cut, could explain the early start/late end scenario. Indeed, there is some observational and theoretical support for the idea that ξ_ ion could increase with redshift and/or be larger in fainter galaxies <cit.>, However, some works find that ξ_ ion is closer to constant with M_ UV, and perhaps redshift <cit.>. Unfortunately, f_ esc at these redshifts cannot be directly measured, and there is no consensus in the literature from either indirect observational estimates <cit.> or cosmological simulations <cit.>. Thus, galaxy observations alone cannot rule out the early start/late end case, although such a model does require more significant evolution in galaxy ionizing properties than suggested by current observations. As such, we conclude that the late start/late end model seems more likely based on existing data. §.§ Ionizing photon budget We show the Ṅ_γ(z) for our models in the top panel of Figure <ref>, alongside several others from the literature that also match the z < 6 Lyα forest. The cyan-solid and magenta-dashed faded curves show the “fiducial” and “early” models from , and the dot-dashed gray curve shows the reference model[We actually show the reference model + “extra un-resolved sinks” (their Fig. 6) since it assumes the same sub-grid sinks opacity model we use in this work. ] from . Our late start/late end model drops off faster than the others at z > 9, and our early start/late end model closely matches the “early” model from . 
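For reference, the quantity used to compare these scenarios, ⟨ f_ escξ_ ion⟩_L_ UV = Ṅ_γ(z)/ρ_ UV(z), is simply the ratio of an emissivity history to a UV luminosity density. The sketch below uses placeholder values for both inputs, purely to make the units explicit; the calibrated curves come from the procedure described above.

```python
# Unit bookkeeping for <f_esc xi_ion>_Luv = Ndot_gamma / rho_UV.  Both input
# histories below are placeholders, not the calibrated model curves.
import numpy as np

z          = np.array([6.0, 8.0, 10.0, 12.0])
Ndot_gamma = np.array([4e50, 3e50, 2e50, 1e50])    # [photons s^-1 cMpc^-3] (assumed)
rho_UV     = np.array([1.5e26, 8e25, 3e25, 1e25])  # [erg s^-1 Hz^-1 cMpc^-3] (assumed)

fesc_xi = Ndot_gamma / rho_UV                      # [Hz erg^-1]
for zi, val in zip(z, fesc_xi):
    print(f"z = {zi:4.1f}: <f_esc xi_ion>_Luv ~ {val:.2e} Hz erg^-1")
```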
The bottom panel of Figure <ref> shows the integrated ionizing photon output of galaxies per H atom, with the vertical lines denoting the end of reionization and the horizontal lines the number of photons per H atom needed to complete reionization (the so-called “ionizing photon budget”). We find a budget of ≈ 2.6, 2.8, and 2.7 photons per H atom for the late start/late end, early start/late end, and early start/early end models, respectively. All our models are mildly “absorption-dominated” - meaning N_γ/ H is (slightly) more than twice the number of baryons per H atom in the universe (1.082, counting helium). The photon budget depends mildly on the reionization history itself, but is determined mainly by the recombination rate predicted by our IGM opacity sub-grid model. Recently,  made a first attempt to observationally measure the “clumping factor”, which quantifies the recombination rate in the in-homogeneous IGM <cit.>. They found C ∼ 12 at z > 5, higher than the commonly assumed C = 3 <cit.>. For a reionization history similar to our late start/late end model, they find this higher C increases the photon budget from ≈ 1.5 to ≈ 3. Assuming that the number of recombinations, N_γ/ H - 1.082, is ∝ C, the photon budget in the late start/late end model implies an effective C ∼ 9.5, slightly below but consistent at 1σ with the  measurements[Caveats include (1) that the real effective clumping factor predicted by our model is likely not constant in redshift <cit.>, and (2) the integrated number of photons absorbed by the IGM is slightly smaller than the number produced, an effect not accounted for in . ]. Using this crude approximation, a recombination rate high enough to delay the end of the early start/early end model to z ≈ 5 would require C ∼ 85, an unrealistically high number even compared to the most extreme simulations[For z_ end = 5.5 (6), this estimation gives C ∼ 70 (55). ]. § NUMERICAL METHODS In this section, we discuss some of the relevant technical details of our RT simulations, and our methods for forward-modeling the observables discussed in the next section. The reader interested only in the results of this work may safely skip this section and pick up at <ref>. §.§ Radiative Transfer Simulations We ran RT simulations of reionization using FlexRT (Cain & D'Aloisio in prep.). FlexRT is an adaptive ray tracing code that post-processes a time series of cosmological density fields to simulate the growth of ionized regions, the ionizing background, and the IGM thermal history. The code uses a sub-grid model for opacity to ionizing photons in ionized gas, which captures the effects of ∼ kpc-scale structure that cannot be directly resolved. Our sub-grid model is based on a suite of high-resolution, coupled hydro/RT simulations that can resolve the Jeans scale of cold, pre-ionized gas, run with the setup of <cit.>. We refer the reader to <cit.>, and a forthcoming code paper (Cain & D'Aloisio in prep.) for further details about FlexRT. Ionizing sources for the RT calculation are halos taken from a dark matter (DM)-only N-body simulation run with the particle-particle-particle-mesh (P^3M) code of . This run has a box size of L_ box = 200 h^-1Mpc and N = 3600^3 DM particles, and uses a spherical over-density halo finder to identify halos. We find that our halo mass function matches the halo mass function of  to within ≈ 5-10% (HMF) for M_ halo > 3 × 10^9 h^-1M_⊙, and is ≈ 50% incomplete at M_ halo = 1 × 10^9 h^-1M_⊙. 
To ensure we have a significant number of halos at the highest redshifts we simulate (z > 15), we adopt a mass cutoff for our sources of M_ halo > 1 × 10^9 h^-1M_⊙, even though our HMF is incomplete there. The density fields used in our RT calculation are taken from a high-resolution hydrodynamics simulation with the same large-scale initial conditions as the N-body run, which are re-binned to N_ RT = 200^3 for the RT. Our simulations start at z = 18 and end at z = 4.8. We assign UV luminosities (L_ UV) to halos by abundance-matching to observed UV luminosity functions (UVLFs). We use the measurements of  at z < 8, those of  at 8 < z < 14, and that of  at z = 14.5 - those shown in the upper left panel of Figure <ref>. The ionizing emissivity is distributed between halos following ṅ_γ∝ L_ UV[The amounts to assuming that the product of the escape fraction and ionizing efficiency of galaxies, f_ escξ_ ion, is constant over the ionizing source population at a fixed redshift. Note that although this is inconsistent with the model for f_ esc and ξ_ ion assumed in <ref>, the spatial distribution of photons between sources is both highly uncertain and is a higher-order effect for the purposes of most of this work. We will comment whenever it becomes relevant in subsequent sections. ]. The volume-averaged ionizing emissivity of sources, Ṅ_γ = 1/L_ box^3∑_i ∈ halosṅ_γ^i, is set by hand at each redshift and used to normalize the ṅ_γ proportionality. As explained in <ref>, we calibrated Ṅ_γ(z) for each scenario using a combination of JWST data (<ref>) and measurements from the z < 6 Lyα forest (<ref>, <ref>). We bin halos by the RT cell they occupy and treat cells containing one or more halos as sources. We use a single ionizing frequency[See e.g.  for discussions of the effect of multi-frequency RT on the Lyα forest. ] (E_γ = 19 eV), chosen to reproduce the same frequency-averaged HI cross-section, ⟨σ_ HI⟩, as a power law spectrum of the form J_ν∝ν^-1.5 between 1 and 4 Rydbergs. Cells are assigned post ionization-front (I-front) temperatures using the flux-based method prescribed in . The subsequent thermal history is calculated using their Eq. 6. §.§ Modeling the Lyα forest We model the Lyα forest in post-processing using the density fields from our aforementioned high-resolution hydrodynamics simulation, which has N = 2048^3 gas cells. Ionized fractions, photo-ionization rates (Γ_ HI) and temperatures are mapped from the FlexRT simulations onto these density fields, and the residual neutral fraction in ionized regions is calculated assuming photo-ionization equilibrium and the case A recombination rate[Case A is appropriate for the under-dense gas that set the forest transmission at these redshifts. ]. The native spatial resolution of our density field is Δ x_ cell = 97.6 h^-1kpc, too coarse to fully resolve the low-density voids that set the transmission at 5 < z < 6 <cit.>. We apply the multiplicative correction prescribed in Appendix A of  to our residual neutral fractions to get an effective resolution of Δ x_ cell = 12.2 h^-1kpc. The Lyα voigt profile is approximated using the analytic fit from . Since our gas temperatures are calculated on the coarse RT grid, we do not capture the temperature-density relation on scales smaller than the RT cell size. As temperature (usually) correlates positively with density in the IGM, low (high)-density hydro cells embedded in larger RT cells will be assigned temperatures that are too high (low). 
To correct for this, we assign a local temperature-density relation to each cell using the procedure described in  (see also their Appendix E), which uses the IGM temperature model of . We find this lowers the mean transmission by ≈ 10-15%, since the correction cools the under-dense cells, which affect the mean transmission the most. We compute Lyα forest statistics by casting 4000 sightlines from random locations and in random directions of length 50 h^-1Mpc, for a total path length of 200 h^-1Gpc. We calculate transmission statistics from z = 4.8 to z = 6 in Δ z = 0.2 increments. Since our native resolution of 97.6 h^-1kpc is only 2-3× narrower than the typical width of Lyα line profile (≈ 15 km/s vs. 30-50 km/s), we do the integration over the line at a velocity resolution 4 × higher than that of the hydro sim. We find this reduces the mean transmission by a few percent at most. §.§ Lyα transmission around galaxies We have also modeled Lyα transmission on the red side of line center around halos that could host Lyα emitting galaxies (LAEs). This allows us to assess the statistics of LAE visibility in our models at z > 7. LAEs typically emit Lyα red-shifted to both the red and blue sides of line center <cit.>. Although any emission on the blue side would be absorbed by even the ionized part of the IGM at these redshifts, attenuating the red side requires damping wing absorption from the fully neutral IGM <cit.>. This makes LAEs a potentially powerful probe of the IGM neutral fraction. However, interpreting observed red-side Lyα emission (or lack thereof) is complicated by uncertainties in modeling the intrinsic line profile of the LAEs themselves, and other factors such as surrounding inflows/outflows and proximate self-shielding systems <cit.>. In this work, we will avoid modeling the intrinsic LAE line profile, instead focusing on the IGM transmission, T_ IGM. To calculate this, we trace 50 randomly oriented sightlines away from each halo and compute the Lyα transmission profiles along each sightline at ± 5 Å from line center (≈± 800 km/s). We use the same N = 2048^3 high-resolution hydro simulation for this calculation as for the Lyα forest, and we calculate the ionization state of the ionized gas in the same way[Our results are largely insensitive to how the highly ionized IGM is modeled, however, since the red side transmission is set by the damping wing from the neutral IGM. ]. We begin integrating the Lyα opacity 500 h^-1kpc (5 hydro cells) away from the location of the halo to avoid gas within the halo itself contributing to T_ IGM. Gas velocities relative to the halo are computed by subtracting the halo velocity measured from the N-body simulation. The gas around massive halos can have sharp line-of-sight velocity gradients owing to inflows near the halo. Sightlines pointing away from these halos see positive velocity gradients, which narrows the Lyα line in redshift space. We find that the resulting sharp jumps in velocity between adjacent cells can produce artifacts in our transmission spectra. To mitigate this, we linearly interpolate the gas velocities (and all other relevant quantities) onto a grid with 4 × higher resolution than that of the simulation when calculating T_ IGM (see  for a description of a similar procedure). We find the transmission profiles to be well-converged at this resolution. §.§ Modeling the Patchy kSZ signal CMB photons can scatter off free electrons during reionization. 
If the electrons are moving relative to the CMB rest frame, this results in a Doppler shift of the photons. This can shift the blackbody spectrum and result in additional temperature anisotropies in the CMB <cit.>. This is known as the Sunyaev-Zel'dovich Effect, and its resultant temperature deviation along a line of sight is given by: Δ T/T = - σ_T n_e,0 ∫ e^-τ_es (γ̂·q⃗/c) (ds/a^2) where γ̂ is the line of sight direction, q⃗ = (1 + δ) χv⃗ is the ionized momentum field, σ_T is the Thomson scattering cross section, τ_es is the CMB optical depth to the last scattering surface, and n_e,0 is the mean electron density at z=0. The integral can be broken up into a post-reionization, homogeneous kinetic Sunyaev-Zel'dovich Effect (hkSZ), and a high-z patchy kinetic Sunyaev-Zel'dovich Effect (pkSZ) while reionization is still occurring. Untangling these components is difficult, so observations use templates to subtract the hkSZ component from the measured total kSZ power. For the purposes of this work, we take any contributions from z ≳ 5 as part of the pkSZ, even in simulations where reionization ends at z > 5. This keeps the different scenarios directly comparable to each other. Note that actual measurements (such as the one from SPT by , see <ref>) must assume a fixed z_end in their analysis. We calculate the signal using a method similar to that first suggested by . The kSZ angular power spectrum can be calculated from the 3D power spectrum of q⃗ by: C_ℓ = (σ_T n_e,0/c)^2 ∫ (ds/(s^2 a^4)) e^-2τ P_q_⊥(k = ℓ/s, s)/2 where (2π)^3 P_q_⊥(k) δ^D(k⃗-k⃗'⃗) = ⟨q_⊥(k⃗) · q_⊥^*(k⃗'⃗)⟩ is the power spectrum of the ionized momentum modes perpendicular to the Fourier wave vector, q_⊥ = q⃗ - k̂(q⃗·k̂), δ^D(k⃗-k⃗'⃗) is the Dirac-δ function, and q⃗(k⃗) = ∫ q⃗(x⃗) e^-i k⃗·x⃗ d^3 x⃗ denotes the Fourier transform of q⃗. Only the q_⊥ mode contributes significantly to the kSZ signal, due to q_|| contributions canceling out when integrating over the line of sight. We refer the reader to  and appendix A in  for details. In general, it is common to write the angular power spectrum in the dimensionless form D_ℓ = ℓ(ℓ + 1) C_ℓ / (2π). Running RT simulations imposes practical limitations on the volume of our box. This limited box size means we miss large scale velocity modes that add to the kSZ power. To compensate for this, we use the same correction applied in eq. (B1), and take advantage of linear theory to generate a correction term: P_q_⊥^miss(k,z) = ∫_k'<k_box (d^3 k'/(2π)^3) (1-μ^2) P_χ(1+δ)(|k⃗ - k⃗'⃗|) P_vv^lin(k') where k_box = 2π/L_box is the box-scale wavemode, μ = k̂·k̂'̂, P_χ(1+δ)(k) is the ionized matter power spectrum, and P_vv^lin(k) = (ȧf/k)^2 P_δδ^lin(k) is the velocity power spectrum in the linear approximation taken from the public code CAMB <cit.>. It was found by <cit.> that this kind of approach can still underestimate D_3000^pkSZ by ∼10-20% due to the irreducible or connected component of the ⟨δ_χ v ·δ_χ v⟩ term being non-negligible because of the non-Gaussianity of reionization. So our calculations could be seen as conservative estimates of the power. As we will see in <ref>, an increase in power could strengthen our conclusions. As such, we do not expect this missing term to affect our qualitative results.
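A schematic version of the C_ℓ integral above is sketched below, with a placeholder P_q_⊥(k, z) standing in for the spectra measured from the simulation snapshots (so the output normalization is arbitrary) and a fully ionized IGM assumed in the optical-depth factor.

```python
# Schematic version of the patchy kSZ C_ell integral.  P_qperp below is a
# placeholder for the transverse momentum power spectra measured from the
# simulation snapshots, so the output normalization is arbitrary.
import numpy as np

sigma_T  = 6.652e-29      # Thomson cross section [m^2]
c        = 2.998e8        # [m/s]
n_e0     = 0.21           # comoving mean electron density today [m^-3] (assumed)
T_cmb_uK = 2.725e6        # CMB monopole [micro-K]
Om, h    = 0.305, 0.68
H0       = 100.0 * h * 1.0e3 / 3.086e22          # [1/s]

def Hz(z):
    return H0 * np.sqrt(Om * (1.0 + z) ** 3 + (1.0 - Om))

def chi(z, n=512):
    """Comoving distance to z [m]."""
    zz = np.linspace(0.0, z, n)
    return np.trapz(c / Hz(zz), zz)

def tau_to(z, n=512):
    """Thomson optical depth to z, assuming a fully ionized IGM (placeholder)."""
    zz = np.linspace(0.0, z, n)
    ne = n_e0 * (1.0 + zz) ** 3
    return np.trapz(sigma_T * ne * c / ((1.0 + zz) * Hz(zz)), zz)

def P_qperp(k, z):
    """Placeholder transverse momentum power spectrum [m^5 s^-2]."""
    return 1.0e62 * np.exp(-((z - 8.0) / 3.0) ** 2) / (1.0 + (k * 3.086e22) ** 2)

def D3000_pksz(ell=3000, zmin=5.0, zmax=18.0, nz=128):
    zs   = np.linspace(zmin, zmax, nz)
    a    = 1.0 / (1.0 + zs)
    s    = np.array([chi(z) for z in zs])        # comoving distance [m]
    tau  = np.array([tau_to(z) for z in zs])
    dsdz = c / Hz(zs)                            # ds = c dz / H(z)
    integrand = (dsdz / (s ** 2 * a ** 4) * np.exp(-2.0 * tau)
                 * 0.5 * P_qperp(ell / s, zs))
    C_ell = (sigma_T * n_e0 / c) ** 2 * np.trapz(integrand, zs)
    return T_cmb_uK ** 2 * ell * (ell + 1) * C_ell / (2.0 * np.pi)

print(f"toy D_3000^pkSZ ~ {D3000_pksz():.2e} muK^2 (placeholder P_qperp)")
```

In practice, P_q_⊥ is tabulated from the simulated momentum fields at each snapshot, interpolated in k and z, and supplemented with the missing large-scale power term before the line-of-sight integration.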
§ IMPLICATIONS OF OTHER REIONIZATION OBSERVABLES In the rest of this work, we study three other observational windows into reionization: measurements from the spectra of high-redshift QSOs at z ≤ 6.5 (<ref>), observations of Lyα-emitting galaxies at z ≥ 8 (<ref>), and the patchy kSZ effect from reionization (<ref>). Our goal will be to see if these observations, together with aforementioned JWST data, reveal a consistent picture about when reionization started and when it ended. §.§ QSO Observations at 5 < z < 6 §.§.§ The Lyα Forest The 5 < z < 6 Lyα forest is perhaps the most compelling indicator that reionization ended at z < 6. This conclusion has emerged from studies of the mean transmission of the Lyα forest and the scatter in Lyα opacities <cit.>. We will study both in this section. Figure <ref> shows the mean transmission of the Lyα forest, ⟨ F_ Lyα⟩, at 5 ≤ z ≤ 6 in our models compared with recent measurements from . The late start/late end and early start/late end models both agree well with the data - this is by design, since both Ṅ_γ histories were calibrated so that the simulations would match the  measurements of ⟨ F_ Lyα⟩. As such, these two models cannot be distinguished using the mean forest transmission alone. We see that the early start/early end model misses the measurements severely, predicting a mean transmission near unity at all redshifts. Indeed, the average HI photo-ionization rate, Γ_ HI, in ionized gas is ≈ 2.7 × 10^-11 s^-1 at z = 6, ≈ 2 orders of magnitude higher than measurements at that redshift <cit.>. Thus, the forest transmission measurements strongly disfavor a z ∼ 8 end to reionization. Because of this, we will omit the early start/early end model from many of our subsequent comparisons, and focus instead on whether observations can distinguish the other two scenarios. Figure <ref> shows the cumulative distribution function (CDF) of forest effective optical depths measured over intervals of 50 h^-1Mpc, P(< τ_ eff^50). The τ_ eff^50 distribution contains more information than ⟨ F_ Lyα⟩, since it is sensitive to the spatial fluctuations in the IGM ionization state. This shape is sensitive to the IGM neutral fraction, since neutral islands produce high-τ_ eff^50 sightlines at z < 6 in late reionization scenarios <cit.>. We show P(< τ_ eff^50) at z = 5, 5.2, 5.6, and 6 for each of our models, alongside measurements from . To show what P(< τ_ eff^50) would look like in a hypothetical early-ending model that is compatible with the forest, we re-scaled Γ_ HI in the early start/early end[The re-scaling factor used here was ∼ 10^2 at all redshifts! ] model such that ⟨ F_Lyα⟩ agrees with the measurements shown in Figure <ref>. This allows us to cleanly compare how the shape of P(< τ_ eff^50) is affected by the IGM neutral fraction at fixed ⟨ F_ Lyα⟩. At z = 6 (lower right), the early start/late end model best matches the  data. The blue dotted curve shows that a z > 6 end to reionization results in a CDF that is too narrow, implying too little scatter in τ_ eff^50, echoing the conclusions of . Conversely, the late start/late end model predicts too much scatter (the CDF is too wide). This is because it has a neutral fraction of ≈ 30% at z = 6, as compared to only 15% in the early start/late end case. At z = 5.6 (lower left), none of the models match the observations very well. Both of the late-ending models produce too much scatter in τ_ eff^50, and the early-ending one produces too little. 
This suggests that a model with a non-zero neutral fraction smaller than that in the early start/late end case (7.5%) would match the data best. At z = 5.2 (upper right), the early-ending scenario fits the data very well, and the other two models have too much scatter in P(< τ_ eff^50). This indicates that both late-ending models end reionization slightly too late. At z = 5 (upper left), when the neutral fraction is < 1% in all three models, they agree well with the observations. At face value, observations of P(< τ_ eff^50) at 5 ≤ z ≤ 6 seem to prefer the early start/late end model. This scenario has a lower neutral fraction at z ≤ 6 than the late start/late end case, and as such better matches the observed scatter in P(< τ_ eff^50). However, none of the scenarios in Figure <ref> match the observed P(< τ_ eff^50) at all redshifts. This is likely because our late-ending models finish reionization too late, as evidenced by the z = 5.2 comparison (upper right panel). The findings of , based on these same measurements, suggest that reionization should be complete by z = 5.3 (vs. z ≈ 5 in our models), a shift of Δ z ≈ 0.3 from our models. In Appendix <ref>, we estimate what P(< τ_ eff^50) would look like if both late-ending models finished reionization at z = 5.3. We use the FlexRT outputs at z' = z - 0.3 to calculate P(< τ_ eff^50), then re-scale Γ_ HI in ionized gas until ⟨ F_ Lyα⟩ matches measurements. We show that this procedure brings our simulations into better agreement with the measurements, and that the early start/late end model remains preferred. In , we pointed out several factors that can affect the precise timing of reionization's end in models matched to measurements of ⟨ F_ Lyα⟩. First, lack of spatial resolution in the forest can lead to an under-estimate of the mean transmission at fixed x_ HI <cit.>, resulting in a spuriously early end to reionization (by Δ z ≈ 0.2, see Fig. 11 of ) when calibrating to measurements. Our forest calculations include the resolution correction prescribed Appendix A of , so in principle they account for this effect. However, those corrections were derived in a 25 h^-1Mpc box, and it is unclear how they might change in a larger box. It is therefore possible that we have over-corrected for resolution. We also found in  that harder ionizing spectra and less clustered ionizing sources result in an earlier end to reionization at fixed ⟨ F_ Lyα⟩. Moreover, the uncertain clustering of ionizing sources also affects large-scale fluctuations in the ionizing background, which could be particularly intense if quasars played a large role in the end-stages of reionization <cit.>. Any of these effects could affect when reionization needs to end to match ⟨ F_ Lyα⟩ measurements, and the shape of P(< τ_ eff^50) at fixed neutral fraction, potentially affecting our interpretation of P(< τ_ eff^50) measurements. §.§.§ The Mean Free Path Next, we will study the mean free path to ionizing photons (MFP, λ_ mfp), the average distance an ionizing photon travels through the IGM before being absorbed. The MFP is sensitive to the distribution of neutral gas in the IGM and small-scale clumping in the ionized IGM <cit.>. We calculate the Lyman limit MFP in our simulations using the definition in Appendix C of , λ_ mfp^912 = ⟨∫ x df ⟩/⟨∫ df ⟩ = - ⟨∫_1^0 x df ⟩ where x is the position along a randomly oriented sightline, f(x) is the transmission of 912Å photons, and the angle brackets denote an average over many sightlines. 
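A minimal sketch of this estimator, applied to toy exponentially attenuated sightlines rather than transmission curves traced through the simulation:

```python
# Minimal sketch of the sightline-averaged MFP estimator
# lambda_mfp = -< int_1^0 x df >, evaluated on transmission curves f(x).
# The synthetic sightlines below are placeholders for the simulated ones.
import numpy as np

rng = np.random.default_rng(0)

def mfp_from_sightlines(x, f_los):
    """x: positions along the sightline [cMpc/h], shape (N,).
    f_los: 912A transmission per sightline, shape (n_los, N).
    Returns -< int_1^0 x df >, evaluated with a midpoint rule in f."""
    xmid = 0.5 * (x[1:] + x[:-1])
    df   = np.diff(f_los, axis=1)
    integral = np.sum(xmid * df, axis=1)   # int x df (f decreasing => negative)
    return -np.mean(integral)

# toy test: exponential attenuation with sightline-to-sightline scatter
x = np.linspace(0.0, 200.0, 2000)                        # [cMpc/h]
true_lmfp = 10.0                                         # [cMpc/h]
kappa = rng.lognormal(mean=0.0, sigma=0.3, size=(500, 1)) / true_lmfp
f = np.exp(-kappa * x)                                   # f(0) = 1, f -> 0

print(f"recovered lambda_mfp ~ {mfp_from_sightlines(x, f):.1f} cMpc/h "
      f"(input {true_lmfp})")
```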
This definition has been found to match well with forward-modeled direct MFP measurements from QSO spectra, even in a partially neutral IGM. We caution, however, that different ways of estimating the MFP from simulations can give modestly different results <cit.>. The left panel of Figure <ref> shows the MFP in our late-ending models, compared to measurements from . Both scenarios are in broad agreement with the measurements. However, the direct measurements using QSO Lyman Continuum (LyC) spectra (all but the  points) display a preference for the late start/late end model. This is largely due to the short z = 6 direct measurements, which prefer the rapid neutral fraction-driven decline in λ_ mfp^912 in the late start/late end case. By contrast, the early start/late end model is ≈ 2σ away from the central values of the z = 6 measurements from . Both models are within 1σ of the indirect, Lyα forest-based measurements from  (see also <cit.>). This result is consistent with previous findings that ongoing reionization at z = 6 is needed to explain the direct QSO measurements <cit.>. Another effect at play is that the ionized IGM at z = 6 in the late start/late end model was more recently ionized, and thus clumpier <cit.>. These factors result in direct MFP measurements preferring the late start/late end scenario. Our earlier result that P(< τ_ eff^50) measurements prefer the early start/late end model hints at a possible tension between the Lyα forest and direct MFP measurements. This is consistent with the fact that the indirect MFP measurements from  (based on P(< τ_ eff^50) itself) at z = 6 are a factor of ≈ 2 above the direct measurements. We emphasize that this tension is mild, since the 1 σ error bars of these measurements overlap. §.§.§ IGM Thermal History The IGM temperature at mean density, T_0, is shown in the right panel of Figure <ref>, alongside measurements from . Both late-ending models display a “bump” in temperature at z ≈ 5.3 due to the end of reionization. Heating by I-fronts <cit.> increases T_0 until near reionization's end, after which cooling from the expansion of the universe and Compton scattering off the CMB set the evolution of T_0 <cit.>. The peak in T_0 is higher in the late start/late end model because a larger fraction of the IGM is re-ionized at z < 6. The redshift of the bump suggested by the  measurements is closer to z ∼ 5.6 - this is consistent with our earlier finding (based on P(< τ_ eff^50)) that reionization may end Δ z ∼ 0.3 too late in our models. At face value, the early start/late end model agrees best with T_0 measurements. Although both models are consistent with the  points, the late start/late end model is too hot at z ≥ 5 compared to the measurements there. In the early start/late end case, a larger fraction of the IGM has re-ionized at higher redshift, giving it more time to cool by z = 5. That model also agrees well with the reionization history in the best-fitting model of , which fits a broad range of IGM temperature measurements down to z = 2. That model has ionized fractions of ≈ 35% (≈ 15%) at z = 8 (10), similar to our early start/late end scenario (which has 1 - x_ HI≈ 40% (20%)). An important caveat is that the thermal history is sensitive at the 20-30% level to the spectrum of the ionizing radiation, through a combination of the post I-front temperature (T_ reion) and photo-heating in ionized gas afterwards <cit.>. For example, a much softer ionizing spectrum could shift the T_0 histories significantly lower at fixed reionization history (see e.g.
the bottom middle panel of Fig. 3 in ). This could bring the late start/late end model into agreement with z ≤ 5 T_0 measurements. However, this would also require a later reionization history at fixed ⟨ F_ Lyα⟩ <cit.>, which would worsen the disagreement with the observed P(< τ_ eff^50) in Figure <ref>. As such, we conclude that measurements of T_0 mildly prefer the early start/late end model. §.§.§ Neutral fraction constraints at z ≤ 6.5 Finally, we compare our reionization models to observational constraints on x_ HI at z ≤ 6.5 obtained using QSO spectra. Figure <ref> compares our models with constraints from Lyα forest dark pixels <cit.>, dark gaps <cit.>, QSO damping wings <cit.>, P(< τ_ eff^50) <cit.>, and Lyα forest damping wings <cit.>. The bold lines show the reionization histories in our two late-ending models, while the faded lines show these shifted to the right by Δ z = 0.3, consistent with the discussion surrounding P(< τ_ eff^50) in <ref>. Most constraints are upper (lower) limits on the neutral (ionized) fraction. Several of these are in mild tension with the late start/late end model, and most are consistent with the early start/late end case. The z = 5.5 dark gap constraint from  and the recent QSO damping wing limits from  disfavor the late start/late end model. The recent lower limit on the neutral fraction from , derived from Lyα forest damping wings at z = 5.8, disfavors an end to reionization early than this. The same can be said of the  forest damping wing measurement at z = 5.6, although their measurement actually prefers the late start/late end case. An important caveat is that if reionization ends earlier by Δ z = 0.3 (as hinted by P(< τ_ eff^50) measurements), the tension with the late start/late end model disappears. In fact, the neutral fraction in the shifted early start/late end model cannot be much lower without being in tension with the  damping wing limit. We conclude that neutral fraction constraints at z < 6.5 mildly prefer the early start/late end model, but that relatively small, realistic shifts in the reionization history could change this conclusion. §.§ Lyα Emitters at z > 8 In this section, we study the Lyα transmission properties at z ≥ 8 around massive halos that could host bright LAEs, such as GN-z11 <cit.>. Our goal is to determine whether these observations prefer an early or late start to reionization. §.§.§ Examples of IGM transmission at z = 8 In Figure <ref>, we illustrate how Lyα transmission surrounding bright galaxies differs in the late start/late end and early start/late end models at z = 8. The solid curves show the IGM transmission (T_ IGM) vs. velocity offset (v_ off) on the red side of systemic averaged over halos with M_ UV < -17. The vertical magenta line denotes systemic Lyα, v_ off = 0. T_ IGM goes to 0 at systemic, and at v_ off > 0 displays a shape similar to the characteristic damping-wing profile. We see much higher Lyα transmission in the early start/late end case, owing to its much higher ionized fraction (indicated in the legend). At v_ off < 500 km/s, T_ IGM is a factor of 2 or more above the late start/late end model. The thin lines show individual transmission profiles for 20 sightlines surrounding the brightest galaxy in the box. These are much higher than the average at v_ off≳ 200-400 km/s, and drop below the mean at smaller v_ off due to fast in-flowing gas around this object (see discussion of inflows in <ref>). 
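Much of the shape of these profiles can be reproduced by a far simpler calculation: the Lorentzian-wing opacity of a uniform, fully neutral IGM outside an ionized bubble of assumed radius, with only Hubble flow and no density or velocity structure. The sketch below implements that approximation; it is not the ray-traced calculation used for the figure.

```python
# Rough sketch of the red-side damping-wing transmission T_IGM(v_off) from a
# uniform, fully neutral IGM outside an ionized bubble of proper radius R_b.
# Lorentzian-wing approximation with Hubble flow only; the profiles in the
# figure come from ray tracing through the simulated density, velocity, and
# ionization fields.
import numpy as np

# constants (cgs)
sigma_cl = 0.02654          # pi e^2 / (m_e c)  [cm^2 Hz]
f_alpha  = 0.4164           # Lya oscillator strength
Gamma_a  = 6.265e8          # Lya damping constant [1/s]
nu_a     = 2.466e15         # Lya frequency [Hz]
c_cms    = 2.998e10

def damping_wing_tau(v_off_kms, z=8.0, R_b_pMpc=1.0, x_HI=1.0,
                     h=0.68, Om=0.305, n=4000):
    """tau(v_off) for a photon emitted v_off km/s redward of systemic."""
    H_z  = (h * 3.241e-18) * np.sqrt(Om * (1.0 + z) ** 3)       # [1/s]
    n_HI = x_HI * 1.88e-7 * (1.0 + z) ** 3                      # [cm^-3] (approx.)
    # proper path through the neutral IGM, starting at the bubble edge
    l  = np.linspace(R_b_pMpc, 200.0, n) * 3.086e24             # [cm]
    dl = l[1] - l[0]
    # velocity separation from the local Lya resonance (Hubble flow only)
    dv  = (v_off_kms * 1.0e5) + H_z * l                         # [cm/s]
    dnu = nu_a * dv / c_cms
    sigma_wing = sigma_cl * f_alpha * Gamma_a / (4.0 * np.pi ** 2 * dnu ** 2)
    return np.sum(n_HI * sigma_wing) * dl

for v in (100.0, 300.0, 600.0):
    tau = damping_wing_tau(v)
    print(f"v_off = {v:5.0f} km/s:  T_IGM ~ {np.exp(-tau):.2f}")
```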
The higher transmission at large v_ off owes to this object occupying a larger ionized region than the average galaxy. The higher T_ IGM in the early start/late end model would seem to naturally explain the detection of bright galaxies hosting Lyα emission at z ≥ 8. However, it is important to note that even the late start/late end model displays some transmission, even with its 16% ionized fraction, and this can be fairly high around the most biased objects (as the thin black lines show). Indeed, even around this single halo there is significant sightline-to-sightline scatter. This suggests that a statistical sample of observations is required to judge conclusively which model is preferred <cit.>. It also suggests that some LAE detections at z ≥ 8 could be explainable even if reionization starts relatively late. §.§.§ Visibility of LAEs For an LAE with an intrinsic equivalent width EW_ int, and an average IGM transmission over the emitted line, ⟨ T_ IGM⟩_ line, the observed equivalent width EW_ obs is EW_ obs = ⟨ T_ IGM⟩_ line EW_ int An object is detectable if EW_ obs is greater than some threshold EW_ obs^min. This condition can be expressed as EW_ obs/ EW_ int = ⟨ T_ IGM⟩_ line > EW_ obs^min/ EW_ int≡ T_ thresh where we have defined T_ thresh as the minimum IGM transmission that would make the LAE detectable. To avoid assumptions about the intrinsic properties of the LAE population, we will parameterize our visibility calculations in terms of T_ thresh. We will also adopt the common simplification that ⟨ T_ IGM⟩_ line can be approximated by T_ IGM at the v_ off of the line's emission peak[This assumption is not true in general because T_ IGM can vary significantly over the width of the emission line. ]. This allows us to parameterize LAE visibility in the (T_ thresh, v_ off) parameter space. In this section, we calculate LAE visibility statistics at z = 8, 9, 10, and 11. Recently,  studied the distribution of ionized bubble sizes in reionization models similar to ours. Generally, it is expected that galaxies must inhabit an ionized bubble of radius ≳ 1 pMpc to guarantee a high level of Lyα transmission on the red side of systemic <cit.>. They found that a model similar to our early start/late end scenario is required to produce a significant number of such ionized bubbles 8 ≤ z ≤ 10. In Figure <ref>, we perform a similar analysis using our visibility calculations. We show the fraction of LAEs with T_ IGM > T_ thresh vs. redshift for several choices of T_ thresh, v_ off, and UV magnitude range (faint vs. bright galaxies). The caption gives these parameter combinations for each curve as a brightness (M_ UV) and a combination of T_ thresh and v_ off. Visibility fractions increase with decreasing T_ thresh and increasing v_ off. The latter is true because T_ IGM increases with v_ off as the damping wing opacity decreases (Figure <ref>). Visibility is also higher for brighter galaxies, which inhabit the largest ionized bubbles. The left and right panels show results for the late start/late end and early start/late end models, respectively. The solid curves show visibility for bright (M_ UV < -21) LAEs with large velocity offsets (400 km/s) and low visibility thresholds (T_ thresh = 0.2). Such objects are visible nearly 100% of the time in the early start/late end case, and 40% of the time in the late start/late end model even at z = 11. 
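The visibility fractions discussed here amount to thresholding the transmission curves in the (T_ thresh, v_ off) parameter space. A schematic version of that bookkeeping, with random profiles standing in for the simulated ones:

```python
# Schematic of the visibility bookkeeping: given T_IGM(v_off) profiles for a
# sample of sightlines, an LAE is counted as visible if the transmission at
# its emission peak exceeds T_thresh.  The toy profiles are placeholders.
import numpy as np

rng = np.random.default_rng(2)

def visible_fraction(v_grid, T_profiles, v_off, T_thresh):
    """Fraction of sightlines with T_IGM(v_off) > T_thresh."""
    T_at_voff = np.array([np.interp(v_off, v_grid, T) for T in T_profiles])
    return (T_at_voff > T_thresh).mean()

# toy transmission profiles rising with velocity offset
v_grid = np.linspace(0.0, 800.0, 81)                       # [km/s]
amp = rng.uniform(0.2, 1.0, size=(500, 1))
T_profiles = amp * (1.0 - np.exp(-v_grid / 300.0))

for v_off, T_thresh in [(400.0, 0.2), (200.0, 0.2), (200.0, 0.5)]:
    frac = visible_fraction(v_grid, T_profiles, v_off, T_thresh)
    print(f"v_off = {v_off:3.0f} km/s, T_thresh = {T_thresh}: "
          f"visible fraction ~ {frac:.2f}")
```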
Reducing v_ off to 200 km/s (dashed curves) decreases visibility, especially in the late start/late end case, but even then 20-40% of LAEs are visible at 8 < z < 11. In the early start/late end case, the visibility fraction counter-intuitively increases with redshift. This is because the evolution in visibility is not being driven by the neutral fraction, but by inflows surrounding massive halos. At higher redshifts, brighter objects are found in less massive halos, which are surrounded by smaller inflows. This leads to increased transmission at v_ off = 200 km/s <cit.>. The dotted curves show that increasing T_ thresh from 0.2 to 0.5 (for M_ UV < -21 and v_ off = 200 km/s) has a substantial effect on visibility. In the late start/late end model, < 20% of LAEs are visible at z = 8, and this drops to near zero at z ≥ 9. However, in the early start/late end case, 40-50% of such objects are visible across this redshift range. The dot-dashed curve considers faint (-19 < M_ UV < -17) galaxies with high v_ off = 400 km/s and low T_ thresh = 0.2. These objects are visible 50-90% of the time in the early start/late end model, but < 20% of the time in the late start/late end case. Finally, the double-dot-dashed curves also show faint galaxies, but with v_ off = 200 km/s and T_ thresh = 0.5, a parameter combination that minimizes LAE visibility. In the late start/late end case, fewer than 10% of such objects are visible at any redshift, while in the early start/late end model, 17% (8%) of such objects are visible at z = 10 (11), and over half are visible at z = 8. In the left panel, we show recent measurements of the fraction of galaxies hosting Lyα emitters (the Lyα fraction, X_ Lyα^ obs) at z = 8.7 and z = 10 from . The purple points show X_ Lyα^ obs as measured from that work. Comparing these points directly with the fraction of LAEs that are visible assumes that the intrinsic fraction of galaxies hosting LAEs is unity - that is, X_ Lyα^ int = 1. For this reason, we display these points as lower limits on X_ Lyα^ obs/X_ Lyα^ int. The green points show X_ Lyα^ obs/X_ Lyα^ int assuming X_ Lyα^ int = 0.3, the Lyα fraction measured at z = 5 by , which we do not show as limits. The curves in the left panel show a wide spread that is broadly consistent with the measured visibilities. However, in the right panel, all the curves are on the high end of the measurements (except for the double-dot-dashed curve). At face value, these findings indicate that the observed visibility of LAEs is too low for the early start/late end scenario, instead preferring the late start/late end model. For the global LAE visibility fraction to evolve consistently with measurements from  in the early start/late end model, the double-dot-dashed curve in Figure <ref> would have to characterize the bulk of the population. These are faint LAEs with emission at low v_ off that require high IGM transmission to observe. While it is true that faint LAEs tend to have low v_ off <cit.>, they also tend to have fairly high EW_ intr <cit.>. A majority of the LAEs observed at z = 5 by  in that M_ UV range have EW_ intr > 50Å, and about half have EW_ intr > 100Å. With the visibility threshold of EW_ obs^min = 25Å used in , this would imply T_ thresh < 0.5 for the majority of faint objects, and T_ thresh < 0.25 for half of them. Moreover, a significant fraction of the faint objects observed at z = 5-6 in  have v_ off > 200 km/s.
Brighter galaxies in their sample generally have smaller EW_ intr, but these also tend to have higher v_ off <cit.> and inhabit larger bubbles, such that they would remain visible in the early start/late end model even if they required higher T_ thresh to detect. By contrast, in the late start/late end model, LAEs with a wide range of properties have visibilities consistent with the  measurements. §.§.§ Does GN-z11 require an early start? GN-z11 is the highest-redshift LAE detected to date, at z = 10.6. It has a broad Lyα emission feature centered at v_ off≈ 550 km/s, a full-width at half maximum (FWHM) of Δ v ≈ 400 km/s, and an observed EW of 18Å. Using a Bayesian analysis based on reionization simulations and an empirically derived model for the intrinsic EW distribution of LAEs from ,  inferred that the IGM must be at least 12% ionized at 2 σ confidence (yellow point in Figure <ref>). This constraint, at face value, clearly favors the early start/late end model (see Figure <ref> in the next section). Here, we consider whether the observed properties of GN-z11 require reionization to start early. We can estimate the EW_ intr required to produce the observed GN-z11 emission line as follows. First, we model the intrinsic line as a Gaussian with some central velocity v_ off^ intr, FWHM Δ v^ intr, and amplitude A. Then, the observed emission profile for a given sightline is given by the intrinsic profile multiplied by T_ IGM(v_ off). We also model the observed line as a Gaussian, with parameters given in the previous paragraph, and the continuum and normalization chosen to give the observed EW. For a sample of ∼ 2000 sightlines surrounding -22 < M_ UV < -21 galaxies, we fit for the parameters of the intrinsic line that, after attenuation by the IGM, gives a best fit to the observed line. The distribution of EW_ intr recovered with this procedure, P( EW_ int | EW_ obs), at z = 10.6 is shown in Figure <ref> for our late-ending models. The shaded region denotes the range of EW observed in similarly bright galaxies at lower redshifts , the highest of which is ≈ 60Å (Fig. 1 of ). We see a stark contrast between P( EW_ int | EW_ obs) in our models. Nearly all the sightlines in the early start/late end model require EW_ intr < 60Å, and about 20% require EW_ intr < 30Å. This suggests that bright LAEs with EWs on the high end of the observed distribution will produce a GN-z11-like observation most of the time in this scenario. In the late start/late end model, the distribution is much wider and shifted to much higher EW_ intr. Only 12% of sightlines allow for EW_ intr < 60Å, and none of them allow < 30Å. So, although objects such as GN-z11 are expected to be fairly rare in the late end/late start scenario, they would not be impossible to find. Note that the two M_ UV < -21 LAEs observed in  with the highest EWs (≈ 30 and 60Å) were both detected in Hα. Based off a clear detection of Hγ emission in GN-z11,  estimated that it should have strong Hα emission. As such, we can conclude that the detection of GN-z11 does not rule out the late start/late end scenario. However, if forthcoming observations reveal similar objects to be ubiquitous at z > 10, it would be strong evidence in favor of something similar to the early start/late end model. §.§.§ Constraints on x_ HI with galaxies at z ≥ 6.5 To conclude our discussion of LAEs, we look at measurements of x_ HI from observations of galaxies at z ≥ 6.5. 
These include constraints from the statistics of LAE detections, and those based on Lyα damping wing absorption in galaxy spectra. Figure <ref> shows a collection of these measurements and limits compared to our late-ending reionization models in the same format as Figure <ref> (with references in the caption). These constraints are all model-dependent to some degree, so showing them on the same plot may not constitute a fair comparison. Our goal here is to illustrate the diversity of constraints obtained across multiple observations and inference techniques. Unlike in Figure <ref>, we see no clear preference for either scenario. Indeed, at z < 8, several constraints prefer each of the models, while some have error bars too large to distinguish them. The only consensus that these constraints give collectively is that reionization is in progress at 7 < z < 8. At z > 10, all the constraints are based on damping wings except that of , which is based on the detection of GN-z11. There is no clear consensus between these constraints either. There is a notable dearth of constraints at 8.5 < z < 10, the redshift range where the two models differ the most. The only exception is the  point at z ∼ 8.7, which falls exactly between our models but has large error bars. It seems clear from this comparison that constraints on x_ HI from high-redshift galaxies do not, at present, display a clear preference for either a late or early start to reionization. §.§ Patchy kSZ from reionization In this section, we will turn again to the CMB to help distinguish our reionization models. We display the patchy kSZ power spectra for all three models (see <ref>) in the left panel of Fig. <ref>, along with the 1σ error bar and 95% confidence upper limits from . We see that the late start/late end model alone lies within the 1σ of the SPT measurement. The other two both fall outside this range but still within the 95% confidence upper limit, with the early start/late end case coming closest to the upper limit[We note that any measurement of the pkSZ involves assumptions about the late time, homogeneous kSZ. The methods in assume an end to reionization at z_ end = 5.5, whereas our simulations (and our pkSZ contributions) continue until z_ end∼ 5. Correcting for this would bring the measurement up slightly (∼ 0.1 μK^2), but not enough to qualitatively affect our conclusions. ]. We also include the revised 2 σ upper limit from , which is somewhat lower than the SPT result and favors the late start/late end model even more. To gain intuition for the origin of the differences in pkSZ power, we plot the differential contribution to D_ℓ=3000 per z in the right panel. Both early-starting models begin contributing power as soon as reionization starts at z = 18, as ionized bubbles form and grow to sufficient scales. The two begin diverging at z ∼ 10, as the early start/early end case finishes reionization, causing the pkSZ power at ℓ=3000 to drop abruptly at z = 8 (see annotation). This fall-off in power corresponds to the disappearance of large-scale ionization fluctuations, at which point the features in the kSZ signal on these scales are set by fluctuations in density and velocity only. In contrast, ionization fluctuations persist longer in the early start/late end model, and so continue to contribute power to the pkSZ signal at ℓ = 3000 at z < 8. In the late start/late end case, reionization begins much later but still ends at z = 5, which makes the peak in dD_3000^ pkSZ/dz narrower. 
The shaded red region shows that nearly half the power in the early start/late end case arises at z > 10 when the ionized fraction is < 20% in that model. In the late start/late end case, reionization is just starting around z = 10. Thus, we see that the pkSZ is highly sensitive to reionization's duration, and particularly its early stages <cit.>. We see also that although the  measurement does not rule out the early start/late end model at 2σ, it clearly prefers the late start/late end case. This finding is consistent with the recent limits on the duration of reionization from  using data from SPT and the Herschel-SPIRE experiment. They found that Δ z_50, the difference between redshifts at 25% and 75% ionized fractions, is < 4.5 at 95% confidence[One caveat is that their constraint assumes that reionization ends by z = 6, whereas in our models it completes at z ≈ 5. Their constraint would loosen by dz ∼ 1 if z = 5 were assumed. ] - our early start/late end model has Δ z_50 = 3.1. § DISCUSSION §.§ “Face-value” interpretations of the data In the previous sections, we studied how the properties of our three models compare to a broad range of observables. These include measurements of the UVLF and ξ_ ion from JWST (<ref>), inferences from the spectra of high-redshift QSOs (<ref>), Lyα transmission from z > 8 galaxies (<ref>), and constraints from the CMB (<ref>). We concluded in <ref> that measurements of the Lyα forest at z ≤ 6 strongly disfavor the early start/early end scenario. However, the evolution of the ⟨ F_ Lyα⟩ alone could not distinguish between an early and late start to reionization. Most of the other observables we studied individually displayed a preference for one or the other, but none could conclusively rule out either[We note that these preferences were determined primarily qualitatively, based on “chi-by-eye” comparisons of models and observations, and as such they represent somewhat qualitative results. Still, they are useful for gauging the direction that each data set is likely to push constraints on reionization if included in a quantitative analysis <cit.>. ]. This motivates the key question in this work: when taken together, what story do these observables tell about reionization's early stages? Indeed, a major goal of the field is to synergize the constraining power of many observations to constrain reionization, and qualitative analyses like the one presented here can help chart the path for more detailed studies. We summarize our findings in Table <ref>. The left-most column lists the “categories” of observables that we studied - the CMB (blue), high-redshift galaxies (red), and z < 6.5 QSOs (magenta). The second column from the left lists each of the observables, and the remaining two columns denote whether each observable prefers a late or early start to reionization. We find that τ_ es, ⟨ F_ Lyα⟩, and measurements of x_ HI at z > 6.5 display no preference for either case. JWST observations of the UVLF/ξ_ ion, the MFP, LAE visibility at z > 8, and the SPT pkSZ measurement prefer the late start/late end case. By contrast, the Lyα forest P(<τ_ eff^50), the IGM thermal history, and x_ HI measurements at z < 6.5 prefer the early start/late end model. A key finding is that not all these observables prefer the same scenario. This suggests a possible lack of concordance between different data sets with respect to reionization's early stages. We note, however, that all three observables that favor the early start/late end model are based on QSO spectra at z ≤ 6.5.
Indeed, nearly all the data associated with these three observables (with the exception of the QSO damping wings) arises, directly or indirectly, from the Lyα forest, and thus cannot be treated as fully independent[Indeed, much of the data comes from the same QSO survey, XQR-30 <cit.>. ]. As we explained in <ref>, the conclusions we drew from these probes are sensitive to our modeling choices, which are necessary to link the late stages of reionization (which the forest probes directly) to its early stages. By contrast, the observables that support a late start are derived from different data sets using vastly different techniques, and at least one probe in every category prefers a late start. As such, we judge that the consensus of these probes from a wide range of data sets indicates that the late start/late end model is mildly preferred (overall) by observations. The lack of consensus between different observables has several possible resolutions. Perhaps the most straightforward is that existing observations lack the accuracy and/or precision to achieve a unanimous consensus about the early stages of reionization. Indeed, nearly all the observables considered here still have large uncertainties. Theoretical modeling uncertainties, required to interpret the data, can similarly affect these conclusions. As mentioned earlier, the QSO-based observables that seem to support an early start probe only reionization's end stages, requiring a model to infer the early history. These issues will continue to improve with time as more (and better) observational data is acquired and reionization models become faster and more accurate. However, a more concerning possibility is that forthcoming observations and rigorous theoretical analysis will reveal a statistically significant tension between different observables with respect to reionization's early stages. In this case, the fault must lie with observational systematics and/or hidden deficiencies in theoretical modeling. Either possibility presents a potential pitfall for efforts to constrain reionization with multiple data sets. Such constraints may be artificially tight if “tensions” between data sets exist. This could lead to the pre-mature conclusion that the reionization history is known to high precision. Forthcoming efforts should be aware of this potential pitfall, and take care to understand the effects of individual data sets on joint constraints. §.§ Forthcoming observational prospects In this section, we will briefly discuss prospects for future observations that would help strengthen the constraining power of some of the probes discussed here. The first is to continue improving constraints on the UVLF and ξ_ ion, particularly for faint galaxies. demonstrated that ξ_ ion could be measured reliably for faint (-17 < M_ UV < -15), lensed galaxies during reionization. Such studies, together with efforts to directly constrain the faint end of the UVLF, will be crucial for determining the redshift evolution of these quantities and whether there is a fall-off in ionizing output for the faintest galaxies (see bottom panel of Figure <ref>). Continued efforts to understand how f_ esc correlates with galaxy properties at low redshift, such as the Low-redshift Lyman Continuum Survey (LzLCS, ) will also be crucial for placing reasonable limits on the evolution of f_ esc (see also e.g. ). There is also further progress to be made with QSO-based observations at z ∼ 6. 
Improved constraints on the mean free path and IGM thermal history may help distinguish an early vs. late start, as Figure <ref> shows. Forthcoming observations with Euclid <cit.> will dramatically increase the number of known quasars at these redshifts, allowing for spectroscopic follow-up that will improve statistical uncertainties on both sets of measurements. Efforts to measure the relationship between Lyα forest opacity and galaxy density <cit.> may also help tighten constraints on the reionization history at z < 6.5 <cit.>. Further observations with JWST will improve the statistics of z > 8 galaxies displaying significant Lyα emission. They will also yield constraints on x_ HI from Lyα damping wing absorption at redshifts where very few bright quasars are available. Forthcoming observations with the Nancy Grace Roman telescope <cit.> will also reveal bright LAEs over a much wider area than JWST, enabling improved constraints on the early reionization history <cit.>. Forthcoming improvements on CMB constraints from multiple experiments, including the Atacama Cosmology Telescope <cit.>, SPT <cit.>, Simons Observatory <cit.>, and CMB-S4 <cit.> will improve constraints on τ_ es and pkSZ. They will may also detect new signals that probe reionization, such as patchy τ <cit.> and higher-order statistics and cross-correlations with other signals <cit.>. These will help constrain the early stages of reionization because of their sensitivity to its duration and morphology <cit.>. § CONCLUSIONS In this work, we have studied the observational properties of three representative reionization histories. In the first, reionization starts early and ends at z ∼ 8, earlier than suggested by the Lyα forest and τ_ es. This scenario is motivated by recent JWST observations of the UVLF and ξ_ ion at z > 6, which, when combined with observationally motivated assumptions about f_ esc, suggest copious ionizing photon output by high-redshift galaxies. We have investigated the observational properties of this model, alongside two others in which reionization ends much later at z ∼ 5, in agreement with the Lyα forest and τ_ es. One model starts reionization relatively late at z ∼ 9, and the other starts early at z ∼ 13. * We find, consistent with previous work, that the early start/early end scenario severely violates high-redshift QSO observations, most notably the Lyα forest. These observations require reionization to end at 5 < z < 6, or at least not much sooner. This is consistent with recent measurements of τ_ es from . Unfortunately, neither the mean transmission of the Lyα forest nor τ_ es display a clear preference for whether reionization started late or early. The former measures only the global average transmission of the IGM at reionization's end, and the latter is only an integrated constraint, and thus does not uniquely constrain reionization's early stages. * Observations of the UVLF and ξ_ ion by JWST, direct measurements of the MFP from QSO spectra, the visibility of z ≥ 8 LAEs, and the recent SPT measurement of the patchy kSZ all prefer a late start to reionization. In light of the latest UVLF measurements at z ≥ 8 from JWST, our early-starting model requires an order of magnitude of evolution in galaxy ionizing properties (quantified by ⟨ f_ escξ_ ion⟩_L_ UV) between z = 6 and 12. This is less compatible with observations than the flat evolution in our late-starting model. 
Direct measurements of the MFP from QSO spectra also prefer a late start, mainly because of the high neutral fraction needed to match direct measurements from QSO spectra at z = 6. The steep drop-off in LAE visibility at z > 8 observed by  is more consistent with a late than an early start. Finally, the low central value of the SPT pkSZ measurement prefers a late start, and disfavors our early-starting model at almost 2 σ. * By contrast, the distribution of Lyα forest opacities, the thermal history of the IGM, and measurements of x_ HI at z < 6.5 prefer an early start. The forest P(< τ_ eff) is too wide for the observations in our late start/late end model, preferring instead the lower neutral fraction in the early start/late end case. Constraints on x_ HI at z ≤ 6.5 from a variety of QSO-based inferences suggest a similar conclusion. The cooler IGM in the early-starting case at z ≤ 5 is also in better agreement with observations. * Our findings suggest that no single probe can conclusively rule out either the late start/late end or early start/late end model in favor of the other. However, we do find that observations across multiple independent datasets - JWST observations of galaxy properties, LAE detections, the CMB, and QSO absorption spectra - prefer a late start to reionization. The observables that prefer an early start are all derived from the same type of observations (QSO spectra) and only probe the tail end of reionization. As such, they are not fully independent probes, and they require a model to derive inferences about reionization's early stages. As such, we conclude that overall, existing observational data displays a mild preference for the late start/late end scenario. * The face-value disagreement we find between different probes suggests that (1) present observations, and the models used to interpret them, are insufficiently precise/accurate to paint a consensus picture of reionization's early stages, and/or (2) there are systematic effects (in observations and/or theoretical modeling) leading to the appearance of tension. The second possibility motivates care in interpreting the results of analyses using multiple data sets. Joint analyses using many observables could lead to artificially tight constraints on the reionization history and other quantities if tensions arising from systematics are not carefully understood. Forthcoming work, on the observational and theoretical side, should continue working to synergize the information available from many observables. Our work motivates further efforts targeting the early stages of reionization, which will yield key insights into the evolution of galaxy properties across the reionization epoch and into the cosmic dawn era. This work is also a cautionary tale that motivates careful understanding of potential systematics in both observations and theoretical modeling. Such systematics, if not studied carefully, could lead to premature conclusions about reionization. CC acknowledges helpful conversations with Seth Cohen, Timothy Carleton, Evan Scannapieco, Frederick Davies, and Kevin Croker, and support from the Beus Center for Cosmic Foundations. A.D.'s group was supported by grants NSF AST-2045600 and JWSTAR-02608.001-A. RAW acknowledges support from NASA JWST Interdisciplinary Scientist grants NAG5-12460, NNX14AN10G and 80NSSC18K0200 from GSFC. JBM acknowledges support from NSF Grants AST-2307354 and AST-2408637, and thanks the Kavli Institute for Theoretical Physics for their hospitality. 
Computations were made possible by NSF ACCESS allocation TG-PHY230158. aasjournal § P(< Τ_ EFF^50) FOR SHIFTED REIONIZATION HISTORIES In Figure <ref>, we show P(< τ_ eff^50) for the late start/late end and early start/late end models with their reionization histories shifted earlier by Δ z = 0.3, as described in <ref>. We show 6 redshifts between z = 5 and 6 in intervals of 0.2. This exercise shows roughly what P(< τ_ eff^50) would look like if these models ended reionization at z = 5.3 instead of 5. We see that at z = 5, 5.2, and 5.4, there is little difference between the models. At z = 5.6 and 5.8, P(< τ_ eff^50) is slightly narrower in the early start/late end case, as it is in Figure <ref>. The difference is that the early start/late end model now seems to agree well with the observations, whereas in the late start/late end case, P(< τ_ eff^50) is still too wide. At z = 6, it is difficult to tell which model is a better fit to the data due to the large number of non-detections in the data, which set the width of the green shaded region. This test demonstrates that even if reionization ends significantly earlier than it does in our models, the early start/late end scenario remains preferred by P(<τ_ eff^50). § LYΑ VISIBILITY IN THE FULL (T_ THRESH, V_ OFF) PARAMETER SPACE In this appendix, we show complete results for our LAE visibility analysis in terms of v_ off and T_ thresh presented in <ref>. Figure <ref> shows the fraction of visible LAEs at T_ thresh < 0.5 and 0 < v_ off < 800 km/s, at the four redshifts (columns) and three magnitude ranges (rows) we considered in the late start/late end model. Red (blue) regions denote high (low) LAE visibility fractions. The white contour lines denote visibility fractions of 50%, 25%, and 10% (see annotation in the left-most panel of the middle row). The hatched white box in the upper left corner of each panel denotes the region where T_ thresh > 0.2 and v_ off < 500 km/s. We expect a majority of z ≥ 8 LAEs to inhabit this region. Most LAEs in M_ UV < -17 galaxies observed at slightly lower redshifts (z ∼ 5-6, when T_ IGM is close to unity) have v_ off < 500 km/s and EW_ int < 500 Å <cit.>. For T_ thresh = 0.2, the latter would correspond to EW_ obs^min < 100 Å. At z = 8, a significant portion of the hatched region displays a high visibility fraction, especially for the brightest galaxies. For M_ UV < -21 galaxies, LAEs with v_ off > 200 km/s have a significant chance of being visible, provided they are bright enough to be observed with a factor of 2 IGM attenuation. Fainter galaxies require somewhat fainter detection thresholds, since on average they inhabit smaller ionized bubbles and are more sensitive to IGM attenuation. In the faintest M_ UV bin, v_ off > 400 kms/ and T_ thresh < 0.25 is required for half of LAEs to be visible. Still, we should expect to observe some LAEs at z = 8 in the late start/late end model, especially the brightest ones. At z > 8, the transmission of Lyα declines rapidly, especially for fainter galaxies. At these redshifts, in all but the brightest M_ UV bin, the 50% contour line does not intersect the hatched region. As we showed in Figure <ref>, this drop-off in visibility is consistent with the observed decline in X_ Lyα observed by  at z > 8. The brightest objects remain likely to be observed if v_ off > 400 km/s and T_ thresh < 0.25 all the way to z = 11. Notably, GN-z11 (M_ UV = -21.5, z = 10.6, ) has v_ off = 550 km/s, meeting this condition. 
The recently observed JADES-GS-z9-0 (M_ UV = -20.43, z = 9.4, ) has v_ off = 450 km/s. These objects would be likely visible in the late start/late end model if their intrinsic EWs were in the neighborhood of 100 Å, ≈ 4× larger than their observed EWs (see <ref>). These results are consistent with a universe in which most LAEs at z > 8 are obscured by the IGM, but a small number of objects that are relatively bright, have high v_ off, and/or high intrinsic EWs remain visible. Figure <ref> is the same as Figure <ref>, but for the early start/late end model. In contrast to the late start/late end case, a significant fraction of the parameter space displays high visibility, even at z = 11. Objects with M_ UV < -21 are likely to be visible up to z = 11 as long as T_ thresh≤ 0.5 and v_ off > 200 km/s. Even the faintest galaxies have a significant chance of being observed at z = 11. It should then be expected that in such a scenario, a significant fraction of LAEs - even faint ones - with typical physical properties should remain visible up to z = 11. At face value, the observed sharp decline in LAE visibility across the population of LAEs up to this redshift does not prefer this scenario.
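As a rough guide to how the visibility-fraction maps shown in these figures can be assembled, the sketch below assumes a set of per-sightline IGM Lyα transmission curves T_IGM(v_off) for one magnitude bin and redshift (here replaced by a toy parameterization) and counts an LAE as visible wherever its transmission exceeds T_thresh. It is a schematic illustration of the procedure, not the pipeline used to produce the figures.

```python
import numpy as np

# Hypothetical per-sightline IGM Lya transmission curves T_IGM(v_off); in
# practice these would come from radiative-transfer sightlines to halos in
# one M_UV bin at one redshift.
rng = np.random.default_rng(0)
v_off_grid = np.linspace(0.0, 800.0, 81)                      # km/s
n_sightlines = 500
scale = rng.uniform(200.0, 800.0, size=(n_sightlines, 1))     # toy damping scale
T_igm = 1.0 - np.exp(-v_off_grid[None, :] / scale)            # toy transmission

# An LAE is counted as visible at (T_thresh, v_off) if its IGM transmission
# at that velocity offset exceeds the threshold implied by the survey depth
# and its intrinsic equivalent width, i.e. T_IGM(v_off) >= T_thresh.
T_thresh_grid = np.linspace(0.0, 0.5, 51)
visible_fraction = np.array([(T_igm >= t).mean(axis=0) for t in T_thresh_grid])
# visible_fraction has shape (len(T_thresh_grid), len(v_off_grid)); the 50%,
# 25%, and 10% contours of this array correspond to the white contour lines.
```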
Knowledge Transfer for Collaborative Misbehavior Detection in Untrusted Vehicular Environments
Roshan Sedar, Charalampos Kalalas, Member, IEEE, Paolo Dini, Francisco Vázquez-Gallego, Jesus Alonso-Zarate, Senior Member, IEEE, and Luis Alonso, Senior Member, IEEE
September 9, 2024
§ ABSTRACT Vehicular mobility underscores the need for collaborative misbehavior detection at the vehicular edge. However, locally trained misbehavior detection models are susceptible to adversarial attacks that aim to deliberately influence learning outcomes. In this paper, we introduce a deep reinforcement learning-based approach that employs transfer learning for collaborative misbehavior detection among roadside units (RSUs). In the presence of label-flipping and policy induction attacks, we perform selective knowledge transfer from trustworthy source RSUs to foster relevant expertise in misbehavior detection and avoid negative knowledge sharing from adversary-influenced RSUs. The performance of our proposed scheme is demonstrated with evaluations over a diverse set of misbehavior detection scenarios using an open-source dataset. Experimental results show that our approach significantly reduces the training time at the target RSU and achieves superior detection performance compared to the baseline scheme with tabula rasa learning. Enhanced robustness and generalizability can also be attained by effectively detecting previously unseen and partially observable misbehavior attacks. Keywords: Transfer Learning, Deep Reinforcement Learning, V2X, Trust, Misbehavior Detection § INTRODUCTION The proliferation of connected and automated mobility services, driven by the recent advances in vehicle-to-everything (V2X) technology, leads to a paradigm shift in transport systems. This transformation holds the promise of increased road safety, driving autonomy, and inclusive mobility options. Inevitably, this evolution has given rise to new threat vectors associated with the inherent V2X security vulnerabilities, which adversaries may maliciously exploit to disrupt system operation <cit.>. Among them, misbehavior attacks launched by rogue insiders often become difficult to detect and contain, since malicious nodes may alter their activity intelligently over time <cit.>. Although cryptographic techniques are capable of limiting outsiders by offering authentication, integrity, and non-repudiation as a first layer of defense, they may fall short in detecting rogue/dishonest behavior and identifying V2X insiders with malicious intent. As such, the trustworthiness of exchanged information cannot be guaranteed. Emerging data-driven approaches fueled by artificial intelligence and machine learning (AI/ML) tools provide a fertile ground for addressing misbehavior attacks <cit.>. With the availability of vehicular data streams, AI/ML-based schemes can facilitate the analysis of behavioral patterns for V2X entities and determine trustworthiness levels. Such capabilities have the potential to overcome the shortcomings of traditional misbehavior countermeasures, offering effective solutions to achieve demanding security requirements.
Centralized detection approaches, albeit leveraging the entire set of available information fused from multiple geographical locations to reduce the risk of false positives, unintentionally introduce higher computational cost and latency <cit.>. This cogently justifies the use of distributed and decentralized AI/ML-based schemes for collaborative misbehavior detection. By diffusing distributed intelligence across vehicular edge components, such as roadside units (RSUs), resource efficiency gains are sought while detection time can be dramatically reduced. Despite the envisioned benefits of data-driven misbehavior detection, the vulnerabilities of AI/ML models introduce additional threat vectors, giving rise to finely targeted, stealthy, and scalable adversarial attacks. Such sophisticated attack types may target both model training (i.e., poisoning attacks) and test (i.e., evasion attacks) phases <cit.>, and undermine the efficacy of misbehavior detection. Collaborative misbehavior detection, relying on decentralized learning models at the vehicular edge, may further extend the attack surface and, thus, exacerbate the impact of adversarial attacks. Consequently, the pervasive adoption of AI/ML models for misbehavior detection could be hindered if security concerns related to the model vulnerabilities are not addressed. §.§ Motivation Aiming to address the complex V2X security landscape, several recent works leverage AI/ML-driven techniques for misbehavior detection <cit.>. Supervised learning schemes may be impractical in V2X scenarios with an expanded attack surface, due to limited access to labeled training examples and/or dependence on security threshold values. On the other hand, unforeseen alterations in vehicular mobility, due to either naturally drifting traffic patterns or unprecedented malicious activity, introduce challenges (e.g., model overfitting) to misbehavior detection schemes relying on conventional deep learning (DL). In this context, deep reinforcement learning (DRL) methods emerge as a compelling option for context-aware misbehavior detection <cit.>. By allowing the learner to infer optimal sequential decisions based on rewards/penalties received as a result of previous actions and accumulated experience, the performance of DRL schemes can be dynamically improved in rapidly changing environments. A number of reputation or trust-based approaches have been proposed in relevant literature to elevate trustworthiness levels in untrusted vehicular environments. Yet, their applicability in collaborative misbehavior detection may be limited, since such methods often assume infrastructure nodes (e.g., RSUs) to be legitimate and non-susceptible to adversarial attacks. Accumulating behavioral patterns of specific nodes over a time period may also be challenging due to ephemeral V2X connectivity links <cit.>. The dependence on predefined trust thresholds may further restrict the suitability of such approaches. It is plausible that decentralized learning architectures for collaborative misbehavior detection inadvertently make the underlying models attractive targets for adversarial attacks. In turn, locally trained misbehavior detection models can be influenced to reach incorrect decisions/predictions or leak confidential information. For instance, in data poisoning attacks <cit.>, an adversary aims at deliberately modifying the learning algorithm during training via false data injection or manipulation. 
Surprisingly, the detrimental impact of such adversarial manipulations on misbehavior detection performance remains rather unexplored. Dealing with adversarial attacks is often non-trivial and requires enhanced solutions to foster trust and stimulate confidence in data-driven misbehavior detection. §.§ Contributions Motivated by the research challenges mentioned earlier, our contribution can be summarized in three main aspects: * We introduce a novel scheme for collaborative misbehavior detection that builds on geographically distributed RSUs. Each RSU employs a DRL-based model for the detection of malicious traffic stemming from misbehaving vehicles. Leveraging transfer learning principles, the knowledge learned at source RSUs is shared with the target RSU to reuse relevant expertise for misbehavior detection. * In the presence of two data poisoning attacks with the aim of influencing misbehavior detection outcomes incorrectly, we perform selective knowledge transfer from trustworthy source RSUs to avoid negative knowledge sharing from adversary-influenced RSUs. For this purpose, a novel trust evaluation metric, referred to as semantic relatedness, is used by the target RSU to quantify the trust level of each source RSU for collaborative misbehavior detection. * Considering diverse scenarios of collaborative misbehavior detection and an open-source dataset, we evaluate the learning performance of involved RSUs in the presence of adversaries. The detection performance achieved via selective knowledge transfer is also assessed for different misbehavior types. Besides reducing the training time at the target RSU, our scheme is shown to significantly outperform the baseline scheme with tabula rasa learning, demonstrating its high effectiveness. Interestingly, our approach enhances robustness and generalizability by effectively detecting previously unseen and partially observable misbehaviors. The remainder of this manuscript is organized as follows. Section <ref> provides an overview of the existing work in the related literature. Section <ref> outlines the formulation of our proposed misbehavior detection approach, including details about the network and adversarial models considered in the study. Section <ref> presents the key components of our collaborative misbehavior detection scheme which incorporates transfer learning. Section <ref> details the experimental setup and the examined scenarios to validate our approach. Section <ref> evaluates the proposed solution and provides a detailed discussion of the obtained results. Section <ref> summarizes our final remarks and outlines potential directions for future work. § RELATED WORK This section summarizes pertinent studies in the literature, with a focus on highlighting open issues and key challenges. §.§ Misbehavior Detection Although a multitude of security mechanisms for misbehavior detection are available, existing solutions are rather limited in effectiveness due to a high number of false alarm rates (i.e., false positives and negatives) and lack of robustness against sophisticated adversarial attacks <cit.>. Interestingly, such key open issues underscore the need for advanced data-driven techniques to achieve effective misbehavior detection. Data-driven AI/ML approaches are a proving ground for misbehavior detection in vehicular networks. Recent works in this area, supervised <cit.>, unsupervised <cit.>, and DRL-based <cit.> methods stand out among others and are positioned at the forefront of the research interest in V2X security. 
Next, we summarize existing works on centralized and distributed data-driven AI/ML-based misbehavior detection. §.§.§ Centralized Approaches Several ML-based misbehavior detection schemes fall into the category of centralized, where models train offline centrally with aggregated training samples collected from multiple vehicles and detect misbehaviors (i.e., downstream task) locally in a decentralized way. Centralized ML model training is performed in <cit.> using feature vector inputs that originate from the position and movement-based plausibility and consistency checks performed at vehicles. Utilizing trained ML models, all participating nodes perform local detection and share the detected events with a central misbehavior authority to make the global decision on misbehaviors. Results obtained using the open-source VeReMi dataset <cit.> show that a long short-term memory (LSTM) deep classifier outperforms support vector machines (SVM) and multi-layer perceptron models. Another work in <cit.> utilizes position-related features in VeReMi to train an ensemble supervised learning scheme centrally and detect locally at vehicles or RSUs. Numerical results reveal improved detection performance compared to benchmark ML-based approaches. DL techniques have recently gained widespread attention in cyber-threat detection due to their feasibility and superior performance over traditional ML <cit.>. The work in <cit.> presents a DL-based unsupervised method for identifying misbehaving vehicles. The authors implement various DL architectures by combining different neural network models such as convolutional (CNN), LSTM, and gated recurrent units. In the proposed setup, vehicular messages are aggregated and converted into sequences at the RSU and fed to centrally trained DL models on the cloud to detect misbehaving vehicles. Numerical results based on VeReMi demonstrate that the proposed DL-based scheme outperforms existing ML approaches. In a similar context, an edge-cloud approach is proposed in <cit.>, which comprises DL engines of CNN and LSTM neural networks for handling misbehaving vehicular traffic. Time-sequence-based and sequence-image-based classification methods are implemented at the edge-cloud to train and detect. Results show that sequence-image-based classification using CNN performs better compared to traditional ML-based approaches. Recent works in DRL-based misbehavior detection  <cit.> stand out from their conventional ML/DL counterparts due to their superior detection performance, robustness to noisy training data, and ability to dynamically improve detection accuracy in highly volatile V2X environments. DRL utilizes deep neural networks (DNNs) to approximate the state-action value function, which, in turn, the DRL agent effectively learns to map input vehicular traffic to state-action values (Q-values). In <cit.>, messages transmitted by vehicles are aggregated at RSUs and offload the training of the DRL model onto the cloud server. Misbehavior detection is then performed at RSUs using the trained model. Performance assessment of the proposed DRL-based scheme using VeReMi reveals enhanced detection outcomes compared to benchmark ML- and DL-based approaches and robustness to noisy training samples. §.§.§ Distributed Approaches The studies that fall into this category execute the offline training of ML models in a distributed or decentralized fashion, either locally in vehicles' on-board units (OBUs) or in RSUs. 
The authors of <cit.> exploit the position and movement-based plausibility checks at the vehicle to extract feature vectors and feed them into locally trained supervised ML models such as k-nearest neighbor and SVM. Results reveal that ML models yield higher accuracy with multiple plausibility checks. The work in <cit.> uses an identical set of plausibility checks used in <cit.> to extract relevant features and feed them into an extended set of supervised ML models. The authors suggest implementing their scheme in OBUs to facilitate local detection in real-time. Numerical results show that adding plausibility checks improves recall and precision. In <cit.>, the authors train supervised ML models at the RSU by aggregating basic safety messages (BSMs) from vehicles in their communication range. Using an augmented feature set that combines information from successive BSMs in VeReMi, they achieve improved detection performance for position falsification attacks compared to existing ML-based approaches. Another work in <cit.> trains a local misbehavior detection system by using supervised ML models to identify false alert transmission and position falsification. By utilizing trained ML models, each vehicle performs local detection and evicts misbehaving vehicles based on aggregated information from all neighboring vehicles. Results show improved detection performance compared to rule- and threshold-based detectors. In summary, existing AI/ML-based misbehavior detection models can be either centrally trained in the cloud and tested locally, or both training and detection are realized in vehicles/RSUs. Yet, these models are susceptible to adversarial attacks, e.g., data poisoning <cit.>. As such, an attacker may poison the centralized model training pipeline and influence downstream tasks at vehicles or RSUs to misclassify, which could inflict a single point of failure. Similarly, locally trained misbehavior detection models may suffer from data poisoning and adversarial manipulations owing to the extended attack surface. Hence, albeit offering enhanced detection performance, such models may not be robust enough against adversarial attacks. Moreover, they may have limited capability in detecting unseen (i.e., non-anticipated) misbehavior attacks. On the contrary, we propose distributed collaborative learning to enhance misbehavior detection performance, making it robust against adversarial attacks and capable of generalizing to detect unseen attacks. §.§ Trust Evaluation Data-driven misbehavior detection models analyze the transmitted messages either individually at a local level or collaboratively at a global level. They focus on verifying the semantic correctness of the received information regardless of the sender and do not rely on the honest majority of senders. Entity-centric models, on the other hand, focus on monitoring the past behavior of nodes over time to assess their trustworthiness <cit.>. Such models primarily constitute trust-based detection techniques where each entity maintains a local table to record the reputation or trust indicators of its neighboring vehicles, and they often assume the majority of senders to be honest. In <cit.>, the authors evaluate the trustworthiness of vehicular data and nodes using the Dempster-Shafer belief framework. A vehicle's trust is evaluated on both functional and recommendation trust. Functional trust directly evaluates the trustworthiness of a vehicle, and recommendation trust evaluates the trustworthiness of neighboring vehicles. 
Results reveal superior performance compared to a conventional weighted-voting method for trust management. Another similar work in <cit.> presents a trust-based mechanism against DoS/DDoS attacks, with vehicles assessing neighboring senders via direct and indirect trust values. In general, these mechanisms can be integrated with data-driven AI/ML-based methods to enhance trustworthiness in misbehavior detection. The work in <cit.> proposes a context-aware data-driven trust management scheme to ascertain the integrity of information received by vehicles. The authors employ information entropy theory to calculate the trustworthiness of the content of messages and incorporate an RL model based on Q-learning to continuously adapt the trust evaluation strategy to accommodate various driving scenarios. Numerical results indicate the ability to withstand bogus information, outperforming entity-centric trust schemes. Another work in <cit.> presents a DRL-based dynamic reputation update mechanism for vehicular networks. The authors apply the Dempster-Shafer theory to combine misbehavior detection reports sent by vehicles centrally at a global level and train a DRL model to update the reputation policy. The DRL model dynamically determines the optimal reputation update value to stimulate vehicles to send true feedback. Simulation results exhibit improved performance compared to entity-centric trust schemes. In summary, most reputation or trust-based methods in related literature are entity-centric and often rely on infrastructure assistance (e.g., RSUs), assuming the presence of an honest majority of participants. Furthermore, these methods frequently resort to predefined conditions, ranging from task-specific rule sets to statically configured threshold values, and assume infrastructure nodes to be benign and immune to adversarial attacks. In addition, monitoring nodes over an extended period to evaluate trustworthiness often becomes challenging due to the transient connections and mobility patterns of vehicles. Overall, trust management models are not sufficiently robust against adversarial attacks. However, our proposed approach in this work does not rely on predefined conditions, such as an honest majority of participants or statically configured threshold values, to determine trustworthiness. §.§ Adversarial Attacks With the increasing penetration of data-driven methods into V2X systems, AI/ML models inevitably become prone to adversarial manipulations  <cit.>. In an adversarial ML setting, attackers inject adversary data intending to misguide AI/ML models to incorrect decisions. In this context, the prevalence of adversarial attacks hinders the robustness of AI/ML models, while dealing with such attacks is often non-trivial. Inherently, previously described misbehavior detection (Sec. <ref>) and trust (Sec. <ref>) models are vulnerable to adversarial ML. Poisoning and evasion attacks can be identified as the main adversarial attack scenarios <cit.>. The contamination of training data, known as poisoning, occurs during the training phase of the AI/ML model. In this type of attack, the attacker attempts to inject an imperceptible perturbation into the input data, which corrupts training and misguides the AI/ML model <cit.>. Similarly, label-flipping can also be used as a poisoning attack, yielding a simple and effective way to contaminate the training data <cit.>. In this attack, the adversary selects a subset of training samples and flips their labels. 
In contrast, evasion attacks take place at the testing phase, where the adversary has no influence on the target model during training, but rather tricks the model into incorrect outputs. The work in <cit.> demonstrates the vulnerability of ML-based misbehavior detection models against adversarial attacks. The authors inject carefully crafted adversarial examples using ML and DL techniques and compromise the detection model to misclassify attack samples. In this approach, ML and DL models are first trained on the genuine behavior dataset, followed by generating adversarial examples. The evaluation using VeReMi dataset reveals high false positive and false negative values. Another work in <cit.> introduces a data poisoning attack in DL models, which is equipped with attacker-chosen patterns to force misclassification. Such attacks can take place when the training is outsourced onto the cloud or when using pre-trained models in transfer learning. Similarly, DRL-based models are also susceptible to adversarial attacks. In <cit.>, the authors present a policy induction attack by injecting crafted adversarial examples into the agent's state space. Results show that the agent's average reward per epoch remains at low levels in the presence of adversarial examples. Summary: Considering the open research gaps highlighted throughout the previous state-of-the-art overview, we hereby focus on collaborative misbehavior detection by transferring the knowledge learned between distributed entities in untrusted vehicular environments. Our proposed DRL-based scheme can select trustworthy RSUs to collaborate and transfer knowledge withstanding adversarial attacks. Consequently, target or receiving entities can learn efficiently from transferred knowledge to discover unseen and partially observable misbehaviors. § SYSTEM MODEL This section introduces the vehicular network model, misbehavior detection model, and adversarial model considered in this work. We provide the details in the following. §.§ Network Model Vehicular networks typically comprise a large number of geographically distributed RSUs. RSUs are stationary entities interconnected with each other and the Internet. The usability of RSUs is multifaceted as they offer various services such as Internet access, security solutions, and real-time traffic data distribution <cit.>. Normally, RSUs have superior computational capabilities compared to in-vehicle resources, which allows vehicles to offload computation-intensive tasks to RSUs. Fig. <ref> illustrates the distributed vehicular network model considered in this work. Each RSU experiences its own vehicular environment and receives traffic within its coverage. The RSUs are interconnected via wired connections to provide reliable RSU-to-RSU communication. We assume authentication and authorization procedures have already been performed before V2X communication takes place among entities <cit.>. The involved vehicles (i.e., authenticated and authorized) periodically broadcast BSMs, which are received by RSUs in their connectivity range. BSMs include standard-related parameters such as position, speed, acceleration, heading angle, and other relevant vehicular information <cit.>. In our proposed setup, a distributed collaborative misbehavior detection system is considered where we leverage knowledge transfer in the context of transfer learning. As shown in Fig. <ref>, each RSU is equipped with a DRL-based misbehavior detection system (DRL-MDS) and detects misbehaving vehicles. 
Misbehavior detection is performed at the RSU level, since the vehicle may not have complete information in its communication range due to ephemeral connectivity and/or limited computational resources. A high-level illustration of the DRL-MDS workflow is provided in Fig. <ref>, and it is further elaborated in the following section (Sec. <ref>). The knowledge learned at source RSUs is transferred to the target RSU to reuse existing knowledge during its learning process. The target RSU may not necessarily need to be a neighbor, but it could also be a distant RSU, which shall be reusing the available knowledge of sources to detect future misbehaviors. It is noteworthy that RSU-based security solutions in related literature tend to assume that RSUs are trusted entities and non-vulnerable to attacks <cit.>. However, as pointed out in Sec. <ref>, stationary entities such as RSUs can easily be targeted by adversarial attackers. In this work, we assume that there may exist RSUs that are subject to adversarial attacks. §.§ Misbehavior Detection Model In our approach, each RSU is equipped with an MDS as shown in Fig. <ref>. The DRL-based model, introduced in our previous works <cit.>, is used for detecting misbehaving vehicles. Here, we present the formulation of the tabula rasa DRL model used for misbehavior detection. A tabula rasa model aims to learn efficiently from scratch without any external/previous knowledge. The details of transfer learning incorporation into the DRL model are provided in Sec. <ref>. §.§.§ DRL Model The vehicular environment considered in this study follows a Markov decision process (MDP) framework to facilitate the detection of misbehaviors through sequential decision-making. Consistent with the MDP formulation, the action of misbehavior detection changes the environment based on the decision of either genuine or malicious behavior at time-step t. Subsequently, the decision at time-step t+1 is influenced by the altered environment from the previous time-step t. The aggregated vehicular traffic at each RSU consists of a time-series repository of received BSMs with intrinsic spatiotemporal interdependencies. In this work, we consider a DRL-based misbehavior detector deployed at the edge RSU. As depicted in Fig. <ref>, the detector (agent) interacts with the vehicular environment to learn the optimal detection policy π^*. During training, the ϵ-greedy method is leveraged to strike a balance between exploration and exploitation in the agent's strategy. Next, we describe the components relevant to the DRL model. i) Agent: The agent receives the vehicular traffic data as a time series and prior related decisions as inputs (i.e., state 𝐬_𝐭), and generates the new decision made (i.e., action a_t) as output, as shown in Fig. <ref>. The agent's DQN <cit.> comprises an LSTM layer and a fully connected neural network with linear activation to generate Q-values as choices for the action a_t. At each time-step t, the agent's actions are selected by the policy π. The agent's experience, i.e., e_t = <𝐬_𝐭,a_t,r_t,𝐬_𝐭+1>, stores all the behaviors of the misbehavior detector. By exploiting experience, the detector is progressively improved to obtain a better estimation of the Q(𝐬,a) function. The objective is to maximize the expected sum of future discounted rewards by learning the π^*. The discounted reward return is expressed as R_t = ∑_k=t^Tγ^k-tr_k. 
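As a concrete illustration of the agent just described, the sketch below assembles a Q-network with an LSTM layer followed by a fully connected output with linear activation, together with ϵ-greedy action selection. It assumes PyTorch, and the feature dimension, hidden width, and sequence handling are illustrative choices rather than values prescribed by our design.

```python
import torch
import torch.nn as nn

class MDSQNetwork(nn.Module):
    """Sketch of the DRL-MDS agent's DQN: an LSTM over the state sequence
    (BSM features plus an encoding of previous actions) followed by a linear
    layer producing Q-values for the two actions (0: genuine, 1: misbehavior)."""
    def __init__(self, d_features: int = 5, hidden: int = 64, n_actions: int = 2):
        super().__init__()
        self.lstm = nn.LSTM(input_size=d_features, hidden_size=hidden, batch_first=True)
        self.q_head = nn.Linear(hidden, n_actions)   # linear activation -> Q-values

    def forward(self, state_seq: torch.Tensor) -> torch.Tensor:
        # state_seq: (batch, n, d_features)
        out, _ = self.lstm(state_seq)
        return self.q_head(out[:, -1, :])            # Q(s, a) for each action a

def select_action(q_net: MDSQNetwork, state_seq: torch.Tensor, epsilon: float = 0.1) -> int:
    """Epsilon-greedy policy used while training the detector."""
    if torch.rand(1).item() < epsilon:
        return int(torch.randint(0, 2, (1,)).item())          # explore
    with torch.no_grad():
        return int(q_net(state_seq).argmax(dim=-1).item())    # exploit
```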
Q-learning model updates are performed with learning rate α and discount factor γ as Q(𝐬_𝐭, a_t) ← Q(𝐬_𝐭, a_t) +α(r_t + γmax_a_t+1 Q(𝐬_𝐭+1, a_t+1) - Q(𝐬_𝐭, a_t)). ii) States: The state space comprises the sequence of previous actions denoted by s_action = <a_t-1,a_t,...,a_t+n-1>, and the current BSM information denoted by s_time = <X_t,X_t+1,...,X_t+n>. 𝐗_𝐭∈ℝ^d is a d-dimensional feature vector at time-step t, including information on d different features. According to the state design, the next action taken by the agent depends on the previous actions and the current BSM information. This design enables the agent to capture temporal dependencies and make more informed decisions. iii) Actions: The action space is defined as 𝒜 = {0,1}, where 1 indicates the detection of a misbehavior and 0 represents the genuine behavior. The deterministic detection policy π can be expressed as a mapping, i.e., π:𝐬_𝐭∈𝒮⟼ a_t∈𝒜, from states to actions, where π(s) prescribes the action that the agent takes at state 𝐬. In a given state 𝐬_𝐭, the agent selects the action based on the optimal detection policy given by π^* = _a ∈𝒜 Q^*(𝐬,a). iv) Rewards: The reward function R(𝐬,a) is defined based on the confusion matrix typically used in ML classification problems. A numerical value for r_t is assigned based on the ground truth information of BSMs. A positive reward is given upon correct detection of a misbehavior, i.e., true positive (TP), or a normal state, i.e., true negative (TN). A negative reward is given when a normal state is incorrectly identified as a misbehavior, i.e., false positive (FP), or a misbehavior as a normal state, i.e., false negative (FN). The agent is penalized more for FN actions than for FPs, as the correct identification of misbehavior is necessary to avoid hazardous situations. Accordingly, we define the immediate reward r_t∈ R of the agent as r(𝐬_𝐭,a_t) = a, if a_t is a TP, b, if a_t is a TN, -c, if a_t is an FP, -d, if a_t is an FN, where a, b, c, d > 0, with a > b and d > c. §.§ Adversarial Model In this work, we assume the presence of adversaries attempting to contaminate the training data and realize poisoning attacks on the distributed and collaborative DRL-MDS models. In particular, we consider two data poisoning attacks pertinent to adversarial ML: i) label-flipping and ii) policy induction attacks. In the case of label-flipping, it is assumed that the adversaries are rogue insiders who contribute to the training data or have access to the training data itself. In policy induction, we consider an exogenous attacker who can modify the state space before it is observed by the DRL agent. In both cases, attackers aim to maliciously influence the learning model and, subsequently, force incorrect outcomes in downstream misbehavior detection tasks. In a label-flipping attack, we consider an attacker who targets the malicious class to flip the labels of certain source training data instances at an RSU. In this scenario, the adversarial attacker randomly selects a set of misbehaving vehicles and flips the labels of their data instances into genuine ones, resulting in a targeted random label-flipping attack (Definition <ref>). The attacker aims to misclassify selected training samples from malicious class 1 to genuine class 0. 
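Before formalizing this attack, the sketch below illustrates the targeted random label-flipping step, in which a fraction ζ of misbehaving vehicles is drawn at random and the labels of their training samples are flipped from 1 to 0. The array layout and helper signature are assumptions made for illustration, not the attacker implementation evaluated later.

```python
import numpy as np

def flip_labels_targeted(vehicle_ids, labels, zeta, rng=None):
    """Targeted random label-flipping on one source RSU's training data.

    vehicle_ids : pseudo-identity per training sample
    labels      : per-sample label (1 = misbehaving, 0 = genuine)
    zeta        : fraction of misbehaving vehicles whose samples are flipped
    Returns a poisoned copy of `labels`; the original array is left untouched.
    """
    rng = np.random.default_rng() if rng is None else rng
    poisoned = labels.copy()
    misbehaving = np.unique(vehicle_ids[labels == 1])        # vehicles of class 1
    n_flip = int(np.ceil(zeta * len(misbehaving)))
    targets = rng.choice(misbehaving, size=n_flip, replace=False)
    poisoned[np.isin(vehicle_ids, targets) & (labels == 1)] = 0   # flip 1 -> 0
    return poisoned
```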
Following Definition <ref>, we denote D_S_i as the training data of source RSU S_i, i.e., D_S_i = {(x_S_i1, y_S_i1),..., (x_S_ik, y_S_ik),..., (x_S_in, y_S_in)}, with x_S_ik∈ X_S_i representing the k-th data instance of D_S_i while y_S_ik∈ Y_S_i represents the corresponding label of x_S_ik. Given training samples {x_S_ik, y_S_ik}^n_k=1 of D_S_i, belonging to source RSU S_i with x_S_ik∈ X_S_i and y_S_ik∈ Y_S_i= {0, 1}, a poisoning attack is defined as a targeted random label-flipping attack when the attacker randomly selects a fraction ζ∈ (0, 1] of misbehaving vehicles and flips the label of the corresponding training samples from 1 to 0. In a policy induction attack, we properly adapt the adversarial attack originally presented in <cit.> in a DRL context. The malicious intent here is to force the target DRL agent to learn a policy selected by the adversary. We assume an exogenous attacker with minimal a priori information (e.g., input type and format) of the target DQN, and with knowledge of its reward function and the update frequency of the target network. The attacker can directly manipulate the target DQN's environment configuration, albeit with no control over the target network parameters, reward function, or optimization mechanism. According to Definition <ref>, the attacker creates replica DQNs (i.e., Q', Q̂') of the target's DQN (i.e., Q, Q̂) and initializes them with random parameters. Since the attacker has no knowledge of the target's DQN architecture and its parameters at every time step, a black-box technique is followed to exploit the transferability of adversarial examples. This is achieved by crafting state perturbations using replicas of the target's DQN such as those introduced in <cit.>. Moreover, the Fast Gradient Sign Method (FGSM)[ The FGSM method utilizes the neural network gradients' sign to create a perturbed adversarial input that maximizes the loss <cit.>. For a given input x, the adversarial input x^' generated by the FGSM can be summarized as x^' = x + ϵ * sign(∇_xJ(θ,x,y)), where y is the label of input x, ϵ denotes a small multiplier, θ are the model parameters, and J represents the loss.] algorithm is used to craft adversarial examples at every training time step. The attacker induces an arbitrary adversarial policy π_adv on the target DQN by injecting adversarial examples into training data. Specifically, given state observation 𝐬_𝐭+1, the attacker crafts a perturbation vector (δ̂_𝐭+1) using FGSM at every training time-step and injects it into the target DQN's state space, as a'_adv ←π_adv(𝐬_𝐭+1), δ̂_𝐭+1 ← Craft(Q̂',a'_adv,𝐬_𝐭+1), 𝐬'_𝐭+1 ←𝐬_𝐭+1 + δ̂_𝐭+1. Crafting adversarial inputs 𝐬'_𝐭 and 𝐬'_𝐭+1 requires minimizing the loss function in (<ref>) to force the target DQN to optimize towards action a', given state s_t, as min_θ' (y_t - Q'(𝐬_𝐭,a_t;θ'))^2, where y_t = r_t + γ max_a'Q̂' (𝐬'_𝐭+1,a';θ'_-) and Q^', Q̂^̂'̂ are the replica DQNs of the target DQN. § TRANSFER LEARNING-BASED DRL FRAMEWORK The distributed RSU deployments in vehicular systems and the varying spatiotemporal behavior of traffic traces, render the training of a DRL-MDS agent difficult, especially in detecting unseen and partially observable misbehavior attacks. Challenges in such complex setups include slow convergence of the learning process, overfitting, and/or sub-optimal solutions due to poor exploration <cit.>. Hence, we advocate the adoption of a transfer learning-based DRL approach to achieve distributed collaborative misbehavior detection in adversarial vehicular environments. 
We rely on selectively transferring knowledge from trustworthy source RSUs to avoid negative knowledge sharing from adversary-influenced RSUs, as shown in Fig. <ref>. §.§ Transfer Learning for DRL-based Misbehavior Detection TL leverages valuable knowledge acquired in one task and past experience to improve learning performance on other tasks (similar/dissimilar). The knowledge learned and previous experiences obtained from learning some source tasks, can be re-used to enhance the learning of some target tasks. In our work, the task represents the misbehavior detection performed in each RSU. Most DRL-based works in related literature (Sec. <ref>) have developed agents that can efficiently learn tabula rasa without any previously learned knowledge. Yet, tabula rasa DRL techniques require a long learning period to be effective, which can be inefficient in computationally intensive and mission-critical operations <cit.>. For instance, an agent at the start of learning may tend to spend significant time exploring the environment before finding an optimal policy. Thus, TL can be leveraged to accelerate the learning. In this context, source and target RSUs associated with TL can be represented as MDP[Henceforth, MDP and environment are used interchangeably.] agents. Accordingly, TL between RSUs can be defined as follows. Given a set of m source RSUs with MDPs 𝐌_s = {⋃_i=1^mℳ_i,s|ℳ_i,s∈𝐌_s} and a target RSU with MDP ℳ_t, the goal of TL is to learn an optimal policy π^*_t for the target RSU as π^*_t = _π_t_𝐬∼𝒮_𝐭,a ∼π_t [Q^π_t_ℳ_t(𝐬,a)], by leveraging external information ℐ_s from 𝐌_s along with internal information ℐ_t from ℳ_t. In (<ref>), π_t = ϕ(ℐ_s∼𝐌_s, ℐ_t∼ℳ_t):𝐬_𝐭∈𝒮→ a_t∈𝒜 is a policy that maps states to actions for ℳ_t, which is learned from both ℐ_t and ℐ_s. The policy π_t is estimated using a DNN. In multi-source[In a single-source scenario with |𝐌_s| = 1, tabula rasa learning occurs without any external transfer when ℐ_s=∅.] RSU deployments, according to Definition <ref>, the aim is to enhance the learning of misbehavior detection at the target RSU by leveraging knowledge acquired from a set of source RSUs. Assume m source RSUs with MDPs 𝐌_s = {(𝒮_i,𝒜_i,𝒫_i,ℛ_i,γ_i)}^m_i=1, and a detection policy π_s_i for each source RSU i. Then, the target RSU with MDP ℳ_t = (𝒮_𝐭,𝒜_t,𝒫_𝐭,ℛ_t,γ_t) aims to enhance the learning of π^*_t by leveraging ℐ_s∼𝐌_s. The external ℐ_s represents transferred knowledge on misbehavior detection from source RSUs. §.§ Knowledge Transfer Previous works (e.g., <cit.>) on the integration of TL with collaborative DRL have demonstrated effectiveness in scenarios with some degree of similarity in agents' experiences. Based on the definition of transferred knowledge, TL techniques in DRL can be realized as policy[In policy transfer, a set of policies {π_s_i}^m_i=1 from source MDPs are transferred to the target MDP, and then the target policy π_t is learned by utilizing knowledge from them.], representation[In representation transfer, the algorithm learns a feature representation of the task, such as a value function V^π(𝐬) or the Q^π(𝐬,a) function, and the knowledge learned can be either directly used in the target or indirectly using a task-invariant feature space.], and instance transfer <cit.>. We hereby employ an instance TL technique to transfer misbehavior detection knowledge from a set of source RSUs to a target RSU. The rationale lies in transferring those source samples that can enhance the detection performance of the target RSU. 
Our method selects only related (i.e., good/useful) samples collected from the environments of source RSUs as agents' experiences. The selection of such samples prevents negative knowledge transfer that may originate from adversarial source RSUs. In collaborative DRL-MDS, positive transfer occurs when the transferred knowledge from source RSUs improves the target RSU's performance. In negative transfer, the target's performance degrades compared to tabula rasa learning. To realize collaborative DRL-MDS, our proposed methodology employs TL with a new selective knowledge transfer scheme to prevent negative transfers from untrusted source RSUs, as in Fig. <ref>. Under positive transfer, the target RSU is expected to achieve higher cumulative return as well as reach the asymptotic performance earlier than in tabula rasa learning. Hence, measuring the degree of similarity in misbehavior detection performed at the source and target RSUs becomes crucial to avoid negative transfer overhead in TL. §.§.§ Trust Evaluation In Fig. <ref>, we provide a visualization workflow for trustworthy source RSUs' selection for transfer. We first establish an inter-agent semantic similarity metric between the source and target to ensure a positive transfer. A source RSU s_i with policy π_s_i is considered to have high semantic relatedness with the target RSU t, if π_s_i can generate the maximum cumulative return from the target RSU's environment in a limited number of training episodes. Inspired by works in <cit.>, the semantic relatedness between the source and target RSUs is defined as the gain of cumulative return for a policy π_s_i under the target reward function, G^π_s_i = ∑^N_e_j=1 R_t_j(π_s_i), ∀ i ∈ [1,m], where N_e denotes the number of training episodes and R_t(.) denotes the target reward function. By utilizing semantic relatedness, the target RSU can effectively select a subset of source RSUs that are trustworthy to contribute towards enhanced misbehavior detection. A higher cumulative return in N_e training episodes ascertains a positive transfer of detection knowledge and implies that a source RSU is more reliable for collaboration. The procedure of selecting trustworthy source RSUs for knowledge transfer is summarized in Algorithm <ref>. Without loss of generality, we normalize cumulative return G^π_s_i in a way that the scaled cumulative return G^π_scaled, s_i resides in the range of [0,1]. In Algorithm <ref>, cumulative return is computed using (<ref>) for each source RSU s_i with a policy π_s_i and Min-Max normalization is applied to scale cumulative returns {G^π_s_i}^m_i=1 into the range of [0,1]. Min-Max normalization based on cumulative return values is defined as G^π_scaled, s_i = G^π_s_i - G^π_min/G^π_max - G^π_min, where G^π_max and G^π_min denote max{G^π_s_i}^m_i=1 and min{G^π_s_i}^m_i=1, respectively. As such, the scaled cumulative return G^π_scaled, s_i of a source RSU s_i can be associated with its trust value T_tr,s_i, which is used by the target RSU to decide whether the source is trustworthy to collaborate with. We introduce a source ranking strategy 𝒻_ranking in Algorithm <ref> (line 7), used by the target RSU to sort source RSUs based on their trust values and select a subset lying above a specified threshold value T_th∈ [0,1]. The selection of a threshold to ascertain the trustworthiness of source RSUs can be either tolerant with a lower T_th value or more stringent with a higher T_th value. 
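The selection procedure of Algorithm <ref> can be sketched as follows. The helper evaluate_return stands in for rolling out a source policy for N_e episodes under the target RSU's reward function to obtain its cumulative return, and the default threshold value is only an illustrative assumption.

```python
import numpy as np

def select_trustworthy_sources(source_policies, evaluate_return, n_episodes, t_th=0.5):
    """Sketch of trustworthy source selection: rank source RSUs by their
    scaled cumulative return (semantic relatedness) in the target environment
    and keep those whose trust value lies at or above the threshold T_th.

    source_policies : dict {rsu_id: policy}
    evaluate_return : callable(policy, n_episodes) -> cumulative return under
                      the target reward function (assumed helper)
    """
    ids = list(source_policies)
    G = np.array([evaluate_return(source_policies[i], n_episodes) for i in ids], dtype=float)

    # Min-Max normalization of cumulative returns into [0, 1].
    spread = G.max() - G.min()
    trust = (G - G.min()) / spread if spread > 0 else np.ones_like(G)

    # Ranking strategy: sort by trust value and keep sources above the threshold.
    ranked = sorted(zip(ids, trust), key=lambda pair: pair[1], reverse=True)
    return [(rsu_id, t) for rsu_id, t in ranked if t >= t_th]
```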
Reputation or trust-based methods in related literature typically use a [0,1] scale to represent trust/reputation values and consider 0.5 as a neutral value or a threshold <cit.>. §.§.§ Selective Knowledge Transfer Upon the completion of source RSUs' selection, the target RSU applies instance TL by collecting samples from k trustworthy source RSUs following their policies {π_s_i}^k_i=1 (i.e., the output of Algorithm <ref>). In target RSU training, we employ a selective knowledge transfer scheme, called experience selection, that selects source samples with high semantic relatedness. Motivated by  <cit.>, experience selection is defined as Q^*(𝐬,a) ≥ y > Q_θ(𝐬,a), where Q^*(.) represents an optimal Q-function and y = R_t(𝐬,a) + γmax_a_t+1Q_θ(𝐬_𝐭+1,a_t+1). The aim of the target RSU is to learn the optimal misbehavior detection policy π^*_t by obtaining an optimal Q-function Q^*(𝐬,a) for a given state-action pair. The expected return achieved by following an optimal policy is always greater or equal to that of any arbitrary behavior policy μ; hence, the expected return Q_μ(𝐬,a) of any arbitrary behavior policy μ can serve as a lower bound of the optimal Q-value Q^*(𝐬,a). Lower bound Q-learning <cit.> can be expressed as Q^*(𝐬,a) ≥ Q_μ(𝐬,a) = _μ[r_t + ∑_k=t+1^Tγ^k-tr_k], with convergence guarantees of Q-values <cit.>. The lower bound Q-value in (<ref>) implies that the estimated Q_θ(𝐬,a) value in (<ref>) is lower than the optimal Q^*(𝐬,a) value. Following lower bound Q-learning, experience selection aims to update Q_θ(𝐬,a) towards the lower bound of Q^*(𝐬,a) with source samples of high semantic relatedness, i.e., Q_θ(𝐬,a) ≥ y. The target RSU training with selective knowledge transfer is summarized in Algorithm <ref>. The algorithm proceeds as follows to improve the learning performance of the target RSU with the aid of experience selection. The training of the target RSU is divided into episodes. In each time step, the target RSU selects an action a_t∈𝒜 either randomly with probability ϵ or according to the best Q-value given by (<ref>) (lines 4-9). Subsequently, the chosen action a_t is executed in the environment, and the reward r_t and the next state 𝐬_𝐭+1 are observed (line 10). Assume k trustworthy source RSUs with policies {π_s_i}^k_i=1. The target RSU can form an experience samples buffer 𝒮̃ by collecting samples following k source policies (line 12) and computes rewards using its current reward function. The amount of samples collected into 𝒮̃ from each trustworthy source RSU s_i is equivalent to η_s_i·|𝒮̃|, with η_s_i = T_tr,s_i/∑^k_i=1 T_tr,s_i, ∀ i ∈ [1,k], where T_tr,s_i = G^π_scaled, s_i in (<ref>). Consequently, source RSUs with higher trust values will transfer more samples, which results in learning more from a set of highly trusted RSUs. To trade off exploitation with exploration, target samples are also added to 𝒮̃ following the online Q-network with ϵ-greedy policy (line 13). In the target's model update, parameterized by θ, training samples are drawn from 𝒮̃ (line 14), and experience selection is applied (lines 16-17) to filter out relevant samples. Conversely, samples that are not semantically important will be removed from 𝒮̃ (lines 18-19). As such, 𝒮̃ is gradually updated while equally prioritizing the remaining training samples, which results in improved sample efficiency. 
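The two quantitative ingredients of this procedure — the trust-weighted sample quota and the experience-selection test — can be sketched as follows. The Q-network is abstracted as a callable returning one Q-value per action, the selection test is written in the form Q_θ(s,a) ≥ y used in the algorithm description above, and all names are illustrative assumptions rather than a definitive implementation.

```python
from typing import Callable, Dict, List, Optional, Sequence, Tuple

Transition = Tuple[Sequence[float], int, float, Sequence[float], bool]  # (s, a, r, s', done)

def sample_quota(trust: Dict[str, float], buffer_size: int) -> Dict[str, int]:
    """eta_{s_i} * |S~|: more trusted source RSUs contribute more samples to the buffer."""
    total = sum(trust.values()) or 1.0
    return {name: int(buffer_size * t / total) for name, t in trust.items()}

def keep_sample(q_net: Callable[[Sequence[float]], Sequence[float]], tr: Transition,
                gamma: float, target_reward: Optional[Callable] = None) -> bool:
    """Experience selection: retain a sample only when Q_theta(s, a) >= y,
    with y computed using the target RSU's current reward function."""
    s, a, r, s_next, done = tr
    if target_reward is not None:                    # source samples are re-rewarded by the target
        r = target_reward(s, a)
    y = r + (0.0 if done else gamma * max(q_net(s_next)))
    return q_net(s)[a] >= y

def build_transfer_buffer(source_samples: Dict[str, List[Transition]],
                          trust: Dict[str, float], q_net, gamma: float,
                          buffer_size: int, target_reward=None) -> List[Transition]:
    """Assemble S~ from trusted sources, respecting the quotas and the selection test."""
    quotas = sample_quota(trust, buffer_size)
    buffer: List[Transition] = []
    for name, samples in source_samples.items():
        for tr in samples[:quotas.get(name, 0)]:
            if keep_sample(q_net, tr, gamma, target_reward):
                buffer.append(tr)
    return buffer
```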
The target RSU can thus leverage selective knowledge transfer and train its misbehavior detection model by minimizing the loss function, L(θ) = 1/2∑_tQ_θ(𝐬_𝐭,a_t) - y_t^2, where target Q-values are given by y_t = R(𝐬_𝐭,a_t) + γmax_a_t+1Q_θ(𝐬_𝐭+1,a_t+1). Moreover, the selection of samples with Q_θ(𝐬_𝐭,a_t) ≥ y_t implies that learning updates in (<ref>) are encouraged towards the lower bound of the optimal Q^*(𝐬,a) value. Consequently, experience selection updates the misbehavior detection policy π_t towards the optimal policy π^*_t. Although experience selection improves sample efficiency, transferred knowledge from source instances may introduce bias owing to differences in data distribution or variability in misbehavior attack patterns. Therefore, to circumvent such bias, experience selection is utilized only in early training stages and, subsequently, target RSU training switches to traditional DQN learning. Algorithm <ref> for traditional DQN operates similarly but without sampling from 𝒮̃ (lines 13-14) and with no experience selection (lines 16-20). In this case, the target RSU samples a minibatch of transitions from replay memory buffer 𝒟 with training samples from the target environment, and those samples are directly used to train the misbehavior detection model by minimizing (<ref>). § EXPERIMENTS This section describes the experimental scenarios and the setup used to verify the validity and effectiveness of the proposed knowledge transfer approach for collaborative misbehavior detection. We leverage the open-source VeReMi dataset for experimentation <cit.>. §.§ Dataset Description and Pre-processing The VeReMi dataset has been generated considering 19 different misbehavior attack types with each attack type having a separate dataset. An attack dataset is made of a collection of log files, each one generated containing BSM data exchanged with neighboring vehicles over their entire trajectory. Each attack dataset contains a ground truth file that has recorded the observed behavior of all participating vehicles. BSMs constitute a three-dimensional vector for the position, speed, acceleration, and heading angle parameters. The ratio between misbehaving and genuine vehicles in VeReMi is 30%:70% for all the attack simulations. In Appendix <ref>, we briefly describe selected misbehavior attack types relevant to our study. Based on feature analysis and data pre-processing, we selected six fields, i.e., timestamp, pseudo-identity, position, speed, acceleration, and heading angle, from a BSM payload as the relevant features for misbehavior detection. To reduce dimensionality, a scalar representation of the position, speed, acceleration, and heading parameters is computed using the Euclidean norm. In VeReMi, the ground truth data do not contain labels; hence, a label for each data point was generated by comparing the ground truth against the actual transmitted value recorded in the log file of each vehicle. An attack label 1 was added if the transmitted data diverged from the ground truth; otherwise, label 0 was added for the genuine data. §.§ Examined Scenarios We conduct experiments on a set of simulation scenarios, listed in Table <ref>, to assess the proposed knowledge transfer approach. We provide empirical evidence of the benefits of TL through simulated scenarios for collaborative misbehavior detection in unpredictable and untrusted vehicular environments. 
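Before detailing these scenarios, the feature reduction and labelling steps of the pre-processing stage described above can be sketched for a single BSM record as follows; the field names are assumptions about the log-file layout rather than the exact VeReMi schema.

```python
import math
from typing import Dict, Sequence

def norm3(v: Sequence[float]) -> float:
    """Euclidean norm used to collapse a three-dimensional BSM field into a scalar feature."""
    return math.sqrt(sum(x * x for x in v))

def preprocess_bsm(bsm: Dict) -> Dict:
    """Keep the six selected fields; vector-valued fields are reduced to their norms."""
    return {
        "timestamp": bsm["sendTime"],
        "pseudo_id": bsm["sender"],
        "position": norm3(bsm["pos"]),
        "speed": norm3(bsm["spd"]),
        "acceleration": norm3(bsm["acl"]),
        "heading": norm3(bsm["hed"]),
    }

def label_bsm(bsm: Dict, ground_truth: Dict, tol: float = 1e-9) -> int:
    """Label 1 (attack) if any transmitted field diverges from the ground truth, else 0 (genuine)."""
    for key in ("pos", "spd", "acl", "hed"):
        if any(abs(a - b) > tol for a, b in zip(bsm[key], ground_truth[key])):
            return 1
    return 0
```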
The considered scenarios aim to cover diversified aspects of collaborative misbehavior detection by means of knowledge transfer between RSUs. By leveraging knowledge from source RSUs with high semantic relatedness, the target RSU can rapidly adapt to new misbehavior patterns, even in dynamic and volatile conditions. Vehicular mobility renders indispensable the study of such scenarios that may arise in a collaborative misbehavior detection setup. With that aim, we exploit the attack variants present in VeReMi. For each specific scenario, a different set of misbehavior attacks is selected, aiming to provide sufficient coverage of the available attacks in VeReMi. Next, we elaborate on the considered scenarios. §.§.§ SC1 The first scenario captures the requirement of detecting future misbehaviors of the same type. In SC1, a misbehaving vehicle can permeate an attack across multiple geographic locations across its trajectory, resulting in some RSUs experiencing the attack earlier in time (sources) compared to others (targets). SC1 arises in situations where RSU deployments are sparsely distributed in vehicular setups. In this case, the target RSU relies on collaborative misbehavior detection to proactively detect future attacks of the same type. By exploiting the evolving attack patterns , we split an attack dataset based on the timestamp field (ascending order) and assign a similar amount of training samples to each source RSU. To motivate knowledge transfer, the number of training samples allocated to a source RSU is proportionally higher than that of the target RSU. It is noted that the allocated samples per RSU preserve the ratio between misbehaving and genuine classes as set in the entire dataset. §.§.§ SC2 In this scenario, the target RSU aims to acquire knowledge from source RSUs to detect an unseen/unknown attack. Such situations may encounter in practice due to blind spots and occlusions caused by vehicular infrastructure or moving vehicles. This inevitably results in limited situational awareness in some RSUs (targets) which may not experience certain attack types. In this case, the target RSU seeks relevant knowledge from source RSUs with expertise in detecting attacks of similar type. We assume that source RSUs are capable of detecting a wider range of misbehavior attack types. On the contrary, the target RSU is trained to identify a narrower set of attacks compared to source RSUs. In such cases, the target RSU aims to leverage knowledge transfer from source RSUs to identify similar new misbehavior attacks. The VeReMi dataset includes a set of misbehavior attack types that have been generated by executing a combination of multiple attacks at once (as detailed in Appendix <ref>). We hereby evaluate two cases within this scenario, each involving different denial-of-service (DoS) variants: * Each source RSU is trained on a mix of DoS, DoS Random and DoS Random Sybil attacks. The target RSU is trained on a mix of DoS and DoS Random attacks, and leverages source knowledge to detect the unseen DoS Random Sybil attack. * Each source RSU is trained on a mix of DoS, DoS Disruptive and DoS Disruptive Sybil attacks. The target RSU is trained on a mix of DoS and DoS Disruptive attacks, and leverages source knowledge to detect the unseen DoS Disruptive Sybil attack. §.§.§ SC3 In this scenario, source RSUs are trained with different feature vectors resulting in partial observability of the attack space. 
Such cases arise in practice when RSUs have diverse computational capabilities, including different processing and buffer sizes, limiting their training only on specific attack types. This heterogeneity may hinder an RSU’s ability to process multiple/high-dimensional feature vectors effectively. Thus, the target RSU needs to select source RSUs based on relevant feature vectors to detect an array of misbehaviors and enhance the effectiveness of collaborative detection. We assume that each source RSU s_i is trained with feature vector ϕ_𝐢 on a targeted misbehavior attack type. Similarly, the target RSU is trained on a specific misbehavior attack type with a feature vector ϕ_𝐭 with dimensionality ρ_ϕ_𝐭<ρ_ϕ_𝐢, ∀ i. Then, the target RSU can leverage source knowledge by combining feature vectors, ⋃ϕ_𝐢, to detect a wider range of misbehaviors. We consider position- and speed-related attack variants in VeReMi, distinguishable by various feature values embedded in BSMs. Similarly to SC2, we evaluate two cases: * Each source RSU is individually trained on Constant Position Offset, Random Position, and Random Position Offset attacks. The target RSU is trained on Constant Position attack, and leverages source knowledge to detect all position-related attack variants. * Each source RSU is individually trained on Constant Speed Offset, Random Speed, and Random Speed Offset attacks. The target RSU is trained on Constant Speed attack, and leverages source knowledge to detect all speed-related attack variants. §.§ Experimental Setup The experiments were performed on a server equipped with Intel(R) Xeon(R) Gold 6230 CPUs @ 2.10GHz with 192 GB RAM and four NVIDIA GeForce RTX 2080 Ti GPUs. The components of the DRL-MDS (Fig. <ref>) are implemented in Python using Tensorflow-GPU as the backend. Considering the amount of data samples available per misbehavior type in VeReMi, we resort to a number of four RSUs in all scenarios, with three source RSUs and a single target RSU. Source RSUs comprise two genuine RSUs and an adversary-influenced (i.e., malicious) RSU. The malicious RSU is trained using a single type of adversarial attack at a time, either label flipping or policy induction attack. In this way, the level of impact on knowledge transfer from each adversarial attack type can be assessed separately. The learning rate α and discount factor γ in DRL-MDS are set to 0.001 and 0.995, respectively. Both 𝒮̃ and 𝒟 sizes at the target RSU are set to 50000. In the label-flipping attack, the parameter ζ is set to 0.05, which is equivalent to randomly selecting 5% of misbehaving vehicles from a source RSU and flip the labels of the corresponding BSMs to create a malicious RSU. In the policy induction attack, the adversarial policy π_adv is obtained based on an inverted reward function r' compared to (<ref>). The r' generates exact opposite reward values for the malicious RSU to maximize FNs and FPs. To obtain π_adv, an adversarial DQN model is trained with r' on a small portion of training samples extracted from a victim source RSU's environment. With π_adv, the policy induction attack (as in Definition <ref>) is launched by generating perturbed samples during the training of the victim source RSU to create a malicious RSU. § PERFORMANCE EVALUATION This section presents the learning performance of DRL-MDS agents for the scenarios described above using VeReMi. Next, the resulting detection performance achieved via knowledge transfer is assessed for different misbehavior attack types. 
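Throughout the evaluation that follows, the setup above is held fixed. For reference, its main parameters can be grouped into a single configuration object; the dataclass below merely mirrors the values quoted in the experimental setup, and its structure is an illustrative assumption.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class ExperimentConfig:
    # DRL-MDS agent hyperparameters
    learning_rate: float = 1e-3          # alpha
    discount_factor: float = 0.995       # gamma
    transfer_buffer_size: int = 50_000   # |S~| at the target RSU
    replay_buffer_size: int = 50_000     # |D| at the target RSU
    # deployment: three source RSUs (two genuine, one malicious) and one target RSU
    n_source_rsus: int = 3
    n_genuine_sources: int = 2
    # adversarial settings for the malicious source RSU
    label_flip_ratio: float = 0.05       # zeta: share of misbehaving vehicles with flipped labels
    # trust thresholds examined for source selection
    trust_thresholds: Tuple[float, float] = (0.5, 0.8)

CONFIG = ExperimentConfig()
```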
§.§ Learning Performance To provide a comprehensive assessment, we evaluate the learning performance of misbehavior detection agents in i) the source RSUs with trained policies and ii) the target RSU with and without knowledge transfer. §.§.§ Source RSUs To assess the contribution of source RSUs towards knowledge transfer, we measure the average reward gains per RSU during the interactions with their own environments. In each scenario, a source RSU is set to run for 500 training episodes, and the learning performance is measured in terms of the cumulative reward values obtained following the reward function in (<ref>). Fig. <ref> illustrates the learning performance of source RSUs for two position-related misbehavior types considered for SC1. As can be visually comprehended, genuine source RSUs accumulate significantly higher average rewards compared to malicious source RSUs, which are detrimentally influenced by label-flipping and policy induction attacks. Furthermore, the negative impact on the victim source RSU from the policy induction attack is greater than that of the label-flipping attack. This can be attributed to the effectiveness of policy induction attack which injects crafted inputs at each training step towards the adversary's goal. This, in turn, results in misclassifying malicious samples as genuine ones with increased false alarms. Similar behavior can be observed in Fig. <ref> for DoS-related misbehavior variants considered in SC2. In this case, each source RSU is trained on a combination of misbehavior attack types, providing the capability to detect additional misbehavior attack types compared to SC1. In Fig. <ref>, the learning performance of source RSUs in SC3 for both position- and speed-related misbehavior variants is shown. Similarly to SC1 and SC2, genuine sources perform significantly better than malicious ones, while the pernicious impact of policy induction attack compared to label-flipping becomes apparent. §.§.§ Target RSU As shown in Algorithm <ref>, based on the available source policies, the target RSU selects a subset of source RSUs using T_th to realize knowledge transfer. As opposed to conventional predetermined thresholds (Sec. <ref>), the proposed trust evaluation supports dynamic thresholding to handle unpredictable and untrusted vehicular environments, such as the erratic behavior of RSUs due to adversarial attacks. For instance, several previous works use 0.5 as the neutral threshold value. In our case, we select 0.5 and 0.8 for T_th to assess the impact of different threshold values on knowledge transfer without loss of generality. Figs. <ref>, <ref>, and <ref> depict the learning performance of the target RSU in SC1, SC2, and SC3, respectively. We report average rewards for knowledge transfers involving different sources by calculating the average across three runs. The case of no transfer between source and target RSUs (i.e., tabula rasa learning) is also evaluated as a baseline. In addition, the black dashed lines in Figs. <ref>–<ref> represent the best return reward of the baseline scheme with tabula rasa learning. These lines demonstrate the training time reduction achieved under each transfer at the target RSU to reach a certain performance level. Moreover, Table <ref> presents specific information regarding the reduction in training time achieved through knowledge transfer, detailing the required number of training episodes and the corresponding wall-clock time. Fig. 
<ref> illustrates average rewards accumulated by the target RSU for a specific misbehavior with knowledge transferred from the source RSUs (shown in Fig. <ref>). It can be observed that our selective knowledge transfer approach significantly enhances the learning performance compared to tabula rasa. Specifically, the target RSU achieves a higher cumulative return and reaches asymptotic performance earlier than in the baseline. In addition, Fig. <ref> reveals the impact of different threshold values on learning performance. When a tolerant T_th = 0.5 value is set, both genuine and malicious (label-flipping and policy induction) source RSUs feature in the transfer. With a stringent T_th = 0.8, collaboration stems only from genuine source RSUs. Interestingly, even though malicious source RSUs are involved with T_th = 0.5, the sample contribution in (<ref>) and the experience selection in (<ref>) ascertain positive transfer, resulting in similar performance as in T_th = 0.8. Fig. <ref> demonstrates the learning performance of the target RSU with unseen misbehavior attack variants. For the two cases in SC2, the target RSU acquires expert knowledge from sources to detect unseen DoS Random Sybil and DoS Disruptive Sybil misbehaviors. In contrast, the baseline with tabula rasa does not possess prior knowledge of those two unseen variants. The learning curves indicate a superior performance of the target RSU compared to the baseline. This can be attributed to the fact that collaborative knowledge transfer from source RSUs (Fig. <ref>) imparts the necessary expertise to detect unseen misbehaviors. Similar to SC1 (Fig. <ref>), our approach exhibits consistent learning performance irrespective of the T_th value. Note that DoS misbehaviors in VeReMi are high-volume attacks, resulting in the frequent learning of attack patterns by both source RSUs and the target RSU. Thus, the target RSU accumulates a relatively higher total average reward for a similar number of training episodes compared to Fig. <ref>. Fig. <ref> displays the learning performance of the target RSU with partial observability of the attack space. For the two cases in SC3, the target RSU assimilates knowledge from source RSUs (Fig. <ref>) to expand the detection horizon to a wider range of misbehaviors. As such, a source RSU learns a specific misbehavior effectively using high-dimensional feature vectors, and then the acquired knowledge is distilled to the target RSU via selective knowledge transfer. In particular, each source RSU is trained with a high-dimensional feature vector comprising position, speed, acceleration, and heading angle parameters. The target RSU is trained with the position feature for the Constant Position attack and with the speed feature for the Constant Speed attack. As shown in Fig. <ref>, the target RSU in the baseline scheme achieves notably lower asymptotic performance compared to the target RSU with knowledge transfer. This difference can be attributed to the increased false alarms resulting from the partial observability of the attack space when using low-dimensional feature vectors. To circumvent this, knowledge distillation from sources with multiple/high-dimensional feature vectors can be applied to enhance the effectiveness of collaborative misbehavior detection. Note that our approach exhibits comparable learning performance for both T_th values, similar to SC1 (Fig. <ref>) and SC2 (Fig. <ref>). 
§.§ Detection Performance Detection performance is assessed in all scenarios in terms of Accuracy (A), Precision (P), Recall (R), and F-score (F), A = TP+TN/TP+TN+FP+FN, P = TP/TP+FP, R = TP/TP+FN, F = 2RP/R + P, respectively, by considering both genuine and misbehavior classes for each misbehavior type in VeReMi. The accuracy in (<ref>) indicates the ratio of all correct predictions to the total number of considered input samples. The higher precision values in (<ref>) indicate low FP rates, whereas higher recall values in (<ref>) indicate low FN rates. The F-score in (<ref>) provides a harmonic mean between precision and recall, which is used when FPs and FNs are vital. Therefore, a higher F-score implies better performance in our examined scenarios. Table <ref> presents the performance results obtained from our comprehensive analysis of collaborative misbehavior detection with the selective knowledge transfer approach. Results show that misbehavior detection using knowledge transfer yields high effectiveness with a very high F-score of 0.98 in SC1 under both T_th values of 0.5 and 0.8, while correctly identifying misbehaviors with low rates of FPs and FNs. It should be highlighted that the baseline scheme with tabula rasa also achieves a significantly high F-score of 0.91. This is due to the fact that tabula rasa learning obtains sufficient knowledge during training to detect future misbehaviors of the same type. Although the results are reported only for two misbehavior types in SC1, similar performance levels were observed for other misbehavior types in VeReMi. Numerical results for SC2 demonstrate effective identification of unseen attacks with knowledge transfer, achieving an F-score of 0.71 under both T_th values. Additionally, recall values approaching 1.0 further elucidate that such non-anticipated attacks can be successfully detected with a very low rate of FNs. As shown in Table <ref>, the baseline scheme is ineffective in detecting unseen attacks and achieves very low F-scores of 0.49 and 0.48 in contrast to knowledge transfers when encountering DoS Random Sybil and DoS Disruptive Sybil misbehaviors, respectively. This validates that the target RSU enhances its situational awareness and detects non-anticipated misbehaviors by acquiring knowledge from source RSUs. Detection performance in Table <ref> reveals that target learning with knowledge transfer yields significantly superior F-scores compared to the baseline scheme in SC3. Under both T_th values, the F-scores of 0.88 and slightly over 0.90 for position- and speed-related misbehaviors, respectively, demonstrate high effectiveness in detecting a partially observable attack space. Conversely, the baseline scheme becomes highly ineffective, as shown by the low F-score of 0.37 with high number of FPs and FNs for both position- and speed-related misbehaviors. It is worth noting that misbehavior variants generated by adding/subtracting an offset (i.e., Constant/Random Position Offset and Constant/Random Speed Offset) are more challenging to detect as compared to others, such as Constant Position and Constant Speed. Thus, the transfer of relevant knowledge to the target RSU becomes imperative to effectively identify such partially observable attack spaces. Overall, across all three scenarios, collaborative misbehavior detection with knowledge transfer significantly outperforms tabula rasa learning, as summarized in Fig. <ref>, demonstrating its high effectiveness. 
Specifically, our approach enhances robustness and generalizability by effectively detecting previously unseen and partially observable misbehavior attacks. §.§ Discussion In summary, we develop a selective knowledge transfer approach that selects knowledge learned at source RSUs with high semantic relatedness to the target RSU, enabling effective and efficient learning about various misbehaviors in a collaborative fashion. We introduce a relevant theoretical framework for the proposed selective knowledge transfer in DRL-based collaborative misbehavior detection in untrusted vehicular environments. In addition, empirical evidence demonstrating the necessity of transfer learning is provided across three examined scenarios discussed in Table <ref> through (i) asymptotic improvement in the learning performance of the target RSU (Figs. <ref>–<ref>); (ii) improved misbehavior detection performance at the target RSU, as shown in Table <ref>; and (iii) faster learning speed of the target RSU, as shown in Table <ref>. More specifically, we empirically demonstrate that the selective knowledge transfer approach, coupled with trust evaluation, significantly reduces the training time for collaborative misbehavior detection at the target RSU and produces key outcomes: i) higher performance levels in SC1 and SC3 for the detection of future attacks of the same type and only partially observable misbehaviors, respectively, and ii) a certain performance level in SC2 by effectively detecting unseen attacks. Our experiments further demonstrate that the proposed knowledge transfer method is robust across various threshold values (T_th∈ [0,1]) when selecting source RSUs. In particular, Figs. <ref>– <ref> consistently demonstrate similar learning and detection performances, irrespective of specific T_th values used. Nonetheless, it is worth mentioning that appropriate threshold selection depends on the severity of the adversarial attack model and the specific scenario being examined. The generalizability problem is an essential challenge in DL- and traditional ML-based misbehavior detection methods proposed in related work (Sec. <ref>). Conventional approaches often struggle with detecting unseen or only partially observable attack spaces. In contrast, our collaborative misbehavior detection approach leverages transfer learning to improve generalizability by effectively handling both unseen and partially observable attacks. A concern unavoidably arises with the increased computational overhead in the form of processing and buffer sizes at the target RSU when resorting to more tolerant T_th values. Lower threshold values result in the inclusion of more selected sources, which may encompass less trustworthy ones. Consequently, this poisons the experience samples buffer 𝒮̃ with samples from sources with lower trust values, incurring processing overhead on the target RSU. However, samples from less trustworthy sources will not eventually influence the target's learning process due to the experience selection. § CONCLUSIONS The current research focus in AI/ML-based collaborative misbehavior detection lies in designing methods that can effectively identify various misbehavior attacks. Such approaches, albeit resonating high detection accuracy, assume the presence of an honest majority, with inherent model resilience and without explicit defense against adversarial attacks. 
While the majority of solutions primarily concentrate on centralized setups, they often suffer from drawbacks related to latency, computational cost, scalability, robustness, and generalizability, due to the heterogeneity and distributed nature of vehicular environments. To address these research challenges, we proposed a DRL-based scheme that utilizes transfer learning for distributed collaborative misbehavior detection among RSUs. Our approach enables selective knowledge transfer from reliable source RSUs in unpredictable and untrusted vehicular environments by leveraging semantic relatedness with the target RSU. We empirically showcase that selective knowledge transfer, coupled with trust evaluation, is sample-efficient and effective in detecting previously unseen and partially observable misbehaviors with high detection performance. Hence, significant steps towards generalizability are attained. In future research, we intend to delve into adversarial training techniques for DRL, such as introducing perturbations to the observation space, to counter gradient-based adversarial attacks within target RSUs. Moreover, we are interested in evaluating the scalability and performance of our approach when compared to emerging blockchain-based trust management techniques in vehicular networks. [Misbehavior Attack Types] Position falsification: A vehicle transmits falsified position coordinates, leading to four different variants: i) constant position, the attacker transmits fixed position coordinates; ii) constant position offset, the attacker transmits the real position coordinates with a fixed offset; iii) random position, the attacker transmits newly generated random position coordinates; and iv) random position offset, the attacker transmits the real position coordinates with a random offset. Speed falsification: A vehicle transmits falsified speed values in its BSM, following a similar approach as in the position falsification attack. This results in constant speed, constant speed offset, random speed and random speed offset variants. Disruptive: This is similar to a data replay attack, where a vehicle re-transmits previously sent messages by other vehicles. BSMs are selected at random and flood the network with stale data to disrupt genuine information from being propagated. This attack may also be carried out in DoS and Sybil modes. Denial-of-service (DoS): A misbehaving vehicle transmits BSMs at a much higher frequency than the acceptable limit set by the standard. This results in a high volume of data transmission causing extensive periods of network congestion and unavailability of critical services to legitimate vehicles. DoS Random: In this attack, the attacker sets all BSM fields to random values and performs a typical DoS attack. DoS Disruptive: An attacker may re-transmit previously sent BSMs by other legitimate vehicles. BSMs are selected at random and flood the network with stale data with the intention of disrupting genuine information from being propagated. The attacker increases the BSM transmission rate to realize this attack. DoS Random Sybil: The attacker changes pseudonym identities on every BSM while performing the DoS Random attack. DoS Disruptive Sybil: The attacker constantly changes pseudonyms on every re-transmission of BSMs while concealing its real identity.
http://arxiv.org/abs/2409.03408v1
20240905104608
Prolongation of solutions and Lyapunov stability for Stieltjes dynamical systems
[ "Lamiae Maia", "Noha El Khattabi", "Marlène Frigon" ]
math.CA
[ "math.CA", "math.DS", "34A06 (Primary) 34D20, 34A34, 34B60 (Secondary)" ]
§ ABSTRACT In this article, we introduce Lyapunov-type results to investigate the stability of the trivial solution of a Stieltjes dynamical system. We utilize prolongation results to establish the global existence of the maximal solution. Using Lyapunov's second method, we establish results of (uniform) stability and (uniform) asymptotic stability by employing a Lyapunov function. Additionally, we present examples and real-life applications to study asymptotic stability of equilibria in two population dynamics models. § INTRODUCTION In recent years, the field of differential equations has witnessed a notable surge in interest surrounding Stieltjes differential equations. This renewed focus is largely driven by the pursuit of results that not only unify existing findings but also extend those related to classical derivatives <cit.> through the Stieltjes derivative. The distinction between classical derivatives and their Stieltjes counterparts lies in the nature of the differentiation process. While classical derivatives are based on limits involving small increments of function values, the Stieltjes derivative operates with respect to a derivator g:ℝ→ℝ. This derivator is typically assumed to be left-continuous and nondecreasing, characteristics that allow for a broader range of applications, particularly in contexts where certain processes may exhibit discontinuities and/or stationary periods <cit.>. Such scenarios are common in various fields, including population dynamics and physics, where classical differentiation has limitations in capturing the complexities of real-world phenomena. Typically, investigations into first-order Stieltjes differential equations and systems focus on solutions defined on bounded intervals. However, Larivière in <cit.> turned his attention to Stieltjes differential equations on the positive real half-line. In doing so, he provided results related to the prolongation of solutions and the existence of the maximal solution. The motivation behind exploring these equations on the positive real half-line lies in the observation that many natural processes evolve over time without any inherent time limit, while some phenomena can exhibit finite-time blow-up, leading to abrupt changes or singularities. By considering the Stieltjes derivative, we aim to capture these nuanced behaviors and enhance our understanding of more complex dynamical systems from a common perspective, facilitated by the unification that this derivative provides <cit.>. In the study of dynamical systems, the stability of equilibria holds significant importance. Here, the term equilibrium refers to a state that does not change dynamically, in the sense that if a system starts at an equilibrium point, it will stay in that state indefinitely. The interaction between species within many ecosystems often relies on feedback loops, which makes stability crucial for their functioning and resilience. Although stability is typically desirable, there are some scenarios where a stable equilibrium at zero can be critical and raise concerns for several reasons.
For instance, in ecological systems, this concern arises from the vulnerability to perturbations of certain species, which may increase the risk of their extinction. Within the realm of dynamical systems theory, Lyapunov's Second Method <cit.> stands as a fundamental approach for assessing the stability properties of a system near an equilibrium. This stability analysis provides insights into whether small perturbations around an equilibrium lead to convergence (stability) or divergence (instability) of solutions. The core of this method lies in the concept of the Lyapunov function. This function was the subject of numerous works in the classical literature starting from the works <cit.> and references therein. In this paper, we extend Lyapunov stability results from the classical literature to the Stieltjes case by means of the Stieltjes derivative. In doing so, we address first the prolongation of solutions considering the Stieltjes dynamical system: 𝐱_g'(t) = 𝐟(t,𝐱(t)) for g-almost all t≥ t_0≥ 0, 𝐱(t_0) = 𝐱_0∈ℝ^n; where 𝐟=(f_1,…,f_n):[0,+∞) ×ℝ^n →ℝ^n. We start by deriving corollary results using compact sets to characterize the maximal solution of (<ref>). Then, based on a generalized version of the Grönwall lemma <cit.> for the Stieltjes derivative, we establish the existence of global solutions over the whole positive real half-line. The prolongation of solutions will be essentially used later on to establish Lyapunov-like stability results for the system (<ref>), particulary when studying the asymptotic behaviour of solutions around an equilibrium. Our stability study is inspired by results from classical theory and works such as <cit.>, which address dynamic equations on time scales and impulsive differential equations. To the best of our knowledge, this is the first work to introduce Lyapunov's method adapted to Stieltjes differential equations. This paper is organized as follows: we present the theoretical framework and some preliminaries in Section 2. In Section 3, we focus on prolongation of solutions and the characterization of the maximal solution to deduce the existence of a global solution. Section 4 is devoted to Lyapunov-like stability results using Lyapunov's second method. We start by defining the stability notions: (uniform) stability and (uniform) asymptotic stability. Afterward, we present stability results for each type of stability inspired from the works <cit.>, to extend some classical results from <cit.>. In the last section of this paper, we suggest two applications to dynamics of population to study the asymptotic stability of some critical equilibria: in the first application, we model the dynamics of a population with Allee's effect <cit.> negatively impacted by train vibrations, and another application related to the dynamics of a population of Cyanobacteria in a cultured environment, keeping track of ammonia levels in the process. § PRELIMINARIES For [a,b]⊂ℝ, and 𝐮:[a,b]→ℝ^n a regulated function, the symbols 𝐮(t^+) and 𝐮(t^-) will be used to denote 𝐮(t^+)=lim_ϵ→ 0^+𝐮(t+ϵ), for all t∈[a,b), 𝐮(t^-)=lim_ϵ→ 0^-𝐮(t-ϵ), for all t∈ (a,b]. Throughout this work, we will consider g:ℝ→ℝ a monotone, nondecreasing and left-continuous function, also known as a derivator. We denote the set of discontinuity points of g by D_g={t∈ℝ: g(t^+ )- g(t)>0}. In addition, we denote C_g ={s∈ℝ: g is constant on (s-ϵ, s+ϵ) for some ϵ>0}. 
The set C_g is an open of the usual topology and it can be written as a countable union of disjoint intervals C_g =⋃_n∈Λ(a_n ,b_n), with Λ⊂ℕ, a_n,b_n ∈ℝ. We set N_g={a_n,b_n: n∈Λ}∖ D_g. The function g defines a Lebesgue-Stieltjes measure μ_g:ℒ𝒮_g→ [0,∞] over ℒ𝒮_g the σ-algebra of subsets of ℝ containing all Borel sets <cit.>. We refer to the measurability with respect to the σ-algebra ℒ𝒮_g as g-measurability. For any interval [a,b) ∈ℒ𝒮_g, we have μ_g([a,b))=g(b)-g(a), and μ_g({t})=g(t^+)-g(t) for all t∈ℝ. Moreover, μ_g(C_g)=μ_g(N_g)=0, the reader is referred to <cit.> for more details. We denote by ℒ_g^1([a,b),ℝ) the quotient space of the set of μ_g-integrable functions on [a,b)⊂ℝ under the equivalence relation: f∼ h : f(t)=h(t) μ_g-almost everywhere. We define the norm ·_ℒ_g^1([a,b),ℝ) on the space ℒ_g^1([a,b),ℝ) by f_ℒ_g^1([a,b),ℝ):=∫_[a,b) |f(t)| dμ_g(t), for every f∈ℒ_g^1([a,b),ℝ). In the sequel, given I⊂ℝ and a functional space Z(I,ℝ), we set Z(I,ℝ^n):=∏_i=1^nZ(I,ℝ), and we denote Z_loc(I,ℝ^n) the space of functions 𝐮:I →ℝ^n satisfying 𝐮|_[a,b]∈ Z([a,b],ℝ^n) for every interval [a,b] ⊂ I. The derivator g defines a pseudometric ρ:ℝ×ℝ→ℝ^+ given by ρ(s,t)=|g(s)-g(t)|, for every s,t ∈ℝ. We denote τ_g the topology induced by the pseudometric ρ. The topology τ_g is not necessarily Hausdorff as mentioned in <cit.>. A set A ⊂ℝ is called g-open if, for every t ∈ A, there exists r > 0 such that {s ∈ℝ : ρ(t,s)< r}⊂ A. If t ∈ D_g, then the interval (t-ϵ,t] is a g-open set. The reader is referred to <cit.> for more properties of the topology τ_g. In the following definition, we define the g-derivative of a real-valued function. Let u:[a,b] →ℝ be a function. The derivative of u with respect to g at a point t_0 ∈ [a,b]∖ C_g, is defined as: u_g'(t)= lim_s→ tu(s)-u(t)/g(s)-g(t) t ∉ D_g, u(t^+)-u(t)/g(t^+)-g(t) t ∈ D_g, provided that the limit exists, in this case u is said to be g-differentiable at t. In the next proposition, we appeal to the g-derivative of the composition of two functions established in <cit.>. Another version of this formula can be found in <cit.>. Let t∈ℝ∖ C_g, f:ℝ→ℝ and h a real function defined on a neighborhood of f(t). We assume that there exist h'(f(t)), f_g'(t) and that the function h is continuous at f(t^+). Then, the composition h∘ f is g-differentiable in t_0, and we have the formulae (h∘ f)_g'(t)= h'(f(t))f_g'(t) t ∉ D_g, h(f(t^+))-h(f(t))/f(t^+)-f(t)f_g'(t) t ∈ D_g. Throughout this paper, let · denotes the maximum norm in ℝ^n defined by 𝐱=max{|x_1|,…,|x_n|} for 𝐱=(x_1,…,x_n)∈ℝ^n. Now, we recall the g-continuity notion first introduced in <cit.>. Let 𝐮 : [a,b]→ℝ^n. We say that 𝐮 is g-continuous at t∈ [a,b] if, for every ϵ >0, there exists δ>0 such that ∀ s∈ [a,b], |g(s)-g(t)|< δ𝐮(s)-𝐮(t)<ϵ. We denote 𝒞_g([a,b],ℝ^n) the set of g-continuous functions 𝐮:[a,b]→ℝ^n on the interval [a,b]. The following proposition relates the regularity of f and g, the reader is referred to <cit.>. If 𝐮:[a,b] →ℝ^n is g-continuous on [a,b], then the following statements hold: * 𝐮 is left-continuous at every t ∈ (a,b]. * If g is continuous at t ∈ [a,b), then so is 𝐮. * If g is constant on some [c,d] ⊂ [a,b], then so is 𝐮. g-continuous functions presents interesting measurability properties, see <cit.>. Let ℬ𝒞_g([a,b],ℝ^n) denotes the Banach space of the bounded functions of 𝒞_g([a,b],ℝ^n) with respect to the supremum norm. 
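Before moving on, the g-derivative defined above can be illustrated numerically. The sketch below uses a derivator with a single jump at t = 1 and a flat part on [2,3]; the particular g, the test functions and the tolerances are illustrative assumptions only.

```python
def g(t: float) -> float:
    """Left-continuous, nondecreasing derivator: jump of size 1 at t = 1, constant on [2, 3]."""
    if t <= 1.0:
        return t
    if t <= 2.0:
        return t + 1.0           # so g(1) = 1 while g(1+) = 2, i.e. 1 belongs to D_g
    if t <= 3.0:
        return 3.0               # (2, 3) is contained in C_g
    return t

def g_derivative(u, t: float, h: float = 1e-6) -> float:
    """Difference-quotient approximation of u_g'(t), distinguishing jump points of g."""
    if g(t + h) - g(t) > 10 * h:                 # t behaves like a point of D_g
        return (u(t + h) - u(t)) / (g(t + h) - g(t))
    denom = g(t + h) - g(t - h)
    if denom == 0.0:
        raise ValueError("t lies in C_g, where the g-derivative is not defined")
    return (u(t + h) - u(t - h)) / denom

if __name__ == "__main__":
    u_smooth = lambda t: t * t
    u_jump = lambda t: t if t <= 1.0 else t + 2.0     # jumps together with g at t = 1
    print(g_derivative(u_smooth, 0.5))   # ~1.0: the usual quotient u'(0.5)/g'(0.5)
    print(g_derivative(u_smooth, 1.0))   # ~0.0: a continuous u has zero g-derivative at a jump of g
    print(g_derivative(u_jump, 1.0))     # ~2.0: (u(1+) - u(1)) / (g(1+) - g(1)) = 2 / 1
```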
In the next theorem, we focus on the particular case when the derivator g:ℝ→ℝ is increasing, and continuous on an interval [a,b]⊂ℝ, to derive a generalized version of Rolle's theorem and the Mean Value theorem for real-valued functions in the context of Stieltjes differentiation. Let g:ℝ→ℝ be a left-continuous and nondecreasing function, continuous and increasing on an interval [a,b]⊂ℝ. Let f:[a,b]→ℝ be g-continuous on [a,b] and g-differentiable on (a,b) satisfying f(a)=f(b). Then, there exists c∈ (a,b) such that f_g'(c)=0. As f is g-continuous on [a,b], it results from Proposition <ref> that f is continuous on [a,b]. We set m:=min_t∈[a,b]f(t), M:=max_t∈[a,b]f(t). If m=M, then f is constant on [a,b] and for all t∈ (a,b), f_g'(t)=0. Otherwise if m<M, as f(a)=f(b) we have f(a)≠ m, and f(b)≠ M. Without loss of generality, we can assume that f(a)≠ M, and f(b)≠ M, if it is not the case, we consider m instead. Thus, there exists c∈ (a,b) such that f(c)=M. Therefore, there exists δ>0 such that (c-δ,c+δ)⊂(a,b) and f(s)≤ f(c)=M for all s∈(c-δ,c+δ). As s↗ c, g(s)≤ g(c), and lim_s→ c^-f(s)-f(c)/g(s)- g(c)≥ 0. While, as s↘ c, g(s)≥ g(c), and lim_s→ c^+f(s)-f(c)/g(s)- g(c)≤ 0. Since f is g-differentiable at c, we deduce that f_g'(c)=0. As a corollary of Theorem <ref>, we state a version of the Mean Value theorem involving the Stieltjes derivative. Let g:ℝ→ℝ be a left-continuous and nondecreasing function, continuous and increasing on an interval [a,b]⊂ℝ. Let f:[a,b]→ℝ be g-continuous on [a,b] and g-differentiable on (a,b). Then, there exists c∈ (a,b) such that f_g'(c)=f(b)-f(a)/g(b)-g(a). Let us consider the function F:[a,b]→ℝ defined by F(t)=f(t)-(f(b)-f(a)/g(b)-g(a)(g(t)-g(a))+f(a)), for all t∈ [a,b]. Clearly F is g-continuous on [a,b] and g-differentiable on (a,b), satisfying F(a)=F(b). Moreover, for all t∈ (a,b), we have that F_g'(t)=f_g'(t)-f(b)-f(a)/g(b)-g(a). Applying Theorem <ref>, there exists c∈(a,b) such that F_g'(c)=0. Hence, there exists c∈(a,b) such that f_g'(c)=f(b)-f(a)/g(b)-g(a). Now, we present the notion of g-absolute continuity. A map F:[a,b]→ℝ is g-absolutely continuous, if, for every ε > 0, there exists δ > 0 such that, for any family {(a_i , b_i)}_i=1^i=n of pairwise disjoint open subintervals of [a,b], ∑_i=1^n g(b_i)-g(a_i) <δ⇒∑_i=1^n |F(b_i)-F(a_i)| < ε. We denote by 𝒜𝒞_g ([a,b],ℝ) the vector space of g-absolutely continuous functions F: [a,b]→ℝ on the interval [a,b]. In <cit.>, a Fundamental Theorem of Calculus for Lebesgue-Stieltjes integrals was introduced. Let a,b ∈ℝ be such that a<b, and let F:[a,b] →ℝ. The following assumptions are equivalent. (1) The function F is g-absolutely continuous, i.e. to each ε>0, there is some δ>0 such that, for any family {(a_j,b_j)}_j=1^m of pairwise disjoint open subintervals of [a,b], ∑_j=1^m(g(b_j)-g(a_j))<δ ∑_j=1^m|F(b_j)-F(a_j)|<ε. (2) The function F satisfies the following conditions: (a) there exists F'_g(t) for g-almost all t∈ [a,b); (b) F'_g ∈ℒ^1_g([a,b),ℝ); (c) for each t ∈ [a,b], we have F(t)=F(a)+∫_[a,t) F'_g(s) dμ_g(s). We denote by 𝒜𝒞_g([a,b],ℝ) the set of g-absolutely continuous functions. We set 𝐮=(u_1,…,u_n)∈𝒜𝒞_g([a,b],ℝ^n) if u_i∈𝒜𝒞_g([a,b],ℝ) for i=1,…,n. The following proposition provides conditions ensuring the g-absolute continuity of the composition of two functions, the proof is based on arguments as in <cit.>. Let 𝐮:[a,b]→ B⊂ℝ^n be a g-absolutely continuous function, and let v:B→ℝ be a Lipschitz continuous function on B. Then, the composition v∘𝐮∈𝒜𝒞_g([a,b],ℝ). In <cit.>, an exponential function was introduced. 
Let p ∈ℒ^1_g([a,b),ℝ) be such that 1+p(t)(g(t^+)-g(t)) > 0 for every t ∈ [a,b)∩ D_g. Let us define the function e_p(·,a) : [a,b] → (0,∞) for every t∈ [a,b] by e_p(t,a) = e^∫_[a,t)p̃(s) dμ_g(s), where p̃(t) = p(t) if t ∈ [a,b]∖ D_g, log(1 + p(t)(g(t^+) - g(t)))/g(t^+)-g(t) if t ∈ [a,b) ∩ D_g. In particular, given p ∈ℒ^1_g([a,b),ℝ) a function satisfying Condition (<ref>), then p̃∈ℒ^1_g([a,b),ℝ), and e_p(·, a)∈𝒜𝒞_g([a,b),ℝ), the reader is referred to <cit.> and improvements in <cit.>. Now, we appeal to the generalization of the Grönwall Lemma to the Stieltjes derivative introduced by Larivière in <cit.>, and further generalized in <cit.>. This lemma will play a crucial role in establishing global solutions defined on the positive real half-line as we shall prove in the following section. Let u∈𝒜𝒞_g([a,b],ℝ). Assume that there exist functions k,p∈ℒ^1_g([a,b),ℝ), satisfying 1+p(t)μ_g({t})>0 for all t∈ [a,b)∩ D_g, such that u_g'(t)≤ k(t)+ p(t) u(t) for g-almost all t∈[a,b). Then, u(t)≤ e_p(t,a) (∫_[a,t)e_p^-1(s,a) k(s)/1+p(s)μ_g({s}) dμ_g(s)+u(a)), t∈ [a,b]. § PROLONGATION OF SOLUTIONS AND MAXIMAL INTERVAL OF EXISTENCE Let O be a nonempty open set of ℝ^n with respect to the usual topology τ_ℝ^n, and I a g-open set of ℝ containing t_0≥ 0 with sup I > t_0. If we set Ω :=I× O, then Ω is an open set with respect to the topology τ̂_g on ℝ×ℝ^n, generated by the open sets U× V such that U∈τ_g and V∈τ_ℝ^n. Here and afterwards, B_ℝ^n(𝐱,δ) denotes the open ball of ℝ^n centered at 𝐱∈ℝ^n with radius δ>0. Let us consider the Stieltjes dynamical system: 𝐱_g'(t) = 𝐟(t,𝐱(t)) for g-almost all t≥ t_0, t∈ I, 𝐱(t_0) =𝐱_0 ∈ O, where 𝐟=(f_1,…,f_n):Ω∩([t_0,∞)×ℝ^n) →ℝ^n and Ω defined above satisfying the following assumptions: (H_Ω) For every (t,𝐱) ∈Ω with t≥ t_0, (a) one of the following conditions hold: (a1) there exists δ>0 such that (t-δ,t+δ)× B_ℝ^n(𝐱,δ) ⊂Ω; (a2) if for every δ>0, (t-δ,t+δ)× B_ℝ^n(𝐱,δ) ⊄Ω, then t∈ D_g and there exists ϵ>0 such that (t-ϵ,t]× B_ℝ^n(𝐱,δ)⊂Ω; (b) (t_0,𝐱_𝐟,t_0^+)∈Ω, and if (t,𝐱) ∈Ω∩ (D_g×ℝ^n) such that (t,𝐱_𝐟,t^+)∈Ω, then (t,𝐱_𝐟,t^+) satisfies Condition (H_Ω)(a)(1). Here and afterward, the notation 𝐱_𝐟,t^+ refers to 𝐱_𝐟,t^+:=𝐱+μ_g({t})𝐟(t,𝐱); (H_𝐟) * for all 𝐱∈ O, 𝐟(·,𝐱) is g-measurable; * 𝐟(·,𝐱_0)∈ℒ^1_g,loc([t_0,∞),ℝ^n); * 𝐟 is g-integrally locally Lipschitz continuous, i.e. for every r>0, there exists a function L_r∈ℒ^1_g,loc([t_0,∞),[0,∞)) such that 𝐟(t,𝐱)-𝐟(t,𝐲)≤ L_r(t) 𝐱-𝐲, for g-almost all t∈ I ∩ [t_0,∞) and all 𝐱,𝐲∈B_ℝ^n(𝐱_0,r)∩ O. We recall the local existence result <cit.>. Assume that the conditions in (H_𝐟) hold. Then there exists τ>0 such that the system (<ref>) has a unique solution 𝐱∈𝒜𝒞_g([t_0,t_0+τ],ℝ^n). In the sequel, as shown in <cit.>, the solution given by Theorem <ref> can be extended to intervals larger than [t_0,t_0+τ] up to a maximal interval as long as the graph of the solution does not leave Ω and the right-hand side limit of the solution at t_0+τ belongs to O. It should be noted that the derivator g may not necessarily be continuous at t_0. For this purpose, Condition (H_Ω) is necessary to seek the maximal interval of existence of the solution. Indeed, notice that Condition (H_Ω) guarantees existence of a local solution on an interval [t_0,t_0+τ] even in the case where t_0∈ D_g, and insures extension of the solution 𝐱 on some larger interval [t_0,t_0+ϵ], ϵ>τ if t_0+τ∈ D_g and 𝐱(t_0+τ^+) ∈ O. Let us define the set 𝒮(t_0,𝐱_0):={𝐱:I_𝐱 = J_𝐱∩[t_0,∞)→ℝ^n: 𝐱 is a solution of the system (<ref>)}, where J_𝐱 is a g-open interval containing t_0 with sup J_𝐱 > t_0. 
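Although our results are purely qualitative, the role of the jump update 𝐱_𝐟,t^+ = 𝐱 + μ_g({t})𝐟(t,𝐱) can be visualised with a naive forward scheme in which every increment of the state is driven by an increment of g: the trajectory freezes where g is constant and jumps across the atoms of μ_g. The sketch below is only an illustration for a scalar equation and a piecewise-defined derivator; it is not a convergent solver, and the particular g and 𝐟 are assumptions chosen for the picture.

```python
import numpy as np

def g(t: float) -> float:
    """Derivator with a jump of size 0.5 at t = 1 and a plateau on [2, 3]."""
    if t <= 1.0:
        return t
    if t <= 2.0:
        return t + 0.5
    if t <= 3.0:
        return 2.5
    return t - 0.5

def g_euler(f, x0: float, t0: float, t_end: float, n_steps: int = 4000):
    """Explicit 'g-Euler' sweep: x_{k+1} = x_k + (g(t_{k+1}) - g(t_k)) * f(t_k, x_k)."""
    ts = np.linspace(t0, t_end, n_steps + 1)
    xs = np.empty_like(ts)
    xs[0] = x0
    for k in range(n_steps):
        dg = g(ts[k + 1]) - g(ts[k])   # carries the whole jump when an atom of mu_g is crossed
        xs[k + 1] = xs[k] + dg * f(ts[k], xs[k])
    return ts, xs

if __name__ == "__main__":
    # logistic-type right-hand side: the state stays constant on [2, 3], where g is flat
    ts, xs = g_euler(lambda t, x: x * (1.0 - x), x0=0.1, t0=0.0, t_end=4.0)
    print(float(xs[-1]))
```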
In the sequel, we adopt the notation t_sup:=sup I_𝐱. Let 𝐱,𝐲∈𝒮(t_0,𝐱_0). * We say that 𝐱 is smaller than 𝐲 (and we denote 𝐱≺𝐲), if and only if * I_𝐱⊂ I_𝐲; * sup I_𝐲> sup I_𝐱; * 𝐲|_I_𝐱=𝐱. In this case, we say that 𝐱 is extendible to the right and 𝐲 is a prolongation to the right of 𝐱. * We write that 𝐱≼𝐲𝐱≺𝐲 or 𝐱=𝐲. It is worth mentioning that given a solution 𝐱:[t_0,t_1)→ℝ^n such that 𝐱(t_1^-):=lim_t→ t_1^-𝐱(t) ∈ O and t_1∈ D_g, to not increase the notation, we will replace 𝐱 by the function 𝐱:[t_0,t_1]→ℝ^n defined by 𝐱=𝐱(t) t∈ [t_0,t_1), 𝐱(t_1^-), t=t_1. The next result asserts that two prolongations to the right of a solution are equal on the common interval of existence, the reader is referred to <cit.> for the proof. Assume that (H_𝐟) holds. Let 𝐱,𝐲,𝐳∈𝒮(t_0,𝐱_0) be such that 𝐲 and 𝐳 are two prolongations of 𝐱 then 𝐲=𝐳 on I_𝐲∩ I_𝐳. In <cit.>, extendible solutions to the right were characterized as follows. Assume that (H_𝐟) holds. Let 𝐱∈𝒮(t_0,𝐱_0). The following assumptions are equivalent: * 𝐱 is extendible to the right; * * Graph(𝐱):={(t,𝐱(t)): t∈ I_𝐱} is bounded; * A∪ A^+ ⊂Ω where A={(t_sup,𝐬)∈ [t_0,∞)×ℝ^n: ∃{t_n}_n⊂ I_𝐱, t_n ↗ t_sup and 𝐬=lim_n→∞𝐱(t_n)}, A^+={(t_sup,𝐬_𝐟,t_sup^+):(t_sup,𝐬)∈ A}. Let 𝐱∈𝒮(t_0,𝐱_0). We say that 𝐱 is a maximal solution of (<ref>) defined on an interval I_𝐱 if, for every 𝐲∈𝒮(t_0,𝐱_0) satisfying 𝐱≼𝐲, we have 𝐱=𝐲. I_𝐱 is referred to as the maximal interval of existence. As shown in <cit.>, the existence of the maximal solution holds as a consequence of using Zorn's lemma <cit.>. For this purpose, let us consider 𝐱∈𝒮(t_0,𝐱_0), and define the set 𝒳: 𝒳={𝐲∈𝒮(t_0,𝐱_0): 𝐱≼𝐲}. 𝒳 is a nonempty partially ordered set since 𝐱∈𝒳. Now, following the same argument as in the proof of <cit.>, by using the partial order ≼ defined in Definition <ref> instead, we deduce that every chain of 𝒳 has the largest element. This yields the existence of a maximal solution defined on ⋃_𝐲∈𝒳I_𝐲, constructed by taking the union of all the prolongations to the right of 𝐱. In addition, Theorem <ref> guarantees the uniqueness of the maximal solution, and we obtain the following theorem. Assume that (H_Ω) and (H_𝐟) hold. There exists a unique maximal solution 𝐱∈𝒮(t_0,𝐱_0) such that ω(t_0,𝐱_0):=sup I_𝐱≤∞. The next theorem highlights three alternative cases that occur, the reader is referred to <cit.> for the proof. Assume that (H_Ω) and (H_𝐟) hold. Let 𝐱∈𝒮(t_0,𝐱_0) be the maximal solution of (<ref>), then one of the alternatives holds: (A1) ω(t_0,𝐱_0)=∞; (A2) ω(t_0,𝐱_0)<∞, and for every {t_n}_n ⊂ I_𝐱 such that t_n ↗ω(t_0,𝐱_0), {𝐱(t_n)}_n is not bounded; (A3) ω(t_0,𝐱_0)<∞, moreover, there exists {t_n}_n ⊂ I_𝐱 satisfying t_n ↗ω(t_0,𝐱_0) and {𝐱(t_n)}_n is a bounded sequence, such that for every subsequence {t_n_k}_k verifying 𝐱(t_n_k) →𝐦, we have that {(ω(t_0,𝐱_0), 𝐦),(ω(t_0,𝐱_0),𝐦_𝐟,ω(t_0,𝐱_0)^+)}⊄Ω. It is worth mentioning that the maximal solution 𝐱 of (<ref>) shall be g-absolutely continuous on every interval [a,b]⊂ I_𝐱, which immediately holds from the three alternatives of Theorem <ref>. Notice that if Alternative (A1) holds then I_𝐱=[t_0,∞), if Alternative (A2) or (A3) holds, then I_𝐱=[t_0,ω(t_0,𝐱_0)) or I_𝐱=[t_0,ω(t_0,𝐱_0)]. Thanks to Theorem <ref> established by Larivière, we obtain the following corollary, which ensures the global existence of the maximal solution over the whole interval [t_0,∞) in the case where [t_0,∞)⊂ I. Assume that (H_Ω) and (H_𝐟) hold. Let 𝐱∈𝒮(t_0,𝐱_0) be the maximal solution of (<ref>). 
If Ω^+:={(t,𝐮) ∈Ω: 𝐮_𝐟,t^+∈Ω}=Ω, and there exists a compact set D⊂ O such that 𝐱(t) ∈ D for every t∈ I_𝐱, then ω(t_0,𝐱_0)=∞. Let us assume that ω(t_0,𝐱_0)<∞. Since D is compact, for all {t_n}_n ⊂ I_𝐱 such that t_n ↗ω(t_0,𝐱_0), {𝐱(t_n)}_n is bounded. Thus, it results from Theorem <ref> that Alternative (A3) holds. Therefore, there exists {t_n}_n ⊂ I_𝐱 with t_n ↗ω(t_0,𝐱_0) and {(t_n ,𝐱(t_n))}_n is bounded, such that for a fixed subsequence {t_n_k}_k verifying 𝐱(t_n_k) →𝐦, we have that {(ω(t_0,𝐱_0), 𝐦),(ω(t_0,𝐱_0),𝐦_𝐟,ω(t_0,𝐱_0)^+)}⊄Ω. On the other hand, D being a compact set yields that (t_n_k,𝐱(t_n_k)) → (ω(t_0,𝐱_0),𝐦) ∈ [t_0,ω(t_0,𝐱_0)] × D ⊂Ω. Now, since Ω^+=Ω, we deduce that (ω(t_0,𝐱_0),𝐦_𝐟,ω(t_0,𝐱_0)^+) ∈Ω, which contradicts {(ω(t_0,𝐱_0), 𝐦),(ω(t_0,𝐱_0),𝐦_𝐟,ω(t_0,𝐱_0)^+)}⊄Ω. Hence, ω(t_0,𝐱_0)=∞. Now based on Theorems <ref> and <ref>, we provide a characterization of the maximal solution of the problem (<ref>). Assume that (H_Ω) and (H_𝐟) hold. Let 𝐱∈𝒮(t_0,𝐱_0). The following assumptions are equivalent: * 𝐱 is maximal; * for every compact set K⊂Ω, there exists t_K ∈ I_𝐱 such that {(t,𝐱(t)),(t,𝐱(t)+μ_g({t})𝐟(t,𝐱(t)))}⊄K, for all t≥ t_K, t∈ I_𝐱. Assume that 𝐱 is maximal. By Theorem <ref>, two cases may occur: Case 1: if ω(t_0,𝐱_0)=sup I_𝐱=∞, by contradiction assume that there exist a compact set K ⊂Ω and a sequence {t_n}_n ⊂ I_𝐱 such that t_n↗ω(t_0,𝐱_0), and {(t_n,𝐱(t_n)),(t_n,𝐱(t_n)+μ_g({t_n})𝐟(t_n,𝐱(t_n)))}⊂ K for all n∈ℕ. This implies that {(t_n,𝐱(t_n))}_n is bounded. Therefore, there exists a convergent subsequence {(t_n_k,𝐱(t_n_k))}_k such that (t_n_k,𝐱(t_n_k)) → (τ,𝐮)∈ K ⊂ [t_0,∞)× O. Particularly, t_n_k↗τ which contradicts t_n ↗ω(t_0,𝐱_0)=∞. Case 2: if ω(t_0,𝐱_0)< ∞, then by Theorem <ref>, there are two subcases: Subcase 1: if Alternative (A2) holds, then I_𝐱=[t_0,ω(t_0,𝐱_0)) and 𝐱(t)→∞ as t ↗ω(t_0,𝐱_0). Thus, for every M>0 there exists t^* ∈ I_𝐱 such that 𝐱(t)≥ M for all t≥ t^* with t∈ I_𝐱. Hence, for every compact K ⊂Ω there exists t_K ∈ I_𝐱 such that for all t≥ t_K with t∈ I_𝐱, we have (t,𝐱(t))∉ K, in particular {(t,𝐱(t)), (t,𝐱(t)+μ_g({t})𝐟(t,𝐱(t)))}⊄K. Subcase 2: if Alternative (A3) holds, then we distinguish two cases. If I_𝐱=[t_0,ω(t_0,𝐱_0)), then Graph(𝐱) approaches the boundary of Ω and (ω(t_0,𝐱_0),𝐱(ω(t_0,𝐱_0)^-))∉Ω. Thus, for every compact K ⊂Ω, there exists t_K ∈ [t_0,ω(t_0,𝐱_0)) such that, for all t≥ t_K with t∈ I_𝐱, (t,𝐱(t)) ∉ K, which yields {(t,𝐱(t)),(t,𝐱(t)+μ_g({t})𝐟(t,𝐱(t)))}⊄K. Now, if I_𝐱=[t_0,ω(t_0,𝐱_0)], then ω(t_0,𝐱_0) ∈ D_g, (ω(t_0,𝐱_0),𝐱(ω(t_0,𝐱_0)))∈Ω and (ω(t_0,𝐱_0),𝐱(ω(t_0,𝐱_0))+μ_g({ω(t_0,𝐱_0)})𝐟(ω(t_0,𝐱_0), 𝐱(ω(t_0,𝐱_0))))∉Ω. Thus, for every compact K ⊂Ω, there exists t_K ∈ [t_0,ω(t_0,𝐱_0)] such that, for all t≥ t_K with t∈ I_𝐱, (t,𝐱(t)+μ_g({t})𝐟(t,𝐱(t))) ∉ K, which yields {(t,𝐱(t)),(t,𝐱(t)+μ_g({t})𝐟(t,𝐱(t)))}⊄K. Conversely, by contradiction, let us assume that 𝐱:I_𝐱→ℝ^n is not maximal. Thus, t_sup=sup I_𝐱 < ∞ and 𝐱 is extendible to the right. From Theorem <ref>, it follows that Graph(𝐱) is bounded and A∪ A^+ ⊂Ω. Thus, [t_0,t_sup]×Graph(𝐱) is compact, 𝐱(t_sup^-)=𝐱(t_sup)=𝐬∈ O and 𝐬_𝐟,t_sup^+∈ O. Consequently, by Theorem <ref>, there exists a prolongation to the right 𝐱̂:[t_0,t_sup +ϵ) →ℝ^n such that 𝐱̂|_I_𝐱=𝐱. Therefore, for the compact K=Graph(𝐱)∪{(t_sup,𝐬_𝐟,t_sup^+)}⊂Ω, we have that {(t,𝐱(t)),(t,𝐱(t)+μ_g({t})𝐟(t,𝐱(t)))}⊂ K for all t∈ I_𝐱, which yields a contradiction. Hence, 𝐱 is maximal. The logical negation of Theorem <ref> also provides an interesting characterization of extendible solutions. Assume that (H_Ω) and (H_𝐟) hold. Let 𝐱∈𝒮(t_0,𝐱_0). 
The following assumptions are equivalent: * 𝐱 extendible to the right; * there exist a compact set K⊂Ω, and a sequence {t_n}_n⊂ I_𝐱 with t_n ↗ω(t_0,𝐱_0) such that {(t_n,𝐱(t_n)),(t_n,𝐱(t_n)+μ_g({t_n})𝐟(t_n,𝐱(t_n)))}⊂ K, for all n∈ℕ. Given the generalization of the Grönwall lemma, Lemma <ref>, we can state the next theorem which provides the global existence of the solution over [t_0,∞) and a priori bound of the solution when [t_0,∞) ⊂ I and O=ℝ^n. Assume that (H_Ω) and (H_𝐟) hold. Let [t_0,∞) ⊂ I and O=ℝ^n and assume that the conditions in (H_𝐟) hold for 𝐟:[t_0,∞)×ℝ^n→ℝ^n satisfying the linear growth condition: (H_LG) there exist functions k∈ℒ^1_g,loc([t_0,∞),ℝ), and p∈ℒ^1_g,loc([t_0,∞),[0,∞)) such that 𝐟(t,𝐮)≤ k(t)+ p(t)𝐮, for g-almost all t∈[t_0,∞) and all 𝐮∈ O. If 𝐱∈𝒮(t_0,𝐱_0) is the maximal solution of (<ref>), then 𝐱(t)≤ e_p(t,t_0) (∫_[t_0,t)e_p^-1(s,t_0) k(s)/1+p(s)μ_g({s}) dμ_g(s)+𝐱_0), for all t∈ [t_0,T], T∈ I_𝐱. Moreover, we have I_𝐱=[t_0,∞). Let 𝐱∈𝒮(t_0,𝐱_0) be the maximal solution of (<ref>) defined on the maximal interval I_𝐱 with ω(t_0,𝐱_0)=sup I_𝐱. By (H_LG), we have for T>t_0 with T∈ I_𝐱 that (𝐱)_g'(t) ≤𝐱_g'(t)=𝐟(t,𝐱(t))≤ k(t)+ p(t)𝐱(t) for g-almost all t∈ [t_0,T). Observe that · is Lipschitz continuous on ℝ^n. Thus, by Lemma <ref>, 𝐱(·)∈𝒜𝒞_g([t_0,T],ℝ). Using the generalized version of the Grönwall Lemma, Lemma <ref>, we obtain 𝐱(t)≤ e_p(t,t_0) (∫_[t_0,t)e_p^-1(s,t_0) k(s)/1+p(s)μ_g({s}) dμ_g(s)+𝐱_0) for all t∈ [t_0,T]. Assume by contradiction that ω(t_0,𝐱_0)<∞. Thus, for all t∈ [t_0,T], we obtain a larger bound of 𝐱: 𝐱(t)≤sup_t∈ [t_0,T] e_p(t,t_0) (∫_[t_0,t)e_p^-1(s,t_0) |k(s)|/1+p(s)μ_g({s}) dμ_g(s)+𝐱_0). Since p∈ℒ^1_g,loc([t_0,∞),[0,∞)), then e_p(t,t_0)(1+p(t)μ_g({t}))≥ 1 for all t∈ [t_0,T]. Consequently, 𝐱(t) ≤sup_t∈ [t_0,T]e_p(t,t_0) (∫_[t_0,t) |k(s)| dμ_g(s)+𝐱_0) = e_p(T,t_0) (k_ℒ^1_g([t_0,T),ℝ)+𝐱_0). As T↗ω(t_0,𝐱_0) <∞, 𝐱(t) ∈ B_ℝ^n(0,M_1) for all t∈ I_𝐱 where M_1=e_p(ω(t_0,𝐱_0),t_0) (k_ℒ^1_g([t_0,ω(t_0,𝐱_0)),ℝ)+𝐱_0). As 𝐱(ω(t_0,𝐱_0)^-)=𝐦∈ℝ^n, then 𝐦_𝐟,ω(t_0,𝐱_0)^+∈ℝ^n. Therefore, for the compact set K=[t_0, ω(t_0,𝐱_0)]× B_ℝ^n(0,M) with M=max{M_1, 𝐦_𝐟,ω(t_0,𝐱_0)^+}, we have that {(t,𝐱(t)),(t,𝐱(t)+μ_g({t})𝐟(t,𝐱(t)))}⊂ K for all t∈ I_𝐱. By Corollary <ref>, we obtain that 𝐱 is extendible to the right which is a contradiction. Hence, ω(t_0,𝐱_0)=∞. § LYAPUNOV-LIKE STABILITY RESULTS In this section, we introduce Lyapunov-type results in the context of Stieltjes dynamical systems, based on the classical Lyapunov's second method, considering Stieltjes dynamical systems of the form: 𝐱_g'(t) = 𝐟(t,𝐱(t)) for g-almost all t≥ t_0≥ 0, t∈ I, where 𝐟=(f_1,…,f_n):ℝ^+× B_ℝ^n(0,r_0)→ℝ^n satisfying 𝐟(t,0)=0 for all t≥ 0. This study permits to draw conclusions about the local behavior of solutions of the dynamical system (<ref>) around the equilibrium 𝐱=0, which is called in the stability literature the trivial solution. Before stating the assumptions on 𝐟 to guarantee the existence and the uniqueness of solution of the system (<ref>) starting at some t_0≥ 0, notice that if t_0 ∈ D_g, and 𝐱_0 ∈ B_ℝ^n(0,r_0) such that 𝐱_0+μ_g({t_0})𝐟(t_0,𝐱_0) ∉ B_ℝ^n(0,r_0), then existence of a solution 𝐱 satisfying 𝐱(t_0)=𝐱_0 cannot be guaranteed. Thus, we assume (H_r)there exists r∈(0,r_0] such that for all (t,𝐮) ∈ (ℝ^+ ∩ D_g)× B_ℝ^n(0,r), we have 𝐮+μ_g({t}) 𝐟(t,𝐮) ∈ B_ℝ^n(0,r_0), in other terms, for all t∈ (ℝ^+ ∩ D_g), B_ℝ^n(0,r)⊂{𝐮∈ B_ℝ^n(0,r_0): 𝐮+μ_g({t}) 𝐟(t,𝐮) ∈ B_ℝ^n(0,r_0) }. 
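To make the invariance hypothesis (H_r) more tangible, the following minimal sketch in Python may help (the impulsive field, the jump sizes ν and the radii are assumptions chosen purely for illustration, not objects taken from the text): it samples states in a candidate ball B_ℝ(0,r) and verifies that the one-jump update 𝐮 ↦ 𝐮 + μ_g({t})𝐟(t,𝐮) stays inside B_ℝ(0,r_0), which is exactly what (H_r) requires.

```python
import numpy as np

def check_Hr(post_jump, r, r0, jump_times, n_samples=1000):
    """Numerically probe hypothesis (H_r) for a scalar field: for every jump
    time t in D_g and every u in the open ball (-r, r), the post-jump state
    u + mu_g({t}) f(t, u) -- supplied here as post_jump(t, u) -- must lie in
    the open ball (-r0, r0)."""
    u_grid = np.linspace(-r, r, n_samples + 2)[1:-1]   # interior points of (-r, r)
    return all(abs(post_jump(t, u)) < r0
               for t in jump_times for u in u_grid)

# Illustrative impulsive field (all values are assumptions): f(t, u) = nu * u
# at jump times with mu_g({t}) = 1, so the post-jump state is (1 + nu) * u.
nu_contr, nu_exp, r0 = -0.5, 1.5, 1.0
contractive = lambda t, u: (1.0 + nu_contr) * u   # |1 + nu| = 0.5 <= 1
expansive   = lambda t, u: (1.0 + nu_exp) * u     # |1 + nu| = 2.5 >  1

print(check_Hr(contractive, r=r0,       r0=r0, jump_times=[1, 2, 3]))  # True: r = r0 works
print(check_Hr(expansive,   r=r0,       r0=r0, jump_times=[1]))        # False
print(check_Hr(expansive,   r=r0 / 2.5, r0=r0, jump_times=[1]))        # True: shrink r
```

In particular, a contractive jump allows the choice r = r_0, whereas an expansive jump forces r to shrink by the expansion factor; this is the situation met in the impulsive examples below.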
Now, let us assume that 𝐟 fulfills (H_𝐟), where Ω is an open set of the form Ω=I× B_ℝ^n(0,r) satisfying (H_Ω) as in Section 5.1 with I a g-open set containing the whole ℝ^+. Therefore, under hypotheses (H_Ω), (H_𝐟), and (H_r), it follows from Theorem <ref> that, for every (t_0,𝐱_0)∈ℝ^+× B_ℝ^n(0,r), there exists a unique maximal solution 𝐱 = 𝐱(·,t_0,𝐱_0)∈𝒮(t_0,𝐱_0) ∩𝒜𝒞_g,loc(I_t_0,𝐱_0, ℝ^n) of (<ref>) defined on a maximal interval of existence that we denote I_t_0,𝐱_0 since it depends on the initial data (t_0,𝐱_0). As before, we denote ω(t_0,𝐱_0)=sup I_t_0,𝐱_0≤∞. §.§ Lyapunov stability notions In this subsection, we present stability concepts within the framework of Stieltjes' differentiation. Through illustrative examples, we highlight the influence of the sets C_g and D_g on the change of stability properties. The trivial solution 𝐱=0 of the system (<ref>) is said to be ∙ stable if, for all ϵ >0 and t_0∈ℝ^+, there exists δ=δ(ϵ,t_0)∈(0,r) such that 𝐱_0<δ implies that 𝐱(t,t_0,𝐱_0)< ϵ for all t∈ I_t_0,𝐱_0; ∙ uniformly stable if, for all ϵ >0, there exists δ=δ(ϵ) ∈(0,r) such that for all t_0∈ℝ^+, 𝐱_0<δ implies that 𝐱(t,t_0,𝐱_0)< ϵ for all t∈ I_t_0,𝐱_0. In the case where the trivial solution 𝐱=0 of the system (<ref>) is stable, for every t_0≥0 fixed, observe that for ϵ=r, there exists δ=δ(ϵ,t_0)∈ (0,r) such that for all 𝐱_0 ∈ B_ℝ^n(0,δ), 𝐱(t,t_0,𝐱_0)< ϵ for all t∈ I_t_0,𝐱_0. Using Theorem <ref> and (H_Ω) and (H_r), we deduce that I_t_0,𝐱_0=[t_0,∞). Furthermore, observe that if the stability of the trivial solution 𝐱=0 is uniform, then δ do not depend on t_0. The following definition is a notion of asymptotic stability. This concerns the behavior of solutions as t→∞. The trivial solution 𝐱=0 of the system (<ref>) is said to be ∙ asymptotically stable if it is stable and, for every t_0∈ℝ^+, there exists δ=δ(t_0)∈(0,r) such that, for all 𝐱_0 ∈ B_ℝ^n(0,δ) and ϵ>0, there exists σ=σ(t_0,𝐱_0,ϵ) >0 such that 𝐱(t,t_0,𝐱_0)< ϵ for all t∈ [t_0+σ,∞) ∩ I_t_0,𝐱_0; ∙ uniformly asymptotically stable if it is uniformly stable and there exists δ∈(0,r) such that, for every ϵ>0, there exists σ=σ(ϵ)>0 such that, for all t_0 ∈ℝ^+ and 𝐱_0 ∈ B_ℝ^n(0,δ), 𝐱(t,t_0,𝐱_0)<ϵ for all t∈ [t_0+σ,∞) ∩ I_t_0,𝐱_0. In the following example, we compare the stability properties of the trivial solution of a linear Stieltjes dynamical system to the ones in the classical case, observing the change of the stability properties depending on the sets C_g and D_g. The resolution of linear Stieltjes differential equations has been studied in the literature, see for instance <cit.>. Let us consider the linear Stieltjes dynamical system x_g'(t) = c x(t) for g-almost all t≥ t_0≥0, x(t_0) =x_0∈ℝ, for c∈ℝ. In the classical case of derivation where g≡id_ℝ, for c>0, the equilibrium x=0 is not stable given that the solutions have the form x_0 e^c(t-t_0) and they are not bounded on [t_0,∞). Thus, for every ϵ>0 and t_0≥0, there is no δ>0 such that |x_0|<δ implies that |x_0e^c(t-t_0)|< ϵ for all t≥ t_0. However, for c<0, the equilibrium x=0 is asymptotically stable since stability holds and given that the solutions x_0 e^c(t-t_0)→ 0 for all x_0 ∈ℝ. In the case where c=0, every constant z ∈ℝ is a uniformly stable equilibrium. Now, let us reconsider the dynamical system (<ref>), where g:ℝ→ℝ is defined by g(t)=t for t≤ 1 and g(t)=1 for t≥ 1. Thus, the solutions of the problem (<ref>) have the form x_0 e^c(g(t)-g(t_0)), t≥ t_0 for every x_0∈ℝ and t_0≥ 0. 
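A short numerical sketch in Python (the values of c, x_0 and t_0 below are assumptions chosen only for illustration) evaluates this explicit solution for the truncated derivator and previews the comparison carried out next: since g freezes at t=1, the factor e^{c(g(t)-g(t_0))} is bounded even when c>0, while for c<0 the solution stops decaying at t=1 and never reaches 0.

```python
import numpy as np

def g(t):
    """Truncated derivator: g(t) = t for t <= 1 and g(t) = 1 for t >= 1."""
    return np.minimum(t, 1.0)

def x(t, c, t0=0.0, x0=1.0):
    """Explicit solution x(t) = x0 * exp(c * (g(t) - g(t0))) of x_g' = c x."""
    return x0 * np.exp(c * (g(t) - g(t0)))

t = np.linspace(0.0, 5.0, 501)
for c in (2.0, -2.0):                       # assumed illustrative rates
    traj = x(t, c)
    # c = +2: the solution grows only up to t = 1, then stays at x0 * e^c,
    # so it remains bounded and x = 0 is stable (unlike the classical case).
    # c = -2: the solution decays until t = 1 and then freezes at x0 * e^c,
    # so it does not tend to 0 and asymptotic stability is lost.
    print(c, traj.max(), traj[-1])
```

For c>0 the printed maximum equals x_0 e^{c(g(1)-g(t_0))}, which is precisely the quantity controlled by the choice of δ in the stability argument below.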
Now, we show that the stability properties of the trivial solution x=0 of the system (<ref>) differ from the classical case, for c>0 and c<0. For c>0, we deduce the stability of the trivial solution x=0. Indeed, for all ϵ>0 and t_0≥0, there exists δ=ϵ/e^c(g(1)-g(t_0)) such that |x_0|<δ implies that |x_0 e^c(g(t)-g(t_0))|< ϵ for all t≥ t_0. Whereas, for c<0 the equilibrium x=0 is merely uniformly stable compared to the classical case, since for all ϵ>0, there exists δ=ϵ>0 such that |x_0|<δ implies that |x_0 e^c(g(t)-g(t_0))|< ϵ for all t≥ t_0. Observe that asymptotic stability does not hold for any c∈ℝ, since x_0 e^c(g(t)-g(t_0))→ x_0 e^c(g(1)-g(t_0))↛ 0, as t→∞ for all t_0≥0 and x_0 ∈ℝ^*. Next, we change slightly the dynamical system (<ref>), to incorporate jumps. Let us consider the derivator g_1(t)=t+∑_n∈ℕχ_[n,∞)(t) for all t∈ℝ, and reconsider the dynamical system with g_1 instead: x_g_1'(t) = c x(t) t≥ t_0≥0, with t∉ D_g_1=ℕ, ν x(t), t≥ t_0≥0, with t∈ D_g_1, where c,ν∈ℝ with ν∈(-1,∞)∖{0}. Again the solution of (<ref>) satisfying x(t_0)=x_0 has the form x(t)=x_0 e_c_*(t,t_0)=x_0e^∫_[t_0,t)c_*(s) dμ_g_1(s), where c_*(t)= c t∈[t_0,∞)∖ D_g_1, log(1+ν) t∈[t_0,∞)∩ D_g_1. In Figure <ref>, we can observe different patterns depending on the values of c and ν. This implies that the presence of discontinuities can both destabilize and restore the stability properties of a dynamical system. §.§ Stability results based on Lyapunov's function In the classical case where g≡id_ℝ, Lyapunov's method is used to study the behavior of a trajectory of the system in a neighborhood of the trivial solution 𝐱=0, by means of a function V depending on time and state; known as a Lyapunov function. This function can be understood as an energy representation of the system (<ref>), since in numerous applications, the function considered is the total energy of the system (<ref>) through time, see for instance <cit.> for an example of an energy-based Lyapunov function for physical systems. In our context, the derivator g takes into account the relevance of each moment during the process by means of the changes of the slopes of g accordingly. Put differently, g amplifies an alternative measurement for time, which may differ from the linear timeline typically used in the classical case where g≡id_ℝ, see for instance the works <cit.> where g represents the life cycle of some populations, also we refer to <cit.> for more applications. Nevertheless, in the context of Stieltjes differentiation, we still can rely on Lyapunov's function, particularly, based on the g-derivative of its composition with maximal solutions of the system (<ref>) under consideration. This g-derivative of the composition will then permit a better understanding of how the energy of the system (<ref>) changes, but with respect to this new observed time described by g. More precisely, it will describe how the energy of the system varies in response to the variation of this new "curved" scale of time. Based on the definition of the Stieltjes derivative in Definition <ref>, we define the partial Stieltjes derivative as follows. Given a function V:ℝ^+× B_ℝ^n(0,r_0)→ℝ and t, x_1,…,x_n its arguments. The partial g-derivative of V with respect to the first argument at a point (t,𝐱)∈(ℝ^+∖ C_g)× B_ℝ^n(0,r_0) is defined as: ∂ V/∂_g t(t,𝐱)= lim_s→ tV(s,𝐱)-V(t,𝐱)/g(s)-g(t), if t ∈ℝ^+ ∖ D_g, V(t^+,𝐱)-V(t,𝐱)/g(t^+)-g(t), if t ∈ℝ^+ ∩ D_g, provided that the limits exist. Combining Proposition <ref> and Definition <ref>, we obtain the following technical proposition. 
The proof involves a formula related to the g-derivative of the composition involving a function with two variables. Formulae of this fashion were stated without proof in <cit.> for t∉ D_g. In the next proposition, we derive formulae in the case where D_g and N_g are discrete. Let g: ℝ→ℝ be a left-continuous and nondecreasing function such that D_g and N_g are discrete. Given a function V:ℝ^+× B_ℝ^n(0,r_0) →ℝ satisfying the following assumptions: * V(·,𝐮) is g-differentiable on ℝ^+∖ (D_g ∪ C_g) for all 𝐮∈ B_ℝ^n(0,r_0); * V(t,·) ∈ C^1(B_ℝ^n(0,r_0),ℝ) for all t≥ 0; * ∂ V/∂_g t(·,𝐮) is continuous on ℝ^+∖ (D_g ∪ C_g) for all 𝐮∈ B_ℝ^n(0,r_0); * for (t_0,𝐱_0)∈ℝ^+× B_ℝ^n(0,r), V(·,𝐱(·))∈𝒜𝒞_g,loc(I_𝐱,ℝ) for every solution 𝐱 :I_𝐱→ B_ℝ^n(0,r_0) of the system (<ref>). Then, for all [a,b]⊂ I_𝐱, we obtain for g-almost all t∈ [a,b]∖ (D_g ∪ C_g) that V_g'(t,𝐱(t))= ∂ V/∂_g t(t,𝐱(t))+ ∑_i=1^n∂ V/∂ x_i(t,𝐱(t)) f_i(t,𝐱(t)). Moreover, if t∈ [a,b)∩ D_g, then V_g'(t,𝐱(t))=V(t^+,𝐱(t)+ μ_g({t})𝐟(t,𝐱(t)))-V(t,𝐱(t))/g(t^+)-g(t). For (t_0,𝐱_0)∈ℝ^+ × B_ℝ^n(0,r), let us consider a solution 𝐱 :I_𝐱→ B_ℝ^n(0,r_0) of the system (<ref>). Thus, the composition V(·,𝐱(·))∈𝒜𝒞_g(I_𝐱,ℝ). Let [a,b]⊂ I_𝐱. For g-almost every t ∈ [a,b] ∖ (D_g ∪ C_g): V_g'(t,𝐱(t)) = lim_s→ tV(s,𝐱(s))-V(t,𝐱(t))/g(s)-g(t) = lim_s→ tV(s,𝐱(s))-V(t,𝐱(s))/g(s)-g(t)+V(t,𝐱(s))-V(t,𝐱(t))/g(s)-g(t) = lim_s→ tV(s,𝐱(s))-V(t,𝐱(s))/g(s)-g(t) +∑_i=1^n( V(t,(x_1(t),…,x_i-1(t),x_i(s),…,x_n(s)))/g(s)-g(t) -V(t,(x_1(t),…,x_i(t),x_i+1(s),…,x_n(s)))/g(s)-g(t)). For s sufficiently close to t, and since D_g and N_g are discrete, g is continuous and increasing on the interval with endpoint points s and t. By applying Corollary <ref> to the function V(·,𝐱(s)), we obtain that there exists c between s and t such that V(s,𝐱(s))-V(t,𝐱(s))/g(s)-g(t)= ∂ V/∂_g t(c,𝐱(s)). As s→ t, c→ t, and using Condition (3) we obtain V(s,𝐱(s))-V(t,𝐱(s))/g(s)-g(t)= ∂ V/∂_g t(c,𝐱(s))→∂ V/∂_g t(t,𝐱(t)). Therefore, V_g'(t,𝐱(t)) = ∂ V/∂_g t(t,𝐱(t))+ ∑_i=1^n∂ V/∂ x_i(t,𝐱(t))(x_i)_g'(t) = ∂ V/∂_g t(t,𝐱(t))+ ∑_i=1^n∂ V/∂ x_i(t,𝐱(t))f_i(t,𝐱(t)). For t ∈[a,b)∩ D_g, we obtain immediately that V_g'(t,𝐱(t)) =V(t^+,𝐱(t^+))-V(t,𝐱(t))/g(t^+)-g(t) =V(t^+,𝐱(t)+μ_g({t})𝐟(t,𝐱(t)))-V(t,𝐱(t))/g(t^+)-g(t). In order to establish sufficient conditions for different types of stability of the trivial solution 𝐱=0 of the system (<ref>) as defined in Subsection 4.1, we introduce specific sets of functions. A function V:ℝ^+× B_ℝ^n(0,r_0) →ℝ is said to belong to class 𝒱^g_1 if it satisfies the following conditions: * V(t,·) is continuous for all t≥ 0; * V(·,𝐱(·))∈𝒜𝒞_g,loc(I_t_0,𝐱_0,ℝ) for every function 𝐱:I_t_0,𝐱_0→ B_ℝ^n(0,r_0) of 𝒜𝒞_g,loc(I_t_0,𝐱_0,ℝ^n) maximal solution of the system (<ref>); * V(t,0)=0 for all t≥0. A function φ:ℝ^+→ℝ^+ belongs to the class 𝒦 if it fulfills the following assumptions: * φ is continuous; * φ(0)=0; * φ is increasing. Now, we state the first stability result. Assume that Conditions (H_Ω), (H_𝐟) and (H_r) hold. If there exist functions V ∈𝒱_1^g and a ∈𝒦 such that (a) a(𝐮)≤ V(t,𝐮), for all (t,𝐮)∈ℝ^+× B_ℝ^n(0,r_0); (b) for every (t_0,𝐱_0) ∈ℝ^+× B_ℝ^n(0,r), if 𝐱:I_t_0,𝐱_0→ B_ℝ^n(0,r_0) is a maximal solution of the system (<ref>), then V_g'(t,𝐱(t)) ≤ 0 for g-almost all t∈ I_t_0,𝐱_0. Then, the trivial solution of the system (<ref>) is (i)stable. (ii)uniformly stable if there exists b ∈𝒦 such that V(t,𝐮)≤ b(𝐮), for all (t,𝐮)∈ℝ^+× B_ℝ^n(0,r_0). (i) Since V∈𝒱_1^g, then, for all ϵ>0 and t_0∈ℝ^+, there exists δ=δ(ϵ, t_0) ∈(0,r) such that sup_𝐮<δV(t_0,𝐮)<a(ϵ). 
Let 𝐱:I_t_0,𝐱_0→ B_ℝ^n(0,r_0) be a maximal solution of the system (<ref>) and 𝐱_0<δ. It follows from Conditions (a) and (b) that a(𝐱(t))≤ V(t,𝐱(t)) ≤ V(t_0,𝐱_0) < a(ϵ). Thus, for all t∈ I_t_0,𝐱_0, we have that 𝐱(t)=𝐱(t,t_0,𝐱_0) < ϵ. Therefore, the trivial solution 𝐱=0 is stable. (ii) Arguing as in (i), we can choose a δ=δ(ϵ) ∈(0,r) independent of t_0 such that b(δ)<a(ϵ). Thus, using (<ref>), we obtain for all t∈ I_t_0,𝐱_0, a(𝐱(t))≤ V(t,𝐱(t)) ≤ V(t_0,𝐱_0)≤ b(𝐱_0)< b(δ)<a(ϵ). This yields that the trivial solution 𝐱=0 is uniformly stable. In the next theorem, we impose additional assumptions which will permit to insure the asymptotical stability of the trivial solution 𝐱=0 to the system (<ref>). Assume that Conditions (H_Ω), (H_𝐟) and (H_r) hold. Let V ∈𝒱_1^g, a, b ∈𝒦, ϕ : ℝ^+ →ℝ^+ continuous, and a g-measurable function w:ℝ^+→ℝ^+ be such that (a) a(𝐮) ≤ V(t,𝐮) for every (t,𝐮)∈ℝ^+× B_ℝ^n(0,r_0); (b) ϕ(s)=0 if and only if s=0; (c) for every (t_0,𝐱_0) ∈ℝ^+× B_ℝ^n(0,r), the maximal solution 𝐱:I_t_0,𝐱_0→ B_ℝ^n(0,r_0) of the system (<ref>) satisfies V_g'(t,𝐱(t)) ≤ -w(t)ϕ(𝐱(t)) for g-almost all t∈ I_t_0,𝐱_0; (d) inf_t_0 ∈ℝ^+lim_t→ +∞∫_[t_0,t_0+t) w(s) dμ_g(s) =∞. If the trivial solution 𝐱=0 of the system (<ref>) is uniformly stable, then 𝐱=0 is asymptotically stable. (i) The stability of the trivial solution 𝐱=0 holds from uniform stability. Let us choose δ_0 ∈(0, r) associated to an ϵ_0 ≤ r given by the uniform stability. Now, for a fixed t_0 ≥ 0, let ϵ>0. Again, by the uniform stability, there exists δ∈ (0,δ_0) such that, for all t̂∈ℝ^+ and every 𝐱̂_0 satisfying 𝐱̂_0 < δ, one has 𝐱̂(t,t̂,𝐱̂_0)< ϵ for all t∈ [t̂,∞) ∩ I_t̂,𝐱̂_0. We denote M = inf_s ∈ [δ,r_0)|ϕ(s)|. By Condition (b), observe that M>0. Let 𝐱_0 ∈ B_ℝ^n(0,δ_0). Since lim_t→ +∞∫_[t_0,t_0+t) w(s) dμ_g(s) =∞, we can choose σ > 0 such that ∫_[t_0,t_0+σ) w(s) dμ_g(s) > V(t_0,𝐱_0)/M. Let 𝐱 : I_t_0,𝐱_0→ B_ℝ^n(0,r_0) be a maximal solution of (<ref>). Using Remark <ref>, notice that I_t_0,𝐱_0=[t_0,∞). Now, if there exists t̂∈ [t_0,t_0+σ] such that 𝐱(t̂) < δ, then, by the uniform stability 𝐱̂(t) < ϵ for all t ∈ [t̂,∞)∩ I_t̂,𝐱(t̂), where 𝐱̂: I_t̂,𝐱(t̂)→ B_ℝ^n(0,r_0) is the maximal solution of (<ref>) satisfying the initial condition 𝐱̂(t̂) = 𝐱(t̂). By the uniqueness of the maximal solution, one has ω(t_0,𝐱_0) = ω(t̂,𝐱(t̂))=∞ and 𝐱(t) = 𝐱̂(t) for all t∈ [t̂,∞). Hence, 𝐱(t) < ϵ for all t ∈ [t_0+σ,∞)∩ I_t_0,𝐱_0. On the other hand, if 𝐱(t)≥δ for all t ∈ [t_0,t_0+σ], then using Condition (a), Theorem <ref>, and (<ref>), we obtain a(𝐱(t_0+σ)) ≤ V(t_0+σ,𝐱(t_0+σ)) =V(t_0,𝐱(t_0,t_0,𝐱_0))+∫_[t_0,t_0+σ)V_g'(s,𝐱(s)) dμ_g(s) ≤ V(t_0,𝐱_0) -∫_[t_0,t_0+σ) w(s) ϕ(𝐱(s)) dμ_g(s) ≤ V(t_0,𝐱_0) - M∫_[t_0,t_0+σ) w(s) dμ_g(s) < 0. This is a contradiction. Therefore, 𝐱=0 is asymptotically stable. In the example below, we provide an application of Theorem <ref>. Let us consider the Stieltjes dynamical system x_g'(t) = f(t,x(t)) for g-almost all t≥ t_0≥0, x(t_0)=x_0∈ℝ, with g:ℝ→ℝ defined by g(t)=t for all t≤ 1, and g(t)=t+1 for t> 1, and where f:ℝ^+×ℝ→ℝ is a function defined by f(t,x)= -xt/1+t^2 t∈ℝ^+∖ D_g, ν x t∈ℝ^+∩ D_g, for some ν∈ℝ∖{-1,0}. The function f satisfies conditions of Theorem <ref>, thus the problem (<ref>) has a maximal solution x: [t_0,+∞)→ℝ for every (t_0,x_0) ∈ℝ^+×ℝ. Observe that x=0 is an equilibrium of the dynamical system (<ref>). Let us define the function V:ℝ^+×ℝ→ℝ for every (t,x) ∈ℝ^+×ℝ by V(t,x)= x^2 t∈ [0,1], x^2/(1+ν)^2 t>1. 
Clearly V∈𝒱_g^1, and a(|u|)≤ V(t,u)≤ b(|u|) for all (t,u)∈ℝ^+×ℝ, where a,b∈𝒦 are given by a(s)=min{s^2,s^2/(1+ν)^2} and b(s)=max{s^2,s^2/(1+ν)^2} for all s∈ℝ^+. In addition, for all (t,x)∈ℝ^+×ℝ: ∂ V/∂_g t(t,x) = 0 t∈ (0,1)∪(1,∞), x^2/(1+ν)^2-x^2/g(1^+)-g(1) t=1, = 0 t∈ (0,1)∪(1,∞), 1-(1+ν)^2/(1+ν)^2x^2 t=1. Thus, by means of Proposition <ref>, for t∈ [t_0,∞)∖ D_g, we obtain V_g'(t,x(t)) = ∂ V/∂_g t(t,x(t))+∂ V/∂ x(t,x(t)) f(t,x(t)) = -2t/1+t^2 x(t)^2 t∈ [t_0,∞)∩ [0,1), -2t/(1+t^2)(1+ν)^2x(t)^2 t∈ [t_0,∞)∩ (1,∞). For t∈ [t_0,∞)∩ D_g, if t_0 ≤ 1, then t=1 and we have that V_g'(1,x(1)) =V(1^+,x(1^+))-V(1,x(1))/g(1^+)-g(1) =V(1^+,x(1)+ μ_g({1})f(1,x(1)))-V(1,x(1))/g(1^+)-g(1) =(1+ν)^2x(1)^2/(1+ν)^2-x(1)^2/g(1^+)-g(1) = 0. This implies that V_g'(t,x(t))≤-ω(t)ϕ(|x(t)|), for every t≥ t_0, with w:ℝ^+ →ℝ^+, defined by w(t)=2t/1+t^2 t∈[0,1), 0 t=1, 2t/(1+t^2)(1+ν)^2 t∈(1,∞), and ϕ∈𝒦, defined by ϕ(y)=y^2 for all y∈ℝ^+. Moreover, for every t_0∈ℝ^+ and t>1, we have ∫_[t_0,t_0+t) w(s) dμ_g(s) = ∫_[t_0,t_0+t)∩([0,1)∪{1}∪ (1,∞)) w(s) dμ_g(s) ≥1/(1+ν)^2∫_[t_0,t_0+t)∩ (1,∞)2s/1+s^2 ds = 1/(1+ν)^2(log(1+(t_0+t)^2)-sup{log(2),log(1+t_0^2)}). As, inf_t_0 ∈ℝ^+lim_t→ +∞log(1+(t_0+t)^2)-sup{log(2),log(1+t_0^2)} =+∞, we obtain that inf_t_0 ∈ℝ^+lim_t→ +∞∫_[t_0,t_0+t) w(s) dμ_g(s)=+∞. Thus, Condition (d) holds. Therefore, by means of Theorem <ref>, we deduce that x=0 is an asymptotically stable equilibrium. Figure <ref> illustrates the asymptotic behavior of solutions of the dynamical system (<ref>). In the following example, we reconsider the system (<ref>) in the case where the set D_g is infinite. Let us consider the dynamical system x_g'(t) = f(t,x(t)) for g-almost all t≥ t_0≥0, x(t_0)=x_0∈ℝ, where g:ℝ→ℝ is a derivator such that D_g={t_k}_k∈ℕ⊂(0,+∞), defined by g(t)=t+∑_k∈ℕχ_[t_k,+∞)(t) for all t∈ℝ, and f:ℝ^+×ℝ→ℝ is a function defined by f(t,x)= -xt/1+t^2 t∈ℝ^+∖ D_g, ν x t∈ℝ^+∩ D_g, for some ν∈[-2,-1). The function f satisfies conditions of Theorem <ref>, thus the problem (<ref>) has a maximal solution x: [t_0,+∞)→ℝ for every (t_0,x_0) ∈ℝ^+×ℝ. Observe that x=0 is an equilibrium of the dynamical system (<ref>). Let us define the function V:ℝ^+×ℝ→ℝ for all (t,x) ∈ℝ^+×ℝ by V(t,x)=x^2 Clearly V∈𝒱_g^1, and a(|u|)≤ V(t,u)≤ b(|u|) for all (t,u)∈ℝ^+×ℝ, where a∈𝒦 is given by a(s)=s^2 and b(s)=s^2/(1+ν)^2 for all s∈ℝ^+. In addition, for all t∈ [t_0,∞) ∂ V/∂_g t(t,x)=0. Thus, by means of Proposition <ref>, for g-almost every t∈ [t_0,∞)∖ D_g, we obtain V_g'(t,x(t)) = ∂ V/∂_g t(t,x(t))+∂ V/∂ x(t,x(t)) f(t,x(t)) = -2t/1+t^2 x(t)^2. For t_k∈ [t_0,∞)∩ D_g, we have that V_g'(t_k,x(t_k)) =V(t_k^+,x(t_k^+))-V(t_k,x(t_k))/g(t_k^+)-g(t_k) =V(t_k^+,x(t_k)+ μ_g({t_k})f(t_k,x(t_k)))-V(t_k,x(t_k))/g(t_k^+)-g(t_k) = ((1+ν)x(t_k))^2-x(t_k)^2/g(t_k^+)-g(t_k) =ν(2+ν)x(t_k)^2. This implies that V_g'(t,x(t))≤-ω(t)ϕ(|x(t)|), for g-almost every t≥ t_0, with w:ℝ^+ →ℝ^+, defined by w(t)=2t/1+t^2 t∈[0,t_1)∪(t_k,t_k+1), k∈ℕ, -ν(2+ν) t∈{t_k}_k∈ℕ, and ϕ∈𝒦 defined by ϕ(y)=y^2 for all y∈ℝ^+. For t_0,t ∈ℝ^+, observe that w satisfies: ∫_[t_0,t_0+t)w(s) dμ_g(s) = ∑_s∈ [t_0,t_0+t)∩ D_g -ν(2+ν) μ_g({s}) + ∫_t_0^t_0+t2s/1+s^2 ds = ∑_s∈ [t_0,t_0+t)∩ D_g -ν(2+ν) μ_g({s}) + log(1+(t_0+t)^2/1+t_0^2). Thus, inf_t_0 ∈ℝ^+lim_t→ +∞∫_[t_0,t_0+t)w(s) dμ_g(s)=+∞. Consequently, Condition (d) holds. Therefore, by means of Theorem <ref>, we deduce that x=0 is an asymptotically stable equilibrium. Figure <ref> illustrates the asymptotic behavior of solutions of the dynamical system (<ref>). Assume that Conditions (H_Ω), (H_𝐟) and (H_r) hold. 
If there exist V ∈𝒱_1^g, a, b ∈𝒦, ϕ : ℝ^+ →ℝ^+ continuous, and a g-measurable function w:ℝ^+→ℝ^+ such that (a) a(𝐮) ≤ V(t,𝐮) ≤ b(𝐮) for every (t,𝐮)∈ℝ^+× B_ℝ^n(0,r_0); (b) ϕ(s)=0 if and only if s=0; (c) for every (t_0,𝐱_0) ∈ℝ^+× B_ℝ^n(0,r), the maximal solution 𝐱:I_t_0,𝐱_0→ B_ℝ^n(0,r_0) of the system (<ref>) satisfies V_g'(t,𝐱(t)) ≤ -w(t)ϕ(𝐱(t)) for g-almost all t∈ I_t_0,𝐱_0. (d) lim_t→ +∞inf_t_0 ∈ℝ^+∫_[t_0,t_0+t) w(s) dμ_g(s) =+∞. Then the trivial solution 𝐱=0 of the system (<ref>) is uniformly asymptotically stable. By (ii) of Theorem <ref>, the trivial solution 𝐱=0 is uniformly stable. Thus, let us choose δ_0 ∈(0, r) associated to an ϵ_0 ≤ r. Let ϵ > 0. Again, by uniform stability, there exists δ∈(0,r) such that, for all t̂∈ℝ^+ and every 𝐱̂_0 such that 𝐱̂_0 < δ, one has 𝐱̂(t,t̂,𝐱̂_0)< ϵ for all t∈ [t̂,∞) ∩ I_t̂,𝐱̂_0. Let M be as defined in (<ref>). Since lim_t→ +∞inf_t_0 ∈ℝ^+∫_[t_0,t_0+t) w(s) dμ_g(s) =+∞, we can choose σ > 0 such that ∫_[t_0,t_0+σ) w(s) dμ_g(s) > b(δ_0)/M for all t_0 ∈ℝ^+. Let (t_0,𝐱_0) ∈ℝ^+× B_ℝ^n(0,δ_0) and 𝐱 : I_t_0,𝐱_0→ B_ℝ^n(0,r_0) a maximal solution of (<ref>). If there exists t̂∈ [t_0,t_0+σ] ∩ I_t_0,𝐱_0 such that 𝐱(t̂) < δ, then 𝐱̂(t) < ϵ for all t ∈ [t̂,∞)∩ I_t̂,𝐱(t̂), where 𝐱̂: I_t̂,𝐱(t̂)→ B_ℝ^n(0,r_0) is the maximal solution of (<ref>) satisfying the initial condition 𝐱̂(t̂) = 𝐱(t̂). By the uniqueness of the maximal solution, one has ω(t_0,𝐱_0) = ω(t̂,𝐱(t̂)) and 𝐱(t) = 𝐱̂(t) for all t∈ [t̂,∞)∩ I_t_0,𝐱_0. Hence, 𝐱(t) < ϵ for all t ∈ [t_0+σ,∞)∩ I_t_0,𝐱_0. On the other hand, if 𝐱(t)≥δ for all t ∈ [t_0,t_0+σ], then using Condition (a), Theorem <ref>, and (<ref>), we obtain a(𝐱(t_0+σ)) ≤ V(t_0+σ,𝐱(t_0+σ)) =V(t_0,𝐱(t_0,t_0,𝐱_0))+∫_[t_0,t_0+σ)V_g'(s,𝐱(s)) dμ_g(s) ≤ V(t_0,𝐱_0) -∫_[t_0,t_0+σ) w(s) ϕ(𝐱(s)) dμ_g(s) ≤ V(t_0,𝐱_0) - M∫_[t_0,t_0+σ) w(s) dμ_g(s) <b(𝐱_0) -b(𝐱_0) = 0. This is a contradiction. Hence, we conclude that 𝐱=0 is uniformly asymptotically stable. In the next example, we provide an application of Theorem <ref> for a system subject to impulses where the trivial solution 𝐱=0 is uniformly asymptotically stable. Let us consider the dynamical system x_g'(t) = f(t,x(t)) for g-almost all t≥ t_0≥0, x(t_0)=x_0∈ℝ, where g:ℝ→ℝ is a derivator such that D_g={t_k}_k∈ℕ⊂(0,+∞), defined by g(t)=t+∑_k∈ℕχ_[t_k,+∞)(t) for all t∈ℝ, and f:ℝ^+×ℝ→ℝ is a function defined by f(t,x)= -t arctan(x) t∈ℝ^+∖ D_g, ν_k x t=t_k, k∈ℕ where {ν_k}_k⊂ℝ_+^* is a sequence satisfying lim_k→∞1/∏_i=1^k(1+ν_i)^2=a_0>0. The map f satisfies conditions of Theorem <ref>, thus the problem (<ref>) has a maximal solution x: [t_0,+∞)→ℝ for every (t_0,x_0) ∈ℝ^+×ℝ. Observe that x=0 is an equilibrium of the Stieltjes dynamical system (<ref>). Let us define the function V:ℝ^+×ℝ→ℝ for all (t,x) ∈ℝ^+×ℝ by V(t,x)= x^2 t∈ [0,t_1], x^2/∏_i=1^k(1+ν_i)^2 t∈ (t_k,t_k+1], k∈ℕ. Clearly V∈𝒱_g^1, and a(|x|)≤ V(t,x)≤ b(|x|) for all (t,x)∈ℝ^+×ℝ, where a,b∈𝒦 are functions defined by a(s)=a_0s^2 and b(s)=s^2 for all s∈ℝ^+. In addition, for all (t,x)∈ℝ^+×ℝ: ∂ V/∂_g t(t,x) = 0 t∈ [0,t_1]∪ (t_k,t_k+1), k∈ℕ, x^2/∏_i=1^k(1+ν_i)^2-x^2/∏_i=1^k-1(1+ν_i)^2/g(t_k^+)-g(t_k) t=t_k, k∈ℕ, = 0 t∈ [0,t_1)∪ (t_k,t_k+1), k∈ℕ, 1-(1+ν_k)^2/∏_i=1^k(1+ν_i)^2x^2 t=t_k, k∈ℕ. For g-almost every t∈ [t_0,∞)∖ D_g, we have that t∈[0,t_1) or there exists k∈ℕ such that t∈(t_k,t_k+1). Thus, by means of Proposition <ref>, we obtain if t∈[0,t_1): V_g'(t,x(t)) = ∂ V/∂_g t(t,x(t))+∂ V/∂ x(t,x(t)) f(t,x(t)) = -2x(t)tarctan(x(t)) ≤ -2t x(t)^2/1+x(t)^2, where the last inequality follows from the Mean Value Theorem. 
While if there exists k∈ℕ such that t∈(t_k,t_k+1), then V_g'(t,x(t)) = ∂ V/∂_g t(t,x(t))+∂ V/∂ x(t,x(t)) f(t,x(t)) =-2x(t)/∏_i=1^k(1+ν_i)^2tarctan(x(t)) ≤ -2t/∏_i=1^k(1+ν_i)^2x(t)^2/1+x(t)^2. For t_k∈ [t_0,∞)∩ D_g, we have that V_g'(t_k,x(t_k)) =V(t_k^+,x(t_k^+))-V(t_k,x(t_k))/g(t_k^+)-g(t_k) =V(t_k^+,x(t_k)+ μ_g({t_k})f(t_k,x(t_k)))-V(t_k,x(t_k))/g(t_k^+)-g(t_k) =(1+ν_k)^2x(t_k)^2/∏_i=1^k(1+ν_i)^2- x(t_k)^2/∏_i=1^k-1(1+ν_i)^2/g(t_k^+)-g(t_k) = 0 . Therefore, we conclude that V_g'(t,x(t))≤-ω(t)ϕ(|x(t)|), forg-almost all t≥ t_0, where w:ℝ^+ →ℝ^+ is the function defined for every t∈ℝ^+ by w(t)= 2t t∈[0,t_1), 0 t=t_k, k∈ℕ, 2t/∏_i=1^k(1+ν_i)^2 t∈(t_k,t_k+1), k∈ℕ, and ϕ∈𝒦 the function given by ϕ(y)=y^2/1+y^2 for all y∈ℝ^+. Observe that the function w satisfies lim_t→ +∞inf_t_0 ∈ℝ^+∫_[t_0,t_0+t)w(s) dμ_g(s) ≥lim_t→ +∞inf_t_0 ∈ℝ^+∫_t_0^t_0+ta_02s ds =a_0lim_t→ +∞inf_t_0 ∈ℝ^+ 2tt_0+t^2 =+∞. By means of Theorem <ref>, we deduce that x=0 is uniformly asymptotically stable equilibrium. Figure <ref> illustrates the asymptotic behavior of solutions of the dynamical system (<ref>) with D_g=ℕ. Observe in Example <ref> that the function w satisfies lim_t→ +∞inf_t_0 ∈ℝ^+∫_[t_0,t_0+t) w(s) dμ_g(s) =0 ≠∞. Thus, Condition (d) of Theorem <ref> does not hold. Consequently, uniform asymptotic stability cannot be deduced. In the classical case where g≡id_ℝ, corollary results <cit.> and <cit.> are well-known when Conditions (b) of Theorem <ref> is replaced by V_g'(t,𝐱(t)) being negative definite along each maximal solution 𝐱 for every t≥ t_0. However, to present an analogous statement, we require an additional assumption to avoid the case when lim_t→∞ g(t)=l<∞, and in particular, when there exists T≥0 such that (T,∞)⊂ C_g, Example <ref> provides an interesting illustration of attractivity lack for asymptotic stability. Assume that lim_t→∞ g(t)=∞. If there exist V:ℝ^+× B_ℝ^n(0,r_0) →ℝ; V∈𝒱_1^g and a,b,ϕ∈𝒦 such that (a) a(𝐮) ≤ V(t,𝐮)≤ b(𝐮), for every (t,𝐮)∈ℝ^+× B_ℝ^n(0,r_0); (b) for every (t_0,𝐱_0) ∈ℝ^+× B_ℝ^n(0,r), the maximal solution 𝐱:I_t_0,𝐱_0→ B_ℝ^n(0,r_0) of the system (<ref>) satisfies V_g'(t,𝐱(t))≤ -ϕ(𝐱(t)), for g-almost all t≥ t_0. then, the trivial solution 𝐱=0 of the system (<ref>) is asymptotically stable. Furthermore, if lim_t→ +∞inf_t_0 ∈ℝ^+ g(t+t_0)-g(t_0)=∞, then, the trivial solution 𝐱=0 of the system (<ref>) is uniformly asymptotically stable. Observe that Conditions of Theorem <ref> hold for ω≡ 1 as inf_t_0 ∈ℝ^+lim_t→ +∞∫_[t_0,t+t_0)ω(s) dμ_g(s) = inf_t_0 ∈ℝ^+lim_t→ +∞g(t+t_0)-g(t_0)=+∞. Hence, the trivial solution is 𝐱=0 of the system (<ref>) is asymptotically stable. Moreover, if lim_t→ +∞inf_t_0 ∈ℝ^+ g(t+t_0)-g(t_0)=∞, then, Theorem <ref> ensures that 𝐱=0 is uniformly asymptotically stable. § APPLICATIONS TO DYNAMICS OF POPULATION §.§ Stable equilibrium of a population subject to train vibrations In this subsection, by means of a system of Stieltjes differential equations, we study the long-term impact of high-speed train vibrations and noise pollution on a population of animals living near railways. 
Depending on the species and their sensitivity to vibrations, various implications can be observed, we cite for instance: ∙ Hearing damage resulting in from the significant noise and vibrations that can potentially harm animals with sensitive hearing such as certain small mammals, birds, and bats which rely heavily on their hearing for communication, navigation, detection of predators, and finding food, thus, prolonged exposure to train vibrations may lead to hearing impairment or damage, disrupting their normal behaviors and increasing their death rate. ∙ Increased stress levels since some animals may be startled by the vibrations. This can affect their feeding patterns, reduce their reproduction rate, or lead to emigration resulting in a loss of suitable habitat and altering the composition and diversity of the local ecosystem ∙ Ecological interactions disruption which can affect pollination for instance if the vibrations deter insects that are important pollinators, disturb ground-dwelling organisms (insects, reptiles, and small mammals…) which can implicitly impact other species that rely on them as a food source. To these aims, an Allee effect can be observed in this regard, especially when the survival of the population depends on a minimum threshold size M>0. In the follows, we denote x(t) as the number of individuals of a population living in a region near a railway with a carrying capacity K>M. Let us assume that a certain number m>0 of trains pass through the area every day. We refer to {τ_i}_i=1^i=m as the moments when trains pass in a single day. Once a train pass by the area, its impact is significant for a proportion of individuals living near the railway. They may experience vibrations or direct injuries. In the following analysis, we use a Stieltjes differential equation to model the dynamics of this population affected by train vibrations, and we study the asymptotic behavior of its solutions. In doing so, we require a derivator g:ℝ→ℝ presenting discontinuities for t∈{τ_i}_i=1^i=m+24ℤ such that μ_g({t}) quantifies the rate at which the risk of damage varies. Depending on the specific τ_i, this rate can either increase or decrease, reflecting the varying impact of vibrations during daylight and nighttime hours. For simplicity, we can take for instance: g(t)=t+∑_k∈ℕ∑_i=1^mχ_[τ_i+24k,∞)(t), for all t∈ℝ, with μ_g({·})≡1 on D_g= {τ_i}_i=1^i=m+24ℤ. In the sequel, we suggest to analyse the asymptotic behaviour of the dynamics of this population, through the study of asymptotic stability of the zero equilibrium of the Stieltjes dynamical system: x_g'(t) =f(t,x(t)), for g-almost every t≥ t_0≥0, x(t_0) =x_0, where f: ℝ^+×ℝ→ℝ is defined by f(t,x) =ρ x(1-x/K)(x/M-1), t∉{τ_i+24k}_i=1^m, k=0,1,2… -dx, t∈{τ_i+24k}_i=1^m, k=0,1,2… The parameters of the model can be understood as: K>M the carrying capacity of the environment. ρ>0 intrinsic rate of reproduction of the population. d∈(0,1) Constant related to the impact induced by trains, either a migration rate immediately following the passage of trains or a mortality rate for certain populations that live in close proximity to the railway. To simplify the analysis, we make the assumption that a train passes every hour over a 24-hour period. Thus, g(t)= t+∑_i∈ℤχ_[i+1,∞)(t) for all t∈ℝ, D_g=ℤ and μ_g({t})=1 for all t∈ D_g. The dynamics present several equilibria. 
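They are easy to visualise numerically; the following Python sketch (all parameter values and initial populations are assumptions chosen only for illustration, not data from the text) integrates the logistic-Allee flow between two consecutive trains and applies the jump x ↦ (1-d)x at every integer time, in agreement with the derivator g above.

```python
import numpy as np

# Assumed, purely illustrative parameter values (not taken from the text):
K, M, rho, d = 100.0, 20.0, 0.05, 0.002   # capacity, Allee threshold, growth, train impact

def f_smooth(x):
    """Continuous part of the dynamics: logistic growth with an Allee effect."""
    return rho * x * (1.0 - x / K) * (x / M - 1.0)

def simulate(x0, hours=960, steps_per_hour=50):
    """Euler integration of the smooth flow between two consecutive trains
    (one unit of time), followed by the impulsive update x -> (1 - d) x at
    each train passage (D_g = Z, mu_g({t}) = 1)."""
    x, dt = x0, 1.0 / steps_per_hour
    for _ in range(hours):
        for _ in range(steps_per_hour):
            x += dt * f_smooth(x)
        x *= (1.0 - d)
    return x

for x0 in (10.0, 30.0, 90.0):
    print(x0, simulate(x0))
# With these assumed values the Allee structure is visible: the trajectory
# started below M is driven towards extinction, while those started above M
# settle slightly below the carrying capacity, where the growth between trains
# balances the impulsive losses.
```

The extinction branch of this experiment is the regime governed by the local stability analysis carried out next.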
However, we will focus on the zero equilibrium, to study its local asymptotic stability within a region (-r_0,r_0); r_0>0, that will be determined to enhance the impact of environmental factors threatening this population. Since the trivial solution x=0 is an equilibrium of the dynamical system (<ref>), let us consider r_0≤ M. For all (t_0,x_0) ∈(ℝ^+∩ D_g)× (-r_0,r_0), if x is a solution of (<ref>), then x(t_0^+)=x(t_0)+μ_g({t_0})f(t_0,x(t_0))=(1-d)x(t_0) ∈(-r_0,r_0). Thus, f satisfies (H_r) for r=r_0. Combining this with Theorem <ref>, yields existence of a maximal solution x: I_t_0,x_0→ℝ for every (t_0,x_0) ∈ℝ^+×(-r_0,r_0). Let ω(t_0,x_0):=sup I_t_0,x_0. Let us construct this solution through local existence to show that ω(t_0,x_0)=∞. First of all, let τ∈(t_0,ω(t_0,x_0)) such that x:[t_0,τ]→ (-r_0,r_0) is a solution of (<ref>). Observe that if x(τ)=0, by uniqueness of the solution, we deduce that x≡ 0 which lies in (-r_0,r_0). Thus, two other interesting cases occur when x(τ) ≠ 0: Case 1: if τ∈ D_g, then x(τ^+)=x(τ)+μ_g({τ})f(τ,x(τ))=(1-d)x(τ)∈ (-r_0,r_0). Case 2: if τ∉ D_g, then we distinguish two subcases: Subcase 1: if x(τ)∈(0,r_0) then x_g'(τ)=f(τ,x(τ))<0. Thus, by g-continuity of x at τ there exists τ_1 ∈(τ,ω(t_0,x_0)) such that x:[t_0,τ_1]→ (0,r_0). Subcase 2: if x(τ)∈(-r_0,0) then x_g'(τ)=f(τ,x(τ))>0. Thus, there exists τ_2 >0 such that x:[τ,τ_2]→ (-r_0,0). Repeating the same argument for each subinterval of I_t_0,x_0, we deduce that the solution x(t)∈[-λ_x_0,λ_x_0]⊂(-r_0,r_0) for all t∈ I_t_0,x_0, where λ_x_0=sup_t∈[t_0,τ]|x(t,t_0,x_0)|. By Corollary <ref>, we deduce that ω(t_0,x_0)=∞. Hence, x:[t_0,∞)→ (-r_0,r_0) is a maximal solution of (<ref>). Now, we define the function V:ℝ^+× (-r_0,r_0)→ℝ by V(t,x)=x^2 for all (t,x) ∈ℝ^+×(-r_0,r_0). Clearly V∈𝒱_g^1. For all (t_0,x_0) ∈ℝ^+× (-r_0,r_0), let x:[t_0,∞)→ (-r_0,r_0) be the maximal solution of (<ref>). Thus, using Proposition <ref>, we obtain for g-almost all t ∈ [t_0,∞)∖ D_g: V_g'(t,x(t)) = ∂ V/∂_g t(t,x(t))+∂ V/∂ x(t,x(t)) x_g'(t) =∂ V/∂_g t(t,x(t))+∂ V/∂ x(t,x(t)) f(t,x(t)) = 2ρ x(t)^2(1-x(t)/K)(x(t)/M-1). While for t ∈ [t_0,∞)∩ D_g, we obtain V_g'(t,x(t)) =V(t^+,x(t^+))-V(t,x(t))/g(t^+)-g(t) =V(t^+,x(t)+ μ_g({t})f(t,x(t)))-V(t,x(t))/g(t^+)-g(t) =(1-d)^2x(t)^2-x(t)^2 =(-2d+d^2)x(t)^2. As (-2d+d^2)<0, it follows that V_g'(t,x(t)) is negative definite. Since a(|x|)≤ V(t,x)≤ b(|x|) for all (t,x)∈ℝ^+×(-r_0,r_0) with a,b∈𝒦 defined by a(s)=b(s)=s^2 for all y∈ℝ^+, Corollary <ref> ensures that the trivial solution x=0 is uniformly asymptotically stable. The asymptotic behavior of solutions is illustrated in Figure <ref>. §.§ Comments Figure <ref> illustrates the long-term effect of the high-speed trains vibrations. As shown in the figure, if the vibrations lead to an Allee effect, the population will decline significantly over time. This outcome raises alarm about the overall stability and the persistence of the whole ecosystem in this region. Specifically, it indicates a high likelihood of population extinction when there is no estimation showing that the initial population exceeds M. §.§ Stable equilibrium of Bacteria-Ammonia dynamics Cyanobacteria, similar to plants, participate in oxygenic photosynthesis as they are photosynthetic bacteria. Photosynthesis is the biochemical process through which organisms convert light energy into chemical energy in the form of glucose or other organic compounds resulting oxygen release. 
The energy captured is then used to fuel the synthesis of organic molecules such as glucose, serving as an energy source for the bacteria. Cyanobacteria are found in diverse habitats, including freshwater, marine environments, and terrestrial ecosystems. In this section, we consider a species of cyanobacteria that has also the ability to fix nitrogen such as Anabaena cyanobacteria. Through nitrogen fixation, atmospheric nitrogen gas (N_2) is converted into a form that can be used by plants and other organisms by means the enzyme nitrogenase. This enzyme catalyzes the conversion of nitrogen gas (N_2) into ammonium ions (NH_4^+) based on a considerable amount of energy obtained from photosynthesis, these ammonium ions are assimilated into amino acids and proteins, which are essential for the growth and survival of Cyanobacteria. As a by product of nitrogen fixation, ammonia gas (NH_3) can be released. Some of the assimilated ammonium ions (NH_4^+) are released back into the environment providing neighboring vegetative cells with a source of nitrogen. To avoid losing valuable nitrogen nutrients and optimizing nitrogen utilization efficiency, this population has mechanisms allowing ammonium ions (NH_4^+) reabsorption, and ammonia gas (NH_3) assimilation through converting ammonia gas (NH_3) into ammonium ions (NH_4^+). In what follows, the term "ammonia" refers to both the protonated and unprotonated forms, which are denoted as (NH_3) and (NH_4^+) respectively <cit.>. It is worth mentioning that ammonia is commonly used in cleaning products, fertilizers. Beyond that, ammonia's cooling properties make it an essential refrigerant in air conditioning systems and refrigerators. To optimize resource utilization and adapt to the varying environmental conditions, the population undergoes a day-night cycling of nitrogen fixation and carbon consumption <cit.>. This is due to the sensitivity of the nitrogenase enzyme responsible for nitrogen fixation to oxygen. Thus, nitrogen fixation is carried out during daylight hours when the photosynthesis can provide the necessary energy and oxygen levels are relatively low. During the nights, since oxygen levels within the cells would be higher, this population of cyanobacteria reduces their metabolic activity and growth and relies on stored carbon compounds to fulfill their energy needs. Our objective in the sequel is to observe the dynamics of this population which thrives in the presence of ammonia in a culture room without exposure to artificial light, tracking the levels of the ammonia during this the process. Here, we assume that the carbon dioxide (CO_2), the nitrogen gas (N_2) and nutrients supply are maintained steady as well as the PH level. Since the population undergoes dormancy phases during the nights, we identify the days with intervals of the form [2k,2k+1], k=0,1,2,…, and the night with intervals [2k+1,2k+2], k=0,1,2,… In our example, we differentiate with respect to a derivator g whose variation describes the intensity of light, which is necessary for the photosynthesis process. We require that g presents smaller slops at the beginning and at the end of the daylight hours, with maximal slops at middays where t=2k+1/2, and remains constant during the dormancy phases in the night [2k+1,2k+2], k=0,1,2,… For instance, we consider g:ℝ→ℝ defined by g(t)=sin(π(t-1/2))+1/2 t∈[0,1] 1 t∈(1,2], and g(t)=g(1)+g(t-2) for t≥ 2. We denote N(t) the Biomass of cyanobacteria (grams per liter), and A(t) the ammonia concentration in the environment at time t≥ 0. 
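Since the derivator carries the day-night modelling, a minimal Python sketch may clarify its role before the equations are written down (we read the displayed formula as g(t) = (sin(π(t-1/2))+1)/2 on [0,1], so that g(0)=0 and g(1)=1; the sample points below are only illustrative): the g-measure of a night interval is zero, so the Stieltjes dynamics of N and A will be frozen during dormancy, while midday hours carry more g-measure than dawn or dusk.

```python
import numpy as np

def g(t):
    """Day-night derivator: g follows (sin(pi*(t-1/2)) + 1)/2 during daylight
    [2k, 2k+1] (maximal slope at midday) and stays constant during the night
    (2k+1, 2k+2]; each full day-night cycle raises g by g(1) = 1."""
    t = np.asarray(t, dtype=float)
    cycles, u = np.divmod(t, 2.0)            # completed cycles and phase in [0, 2)
    phase = np.where(u <= 1.0, 0.5 * (np.sin(np.pi * (u - 0.5)) + 1.0), 1.0)
    return cycles + phase

# The g-measure of [a, b) is g(b) - g(a): daylight carries all of it, the
# dormancy interval carries none, so the Stieltjes dynamics is frozen at night.
print(g(1.0) - g(0.0))    # 1.0   (one daylight period)
print(g(2.0) - g(1.0))    # 0.0   (one night: zero g-measure)
print(g(0.75) - g(0.25))  # ~0.71 (midday hours weigh more than dawn or dusk)
```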
Since the population thrives in the presence of ammonia, we can assume that the growth rate is proportional to the level of ammonia. We denote ρ the maximal intrinsic coefficient of reproduction that the population can reach in the presence of one unit of ammonia with maximal sunlight intensity. Thus, the dynamics can be modeled using the autonomous system of Stieltjes differential equations: 𝐮_g'(t) = 𝐅(𝐮(t)), for g-almost every t≥ t_0≥0, 𝐮(0) =𝐮_0 =(N_0,A_0), where 𝐮=(N,A) and 𝐅=(F_1,F_2) : ℝ^2→ℝ^2 is defined by F_1(N,A) = ρ AN(1-N/K) F_2(N,A) = α N-β AN. where the parameters of the model can be understood as: K>0 the carrying capacity of the culture room, which forms a spacial constraint for growth. α>0 Constant related to the production of ammonia through nitrogen fixation. β>0 Constant related to the proportion of the reabsorption of ammonia by the population depending on the level of ammonia in the environment. Observe that 𝐮^*=(K,α/β) is an equilibrium of the system (<ref>) among other equilibria. Its asymptotic stability would guarantee the persistence of the population with nonzero ammonia production. Therefore, we study local asymptotic stability in a neighborhood B_ℝ^2(𝐮^*,r_0); r_0>0, of this equilibrium. To this aim, we transfer our study in a neighborhood of 𝐱=0=(0,0) ∈ℝ^2. If we adopt the variable change 𝐱=(x_1,x_2):=𝐮-𝐮^*, we obtain the system 𝐱_g'(t) = 𝐟(𝐱(t)), for g-almost every t≥ t_0≥0, 𝐱(0) =𝐱_0 :=𝐮_0-𝐮^*, where 𝐟=(f_1,f_2) : ℝ^2→ℝ^2 is defined by f_1(x_1,x_2) = -ρx_1/K(x_2+α/β)(x_1+K) f_2(x_1,x_2) = -β x_2(x_1+K). 𝐱=0=(0,0) ∈ℝ^2 is an equilibrium of the dynamical system (<ref>). Next, we prove that 𝐱=0 is asymptotically stable, implying that 𝐮^*=(K,α/β) is an asymptotically stable equilibrium of the system (<ref>). Let us consider r_0=min{K,α/β}, and let (t_0,𝐱_0) ∈ℝ^+× B_ℝ^2(0,r_0). Arguing as in the previous subsection, since the function 𝐟 satisfies conditions of Theorem <ref>, we prove existence of the maximal solution 𝐱=(x_1,x_2):[t_0,∞)→ B_ℝ^2(0,r_0) of the system (<ref>). Initially, let 𝐱: I_t_0,𝐱_0→ℝ^2 be the maximal solution of (<ref>) with ω(t_0,𝐱_0):=sup I_t_0,𝐱_0. Let τ∈(0,ω(t_0,𝐱_0))∖ C_g such that 𝐱=(x_1,x_2):[t_0,τ]→ B_ℝ^2(0,r_0) is a solution of (<ref>). In what follows, we analyze the possible cases: Case 1: if x_1(τ),x_2(τ)∈(0,r_0) (resp. x_1(τ),x_2(τ)∈(-r_0,0)), then, for i=1,2, we have (x_i)_g'(τ)=f_i(𝐱(τ))<0 (resp. (x_i)_g'(τ)=f_i(𝐱(τ))>0). Therefore, by g-continuity of x at τ, there exists τ_1 ∈(τ,ω(t_0,𝐱_0)) (τ_1 can be chosen such that τ_1∉ C_g) such that x_i:[t_0,τ_1]→ (0,r_0) (resp. x_i:[t_0,τ_1]→ (-r_0,0)). Case 2: if x_1(τ)∈(0,r_0) and x_2(τ)∈(-r_0,0), then (x_1)_g'(τ)=f_1(𝐱(τ))<0 and (x_2)_g'(τ)=f_2(𝐱(τ))>0. Therefore, there exists τ_2 ∈(τ,ω(t_0,𝐱_0))∖ C_g such that x_1:[τ,τ_2]→ (0,r_0), and x_2:[τ,τ_2]→ (-r_0,0). Case 3: if x_1(τ)∈(-r_0,0) and x_2(τ)∈(0,r_0), then similarly to Case 2, we deduce that there exists τ_3 ∈(τ,ω(t_0,𝐱_0)) such that x_1:[τ,τ_3]→(-r_0,0), and x_2:[τ,τ_3]→ (0,r_0). Repeating the same argument for each subinterval of I_t_0,𝐱_0, we deduce that the solution 𝐱(t)∈ [-λ_𝐱_0,1,λ_𝐱_0,1]× [-λ_𝐱_0,2,λ_𝐱_0,2]⊂ B_ℝ^2(0,r_0) for all t∈ I_t_0,𝐱_0, where λ_𝐱_0,i=sup_t∈[t_0,τ] |x_i(t,t_0,𝐱_0)|, for i=1,2 and 𝐱_0=(𝐱_0,1𝐱_0,2). Using Corollary <ref>, we obtain that ω(t_0,𝐱_0)=∞. Hence, x:[t_0,∞)→ B_ℝ^2(0,r_0) is the maximal solution of (<ref>). Now, let us consider the function V:ℝ^+× B_ℝ^2(0,r_0) →ℝ defined by V(t,𝐱)=x_1^2+x_2^2 for all t≥ 0 and 𝐱=(x_1,x_2)∈ℝ^2. It is clear that V∈𝒱_1^g. 
Let (t_0,𝐱_0)∈ℝ^+× B_ℝ^2(0,r_0), and 𝐱:[t_0,∞)→ B_ℝ^2(0,r_0) be the maximal solution of (<ref>). By means of Proposition <ref>, for g-almost all t ∈ [t_0,∞), we obtain V_g'(t,𝐱(t)) = ∂ V/∂_g t(t,𝐱(t))+ ∑_i=1^2∂ V/∂ x_i(t,𝐱(t)) (x_i)_g'(t) =∂ V/∂_g t(t,𝐱(t))+ ∑_i=1^2∂ V/∂ x_i(t,𝐱(t)) f_i(t,𝐱(t)) = -2ρx_1(t)^2/K(x_2(t)+α/β)(x_1(t)+K)-2β x_2(t)^2(x_1(t)+K). Thus, V_g'(t,𝐱(t)) is negative definite. Since a(𝐳)≤ V(t,𝐳)≤ b(𝐳) for all (t,𝐳)∈ℝ^+× B_ℝ^2(0,r_0) where a,b∈𝒦 are defined by a(y)=y^2 and b(y)=2y^2 for all y∈ℝ^+, it follows from Corollary <ref> that 𝐱=0 is uniformly asymptotically stable. Hence, 𝐮^*=(K,α/β) is a uniformly asymptotically stable equilibrium of the system (<ref>). The graph of asymptotic behavior of solutions near the equilibrium 𝐮^*=(K,α/β) is given in Figure <ref>. § FUNDING Lamiae Maia was partially supported by the National Center of Scientific and Technical Research (CNRST) under Grant No. 60UM5R2021, Morocco. § ACKNOWLEDGMENT Lamiae Maia would like to express her sincere gratitude towards Professor Marlène Frigon and the Département de mathématiques et de statistique of the Université de Montréal, for their warm hospitality and for funding her research stay at the aforementioned department when this article was finalized. 99 AFNT I. Area, F.J. Fernández, J.J. Nieto, F.A.F. Tojo, Concept and solution of digital twin based on a Stieltjes differential equation, Math. Methods Appl. Sci. 45 (2022), 7451–7465. AL K.B. Athreya, S.N. Lahiri, Measure Theory and Probability Theory. Springer New York, NY (2006). BereAnCourL. Berec, E. Angulo, F. Courchamp, Multiple Allee effects and population management, Trends Ecol. Evol. 22 (2007), no. 4, 185–191. BL M.U. Bikdash and R.A. Layton, An energy-based Lyapunov function for physical systems, IFAC proc. ser 33 (2000), no. 2, 81–-86. B S. Boussiba, Ammonia transport systems in cyanobacteria, Inorganic nitrogen in plants and microorganisms: Uptake and FedGraMesToonM. Federson, R. Grau, J. G. Mesquita, E. Toon, Lyapunov stability for measure differential equations and dynamic equations on time scales, J. Differential Equations, 267 (2019), no. 4192-–4223. FMarTo-OnFirstandSec F.J. Fernández, I. Márquez Albés, and F.A. F. Tojo, On first and second order linear Stieltjes differential equations, J. Math. Anal. Appl. 511 (2022), no. 1, 126010. FerTojoVillF.J. Fernández, F.A. F. Tojo, and C. Villanueva, Compactness criteria for Stieltjes function spaces and applications, Results Math. 79 (2024), no. 3, 98. FP M. Frigon, R. López Pouso, Theory and applications of first-order systems of Stieltjes differential equations. Adv. Nonlinear Anal. 6 (2017), no. 1, 13–36. GallGraMes C.A. Gallegos, R. Grau, and J.G. Mesquita, Stability, asymptotic and exponential stability for various types of equations with discontinuous solutions via lyapunov functionals, J. Differ. Equations 299 (2021), 256-–283. GallMarSlav2025C.A. Gallegos, I. Márquez Albés, and A. Slavík, A general form of Gronwall inequality with Stieltjes integrals, J. Math. Anal. Appl. 541 (2025), no. 1, 128674. H2 W. Hahn, Stability of Motion, Vol. 138, Springer Berlin, Heidelberg, 1967. H1W. Hahn, H.H. Hosenthien, and H. Lehnigk, Theory and applications of Liapunov's direct method, 1963. HT J. Hoffacker, C.C. Tisdell, Stability and instability for dynamic equations on time scales, Comput. Math. Appl., 49 (2005), 9–-10, 1327–1334. KaymakcalanB. Kaymakcalan, Lyapunov stability theory for dynamic systems on time scales, J. Appl. Math. Stoch. Anal. 5 (1992), no 3, 275–-282. KoY. 
Ko, An asymptotic stability and a uniform asymptotic stability for functional differential equations, Proc. Amer. Math. Soc. 119 (1993), no. 2, 535–-540. ThesisFLF. Larivière, Sur les solutions d'équations différentielles de Stieltjes du premier et du deuxième ordre, Mémoire de Maîtrise, Université de Montréal, 2019. PM R. López Pouso, I. Márquez Albés, General existence principles for Stieltjes differential equations with applications to mathematical biology. J. Differential Equations 264 (2018), no. 8, 5388–5407. PM2 R. López Pouso and I. Márquez Albés, Resolution methods for mathematical models based on differential equations with Stieltjes derivatives, Electron. J. Qual. Theo. 2019 (2019), no. 72, 1-–15. PM3 R. López Pouso and I. Márquez Albés, Systems of Stieltjes differential equations with several derivators, Mediterr. J. Math. 16 (2019), no. 2, 51. PMMR. López Pouso and I. Márquez Albés, and G.A. Monteiro, Extremal solutions of systems of measure differential equations and applications in the study of Stieltjes differential problems, Electron. J. Qual. Theo. 38 (2018), 1–24. PR R. López Pouso, A. Rodríguez, A new unification of continuous, discrete, and impulsive calculus through Stieltjes derivatives. Real Anal. Exchange 40 (2014/15), no. 2, 1–35. L1 A.M. Lyapunov Problème Général de la Stabilité du Mouvement. (AM-17), Volume 17 (1948), available at https://doi.org/10.1515/9781400882311. MEF1L. Maia, N. El Khattabi, and M. Frigon, Existence and multiplicity results for first-order Stieltjes differential equations, Adv. Nonlinear Stud. 22 (2022), no. 1, 684–-710. MEF2L. Maia, N. El Khattabi, and M. Frigon, Systems of Stieltjes differential equations and application to a predator-prey model of an exploited fishery, Discrete Contin. Dyn. Syst. Ser. A 43 (2023), no. 12, 4244-–4271. MarThesis I. Márquez Albés, Differential problems with Stieltjes derivatives and applications, Ph.D. Thesis, Universidade de Santiago de Compostela, 2021. MarI. Márquez Albés, Notes on the linear equation with Stieltjes derivatives, Electron. J. Qual. Theo. 2021 (2021), no. 42, 1–-18. MMI. Márquez Albés and G.A. Monteiro, Notes on the existence and uniqueness of solutions of Stieltjes differential equations, Math. Nachr. 294 (2021), no. 4, 794–814. Rudin W. Rudin, Real and Complex Analysis, 3rd, McGraw-Hill, Singapore, 1987. SS B. Satco and G. Smyrlis, Periodic boundary value problems involving Stieltjes derivatives, J. Fixed Point Theory Appl. 22 (2020), no. 4, 94. SS-2 B. Satco and G. Smyrlis, Applications of Stieltjes derivatives to periodic boundary value inclusions, Mathematics 8 (2020), no. 12, 1–23. SteSuthP.A. Stephens, W.J. Sutherland, and R.P. Freckleton, What is the allee effect?, Oikos (1999), 185–-190. WRSHSGD.G. Welkie, B.E. Rubin, S. Diamond, R.D. Hood, D.F. Savage, and S.S. Golden, A hard day’s night: cyanobacteria in diel cycles, Trends Microbiol. 27 (2019), no. 3, 231–-242. Y2001 T. Yang, Impulsive Control Theory, Vol. 272, Springer Berlin, Heidelberg, 2001. YLXDX. Yang, X. Li, Q. Xi, and P. Duan, Review of stability and stabilization for impulsive delayed systems, Math Biosci Eng. 15 (2018), no. 6, 1495-–1515. Yoshizawa T. Yoshizawa, On the stability of solutions of a system of differential equations, Mem. College Sci. Univ. Kyoto Ser. A Math. 29 (1955), no. 1, 27-–33. Zorn M. Zorn, A remark on method in transfinite algebra, Bull. Amer. Math. Soc. 41 (1935), no. 10, 667–-670.
http://arxiv.org/abs/2409.02267v1
20240903195353
Non-Relativistic Holography from AdS$_5$/CFT$_4$
[ "Andrea Fontanella", "Juan Miguel Nieto García" ]
hep-th
[ "hep-th" ]
[E-mail: ]andrea.fontanella[at]tcd.ie School of Mathematics & Hamilton Mathematics Institute, Trinity College Dublin, Ireland [E-mail: ]juan.miguel.nieto.garcia[at]desy.de II. Institut für Theoretische Physik, Universität Hamburg, Luruper Chaussee 149, 22761 Hamburg, Germany § ABSTRACT We show that a novel holographic correspondence appears after taking a suitable stringy non-relativistic limit of the AdS_5/CFT_4 duality. This correspondence relates string theory in String Newton-Cartan AdS_5×S^5 with Galilean Electrodynamics with five scalars defined on the Penrose conformal boundary. As a first test, we match the Killing vectors on the string theory side with the on-shell symmetries of the dual field theory. Non-Relativistic Holography from AdS_5/CFT_4 Juan Miguel Nieto García September 9, 2024 ============================================ § INTRODUCTION The holographic principle states the equivalence between two seemingly unrelated theories: one describes a theory of gravity and the other a gauge field theory without gravity. The first realisation was proposed in Maldacena's celebrated work <cit.>, and later refined in <cit.>. In his proposal, Maldacena introduced a scenario involving Type IIB string theory in flat spacetime with a stack of D3-branes. This setup allows for two distinct descriptions: one applicable at weak string coupling, featuring interactions of open and closed strings on the D3-branes, and another valid at strong string coupling, where only closed strings propagate around black D3-branes, curving the surrounding spacetime. The core idea of holography is that these two descriptions should be the equivalent at low energies, leading to the equivalence between 𝒩=4 Super Yang-Mills (SYM) and the supergravity theory of fluctuations around the AdS_5×S^5 geometry. In this letter, we describe how to incorporate the non-relativistic limit into Maldacena's construction of the AdS/CFT correspondence. In principle, taking the non-relativistic limit may interfere with taking the low-energy limit, as there is no obvious reason for them to commute. For example, the near-horizon region that gives rise to the AdS_5×S^5 geometry might not exist if the non-relativistic limit is taking first. This is what happens if we take a flat space limit of the black brane geometry. However, as we will see, this is not the case for the non-relativistic limit where the notions of horizon and conformal boundary still survive in this setting. Exploring this non-relativistic limit not only expands our understanding of holography to non-AdS spacetimes but also to non-Lorentzian geometries. The study of non-relativistic string theory started in flat spacetime <cit.> and in AdS_5×S^5 <cit.>. However, it was only later discovered that the background probed by the non-relativistic string is a String Newton-Cartan (SNC) geometry <cit.>. The non-relativistic string studied in these articles keeps the world-sheet relativistic, and Weyl anomalies cancel provided the beta function vanishes <cit.>. Integrability and spectrum related problems for non-relativistic strings in SNC AdS_5×S^5 have been studied in <cit.>. For a review on aspects of non-relativistic string theory, see <cit.>. On the other side, non-relativistic QFTs are relevant in many contexts of condensed matter systems. One of the simplest examples of non-relativistic QFT is Galilean Electrodynamics (GED) <cit.>, initially proposed as a Lagrangian description of the non-relativistic Maxwell equations. 
For a recent review on modern techniques on non-relativistic QFTs, see <cit.>. In this letter, we describe how the non-relativistic limit can be incorporated into Maldacena's argument without interfering with the low-energy limit. As a result of this limit procedure on both gravity and gauge sides, we suggest a new holographic duality between non-relativistic string theory in SNC AdS_5×S^5 and Galilean Electrodynamics in 3+1 dimensions with five uncharged massless scalar fields. This is the first example of holography involving strings with relativistic world-sheet propagating in a String Newton-Cartan background [For an example of non-relativistic holography with non-relativistic world-sheet, see <cit.>.]. As a check of our proposal, we show that there is a one-to-one correspondence between the symmetries of these theories. Our findings are summarised in Fig. <ref>. This letter is the summary of our longer paper <cit.>. § THE GRAVITY PERSPECTIVE Our starting point is to consider the spacetime metric of a stack of N black D3-branes, s^2_D3-brane = 4 π g_s N/√(f(z))( - t^2 + x^i x_i ) + α'^2 √(f(z))( z^2/z^4 + 1/z^2Ω^2_5 ) , f (z) = 1 + 4 π g_s N/α'^2 z^4 , where g_s is the string coupling, (t, x^i), i=1,2,3, are coordinates along the world-volume of the D3-brane, and Ω_5^2 is the metric of the unit 5-sphere. We describe the 5-sphere metric in terms of Cartesian coordinates (ϕ, y^m), m=1,..., 4 and y^2 ≡ y^m y^n δ_mn, given by Ω_5^2 = (4-y^2/4+y^2) ϕ^2 + 4 y^2 /( 4+y^2)^2 . The question we ask is whether Maldacena's near-horizon limit and the non-relativistic limit applied to the metric (<ref>) commute and give a consistent unique answer that can be trusted for non-relativistic holography. As we shall see, the answer is positive. §.§ Near-horizon first, non-relativistic second. Maldacena's near-horizon limit of the metric (<ref>) is defined by taking α' → 0, giving the famous AdS_5×S^5 metric, s^2_AdS_5×S^5 = R^2 (- t^2 + z^2 + x^i x_i/z^2 + Ω_5^2 ) , where AdS_5 and S^5 both appear with the same radius R^2 ≡√(4 π g_s N)α'. As a next step, we take the non-relativistic limit on (<ref>). The limit we consider is the so-called “stringy” non-relativistic limit, which consists in rescaling two coordinates - one time-like and one space-like - with a dimensionless parameter c that ultimately will be taken to be large. The reasoning of taking the stringy non-relativistic limit instead of the “particle” limit, where only a time-like direction is rescaled, is because the latter case leads to a theory of non-vibrating strings <cit.>. Instead, the stringy limit retains a non-trivial dynamics, see also <cit.>. The stringy non-relativistic limit applied to AdS_5×S^5 was first proposed in <cit.> in a different set of coordinates. In our set of coordinates, it translates to the following rescaling R → c R , x^i →x^i/c , ϕ→ϕ/c , y^m →y^m/c . In the c→∞ limit, (<ref>) reduces to s^2_SNC AdS_5×S^5 = (c^2 τ_μν + h_μν) X^μ X^ν , τ_μν X^μ X^ν = R^2/z^2( - t^2 + z^2 ) , h_μν X^μ X^ν = R^2/z^2 x^i x_i + R^2 x^i' x_i' , which is the “String Newton-Cartan” (SNC) version of the AdS_5×S^5 metric [Closed strings in the background geometry (<ref>) requires to couple the string to a critical closed Kalb-Ramond B-field in order to cancel the c^2 divergent term, as shown in <cit.>.]. The metric τ_μν describes an AdS_2 spacetime, whereas h_μν describes a warped Euclidean space, w(z) ℝ^3 ×ℝ^5, where w(z) = z^-2. 
Here, we denoted collectively the flat coordinates originating from the 5-sphere by x^i'≡ (ϕ, y^m), i'=5,...,9, and the spacetime coordinates by X^0 ≡ t, X^i ≡ x^i , X^4 ≡ z , X^i'≡ x^i'. Similarly to the Lorentzian AdS_5×S^5, the geometry (<ref>) also has a conformal boundary that can be described via the non-Lorentzian version of the Penrose's formalism, see e.g. <cit.>. For that, we rescale the SNC tensors τ_μν and h_μν by a conformal factor Ω^2 = z^2. Then the conformal boundary is located at z=0, given by τ̃_μν X^μ X^ν = - t^2 , h̃ _μν X^μ X^ν = x^i x^j δ_ij , which describes the Newton-Cartan geometry of 4d Minkowski spacetime, NC Mink_4. §.§ Non-relativistic first, near-horizon second Now we want to reverse the order of taking limits, and show that the final result still remains the same. The starting point is again given by the metric of a stack of N black D3-branes (<ref>). Then we implement the stringy non-relativistic rescaling given in (<ref>), supplemented with α' → c α', which in the language of <cit.> is the “transverse black brane” limit, and shown to lead to a well-defined SNC geometry. We take c→∞, and (<ref>) becomes s^2_NR D3-brane = (c^2 τ_μν + h_μν) X^μ X^ν , τ_μν X^μ X^ν = -R^4/α^' 21/√(f(z)) t^2 + α^' 2√(f(z)) z^2/z^4 , h_μν X^μ X^ν = R^4/α^' 2 x^i x_i/√(f(z)) + α^' 2√(f(z))/z^2 x^i' x_i' . This is the String Newton-Cartan version of the stack of black D3-brane metric, and it retains the notion of a near-horizon region. The near-horizon region can be decoupled from the asymptotic geometry by taking the near-horizon limit α' → 0. By doing that, the metric (<ref>) becomes precisely (<ref>). This shows that the near-horizon and the stringy non-relativistic limits commute, giving a unique result. § THE GAUGE PERSPECTIVE Now we turn into the description of the stack of D3-branes valid at weak string coupling. The physics is governed by open and closed strings. The action is given by S = S_closed + S_open + S_int , where S_closed, S_open are the actions of closed and open strings respectively, and S_int describes the interaction between them. §.§ Non-relativistic first, decoupling second The starting point, as we are interested in the dynamics of N coincident D3-branes in 10-dimensional Minkowski spacetime, S_open + S_int is described by the non-abelian DBI action given in <cit.>. We implement the stringy non-relativistic limit by rescaling the Minkowski Cartesian coordinates as X^i →X^i/c , X^i'→X^i'/c . where X^i are the space-like coordinates parallel to the D3-brane, and X^i' are five out of the six transverse directions. To avoid divergencies, we need to set the Kalb-Ramond B-field to zero except of B_𝚝4 = 1, and we are also forced to rescale the gauge potential [There exists also a second way to take the stringy non-relativistic limit, which consists in not rescaling the coordinates X^i and X^i' while rescaling the complementary coordinates by a factor c. This method does not require us to rescale the gauge potential. However, the symmetries of the theory obtained from this limit do not match the Killing vectors of the SNC AdS_5×S^5 background, which we will discuss in the following section. It would be interesting to identify the string holographic dual of the gauge theory obtained from this alternative non-relativistic limit.] A_𝚝→A_𝚝/c^2 , A_i →A_i /c^2 , where 𝚝 is the time-like coordinate parallel to the D3-brane. 
Rescaling the gauge potential as in (<ref>) has the effect of pushing the commutators entering the covariant derivatives to higher orders in c and, therefore, eliminating the non-abelian structure when c→∞. Effectively, this abelianises the gauge group from U(N) to U(1)^N^2 [As the 3+1 dimensional spacetime is non-compact, we do not have to deal with subtleties related to global structures, as in the compact case.]. Next, we take the decoupling limit α' → 0. In this way, the non-relativistic DBI action reduces to N^2 copies of Galilean Electrodynamics (GED) in 3+1 dimensions found in <cit.>, supplemented with 5 uncharged massless scalar fields, ℒ_GED = -1/(2π g_s) ∑_I=1^N^2 ( 1/2 ∑_i'=1^5 (∂_i ϕ^i' I)^2 + 1/4 (F^I_ij)^2 - F^I_i𝚝 ∂_i ζ^I - 1/2 (∂_𝚝ζ^I)^2 ) , where the fields ϕ^i' I and ζ^I come from the six transverse fields, rescaled by a factor of 2πα', after tracing away the generators. §.§ Decoupling first, non-relativistic second Once again, the starting point is the non-abelian DBI action given in <cit.>. In the decoupling limit, this action gives 𝒩=4 SYM in 4d Minkowski. However, as we are interested in performing the non-relativistic limit after the decoupling limit, we cannot fix static gauge yet, and we also have to include a Kalb-Ramond field. Then we perform the rescalings (<ref>) and (<ref>). After fixing static gauge, the action diverges as ℒ_div.=-(c^2/(4π g_s)) (1-B_𝚝4^2 ) ∑_I=1^N^2 ∂_i ζ^I ∂^i ζ^I , which can be eliminated if we set the Kalb-Ramond field to B_𝚝4=± 1. Once we do so, the action we obtain is finite and equal to (<ref>). This proves that the non-relativistic and the decoupling limits commute also from the gauge theory perspective. § A FIRST TEST: MATCHING SYMMETRIES As a first test of our proposed non-relativistic holography, we show that the symmetries of both theories match. All global symmetries of the action of strings with a relativistic world-sheet propagating in (<ref>) are obtained by solving the SNC Killing equations described in <cit.>. The details of this analysis are given in <cit.>. The most generic solution contains an infinite family of vector fields, given by H = ∂_t , D = t ∂_t + z ∂_z + x^i ∂_i , K = (t^2 + z^2) ∂_t +2 z t ∂_z +2 t x^i ∂_i , P^(n)_i = (t-z)^n (t+nz) ∂_i , P^(n)_i' = (t-z)^n ∂_i' , P̃^(n)_i = (t+z)^n (t-nz) ∂_i , P̃^(n)_i' = (t+z)^n∂_i' , J_ij = x^i ∂_j - x^j ∂_i , J_i'j' = x^i'∂_j' - x^j'∂_i' , where n ∈ℤ. The Killing vectors in (<ref>) form an infinite dimensional algebra. The generators H, D, K form an 𝔰𝔩(2,ℝ) subalgebra associated with the isometries of the AdS_2 spacetime described by the τ_μν metric. This infinite dimensional algebra cannot be found by a standard contraction <cit.>; however, it has an important holographic realisation. The symmetries of the equations of motion (on-shell symmetries) of the GED theory in 3+1 dimensions have been analysed in <cit.>. They are generated by M^(n)_i = 𝚝^n+1∂_i , H = ∂_𝚝 , D = 𝚝∂_𝚝 + X^i∂_i , K = 𝚝^2 ∂_𝚝 + 2 𝚝 X^i∂_i , J_ij = X^i∂_j - X^j∂_i . Once the Killing vectors (<ref>) are evaluated at the Penrose boundary z=0, we immediately see that H, D, K, P^(n)_i, J_ij precisely match H, D, K, M^(n)_i, J_ij of (<ref>), up to identifying the string coordinates t and x^i with the coordinates 𝚝 and X^i on NC Mink_4. Moreover, the generators P̃^(n)_i become the same as the generators P^(n)_i, and therefore we just need to consider one copy of them. The same happens for P̃^(n)_i' and P^(n)_i'. From the string theory side, we note that we have extra generators J_i'j' and P^(n)_i' for which we have not yet given a holographic explanation.
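For instance, evaluating the string-side Killing vectors at the Penrose boundary z=0 makes this matching explicit: P^(n)_i |_z=0 = (t-z)^n (t+nz) ∂_i |_z=0 = t^n+1 ∂_i = M^(n)_i and K |_z=0 = t^2 ∂_t + 2 t x^i ∂_i , which coincide with the corresponding GED generators upon the identification t ↔ 𝚝, x^i ↔ X^i; in the same way P̃^(n)_i |_z=0 = t^n+1 ∂_i collapses onto the same boundary generator, as stated above.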
In terms of the holographic gauge theory coordinates, these extra generators are realised as J_i'j' = ϕ^i'∂/∂ϕ^j' - ϕ^j'∂/∂ϕ^i' , P^(n)_i' = 𝚝^n ∂/∂ϕ^i' . Therefore, we learn that they form an infinite dimensional non-compact R-symmetry consisting of rotations J_i'j' and time translations P^(n)_i' of the fields ϕ^i'. These internal symmetries are possible because the dual theory (<ref>) contains neither commutators of the type [ϕ^i', ϕ^j'] nor time derivatives of the scalar fields. § CONCLUSIONS In this letter, we construct the first example of non-relativistic holography between a Galilean field theory and a theory of strings with a relativistic world-sheet propagating on a String Newton-Cartan background. Concretely, we propose a duality between non-relativistic string theory in SNC AdS_5×S^5 and Galilean Electrodynamics in 3+1 dimensions with 5 uncharged massless scalar fields. Our result is based on showing that Maldacena's construction of the AdS_5/CFT_4 duality admits a non-relativistic limit that commutes with the decoupling/near-horizon limit. The final answer from both sides of the duality is unique, and we showed that the new non-Lorentzian symmetries are holographically matched. This result breathes life into the new research area of non-Lorentzian holography as a means to investigate holography in non-Anti-de Sitter spacetimes. There are also several questions that need to be answered regarding the quantitative test of the proposed duality. Important open questions include the inclusion of fermions in the construction, the matching of the partition functions, and the matching of the string spectrum with the scaling dimensions of dual operators, which are well defined since GED in 3+1 dimensions is a conformal theory. Finding the first example of a non-relativistic black hole with asymptotic geometry SNC AdS_5×S^5 would also be a very important question to explore, especially in view of the partition function matching at finite temperature. We thank Eric Bergshoeff, Marius de Leeuw, Sergey Frolov, Troels Harmark, Jelle Hartong and Tristan McLoughlin for useful discussions. In particular, we thank Jelle Hartong for important discussions related to the Penrose conformal boundary and Killing vectors of a String Newton-Cartan geometry. AF is supported by the SFI and the Royal Society under the grant number RFF\EREF\210373. JMNG is supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy – EXC 2121 “Quantum Universe” – 390833306. AF thanks Lia for her permanent support.
http://arxiv.org/abs/2409.02620v1
20240904112747
A Software Visualization Approach for Multiple Visual Output Devices
[ "Malte Hansen", "Heiko Bielfeldt", "Armin Bernstetter", "Tom Kwasnitschka", "Wilhelm Hasselbring" ]
cs.SE
[ "cs.SE" ]
A Software Visualization Approach for Multiple Visual Output Devices Malte Hansen Department of Computer Science Kiel University Kiel, Germany malte.hansen@email.uni-kiel.de Heiko Bielfeldt Department of Computer Science Kiel University Kiel, Germany heikobielfeldt@gmail.com Armin Bernstetter GEOMAR Helmholtz Centre for Ocean Research Kiel Kiel, Germany abernstetter@geomar.de Tom Kwasnitschka GEOMAR Helmholtz Centre for Ocean Research Kiel Kiel, Germany tkwasnitschka@geomar.de Wilhelm Hasselbring Department of Computer Science Kiel University Kiel, Germany hasselbring@email.uni-kiel.de September 9, 2024 =============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== § ABSTRACT As software systems grow, environments that not only facilitate program comprehension through software visualization but also enable collaborative exploration of software systems become increasingly important. Most approaches to software visualization focus on a single monitor as a visual output device, which offers limited immersion and lacks in potential for collaboration. More recent approaches address augmented and virtual reality environments to increase immersion and enable collaboration to facilitate program comprehension. We present a novel approach to software visualization with software cities that fills a gap between existing approaches by using multiple displays or projectors. Thereby, an increase in screen real estate and new use case scenarios for co-located environments are enabled. Our web-based live trace visualization tool ExplorViz is extended with a service to synchronize the visualization across multiple browser instances. Multiple browser instances can then extend or complement each other's views with respect to a given configuration. The ARENA2, a spatially immersive visualization environment with five projectors, is used to showcase our approach. A preliminary study indicates that this environment can be useful for collaborative exploration of software cities. This publication is accompanied by a video. In addition, our implementation is open source and we invite other researchers to explore and adapt it for their use cases. Video URL: https://youtu.be/OiutBn3zIl8 software visualization, city metaphor, web, 3D, collaborative interaction, program comprehension § INTRODUCTION Software Visualizations can be used for various software engineering tasks. Scientific works in the field of software visualization have explored the visualization of a multitude of data with several visual metaphors<cit.>. Most of these approaches are designed to be displayed on a single output device, commonly on a single monitor. More recent publications also put an emphasis on the use of hardware for augmented and virtual reality<cit.>. In addition, the importance of collaboration in software visualization is recognized, thus manifesting itself as a core feature of current software visualization tools<cit.>. 
In this paper, we present our novel software visualization approach for exploration of 3D software cities<cit.> in environments with multiple visual output devices. Thereby, we close the gap between the visualization of software cities on a single monitor and visualization approaches which use augmented or virtual reality devices. The resulting implementation in our web-based tool ExplorViz <cit.> synchronizes the views of browser instances according to a configuration. Through this, we can leverage the potential of different hardware configurations and create more immersive environments by increasing the available screen real estate. Depending on the employed hardware configuration, our approach may also extend the possible use case scenarios for co-located collaboration. We showcase our approach in an office environment and the ARENA2, a spatially immersive visualization environment with five projectors. The remainder of this paper is structured as follows. In Section <ref> we discuss related work in the field of software visualization and other works on multi-display visualization. Section <ref> gives an overview about ExplorViz and the ARENA2. In Section <ref> we describe our approach for supporting multiple visual output devices in ExplorViz and present its exemplary applications, including a preliminary study concerning the ARENA2. Finally, Section <ref> concludes the paper and outlines the potential for future research. § RELATED WORK Spatially immersive environments have been used for scientific visualization from the beginning, especially when visualizing abstract concepts as more easily understandable three-dimensional representations <cit.>. Despite multi-player approaches, head-mounted displays (HMD) can cause certain degrees of isolation from the outside world. Spatially immersive environments allow co-located collaborative work, which allows direct communication with other people. Both approaches can have individual benefits and drawbacks <cit.>. Located on the spectrum between 2D computer screens and HMDs for virtual reality, large and tiled displays can increase the available space for visualizations and as a result benefit the acquisition of insights from visual analysis <cit.>. Examples of such facilities include the CAVE <cit.> and the ARENA2, which is introduced in the upcoming Section <ref>. In the field of software visualization, Anslow et al. explored a large visualization wall consisting of twelve monitors to display a modified System Hotspot View of Java applications <cit.>. The monitors were arranged in a grid and users were asked to complete some tasks as part of an evaluation. They concluded that such a setup is helpful to display large amounts of data, but effective techniques for user interaction and innovative interface design need to be considered, too. A later publication by Anslow et al. followed up on this with a tool called SourceVis <cit.>. SourceVis focuses on collaboration which is enabled by using interactive large multi-touch tables to display the structure and evolution of software systems. Therefore, SourceVis supports several different views. The employed table had a large display but only a resolution of 1280x800 pixels. The results of a user study show that switching between views and controlling the interface should be convenient for all users, independent of their position around the table. In addition, the used display should be large and high resolution in order to support collaborative use cases. 
The presented approaches illustrate the potential of large displays and the combination of multiple displays for visualization tasks. Our approach aims to support comparable hardware configurations but focuses on the 3D visualization of software systems using the city metaphor. § BACKGROUND §.§ ExplorViz ExplorViz is our web-based software visualization tool which employs the city metaphor <cit.>. Apart from the software structure, it focuses on the visualization of program behavior. Figure <ref> shows an exemplary visualization of a distributed version of the PetClinic.[<https://github.com/spring-petclinic/spring-petclinic-microservices>] To collect runtime data, we employ dynamic analysis via NovaTec’s Java agent inspectIT Ocelot.[<https://www.inspectit.rocks>] The gathered traces are exported using the OpenTelemetry[<https://opentelemetry.io>] standard. This is supported by ExplorViz, making our approach easily adaptable such that software systems which are not written in Java could be visualized, too. The frontend is written in JavaScript, while three.js[<https://github.com/mrdoob/three.js>] is used to render the 3D scene. The backend consists of a microservice architecture, whereas most services are written in Java using the Quarkus[<https://quarkus.io/>] framework. Communication between services is handled by Apache Kafka.[<https://kafka.apache.org/>] An exception is our recently re-implemented collaboration service, which is written in Node.js.[<https://nodejs.org/en>] Most communication between the collaboration service and frontend instances is handled via WebSocket connections. The collaboration service supports multi-user collaboration for desktop computers, as well as for augmented and virtual reality devices <cit.>. §.§ ARENA2 The ARENA2[<https://www.geomar.de/en/arena>] is a multi-projection dome for spatially immersive, interactive scientific visualization located at GEOMAR Helmholtz Centre for Ocean Research Kiel. With a diameter of six meters it is large enough to fit up to 20 people in a non-interactive demo use case and three to five users in an interactive visualization session. The ARENA2 is driven by a Microsoft Windows computer cluster consisting of one main node, two video nodes for stereo 3D video playback, and five cluster nodes for real-time interactive applications. Equipped with five WQXGA (2560x1600 pixels) projectors, stereo 3D capability and an Optitrack[<https://optitrack.com/>] motion tracking setup, the ARENA2 follows in the footsteps of spatially immersive environments such as the CAVE <cit.> and a number of previous iterations of interactive visualization facilities at GEOMAR <cit.>. The ARENA2 supports several multi-purpose visualization tools such as ParaView,[<https://www.paraview.org/>] the Unreal Engine 5,[<https://www.unrealengine.com/>] or web-based visualization applications like the Digital Earth Viewer <cit.>. § SOFTWARE VISUALIZATION APPROACH The overall goal of our approach is to visualize software on a variable number of visual output devices and provide a mechanism to configure the views of every visual output device. Thereby, the number of supported hardware configurations and use cases for software visualization can be increased. §.§ Implementation Our tool ExplorViz, presented in Section <ref>, is employed for a web-based reference implementation. 
Since ExplorViz already supported collaboration through virtual rooms, the focus of the implementation lies on the synchronization and configuration of the views of different browser instances. Our collaboration service, which manages the virtual rooms, was extended with the concept of devices, which can then be referenced in a configuration. For our implementation, the configuration is written in the JSON format and consists of projection matrices for the virtual cameras of the 3D visualization. Thereby, the virtual cameras of the browser instances can be efficiently set to extend each other's view with respect to the employed hardware configuration. Each configuration has a dedicated main instance which handles user input and can determine which configuration is used. The configuration is updated and sent to each browser instance whenever the main instance selects a new configuration. In addition, the position of each virtual camera is constantly updated in accordance with the camera position of the main instance. From a technical perspective, the JSON configuration can be easily extended with additional attributes to customize the visualization. At the time of writing, it is not possible to dynamically change the properties of a configuration via the user interface. A configuration needs to be computed in advance and added as a JSON file to the collaboration service. Concerning the individual browser instances, it is important that they can be set up automatically, especially if a hardware setup consists of many devices. In accordance with our web-based approach, we chose to use query parameters in the URL to encode the required data, such as the room and device identifiers, for each browser instance. Thereby, each instance can automatically join the same room and receive its configuration by calling a single URL (a schematic sketch of this mechanism is given below). Our implementation is open source and freely available on GitHub.[<https://github.com/ExplorViz>] In addition, we provide Docker[<https://hub.docker.com>] images via the Docker registry.[<https://hub.docker.com/u/explorviz>] Some of the use cases which could be enabled through our approach are discussed in the upcoming section. §.§ Envisioned Use Cases With the ability to synchronize browser instances which run a 3D software visualization, we see the potential for several novel use cases. For example, our approach can be used to expand the visualization in a 2D space by using multiple visual output devices. This can range from the use of multiple monitors in a common office environment with a single computer to a wall of enterprise-grade displays which are connected to multiple computers. Since our approach is platform agnostic, multiple mobile devices such as tablets and smartphones could also be used to enable crowd-sourced visualization on the go. The main advantages of using multiple devices in a 2D space are improved screen real estate and possibly improved frames per second by splitting the visual computations across multiple computers. Another example is the use of multiple visual output devices to immerse users in the software visualization. This includes environments like the ARENA2 and the CAVE which foster co-located collaboration. Extending beyond the current state of our implementation, we expect that there are manifold hardware and software configurations which offer as yet unexplored use case scenarios.
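To make the described configuration and join mechanism more tangible, the following minimal JavaScript sketch shows how a per-device configuration entry with an asymmetric view frustum could be computed with three.js and bundled as JSON. The field names, identifiers, and the example URL are illustrative assumptions and do not necessarily match the actual ExplorViz schema.

import * as THREE from 'three';

// Minimal sketch (not the actual ExplorViz code): compute per-device projection
// matrices for a horizontal wall of side-by-side displays and bundle them into
// a JSON configuration. All names below are illustrative assumptions.
function projectionMatrixFor(column, columns, fovYDeg, aspect, near, far) {
  // Asymmetric frustum: each display renders one vertical slice of the shared view.
  const top = near * Math.tan(THREE.MathUtils.degToRad(fovYDeg) / 2);
  const bottom = -top;
  const sliceWidth = 2 * top * aspect; // frustum width covered by one display
  const left = -(columns * sliceWidth) / 2 + column * sliceWidth;
  const right = left + sliceWidth;
  return new THREE.Matrix4().makePerspective(left, right, top, bottom, near, far);
}

// One entry per device; the collaboration service would hand the matching entry
// to each browser instance of the virtual room.
const config = {
  devices: [0, 1].map((column) => ({
    deviceId: `monitor-${column}`, // hypothetical identifier
    projectionMatrix: projectionMatrixFor(column, 2, 60, 16 / 9, 0.1, 1000).toArray(),
  })),
};
console.log(JSON.stringify(config, null, 2));

// Each instance could then join the shared room via a single URL, for example:
//   https://explorviz.example.org/visualization?roomId=demo-room&deviceId=monitor-1

For a flat wall of displays, such asymmetric frusta let every instance share the camera pose of the main instance while rendering only its own slice of the scene, which fits the synchronization scheme described above.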
As one such scenario, the employed configurations could be extended such that, in addition to a large visualization, one browser instance gives an overview of the software city while another instance complements the views by displaying software metrics. In essence, this idea leads to the concept of configurable multi-device software visualization dashboards. §.§ Multi-Monitor Setup Nowadays, software developers often use more than one monitor for work. Therefore, it stands to reason that software visualization tools should also support such hardware setups. In Figure <ref>, a setup with two laptops and two attached monitors is presented. The left laptop runs the software stack of ExplorViz. The right laptop, on the other hand, only runs a browser instance which connects to the ExplorViz frontend of the left laptop via the local network. The required projection matrices for the configuration can be computed in a straightforward manner with the help of three.js. The web-based character of our approach makes it cross-platform compatible. However, the illustrated setup would also be feasible with a single computer. §.§ ARENA2 Setup In the ARENA2, the implementation described above in Section <ref> is applied to a setup of five cluster nodes with one additional main node that controls the input. After starting ExplorViz on the main node, a local script launches a browser instance on each of the five cluster nodes. Figure <ref> illustrates this setup. Chairs and a table in the middle can be used for collaborative software exploration as an alternative to standing in the ARENA2. The visualization shows previously recorded structure and traces of PlantUML,[<https://plantuml.com/>] as displayed by ExplorViz. A computer with a regular monitor at the edge of the visualization dome is used to control the visualization. The view frustum of each instance is managed by the projection matrix configuration. For the ARENA2, this projection matrix information is taken from calibration files in the MPCDI standard.[<https://vesa.org/vesa-standards/>] The warping and blending of the projector outputs, which creates one seamless overlapping image on the curved surface, happens at a system level through software implemented by the company VIOSO.[<https://vioso.com/>] §.§ Preliminary Evaluation To gain early insights into our approach, a small user study was conducted in the ARENA2 environment. The ten participants were students and researchers. Since our approach is also aimed at improving collaboration, the participants took part in our evaluation in groups of two. After a short survey about their demographic background, the two participants of each group were introduced to ExplorViz and the ARENA2. For this purpose, a small sample application was visualized in the dome. Following the introduction, a distributed version of the PetClinic (see Figure <ref>) was visualized by ExplorViz in the ARENA2. The participants were then asked to answer questions about the visualized software to encourage both interaction with the visualization and collaboration among the participants. The given tasks ranged from simply counting the number of classes in a package to comparing the structure of different applications and inspecting the intra- and inter-application communication. The participants were encouraged to collaborate and discuss their findings verbally. There was no time limit, and due to the nature of our study we did not record how much time the groups needed for each individual task.
Subsequent to the tasks, the participants took part in a digital survey. Therein, participants expressed a positive opinion towards working within the ARENA2 due to the high-resolution projection, expansive visualization space and generous personal workspace. In addition, the large and curved dome was described as immersive and unique. Overall, participants tend to agree that this visualization environment offers advantages over a conventional monitor. With respect to collaboration, the participants rate both the ExplorViz visualization and the ARENA2 as suitable environments for collaboration. The participants estimate that about four to ten people can collaborate in the ARENA2 at the same time while maintaining an equally good user experience. On the downside of the evaluated implementation, the interactivity of the visualization could be improved. To interact with the visualization, the participants used a common mouse and keyboard, which were attached to the main computer with its monitor. This distracted some participants and limited their ability to walk around. Additionally, the overlapping projections prevented us from displaying the regular two-dimensional user interface naturally. Therefore, the participants were missing some data and configuration options which are otherwise present in ExplorViz. Another aspect concerns the spherical projection. Even though ExplorViz employs a 3D visualization, the software city is placed in a 2D plane. A layout of the software city that accounts for the curved projection area could improve the visualization for large software systems. Following the evaluation, we implemented support for gamepad controls. Furthermore, we plan to integrate more data and configuration options into the 3D visualization, as is already the case for our visualization in virtual reality. The results of this study cannot be generalized, especially in terms of applicability to other hardware setups. However, the study indicates that hardware setups with multiple visual output devices are worth considering for the visualization of software cities. The gathered feedback can be incorporated into future research activities with the goal of repeating the study with a refined implementation and more study participants. § CONCLUSIONS AND FUTURE WORK In this paper we presented an approach to software visualization which uses multiple visual output devices. The approach is made concrete by means of an open source implementation in our software visualization tool ExplorViz. We showcased our approach in an office environment and in the ARENA2. A preliminary study has been conducted in the ARENA2 environment, showing that our approach can be applied to the collaborative exploration of 3D software cities. To gain further insight, an evaluation of a refined implementation comparing different hardware configurations with our multi-device approach is desirable. In addition, we expect our approach to be easily adaptable to a variety of use cases and hereby invite other researchers to explore and adapt it. § ACKNOWLEDGMENTS We want to thank Valentin Buck and Flemming Stäbler, the developers of the Digital Earth Viewer<cit.>, for their advice in the early phases of our implementation.
http://arxiv.org/abs/2409.03268v1
20240905062442
Chiral condensate in a holographic dilute nuclear matter
[ "Kazuo Ghoroku. Kouji Kashiwa", "Yoshimasa Nakano", "Motoi Tachibana", "Fumihiko Toyoda" ]
hep-th
[ "hep-th", "hep-ph" ]